path (string, length 7-265) | concatenated_notebook (string, length 46-17M)
---|---
sample-notebooks/genomics-gatk-resource-bundle.ipynb | ###Markdown
'GATK resource bundle' on Azure Genomics Data Lake

DATASET 1. Downloading 'broad-public-datasets' from Azure Genomics Data Lake

Jupyter notebooks are a great tool for data scientists who are working on genomics data analysis. We will demonstrate how to download specific 'broad-public-datasets' from the Azure Genomics Data Lake.

**Here is the coverage of this notebook:**
1. Downloading genomics data
2. Annotate genotypes using VariantFiltration
3. Select Specific Variants
4. Convert a gVCF file to a table

**Dependencies:**

This notebook requires the following libraries:

- Azure storage: `pip install azure-storage-blob==2.1.0`. Please visit [this page](https://github.com/Azure/azure-storage-python/wiki) for frequently encountered problems with this SDK.
- Genome Analysis Toolkit (GATK) (*users need to download GATK from the Broad Institute's release page into the same compute environment as this notebook: https://github.com/broadinstitute/gatk/releases*)

**Important information: this notebook uses the Python 3.6 kernel.**
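GATK itself is not pip-installable; as a hedged sketch (not part of the original instructions), one way to fetch a release from the page above inside the notebook environment looks like this. The `4.2.0.0` version number is only a placeholder; substitute the release you actually need, and note that GATK also requires a Java runtime.

```python
# A sketch, assuming a hypothetical 4.2.0.0 release: download and unpack GATK 4.x.
!wget -q https://github.com/broadinstitute/gatk/releases/download/4.2.0.0/gatk-4.2.0.0.zip
!unzip -q gatk-4.2.0.0.zip
# The unzipped folder contains the `gatk` wrapper script; add it to PATH or call it by full path.
```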
###Code
pip install azure-storage-blob==2.1.0
###Output
_____no_output_____
###Markdown
Getting the Genomics data from Azure Open Datasets

[Description of dataset](https://gatk.broadinstitute.org/hc/en-us/articles/360035890811-Resource-bundle): Stores public test data, often used to test workflows. For example, it contains NA12878 CRAM, gVCF, and unmapped BAM files.

Downloading the specific 'broad-public-dataset'
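It can help to first list what is available in the container. A minimal sketch using the same `BlockBlobService` client as in the cell below (the `prefix` value is an illustrative assumption):

```python
from azure.storage.blob import BlockBlobService

blob_service_client = BlockBlobService(
    account_name='datasetbroadpublic',
    sas_token='sv=2020-04-08&si=prod&sr=c&sig=u%2Bg2Ab7WKZEGiAkwlj6nKiEeZ5wdoJb10Az7uUwis%2Fg%3D')

# List the first few blobs in the 'dataset' container under the NA12878 folder
for blob in blob_service_client.list_blobs('dataset', prefix='NA12878/', num_results=20):
    print(blob.name)
```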
###Code
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetbroadpublic', sas_token='sv=2020-04-08&si=prod&sr=c&sig=u%2Bg2Ab7WKZEGiAkwlj6nKiEeZ5wdoJb10Az7uUwis%2Fg%3D')
blob_service_client.get_blob_to_path('dataset/NA12878', 'NA12878.g.vcf.gz', './NA12878.g.vcf.gz')
###Output
_____no_output_____
###Markdown
Downloading the index file of the gVCF file
###Code
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetbroadpublic', sas_token='sv=2020-04-08&si=prod&sr=c&sig=u%2Bg2Ab7WKZEGiAkwlj6nKiEeZ5wdoJb10Az7uUwis%2Fg%3D')
blob_service_client.get_blob_to_path('dataset/NA12878', 'NA12878.g.vcf.gz.tbi', './NA12878.g.vcf.gz.tbi')
###Output
_____no_output_____
###Markdown
Downloading `hg38` reference genome

Downloading the reference genome bundle (hg38) from `public-broad-ref` on Azure Genomics Data Lake
###Code
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetpublicbroadref', sas_token='sv=2020-04-08&si=prod&sr=c&sig=DQxmjB4D1lAfOW9AxIWbXwZx6ksbwjlNkixw597JnvQ%3D')
blob_service_client.get_blob_to_path('dataset/hg38/v0', 'Homo_sapiens_assembly38.fasta', './Homo_sapiens_assembly38.fasta')
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetpublicbroadref', sas_token='sv=2020-04-08&si=prod&sr=c&sig=DQxmjB4D1lAfOW9AxIWbXwZx6ksbwjlNkixw597JnvQ%3D')
blob_service_client.get_blob_to_path('dataset/hg38/v0', 'Homo_sapiens_assembly38.dict', './Homo_sapiens_assembly38.dict')
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetpublicbroadref', sas_token='sv=2020-04-08&si=prod&sr=c&sig=DQxmjB4D1lAfOW9AxIWbXwZx6ksbwjlNkixw597JnvQ%3D')
blob_service_client.get_blob_to_path('dataset/hg38/v0', 'Homo_sapiens_assembly38.fasta.fai', './Homo_sapiens_assembly38.fasta.fai')
###Output
_____no_output_____
###Markdown
1. Annotate genotypes using VariantFiltration

**Important note: please check that GATK is installed and runnable on your system.**

If we want to filter heterozygous genotypes, we use VariantFiltration's `--genotype-filter-expression "isHet == 1"` option. We can specify the annotation value for the tool to label the heterozygous genotypes with using the `--genotype-filter-name` option. Here, this parameter's value is set to `isHetFilter`. In our first example we used `NA12877.vcf.gz` from the Illumina `Platinum Genomes` dataset; here we run the tool on the `NA12878.g.vcf.gz` file downloaded above, but users can use any VCF file from other datasets.
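A quick, hedged sanity check that the `gatk` wrapper is reachable from this notebook before running the commands below:

```python
# Verify the GATK installation and list the available tools
!gatk --version
!gatk --list
```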
###Code
!gatk VariantFiltration -V NA12878.g.vcf.gz -O outputannot.vcf --genotype-filter-expression "isHet == 1" --genotype-filter-name "isHetFilter"
###Output
_____no_output_____
###Markdown
2. Select Specific Variants

Select a subset of variants from a VCF file. This tool makes it possible to select a subset of variants based on various criteria in order to facilitate certain analyses. Examples of such analyses include comparing and contrasting cases vs. controls, extracting variant or non-variant loci that meet certain requirements, or troubleshooting some unexpected results, to name a few.

There are many different options for selecting subsets of variants from a larger call set:
- Extract one or more samples from a callset based on either a complete sample name or a pattern match.
- Specify criteria for inclusion that place thresholds on annotation values, **e.g. "DP > 1000" (depth of coverage greater than 1000x), "AF < 0.25" (sites with allele frequency less than 0.25)**. These criteria are written as "JEXL expressions", which are documented in the article about using JEXL expressions.
- Provide concordance or discordance tracks in order to include or exclude variants that are also present in other given callsets.
- Select variants based on criteria like their type (e.g. INDELs only), evidence of mendelian violation, filtering status, allelicity, etc.

There are also several options for recording the original values of certain annotations which are recalculated when one subsets the new callset, trims alleles, etc.

**Input**
A variant call set in VCF format from which a subset can be selected.

**Output**
A new VCF file containing the selected subset of variants.
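To illustrate the JEXL-based selection described above, a hedged sketch (the `DP > 1000` threshold is only an example, and `-select` is assumed here to be GATK4's JEXL-expression argument for SelectVariants):

```python
# A sketch: keep only sites whose depth of coverage exceeds 1000x (illustrative threshold)
!gatk SelectVariants -R Homo_sapiens_assembly38.fasta -V outputannot.vcf -select "DP > 1000" -O high_depth.vcf
```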
###Code
!gatk SelectVariants -R Homo_sapiens_assembly38.fasta -V outputannot.vcf --select-type-to-include SNP --select-type-to-include INDEL -O selective.vcf
###Output
_____no_output_____
###Markdown
3. VariantsToTable

Extract fields from a VCF file to a tab-delimited table. This tool extracts specified fields for each variant in a VCF file to a tab-delimited table, which may be easier to work with than a VCF. By default, the tool only extracts PASS or . (unfiltered) variants in the VCF file. Filtered variants may be included in the output by adding the --show-filtered flag. The tool can extract both INFO (i.e. site-level) fields and FORMAT (i.e. sample-level) fields.

**INFO/site-level fields**
Use the `-F` argument to extract INFO fields; each field will occupy a single column in the output file. The field can be any standard VCF column (e.g. CHROM, ID, QUAL) or any annotation name in the INFO field (e.g. AC, AF). The tool also supports the following additional fields:
- EVENTLENGTH (length of the event)
- TRANSITION (1 for a bi-allelic transition (SNP), 0 for a bi-allelic transversion (SNP), -1 for INDELs and multi-allelics)
- HET (count of het genotypes)
- HOM-REF (count of homozygous reference genotypes)
- HOM-VAR (count of homozygous variant genotypes)
- NO-CALL (count of no-call genotypes)
- TYPE (type of variant, possible values are NO_VARIATION, SNP, MNP, INDEL, SYMBOLIC, and MIXED)
- VAR (count of non-reference genotypes)
- NSAMPLES (number of samples)
- NCALLED (number of called samples)
- MULTI-ALLELIC (is this variant multi-allelic? true/false)

**FORMAT/sample-level fields**
Use the `-GF` argument to extract FORMAT/sample-level fields. The tool will create a new column per sample with the name "SAMPLE_NAME.FORMAT_FIELD_NAME", e.g. NA12877.GQ, NA12878.GQ.

**Input**
A VCF file to convert to a table

**Output**
A tab-delimited file containing the values of the requested fields in the VCF file.
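As a hedged variation on the cell below, the `--show-filtered` flag mentioned above keeps filtered records in the table (the chosen fields are just an example):

```python
# A sketch: also emit filtered (non-PASS) records, together with the FILTER column
!gatk VariantsToTable -V outputannot.vcf -F CHROM -F POS -F TYPE -F FILTER -GF GT --show-filtered -O outputtable_filtered.table
```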
###Code
!gatk VariantsToTable -V NA12878.g.vcf.gz -F CHROM -F POS -F TYPE -F AC -F AD -F AF -GF DP -GF AD -O outputtable.table
###Output
_____no_output_____
###Markdown
References
1. VariantFiltration: https://gatk.broadinstitute.org/hc/en-us/articles/360036827111-VariantFiltration
2. Select Variants: https://gatk.broadinstitute.org/hc/en-us/articles/360037052272-SelectVariants
3. Concordance: https://gatk.broadinstitute.org/hc/en-us/articles/360041851651-Concordance
4. Variants to table: https://gatk.broadinstitute.org/hc/en-us/articles/360036882811-VariantsToTable
5. Broad Resource bundle: https://gatk.broadinstitute.org/hc/en-us/articles/360035890811-Resource-bundle

DATASET 2. Downloading 'public-data-broad-references' from Azure Genomics Data Lake

We will demonstrate how to download specific 'public-data-broad-references' from the Azure Genomics Data Lake.

Getting the Genomics data from Azure Open Datasets

[Description of dataset](https://gatk.broadinstitute.org/hc/en-us/articles/360035890811-Resource-bundle): This is the Broad's public hg38 and b37 reference and resource data. Additional information can be found in the GATK Resource Bundle article. This bucket is controlled by Broad, but hosted by `Microsoft`.
* Whole-Genome-Analysis-Pipeline
* GATK4-Germline-Preprocessing-VariantCalling-JointCalling

Downloading `hg38` reference genome

Downloading the reference genome bundle (hg38) from `public-data-broad-references` on Azure Genomics Data Lake
###Code
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetpublicbroadref', sas_token='sv=2020-04-08&si=prod&sr=c&sig=DQxmjB4D1lAfOW9AxIWbXwZx6ksbwjlNkixw597JnvQ%3D')
blob_service_client.get_blob_to_path('dataset/hg38/v0', 'Homo_sapiens_assembly38.fasta', './Homo_sapiens_assembly38.fasta')
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetpublicbroadref', sas_token='sv=2020-04-08&si=prod&sr=c&sig=DQxmjB4D1lAfOW9AxIWbXwZx6ksbwjlNkixw597JnvQ%3D')
blob_service_client.get_blob_to_path('dataset/hg38/v0', 'Homo_sapiens_assembly38.dict', './Homo_sapiens_assembly38.dict')
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetpublicbroadref', sas_token='sv=2020-04-08&si=prod&sr=c&sig=DQxmjB4D1lAfOW9AxIWbXwZx6ksbwjlNkixw597JnvQ%3D')
blob_service_client.get_blob_to_path('dataset/hg38/v0', 'Homo_sapiens_assembly38.fasta.fai', './Homo_sapiens_assembly38.fasta.fai')
###Output
_____no_output_____
###Markdown
DATASET 3. Downloading 'gatk-legacy-bundles' from Azure Genomics Data Lake

We will demonstrate how to download specific 'gatk-legacy-bundles' from the Azure Genomics Data Lake.

Getting the Genomics data from Azure Genomics Data Lake

[Description of dataset](https://gatk.broadinstitute.org/hc/en-us/articles/360035890811-Resource-bundle): Broad public legacy b37 and hg19 reference and resource data.

Downloading the `ucsc.hg19` data bundle from 'gatk-legacy-bundles'
###Code
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetgatklegacybundles', sas_token='sv=2020-04-08&si=prod&sr=c&sig=xBfxOPBqHKUCszzwbNCBYF0k9osTQjKnZbEjXCW7gU0%3D')
blob_service_client.get_blob_to_path('dataset/hg19', 'ucsc.hg19.fasta', './ucsc.hg19.fasta')
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetgatklegacybundles', sas_token='sv=2020-04-08&si=prod&sr=c&sig=xBfxOPBqHKUCszzwbNCBYF0k9osTQjKnZbEjXCW7gU0%3D')
blob_service_client.get_blob_to_path('dataset/hg19', 'ucsc.hg19.fasta.fai', './ucsc.hg19.fasta.fai')
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetgatklegacybundles', sas_token='sv=2020-04-08&si=prod&sr=c&sig=xBfxOPBqHKUCszzwbNCBYF0k9osTQjKnZbEjXCW7gU0%3D')
blob_service_client.get_blob_to_path('dataset/hg19', 'ucsc.hg19.dict', './ucsc.hg19.dict')
###Output
_____no_output_____
###Markdown
DATASET 4. Downloading 'gatk-best-practices' from Azure Genomics Data Lake

We will demonstrate how to download specific 'gatk-best-practices' from the Azure Genomics Data Lake.

Getting the Genomics data from Azure Genomics Data Lake

[Description of dataset](https://gatk.broadinstitute.org/hc/en-us/articles/360035890811-Resource-bundle): Stores GATK workflow-specific plumbing, reference, and resource data. Example workspaces include: Somatic-SNVs-Indels-GATK4.

Downloading the `cnv_germline_pipeline` data bundle from 'gatk-best-practices'
###Code
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetgatkbestpractices', sas_token='sv=2020-04-08&si=prod&sr=c&sig=6SaDfKtXAIfdpO%2BkvNA%2FsTNmNij%2Byh%2F%2F%2Bf98WAUqs7I%3D')
blob_service_client.get_blob_to_path('dataset/cnv_germline_pipeline', 'Homo_sapiens_assembly19.truncated.fasta', './Homo_sapiens_assembly19.truncated.fasta')
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetgatkbestpractices', sas_token='sv=2020-04-08&si=prod&sr=c&sig=6SaDfKtXAIfdpO%2BkvNA%2FsTNmNij%2Byh%2F%2F%2Bf98WAUqs7I%3D')
blob_service_client.get_blob_to_path('dataset/cnv_germline_pipeline', 'Homo_sapiens_assembly19.truncated.fasta.fai', './Homo_sapiens_assembly19.truncated.fasta.fai')
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetgatkbestpractices', sas_token='sv=2020-04-08&si=prod&sr=c&sig=6SaDfKtXAIfdpO%2BkvNA%2FsTNmNij%2Byh%2F%2F%2Bf98WAUqs7I%3D')
blob_service_client.get_blob_to_path('dataset/cnv_germline_pipeline', 'Homo_sapiens_assembly19.truncated.dict', './Homo_sapiens_assembly19.truncated.dict')
###Output
_____no_output_____
###Markdown
DATASET 5. Downloading 'gatk-test-data' from Azure Genomics Data Lake

We will demonstrate how to download specific 'gatk-test-data' from the Azure Genomics Data Lake.

Getting the Genomics data from Azure Genomics Data Lake

[Description of dataset](https://gatk.broadinstitute.org/hc/en-us/articles/360035890811-Resource-bundle): Additional public test data focusing on smaller data sets, for example whole-genome BAM, FASTQ, gVCF, VCF, etc. Example workspaces include: Somatic-CNVs-GATK4.

Downloading the `mutect2` data bundle from 'gatk-test-data'
###Code
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetgatktestdata', sas_token='sv=2020-04-08&si=prod&sr=c&sig=fzLts1Q2vKjuvR7g50vE4HteEHBxTcJbNvf%2FZCeDMO4%3D')
blob_service_client.get_blob_to_path('dataset/mutect2', 'human_g1k_v37.20.21.fasta', './human_g1k_v37.20.21.fasta')
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetgatktestdata', sas_token='sv=2020-04-08&si=prod&sr=c&sig=fzLts1Q2vKjuvR7g50vE4HteEHBxTcJbNvf%2FZCeDMO4%3D')
blob_service_client.get_blob_to_path('dataset/mutect2', 'human_g1k_v37.20.21.fasta.fai', './human_g1k_v37.20.21.fasta.fai')
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetgatktestdata', sas_token='sv=2020-04-08&si=prod&sr=c&sig=fzLts1Q2vKjuvR7g50vE4HteEHBxTcJbNvf%2FZCeDMO4%3D')
blob_service_client.get_blob_to_path('dataset/mutect2', 'human_g1k_v37.20.21.dict', './human_g1k_v37.20.21.dict')
###Output
_____no_output_____ |
examples/alanine_dipeptide_tps/AD_tps_2b_run_fixed.ipynb | ###Markdown
This file runs the main calculation for the fixed length TPS simulation. It requires the file `alanine_dipeptide_fixed_tps_traj.nc`, which is written in the notebook `alanine_dipeptide_fixed_tps_traj.ipynb`.

In this file, you will learn:
* how to set up and run a fixed length TPS simulation

NB: This is a long calculation. In practice, it would be best to export the Python from this notebook, remove the `live_visualizer`, and run non-interactively on a computing node.
###Code
import openpathsampling as paths
###Output
_____no_output_____
###Markdown
Load engine, trajectory, and states from file
###Code
old_storage = paths.Storage("tps_nc_files/alanine_dipeptide_fixed_tps_traj.nc", "r")
engine = old_storage.engines[0]
C_7eq = old_storage.volumes.find('C_7eq')
alpha_R = old_storage.volumes.find('alpha_R')
traj = old_storage.trajectories[0]
phi = old_storage.cvs.find('phi')
psi = old_storage.cvs.find('psi')
template = old_storage.snapshots[0]
print(engine.name)
print(engine.snapshot_timestep)
###Output
300K
0.02 ps
###Markdown
TPS

The only difference between this and the flexible path length example in `alanine_dipeptide_tps_run.ipynb` is that we used a `FixedLengthTPSNetwork`. We selected the `length=400` (8 ps) as a maximum length based on the results from a flexible path length run.
###Code
network = paths.FixedLengthTPSNetwork(C_7eq, alpha_R, length=400)
scheme = paths.OneWayShootingMoveScheme(network, selector=paths.UniformSelector(), engine=engine)
initial_conditions = scheme.initial_conditions_from_trajectories(traj)
sampler = paths.PathSampling(storage=paths.Storage("tps_nc_files/alanine_dipeptide_fixed_tps.nc", "w", template),
move_scheme=scheme,
sample_set=initial_conditions)
#sampler.live_visualizer = paths.StepVisualizer2D(network, phi, psi, [-3.14, 3.14], [-3.14, 3.14])
#sampler.live_visualizer = None
sampler.run(10000)
###Output
Working on Monte Carlo cycle number 10000
Running for 124921 seconds - 0.08 steps per second
Expected time to finish: 12 seconds
DONE! Completed 10000 Monte Carlo cycles.
###Markdown
Running a fixed-length TPS simulation

This file runs the main calculation for the fixed length TPS simulation. It requires the file `ad_fixed_tps_traj.nc`, which is written in the notebook `AD_tps_1b_trajectory.ipynb`.

In this file, you will learn:
* how to set up and run a fixed length TPS simulation

NB: This is a long calculation. In practice, it would be best to export the Python from this notebook, remove the `live_visualizer`, and run non-interactively on a computing node.
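A hedged sketch of that export step (assuming this notebook's filename from the repository path above):

```python
# Convert the notebook to a plain Python script and run it on the compute node;
# edit the generated .py file first to strip any live visualization.
!jupyter nbconvert --to script AD_tps_2b_run_fixed.ipynb
!python AD_tps_2b_run_fixed.py
```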
###Code
from __future__ import print_function
import openpathsampling as paths
###Output
_____no_output_____
###Markdown
Load engine, trajectory, and states from file
###Code
old_storage = paths.Storage("ad_fixed_tps_traj.nc", "r")
engine = old_storage.engines['300K']
network = old_storage.networks['fixed_tps_network']
traj = old_storage.trajectories[0]
###Output
_____no_output_____
###Markdown
TPS

The only difference between this and the flexible path length example in `AD_tps_2a_run_flex.ipynb` is that we used a `FixedLengthTPSNetwork`. We selected the `length=400` (8 ps) as a maximum length based on the results from a flexible path length run.
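Here the network is simply loaded from the setup file; for reference, a sketch of how such a fixed-length network is constructed (mirroring the earlier flexible-length setup, where `C_7eq` and `alpha_R` are the stable-state volumes defined in the trajectory notebook):

```python
# Sketch only: this is how the loaded fixed-length network was originally built
fixed_tps_network = paths.FixedLengthTPSNetwork(C_7eq, alpha_R, length=400)
```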
###Code
scheme = paths.OneWayShootingMoveScheme(network,
selector=paths.UniformSelector(),
engine=engine)
initial_conditions = scheme.initial_conditions_from_trajectories(traj)
storage = paths.Storage("ad_fixed_tps.nc", "w")
storage.save(initial_conditions); # save these to give storage a template
sampler = paths.PathSampling(storage=storage,
move_scheme=scheme,
sample_set=initial_conditions)
sampler.run(10000)
old_storage.close()
storage.close()
###Output
_____no_output_____ |
docs/tutorials/nmt_with_attention.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention

This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation. This is an advanced example that assumes some knowledge of sequence to sequence models.

After training the model in this notebook, you will be able to input a Spanish sentence, such as *"¿todavia estan en casa?"*, and return the English translation: *"are you still at home?"*

The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:

Note: This example takes approximately 10 minutes to run on a single P100 GPU.
###Code
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
###Output
_____no_output_____
###Markdown
Download and prepare the dataset

We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:

```
May I borrow this book? ¿Puedo tomar prestado este libro?
```

There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:

1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
4. Pad each sentence to a maximum length.
###Code
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.strip()
# adding a start and an end token to the sentence
# so that the model know when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence).encode('utf-8'))
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in line.split('\t')]
for line in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
# creating cleaned input, output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
###Output
_____no_output_____
###Markdown
Limit the size of the dataset to experiment faster (optional)

Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with fewer data):
###Code
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file,
num_examples)
# Calculate max_length of the target tensors
max_length_targ, max_length_inp = target_tensor.shape[1], input_tensor.shape[1]
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
def convert(lang, tensor):
for t in tensor:
if t != 0:
print(f'{t} ----> {lang.index_word[t]}')
print("Input Language; index to word mapping")
convert(inp_lang, input_tensor_train[0])
print()
print("Target Language; index to word mapping")
convert(targ_lang, target_tensor_train[0])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset
###Code
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
###Output
_____no_output_____
###Markdown
Write the encoder and decoder model

Implement an encoder-decoder model with attention, which you can read about in the TensorFlow [Neural Machine Translation (seq2seq) tutorial](https://www.tensorflow.org/text/tutorials/nmt_with_attention). This example uses a more recent set of APIs. This notebook implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence. The picture and formulas below are an example of an attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5).

The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.

Here are the equations that are implemented:

This tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf) for the encoder. Let's decide on notation before writing the simplified form:

* FC = Fully connected (dense) layer
* EO = Encoder output
* H = hidden state
* X = input to the decoder

And the pseudo-code:

* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis but here we want to apply it on the *1st axis*, since the score has shape *(batch_size, max_length, 1)*. `Max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1.
* `embedding output` = The input to the decoder X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU

The shapes of all the vectors at each step have been specified in the comments in the code:
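(As a compact reference, here are the equations that the pseudo-code above describes, reconstructed from it and the Bahdanau paper since the original equation images are not included in this dump; $EO_s$ is the encoder output at position $s$, $H$ the decoder hidden state, and $W_1$, $W_2$, $v$ the weights of the FC layers.)

$$\text{score}_s = v^\top \tanh\left(W_1\,EO_s + W_2\,H\right), \qquad \alpha_s = \frac{\exp(\text{score}_s)}{\sum_{s'} \exp(\text{score}_{s'})}, \qquad \text{context vector} = \sum_s \alpha_s\, EO_s$$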
###Code
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state=hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print('Encoder output shape: (batch size, sequence length, units)', sample_output.shape)
print('Encoder Hidden state shape: (batch size, units)', sample_hidden.shape)
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
# we are doing this to broadcast addition along the time axis to calculate the score
query_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(query_with_time_axis) + self.W2(values)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = BahdanauAttention(10)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units)", attention_result.shape)
print("Attention weights shape: (batch_size, sequence_length, 1)", attention_weights.shape)
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(tf.random.uniform((BATCH_SIZE, 1)),
sample_hidden, sample_output)
print('Decoder output shape: (batch_size, vocab size)', sample_decoder_output.shape)
###Output
_____no_output_____
###Markdown
Define the optimizer and the loss function
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True,
reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
###Output
_____no_output_____
###Markdown
Checkpoints (Object-based saving)
###Code
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
###Output
_____no_output_____
###Markdown
Training

1. Pass the *input* through the *encoder*, which returns the *encoder output* and the *encoder hidden state*.
2. The encoder output, encoder hidden state and the decoder input (which is the *start token*) are passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
5. Use *teacher forcing* to decide the next input to the decoder.
6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.
7. The final step is to calculate the gradients, apply them to the optimizer, and backpropagate.
###Code
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print(f'Epoch {epoch+1} Batch {batch} Loss {batch_loss.numpy():.4f}')
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
print(f'Epoch {epoch+1} Loss {total_loss/steps_per_epoch:.4f}')
print(f'Time taken for 1 epoch {time.time()-start:.2f} sec\n')
###Output
_____no_output_____
###Markdown
Translate

* The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
* Stop predicting when the model predicts the *end token*.
* And store the *attention weights for every time step*.

Note: The encoder output is calculated only once for one input.
###Code
def evaluate(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def translate(sentence):
result, sentence, attention_plot = evaluate(sentence)
print('Input:', sentence)
print('Predicted translation:', result)
attention_plot = attention_plot[:len(result.split(' ')),
:len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
###Output
_____no_output_____
###Markdown
Restore the latest checkpoint and test
###Code
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.')
translate(u'esta es mi vida.')
translate(u'¿todavia estan en casa?')
# wrong translation
translate(u'trata de averiguarlo.')
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention

This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025v5). This is an advanced example that assumes some knowledge of:

* Sequence to sequence models
* TensorFlow fundamentals below the keras layer:
  * Working with tensors directly
  * Writing custom `keras.Model`s and `keras.layers`

While this architecture is somewhat outdated, it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to [Transformers](transformers.ipynb)).

After training the model in this notebook, you will be able to input a Spanish sentence, such as "*¿todavia estan en casa?*", and return the English translation: "*are you still at home?*"

The resulting model is exportable as a `tf.saved_model`, so it can be used in other TensorFlow environments.

The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:

Note: This example takes approximately 10 minutes to run on a single P100 GPU.

Setup
###Code
!pip install tensorflow_text
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
###Output
_____no_output_____
###Markdown
This tutorial builds a few layers from scratch, use this variable if you want to switch between the custom and builtin implementations.
###Code
use_builtins = True
###Output
_____no_output_____
###Markdown
This tutorial uses a lot of low-level APIs where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
###Code
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
###Output
_____no_output_____
###Markdown
The data

We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:

```
May I borrow this book? ¿Puedo tomar prestado este libro?
```

They have a variety of languages available, but we'll use the English-Spanish dataset.

Download and prepare the dataset

For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:

1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
4. Pad each sentence to a maximum length.
###Code
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset From these arrays of strings you can create a `tf.data.Dataset` of strings that shuffles and batches them efficiently:
###Code
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
###Output
_____no_output_____
###Markdown
Text preprocessing

One of the goals of this tutorial is to build a model that can be exported as a `tf.saved_model`. To make that exported model useful it should take `tf.string` inputs and return `tf.string` outputs: all the text processing happens inside the model.

Standardization

The model is dealing with multilingual text with a limited vocabulary, so it will be important to standardize the input text.

The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents.

The `tensorflow_text` package contains a unicode normalize operation:
###Code
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
###Output
_____no_output_____
###Markdown
Unicode normalization will be the first step in the text standardization function:
###Code
def tf_lower_and_split_punct(text):
# Split accented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
###Output
_____no_output_____
###Markdown
Text Vectorization This standardization function will be wrapped up in a `preprocessing.TextVectorization` layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
###Code
max_vocab_size = 5000
input_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
###Output
_____no_output_____
###Markdown
The `TextVectorization` layer and many other `experimental.preprocessing` layers have an `adapt` method. This method reads one epoch of the training data, and works a lot like `Model.fit`. This `adapt` method initializes the layer based on the data. Here it determines the vocabulary:
###Code
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
That's the Spanish `TextVectorization` layer, now build and `.adapt()` the English one:
###Code
output_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
Now these layers can convert a batch of strings into a batch of token IDs:
###Code
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
###Output
_____no_output_____
###Markdown
The `get_vocabulary` method can be used to convert token IDs back to text:
###Code
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
###Output
_____no_output_____
###Markdown
The returned token IDs are zero-padded. This can easily be turned into a mask:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
The encoder/decoder model

The following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from [Luong's paper](https://arxiv.org/abs/1508.04025v5).

Before getting into it define a few constants for the model:
###Code
embedding_dim = 256
units = 1024
###Output
_____no_output_____
###Markdown
The encoder

Start by building the encoder, the blue part of the diagram above.

The encoder:

1. Takes a list of token IDs (from `input_text_processor`).
2. Looks up an embedding vector for each token (using a `layers.Embedding`).
3. Processes the embeddings into a new sequence (using a `layers.GRU`).
4. Returns:
   * The processed sequence. This will be passed to the attention head.
   * The internal state. This will be used to initialize the decoder.
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
###Output
_____no_output_____
###Markdown
Here is how it fits together so far:
###Code
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
###Output
_____no_output_____
###Markdown
The encoder returns its internal state so that its state can be used to initialize the decoder.It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The attention headThe decoder uses attention to selectively focus on parts of the input sequence.The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a `layers.GlobalAveragePoling1D` but the attention layer performs a _weighted_ average.Let's look at how this works: Where:* $s$ is the encoder index.* $t$ is the decoder index.* $\alpha_{ts}$ is the attention weights.* $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).* $h_t$ is the the decoder state attending to the sequence (the attention "query" in transformer terminology).* $c_t$ is the resulting context vector.* $a_t$ is the final output combining the "context" and "query".The equations:1. Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.2. Calculates the context vector as the weighted sum of the encoder outputs. Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches:This tutorial uses [Bahdanau's additive attention](https://arxiv.org/pdf/1409.0473.pdf). TensorFlow includes implementations of both as `layers.Attention` and`layers.AdditiveAttention`. The class below handles the weight matrices in a pair of `layers.Dense` layers, and calls the builtin implementation.
###Code
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
###Output
_____no_output_____
###Markdown
Test the Attention layerCreate a `BahdanauAttention` layer:
###Code
attention_layer = BahdanauAttention(units)
###Output
_____no_output_____
###Markdown
This layer takes 3 inputs:

* The `query`: This will be generated by the decoder, later.
* The `value`: This will be the output of the encoder.
* The `mask`: To exclude the padding, `example_tokens != 0`
###Code
(example_tokens != 0).shape
###Output
_____no_output_____
###Markdown
The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequences of value vectors. The result is:

1. A batch of sequences of result vectors, the size of the queries.
2. A batch of attention maps, each with size `(query_length, value_length)`.
###Code
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
###Output
_____no_output_____
###Markdown
The attention weights should sum to `1.0` for each sequence.Here are the attention weights across the sequences at `t=0`:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
Because of the small-random initialization the attention weights are all close to `1/(sequence_length)`. If you zoom in on the weights for a single sequence, you can see that there is some _small_ variation that the model can learn to expand, and exploit.
###Code
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.suptitle('Attention weights for one sequence')
plt.figure(figsize=(12, 6))
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
###Output
_____no_output_____
###Markdown
The decoder

The decoder's job is to generate predictions for the next output token.

1. The decoder receives the complete encoder output.
2. It uses an RNN to keep track of what it has generated so far.
3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.
4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".
5. It generates logit predictions for the next token based on the "attention vector".

Here is the `Decoder` class and its initializer. The initializer creates all the necessary layers.
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
# For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
###Output
_____no_output_____
###Markdown
The `call` method for this layer takes and returns multiple tensors. Organize those into simple container classes:
###Code
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
###Output
_____no_output_____
###Markdown
Here is the implementation of the `call` method:
###Code
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
###Output
_____no_output_____
###Markdown
The **encoder** processes its full input sequence with a single call to its RNN. This implementation of the **decoder** _can_ do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:* Flexibility: Writing the loop gives you direct control over the training procedure.* Clarity: It's possible to do masking tricks and use `layers.RNN`, or `tfa.seq2seq` APIs to pack this all into a single call. But writing it out as a loop may be clearer. * Loop-free training is demonstrated in the [Text generation](text_generation.ipynb) tutorial. Now try using this decoder.
###Code
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
###Output
_____no_output_____
###Markdown
The decoder takes 4 inputs.* `new_tokens` - The last token generated. Initialize the decoder with the `"[START]"` token.* `enc_output` - Generated by the `Encoder`.* `mask` - A boolean tensor indicating where `tokens != 0`* `state` - The previous `state` output from the decoder (the internal state of the decoder's RNN). Pass `None` to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
###Code
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor._index_lookup_layer('[START]').numpy()
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
###Output
_____no_output_____
###Markdown
Sample a token according to the logits:
###Code
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
###Output
_____no_output_____
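###Markdown
As an aside (an illustration, not part of the original notebook), greedy decoding would simply take the most likely token instead of sampling; the `Translator.sample` method defined later does the same thing when `temperature=0.0`:
###Code
# Greedy alternative: argmax over the vocabulary axis instead of sampling.
greedy_token = tf.argmax(dec_result.logits[:, 0, :], axis=-1)[:, tf.newaxis]
greedy_token[:5]
###Output
_____no_output_____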
###Markdown
Decode the token as the first word of the output:
###Code
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Now use the decoder to generate a second set of logits.- Pass the same `enc_output` and `mask`; these haven't changed.- Pass the sampled token as `new_tokens`.- Pass the `decoder_state` the decoder returned last time, so the RNN continues with a memory of where it left off.
###Code
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
TrainingNow that you have all the model components, it's time to start training the model. You'll need:- A loss function and optimizer to perform the optimization.- A training step function defining how to update the model for each input/target batch.- A training loop to drive the training and save checkpoints. Define the loss function
###Code
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
###Output
_____no_output_____
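###Markdown
As a quick sanity check (toy values, not part of the original notebook), apply the loss to a single fake sequence whose last position is padding; only the two non-padding positions should contribute to the total:
###Code
# One sequence of length 3; token ID 0 marks padding and is masked out.
toy_y_true = tf.constant([[2, 5, 0]], dtype=tf.int64)
toy_y_pred = tf.random.normal([1, 3, output_text_processor.vocabulary_size()])
MaskedLoss()(toy_y_true, toy_y_pred)
###Output
_____no_output_____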
###Markdown
Implement the training step Start with a model class, the training process will be implemented as the `train_step` method on this model. See [Customizing fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details.Here the `train_step` method is a wrapper around the `_train_step` implementation which will come later. This wrapper includes a switch to turn on and off `tf.function` compilation, to make debugging easier.
###Code
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
###Output
_____no_output_____
###Markdown
Overall the implementation for the `Model.train_step` method is as follows:1. Receive a batch of `input_text, target_text` from the `tf.data.Dataset`.2. Convert those raw text inputs to token IDs and masks. 3. Run the encoder on the `input_tokens` to get the `encoder_output` and `encoder_state`.4. Initialize the decoder state and loss. 5. Loop over the `target_tokens`: 1. Run the decoder one step at a time. 2. Calculate the loss for each step. 3. Accumulate the average loss.6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's `trainable_variables`. The `_preprocess` method, added below, implements steps 1 and 2:
###Code
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
###Output
_____no_output_____
###Markdown
The `_train_step` method, added below, handles the remaining steps except for actually running the decoder:
###Code
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
# 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
###Output
_____no_output_____
###Markdown
The `_loop_step` method, added below, executes the decoder and calculates the incremental loss and new decoder state (`dec_state`).
###Code
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
###Output
_____no_output_____
###Markdown
Test the training stepBuild a `TrainTranslator`, and configure it for training using the `Model.compile` method:
###Code
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Test out the `train_step`. For a text model like this the loss should start near:
###Code
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
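###Markdown
Why `np.log(vocabulary_size)`? Before training, the softmax output is roughly uniform over the $V$ vocabulary entries, so the expected cross-entropy per token is about $-\log(1/V) = \log V$ nats, which is what `np.log(output_text_processor.vocabulary_size())` computes.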
###Markdown
While it's easier to debug without a `tf.function`, compiling with `tf.function` does give a performance boost. So now that the `_train_step` method is working, try the `tf.function`-wrapped `_tf_train_step` to maximize performance while training:
###Code
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
###Output
_____no_output_____
###Markdown
The first call will be slow, because it traces the function.
###Code
translator.train_step([example_input_batch, example_target_batch])
###Output
_____no_output_____
###Markdown
But after that it's usually 2-3x faster than the eager `train_step` method:
###Code
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
A good test of a new model is to see that it can overfit a single batch of input. Try it, the loss should quickly go to zero:
###Code
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
###Code
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Train the modelWhile there's nothing wrong with writing your own custom training loop, implementing the `Model.train_step` method, as in the previous section, allows you to run `Model.fit` and avoid rewriting all that boiler-plate code. This tutorial only trains for a couple of epochs, so use a `callbacks.Callback` to collect the history of batch losses, for plotting:
###Code
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
###Output
_____no_output_____
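###Markdown
The list at the start of this section mentioned saving checkpoints; here is a minimal sketch using `tf.train.Checkpoint` (the directory name is an arbitrary choice, not from the original notebook):
###Code
# Track the model and optimizer so training can be resumed later.
checkpoint = tf.train.Checkpoint(model=train_translator,
                                 optimizer=train_translator.optimizer)
manager = tf.train.CheckpointManager(checkpoint, './training_checkpoints',
                                     max_to_keep=3)
manager.save()
# To resume: checkpoint.restore(manager.latest_checkpoint)
###Output
_____no_output_____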
###Markdown
The visible jumps in the plot are at the epoch boundaries. TranslateNow that the model is trained, implement a function to execute the full `text => text` translation.For this the model needs to invert the `text => token IDs` mapping provided by the `output_text_processor`. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow.Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
###Code
class Translator(tf.Module):
def __init__(self, encoder, decoder, input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
mask_token='',
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(), mask_token='')
token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()
token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string('[START]')
self.end_token = index_from_string('[END]')
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
###Output
_____no_output_____
###Markdown
Convert token IDs to text The first method to implement is `tokens_to_text` which converts from token IDs to human readable text.
###Code
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch'))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
###Output
_____no_output_____
###Markdown
Input some random token IDs and see what it generates:
###Code
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
###Output
_____no_output_____
###Markdown
Sample from the decoder's predictions This function takes the decoder's logit outputs and samples token IDs from that distribution:
###Code
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
###Output
_____no_output_____
###Markdown
Test run this function on some random inputs:
###Code
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
###Output
_____no_output_____
###Markdown
Implement the translation loopHere is a complete implementation of the text to text translation loop.This implementation collects the results into python lists, before using `tf.concat` to join them into tensors.This implementation statically unrolls the graph out to `max_length` iterations.This is okay with eager execution in python.
###Code
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
# Convert the list of generated token ids to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
###Output
_____no_output_____
###Markdown
Run it on a simple input:
###Code
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
'Esta es mi vida.', # "This is my life."
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
If you want to export this model you'll need to wrap this method in a `tf.function`. This basic implementation has a few issues if you try to do that:1. The resulting graphs are very large and take a few seconds to build, save or load.2. You can't break from a statically unrolled loop, so it will always run `max_length` iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
Run the `tf.function` once to compile it:
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text,
*,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(
new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token ids to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
###Output
_____no_output_____
###Markdown
The initial implementation used python lists to collect the outputs. This uses `tf.range` as the loop iterator, allowing `tf.autograph` to convert the loop. The biggest change in this implementation is the use of `tf.TensorArray` instead of python `list` to accumulate tensors. `tf.TensorArray` is required to collect a variable number of tensors in graph mode. With eager execution this implementation performs on par with the original:
###Code
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
But when you wrap it in a `tf.function` you'll notice two differences.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
First: Graph creation is much faster (~10x), since it doesn't create `max_iterations` copies of the model.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
###Output
_____no_output_____
###Markdown
Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Visualize the process The attention weights returned by the `translate` method show where the model was "looking" when it generated each output token.So the sum of the attention over the input should return all ones:
###Code
a = result['attention'][0]
print(np.sum(a, axis=-1))
###Output
_____no_output_____
###Markdown
Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
###Code
_ = plt.bar(range(len(a[0, :])), a[0, :])
###Output
_____no_output_____
###Markdown
Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
###Code
plt.imshow(np.array(a), vmin=0.0)
###Output
_____no_output_____
###Markdown
Here is some code to make a better attention plot:
###Code
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
Translate a few more sentences and plot them:
###Code
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.'
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:1. The model was trained with teacher forcing, feeding the correct token at each step regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions (a rough sketch of that idea follows the next code cell).2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. [Transformers](transformer.ipynb) solve this by using self-attention in the encoder and decoder.
###Code
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
###Output
_____no_output_____
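###Markdown
Here is a rough sketch of the "feed the model its own predictions" idea from point 1 above (scheduled sampling). This is an illustration under my own naming, not part of the original notebook, and `_train_step` would also need to be rewritten to feed `next_input` back in instead of slicing `target_tokens` directly:
###Code
def _loop_step_scheduled(self, new_tokens, input_mask, enc_output, dec_state,
                         sampling_prob=0.25):
  # Same as `_loop_step`, but with probability `sampling_prob` the next decoder
  # input is the model's own sample rather than the ground-truth token.
  input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
  decoder_input = DecoderInput(new_tokens=input_token,
                               enc_output=enc_output,
                               mask=input_mask)
  dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
  step_loss = self.loss(target_token, dec_result.logits)
  sampled = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
  use_sample = tf.random.uniform(tf.shape(sampled)) < sampling_prob
  next_input = tf.where(use_sample, sampled, target_token)
  return step_loss, dec_state, next_input
TrainTranslator._loop_step_scheduled = _loop_step_scheduled
###Output
_____no_output_____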
###Markdown
Export Once you have a model you're satisfied with, you might want to export it as a `tf.saved_model` for use outside of the python program that created it.Since the model is a subclass of `tf.Module` (through `keras.Model`), and all the functionality for export is compiled in a `tf.function`, the model should export cleanly with `tf.saved_model.save`: Now that the function has been traced it can be exported using `saved_model.save`:
###Code
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
###Output
_____no_output_____
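###Markdown
The exported signature can also be called directly (a minimal sketch of typical `tf.saved_model` usage; the keyword name and output key here follow from `tf_translate`'s argument and return dict, which is my assumption about how they are exposed):
###Code
serving_fn = reloaded.signatures['serving_default']
# Signatures take keyword arguments and return a dict of tensors.
serving_result = serving_fn(input_text=three_input_text)
serving_result['text']
###Output
_____no_output_____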
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025v5). This is an advanced example that assumes some knowledge of:* Sequence to sequence models* TensorFlow fundamentals below the keras layer: * Working with tensors directly * Writing custom `keras.Model`s and `keras.layers`While this architecture is somewhat outdated, it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to [Transformers](transformers.ipynb)).After training the model in this notebook, you will be able to input a Spanish sentence, such as "*¿todavia estan en casa?*", and the model will return the English translation: "*are you still at home?*"The resulting model is exportable as a `tf.saved_model`, so it can be used in other TensorFlow environments.The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:Note: This example takes approximately 10 minutes to run on a single P100 GPU. Setup
###Code
!pip install tensorflow_text
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
###Output
_____no_output_____
###Markdown
This tutorial builds a few layers from scratch; use this variable if you want to switch between the custom and built-in implementations.
###Code
use_builtins = True
###Output
_____no_output_____
###Markdown
This tutorial uses a lot of low level API's where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
###Code
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
###Output
_____no_output_____
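###Markdown
A quick illustration of how the checker behaves (a small addition, not in the original notebook): it caches each axis size by name the first time it sees it, and raises if a later tensor disagrees about that axis.
###Code
demo_checker = ShapeChecker()
demo_checker(tf.zeros([4, 7]), ('batch', 's'))
demo_checker(tf.zeros([4, 7, 16]), ('batch', 's', 'embed'))  # consistent sizes: passes
# demo_checker(tf.zeros([4, 9]), ('batch', 's'))  # would raise: 's' was 7, now 9
###Output
_____no_output_____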
###Markdown
The data We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:```May I borrow this book? ¿Puedo tomar prestado este libro?```They have a variety of languages available, but we'll use the English-Spanish dataset. Download and prepare the datasetFor convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:1. Add a *start* and *end* token to each sentence.2. Clean the sentences by removing special characters.3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).4. Pad each sentence to a maximum length.
###Code
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path_to_file.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset From these arrays of strings you can create a `tf.data.Dataset` of strings that shuffles and batches them efficiently:
###Code
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
###Output
_____no_output_____
###Markdown
Text preprocessing One of the goals of this tutorial is to build a model that can be exported as a `tf.saved_model`. To make that exported model useful it should take `tf.string` inputs, and return `tf.string` outputs: All the text processing happens inside the model. Standardization The model is dealing with multilingual text with a limited vocabulary. So it will be important to standardize the input text.The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents.The `tensorflow_text` package contains a unicode normalize operation:
###Code
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
###Output
_____no_output_____
###Markdown
Unicode normalization will be the first step in the text standardization function:
###Code
def tf_lower_and_split_punct(text):
# Split accented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
###Output
_____no_output_____
###Markdown
Text Vectorization This standardization function will be wrapped up in a `preprocessing.TextVectorization` layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
###Code
max_vocab_size = 5000
input_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
###Output
_____no_output_____
###Markdown
The `TextVectorization` layer and many other `experimental.preprocessing` layers have an `adapt` method. This method reads one epoch of the training data, and works a lot like `Model.fit`. This `adapt` method initializes the layer based on the data. Here it determines the vocabulary:
###Code
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
That's the Spanish `TextVectorization` layer, now build and `.adapt()` the English one:
###Code
output_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
Now these layers can convert a batch of strings into a batch of token IDs:
###Code
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
###Output
_____no_output_____
###Markdown
The `get_vocabulary` method can be used to convert token IDs back to text:
###Code
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
###Output
_____no_output_____
###Markdown
The returned token IDs are zero-padded. This can easily be turned into a mask:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
The encoder/decoder modelThe following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from [Luong's paper](https://arxiv.org/abs/1508.04025v5). Before getting into it define a few constants for the model:
###Code
embedding_dim = 256
units = 1024
###Output
_____no_output_____
###Markdown
The encoderStart by building the encoder, the blue part of the diagram above.The encoder:1. Takes a list of token IDs (from `input_text_processor`).2. Looks up an embedding vector for each token (using a `layers.Embedding`).3. Processes the embeddings into a new sequence (using a `layers.GRU`).4. Returns: * The processed sequence. This will be passed to the attention head. * The internal state. This will be used to initialize the decoder
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
###Output
_____no_output_____
###Markdown
Here is how it fits together so far:
###Code
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
###Output
_____no_output_____
###Markdown
The encoder returns its internal state so that its state can be used to initialize the decoder.It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The attention headThe decoder uses attention to selectively focus on parts of the input sequence.The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a `layers.GlobalAveragePooling1D` but the attention layer performs a _weighted_ average.Let's look at how this works: Where:* $s$ is the encoder index.* $t$ is the decoder index.* $\alpha_{ts}$ is the attention weights.* $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).* $h_t$ is the decoder state attending to the sequence (the attention "query" in transformer terminology).* $c_t$ is the resulting context vector.* $a_t$ is the final output combining the "context" and "query".The equations:1. Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.2. Calculates the context vector as the weighted sum of the encoder outputs. Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches:This tutorial uses [Bahdanau's additive attention](https://arxiv.org/pdf/1409.0473.pdf). TensorFlow includes implementations of both as `layers.Attention` and `layers.AdditiveAttention`. The class below handles the weight matrices in a pair of `layers.Dense` layers, and calls the builtin implementation.
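For reference, since the equation images are not reproduced here, the standard formulation these symbols and the `Eqn. (1)`-`(4)` comments in the code refer to can be written as (a reconstruction consistent with the code below, not the original figures):
$$\alpha_{ts} = \frac{\exp(\mathrm{score}(h_t, h_s))}{\sum_{s'} \exp(\mathrm{score}(h_t, h_{s'}))} \quad \text{(1, attention weights)}$$
$$c_t = \sum_s \alpha_{ts}\, h_s \quad \text{(2, context vector)}$$
$$a_t = \tanh(W_c\,[c_t; h_t]) \quad \text{(3, attention vector)}$$
$$\mathrm{score}(h_t, h_s) = v_a^\top \tanh(W_1 h_t + W_2 h_s) \quad \text{(4, Bahdanau additive score)}$$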
###Code
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
###Output
_____no_output_____
###Markdown
Test the Attention layerCreate a `BahdanauAttention` layer:
###Code
attention_layer = BahdanauAttention(units)
###Output
_____no_output_____
###Markdown
This layer takes 3 inputs:* The `query`: This will be generated by the decoder, later.* The `value`: This will be the output of the encoder.* The `mask`: To exclude the padding, `example_tokens != 0`
###Code
(example_tokens != 0).shape
###Output
_____no_output_____
###Markdown
The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequences of value vectors. The result is:1. A batch of sequences of result vectors the size of the queries.2. A batch of attention maps, with size `(query_length, value_length)`.
###Code
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
###Output
_____no_output_____
###Markdown
The attention weights should sum to `1.0` for each sequence.Here are the attention weights across the sequences at `t=0`:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
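###Markdown
As a quick numerical check (a small addition, not part of the original notebook), you can confirm that the weights along the input axis sum to `1.0` for every query position:
###Code
# Each (batch, t) position should sum to ~1.0 across the `s` (input) axis.
tf.reduce_sum(attention_weights, axis=-1)[:3, :]
###Output
_____no_output_____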
###Markdown
Because of the small random initialization, the attention weights are all close to `1/(sequence_length)`. If you zoom in on the weights for a single sequence, you can see that there is some _small_ variation that the model can learn to expand and exploit.
###Code
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.figure(figsize=(12, 6))
plt.suptitle('Attention weights for one sequence')
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
###Output
_____no_output_____
###Markdown
The decoderThe decoder's job is to generate predictions for the next output token.1. The decoder receives the complete encoder output.2. It uses an RNN to keep track of what it has generated so far.3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".5. It generates logit predictions for the next token based on the "attention vector". Here is the `Decoder` class and its initializer. The initializer creates all the necessary layers.
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
# For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
###Output
_____no_output_____
###Markdown
The `call` method for this layer takes and returns multiple tensors. Organize those into simple container classes:
###Code
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
###Output
_____no_output_____
###Markdown
Here is the implementation of the `call` method:
###Code
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
###Output
_____no_output_____
###Markdown
The **encoder** processes its full input sequence with a single call to its RNN. This implementation of the **decoder** _can_ do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:* Flexibility: Writing the loop gives you direct control over the training procedure.* Clarity: It's possible to do masking tricks and use `layers.RNN`, or `tfa.seq2seq` APIs to pack this all into a single call. But writing it out as a loop may be clearer. * Loop-free training is demonstrated in the [Text generation](text_generation.ipynb) tutorial. Now try using this decoder.
###Code
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
###Output
_____no_output_____
###Markdown
The decoder takes 4 inputs.* `new_tokens` - The last token generated. Initialize the decoder with the `"[START]"` token.* `enc_output` - Generated by the `Encoder`.* `mask` - A boolean tensor indicating where `tokens != 0`* `state` - The previous `state` output from the decoder (the internal state of the decoder's RNN). Pass `None` to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
###Code
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor._index_lookup_layer('[START]').numpy()
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
###Output
_____no_output_____
###Markdown
Sample a token according to the logits:
###Code
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
###Output
_____no_output_____
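###Markdown
As an aside (an illustration, not part of the original notebook), greedy decoding would simply take the most likely token instead of sampling; the `Translator.sample` method defined later does the same thing when `temperature=0.0`:
###Code
# Greedy alternative: argmax over the vocabulary axis instead of sampling.
greedy_token = tf.argmax(dec_result.logits[:, 0, :], axis=-1)[:, tf.newaxis]
greedy_token[:5]
###Output
_____no_output_____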
###Markdown
Decode the token as the first word of the output:
###Code
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Now use the decoder to generate a second set of logits.- Pass the same `enc_output` and `mask`; these haven't changed.- Pass the sampled token as `new_tokens`.- Pass the `decoder_state` the decoder returned last time, so the RNN continues with a memory of where it left off.
###Code
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
TrainingNow that you have all the model components, it's time to start training the model. You'll need:- A loss function and optimizer to perform the optimization.- A training step function defining how to update the model for each input/target batch.- A training loop to drive the training and save checkpoints. Define the loss function
###Code
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
###Output
_____no_output_____
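###Markdown
As a quick sanity check (toy values, not part of the original notebook), apply the loss to a single fake sequence whose last position is padding; only the two non-padding positions should contribute to the total:
###Code
# One sequence of length 3; token ID 0 marks padding and is masked out.
toy_y_true = tf.constant([[2, 5, 0]], dtype=tf.int64)
toy_y_pred = tf.random.normal([1, 3, output_text_processor.vocabulary_size()])
MaskedLoss()(toy_y_true, toy_y_pred)
###Output
_____no_output_____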
###Markdown
Implement the training step Start with a model class, the training process will be implemented as the `train_step` method on this model. See [Customizing fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details.Here the `train_step` method is a wrapper around the `_train_step` implementation which will come later. This wrapper includes a switch to turn on and off `tf.function` compilation, to make debugging easier.
###Code
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
###Output
_____no_output_____
###Markdown
Overall the implementation for the `Model.train_step` method is as follows:1. Receive a batch of `input_text, target_text` from the `tf.data.Dataset`.2. Convert those raw text inputs to token IDs and masks. 3. Run the encoder on the `input_tokens` to get the `encoder_output` and `encoder_state`.4. Initialize the decoder state and loss. 5. Loop over the `target_tokens`: 1. Run the decoder one step at a time. 2. Calculate the loss for each step. 3. Accumulate the average loss.6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's `trainable_variables`. The `_preprocess` method, added below, implements steps 1 and 2:
###Code
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
###Output
_____no_output_____
###Markdown
The `_train_step` method, added below, handles the remaining steps except for actually running the decoder:
###Code
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
# 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
###Output
_____no_output_____
###Markdown
The `_loop_step` method, added below, executes the decoder and calculates the incremental loss and new decoder state (`dec_state`).
###Code
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
###Output
_____no_output_____
###Markdown
Test the training stepBuild a `TrainTranslator`, and configure it for training using the `Model.compile` method:
###Code
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Test out the `train_step`. For a text model like this, the loss should start near the log of the output vocabulary size (the cross-entropy of a roughly uniform distribution over the vocabulary):
###Code
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
While it's easier to debug without a `tf.function`, compiling with `tf.function` does give a performance boost. So now that the `_train_step` method is working, try the `tf.function`-wrapped `_tf_train_step` to maximize performance while training:
###Code
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
###Output
_____no_output_____
###Markdown
The first call will be slow, because it traces the function.
###Code
translator.train_step([example_input_batch, example_target_batch])
###Output
_____no_output_____
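###Markdown
As a minimal, standalone sketch (separate from the translator) of why the first call is slower: the Python body of a `tf.function` only runs while the function is traced into a graph, and later calls with a compatible input signature reuse that graph.
###Code
@tf.function
def traced_square(x):
  print('Tracing with', x)  # Runs only while tracing, not on later calls.
  return x * x

traced_square(tf.constant(2.0))  # First call: traces, then runs the graph.
traced_square(tf.constant(3.0))  # Compatible input: reuses the trace, no print.
###Output
_____no_output_____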
###Markdown
But after that it's usually 2-3x faster than the eager `train_step` method:
###Code
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
A good test of a new model is to see that it can overfit a single batch of input. Try it, the loss should quickly go to zero:
###Code
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
###Code
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Train the modelWhile there's nothing wrong with writing your own custom training loop, implementing the `Model.train_step` method, as in the previous section, allows you to run `Model.fit` and avoid rewriting all that boiler-plate code. This tutorial only trains for a couple of epochs, so use a `callbacks.Callback` to collect the history of batch losses, for plotting:
###Code
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
###Output
_____no_output_____
###Markdown
The visible jumps in the plot are at the epoch boundaries. TranslateNow that the model is trained, implement a function to execute the full `text => text` translation.For this the model needs to invert the `text => token IDs` mapping provided by the `output_text_processor`. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow.Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
###Code
class Translator(tf.Module):
def __init__(self,
encoder, decoder,
input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary())
token_mask_ids = index_from_string(['',
'[UNK]',
'[START]']).numpy()
    token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string('[START]')
self.end_token = index_from_string('[END]')
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
###Output
_____no_output_____
###Markdown
Convert token IDs to text The first method to implement is `tokens_to_text` which converts from token IDs to human readable text.
###Code
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch'))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
###Output
_____no_output_____
###Markdown
Input some random token IDs and see what it generates:
###Code
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
###Output
_____no_output_____
###Markdown
Sample from the decoder's predictions This function takes the decoder's logit outputs and samples token IDs from that distribution:
###Code
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
###Output
_____no_output_____
###Markdown
Test run this function on some random inputs:
###Code
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
###Output
_____no_output_____
###Markdown
Implement the translation loopHere is a complete implementation of the text to text translation loop.This implementation collects the results into python lists, before using `tf.concat` to join them into tensors.This implementation statically unrolls the graph out to `max_length` iterations.This is okay with eager execution in python.
###Code
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
  # Convert the list of generated token IDs to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
###Output
_____no_output_____
###Markdown
Run it on a simple input:
###Code
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
    'Esta es mi vida.', # "This is my life."
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
If you want to export this model you'll need to wrap this method in a `tf.function`. This basic implementation has a few issues if you try to do that:1. The resulting graphs are very large and take a few seconds to build, save or load.2. You can't break from a statically unrolled loop, so it will always run `max_length` iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
Run the `tf.function` once to compile it:
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask = (input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
  # Convert the list of generated token IDs to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
###Output
_____no_output_____
###Markdown
The initial implementation used python lists to collect the outputs. This uses `tf.range` as the loop iterator, allowing `tf.autograph` to convert the loop. The biggest change in this implementation is the use of `tf.TensorArray` instead of python `list` to accumulate tensors. `tf.TensorArray` is required to collect a variable number of tensors in graph mode. With eager execution this implementation performs on par with the original:
###Code
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
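###Markdown
As a minimal, standalone sketch (not part of the translator) of the `tf.TensorArray` pattern used above: inside a `tf.function`, a plain python list cannot accumulate a number of tensors that depends on a tensor-valued loop bound, but a `tf.TensorArray` can.
###Code
@tf.function
def collect_squares(n):
  ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
  for i in tf.range(n):
    ta = ta.write(i, i * i)  # `write` returns a new handle, so re-assign it.
  return ta.stack()

collect_squares(tf.constant(5))  # => [0, 1, 4, 9, 16]
###Output
_____no_output_____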
###Markdown
But when you wrap it in a `tf.function` you'll notice two differences.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
First: Graph creation is much faster (~10x), since it doesn't create `max_iterations` copies of the model.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
###Output
_____no_output_____
###Markdown
Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Visualize the process The attention weights returned by the `translate` method show where the model was "looking" when it generated each output token.So the sum of the attention over the input should return all ones:
###Code
a = result['attention'][0]
print(np.sum(a, axis=-1))
###Output
_____no_output_____
###Markdown
Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
###Code
_ = plt.bar(range(len(a[0, :])), a[0, :])
###Output
_____no_output_____
###Markdown
Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
###Code
plt.imshow(np.array(a), vmin=0.0)
###Output
_____no_output_____
###Markdown
Here is some code to make a better attention plot:
###Code
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
Translate a few more sentences and plot them:
###Code
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.'
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:1. The model was trained with teacher-forcing feeding the correct token at each step, regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions.2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. [Transformers](transformer.ipynb) solve this by using self-attention in the encoder and decoder.
###Code
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
###Output
_____no_output_____
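###Markdown
On the first point above: one common mitigation is scheduled sampling, where the decoder is sometimes fed its own sampled prediction instead of the ground-truth token during training. The sketch below is purely illustrative and is not used anywhere in this tutorial; the `sampling_prob` argument is made up for the example, and `_train_step` would also need to be changed to carry `next_input` forward between steps (not shown).
###Code
def _loop_step_scheduled(self, new_tokens, input_mask, enc_output, dec_state,
                         sampling_prob=0.25):
  input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]

  # Run the decoder one step, exactly as in `_loop_step`.
  decoder_input = DecoderInput(new_tokens=input_token,
                               enc_output=enc_output,
                               mask=input_mask)
  dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
  step_loss = self.loss(target_token, dec_result.logits)

  # With probability `sampling_prob`, use the model's own sampled token as the
  # next decoder input instead of the ground-truth token.
  sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
  use_sample = tf.random.uniform(tf.shape(sampled_token)) < sampling_prob
  next_input = tf.where(use_sample, sampled_token, target_token)
  return step_loss, dec_state, next_input
###Output
_____no_output_____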
###Markdown
Export Once you have a model you're satisfied with you might want to export it as a `tf.saved_model` for use outside of this python program that created it.Since the model is a subclass of `tf.Module` (through `keras.Model`), and all the functionality for export is compiled in a `tf.function` the model should export cleanly with `tf.saved_model.save`: Now that the function has been traced it can be exported using `saved_model.save`:
###Code
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
###Output
_____no_output_____
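###Markdown
As a hedged sketch of consuming the exported model through its serving signature (the `input_text` argument name and the `'text'`/`'attention'` output keys are assumed to follow from `tf_translate` as defined above):
###Code
serving_fn = reloaded.signatures['serving_default']
serving_result = serving_fn(input_text=tf.constant(['hace mucho frio aqui.']))
print(list(serving_result.keys()))  # Expected to include 'text' and 'attention'.
print(serving_result['text'][0].numpy().decode())
###Output
_____no_output_____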
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025v5). This is an advanced example that assumes some knowledge of:* Sequence to sequence models* TensorFlow fundamentals below the keras layer: * Working with tensors directly * Writing custom `keras.Model`s and `keras.layers`While this architecture is somewhat outdated, it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to [Transformers](transformer.ipynb)). After training the model in this notebook, you will be able to input a Spanish sentence, such as "*¿todavia estan en casa?*", and return the English translation: "*are you still at home?*" The resulting model is exportable as a `tf.saved_model`, so it can be used in other TensorFlow environments. The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating: Note: This example takes approximately 10 minutes to run on a single P100 GPU. Setup
###Code
!pip install tensorflow_text==2.7.3
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
###Output
_____no_output_____
###Markdown
This tutorial builds a few layers from scratch, use this variable if you want to switch between the custom and builtin implementations.
###Code
use_builtins = True
###Output
_____no_output_____
###Markdown
This tutorial uses a lot of low level API's where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
###Code
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
###Output
_____no_output_____
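###Markdown
As a quick illustrative sketch (a small addition, not part of the original flow): the checker caches the length of each named axis the first time it sees it, and raises if a later tensor disagrees.
###Code
checker = ShapeChecker()
checker(tf.zeros([4, 7]), ('batch', 's'))    # Records batch=4, s=7.
checker(tf.zeros([4, 7]), ('batch', 's'))    # OK: matches the cached sizes.
try:
  checker(tf.zeros([5, 7]), ('batch', 's'))  # 'batch' changed from 4 to 5.
except ValueError as e:
  print(e)
###Output
_____no_output_____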
###Markdown
The data We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:```May I borrow this book? ¿Puedo tomar prestado este libro?```They have a variety of languages available, but we'll use the English-Spanish dataset. Download and prepare the datasetFor convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:1. Add a *start* and *end* token to each sentence.2. Clean the sentences by removing special characters.3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).4. Pad each sentence to a maximum length.
###Code
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset From these arrays of strings you can create a `tf.data.Dataset` of strings that shuffles and batches them efficiently:
###Code
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
###Output
_____no_output_____
###Markdown
Text preprocessing One of the goals of this tutorial is to build a model that can be exported as a `tf.saved_model`. To make that exported model useful it should take `tf.string` inputs, and return `tf.string` outputs: All the text processing happens inside the model. Standardization The model is dealing with multilingual text with a limited vocabulary. So it will be important to standardize the input text.The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents.The `tensorflow_text` package contains a unicode normalize operation:
###Code
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
###Output
_____no_output_____
###Markdown
Unicode normalization will be the first step in the text standardization function:
###Code
def tf_lower_and_split_punct(text):
  # Split accented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
###Output
_____no_output_____
###Markdown
Text Vectorization This standardization function will be wrapped up in a `tf.keras.layers.TextVectorization` layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
###Code
max_vocab_size = 5000
input_text_processor = tf.keras.layers.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
###Output
_____no_output_____
###Markdown
The `TextVectorization` layer and many other preprocessing layers have an `adapt` method. This method reads one epoch of the training data, and works a lot like `Model.fit`. This `adapt` method initializes the layer based on the data. Here it determines the vocabulary:
###Code
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
That's the Spanish `TextVectorization` layer, now build and `.adapt()` the English one:
###Code
output_text_processor = tf.keras.layers.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
Now these layers can convert a batch of strings into a batch of token IDs:
###Code
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
###Output
_____no_output_____
###Markdown
The `get_vocabulary` method can be used to convert token IDs back to text:
###Code
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
###Output
_____no_output_____
###Markdown
The returned token IDs are zero-padded. This can easily be turned into a mask:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
The encoder/decoder modelThe following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from [Luong's paper](https://arxiv.org/abs/1508.04025v5). Before getting into it define a few constants for the model:
###Code
embedding_dim = 256
units = 1024
###Output
_____no_output_____
###Markdown
The encoderStart by building the encoder, the blue part of the diagram above.The encoder:1. Takes a list of token IDs (from `input_text_processor`).3. Looks up an embedding vector for each token (Using a `layers.Embedding`).4. Processes the embeddings into a new sequence (Using a `layers.GRU`).5. Returns: * The processed sequence. This will be passed to the attention head. * The internal state. This will be used to initialize the decoder
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
###Output
_____no_output_____
###Markdown
Here is how it fits together so far:
###Code
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
###Output
_____no_output_____
###Markdown
The encoder returns its internal state so that its state can be used to initialize the decoder. It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The attention headThe decoder uses attention to selectively focus on parts of the input sequence. The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a `layers.GlobalAveragePooling1D` but the attention layer performs a _weighted_ average. Let's look at how this works: Where:* $s$ is the encoder index.* $t$ is the decoder index.* $\alpha_{ts}$ are the attention weights.* $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).* $h_t$ is the decoder state attending to the sequence (the attention "query" in transformer terminology).* $c_t$ is the resulting context vector.* $a_t$ is the final output combining the "context" and "query". The equations:1. Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.2. Calculates the context vector as the weighted sum of the encoder outputs. Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches:This tutorial uses [Bahdanau's additive attention](https://arxiv.org/pdf/1409.0473.pdf). TensorFlow includes implementations of both as `layers.Attention` and `layers.AdditiveAttention`. The class below handles the weight matrices in a pair of `layers.Dense` layers, and calls the builtin implementation.
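For reference, here are those equations written out in the standard notation of Luong's paper (the original figures are not included here, so this is a reconstruction of the referenced formulas): $$\alpha_{ts} = \frac{\exp\big(\mathrm{score}(h_t, h_s)\big)}{\sum_{s'}\exp\big(\mathrm{score}(h_t, h_{s'})\big)} \quad \text{(1, attention weights)}$$ $$c_t = \sum_s \alpha_{ts}\, h_s \quad \text{(2, context vector)}$$ $$a_t = \tanh\big(W_c\,[c_t; h_t]\big) \quad \text{(3, attention vector)}$$ The two common score functions are the multiplicative form $\mathrm{score}(h_t, h_s) = h_t^\top W h_s$ and the additive (Bahdanau) form used below: $$\mathrm{score}(h_t, h_s) = v_a^\top \tanh\big(W_1 h_t + W_2 h_s\big) \quad \text{(4)}$$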
###Code
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
###Output
_____no_output_____
###Markdown
Test the Attention layerCreate a `BahdanauAttention` layer:
###Code
attention_layer = BahdanauAttention(units)
###Output
_____no_output_____
###Markdown
This layer takes 3 inputs:* The `query`: This will be generated by the decoder, later.* The `value`: This Will be the output of the encoder.* The `mask`: To exclude the padding, `example_tokens != 0`
###Code
(example_tokens != 0).shape
###Output
_____no_output_____
###Markdown
The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequence of value vectors. The result is:1. A batch of sequences of result vectors the size of the queries.2. A batch attention maps, with size `(query_length, value_length)`.
###Code
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
###Output
_____no_output_____
###Markdown
The attention weights should sum to `1.0` for each sequence.Here are the attention weights across the sequences at `t=0`:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
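###Markdown
As a quick numeric check (a small addition, not in the original flow): the weights are a softmax over the unmasked input positions, so summing over the last axis should give 1.0 for every query position.
###Code
# Sum over the value (input) axis; every entry should be ~1.0.
print(tf.reduce_sum(attention_weights, axis=-1)[:3])
###Output
_____no_output_____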
###Markdown
Because of the small-random initialization the attention weights are all close to `1/(sequence_length)`. If you zoom in on the weights for a single sequence, you can see that there is some _small_ variation that the model can learn to expand, and exploit.
###Code
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.figure(figsize=(12, 6))
plt.suptitle('Attention weights for one sequence')
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
###Output
_____no_output_____
###Markdown
The decoderThe decoder's job is to generate predictions for the next output token.1. The decoder receives the complete encoder output.2. It uses an RNN to keep track of what it has generated so far.3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".5. It generates logit predictions for the next token based on the "attention vector". Here is the `Decoder` class and its initializer. The initializer creates all the necessary layers.
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
    # For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
###Output
_____no_output_____
###Markdown
The `call` method for this layer takes and returns multiple tensors. Organize those into simple container classes:
###Code
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
###Output
_____no_output_____
###Markdown
Here is the implementation of the `call` method:
###Code
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
  # [ct; ht] shape: (batch, t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
###Output
_____no_output_____
###Markdown
The **encoder** processes its full input sequence with a single call to its RNN. This implementation of the **decoder** _can_ do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:* Flexibility: Writing the loop gives you direct control over the training procedure.* Clarity: It's possible to do masking tricks and use `layers.RNN`, or `tfa.seq2seq` APIs to pack this all into a single call. But writing it out as a loop may be clearer. * Loop-free training is demonstrated in the [Text generation](text_generation.ipynb) tutorial. Now try using this decoder.
###Code
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
###Output
_____no_output_____
###Markdown
The decoder takes 4 inputs.* `new_tokens` - The last token generated. Initialize the decoder with the `"[START]"` token.* `enc_output` - Generated by the `Encoder`.* `mask` - A boolean tensor indicating where `tokens != 0`* `state` - The previous `state` output from the decoder (the internal state of the decoder's RNN). Pass `None` to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
###Code
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor.get_vocabulary().index('[START]')
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
###Output
_____no_output_____
###Markdown
Sample a token according to the logits:
###Code
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
###Output
_____no_output_____
###Markdown
Decode the token as the first word of the output:
###Code
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Now use the decoder to generate a second set of logits.- Pass the same `enc_output` and `mask`, these haven't changed.- Pass the sampled token as `new_tokens`.- Pass the `decoder_state` the decoder returned last time, so the RNN continues with a memory of where it left off last time.
###Code
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
TrainingNow that you have all the model components, it's time to start training the model. You'll need:- A loss function and optimizer to perform the optimization.- A training step function defining how to update the model for each input/target batch.- A training loop to drive the training and save checkpoints. Define the loss function
###Code
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
###Output
_____no_output_____
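###Markdown
As a quick, hypothetical sanity check (the toy tensors below are made up for illustration): padded positions (token ID 0) should contribute nothing to the total.
###Code
toy_loss = MaskedLoss()
toy_y_true = tf.constant([[2, 3, 0, 0]], dtype=tf.int64)  # Two real tokens, two padding.
toy_y_pred = tf.random.normal([1, 4, output_text_processor.vocabulary_size()])
print(toy_loss(toy_y_true, toy_y_pred))  # Total loss over the two non-padding tokens only.
###Output
_____no_output_____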
###Markdown
Implement the training step Start with a model class; the training process will be implemented as the `train_step` method on this model. See [Customizing fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details. Here the `train_step` method is a wrapper around the `_train_step` implementation, which will come later. This wrapper includes a switch to turn `tf.function` compilation on and off, to make debugging easier.
###Code
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
###Output
_____no_output_____
###Markdown
Overall the implementation for the `Model.train_step` method is as follows:1. Receive a batch of `input_text, target_text` from the `tf.data.Dataset`.2. Convert those raw text inputs to token-embeddings and masks. 3. Run the encoder on the `input_tokens` to get the `encoder_output` and `encoder_state`.4. Initialize the decoder state and loss. 5. Loop over the `target_tokens`: 1. Run the decoder one step at a time. 2. Calculate the loss for each step. 3. Accumulate the average loss.6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's `trainable_variables`. The `_preprocess` method, added below, implements steps 1 and 2:
###Code
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
###Output
_____no_output_____
###Markdown
The `_train_step` method, added below, handles the remaining steps except for actually running the decoder:
###Code
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
# 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
###Output
_____no_output_____
###Markdown
The `_loop_step` method, added below, executes the decoder and calculates the incremental loss and new decoder state (`dec_state`).
###Code
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
###Output
_____no_output_____
###Markdown
Test the training stepBuild a `TrainTranslator`, and configure it for training using the `Model.compile` method:
###Code
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Test out the `train_step`. For a text model like this the loss should start near:
###Code
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
While it's easier to debug without a `tf.function`, compiling with `tf.function` does give a performance boost. So now that the `_train_step` method is working, try the `tf.function`-wrapped `_tf_train_step` to maximize performance while training:
###Code
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
###Output
_____no_output_____
###Markdown
The first call will be slow, because it traces the function.
###Code
translator.train_step([example_input_batch, example_target_batch])
###Output
_____no_output_____
###Markdown
But after that it's usually 2-3x faster than the eager `train_step` method:
###Code
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
A good test of a new model is to see that it can overfit a single batch of input. Try it, the loss should quickly go to zero:
###Code
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
###Code
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Train the modelWhile there's nothing wrong with writing your own custom training loop, implementing the `Model.train_step` method, as in the previous section, allows you to run `Model.fit` and avoid rewriting all that boiler-plate code. This tutorial only trains for a couple of epochs, so use a `callbacks.Callback` to collect the history of batch losses, for plotting:
###Code
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
###Output
_____no_output_____
###Markdown
The visible jumps in the plot are at the epoch boundaries. TranslateNow that the model is trained, implement a function to execute the full `text => text` translation.For this the model needs to invert the `text => token IDs` mapping provided by the `output_text_processor`. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow.Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
###Code
class Translator(tf.Module):
def __init__(self, encoder, decoder, input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
mask_token='',
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.StringLookup(
vocabulary=output_text_processor.get_vocabulary(), mask_token='')
token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()
    token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string(tf.constant('[START]'))
self.end_token = index_from_string(tf.constant('[END]'))
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
###Output
_____no_output_____
###Markdown
Convert token IDs to text The first method to implement is `tokens_to_text` which converts from token IDs to human readable text.
###Code
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch'))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
###Output
_____no_output_____
###Markdown
Input some random token IDs and see what it generates:
###Code
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
###Output
_____no_output_____
###Markdown
Sample from the decoder's predictions This function takes the decoder's logit outputs and samples token IDs from that distribution:
###Code
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
###Output
_____no_output_____
###Markdown
Test run this function on some random inputs:
###Code
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
###Output
_____no_output_____
###Markdown
Implement the translation loopHere is a complete implementation of the text to text translation loop.This implementation collects the results into python lists, before using `tf.concat` to join them into tensors.This implementation statically unrolls the graph out to `max_length` iterations.This is okay with eager execution in python.
###Code
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
  # Convert the list of generated token ids to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
###Output
_____no_output_____
###Markdown
Run it on a simple input:
###Code
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
    'Esta es mi vida.', # "This is my life."
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
If you want to export this model you'll need to wrap this method in a `tf.function`. This basic implementation has a few issues if you try to do that:

1. The resulting graphs are very large and take a few seconds to build, save or load.
2. You can't break from a statically unrolled loop, so it will always run `max_length` iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
Run the `tf.function` once to compile it:
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text,
*,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(
new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token ids to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
###Output
_____no_output_____
###Markdown
The initial implementation used Python lists to collect the outputs. This version uses `tf.range` as the loop iterator, allowing `tf.autograph` to convert the loop. The biggest change is the use of `tf.TensorArray` instead of a Python `list` to accumulate tensors; `tf.TensorArray` is required to collect a variable number of tensors in graph mode.
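As a quick standalone illustration of that pattern (a minimal sketch, separate from the translator; the function and variable names here are made up for the example):

```
import tensorflow as tf

@tf.function
def collect_squares(n):
  # A TensorArray can grow dynamically inside a graph, unlike a Python list.
  ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
  for i in tf.range(n):
    ta = ta.write(i, i * i)  # `write` returns a new handle; reassign it.
  return ta.stack()          # `stack` joins the collected writes into one tensor.

print(collect_squares(tf.constant(5)))  # [ 0  1  4  9 16]
```

With eager execution this implementation performs on par with the original: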
###Code
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
But when you wrap it in a `tf.function` you'll notice two differences.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
First: Graph creation is much faster (~10x), since it doesn't create `max_iterations` copies of the model.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
###Output
_____no_output_____
###Markdown
Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Visualize the process The attention weights returned by the `translate` method show where the model was "looking" when it generated each output token. So the sum of the attention over the input should return all ones:
###Code
a = result['attention'][0]
print(np.sum(a, axis=-1))
###Output
_____no_output_____
###Markdown
Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
###Code
_ = plt.bar(range(len(a[0, :])), a[0, :])
###Output
_____no_output_____
###Markdown
Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
###Code
plt.imshow(np.array(a), vmin=0.0)
###Output
_____no_output_____
###Markdown
Here is some code to make a better attention plot:
###Code
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
Translate a few more sentences and plot them:
###Code
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.'
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:

1. The model was trained with teacher forcing, feeding the correct token at each step regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions, as sketched below.
2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. [Transformers](transformer.ipynb) solve this by using self-attention in the encoder and decoder.
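Here is a minimal sketch of the first idea, sometimes called scheduled sampling; it is not used in this tutorial, and the helper name and `sampling_prob` value are assumptions for the example:

```
import tensorflow as tf

def mix_teacher_forcing(target_token, logits, sampling_prob=0.25):
  # With probability `sampling_prob`, feed the model's own sampled token back
  # in as the next decoder input instead of the ground-truth target token.
  sampled_token = tf.random.categorical(logits[:, 0, :], num_samples=1)
  use_sample = tf.random.uniform(tf.shape(target_token)) < sampling_prob
  return tf.where(use_sample, sampled_token,
                  tf.cast(target_token, sampled_token.dtype))
```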
###Code
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
###Output
_____no_output_____
###Markdown
Export Once you have a model you're satisfied with, you might want to export it as a `tf.saved_model` for use outside of the Python program that created it. Since the model is a subclass of `tf.Module` (through `keras.Model`), and all the functionality for export is compiled in a `tf.function`, the model should export cleanly with `tf.saved_model.save`. Now that the function has been traced, it can be exported using `saved_model.save`:
###Code
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation. This is an advanced example that assumes some knowledge of sequence to sequence models. After training the model in this notebook, you will be able to input a Spanish sentence, such as *"¿todavia estan en casa?"*, and return the English translation: *"are you still at home?"* The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating. Note: This example takes approximately 10 minutes to run on a single P100 GPU.
###Code
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
###Output
_____no_output_____
###Markdown
Download and prepare the dataset We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:

```
May I borrow this book? ¿Puedo tomar prestado este libro?
```

There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:

1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
4. Pad each sentence to a maximum length.
###Code
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.strip()
# adding a start and an end token to the sentence
  # so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence).encode('utf-8'))
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in line.split('\t')]
for line in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
# creating cleaned input, output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
###Output
_____no_output_____
###Markdown
Limit the size of the dataset to experiment faster (optional) Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):
###Code
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file,
num_examples)
# Calculate max_length of the target tensors
max_length_targ, max_length_inp = target_tensor.shape[1], input_tensor.shape[1]
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
def convert(lang, tensor):
for t in tensor:
if t != 0:
print(f'{t} ----> {lang.index_word[t]}')
print("Input Language; index to word mapping")
convert(inp_lang, input_tensor_train[0])
print()
print("Target Language; index to word mapping")
convert(targ_lang, target_tensor_train[0])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset
###Code
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
###Output
_____no_output_____
###Markdown
Write the encoder and decoder model Implement an encoder-decoder model with attention, which you can read about in the TensorFlow [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt). This example uses a more recent set of APIs. This notebook implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence. The picture and formulas below are an example of an attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5). The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*. Here are the equations that are implemented: This tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf) for the encoder. Let's decide on notation before writing the simplified form:

* FC = Fully connected (dense) layer
* EO = Encoder output
* H = hidden state
* X = input to the decoder

And the pseudo-code:

* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis, but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, 1)*. `max_length` is the length of our input. Since we are trying to assign a weight to each input position, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis 1.
* `embedding output` = The input to the decoder X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU.

The shapes of all the vectors at each step have been specified in the comments in the code:
###Code
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state=hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print('Encoder output shape: (batch size, sequence length, units)', sample_output.shape)
print('Encoder Hidden state shape: (batch size, units)', sample_hidden.shape)
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
# we are doing this to broadcast addition along the time axis to calculate the score
query_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(query_with_time_axis) + self.W2(values)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = BahdanauAttention(10)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units)", attention_result.shape)
print("Attention weights shape: (batch_size, sequence_length, 1)", attention_weights.shape)
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(tf.random.uniform((BATCH_SIZE, 1)),
sample_hidden, sample_output)
print('Decoder output shape: (batch_size, vocab size)', sample_decoder_output.shape)
###Output
_____no_output_____
###Markdown
Define the optimizer and the loss function
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True,
reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
###Output
_____no_output_____
###Markdown
Checkpoints (Object-based saving)
###Code
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
###Output
_____no_output_____
###Markdown
Training

1. Pass the *input* through the *encoder*, which returns the *encoder output* and the *encoder hidden state*.
2. The encoder output, encoder hidden state and the decoder input (which is the *start token*) are passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
5. Use *teacher forcing* to decide the next input to the decoder.
6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.
7. The final step is to calculate the gradients and apply them to the optimizer to backpropagate.
###Code
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print(f'Epoch {epoch+1} Batch {batch} Loss {batch_loss.numpy():.4f}')
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
print(f'Epoch {epoch+1} Loss {total_loss/steps_per_epoch:.4f}')
print(f'Time taken for 1 epoch {time.time()-start:.2f} sec\n')
###Output
_____no_output_____
###Markdown
Translate

* The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
* Stop predicting when the model predicts the *end token*.
* And store the *attention weights for every time step*.

Note: The encoder output is calculated only once for one input.
###Code
def evaluate(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def translate(sentence):
result, sentence, attention_plot = evaluate(sentence)
print('Input:', sentence)
print('Predicted translation:', result)
attention_plot = attention_plot[:len(result.split(' ')),
:len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
###Output
_____no_output_____
###Markdown
Restore the latest checkpoint and test
###Code
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.')
translate(u'esta es mi vida.')
translate(u'¿todavia estan en casa?')
# wrong translation
translate(u'trata de averiguarlo.')
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025v5). This is an advanced example that assumes some knowledge of:

* Sequence to sequence models
* TensorFlow fundamentals below the keras layer:
  * Working with tensors directly
  * Writing custom `keras.Model`s and `keras.layers`

While this architecture is somewhat outdated, it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to [Transformers](transformer.ipynb)). After training the model in this notebook, you will be able to input a Spanish sentence, such as "*¿todavia estan en casa?*", and return the English translation: "*are you still at home?*" The resulting model is exportable as a `tf.saved_model`, so it can be used in other TensorFlow environments. The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating. Note: This example takes approximately 10 minutes to run on a single P100 GPU. Setup
###Code
!pip install tensorflow_text
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
###Output
_____no_output_____
###Markdown
This tutorial builds a few layers from scratch; use this variable if you want to switch between the custom and builtin implementations.
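As a sketch of what such a switch might look like (a hypothetical illustration, not code from this notebook):

```
import tensorflow as tf

def make_additive_attention(use_builtins):
  if use_builtins:
    # Builtin Keras implementation of additive (Bahdanau-style) attention.
    return tf.keras.layers.AdditiveAttention()
  # Otherwise a hand-written layer would be constructed here instead.
  raise NotImplementedError('custom implementation goes here')
```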
###Code
use_builtins = True
###Output
_____no_output_____
###Markdown
This tutorial uses a lot of low-level APIs, where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
###Code
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
###Output
_____no_output_____
###Markdown
The data We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:

```
May I borrow this book? ¿Puedo tomar prestado este libro?
```

They have a variety of languages available, but we'll use the English-Spanish dataset. Download and prepare the dataset For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:

1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
4. Pad each sentence to a maximum length.
###Code
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset From these arrays of strings you can create a `tf.data.Dataset` of strings that shuffles and batches them efficiently:
###Code
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
###Output
_____no_output_____
###Markdown
Text preprocessing One of the goals of this tutorial is to build a model that can be exported as a `tf.saved_model`. To make that exported model useful it should take `tf.string` inputs, and return `tf.string` outputs: All the text processing happens inside the model. Standardization The model is dealing with multilingual text with a limited vocabulary. So it will be important to standardize the input text. The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents. The `tensorflow_text` package contains a unicode normalize operation:
###Code
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
###Output
_____no_output_____
###Markdown
Unicode normalization will be the first step in the text standardization function:
###Code
def tf_lower_and_split_punct(text):
  # Split accented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
###Output
_____no_output_____
###Markdown
Text Vectorization This standardization function will be wrapped up in a `preprocessing.TextVectorization` layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
###Code
max_vocab_size = 5000
input_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
###Output
_____no_output_____
###Markdown
The `TextVectorization` layer and many other `experimental.preprocessing` layers have an `adapt` method. This method reads one epoch of the training data, and works a lot like `Model.fit`. This `adapt` method initializes the layer based on the data. Here it determines the vocabulary:
###Code
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
That's the Spanish `TextVectorization` layer; now build and `.adapt()` the English one:
###Code
output_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
Now these layers can convert a batch of strings into a batch of token IDs:
###Code
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
###Output
_____no_output_____
###Markdown
The `get_vocabulary` method can be used to convert token IDs back to text:
###Code
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
###Output
_____no_output_____
###Markdown
The returned token IDs are zero-padded. This can easily be turned into a mask:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
The encoder/decoder model The following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from [Luong's paper](https://arxiv.org/abs/1508.04025v5). Before getting into it, define a few constants for the model:
###Code
embedding_dim = 256
units = 1024
###Output
_____no_output_____
###Markdown
The encoder Start by building the encoder, the blue part of the diagram above. The encoder:

1. Takes a list of token IDs (from `input_text_processor`).
2. Looks up an embedding vector for each token (using a `layers.Embedding`).
3. Processes the embeddings into a new sequence (using a `layers.GRU`).
4. Returns:
   * The processed sequence. This will be passed to the attention head.
   * The internal state. This will be used to initialize the decoder.
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
###Output
_____no_output_____
###Markdown
Here is how it fits together so far:
###Code
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
###Output
_____no_output_____
###Markdown
The encoder returns its internal state so that its state can be used to initialize the decoder. It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The attention head The decoder uses attention to selectively focus on parts of the input sequence. The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a `layers.GlobalAveragePooling1D`, but the attention layer performs a _weighted_ average. Let's look at how this works: Where:

* $s$ is the encoder index.
* $t$ is the decoder index.
* $\alpha_{ts}$ are the attention weights.
* $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).
* $h_t$ is the decoder state attending to the sequence (the attention "query" in transformer terminology).
* $c_t$ is the resulting context vector.
* $a_t$ is the final output combining the "context" and "query".

The equations:

1. Calculate the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.
2. Calculate the context vector as the weighted sum of the encoder outputs.

Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches: This tutorial uses [Bahdanau's additive attention](https://arxiv.org/pdf/1409.0473.pdf). TensorFlow includes implementations of both as `layers.Attention` and `layers.AdditiveAttention`.
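To make the equations concrete, here is a minimal standalone sketch (not one of the tutorial's cells; the shapes and names are assumptions for this example) that computes the additive score, the weights $\alpha_{ts}$, and the context vector $c_t$ directly:

```
import tensorflow as tf

batch, t, s, units = 4, 1, 7, 16
ht = tf.random.normal([batch, t, units])   # decoder state h_t (the "query")
hs = tf.random.normal([batch, s, units])   # encoder outputs h_s (the "key"/"value")

W1 = tf.keras.layers.Dense(units, use_bias=False)
W2 = tf.keras.layers.Dense(units, use_bias=False)
v = tf.keras.layers.Dense(1, use_bias=False)

# Additive score: v^T tanh(W1 h_t + W2 h_s), shape (batch, t, s).
score = v(tf.nn.tanh(W1(ht)[:, :, tf.newaxis, :] + W2(hs)[:, tf.newaxis, :, :]))
score = tf.squeeze(score, -1)

alpha = tf.nn.softmax(score, axis=-1)      # attention weights, sum to 1 over s
context = tf.matmul(alpha, hs)             # c_t = sum_s alpha_ts * h_s, shape (batch, t, units)
```

The class below handles the weight matrices in a pair of `layers.Dense` layers, and calls the builtin implementation.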
###Code
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
###Output
_____no_output_____
###Markdown
Test the Attention layer Create a `BahdanauAttention` layer:
###Code
attention_layer = BahdanauAttention(units)
###Output
_____no_output_____
###Markdown
This layer takes 3 inputs:

* The `query`: This will be generated by the decoder, later.
* The `value`: This will be the output of the encoder.
* The `mask`: To exclude the padding, `example_tokens != 0`
###Code
(example_tokens != 0).shape
###Output
_____no_output_____
###Markdown
The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequences of value vectors. The result is:

1. A batch of sequences of result vectors the size of the queries.
2. A batch of attention maps, with size `(query_length, value_length)`.
###Code
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
###Output
_____no_output_____
###Markdown
The attention weights should sum to `1.0` for each sequence; a quick numerical check (not one of the original cells) is sketched below.
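The check assumes the `attention_weights` computed in the previous cell:

```
# Each row of attention weights is a softmax over the input positions,
# so it should sum to 1 for every query position in every sequence.
np.testing.assert_allclose(
    tf.reduce_sum(attention_weights, axis=-1).numpy(), 1.0, atol=1e-5)
```

Here are the attention weights across the sequences at `t=0`: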
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
Because of the small random initialization, the attention weights are all close to `1/(sequence_length)`. If you zoom in on the weights for a single sequence, you can see that there is some _small_ variation that the model can learn to expand and exploit.
###Code
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.figure(figsize=(12, 6))
plt.suptitle('Attention weights for one sequence')
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
###Output
_____no_output_____
###Markdown
The decoder The decoder's job is to generate predictions for the next output token.

1. The decoder receives the complete encoder output.
2. It uses an RNN to keep track of what it has generated so far.
3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.
4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".
5. It generates logit predictions for the next token based on the "attention vector".

Here is the `Decoder` class and its initializer. The initializer creates all the necessary layers.
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
    # For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
###Output
_____no_output_____
###Markdown
The `call` method for this layer takes and returns multiple tensors. Organize those into simple container classes:
###Code
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
###Output
_____no_output_____
###Markdown
Here is the implementation of the `call` method:
###Code
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
###Output
_____no_output_____
###Markdown
The **encoder** processes its full input sequence with a single call to its RNN. This implementation of the **decoder** _can_ do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:

* Flexibility: Writing the loop gives you direct control over the training procedure.
* Clarity: It's possible to do masking tricks and use `layers.RNN`, or `tfa.seq2seq` APIs to pack this all into a single call. But writing it out as a loop may be clearer.
* Loop-free training is demonstrated in the [Text generation](text_generation.ipynb) tutorial.

Now try using this decoder.
###Code
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
###Output
_____no_output_____
###Markdown
The decoder takes 4 inputs.

* `new_tokens` - The last token generated. Initialize the decoder with the `"[START]"` token.
* `enc_output` - Generated by the `Encoder`.
* `mask` - A boolean tensor indicating where `tokens != 0`.
* `state` - The previous `state` output from the decoder (the internal state of the decoder's RNN). Pass `None` to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
###Code
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor.get_vocabulary().index('[START]')
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
###Output
_____no_output_____
###Markdown
Sample a token according to the logits:
###Code
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
###Output
_____no_output_____
###Markdown
Decode the token as the first word of the output:
###Code
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Now use the decoder to generate a second set of logits.

- Pass the same `enc_output` and `mask`; these haven't changed.
- Pass the sampled token as `new_tokens`.
- Pass the `decoder_state` the decoder returned last time, so the RNN continues with a memory of where it left off last time.
###Code
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Training Now that you have all the model components, it's time to start training the model. You'll need:

- A loss function and optimizer to perform the optimization.
- A training step function defining how to update the model for each input/target batch.
- A training loop to drive the training and save checkpoints.

Define the loss function
###Code
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
###Output
_____no_output_____
###Markdown
Implement the training step Start with a model class; the training process will be implemented as the `train_step` method on this model. See [Customizing fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details. Here the `train_step` method is a wrapper around the `_train_step` implementation, which will come later. This wrapper includes a switch to turn `tf.function` compilation on and off, to make debugging easier.
###Code
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
###Output
_____no_output_____
###Markdown
Overall the implementation for the `Model.train_step` method is as follows:

1. Receive a batch of `input_text, target_text` from the `tf.data.Dataset`.
2. Convert those raw text inputs to token-embeddings and masks.
3. Run the encoder on the `input_tokens` to get the `encoder_output` and `encoder_state`.
4. Initialize the decoder state and loss.
5. Loop over the `target_tokens`:
   1. Run the decoder one step at a time.
   2. Calculate the loss for each step.
   3. Accumulate the average loss.
6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's `trainable_variables`.

The `_preprocess` method, added below, implements steps 1 and 2:
###Code
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
###Output
_____no_output_____
###Markdown
The `_train_step` method, added below, handles the remaining steps except for actually running the decoder:
###Code
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
# 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
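# Illustration (toy IDs, not the tutorial's data): each slice target_tokens[:, t:t+2]
# pairs the decoder input at step t with the label it should predict next.
toy_target_tokens = tf.constant([[2, 7, 9, 3, 0, 0]])  # e.g. [START] hola mundo [END] + padding
for toy_t in range(toy_target_tokens.shape[1] - 1):
    toy_pair = toy_target_tokens[:, toy_t:toy_t+2]
    print(f't={toy_t}: input {toy_pair[:, 0:1].numpy().ravel()} -> label {toy_pair[:, 1:2].numpy().ravel()}')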
###Output
_____no_output_____
###Markdown
The `_loop_step` method, added below, executes the decoder and calculates the incremental loss and new decoder state (`dec_state`).
###Code
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
###Output
_____no_output_____
###Markdown
Test the training stepBuild a `TrainTranslator`, and configure it for training using the `Model.compile` method:
###Code
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Test out the `train_step`. For a text model like this the loss should start near the log of the vocabulary size: an untrained model spreads probability roughly uniformly over the $V$ output tokens, so the expected per-token cross-entropy is about $-\ln(1/V) = \ln V$ (roughly 8.5 nats for $V = 5000$):
###Code
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
While it's easier to debug without a `tf.function`, wrapping the code in one does give a performance boost. So now that the `_train_step` method is working, try the `tf.function`-wrapped `_tf_train_step` to maximize performance while training:
###Code
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
###Output
_____no_output_____
###Markdown
The first call will be slow, because it traces the function.
###Code
translator.train_step([example_input_batch, example_target_batch])
###Output
_____no_output_____
###Markdown
But after that it's usually 2-3x faster than the eager `train_step` method:
###Code
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
A good test of a new model is to see that it can overfit a single batch of input. Try it; the loss should quickly go to zero:
###Code
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
###Code
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Train the modelWhile there's nothing wrong with writing your own custom training loop, implementing the `Model.train_step` method, as in the previous section, allows you to run `Model.fit` and avoid rewriting all that boilerplate code. This tutorial only trains for a couple of epochs, so use a `callbacks.Callback` to collect the history of batch losses for plotting:
###Code
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
###Output
_____no_output_____
###Markdown
The visible jumps in the plot are at the epoch boundaries. TranslateNow that the model is trained, implement a function to execute the full `text => text` translation.For this the model needs to invert the `text => token IDs` mapping provided by the `output_text_processor`. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow.Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
###Code
class Translator(tf.Module):
def __init__(self, encoder, decoder, input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
mask_token='',
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(), mask_token='')
token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()
token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)  # plain bool: np.bool is deprecated
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string(tf.constant('[START]'))
self.end_token = index_from_string(tf.constant('[END]'))
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
###Output
_____no_output_____
###Markdown
Convert token IDs to text The first method to implement is `tokens_to_text` which converts from token IDs to human readable text.
###Code
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch',))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
###Output
_____no_output_____
###Markdown
Input some random token IDs and see what it generates:
###Code
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
###Output
_____no_output_____
###Markdown
Sample from the decoder's predictions This function takes the decoder's logit outputs and samples token IDs from that distribution:
###Code
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
###Output
_____no_output_____
###Markdown
Test run this function on some random inputs:
###Code
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
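# Illustrative extra check (not in the original flow): with temperature=0.0 the
# same logits are decoded greedily via argmax instead of being sampled.
translator.sample(example_logits, temperature=0.0)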
###Output
_____no_output_____
###Markdown
Implement the translation loopHere is a complete implementation of the text to text translation loop.This implementation collects the results into python lists, before using `tf.concat` to join them into tensors.This implementation statically unrolls the graph out to `max_length` iterations.This is okay with eager execution in python.
###Code
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, mark it as `done`.
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
# Convert the list of generated token IDs to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
###Output
_____no_output_____
###Markdown
Run it on a simple input:
###Code
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
'Esta es mi vida.', # "This is my life."
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
If you want to export this model you'll need to wrap this method in a `tf.function`. This basic implementation has a few issues if you try to do that:1. The resulting graphs are very large and take a few seconds to build, save or load.2. You can't break from a statically unrolled loop, so it will always run `max_length` iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
Run the `tf.function` once to compile it:
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text,
*,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(
new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, mark it as `done`.
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token ids to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
###Output
_____no_output_____
###Markdown
The initial implementation used Python lists to collect the outputs. This uses `tf.range` as the loop iterator, allowing `tf.autograph` to convert the loop. The biggest change in this implementation is the use of `tf.TensorArray` instead of a Python `list` to accumulate tensors. `tf.TensorArray` is required to collect a variable number of tensors in graph mode (a short stand-alone sketch of the pattern follows). With eager execution this implementation performs on par with the original, as the timing cell after the sketch shows:
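A minimal `tf.TensorArray` sketch (illustrative only, with assumed toy values — not part of the tutorial):
```
# Each write returns a new TensorArray handle; stack() joins the entries on axis 0.
ta = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
for i in tf.range(3):
    ta = ta.write(i, tf.fill([2, 1], tf.cast(i, tf.int64)))
print(ta.stack().shape)  # (3, 2, 1)
```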
###Code
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
But when you wrap it in a `tf.function` you'll notice two differences.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
First: Graph creation is much faster (~10x), since it doesn't create `max_iterations` copies of the model.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
###Output
_____no_output_____
###Markdown
Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Visualize the process The attention weights returned by the `translate` method show where the model was "looking" when it generated each output token.So the sum of the attention over the input should return all ones:
###Code
a = result['attention'][0]
print(np.sum(a, axis=-1))
###Output
_____no_output_____
###Markdown
Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
###Code
_ = plt.bar(range(len(a[0, :])), a[0, :])
###Output
_____no_output_____
###Markdown
Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
###Code
plt.imshow(np.array(a), vmin=0.0)
###Output
_____no_output_____
###Markdown
Here is some code to make a better attention plot:
###Code
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
Translate a few more sentences and plot them:
###Code
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.'
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:1. The model was trained with teacher forcing, feeding the correct token at each step regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions.2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. [Transformers](transformer.ipynb) solve this by using self-attention in the encoder and decoder.
###Code
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
###Output
_____no_output_____
###Markdown
Export Once you have a model you're satisfied with, you might want to export it as a `tf.saved_model` for use outside of the Python program that created it.Since the model is a subclass of `tf.Module` (through `keras.Model`), and all the functionality for export is compiled in a `tf.function`, the model should export cleanly with `tf.saved_model.save`: Now that the function has been traced, it can be exported using `saved_model.save`:
###Code
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
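# A small sketch (assumption: just inspecting the SavedModel saved above): the exported
# model can also be driven through its named serving signature, e.g. from TF Serving.
serving_fn = reloaded.signatures['serving_default']
print(serving_fn.structured_input_signature)
print(list(serving_fn.structured_outputs.keys()))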
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025v5). This is an advanced example that assumes some knowledge of:* Sequence to sequence models* TensorFlow fundamentals below the keras layer: * Working with tensors directly * Writing custom `keras.Model`s and `keras.layers`While this architecture is somewhat outdated, it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to [Transformers](transformer.ipynb)).After training the model in this notebook, you will be able to input a Spanish sentence, such as "*¿todavia estan en casa?*", and return the English translation: "*are you still at home?*"The resulting model is exportable as a `tf.saved_model`, so it can be used in other TensorFlow environments.The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:Note: This example takes approximately 10 minutes to run on a single P100 GPU. Setup
###Code
!pip install "tensorflow-text==2.8.*"
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
###Output
_____no_output_____
###Markdown
This tutorial builds a few layers from scratch; use this variable if you want to switch between the custom and built-in implementations.
###Code
use_builtins = True
###Output
_____no_output_____
###Markdown
This tutorial uses a lot of low-level APIs where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
###Code
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
###Output
_____no_output_____
###Markdown
The data We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:```May I borrow this book? ¿Puedo tomar prestado este libro?```They have a variety of languages available, but we'll use the English-Spanish dataset. Download and prepare the datasetFor convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:1. Add a *start* and *end* token to each sentence.2. Clean the sentences by removing special characters.3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).4. Pad each sentence to a maximum length.
###Code
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset From these arrays of strings you can create a `tf.data.Dataset` of strings that shuffles and batches them efficiently:
###Code
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
###Output
_____no_output_____
###Markdown
Text preprocessing One of the goals of this tutorial is to build a model that can be exported as a `tf.saved_model`. To make that exported model useful it should take `tf.string` inputs, and return `tf.string` outputs: All the text processing happens inside the model. Standardization The model is dealing with multilingual text with a limited vocabulary. So it will be important to standardize the input text.The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents.The `tensorflow_text` package contains a unicode normalize operation:
###Code
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
###Output
_____no_output_____
###Markdown
Unicode normalization will be the first step in the text standardization function:
###Code
def tf_lower_and_split_punct(text):
# Split accented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
###Output
_____no_output_____
###Markdown
Text Vectorization This standardization function will be wrapped up in a `tf.keras.layers.TextVectorization` layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
###Code
max_vocab_size = 5000
input_text_processor = tf.keras.layers.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
###Output
_____no_output_____
###Markdown
The `TextVectorization` layer and many other preprocessing layers have an `adapt` method. This method reads one epoch of the training data, and works a lot like `Model.fit`. This `adapt` method initializes the layer based on the data. Here it determines the vocabulary:
###Code
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
That's the Spanish `TextVectorization` layer, now build and `.adapt()` the English one:
###Code
output_text_processor = tf.keras.layers.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
Now these layers can convert a batch of strings into a batch of token IDs:
###Code
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
###Output
_____no_output_____
###Markdown
The `get_vocabulary` method can be used to convert token IDs back to text:
###Code
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
###Output
_____no_output_____
###Markdown
The returned token IDs are zero-padded. This can easily be turned into a mask:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
The encoder/decoder modelThe following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from [Luong's paper](https://arxiv.org/abs/1508.04025v5). Before getting into it define a few constants for the model:
###Code
embedding_dim = 256
units = 1024
###Output
_____no_output_____
###Markdown
The encoderStart by building the encoder, the blue part of the diagram above.The encoder:1. Takes a list of token IDs (from `input_text_processor`).2. Looks up an embedding vector for each token (using a `layers.Embedding`).3. Processes the embeddings into a new sequence (using a `layers.GRU`).4. Returns: * The processed sequence. This will be passed to the attention head. * The internal state. This will be used to initialize the decoder.
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
###Output
_____no_output_____
###Markdown
Here is how it fits together so far:
###Code
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
###Output
_____no_output_____
###Markdown
The encoder returns its internal state so that its state can be used to initialize the decoder.It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The attention headThe decoder uses attention to selectively focus on parts of the input sequence.The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a `layers.GlobalAveragePooling1D` but the attention layer performs a _weighted_ average.Let's look at how this works: Where:* $s$ is the encoder index.* $t$ is the decoder index.* $\alpha_{ts}$ are the attention weights.* $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).* $h_t$ is the decoder state attending to the sequence (the attention "query" in transformer terminology).* $c_t$ is the resulting context vector.* $a_t$ is the final output combining the "context" and "query".The equations:1. Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.2. Calculates the context vector as the weighted sum of the encoder outputs. Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches:This tutorial uses [Bahdanau's additive attention](https://arxiv.org/pdf/1409.0473.pdf). TensorFlow includes implementations of both as `layers.Attention` and `layers.AdditiveAttention`. The class below handles the weight matrices in a pair of `layers.Dense` layers, and calls the builtin implementation.
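For reference (not spelled out in the cell above), the standard Luong/Bahdanau equations written in this notation are:

$$\alpha_{ts} = \frac{\exp(\mathrm{score}(h_t, h_s))}{\sum_{s'=1}^{S} \exp(\mathrm{score}(h_t, h_{s'}))}, \qquad c_t = \sum_{s} \alpha_{ts}\, h_s, \qquad a_t = \tanh(W_c\,[c_t; h_t])$$

with the additive (Bahdanau) score used here:

$$\mathrm{score}(h_t, h_s) = v_a^{\top} \tanh(W_1 h_t + W_2 h_s)$$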
###Code
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
###Output
_____no_output_____
###Markdown
Test the Attention layerCreate a `BahdanauAttention` layer:
###Code
attention_layer = BahdanauAttention(units)
###Output
_____no_output_____
###Markdown
This layer takes 3 inputs:* The `query`: This will be generated by the decoder, later.* The `value`: This will be the output of the encoder.* The `mask`: To exclude the padding, `example_tokens != 0`
###Code
(example_tokens != 0).shape
###Output
_____no_output_____
###Markdown
The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequences of value vectors. The result is:1. A batch of sequences of result vectors the size of the queries.2. A batch of attention maps, with size `(query_length, value_length)`.
###Code
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
###Output
_____no_output_____
###Markdown
The attention weights should sum to `1.0` for each sequence.Here are the attention weights across the sequences at `t=0`:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
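# Numeric check (illustrative): summing along the value axis should give ~1.0
# for every (sequence, query step) pair.
print(tf.reduce_sum(attention_weights, axis=-1)[:3, 0].numpy())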
###Output
_____no_output_____
###Markdown
Because of the small random initialization, the attention weights are all close to `1/(sequence_length)`. If you zoom in on the weights for a single sequence, you can see that there is some _small_ variation that the model can learn to expand and exploit.
###Code
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
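# Rough check (illustrative): with near-uniform attention, each unmasked weight is
# about 1/s, where s is the number of non-padding tokens in this sequence.
print(1 / len(attention_slice), attention_slice.mean())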
#@title
plt.suptitle('Attention weights for one sequence')
plt.figure(figsize=(12, 6))
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
###Output
_____no_output_____
###Markdown
The decoderThe decoder's job is to generate predictions for the next output token.1. The decoder receives the complete encoder output.2. It uses an RNN to keep track of what it has generated so far.3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".5. It generates logit predictions for the next token based on the "attention vector". Here is the `Decoder` class and its initializer. The initializer creates all the necessary layers.
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
# For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
###Output
_____no_output_____
###Markdown
The `call` method for this layer takes and returns multiple tensors. Organize those into simple container classes:
###Code
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
###Output
_____no_output_____
###Markdown
Here is the implementation of the `call` method:
###Code
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch, t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
###Output
_____no_output_____
###Markdown
The **encoder** processes its full input sequence with a single call to its RNN. This implementation of the **decoder** _can_ do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:* Flexibility: Writing the loop gives you direct control over the training procedure.* Clarity: It's possible to do masking tricks and use `layers.RNN`, or `tfa.seq2seq` APIs to pack this all into a single call. But writing it out as a loop may be clearer. * Loop-free training is demonstrated in the [Text generation](text_generation.ipynb) tutorial. Now try using this decoder.
###Code
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
###Output
_____no_output_____
###Markdown
The decoder takes 4 inputs.* `new_tokens` - The last token generated. Initialize the decoder with the `"[START]"` token.* `enc_output` - Generated by the `Encoder`.* `mask` - A boolean tensor indicating where `tokens != 0`* `state` - The previous `state` output from the decoder (the internal state of the decoder's RNN). Pass `None` to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
###Code
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor.get_vocabulary().index('[START]')
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
###Output
_____no_output_____
###Markdown
Sample a token according to the logits:
###Code
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
###Output
_____no_output_____
###Markdown
Decode the token as the first word of the output:
###Code
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Now use the decoder to generate a second set of logits.- Pass the same `enc_output` and `mask`, these haven't changed.- Pass the sampled token as `new_tokens`.- Pass the `decoder_state` the decoder returned last time, so the RNN continues with a memory of where it left off last time.
###Code
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
TrainingNow that you have all the model components, it's time to start training the model. You'll need:- A loss function and optimizer to perform the optimization.- A training step function defining how to update the model for each input/target batch.- A training loop to drive the training and save checkpoints. Define the loss function
###Code
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
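# Quick sketch (illustrative, with assumed toy labels): padded positions (token ID 0)
# contribute nothing to the returned total.
example_y_true = tf.constant([[5, 7, 0, 0]])
example_y_pred = tf.random.normal([1, 4, output_text_processor.vocabulary_size()])
MaskedLoss()(example_y_true, example_y_pred)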
###Output
_____no_output_____
###Markdown
Implement the training step Start with a model class, the training process will be implemented as the `train_step` method on this model. See [Customizing fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details.Here the `train_step` method is a wrapper around the `_train_step` implementation which will come later. This wrapper includes a switch to turn on and off `tf.function` compilation, to make debugging easier.
###Code
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
###Output
_____no_output_____
###Markdown
Overall the implementation for the `Model.train_step` method is as follows:1. Receive a batch of `input_text, target_text` from the `tf.data.Dataset`.2. Convert those raw text inputs to token-embeddings and masks. 3. Run the encoder on the `input_tokens` to get the `encoder_output` and `encoder_state`.4. Initialize the decoder state and loss. 5. Loop over the `target_tokens`: 1. Run the decoder one step at a time. 2. Calculate the loss for each step. 3. Accumulate the average loss.6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's `trainable_variables`. The `_preprocess` method, added below, implements steps 1 and 2:
###Code
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
###Output
_____no_output_____
###Markdown
The `_train_step` method, added below, handles the remaining steps except for actually running the decoder:
###Code
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
# 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
###Output
_____no_output_____
###Markdown
The `_loop_step` method, added below, executes the decoder and calculates the incremental loss and new decoder state (`dec_state`).
###Code
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
###Output
_____no_output_____
###Markdown
Test the training stepBuild a `TrainTranslator`, and configure it for training using the `Model.compile` method:
###Code
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Test out the `train_step`. For a text model like this the loss should start near:
###Code
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
While it's easier to debug without a `tf.function`, wrapping the code in one does give a performance boost. So now that the `_train_step` method is working, try the `tf.function`-wrapped `_tf_train_step` to maximize performance while training:
###Code
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
###Output
_____no_output_____
###Markdown
The first call will be slow, because it traces the function.
###Code
translator.train_step([example_input_batch, example_target_batch])
###Output
_____no_output_____
###Markdown
But after that it's usually 2-3x faster than the eager `train_step` method:
###Code
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
A good test of a new model is to see that it can overfit a single batch of input. Try it, the loss should quickly go to zero:
###Code
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
###Code
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Train the modelWhile there's nothing wrong with writing your own custom training loop, implementing the `Model.train_step` method, as in the previous section, allows you to run `Model.fit` and avoid rewriting all that boilerplate code. This tutorial only trains for a couple of epochs, so use a `callbacks.Callback` to collect the history of batch losses for plotting:
###Code
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
###Output
_____no_output_____
###Markdown
The visible jumps in the plot are at the epoch boundaries. TranslateNow that the model is trained, implement a function to execute the full `text => text` translation.For this the model needs to invert the `text => token IDs` mapping provided by the `output_text_processor`. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow.Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
###Code
class Translator(tf.Module):
def __init__(self, encoder, decoder, input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
mask_token='',
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.StringLookup(
vocabulary=output_text_processor.get_vocabulary(), mask_token='')
token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()
token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)  # plain bool: np.bool is deprecated
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string(tf.constant('[START]'))
self.end_token = index_from_string(tf.constant('[END]'))
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
###Output
_____no_output_____
###Markdown
Convert token IDs to text The first method to implement is `tokens_to_text` which converts from token IDs to human readable text.
###Code
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch',))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
###Output
_____no_output_____
###Markdown
Input some random token IDs and see what it generates:
###Code
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
###Output
_____no_output_____
###Markdown
Sample from the decoder's predictions This function takes the decoder's logit outputs and samples token IDs from that distribution:
###Code
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
###Output
_____no_output_____
###Markdown
Test run this function on some random inputs:
###Code
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
###Output
_____no_output_____
###Markdown
Implement the translation loopHere is a complete implementation of the text to text translation loop.This implementation collects the results into python lists, before using `tf.concat` to join them into tensors.This implementation statically unrolls the graph out to `max_length` iterations.This is okay with eager execution in python.
###Code
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, mark it as `done`.
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
# Convert the list of generated token IDs to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
###Output
_____no_output_____
###Markdown
Run it on a simple input:
###Code
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
'Esta es mi vida.', # "This is my life."
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
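###Markdown
Since `translate` exposes keyword-only arguments, you can also ask for deterministic, greedy output. This usage sketch (not part of the original tutorial) relies only on the `temperature` parameter defined above:
###Code
# temperature=0.0 makes decoding deterministic (argmax at every step).
greedy_result = translator.translate(input_text=input_text, temperature=0.0)
print(greedy_result['text'][0].numpy().decode())
print(greedy_result['text'][1].numpy().decode())
###Output
_____no_output_____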
###Markdown
If you want to export this model you'll need to wrap this method in a `tf.function`. This basic implementation has a few issues if you try to do that:
1. The resulting graphs are very large and take a few seconds to build, save, or load.
2. You can't break out of a statically unrolled loop, so it will always run `max_length` iterations, even if all the outputs are done. Even then it is only marginally faster than eager execution.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
Run the `tf.function` once to compile it:
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text,
*,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(
new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, mark it as `done`.
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token IDs to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
###Output
_____no_output_____
###Markdown
The initial implementation used Python lists to collect the outputs. This version uses `tf.range` as the loop iterator, allowing `tf.autograph` to convert the loop. The biggest change is the use of `tf.TensorArray` instead of a Python `list` to accumulate tensors; `tf.TensorArray` is required to collect a variable number of tensors in graph mode (a standalone sketch of this follows the timing cell below). With eager execution this implementation performs on par with the original:
###Code
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
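###Markdown
To see why `tf.TensorArray` (rather than a Python `list`) is needed once the loop runs in graph mode, here is a minimal, standalone sketch of accumulating a variable number of tensors inside a `tf.function`. It is an illustration only and not part of the original tutorial:
###Code
@tf.function
def squares_up_to(n):
  # A Python list would be unrolled at trace time; a TensorArray grows at run time.
  ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
  for i in tf.range(n):
    ta = ta.write(i, i * i)
  return ta.stack()

squares_up_to(tf.constant(5)) # => [0, 1, 4, 9, 16]
###Output
_____no_output_____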
###Markdown
But when you wrap it in a `tf.function` you'll notice two differences.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
First: Graph creation is much faster (~10x), since it doesn't create `max_length` copies of the model.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
###Output
_____no_output_____
###Markdown
Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Visualize the process The attention weights returned by the `translate` method show where the model was "looking" when it generated each output token. So the attention over the input should sum to one for each output token:
###Code
a = result['attention'][0]
print(np.sum(a, axis=-1))
###Output
_____no_output_____
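###Markdown
You can turn that visual check into an assertion. A small sketch (not part of the original tutorial):
###Code
# Every output step's attention over the input should sum to (approximately) 1.
np.testing.assert_allclose(np.sum(a, axis=-1), 1.0, atol=1e-3)
###Output
_____no_output_____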
###Markdown
Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
###Code
_ = plt.bar(range(len(a[0, :])), a[0, :])
###Output
_____no_output_____
###Markdown
Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
###Code
plt.imshow(np.array(a), vmin=0.0)
###Output
_____no_output_____
###Markdown
Here is some code to make a better attention plot:
###Code
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i = 0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
Translate a few more sentences and plot them:
###Code
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:
1. The model was trained with teacher forcing, feeding the correct token at each step regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions (see the scheduled-sampling sketch after the example below).
2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. [Transformers](transformer.ipynb) solve this by using self-attention in the encoder and decoder.
###Code
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
###Output
_____no_output_____
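###Markdown
Regarding the first point above, a common mitigation is scheduled sampling: during training, occasionally feed the decoder its own sampled prediction instead of the ground-truth token. The sketch below shows one way `_loop_step` could be adapted; it is an assumption about a possible change, not part of the tutorial, and `sampling_prob` is a made-up hyperparameter. The caller (`_train_step`) would also need to thread `next_input` into the following step.
###Code
def _loop_step_scheduled(self, new_tokens, input_mask, enc_output, dec_state,
                         sampling_prob=0.25):
  # `sampling_prob` is a hypothetical hyperparameter for this sketch.
  input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]

  decoder_input = DecoderInput(new_tokens=input_token,
                               enc_output=enc_output,
                               mask=input_mask)
  dec_result, dec_state = self.decoder(decoder_input, state=dec_state)

  step_loss = self.loss(target_token, dec_result.logits)

  # Sample the model's own prediction for this step.
  sampled = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
  # With probability `sampling_prob`, use the sampled token as the next input
  # instead of the ground-truth token.
  use_sample = tf.random.uniform(tf.shape(target_token)) < sampling_prob
  next_input = tf.where(use_sample, sampled, tf.cast(target_token, sampled.dtype))

  return step_loss, dec_state, next_input
###Output
_____no_output_____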
###Markdown
Export Once you have a model you're satisfied with, you might want to export it as a `tf.saved_model` for use outside of the Python program that created it. Since the model is a subclass of `tf.Module` (through `keras.Model`), and all the functionality for export is compiled in a `tf.function`, the model should export cleanly with `tf.saved_model.save`. Now that the function has been traced it can be exported using `saved_model.save`:
###Code
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
###Output
_____no_output_____
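###Markdown
Having exported the model, you can also inspect what was saved, for example the serving signature. A small sketch (assuming the `translator` directory written above; `saved_model_cli` ships with TensorFlow):
###Code
# The reloaded object exposes the traced signatures:
print(list(reloaded.signatures.keys()))
# Or, from a shell, inspect the SavedModel on disk:
!saved_model_cli show --dir translator --tag_set serve --signature_def serving_default
###Output
_____no_output_____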
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025v5). This is an advanced example that assumes some knowledge of:* Sequence to sequence models* TensorFlow fundamentals below the keras layer: * Working with tensors directly * Writing custom `keras.Model`s and `keras.layers`While this architecture is somewhat outdated it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to [Transformers](transformer.ipynb)).After training the model in this notebook, you will be able to input a Spanish sentence, such as "*¿todavia estan en casa?*", and return the English translation: "*are you still at home?*"The resulting model is exportable as a `tf.saved_model`, so it can be used in other TensorFlow environments.The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence has the model's attention while translating:Note: This example takes approximately 10 minutes to run on a single P100 GPU. Setup
###Code
!pip install tensorflow_text
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
###Output
_____no_output_____
###Markdown
This tutorial builds a few layers from scratch, use this variable if you want to switch between the custom and builtin implementations.
###Code
use_builtins = True
###Output
_____no_output_____
###Markdown
This tutorial uses a lot of low level API's where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
###Code
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
###Output
_____no_output_____
###Markdown
The data We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:```May I borrow this book? ¿Puedo tomar prestado este libro?```They have a variety of languages available, but we'll use the English-Spanish dataset. Download and prepare the datasetFor convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:1. Add a *start* and *end* token to each sentence.2. Clean the sentences by removing special characters.3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).4. Pad each sentence to a maximum length.
###Code
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset From these arrays of strings you can create a `tf.data.Dataset` of strings that shuffles and batches them efficiently:
###Code
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
###Output
_____no_output_____
###Markdown
Text preprocessing One of the goals of this tutorial is to build a model that can be exported as a `tf.saved_model`. To make that exported model useful it should take `tf.string` inputs, and retrun `tf.string` outputs: All the text processing happens inside the model. Standardization The model is dealing with multilingual text with a limited vocabulary. So it will be important to standardize the input text.The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents.The `tensroflow_text` package contains a unicode normalize operation:
###Code
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
###Output
_____no_output_____
###Markdown
Unicode normalization will be the first step in the text standardization function:
###Code
def tf_lower_and_split_punct(text):
# Split accecented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
###Output
_____no_output_____
###Markdown
Text Vectorization This standardization function will be wrapped up in a `preprocessing.TextVectorization` layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
###Code
max_vocab_size = 5000
input_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
###Output
_____no_output_____
###Markdown
The `TextVectorization` layer and many other `experimental.preprocessing` layers have an `adapt` method. This method reads one epoch of the training data, and works a lot like `Model.fix`. This `adapt` method initializes the layer based on the data. Here it determines the vocabulary:
###Code
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
That's the Spanish `TextVectorization` layer, now build and `.adapt()` the English one:
###Code
output_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
Now these layers can convert a batch of strings into a batch of token IDs:
###Code
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
###Output
_____no_output_____
###Markdown
The `get_vocabulary` method can be used to convert token IDs back to text:
###Code
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
###Output
_____no_output_____
###Markdown
The returned token IDs are zero-padded. This can easily be turned into a mask:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
The encoder/decoder modelThe following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from [Luong's paper](https://arxiv.org/abs/1508.04025v5). Before getting into it define a few constants for the model:
###Code
embedding_dim = 256
units = 1024
###Output
_____no_output_____
###Markdown
The encoderStart by building the encoder, the blue part of the diagram above.The encoder:1. Takes a list of token IDs (from `input_text_processor`).3. Looks up an embedding vector for each token (Using a `layers.Embedding`).4. Processes the embeddings into a new sequence (Using a `layers.GRU`).5. Returns: * The processed sequence. This will be passed to the attention head. * The internal state. This will be used to initialize the decoder
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
###Output
_____no_output_____
###Markdown
Here is how it fits together so far:
###Code
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
###Output
_____no_output_____
###Markdown
The encoder returns its internal state so that its state can be used to initialize the decoder.It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The attention headThe decoder uses attention to selectively focus on parts of the input sequence.The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a `layers.GlobalAveragePoling1D` but the attention layer performs a _weighted_ average.Let's look at how this works: Where:* $s$ is the encoder index.* $t$ is the decoder index.* $\alpha_{ts}$ is the attention weights.* $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).* $h_t$ is the the decoder state attending to the sequence (the attention "query" in transformer terminology).* $c_t$ is the resulting context vector.* $a_t$ is the final output combining the "context" and "query".The equations:1. Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.2. Calculates the context vector as the weighted sum of the encoder outputs. Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches:This tutorial uses [Bahdanau's additive attention](https://arxiv.org/pdf/1409.0473.pdf). TensorFlow includes implementations of both as `layers.Attention` and`layers.AdditiveAttention`. The class below handles the weight matrices in a pair of `layers.Dense` layers, and calls the builtin implementation.
###Code
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
###Output
_____no_output_____
###Markdown
Test the Attention layerCreate a `BahdanauAttention` layer:
###Code
attention_layer = BahdanauAttention(units)
###Output
_____no_output_____
###Markdown
This layer takes 3 inputs:* The `query`: This will be generated by the decoder, later.* The `value`: This Will be the output of the encoder.* The `mask`: To exclude the padding, `example_tokens != 0`
###Code
(example_tokens != 0).shape
###Output
_____no_output_____
###Markdown
The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequence of value vectors. The result is:1. A batch of sequences of result vectors the size of the queries.2. A batch attention maps, with size `(query_length, value_length)`.
###Code
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
###Output
_____no_output_____
###Markdown
The attention weights should sum to `1.0` for each sequence.Here are the attention weights across the sequences at `t=0`:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
Because of the small-random initialization the attention weights are all close to `1/(sequence_length)`. If you zoom in on the weights for a single sequence, you can see that there is some _small_ variation that the model can learn to expand, and exploit.
###Code
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.suptitle('Attention weights for one sequence')
plt.figure(figsize=(12, 6))
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
###Output
_____no_output_____
###Markdown
The decoderThe decoder's job is to generate predictions for the next output token.1. The decoder receives the complete encoder output.2. It uses an RNN to keep track of what it has generated so far.3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".5. It generates logit predictions for the next token based on the "attention vector". Here is the `Decoder` class and its initializer. The initializer creates all the necessary layers.
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
# For Step 1. The embedding layer convets token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
###Output
_____no_output_____
###Markdown
The `call` method for this layer takes and returns multiple tensors. Organize those into simple container classes:
###Code
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
###Output
_____no_output_____
###Markdown
Here is the implementation of the `call` method:
###Code
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'enc_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
###Output
_____no_output_____
###Markdown
The **encoder** processes its full input sequence with a single call to its RNN. This implementation of the **decoder** _can_ do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:* Flexibility: Writing the loop gives you direct control over the training procedure.* Clarity: It's possible to do masking tricks and use `layers.RNN`, or `tfa.seq2seq` APIs to pack this all into a single call. But writing it out as a loop may be clearer. * Loop free training is demonstrated in the [Text generation](text_generation.ipynb) tutiorial. Now try using this decoder.
###Code
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
###Output
_____no_output_____
###Markdown
The decoder takes 4 inputs.* `new_tokens` - The last token generated. Initialize the decoder with the `"[START]"` token.* `enc_output` - Generated by the `Encoder`.* `mask` - A boolean tensor indicating where `tokens != 0`* `state` - The previous `state` output from the decoder (the internal state of the decoder's RNN). Pass `None` to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
###Code
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor.get_vocabulary().index('[START]')
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
###Output
_____no_output_____
###Markdown
Sample a token according to the logits:
###Code
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
###Output
_____no_output_____
###Markdown
Decode the token as the first word of the output:
###Code
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Now use the decoder to generate a second set of logits.- Pass the same `enc_output` and `mask`, these haven't changed.- Pass the sampled token as `new_tokens`.- Pass the `decoder_state` the decoder returned last time, so the RNN continues with a memory of where it left off last time.
###Code
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
TrainingNow that you have all the model components, it's time to start training the model. You'll need:- A loss function and optimizer to perform the optimization.- A training step function defining how to update the model for each input/target batch.- A training loop to drive the training and save checkpoints. Define the loss function
###Code
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
###Output
_____no_output_____
###Markdown
Implement the training step Start with a model class, the training process will be implemented as the `train_step` method on this model. See [Customizing fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details.Here the `train_step` method is a wrapper around the `_train_step` implementation which will come later. This wrapper includes a switch to turn on and off `tf.function` compilation, to make debugging easier.
###Code
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
###Output
_____no_output_____
###Markdown
Overall the implementation for the `Model.train_step` method is as follows:1. Receive a batch of `input_text, target_text` from the `tf.data.Dataset`.2. Convert those raw text inputs to token-embeddings and masks. 3. Run the encoder on the `input_tokens` to get the `encoder_output` and `encoder_state`.4. Initialize the decoder state and loss. 5. Loop over the `target_tokens`: 1. Run the decoder one step at a time. 2. Calculate the loss for each step. 3. Accumulate the average loss.6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's `trainable_variables`. The `_preprocess` method, added below, implements steps 1 and 2:
###Code
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
###Output
_____no_output_____
###Markdown
The `_train_step` method, added below, handles the remaining steps except for actually running the decoder:
###Code
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
# 2. The target the target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
###Output
_____no_output_____
###Markdown
The `_loop_step` method, added below, executes the decoder and calculates the incremental loss and new decoder state (`dec_state`).
###Code
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
###Output
_____no_output_____
###Markdown
Test the training stepBuild a `TrainTranslator`, and configure it for training using the `Model.compile` method:
###Code
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Test out the `train_step`. For a text model like this the loss should start near:
###Code
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
While it's easier to debug without a `tf.function` it does give a performance boost. So now that the `_train_step` method is working, try the `tf.function`-wrapped `_tf_train_step`, to maximize performance while training:
###Code
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
###Output
_____no_output_____
###Markdown
The first call will be slow, because it traces the function.
###Code
translator.train_step([example_input_batch, example_target_batch])
###Output
_____no_output_____
###Markdown
But after that it's usually 2-3x faster than the eager `train_step` method:
###Code
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
A good test of a new model is to see that it can overfit a single batch of input. Try it, the loss should quickly go to zero:
###Code
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
###Code
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Train the modelWhile there's nothing wrong with writing your own custom training loop, implementing the `Model.train_step` method, as in the previous section, allows you to run `Model.fit` and avoid rewriting all that boiler-plate code. This tutorial only trains for a couple of epochs, so use a `callbacks.Callback` to collect the history of batch losses, for plotting:
###Code
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
###Output
_____no_output_____
###Markdown
The visible jumps in the plot are at the epoch boundaries. TranslateNow that the model is trained, implement a function to execute the full `text => text` translation.For this the model needs to invert the `text => token IDs` mapping provided by the `output_text_processor`. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow.Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
###Code
class Translator(tf.Module):
def __init__(self, encoder, decoder, input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
mask_token='',
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(), mask_token='')
token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()
token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=np.bool)
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string(tf.constant('[START]'))
self.end_token = index_from_string(tf.constant('[END]'))
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
###Output
_____no_output_____
###Markdown
Convert token IDs to text The first method to implement is `tokens_to_text` which converts from token IDs to human readable text.
###Code
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch'))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
###Output
_____no_output_____
###Markdown
Input some random token IDs and see what it generates:
###Code
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
###Output
_____no_output_____
###Markdown
Sample from the decoder's predictions This function takes the decoder's logit outputs and samples token IDs from that distribution:
###Code
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
###Output
_____no_output_____
###Markdown
Test run this function on some random inputs:
###Code
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
###Output
_____no_output_____
###Markdown
Implement the translation loopHere is a complete implementation of the text to text translation loop.This implementation collects the results into python lists, before using `tf.concat` to join them into tensors.This implementation statically unrolls the graph out to `max_length` iterations.This is okay with eager execution in python.
###Code
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
# Convert the list of generates token ids to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
###Output
_____no_output_____
###Markdown
Run it on a simple input:
###Code
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
'Esta es mi vida.', # "This is my life.""
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
If you want to export this model you'll need to wrap this method in a `tf.function`. This basic implementation has a few issues if you try to do that:1. The resulting graphs are very large and take a few seconds to build, save or load.2. You can't break from a statically unrolled loop, so it will always run `max_length` iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
Run the `tf.function` once to compile it:
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text,
*,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(
new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token ids to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
###Output
_____no_output_____
###Markdown
The initial implementation used python lists to collect the outputs. This uses `tf.range` as the loop iterator, allowing `tf.autograph` to convert the loop. The biggest change in this implementation is the use of `tf.TensorArray` instead of python `list` to accumulate tensors. `tf.TensorArray` is required to collect a variable number of tensors in graph mode. With eager execution this implementation performs on par with the original:
###Code
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
But when you wrap it in a `tf.function` you'll notice two differences.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
First: Graph creation is much faster (~10x), since it doesn't create `max_iterations` copies of the model.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
###Output
_____no_output_____
###Markdown
Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Visualize the process The attention weights returned by the `translate` method show where the model was "looking" when it generated each output token.So the sum of the attention over the input should return all ones:
###Code
a = result['attention'][0]
print(np.sum(a, axis=-1))
###Output
_____no_output_____
###Markdown
Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
###Code
_ = plt.bar(range(len(a[0, :])), a[0, :])
###Output
_____no_output_____
###Markdown
Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
###Code
plt.imshow(np.array(a), vmin=0.0)
###Output
_____no_output_____
###Markdown
Here is some code to make a better attention plot:
###Code
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
Translate a few more sentences and plot them:
###Code
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.'
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:1. The model was trained with teacher-forcing feeding the correct token at each step, regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions.2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. [Transformers](transformer.ipynb) solve this by using self-attention in the encoder and decoder.
###Code
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
###Output
_____no_output_____
###Markdown
Export Once you have a model you're satisfied with, you might want to export it as a `tf.saved_model` for use outside of the Python program that created it. Since the model is a subclass of `tf.Module` (through `keras.Model`), and all the functionality for export is compiled in a `tf.function`, the model should export cleanly with `tf.saved_model.save`: Now that the function has been traced it can be exported using `saved_model.save`:
###Code
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025v5). This is an advanced example that assumes some knowledge of:
* Sequence to sequence models
* TensorFlow fundamentals below the keras layer:
  * Working with tensors directly
  * Writing custom `keras.Model`s and `keras.layers`

While this architecture is somewhat outdated, it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to [Transformers](transformer.ipynb)). After training the model in this notebook, you will be able to input a Spanish sentence, such as "*¿todavia estan en casa?*", and return the English translation: "*are you still at home?*" The resulting model is exportable as a `tf.saved_model`, so it can be used in other TensorFlow environments. The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting: it shows which parts of the input sentence have the model's attention while translating. Note: This example takes approximately 10 minutes to run on a single P100 GPU. Setup
###Code
!pip install tensorflow_text
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
###Output
_____no_output_____
###Markdown
This tutorial builds a few layers from scratch; use this variable if you want to switch between the custom and builtin implementations.
###Code
use_builtins = True
###Output
_____no_output_____
###Markdown
This tutorial uses a lot of low-level APIs where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
###Code
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
###Output
_____no_output_____
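###Markdown
As a quick aside, here is a minimal sketch (the toy tensors are made up for illustration) of how the `ShapeChecker` above reports an axis whose length changes between calls:
###Code
# Sketch: the checker caches axis lengths on first sight and complains on a mismatch.
checker = ShapeChecker()
checker(tf.zeros([4, 7]), ('batch', 's'))    # caches batch=4, s=7
try:
  checker(tf.zeros([4, 9]), ('batch', 's'))  # 's' changed from 7 to 9
except ValueError as e:
  print(e)
###Output
_____no_output_____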
###Markdown
The data We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format: ```May I borrow this book? ¿Puedo tomar prestado este libro?``` They have a variety of languages available, but we'll use the English-Spanish dataset. Download and prepare the dataset For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
4. Pad each sentence to a maximum length.
###Code
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset From these arrays of strings you can create a `tf.data.Dataset` of strings that shuffles and batches them efficiently:
###Code
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
###Output
_____no_output_____
###Markdown
Text preprocessing One of the goals of this tutorial is to build a model that can be exported as a `tf.saved_model`. To make that exported model useful, it should take `tf.string` inputs and return `tf.string` outputs: all the text processing happens inside the model. Standardization The model is dealing with multilingual text with a limited vocabulary, so it will be important to standardize the input text. The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents. The `tensorflow_text` package contains a unicode normalize operation:
###Code
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
###Output
_____no_output_____
###Markdown
Unicode normalization will be the first step in the text standardization function:
###Code
def tf_lower_and_split_punct(text):
# Split accented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
###Output
_____no_output_____
###Markdown
Text Vectorization This standardization function will be wrapped up in a `tf.keras.layers.TextVectorization` layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
###Code
max_vocab_size = 5000
input_text_processor = tf.keras.layers.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
###Output
_____no_output_____
###Markdown
The `TextVectorization` layer and many other preprocessing layers have an `adapt` method. This method reads one epoch of the training data, and works a lot like `Model.fit`. This `adapt` method initializes the layer based on the data. Here it determines the vocabulary:
###Code
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
That's the Spanish `TextVectorization` layer, now build and `.adapt()` the English one:
###Code
output_text_processor = tf.keras.layers.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
Now these layers can convert a batch of strings into a batch of token IDs:
###Code
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
###Output
_____no_output_____
###Markdown
The `get_vocabulary` method can be used to convert token IDs back to text:
###Code
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
###Output
_____no_output_____
###Markdown
The returned token IDs are zero-padded. This can easily be turned into a mask:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
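###Markdown
For a non-graphical view of the same thing, here is a small added check printing the first few rows of token IDs next to the boolean mask (`True` where a real token is present):
###Code
# Added check: token IDs and the corresponding padding mask, side by side.
print(example_tokens[:3, :10].numpy())
print((example_tokens != 0)[:3, :10].numpy())
###Output
_____no_output_____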
###Markdown
The encoder/decoder model The following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input to predict the next word. The diagram and formulas are from [Luong's paper](https://arxiv.org/abs/1508.04025v5). Before getting into it, define a few constants for the model:
###Code
embedding_dim = 256
units = 1024
###Output
_____no_output_____
###Markdown
The encoder Start by building the encoder, the blue part of the diagram above. The encoder:
1. Takes a list of token IDs (from `input_text_processor`).
2. Looks up an embedding vector for each token (using a `layers.Embedding`).
3. Processes the embeddings into a new sequence (using a `layers.GRU`).
4. Returns:
   * The processed sequence. This will be passed to the attention head.
   * The internal state. This will be used to initialize the decoder.
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
###Output
_____no_output_____
###Markdown
Here is how it fits together so far:
###Code
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
###Output
_____no_output_____
###Markdown
The encoder returns its internal state so that its state can be used to initialize the decoder. It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The attention head The decoder uses attention to selectively focus on parts of the input sequence. The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a `layers.GlobalAveragePooling1D`, but the attention layer performs a _weighted_ average. Let's look at how this works. Where:
* $s$ is the encoder index.
* $t$ is the decoder index.
* $\alpha_{ts}$ are the attention weights.
* $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).
* $h_t$ is the decoder state attending to the sequence (the attention "query" in transformer terminology).
* $c_t$ is the resulting context vector.
* $a_t$ is the final output combining the "context" and "query".

The equations:
1. Calculate the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence: $\alpha_{ts} = \mathrm{softmax}_s\big(score(h_t, h_s)\big)$.
2. Calculate the context vector as the weighted sum of the encoder outputs: $c_t = \sum_s \alpha_{ts} h_s$.

Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches: Luong's multiplicative style and Bahdanau's additive style. This tutorial uses [Bahdanau's additive attention](https://arxiv.org/pdf/1409.0473.pdf). TensorFlow includes implementations of both as `layers.Attention` and `layers.AdditiveAttention`. The class below handles the weight matrices in a pair of `layers.Dense` layers, and calls the builtin implementation.
###Code
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
###Output
_____no_output_____
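###Markdown
Before testing the layer, here is a minimal sketch of what Equations (1) and (2) compute, using made-up toy shapes (one query scored against five encoder steps): the weights are a softmax over the scores, and the context vector is the weighted sum of the values.
###Code
# Toy sketch of Eqn. (1)-(2); the shapes here are assumptions for illustration only.
scores = tf.random.normal([1, 5])        # one decoder query scored against 5 encoder steps
alpha = tf.nn.softmax(scores, axis=-1)   # Eqn. (1): attention weights, sum to 1
values = tf.random.normal([5, 8])        # 5 encoder outputs with 8 units each
context = tf.matmul(alpha, values)       # Eqn. (2): weighted sum -> shape (1, 8)
print(alpha.numpy().sum(), context.shape)
###Output
_____no_output_____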
###Markdown
Test the Attention layer Create a `BahdanauAttention` layer:
###Code
attention_layer = BahdanauAttention(units)
###Output
_____no_output_____
###Markdown
This layer takes 3 inputs:
* The `query`: This will be generated by the decoder, later.
* The `value`: This will be the output of the encoder.
* The `mask`: To exclude the padding, `example_tokens != 0`.
###Code
(example_tokens != 0).shape
###Output
_____no_output_____
###Markdown
The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequences of value vectors. The result is:
1. A batch of sequences of result vectors the size of the queries.
2. A batch of attention maps, with size `(query_length, value_length)`.
###Code
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
###Output
_____no_output_____
###Markdown
The attention weights should sum to `1.0` for each sequence. Here are the attention weights across the sequences at `t=0`:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
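###Markdown
As a numeric check of the claim above (a small addition, printing only the first few rows), the weights along the last axis do sum to one:
###Code
# Added check: each attention distribution sums to (approximately) 1.0.
print(tf.reduce_sum(attention_weights, axis=-1)[:3].numpy())
###Output
_____no_output_____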
###Markdown
Because of the small random initialization, the attention weights are all close to `1/(sequence_length)`. If you zoom in on the weights for a single sequence, you can see that there is some _small_ variation that the model can learn to expand and exploit.
###Code
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.suptitle('Attention weights for one sequence')
plt.figure(figsize=(12, 6))
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
###Output
_____no_output_____
###Markdown
The decoder The decoder's job is to generate predictions for the next output token.
1. The decoder receives the complete encoder output.
2. It uses an RNN to keep track of what it has generated so far.
3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.
4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".
5. It generates logit predictions for the next token based on the "attention vector".

Here is the `Decoder` class and its initializer. The initializer creates all the necessary layers.
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
# For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
###Output
_____no_output_____
###Markdown
The `call` method for this layer takes and returns multiple tensors. Organize those into simple container classes:
###Code
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
###Output
_____no_output_____
###Markdown
Here is the implementation of the `call` method:
###Code
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch, t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
###Output
_____no_output_____
###Markdown
The **encoder** processes its full input sequence with a single call to its RNN. This implementation of the **decoder** _can_ do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:
* Flexibility: Writing the loop gives you direct control over the training procedure.
* Clarity: It's possible to do masking tricks and use `layers.RNN` or `tfa.seq2seq` APIs to pack this all into a single call. But writing it out as a loop may be clearer.
* Loop-free training is demonstrated in the [Text generation](text_generation.ipynb) tutorial.

Now try using this decoder.
###Code
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
###Output
_____no_output_____
###Markdown
The decoder takes 4 inputs:
* `new_tokens` - The last token generated. Initialize the decoder with the `"[START]"` token.
* `enc_output` - Generated by the `Encoder`.
* `mask` - A boolean tensor indicating where `tokens != 0`.
* `state` - The previous `state` output from the decoder (the internal state of the decoder's RNN). Pass `None` to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
###Code
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor.get_vocabulary().index('[START]')
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
###Output
_____no_output_____
###Markdown
Sample a token according to the logits:
###Code
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
###Output
_____no_output_____
###Markdown
Decode the token as the first word of the output:
###Code
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Now use the decoder to generate a second set of logits.
- Pass the same `enc_output` and `mask`; these haven't changed.
- Pass the sampled token as `new_tokens`.
- Pass the `decoder_state` the decoder returned last time, so the RNN continues with a memory of where it left off last time.
###Code
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Training Now that you have all the model components, it's time to start training the model. You'll need:
- A loss function and optimizer to perform the optimization.
- A training step function defining how to update the model for each input/target batch.
- A training loop to drive the training and save checkpoints.

Define the loss function
###Code
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
###Output
_____no_output_____
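###Markdown
Here is a tiny sketch of the masking behaviour with made-up token IDs (the values below are assumptions for illustration): the loss on the two padding positions is zeroed out before the sum.
###Code
# Toy check: padding positions (ID 0) contribute nothing to the summed loss.
toy_loss = MaskedLoss()
y_true = tf.constant([[2, 5, 0, 0]])   # two real tokens, two pads (made-up IDs)
y_pred = tf.random.normal([1, 4, output_text_processor.vocabulary_size()])
print(toy_loss(y_true, y_pred).numpy())
###Output
_____no_output_____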
###Markdown
Implement the training step Start with a model class; the training process will be implemented as the `train_step` method on this model. See [Customizing fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details. Here the `train_step` method is a wrapper around the `_train_step` implementation which will come later. This wrapper includes a switch to turn `tf.function` compilation on and off, to make debugging easier.
###Code
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
###Output
_____no_output_____
###Markdown
Overall the implementation for the `Model.train_step` method is as follows:
1. Receive a batch of `input_text, target_text` from the `tf.data.Dataset`.
2. Convert those raw text inputs to token-embeddings and masks.
3. Run the encoder on the `input_tokens` to get the `encoder_output` and `encoder_state`.
4. Initialize the decoder state and loss.
5. Loop over the `target_tokens`:
   1. Run the decoder one step at a time.
   2. Calculate the loss for each step.
   3. Accumulate the average loss.
6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's `trainable_variables`.

The `_preprocess` method, added below, implements steps 1 and 2:
###Code
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
###Output
_____no_output_____
###Markdown
The `_train_step` method, added below, handles the remaining steps except for actually running the decoder:
###Code
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
# 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
###Output
_____no_output_____
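###Markdown
As a small illustration of the teacher-forcing slice used in the loop above (the token IDs below are made up), `target_tokens[:, t:t+2]` pairs the current decoder input with the next-token label at every step:
###Code
# Toy illustration: each slice holds (current input token, next-token label).
toy_targets = tf.constant([[2, 7, 9, 3, 0]])   # e.g. [START], w1, w2, [END], pad
for t in range(toy_targets.shape[1] - 1):
  print(t, toy_targets[:, t:t+2].numpy())
###Output
_____no_output_____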
###Markdown
The `_loop_step` method, added below, executes the decoder and calculates the incremental loss and new decoder state (`dec_state`).
###Code
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
###Output
_____no_output_____
###Markdown
Test the training stepBuild a `TrainTranslator`, and configure it for training using the `Model.compile` method:
###Code
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Test out the `train_step`. For a text model like this, the loss should start near the log of the output vocabulary size (the cross-entropy of a uniform prediction):
###Code
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
While it's easier to debug with eager execution, wrapping the step in a `tf.function` gives a performance boost. So now that the `_train_step` method is working, try the `tf.function`-wrapped `_tf_train_step` to maximize performance while training:
###Code
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
###Output
_____no_output_____
###Markdown
The first call will be slow, because it traces the function.
###Code
translator.train_step([example_input_batch, example_target_batch])
###Output
_____no_output_____
###Markdown
But after that it's usually 2-3x faster than the eager `train_step` method:
###Code
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
A good test of a new model is to see that it can overfit a single batch of input. Try it; the loss should quickly go to zero:
###Code
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
###Code
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Train the model While there's nothing wrong with writing your own custom training loop, implementing the `Model.train_step` method, as in the previous section, allows you to run `Model.fit` and avoid rewriting all that boilerplate code. This tutorial only trains for a couple of epochs, so use a `callbacks.Callback` to collect the history of batch losses for plotting:
###Code
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
###Output
_____no_output_____
###Markdown
The visible jumps in the plot are at the epoch boundaries. Translate Now that the model is trained, implement a function to execute the full `text => text` translation. For this the model needs to invert the `text => token IDs` mapping provided by the `output_text_processor`. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow. Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
###Code
class Translator(tf.Module):
def __init__(self, encoder, decoder, input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
mask_token='',
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.StringLookup(
vocabulary=output_text_processor.get_vocabulary(), mask_token='')
token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()
token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string(tf.constant('[START]'))
self.end_token = index_from_string(tf.constant('[END]'))
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
###Output
_____no_output_____
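###Markdown
As a quick sanity check of the constructor above (a small addition), the start/end token IDs and the masked IDs (padding, `[UNK]`, `[START]`) can be inspected directly:
###Code
# Added check: special-token bookkeeping set up in the Translator constructor.
print('start token id:', translator.start_token.numpy())
print('end token id:  ', translator.end_token.numpy())
print('masked ids:', np.where(translator.token_mask)[0])
###Output
_____no_output_____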
###Markdown
Convert token IDs to text The first method to implement is `tokens_to_text` which converts from token IDs to human readable text.
###Code
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch'))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
###Output
_____no_output_____
###Markdown
Input some random token IDs and see what it generates:
###Code
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
###Output
_____no_output_____
###Markdown
Sample from the decoder's predictions This function takes the decoder's logit outputs and samples token IDs from that distribution:
###Code
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
###Output
_____no_output_____
###Markdown
Test run this function on some random inputs:
###Code
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
###Output
_____no_output_____
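###Markdown
For comparison (a small addition), `temperature=0.0` takes the argmax of the logits instead of sampling, giving greedy decoding:
###Code
# Added comparison: greedy (argmax) token selection with temperature 0.0.
greedy_tokens = translator.sample(example_logits, temperature=0.0)
greedy_tokens
###Output
_____no_output_____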
###Markdown
Implement the translation loop Here is a complete implementation of the text-to-text translation loop. This implementation collects the results into Python lists before using `tf.concat` to join them into tensors. It statically unrolls the graph out to `max_length` iterations, which is okay with eager execution in Python.
###Code
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
# Convert the list of generated token IDs to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
###Output
_____no_output_____
###Markdown
Run it on a simple input:
###Code
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
'Esta es mi vida.', # "This is my life.""
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
If you want to export this model you'll need to wrap this method in a `tf.function`. This basic implementation has a few issues if you try to do that:
1. The resulting graphs are very large and take a few seconds to build, save, or load.
2. You can't break from a statically unrolled loop, so it will always run `max_length` iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
Run the `tf.function` once to compile it:
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text,
*,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(
new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token ids to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
###Output
_____no_output_____
###Markdown
The initial implementation used Python lists to collect the outputs. This one uses `tf.range` as the loop iterator, allowing `tf.autograph` to convert the loop. The biggest change in this implementation is the use of `tf.TensorArray` instead of a Python `list` to accumulate tensors; `tf.TensorArray` is required to collect a variable number of tensors in graph mode (a small standalone sketch of this pattern follows the timing cell below). With eager execution this implementation performs on par with the original:
###Code
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
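###Markdown
Here is a minimal standalone sketch of the `tf.TensorArray` pattern used in the loop above (the values are made up): one `(batch, 1)` slice is written per step, then the array is stacked and transposed to `(batch, t)`, just like `result_tokens`.
###Code
# Sketch: accumulating a variable number of per-step tensors with tf.TensorArray.
ta = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
for t in tf.range(3):
  ta = ta.write(t, tf.fill([2, 1], tf.cast(t, tf.int64)))  # one (batch, 1) slice per step
stacked = ta.stack()                                       # shape (t, batch, 1)
print(tf.transpose(tf.squeeze(stacked, -1), [1, 0]).numpy())  # (batch, t)
###Output
_____no_output_____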
###Markdown
But when you wrap it in a `tf.function` you'll notice two differences.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
First: Graph creation is much faster (~10x), since it doesn't create `max_iterations` copies of the model.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
###Output
_____no_output_____
###Markdown
Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Visualize the process The attention weights returned by the `translate` method show where the model was "looking" when it generated each output token. So the sum of the attention over the input should return all ones:
###Code
a = result['attention'][0]
print(np.sum(a, axis=-1))
###Output
_____no_output_____
###Markdown
Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
###Code
_ = plt.bar(range(len(a[0, :])), a[0, :])
###Output
_____no_output_____
###Markdown
Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
###Code
plt.imshow(np.array(a), vmin=0.0)
###Output
_____no_output_____
###Markdown
Here is some code to make a better attention plot:
###Code
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
Translate a few more sentences and plot them:
###Code
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.'
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:
1. The model was trained with teacher-forcing, feeding the correct token at each step regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions.
2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. [Transformers](transformer.ipynb) solve this by using self-attention in the encoder and decoder.
###Code
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
###Output
_____no_output_____
###Markdown
Export Once you have a model you're satisfied with, you might want to export it as a `tf.saved_model` for use outside of the Python program that created it. Since the model is a subclass of `tf.Module` (through `keras.Model`), and all the functionality for export is compiled in a `tf.function`, the model should export cleanly with `tf.saved_model.save`: Now that the function has been traced it can be exported using `saved_model.save`:
###Code
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025v5). This is an advanced example that assumes some knowledge of:
* Sequence to sequence models
* TensorFlow fundamentals below the keras layer:
  * Working with tensors directly
  * Writing custom `keras.Model`s and `keras.layers`

While this architecture is somewhat outdated, it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to [Transformers](transformer.ipynb)). After training the model in this notebook, you will be able to input a Spanish sentence, such as "*¿todavia estan en casa?*", and return the English translation: "*are you still at home?*" The resulting model is exportable as a `tf.saved_model`, so it can be used in other TensorFlow environments. The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting: it shows which parts of the input sentence have the model's attention while translating. Note: This example takes approximately 10 minutes to run on a single P100 GPU. Setup
###Code
!pip install tensorflow_text
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
###Output
_____no_output_____
###Markdown
This tutorial builds a few layers from scratch; use this variable if you want to switch between the custom and builtin implementations.
###Code
use_builtins = True
###Output
_____no_output_____
###Markdown
This tutorial uses a lot of low-level APIs where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
###Code
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
###Output
_____no_output_____
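###Markdown
One detail worth a quick sketch (toy tensors, made up for illustration): passing `broadcast=True` lets a size-1 axis pass the check even when a different length is already cached for that name.
###Code
# Sketch: broadcast=True accepts a size-1 axis against a cached length.
chk = ShapeChecker()
chk(tf.zeros([4, 7, 16]), ('batch', 't', 'units'))
chk(tf.zeros([4, 1, 16]), ('batch', 't', 'units'), broadcast=True)  # OK: t may be 1
print('broadcast check passed')
###Output
_____no_output_____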
###Markdown
The data We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format: ```May I borrow this book? ¿Puedo tomar prestado este libro?``` They have a variety of languages available, but we'll use the English-Spanish dataset. Download and prepare the dataset For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
4. Pad each sentence to a maximum length.
###Code
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset From these arrays of strings you can create a `tf.data.Dataset` of strings that shuffles and batches them efficiently:
###Code
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
###Output
_____no_output_____
###Markdown
Text preprocessing One of the goals of this tutorial is to build a model that can be exported as a `tf.saved_model`. To make that exported model useful, it should take `tf.string` inputs and return `tf.string` outputs: all the text processing happens inside the model. Standardization The model is dealing with multilingual text with a limited vocabulary, so it will be important to standardize the input text. The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents. The `tensorflow_text` package contains a unicode normalize operation:
###Code
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
###Output
_____no_output_____
###Markdown
Unicode normalization will be the first step in the text standardization function:
###Code
def tf_lower_and_split_punct(text):
# Split accented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
###Output
_____no_output_____
###Markdown
Text Vectorization This standardization function will be wrapped up in a `preprocessing.TextVectorization` layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
###Code
max_vocab_size = 5000
input_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
###Output
_____no_output_____
###Markdown
The `TextVectorization` layer and many other `experimental.preprocessing` layers have an `adapt` method. This method reads one epoch of the training data, and works a lot like `Model.fit`. This `adapt` method initializes the layer based on the data. Here it determines the vocabulary:
###Code
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
That's the Spanish `TextVectorization` layer, now build and `.adapt()` the English one:
###Code
output_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
Now these layers can convert a batch of strings into a batch of token IDs:
###Code
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
###Output
_____no_output_____
###Markdown
The `get_vocabulary` method can be used to convert token IDs back to text:
###Code
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
###Output
_____no_output_____
###Markdown
The returned token IDs are zero-padded. This can easily be turned into a mask:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
The encoder/decoder model The following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from [Luong's paper](https://arxiv.org/abs/1508.04025v5). Before getting into it define a few constants for the model:
###Code
embedding_dim = 256
units = 1024
###Output
_____no_output_____
###Markdown
The encoder Start by building the encoder, the blue part of the diagram above. The encoder:1. Takes a list of token IDs (from `input_text_processor`).2. Looks up an embedding vector for each token (using a `layers.Embedding`).3. Processes the embeddings into a new sequence (using a `layers.GRU`).4. Returns: * The processed sequence. This will be passed to the attention head. * The internal state. This will be used to initialize the decoder.
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
###Output
_____no_output_____
###Markdown
Here is how it fits together so far:
###Code
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
###Output
_____no_output_____
###Markdown
The encoder returns its internal state so that its state can be used to initialize the decoder. It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The attention head The decoder uses attention to selectively focus on parts of the input sequence. The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a `layers.GlobalAveragePooling1D` but the attention layer performs a _weighted_ average. Let's look at how this works: Where:* $s$ is the encoder index.* $t$ is the decoder index.* $\alpha_{ts}$ are the attention weights.* $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).* $h_t$ is the decoder state attending to the sequence (the attention "query" in transformer terminology).* $c_t$ is the resulting context vector.* $a_t$ is the final output combining the "context" and "query". The equations:1. Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.2. Calculates the context vector as the weighted sum of the encoder outputs. Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches: This tutorial uses [Bahdanau's additive attention](https://arxiv.org/pdf/1409.0473.pdf). TensorFlow includes implementations of both as `layers.Attention` and `layers.AdditiveAttention`. The class below handles the weight matrices in a pair of `layers.Dense` layers, and calls the builtin implementation.
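For reference, here is a sketch of the equations described above, written out from the cited papers (notation as in the list above; $W_1$, $W_2$, $W_c$, and $v_a$ are learned weights):

$$\alpha_{ts} = \frac{\exp(\operatorname{score}(h_t, h_s))}{\sum_{s'=1}^{S} \exp(\operatorname{score}(h_t, h_{s'}))} \qquad c_t = \sum_{s} \alpha_{ts}\, h_s \qquad a_t = \tanh(W_c\,[c_t ; h_t])$$

$$\operatorname{score}(h_t, h_s) = v_a^\top \tanh(W_1 h_t + W_2 h_s) \quad \text{(Bahdanau's additive form, used below)}$$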
###Code
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
###Output
_____no_output_____
###Markdown
Test the Attention layer Create a `BahdanauAttention` layer:
###Code
attention_layer = BahdanauAttention(units)
###Output
_____no_output_____
###Markdown
This layer takes 3 inputs:* The `query`: This will be generated by the decoder, later.* The `value`: This will be the output of the encoder.* The `mask`: To exclude the padding, `example_tokens != 0`.
###Code
(example_tokens != 0).shape
###Output
_____no_output_____
###Markdown
The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequences of value vectors. The result is:1. A batch of sequences of result vectors the size of the queries.2. A batch of attention maps, with size `(query_length, value_length)`.
###Code
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
###Output
_____no_output_____
###Markdown
The attention weights should sum to `1.0` for each sequence. Here are the attention weights across the sequences at `t=0`:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
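###Markdown
As a quick numeric check (added here, reusing the `attention_weights` computed above), each row of weights is a softmax over the source positions, so it should sum to roughly `1.0`:
###Code
# Sum over the source axis; every entry should be close to 1.0.
print(tf.reduce_sum(attention_weights, axis=-1).numpy())
###Output
_____no_output_____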
###Markdown
Because of the small random initialization, the attention weights are all close to `1/(sequence_length)`. If you zoom in on the weights for a single sequence, you can see that there is some _small_ variation that the model can learn to expand and exploit.
###Code
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.suptitle('Attention weights for one sequence')
plt.figure(figsize=(12, 6))
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
###Output
_____no_output_____
###Markdown
The decoder The decoder's job is to generate predictions for the next output token.1. The decoder receives the complete encoder output.2. It uses an RNN to keep track of what it has generated so far.3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".5. It generates logit predictions for the next token based on the "attention vector". Here is the `Decoder` class and its initializer. The initializer creates all the necessary layers.
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
    # For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
###Output
_____no_output_____
###Markdown
The `call` method for this layer takes and returns multiple tensors. Organize those into simple container classes:
###Code
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
###Output
_____no_output_____
###Markdown
Here is the implementation of the `call` method:
###Code
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
###Output
_____no_output_____
###Markdown
The **encoder** processes its full input sequence with a single call to its RNN. This implementation of the **decoder** _can_ do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:* Flexibility: Writing the loop gives you direct control over the training procedure.* Clarity: It's possible to do masking tricks and use `layers.RNN`, or `tfa.seq2seq` APIs to pack this all into a single call. But writing it out as a loop may be clearer. * Loop-free training is demonstrated in the [Text generation](text_generation.ipynb) tutorial. Now try using this decoder.
###Code
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
###Output
_____no_output_____
###Markdown
The decoder takes 4 inputs.* `new_tokens` - The last token generated. Initialize the decoder with the `"[START]"` token.* `enc_output` - Generated by the `Encoder`.* `mask` - A boolean tensor indicating where `tokens != 0`* `state` - The previous `state` output from the decoder (the internal state of the decoder's RNN). Pass `None` to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
###Code
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor._index_lookup_layer('[START]').numpy()
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
###Output
_____no_output_____
###Markdown
Sample a token according to the logits:
###Code
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
###Output
_____no_output_____
###Markdown
Decode the token as the first word of the output:
###Code
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Now use the decoder to generate a second set of logits.- Pass the same `enc_output` and `mask`, these haven't changed.- Pass the sampled token as `new_tokens`.- Pass the `decoder_state` the decoder returned last time, so the RNN continues with a memory of where it left off last time.
###Code
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
TrainingNow that you have all the model components, it's time to start training the model. You'll need:- A loss function and optimizer to perform the optimization.- A training step function defining how to update the model for each input/target batch.- A training loop to drive the training and save checkpoints. Define the loss function
###Code
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
###Output
_____no_output_____
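###Markdown
As a tiny sanity check (added here with made-up values, reusing the class above), padded positions contribute nothing to the total:
###Code
loss_fn = MaskedLoss()
y_true = tf.constant([[2, 5, 0, 0]])    # the trailing zeros are padding
y_pred = tf.random.normal([1, 4, 10])   # batch=1, t=4, 10-way logits
print(loss_fn(y_true, y_pred))          # scalar total over the 2 real tokens only
###Output
_____no_output_____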
###Markdown
Implement the training step Start with a model class; the training process will be implemented as the `train_step` method on this model. See [Customizing fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details. Here the `train_step` method is a wrapper around the `_train_step` implementation which will come later. This wrapper includes a switch to turn on and off `tf.function` compilation, to make debugging easier.
###Code
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
###Output
_____no_output_____
###Markdown
Overall the implementation for the `Model.train_step` method is as follows:1. Receive a batch of `input_text, target_text` from the `tf.data.Dataset`.2. Convert those raw text inputs to token sequences and masks. 3. Run the encoder on the `input_tokens` to get the `encoder_output` and `encoder_state`.4. Initialize the decoder state and loss. 5. Loop over the `target_tokens`: 1. Run the decoder one step at a time. 2. Calculate the loss for each step. 3. Accumulate the average loss. 6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's `trainable_variables`. The `_preprocess` method, added below, implements steps 1 and 2:
###Code
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
###Output
_____no_output_____
###Markdown
The `_train_step` method, added below, handles the remaining steps except for actually running the decoder:
###Code
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
      # 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
###Output
_____no_output_____
###Markdown
The `_loop_step` method, added below, executes the decoder and calculates the incremental loss and new decoder state (`dec_state`).
###Code
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
###Output
_____no_output_____
###Markdown
Test the training step Build a `TrainTranslator`, and configure it for training using the `Model.compile` method:
###Code
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Test out the `train_step`. For a text model like this, an untrained model's predictions are roughly uniform over the output vocabulary, so the per-token loss should start near the log of the vocabulary size:
###Code
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
While it's easier to debug without `tf.function`, compiling with it does give a performance boost. So now that the `_train_step` method is working, try the `tf.function`-wrapped `_tf_train_step` to maximize performance while training:
###Code
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
###Output
_____no_output_____
###Markdown
The first call will be slow, because it traces the function.
###Code
translator.train_step([example_input_batch, example_target_batch])
###Output
_____no_output_____
###Markdown
But after that it's usually 2-3x faster than the eager `train_step` method:
###Code
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
A good test of a new model is to see that it can overfit a single batch of input. Try it; the loss should quickly go to zero:
###Code
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
###Code
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Train the model While there's nothing wrong with writing your own custom training loop, implementing the `Model.train_step` method, as in the previous section, allows you to run `Model.fit` and avoid rewriting all that boilerplate code. This tutorial only trains for a couple of epochs, so use a `callbacks.Callback` to collect the history of batch losses for plotting:
###Code
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
###Output
_____no_output_____
###Markdown
The visible jumps in the plot are at the epoch boundaries. Translate Now that the model is trained, implement a function to execute the full `text => text` translation. For this the model needs to invert the `text => token IDs` mapping provided by the `output_text_processor`. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow. Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
###Code
class Translator(tf.Module):
def __init__(self, encoder, decoder, input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
mask_token='',
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(), mask_token='')
token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()
    token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)  # plain `bool`; `np.bool` is deprecated
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string('[START]')
self.end_token = index_from_string('[END]')
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
###Output
_____no_output_____
###Markdown
Convert token IDs to text The first method to implement is `tokens_to_text` which converts from token IDs to human readable text.
###Code
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch'))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
###Output
_____no_output_____
###Markdown
Input some random token IDs and see what it generates:
###Code
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
###Output
_____no_output_____
###Markdown
Sample from the decoder's predictions This function takes the decoder's logit outputs and samples token IDs from that distribution:
###Code
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
###Output
_____no_output_____
###Markdown
Test run this function on some random inputs:
###Code
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
###Output
_____no_output_____
###Markdown
Implement the translation loop Here is a complete implementation of the text-to-text translation loop. This implementation collects the results into Python lists, before using `tf.concat` to join them into tensors. This implementation statically unrolls the graph out to `max_length` iterations. This is okay with eager execution in Python.
###Code
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
  # Convert the list of generated token IDs to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
###Output
_____no_output_____
###Markdown
Run it on a simple input:
###Code
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
    'Esta es mi vida.', # "This is my life."
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
If you want to export this model you'll need to wrap this method in a `tf.function`. This basic implementation has a few issues if you try to do that:1. The resulting graphs are very large and take a few seconds to build, save or load.2. You can't break from a statically unrolled loop, so it will always run `max_length` iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
Run the `tf.function` once to compile it:
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text,
*,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(
new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token ids to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
###Output
_____no_output_____
###Markdown
The initial implementation used python lists to collect the outputs. This uses `tf.range` as the loop iterator, allowing `tf.autograph` to convert the loop. The biggest change in this implementation is the use of `tf.TensorArray` instead of python `list` to accumulate tensors. `tf.TensorArray` is required to collect a variable number of tensors in graph mode. With eager execution this implementation performs on par with the original:
###Code
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
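###Markdown
As a standalone illustration of why `tf.TensorArray` is needed here, consider this minimal sketch (separate from the translator; the function name is made up for the example). A Python `list` cannot accumulate a variable number of tensors inside a `tf.function` loop, but a `tf.TensorArray` can:
###Code
@tf.function
def collect_squares(n):
  # Accumulate a variable number of tensors in graph mode.
  ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
  for i in tf.range(n):
    ta = ta.write(i, i * i)
  return ta.stack()
print(collect_squares(tf.constant(5)))  # [ 0  1  4  9 16]
###Output
_____no_output_____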
###Markdown
But when you wrap it in a `tf.function` you'll notice two differences.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
First: Graph creation is much faster (~10x), since it doesn't create `max_length` copies of the model.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
###Output
_____no_output_____
###Markdown
Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Visualize the process The attention weights returned by the `translate` method show where the model was "looking" when it generated each output token. So the sum of the attention over the input should return all ones:
###Code
a = result['attention'][0]
print(np.sum(a, axis=-1))
###Output
_____no_output_____
###Markdown
Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
###Code
_ = plt.bar(range(len(a[0, :])), a[0, :])
###Output
_____no_output_____
###Markdown
Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
###Code
plt.imshow(np.array(a), vmin=0.0)
###Output
_____no_output_____
###Markdown
Here is some code to make a better attention plot:
###Code
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
Translate a few more sentences and plot them:
###Code
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.'
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:1. The model was trained with teacher forcing, feeding the correct token at each step regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions. 2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. [Transformers](transformer.ipynb) solve this by using self-attention in the encoder and decoder.
###Code
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
###Output
_____no_output_____
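###Markdown
To illustrate the first point above, here is a minimal sketch (an assumption for illustration, not part of this tutorial; the name `_loop_step_with_sampling` and the `sampling_prob` value are made up) of how `_loop_step` could occasionally feed the decoder its own prediction instead of the ground-truth token (scheduled sampling). The surrounding `_train_step` would also need to use the returned `next_input` as the decoder input at the following step:
###Code
def _loop_step_with_sampling(self, new_tokens, input_mask, enc_output, dec_state,
                             sampling_prob=0.25):
  input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
  # Run the decoder one step, exactly as in `_loop_step`.
  decoder_input = DecoderInput(new_tokens=input_token,
                               enc_output=enc_output,
                               mask=input_mask)
  dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
  step_loss = self.loss(target_token, dec_result.logits)
  # Greedy prediction for this step, shape (batch, 1).
  predicted = tf.argmax(dec_result.logits, axis=-1)
  # With probability `sampling_prob`, use the prediction as the next input token.
  use_prediction = tf.random.uniform(tf.shape(target_token)) < sampling_prob
  next_input = tf.where(use_prediction, predicted, target_token)
  return step_loss, dec_state, next_input
###Output
_____no_output_____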
###Markdown
Export Once you have a model you're satisfied with, you might want to export it as a `tf.saved_model` for use outside of the Python program that created it. Since the model is a subclass of `tf.Module` (through `keras.Model`), and all the functionality for export is compiled in a `tf.function`, the model should export cleanly with `tf.saved_model.save`: Now that the function has been traced it can be exported using `saved_model.save`:
###Code
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025v5). This is an advanced example that assumes some knowledge of:* Sequence to sequence models* TensorFlow fundamentals below the keras layer: * Working with tensors directly * Writing custom `keras.Model`s and `keras.layers`While this architecture is somewhat outdated it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to [Transformers](transformers.ipynb)).After training the model in this notebook, you will be able to input a Spanish sentence, such as "*¿todavia estan en casa?*", and return the English translation: "*are you still at home?*"The resulting model is exportable as a `tf.saved_model`, so it can be used in other TensorFlow environments.The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence has the model's attention while translating:Note: This example takes approximately 10 minutes to run on a single P100 GPU. Setup
###Code
!pip install tensorflow_text
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
###Output
_____no_output_____
###Markdown
This tutorial builds a few layers from scratch, use this variable if you want to switch between the custom and builtin implementations.
###Code
use_builtins = True
###Output
_____no_output_____
###Markdown
This tutorial uses a lot of low level API's where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
###Code
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
###Output
_____no_output_____
###Markdown
The data We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:```May I borrow this book? ¿Puedo tomar prestado este libro?```They have a variety of languages available, but we'll use the English-Spanish dataset. Download and prepare the datasetFor convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:1. Add a *start* and *end* token to each sentence.2. Clean the sentences by removing special characters.3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).4. Pad each sentence to a maximum length.
###Code
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset From these arrays of strings you can create a `tf.data.Dataset` of strings that shuffles and batches them efficiently:
###Code
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
###Output
_____no_output_____
###Markdown
Text preprocessing One of the goals of this tutorial is to build a model that can be exported as a `tf.saved_model`. To make that exported model useful it should take `tf.string` inputs, and retrun `tf.string` outputs: All the text processing happens inside the model. Standardization The model is dealing with multilingual text with a limited vocabulary. So it will be important to standardize the input text.The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents.The `tensroflow_text` package contains a unicode normalize operation:
###Code
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
###Output
_____no_output_____
###Markdown
Unicode normalization will be the first step in the text standardization function:
###Code
def tf_lower_and_split_punct(text):
# Split accecented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
###Output
_____no_output_____
###Markdown
Text Vectorization This standardization function will be wrapped up in a `preprocessing.TextVectorization` layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
###Code
max_vocab_size = 5000
input_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
###Output
_____no_output_____
###Markdown
The `TextVectorization` layer and many other `experimental.preprocessing` layers have an `adapt` method. This method reads one epoch of the training data, and works a lot like `Model.fix`. This `adapt` method initializes the layer based on the data. Here it determines the vocabulary:
###Code
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
That's the Spanish `TextVectorization` layer, now build and `.adapt()` the English one:
###Code
output_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
Now these layers can convert a batch of strings into a batch of token IDs:
###Code
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
###Output
_____no_output_____
###Markdown
The `get_vocabulary` method can be used to convert token IDs back to text:
###Code
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
###Output
_____no_output_____
###Markdown
The returned token IDs are zero-padded. This can easily be turned into a mask:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
The encoder/decoder modelThe following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from [Luong's paper](https://arxiv.org/abs/1508.04025v5). Before getting into it define a few constants for the model:
###Code
embedding_dim = 256
units = 1024
###Output
_____no_output_____
###Markdown
The encoderStart by building the encoder, the blue part of the diagram above.The encoder:1. Takes a list of token IDs (from `input_text_processor`).3. Looks up an embedding vector for each token (Using a `layers.Embedding`).4. Processes the embeddings into a new sequence (Using a `layers.GRU`).5. Returns: * The processed sequence. This will be passed to the attention head. * The internal state. This will be used to initialize the decoder
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
###Output
_____no_output_____
###Markdown
Here is how it fits together so far:
###Code
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
###Output
_____no_output_____
###Markdown
The encoder returns its internal state so that its state can be used to initialize the decoder.It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The attention headThe decoder uses attention to selectively focus on parts of the input sequence.The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a `layers.GlobalAveragePoling1D` but the attention layer performs a _weighted_ average.Let's look at how this works: Where:* $s$ is the encoder index.* $t$ is the decoder index.* $\alpha_{ts}$ is the attention weights.* $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).* $h_t$ is the the decoder state attending to the sequence (the attention "query" in transformer terminology).* $c_t$ is the resulting context vector.* $a_t$ is the final output combining the "context" and "query".The equations:1. Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.2. Calculates the context vector as the weighted sum of the encoder outputs. Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches:This tutorial uses [Bahdanau's additive attention](https://arxiv.org/pdf/1409.0473.pdf). TensorFlow includes implementations of both as `layers.Attention` and`layers.AdditiveAttention`. The class below handles the weight matrices in a pair of `layers.Dense` layers, and calls the builtin implementation.
###Code
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
###Output
_____no_output_____
###Markdown
Test the Attention layerCreate a `BahdanauAttention` layer:
###Code
attention_layer = BahdanauAttention(units)
###Output
_____no_output_____
###Markdown
This layer takes 3 inputs:* The `query`: This will be generated by the decoder, later.* The `value`: This Will be the output of the encoder.* The `mask`: To exclude the padding, `example_tokens != 0`
###Code
(example_tokens != 0).shape
###Output
_____no_output_____
###Markdown
The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequence of value vectors. The result is:1. A batch of sequences of result vectors the size of the queries.2. A batch attention maps, with size `(query_length, value_length)`.
###Code
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
###Output
_____no_output_____
###Markdown
The attention weights should sum to `1.0` for each sequence.Here are the attention weights across the sequences at `t=0`:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
Because of the small-random initialization the attention weights are all close to `1/(sequence_length)`. If you zoom in on the weights for a single sequence, you can see that there is some _small_ variation that the model can learn to expand, and exploit.
###Code
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.suptitle('Attention weights for one sequence')
plt.figure(figsize=(12, 6))
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
###Output
_____no_output_____
###Markdown
The decoder
The decoder's job is to generate predictions for the next output token.1. The decoder receives the complete encoder output.2. It uses an RNN to keep track of what it has generated so far.3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".5. It generates logit predictions for the next token based on the "attention vector". Here is the `Decoder` class and its initializer. The initializer creates all the necessary layers.
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
# For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
###Output
_____no_output_____
###Markdown
The `call` method for this layer takes and returns multiple tensors. Organize those into simple container classes:
###Code
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
###Output
_____no_output_____
###Markdown
Here is the implementation of the `call` method:
###Code
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
###Output
_____no_output_____
###Markdown
The **encoder** processes its full input sequence with a single call to its RNN. This implementation of the **decoder** _can_ do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:* Flexibility: Writing the loop gives you direct control over the training procedure.* Clarity: It's possible to do masking tricks and use `layers.RNN`, or `tfa.seq2seq` APIs to pack this all into a single call. But writing it out as a loop may be clearer. * Loop-free training is demonstrated in the [Text generation](text_generation.ipynb) tutorial. Now try using this decoder.
###Code
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
###Output
_____no_output_____
###Markdown
The decoder takes 4 inputs.* `new_tokens` - The last token generated. Initialize the decoder with the `"[START]"` token.* `enc_output` - Generated by the `Encoder`.* `mask` - A boolean tensor indicating where `tokens != 0`* `state` - The previous `state` output from the decoder (the internal state of the decoder's RNN). Pass `None` to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
###Code
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor._index_lookup_layer('[START]').numpy()
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
###Output
_____no_output_____
###Markdown
Sample a token according to the logits:
###Code
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
###Output
_____no_output_____
###Markdown
Decode the token as the first word of the output:
###Code
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Now use the decoder to generate a second set of logits.- Pass the same `enc_output` and `mask`, these haven't changed.- Pass the sampled token as `new_tokens`.- Pass the `decoder_state` the decoder returned last time, so the RNN continues with a memory of where it left off last time.
###Code
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Training
Now that you have all the model components, it's time to start training the model. You'll need:- A loss function and optimizer to perform the optimization.- A training step function defining how to update the model for each input/target batch.- A training loop to drive the training and save checkpoints.
Define the loss function
###Code
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
###Output
_____no_output_____
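###Markdown
A minimal sanity check of `MaskedLoss` on toy data (this cell is illustrative and not part of the original notebook): the padded position contributes nothing to the total, because the loss is multiplied by the padding mask before being summed.
###Code
# Hypothetical toy batch: one sequence of three steps, the last one padding (ID 0).
toy_y_true = tf.constant([[1, 2, 0]])
toy_y_pred = tf.random.normal([1, 3, output_text_processor.vocabulary_size()])
MaskedLoss()(toy_y_true, toy_y_pred)  # total loss over the 2 non-padding tokens
###Output
_____no_output_____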
###Markdown
Implement the training step
Start with a model class; the training process will be implemented as the `train_step` method on this model. See [Customizing fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details. Here the `train_step` method is a wrapper around the `_train_step` implementation, which will come later. This wrapper includes a switch to turn `tf.function` compilation on and off, to make debugging easier.
###Code
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
###Output
_____no_output_____
###Markdown
Overall the implementation for the `Model.train_step` method is as follows:1. Receive a batch of `input_text, target_text` from the `tf.data.Dataset`.2. Convert those raw text inputs to token-embeddings and masks. 3. Run the encoder on the `input_tokens` to get the `encoder_output` and `encoder_state`.4. Initialize the decoder state and loss. 5. Loop over the `target_tokens`: 1. Run the decoder one step at a time. 2. Calculate the loss for each step. 3. Accumulate the average loss.6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's `trainable_variables`. The `_preprocess` method, added below, implements steps 1 and 2:
###Code
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
###Output
_____no_output_____
###Markdown
The `_train_step` method, added below, handles the remaining steps except for actually running the decoder:
###Code
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
# 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
###Output
_____no_output_____
###Markdown
The `_loop_step` method, added below, executes the decoder and calculates the incremental loss and new decoder state (`dec_state`).
###Code
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
###Output
_____no_output_____
###Markdown
Test the training step
Build a `TrainTranslator`, and configure it for training using the `Model.compile` method:
###Code
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Test out the `train_step`. For a text model like this, the loss should start near the log of the output vocabulary size (the cross-entropy of a uniform prediction over the vocabulary):
###Code
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
While it's easier to debug without `tf.function`, compiling does give a performance boost. So now that the `_train_step` method is working, try the `tf.function`-wrapped `_tf_train_step` to maximize performance while training:
###Code
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
###Output
_____no_output_____
###Markdown
The first call will be slow, because it traces the function.
###Code
translator.train_step([example_input_batch, example_target_batch])
###Output
_____no_output_____
###Markdown
But after that it's usually 2-3x faster than the eager `train_step` method:
###Code
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
A good test of a new model is to see that it can overfit a single batch of input. Try it, the loss should quickly go to zero:
###Code
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
###Code
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Train the model
While there's nothing wrong with writing your own custom training loop, implementing the `Model.train_step` method, as in the previous section, allows you to run `Model.fit` and avoid rewriting all that boilerplate code. This tutorial only trains for a couple of epochs, so use a `callbacks.Callback` to collect the history of batch losses for plotting:
###Code
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
###Output
_____no_output_____
###Markdown
The visible jumps in the plot are at the epoch boundaries.
Translate
Now that the model is trained, implement a function to execute the full `text => text` translation. For this the model needs to invert the `text => token IDs` mapping provided by the `output_text_processor`. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow. Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
###Code
class Translator(tf.Module):
def __init__(self,
encoder, decoder,
input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary())
token_mask_ids = index_from_string(['',
'[UNK]',
'[START]']).numpy()
token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)  # `np.bool` is deprecated; the builtin `bool` behaves the same here
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string('[START]')
self.end_token = index_from_string('[END]')
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
###Output
_____no_output_____
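###Markdown
As a quick check (illustrative, not in the original notebook), the vocabulary entries that `token_mask` blocks should be exactly padding, `[UNK]`, and `[START]`:
###Code
# Show which output-vocabulary entries are masked out during sampling.
np.array(output_text_processor.get_vocabulary())[translator.token_mask]
###Output
_____no_output_____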
###Markdown
Convert token IDs to text
The first method to implement is `tokens_to_text`, which converts from token IDs to human-readable text.
###Code
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch'))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
###Output
_____no_output_____
###Markdown
Input some random token IDs and see what it generates:
###Code
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
###Output
_____no_output_____
###Markdown
Sample from the decoder's predictions
This function takes the decoder's logit outputs and samples token IDs from that distribution:
###Code
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
###Output
_____no_output_____
###Markdown
Test run this function on some random inputs:
###Code
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
###Output
_____no_output_____
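###Markdown
A small additional check (not in the original notebook): with `temperature=0.0` the `sample` method takes the greedy `argmax` path instead of drawing from the distribution, so it is deterministic for fixed logits.
###Code
# Greedy sampling: argmax over the (masked) logits.
greedy_tokens = translator.sample(example_logits, temperature=0.0)
greedy_tokens
###Output
_____no_output_____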
###Markdown
Implement the translation loop
Here is a complete implementation of the text-to-text translation loop. This implementation collects the results into Python lists, before using `tf.concat` to join them into tensors. It statically unrolls the graph out to `max_length` iterations, which is okay with eager execution in Python.
###Code
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
# Convert the list of generated token IDs to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
###Output
_____no_output_____
###Markdown
Run it on a simple input:
###Code
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
'Esta es mi vida.', # "This is my life."
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
If you want to export this model you'll need to wrap this method in a `tf.function`. This basic implementation has a few issues if you try to do that:1. The resulting graphs are very large and take a few seconds to build, save or load.2. You can't break from a statically unrolled loop, so it will always run `max_length` iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
###Code
f = tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
Run the `tf.function` once to compile it:
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask = (input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token IDs to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
###Output
_____no_output_____
###Markdown
The initial implementation used Python lists to collect the outputs. This version uses `tf.range` as the loop iterator, which allows `tf.autograph` to convert the loop. The biggest change is the use of `tf.TensorArray` instead of a Python `list` to accumulate tensors; `tf.TensorArray` is required to collect a variable number of tensors in graph mode. With eager execution this implementation performs on par with the original:
###Code
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
But when you wrap it in a `tf.function` you'll notice two differences.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
First: Graph creation is much faster (~10x), since it doesn't create `max_iterations` copies of the model.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
###Output
_____no_output_____
###Markdown
Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Visualize the process
The attention weights returned by the `translate` method show where the model was "looking" when it generated each output token. So the sum of the attention over the input should return all ones:
###Code
a = result['attention'][0]
print(np.sum(a, axis=-1))
###Output
_____no_output_____
###Markdown
Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
###Code
_ = plt.bar(range(len(a[0, :])), a[0, :])
###Output
_____no_output_____
###Markdown
Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
###Code
plt.imshow(np.array(a), vmin=0.0)
###Output
_____no_output_____
###Markdown
Here is some code to make a better attention plot:
###Code
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
Translate a few more sentences and plot them:
###Code
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.'
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:1. The model was trained with teacher forcing, feeding the correct token at each step regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions.2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. [Transformers](transformer.ipynb) solve this by using self-attention in the encoder and decoder.
###Code
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
###Output
_____no_output_____
###Markdown
Export
Once you have a model you're satisfied with, you might want to export it as a `tf.saved_model` for use outside of the Python program that created it. Since the model is a subclass of `tf.Module` (through `keras.Model`), and all the functionality for export is compiled in a `tf.function`, the model should export cleanly with `tf.saved_model.save`. Now that the function has been traced, it can be exported using `saved_model.save`:
###Code
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural machine translation with attention
This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025v5). This is an advanced example that assumes some knowledge of:* Sequence to sequence models* TensorFlow fundamentals below the keras layer: * Working with tensors directly * Writing custom `keras.Model`s and `keras.layers`
While this architecture is somewhat outdated, it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to [Transformers](transformer.ipynb)). After training the model in this notebook, you will be able to input a Spanish sentence, such as "*¿todavia estan en casa?*", and get back the English translation: "*are you still at home?*" The resulting model is exportable as a `tf.saved_model`, so it can be used in other TensorFlow environments. The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting: it shows which parts of the input sentence have the model's attention while translating. Note: This example takes approximately 10 minutes to run on a single P100 GPU.
Setup
###Code
!pip install tensorflow_text
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
###Output
_____no_output_____
###Markdown
This tutorial builds a few layers from scratch; use this variable if you want to switch between the custom and builtin implementations.
###Code
use_builtins = True
###Output
_____no_output_____
###Markdown
This tutorial uses a lot of low-level APIs where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
###Code
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
###Output
_____no_output_____
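###Markdown
A minimal demo of the `ShapeChecker` above (illustrative, not part of the original notebook): the first call caches each axis length by name, and a later call with a mismatched length raises a `ValueError`.
###Code
demo_checker = ShapeChecker()
demo_checker(tf.zeros([4, 7]), ('batch', 's'))     # caches batch=4, s=7
try:
  demo_checker(tf.zeros([4, 9]), ('batch', 's'))   # 's' changed from 7 to 9
except ValueError as e:
  print(e)
###Output
_____no_output_____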
###Markdown
The data
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:```May I borrow this book? ¿Puedo tomar prestado este libro?```They have a variety of languages available, but we'll use the English-Spanish dataset.
Download and prepare the dataset
For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:1. Add a *start* and *end* token to each sentence.2. Clean the sentences by removing special characters.3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).4. Pad each sentence to a maximum length.
###Code
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset From these arrays of strings you can create a `tf.data.Dataset` of strings that shuffles and batches them efficiently:
###Code
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
###Output
_____no_output_____
###Markdown
Text preprocessing
One of the goals of this tutorial is to build a model that can be exported as a `tf.saved_model`. To make that exported model useful it should take `tf.string` inputs and return `tf.string` outputs: all the text processing happens inside the model.
Standardization
The model is dealing with multilingual text with a limited vocabulary, so it will be important to standardize the input text. The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents. The `tensorflow_text` package contains a unicode normalize operation:
###Code
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
###Output
_____no_output_____
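###Markdown
As a small illustration (added here, not in the original notebook), NFKD normalization splits each accented character into a base character plus a combining mark, so the normalized string is longer in bytes even though it reads the same:
###Code
# Compare byte lengths before and after normalization.
print(len(example_text.numpy()), len(tf_text.normalize_utf8(example_text, 'NFKD').numpy()))
###Output
_____no_output_____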
###Markdown
Unicode normalization will be the first step in the text standardization function:
###Code
def tf_lower_and_split_punct(text):
# Split accented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
###Output
_____no_output_____
###Markdown
Text Vectorization This standardization function will be wrapped up in a `preprocessing.TextVectorization` layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
###Code
max_vocab_size = 5000
input_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
###Output
_____no_output_____
###Markdown
The `TextVectorization` layer and many other `experimental.preprocessing` layers have an `adapt` method. This method reads one epoch of the training data, and works a lot like `Model.fit`. The `adapt` method initializes the layer based on the data. Here it determines the vocabulary:
###Code
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
That's the Spanish `TextVectorization` layer, now build and `.adapt()` the English one:
###Code
output_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
###Output
_____no_output_____
###Markdown
Now these layers can convert a batch of strings into a batch of token IDs:
###Code
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
###Output
_____no_output_____
###Markdown
The `get_vocabulary` method can be used to convert token IDs back to text:
###Code
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
###Output
_____no_output_____
###Markdown
The returned token IDs are zero-padded. This can easily be turned into a mask:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
The encoder/decoder model
The following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from [Luong's paper](https://arxiv.org/abs/1508.04025v5). Before getting into it, define a few constants for the model:
###Code
embedding_dim = 256
units = 1024
###Output
_____no_output_____
###Markdown
The encoder
Start by building the encoder, the blue part of the diagram above. The encoder:1. Takes a list of token IDs (from `input_text_processor`).2. Looks up an embedding vector for each token (using a `layers.Embedding`).3. Processes the embeddings into a new sequence (using a `layers.GRU`).4. Returns: * The processed sequence. This will be passed to the attention head. * The internal state. This will be used to initialize the decoder.
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
###Output
_____no_output_____
###Markdown
Here is how it fits together so far:
###Code
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
###Output
_____no_output_____
###Markdown
The encoder returns its internal state so that its state can be used to initialize the decoder.It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The attention headThe decoder uses attention to selectively focus on parts of the input sequence.The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a `layers.GlobalAveragePoling1D` but the attention layer performs a _weighted_ average.Let's look at how this works: Where:* $s$ is the encoder index.* $t$ is the decoder index.* $\alpha_{ts}$ is the attention weights.* $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).* $h_t$ is the the decoder state attending to the sequence (the attention "query" in transformer terminology).* $c_t$ is the resulting context vector.* $a_t$ is the final output combining the "context" and "query".The equations:1. Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.2. Calculates the context vector as the weighted sum of the encoder outputs. Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches:This tutorial uses [Bahdanau's additive attention](https://arxiv.org/pdf/1409.0473.pdf). TensorFlow includes implementations of both as `layers.Attention` and`layers.AdditiveAttention`. The class below handles the weight matrices in a pair of `layers.Dense` layers, and calls the builtin implementation.
###Code
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
###Output
_____no_output_____
###Markdown
Test the Attention layer
Create a `BahdanauAttention` layer:
###Code
attention_layer = BahdanauAttention(units)
###Output
_____no_output_____
###Markdown
This layer takes 3 inputs:* The `query`: This will be generated by the decoder, later.* The `value`: This will be the output of the encoder.* The `mask`: To exclude the padding, `example_tokens != 0`
###Code
(example_tokens != 0).shape
###Output
_____no_output_____
###Markdown
The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequence of value vectors. The result is:1. A batch of sequences of result vectors the size of the queries.2. A batch attention maps, with size `(query_length, value_length)`.
###Code
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
###Output
_____no_output_____
###Markdown
The attention weights should sum to `1.0` for each sequence.Here are the attention weights across the sequences at `t=0`:
###Code
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
###Output
_____no_output_____
###Markdown
Because of the small random initialization, the attention weights are all close to `1/(sequence_length)`. If you zoom in on the weights for a single sequence, you can see that there is some _small_ variation that the model can learn to expand and exploit.
###Code
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.suptitle('Attention weights for one sequence')
plt.figure(figsize=(12, 6))
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
###Output
_____no_output_____
###Markdown
The decoder
The decoder's job is to generate predictions for the next output token.1. The decoder receives the complete encoder output.2. It uses an RNN to keep track of what it has generated so far.3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".5. It generates logit predictions for the next token based on the "attention vector". Here is the `Decoder` class and its initializer. The initializer creates all the necessary layers.
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
# For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
###Output
_____no_output_____
###Markdown
The `call` method for this layer takes and returns multiple tensors. Organize those into simple container classes:
###Code
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
###Output
_____no_output_____
###Markdown
Here is the implementation of the `call` method:
###Code
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
###Output
_____no_output_____
###Markdown
The **encoder** processes its full input sequence with a single call to its RNN. This implementation of the **decoder** _can_ do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:* Flexibility: Writing the loop gives you direct control over the training procedure.* Clarity: It's possible to do masking tricks and use `layers.RNN`, or `tfa.seq2seq` APIs to pack this all into a single call. But writing it out as a loop may be clearer. * Loop-free training is demonstrated in the [Text generation](text_generation.ipynb) tutorial. Now try using this decoder.
###Code
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
###Output
_____no_output_____
###Markdown
The decoder takes 4 inputs.* `new_tokens` - The last token generated. Initialize the decoder with the `"[START]"` token.* `enc_output` - Generated by the `Encoder`.* `mask` - A boolean tensor indicating where `tokens != 0`* `state` - The previous `state` output from the decoder (the internal state of the decoder's RNN). Pass `None` to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
###Code
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor.get_vocabulary().index('[START]')
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
###Output
_____no_output_____
###Markdown
Sample a token according to the logits:
###Code
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
###Output
_____no_output_____
###Markdown
Decode the token as the first word of the output:
###Code
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Now use the decoder to generate a second set of logits.- Pass the same `enc_output` and `mask`, these haven't changed.- Pass the sampled token as `new_tokens`.- Pass the `decoder_state` the decoder returned last time, so the RNN continues with a memory of where it left off last time.
###Code
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
###Output
_____no_output_____
###Markdown
Training
Now that you have all the model components, it's time to start training the model. You'll need:- A loss function and optimizer to perform the optimization.- A training step function defining how to update the model for each input/target batch.- A training loop to drive the training and save checkpoints.
Define the loss function
###Code
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
###Output
_____no_output_____
###Markdown
Implement the training step
Start with a model class; the training process will be implemented as the `train_step` method on this model. See [Customizing fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details. Here the `train_step` method is a wrapper around the `_train_step` implementation, which will come later. This wrapper includes a switch to turn `tf.function` compilation on and off, to make debugging easier.
###Code
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
###Output
_____no_output_____
###Markdown
Overall the implementation for the `Model.train_step` method is as follows:1. Receive a batch of `input_text, target_text` from the `tf.data.Dataset`.2. Convert those raw text inputs to token-embeddings and masks. 3. Run the encoder on the `input_tokens` to get the `encoder_output` and `encoder_state`.4. Initialize the decoder state and loss. 5. Loop over the `target_tokens`: 1. Run the decoder one step at a time. 2. Calculate the loss for each step. 3. Accumulate the average loss.6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's `trainable_variables`. The `_preprocess` method, added below, implements steps 1 and 2:
###Code
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
###Output
_____no_output_____
###Markdown
The `_train_step` method, added below, handles the remaining steps except for actually running the decoder:
###Code
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
      # 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
###Output
_____no_output_____
###Markdown
The `_loop_step` method, added below, executes the decoder and calculates the incremental loss and new decoder state (`dec_state`).
###Code
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
###Output
_____no_output_____
###Markdown
Test the training stepBuild a `TrainTranslator`, and configure it for training using the `Model.compile` method:
###Code
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Test out the `train_step`. For a text model like this the loss should start near:
###Code
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
While it's easier to debug without a `tf.function` it does give a performance boost. So now that the `_train_step` method is working, try the `tf.function`-wrapped `_tf_train_step`, to maximize performance while training:
###Code
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
###Output
_____no_output_____
###Markdown
The first call will be slow, because it traces the function.
###Code
translator.train_step([example_input_batch, example_target_batch])
###Output
_____no_output_____
###Markdown
But after that it's usually 2-3x faster than the eager `train_step` method:
###Code
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
###Output
_____no_output_____
###Markdown
A good test of a new model is to see that it can overfit a single batch of input. Try it, the loss should quickly go to zero:
###Code
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
###Code
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
###Output
_____no_output_____
###Markdown
Train the modelWhile there's nothing wrong with writing your own custom training loop, implementing the `Model.train_step` method, as in the previous section, allows you to run `Model.fit` and avoid rewriting all that boiler-plate code. This tutorial only trains for a couple of epochs, so use a `callbacks.Callback` to collect the history of batch losses, for plotting:
###Code
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
###Output
_____no_output_____
###Markdown
The visible jumps in the plot are at the epoch boundaries. TranslateNow that the model is trained, implement a function to execute the full `text => text` translation.For this the model needs to invert the `text => token IDs` mapping provided by the `output_text_processor`. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow.Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
###Code
class Translator(tf.Module):
def __init__(self, encoder, decoder, input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
mask_token='',
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(), mask_token='')
token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()
    token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string(tf.constant('[START]'))
self.end_token = index_from_string(tf.constant('[END]'))
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
###Output
_____no_output_____
###Markdown
Convert token IDs to text The first method to implement is `tokens_to_text` which converts from token IDs to human readable text.
###Code
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch'))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
###Output
_____no_output_____
###Markdown
Input some random token IDs and see what it generates:
###Code
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
###Output
_____no_output_____
###Markdown
Sample from the decoder's predictions This function takes the decoder's logit outputs and samples token IDs from that distribution:
###Code
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
###Output
_____no_output_____
###Markdown
Test run this function on some random inputs:
###Code
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
###Output
_____no_output_____
###Markdown
Implement the translation loopHere is a complete implementation of the text to text translation loop.This implementation collects the results into python lists, before using `tf.concat` to join them into tensors.This implementation statically unrolls the graph out to `max_length` iterations.This is okay with eager execution in python.
###Code
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
  # Convert the list of generated token ids to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
###Output
_____no_output_____
###Markdown
Run it on a simple input:
###Code
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
'Esta es mi vida.', # "This is my life.""
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
If you want to export this model you'll need to wrap this method in a `tf.function`. This basic implementation has a few issues if you try to do that:1. The resulting graphs are very large and take a few seconds to build, save or load.2. You can't break from a statically unrolled loop, so it will always run `max_length` iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
Run the `tf.function` once to compile it:
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text,
*,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(
new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token ids to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
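# Minimal standalone illustration (an aside, not part of the tutorial) of the
# tf.TensorArray pattern used above: write one tensor per step, then stack()
# joins them along a new leading axis. This also works inside a tf.function.
demo_ta = tf.TensorArray(tf.int32, size=1, dynamic_size=True)
for demo_t in tf.range(3):
  demo_ta = demo_ta.write(demo_t, tf.fill([2, 1], demo_t))  # one [batch, 1] tensor per step
demo_stacked_shape = demo_ta.stack().shape                  # TensorShape([3, 2, 1]) = (t, batch, t1)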
###Output
_____no_output_____
###Markdown
The initial implementation used python lists to collect the outputs. This uses `tf.range` as the loop iterator, allowing `tf.autograph` to convert the loop. The biggest change in this implementation is the use of `tf.TensorArray` instead of python `list` to accumulate tensors. `tf.TensorArray` is required to collect a variable number of tensors in graph mode. With eager execution this implementation performs on par with the original:
###Code
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
But when you wrap it in a `tf.function` you'll notice two differences.
###Code
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
###Output
_____no_output_____
###Markdown
First: Graph creation is much faster (~10x), since it doesn't create `max_iterations` copies of the model.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
###Output
_____no_output_____
###Markdown
Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
###Code
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
###Output
_____no_output_____
###Markdown
Visualize the process The attention weights returned by the `translate` method show where the model was "looking" when it generated each output token.So the sum of the attention over the input should return all ones:
###Code
a = result['attention'][0]
print(np.sum(a, axis=-1))
###Output
_____no_output_____
###Markdown
Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
###Code
_ = plt.bar(range(len(a[0, :])), a[0, :])
###Output
_____no_output_____
###Markdown
Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
###Code
plt.imshow(np.array(a), vmin=0.0)
###Output
_____no_output_____
###Markdown
Here is some code to make a better attention plot:
###Code
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
Translate a few more sentences and plot them:
###Code
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.'
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
###Output
_____no_output_____
###Markdown
The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:1. The model was trained with teacher-forcing feeding the correct token at each step, regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions.2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. [Transformers](transformer.ipynb) solve this by using self-attention in the encoder and decoder.
###Code
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
###Output
_____no_output_____
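###Code
# Hedged sketch (an aside; the names below are placeholders, not part of this
# tutorial's code): the teacher-forcing mismatch described above is often
# reduced with "scheduled sampling" -- with some probability, feed the decoder
# its own sampled token instead of the ground-truth token inside `_loop_step`:
#
#   use_own = tf.random.uniform([]) < sampling_probability
#   input_token = tf.where(use_own, previously_sampled_token, target_token)
#
# where `sampling_probability`, `previously_sampled_token`, and `target_token`
# would need to be defined in that context.
###Output
_____no_output_____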
###Markdown
Export Once you have a model you're satisfied with you might want to export it as a `tf.saved_model` for use outside of this python program that created it.Since the model is a subclass of `tf.Module` (through `keras.Model`), and all the functionality for export is compiled in a `tf.function` the model should export cleanly with `tf.saved_model.save`: Now that the function has been traced it can be exported using `saved_model.save`:
###Code
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
###Output
_____no_output_____ |
Design/0107/146. LRU Cache.ipynb | ###Markdown
###Code
# 1. Using a doubly linked list makes both put and get O(1).
# On every get, move the accessed node to the head of the doubly linked list;
# whenever the size exceeds the capacity, the node removed is always the tail of the list.
class DLinkedNode:
def __init__(self):
self.key = 0
self.val = 0
self.next = None
self.prev = None
class LRUCache:
def __init__(self, capacity: int):
self.cache = {}
self.capacity = capacity
self.size = 0
        # Build the doubly linked list
self.head, self.tail = DLinkedNode(), DLinkedNode()
self.head.next = self.tail
self.tail.prev = self.head
def get(self, key: int) -> int:
node = self.cache.get(key, None)
if not node:
return -1
        self._move_to_head(node) # Move the accessed node to the head of the doubly linked list
return node.val
def put(self, key: int, value: int) -> None:
node = self.cache.get(key)
        if not node: # The node is not in the cache yet
newNode = DLinkedNode()
newNode.key = key
newNode.val = value
            self.cache[key] = newNode # The cache stores {key: node}
self._add_node(newNode)
self.size += 1
if self.size > self.capacity:
                tail = self._pop_tail() # Over capacity, so remove the tail node
del self.cache[tail.key]
self.size -= 1
else:
node.val = value
            self._move_to_head(node) # Update the existing node and move it to the head
    def _add_node(self, node): # Insert node right after the head sentinel
node.prev = self.head
node.next = self.head.next
self.head.next.prev = node
self.head.next = node
    def _remove_node(self, node): # Remove node from the linked list
prev = node.prev
nex = node.next
prev.next = nex
nex.prev = prev
    def _move_to_head(self, node): # Move node to the head
        self._remove_node(node) # First remove the node, then insert it right after the head
self._add_node(node)
    def _pop_tail(self): # Remove the node at the tail
        tail = self.tail.prev # Get the node just before the tail sentinel
        self._remove_node(tail) # Remove that node
return tail
# Your LRUCache object will be instantiated and called as such:
# obj = LRUCache(capacity)
# param_1 = obj.get(key)
# obj.put(key,value)
# 1. Using a doubly linked list makes both put and get O(1).
# On every get, move the accessed node to the head of the doubly linked list;
# whenever the size exceeds the capacity, the node removed is always the tail of the list.
class DLinkedNode:
def __init__(self):
        self.key = 0 # If the key is not stored here, there is no way to know which entry the removed tail node belongs to; storing the value is unnecessary
self.next = None
self.prev = None
class LRUCache:
def __init__(self, capacity: int):
self.cache = {}
self.capacity = capacity
self.size = 0
        # Build the doubly linked list
self.head, self.tail = DLinkedNode(), DLinkedNode()
self.head.next = self.tail
self.tail.prev = self.head
def get(self, key: int) -> int:
if key not in self.cache:
return -1
val, node = self.cache[key]
        self._move_to_head(node) # Move the accessed node to the head of the doubly linked list
return val
def put(self, key: int, value: int) -> None:
        if key not in self.cache: # The key is not in the cache yet
newNode = DLinkedNode()
newNode.key = key
            self.cache[key] = [value, newNode] # The cache stores {key: [value, node]}
self._add_node(newNode)
self.size += 1
if self.size > self.capacity:
                tail = self._pop_tail() # Over capacity, so remove the tail node
del self.cache[tail.key]
self.size -= 1
else:
_, node = self.cache[key]
self.cache[key][0] = value
            self._move_to_head(node) # Update the existing node and move it to the head
    def _add_node(self, node): # Insert node right after the head sentinel
node.prev = self.head
node.next = self.head.next
self.head.next.prev = node
self.head.next = node
    def _remove_node(self, node): # Remove node from the linked list
prev = node.prev
nex = node.next
prev.next = nex
nex.prev = prev
    def _move_to_head(self, node): # Move node to the head
        self._remove_node(node) # First remove the node, then insert it right after the head
self._add_node(node)
    def _pop_tail(self): # Remove the node at the tail
        tail = self.tail.prev # Get the node just before the tail sentinel
        self._remove_node(tail) # Remove that node
return tail
# Your LRUCache object will be instantiated and called as such:
# obj = LRUCache(capacity)
# param_1 = obj.get(key)
# obj.put(key,value)
lRUCache = LRUCache(2)
lRUCache.put(1, 1); # cache is {1=1}
lRUCache.put(2, 2); # cache is {1=1, 2=2}
lRUCache.get(1); # returns 1
lRUCache.put(3, 3); # this evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2); # returns -1 (not found)
lRUCache.put(4, 4); # this evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1); # returns -1 (not found)
lRUCache.get(3); # returns 3
lRUCache.get(4); # returns 4
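# Aside (added for illustration, not part of the original solution): the same
# LRU behaviour can be written much more compactly with collections.OrderedDict,
# which keeps insertion order and supports move_to_end / popitem in O(1).
from collections import OrderedDict

class LRUCacheOD:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key) # mark as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False) # evict the least recently used entry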
###Output
_____no_output_____ |
solenoid_rebuild/solenoid_fit_4qfv.ipynb | ###Markdown
Helix Pair
###Code
import copy
import isambard
test_structure = isambard.ampal.convert_pdb_to_ampal('4qfv.pdb')
sol1 = test_structure[0]
sol1.helices.sequences
rep_hels = test_structure.helices[4:6]
rep_hels.relabel_all()
hel_pair = isambard.specifications.HelixPair(aas=(7, 9))
hel_pair.sequences
rep_hels.sequences
hel_pair.relabel_all()
hel_pair.pack_new_sequences([rep_hels.sequences[0], rep_hels.sequences[1]])
hel_pair.sequences
class HelPairOpt(isambard.specifications.HelixPair):
def __init__(self, rad, zshift, phi1, phi2, sp, op):
super().__init__(aas=(7, 9), axis_distances=(-rad, rad), z_shifts=(0, zshift),
phis=(phi1, phi2), splays=(0, sp), off_plane=(0, op))
self.relabel_all()
isambard.external_programs.run_profit(rep_hels.pdb, hel_pair.pdb, path1=False, path2=False)
opt = isambard.optimisation.DE_RMSD(
HelPairOpt, rep_hels.pdb)
opt.parameters([rep_hels.sequences[0], rep_hels.sequences[1]],
[3, 0, 0, 0, 0, 180],
[3, 6, 180, 180, 45, 90],
['var0', 'var1', 'var2', 'var3', 'var4', 'var5'])
opt.run_opt(50, 50, 4)
best_params = opt.parse_individual(opt.halloffame[0])
best = HelPairOpt(*best_params)
best.pack_new_sequences(hel_pair.sequences)
###Output
_____no_output_____
###Markdown
Solenoid
###Code
target = isambard.ampal.Assembly()
test_structure = isambard.ampal.convert_pdb_to_ampal('4qfv.pdb')
cha = test_structure[0]
cha.sequence
target.append(cha.get_slice_from_res_id('41', '47'))
target.append(cha.get_slice_from_res_id('51', '59'))
target.append(cha.get_slice_from_res_id('74', '80'))
target.append(cha.get_slice_from_res_id('84', '92'))
target.append(cha.get_slice_from_res_id('107', '113'))
target.append(cha.get_slice_from_res_id('117', '125'))
target.append(cha.get_slice_from_res_id('140', '146'))
target.append(cha.get_slice_from_res_id('150', '158'))
target.sequences
rep_unit = HelPairOpt(*best_params)
class SolenoidOpt(isambard.specifications.Solenoid):
def __init__(self, in_ru, repeats, rad, rise, rot_ang, xr, yr, zr):
ru = copy.deepcopy(in_ru)
ru.rotate(xr, (1, 0, 0))
ru.rotate(yr, (0, 1, 0))
ru.rotate(zr, (0, 0, 1))
super().__init__(ru, repeats, rad, rise, rot_ang, 'left')
target.relabel_polymers()
opt2 = isambard.optimisation.DE_RMSD(
SolenoidOpt, target.pdb)
opt2.parameters(target.sequences,
[35, 10, 0, 0, 0, 0],
[10, 8.0, 200, 200, 200, 200],
[rep_unit, 4, 'var0', 'var1', 'var2', 'var3', 'var4', 'var5'])
opt2.run_opt(40, 100, 4)
best = SolenoidOpt(*opt2.parse_individual(opt2.halloffame[0]))
best.pack_new_sequences(target.sequences)
###Output
_____no_output_____ |
Week 03 High Dimensionality/High Dimensional Data.ipynb | ###Markdown
High Dimensional DataAll of our work to this point is building dimensions on the basis of linear algebra and functional analysis. This section involves studying some of the specific characteristics of high dimensionality that affect your analysis and data preparation.1. High dimensional data cannot be intuited without modification2. As dimensions increase, the number of choices explodes3. Selecting good data gets more difficult4. In extremely high dimensional cases, only automated methods can be used.
###Code
# LAMBDA SCHOOL
#
# MACHINE LEARNING
#
# MIT LICENSE
import numpy as np
import matplotlib.pyplot as plt
# 1d data
x = 5
y = 6
plt.figure(figsize=(4,1))
plt.axhline(0)
plt.plot(x,0,'ro')
plt.plot(y,0,'bo')
plt.plot((x+y)/2,'o',c="black")
plt.xlim((0,10))
plt.ylim((-1,1))
# 2d data
x = np.array([5,6])
y = np.array([7,8])
plt.plot(x[0],x[1],'ro')
plt.plot(y[0],y[1],'bo')
plt.xlim((0,10))
plt.ylim((0,10))
# 3d data
from mpl_toolkits.mplot3d import Axes3D
import pandas as pd
a = pd.DataFrame([5,6,7]).T
b = pd.DataFrame([8,9,10]).T
data = pd.concat([a,b])
data.columns = ['x','y','z']
print(data)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(data['x'], data['y'], data['z'], c=['r','b'])
ax.set_xlim(0,10)
ax.set_ylim(0,10)
ax.set_zlim(0,10)
import matplotlib.lines as lines
scatter1_proxy = lines.Line2D([0],[0], linestyle="none", c='r', marker = 'o')
scatter2_proxy = lines.Line2D([0],[0], linestyle="none", c='b', marker = 'o')
ax.legend([scatter1_proxy, scatter2_proxy], ['dead', 'alive'], numpoints = 1);
###Output
x y z
0 5 6 7
0 8 9 10
###Markdown
What about 4d?We've played with high dimensional random datasets so far. Random data is random data, by and large, though we can create interesting datasets from it. Real data is better. Back to Titanic
###Code
import seaborn as sns
titanic = sns.load_dataset('titanic')
titanic = titanic.drop(['alive','adult_male','who','class','embark_town'], axis=1)
titanic['embarked'] = titanic['embarked'].fillna(method='ffill')
titanic = titanic.drop(['deck'], axis=1)
titanic['age'] = titanic['age'].fillna(method='ffill')
print('Any more NaN?')
print(titanic.isna().any())
print(titanic.head())
for col in titanic.columns:
plt.figure(figsize=(3,2))
titanic[col].hist()
plt.title(col)
plt.show()
###Output
Any more NaN?
survived False
pclass False
sex False
age False
sibsp False
parch False
fare False
embarked False
alone False
dtype: bool
survived pclass sex age sibsp parch fare embarked alone
0 0 3 male 22.0 1 0 7.2500 S False
1 1 1 female 38.0 1 0 71.2833 C False
2 1 3 female 26.0 0 0 7.9250 S True
3 1 1 female 35.0 1 0 53.1000 S False
4 0 3 male 35.0 0 0 8.0500 S True
###Markdown
Three binomials, two categoricals, and four numerical features.
###Code
from sklearn.preprocessing import LabelEncoder
# Convert binomials and categoricals to encoded labels
for label in ['embarked','sex', 'alone']:
titanic[label] = LabelEncoder().fit_transform(titanic[label])
# 3d data
from mpl_toolkits.mplot3d import Axes3D
import pandas as pd
labels = titanic['survived']
features = titanic.drop(['survived'],axis=1)
# 0, red
# 1, blue
#["red","blue"][0, 1, 1, 1, 0, 0, 0, 0, 1, 1]
#["red", "red", "blue", "blue", "blue"]
# Convert labels to colors
colors = pd.Series(['red','blue'])[labels.values]
print(labels.head(10))
print(colors.head(10))
# Start graphing...
def plot3axes(data,axes):
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(data[axes[0]], data[axes[1]], data[axes[2]], c=colors)
return ax
plot3axes(features,['sex','age','sibsp'])
plot3axes(features,['fare','embarked','alone']);
# separating the data by the category you want is huge
women = titanic[titanic['sex'] == 0]
men = titanic[titanic['sex'] == 1]
print(women.head(10))
###Output
survived pclass sex age sibsp parch fare embarked alone
1 1 1 0 38.0 1 0 71.2833 0 0
2 1 3 0 26.0 0 0 7.9250 2 1
3 1 1 0 35.0 1 0 53.1000 2 0
8 1 3 0 27.0 0 2 11.1333 2 0
9 1 2 0 14.0 1 0 30.0708 0 0
10 1 3 0 4.0 1 1 16.7000 2 0
11 1 1 0 58.0 0 0 26.5500 2 1
14 0 3 0 14.0 0 0 7.8542 2 1
15 1 2 0 55.0 0 0 16.0000 2 1
18 0 3 0 31.0 1 0 18.0000 2 0
###Markdown
Assignment in 8D* Calculate the centroid of the survivors.* Calculate the centroid of the casualties.* Calculate the average distance between each survivor.* Calculate the average distance between each casualty.* Calculate the distance between the two centroids. Along which axis is this distance the greatest? The least? Additional PreprocessingSo that some features do not contribute to the distance between points a disproportionate amount due to their scale, I will scale `fare` to be between 0 and 1. I will do the same with `pclass`, `parch`, and `sibsp`, since they are categorical variables with a meaningful ordering. Embarked does not have a meaningful order, so I will one-hot encode that feature.
###Code
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
titanic[['fare', 'age', 'pclass', 'sibsp', 'parch']] = pd.DataFrame(MinMaxScaler().fit_transform(titanic[['fare', 'age', 'pclass', 'sibsp', 'parch']]),
columns=['fare', 'age', 'pclass', 'sibsp', 'parch'])
embarked_one_hot = OneHotEncoder().fit_transform(titanic[['embarked']]).toarray()
# by inspection, 0 -> Southampton, 1 -> Cherbourg, 2 -> Queenstown
embarked = pd.DataFrame(embarked_one_hot, columns=['Southampton', 'Cherbourg', 'Queenstown'], dtype=np.int64)
titanic_enc = titanic.join([embarked]).drop(['embarked'], axis=1)
titanic_enc.head()
###Output
_____no_output_____
###Markdown
Finding CentroidsThe centroid of a set of 8D vectors is itself an 8D vector where each component corresponds to the mean of the values of that component over the set.In order to solve this problem, I will first divide the data into two subsets, one for survivors and one for casualties. [`np.mean`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.mean.html), with the appropriate `axis` argument, can then handle calculating the centroid. The pandas [`as_matrix()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.as_matrix.html) DataFrame method can be used to convert the data to a numpy representation.
###Code
survivors = titanic_enc[titanic_enc['survived'] == 1]
casualties = titanic_enc[titanic_enc['survived'] == 0]
survivors.head(3)
casualties.head(3)
###Output
_____no_output_____
###Markdown
I will drop the `survived` columns from these DataFrames, since it is a global label, and won't add information when the centroid is calculated from the features.
###Code
survivors = survivors.drop('survived', axis=1)
casualties = casualties.drop('survived', axis=1)
survivor_centroid = np.mean(survivors.as_matrix(), axis=0)
casualty_centroid = np.mean(casualties.as_matrix(), axis=0)
print("Survivors centered around:", survivor_centroid)
print("Casualties centered around:", casualty_centroid)
###Output
Survivors centered around: [0.4751462 0.31871345 0.35125748 0.05921053 0.07748538 0.09446154
0.47660819 0.2748538 0.09064327 0.63450292]
Casualties centered around: [0.76593807 0.85245902 0.37590337 0.06921676 0.05494839 0.04317124
0.68123862 0.13661202 0.0856102 0.77777778]
###Markdown
Average Distance Between Points in the Same Cluster I will simply enumerate all possible pairs of points, calculate the distances between them, and average.
###Code
def distance(v1, v2):
return np.linalg.norm(v1-v2)
def average_point_distance(cluster):
"""Averages over all possible distances between any two points in a set of points
Args:
cluster (np.ndarray): rank 2 array, with rows corresponding to vectors
Returns:
The numeric (float) value of the average distance between points in 'cluster'
"""
distances = [distance(cluster[i], cluster[j]) for i in range(cluster.shape[0]) for j in range(cluster.shape[0]) if j > i]
return np.mean(distances)
survivor_average_distance = average_point_distance(survivors.as_matrix())
casualty_average_distance = average_point_distance(casualties.as_matrix())
print("Survivors an average of {} apart".format(survivor_average_distance))
print("Casualties an average of {} apart".format(casualty_average_distance))
###Output
Survivors an average of 1.4965355772670488 apart
Casualties an average of 1.2271173101548993 apart
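###Code
# Aside (not part of the original assignment): the same average pairwise distance
# can be computed without the explicit double loop via scipy's condensed
# pairwise-distance helper, which enumerates exactly the i < j pairs.
from scipy.spatial.distance import pdist

survivor_average_distance_pdist = np.mean(pdist(survivors.as_matrix()))
casualty_average_distance_pdist = np.mean(pdist(casualties.as_matrix()))
###Output
_____no_output_____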
###Markdown
Distance Between CentroidsSince the centroids are themselves vectors, their distance can be calculated as the magnitude of their vector difference.
###Code
centroid_distance = distance(survivor_centroid, casualty_centroid)
print("Centroids {} apart".format(centroid_distance))
###Output
Centroids 0.6744092334853754 apart
###Markdown
The distance between centroids as not as great as the average distance between both survivors and casualties. This indicates that considering all features of the data jointly does not yield too much predictive power. Clearly, the two clusters are not linearly separable in the 8D space. Considering Individual DimensionsSo far, I have dealt with the global centroids of all 8 features (10, if the one hot encoded feature is treated as 3). We can also consider features individually and perform the same analysis. In one dimension, the term "mean" is generally used instead of centroid, since it is not as meaningful to treat the values as vectors.
###Code
print((survivors.columns.all()==casualties.columns.all()))
# the features are the same for both survivors and casualties
def get_cluster_data(survivors, casualties, subsets, log_every=50):
cluster_data = pd.DataFrame([])
for subset in subsets:
data_s = survivors[subset].as_matrix()
data_c = casualties[subset].as_matrix()
centroid_s = np.mean(data_s, axis=0)
centroid_c = np.mean(data_c, axis=0)
distance_s = average_point_distance(data_s)
distance_c = average_point_distance(data_c)
centroid_distance = distance(centroid_s, centroid_c)
data = pd.DataFrame([[distance_s, distance_c, centroid_distance]], index=[str(subset)], columns=["Average Distance Between Survivors",
"Average Distance Between Casualties",
"Distance Between Centroids"])
cluster_data = cluster_data.append(data)
if cluster_data.shape[0] % log_every == 0:
print("{} subsets analyzed...".format(cluster_data.shape[0]))
return cluster_data
data = get_cluster_data(survivors, casualties, survivors.columns)
data
print("Min:")
print(data.min(), '\n')
print("Max:")
print(data.max())
###Output
Min:
Average Distance Between Survivors 0.080435
Average Distance Between Casualties 0.044693
Distance Between Centroids 0.005033
dtype: float64
Max:
Average Distance Between Survivors 0.500369
Average Distance Between Casualties 0.435098
Distance Between Centroids 0.533746
dtype: float64
###Markdown
The largest difference between centroid components is along the `sex` dimension. This is in line with well-known results. `pclass` and `alone` have the 2nd and 3rd greatest differences in their means. Large differences can be interpreted as saying that the average `sex` of a survivor, for instance, was quite different from the average `sex` of a casualty.The dimensions with the tightest clusters are `sibsp` and `fare`, for survivors and casualties respectively. This can be interpreted as meaning that survivors were similar to each other in terms of their `sibsp` value while casualties were similar to each other in terms of their `fare`.Note that these are two slightly different forms of reasoning about the distribution of features given the outcome (survivor vs casualty). Stretch Goal Automate it: Find the optimal set of dimensions* Automate this: Find the set of dimensions where: the mean distance between each survivor and the mean distance between each casualty is less than the distance.The set of dimensions that maximizes centroid distance and minimizes the cluster distances is the "best" dataset. Or is it? Why do we need the other dimensions? What if we have 30 dimensions and not just 8? What if there is no optimal clustering arrangement? Instead of considering each dimension separately, I will now consider every possible subset of dimensions.What I am performing now is feature selection, where my selection criteria are the centroid and cluster distances. This form of selection generalizes to higher dimensions.
###Code
from itertools import chain, combinations
def all_subsets(data_cols):
return chain(*map(lambda x: combinations(data_cols, x), range(0, len(data_cols)+1)))
subsets = [list(subset) for subset in all_subsets(survivors.columns) if len(subset) > 0]
data = get_cluster_data(survivors, casualties, subsets)
data.head()
data.tail()
centroid_lessthan_cluster = data[(data["Average Distance Between Casualties"] < data["Distance Between Centroids"])
& (data["Average Distance Between Survivors"] < data["Distance Between Centroids"])]
centroid_lessthan_cluster
distance_ratios = 2 * data["Distance Between Centroids"] / (data["Average Distance Between Casualties"] + data["Average Distance Between Survivors"])
# get the subsets with the 10 smallest distance ratios
small_distance_ratios = distance_ratios[distance_ratios < sorted(distance_ratios)[10]]
small_distance_ratios
###Output
_____no_output_____
###Markdown
Based on this analysis, there are 5 sets of features such that the distance between the centroids of survivors and casualties is greater than the average distances between pairs of survivors or pairs of casualties. These are:* `["sex"]`* `["sex", "sibsp"]`* `["sex", "parch"]`* `["sex", "fare"]`* `["sex", "sibsp", "fare"]`Similar sets of features have the lowest ratios between the inter-centroid and intra-centroid distances. In particular, passengers who embarked from Cherbourg were clustered, but this can also be due to the fact that most passengers embarked from Cherbourg to begin with, so there is not much variance along that dimension - leading to small intra-centroid distances.We can visualize the features of `["sex", "sibsp", "fare"]` to gain some intuition about the result of these findings.
###Code
ax = plot3axes(titanic, ['sex', 'sibsp', 'fare'])
ax.set(title="Titanic Passengers (casualties in red)", xlabel='sex (0: female, 1: male)', ylabel='sibsp (scaled between (0, 1))', zlabel='fare (scaled between (0, 1)')
###Output
_____no_output_____ |
Models/Training utils/ROC curves.ipynb | ###Markdown
Precision-recall
###Code
resnet_prec, resnet_rec, _ = precision_recall_curve(y_test, predictions_resnet)
inception_prec, inception_rec, _ = precision_recall_curve(y_test, predictions_inception)
mobilenet_prec, mobilenet_rec, _ = precision_recall_curve(y_test, predictions_mobilenet)
plt.figure(figsize=(15, 9))
plt.grid()
plt.title('Precision-recall curves')
plt.step(resnet_rec, resnet_prec, label='ResNet50')
plt.step(inception_rec, inception_prec, label='Inception v3')
plt.step(mobilenet_rec, mobilenet_prec, label='MobileNet v2')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.xticks(np.arange(0.0, 1.00001, 0.1))
plt.yticks(np.arange(0.2, 1.00001, 0.1))
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
ROC
###Code
resnet_fpr, resnet_tpr, _ = roc_curve(y_test, predictions_resnet)
inception_fpr, inception_tpr, _ = roc_curve(y_test, predictions_inception)
mobilenet_fpr, mobilenet_tpr, _ = roc_curve(y_test, predictions_mobilenet)
plt.figure(figsize=(15, 9))
plt.grid()
plt.title('ROC curves')
plt.step(resnet_fpr, resnet_tpr, label='ResNet50')
plt.step(inception_fpr, inception_tpr, label='Inception v3')
plt.step(mobilenet_fpr, mobilenet_tpr, label='MobileNet v2')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.xticks(np.arange(0.0, 1.00001, 0.1))
plt.yticks(np.arange(0.2, 1.00001, 0.1))
plt.legend()
plt.show()
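# Aside (added for clarity; assumes the usual scikit-learn helper is available):
# the area under each ROC curve summarises the comparison in a single number.
from sklearn.metrics import roc_auc_score
resnet_auc = roc_auc_score(y_test, predictions_resnet)
inception_auc = roc_auc_score(y_test, predictions_inception)
mobilenet_auc = roc_auc_score(y_test, predictions_mobilenet)
print(f'AUC - ResNet50: {resnet_auc:.3f}, Inception v3: {inception_auc:.3f}, MobileNet v2: {mobilenet_auc:.3f}')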
###Output
_____no_output_____ |
01 python/lecture 14 materials pandas/Кирилл Сетдеков Homework7.ipynb | ###Markdown
Python for Data Analysis*Tatyana Rogovich, NRU HSE* The pandas library. Exercises.
###Code
import pandas as pd
%matplotlib inline
import seaborn as sns
from scipy.stats import norm
###Output
_____no_output_____
###Markdown
We will work with the Pima Indian Diabetes dataset: a set of data from the National Institute of Diabetes and Digestive and Kidney Diseases. The goal of the dataset is to diagnostically predict whether a patient has diabetes. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are women of Pima Indian heritage who are at least 21 years old.
###Code
data = pd.read_csv('https://raw.githubusercontent.com/pileyan/Data/master/data/pima-indians-diabetes.csv')
data.head(10)
###Output
_____no_output_____
###Markdown
Data description:- __Pregnancies__ - the number of pregnancies, measured in whole numbers from 0 to N. Variable type: quantitative, discrete.- __Glucose__ - the blood glucose level, whole numbers. Variable type: quantitative, discrete.- __BloodPressure__ - the arterial blood pressure in mm Hg, whole numbers. Variable type: quantitative, discrete.- __SkinThickness__ - the triceps circumference in millimeters, whole numbers. Variable type: quantitative, discrete.- __Insulin__ - the blood insulin level, whole numbers. Variable type: quantitative, discrete.- __BMI__ - the body mass index. Variable type: quantitative, continuous.- __DiabetesPedigreeFunction__ - the risk of hereditary diabetes based on the presence of diabetes in relatives, expressed as a decimal fraction between 0 and 1. Variable type: quantitative, continuous.- __Age__ - the age in whole years. Variable type: quantitative, discrete.- __Class__ - whether the subject has diabetes, expressed as 0 (healthy) or 1 (diabetic). Variable type: categorical, binary. __Task 1.__As you can see, there are many missing values (NaN) in the data. Count the number of missing values in each column.
###Code
data.isna().agg(['sum', 'mean']) # counts and shares of missing values
###Output
_____no_output_____
###Markdown
__Task 2.__Replace all missing values of the discrete features with the corresponding medians, and of the continuous features with the corresponding means.
###Code
# data['Pregnancies'].fillna(data['Pregnancies'].median()) # checked the columns individually
# data['Glucose'].fillna(data['Glucose'].median())
# data['BloodPressure'].fillna(data['BloodPressure'].median())
# data['SkinThickness'].fillna(data['SkinThickness'].median())
# data['Insulin'].fillna(data['Insulin'].median())
# data['BMI'].fillna(data['BMI'].mean())
# data['DiabetesPedigreeFunction'].fillna(data['DiabetesPedigreeFunction'].mean())
# data['Age'].fillna(data['Age'].median())
# data['Class'].fillna(data['Class'].median())
data = data.apply(lambda x: x.fillna(x.mean()) if x.name in ['BMI', 'DiabetesPedigreeFunction'] else x.fillna(x.median()))
data.isna().agg(['sum', 'mean']) # counts and shares of missing values
###Output
_____no_output_____
###Markdown
__Task 3.__Compute the basic statistics (minimum, maximum, mean, variance, quantiles) for all columns.
###Code
data.describe()
###Output
_____no_output_____
###Markdown
__Task 4.__How many women older than 50 have been diagnosed with diabetes?
###Code
data[(data["Class"] == 1) & (data["Age"] > 50)].shape[0]
###Output
_____no_output_____
###Markdown
__Task 5.__Find the three women with the largest number of pregnancies.
###Code
import pandasql as ps
query = """
select *
from data
order by Pregnancies desc
limit 3
"""
tst = ps.sqldf(query, locals())
tst
###Output
_____no_output_____
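###Code
# Aside (not required by the task): the same top-3 selection in pure pandas,
# equivalent to the SQL ORDER BY ... DESC LIMIT 3 above (up to tie-breaking).
data.nlargest(3, 'Pregnancies')
###Output
_____no_output_____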
###Markdown
__Task 6.__How many women aged between 30 and 40 have given birth to 3 or more children?
###Code
data[(data["Pregnancies"] >= 3) & (data["Age"] < 40) & (data["Age"] >= 30)].shape[0]
###Output
_____no_output_____
###Markdown
__Task 7.__We will consider blood pressure in the range [80-89] to be normal. What percentage of women have normal blood pressure?
###Code
((data["BloodPressure"] <= 89) & (data["BloodPressure"] >= 80)).mean()
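# Aside: Series.between is inclusive on both ends by default, so this is an
# equivalent, slightly more readable form.
data["BloodPressure"].between(80, 89).mean()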
###Output
_____no_output_____
###Markdown
__Task 8.__A BMI >= 30 is considered a sign of obesity. How many women with signs of obesity have blood pressure above the mean?
###Code
data[(data.BloodPressure >= data.BloodPressure.mean()) & (data["BMI"] >= 30)].shape[0]
###Output
_____no_output_____
###Markdown
__Task 9.__Compare the mean values of the features __Glucose, BloodPressure, Insulin__ between those who have diabetes and those who do not.
###Code
data.groupby('Class')[['Glucose', 'BloodPressure', 'Insulin']].mean()
###Output
_____no_output_____
###Markdown
__Task 10.__Plot histograms for any two quantitative features.
###Code
data.BMI.hist();
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
sns.set_theme(style="ticks")
f, ax = plt.subplots(figsize=(7, 5))
sns.despine(f)
sns.histplot(
data,
x="Glucose", hue="Class",
multiple="stack",
palette="light:m_r",
edgecolor=".3",
linewidth=.5)
###Output
_____no_output_____
###Markdown
__Task 11.__Plot a pie chart for the __Class__ feature.
###Code
data.groupby('Class').size().plot(kind = 'pie', title = 'Breakdown of observations by diabetes status');
###Output
_____no_output_____
###Markdown
__Task 12.__Plot the distributions of the __Age__ and __BloodPressure__ features and compare both distributions with the normal distribution.
###Code
from scipy.stats import norm
from scipy import stats
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
x = np.linspace(min(data.Age),
max(data.Age), 100)
sns.histplot(data.Age, stat="density")
mu, std = norm.fit(data.Age)
ax.plot(x, norm.pdf(x, mu, std), 'r-', lw=5, alpha=0.6) # overlay the fitted normal curve for comparison
k2, p = stats.normaltest(data.Age)
alpha = 0.01
print("p = {:g}".format(p))
if p < alpha: # null hypothesis: x comes from a normal distribution
print("The null hypothesis can be rejected")
else:
print("The null hypothesis cannot be rejected")
from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt
plotdata = data.BloodPressure
fig, ax = plt.subplots(1, 1)
x = np.linspace(min(plotdata),
max(plotdata), 100)
sns.histplot(plotdata, stat="density")
mu, std = norm.fit(plotdata)
ax.plot(x, norm.pdf(x, mu, std), 'r-', lw=5, alpha=0.6)
k2, p = stats.normaltest(plotdata)
alpha = 0.01
print("p = {:g}".format(p))
if p < alpha: # null hypothesis: x comes from a normal distribution
print("The null hypothesis can be rejected")
else:
print("The null hypothesis cannot be rejected")
###Output
p = 2.73373e-05
The null hypothesis can be rejected
###Markdown
__Task 13.__Plot the following: the average share of diabetes cases as a function of the number of pregnancies.
###Code
data.groupby('Pregnancies').Class.mean().plot(); # share of Class = 1 within each group of Pregnancies values
###Output
_____no_output_____
###Markdown
__Task 14.__Add a new binary feature:__wasPregnant__ $\in$ {0,1} - whether the woman has been pregnant (1) or not (0)
###Code
def to_binary_if_above(x):
if x==0:
return 0
else:
return 1
data["wasPregnant"] = data['Pregnancies'].apply(to_binary_if_above)
###Output
_____no_output_____
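###Code
# Aside (not required by the task): a vectorised one-liner produces the same flag.
wasPregnant_alt = (data["Pregnancies"] > 0).astype(int)
###Output
_____no_output_____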
###Markdown
__Task 15.__Compare the percentage of diabetics among women who have been pregnant and those who have not.
###Code
data.groupby('wasPregnant').Class.mean()
###Output
_____no_output_____
###Markdown
__Task 16.__Add a new categorical feature __bodyType__ based on the BMI column:__BMI Categories:__ Underweight = <18.5Normal weight = 18.5–24.9 Overweight = 25–29.9 Obesity = BMI of 30 or greaterThe feature should take the values Underweight, Normal weight, Overweight, and Obesity.
###Code
def bmi_convert(x):
if x >= 25:
if x >= 30:
return "Obesity"
else:
return "Overweight"
else:
if x >= 18.5:
return "Normal weight"
else:
return "Underweight"
data["bodyType"] = data['BMI'].apply(bmi_convert)
data.bodyType.hist()
###Output
_____no_output_____
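###Code
# Aside (not required by the task): pd.cut expresses the same binning declaratively;
# right=False gives the intervals [0, 18.5), [18.5, 25), [25, 30), [30, inf).
bodyType_cut = pd.cut(data["BMI"],
                      bins=[0, 18.5, 25, 30, float("inf")],
                      labels=["Underweight", "Normal weight", "Overweight", "Obesity"],
                      right=False)
###Output
_____no_output_____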
###Markdown
__Task 17.__We will consider "healthy" those women who have normal weight and normal blood pressure. What percentage of "healthy" women have diabetes?
###Code
p = data[((data["BloodPressure"] <= 89) & (data["BloodPressure"] >= 80)) & (data["bodyType"] == 'Normal weight')].Class.mean()
print(f'{p*100} % of "healthy" women have diabetes')
###Output
10.0 % of "healthy" women have diabetes
|
MySQL/MySQL_Exercise_10_Useful_Logical_Functions.ipynb | ###Markdown
Copyright Jana Schaich Borg/Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) MySQL Exercise 10: Useful Logical OperatorsThere are a few more logical operators we haven't covered yet that you might find useful when designing your queries. Expressions that use logical operators return a result of "true" or "false", depending on whether the conditions you specify are met. The "true" or "false" results are usually used to determine which, if any, subsequent parts of your query will be run. We will discuss the IF operator, the CASE operator, and the order of operations within logical expressions in this lesson.**Begin by loading the sql library and database, and making the Dognition database your default database:**
###Code
%load_ext sql
%sql mysql://studentuser:studentpw@localhost/dognitiondb
%sql USE dognitiondb
###Output
* mysql://studentuser:***@localhost/dognitiondb
0 rows affected.
###Markdown
1. IF expressionsIF expressions are used to return one of two results based on whether inputs to the expressions meet the conditions you specify. They are frequently used in SELECT statements as a compact way to rename values in a column. The basic syntax is as follows:```IF([your conditions],[value outputted if conditions are met],[value outputted if conditions are NOT met])```So we could write:```sqlSELECT created_at, IF(created_at<'2014-06-01','early_user','late_user') AS user_typeFROM users``` to output one column that provided the time stamp of when a user account was created, and a second column called user_type that used that time stamp to determine whether the user was an early or late user. User_type could then be used in a GROUP BY statement to segment summary calculations (in database systems that support the use of aliases in GROUP BY statements). For example, since we know there are duplicate user_guids in the user table, we could combine a subquery with an IF statement to retrieve a list of unique user_guids with their classification as either an early or late user (based on when their first user entry was created): ```sqlSELECT cleaned_users.user_guid as UserID, IF(cleaned_users.first_account<'2014-06-01','early_user','late_user') AS user_typeFROM (SELECT user_guid, MIN(created_at) AS first_account FROM users GROUP BY user_guid) AS cleaned_users``` We could then use a GROUP BY statement to count the number of unique early or late users:```sql SELECT IF(cleaned_users.first_account<'2014-06-01','early_user','late_user') AS user_type, COUNT(cleaned_users.first_account)FROM (SELECT user_guid, MIN(created_at) AS first_account FROM users GROUP BY user_guid) AS cleaned_usersGROUP BY user_type```**Try it yourself:**
###Code
%%sql
SELECT IF(cleaned_users.first_account<'2014-06-01','early_user','late_user') AS user_type,
COUNT(cleaned_users.first_account) AS Count
FROM (SELECT user_guid, MIN(created_at) AS first_account
FROM users
GROUP BY user_guid) AS cleaned_users
GROUP BY user_type;
###Output
_____no_output_____
###Markdown
**Question 1: Write a query that will output distinct user_guids and their associated country of residence from the users table, excluding any user_guids that have NULL values. You should get 16,261 rows in your result.**
###Code
%%sql
SELECT DISTINCT u.user_guid AS UserID, u.country AS Country
FROM users u
WHERE u.country IS NOT NULL;
###Output
_____no_output_____
###Markdown
**Question 2: Use an IF expression and the query you wrote in Question 1 as a subquery to determine the number of unique user_guids who reside in the United States (abbreviated "US") and outside of the US.**
###Code
%%sql
SELECT IF(sub.country='US','In US','Outside US') AS Location,
COUNT(sub.user_guid) AS Count
FROM (SELECT DISTINCT u.user_guid, u.country
FROM users u
WHERE u.country IS NOT NULL AND u.user_guid IS NOT NULL) AS sub
GROUP BY Location;
###Output
_____no_output_____
###Markdown
Single IF expressions can only result in one of two specified outputs, but multiple IF expressions can be nested to result in more than two possible outputs. When you nest IF expressions, it is important to encase each IF expression--as well as the entire IF expression put together--in parentheses. For example, if you examine the entries contained in the non-US countries category, you will see that many users are associated with a country called "N/A." "N/A" is an abbreviation for "Not Applicable"; it is not a real country name. We should separate these entries from the "Outside of the US" category we made earlier. We could use a nested query to say whenever "country" does not equal "US", use the results of a second IF expression to determine whether the outputed value should be "Not Applicable" or "Outside US." The IF expression would look like this:```sqlIF(cleaned_users.country='US','In US', IF(cleaned_users.country='N/A','Not Applicable','Outside US'))```Since the second IF expression is in the position within the IF expression where you specify "value outputted if conditions are not met," its two possible outputs will only be considered if cleaned_users.country='US' is evaluated as false.The full query to output the number of unique users in each of the three groups would be:```sql SELECT IF(cleaned_users.country='US','In US', IF(cleaned_users.country='N/A','Not Applicable','Outside US')) AS US_user, count(cleaned_users.user_guid) FROM (SELECT DISTINCT user_guid, country FROM users WHERE country IS NOT NULL) AS cleaned_usersGROUP BY US_user```**Try it yourself. You should get 5,642 unique user_guids in the "Not Applicable" category, and 1,263 users in the "Outside US" category.**
###Code
%%sql
SELECT IF(cleaned_users.country='US','In US',
IF(cleaned_users.country='N/A','Not Applicable','Outside US')) AS US_user,
count(cleaned_users.user_guid)
FROM (SELECT DISTINCT user_guid, country
FROM users
WHERE country IS NOT NULL) AS cleaned_users
GROUP BY US_user;
###Output
_____no_output_____
###Markdown
The IF function is not supported by all database platforms, and some spell the function as IIF rather than IF, so be sure to double-check how the function works in the platform you are using.If nested IF expressions seem confusing or hard to read, don't worry, there is a better function available for situations when you want to use conditional logic to output more than two groups. That function is called CASE. 2. CASE expressionsThe main purpose of CASE expressions is to return a singular value based on one or more conditional tests. You can think of CASE expressions as an efficient way to write a set of IF and ELSEIF statements. There are two viable syntaxes for CASE expressions. If you need to manipulate values in a current column of your data, you would use the "searched" CASE syntax. Using this syntax, our nested IF statement from above could be written as:```sqlSELECT CASE WHEN cleaned_users.country="US" THEN "In US" WHEN cleaned_users.country="N/A" THEN "Not Applicable" ELSE "Outside US" END AS US_user, count(cleaned_users.user_guid) FROM (SELECT DISTINCT user_guid, country FROM users WHERE country IS NOT NULL) AS cleaned_usersGROUP BY US_user```**Go ahead and try it:**
###Code
%%sql
SELECT CASE WHEN cleaned_users.country="US" THEN "In US"
WHEN cleaned_users.country="N/A" THEN "Not Applicable"
ELSE "Outside US"
END AS US_user,
count(cleaned_users.user_guid)
FROM (SELECT DISTINCT user_guid, country
FROM users
WHERE country IS NOT NULL) AS cleaned_users
GROUP BY US_user
###Output
_____no_output_____
###Markdown
Since our query does not require manipulation of any of the values in the country column, though, we could also take advantage of the "simple" CASE syntax, which is slightly more compact. Our query written in this syntax would look like this:```sqlSELECT CASE cleaned_users.country WHEN "US" THEN "In US" WHEN "N/A" THEN "Not Applicable" ELSE "Outside US" END AS US_user, count(cleaned_users.user_guid) FROM (SELECT DISTINCT user_guid, country FROM users WHERE country IS NOT NULL) AS cleaned_usersGROUP BY US_user```**Try this query as well:**
###Code
%%sql
SELECT CASE cleaned_users.country
WHEN "US" THEN "In US"
WHEN "N/A" THEN "Not Applicable"
ELSE "Outside US"
END AS US_user,
count(cleaned_users.user_guid)
FROM (SELECT DISTINCT user_guid, country
FROM users
WHERE country IS NOT NULL) AS cleaned_users
GROUP BY US_user;
###Output
_____no_output_____
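###Markdown
For reference, the two CASE forms used above follow these general templates (standard SQL syntax, summarized here rather than taken from the course materials):

```sql
-- "Searched" CASE: each WHEN tests its own condition
CASE WHEN condition_1 THEN result_1
     WHEN condition_2 THEN result_2
     ELSE default_result
END

-- "Simple" CASE: one expression is compared against the listed values
CASE expression
     WHEN value_1 THEN result_1
     WHEN value_2 THEN result_2
     ELSE default_result
END
```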
###Markdown
There are a couple of things to know about CASE expressions: + Make sure to include the word END at the end of the expression+ CASE expressions do not require parentheses+ ELSE expressions are optional+ If an ELSE expression is omitted, NULL values will be outputted for all rows that do not meet any of the conditions stated explicitly in the expression+ CASE expressions can be used anywhere in a SQL statement, including in GROUP BY, HAVING, and ORDER BY clauses or the SELECT column list.You will find that CASE statements are useful in many contexts. For example, they can be used to rename or revise values in a column.**Question 3: Write a query using a CASE statement that outputs 3 columns: dog_guid, dog_fixed, and a third column that reads "neutered" every time there is a 1 in the "dog_fixed" column of dogs, "not neutered" every time there is a value of 0 in the "dog_fixed" column of dogs, and "NULL" every time there is a value of anything else in the "dog_fixed" column. Limit your results for troubleshooting purposes.**
###Code
%%sql
SELECT dog_guid,dog_fixed,
CASE
WHEN dog_fixed=1 THEN 'neutered'
WHEN dog_fixed=0 THEN 'not neutered'
END AS NeuterStatus
FROM dogs
LIMIT 100;
###Output
* mysql://studentuser:***@localhost/dognitiondb
100 rows affected.
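###Markdown
Picking up the earlier note that CASE expressions can appear outside the SELECT list: here is a quick sketch (not one of the course exercises) of a CASE expression inside an ORDER BY clause, which would sort neutered dogs before the rest and then order by weight:

```sql
SELECT dog_guid, dog_fixed, weight
FROM dogs
ORDER BY CASE WHEN dog_fixed=1 THEN 0 ELSE 1 END, weight
LIMIT 100;
```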
###Markdown
You can also use CASE statements to standardize or combine several values into one. **Question 4: We learned that NULL values should be treated the same as "0" values in the exclude columns of the dogs and users tables. Write a query using a CASE statement that outputs 3 columns: dog_guid, exclude, and a third column that reads "exclude" every time there is a 1 in the "exclude" column of dogs and "keep" every time there is any other value in the exclude column. Limit your results for troubleshooting purposes.**
###Code
%%sql
SELECT dog_guid,exclude,
CASE
WHEN exclude=1 THEN 'exclude'
ELSE 'keep'
END AS ExcludeStat
FROM dogs
LIMIT 100;
###Output
* mysql://studentuser:***@localhost/dognitiondb
100 rows affected.
###Markdown
**Question 5: Re-write your query from Question 4 using an IF statement instead of a CASE statement.**
###Code
%%sql
SELECT dog_guid,exclude,
IF(exclude=1,'exclude','keep') AS ExcludeStat
FROM dogs
LIMIT 100;
###Output
_____no_output_____
###Markdown
Case expressions are also useful for breaking values in a column up into multiple groups that meet specific criteria or that have specific ranges of values.**Question 6: Write a query that uses a CASE expression to output 3 columns: dog_guid, weight, and a third column that reads... "very small" when a dog's weight is 1-10 pounds "small" when a dog's weight is greater than 10 pounds to 30 pounds "medium" when a dog's weight is greater than 30 pounds to 50 pounds "large" when a dog's weight is greater than 50 pounds to 85 pounds "very large" when a dog's weight is greater than 85 pounds Limit your results for troubleshooting purposes.**
###Code
%%sql
SELECT dog_guid, weight,
CASE
WHEN weight>=1 AND weight<=10 THEN 'very small'
WHEN weight>10 AND weight<=30 THEN 'small'
WHEN weight>30 AND weight<=50 THEN 'medium'
WHEN weight>50 AND weight<=85 THEN 'large'
WHEN weight>85 THEN 'very large'
ELSE 'test'
END AS weight_status
FROM dogs
LIMIT 200;
###Output
_____no_output_____
###Markdown
3. Pay attention to the order of operations within logical expressionsAs you started to see with the query you wrote in Question 6, CASE expressions often end up needing multiple AND and OR operators to accurately describe the logical conditions you want to impose on the groups in your queries. You must pay attention to the order in which these operators are included in your logical expressions, because unless parentheses are included, the NOT operator is always evaluated before an AND operator, and an AND operator is always evaluated before the OR operator. When parentheses are included, the expressions within the parentheses are evaluated first. That means this expression:```sqlCASE WHEN "condition 1" OR "condition 2" AND "condition 3"...```will lead to different results than this expression: ```sqlCASE WHEN "condition 3" AND "condition 1" OR "condition 2"...``` or this expression:```sqlCASE WHEN ("condition 1" OR "condition 2") AND "condition 3"...``` In the first case you will get rows that meet condition 2 and 3, or condition 1. In the second case you will get rows that meet condition 1 and 3, or condition 2. In the third case, you will get rows that meet condition 1 or 2, and condition 3.Let's see a concrete example of how the order in which logical operators are evaluated affects query results. **Question 7: How many distinct dog_guids are found in group 1 using this query?** ```sqlSELECT COUNT(DISTINCT dog_guid), CASE WHEN breed_group='Sporting' OR breed_group='Herding' AND exclude!='1' THEN "group 1" ELSE "everything else" END AS groupsFROM dogsGROUP BY groups```
###Code
%%sql
SELECT COUNT(DISTINCT dog_guid),
CASE WHEN breed_group='Sporting' OR breed_group='Herding' AND exclude!='1' THEN "group 1"
ELSE "everything else"
END AS groups
FROM dogs
GROUP BY groups;
###Output
* mysql://studentuser:***@localhost/dognitiondb
2 rows affected.
###Markdown
**Question 8: How many distinct dog_guids are found in group 1 using this query?** ```sqlSELECT COUNT(DISTINCT dog_guid), CASE WHEN exclude!='1' AND breed_group='Sporting' OR breed_group='Herding' THEN "group 1" ELSE "everything else" END AS group_nameFROM dogsGROUP BY group_name```
###Code
%%sql
SELECT COUNT(DISTINCT dog_guid),
CASE WHEN exclude!='1' AND breed_group='Sporting' OR breed_group='Herding' THEN "group 1"
ELSE "everything else"
END AS group_name
FROM dogs
GROUP BY group_name;
###Output
* mysql://studentuser:***@localhost/dognitiondb
2 rows affected.
###Markdown
**Question 9: How many distinct dog_guids are found in group 1 using this query?**```sqlSELECT COUNT(DISTINCT dog_guid), CASE WHEN exclude!='1' AND (breed_group='Sporting' OR breed_group='Herding') THEN "group 1" ELSE "everything else" END AS group_nameFROM dogsGROUP BY group_name```
###Code
%%sql
SELECT COUNT(DISTINCT dog_guid),
CASE WHEN exclude!='1' AND (breed_group='Sporting' OR breed_group='Herding') THEN "group 1"
ELSE "everything else"
END AS group_name
FROM dogs
GROUP BY group_name;
###Output
* mysql://studentuser:***@localhost/dognitiondb
2 rows affected.
###Markdown
**So make sure you always pay attention to the order in which your logical operators are listed in your expressions, and whenever possible, include parentheses to ensure that the expressions are evaluated in the way you intend!** Let's practice some more IF and CASE statements**Question 10: For each dog_guid, output its dog_guid, breed_type, number of completed tests, and use an IF statement to include an extra column that reads "Pure_Breed" whenever breed_type equals 'Pure Breed" and "Not_Pure_Breed" whenever breed_type equals anything else. LIMIT your output to 50 rows for troubleshooting. HINT: you will need to use a join to complete this query.**
###Code
%%sql
SELECT d.dog_guid,d.breed_type,COUNT(c.created_at),
IF(d.breed_type='Pure Breed','Pure_Breed','Not_Pure_Breed') AS Extra
FROM complete_tests c JOIN dogs d
ON d.dog_guid=c.dog_guid
GROUP BY d.dog_guid
LIMIT 50;
###Output
_____no_output_____
###Markdown
**Question 11: Write a query that uses a CASE statement to report the number of unique user_guids associated with customers who live in the United States and who are in the following groups of states:****Group 1: New York (abbreviated "NY") or New Jersey (abbreviated "NJ") Group 2: North Carolina (abbreviated "NC") or South Carolina (abbreviated "SC") Group 3: California (abbreviated "CA") Group 4: All other states with non-null values****You should find 898 unique user_guids in Group1.**
###Code
%%sql
SELECT CASE
WHEN (state='NY' OR state='NJ') THEN 'Group 1'
WHEN (state='NC' OR state='SC') THEN 'Group 2'
WHEN state='CA' THEN 'Group 3'
ELSE 'Group 4'
END AS GroupState,
COUNT(DISTINCT user_guid)
FROM users
WHERE country='US' AND state IS NOT NULL
GROUP BY GroupState;
###Output
* mysql://studentuser:***@localhost/dognitiondb
4 rows affected.
###Markdown
**Question 12: Write a query that allows you to determine how many unique dog_guids are associated with dogs who are DNA tested and have either stargazer or socialite personality dimensions. Your answer should be 70.**
###Code
%%sql
SELECT COUNT(DISTINCT dog_guid) AS COUNT
FROM dogs
WHERE dna_tested=1 AND (dimension="stargazer" OR dimension="socialite");
###Output
* mysql://studentuser:***@localhost/dognitiondb
1 rows affected.
|
notebooks/Tide-Prediction-Plus-NDWI.ipynb | ###Markdown
Tide Prediction TaskStart with some imports.
###Code
from gbdxtools import Interface
from shapely.wkt import loads
import dateutil.parser
from datetime import datetime
###Output
_____no_output_____
###Markdown
Create a GBDX interface.
###Code
gbdx = Interface()
###Output
_____no_output_____
###Markdown
Start with a catalog ID (this is some image of the Palm Islands in Dubai).
###Code
cat_id = '103001000349F800'
###Output
_____no_output_____
###Markdown
Get metadata record for parsing of footprint WKT and timestamp.
###Code
record = gbdx.catalog.get(cat_id)
###Output
_____no_output_____
###Markdown
Get the centroid of the footprint.
###Code
centroid = loads(record.get('properties').get('footprintWkt')).centroid
###Output
_____no_output_____
###Markdown
The latitude corresponds to `y` and longitude corresponds to `x`.
###Code
lat = centroid.y
lon = centroid.x
print('lat=%f, lon=%f' % (lat, lon))
###Output
lat=25.113527, lon=55.133696
###Markdown
Next, we get the timestamp of the image.
###Code
timestamp = dateutil.parser.parse(record.get('properties').get('timestamp'))
###Output
_____no_output_____
###Markdown
The timestamp is converted to the date time group format that the tide prediction algorithm expects (Y-m-d-H-M).
###Code
dtg = datetime.strftime(timestamp, '%Y-%m-%d-%H-%M')
print('dtg="%s"' % dtg)
###Output
dtg="2010-01-12-07-12"
###Markdown
At this point, we'd like to call the task with
###Code
print('lat="%s", lon="%s", dtg="%s"' % (str(lat), str(lon), dtg))
###Output
lat="25.1135269383", lon="55.1336961813", dtg="2010-01-12-07-12"
###Markdown
and receive an output like```json{ "minimumTide24Hours":null, "maximumTide24Hours":null, "currentTide":null}``` We have currently implemented tide prediction using a modified version of bf-tideprediction (just without the flask wrapper for endpoints). The image `chambbj/hello-gbdx` v0.0.9 is available as GBDX task `hello-gbdx-chambbj`. It takes the aforementioned lat, lon, dtg all as strings.
###Code
aoptask = gbdx.Task('AOP_Strip_Processor',
data=gbdx.catalog.get_data_location(cat_id),
bands='MS',
enable_dra=False,
enable_pansharpen=False,
enable_acomp=True,
ortho_epsg='UTM')
tide_task = gbdx.Task('ShorelineDetection',
lat = str(lat),
lon = str(lon),
dtg = dtg,
image = aoptask.outputs.data.value)
###Output
_____no_output_____
###Markdown
The tide prediction json file will be saved to `/mnt/work/output/json` and is persisted to `some_random_folder` within my user bucket/prefix on S3.
###Code
import uuid
from os.path import join
workflow = gbdx.Workflow([aoptask, tide_task])
random_str = str(uuid.uuid4())
workflow.savedata(tide_task.outputs.results, location=join('some_random_folder', random_str))
workflow.execute()
workflow.status
###Output
_____no_output_____
###Markdown
Check periodically for status `complete`.
###Code
import time
import sys
from __future__ import print_function
while not workflow.status.get('state') == u'complete':
print(datetime.now(), workflow.status, end='\r')
sys.stdout.flush()
time.sleep(5.0)
workflow.status
###Output
2017-06-20 14:16:01.481899 {u'state': u'running', u'event': u'started'}'}
###Markdown
And download the result. Yes, we could display it here, but this is good enough.
###Code
gbdx.s3.download(join('some_random_folder', random_str))
###Output
_____no_output_____
###Markdown
The tides.json file is now in the local directory.
###Code
print(join('some_random_folder', random_str))
workflow.events
workflow.stdout
###Output
_____no_output_____ |
student-notebooks/16.06-PyRosettaCluster-Simple-protocol.ipynb | ###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
--- *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* PyRosettaCluster Tutorial 1A. Simple protocolPyRosettaCluster Tutorial 1A is a Jupyter Lab that generates a decoy using `PyRosettaCluster`. It is the simplest use case, where one protocol takes one input `.pdb` file and returns one output `.pdb` file. All information needed to reproduce the simulation is included in the output `.pdb` file. After completing PyRosettaCluster Tutorial 1A, see PyRosettaCluster Tutorial 1B to learn how to reproduce simulations from PyRosettaCluster Tutorial 1A. *Warning*: This notebook uses `pyrosetta.distributed.viewer` code, which runs in `jupyter notebook` and might not run if you're using `jupyterlab`. *Note:* This Jupyter notebook uses parallelization and is **not** meant to be executed within a Google Colab environment. *Note:* This Jupyter notebook requires the PyRosetta distributed layer which is obtained by building PyRosetta with the `--serialization` flag or installing PyRosetta from the RosettaCommons conda channel **Please see Chapter 16.00 for setup instructions** *Note:* This Jupyter notebook is intended to be run within **Jupyter Lab**, but may still be run as a standalone Jupyter notebook. 1. Import packages
###Code
import bz2
import glob
import logging
import os
import pyrosetta
import pyrosetta.distributed.io as io
import pyrosetta.distributed.viewer as viewer
from pyrosetta.distributed.cluster import PyRosettaCluster
logging.basicConfig(level=logging.INFO)
###Output
_____no_output_____
###Markdown
2. Initialize a compute cluster using `dask`1. Click the "Dask" tab in Jupyter Lab (arrow, left)2. Click the "+ NEW" button to launch a new compute cluster (arrow, lower)![title](Media/dask_labextension_1.png)3. Once the cluster has started, click the brackets to "inject client code" for the cluster into your notebook![title](Media/dask_labextension_2.png)Inject client code here, then run the cell:
###Code
# This cell is an example of the injected client code. You should delete this cell and instantiate your own client with scheduler IP/port address.
if not os.getenv("DEBUG"):
from dask.distributed import Client
client = Client("tcp://127.0.0.1:40329")
else:
client = None
client
###Output
_____no_output_____
###Markdown
Providing a `client` allows you to monitor parallelization diagnostics from within this Jupyter Lab Notebook. However, providing a `client` is optional for the `PyRosettaCluster` instance and `reproduce` function. If you do not provide a `client`, then `PyRosettaCluster` will instantiate a `LocalCluster` object using the `dask` module by default, or an `SGECluster` or `SLURMCluster` object using the `dask-jobqueue` module if you provide the `scheduler` argument, e.g.:***```PyRosettaCluster( ... client=client, Monitor diagnostics with existing client (see above) scheduler=None, Bypasses making a LocalCluster because client is provided ...)```***```PyRosettaCluster( ... client=None, Existing client was not input (default) scheduler=None, Runs the simulations on a LocalCluster (default) ...)```***```PyRosettaCluster( ... client=None, Existing client was not input (default) scheduler="sge", Runs the simulations on the SGE job scheduler ...)```***```PyRosettaCluster( ... client=None, Existing client was not input (default) scheduler="slurm", Runs the simulations on the SLURM job scheduler ...)``` 3. Define or import the user-provided PyRosetta protocol(s):Remember, you *must* import `pyrosetta` locally within each user-provided PyRosetta protocol. Other libraries may not need to be locally imported because they are serializable by the `distributed` module. That said, it is good practice to locally import all of your modules in each user-provided PyRosetta protocol.
###Code
if not os.getenv("DEBUG"):
from additional_scripts.my_protocols import my_protocol
if not os.getenv("DEBUG"):
client.upload_file("additional_scripts/my_protocols.py") # This sends a local file up to all worker nodes.
###Output
_____no_output_____
###Markdown
Let's look at the definition of the user-provided PyRosetta protocol `my_protocol` located in `additional_scripts/my_protocols.py`: ```def my_protocol(input_packed_pose=None, **kwargs): """ Relax the input `PackedPose` object. Args: input_packed_pose: A `PackedPose` object to be repacked. Optional. **kwargs: PyRosettaCluster task keyword arguments. Returns: A `PackedPose` object. """ import pyrosetta Local import import pyrosetta.distributed.io as io Local import import pyrosetta.distributed.tasks.rosetta_scripts as rosetta_scripts Local import packed_pose = io.pose_from_file(kwargs["s"]) xml = """ """ return rosetta_scripts.SingleoutputRosettaScriptsTask(xml)(packed_pose)``` 4. Define the user-provided keyword argument(s) (i.e. `kwargs`):Upon PyRosetta initialization on the remote worker, the "`options`" and "`extra_options`" `kwargs` get concatenated before initialization. However, specifying the "`extra_options`" `kwargs` will override the default `-out:levels all:warning` command line flags, and specifying the "`options`" `kwargs` will override the default `-ex1 -ex2aro` command line flags.
###Code
def create_kwargs():
yield {
"options": "-ex1",
"extra_options": "-out:level 300 -multithreading:total_threads 1", # Used by pyrosetta.init() on disributed workers
"set_logging_handler": "interactive", # Used by pyrosetta.init() on disributed workers
"s": os.path.join(os.getcwd(), "inputs", "1QYS.pdb"),
}
###Output
_____no_output_____
###Markdown
Ideally, all pose manipulation is accomplished with the user-provided PyRosetta protocols. If you must manipulate a pose prior to instantiating `PyRosettaCluster`, here are some considerations:- Avoid passing `Pose` and `PackedPose` objects through `create_kwargs()`. You might notice that the above cell passes the protein structure information to `PyRosettaCluster` as a `str` type locating the `.pdb` file. In this way, the input `PackedPose` object is instantiated from that `str` within `PyRosettaCluster` on the remote workers (using `io.pose_from_file(kwargs["s"])`) using a random seed which is saved by `PyRosettaCluster`. This allows the protocol to be reproduced, and avoids passing redundant large chunks of data over the network.- It may be tempting to instantiate your pose before `PyRosettaCluster`, and pass a `Pose` or `PackedPose` object into the `create_kwargs()`. However, in this case PyRosetta will be initialized with a random seed outside `PyRosettaCluster`, and that random seed will not be saved by `PyRosettaCluster`. As a consequence, any action taken on the pose (e.g. filling in missing heavy atoms) will not be reproducible.-If you must instantiate your pose before `PyRosettaCluster`, to ensure reproducibility the user must initialize PyRosetta with the constant seed `1111111` within the Jupyter notebook or standalone python script using:```import pyrosettapyrosetta.init("-run:constant_seed 1")```The `-run:constant_seed 1` command line flag defaults to the seed `1111111` ([documentation](https://www.rosettacommons.org/docs/latest/rosetta_basics/options/run-options)). Then, instantiate the pose:```input_packed_pose = pyrosetta.io.pose_from_sequence("TEST")...Perform any pose manipulation...```and then instantiate `PyRosettaCluster` with the additional `input_packed_pose` parameter argument, e.g.:```PyRosettaCluster( ... input_packed_pose=input_packed_pose, ...)```For an initialization example, see Tutorial 4.In summary, the best practice involves giving `create_kwargs` information which will be used by the distributed protocol to create a pose within `PyRosettaCluster`. In edge cases, the user may provide a `Pose` or `PackedPose` object to the `input_packed_pose` argument of `PyRosettaCluster` and set a constant seed of `1111111` outside of `PyRosettaCluster`. 5. Launch the original simulation using the `distribute()` methodThe protocol produces an output decoy, the exact coordinates of which we will reproduce in Tutorial 1B. If the Jupyter Lab Notebook or standalone PyRosetta script did not yet initialize PyRosetta before instantiating `PyRosettaCluster` (preferred workflow), then `PyRosettaCluster` automatically initializes PyRosetta within the Jupyter Lab Notebook or standalone PyRosetta script with the command line flags `-run:constant_seed 1 -multithreading:total_threads 1 -mute all`. Thus, the master node is initialized with the default constant seed, where the master node acts as the client to the distributed workers. The distributed workers actually run the user-provided PyRosetta protocol(s), and each distributed worker initializes PyRosetta with a random seed, which is the seed saved by PyRosettaCluster for downstream reproducibility. The master node is always initialized with a constant seed as best practices. 
To monitor parallelization diagnostics in real-time, in the "Dask" tab, click the various diagnostic tools _(arrows)_ to open new tabs: ![title](Media/dask_labextension_4.png) Arrange the diagnostic tool tabs within Jupyter Lab how you best see fit by clicking and dragging them: ![title](Media/dask_labextension_3.png)
###Code
if not os.getenv("DEBUG"):
output_path = os.path.join(os.getcwd(), "outputs_1A")
PyRosettaCluster(
tasks=create_kwargs,
client=client,
scratch_dir=output_path,
output_path=output_path,
nstruct=4, # Run the first user-provided PyRosetta protocol four times in parallel
).distribute(protocols=[my_protocol])
###Output
_____no_output_____
###Markdown
While jobs are running, you may monitor their progress using the dask dashboard diagnostics within Jupyter Lab! 7. Visualize the resultant decoy Gather the output decoys on disk into poses in memory:
###Code
if not os.getenv("DEBUG"):
results = glob.glob(os.path.join(output_path, "decoys", "*", "*.pdb.bz2"))
packed_poses = []
for bz2file in results:
with open(bz2file, "rb") as f:
packed_poses.append(io.pose_from_pdbstring(bz2.decompress(f.read()).decode()))
###Output
_____no_output_____
###Markdown
View the poses in memory by clicking and dragging to rotate, and zooming in and out with the mouse scroller.
###Code
if not os.getenv("DEBUG"):
view = viewer.init(packed_poses, window_size=(800, 600))
view.add(viewer.setStyle())
view.add(viewer.setStyle(colorscheme="whiteCarbon", radius=0.25))
view.add(viewer.setHydrogenBonds())
view.add(viewer.setHydrogens(polar_only=True))
view.add(viewer.setDisulfides(radius=0.25))
view()
###Output
_____no_output_____ |
.ipynb_checkpoints/Contact-Checker-Runs-checkpoint.ipynb | ###Markdown
Start Here
###Code
% run contactsScraper.py
cc.ContactSheetOutput.currentRow
###Output
_____no_output_____
###Markdown
Verification Handler Use
###Code
orgsForToday = ['National Association for Multi-Ethnicity In Communications (NAMIC)',
'Association for Women in Science',
'Brain Injury Association of America',
'American Society of Home Inspectors',
'NAADAC, the Association for Addiction Professionals',
'American Public Transportation Association',
'Indiana Soybean Alliance',
'Associated Builders and Contractors (ABC)',
'National Association of Social Workers',
'American Marketing Association (AMA)']
org = orgsForToday[9]
vh = cc.VerificationHandler(org)
vh.output.name
vh.write_contact_pointers()
vh.records
cp3 = vh.pointers[5]
cp3.tom
cp3.tom is cp3.nathan
cp3.minnie
cp3.martina
cp3.minnie_here()
###Output
_____no_output_____
###Markdown
Check Above, we have a minnie condition that is not detected Contact Checker Setup
###Code
# Dummy Org Session
org = 'Indiana Soybean Alliance'
a = dm.OrgSession(orgRecords) # The verification handler class must have its own OrgSession, and it must be initialized at startup
a1 = a.processSession(org)
a.orgSessionStatusCheck()
a.anyQueryTimeouts()
from bs4 import BeautifulSoup
rap = a1[0].get_pageSource()
strips = ['\n', '\r', '\t','\xa0']
slash = rap
for c in strips:
slash = slash.replace(c, ' ')
## Try to soup the newline-stripped page text and see what happens
soup_slash = BeautifulSoup(slash, 'lxml')
cc.ContactPointerFamily.set_soup(soup_slash)
if a.anyQueryTimeouts():
dm.OrgSession.PageLoadTimeout = 40
dm.OrgSession.ScriptLoadTimeout = 40
a1 = a.processSession(org)
a.orgSessionStatusCheck()
# Set Soup Object for the Contact Checker Class
orgSoup = a1[0].get_soup()
cc.ContactPointerFamily.set_soup(orgSoup)
# Get a record to check
contacts = cr[cr["Account Name"] == org]
rec = contacts.iloc[[3]]
rec
contacts
###Output
_____no_output_____
###Markdown
Contact Checker Use
###Code
cp2 = cc.VerifiedPointer(rec)
cp2.mary
cp2.mary
cp2.larry.parent
cp2.fred is cp2.larry
cp1 = cc.VerifiedPointer(rec)
cp1.tom.parent
cp1.reggieCounts
orgSoup.findAll(text=cp1.titleReg)
print(orgSoup.prettify())
cp1.titleReg
import re
vptest = re.compile('^.*%s.*$' % '\nVice President\n')
orgSoup.findAll(text=vptest)
unit = cp1.fred.parent.parent.parent
print(cp1.fred.parent.parent.parent.prettify())
tests = unit.children
x = next(tests)
x
s = x.strings
v = next(s)
type(v)
v
###Output
_____no_output_____
###Markdown
Interesting Stuff Above. Option 1) Try finding the title with the newlines: I found it with the newlines. Option 2) What happens if I use the html.parser or html5lib to parse this junk: did not try. Option 3) Aggressively remove the newlines: stripping from document before souping.
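A minimal sketch of what Option 2 could look like (not run in this session; it assumes the `slash` string from the cells above and that the `html5lib` package is installed):

```python
# Sketch only: compare alternative parsers on the same newline-stripped page text.
soup_builtin = BeautifulSoup(slash, 'html.parser')   # Python's built-in parser
soup_html5 = BeautifulSoup(slash, 'html5lib')        # assumes html5lib is installed
print(len(soup_builtin.find_all(True)), len(soup_html5.find_all(True)))  # tag counts per parser
```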
###Code
cp1.titleReg
cp1.titleWords
cp1.write_output_row()
cp1.get_output_row()
cp1.contactPointers['nathan']
cp1.mary_here()
cp1.output.name
cp1.rec['Mailing State/Province'].to_string(index=False)
contactKeys[:14]
myDataRow = [cp1.rec[x].to_string(index=False) for x in cp1.output.get_contact_keys()]
myDataRow
a = [1,2,3]
b = [4,5,6]
a.extend(b)
a
type(cc.ContactPointerFamily.docSoup)
a.sessionBrowser.close()
cp1.fred
orgRecords
f = getContacts()
len(f)
###Output
_____no_output_____ |
notebooks/Dataset F - Indian Liver Patient/Synthetic data evaluation/Privacy/3_Attribute_Inference_Test_Dataset F.ipynb | ###Markdown
Attribute Inference Attack (AIA) Dataset F
###Code
#import libraries
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
import os
print('Libraries imported!!')
#define directory of functions and actual directory
HOME_PATH = '' #home path of the project
FUNCTIONS_DIR = "EVALUATION FUNCTIONS/PRIVACY"
ACTUAL_DIR = os.getcwd()
#change directory to functions directory
os.chdir(HOME_PATH + FUNCTIONS_DIR)
#import functions for membership attack simulation
from attribute_inference import DataPreProcessor
from attribute_inference import RiskAttributesPredictors
from attribute_inference import identified_attributes_percentage
#change directory to actual directory
os.chdir(ACTUAL_DIR)
print('Functions imported!!')
###Output
Functions imported!!
###Markdown
1. Read real and synthetic datasetsIn this part real and synthetic datasets are read.
###Code
#Define global variables
DATA_TYPES = ['Real','GM','SDV','CTGAN','WGANGP']
SYNTHESIZERS = ['GM','SDV','CTGAN','WGANGP']
FILEPATHS = {'Real' : HOME_PATH + 'REAL DATASETS/TRAIN DATASETS/F_IndianLiverPatient_Real_Train.csv',
'GM' : HOME_PATH + 'SYNTHETIC DATASETS/GM/F_IndianLiverPatient_Synthetic_GM.csv',
'SDV' : HOME_PATH + 'SYNTHETIC DATASETS/SDV/F_IndianLiverPatient_Synthetic_SDV.csv',
'CTGAN' : HOME_PATH + 'SYNTHETIC DATASETS/CTGAN/F_IndianLiverPatient_Synthetic_CTGAN.csv',
'WGANGP' : HOME_PATH + 'SYNTHETIC DATASETS/WGANGP/F_IndianLiverPatient_Synthetic_WGANGP.csv'}
categorical_columns = ['gender','class']
numerical_columns = ['age','TB','DB','alkphos','sgpt','sgot','TP','ALB','A_G']
qid_columns = ['age','gender']
risk_attributes = ['TB','DB','alkphos','sgpt','sgot','TP','ALB','A_G','class']
data = dict()
data_qid = dict()
data_risk = dict()
#iterate over all datasets filepaths and read each dataset
for name, path in FILEPATHS.items() :
data[name] = pd.read_csv(path)
#data[name] = data[name].drop(['id'],axis=1)
for col in categorical_columns :
data[name][col] = data[name][col].astype('category').cat.codes
data_qid[name] = data[name][qid_columns]
data_risk[name] = data[name][risk_attributes]
data
data_qid
data_risk
###Output
_____no_output_____
###Markdown
2. Train models to predict attribute values
###Code
#initialize classifiers
categorical_columns = None
numerical_columns = ['age']
categories = None
classifiers_all = dict()
data_preprocessors = dict()
attributes_models_all = dict()
for name in SYNTHESIZERS :
print(name)
data_preprocessors[name] = DataPreProcessor(categorical_columns, numerical_columns, categories)
x_train = data_preprocessors[name].preprocess_train_data(data_qid[name])
# attributes_models = dict()
# attributes_models = train_attributes_prediction_models(data_risk[name], x_train)
attributes_models_all[name] = RiskAttributesPredictors(data_risk[name], qid_columns)
attributes_models_all[name].train_attributes_prediction_models(x_train)
print('####################################################')
###Output
GM
Model trained for TB attribute
Model trained for DB attribute
Model trained for alkphos attribute
Model trained for sgpt attribute
Model trained for sgot attribute
Model trained for TP attribute
Model trained for ALB attribute
Model trained for A_G attribute
Model trained for class attribute
####################################################
SDV
Model trained for TB attribute
Model trained for DB attribute
Model trained for alkphos attribute
Model trained for sgpt attribute
Model trained for sgot attribute
Model trained for TP attribute
Model trained for ALB attribute
Model trained for A_G attribute
Model trained for class attribute
####################################################
CTGAN
Model trained for TB attribute
Model trained for DB attribute
Model trained for alkphos attribute
Model trained for sgpt attribute
Model trained for sgot attribute
Model trained for TP attribute
Model trained for ALB attribute
Model trained for A_G attribute
Model trained for class attribute
####################################################
WGANGP
Model trained for TB attribute
Model trained for DB attribute
Model trained for alkphos attribute
Model trained for sgpt attribute
Model trained for sgot attribute
Model trained for TP attribute
Model trained for ALB attribute
Model trained for A_G attribute
Model trained for class attribute
####################################################
###Markdown
3. Read Real Data and Find Combinations
###Code
#read real dataset
real_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TRAIN DATASETS/F_IndianLiverPatient_Real_Train.csv')
real_data['class'] = real_data['class'].astype('category').cat.codes
real_data = real_data.sample(frac=1)
real_data = real_data[0:int(len(real_data)*0.5)]
real_data
combinations = real_data[qid_columns]
combinations.drop_duplicates(keep='first',inplace=True)
combinations
results_data_all = dict()
columns_results = ['age','gender','TB_rmse','DB_rmse','alkphos_rmse','sgpt_rmse','sgot_rmse','TP_rmse','ALB_rmse','A_G_rmse','class_accuracy']
for name in SYNTHESIZERS :
print(name)
results_data = pd.DataFrame(columns = columns_results)
for comb in combinations.values :
batch = real_data.loc[(real_data['age'] == comb[0]) & (real_data['gender'] == comb[1])]
row_data = (batch[qid_columns].values[0]).tolist()
print(row_data)
x_test = data_preprocessors[name].preprocess_test_data(batch[qid_columns])
print(x_test.shape)
row = attributes_models_all[name].evaluate_attributes_prediction_models(x_test, batch, columns_results)
results_data = results_data.append(row)
results_data_all[name] = results_data
print('#######################################')
results_data_all
###Output
_____no_output_____
###Markdown
5. Visualize obtained results
###Code
results_columns = ['TB_rmse','DB_rmse','alkphos_rmse','sgpt_rmse','sgot_rmse','TP_rmse','ALB_rmse',
'A_G_rmse','class_accuracy']
len(results_columns)
for name in SYNTHESIZERS :
identified_attributes = identified_attributes_percentage(results_data_all[name], results_columns)
print(name,' : ', identified_attributes)
boxplots_data = dict()
for c in results_columns :
boxplots_data[c] = results_data_all[SYNTHESIZERS[0]][c]
for i in range(1,len(SYNTHESIZERS)) :
boxplots_data[c] = np.column_stack((boxplots_data[c], results_data_all[SYNTHESIZERS[i]][c]))
fig, axs = plt.subplots(nrows=2, ncols=5, figsize=(13,2.5*2))
axs_idxs = [[0,0], [0,1], [0,2], [0,3], [0,4], [1,0], [1,1], [1,2], [1,3]]
idx = dict(zip(results_columns,axs_idxs))
for c in results_columns :
ax = axs[idx[c][0], idx[c][1]]
ax.boxplot(boxplots_data[c])
ax.set_title(c)
ax.set_xticklabels(SYNTHESIZERS)
for ax in axs.ravel():
ax.set_xticklabels(ax.get_xticklabels(), rotation = 30, ha="right")
fig.delaxes(axs[1,4])
plt.tight_layout()
fig.savefig('INFERENCE TESTS RESULTS/ATTRIBUTES INFERENCE TESTS RESULTS.svg', bbox_inches='tight')
###Output
_____no_output_____ |
examples/0. Embeddings Generation/Pipelines/ML20M/5. The Big Merge.ipynb | ###Markdown
The Big MergeOther methods will be added soon
###Code
import json
import pandas as pd
roberta = pd.read_csv('../../../../data/engineering/roberta.csv')
cat = pd.read_csv('../../../../data/engineering/mca.csv')
num = pd.read_csv('../../../../data/engineering/pca.csv')
num = num.set_index('idx')
cat = cat.set_index(cat.columns[0])
roberta = roberta.set_index('idx')
movies = pd.read_csv('../../../../data/ml-20m/links.csv')
df = pd.concat([roberta, cat, num], axis=1)
from ppca import PPCA
ppca = PPCA()
ppca.fit(data=df.values.astype(float), d=128, verbose=True)
ppca.var_exp
df.index
import pickle
import torch
transformed = ppca.transform()
films_dict = dict([(k, torch.tensor(transformed[i]).float()) for k, i in zip(df.index, range(transformed.shape[0]))])
pickle.dump(films_dict, open('../../../../data/embeddings/ml20_pca128.pkl', 'wb'))
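# Optional sanity check (an added suggestion, not part of the original run):
# reload the saved dict and confirm each movie maps to a 128-dim float tensor.
reloaded = pickle.load(open('../../../../data/embeddings/ml20_pca128.pkl', 'rb'))
print(len(reloaded), next(iter(reloaded.values())).shape)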
###Output
_____no_output_____ |
UpperDivisionClasses/Data_Science/week8/ie_handout.ipynb | ###Markdown
Challenge 2I'd like you to modify the pattern to capture the relation in as many of these sentences as you can. Some mods can be made easily. Some mods will be difficult - we are dealing with all the complexities of English grammar here.If you feel you just can't write a pattern that works in general, go ahead and write it for the specific case, i.e., one that may only work with one sentence in sentences. Hey, it's a start.
###Code
my_chunker = nltk.RegexpParser(r'''
NP:
{<DT>?<JJ>*<NN>} # chunk determiner (optional), adjectives (optional) and noun
{<NNP>*<NNP>}
{<NNPP><VBZ><NNP>}
{<VBZ><.*>*?<NN>}
{<PRP>|<NNP>*<PRP>|<NNP>}
{<NNP><CD>*<NN>}
{<PRP>*<VBZ><JJ>}
{<CD><NNS><JJ>}
{<RB>?<VBN>*<TO>}
{<RB>}
''')
all_relations = []
for i,s in enumerate(sentences):
relation = build_relation(s, my_chunker)
all_relations.append((i, relation))
print(relation)
print('===============')
###Output
(S
(NP Victor/NNP Frankenstein/NNP)
builds/VBZ
(NP the/DT creature/NN)
in/IN
his/PRP$
(NP laboratory/NN))
(Tree('NP', [('Victor', 'NNP'), ('Frankenstein', 'NNP')]), ('builds', 'VBZ'), Tree('NP', [('the', 'DT'), ('creature', 'NN')]))
===============
(S (NP The/DT creature/NN) is/VBZ (NP 8/CD feet/NNS tall/JJ))
(Tree('NP', [('The', 'DT'), ('creature', 'NN')]), ('is', 'VBZ'), Tree('NP', [('8', 'CD'), ('feet', 'NNS'), ('tall', 'JJ')]))
===============
(S
(NP the/DT monster/NN)
wanders/NNS
through/IN
(NP the/DT wilderness/NN))
()
===============
(S
(NP He/PRP)
(NP finds/VBZ brief/JJ)
solace/JJ
beside/IN
(NP a/DT remote/JJ cottage/NN)
inhabited/VBN
by/IN
(NP a/DT family/NN)
of/IN
peasants/NNS)
()
===============
(S
(NP Eavesdropping/NN)
,/,
(NP the/DT creature/NN)
familiarizes/VBZ
(NP himself/PRP)
with/IN
their/PRP$
lives/NNS
and/CC
learns/NNS
(NP to/TO)
speak/VB)
(Tree('NP', [('the', 'DT'), ('creature', 'NN')]), ('familiarizes', 'VBZ'), Tree('NP', [('himself', 'PRP')]))
===============
(S
(NP The/DT creature/NN)
(NP eventually/RB)
introduces/VBZ
(NP himself/PRP)
(NP to/TO)
(NP the/DT family/NN)
's/POS
(NP blind/NN)
(NP father/NN))
(Tree('NP', [('eventually', 'RB')]), ('introduces', 'VBZ'), Tree('NP', [('himself', 'PRP')]))
===============
(S
(NP the/DT creature/NN)
rescues/VBZ
(NP a/DT peasant/JJ girl/NN)
from/IN
(NP a/DT river/NN)
./.)
(Tree('NP', [('the', 'DT'), ('creature', 'NN')]), ('rescues', 'VBZ'), Tree('NP', [('a', 'DT'), ('peasant', 'JJ'), ('girl', 'NN')]))
===============
(S
(NP He/PRP)
finds/VBZ
(NP Frankenstein/NNP)
's/POS
(NP journal/NN)
in/IN
(NP the/DT pocket/NN)
of/IN
(NP the/DT jacket/NN)
(NP he/PRP)
found/VBD
in/IN
(NP the/DT laboratory/NN))
(Tree('NP', [('He', 'PRP')]), ('finds', 'VBZ'), Tree('NP', [('Frankenstein', 'NNP')]))
===============
(S
(NP The/DT monster/NN)
kills/VBZ
(NP Victor/NNP)
's/POS
younger/JJR
(NP brother/NN)
(NP William/NNP)
upon/IN
(NP learning/NN)
of/IN
(NP the/DT boy/NN)
's/POS
(NP relation/NN)
(NP to/TO)
his/PRP$
(NP hated/JJ creator/NN)
./.)
(Tree('NP', [('The', 'DT'), ('monster', 'NN')]), ('kills', 'VBZ'), Tree('NP', [('Victor', 'NNP')]))
===============
(S
(NP Frankenstein/NNP)
builds/VBZ
(NP a/DT female/JJ creature/NN)
./.)
(Tree('NP', [('Frankenstein', 'NNP')]), ('builds', 'VBZ'), Tree('NP', [('a', 'DT'), ('female', 'JJ'), ('creature', 'NN')]))
===============
(S
(NP the/DT monster/NN)
kills/VBZ
(NP Frankenstein/NNP)
's/POS
best/JJS
(NP friend/NN)
(NP Henry/NNP Clerva/NNP)
./.)
(Tree('NP', [('the', 'DT'), ('monster', 'NN')]), ('kills', 'VBZ'), Tree('NP', [('Frankenstein', 'NNP')]))
===============
(S (NP the/DT monster/NN) boards/VBD (NP the/DT ship/NN) ./.)
()
===============
(S
(NP The/DT monster/NN)
has/VBZ
(NP also/RB been/VBN analogized/VBN to/TO)
(NP an/DT oppressed/JJ class/NN))
(Tree('NP', [('The', 'DT'), ('monster', 'NN')]), ('has', 'VBZ'), Tree('NP', [('also', 'RB'), ('been', 'VBN'), ('analogized', 'VBN'), ('to', 'TO')]))
===============
(S
(NP the/DT monster/NN)
is/VBZ
(NP the/DT tragic/JJ result/NN)
of/IN
(NP uncontrolled/JJ technology/NN)
./.)
(Tree('NP', [('the', 'DT'), ('monster', 'NN')]), ('is', 'VBZ'), Tree('NP', [('the', 'DT'), ('tragic', 'JJ'), ('result', 'NN')]))
===============
###Markdown
Challenge 3Use the relations to generate new sentences. Maybe we can write summaries of a sentence by just generating words from our triples. Here are some examples I generated from triples.'Victor Frankenstein builds the creature.''Frankenstein found the laboratory.''The monster has an oppressed class.'Write a function that given one of your relations, will produce a sentence as a string.
###Code
def summarizer(triple):
string = ""
for entry in triple:
if isinstance(entry,nltk.tree.Tree):
for leaf in entry.leaves():
string += leaf[0] + " "
else:
string += entry[0] + " "
return string
for s in sentences:
rel = build_relation(s, my_chunker)
summary = summarizer(rel)
print(rel)
print(summary)
print('='*10)
###Output
(S
(NP Victor/NNP Frankenstein/NNP)
builds/VBZ
(NP the/DT creature/NN)
in/IN
his/PRP$
(NP laboratory/NN))
(Tree('NP', [('Victor', 'NNP'), ('Frankenstein', 'NNP')]), ('builds', 'VBZ'), Tree('NP', [('the', 'DT'), ('creature', 'NN')]))
Victor Frankenstein builds the creature
==========
(S (NP The/DT creature/NN) is/VBZ (NP 8/CD feet/NNS tall/JJ))
(Tree('NP', [('The', 'DT'), ('creature', 'NN')]), ('is', 'VBZ'), Tree('NP', [('8', 'CD'), ('feet', 'NNS'), ('tall', 'JJ')]))
The creature is 8 feet tall
==========
(S
(NP the/DT monster/NN)
wanders/NNS
through/IN
(NP the/DT wilderness/NN))
()
==========
(S
(NP He/PRP)
(NP finds/VBZ brief/JJ)
solace/JJ
beside/IN
(NP a/DT remote/JJ cottage/NN)
inhabited/VBN
by/IN
(NP a/DT family/NN)
of/IN
peasants/NNS)
()
==========
(S
(NP Eavesdropping/NN)
,/,
(NP the/DT creature/NN)
familiarizes/VBZ
(NP himself/PRP)
with/IN
their/PRP$
lives/NNS
and/CC
learns/NNS
(NP to/TO)
speak/VB)
(Tree('NP', [('the', 'DT'), ('creature', 'NN')]), ('familiarizes', 'VBZ'), Tree('NP', [('himself', 'PRP')]))
the creature familiarizes himself
==========
(S
(NP The/DT creature/NN)
(NP eventually/RB)
introduces/VBZ
(NP himself/PRP)
(NP to/TO)
(NP the/DT family/NN)
's/POS
(NP blind/NN)
(NP father/NN))
(Tree('NP', [('eventually', 'RB')]), ('introduces', 'VBZ'), Tree('NP', [('himself', 'PRP')]))
eventually introduces himself
==========
(S
(NP the/DT creature/NN)
rescues/VBZ
(NP a/DT peasant/JJ girl/NN)
from/IN
(NP a/DT river/NN)
./.)
(Tree('NP', [('the', 'DT'), ('creature', 'NN')]), ('rescues', 'VBZ'), Tree('NP', [('a', 'DT'), ('peasant', 'JJ'), ('girl', 'NN')]))
the creature rescues a peasant girl
==========
(S
(NP He/PRP)
finds/VBZ
(NP Frankenstein/NNP)
's/POS
(NP journal/NN)
in/IN
(NP the/DT pocket/NN)
of/IN
(NP the/DT jacket/NN)
(NP he/PRP)
found/VBD
in/IN
(NP the/DT laboratory/NN))
(Tree('NP', [('He', 'PRP')]), ('finds', 'VBZ'), Tree('NP', [('Frankenstein', 'NNP')]))
He finds Frankenstein
==========
(S
(NP The/DT monster/NN)
kills/VBZ
(NP Victor/NNP)
's/POS
younger/JJR
(NP brother/NN)
(NP William/NNP)
upon/IN
(NP learning/NN)
of/IN
(NP the/DT boy/NN)
's/POS
(NP relation/NN)
(NP to/TO)
his/PRP$
(NP hated/JJ creator/NN)
./.)
(Tree('NP', [('The', 'DT'), ('monster', 'NN')]), ('kills', 'VBZ'), Tree('NP', [('Victor', 'NNP')]))
The monster kills Victor
==========
(S
(NP Frankenstein/NNP)
builds/VBZ
(NP a/DT female/JJ creature/NN)
./.)
(Tree('NP', [('Frankenstein', 'NNP')]), ('builds', 'VBZ'), Tree('NP', [('a', 'DT'), ('female', 'JJ'), ('creature', 'NN')]))
Frankenstein builds a female creature
==========
(S
(NP the/DT monster/NN)
kills/VBZ
(NP Frankenstein/NNP)
's/POS
best/JJS
(NP friend/NN)
(NP Henry/NNP Clerva/NNP)
./.)
(Tree('NP', [('the', 'DT'), ('monster', 'NN')]), ('kills', 'VBZ'), Tree('NP', [('Frankenstein', 'NNP')]))
the monster kills Frankenstein
==========
(S (NP the/DT monster/NN) boards/VBD (NP the/DT ship/NN) ./.)
()
==========
(S
(NP The/DT monster/NN)
has/VBZ
(NP also/RB been/VBN analogized/VBN to/TO)
(NP an/DT oppressed/JJ class/NN))
(Tree('NP', [('The', 'DT'), ('monster', 'NN')]), ('has', 'VBZ'), Tree('NP', [('also', 'RB'), ('been', 'VBN'), ('analogized', 'VBN'), ('to', 'TO')]))
The monster has also been analogized to
==========
(S
(NP the/DT monster/NN)
is/VBZ
(NP the/DT tragic/JJ result/NN)
of/IN
(NP uncontrolled/JJ technology/NN)
./.)
(Tree('NP', [('the', 'DT'), ('monster', 'NN')]), ('is', 'VBZ'), Tree('NP', [('the', 'DT'), ('tragic', 'JJ'), ('result', 'NN')]))
the monster is the tragic result
==========
###Markdown
Module 8We are shifting focus this week. We will start treating sentences more like a linguist would. We will break a sentence into parts-of-speech (POS). And then revisit regular experssion matching, but now using the extra information that POS gives us.The ultimate goal will be to pull out what are called relations from a sentence. A relation is a triple of [noun-phrase, verb, noun phrase]. For instance, ['the big dog', 'ate', 'the dirty bone']. Pulling relations like this out of text is a big research area. Once you have relations, you can start using AI techniques to reason about them or do question answering, e.g., "Who ate the dirty bone?".To start, here are the POS tags that nltk gives us. Each word will be given one and only one of these tags.
###Code
1. CC Coordinating conjunction
2. CD Cardinal number
3. DT Determiner
4. EX Existential there
5. FW Foreign word
6. IN Preposition or subordinating conjunction
7. JJ Adjective
8. JJR Adjective, comparative
9. JJS Adjective, superlative
10. LS List item marker
11. MD Modal
12. NN Noun, singular or mass
13. NNS Noun, plural
14. NNP Proper noun, singular
15. NNPS Proper noun, plural
16. PDT Predeterminer
17. POS Possessive ending
18. PRP Personal pronoun
19. PRP$ Possessive pronoun
20. RB Adverb
21. RBR Adverb, comparative
22. RBS Adverb, superlative
23. RP Particle
24. SYM Symbol
25. TO to
26. UH Interjection
27. VB Verb, base form
28. VBD Verb, past tense
29. VBG Verb, gerund or present participle
30. VBN Verb, past participle
31. VBP Verb, non-3rd person singular present
32. VBZ Verb, 3rd person singular present
33. WDT Wh-determiner
34. WP Wh-pronoun
35. WP$ Possessive wh-pronoun
36. WRB Wh-adverb
###Output
_____no_output_____
###Markdown
Here are sentences we will be testing on. I pulled them from the Frankenstein wikipedia page. For each of them, I would like to pull out a relation that captures the meaning of the sentence.
###Code
sentences = [
'Victor Frankenstein builds the creature in his laboratory',
'The creature is 8 feet tall', # tricky
'the monster wanders through the wilderness', # tricky
'He finds brief solace beside a remote cottage inhabited by a family of peasants',
'Eavesdropping, the creature familiarizes himself with their lives and learns to speak', # tricky
"The creature eventually introduces himself to the family's blind father",
'the creature rescues a peasant girl from a river.',
"He finds Frankenstein's journal in the pocket of the jacket he found in the laboratory",
"The monster kills Victor's younger brother William upon learning of the boy's relation to his hated creator.",
"Frankenstein builds a female creature.",
"the monster kills Frankenstein's best friend Henry Clerva.",
"the monster boards the ship.",
"The monster has also been analogized to an oppressed class",
"the monster is the tragic result of uncontrolled technology."
]
import nltk
from nltk.tree import Tree
###Output
_____no_output_____
###Markdown
Let's work on the first sentence. First I will tokenize it like we have been doing in past weeks. But then I will use something called a POS tagger to add basic parts of speech to each word.
###Code
s = sentences[0]
print(s)
print('='*10)
data_tok = nltk.word_tokenize(s) #tokenization
print(data_tok)
print('='*10)
data_pos = nltk.pos_tag(data_tok) #POS tagging
print(data_pos)
###Output
Victor Frankenstein builds the creature in his laboratory
==========
['Victor', 'Frankenstein', 'builds', 'the', 'creature', 'in', 'his', 'laboratory']
==========
[('Victor', 'NNP'), ('Frankenstein', 'NNP'), ('builds', 'VBZ'), ('the', 'DT'), ('creature', 'NN'), ('in', 'IN'), ('his', 'PRP$'), ('laboratory', 'NN')]
###Markdown
You can match up the pos tags in table above. Big new idea: chunkingWhat we have after pos_tag is a flat structure. All we really have done is gone word by word and added the pos to the word. So we have a list of tuples instead of a list of words.What we want now is to structure this a bit more. I'd like to group words into noun-phrases or other collections of words that make sense for my problem. First I'll use a chunker that is built into nltk. It looks at various forms of nouns and tags them with more information. Let's take a look.
###Code
chunk = nltk.ne_chunk(data_pos) # notice takes pos tagged version and not raw text
chunk
###Output
_____no_output_____
###Markdown
You can see we have new nodes in the tree: PERSON and ORGANIZATION. So the ne_chunk chunker knows about lots of different names of entities. And it can give you a more abstract view of your words. In some cases, this may be all you want. You are just looking through lots of tweets for people or organizations being mentioned: ne_chunk can flag them for you.Also notice that ne_chunk does not combine names well, guessing that Frankenstein is an organization. There is an interesting discussion here on how to post-process ne_chunk results to get more accurate tags: https://stackoverflow.com/questions/24398536/named-entity-recognition-with-regular-expression-nltk. For one, to group Victor and Frankenstein under the single node PERSON.Jargon alert: the ne_chunk chunker works in a research area called Named Entity Recognition or NER. It can be a trickier problem than you might think to classify words and phrases in a sentence into useful categories. Let's look at the components of chunk. You can see that some items are Trees and some are leaves.
###Code
for x in chunk:
print((type(x) == Tree, x))
###Output
(True, Tree('PERSON', [('Victor', 'NNP')]))
(True, Tree('ORGANIZATION', [('Frankenstein', 'NNP')]))
(False, ('builds', 'VBZ'))
(False, ('the', 'DT'))
(False, ('creature', 'NN'))
(False, ('in', 'IN'))
(False, ('his', 'PRP$'))
(False, ('laboratory', 'NN'))
###Markdown
If you view chunk as the big tree, then there are 2 sub-trees and 6 leaves under it. DIY: chunkingIt turns out we will need to build our own chunker to chunk the pieces of a relation. Before getting to that, let's take a look at building a verb-phrase chunker. It is pretty cool. All we need to do is write a regular-expression type pattern to chunk on. Look at the example below. A chunk pattern consists of a name for the chunk (MY_VP) and a pattern within {}. The pattern itself is a mixture of pos tags and vanilla re operators. My pattern below should match a Verb that is 3rd person singular present (VBZ). Then 0 or more of any type of elements. Then a singular noun (NN).
###Code
vb_pattern = "MY_VP: {<VBZ><.*>*?<NN>}"
###Output
_____no_output_____
###Markdown
We need a special chunker that allows us to define our own chunking patterns.
###Code
vb_chunker = nltk.RegexpParser(vb_pattern)
###Output
_____no_output_____
###Markdown
Test it out.
###Code
vb_chunked = vb_chunker.parse(data_pos) # remember chunkers want pos tagged sentences as input
vb_chunked
for x in vb_chunked:
print(x)
###Output
('Victor', 'NNP')
('Frankenstein', 'NNP')
(MY_VP builds/VBZ the/DT creature/NN)
('in', 'IN')
('his', 'PRP$')
('laboratory', 'NN')
###Markdown
Noun-phrase (NP) chunker (version 0)The goal is to build a relation which is a triple consisting of a tuple (NP, VERB, NP) for each sentence in sentences. So our first step is to build a chunking pattern that will tag noun-phrases in a sentence. I am going to give you a start below.Notice I am using VERB instead of MY_VP. I found it more difficult to work with verb phrases so I will not use them. But if you want to play around with them, no problem. I could easily see catching some set of adverbs preceding a verb and tagging that as an ADV_VERB or somesuch.
###Code
np_chunker = nltk.RegexpParser(r'''
NP:
{<DT>?<JJ>*<NN>} # NP chunk is determiner (optional), adjectives (optional) and noun
''')
chunk1 = np_chunker.parse(data_pos)
chunk1
###Output
_____no_output_____
###Markdown
HmmmmWe got "the creature" and have the verb but missed "Victor Frankenstein". Have to get back to that.I am going to want to do some experimentation with patterns so I am going to create function that will make my life easier. I'll pass in raw text and then chunker I want to test out.Notice that I have a chunking pipeline going. I like this about nltk. I can first chunk to tag noun phrases. Then I can pass that to next chunker in line to tag relations. Cool.
###Code
def build_relation(text, chunker):
#chunk the text with chunker
chunks = chunker.parse(nltk.pos_tag(nltk.word_tokenize(text)))
print(chunks) # debugging
#Now re-chunk looking for our triples. Call the chunk REL for relation
chunker2 = nltk.RegexpParser(r'''
REL:
{<NP><VBZ><NP>}
''')
relation_chunk = chunker2.parse(chunks)
for t in relation_chunk:
if type(t) != Tree: continue
if t.label() == 'REL':
return (t[0], t[1], t[2])
return tuple([])
#Here it is again so I can play with it
rel_chunker = nltk.RegexpParser(r'''
NP:
{<DT>?<JJ>*<NN>} # chunk determiner (optional), adjectives (optional) and noun
''')
build_relation(sentences[0], rel_chunker)
###Output
(S
Victor/NNP
Frankenstein/NNP
builds/VBZ
(NP the/DT creature/NN)
in/IN
his/PRP$
(NP laboratory/NN))
###Markdown
Challenge 1Solve the problem. Modify rel_chunker so that it builds the target relation for sentence 0. And remember that Victor liked to be called by his full name: Victor Dennis Frankenstein. I just made that up but you get the drift.
###Code
rel_chunker2 = nltk.RegexpParser(r'''
NP:
{<DT>?<JJ>*<NN>} # chunk determiner (optional), adjectives (optional) and noun
{<NNP>*<NNP>}
''')
build_relation(sentences[0], rel_chunker2)
###Output
(S
(NP Victor/NNP Frankenstein/NNP)
builds/VBZ
(NP the/DT creature/NN)
in/IN
his/PRP$
(NP laboratory/NN))
###Markdown
Let's try on all the sentencesSee how many we can pull relations from
###Code
all_relations = []
for i,s in enumerate(sentences):
relation = build_relation(s, rel_chunker2)
all_relations.append((i, relation))
print(relation)
print('===============')
###Output
(S
(NP Victor/NNP Frankenstein/NNP)
builds/VBZ
(NP the/DT creature/NN)
in/IN
his/PRP$
(NP laboratory/NN))
(Tree('NP', [('Victor', 'NNP'), ('Frankenstein', 'NNP')]), ('builds', 'VBZ'), Tree('NP', [('the', 'DT'), ('creature', 'NN')]))
===============
(S (NP The/DT creature/NN) is/VBZ 8/CD feet/NNS tall/JJ)
()
===============
(S
(NP the/DT monster/NN)
wanders/NNS
through/IN
(NP the/DT wilderness/NN))
()
===============
(S
He/PRP
finds/VBZ
brief/JJ
solace/JJ
beside/IN
(NP a/DT remote/JJ cottage/NN)
inhabited/VBN
by/IN
(NP a/DT family/NN)
of/IN
peasants/NNS)
()
===============
(S
(NP Eavesdropping/NN)
,/,
(NP the/DT creature/NN)
familiarizes/VBZ
himself/PRP
with/IN
their/PRP$
lives/NNS
and/CC
learns/NNS
to/TO
speak/VB)
()
===============
(S
(NP The/DT creature/NN)
eventually/RB
introduces/VBZ
himself/PRP
to/TO
(NP the/DT family/NN)
's/POS
(NP blind/NN)
(NP father/NN))
()
===============
(S
(NP the/DT creature/NN)
rescues/VBZ
(NP a/DT peasant/JJ girl/NN)
from/IN
(NP a/DT river/NN)
./.)
(Tree('NP', [('the', 'DT'), ('creature', 'NN')]), ('rescues', 'VBZ'), Tree('NP', [('a', 'DT'), ('peasant', 'JJ'), ('girl', 'NN')]))
===============
(S
He/PRP
finds/VBZ
(NP Frankenstein/NNP)
's/POS
(NP journal/NN)
in/IN
(NP the/DT pocket/NN)
of/IN
(NP the/DT jacket/NN)
he/PRP
found/VBD
in/IN
(NP the/DT laboratory/NN))
()
===============
(S
(NP The/DT monster/NN)
kills/VBZ
(NP Victor/NNP)
's/POS
younger/JJR
(NP brother/NN)
(NP William/NNP)
upon/IN
(NP learning/NN)
of/IN
(NP the/DT boy/NN)
's/POS
(NP relation/NN)
to/TO
his/PRP$
(NP hated/JJ creator/NN)
./.)
(Tree('NP', [('The', 'DT'), ('monster', 'NN')]), ('kills', 'VBZ'), Tree('NP', [('Victor', 'NNP')]))
===============
(S
(NP Frankenstein/NNP)
builds/VBZ
(NP a/DT female/JJ creature/NN)
./.)
(Tree('NP', [('Frankenstein', 'NNP')]), ('builds', 'VBZ'), Tree('NP', [('a', 'DT'), ('female', 'JJ'), ('creature', 'NN')]))
===============
(S
(NP the/DT monster/NN)
kills/VBZ
(NP Frankenstein/NNP)
's/POS
best/JJS
(NP friend/NN)
(NP Henry/NNP Clerva/NNP)
./.)
(Tree('NP', [('the', 'DT'), ('monster', 'NN')]), ('kills', 'VBZ'), Tree('NP', [('Frankenstein', 'NNP')]))
===============
(S (NP the/DT monster/NN) boards/VBD (NP the/DT ship/NN) ./.)
()
===============
(S
(NP The/DT monster/NN)
has/VBZ
also/RB
been/VBN
analogized/VBN
to/TO
(NP an/DT oppressed/JJ class/NN))
()
===============
(S
(NP the/DT monster/NN)
is/VBZ
(NP the/DT tragic/JJ result/NN)
of/IN
(NP uncontrolled/JJ technology/NN)
./.)
(Tree('NP', [('the', 'DT'), ('monster', 'NN')]), ('is', 'VBZ'), Tree('NP', [('the', 'DT'), ('tragic', 'JJ'), ('result', 'NN')]))
===============
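###Markdown
To make the harvest easier to read, here is a small helper (a sketch of my own, assuming the all_relations list and the Tree class used above are still in scope) that flattens each extracted triple into a plain "subject verb object" string and skips the sentences where nothing matched.
###Code
def relation_to_text(relation):
    # NP chunks are Trees whose leaves are (word, tag) pairs; the verb slot is a plain (word, tag) tuple.
    parts = []
    for item in relation:
        if isinstance(item, Tree):
            parts.append(' '.join(word for word, tag in item.leaves()))
        else:
            parts.append(item[0])
    return ' '.join(parts)
for i, relation in all_relations:
    if relation:
        print(i, '->', relation_to_text(relation))
###Output
_____no_output_____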
|
.ipynb_checkpoints/grapher_python_plot-checkpoint.ipynb | ###Markdown
NABS Density plot System. Density relaxation for a time period of 1 ns. System: Al0, Al1, Al2.5, Al5, Al7.5, Al10, Al12.5, Al15, Al17.5, Al20.
###Code
!echo "hello python"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
font = {'family' : 'CMU Serif',
'weight' : 'bold',
'size' : 20}
matplotlib.rc('font', **font)
plt.rc('text', usetex=True)
plt.rc('font', family='CMU Serif',)
matplotlib.rcParams['text.latex.preamble'] = r'\boldmath'  # newer matplotlib expects a string here rather than a list
cooling_rate = [10,1,0.1,100]
a=['al0','al1','al2.5','al5','al7.5','al10','al12.5','al15','al17.5','al20']
seq = [4,1,2,3]
for j in range(1):
fig,(ax) = plt.subplots(1, 1, figsize=(10, 8), tight_layout=True)
ax.clear()
data = []
names = []
df = []
for i in range(4):
foldername = './cool'+str(j+1)+'_'+str(seq[i])+'/'
filename = foldername + 'log.lammps'
data.append(np.genfromtxt(filename, skip_header=120,max_rows=1001))
names.append(np.genfromtxt(filename, skip_header=119,max_rows=1,dtype=str))
df.append(pd.DataFrame(data[i],columns=names[i]))
df[i]['Density_moving_avg'] = df[i]['Density'].rolling(window=10).mean()
df[i].to_csv(foldername+'data.csv')
for i in range(4):
ax.plot(df[i]['Step']/1000, df[i]['Density_moving_avg'], label=r'\textbf{{{}}}'.format(str(cooling_rate[seq[i]-1])+'K/ps'))
ax.set_title(r'\textbf{{{}}}'.format(a[j].title()))
ax.set_xlabel(r'\textbf{Time (ps)}')
ax.set_ylabel(r'\textbf{Density (moving avg)} $(g/cm^3)$')
ax.set_ylim([2.35,2.65])
ax.legend(loc='upper right') #,bbox_to_anchor=(1,1),)
plt.legend(frameon=False)
ax.tick_params(direction='in', length=9, width=2, colors='k',
grid_color='k', grid_alpha=0.5, )
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(2)
fig.savefig(a[j]+'.png',format=None,dpi=300,bbox_inches="tight")
fig.show()
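# --- Added aside (not part of the original analysis): assuming the df list,
# cooling_rate and seq built in the loop above are still in scope, estimate the
# relaxed density for each cooling rate by averaging the last 100 points of the
# moving-average column.
for i in range(4):
    relaxed = df[i]['Density_moving_avg'].tail(100).mean()
    print(cooling_rate[seq[i] - 1], 'K/ps :', round(relaxed, 3), 'g/cm^3')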
#import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
data = np.array(['a','b','c','d'])
s = pd.Series(data)
print(s)
s = pd.Series(data,index=[100,101,102,103])
print (s)
#import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
data = {'a' : 0., 'b' : 1., 'c' : 2.}
s = pd.Series(data)
print (s)
#import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
data = {'a' : 0., 'b' : 1., 'c' : 2.}
s = pd.Series(data,index=['b','c','d','a','e','f'])
print (s)
#import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
s = pd.Series(5, index=[0, 1, 2, 3])
print (s)
# /Users/rajesh/work/simulations/OCT18/NABS/shell_script
###Output
_____no_output_____ |
site/en-snapshot/lattice/tutorials/premade_models.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TF Lattice Premade Models View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewPremade Models are quick and easy ways to build TFL `tf.keras.model` instances for typical use cases. This guide outlines the steps needed to construct a TFL Premade Model and train/test it. SetupInstalling TF Lattice package:
###Code
#@test {"skip": true}
!pip install tensorflow-lattice pydot
###Output
_____no_output_____
###Markdown
Importing required packages:
###Code
import tensorflow as tf
import copy
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
###Output
_____no_output_____
###Markdown
Setting the default values used for training in this guide:
###Code
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
###Output
_____no_output_____
###Markdown
Downloading the UCI Statlog (Heart) dataset:
###Code
heart_csv_file = tf.keras.utils.get_file(
'heart.csv',
'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
heart_df = pd.read_csv(heart_csv_file)
thal_vocab_list = ['normal', 'fixed', 'reversible']
heart_df['thal'] = heart_df['thal'].map(
{v: i for i, v in enumerate(thal_vocab_list)})
heart_df = heart_df.astype(float)
heart_train_size = int(len(heart_df) * 0.8)
heart_train_dict = dict(heart_df[:heart_train_size])
heart_test_dict = dict(heart_df[heart_train_size:])
# This ordering of input features should match the feature configs. If no
# feature config relies explicitly on the data (i.e. all are 'quantiles'),
# then you can construct the feature_names list by simply iterating over each
# feature config and extracting it's name.
feature_names = [
'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',
'exang', 'oldpeak', 'slope', 'ca', 'thal'
]
# Since we have some features that manually construct their input keypoints,
# we need an index mapping of the feature names.
feature_name_indices = {name: index for index, name in enumerate(feature_names)}
label_name = 'target'
heart_train_xs = [
heart_train_dict[feature_name] for feature_name in feature_names
]
heart_test_xs = [heart_test_dict[feature_name] for feature_name in feature_names]
heart_train_ys = heart_train_dict[label_name]
heart_test_ys = heart_test_dict[label_name]
###Output
_____no_output_____
###Markdown
Feature ConfigsFeature calibration and per-feature configurations are set using [tfl.configs.FeatureConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/FeatureConfig). Feature configurations include monotonicity constraints, per-feature regularization (see [tfl.configs.RegularizerConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/RegularizerConfig)), and lattice sizes for lattice models.Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists. Defining Our Feature ConfigsNow that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
###Code
# Features:
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal normal; fixed defect; reversable defect
#
# Feature configs are used to specify how each feature is calibrated and used.
heart_feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
monotonicity='increasing',
# We must set the keypoints manually.
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints='quantiles',
pwl_calibration_clip_max=100,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='sex',
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='cp',
monotonicity='increasing',
# Keypoints that are uniformly spaced.
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=np.linspace(
np.min(heart_train_xs[feature_name_indices['cp']]),
np.max(heart_train_xs[feature_name_indices['cp']]),
num=4),
),
tfl.configs.FeatureConfig(
name='chol',
monotonicity='increasing',
# Explicit input keypoints initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='trestbps',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints='quantiles',
),
tfl.configs.FeatureConfig(
name='thalach',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints='quantiles',
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='oldpeak',
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints='quantiles',
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='ca',
monotonicity='increasing',
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints='quantiles',
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
num_buckets=3,
# We must specify the vocabulary list in order to later set the
# monotonicities since we used names and not indices.
vocabulary_list=thal_vocab_list,
),
]
###Output
_____no_output_____
###Markdown
Set Monotonicities and KeypointsNext we need to make sure to properly set the monotonicities for features where we used a custom vocabulary (such as 'thal' above).
###Code
tfl.premade_lib.set_categorical_monotonicities(heart_feature_configs)
###Output
_____no_output_____
###Markdown
Finally we can complete our feature configs by calculating and setting the keypoints.
###Code
feature_keypoints = tfl.premade_lib.compute_feature_keypoints(
feature_configs=heart_feature_configs, features=heart_train_dict)
tfl.premade_lib.set_feature_keypoints(
feature_configs=heart_feature_configs,
feature_keypoints=feature_keypoints,
add_missing_feature_configs=False)
###Output
_____no_output_____
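###Markdown
A quick aside (my own addition, not part of the original guide): after the call above, each numeric feature config's `pwl_calibration_input_keypoints` should hold concrete keypoint values rather than the `'quantiles'` placeholder, which you can verify by printing one of them.
###Code
print(heart_feature_configs[feature_name_indices['age']].pwl_calibration_input_keypoints)
###Output
_____no_output_____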
###Markdown
Calibrated Linear ModelTo construct a TFL premade model, first construct a model configuration from [tfl.configs](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs). A calibrated linear model is constructed using the [tfl.configs.CalibratedLinearConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLinearConfig). It applies piecewise-linear and categorical calibration on the input features, followed by a linear combination and an optional output piecewise-linear calibration. When using output calibration or when output bounds are specified, the linear layer will apply weighted averaging on calibrated inputs.This example creates a calibrated linear model on the first 5 features.
###Code
# Model config defines the model structure for the premade model.
linear_model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=heart_feature_configs[:5],
use_bias=True,
output_calibration=True,
output_calibration_num_keypoints=10,
# We initialize the output to [-2.0, 2.0] since we'll be using logits.
output_initialization=np.linspace(-2.0, 2.0, num=10),
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CalibratedLinear premade model constructed from the given model config.
linear_model = tfl.premade.CalibratedLinear(linear_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
Now, as with any other [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), we compile and fit the model to our data.
###Code
linear_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.AUC(from_logits=True)],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
linear_model.fit(
heart_train_xs[:5],
heart_train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
###Output
_____no_output_____
###Markdown
After training our model, we can evaluate it on our test set.
###Code
print('Test Set Evaluation...')
print(linear_model.evaluate(heart_test_xs[:5], heart_test_ys))
###Output
_____no_output_____
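###Markdown
Beyond the aggregate metrics, it can help to look at a few individual predictions (a small sketch of my own, not from the original guide). Since the model was compiled with `from_logits=True`, `predict` returns logits, so a sigmoid is needed to turn them into probabilities; the slicing below assumes the `heart_test_xs` lists from earlier are still in scope.
###Code
sample_inputs = [np.asarray(x)[:3] for x in heart_test_xs[:5]]
sample_logits = linear_model.predict(sample_inputs)
print(tf.math.sigmoid(sample_logits).numpy().flatten())
###Output
_____no_output_____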
###Markdown
Calibrated Lattice ModelA calibrated lattice model is constructed using [tfl.configs.CalibratedLatticeConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeConfig). A calibrated lattice model applies piecewise-linear and categorical calibration on the input features, followed by a lattice model and an optional output piecewise-linear calibration.This example creates a calibrated lattice model on the first 5 features.
###Code
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
lattice_model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=heart_feature_configs[:5],
# We initialize the output to [-2.0, 2.0] since we'll be using logits.
output_initialization=[-2.0, 2.0],
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),
])
# A CalibratedLattice premade model constructed from the given model config.
lattice_model = tfl.premade.CalibratedLattice(lattice_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
lattice_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.AUC(from_logits=True)],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
lattice_model.fit(
heart_train_xs[:5],
heart_train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
print('Test Set Evaluation...')
print(lattice_model.evaluate(heart_test_xs[:5], heart_test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice Ensemble ModelWhen the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their output instead of creating just a single huge lattice. Ensemble lattice models are constructed using [tfl.configs.CalibratedLatticeEnsembleConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeEnsembleConfig). A calibrated lattice ensemble model applies piecewise-linear and categorical calibration on the input feature, followed by an ensemble of lattice models and an optional output piecewise-linear calibration. Explicit Lattice Ensemble InitializationIf you already know which subsets of features you want to feed into your lattices, then you can explicitly set the lattices using feature names. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=heart_feature_configs,
lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],
['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],
['restecg', 'age', 'sex']],
num_lattices=5,
lattice_rank=3,
# We initialize the output to [-2.0, 2.0] since we'll be using logits.
output_initialization=[-2.0, 2.0])
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
explicit_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
explicit_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
explicit_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.AUC(from_logits=True)],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
explicit_ensemble_model.fit(
heart_train_xs,
heart_train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
print('Test Set Evaluation...')
print(explicit_ensemble_model.evaluate(heart_test_xs, heart_test_ys))
###Output
_____no_output_____
###Markdown
Random Lattice EnsembleIf you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=heart_feature_configs,
lattices='random',
num_lattices=5,
lattice_rank=3,
# We initialize the output to [-2.0, 2.0] since we'll be using logits.
output_initialization=[-2.0, 2.0],
random_seed=42)
# Now we must set the random lattice structure and construct the model.
tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
random_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
random_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
random_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.AUC(from_logits=True)],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
random_ensemble_model.fit(
heart_train_xs,
heart_train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
print('Test Set Evaluation...')
print(random_ensemble_model.evaluate(heart_test_xs, heart_test_ys))
###Output
_____no_output_____
###Markdown
RTL Layer Random Lattice EnsembleWhen using a random lattice ensemble, you can specify that the model use a single `tfl.layers.RTL` layer. We note that `tfl.layers.RTL` only supports monotonicity constraints and must have the same lattice size for all features and no per-feature regularization. Note that using a `tfl.layers.RTL` layer lets you scale to much larger ensembles than using separate `tfl.layers.Lattice` instances.This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# Make sure our feature configs have the same lattice size, no per-feature
# regularization, and only monotonicity constraints.
rtl_layer_feature_configs = copy.deepcopy(heart_feature_configs)
for feature_config in rtl_layer_feature_configs:
feature_config.lattice_size = 2
feature_config.unimodality = 'none'
feature_config.reflects_trust_in = None
feature_config.dominates = None
feature_config.regularizer_configs = None
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
rtl_layer_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=rtl_layer_feature_configs,
lattices='rtl_layer',
num_lattices=5,
lattice_rank=3,
# We initialize the output to [-2.0, 2.0] since we'll be using logits.
output_initialization=[-2.0, 2.0],
random_seed=42)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config. Note that we do not have to specify the lattices by calling
# a helper function (like before with random) because the RTL Layer will take
# care of that for us.
rtl_layer_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
rtl_layer_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
rtl_layer_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
rtl_layer_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.AUC(from_logits=True)],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
rtl_layer_ensemble_model.fit(
heart_train_xs,
heart_train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
print('Test Set Evaluation...')
print(rtl_layer_ensemble_model.evaluate(heart_test_xs, heart_test_ys))
###Output
_____no_output_____
###Markdown
Crystals Lattice EnsemblePremade also provides a heuristic feature arrangement algorithm, called [Crystals](https://papers.nips.cc/paper/6377-fast-and-flexible-monotonic-functions-with-ensembles-of-lattices). To use the Crystals algorithm, first we train a prefitting model that estimates pairwise feature interactions. We then arrange the final ensemble such that features with more non-linear interactions are in the same lattices. The Premade library offers helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=heart_feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3,
# We initialize the output to [-2.0, 2.0] since we'll be using logits.
output_initialization=[-2.0, 2.0],
random_seed=42)
# Now that we have our model config, we can construct a prefitting model config.
prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(
crystals_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# prefitting model config.
prefitting_model = tfl.premade.CalibratedLatticeEnsemble(
prefitting_model_config)
# We can compile and train our prefitting model as we like.
prefitting_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
prefitting_model.fit(
heart_train_xs,
heart_train_ys,
epochs=PREFITTING_NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
# Now that we have our trained prefitting model, we can extract the crystals.
tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,
prefitting_model_config,
prefitting_model)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
crystals_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
crystals_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
crystals_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.AUC(from_logits=True)],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
crystals_ensemble_model.fit(
heart_train_xs,
heart_train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
print('Test Set Evaluation...')
print(crystals_ensemble_model.evaluate(heart_test_xs, heart_test_ys))
###Output
_____no_output_____
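###Markdown
One last aside (my own addition): `tfl.premade_lib.set_crystals_lattice_ensemble` writes the chosen feature groupings back into the model config, so you can inspect which features the Crystals algorithm placed in the same lattices.
###Code
print(crystals_ensemble_model_config.lattices)
###Output
_____no_output_____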
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TF Lattice Premade Models View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewPremade Models are quick and easy ways to build TFL `tf.keras.model` instances for typical use cases. This guide outlines the steps needed to construct a TFL Premade Model and train/test it. SetupInstalling TF Lattice package:
###Code
#@test {"skip": true}
!pip install tensorflow-lattice pydot
###Output
_____no_output_____
###Markdown
Importing required packages:
###Code
import tensorflow as tf
import copy
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
###Output
_____no_output_____
###Markdown
Downloading the UCI Statlog (Heart) dataset:
###Code
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/applied-dl/heart.csv')
df = pd.read_csv(csv_file)
train_size = int(len(df) * 0.8)
train_dataframe = df[:train_size]
test_dataframe = df[train_size:]
df.head()
###Output
_____no_output_____
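###Markdown
Before converting anything, it can help to peek at the raw 'thal' column (a quick aside of my own): most rows hold category strings, while a couple of test-set rows are already numeric, which is why the conversion helper further below guards for that.
###Code
print(df['thal'].value_counts())
###Output
_____no_output_____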
###Markdown
Extract and convert features and labels to tensors:
###Code
# Features:
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',
'exang', 'oldpeak', 'slope', 'ca', 'thal'
]
feature_name_indices = {name: index for index, name in enumerate(feature_names)}
# This is the vocab list and mapping we will use for the 'thal' categorical
# feature.
thal_vocab_list = ['normal', 'fixed', 'reversible']
thal_map = {category: i for i, category in enumerate(thal_vocab_list)}
# Custom function for converting thal categories to buckets
def convert_thal_features(thal_features):
# Note that two examples in the test set are already converted.
return np.array([
thal_map[feature] if feature in thal_vocab_list else feature
for feature in thal_features
])
# Custom function for extracting each feature.
def extract_features(dataframe,
label_name='target',
feature_names=feature_names):
features = []
for feature_name in feature_names:
if feature_name == 'thal':
features.append(
convert_thal_features(dataframe[feature_name].values).astype(float))
else:
features.append(dataframe[feature_name].values.astype(float))
labels = dataframe[label_name].values.astype(float)
return features, labels
train_xs, train_ys = extract_features(train_dataframe)
test_xs, test_ys = extract_features(test_dataframe)
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
# Our lattice models may have predictions above 1.0 due to numerical errors.
# We can subtract this small epsilon value from our output_max to make sure we
# do not predict values outside of our label bound.
numerical_error_epsilon = 1e-5
###Output
_____no_output_____
###Markdown
Setting the default values used for training in this guide:
###Code
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
###Output
_____no_output_____
###Markdown
Feature ConfigsFeature calibration and per-feature configurations are set using [tfl.configs.FeatureConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/FeatureConfig). Feature configurations include monotonicity constraints, per-feature regularization (see [tfl.configs.RegularizerConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/RegularizerConfig)), and lattice sizes for lattice models.Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists. Compute QuantilesAlthough the default setting for `pwl_calibration_input_keypoints` in `tfl.configs.FeatureConfig` is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
###Code
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
###Output
_____no_output_____
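###Markdown
As a quick check (my own aside, assuming the `train_xs` and `feature_name_indices` objects defined earlier are in scope), you can call the helper directly and inspect the keypoints it produces for a feature:
###Code
print(compute_quantiles(train_xs[feature_name_indices['age']],
                        num_keypoints=5, clip_max=100))
###Output
_____no_output_____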
###Markdown
Defining Our Feature ConfigsNow that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
###Code
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
monotonicity='increasing',
# We must set the keypoints manually.
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['age']],
num_keypoints=5,
clip_max=100),
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='sex',
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='cp',
monotonicity='increasing',
# Keypoints that are uniformly spaced.
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=np.linspace(
np.min(train_xs[feature_name_indices['cp']]),
np.max(train_xs[feature_name_indices['cp']]),
num=4),
),
tfl.configs.FeatureConfig(
name='chol',
monotonicity='increasing',
# Explicit input keypoints initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='trestbps',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['trestbps']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='thalach',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['thalach']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='oldpeak',
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['oldpeak']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='ca',
monotonicity='increasing',
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['ca']], num_keypoints=4),
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
num_buckets=3,
# We must specify the vocabulary list in order to later set the
# monotonicities since we used names and not indices.
vocabulary_list=thal_vocab_list,
),
]
###Output
_____no_output_____
###Markdown
Next we need to make sure to properly set the monotonicities for features where we used a custom vocabulary (such as 'thal' above).
###Code
tfl.premade_lib.set_categorical_monotonicities(feature_configs)
###Output
_____no_output_____
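###Markdown
As a quick sanity check (my own aside): this helper is what turns the name-based monotonicity pairs on categorical features like 'thal' into the index pairs the layers expect, and you can inspect the result directly on the config.
###Code
thal_config = feature_configs[feature_name_indices['thal']]
print(thal_config.name, thal_config.monotonicity)
###Output
_____no_output_____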
###Markdown
Calibrated Linear ModelTo construct a TFL premade model, first construct a model configuration from [tfl.configs](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs). A calibrated linear model is constructed using the [tfl.configs.CalibratedLinearConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLinearConfig). It applies piecewise-linear and categorical calibration on the input features, followed by a linear combination and an optional output piecewise-linear calibration. When using output calibration or when output bounds are specified, the linear layer will apply weighted averaging on calibrated inputs.This example creates a calibrated linear model on the first 5 features.
###Code
# Model config defines the model structure for the premade model.
linear_model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs[:5],
use_bias=True,
# We must set the output min and max to that of the label.
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=10,
output_initialization=np.linspace(min_label, max_label, num=10),
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CalibratedLinear premade model constructed from the given model config.
linear_model = tfl.premade.CalibratedLinear(linear_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
Now, as with any other [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), we compile and fit the model to our data.
###Code
linear_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
linear_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
###Output
_____no_output_____
###Markdown
After training our model, we can evaluate it on our test set.
###Code
print('Test Set Evaluation...')
print(linear_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice ModelA calibrated lattice model is constructed using [tfl.configs.CalibratedLatticeConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeConfig). A calibrated lattice model applies piecewise-linear and categorical calibration on the input features, followed by a lattice model and an optional output piecewise-linear calibration.This example creates a calibrated lattice model on the first 5 features.
###Code
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
lattice_model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=feature_configs[:5],
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),
])
# A CalibratedLattice premade model constructed from the given model config.
lattice_model = tfl.premade.CalibratedLattice(lattice_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
lattice_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
lattice_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(lattice_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice Ensemble ModelWhen the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their output instead of creating just a single huge lattice. Ensemble lattice models are constructed using [tfl.configs.CalibratedLatticeEnsembleConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeEnsembleConfig). A calibrated lattice ensemble model applies piecewise-linear and categorical calibration on the input feature, followed by an ensemble of lattice models and an optional output piecewise-linear calibration. Explicit Lattice Ensemble InitializationIf you already know which subsets of features you want to feed into your lattices, then you can explicitly set the lattices using feature names. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],
['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],
['restecg', 'age', 'sex']],
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label])
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
explicit_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
explicit_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
explicit_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
explicit_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(explicit_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Random Lattice EnsembleIf you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='random',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now we must set the random lattice structure and construct the model.
tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
random_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
random_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
random_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
random_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(random_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
RTL Layer Random Lattice EnsembleWhen using a random lattice ensemble, you can specify that the model use a single `tfl.layers.RTL` layer. We note that `tfl.layers.RTL` only supports monotonicity constraints and must have the same lattice size for all features and no per-feature regularization. Note that using a `tfl.layers.RTL` layer lets you scale to much larger ensembles than using separate `tfl.layers.Lattice` instances.This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# Make sure our feature configs have the same lattice size, no per-feature
# regularization, and only monotonicity constraints.
rtl_layer_feature_configs = copy.deepcopy(feature_configs)
for feature_config in rtl_layer_feature_configs:
feature_config.lattice_size = 2
feature_config.unimodality = 'none'
feature_config.reflects_trust_in = None
feature_config.dominates = None
feature_config.regularizer_configs = None
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
rtl_layer_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=rtl_layer_feature_configs,
lattices='rtl_layer',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config. Note that we do not have to specify the lattices by calling
# a helper function (like before with random) because the RTL Layer will take
# care of that for us.
rtl_layer_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
rtl_layer_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
rtl_layer_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
rtl_layer_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
rtl_layer_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(rtl_layer_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Crystals Lattice EnsemblePremade also provides a heuristic feature arrangement algorithm, called [Crystals](https://papers.nips.cc/paper/6377-fast-and-flexible-monotonic-functions-with-ensembles-of-lattices). To use the Crystals algorithm, first we train a prefitting model that estimates pairwise feature interactions. We then arrange the final ensemble such that features with more non-linear interactions are in the same lattices. The Premade library offers helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now that we have our model config, we can construct a prefitting model config.
prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(
crystals_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# prefitting model config.
prefitting_model = tfl.premade.CalibratedLatticeEnsemble(
prefitting_model_config)
# We can compile and train our prefitting model as we like.
prefitting_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
prefitting_model.fit(
train_xs,
train_ys,
epochs=PREFITTING_NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
# Now that we have our trained prefitting model, we can extract the crystals.
tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,
prefitting_model_config,
prefitting_model)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
crystals_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
crystals_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
crystals_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
crystals_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(crystals_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TF Lattice Premade Models View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewPremade Models are quick and easy ways to build TFL `tf.keras.model` instances for typical use cases. This guide outlines the steps needed to construct a TFL Premade Model and train/test it. SetupInstalling TF Lattice package:
###Code
#@test {"skip": true}
!pip install tensorflow-lattice pydot
###Output
_____no_output_____
###Markdown
Importing required packages:
###Code
import tensorflow as tf
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
###Output
_____no_output_____
###Markdown
Downloading the UCI Statlog (Heart) dataset:
###Code
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/applied-dl/heart.csv')
df = pd.read_csv(csv_file)
train_size = int(len(df) * 0.8)
train_dataframe = df[:train_size]
test_dataframe = df[train_size:]
df.head()
###Output
_____no_output_____
###Markdown
Extract and convert features and labels to tensors:
###Code
# Features:
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',
'exang', 'oldpeak', 'slope', 'ca', 'thal'
]
feature_name_indices = {name: index for index, name in enumerate(feature_names)}
# This is the vocab list and mapping we will use for the 'thal' categorical
# feature.
thal_vocab_list = ['normal', 'fixed', 'reversible']
thal_map = {category: i for i, category in enumerate(thal_vocab_list)}
# Custom function for converting thal categories to buckets
def convert_thal_features(thal_features):
# Note that two examples in the test set are already converted.
return np.array([
thal_map[feature] if feature in thal_vocab_list else feature
for feature in thal_features
])
# Custom function for extracting each feature.
def extract_features(dataframe,
label_name='target',
feature_names=feature_names):
features = []
for feature_name in feature_names:
if feature_name == 'thal':
features.append(
convert_thal_features(dataframe[feature_name].values).astype(float))
else:
features.append(dataframe[feature_name].values.astype(float))
labels = dataframe[label_name].values.astype(float)
return features, labels
train_xs, train_ys = extract_features(train_dataframe)
test_xs, test_ys = extract_features(test_dataframe)
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
# Our lattice models may have predictions above 1.0 due to numerical errors.
# We can subtract this small epsilon value from our output_max to make sure we
# do not predict values outside of our label bound.
numerical_error_epsilon = 1e-5
###Output
_____no_output_____
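###Markdown
A quick sanity check on the extracted data (my own aside): there should be one array per feature name, plus label arrays of matching length.
###Code
print(len(train_xs), train_xs[0].shape, train_ys.shape)
print(len(test_xs), test_xs[0].shape, test_ys.shape)
###Output
_____no_output_____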
###Markdown
Setting the default values used for training in this guide:
###Code
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
###Output
_____no_output_____
###Markdown
Feature ConfigsFeature calibration and per-feature configurations are set using [tfl.configs.FeatureConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/FeatureConfig). Feature configurations include monotonicity constraints, per-feature regularization (see [tfl.configs.RegularizerConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/RegularizerConfig)), and lattice sizes for lattice models.Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists. Compute QuantilesAlthough the default setting for `pwl_calibration_input_keypoints` in `tfl.configs.FeatureConfig` is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
###Code
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
###Output
_____no_output_____
###Markdown
Defining Our Feature ConfigsNow that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
###Code
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
monotonicity='increasing',
# We must set the keypoints manually.
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['age']],
num_keypoints=5,
clip_max=100),
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='sex',
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='cp',
monotonicity='increasing',
# Keypoints that are uniformly spaced.
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=np.linspace(
np.min(train_xs[feature_name_indices['cp']]),
np.max(train_xs[feature_name_indices['cp']]),
num=4),
),
tfl.configs.FeatureConfig(
name='chol',
monotonicity='increasing',
# Explicit input keypoints initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='trestbps',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['trestbps']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='thalach',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['thalach']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='oldpeak',
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['oldpeak']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='ca',
monotonicity='increasing',
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['ca']], num_keypoints=4),
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
num_buckets=3,
# We must specify the vocabulary list in order to later set the
# monotonicities since we used names and not indices.
vocabulary_list=thal_vocab_list,
),
]
###Output
_____no_output_____
###Markdown
Next we need to make sure to properly set the monotonicities for features where we used a custom vocabulary (such as 'thal' above).
###Code
tfl.premade_lib.set_categorical_monotonicities(feature_configs)
###Output
_____no_output_____
###Markdown
Calibrated Linear ModelTo construct a TFL premade model, first construct a model configuration from [tfl.configs](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs). A calibrated linear model is constructed using the [tfl.configs.CalibratedLinearConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLinearConfig). It applies piecewise-linear and categorical calibration on the input features, followed by a linear combination and an optional output piecewise-linear calibration. When using output calibration or when output bounds are specified, the linear layer will apply weighted averaging on calibrated inputs.This example creates a calibrated linear model on the first 5 features.
###Code
# Model config defines the model structure for the premade model.
linear_model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs[:5],
use_bias=True,
# We must set the output min and max to that of the label.
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=10,
output_initialization=np.linspace(min_label, max_label, num=10),
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CalibratedLinear premade model constructed from the given model config.
linear_model = tfl.premade.CalibratedLinear(linear_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
Now, as with any other [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), we compile and fit the model to our data.
###Code
linear_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
linear_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
###Output
_____no_output_____
###Markdown
After training our model, we can evaluate it on our test set.
###Code
print('Test Set Evaluation...')
print(linear_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice ModelA calibrated lattice model is constructed using [tfl.configs.CalibratedLatticeConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeConfig). A calibrated lattice model applies piecewise-linear and categorical calibration on the input features, followed by a lattice model and an optional output piecewise-linear calibration.This example creates a calibrated lattice model on the first 5 features.
###Code
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
lattice_model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=feature_configs[:5],
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),
])
# A CalibratedLattice premade model constructed from the given model config.
lattice_model = tfl.premade.CalibratedLattice(lattice_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
lattice_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
lattice_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(lattice_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice Ensemble ModelWhen the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their output instead of creating just a single huge lattice. Ensemble lattice models are constructed using [tfl.configs.CalibratedLatticeEnsembleConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeEnsembleConfig). A calibrated lattice ensemble model applies piecewise-linear and categorical calibration on the input features, followed by an ensemble of lattice models and an optional output piecewise-linear calibration. Explicit Lattice Ensemble InitializationIf you already know which subsets of features you want to feed into your lattices, then you can explicitly set the lattices using feature names. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],
['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],
['restecg', 'age', 'sex']],
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label])
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
explicit_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
explicit_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
explicit_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
explicit_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(explicit_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Random Lattice EnsembleIf you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='random',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now we must set the random lattice structure and construct the model.
tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
random_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
random_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
random_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
random_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(random_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Crystals Lattice EnsemblePremade also provides a heuristic feature arrangement algorithm, called [Crystals](https://papers.nips.cc/paper/6377-fast-and-flexible-monotonic-functions-with-ensembles-of-lattices). To use the Crystals algorithm, first we train a prefitting model that estimates pairwise feature interactions. We then arrange the final ensemble such that features with more non-linear interactions are in the same lattices. The Premade Library offers helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now that we have our model config, we can construct a prefitting model config.
prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(
crystals_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# prefitting model config.
prefitting_model = tfl.premade.CalibratedLatticeEnsemble(
prefitting_model_config)
# We can compile and train our prefitting model as we like.
prefitting_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
prefitting_model.fit(
train_xs,
train_ys,
epochs=PREFITTING_NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
# Now that we have our trained prefitting model, we can extract the crystals.
tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,
prefitting_model_config,
prefitting_model)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
crystals_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
crystals_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
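###Markdown
If you want to see which feature groupings the Crystals algorithm chose, you can inspect the ensemble config after the call above. This is an illustrative sketch (not part of the original guide) and assumes the config keeps the chosen groupings in its `lattices` attribute once `set_crystals_lattice_ensemble` has run.
###Code
# Hypothetical inspection: each entry is expected to be a list of the feature
# names assigned to one lattice by the Crystals arrangement.
print(crystals_ensemble_model_config.lattices)
###Output
_____no_output_____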
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
crystals_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
crystals_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(crystals_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TF Lattice Premade Models OverviewPremade Models are quick and easy ways to build TFL `tf.keras.model` instances for typical use cases. This guide outlines the steps needed to construct a TFL Premade Model and train/test it. SetupInstalling TF Lattice package:
###Code
#@test {"skip": true}
!pip install tensorflow-lattice pydot
###Output
_____no_output_____
###Markdown
Importing required packages:
###Code
import tensorflow as tf
import copy
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
###Output
_____no_output_____
###Markdown
Downloading the UCI Statlog (Heart) dataset:
###Code
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/applied-dl/heart.csv')
df = pd.read_csv(csv_file)
train_size = int(len(df) * 0.8)
train_dataframe = df[:train_size]
test_dataframe = df[train_size:]
df.head()
###Output
_____no_output_____
###Markdown
Extract and convert features and labels to tensors:
###Code
# Features:
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',
'exang', 'oldpeak', 'slope', 'ca', 'thal'
]
feature_name_indices = {name: index for index, name in enumerate(feature_names)}
# This is the vocab list and mapping we will use for the 'thal' categorical
# feature.
thal_vocab_list = ['normal', 'fixed', 'reversible']
thal_map = {category: i for i, category in enumerate(thal_vocab_list)}
# Custom function for converting thal categories to buckets
def convert_thal_features(thal_features):
# Note that two examples in the test set are already converted.
return np.array([
thal_map[feature] if feature in thal_vocab_list else feature
for feature in thal_features
])
# Custom function for extracting each feature.
def extract_features(dataframe,
label_name='target',
feature_names=feature_names):
features = []
for feature_name in feature_names:
if feature_name == 'thal':
features.append(
convert_thal_features(dataframe[feature_name].values).astype(float))
else:
features.append(dataframe[feature_name].values.astype(float))
labels = dataframe[label_name].values.astype(float)
return features, labels
train_xs, train_ys = extract_features(train_dataframe)
test_xs, test_ys = extract_features(test_dataframe)
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
# Our lattice models may have predictions above 1.0 due to numerical errors.
# We can subtract this small epsilon value from our output_max to make sure we
# do not predict values outside of our label bound.
numerical_error_epsilon = 1e-5
###Output
_____no_output_____
###Markdown
Setting the default values used for training in this guide:
###Code
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
###Output
_____no_output_____
###Markdown
Feature ConfigsFeature calibration and per-feature configurations are set using [tfl.configs.FeatureConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/FeatureConfig). Feature configurations include monotonicity constraints, per-feature regularization (see [tfl.configs.RegularizerConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/RegularizerConfig)), and lattice sizes for lattice models.Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists. Compute QuantilesAlthough the default setting for `pwl_calibration_input_keypoints` in `tfl.configs.FeatureConfig` is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
###Code
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
###Output
_____no_output_____
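###Markdown
As a quick sanity check (a minimal sketch, not part of the original guide), we can call `compute_quantiles` on one of the training columns and look at the keypoints it returns; the exact values depend on the training split.
###Code
# Hypothetical usage example: compute 5 keypoints for the 'age' feature. The
# returned array is what we later pass as pwl_calibration_input_keypoints.
example_age_keypoints = compute_quantiles(
    train_xs[feature_name_indices['age']], num_keypoints=5, clip_max=100)
print(example_age_keypoints)
###Output
_____no_output_____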
###Markdown
Defining Our Feature ConfigsNow that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
###Code
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
monotonicity='increasing',
# We must set the keypoints manually.
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['age']],
num_keypoints=5,
clip_max=100),
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='sex',
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='cp',
monotonicity='increasing',
# Keypoints that are uniformly spaced.
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=np.linspace(
np.min(train_xs[feature_name_indices['cp']]),
np.max(train_xs[feature_name_indices['cp']]),
num=4),
),
tfl.configs.FeatureConfig(
name='chol',
monotonicity='increasing',
# Explicit input keypoints initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='trestbps',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['trestbps']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='thalach',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['thalach']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='oldpeak',
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['oldpeak']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='ca',
monotonicity='increasing',
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['ca']], num_keypoints=4),
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
num_buckets=3,
# We must specify the vocabulary list in order to later set the
# monotonicities since we used names and not indices.
vocabulary_list=thal_vocab_list,
),
]
###Output
_____no_output_____
###Markdown
Next we need to make sure to properly set the monotonicities for features where we used a custom vocabulary (such as 'thal' above).
###Code
tfl.premade_lib.set_categorical_monotonicities(feature_configs)
###Output
_____no_output_____
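###Markdown
To see what this helper did, we can print the 'thal' config afterwards; assuming the feature configs expose their monotonicity attribute directly (an illustrative sketch), the name pairs such as ('normal', 'fixed') should now be expressed as vocabulary index pairs.
###Code
# Hypothetical inspection: feature_configs follows the same order as
# feature_names, so the 'thal' config can be looked up by index.
print(feature_configs[feature_name_indices['thal']].monotonicity)
###Output
_____no_output_____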
###Markdown
Calibrated Linear ModelTo construct a TFL premade model, first construct a model configuration from [tfl.configs](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs). A calibrated linear model is constructed using the [tfl.configs.CalibratedLinearConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLinearConfig). It applies piecewise-linear and categorical calibration on the input features, followed by a linear combination and an optional output piecewise-linear calibration. When using output calibration or when output bounds are specified, the linear layer will apply weighted averaging on calibrated inputs.This example creates a calibrated linear model on the first 5 features.
###Code
# Model config defines the model structure for the premade model.
linear_model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs[:5],
use_bias=True,
# We must set the output min and max to that of the label.
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=10,
output_initialization=np.linspace(min_label, max_label, num=10),
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CalibratedLinear premade model constructed from the given model config.
linear_model = tfl.premade.CalibratedLinear(linear_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
Now, as with any other [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), we compile and fit the model to our data.
###Code
linear_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
linear_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
###Output
_____no_output_____
###Markdown
After training our model, we can evaluate it on our test set.
###Code
print('Test Set Evaluation...')
print(linear_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
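###Markdown
Because the premade model is a regular Keras model, we can also generate per-example scores with `predict`. This is a small sketch (not from the original guide); it assumes the model accepts the same list-of-arrays input format used for training and evaluation above.
###Code
# Hypothetical usage: score the test examples with the trained linear model.
# Each row is the calibrated model output for one example.
linear_predictions = linear_model.predict(test_xs)
print(linear_predictions[:5])
###Output
_____no_output_____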
###Markdown
Calibrated Lattice ModelA calibrated lattice model is constructed using [tfl.configs.CalibratedLatticeConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeConfig). A calibrated lattice model applies piecewise-linear and categorical calibration on the input features, followed by a lattice model and an optional output piecewise-linear calibration.This example creates a calibrated lattice model on the first 5 features.
###Code
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
lattice_model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=feature_configs[:5],
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),
])
# A CalibratedLattice premade model constructed from the given model config.
lattice_model = tfl.premade.CalibratedLattice(lattice_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
lattice_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
lattice_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(lattice_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice Ensemble ModelWhen the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their output instead of creating just a single huge lattice. Ensemble lattice models are constructed using [tfl.configs.CalibratedLatticeEnsembleConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeEnsembleConfig). A calibrated lattice ensemble model applies piecewise-linear and categorical calibration on the input features, followed by an ensemble of lattice models and an optional output piecewise-linear calibration. Explicit Lattice Ensemble InitializationIf you already know which subsets of features you want to feed into your lattices, then you can explicitly set the lattices using feature names. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],
['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],
['restecg', 'age', 'sex']],
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label])
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
explicit_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
explicit_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
explicit_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
explicit_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(explicit_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Random Lattice EnsembleIf you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='random',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now we must set the random lattice structure and construct the model.
tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
random_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
random_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
random_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
random_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(random_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
RTL Layer Random Lattice EnsembleWhen using a random lattice ensemble, you can specify that the model use a single `tfl.layers.RTL` layer. We note that `tfl.layers.RTL` only supports monotonicity constraints and must have the same lattice size for all features and no per-feature regularization. Note that using a `tfl.layers.RTL` layer lets you scale to much larger ensembles than using separate `tfl.layers.Lattice` instances.This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# Make sure our feature configs have the same lattice size, no per-feature
# regularization, and only monotonicity constraints.
rtl_layer_feature_configs = copy.deepcopy(feature_configs)
for feature_config in rtl_layer_feature_configs:
feature_config.lattice_size = 2
feature_config.unimodality = 'none'
feature_config.reflects_trust_in = None
feature_config.dominates = None
feature_config.regularizer_configs = None
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
rtl_layer_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=rtl_layer_feature_configs,
lattices='rtl_layer',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config. Note that we do not have to specify the lattices by calling
# a helper function (like before with random) because the RTL Layer will take
# care of that for us.
rtl_layer_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
rtl_layer_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
rtl_layer_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
rtl_layer_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
rtl_layer_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(rtl_layer_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Crystals Lattice EnsemblePremade also provides a heuristic feature arrangement algorithm, called [Crystals](https://papers.nips.cc/paper/6377-fast-and-flexible-monotonic-functions-with-ensembles-of-lattices). To use the Crystals algorithm, first we train a prefitting model that estimates pairwise feature interactions. We then arrange the final ensemble such that features with more non-linear interactions are in the same lattices. The Premade Library offers helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now that we have our model config, we can construct a prefitting model config.
prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(
crystals_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# prefitting model config.
prefitting_model = tfl.premade.CalibratedLatticeEnsemble(
prefitting_model_config)
# We can compile and train our prefitting model as we like.
prefitting_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
prefitting_model.fit(
train_xs,
train_ys,
epochs=PREFITTING_NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
# Now that we have our trained prefitting model, we can extract the crystals.
tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,
prefitting_model_config,
prefitting_model)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
crystals_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
crystals_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
crystals_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
crystals_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(crystals_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
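###Markdown
As an optional wrap-up (a sketch added here, not part of the original guide), we can evaluate all of the models trained above in a single loop so the loss/AUC numbers are easier to compare side by side.
###Code
# Hypothetical comparison of the premade models trained in this notebook.
models_to_compare = {
    'calibrated linear': linear_model,
    'calibrated lattice': lattice_model,
    'explicit ensemble': explicit_ensemble_model,
    'random ensemble': random_ensemble_model,
    'rtl layer ensemble': rtl_layer_ensemble_model,
    'crystals ensemble': crystals_ensemble_model,
}
for model_name, model in models_to_compare.items():
    # evaluate returns [loss, AUC] given the compile settings used above.
    print(model_name, model.evaluate(test_xs, test_ys, verbose=0))
###Output
_____no_output_____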
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TF Lattice Premade Models OverviewPremade Models are quick and easy ways to build TFL `tf.keras.model` instances for typical use cases. This guide outlines the steps needed to construct a TFL Premade Model and train/test it. SetupInstalling TF Lattice package:
###Code
#@test {"skip": true}
!pip install tensorflow-lattice pydot
###Output
_____no_output_____
###Markdown
Importing required packages:
###Code
import tensorflow as tf
import copy
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
###Output
_____no_output_____
###Markdown
Downloading the UCI Statlog (Heart) dataset:
###Code
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
df = pd.read_csv(csv_file)
train_size = int(len(df) * 0.8)
train_dataframe = df[:train_size]
test_dataframe = df[train_size:]
df.head()
###Output
_____no_output_____
###Markdown
Extract and convert features and labels to tensors:
###Code
# Features:
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',
'exang', 'oldpeak', 'slope', 'ca', 'thal'
]
feature_name_indices = {name: index for index, name in enumerate(feature_names)}
# This is the vocab list and mapping we will use for the 'thal' categorical
# feature.
thal_vocab_list = ['normal', 'fixed', 'reversible']
thal_map = {category: i for i, category in enumerate(thal_vocab_list)}
# Custom function for converting thal categories to buckets
def convert_thal_features(thal_features):
# Note that two examples in the test set are already converted.
return np.array([
thal_map[feature] if feature in thal_vocab_list else feature
for feature in thal_features
])
# Custom function for extracting each feature.
def extract_features(dataframe,
label_name='target',
feature_names=feature_names):
features = []
for feature_name in feature_names:
if feature_name == 'thal':
features.append(
convert_thal_features(dataframe[feature_name].values).astype(float))
else:
features.append(dataframe[feature_name].values.astype(float))
labels = dataframe[label_name].values.astype(float)
return features, labels
train_xs, train_ys = extract_features(train_dataframe)
test_xs, test_ys = extract_features(test_dataframe)
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
# Our lattice models may have predictions above 1.0 due to numerical errors.
# We can subtract this small epsilon value from our output_max to make sure we
# do not predict values outside of our label bound.
numerical_error_epsilon = 1e-5
###Output
_____no_output_____
###Markdown
Setting the default values used for training in this guide:
###Code
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
###Output
_____no_output_____
###Markdown
Feature ConfigsFeature calibration and per-feature configurations are set using [tfl.configs.FeatureConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/FeatureConfig). Feature configurations include monotonicity constraints, per-feature regularization (see [tfl.configs.RegularizerConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/RegularizerConfig)), and lattice sizes for lattice models.Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists. Compute QuantilesAlthough the default setting for `pwl_calibration_input_keypoints` in `tfl.configs.FeatureConfig` is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
###Code
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
###Output
_____no_output_____
###Markdown
Defining Our Feature ConfigsNow that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
###Code
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
monotonicity='increasing',
# We must set the keypoints manually.
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['age']],
num_keypoints=5,
clip_max=100),
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='sex',
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='cp',
monotonicity='increasing',
# Keypoints that are uniformly spaced.
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=np.linspace(
np.min(train_xs[feature_name_indices['cp']]),
np.max(train_xs[feature_name_indices['cp']]),
num=4),
),
tfl.configs.FeatureConfig(
name='chol',
monotonicity='increasing',
# Explicit input keypoints initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='trestbps',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['trestbps']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='thalach',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['thalach']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='oldpeak',
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['oldpeak']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='ca',
monotonicity='increasing',
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['ca']], num_keypoints=4),
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
num_buckets=3,
# We must specify the vocabulary list in order to later set the
# monotonicities since we used names and not indices.
vocabulary_list=thal_vocab_list,
),
]
###Output
_____no_output_____
###Markdown
Next we need to make sure to properly set the monotonicities for features where we used a custom vocabulary (such as 'thal' above).
###Code
tfl.premade_lib.set_categorical_monotonicities(feature_configs)
###Output
_____no_output_____
###Markdown
Calibrated Linear ModelTo construct a TFL premade model, first construct a model configuration from [tfl.configs](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs). A calibrated linear model is constructed using the [tfl.configs.CalibratedLinearConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLinearConfig). It applies piecewise-linear and categorical calibration on the input features, followed by a linear combination and an optional output piecewise-linear calibration. When using output calibration or when output bounds are specified, the linear layer will apply weighted averaging on calibrated inputs.This example creates a calibrated linear model on the first 5 features.
###Code
# Model config defines the model structure for the premade model.
linear_model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs[:5],
use_bias=True,
# We must set the output min and max to that of the label.
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=10,
output_initialization=np.linspace(min_label, max_label, num=10),
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CalibratedLinear premade model constructed from the given model config.
linear_model = tfl.premade.CalibratedLinear(linear_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
Now, as with any other [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), we compile and fit the model to our data.
###Code
linear_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
linear_model.fit(
train_xs[:5],
train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
###Output
_____no_output_____
###Markdown
After training our model, we can evaluate it on our test set.
###Code
print('Test Set Evaluation...')
print(linear_model.evaluate(test_xs[:5], test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice ModelA calibrated lattice model is constructed using [tfl.configs.CalibratedLatticeConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeConfig). A calibrated lattice model applies piecewise-linear and categorical calibration on the input features, followed by a lattice model and an optional output piecewise-linear calibration.This example creates a calibrated lattice model on the first 5 features.
###Code
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
lattice_model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=feature_configs[:5],
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),
])
# A CalibratedLattice premade model constructed from the given model config.
lattice_model = tfl.premade.CalibratedLattice(lattice_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
lattice_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
lattice_model.fit(
train_xs[:5],
train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
print('Test Set Evaluation...')
print(lattice_model.evaluate(test_xs[:5], test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice Ensemble ModelWhen the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their output instead of creating just a single huge lattice. Ensemble lattice models are constructed using [tfl.configs.CalibratedLatticeEnsembleConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeEnsembleConfig). A calibrated lattice ensemble model applies piecewise-linear and categorical calibration on the input features, followed by an ensemble of lattice models and an optional output piecewise-linear calibration. Explicit Lattice Ensemble InitializationIf you already know which subsets of features you want to feed into your lattices, then you can explicitly set the lattices using feature names. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],
['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],
['restecg', 'age', 'sex']],
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label])
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
explicit_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
explicit_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
explicit_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
explicit_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(explicit_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Random Lattice EnsembleIf you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='random',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now we must set the random lattice structure and construct the model.
tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
random_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
random_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
random_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
random_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(random_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
RTL Layer Random Lattice EnsembleWhen using a random lattice ensemble, you can specify that the model use a single `tfl.layers.RTL` layer. We note that `tfl.layers.RTL` only supports monotonicity constraints and must have the same lattice size for all features and no per-feature regularization. Note that using a `tfl.layers.RTL` layer lets you scale to much larger ensembles than using separate `tfl.layers.Lattice` instances.This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# Make sure our feature configs have the same lattice size, no per-feature
# regularization, and only monotonicity constraints.
rtl_layer_feature_configs = copy.deepcopy(feature_configs)
for feature_config in rtl_layer_feature_configs:
feature_config.lattice_size = 2
feature_config.unimodality = 'none'
feature_config.reflects_trust_in = None
feature_config.dominates = None
feature_config.regularizer_configs = None
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
rtl_layer_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=rtl_layer_feature_configs,
lattices='rtl_layer',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config. Note that we do not have to specify the lattices by calling
# a helper function (like before with random) because the RTL Layer will take
# care of that for us.
rtl_layer_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
rtl_layer_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
rtl_layer_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
rtl_layer_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
rtl_layer_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(rtl_layer_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
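###Markdown
Since a single `tfl.layers.RTL` layer is meant to scale to much larger ensembles, a quick illustrative check (a sketch, not from the original guide) is to compare the trainable parameter counts of the two random-ensemble variants built above using the standard Keras `count_params` method.
###Code
# Hypothetical size comparison between the two random ensemble models.
print('Separate-lattices ensemble parameters:',
      random_ensemble_model.count_params())
print('RTL-layer ensemble parameters:',
      rtl_layer_ensemble_model.count_params())
###Output
_____no_output_____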
###Markdown
Crystals Lattice EnsemblePremade also provides a heuristic feature arrangement algorithm, called [Crystals](https://papers.nips.cc/paper/6377-fast-and-flexible-monotonic-functions-with-ensembles-of-lattices). To use the Crystals algorithm, first we train a prefitting model that estimates pairwise feature interactions. We then arrange the final ensemble such that features with more non-linear interactions are in the same lattices. The Premade Library offers helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now that we have our model config, we can construct a prefitting model config.
prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(
crystals_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# prefitting model config.
prefitting_model = tfl.premade.CalibratedLatticeEnsemble(
prefitting_model_config)
# We can compile and train our prefitting model as we like.
prefitting_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
prefitting_model.fit(
train_xs,
train_ys,
epochs=PREFITTING_NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
# Now that we have our trained prefitting model, we can extract the crystals.
tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,
prefitting_model_config,
prefitting_model)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
crystals_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
crystals_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
crystals_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
crystals_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(crystals_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TF Lattice Premade Models OverviewPremade Models are quick and easy ways to build TFL `tf.keras.model` instances for typical use cases. This guide outlines the steps needed to construct a TFL Premade Model and train/test it. SetupInstalling TF Lattice package:
###Code
#@test {"skip": true}
!pip install tensorflow-lattice pydot
###Output
_____no_output_____
###Markdown
Importing required packages:
###Code
import tensorflow as tf
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
from tensorflow import feature_column as fc
logging.disable(sys.maxsize)
###Output
_____no_output_____
###Markdown
Downloading the UCI Statlog (Heart) dataset:
###Code
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/applied-dl/heart.csv')
df = pd.read_csv(csv_file)
train_size = int(len(df) * 0.8)
train_dataframe = df[:train_size]
test_dataframe = df[train_size:]
df.head()
###Output
_____no_output_____
###Markdown
Extract and convert features and labels to tensors:
###Code
# Features:
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',
'exang', 'oldpeak', 'slope', 'ca', 'thal'
]
feature_name_indices = {name: index for index, name in enumerate(feature_names)}
# This is the vocab list and mapping we will use for the 'thal' categorical
# feature.
thal_vocab_list = ['normal', 'fixed', 'reversible']
thal_map = {category: i for i, category in enumerate(thal_vocab_list)}
# Custom function for converting thal categories to buckets
def convert_thal_features(thal_features):
# Note that two examples in the test set are already converted.
return np.array([
thal_map[feature] if feature in thal_vocab_list else feature
for feature in thal_features
])
# Custom function for extracting each feature.
def extract_features(dataframe,
label_name='target',
feature_names=feature_names):
features = []
for feature_name in feature_names:
if feature_name == 'thal':
features.append(
convert_thal_features(dataframe[feature_name].values).astype(float))
else:
features.append(dataframe[feature_name].values.astype(float))
labels = dataframe[label_name].values.astype(float)
return features, labels
train_xs, train_ys = extract_features(train_dataframe)
test_xs, test_ys = extract_features(test_dataframe)
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
# Our lattice models may have predictions above 1.0 due to numerical errors.
# We can subtract this small epsilon value from our output_max to make sure we
# do not predict values outside of our label bound.
numerical_error_epsilon = 1e-5
###Output
_____no_output_____
###Markdown
Setting the default values used for training in this guide:
###Code
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
###Output
_____no_output_____
###Markdown
Feature ConfigsFeature calibration and per-feature configurations are set using [tfl.configs.FeatureConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/FeatureConfig). Feature configurations include monotonicity constraints, per-feature regularization (see [tfl.configs.RegularizerConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/RegularizerConfig)), and lattice sizes for lattice models.Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists. Compute QuantilesAlthough the default setting for `pwl_calibration_input_keypoints` in `tfl.configs.FeatureConfig` is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
###Code
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
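# Illustrative usage (added for clarity; this mirrors the call made for the
# 'age' feature further below): print the five keypoints chosen for 'age',
# clipping the maximum at 100.
print(compute_quantiles(train_xs[feature_name_indices['age']],
                        num_keypoints=5, clip_max=100))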
###Output
_____no_output_____
###Markdown
Defining Our Feature ConfigsNow that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
###Code
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
monotonicity='increasing',
# We must set the keypoints manually.
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['age']],
num_keypoints=5,
clip_max=100),
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='sex',
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='cp',
monotonicity='increasing',
# Keypoints that are uniformly spaced.
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=np.linspace(
np.min(train_xs[feature_name_indices['cp']]),
np.max(train_xs[feature_name_indices['cp']]),
num=4),
),
tfl.configs.FeatureConfig(
name='chol',
monotonicity='increasing',
# Explicit input keypoints initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='trestbps',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['trestbps']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='thalach',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['thalach']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='oldpeak',
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['oldpeak']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='ca',
monotonicity='increasing',
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['ca']], num_keypoints=4),
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
num_buckets=3,
# We must specify the vocabulary list in order to later set the
# monotonicities since we used names and not indices.
vocabulary_list=thal_vocab_list,
),
]
###Output
_____no_output_____
###Markdown
Next we need to make sure to properly set the monotonicities for features where we used a custom vocabulary (such as 'thal' above).
###Code
tfl.premade_lib.set_categorical_monotonicities(feature_configs)
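# Quick sanity check (illustrative addition, not part of the original guide):
# after the call above, the 'thal' monotonicity pairs should be expressed as
# vocabulary indices, e.g. [(0, 1), (0, 2)] for ('normal', 'fixed') and
# ('normal', 'reversible').
thal_config = [cfg for cfg in feature_configs if cfg.name == 'thal'][0]
print(thal_config.monotonicity)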
###Output
_____no_output_____
###Markdown
Calibrated Linear ModelTo construct a TFL premade model, first construct a model configuration from [tfl.configs](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs). A calibrated linear model is constructed using the [tfl.configs.CalibratedLinearConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLinearConfig). It applies piecewise-linear and categorical calibration on the input features, followed by a linear combination and an optional output piecewise-linear calibration. When using output calibration or when output bounds are specified, the linear layer will apply weighted averaging on calibrated inputs.This example creates a calibrated linear model on the first 5 features.
###Code
# Model config defines the model structure for the premade model.
linear_model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs[:5],
use_bias=True,
# We must set the output min and max to that of the label.
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=10,
output_initialization=np.linspace(min_label, max_label, num=10),
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CalibratedLinear premade model constructed from the given model config.
linear_model = tfl.premade.CalibratedLinear(linear_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
Now, as with any other [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), we compile and fit the model to our data.
###Code
linear_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
linear_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
###Output
_____no_output_____
###Markdown
After training our model, we can evaluate it on our test set.
###Code
print('Test Set Evaluation...')
print(linear_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice ModelA calibrated lattice model is constructed using [tfl.configs.CalibratedLatticeConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeConfig). A calibrated lattice model applies piecewise-linear and categorical calibration on the input features, followed by a lattice model and an optional output piecewise-linear calibration.This example creates a calibrated lattice model on the first 5 features.
###Code
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
lattice_model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=feature_configs[:5],
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),
])
# A CalibratedLattice premade model constructed from the given model config.
lattice_model = tfl.premade.CalibratedLattice(lattice_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
lattice_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
lattice_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(lattice_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice Ensemble ModelWhen the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their output instead of creating just a single huge lattice. Ensemble lattice models are constructed using [tfl.configs.CalibratedLatticeEnsembleConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeEnsembleConfig). A calibrated lattice ensemble model applies piecewise-linear and categorical calibration on the input feature, followed by an ensemble of lattice models and an optional output piecewise-linear calibration. Explicit Lattice Ensemble InitializationIf you already know which subsets of features you want to feed into your lattices, then you can explicitly set the lattices using feature names. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],
['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],
['restecg', 'age', 'sex']],
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label])
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
explicit_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
explicit_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
explicit_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
explicit_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(explicit_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Random Lattice EnsembleIf you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='random',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now we must set the random lattice structure and construct the model.
tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
random_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
random_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
random_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
random_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(random_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Crystals Lattice EnsemblePremade also provides a heuristic feature arrangement algorithm, called [Crystals](https://papers.nips.cc/paper/6377-fast-and-flexible-monotonic-functions-with-ensembles-of-lattices). To use the Crystals algorithm, first we train a prefitting model that estimates pairwise feature interactions. We then arrange the final ensemble such that features with more non-linear interactions are in the same lattices.The Premade Library offers helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough.This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now that we have our model config, we can construct a prefitting model config.
prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(
crystals_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# prefitting model config.
prefitting_model = tfl.premade.CalibratedLatticeEnsemble(
prefitting_model_config)
# We can compile and train our prefitting model as we like.
prefitting_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
prefitting_model.fit(
train_xs,
train_ys,
epochs=PREFITTING_NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
# Now that we have our trained prefitting model, we can extract the crystals.
tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,
prefitting_model_config,
prefitting_model)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
crystals_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
crystals_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
crystals_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
crystals_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(crystals_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TF Lattice Premade Models OverviewPremade Models are quick and easy ways to build TFL `tf.keras.model` instances for typical use cases. This guide outlines the steps needed to construct a TFL Premade Model and train/test it. SetupInstalling TF Lattice package:
###Code
#@test {"skip": true}
!pip install tensorflow-lattice pydot
###Output
_____no_output_____
###Markdown
Importing required packages:
###Code
import tensorflow as tf
import copy
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
###Output
_____no_output_____
###Markdown
Downloading the UCI Statlog (Heart) dataset:
###Code
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/applied-dl/heart.csv')
df = pd.read_csv(csv_file)
train_size = int(len(df) * 0.8)
train_dataframe = df[:train_size]
test_dataframe = df[train_size:]
df.head()
###Output
_____no_output_____
###Markdown
Extract and convert features and labels to tensors:
###Code
# Features:
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',
'exang', 'oldpeak', 'slope', 'ca', 'thal'
]
feature_name_indices = {name: index for index, name in enumerate(feature_names)}
# This is the vocab list and mapping we will use for the 'thal' categorical
# feature.
thal_vocab_list = ['normal', 'fixed', 'reversible']
thal_map = {category: i for i, category in enumerate(thal_vocab_list)}
# Custom function for converting thal categories to buckets
def convert_thal_features(thal_features):
# Note that two examples in the test set are already converted.
return np.array([
thal_map[feature] if feature in thal_vocab_list else feature
for feature in thal_features
])
# Custom function for extracting each feature.
def extract_features(dataframe,
label_name='target',
feature_names=feature_names):
features = []
for feature_name in feature_names:
if feature_name == 'thal':
features.append(
convert_thal_features(dataframe[feature_name].values).astype(float))
else:
features.append(dataframe[feature_name].values.astype(float))
labels = dataframe[label_name].values.astype(float)
return features, labels
train_xs, train_ys = extract_features(train_dataframe)
test_xs, test_ys = extract_features(test_dataframe)
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
# Our lattice models may have predictions above 1.0 due to numerical errors.
# We can subtract this small epsilon value from our output_max to make sure we
# do not predict values outside of our label bound.
numerical_error_epsilon = 1e-5
###Output
_____no_output_____
###Markdown
Setting the default values used for training in this guide:
###Code
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
###Output
_____no_output_____
###Markdown
Feature ConfigsFeature calibration and per-feature configurations are set using [tfl.configs.FeatureConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/FeatureConfig). Feature configurations include monotonicity constraints, per-feature regularization (see [tfl.configs.RegularizerConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/RegularizerConfig)), and lattice sizes for lattice models.Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists. Compute QuantilesAlthough the default setting for `pwl_calibration_input_keypoints` in `tfl.configs.FeatureConfig` is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
###Code
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
###Output
_____no_output_____
###Markdown
Defining Our Feature ConfigsNow that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
###Code
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
monotonicity='increasing',
# We must set the keypoints manually.
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['age']],
num_keypoints=5,
clip_max=100),
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='sex',
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='cp',
monotonicity='increasing',
# Keypoints that are uniformly spaced.
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=np.linspace(
np.min(train_xs[feature_name_indices['cp']]),
np.max(train_xs[feature_name_indices['cp']]),
num=4),
),
tfl.configs.FeatureConfig(
name='chol',
monotonicity='increasing',
# Explicit input keypoints initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='trestbps',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['trestbps']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='thalach',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['thalach']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='oldpeak',
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['oldpeak']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='ca',
monotonicity='increasing',
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['ca']], num_keypoints=4),
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
num_buckets=3,
# We must specify the vocabulary list in order to later set the
# monotonicities since we used names and not indices.
vocabulary_list=thal_vocab_list,
),
]
###Output
_____no_output_____
###Markdown
Next we need to make sure to properly set the monotonicities for features where we used a custom vocabulary (such as 'thal' above).
###Code
tfl.premade_lib.set_categorical_monotonicities(feature_configs)
###Output
_____no_output_____
###Markdown
Calibrated Linear ModelTo construct a TFL premade model, first construct a model configuration from [tfl.configs](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs). A calibrated linear model is constructed using the [tfl.configs.CalibratedLinearConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLinearConfig). It applies piecewise-linear and categorical calibration on the input features, followed by a linear combination and an optional output piecewise-linear calibration. When using output calibration or when output bounds are specified, the linear layer will apply weighted averaging on calibrated inputs.This example creates a calibrated linear model on the first 5 features.
###Code
# Model config defines the model structure for the premade model.
linear_model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs[:5],
use_bias=True,
# We must set the output min and max to that of the label.
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=10,
output_initialization=np.linspace(min_label, max_label, num=10),
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CalibratedLinear premade model constructed from the given model config.
linear_model = tfl.premade.CalibratedLinear(linear_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
Now, as with any other [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), we compile and fit the model to our data.
###Code
linear_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
linear_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
###Output
_____no_output_____
###Markdown
After training our model, we can evaluate it on our test set.
###Code
print('Test Set Evaluation...')
print(linear_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice ModelA calibrated lattice model is constructed using [tfl.configs.CalibratedLatticeConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeConfig). A calibrated lattice model applies piecewise-linear and categorical calibration on the input features, followed by a lattice model and an optional output piecewise-linear calibration.This example creates a calibrated lattice model on the first 5 features.
###Code
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
lattice_model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=feature_configs[:5],
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),
])
# A CalibratedLattice premade model constructed from the given model config.
lattice_model = tfl.premade.CalibratedLattice(lattice_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
lattice_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
lattice_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(lattice_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Calibrated Lattice Ensemble ModelWhen the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their output instead of creating just a single huge lattice. Ensemble lattice models are constructed using [tfl.configs.CalibratedLatticeEnsembleConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeEnsembleConfig). A calibrated lattice ensemble model applies piecewise-linear and categorical calibration on the input feature, followed by an ensemble of lattice models and an optional output piecewise-linear calibration. Explicit Lattice Ensemble InitializationIf you already know which subsets of features you want to feed into your lattices, then you can explicitly set the lattices using feature names. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],
['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],
['restecg', 'age', 'sex']],
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label])
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
explicit_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
explicit_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
explicit_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
explicit_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(explicit_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Random Lattice EnsembleIf you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='random',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now we must set the random lattice structure and construct the model.
tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
random_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
random_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
random_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
random_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(random_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
RTL Layer Random Lattice EnsembleWhen using a random lattice ensemble, you can specify that the model use a single `tfl.layers.RTL` layer. We note that `tfl.layers.RTL` only supports monotonicity constraints and must have the same lattice size for all features and no per-feature regularization. Note that using a `tfl.layers.RTL` layer lets you scale to much larger ensembles than using separate `tfl.layers.Lattice` instances.This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# Make sure our feature configs have the same lattice size, no per-feature
# regularization, and only monotonicity constraints.
rtl_layer_feature_configs = copy.deepcopy(feature_configs)
for feature_config in rtl_layer_feature_configs:
feature_config.lattice_size = 2
feature_config.unimodality = 'none'
feature_config.reflects_trust_in = None
feature_config.dominates = None
feature_config.regularizer_configs = None
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
rtl_layer_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=rtl_layer_feature_configs,
lattices='rtl_layer',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config. Note that we do not have to specify the lattices by calling
# a helper function (like before with random) because the RTL Layer will take
# care of that for us.
rtl_layer_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
rtl_layer_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
rtl_layer_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
rtl_layer_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
rtl_layer_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(rtl_layer_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____
###Markdown
Crystals Lattice EnsemblePremade also provides a heuristic feature arrangement algorithm, called [Crystals](https://papers.nips.cc/paper/6377-fast-and-flexible-monotonic-functions-with-ensembles-of-lattices). To use the Crystals algorithm, first we train a prefitting model that estimates pairwise feature interactions. We then arrange the final ensemble such that features with more non-linear interactions are in the same lattices.The Premade Library offers helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough.This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.
###Code
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now that we have our model config, we can construct a prefitting model config.
prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(
crystals_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# prefitting model config.
prefitting_model = tfl.premade.CalibratedLatticeEnsemble(
prefitting_model_config)
# We can compile and train our prefitting model as we like.
prefitting_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
prefitting_model.fit(
train_xs,
train_ys,
epochs=PREFITTING_NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
# Now that we have our trained prefitting model, we can extract the crystals.
tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,
prefitting_model_config,
prefitting_model)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
crystals_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
crystals_ensemble_model, show_layer_names=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
As before, we compile, fit, and evaluate our model.
###Code
crystals_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
crystals_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(crystals_ensemble_model.evaluate(test_xs, test_ys))
###Output
_____no_output_____ |
Notebook-Class-Assignment-Answers/Step-5-Evaluate-Model-Task-2-Classification-Class-Assignment.ipynb | ###Markdown
Step 5 - Evaluate Model - Task 2. Evaluate Classification Model - CLASS ASSIGNMENT Load Libraries
###Code
!pip install sklearn --upgrade
import pandas as pd
import numpy as np
from datetime import date
from datetime import timedelta
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.metrics import accuracy_score
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Set up environment and connect to Google Drive
###Code
using_Google_colab = True
using_Anaconda_on_Mac_or_Linux = False
using_Anaconda_on_windows = False
if using_Google_colab:
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
EM2.1 Open Notebook, upload the interim pre-processed data from step 5 for LA county - Activity 1 Upload intermediate classification modeling data for LA County (see Step-4-Develop-Model-Task-4-Classification notebook for details)
###Code
if using_Google_colab:
merged_LA = pd.read_csv('/content/drive/MyDrive/COVID_Project/output/merged_LA.csv')
if using_Anaconda_on_Mac_or_Linux:
merged_LA = pd.read_csv('../output/merged_LA.csv')
if using_Anaconda_on_windows:
merged_LA = pd.read_csv(r'..\output\merged_LA.csv')
merged_LA
###Output
_____no_output_____
###Markdown
Prepare and apply the model
###Code
y_LA = merged_LA['case_direction'].values
X_LA = merged_LA[['delta_retail_recreation',
'delta_grocery_pharmacy',
'delta_parks',
'delta_transit',
'delta_workplaces',
'delta_residential']].values
X_LA
y_LA
###Output
_____no_output_____
###Markdown
EM2.2 Divide data into train and test, develop and test the model
###Code
decision_tree = DecisionTreeClassifier()
X_train_LA, X_test_LA, y_train_LA, y_test_LA = train_test_split(X_LA,
y_LA,
test_size=0.2,
random_state=0)
model_LA = decision_tree.fit(X_train_LA, y_train_LA)
X_train_LA
y_train_LA
plt.figure(figsize=(17,8))
tree.plot_tree(model_LA, max_depth=2)
plt.show()
print(decision_tree.feature_importances_)
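# Illustrative addition: pair each importance with its mobility feature name so
# the bare array printed above is easier to interpret. The names repeat the
# columns selected when building X_LA.
mobility_features = ['delta_retail_recreation', 'delta_grocery_pharmacy',
                     'delta_parks', 'delta_transit', 'delta_workplaces',
                     'delta_residential']
print(dict(zip(mobility_features, decision_tree.feature_importances_)))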
y_predict_LA = decision_tree.predict(X_test_LA)
y_predict_LA
y_test_LA
accuracy_score(y_test_LA, y_predict_LA)
###Output
_____no_output_____
###Markdown
EM2.3 Load all county Data
###Code
if using_Google_colab:
merged_all = pd.read_csv('/content/drive/MyDrive/COVID_Project/output/merged_all.csv')
if using_Anaconda_on_Mac_or_Linux:
merged_all = pd.read_csv('../output/merged_all.csv')
if using_Anaconda_on_windows:
merged_all = pd.read_csv(r'..\output\merged_all.csv')
merged_all
###Output
_____no_output_____
###Markdown
Prepare and apply the model to all counties data
###Code
y_all = merged_all['case_direction'].values
X_all = merged_all[['delta_retail_recreation',
'delta_grocery_pharmacy',
'delta_parks',
'delta_transit',
'delta_workplaces',
'delta_residential']].values
X_all.shape
y_all.shape
decision_tree_all = DecisionTreeClassifier()
X_train_all, X_test_all, y_train_all, y_test_all = train_test_split(X_all, y_all, test_size=0.2, random_state=0)
model_all = decision_tree_all.fit(X_train_all, y_train_all)
X_train_all.shape
X_test_all.shape
print(decision_tree_all.feature_importances_)
plt.figure(figsize=(20,15))
tree.plot_tree(model_all, max_depth=2)
plt.show()
y_predict_all = decision_tree_all.predict(X_test_all)
accuracy_score(y_test_all, y_predict_all)
###Output
_____no_output_____
###Markdown
Compute Confusion Matrix EM2.4 Test the model and compute confusion matrix, precision and recall
###Code
pd.DataFrame(
confusion_matrix(y_test_all, y_predict_all),
columns=['Predicted Not increase', 'Predicted Increase'],
index=['True Not Increase', 'True Increase'])
average_precision = precision_score(y_test_all, y_predict_all)
average_precision
recall_score(y_test_all, y_predict_all)
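# Optional cross-check (illustrative addition): derive precision and recall
# directly from the confusion matrix entries. With label order
# [not increase, increase], the matrix is [[TN, FP], [FN, TP]], so
# precision = TP / (TP + FP) and recall = TP / (TP + FN).
tn, fp, fn, tp = confusion_matrix(y_test_all, y_predict_all).ravel()
print('precision =', tp / (tp + fp))
print('recall =', tp / (tp + fn))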
###Output
_____no_output_____
###Markdown
EM2.5 Test the model you developed in the last section with a 2-month lag - CLASS ASSIGNMENT
###Code
if using_Google_colab:
merged_all_2_months = pd.read_csv('/content/drive/MyDrive/COVID_Project/output/merged_all_2_months_lag.csv')
if using_Anaconda_on_Mac_or_Linux:
merged_all_2_months = pd.read_csv('../output/merged_all_2_months_lag.csv')
if using_Anaconda_on_windows:
merged_all_2_months = pd.read_csv(r'..\output\merged_all_2_months_lag.csv')
merged_all_2_months
y_all = merged_all_2_months['case_direction'].values
X_all = merged_all_2_months[['delta_retail_recreation',
'delta_grocery_pharmacy',
'delta_parks',
'delta_transit',
'delta_workplaces',
'delta_residential']].values
X_all.shape
y_all.shape
decision_tree_all = DecisionTreeClassifier()
X_train_all, X_test_all, y_train_all, y_test_all = train_test_split(X_all, y_all, test_size=0.2, random_state=0)
model_all = decision_tree_all.fit(X_train_all, y_train_all)
X_train_all.shape
X_test_all.shape
print(decision_tree_all.feature_importances_)
plt.figure(figsize=(20,15))
tree.plot_tree(model_all, max_depth=2)
plt.show()
y_predict_all = decision_tree_all.predict(X_test_all)
accuracy_score(y_test_all, y_predict_all)
pd.DataFrame(
confusion_matrix(y_test_all, y_predict_all),
columns=['Predicted Not increase', 'Predicted Increase'],
index=['True Not Increase', 'True Increase'])
average_precision = precision_score(y_test_all, y_predict_all)
average_precision
recall_score(y_test_all, y_predict_all)
###Output
_____no_output_____ |
examples/chicago_taxi/chicago_taxi_tfma_local_playground.ipynb | ###Markdown
TFMA Notebook exampleThis notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers. SetupImport necessary packages.
###Code
import apache_beam as beam
import os
import preprocess
import shutil
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from google.protobuf import text_format
from tensorflow.python.lib.io import file_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import schema_utils
from trainer import task
from trainer import taxi
###Output
_____no_output_____
###Markdown
Helper functions and some constants for running the notebook locally.
###Code
BASE_DIR = os.getcwd()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
# Base dir containing train and eval data
TRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train')
EVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval')
# Base dir where TFT writes training data
TFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train')
TFT_TRAIN_FILE_PREFIX = 'train_transformed'
# Base dir where TFT writes eval data
TFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval')
TFT_EVAL_FILE_PREFIX = 'eval_transformed'
TF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf')
# Base dir where TFMA writes eval data
TFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma')
SERVING_MODEL_DIR = 'serving_model_dir'
EVAL_MODEL_DIR = 'eval_model_dir'
def get_tft_train_output_dir(run_id):
return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id)
def get_tft_eval_output_dir(run_id):
return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id)
def get_tf_output_dir(run_id):
return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id)
def get_tfma_output_dir(run_id):
return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id)
def _get_output_dir(base_dir, run_id):
return os.path.join(base_dir, 'run_' + str(run_id))
def get_schema_file():
return os.path.join(OUTPUT_DIR, 'schema.pbtxt')
###Output
_____no_output_____
###Markdown
Clean up output directories.
###Code
shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(get_schema_file(), ignore_errors=True)
###Output
_____no_output_____
###Markdown
Compute and visualize descriptive data statistics
###Code
# Compute stats over training data.
train_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(TRAIN_DATA_DIR, 'data.csv'))
# Visualize training data stats.
tfdv.visualize_statistics(train_stats)
###Output
_____no_output_____
###Markdown
Infer a schema
###Code
# Infer a schema from the training data stats.
schema = tfdv.infer_schema(statistics=train_stats, infer_feature_shape=False)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Check evaluation data for errors
###Code
# Compute stats over eval data.
eval_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(EVAL_DATA_DIR, 'data.csv'))
# Compare stats of eval data with training data.
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
# Update the schema based on the observed anomalies.
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
###Output
_____no_output_____
###Markdown
Freeze the schemaNow that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state.
###Code
file_io.recursive_create_dir(OUTPUT_DIR)
file_io.write_string_to_file(get_schema_file(), text_format.MessageToString(schema))
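# Optional round-trip check (illustrative addition): reload the frozen schema
# with the same helper used later in this notebook to confirm the
# text-serialized file parses back cleanly.
reloaded_schema = taxi.read_schema(get_schema_file())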
###Output
_____no_output_____
###Markdown
Preprocess Inputstransform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow).
###Code
# Transform eval data
preprocess.transform_data(input_handle=os.path.join(EVAL_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_EVAL_FILE_PREFIX,
working_dir=get_tft_eval_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
# Transform training data
preprocess.transform_data(input_handle=os.path.join(TRAIN_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_TRAIN_FILE_PREFIX,
working_dir=get_tft_train_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
###Output
_____no_output_____
###Markdown
Compute statistics over transformed data
###Code
# Compute stats over transformed training data.
TRANSFORMED_TRAIN_DATA = os.path.join(get_tft_train_output_dir(0), TFT_TRAIN_FILE_PREFIX + "*")
transformed_train_stats = tfdv.generate_statistics_from_tfrecord(data_location=TRANSFORMED_TRAIN_DATA)
# Visualize transformed training data stats and compare to raw training data.
# Use 'Feature search' to focus on a feature and see statistics pre- and post-transformation.
tfdv.visualize_statistics(transformed_train_stats, train_stats, lhs_name='TRANSFORMED', rhs_name='RAW')
###Output
_____no_output_____
###Markdown
Prepare the ModelTo use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``.``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel`` but exports the evaluation graph as opposed to the training or inference graph. Notice that one of the inputs is ``eval_input_receiver_fn`` which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the documentation for TFMA on Github.Construct the **EvalSavedModel** after training is completed.
###Code
def run_experiment(hparams):
"""Run the training and evaluate using the high level API"""
# Train and evaluate the model as usual.
estimator = task.train_and_maybe_evaluate(hparams)
  # Export TFMA's special EvalSavedModel
eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR)
receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir)
tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=eval_model_dir,
eval_input_receiver_fn=receiver_fn)
def eval_input_receiver_fn(working_dir):
# Extract feature spec from the schema.
raw_feature_spec = schema_utils.schema_as_feature_spec(schema).feature_spec
serialized_tf_example = tf.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# First we deserialize our examples using the raw schema.
features = tf.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, we must process them through tft
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR),
features))
# The key MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
# NOTE: Model is driven by transformed features (since training works on the
  # materialized output of TFT), but slicing will happen on raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=transformed_features[taxi.transformed_name(taxi.LABEL_KEY)])
print('Done')
###Output
_____no_output_____
###Markdown
Train and export the model for TFMA
###Code
def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor):
"""Helper method to train and export the model for TFMA
  The caller specifies the input and output directories by providing run ids. The optional parameters
  allow the user to change the model for the time series view.
Args:
tft_run_id: The run id for the preprocessing. Identifies the folder containing training data.
    tf_run_id: The run id for this training run. Identifies where the exported model will be written to.
    num_layers: The number of hidden layers.
first_layer_size: The size of the first hidden layer.
    scale_factor: The scale factor between consecutive hidden layers.
"""
hparams = tf.contrib.training.HParams(
# Inputs: are tf-transformed materialized features
train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'),
eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'),
schema_file=get_schema_file(),
# Output: dir for trained model
job_dir=get_tf_output_dir(tf_run_id),
tf_transform_dir=get_tft_train_output_dir(tft_run_id),
# Output: dir for both the serving model and eval_model which will go into tfma
# evaluation
output_dir=get_tf_output_dir(tf_run_id),
train_steps=10000,
eval_steps=5000,
num_layers=num_layers,
first_layer_size=first_layer_size,
scale_factor=scale_factor,
num_epochs=None,
train_batch_size=40,
eval_batch_size=40)
run_experiment(hparams)
print('Done')
run_local_experiment(tft_run_id=0,
tf_run_id=0,
num_layers=4,
first_layer_size=100,
scale_factor=0.7)
print('Done')
###Output
_____no_output_____
###Markdown
Run TFMA to compute metricsFor local analysis, TFMA offers a helper method ``tfma.run_model_analysis``
###Code
help(tfma.run_model_analysis)
###Output
_____no_output_____
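###Markdown
As a hedged sketch (not executed in this notebook), the cell below shows roughly how ``tfma.run_model_analysis`` could be called directly instead of only being inspected with ``help``. The argument names (``eval_shared_model``, ``data_location``, ``slice_spec``, ``output_path``) are assumptions for the TFMA version used here, and ``raw_examples.tfrecord`` is a hypothetical TFRecord of raw serialized tf.Examples (the format the exported EvalSavedModel expects); verify both against the ``help`` output above before running anything like this.
###Code
# Hedged sketch only: argument names are assumptions for this TFMA version and
# 'raw_examples.tfrecord' is a hypothetical TFRecord of raw tf.Examples.
sketch_eval_model_base_dir = os.path.join(get_tf_output_dir(0), EVAL_MODEL_DIR)
sketch_eval_model_dir = os.path.join(
    sketch_eval_model_base_dir, next(os.walk(sketch_eval_model_base_dir))[1][0])
sketch_result = tfma.run_model_analysis(
    eval_shared_model=tfma.default_eval_shared_model(
        eval_saved_model_path=sketch_eval_model_dir),
    data_location=os.path.join(EVAL_DATA_DIR, 'raw_examples.tfrecord'),
    slice_spec=[tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])],
    output_path=get_tfma_output_dir('run_model_analysis_sketch'))
tfma.view.render_slicing_metrics(sketch_result, slicing_column='trip_start_hour')
###Output
_____no_output_____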
###Markdown
You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation.
###Code
def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, schema_file, add_metrics_callbacks=None):
"""A simple wrapper function that runs tfma locally.
  A function that does extra transformations on the data and then runs model analysis.
  Args:
    slice_spec: The slicing spec for how to slice the data.
    tf_run_id: An id to construct the model directories with.
    tfma_run_id: An id to construct output directories with.
    input_csv: The evaluation data in csv format.
    schema_file: The file holding a text-serialized schema for the input data.
    add_metrics_callbacks: Optional list of callbacks for computing extra metrics.
Returns:
An EvalResult that can be used with TFMA visualization functions.
"""
eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR)
eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0])
eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=eval_model_dir,
add_metrics_callbacks=add_metrics_callbacks)
schema = taxi.read_schema(schema_file)
print(eval_model_dir)
display_only_data_location = input_csv
with beam.Pipeline() as pipeline:
csv_coder = taxi.make_csv_coder(schema)
raw_data = (
pipeline
| 'ReadFromText' >> beam.io.ReadFromText(
input_csv,
coder=beam.coders.BytesCoder(),
skip_header_lines=True)
| 'ParseCSV' >> beam.Map(csv_coder.decode))
# Examples must be in clean tf-example format.
coder = taxi.make_proto_coder(schema)
raw_data = (
raw_data
| 'ToSerializedTFExample' >> beam.Map(coder.encode))
_ = (raw_data
| 'ExtractEvaluateAndWriteResults' >>
tfma.ExtractEvaluateAndWriteResults(
eval_shared_model=eval_shared_model,
slice_spec=slice_spec,
output_path=get_tfma_output_dir(tfma_run_id),
display_only_data_location=input_csv))
return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id))
print('Done')
###Output
_____no_output_____
###Markdown
You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.slicer.SingleSliceSpec``.Below are examples of how slices can be specified.
###Code
# An empty slice spec means the overall slice, that is, the whole dataset.
OVERALL_SLICE_SPEC = tfma.slicer.SingleSliceSpec()
# Data can be sliced along a feature column
# In this case, data is sliced along feature column trip_start_hour.
FEATURE_COLUMN_SLICE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])
# Data can be sliced by crossing feature columns
# In this case, slices are computed for trip_start_day x trip_start_month.
FEATURE_COLUMN_CROSS_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month'])
# Metrics can be computed for a particular feature value.
# In this case, metrics is computed for all data where trip_start_hour is 12.
FEATURE_VALUE_SPEC = tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 12)])
# It is also possible to mix column cross and feature value cross.
# In this case, data where trip_start_hour is 12 will be sliced by trip_start_day.
COLUMN_CROSS_VALUE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])
ALL_SPECS = [
OVERALL_SLICE_SPEC,
FEATURE_COLUMN_SLICE_SPEC,
FEATURE_COLUMN_CROSS_SPEC,
FEATURE_VALUE_SPEC,
COLUMN_CROSS_VALUE_SPEC
]
###Output
_____no_output_____
###Markdown
Let's run TFMA!
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_result_1 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id=1,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
###Output
_____no_output_____
###Markdown
Visualization: Slicing MetricsTo see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall slice will be displayed.The default visualization is **slice overview** when the number of slices is small. It shows the value of a metric for each slice sorted by another metric. It is also possible to set a threshold to filter out slices with smaller weights.This view also supports **metrics histogram** as an alternative visualization. It is also the default view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can be used to remove outliers in the visualization and the metrics table below.
###Code
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result_1, slicing_column='trip_start_hour')
# Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above.
tfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC)
# Show overall metrics.
tfma.view.render_slicing_metrics(tfma_result_1)
###Output
_____no_output_____
###Markdown
Visualization: PlotsTFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_vis = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='vis',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# calibration_plot_and_prediction_histogram computes calibration plot and prediction
# distribution at different thresholds.
tfma.post_export_metrics.calibration_plot_and_prediction_histogram(),
# auc_plots enables precision-recall curve and ROC visualization at different thresholds.
tfma.post_export_metrics.auc_plots()
])
print('Done')
###Output
_____no_output_____
###Markdown
Plots must be visualized for an individual slice. To specify a slice, use ``tfma.slicer.SingleSliceSpec``.In the example below, we are using ``tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 0)])`` to specify the slice where trip_start_hour is 0.Plots are interactive:- Drag to pan- Scroll to zoom- Right click to reset the viewSimply hover over the desired data point to see more details.
###Code
tfma.view.render_plot(tfma_vis, tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 0)]))
###Output
_____no_output_____
###Markdown
Custom metricsIn addition to plots, it is also possible to compute additional metrics not present at export time or custom metrics using ``add_metrics_callbacks``.All metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics:https://www.tensorflow.org/api_docs/python/tf/metricsIn the cells below, the false negative rate is computed as an example.
###Code
# Defines a callback that adds FNR to the result.
def add_fnr_for_threshold(threshold):
def _add_fnr_callback(features_dict, predictions_dict, labels_dict):
metric_ops = {}
prediction_tensor = tf.cast(
predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)
fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0])
metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op))
return metric_ops
return _add_fnr_callback
tf.logging.set_verbosity(tf.logging.INFO)
tfma_fnr = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='fnr',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# Simply add the call here.
add_fnr_for_threshold(0.75)
])
tfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Visualization: Time SeriesIt is important to track how your model is doing over time. TFMA offers two modes to show how your model performs over time.**Multiple model analysis** shows how a model performs from one version to another. This is useful early on to see how the addition of new features, a change in modeling technique, etc., affects the performance. TFMA offers a convenient method.
###Code
help(tfma.multiple_model_analysis)
###Output
_____no_output_____
###Markdown
**Multiple data analysis** shows how a model performs under different evaluation data sets. This is useful to ensure that model performance does not degrade over time. TFMA offers a convenient method.
###Code
help(tfma.multiple_data_analysis)
###Output
_____no_output_____
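###Markdown
Before composing a time series manually, here is a hedged sketch of how the two helpers above might be invoked. It is not executed in this notebook: the model directories and the TFRecord path are hypothetical, and the assumption that extra keyword arguments such as ``slice_spec`` are forwarded to the underlying analysis should be checked against the ``help`` output above.
###Code
# Hedged sketch only: hypothetical paths, not executed here.
# multiple_model_analysis: several model versions, one evaluation data set.
sketch_eval_results = tfma.multiple_model_analysis(
    ['/tmp/eval_models/v1', '/tmp/eval_models/v2'],
    '/tmp/eval_data/raw_examples.tfrecord',
    slice_spec=[tfma.slicer.SingleSliceSpec()])
# multiple_data_analysis is the mirror image: one model, several data sets.
# sketch_eval_results = tfma.multiple_data_analysis(
#     '/tmp/eval_models/v1',
#     ['/tmp/eval_data/day1.tfrecord', '/tmp/eval_data/day2.tfrecord'],
#     slice_spec=[tfma.slicer.SingleSliceSpec()])
tfma.view.render_time_series(sketch_eval_results, tfma.slicer.SingleSliceSpec())
###Output
_____no_output_____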
###Markdown
It is also possible to compose a time series manually.
###Code
# Create different models.
# Run some experiments with different hidden layer configurations.
run_local_experiment(tft_run_id=0,
tf_run_id=1,
num_layers=3,
first_layer_size=200,
scale_factor=0.7)
run_local_experiment(tft_run_id=0,
tf_run_id=2,
num_layers=4,
first_layer_size=240,
scale_factor=0.5)
print('Done')
tfma_result_2 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=1,
tfma_run_id=2,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
tfma_result_3 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=2,
tfma_run_id=3,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
###Output
_____no_output_____
###Markdown
Like plots, the time series view must be visualized for a slice too.In the example below, we are showing the overall slice.Select a metric to see its time series graph. Hover over each data point to get more details.
###Code
eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Serialized results can also be used to construct a time series. Thus, there is no need to re-run TFMA for models already evaluated for a long running pipeline.
###Code
# Visualize the results in a Time Series. In this case, we are showing the slice specified.
eval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1),
get_tfma_output_dir(2),
get_tfma_output_dir(3)],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC)
###Output
_____no_output_____
###Markdown
TFMA Notebook exampleThis notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers. SetupImport necessary packages.
###Code
import apache_beam as beam
import os
import preprocess
import shutil
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from google.protobuf import text_format
from tensorflow.python.lib.io import file_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import schema_utils
from trainer import task
from trainer import taxi
###Output
_____no_output_____
###Markdown
Helper functions and some constants for running the notebook locally.
###Code
BASE_DIR = os.getcwd()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
# Base dir containing train and eval data
TRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train')
EVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval')
# Base dir where TFT writes training data
TFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train')
TFT_TRAIN_FILE_PREFIX = 'train_transformed'
# Base dir where TFT writes eval data
TFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval')
TFT_EVAL_FILE_PREFIX = 'eval_transformed'
TF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf')
# Base dir where TFMA writes eval data
TFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma')
SERVING_MODEL_DIR = 'serving_model_dir'
EVAL_MODEL_DIR = 'eval_model_dir'
def get_tft_train_output_dir(run_id):
return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id)
def get_tft_eval_output_dir(run_id):
return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id)
def get_tf_output_dir(run_id):
return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id)
def get_tfma_output_dir(run_id):
return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id)
def _get_output_dir(base_dir, run_id):
return os.path.join(base_dir, 'run_' + str(run_id))
def get_schema_file():
return os.path.join(OUTPUT_DIR, 'schema.pbtxt')
###Output
_____no_output_____
###Markdown
Clean up output directories.
###Code
shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(get_schema_file(), ignore_errors=True)
###Output
_____no_output_____
###Markdown
Compute and visualize descriptive data statistics
###Code
# Compute stats over training data.
train_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(TRAIN_DATA_DIR, 'data.csv'))
# Visualize training data stats.
tfdv.visualize_statistics(train_stats)
###Output
_____no_output_____
###Markdown
Infer a schema
###Code
# Infer a schema from the training data stats.
schema = tfdv.infer_schema(statistics=train_stats, infer_feature_shape=False)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Check evaluation data for errors
###Code
# Compute stats over eval data.
eval_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(EVAL_DATA_DIR, 'data.csv'))
# Compare stats of eval data with training data.
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
# Update the schema based on the observed anomalies.
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
###Output
_____no_output_____
###Markdown
Freeze the schemaNow that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state.
###Code
file_io.recursive_create_dir(OUTPUT_DIR)
file_io.write_string_to_file(get_schema_file(), text_format.MessageToString(schema))
###Output
_____no_output_____
###Markdown
Preprocess Inputstransform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow).
###Code
# Transform eval data
preprocess.transform_data(input_handle=os.path.join(EVAL_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_EVAL_FILE_PREFIX,
working_dir=get_tft_eval_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
# Transform training data
preprocess.transform_data(input_handle=os.path.join(TRAIN_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_TRAIN_FILE_PREFIX,
working_dir=get_tft_train_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
###Output
_____no_output_____
###Markdown
Compute statistics over transformed data
###Code
# Compute stats over transformed training data.
TRANSFORMED_TRAIN_DATA = os.path.join(get_tft_train_output_dir(0), TFT_TRAIN_FILE_PREFIX + "*")
transformed_train_stats = tfdv.generate_statistics_from_tfrecord(data_location=TRANSFORMED_TRAIN_DATA)
# Visualize transformed training data stats and compare to raw training data.
# Use 'Feature search' to focus on a feature and see statistics pre- and post-transformation.
tfdv.visualize_statistics(transformed_train_stats, train_stats, lhs_name='TRANSFORMED', rhs_name='RAW')
###Output
_____no_output_____
###Markdown
Prepare the ModelTo use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``.``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel`` but exports the evaluation graph as opposed to the training or inference graph. Notice that one of the inputs is ``eval_input_receiver_fn`` which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the documentation for TFMA on GitHub.Construct the **EvalSavedModel** after training is completed.
###Code
def run_experiment(hparams):
"""Run the training and evaluate using the high level API"""
# Train and evaluate the model as usual.
estimator = task.train_and_maybe_evaluate(hparams)
  # Export TFMA's special EvalSavedModel
eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR)
receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir)
tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=eval_model_dir,
eval_input_receiver_fn=receiver_fn)
def eval_input_receiver_fn(working_dir):
# Extract feature spec from the schema.
raw_feature_spec = schema_utils.schema_as_feature_spec(schema).feature_spec
serialized_tf_example = tf.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# First we deserialize our examples using the raw schema.
features = tf.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, we must process them through tft
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR),
features))
# The key MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
  # NOTE: Model is driven by transformed features (since training works on the
  # materialized output of TFT), but slicing will happen on raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=transformed_features[taxi.transformed_name(taxi.LABEL_KEY)])
print('Done')
###Output
_____no_output_____
###Markdown
Train and export the model for TFMA
###Code
def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor):
"""Helper method to train and export the model for TFMA
  The caller specifies the input and output directories by providing run ids. The optional parameters
  allow the user to change the model for the time series view.
  Args:
    tft_run_id: The run id for the preprocessing. Identifies the folder containing training data.
    tf_run_id: The run id for this training run. Identifies where the exported model will be written to.
    num_layers: The number of hidden layers.
    first_layer_size: The size of the first hidden layer.
    scale_factor: The scale factor between successive hidden layer sizes.
"""
hparams = tf.contrib.training.HParams(
# Inputs: are tf-transformed materialized features
train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'),
eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'),
schema_file=get_schema_file(),
# Output: dir for trained model
job_dir=get_tf_output_dir(tf_run_id),
tf_transform_dir=get_tft_train_output_dir(tft_run_id),
# Output: dir for both the serving model and eval_model which will go into tfma
# evaluation
output_dir=get_tf_output_dir(tf_run_id),
train_steps=10000,
eval_steps=5000,
num_layers=num_layers,
first_layer_size=first_layer_size,
scale_factor=scale_factor,
num_epochs=None,
train_batch_size=40,
eval_batch_size=40)
run_experiment(hparams)
print('Done')
run_local_experiment(tft_run_id=0,
tf_run_id=0,
num_layers=4,
first_layer_size=100,
scale_factor=0.7)
print('Done')
###Output
_____no_output_____
###Markdown
Run TFMA to compute metricsFor local analysis, TFMA offers a helper method ``tfma.run_model_analysis``
###Code
help(tfma.run_model_analysis)
###Output
_____no_output_____
###Markdown
You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation.
###Code
def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, schema_file, add_metrics_callbacks=None):
"""A simple wrapper function that runs tfma locally.
  A function that does extra transformations on the data and then runs model analysis.
  Args:
    slice_spec: The slicing spec for how to slice the data.
    tf_run_id: An id to construct the model directories with.
    tfma_run_id: An id to construct output directories with.
    input_csv: The evaluation data in csv format.
    schema_file: The file holding a text-serialized schema for the input data.
    add_metrics_callbacks: Optional list of callbacks for computing extra metrics.
Returns:
An EvalResult that can be used with TFMA visualization functions.
"""
eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR)
eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0])
eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=eval_model_dir,
add_metrics_callbacks=add_metrics_callbacks)
schema = taxi.read_schema(schema_file)
print(eval_model_dir)
display_only_data_location = input_csv
with beam.Pipeline() as pipeline:
csv_coder = taxi.make_csv_coder(schema)
raw_data = (
pipeline
| 'ReadFromText' >> beam.io.ReadFromText(
input_csv,
coder=beam.coders.BytesCoder(),
skip_header_lines=True)
| 'ParseCSV' >> beam.Map(csv_coder.decode))
# Examples must be in clean tf-example format.
coder = taxi.make_proto_coder(schema)
raw_data = (
raw_data
| 'ToSerializedTFExample' >> beam.Map(coder.encode))
_ = (raw_data
| 'ExtractEvaluateAndWriteResults' >>
tfma.ExtractEvaluateAndWriteResults(
eval_shared_model=eval_shared_model,
slice_spec=slice_spec,
output_path=get_tfma_output_dir(tfma_run_id),
display_only_data_location=input_csv))
return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id))
print('Done')
###Output
_____no_output_____
###Markdown
You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.slicer.SingleSliceSpec``.Below are examples of how slices can be specified.
###Code
# An empty slice spec means the overall slice, that is, the whole dataset.
OVERALL_SLICE_SPEC = tfma.slicer.SingleSliceSpec()
# Data can be sliced along a feature column
# In this case, data is sliced along feature column trip_start_hour.
FEATURE_COLUMN_SLICE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])
# Data can be sliced by crossing feature columns
# In this case, slices are computed for trip_start_day x trip_start_month.
FEATURE_COLUMN_CROSS_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month'])
# Metrics can be computed for a particular feature value.
# In this case, metrics is computed for all data where trip_start_hour is 12.
FEATURE_VALUE_SPEC = tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 12)])
# It is also possible to mix column cross and feature value cross.
# In this case, data where trip_start_hour is 12 will be sliced by trip_start_day.
COLUMN_CROSS_VALUE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])
ALL_SPECS = [
OVERALL_SLICE_SPEC,
FEATURE_COLUMN_SLICE_SPEC,
FEATURE_COLUMN_CROSS_SPEC,
FEATURE_VALUE_SPEC,
COLUMN_CROSS_VALUE_SPEC
]
###Output
_____no_output_____
###Markdown
Let's run TFMA!
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_result_1 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id=1,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
###Output
_____no_output_____
###Markdown
Visualization: Slicing MetricsTo see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall slice will be displayed.The default visualization is **slice overview** when the number of slices is small. It shows the value of a metric for each slice sorted by another metric. It is also possible to set a threshold to filter out slices with smaller weights.This view also supports **metrics histogram** as an alternative visualization. It is also the default view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can be used to remove outliers in the visualization and the metrics table below.
###Code
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result_1, slicing_column='trip_start_hour')
# Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above.
tfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC)
# Show overall metrics.
tfma.view.render_slicing_metrics(tfma_result_1)
###Output
_____no_output_____
###Markdown
Visualization: PlotsTFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_vis = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='vis',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# calibration_plot_and_prediction_histogram computes calibration plot and prediction
# distribution at different thresholds.
tfma.post_export_metrics.calibration_plot_and_prediction_histogram(),
# auc_plots enables precision-recall curve and ROC visualization at different thresholds.
tfma.post_export_metrics.auc_plots()
])
print('Done')
###Output
_____no_output_____
###Markdown
Plots must be visualized for an individual slice. To specify a slice, use ``tfma.slicer.SingleSliceSpec``.In the example below, we are using ``tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 0)])`` to specify the slice where trip_start_hour is 0.Plots are interactive:- Drag to pan- Scroll to zoom- Right click to reset the viewSimply hover over the desired data point to see more details.
###Code
tfma.view.render_plot(tfma_vis, tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 0)]))
###Output
_____no_output_____
###Markdown
Custom metricsIn addition to plots, it is also possible to compute additional metrics not present at export time or custom metrics using ``add_metrics_callbacks``.All metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics:https://www.tensorflow.org/api_docs/python/tf/metricsIn the cells below, the false negative rate is computed as an example.
###Code
# Defines a callback that adds FNR to the result.
def add_fnr_for_threshold(threshold):
def _add_fnr_callback(features_dict, predictions_dict, labels_dict):
metric_ops = {}
prediction_tensor = tf.cast(
predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)
fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0])
metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op))
return metric_ops
return _add_fnr_callback
tf.logging.set_verbosity(tf.logging.INFO)
tfma_fnr = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='fnr',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# Simply add the call here.
add_fnr_for_threshold(0.75)
])
tfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Visualization: Time SeriesIt is important to track how your model is doing over time. TFMA offers two modes to show how your model performs over time.**Multiple model analysis** shows how a model performs from one version to another. This is useful early on to see how the addition of new features, a change in modeling technique, etc., affects the performance. TFMA offers a convenient method.
###Code
help(tfma.multiple_model_analysis)
###Output
_____no_output_____
###Markdown
**Multiple data analysis** shows how a model performs under different evaluation data sets. This is useful to ensure that model performance does not degrade over time. TFMA offers a convenient method.
###Code
help(tfma.multiple_data_analysis)
###Output
_____no_output_____
###Markdown
It is also possible to compose a time series manually.
###Code
# Create different models.
# Run some experiments with different hidden layer configurations.
run_local_experiment(tft_run_id=0,
tf_run_id=1,
num_layers=3,
first_layer_size=200,
scale_factor=0.7)
run_local_experiment(tft_run_id=0,
tf_run_id=2,
num_layers=4,
first_layer_size=240,
scale_factor=0.5)
print('Done')
tfma_result_2 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=1,
tfma_run_id=2,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
tfma_result_3 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=2,
tfma_run_id=3,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
###Output
_____no_output_____
###Markdown
Like plots, the time series view must be visualized for a slice too.In the example below, we are showing the overall slice.Select a metric to see its time series graph. Hover over each data point to get more details.
###Code
eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Serialized results can also be used to construct a time series. Thus, there is no need to re-run TFMA for models already evaluated for a long running pipeline.
###Code
# Visualize the results in a Time Series. In this case, we are showing the slice specified.
eval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1),
get_tfma_output_dir(2),
get_tfma_output_dir(3)],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC)
###Output
_____no_output_____
###Markdown
TFMA Notebook exampleThis notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers. SetupImport necessary packages.
###Code
import apache_beam as beam
import os
import preprocess
import shutil
import tensorflow as tf
import tensorflow_model_analysis as tfma
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import dataset_schema
from trainer import task
from trainer import taxi
###Output
_____no_output_____
###Markdown
Helper functions and some constants for running the notebook locally.
###Code
BASE_DIR = os.getcwd()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
# Base dir where TFT writes training data
TFT_TRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train')
TFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train')
TFT_TRAIN_FILE_PREFIX = 'train_transformed'
# Base dir where TFT writes eval data
TFT_EVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval')
TFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval')
TFT_EVAL_FILE_PREFIX = 'eval_transformed'
TF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf')
# Base dir where TFMA writes eval data
TFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma')
SERVING_MODEL_DIR = 'serving_model_dir'
EVAL_MODEL_DIR = 'eval_model_dir'
def get_tft_train_output_dir(run_id):
return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id)
def get_tft_eval_output_dir(run_id):
return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id)
def get_tf_output_dir(run_id):
return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id)
def get_tfma_output_dir(run_id):
return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id)
def _get_output_dir(base_dir, run_id):
return os.path.join(base_dir, 'run_' + str(run_id))
###Output
_____no_output_____
###Markdown
Clean up output directories.
###Code
shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True)
###Output
_____no_output_____
###Markdown
Preprocess Inputstransform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow).
###Code
# Transform eval data
preprocess.transform_data(input_handle=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_EVAL_FILE_PREFIX,
working_dir=get_tft_eval_output_dir(0),
pipeline_args=['--runner=DirectRunner'])
print('Done')
# Transform training data
preprocess.transform_data(input_handle=os.path.join(TFT_TRAIN_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_TRAIN_FILE_PREFIX,
working_dir=get_tft_train_output_dir(0),
pipeline_args=['--runner=DirectRunner'])
print('Done')
###Output
_____no_output_____
###Markdown
Prepare the ModelTo use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``.``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel`` but exports the evaluation graph as opposed to the training or inference graph. Notice that one of the inputs is ``eval_input_receiver_fn`` which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the documentation for TFMA on GitHub.Construct the **EvalSavedModel** after training is completed.
###Code
def run_experiment(hparams):
"""Run the training and evaluate using the high level API"""
# Train and evaluate the model as usual.
estimator = task.train_and_maybe_evaluate(hparams)
  # Export TFMA's special EvalSavedModel
eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR)
receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir)
tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=eval_model_dir,
eval_input_receiver_fn=receiver_fn)
def eval_input_receiver_fn(working_dir):
raw_feature_spec = taxi.get_raw_feature_spec()
serialized_tf_example = tf.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# First we deserialize our examples using the raw schema.
features = tf.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, we must process them through tft
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR),
features))
# The key MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
return tfma.export.EvalInputReceiver(
features=transformed_features,
receiver_tensors=receiver_tensors,
labels=transformed_features[taxi.LABEL_KEY])
print('Done')
###Output
_____no_output_____
###Markdown
Train and export the model for TFMA
###Code
def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor):
"""Helper method to train and export the model for TFMA
  The caller specifies the input and output directories by providing run ids. The optional parameters
  allow the user to change the model for the time series view.
  Args:
    tft_run_id: The run id for the preprocessing. Identifies the folder containing training data.
    tf_run_id: The run id for this training run. Identifies where the exported model will be written to.
    num_layers: The number of hidden layers.
    first_layer_size: The size of the first hidden layer.
    scale_factor: The scale factor between successive hidden layer sizes.
"""
hparams = tf.contrib.training.HParams(
# Inputs: are tf-transformed materialized features
train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'),
eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'),
# Output: dir for trained model
job_dir=get_tf_output_dir(tf_run_id),
tf_transform_dir=get_tft_train_output_dir(tft_run_id),
# Output: dir for both the serving model and eval_model which will go into tfma
# evaluation
output_dir=get_tf_output_dir(tf_run_id),
train_steps=10000,
eval_steps=5000,
num_layers=num_layers,
first_layer_size=first_layer_size,
scale_factor=scale_factor,
num_epochs=None,
train_batch_size=40,
eval_batch_size=40)
run_experiment(hparams)
print('Done.')
run_local_experiment(tft_run_id=0,
tf_run_id=0,
num_layers=4,
first_layer_size=100,
scale_factor=0.7)
print('Done.')
###Output
_____no_output_____
###Markdown
Run TFMA to compute metricsFor local analysis, TFMA offers a helper method ``tfma.run_model_analysis``
###Code
help(tfma.run_model_analysis)
###Output
_____no_output_____
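###Markdown
As a hedged sketch (not executed in this notebook), a direct call to ``tfma.run_model_analysis`` with this older TFMA API might look like the cell below. The parameter names ``model_location`` and ``data_location`` are assumptions, and the TFRecord path is hypothetical (it would need to contain raw serialized tf.Examples, which is why the next cells build a custom Beam pipeline that converts the CSV instead); confirm the exact signature from the ``help`` output above.
###Code
# Hedged sketch only: parameter names are assumptions for this TFMA version and
# the TFRecord path is hypothetical; the EvalSavedModel consumes raw tf.Examples.
sketch_eval_model_base_dir = os.path.join(get_tf_output_dir(0), EVAL_MODEL_DIR)
sketch_eval_model_dir = os.path.join(
    sketch_eval_model_base_dir, next(os.walk(sketch_eval_model_base_dir))[1][0])
sketch_result = tfma.run_model_analysis(
    model_location=sketch_eval_model_dir,
    data_location='/tmp/eval_data/raw_examples.tfrecord',
    slice_spec=[tfma.SingleSliceSpec(columns=['trip_start_hour'])],
    output_path=get_tfma_output_dir('run_model_analysis_sketch'))
tfma.view.render_slicing_metrics(sketch_result, slicing_column='trip_start_hour')
###Output
_____no_output_____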
###Markdown
You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation.
###Code
def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, add_metrics_callbacks=None):
"""A simple wrapper function that runs tfma locally.
  A function that does extra transformations on the data and then runs model analysis.
  Args:
    slice_spec: The slicing spec for how to slice the data.
    tf_run_id: An id to construct the model directories with.
    tfma_run_id: An id to construct output directories with.
    input_csv: The evaluation data in csv format.
    add_metrics_callbacks: Optional list of callbacks for computing extra metrics.
Returns:
An EvalResult that can be used with TFMA visualization functions.
"""
eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR)
eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0])
display_only_data_location = input_csv
with beam.Pipeline() as pipeline:
csv_coder = taxi.make_csv_coder()
raw_data = (
pipeline
| 'ReadFromText' >> beam.io.ReadFromText(
input_csv,
coder=beam.coders.BytesCoder(),
skip_header_lines=True)
| 'ParseCSV' >> beam.Map(csv_coder.decode))
# Examples must be in clean tf-example format.
raw_feature_spec = taxi.get_raw_feature_spec()
raw_schema = dataset_schema.from_feature_spec(raw_feature_spec)
coder = example_proto_coder.ExampleProtoCoder(raw_schema)
raw_data = (
raw_data
| 'CleanData' >> beam.Map(taxi.clean_raw_data_dict)
| 'ToSerializedTFExample' >> beam.Map(coder.encode))
_ = raw_data | 'EvaluateAndWriteResults' >> tfma.EvaluateAndWriteResults(
eval_saved_model_path=eval_model_dir,
slice_spec=slice_spec,
output_path=get_tfma_output_dir(tfma_run_id),
add_metrics_callbacks=add_metrics_callbacks,
display_only_data_location=input_csv)
return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id))
print('Done')
###Output
_____no_output_____
###Markdown
You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.SingleSliceSpec``.Below are examples of how slices can be specified.
###Code
# An empty slice spec means the overall slice, that is, the whole dataset.
OVERALL_SLICE_SPEC = tfma.SingleSliceSpec()
# Data can be sliced along a feature column
# In this case, data is sliced along feature column trip_start_hour.
FEATURE_COLUMN_SLICE_SPEC = tfma.SingleSliceSpec(columns=['trip_start_hour'])
# Data can be sliced by crossing feature columns
# In this case, slices are computed for trip_start_day x trip_start_month.
FEATURE_COLUMN_CROSS_SPEC = tfma.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month'])
# Metrics can be computed for a particular feature value.
# In this case, metrics is computed for all data where trip_start_hour is 12.
FEATURE_VALUE_SPEC = tfma.SingleSliceSpec(features=[('trip_start_hour', 12)])
# It is also possible to mix column cross and feature value cross.
# In this case, data where trip_start_hour is 12 will be sliced by trip_start_day.
COLUMN_CROSS_VALUE_SPEC = tfma.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])
ALL_SPECS = [
OVERALL_SLICE_SPEC,
FEATURE_COLUMN_SLICE_SPEC,
FEATURE_COLUMN_CROSS_SPEC,
FEATURE_VALUE_SPEC,
COLUMN_CROSS_VALUE_SPEC
]
###Output
_____no_output_____
###Markdown
Let's run TFMA!
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_result_1 = run_tfma(input_csv=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id=1,
slice_spec=ALL_SPECS)
print('Done.')
###Output
_____no_output_____
###Markdown
Visualization: Slicing MetricsTo see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall slice will be displayed.The default visualization is **slice overview** when the number of slices is small. It shows the value of a metric for each slice sorted by another metric. It is also possible to set a threshold to filter out slices with smaller weights.This view also supports **metrics histogram** as an alternative visualization. It is also the default view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can be used to remove outliers in the visualization and the metrics table below.
###Code
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result_1, slicing_column='trip_start_hour')
# Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above.
tfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC)
# Show overall metrics.
tfma.view.render_slicing_metrics(tfma_result_1)
###Output
_____no_output_____
###Markdown
Visualization: PlotsTFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_vis = run_tfma(input_csv=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='vis',
slice_spec=ALL_SPECS,
add_metrics_callbacks=[
# calibration_plot_and_prediction_histogram computes calibration plot and prediction
# distribution at different thresholds.
tfma.post_export_metrics.post_export_metrics.calibration_plot_and_prediction_histogram(),
# auc_plots enables precision-recall curve and ROC visualization at different thresholds.
tfma.post_export_metrics.post_export_metrics.auc_plots()
])
print('Done.')
###Output
_____no_output_____
###Markdown
Plots must be visualized for an individual slice. To specify a slice, use ``tfma.SingleSliceSpec``.In the example below, we are using ``tfma.SingleSliceSpec(features=[('trip_start_hour', 0)])`` to specify the slice where trip_start_hour is 0.Plots are interactive:- Drag to pan- Scroll to zoom- Right click to reset the viewSimply hover over the desired data point to see more details.
###Code
tfma.view.render_plot(tfma_vis, tfma.SingleSliceSpec(features=[('trip_start_hour', 0)]))
###Output
_____no_output_____
###Markdown
Custom metricsIn addition to plots, it is also possible to compute additional metrics not present at export time or custom metrics using ``add_metrics_callbacks``.All metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics:https://www.tensorflow.org/api_docs/python/tf/metricsIn the cells below, the false negative rate is computed as an example.
###Code
# Defines a callback that adds FNR to the result.
def add_fnr_for_threshold(threshold):
def _add_fnr_callback(features_dict, predictions_dict, labels_dict):
metric_ops = {}
prediction_tensor = tf.cast(
predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)
fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0])
metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op))
return metric_ops
return _add_fnr_callback
tf.logging.set_verbosity(tf.logging.INFO)
tfma_fnr = run_tfma(input_csv=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='fnr',
slice_spec=ALL_SPECS,
add_metrics_callbacks=[
# Simply add the call here.
add_fnr_for_threshold(0.75)
])
tfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Visualization: Time SeriesIt is important to track how your model is doing over time. TFMA offers two modes to show how your model performs over time.**Multiple model analysis** shows how a model performs from one version to another. This is useful early on to see how the addition of new features, a change in modeling technique, etc., affects the performance. TFMA offers a convenient method.
###Code
help(tfma.multiple_model_analysis)
###Output
_____no_output_____
###Markdown
**Multiple data analysis** shows how a model performs under different evaluation data sets. This is useful to ensure that model performance does not degrade over time. TFMA offers a convenient method.
###Code
help(tfma.multiple_data_analysis)
###Output
_____no_output_____
###Markdown
It is also possible to compose a time series manually.
###Code
# Create different models.
# Run some experiments with different hidden layer configurations.
run_local_experiment(tft_run_id=0,
tf_run_id=1,
num_layers=3,
first_layer_size=200,
scale_factor=0.7)
run_local_experiment(tft_run_id=0,
tf_run_id=2,
num_layers=4,
first_layer_size=240,
scale_factor=0.5)
print('Done.')
tfma_result_2 = run_tfma(input_csv=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
tf_run_id=1,
tfma_run_id=2,
slice_spec=ALL_SPECS)
tfma_result_3 = run_tfma(input_csv=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
tf_run_id=2,
tfma_run_id=3,
slice_spec=ALL_SPECS)
print('Done.')
###Output
_____no_output_____
###Markdown
Like plots, the time series view must be visualized for a slice too.In the example below, we are showing the overall slice.Select a metric to see its time series graph. Hover over each data point to get more details.
###Code
eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Serialized results can also be used to construct a time series. Thus, there is no need to re-run TFMA for models already evaluated for a long running pipeline.
###Code
# Visualize the results in a Time Series. In this case, we are showing the slice specified.
eval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1),
get_tfma_output_dir(2),
get_tfma_output_dir(3)],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC)
###Output
_____no_output_____
###Markdown
TFMA Notebook exampleThis notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers. SetupImport necessary packages.
###Code
import apache_beam as beam
import os
import preprocess
import shutil
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from google.protobuf import text_format
from tensorflow.python.lib.io import file_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import schema_utils
from trainer import task
from trainer import taxi
###Output
_____no_output_____
###Markdown
Helper functions and some constants for running the notebook locally.
###Code
BASE_DIR = os.getcwd()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
# Base dir containing train and eval data
TRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train')
EVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval')
# Base dir where TFT writes training data
TFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train')
TFT_TRAIN_FILE_PREFIX = 'train_transformed'
# Base dir where TFT writes eval data
TFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval')
TFT_EVAL_FILE_PREFIX = 'eval_transformed'
TF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf')
# Base dir where TFMA writes eval data
TFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma')
SERVING_MODEL_DIR = 'serving_model_dir'
EVAL_MODEL_DIR = 'eval_model_dir'
def get_tft_train_output_dir(run_id):
return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id)
def get_tft_eval_output_dir(run_id):
return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id)
def get_tf_output_dir(run_id):
return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id)
def get_tfma_output_dir(run_id):
return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id)
def _get_output_dir(base_dir, run_id):
return os.path.join(base_dir, 'run_' + str(run_id))
def get_schema_file():
return os.path.join(OUTPUT_DIR, 'schema.pbtxt')
###Output
_____no_output_____
###Markdown
Clean up output directories.
###Code
shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(get_schema_file(), ignore_errors=True)
###Output
_____no_output_____
###Markdown
Compute and visualize descriptive data statistics
###Code
# Compute stats over training data.
train_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(TRAIN_DATA_DIR, 'data.csv'))
# Visualize training data stats.
tfdv.visualize_statistics(train_stats)
###Output
_____no_output_____
###Markdown
Infer a schema
###Code
# Infer a schema from the training data stats.
schema = tfdv.infer_schema(statistics=train_stats, infer_feature_shape=False)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Check evaluation data for errors
###Code
# Compute stats over eval data.
eval_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(EVAL_DATA_DIR, 'data.csv'))
# Compare stats of eval data with training data.
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
# Update the schema based on the observed anomalies.
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
###Output
_____no_output_____
###Markdown
Freeze the schemaNow that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state.
###Code
file_io.recursive_create_dir(OUTPUT_DIR)
file_io.write_string_to_file(get_schema_file(), text_format.MessageToString(schema))
###Output
_____no_output_____
###Markdown
Preprocess Inputstransform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow).
###Code
# Transform eval data
preprocess.transform_data(input_handle=os.path.join(EVAL_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_EVAL_FILE_PREFIX,
working_dir=get_tft_eval_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
# Transform training data
preprocess.transform_data(input_handle=os.path.join(TRAIN_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_TRAIN_FILE_PREFIX,
working_dir=get_tft_train_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
###Output
_____no_output_____
###Markdown
Compute statistics over transformed data
###Code
# Compute stats over transformed training data.
TRANSFORMED_TRAIN_DATA = os.path.join(get_tft_train_output_dir(0), TFT_TRAIN_FILE_PREFIX + "*")
transformed_train_stats = tfdv.generate_statistics_from_tfrecord(data_location=TRANSFORMED_TRAIN_DATA)
# Visualize transformed training data stats and compare to raw training data.
# Use 'Feature search' to focus on a feature and see statistics pre- and post-transformation.
tfdv.visualize_statistics(transformed_train_stats, train_stats, lhs_name='TRANSFORMED', rhs_name='RAW')
###Output
_____no_output_____
###Markdown
Prepare the ModelTo use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``.``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel`` but exports the evaluation graph as opposed to the training or inference graph. Notice that one of the inputs is ``eval_input_receiver_fn`` which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the documentation for TFMA on GitHub.Construct the **EvalSavedModel** after training is completed.
###Code
def run_experiment(hparams):
"""Run the training and evaluate using the high level API"""
# Train and evaluate the model as usual.
estimator = task.train_and_maybe_evaluate(hparams)
  # Export TFMA's special EvalSavedModel
eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR)
receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir)
tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=eval_model_dir,
eval_input_receiver_fn=receiver_fn)
def eval_input_receiver_fn(working_dir):
# Extract feature spec from the schema.
raw_feature_spec = schema_utils.schema_as_feature_spec(schema).feature_spec
serialized_tf_example = tf.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# First we deserialize our examples using the raw schema.
features = tf.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, we must process them through tft
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR),
features))
# The key MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
  # NOTE: Model is driven by transformed features (since training works on the
  # materialized output of TFT), but slicing will happen on raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=transformed_features[taxi.transformed_name(taxi.LABEL_KEY)])
print('Done')
###Output
_____no_output_____
###Markdown
Train and export the model for TFMA
###Code
def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor):
"""Helper method to train and export the model for TFMA
The caller specifies the input and output directory by providing run ids. The optional parameters
allows the user to change the modelfor time series view.
Args:
tft_run_id: The run id for the preprocessing. Identifies the folder containing training data.
tf_run_id: The run id for this training run. Identifies where the exported model will be written.
num_layers: The number of hidden layers in the DNN.
first_layer_size: The size of the first hidden layer.
scale_factor: The scale factor between successive hidden layer sizes.
"""
hparams = tf.contrib.training.HParams(
# Inputs: tf-transformed materialized features
train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'),
eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'),
schema_file=get_schema_file(),
# Output: dir for trained model
job_dir=get_tf_output_dir(tf_run_id),
tf_transform_dir=get_tft_train_output_dir(tft_run_id),
# Output: dir for both the serving model and eval_model which will go into tfma
# evaluation
output_dir=get_tf_output_dir(tf_run_id),
train_steps=10000,
eval_steps=5000,
num_layers=num_layers,
first_layer_size=first_layer_size,
scale_factor=scale_factor,
num_epochs=None,
train_batch_size=40,
eval_batch_size=40)
run_experiment(hparams)
print('Done.')
run_local_experiment(tft_run_id=0,
tf_run_id=0,
num_layers=4,
first_layer_size=100,
scale_factor=0.7)
print('Done.')
###Output
_____no_output_____
###Markdown
Run TFMA to compute metrics. For local analysis, TFMA offers the helper method ``tfma.run_model_analysis``.
###Code
help(tfma.run_model_analysis)
###Output
_____no_output_____
###Markdown
You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation.
###Code
def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, schema_file, add_metrics_callbacks=None):
"""A simple wrapper function that runs tfma locally.
A function that does extra transformations on the data and then runs model analysis.
Args:
slice_spec: The slicing spec for how to slice the data.
tf_run_id: An id to construct the model directories with.
tfma_run_id: An id to construct output directories with.
input_csv: The evaluation data in csv format.
schema_file: The file holding a text-serialized schema for the input data.
add_metrics_callbacks: Optional list of callbacks for computing extra metrics.
Returns:
An EvalResult that can be used with TFMA visualization functions.
"""
eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR)
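# export_eval_savedmodel writes into a timestamped subdirectory of this base dir; pick the first one found.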
eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0])
schema = taxi.read_schema(schema_file)
print(eval_model_dir)
display_only_data_location = input_csv
with beam.Pipeline() as pipeline:
csv_coder = taxi.make_csv_coder(schema)
raw_data = (
pipeline
| 'ReadFromText' >> beam.io.ReadFromText(
input_csv,
coder=beam.coders.BytesCoder(),
skip_header_lines=True)
| 'ParseCSV' >> beam.Map(csv_coder.decode))
# Examples must be in clean tf-example format.
coder = taxi.make_proto_coder(schema)
raw_data = (
raw_data
| 'ToSerializedTFExample' >> beam.Map(coder.encode))
_ = raw_data | 'EvaluateAndWriteResults' >> tfma.EvaluateAndWriteResults(
eval_saved_model_path=eval_model_dir,
slice_spec=slice_spec,
output_path=get_tfma_output_dir(tfma_run_id),
add_metrics_callbacks=add_metrics_callbacks,
display_only_data_location=input_csv)
return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id))
print('Done')
###Output
_____no_output_____
###Markdown
You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.SingleSliceSpec``. Below are examples of how slices can be specified.
###Code
# An empty slice spec means the overall slice, that is, the whole dataset.
OVERALL_SLICE_SPEC = tfma.SingleSliceSpec()
# Data can be sliced along a feature column
# In this case, data is sliced along feature column trip_start_hour.
FEATURE_COLUMN_SLICE_SPEC = tfma.SingleSliceSpec(columns=['trip_start_hour'])
# Data can be sliced by crossing feature columns
# In this case, slices are computed for trip_start_day x trip_start_month.
FEATURE_COLUMN_CROSS_SPEC = tfma.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month'])
# Metrics can be computed for a particular feature value.
# In this case, metrics are computed for all data where trip_start_hour is 12.
FEATURE_VALUE_SPEC = tfma.SingleSliceSpec(features=[('trip_start_hour', 12)])
# It is also possible to mix column cross and feature value cross.
# In this case, data where trip_start_hour is 12 will be sliced by trip_start_day.
COLUMN_CROSS_VALUE_SPEC = tfma.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])
ALL_SPECS = [
OVERALL_SLICE_SPEC,
FEATURE_COLUMN_SLICE_SPEC,
FEATURE_COLUMN_CROSS_SPEC,
FEATURE_VALUE_SPEC,
COLUMN_CROSS_VALUE_SPEC
]
###Output
_____no_output_____
###Markdown
Let's run TFMA!
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_result_1 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id=1,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done.')
###Output
_____no_output_____
###Markdown
Visualization: Slicing Metrics. To see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall slice will be displayed. The default visualization is the **slice overview** when the number of slices is small. It shows the value of a metric for each slice, sorted by another metric. It is also possible to set a threshold to filter out slices with smaller weights. This view also supports a **metrics histogram** as an alternative visualization, which is also the default view when the number of slices is large. The results are divided into buckets, and the number of slices, the total weights, or both can be visualized. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band; to reset the range, double-click the band. Filtering can be used to remove outliers in the visualization and in the metrics table below.
###Code
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result_1, slicing_column='trip_start_hour')
# Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above.
tfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC)
# Show overall metrics.
tfma.view.render_slicing_metrics(tfma_result_1)
###Output
_____no_output_____
###Markdown
Visualization: Plots. TFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``.
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_vis = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='vis',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# calibration_plot_and_prediction_histogram computes calibration plot and prediction
# distribution at different thresholds.
tfma.post_export_metrics.post_export_metrics.calibration_plot_and_prediction_histogram(),
# auc_plots enables precision-recall curve and ROC visualization at different thresholds.
tfma.post_export_metrics.post_export_metrics.auc_plots()
])
print('Done.')
###Output
_____no_output_____
###Markdown
Plots must be visualized for an individual slice. To specify a slice, use ``tfma.SingleSliceSpec``. In the example below, we use ``tfma.SingleSliceSpec(features=[('trip_start_hour', 0)])`` to specify the slice where trip_start_hour is 0. Plots are interactive: drag to pan, scroll to zoom, and right-click to reset the view. Simply hover over the desired data point to see more details.
###Code
tfma.view.render_plot(tfma_vis, tfma.SingleSliceSpec(features=[('trip_start_hour', 0)]))
###Output
_____no_output_____
###Markdown
Custom metrics. In addition to plots, it is also possible to compute additional metrics that were not present at export time, including custom metrics, using ``add_metrics_callbacks``. All metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics: https://www.tensorflow.org/api_docs/python/tf/metrics. In the cells below, the false negative rate is computed as an example.
###Code
# Defines a callback that adds FNR to the result.
def add_fnr_for_threshold(threshold):
def _add_fnr_callback(features_dict, predictions_dict, labels_dict):
metric_ops = {}
prediction_tensor = tf.cast(
predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)
fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
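# False negative rate at this threshold: FN / (FN + TP).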
fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0])
metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op))
return metric_ops
return _add_fnr_callback
tf.logging.set_verbosity(tf.logging.INFO)
tfma_fnr = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='fnr',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# Simply add the call here.
add_fnr_for_threshold(0.75)
])
tfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Visualization: Time Series. It is important to track how your model is doing over time. TFMA offers two modes to show how your model performs over time. **Multiple model analysis** shows how a model performs from one version to another. This is useful early on to see how the addition of new features, a change in modeling technique, etc., affects performance. TFMA offers a convenient method for this.
###Code
help(tfma.multiple_model_analysis)
###Output
_____no_output_____
###Markdown
**Multiple data analysis** shows how a model performs on different evaluation data sets. This is useful to ensure that model performance does not degrade over time. TFMA offers a convenient method for this as well.
###Code
help(tfma.multiple_data_analysis)
###Output
_____no_output_____
###Markdown
It is also possible to compose a time series manually.
###Code
# Create different models.
# Run some experiments with different hidden layer configurations.
run_local_experiment(tft_run_id=0,
tf_run_id=1,
num_layers=3,
first_layer_size=200,
scale_factor=0.7)
run_local_experiment(tft_run_id=0,
tf_run_id=2,
num_layers=4,
first_layer_size=240,
scale_factor=0.5)
print('Done.')
tfma_result_2 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=1,
tfma_run_id=2,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
tfma_result_3 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=2,
tfma_run_id=3,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done.')
###Output
_____no_output_____
###Markdown
Like plots, the time series view must be visualized for a slice too. In the example below, we show the overall slice. Select a metric to see its time series graph, and hover over each data point to get more details.
###Code
eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Serialized results can also be used to construct a time series, so there is no need to re-run TFMA for models that have already been evaluated in a long-running pipeline.
###Code
# Visualize the results in a Time Series. In this case, we are showing the slice specified.
eval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1),
get_tfma_output_dir(2),
get_tfma_output_dir(3)],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC)
###Output
_____no_output_____
###Markdown
TFMA Notebook example. This notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers. Setup: import the necessary packages.
###Code
import apache_beam as beam
import os
import preprocess
import shutil
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from google.protobuf import text_format
from tensorflow.python.lib.io import file_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import schema_utils
from trainer import task
from trainer import taxi
###Output
_____no_output_____
###Markdown
Helper functions and some constants for running the notebook locally.
###Code
BASE_DIR = os.getcwd()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
# Base dir containing train and eval data
TRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train')
EVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval')
# Base dir where TFT writes training data
TFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train')
TFT_TRAIN_FILE_PREFIX = 'train_transformed'
# Base dir where TFT writes eval data
TFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval')
TFT_EVAL_FILE_PREFIX = 'eval_transformed'
TF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf')
# Base dir where TFMA writes eval data
TFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma')
SERVING_MODEL_DIR = 'serving_model_dir'
EVAL_MODEL_DIR = 'eval_model_dir'
def get_tft_train_output_dir(run_id):
return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id)
def get_tft_eval_output_dir(run_id):
return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id)
def get_tf_output_dir(run_id):
return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id)
def get_tfma_output_dir(run_id):
return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id)
def _get_output_dir(base_dir, run_id):
return os.path.join(base_dir, 'run_' + str(run_id))
def get_schema_file():
return os.path.join(OUTPUT_DIR, 'schema.pbtxt')
###Output
_____no_output_____
###Markdown
Clean up output directories.
###Code
shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(get_schema_file(), ignore_errors=True)
###Output
_____no_output_____
###Markdown
Compute and visualize descriptive data statistics
###Code
# Compute stats over training data.
train_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(TRAIN_DATA_DIR, 'data.csv'))
print(train_stats)
# Visualize training data stats.
tfdv.visualize_statistics(train_stats)
###Output
_____no_output_____
###Markdown
Infer a schema
###Code
# Infer a schema from the training data stats.
schema = tfdv.infer_schema(statistics=train_stats, infer_feature_shape=False)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Check evaluation data for errors
###Code
# Compute stats over eval data.
eval_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(EVAL_DATA_DIR, 'data.csv'))
# Compare stats of eval data with training data.
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
# Update the schema based on the observed anomalies.
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
###Output
_____no_output_____
###Markdown
Freeze the schema. Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state.
###Code
file_io.recursive_create_dir(OUTPUT_DIR)
file_io.write_string_to_file(get_schema_file(), text_format.MessageToString(schema))
###Output
_____no_output_____
###Markdown
Preprocess Inputs. transform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow).
###Code
# Transform eval data
preprocess.transform_data(input_handle=os.path.join(EVAL_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_EVAL_FILE_PREFIX,
working_dir=get_tft_eval_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
# Transform training data
preprocess.transform_data(input_handle=os.path.join(TRAIN_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_TRAIN_FILE_PREFIX,
working_dir=get_tft_train_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
###Output
fare 3 3
trip_start_hour 2 2
pickup_census_tract 1 1
Dropping feature with min_count=0: pickup_census_tract
dropoff_census_tract 3 3
company 1 1
trip_start_timestamp 2 2
pickup_longitude 3 3
trip_start_month 2 2
trip_miles 3 3
dropoff_longitude 3 3
dropoff_community_area 3 3
pickup_community_area 2 2
payment_type 1 1
trip_seconds 3 3
trip_start_day 2 2
tips 3 3
pickup_latitude 3 3
dropoff_latitude 3 3
{0: [], 1: [u'company', u'payment_type'], 2: [u'trip_start_hour', u'trip_start_timestamp', u'trip_start_month', u'pickup_community_area', u'trip_start_day'], 3: [u'fare', u'dropoff_census_tract', u'pickup_longitude', u'trip_miles', u'dropoff_longitude', u'dropoff_community_area', u'trip_seconds', u'tips', u'pickup_latitude', u'dropoff_latitude']}
INFO:tensorflow:Assets added to graph.
###Markdown
Compute statistics over transformed data
###Code
# Compute stats over transformed training data.
TRANSFORMED_TRAIN_DATA = os.path.join(get_tft_train_output_dir(0), TFT_TRAIN_FILE_PREFIX + "*")
transformed_train_stats = tfdv.generate_statistics_from_tfrecord(data_location=TRANSFORMED_TRAIN_DATA)
# Visualize transformed training data stats and compare to raw training data.
# Use 'Feature search' to focus on a feature and see statistics pre- and post-transformation.
tfdv.visualize_statistics(transformed_train_stats, train_stats, lhs_name='TRANSFORMED', rhs_name='RAW')
###Output
_____no_output_____
###Markdown
Prepare the Model. To use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``. ``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel``, but it exports the evaluation graph as opposed to the training or inference graph. Notice that one of its inputs is ``eval_input_receiver_fn``, which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the TFMA documentation on GitHub. Construct the **EvalSavedModel** after training is completed.
###Code
def run_experiment(hparams):
"""Run the training and evaluate using the high level API"""
# Train and evaluate the model as usual.
estimator = task.train_and_maybe_evaluate(hparams)
# Export TFMA's special EvalSavedModel
eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR)
receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir)
tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=eval_model_dir,
eval_input_receiver_fn=receiver_fn)
def eval_input_receiver_fn(working_dir):
# Extract feature spec from the schema.
raw_feature_spec = schema_utils.schema_as_feature_spec(schema).feature_spec
serialized_tf_example = tf.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# First we deserialize our examples using the raw schema.
features = tf.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, we must process them through tft
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR),
features))
# The key MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
# NOTE: The model is driven by the transformed features (since training works on the
# materialized output of TFT), but slicing will happen on the raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=transformed_features[taxi.transformed_name(taxi.LABEL_KEY)])
print('Done')
###Output
_____no_output_____
###Markdown
Train and export the model for TFMA
###Code
def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor):
"""Helper method to train and export the model for TFMA
The caller specifies the input and output directory by providing run ids. The optional parameters
allows the user to change the modelfor time series view.
Args:
tft_run_id: The run id for the preprocessing. Identifies the folder containing training data.
tf_run_id: The run id for this training run. Identifies where the exported model will be written.
num_layers: The number of hidden layers in the DNN.
first_layer_size: The size of the first hidden layer.
scale_factor: The scale factor between successive hidden layer sizes.
"""
hparams = tf.contrib.training.HParams(
# Inputs: tf-transformed materialized features
train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'),
eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'),
schema_file=get_schema_file(),
# Output: dir for trained model
job_dir=get_tf_output_dir(tf_run_id),
tf_transform_dir=get_tft_train_output_dir(tft_run_id),
# Output: dir for both the serving model and eval_model which will go into tfma
# evaluation
output_dir=get_tf_output_dir(tf_run_id),
train_steps=10000,
eval_steps=5000,
num_layers=num_layers,
first_layer_size=first_layer_size,
scale_factor=scale_factor,
num_epochs=None,
train_batch_size=40,
eval_batch_size=40)
run_experiment(hparams)
print('Done')
run_local_experiment(tft_run_id=0,
tf_run_id=0,
num_layers=4,
first_layer_size=100,
scale_factor=0.7)
print('Done')
###Output
_____no_output_____
###Markdown
Run TFMA to compute metrics. For local analysis, TFMA offers the helper method ``tfma.run_model_analysis``.
###Code
help(tfma.run_model_analysis)
###Output
_____no_output_____
###Markdown
You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation.
###Code
def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, schema_file, add_metrics_callbacks=None):
"""A simple wrapper function that runs tfma locally.
A function that does extra transformations on the data and then runs model analysis.
Args:
slice_spec: The slicing spec for how to slice the data.
tf_run_id: An id to construct the model directories with.
tfma_run_id: An id to construct output directories with.
input_csv: The evaluation data in csv format.
schema_file: The file holding a text-serialized schema for the input data.
add_metrics_callbacks: Optional list of callbacks for computing extra metrics.
Returns:
An EvalResult that can be used with TFMA visualization functions.
"""
eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR)
eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0])
eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=eval_model_dir,
add_metrics_callbacks=add_metrics_callbacks)
schema = taxi.read_schema(schema_file)
print(eval_model_dir)
display_only_data_location = input_csv
with beam.Pipeline() as pipeline:
csv_coder = taxi.make_csv_coder(schema)
raw_data = (
pipeline
| 'ReadFromText' >> beam.io.ReadFromText(
input_csv,
coder=beam.coders.BytesCoder(),
skip_header_lines=True)
| 'ParseCSV' >> beam.Map(csv_coder.decode))
# Examples must be in clean tf-example format.
coder = taxi.make_proto_coder(schema)
raw_data = (
raw_data
| 'ToSerializedTFExample' >> beam.Map(coder.encode))
_ = (raw_data
| 'ExtractEvaluateAndWriteResults' >>
tfma.ExtractEvaluateAndWriteResults(
eval_shared_model=eval_shared_model,
slice_spec=slice_spec,
output_path=get_tfma_output_dir(tfma_run_id),
display_only_data_location=input_csv))
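# ExtractEvaluateAndWriteResults writes the metrics and plots under output_path;
# load_eval_result below reads them back as an EvalResult for visualization.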
return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id))
print('Done')
###Output
_____no_output_____
###Markdown
You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.slicer.SingleSliceSpec``. Below are examples of how slices can be specified.
###Code
# An empty slice spec means the overall slice, that is, the whole dataset.
OVERALL_SLICE_SPEC = tfma.slicer.slicer.SingleSliceSpec()
# Data can be sliced along a feature column
# In this case, data is sliced along feature column trip_start_hour.
FEATURE_COLUMN_SLICE_SPEC = tfma.slicer.slicer.SingleSliceSpec(columns=['trip_start_hour'])
# Data can be sliced by crossing feature columns
# In this case, slices are computed for trip_start_day x trip_start_month.
FEATURE_COLUMN_CROSS_SPEC = tfma.slicer.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month'])
# Metrics can be computed for a particular feature value.
# In this case, metrics are computed for all data where trip_start_hour is 12.
FEATURE_VALUE_SPEC = tfma.slicer.slicer.SingleSliceSpec(features=[('trip_start_hour', 12)])
# It is also possible to mix column cross and feature value cross.
# In this case, data where trip_start_hour is 12 will be sliced by trip_start_day.
COLUMN_CROSS_VALUE_SPEC = tfma.slicer.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])
ALL_SPECS = [
OVERALL_SLICE_SPEC,
FEATURE_COLUMN_SLICE_SPEC,
FEATURE_COLUMN_CROSS_SPEC,
FEATURE_VALUE_SPEC,
COLUMN_CROSS_VALUE_SPEC
]
###Output
_____no_output_____
###Markdown
Let's run TFMA!
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_result_1 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id=1,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
###Output
_____no_output_____
###Markdown
Visualization: Slicing Metrics. To see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall slice will be displayed. The default visualization is the **slice overview** when the number of slices is small. It shows the value of a metric for each slice, sorted by another metric. It is also possible to set a threshold to filter out slices with smaller weights. This view also supports a **metrics histogram** as an alternative visualization, which is also the default view when the number of slices is large. The results are divided into buckets, and the number of slices, the total weights, or both can be visualized. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band; to reset the range, double-click the band. Filtering can be used to remove outliers in the visualization and in the metrics table below.
###Code
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result_1, slicing_column='trip_start_hour')
# Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above.
tfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC)
# Show overall metrics.
tfma.view.render_slicing_metrics(tfma_result_1)
###Output
_____no_output_____
###Markdown
Visualization: Plots. TFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``.
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_vis = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='vis',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# calibration_plot_and_prediction_histogram computes calibration plot and prediction
# distribution at different thresholds.
tfma.post_export_metrics.calibration_plot_and_prediction_histogram(),
# auc_plots enables precision-recall curve and ROC visualization at different thresholds.
tfma.post_export_metrics.auc_plots()
])
print('Done')
###Output
_____no_output_____
###Markdown
Plots must be visualized for an individual slice. To specify a slice, use ``tfma.slicer.SingleSliceSpec``. In the example below, we use ``tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 0)])`` to specify the slice where trip_start_hour is 0. Plots are interactive: drag to pan, scroll to zoom, and right-click to reset the view. Simply hover over the desired data point to see more details.
###Code
tfma.view.render_plot(tfma_vis, tfma.slicer.slicer.SingleSliceSpec(features=[('trip_start_hour', 0)]))
###Output
_____no_output_____
###Markdown
Custom metrics. In addition to plots, it is also possible to compute additional metrics that were not present at export time, including custom metrics, using ``add_metrics_callbacks``. All metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics: https://www.tensorflow.org/api_docs/python/tf/metrics. In the cells below, the false negative rate is computed as an example.
###Code
# Defines a callback that adds FNR to the result.
def add_fnr_for_threshold(threshold):
def _add_fnr_callback(features_dict, predictions_dict, labels_dict):
metric_ops = {}
prediction_tensor = tf.cast(
predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)
fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0])
metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op))
return metric_ops
return _add_fnr_callback
tf.logging.set_verbosity(tf.logging.INFO)
tfma_fnr = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='fnr',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# Simply add the call here.
add_fnr_for_threshold(0.75)
])
tfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Visualization: Time Series. It is important to track how your model is doing over time. TFMA offers two modes to show how your model performs over time. **Multiple model analysis** shows how a model performs from one version to another. This is useful early on to see how the addition of new features, a change in modeling technique, etc., affects performance. TFMA offers a convenient method for this.
###Code
help(tfma.multiple_model_analysis)
###Output
_____no_output_____
###Markdown
**Multiple data analysis** shows how a model performs on different evaluation data sets. This is useful to ensure that model performance does not degrade over time. TFMA offers a convenient method for this as well.
###Code
help(tfma.multiple_data_analysis)
###Output
_____no_output_____
###Markdown
It is also possible to compose a time series manually.
###Code
# Create different models.
# Run some experiments with different hidden layer configurations.
run_local_experiment(tft_run_id=0,
tf_run_id=1,
num_layers=3,
first_layer_size=200,
scale_factor=0.7)
run_local_experiment(tft_run_id=0,
tf_run_id=2,
num_layers=4,
first_layer_size=240,
scale_factor=0.5)
print('Done')
tfma_result_2 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=1,
tfma_run_id=2,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
tfma_result_3 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=2,
tfma_run_id=3,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
###Output
_____no_output_____
###Markdown
Like plots, the time series view must be visualized for a slice too. In the example below, we show the overall slice. Select a metric to see its time series graph, and hover over each data point to get more details.
###Code
eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC)
print('done.')
###Output
_____no_output_____
###Markdown
Serialized results can also be used to construct a time series, so there is no need to re-run TFMA for models that have already been evaluated in a long-running pipeline.
###Code
# Visualize the results in a Time Series. In this case, we are showing the slice specified.
eval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1),
get_tfma_output_dir(2),
get_tfma_output_dir(3)],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC)
print('done.')
###Output
_____no_output_____
###Markdown
TFMA Notebook example. This notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers. Setup: import the necessary packages.
###Code
import apache_beam as beam
import os
import preprocess
import shutil
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from google.protobuf import text_format
from tensorflow.python.lib.io import file_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import schema_utils
from trainer import task
from trainer import taxi
###Output
_____no_output_____
###Markdown
Helper functions and some constants for running the notebook locally.
###Code
BASE_DIR = os.getcwd()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
# Base dir containing train and eval data
TRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train')
EVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval')
# Base dir where TFT writes training data
TFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train')
TFT_TRAIN_FILE_PREFIX = 'train_transformed'
# Base dir where TFT writes eval data
TFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval')
TFT_EVAL_FILE_PREFIX = 'eval_transformed'
TF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf')
# Base dir where TFMA writes eval data
TFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma')
SERVING_MODEL_DIR = 'serving_model_dir'
EVAL_MODEL_DIR = 'eval_model_dir'
def get_tft_train_output_dir(run_id):
return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id)
def get_tft_eval_output_dir(run_id):
return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id)
def get_tf_output_dir(run_id):
return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id)
def get_tfma_output_dir(run_id):
return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id)
def _get_output_dir(base_dir, run_id):
return os.path.join(base_dir, 'run_' + str(run_id))
def get_schema_file():
return os.path.join(OUTPUT_DIR, 'schema.pbtxt')
###Output
_____no_output_____
###Markdown
Clean up output directories.
###Code
shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(get_schema_file(), ignore_errors=True)
###Output
_____no_output_____
###Markdown
Compute and visualize descriptive data statistics
###Code
# Compute stats over training data.
train_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(TRAIN_DATA_DIR, 'data.csv'))
# Visualize training data stats.
tfdv.visualize_statistics(train_stats)
###Output
_____no_output_____
###Markdown
Infer a schema
###Code
# Infer a schema from the training data stats.
schema = tfdv.infer_schema(statistics=train_stats, infer_feature_shape=False)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Check evaluation data for errors
###Code
# Compute stats over eval data.
eval_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(EVAL_DATA_DIR, 'data.csv'))
# Compare stats of eval data with training data.
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
# Update the schema based on the observed anomalies.
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
###Output
_____no_output_____
###Markdown
Freeze the schema. Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state.
###Code
file_io.recursive_create_dir(OUTPUT_DIR)
file_io.write_string_to_file(get_schema_file(), text_format.MessageToString(schema))
###Output
_____no_output_____
###Markdown
Preprocess Inputs. transform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow).
###Code
# Transform eval data
preprocess.transform_data(input_handle=os.path.join(EVAL_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_EVAL_FILE_PREFIX,
working_dir=get_tft_eval_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
# Transform training data
preprocess.transform_data(input_handle=os.path.join(TRAIN_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_TRAIN_FILE_PREFIX,
working_dir=get_tft_train_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
###Output
_____no_output_____
###Markdown
Compute statistics over transformed data
###Code
# Compute stats over transformed training data.
TRANSFORMED_TRAIN_DATA = os.path.join(get_tft_train_output_dir(0), TFT_TRAIN_FILE_PREFIX + "*")
transformed_train_stats = tfdv.generate_statistics_from_tfrecord(data_location=TRANSFORMED_TRAIN_DATA)
# Visualize transformed training data stats and compare to raw training data.
# Use 'Feature search' to focus on a feature and see statistics pre- and post-transformation.
tfdv.visualize_statistics(transformed_train_stats, train_stats, lhs_name='TRANSFORMED', rhs_name='RAW')
###Output
_____no_output_____
###Markdown
Prepare the Model. To use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``. ``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel``, but it exports the evaluation graph as opposed to the training or inference graph. Notice that one of its inputs is ``eval_input_receiver_fn``, which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the TFMA documentation on GitHub. Construct the **EvalSavedModel** after training is completed.
###Code
def run_experiment(hparams):
"""Run the training and evaluate using the high level API"""
# Train and evaluate the model as usual.
estimator = task.train_and_maybe_evaluate(hparams)
# Export TFMA's special EvalSavedModel
eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR)
receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir)
tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=eval_model_dir,
eval_input_receiver_fn=receiver_fn)
def eval_input_receiver_fn(working_dir):
# Extract feature spec from the schema.
raw_feature_spec = schema_utils.schema_as_feature_spec(schema).feature_spec
serialized_tf_example = tf.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# First we deserialize our examples using the raw schema.
features = tf.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, we must process them through tft
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR),
features))
# The key MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
# NOTE: The model is driven by the transformed features (since training works on the
# materialized output of TFT), but slicing will happen on the raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=transformed_features[taxi.transformed_name(taxi.LABEL_KEY)])
print('Done')
###Output
_____no_output_____
###Markdown
Train and export the model for TFMA
###Code
def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor):
"""Helper method to train and export the model for TFMA
The caller specifies the input and output directory by providing run ids. The optional parameters
allows the user to change the modelfor time series view.
Args:
tft_run_id: The run id for the preprocessing. Identifies the folder containing training data.
tf_run_id: The run id for this training run. Identifies where the exported model will be written.
num_layers: The number of hidden layers in the DNN.
first_layer_size: The size of the first hidden layer.
scale_factor: The scale factor between successive hidden layer sizes.
"""
hparams = tf.contrib.training.HParams(
# Inputs: tf-transformed materialized features
train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'),
eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'),
schema_file=get_schema_file(),
# Output: dir for trained model
job_dir=get_tf_output_dir(tf_run_id),
tf_transform_dir=get_tft_train_output_dir(tft_run_id),
# Output: dir for both the serving model and eval_model which will go into tfma
# evaluation
output_dir=get_tf_output_dir(tf_run_id),
train_steps=10000,
eval_steps=5000,
num_layers=num_layers,
first_layer_size=first_layer_size,
scale_factor=scale_factor,
num_epochs=None,
train_batch_size=40,
eval_batch_size=40)
run_experiment(hparams)
print('Done')
run_local_experiment(tft_run_id=0,
tf_run_id=0,
num_layers=4,
first_layer_size=100,
scale_factor=0.7)
print('Done')
###Output
_____no_output_____
###Markdown
Run TFMA to compute metrics. For local analysis, TFMA offers the helper method ``tfma.run_model_analysis``.
###Code
help(tfma.run_model_analysis)
###Output
_____no_output_____
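###Markdown
For reference, a rough sketch of calling this helper directly is shown below. The keyword names are assumptions based on the help() output above, and the TFRecord path is hypothetical; this notebook's eval data is CSV, which is why the custom Beam pipeline in the next cell converts it first.
###Code
# Hedged sketch only (kept commented out): run_model_analysis expects evaluation
# examples that the EvalSavedModel can parse directly, e.g. TFRecords of
# serialized tf.Examples. Paths and keyword names below are assumptions.
# sketch_result = tfma.run_model_analysis(
#     eval_shared_model=tfma.default_eval_shared_model(
#         eval_saved_model_path='/path/to/eval_model_dir'),
#     data_location='/path/to/eval_examples.tfrecord',
#     slice_spec=[tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])],
#     output_path=get_tfma_output_dir('direct'))
# tfma.view.render_slicing_metrics(sketch_result, slicing_column='trip_start_hour')
###Output
_____no_output_____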
###Markdown
You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation.
###Code
def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, schema_file, add_metrics_callbacks=None):
"""A simple wrapper function that runs tfma locally.
A function that does extra transformations on the data and then runs model analysis.
Args:
slice_spec: The slicing spec for how to slice the data.
tf_run_id: An id to construct the model directories with.
tfma_run_id: An id to construct output directories with.
input_csv: The evaluation data in csv format.
schema_file: The file holding a text-serialized schema for the input data.
add_metrics_callbacks: Optional list of callbacks for computing extra metrics.
Returns:
An EvalResult that can be used with TFMA visualization functions.
"""
eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR)
eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0])
eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=eval_model_dir,
add_metrics_callbacks=add_metrics_callbacks)
schema = taxi.read_schema(schema_file)
print(eval_model_dir)
display_only_data_location = input_csv
with beam.Pipeline() as pipeline:
csv_coder = taxi.make_csv_coder(schema)
raw_data = (
pipeline
| 'ReadFromText' >> beam.io.ReadFromText(
input_csv,
coder=beam.coders.BytesCoder(),
skip_header_lines=True)
| 'ParseCSV' >> beam.Map(csv_coder.decode))
# Examples must be in clean tf-example format.
coder = taxi.make_proto_coder(schema)
raw_data = (
raw_data
| 'ToSerializedTFExample' >> beam.Map(coder.encode))
_ = (raw_data
| 'ExtractEvaluateAndWriteResults' >>
tfma.ExtractEvaluateAndWriteResults(
eval_shared_model=eval_shared_model,
slice_spec=slice_spec,
output_path=get_tfma_output_dir(tfma_run_id),
display_only_data_location=input_csv))
return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id))
print('Done')
###Output
_____no_output_____
###Markdown
You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.slicer.SingleSliceSpec``. Below are examples of how slices can be specified.
###Code
# An empty slice spec means the overall slice, that is, the whole dataset.
OVERALL_SLICE_SPEC = tfma.slicer.SingleSliceSpec()
# Data can be sliced along a feature column
# In this case, data is sliced along feature column trip_start_hour.
FEATURE_COLUMN_SLICE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])
# Data can be sliced by crossing feature columns
# In this case, slices are computed for trip_start_day x trip_start_month.
FEATURE_COLUMN_CROSS_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month'])
# Metrics can be computed for a particular feature value.
# In this case, metrics are computed for all data where trip_start_hour is 12.
FEATURE_VALUE_SPEC = tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 12)])
# It is also possible to mix column cross and feature value cross.
# In this case, data where trip_start_hour is 12 will be sliced by trip_start_day.
COLUMN_CROSS_VALUE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])
ALL_SPECS = [
OVERALL_SLICE_SPEC,
FEATURE_COLUMN_SLICE_SPEC,
FEATURE_COLUMN_CROSS_SPEC,
FEATURE_VALUE_SPEC,
COLUMN_CROSS_VALUE_SPEC
]
###Output
_____no_output_____
###Markdown
Let's run TFMA!
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_result_1 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id=1,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
###Output
_____no_output_____
###Markdown
Visualization: Slicing Metrics. To see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall slice will be displayed. The default visualization is the **slice overview** when the number of slices is small. It shows the value of a metric for each slice, sorted by another metric. It is also possible to set a threshold to filter out slices with smaller weights. This view also supports a **metrics histogram** as an alternative visualization, which is also the default view when the number of slices is large. The results are divided into buckets, and the number of slices, the total weights, or both can be visualized. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band; to reset the range, double-click the band. Filtering can be used to remove outliers in the visualization and in the metrics table below.
###Code
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result_1, slicing_column='trip_start_hour')
# Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above.
tfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC)
# Show overall metrics.
tfma.view.render_slicing_metrics(tfma_result_1)
###Output
_____no_output_____
###Markdown
Visualization: Plots. TFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``.
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_vis = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='vis',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# calibration_plot_and_prediction_histogram computes calibration plot and prediction
# distribution at different thresholds.
tfma.post_export_metrics.calibration_plot_and_prediction_histogram(),
# auc_plots enables precision-recall curve and ROC visualization at different thresholds.
tfma.post_export_metrics.auc_plots()
])
print('Done')
###Output
_____no_output_____
###Markdown
Plots must be visualized for an individual slice. To specify a slice, use ``tfma.slicer.SingleSliceSpec``. In the example below, we use ``tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 1)])`` to specify the slice where trip_start_hour is 1. Plots are interactive: drag to pan, scroll to zoom, and right-click to reset the view. Simply hover over the desired data point to see more details.
###Code
tfma.view.render_plot(tfma_vis, tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 1)]))
###Output
_____no_output_____
###Markdown
Custom metrics. In addition to plots, it is also possible to compute additional metrics that were not present at export time, including custom metrics, using ``add_metrics_callbacks``. All metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics: https://www.tensorflow.org/api_docs/python/tf/metrics. In the cells below, the false negative rate is computed as an example.
###Code
# Defines a callback that adds FNR to the result.
def add_fnr_for_threshold(threshold):
def _add_fnr_callback(features_dict, predictions_dict, labels_dict):
metric_ops = {}
prediction_tensor = tf.cast(
predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)
fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0])
metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op))
return metric_ops
return _add_fnr_callback
tf.logging.set_verbosity(tf.logging.INFO)
tfma_fnr = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='fnr',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# Simply add the call here.
add_fnr_for_threshold(0.75)
])
tfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC)
###Output
_____no_output_____
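###Markdown
The same pattern extends to other ops in ``tf.metrics``. Below is an illustrative sketch (not part of the original example) that composes precision at a threshold into a callback with the same shape as add_fnr_for_threshold above.
###Code
# Illustrative sketch: a callback that adds precision@threshold, following the
# same (features_dict, predictions_dict, labels_dict) signature as above.
def add_precision_for_threshold(threshold):
  def _add_precision_callback(features_dict, predictions_dict, labels_dict):
    prediction_tensor = tf.cast(
        predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)
    value_op, update_op = tf.metrics.precision_at_thresholds(
        tf.squeeze(labels_dict), tf.squeeze(prediction_tensor), [threshold])
    # precision_at_thresholds returns one value per threshold; a single one is passed.
    return {'precision@' + str(threshold): (value_op[0], update_op)}
  return _add_precision_callback
print('Done')
###Output
_____no_output_____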
###Markdown
Visualization: Time Series. It is important to track how your model is doing over time. TFMA offers two modes to show how your model performs over time. **Multiple model analysis** shows how a model performs from one version to another. This is useful early on to see how the addition of new features, a change in modeling technique, etc., affects performance. TFMA offers a convenient method for this.
###Code
help(tfma.multiple_model_analysis)
###Output
_____no_output_____
###Markdown
**Multiple data analysis** shows how a model performs on different evaluation data sets. This is useful to ensure that model performance does not degrade over time. TFMA offers a convenient method for this as well.
###Code
help(tfma.multiple_data_analysis)
###Output
_____no_output_____
###Markdown
It is also possible to compose a time series manually.
###Code
# Create different models.
# Run some experiments with different hidden layer configurations.
run_local_experiment(tft_run_id=0,
tf_run_id=1,
num_layers=3,
first_layer_size=200,
scale_factor=0.7)
run_local_experiment(tft_run_id=0,
tf_run_id=2,
num_layers=4,
first_layer_size=240,
scale_factor=0.5)
print('Done')
tfma_result_2 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=1,
tfma_run_id=2,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
tfma_result_3 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=2,
tfma_run_id=3,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
###Output
_____no_output_____
###Markdown
Like plots, the time series view must be visualized for a slice too. In the example below, we show the overall slice. Select a metric to see its time series graph, and hover over each data point to get more details.
###Code
eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Serialized results can also be used to construct a time series, so there is no need to re-run TFMA for models that have already been evaluated in a long-running pipeline.
###Code
# Visualize the results in a Time Series. In this case, we are showing the slice specified.
eval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1),
get_tfma_output_dir(2),
get_tfma_output_dir(3)],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC)
###Output
_____no_output_____
###Markdown
TFMA Notebook example. This notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers. Setup: import the necessary packages.
###Code
import apache_beam as beam
import os
import preprocess
import shutil
import tensorflow as tf
import tensorflow_model_analysis as tfma
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import dataset_schema
from trainer import task
from trainer import taxi
###Output
_____no_output_____
###Markdown
Helper functions and some constants for running the notebook locally.
###Code
BASE_DIR = os.getcwd()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
# Base dir where TFT writes training data
TFT_TRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train')
TFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train')
TFT_TRAIN_FILE_PREFIX = 'train_transformed'
# Base dir where TFT writes eval data
TFT_EVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval')
TFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval')
TFT_EVAL_FILE_PREFIX = 'eval_transformed'
TF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf')
# Base dir where TFMA writes eval data
TFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma')
SERVING_MODEL_DIR = 'serving_model_dir'
EVAL_MODEL_DIR = 'eval_model_dir'
def get_tft_train_output_dir(run_id):
return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id)
def get_tft_eval_output_dir(run_id):
return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id)
def get_tf_output_dir(run_id):
return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id)
def get_tfma_output_dir(run_id):
return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id)
def _get_output_dir(base_dir, run_id):
return os.path.join(base_dir, 'run_' + str(run_id))
###Output
_____no_output_____
###Markdown
Clean up output directories.
###Code
shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True)
###Output
_____no_output_____
###Markdown
Preprocess Inputs. transform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow).
###Code
# Transform eval data
preprocess.transform_data(input_handle=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_EVAL_FILE_PREFIX,
working_dir=get_tft_eval_output_dir(0),
pipeline_args=['--runner=DirectRunner'])
print('Done')
# Transform training data
preprocess.transform_data(input_handle=os.path.join(TFT_TRAIN_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_TRAIN_FILE_PREFIX,
working_dir=get_tft_train_output_dir(0),
pipeline_args=['--runner=DirectRunner'])
print('Done')
###Output
_____no_output_____
###Markdown
Prepare the Model. To use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``. ``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel``, but it exports the evaluation graph as opposed to the training or inference graph. Notice that one of its inputs is ``eval_input_receiver_fn``, which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the TFMA documentation on GitHub. Construct the **EvalSavedModel** after training is completed.
###Code
def run_experiment(hparams):
"""Run the training and evaluate using the high level API"""
# Train and evaluate the model as usual.
estimator = task.train_and_maybe_evaluate(hparams)
  # Export TFMA's special EvalSavedModel
eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR)
receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir)
tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=eval_model_dir,
eval_input_receiver_fn=receiver_fn)
def eval_input_receiver_fn(working_dir):
raw_feature_spec = taxi.get_raw_feature_spec()
serialized_tf_example = tf.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# First we deserialize our examples using the raw schema.
features = tf.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, we must process them through tft
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR),
features))
# The key MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
return tfma.export.EvalInputReceiver(
features=transformed_features,
receiver_tensors=receiver_tensors,
labels=transformed_features[taxi.LABEL_KEY])
print('Done')
###Output
_____no_output_____
###Markdown
Train and export the model for TFMA
###Code
def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor):
"""Helper method to train and export the model for TFMA
  The caller specifies the input and output directories by providing run ids. The optional parameters
  allow the user to vary the model for the time series view.
  Args:
    tft_run_id: The run id for the preprocessing. Identifies the folder containing training data.
    tf_run_id: The run id for this training run. Identifies where the exported model will be written to.
    num_layers: The number of hidden layers.
    first_layer_size: The size of the first hidden layer.
    scale_factor: The scale factor between successive hidden layers.
"""
hparams = tf.contrib.training.HParams(
# Inputs: are tf-transformed materialized features
train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'),
eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'),
# Output: dir for trained model
job_dir=get_tf_output_dir(tf_run_id),
tf_transform_dir=get_tft_train_output_dir(tft_run_id),
# Output: dir for both the serving model and eval_model which will go into tfma
# evaluation
output_dir=get_tf_output_dir(tf_run_id),
train_steps=10000,
eval_steps=5000,
num_layers=num_layers,
first_layer_size=first_layer_size,
scale_factor=scale_factor,
num_epochs=None,
train_batch_size=40,
eval_batch_size=40)
run_experiment(hparams)
print('Done.')
run_local_experiment(tft_run_id=0,
tf_run_id=0,
num_layers=4,
first_layer_size=100,
scale_factor=0.7)
print('Done.')
###Output
_____no_output_____
###Markdown
Run TFMA to compute metricsFor local analysis, TFMA offers a helper method ``tfma.run_model_analysis``
###Code
help(tfma.run_model_analysis)
###Output
_____no_output_____
###Markdown
You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation.
###Code
def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, add_metrics_callbacks=None):
"""A simple wrapper function that runs tfma locally.
  A function that does extra transformations on the data and then runs model analysis.
Args:
slice_spec: The slicing spec for how to slice the data.
    tf_run_id: An id to construct the model directories with.
tfma_run_id: An id to construct output directories with.
input_csv: The evaluation data in csv format.
    add_metrics_callbacks: Optional list of callbacks for computing extra metrics.
Returns:
An EvalResult that can be used with TFMA visualization functions.
"""
eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR)
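  # export_eval_savedmodel writes the EvalSavedModel into a timestamped subdirectory; pick the first subdirectory found there.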
eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0])
display_only_data_location = input_csv
with beam.Pipeline() as pipeline:
csv_coder = taxi.make_csv_coder()
raw_data = (
pipeline
| 'ReadFromText' >> beam.io.ReadFromText(
input_csv,
coder=beam.coders.BytesCoder(),
skip_header_lines=True)
| 'ParseCSV' >> beam.Map(csv_coder.decode))
# Examples must be in clean tf-example format.
raw_feature_spec = taxi.get_raw_feature_spec()
raw_schema = dataset_schema.from_feature_spec(raw_feature_spec)
coder = example_proto_coder.ExampleProtoCoder(raw_schema)
raw_data = (
raw_data
| 'CleanData' >> beam.Map(taxi.clean_raw_data_dict)
| 'ToSerializedTFExample' >> beam.Map(coder.encode))
_ = raw_data | 'EvaluateAndWriteResults' >> tfma.EvaluateAndWriteResults(
eval_saved_model_path=eval_model_dir,
slice_spec=slice_spec,
output_path=get_tfma_output_dir(tfma_run_id),
add_metrics_callbacks=add_metrics_callbacks,
display_only_data_location=input_csv)
return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id))
print('Done')
###Output
_____no_output_____
###Markdown
You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.SingleSliceSpec``. Below are examples of how slices can be specified.
###Code
# An empty slice spec means the overall slice, that is, the whole dataset.
OVERALL_SLICE_SPEC = tfma.SingleSliceSpec()
# Data can be sliced along a feature column
# In this case, data is sliced along feature column trip_start_hour.
FEATURE_COLUMN_SLICE_SPEC = tfma.SingleSliceSpec(columns=['trip_start_hour'])
# Data can be sliced by crossing feature columns
# In this case, slices are computed for trip_start_day x trip_start_month.
FEATURE_COLUMN_CROSS_SPEC = tfma.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month'])
# Metrics can be computed for a particular feature value.
# In this case, metrics is computed for all data where trip_start_hour is 12.
FEATURE_VALUE_SPEC = tfma.SingleSliceSpec(features=[('trip_start_hour', 12)])
# It is also possible to mix column cross and feature value cross.
# In this case, data where trip_start_hour is 12 will be sliced by trip_start_day.
COLUMN_CROSS_VALUE_SPEC = tfma.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])
ALL_SPECS = [
OVERALL_SLICE_SPEC,
FEATURE_COLUMN_SLICE_SPEC,
FEATURE_COLUMN_CROSS_SPEC,
FEATURE_VALUE_SPEC,
COLUMN_CROSS_VALUE_SPEC
]
###Output
_____no_output_____
###Markdown
Let's run TFMA!
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_result_1 = run_tfma(input_csv=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id=1,
slice_spec=ALL_SPECS)
print('Done.')
###Output
_____no_output_____
###Markdown
Visualization: Slicing MetricsTo see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall slice will be displayed. The default visualization is **slice overview** when the number of slices is small. It shows the value of a metric for each slice sorted by another metric. It is also possible to set a threshold to filter out slices with smaller weights. This view also supports **metrics histogram** as an alternative visualization. It is also the default view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double-click the band. Filtering can be used to remove outliers in the visualization and the metrics table below.
###Code
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result_1, slicing_column='trip_start_hour')
# Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above.
tfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC)
# Show overall metrics.
tfma.view.render_slicing_metrics(tfma_result_1)
###Output
_____no_output_____
###Markdown
Visualization: PlotsTFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_vis = run_tfma(input_csv=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='vis',
slice_spec=ALL_SPECS,
add_metrics_callbacks=[
# calibration_plot_and_prediction_histogram computes calibration plot and prediction
# distribution at different thresholds.
tfma.post_export_metrics.post_export_metrics.calibration_plot_and_prediction_histogram(),
# auc_plots enables precision-recall curve and ROC visualization at different thresholds.
tfma.post_export_metrics.post_export_metrics.auc_plots()
])
print('Done.')
###Output
_____no_output_____
###Markdown
Plots must be visualized for an individual slice. To specify a slice, use ``tfma.SingleSliceSpec``. In the example below, we are using ``tfma.SingleSliceSpec(features=[('trip_start_hour', 0)])`` to specify the slice where trip_start_hour is 0.

Plots are interactive:
- Drag to pan
- Scroll to zoom
- Right click to reset the view

Simply hover over the desired data point to see more details.
###Code
tfma.view.render_plot(tfma_vis, tfma.SingleSliceSpec(features=[('trip_start_hour', 0)]))
###Output
_____no_output_____
###Markdown
Custom metricsIn addition to plots, it is also possible to compute additional metrics not present at export time or custom metrics using ``add_metrics_callbacks``. All metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics: https://www.tensorflow.org/api_docs/python/tf/metrics In the cells below, the false negative rate is computed as an example.
###Code
# Defines a callback that adds FNR to the result.
def add_fnr_for_threshold(threshold):
def _add_fnr_callback(features_dict, predictions_dict, labels_dict):
metric_ops = {}
prediction_tensor = tf.cast(
predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)
fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
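    # False negative rate at this threshold: FN / (FN + TP).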
fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0])
metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op))
return metric_ops
return _add_fnr_callback
tf.logging.set_verbosity(tf.logging.INFO)
tfma_fnr = run_tfma(input_csv=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='fnr',
slice_spec=ALL_SPECS,
add_metrics_callbacks=[
# Simply add the call here.
add_fnr_for_threshold(0.75)
])
tfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Visualization: Time SeriesIt is important to track how your model is doing over time. TFMA offers two modes to show how your model performs over time. **Multiple model analysis** shows how a model performs from one version to another. This is useful early on to see how the addition of new features, a change in modeling technique, etc., affects the performance. TFMA offers a convenient method.
###Code
help(tfma.multiple_model_analysis)
###Output
_____no_output_____
###Markdown
**Multiple data analysis** shows how a model performs under different evaluation data sets. This is useful to ensure that model performance does not degrade over time. TFMA offers a convenient method.
###Code
help(tfma.multiple_data_analysis)
###Output
_____no_output_____
###Markdown
It is also possible to compose a time series manually.
###Code
# Create different models.
# Run some experiments with different hidden layer configurations.
run_local_experiment(tft_run_id=0,
tf_run_id=1,
num_layers=3,
first_layer_size=200,
scale_factor=0.7)
run_local_experiment(tft_run_id=0,
tf_run_id=2,
num_layers=4,
first_layer_size=240,
scale_factor=0.5)
print('Done.')
tfma_result_2 = run_tfma(input_csv=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
tf_run_id=1,
tfma_run_id=2,
slice_spec=ALL_SPECS)
tfma_result_3 = run_tfma(input_csv=os.path.join(TFT_EVAL_DATA_DIR, 'data.csv'),
tf_run_id=2,
tfma_run_id=3,
slice_spec=ALL_SPECS)
print('Done.')
###Output
_____no_output_____
###Markdown
Like plots, the time series view must be visualized for a slice too. In the example below, we are showing the overall slice. Select a metric to see its time series graph. Hover over each data point to get more details.
###Code
eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Serialized results can also be used to construct a time series. Thus, there is no need to re-run TFMA for models already evaluated for a long running pipeline.
###Code
# Visualize the results in a Time Series. In this case, we are showing the slice specified.
eval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1),
get_tfma_output_dir(2),
get_tfma_output_dir(3)],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC)
###Output
_____no_output_____
###Markdown
TFMA Notebook exampleThis notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers. SetupImport necessary packages.
###Code
import apache_beam as beam
import os
import preprocess
import shutil
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from google.protobuf import text_format
from tensorflow.python.lib.io import file_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import schema_utils
from trainer import task
from trainer import taxi
###Output
_____no_output_____
###Markdown
Helper functions and some constants for running the notebook locally.
###Code
BASE_DIR = os.getcwd()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
# Base dir containing train and eval data
TRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train')
EVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval')
# Base dir where TFT writes training data
TFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train')
TFT_TRAIN_FILE_PREFIX = 'train_transformed'
# Base dir where TFT writes eval data
TFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval')
TFT_EVAL_FILE_PREFIX = 'eval_transformed'
TF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf')
# Base dir where TFMA writes eval data
TFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma')
SERVING_MODEL_DIR = 'serving_model_dir'
EVAL_MODEL_DIR = 'eval_model_dir'
def get_tft_train_output_dir(run_id):
return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id)
def get_tft_eval_output_dir(run_id):
return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id)
def get_tf_output_dir(run_id):
return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id)
def get_tfma_output_dir(run_id):
return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id)
def _get_output_dir(base_dir, run_id):
return os.path.join(base_dir, 'run_' + str(run_id))
def get_schema_file():
return os.path.join(OUTPUT_DIR, 'schema.pbtxt')
###Output
_____no_output_____
###Markdown
Clean up output directories.
###Code
shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(get_schema_file(), ignore_errors=True)
###Output
_____no_output_____
###Markdown
Compute and visualize descriptive data statistics
###Code
# Compute stats over training data.
train_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(TRAIN_DATA_DIR, 'data.csv'))
# Visualize training data stats.
tfdv.visualize_statistics(train_stats)
###Output
_____no_output_____
###Markdown
Infer a schema
###Code
# Infer a schema from the training data stats.
schema = tfdv.infer_schema(statistics=train_stats, infer_feature_shape=False)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Check evaluation data for errors
###Code
# Compute stats over eval data.
eval_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(EVAL_DATA_DIR, 'data.csv'))
# Compare stats of eval data with training data.
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
# Update the schema based on the observed anomalies.
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
###Output
_____no_output_____
###Markdown
Freeze the schemaNow that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state.
###Code
file_io.recursive_create_dir(OUTPUT_DIR)
file_io.write_string_to_file(get_schema_file(), text_format.MessageToString(schema))
###Output
_____no_output_____
###Markdown
Preprocess Inputstransform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow).
###Code
# Transform eval data
preprocess.transform_data(input_handle=os.path.join(EVAL_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_EVAL_FILE_PREFIX,
working_dir=get_tft_eval_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
# Transform training data
preprocess.transform_data(input_handle=os.path.join(TRAIN_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_TRAIN_FILE_PREFIX,
working_dir=get_tft_train_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
###Output
_____no_output_____
###Markdown
Compute statistics over transformed data
###Code
# Compute stats over transformed training data.
TRANSFORMED_TRAIN_DATA = os.path.join(get_tft_train_output_dir(0), TFT_TRAIN_FILE_PREFIX + "*")
transformed_train_stats = tfdv.generate_statistics_from_tfrecord(data_location=TRANSFORMED_TRAIN_DATA)
# Visualize transformed training data stats and compare to raw training data.
# Use 'Feature search' to focus on a feature and see statistics pre- and post-transformation.
tfdv.visualize_statistics(transformed_train_stats, train_stats, lhs_name='TRANSFORMED', rhs_name='RAW')
###Output
_____no_output_____
###Markdown
Prepare the ModelTo use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``. ``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel`` but exports the evaluation graph as opposed to the training or inference graph. Notice that one of the inputs is ``eval_input_receiver_fn`` which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the documentation for TFMA on Github. Construct the **EvalSavedModel** after training is completed.
###Code
def run_experiment(hparams):
"""Run the training and evaluate using the high level API"""
# Train and evaluate the model as usual.
estimator = task.train_and_maybe_evaluate(hparams)
  # Export TFMA's special EvalSavedModel
eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR)
receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir)
tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=eval_model_dir,
eval_input_receiver_fn=receiver_fn)
def eval_input_receiver_fn(working_dir):
# Extract feature spec from the schema.
raw_feature_spec = schema_utils.schema_as_feature_spec(schema).feature_spec
serialized_tf_example = tf.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# First we deserialize our examples using the raw schema.
features = tf.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, we must process them through tft
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR),
features))
# The key MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
# NOTE: Model is driven by transformed features (since training works on the
  # materialized output of TFT), but slicing will happen on raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=transformed_features[taxi.transformed_name(taxi.LABEL_KEY)])
print('Done')
###Output
_____no_output_____
###Markdown
Train and export the model for TFMA
###Code
def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor):
"""Helper method to train and export the model for TFMA
  The caller specifies the input and output directories by providing run ids. The optional parameters
  allow the user to vary the model for the time series view.
  Args:
    tft_run_id: The run id for the preprocessing. Identifies the folder containing training data.
    tf_run_id: The run id for this training run. Identifies where the exported model will be written to.
    num_layers: The number of hidden layers.
    first_layer_size: The size of the first hidden layer.
    scale_factor: The scale factor between successive hidden layers.
"""
hparams = tf.contrib.training.HParams(
# Inputs: are tf-transformed materialized features
train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'),
eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'),
schema_file=get_schema_file(),
# Output: dir for trained model
job_dir=get_tf_output_dir(tf_run_id),
tf_transform_dir=get_tft_train_output_dir(tft_run_id),
# Output: dir for both the serving model and eval_model which will go into tfma
# evaluation
output_dir=get_tf_output_dir(tf_run_id),
train_steps=10000,
eval_steps=5000,
num_layers=num_layers,
first_layer_size=first_layer_size,
scale_factor=scale_factor,
num_epochs=None,
train_batch_size=40,
eval_batch_size=40)
run_experiment(hparams)
print('Done')
run_local_experiment(tft_run_id=0,
tf_run_id=0,
num_layers=4,
first_layer_size=100,
scale_factor=0.7)
print('Done')
###Output
_____no_output_____
###Markdown
Run TFMA to compute metricsFor local analysis, TFMA offers a helper method ``tfma.run_model_analysis``
###Code
help(tfma.run_model_analysis)
###Output
_____no_output_____
###Markdown
You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation.
###Code
def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, schema_file, add_metrics_callbacks=None):
"""A simple wrapper function that runs tfma locally.
  A function that does extra transformations on the data and then runs model analysis.
Args:
slice_spec: The slicing spec for how to slice the data.
    tf_run_id: An id to construct the model directories with.
tfma_run_id: An id to construct output directories with.
input_csv: The evaluation data in csv format.
schema_file: The file holding a text-serialized schema for the input data.
    add_metrics_callbacks: Optional list of callbacks for computing extra metrics.
Returns:
An EvalResult that can be used with TFMA visualization functions.
"""
eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR)
eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0])
eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=eval_model_dir,
add_metrics_callbacks=add_metrics_callbacks)
schema = taxi.read_schema(schema_file)
print(eval_model_dir)
display_only_data_location = input_csv
with beam.Pipeline() as pipeline:
csv_coder = taxi.make_csv_coder(schema)
raw_data = (
pipeline
| 'ReadFromText' >> beam.io.ReadFromText(
input_csv,
coder=beam.coders.BytesCoder(),
skip_header_lines=True)
| 'ParseCSV' >> beam.Map(csv_coder.decode))
# Examples must be in clean tf-example format.
coder = taxi.make_proto_coder(schema)
raw_data = (
raw_data
| 'ToSerializedTFExample' >> beam.Map(coder.encode))
_ = (raw_data
| 'ExtractEvaluateAndWriteResults' >>
tfma.ExtractEvaluateAndWriteResults(
eval_shared_model=eval_shared_model,
slice_spec=slice_spec,
output_path=get_tfma_output_dir(tfma_run_id),
display_only_data_location=input_csv))
return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id))
print('Done')
###Output
_____no_output_____
###Markdown
You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.SingleSliceSpec``. Below are examples of how slices can be specified.
###Code
# An empty slice spec means the overall slice, that is, the whole dataset.
OVERALL_SLICE_SPEC = tfma.SingleSliceSpec()
# Data can be sliced along a feature column
# In this case, data is sliced along feature column trip_start_hour.
FEATURE_COLUMN_SLICE_SPEC = tfma.SingleSliceSpec(columns=['trip_start_hour'])
# Data can be sliced by crossing feature columns
# In this case, slices are computed for trip_start_day x trip_start_month.
FEATURE_COLUMN_CROSS_SPEC = tfma.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month'])
# Metrics can be computed for a particular feature value.
# In this case, metrics is computed for all data where trip_start_hour is 12.
FEATURE_VALUE_SPEC = tfma.SingleSliceSpec(features=[('trip_start_hour', 12)])
# It is also possible to mix column cross and feature value cross.
# In this case, data where trip_start_hour is 12 will be sliced by trip_start_day.
COLUMN_CROSS_VALUE_SPEC = tfma.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])
ALL_SPECS = [
OVERALL_SLICE_SPEC,
FEATURE_COLUMN_SLICE_SPEC,
FEATURE_COLUMN_CROSS_SPEC,
FEATURE_VALUE_SPEC,
COLUMN_CROSS_VALUE_SPEC
]
###Output
_____no_output_____
###Markdown
Let's run TFMA!
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_result_1 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id=1,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
###Output
_____no_output_____
###Markdown
Visualization: Slicing MetricsTo see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall slice will be displayed. The default visualization is **slice overview** when the number of slices is small. It shows the value of a metric for each slice sorted by another metric. It is also possible to set a threshold to filter out slices with smaller weights. This view also supports **metrics histogram** as an alternative visualization. It is also the default view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double-click the band. Filtering can be used to remove outliers in the visualization and the metrics table below.
###Code
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result_1, slicing_column='trip_start_hour')
# Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above.
tfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC)
# Show overall metrics.
tfma.view.render_slicing_metrics(tfma_result_1)
###Output
_____no_output_____
###Markdown
Visualization: PlotsTFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tfma_vis = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='vis',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# calibration_plot_and_prediction_histogram computes calibration plot and prediction
# distribution at different thresholds.
tfma.post_export_metrics.calibration_plot_and_prediction_histogram(),
# auc_plots enables precision-recall curve and ROC visualization at different thresholds.
tfma.post_export_metrics.auc_plots()
])
print('Done')
###Output
_____no_output_____
###Markdown
Plots must be visualized for an individual slice. To specify a slice, use ``tfma.SingleSliceSpec``. In the example below, we are using ``tfma.SingleSliceSpec(features=[('trip_start_hour', 0)])`` to specify the slice where trip_start_hour is 0.

Plots are interactive:
- Drag to pan
- Scroll to zoom
- Right click to reset the view

Simply hover over the desired data point to see more details.
###Code
tfma.view.render_plot(tfma_vis, tfma.SingleSliceSpec(features=[('trip_start_hour', 0)]))
###Output
_____no_output_____
###Markdown
Custom metricsIn addition to plots, it is also possible to compute additional metrics not present at export time or custom metrics using ``add_metrics_callbacks``. All metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics: https://www.tensorflow.org/api_docs/python/tf/metrics In the cells below, the false negative rate is computed as an example.
###Code
# Defines a callback that adds FNR to the result.
def add_fnr_for_threshold(threshold):
def _add_fnr_callback(features_dict, predictions_dict, labels_dict):
metric_ops = {}
prediction_tensor = tf.cast(
predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)
fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0])
metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op))
return metric_ops
return _add_fnr_callback
tf.logging.set_verbosity(tf.logging.INFO)
tfma_fnr = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='fnr',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# Simply add the call here.
add_fnr_for_threshold(0.75)
])
tfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Visualization: Time SeriesIt is important to track how your model is doing over time. TFMA offers two modes to show how your model performs over time. **Multiple model analysis** shows how a model performs from one version to another. This is useful early on to see how the addition of new features, a change in modeling technique, etc., affects the performance. TFMA offers a convenient method.
###Code
help(tfma.multiple_model_analysis)
###Output
_____no_output_____
###Markdown
**Multiple data analysis** shows how a model performs under different evaluation data sets. This is useful to ensure that model performance does not degrade over time. TFMA offers a convenient method.
###Code
help(tfma.multiple_data_analysis)
###Output
_____no_output_____
###Markdown
It is also possible to compose a time series manually.
###Code
# Create different models.
# Run some experiments with different hidden layer configurations.
run_local_experiment(tft_run_id=0,
tf_run_id=1,
num_layers=3,
first_layer_size=200,
scale_factor=0.7)
run_local_experiment(tft_run_id=0,
tf_run_id=2,
num_layers=4,
first_layer_size=240,
scale_factor=0.5)
print('Done')
tfma_result_2 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=1,
tfma_run_id=2,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
tfma_result_3 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=2,
tfma_run_id=3,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
###Output
_____no_output_____
###Markdown
Like plots, the time series view must be visualized for a slice too. In the example below, we are showing the overall slice. Select a metric to see its time series graph. Hover over each data point to get more details.
###Code
eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC)
###Output
_____no_output_____
###Markdown
Serialized results can also be used to construct a time series. Thus, there is no need to re-run TFMA for models already evaluated for a long running pipeline.
###Code
# Visualize the results in a Time Series. In this case, we are showing the slice specified.
eval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1),
get_tfma_output_dir(2),
get_tfma_output_dir(3)],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC)
###Output
_____no_output_____ |
Boston Housing.ipynb | ###Markdown
The median home value across all Boston suburbs, in dollars.
###Code
np.median(df['MEDV'].dropna()) * 1000
###Output
_____no_output_____
###Markdown
Average home value across all Boston suburbs
###Code
np.mean(df['MEDV'].dropna()) * 1000
###Output
_____no_output_____
###Markdown
Median home value of the suburbs with the newest houses
###Code
df[df.AGE == min(df['AGE'])]['MEDV'] * 1000
###Output
_____no_output_____
###Markdown
The relationship between per-capita crime rate and the pupil-teacher ratio. Differentiate between whether or not the suburb is bounded by the Charles River.
###Code
df_river = df[df.CHAS == 1]
df_noriver = df[df.CHAS == 0]
x_river = df_river['PTRATIO']
x_noriver = df_noriver['PTRATIO']
y_river = df_river['CRIM']
y_noriver = df_noriver['CRIM']
plt.xlabel('Pupil-Teacher Ratio')
plt.ylabel('Per-capita Crime Rate')
x = plt.scatter(x_noriver, y_noriver, color='#EEAA55', marker='.')
y = plt.scatter(x_river, y_river, color='#00CCFF')
plt.legend(handles=[x, y], labels=['Not on river', 'On river'])
plt.title('Pupil-Teacher Ratio vs Crime Rate')
plt.show()
###Output
_____no_output_____
###Markdown
Boston housing prediction - Project. By Anirban Saikia, Electronics and Telecommunication Engineering, Assam Engineering College, Guwahati, Assam.
###Code
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from sklearn.datasets import load_boston
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from IPython.display import HTML
###Output
_____no_output_____
###Markdown
Loading the Dataset
###Code
boston=load_boston()
print(boston.DESCR)
#put the data into pandas dataframe
features=pd.DataFrame(boston.data,columns=boston.feature_names)
features
features['AGE']
target=pd.DataFrame(boston.target,columns=['target'])
target
max(target['target'])
min(target['target'])
#concatenate features and target into a single dataframe
#axis=1 makes it concatenate columns wise
df=pd.concat([features,target],axis=1)
df
###Output
_____no_output_____
###Markdown
Use describe() to generate the summary of the dataset
###Code
df.describe().round(decimals=2)
###Output
_____no_output_____
###Markdown
Calculate correlation
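For reference, the Pearson correlation coefficient between a feature x and the target y over n rows is

$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}$$

which is what df.corr('pearson') computes for every pair of columns in the cell below.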
###Code
#calculate correlation between every pair of columns in the data
corr=df.corr('pearson')
#Take absolute value of the correlation
corrs=[abs(corr[attr]['target']) for attr in list(features)]
#make a list of pairs [(corrs,features)]
l=list(zip(corrs,list(features)))
#sort the list of pairs in reverse/descending order,
#with the correlation value as the key of sorting
l.sort(key=lambda x: x[0],reverse=True)
corrs,labels=list(zip(*l))
#Plot correlations with respect to the target value as a bar graph
index=np.arange(len(labels))
plt.figure(figsize=(15,5))
plt.bar(index,corrs,width=0.5)
plt.xlabel('Attributes')
plt.ylabel('Correlation with the target variable')
plt.xticks(index,labels)
###Output
_____no_output_____
###Markdown
Normalize the data with MinMaxScaler.
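Min-max scaling maps each value into the range $[0, 1]$:

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$

This is what MinMaxScaler applies to the RM feature and to the target in the cell below.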
###Code
X=df['RM'].values
Y=df['target'].values
#Before normalization
print(Y[:5])
x_scaler=MinMaxScaler()
X=x_scaler.fit_transform(X.reshape(-1,1))
X=X[:,-1]
y_scaler=MinMaxScaler()
Y=y_scaler.fit_transform(Y.reshape(-1,1))
Y=Y[:,-1]
#after normalization
print(Y[:5])
###Output
[0.42222222 0.36888889 0.66 0.63111111 0.69333333]
###Markdown
Step 2: Define the error. MSE - mean squared error.
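For n samples with true values $y_i$ and predictions $\hat{y}_i$, the mean squared error is

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$

It is computed below both directly from this equation and with sklearn's mean_squared_error.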
###Code
n=200
#Generate n evenly spaced values from zero radians to 2 PI radians
x=np.linspace(0,2*np.pi,n)
sine_values=np.sin(x)
#Plot the sine wave
plt.plot(x,sine_values)
#Add some noise to the sine wave
noise=0.5
noisy_sine_values=sine_values+np.random.uniform(-noise,noise,n)
plt.plot(x,noisy_sine_values,color='r')
plt.plot(x,sine_values,linewidth=5)
#Calculate MSE using the equation
error_value=(1/n)*sum(np.power(sine_values-noisy_sine_values,2))
error_value
#Calculate MSE using the function sklearn library
mean_squared_error(sine_values,noisy_sine_values)
###Output
_____no_output_____
###Markdown
The three functions that constitute the model are: error, update, and gradient_descent.
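For the linear model $\hat{y} = mx + c$, the error function used here is

$$E(m, c) = \frac{1}{2N}\sum_{i=1}^{N}\big((mx_i + c) - t_i\big)^2$$

and each gradient descent step moves the parameters against the gradient of the summed squared error, with learning rate $\eta$ (learning_rate in the code):

$$m \leftarrow m - \eta \sum_{i=1}^{N} 2\big((mx_i + c) - t_i\big)x_i \qquad c \leftarrow c - \eta \sum_{i=1}^{N} 2\big((mx_i + c) - t_i\big)$$

These are exactly the quantities computed by the error and update functions defined below.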
###Code
def error(m,x,c,t):
N=x.size
e=sum(((m*x+c)-t)**2)
return e*1/(2*N)
###Output
_____no_output_____
###Markdown
Step 3: Split the data
###Code
xtrain,xtest,ytrain,ytest=train_test_split(X,Y,test_size=0.2)
def update(m,x,c,t,learning_rate):
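    #Gradients of the summed squared error with respect to m and c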
grad_m=sum(2*((m*x+c)-t)*x)
grad_c=sum(2*((m*x+c)-t))
m=m-grad_m*learning_rate
c=c-grad_c*learning_rate
return m,c
def gradient_descent(init_m,init_c,x,t,learning_rate,iterations,error_threshold):
m=init_m
c=init_c
error_values=list()
mc_values=list()
for i in range(iterations):
e=error(m,x,c,t)
if e<error_threshold:
            print('Error less than threshold. Stopping gradient descent')
break
error_values.append(e)
m,c=update(m,x,c,t,learning_rate)
mc_values.append((m,c))
return m,c,error_values,mc_values
init_m=0.9
init_c=0
learning_rate=0.001
iterations=250
error_threshold=0.001
m,c,error_values,mc_values=gradient_descent(init_m,init_c,xtrain,ytrain,learning_rate,iterations,error_threshold)
plt.plot(np.arange(len(error_values)),error_values)
plt.ylabel('Error')
plt.xlabel('Iterations')
###Output
_____no_output_____
###Markdown
Animation
###Code
mc_values_anim=mc_values[0:250:5]
fig,ax=plt.subplots()
ln,=plt.plot([],[],'ro-',animated=True)
def init():
plt.scatter(xtest,ytest,color='g')
ax.set_xlim(0,1.0)
ax.set_ylim(0,1.0)
return ln,
def update_frame(frame):
m,c=mc_values_anim[frame]
x1,y1=-0.5,m*-.5+c
x2,y2=1.5,m*1.5+c
ln.set_data([x1,x2],[y1,y2])
return ln,
anim=FuncAnimation(fig,update_frame,frames=range(len(mc_values_anim)),init_func=init,blit=True)
HTML(anim.to_html5_video())
###Output
_____no_output_____
###Markdown
Visualizing the learning process
###Code
plt.scatter(xtrain,ytrain,color='b')
plt.plot(xtrain,(m*xtrain+c),color='r')
plt.plot(np.arange(len(error_values)),error_values)
plt.ylabel("Error")
plt.xlabel('Iterations')
###Output
_____no_output_____
###Markdown
Step 5 : Prediction
###Code
predicted=(m*xtest)+c
mean_squared_error(ytest,predicted)
p=pd.DataFrame(list(zip(xtest,ytest,predicted)),columns=['x','target','predicted'])
p.head()
plt.scatter(xtest,ytest,color='b')
plt.plot(xtest,predicted,color='g')
###Output
_____no_output_____
###Markdown
Revert Normalization
###Code
predicted=predicted.reshape(-1,1)
xtest=xtest.reshape(-1,1)
ytest=ytest.reshape(-1,1)
xtest_scaled=x_scaler.inverse_transform(xtest)
ytest_scaled=y_scaler.inverse_transform(ytest)
predicted_scaled=y_scaler.inverse_transform(predicted)
#this is to remove the extra dimension
xtest_scaled=xtest_scaled[:,-1]
ytest_scaled=ytest_scaled[:,-1]
predicted_scaled=predicted_scaled[:,-1]
p=pd.DataFrame(list(zip(xtest_scaled,ytest_scaled,predicted_scaled)),columns=['x','target_y','predicted_y'])
p.head()
###Output
_____no_output_____
###Markdown
Boston Housing--- Imports
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import rcParams
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn import metrics
from sklearn.metrics import make_scorer
# from sklearn.pipeline import Pipeline
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.datasets import boston_housing
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import optimizers
from tensorflow.keras import losses
# from tensorflow.keras import metrics
%matplotlib inline
plt.style.use('seaborn')
rcParams['figure.figsize'] = (14,7)
(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
train_data.shape
test_data.shape
###Output
_____no_output_____
###Markdown
There are 506 total samples, 404 in the training set and 102 in the test set, which is about an 80/20 split. The 13 features correspond to:
1. CRIM - per capita crime rate by town
2. ZN - proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS - proportion of non-retail business acres per town
4. CHAS - Charles River dummy variable (1 if tract bounds river; 0 otherwise)
5. NOX - nitric oxides concentration (parts per 10 million)
6. RM - average number of rooms per dwelling
7. AGE - proportion of owner-occupied units built prior to 1940
8. DIS - weighted distances to five Boston employment centres
9. RAD - index of accessibility to radial highways
10. TAX - full-value property-tax rate per $10,000
11. PTRATIO - pupil-teacher ratio by town
12. B - 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
13. LSTAT - % lower status of the population

With the single target variable being:
1. MEDV - Median value of owner-occupied homes in $1000's

Let's use some easier to understand names...
###Code
# features = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT']
features = ['crime', 'zoned', 'industry', 'river', 'nox', 'rooms', 'age', 'distance', 'highways', 'tax', 'ptratio', 'blk', 'lower']
boston_train = pd.DataFrame(train_data, columns = features)
boston_train.head()
boston_train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 404 entries, 0 to 403
Data columns (total 13 columns):
crime 404 non-null float64
zoned 404 non-null float64
industry 404 non-null float64
river 404 non-null float64
nox 404 non-null float64
rooms 404 non-null float64
age 404 non-null float64
distance 404 non-null float64
highways 404 non-null float64
tax 404 non-null float64
ptratio 404 non-null float64
blk 404 non-null float64
lower 404 non-null float64
dtypes: float64(13)
memory usage: 41.2 KB
###Markdown
Notice that the CHAS feature should be categorical.
###Code
# boston_train['CHAS'] = boston_train['CHAS'].astype('category')
boston_train['river'] = boston_train['river'].astype('category')
boston_train.info()
boston_train.describe()
boston_train.plot(kind = 'density', subplots = True, layout = (4,4), sharex = False)
plt.tight_layout();
temp = boston_train.corr()
corr_ = pd.DataFrame(np.rot90(temp), columns = temp.columns, index = temp.columns[::-1]) # accounts for seaborn rotating the matrix
ax = sns.heatmap(corr_, cmap = 'coolwarm', linewidths = 1.0, square = True, annot = True)
ax.xaxis.set_ticks_position('top')
# fixes a current matplotlib issue when plotting seaborn heatmaps, top and bottom row are cut off
plt.ylim(0.0, len(corr_.columns))
plt.yticks(ticks = np.arange(0.5, len(corr_.columns), 1.0), rotation = 22.25)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now let's have a quick look at the target variable, median value...
###Code
sns.distplot(train_targets);
plt.scatter(x = range(len(train_targets)), y = train_targets)
plt.axhline(y = np.mean(train_targets), color = 'red', lw = 3, label = 'Mean')
plt.axhline(y = np.mean(train_targets) + np.std(train_targets), color = 'red', lw = 1.5, ls = '--', label = '1 std. dev.')
plt.axhline(y = np.mean(train_targets) - np.std(train_targets), color = 'red', lw = 1.5, ls = '--')
plt.legend(fontsize = 14, loc = 'upper right')
plt.xlim(-25,425)
plt.ylim(0, 60);
###Output
_____no_output_____
###Markdown
From the above two plots, we can see the median value is capped at $50k...Let's see how correlated the features are to the target...
###Code
full = (pd.DataFrame(train_targets, columns = ['Median Value'])).join(boston_train)
target_corr = full.corr()
target_corr.iloc[:,0]
temp = pd.DataFrame(target_corr.iloc[:,0].values.T.reshape(1,13), columns = target_corr.columns)
temp.index = ['Correlation']
plt.figure(figsize = (17,3))
sns.heatmap(temp, cmap = 'coolwarm', annot = True, linewidths = 1.0)
plt.xticks(rotation = 22.5, fontsize = 12)
plt.yticks(rotation = 0, fontsize = 12);
###Output
_____no_output_____
###Markdown
---
###Code
# I'll center the data now since the features are on different scales and in different units.
mean = train_data.mean(axis = 0)
std = train_data.std(axis = 0)
train_data = (train_data - mean) / std
test_data = (test_data - mean) / std # be sure to normalize with training statistics and not test....
###Output
_____no_output_____
###Markdown
You may have just noticed that I normalized the categorical variable 'CHAS'. Debate is still open, but it scales the categorical variable with the other features, and it shouldn't affect the category's importance if it is scaled. Baseline model - Linear Regression. Creating a simple model first, and then seeing how much we can improve it or how it compares with other advanced models.
###Code
lm = LinearRegression()
lm.fit(train_data, train_targets)
predictions = lm.predict(test_data)
def plot_predictions(X_test, preds):
data = pd.DataFrame([X_test, preds]).T
data.columns = ['Actual', 'Predicted']
data.sort_values(by = 'Actual', inplace = True)
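    # Shaded band: +/- 2 standard deviations of the predictions around the y = x line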
plt.fill_between(data['Actual'], data['Actual'] - 2*data['Predicted'].std(), data['Actual'] + 2*data['Predicted'].std(),
interpolate = True, alpha = 0.13)
plt.errorbar(X_test, X_test, yerr = preds - X_test, alpha = 0.39)
plt.scatter(data['Actual'], data['Predicted'], lw = 1.5, color = 'red')
plt.xlabel('Actual', fontsize = 14)
plt.ylabel('Predicted', fontsize = 14)
plt.title('Actual vs Predicted', fontsize = 18)
plot_predictions(test_targets, predictions)
baseline_mae = metrics.mean_absolute_error(test_targets, predictions)
print('MAE:', metrics.mean_absolute_error(test_targets, predictions))
print('MSE:', metrics.mean_squared_error(test_targets, predictions))
###Output
MAE: 3.4641858124067166
MSE: 23.195599256422977
###Markdown
--- Now let's see if we can improve using other machine learning algorithms. To do... Add in descriptions of the models...
###Code
regressors = []
regressors.append(('Lasso', Lasso()))
regressors.append(('EN', ElasticNet()))
regressors.append(('Tree', DecisionTreeRegressor()))
regressors.append(('KNN', KNeighborsRegressor()))
regressors.append(('SVR', SVR()))
regressors.append(('RFR', RandomForestRegressor()))
regressors.append(('GBR', GradientBoostingRegressor()))
regressors.append(('ADA', AdaBoostRegressor()))
scorer = make_scorer(metrics.mean_absolute_error)
labels = ['Lasso', 'EN', 'Tree', 'KNN', 'SVR', 'RFR', 'GBR', 'ADA']
k = 4
N = 13
lasso_cv = pd.DataFrame(data = 0.0, columns = ['Fold ' + str(i) for i in range(1,k+1)], index = range(N))
en_cv = pd.DataFrame(data = 0.0, columns = ['Fold ' + str(i) for i in range(1,k+1)], index = range(N))
tree_cv = pd.DataFrame(data = 0.0, columns = ['Fold ' + str(i) for i in range(1,k+1)], index = range(N))
knn_cv = pd.DataFrame(data = 0.0, columns = ['Fold ' + str(i) for i in range(1,k+1)], index = range(N))
svr_cv = pd.DataFrame(data = 0.0, columns = ['Fold ' + str(i) for i in range(1,k+1)], index = range(N))
rfr_cv = pd.DataFrame(data = 0.0, columns = ['Fold ' + str(i) for i in range(1,k+1)], index = range(N))
gbr_cv = pd.DataFrame(data = 0.0, columns = ['Fold ' + str(i) for i in range(1,k+1)], index = range(N))
ada_cv = pd.DataFrame(data = 0.0, columns = ['Fold ' + str(i) for i in range(1,k+1)], index = range(N))
for i in range(N):
for name, regressor in regressors:
kfold = KFold(n_splits = k, shuffle = True)
if name == 'Lasso':
lasso_cv.iloc[i,:] = cross_val_score(regressor, train_data, train_targets, cv = kfold, scoring = scorer)
if name == 'EN':
en_cv.iloc[i,:] = cross_val_score(regressor, train_data, train_targets, cv = kfold, scoring = scorer)
if name == 'Tree':
tree_cv.iloc[i,:] = cross_val_score(regressor, train_data, train_targets, cv = kfold, scoring = scorer)
if name == 'KNN':
knn_cv.iloc[i,:] = cross_val_score(regressor, train_data, train_targets, cv = kfold, scoring = scorer)
if name == 'SVR':
svr_cv.iloc[i,:] = cross_val_score(regressor, train_data, train_targets, cv = kfold, scoring = scorer)
if name == 'RFR':
rfr_cv.iloc[i,:] = cross_val_score(regressor, train_data, train_targets, cv = kfold, scoring = scorer)
if name == 'GBR':
gbr_cv.iloc[i,:] = cross_val_score(regressor, train_data, train_targets, cv = kfold, scoring = scorer)
if name == 'ADA':
ada_cv.iloc[i,:] = cross_val_score(regressor, train_data, train_targets, cv = kfold, scoring = scorer)
named_cvs = zip(labels, [lasso_cv, en_cv, tree_cv, knn_cv, svr_cv, rfr_cv, gbr_cv, ada_cv])
cv_results = pd.DataFrame(data = 0.0, columns = labels, index = range(N*k))
for name, df in named_cvs:
cv_results.loc[:,name] = pd.melt(df)['value']
cv_results.head()
###Output
_____no_output_____
###Markdown
Seaborn color palette index 0 is blue, 1 is green, 2 is red. It is used here just to match colors; otherwise the caps would be black.
###Code
cv_results.boxplot(boxprops = dict(lw = 2.0), medianprops = dict(lw = 2.0),
whiskerprops = dict(lw = 1.5, ls = '--'), capprops = dict(lw = 2.0, color = sns.color_palette()[0]))
plt.axhline(y = baseline_mae, color = 'black', lw = 0.5, ls = '--', label = 'baseline')
plt.legend(fontsize = 14)
plt.xlim(0, 9)
plt.ylim(0, 5);
results = pd.DataFrame(data = 0.0, columns = ['MAE', 'MSE'], index = labels)
# fig, ax = plt.subplots(4,2)
for i, (name, regressor) in enumerate(regressors):
regressor.fit(train_data, train_targets)
predicts = regressor.predict(test_data)
results.loc[name, 'MAE'] = metrics.mean_absolute_error(test_targets, predicts)
results.loc[name, 'MSE'] = metrics.mean_squared_error(test_targets, predicts)
# plot_predictions(test_targets, predicts)
results
comparison = pd.DataFrame([cv_results.mean(), results['MAE']]).T
comparison.columns = ['CV Results', 'Test Results']
comparison['Abs. Diff'] = (comparison['CV Results'] - comparison['Test Results']).abs()
comparison
cv_results.boxplot(boxprops = dict(lw = 2.0), medianprops = dict(lw = 2.0),
whiskerprops = dict(lw = 1.5, ls = '--'), capprops = dict(lw = 2.0, color = sns.color_palette()[0]))
plt.scatter(x = range(1,9), y = results['MAE'].values, color = 'red', label = 'Test Results')
plt.axhline(y = baseline_mae, color = 'black', lw = 0.5, ls = '--', label = 'baseline')
plt.xticks(ticks = range(1,9), labels = labels)
plt.legend(fontsize = 14)
plt.xlim(0, 9)
plt.ylim(0, 5);
###Output
_____no_output_____
###Markdown
Hyperparameter tuning to see if we can improve our results anymore...
###Code
import warnings
warnings.filterwarnings('ignore')
param_grids = {}
param_grids['Lasso'] = {'alpha' : np.arange(0.05, 5.0, 0.1)}
param_grids['EN'] = {'alpha' : np.arange(0.05, 5.0, 0.1), 'l1_ratio' : np.arange(0, 1, 0.1)}
param_grids['Tree'] = {'max_depth' : [3, 7, 11, 17, 26]}
param_grids['KNN'] = {'n_neighbors' : np.arange(1,11,3)}
param_grids['SVR'] = {'C' : [1, 10, 100, 1000, 10000], 'gamma' : [0.01, 0.001, 0.0001]}
param_grids['RFR'] = {'n_estimators' : np.arange(5, 100, 5), 'max_depth' : [3, 7, 11, 17, 26]}
param_grids['GBR'] = {'alpha' : np.arange(0.1, 0.9, 0.1), 'n_estimators' : np.arange(50, 500, 50), 'max_depth' : [3, 7, 11, 17, 26]}
param_grids['ADA'] = {'n_estimators' : np.arange(50, 500, 50)}
best_estimators = []
best_scores = []
for name, regressor in regressors:
param_grid = param_grids[name]
model = GridSearchCV(regressor, param_grids[name], scoring = 'neg_mean_absolute_error')
model.fit(train_data, train_targets)
best_estimators.append(model.best_estimator_)
best_scores.append(model.best_score_)
best_estimators
labels = ['Lasso', 'EN', 'Tree', 'KNN', 'SVR', 'RFR', 'GBR', 'ADA']
gs_results = pd.DataFrame(data = 0.0, columns = ['MAE', 'MSE'], index = labels)
for i, model in enumerate(best_estimators):
predicts = model.predict(test_data)
gs_results.loc[labels[i], 'MAE'] = metrics.mean_absolute_error(test_targets, predicts)
gs_results.loc[labels[i], 'MSE'] = metrics.mean_squared_error(test_targets, predicts)
# plot_predictions(test_targets, predicts)
gs_results
cv_results.boxplot(boxprops = dict(lw = 2.0), medianprops = dict(lw = 2.0),
whiskerprops = dict(lw = 1.5, ls = '--'), capprops = dict(lw = 2.0, color = sns.color_palette()[0]))
plt.scatter(x = range(1,9), y = results['MAE'].values, color = 'red', label = 'Test Results (no tuning)')
plt.scatter(x = range(1,9), y = gs_results['MAE'].values, color = 'green', label = 'Test Results (w/ tuning)')
plt.xticks(ticks = range(1,9), labels = labels)
plt.legend(fontsize = 14)
plt.xlim(0, 9)
plt.ylim(0, 8);
###Output
_____no_output_____
###Markdown
Now let's see if a DNN can outperform these classical machine learning models
###Code
def build_model():
model = models.Sequential()
model.add(layers.Dense(64, activation = 'relu', input_shape = (train_data.shape[1],)))
model.add(layers.Dense(64, activation = 'relu'))
model.add(layers.Dense(1))
model.compile(optimizer = 'rmsprop', loss = 'mse', metrics = ['mae'])
return model
k = 4
num_val_samples = len(train_data) // k
num_epochs = 200
all_mae = []
for i in range(k):
print('processing fold #', i)
# Prepare the validation data: data from partition # k
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
# Prepare the training data: data from all other partitions
partial_train_data = np.concatenate([train_data[:i * num_val_samples], train_data[(i + 1) * num_val_samples:]], axis = 0)
partial_train_targets = np.concatenate([train_targets[:i * num_val_samples], train_targets[(i + 1) * num_val_samples:]], axis = 0)
# Build the Keras model (already compiled)
model = build_model()
# Train the model (in silent mode, verbose=0)
history = model.fit(partial_train_data, partial_train_targets,
epochs = num_epochs, batch_size = 1, verbose = False,
validation_data = (val_data, val_targets))
# Evaluate the model on the validation data
all_mae.append(history.history['val_mae'])
# averages each mae with the mae from a different fold but the same epoch
average_mae_history = [np.mean([x[i] for x in all_mae]) for i in range(num_epochs)]
plt.plot(average_mae_history[5:]);
def smooth_curve(points, factor=0.95):
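    # Exponential moving average: blend each point with the previous smoothed value to reduce noise in the curve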
smoothed_points = []
for point in points:
if smoothed_points:
previous = smoothed_points[-1]
smoothed_points.append(previous * factor + point * (1 - factor))
else:
smoothed_points.append(point)
return smoothed_points
smooth_mae_history = smooth_curve(average_mae_history[5:])
plt.plot(range(1, len(smooth_mae_history) + 1), smooth_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE');
test_mae = []
for i in range(10):
model = build_model()
model.fit(train_data, train_targets, epochs = 80, batch_size = 8, verbose = False)
test_mse_score, test_mae_score = model.evaluate(test_data, test_targets)
test_mae.append(test_mae_score)
np.mean(test_mae)
predictions = model.predict(test_data)
plot_predictions(test_targets, predictions.flatten())
###Output
_____no_output_____
###Markdown
--- See if we can improve the DNN's performance...
###Code
def build_model(n_hidden = 1, n_neurons = 64, activation = 'relu', learning_rate = 0.001,
optimizer = 'rmsprop', input_shape = (train_data.shape[1],)):
model = keras.models.Sequential()
model.add(keras.layers.InputLayer(input_shape=input_shape))
for layer in range(n_hidden):
model.add(keras.layers.Dense(n_neurons, activation = activation))
model.add(keras.layers.Dense(1))
    if optimizer == 'rmsprop':
        optimizer = keras.optimizers.RMSprop(lr = learning_rate)
    elif optimizer == 'adam':
        optimizer = keras.optimizers.Adam(lr = learning_rate)
model.compile(optimizer = optimizer, loss = 'mse', metrics = ['mae'])
return model
keras_reg = keras.wrappers.scikit_learn.KerasRegressor(build_model)
param_dist = {'n_hidden': np.arange(1, 7, 1),
              'n_neurons': np.arange(32, 129, 32),  # start at 32: a hidden layer with 0 neurons is degenerate
'activation': ['relu', 'selu'],
'learning_rate': [0.1, 0.01, 0.001],
'optimizer': ['rmsprop', 'adam']}
grid_search_cv = GridSearchCV(keras_reg, param_dist, cv = 4, verbose = 0)
grid_search_cv.fit(train_data, train_targets, epochs = 75, batch_size = 32)
###Output
_____no_output_____
###Markdown
There is currently an error with the scikit-learn and Keras wrapper that does not allow a deep copy of the best parameters, so `best_estimator_` cannot be used here. Access to `best_params_` still exists, so the model is rebuilt manually from those parameters instead.
###Code
grid_search_cv.best_params_
model = build_model(n_hidden = 4, n_neurons = 128, activation = 'relu', learning_rate = 0.01, optimizer = 'adam')
model.fit(train_data, train_targets, epochs = 100)
predictions = model.predict(test_data)
metrics.mean_absolute_error(test_targets, predictions)
plot_predictions(test_targets, predictions.flatten())
k = 4
num_val_samples = len(train_data) // k
num_epochs = 500
all_mae = []
for i in range(k):
print('processing fold #', i)
# Prepare the validation data: data from partition # k
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
# Prepare the training data: data from all other partitions
partial_train_data = np.concatenate([train_data[:i * num_val_samples], train_data[(i + 1) * num_val_samples:]], axis = 0)
partial_train_targets = np.concatenate([train_targets[:i * num_val_samples], train_targets[(i + 1) * num_val_samples:]], axis = 0)
# Build the Keras model (already compiled)
model = build_model(n_hidden = 4, n_neurons = 128, activation = 'relu', learning_rate = 0.01, optimizer = 'adam')
# Train the model (in silent mode, verbose=0)
history = model.fit(partial_train_data, partial_train_targets,
epochs = num_epochs, batch_size = 1, verbose = False,
validation_data = (val_data, val_targets))
# Evaluate the model on the validation data
all_mae.append(history.history['val_mae'])
average_mae_history = [np.mean([x[i] for x in all_mae]) for i in range(num_epochs)]
plt.plot(average_mae_history[5:]);
def smooth_curve(points, factor=0.9):
smoothed_points = []
for point in points:
if smoothed_points:
previous = smoothed_points[-1]
smoothed_points.append(previous * factor + point * (1 - factor))
else:
smoothed_points.append(point)
return smoothed_points
smooth_mae_history = smooth_curve(average_mae_history[5:])
plt.plot(range(1, len(smooth_mae_history) + 1), smooth_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE');
rnd_search_cv = RandomizedSearchCV(keras_reg, param_dist, n_iter = 200, cv = 4, verbose = 0)
val_data = train_data[325:]
val_targets = train_targets[325:]
rnd_search_cv.fit(train_data, train_targets, epochs = 75, batch_size = 32,
validation_data = (val_data, val_targets),
callbacks = [keras.callbacks.EarlyStopping(patience = 10)])
rnd_search_cv.best_params_
rnd_search_cv.best_score_
model = rnd_search_cv.best_estimator_
predictions = model.predict(test_data)
metrics.mean_absolute_error(test_targets, predictions)
###Output
_____no_output_____
###Markdown
To do: add weight initializers to the parameter grid, include dropout, and try other optimizers; a sketch of these extensions follows the residual plot below.
###Code
plot_predictions(test_targets, predictions)
plt.scatter(x = range(len(test_targets)), y = test_targets - predictions)
###Output
_____no_output_____ |
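###Markdown
The sketch below illustrates the to-do items above; it is illustrative only and was not run here. `build_model_v2`, `param_dist_v2`, the initializer names, and the dropout values are assumptions introduced for this sketch (reusing the `keras` and `np` names imported earlier in this notebook), not settings used elsewhere.
###Code
def build_model_v2(n_hidden = 1, n_neurons = 64, activation = 'relu', learning_rate = 0.001,
                   optimizer = 'rmsprop', initializer = 'glorot_uniform', dropout_rate = 0.0,
                   input_shape = (train_data.shape[1],)):
    # Same architecture as build_model, plus a configurable kernel initializer and optional dropout
    model = keras.models.Sequential()
    model.add(keras.layers.InputLayer(input_shape = input_shape))
    for layer in range(n_hidden):
        model.add(keras.layers.Dense(n_neurons, activation = activation, kernel_initializer = initializer))
        if dropout_rate > 0:
            model.add(keras.layers.Dropout(dropout_rate))
    model.add(keras.layers.Dense(1))
    if optimizer == 'rmsprop':
        optimizer = keras.optimizers.RMSprop(lr = learning_rate)
    elif optimizer == 'adam':
        optimizer = keras.optimizers.Adam(lr = learning_rate)
    model.compile(optimizer = optimizer, loss = 'mse', metrics = ['mae'])
    return model

# Illustrative (assumed) search space covering the to-do items
param_dist_v2 = {'n_hidden': np.arange(1, 7, 1),
                 'n_neurons': np.arange(32, 129, 32),
                 'activation': ['relu', 'selu'],
                 'learning_rate': [0.1, 0.01, 0.001],
                 'optimizer': ['rmsprop', 'adam'],
                 'initializer': ['glorot_uniform', 'he_normal', 'lecun_normal'],
                 'dropout_rate': [0.0, 0.2, 0.5]}
###Output
_____no_output_____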
demos/quasirandom_generators.ipynb | ###Markdown
Quasi-Random Sequence Generator Comparison
###Code
from qmcpy import *
import pandas as pd
pd.options.display.float_format = '{:.2e}'.format
from numpy import *
set_printoptions(threshold=2**10)
set_printoptions(precision=3)
from matplotlib import pyplot as plt
import matplotlib
%matplotlib inline
SMALL_SIZE = 10
MEDIUM_SIZE = 12
BIGGER_SIZE = 14
plt.rc('font', size=BIGGER_SIZE) # controls default text sizes
plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=BIGGER_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=BIGGER_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
###Output
_____no_output_____
###Markdown
General Usage
###Code
# Unshifted Samples
lattice = Lattice(dimension=2, randomize=False, seed=7)
unshifted_samples = lattice.gen_samples(n_min=2,n_max=8)
print('Shape: %s'%str(unshifted_samples.shape))
print('Samples:\n'+str(unshifted_samples))
# Shifted Samples
lattice = Lattice(dimension=2, randomize=True, seed=7)
shifted_samples = lattice.gen_samples(n_min=4, n_max=8)
print('Shape: %s'%str(shifted_samples.shape))
print('Samples:\n'+str(shifted_samples))
###Output
Shape: (4, 2)
Samples:
[[0.201 0.155]
[0.701 0.655]
[0.451 0.905]
[0.951 0.405]]
###Markdown
QMCPy Generator Times ComparisonCompare the speed of low-discrepancy-sequence generators from Python (QMCPy), MATLAB, and R.The following blocks visualize speed comparisons when generating 1 dimensional unshifted/unscrambled sequences. Note that the generators are reinitialized before every trial.
###Code
# Load AccumulateData
df_py = pd.read_csv('../workouts/lds_sequences/out/python_sequences.csv')
df_py.columns = ['n',
'py_n','py_l','py_mps',
'py_h_QRNG','py_h_Owen',
'py_k_QRNG',
'py_s_QMCPy']
df_m = pd.read_csv('../workouts/lds_sequences/out/matlab_sequences.csv', header=None)
df_m.columns = ['n', 'm_l', 'm_s','m_h']
df_r = pd.read_csv('../workouts/lds_sequences/out/r_sequences.csv')
df_r.columns = ['n','r_s','r_h','r_k']
df_r.reset_index(drop=True, inplace=True)
def plt_lds_comp(df,name,colors):
fig,ax = plt.subplots(nrows=1, ncols=1, figsize=(8,5))
labels = df.columns[1:]
n = df['N']
for label,color in zip(labels,colors):
ax.loglog(n, df[label], label=label, color=color)
ax.legend(loc='upper left')
ax.set_xlabel('Sampling Points')
ax.set_ylabel('Generation Time (Seconds)')
# Metas and Export
fig.suptitle('Speed Comparison of %s Generators'%name)
###Output
_____no_output_____
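###Markdown
The CSV files loaded above are generated offline by the repository's `workouts/lds_sequences` scripts. The cell below is a rough sketch of how one generator could be timed in the same spirit (a fresh generator before every trial); it is not the actual workout code, and the sample sizes, trial count, and median summary are assumptions made for illustration.
###Code
from time import perf_counter

def time_generator(make_gen, n_values, trials=3):
    # Median wall-clock time to generate n points, re-initializing the generator before every trial
    medians = []
    for n in n_values:
        trial_times = []
        for _ in range(trials):
            gen = make_gen()                       # fresh generator each trial
            t0 = perf_counter()
            gen.gen_samples(n_min=0, n_max=n)      # draw the first n low-discrepancy points
            trial_times.append(perf_counter() - t0)
        medians.append(sorted(trial_times)[len(trial_times) // 2])
    return medians

ns = [2**m for m in range(10, 16)]
lattice_times = time_generator(lambda: Lattice(dimension=1, randomize=False), ns)
print(list(zip(ns, lattice_times)))
###Output
_____no_output_____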
###Markdown
Lattice
###Code
df_l = pd.concat([df_py['n'], df_py['py_n'], df_py['py_l'], df_py['py_mps'], df_m['m_l']], axis=1)
df_l.columns = ['N','QMCPy_Natural','QMCPy_Linear','QMCPy_MPS','MATLAB_GAIL']
df_l.set_index('N')
plt_lds_comp(df_l,'Lattice',colors=['r','g','b','k'])
###Output
_____no_output_____
###Markdown
Sobol
###Code
df_s = pd.concat([df_py['n'], df_py['py_s_QMCPy'],df_m['m_s'], df_r['r_s']], axis=1)
df_s.columns = ['N','QMCPy','MATLAB','R_QRNG']
df_s.set_index('N')
plt_lds_comp(df_s,'Sobol',['r','g','b','c','m']) # GC = GrayCode, N=Natural
###Output
_____no_output_____
###Markdown
Halton (Generalized)
###Code
df_h = pd.concat([df_py['n'], df_py['py_h_QRNG'],df_py['py_h_Owen'], df_r['r_h'],df_m['m_h']], axis=1)
df_h.columns = ['N','QMCPy_QRNG','QMCPy_Owen','R_QRNG','MATLAB']
df_h.set_index('N')
plt_lds_comp(df_h,'Halton',colors=['r','g','b','c'])
###Output
_____no_output_____
###Markdown
Korobov
###Code
df_k = pd.concat([df_py['n'], df_py['py_h_QRNG'],df_r['r_k']], axis=1)
df_k.columns = ['N','QMCPy_QRNG','R_QRNG']
df_k.set_index('N')
plt_lds_comp(df_k,'Korobov',colors=['r','g','b'])
###Output
_____no_output_____
###Markdown
QMCPy Default Generators
###Code
df_qmcpy = pd.concat([df_py['n'], df_py['py_h_Owen'], df_py['py_k_QRNG'], df_py['py_n'], df_py['py_s_QMCPy']], axis=1)
df_qmcpy.columns = ['N','Halton_QRNG', 'Korobov_QRNG', 'Lattice', 'Sobol']
df_qmcpy.set_index('N')
plt_lds_comp(df_qmcpy,'QMCPy Generators with Default Backends',colors=['r','g','b','c'])
###Output
_____no_output_____
###Markdown
Quasi-Random Sequence Generator Comparison
###Code
from qmcpy import *
import pandas as pd
pd.options.display.float_format = '{:.2e}'.format
from numpy import *
set_printoptions(threshold=2**10)
set_printoptions(precision=3)
from matplotlib import pyplot as plt
import matplotlib
%matplotlib inline
SMALL_SIZE = 10
MEDIUM_SIZE = 12
BIGGER_SIZE = 14
plt.rc('font', size=BIGGER_SIZE) # controls default text sizes
plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=BIGGER_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=BIGGER_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
###Output
_____no_output_____
###Markdown
General Usage
###Code
# Unshifted Samples
lattice = Lattice(dimension=2, randomize=False, seed=7)
unshifted_samples = lattice.gen_samples(n_min=2,n_max=8)
print('Shape: %s'%str(unshifted_samples.shape))
print('Samples:\n'+str(unshifted_samples))
# Shifted Samples
lattice = Lattice(dimension=2, randomize=True, seed=7)
shifted_samples = lattice.gen_samples(n_min=4, n_max=8)
print('Shape: %s'%str(shifted_samples.shape))
print('Samples:\n'+str(shifted_samples))
###Output
Shape: (4, 2)
Samples:
[[0.201 0.155]
[0.701 0.655]
[0.451 0.905]
[0.951 0.405]]
###Markdown
QMCPy Generator Times ComparisonCompare the speed of low-discrepancy-sequence generators from Python (QMCPy), MATLAB, and R.The following blocks visualize speed comparisons when generating 1 dimensional unshifted/unscrambled sequences. Note that the generators are reinitialized before every trial.
###Code
# Load AccumulateData
df_py = pd.read_csv('../workouts/lds_sequences/out/python_sequences.csv')
df_py.columns = ['n',
'py_n','py_l','py_mps',
'py_h_QRNG','py_h_Owen',
'py_k_QRNG',
'py_s_QMCPy']
df_m = pd.read_csv('../workouts/lds_sequences/out/matlab_sequences.csv', header=None)
df_m.columns = ['n', 'm_l', 'm_s','m_h']
df_r = pd.read_csv('../workouts/lds_sequences/out/r_sequences.csv')
df_r.columns = ['n','r_s','r_h','r_k']
df_r.reset_index(drop=True, inplace=True)
def plt_lds_comp(df,name,colors):
fig,ax = plt.subplots(nrows=1, ncols=1, figsize=(8,5))
labels = df.columns[1:]
n = df['N']
for label,color in zip(labels,colors):
ax.loglog(n, df[label], label=label, color=color)
ax.legend(loc='upper left')
ax.set_xlabel('Sampling Points')
ax.set_ylabel('Generation Time (Seconds)')
# Metas and Export
fig.suptitle('Speed Comparison of %s Generators'%name)
###Output
_____no_output_____
###Markdown
Lattice
###Code
df_l = pd.concat([df_py['n'], df_py['py_n'], df_py['py_l'], df_py['py_mps'], df_m['m_l']], axis=1)
df_l.columns = ['N','QMCPy_Natural','QMCPy_Linear','QMCPy_MPS','MATLAB_GAIL']
df_l.set_index('N')
plt_lds_comp(df_l,'Lattice',colors=['r','g','b','k'])
###Output
_____no_output_____
###Markdown
Sobol
###Code
df_s = pd.concat([df_py['n'], df_py['py_s_QMCPy'],df_m['m_s'], df_r['r_s']], axis=1)
df_s.columns = ['N','QMCPy','MATLAB','R_QRNG']
df_s.set_index('N')
plt_lds_comp(df_s,'Sobol',['r','g','b','c','m']) # GC = GrayCode, N=Natural
###Output
_____no_output_____
###Markdown
Halton (Generalized)
###Code
df_h = pd.concat([df_py['n'], df_py['py_h_QRNG'],df_py['py_h_Owen'], df_r['r_h'],df_m['m_h']], axis=1)
df_h.columns = ['N','QMCPy_QRNG','QMCPy_Owen','R_QRNG','MATLAB']
df_h.set_index('N')
plt_lds_comp(df_h,'Halton',colors=['r','g','b','c'])
###Output
_____no_output_____
###Markdown
Korobov
###Code
df_k = pd.concat([df_py['n'], df_py['py_h_QRNG'],df_r['r_k']], axis=1)
df_k.columns = ['N','QMCPy_QRNG','R_QRNG']
df_k.set_index('N')
plt_lds_comp(df_k,'Korobov',colors=['r','g','b'])
###Output
_____no_output_____
###Markdown
QMCPy Default Generators
###Code
df_qmcpy = pd.concat([df_py['n'], df_py['py_h_Owen'], df_py['py_k_QRNG'], df_py['py_n'], df_py['py_s_QMCPy']], axis=1)
df_qmcpy.columns = ['N','Halton_QRNG', 'Korobov_QRNG', 'Lattice', 'Sobol']
df_qmcpy.set_index('N')
plt_lds_comp(df_qmcpy,'QMCPy Generators with Default Backends',colors=['r','g','b','c'])
###Output
_____no_output_____
###Markdown
Quasi-Random Sequence Generator Comparison
###Code
from qmcpy import *
import pandas as pd
pd.options.display.float_format = '{:.2e}'.format
from numpy import *
set_printoptions(threshold=2**10)
set_printoptions(precision=3)
from matplotlib import pyplot as plt
import matplotlib
%matplotlib inline
SMALL_SIZE = 10
MEDIUM_SIZE = 12
BIGGER_SIZE = 14
plt.rc('font', size=BIGGER_SIZE) # controls default text sizes
plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=BIGGER_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=BIGGER_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
###Output
_____no_output_____
###Markdown
General Usage
###Code
# Unshifted Samples
lattice = Lattice(dimension=2, randomize=False, seed=7)
unshifted_samples = lattice.gen_samples(n_min=2,n_max=8)
print('Shape: %s'%str(unshifted_samples.shape))
print('Samples:\n'+str(unshifted_samples))
# Shifted Samples
lattice = Lattice(dimension=2, randomize=True, seed=7)
shifted_samples = lattice.gen_samples(n_min=4, n_max=8)
print('Shape: %s'%str(shifted_samples.shape))
print('Samples:\n'+str(shifted_samples))
###Output
Shape: (4, 2)
Samples:
[[0.169 0.962]
[0.669 0.462]
[0.419 0.712]
[0.919 0.212]]
###Markdown
QMCPy Generator Times ComparisonCompare the speed of low-discrepancy-sequence generators from Python (QMCPy), MATLAB, and R.The following blocks visualize speed comparisons when generating 1 dimensional unshifted/unscrambled sequences. Note that the generators are reinitialized before every trial.
###Code
# Load AccumulateData
df_py = pd.read_csv('../workouts/lds_sequences/out/python_sequences.csv')
df_py.columns = ['n',
'py_n','py_l','py_mps',
'py_h_QRNG','py_h_Owen',
'py_s_QMCPy','py_s_SciPy']
df_m = pd.read_csv('../workouts/lds_sequences/out/matlab_sequences.csv', header=None)
df_m.columns = ['n', 'm_l', 'm_s','m_h']
df_r = pd.read_csv('../workouts/lds_sequences/out/r_sequences.csv')
df_r.columns = ['n','r_s','r_h','r_k']
df_r.reset_index(drop=True, inplace=True)
def plt_lds_comp(df,name,colors):
fig,ax = plt.subplots(nrows=1, ncols=1, figsize=(8,5))
labels = df.columns[1:]
n = df['N']
for label,color in zip(labels,colors):
ax.plot(n, df[label], label=label, color=color)
ax.set_xscale('log',base=2)
ax.set_yscale('log',base=10)
ax.legend(loc='upper left')
ax.set_xlabel('Sampling Points')
ax.set_ylabel('Generation Time (Seconds)')
# Metas and Export
fig.suptitle('Speed Comparison of %s Generators'%name)
###Output
_____no_output_____
###Markdown
Lattice
###Code
df_l = pd.concat([df_py['n'], df_py['py_n'], df_py['py_l'], df_py['py_mps'], df_m['m_l']], axis=1)
df_l.columns = ['N','QMCPy_Natural','QMCPy_Linear','QMCPy_MPS','MATLAB_GAIL']
df_l.set_index('N')
plt_lds_comp(df_l,'Lattice',colors=['r','g','b','k'])
###Output
_____no_output_____
###Markdown
Sobol
###Code
df_s = pd.concat([df_py['n'], df_py['py_s_QMCPy'], df_py['py_s_SciPy'], df_m['m_s'], df_r['r_s']], axis=1)
df_s.columns = ['N','QMCPy','SciPy','MATLAB','R_QRNG']
df_s.set_index('N')
plt_lds_comp(df_s,'Sobol',['r','g','b','c','m']) # GC = GrayCode, N=Natural
###Output
_____no_output_____
###Markdown
Halton (Generalized)
###Code
df_h = pd.concat([df_py['n'], df_py['py_h_QRNG'],df_py['py_h_Owen'], df_r['r_h'],df_m['m_h']], axis=1)
df_h.columns = ['N','QMCPy_QRNG','QMCPy_Owen','R_QRNG','MATLAB']
df_h.set_index('N')
plt_lds_comp(df_h,'Halton',colors=['r','g','b','c'])
###Output
_____no_output_____
###Markdown
QMCPy Default Generators
###Code
df_qmcpy = pd.concat([df_py['n'], df_py['py_h_Owen'], df_py['py_n'], df_py['py_s_QMCPy']], axis=1)
df_qmcpy.columns = ['N','Halton_QRNG', 'Lattice', 'Sobol']
df_qmcpy.set_index('N')
plt_lds_comp(df_qmcpy,'QMCPy Generators with Default Backends',colors=['r','g','b','c'])
###Output
_____no_output_____
###Markdown
Quasi-Random Sequence Generator Comparison
###Code
from qmcpy import *
import pandas as pd
pd.options.display.float_format = '{:.2e}'.format
from numpy import *
set_printoptions(threshold=2**10)
set_printoptions(precision=3)
from matplotlib import pyplot as plt
import matplotlib
%matplotlib inline
SMALL_SIZE = 10
MEDIUM_SIZE = 12
BIGGER_SIZE = 14
plt.rc('font', size=BIGGER_SIZE) # controls default text sizes
plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=BIGGER_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=BIGGER_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
###Output
_____no_output_____
###Markdown
General Usage
###Code
# Unshifted Samples
lattice = Lattice(dimension=2, randomize=False, seed=7)
unshifted_samples = lattice.gen_samples(n_min=2,n_max=8)
print('Shape: %s'%str(unshifted_samples.shape))
print('Samples:\n'+str(unshifted_samples))
# Shifted Samples
lattice = Lattice(dimension=2, randomize=True, seed=7)
shifted_samples = lattice.gen_samples(n_min=4, n_max=8)
print('Shape: %s'%str(shifted_samples.shape))
print('Samples:\n'+str(shifted_samples))
###Output
Shape: (4, 2)
Samples:
[[0.169 0.962]
[0.669 0.462]
[0.419 0.712]
[0.919 0.212]]
###Markdown
QMCPy Generator Times ComparisonCompare the speed of low-discrepancy-sequence generators from Python (QMCPy), MATLAB, and R.The following blocks visualize speed comparisons when generating 1 dimensional unshifted/unscrambled sequences. Note that the generators are reinitialized before every trial.
###Code
# Load AccumulateData
df_py = pd.read_csv('../workouts/lds_sequences/out/python_sequences.csv')
df_py.columns = ['n',
'py_n','py_l','py_mps',
'py_h_QRNG','py_h_Owen',
'py_s_QMCPy','py_s_SciPy']
df_m = pd.read_csv('../workouts/lds_sequences/out/matlab_sequences.csv', header=None)
df_m.columns = ['n', 'm_l', 'm_s','m_h']
df_r = pd.read_csv('../workouts/lds_sequences/out/r_sequences.csv')
df_r.columns = ['n','r_s','r_h','r_k']
df_r.reset_index(drop=True, inplace=True)
def plt_lds_comp(df,name,colors):
fig,ax = plt.subplots(nrows=1, ncols=1, figsize=(8,5))
labels = df.columns[1:]
n = df['N']
for label,color in zip(labels,colors):
ax.plot(n, df[label], label=label, color=color)
ax.set_xscale('log',basex=2)
ax.set_yscale('log',basey=10)
ax.legend(loc='upper left')
ax.set_xlabel('Sampling Points')
ax.set_ylabel('Generation Time (Seconds)')
# Metas and Export
fig.suptitle('Speed Comparison of %s Generators'%name)
###Output
_____no_output_____
###Markdown
Lattice
###Code
df_l = pd.concat([df_py['n'], df_py['py_n'], df_py['py_l'], df_py['py_mps'], df_m['m_l']], axis=1)
df_l.columns = ['N','QMCPy_Natural','QMCPy_Linear','QMCPy_MPS','MATLAB_GAIL']
df_l.set_index('N')
plt_lds_comp(df_l,'Lattice',colors=['r','g','b','k'])
###Output
_____no_output_____
###Markdown
Sobol
###Code
df_s = pd.concat([df_py['n'], df_py['py_s_QMCPy'], df_py['py_s_SciPy'], df_m['m_s'], df_r['r_s']], axis=1)
df_s.columns = ['N','QMCPy','SciPy','MATLAB','R_QRNG']
df_s.set_index('N')
plt_lds_comp(df_s,'Sobol',['r','g','b','c','m']) # GC = GrayCode, N=Natural
###Output
_____no_output_____
###Markdown
Halton (Generalized)
###Code
df_h = pd.concat([df_py['n'], df_py['py_h_QRNG'],df_py['py_h_Owen'], df_r['r_h'],df_m['m_h']], axis=1)
df_h.columns = ['N','QMCPy_QRNG','QMCPy_Owen','R_QRNG','MATLAB']
df_h.set_index('N')
plt_lds_comp(df_h,'Halton',colors=['r','g','b','c'])
###Output
_____no_output_____
###Markdown
QMCPy Default Generators
###Code
df_qmcpy = pd.concat([df_py['n'], df_py['py_h_Owen'], df_py['py_n'], df_py['py_s_QMCPy']], axis=1)
df_qmcpy.columns = ['N','Halton_QRNG', 'Lattice', 'Sobol']
df_qmcpy.set_index('N')
plt_lds_comp(df_qmcpy,'QMCPy Generators with Default Backends',colors=['r','g','b','c'])
###Output
_____no_output_____
###Markdown
Quasi-Random Sequence Generator Comparison
###Code
from qmcpy import *
import pandas as pd
pd.options.display.float_format = '{:.2e}'.format
from numpy import *
set_printoptions(threshold=2**10)
set_printoptions(precision=3)
from matplotlib import pyplot as plt
import matplotlib
%matplotlib inline
SMALL_SIZE = 10
MEDIUM_SIZE = 12
BIGGER_SIZE = 14
plt.rc('font', size=BIGGER_SIZE) # controls default text sizes
plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=BIGGER_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=BIGGER_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
###Output
_____no_output_____
###Markdown
General Usage
###Code
# Unshifted Samples
lattice = Lattice(dimension=2, randomize=False, seed=7)
unshifted_samples = lattice.gen_samples(n_min=2,n_max=8)
print('Shape: %s'%str(unshifted_samples.shape))
print('Samples:\n'+str(unshifted_samples))
# Shifted Samples
lattice = Lattice(dimension=2, randomize=True, seed=7)
shifted_samples = lattice.gen_samples(n_min=4, n_max=8)
print('Shape: %s'%str(shifted_samples.shape))
print('Samples:\n'+str(shifted_samples))
###Output
Shape: (4, 2)
Samples:
[[0.201 0.155]
[0.701 0.655]
[0.451 0.905]
[0.951 0.405]]
###Markdown
QMCPy Generator Times ComparisonCompare the speed of low-discrepancy-sequence generators from Python (QMCPy), MATLAB, and R.The following blocks visualize speed comparisons when generating 1 dimensional unshifted/unscrambled sequences. Note that the generators are reinitialized before every trial.
###Code
# Load AccumulateData
df_py = pd.read_csv('../workouts/lds_sequences/out/python_sequences.csv')
df_py.columns = ['n',
'py_n','py_l','py_mps',
'py_h_QRNG','py_h_Owen',
'py_k_QRNG',
'py_s_QMCPy','py_s_PyTorch']
df_m = pd.read_csv('../workouts/lds_sequences/out/matlab_sequences.csv', header=None)
df_m.columns = ['n', 'm_l', 'm_s','m_h']
df_r = pd.read_csv('../workouts/lds_sequences/out/r_sequences.csv')
df_r.columns = ['n','r_s','r_h','r_k']
df_r.reset_index(drop=True, inplace=True)
def plt_lds_comp(df,name,colors):
fig,ax = plt.subplots(nrows=1, ncols=1, figsize=(8,5))
labels = df.columns[1:]
n = df['N']
for label,color in zip(labels,colors):
ax.loglog(n, df[label], label=label, color=color)
ax.legend(loc='upper left')
ax.set_xlabel('Sampling Points')
ax.set_ylabel('Generation Time (Seconds)')
# Metas and Export
fig.suptitle('Speed Comparison of %s Generators'%name)
###Output
_____no_output_____
###Markdown
Lattice
###Code
df_l = pd.concat([df_py['n'], df_py['py_n'], df_py['py_l'], df_py['py_mps'], df_m['m_l']], axis=1)
df_l.columns = ['N','QMCPy_Natural','QMCPy_Linear','QMCPy_MPS','MATLAB_GAIL']
df_l.set_index('N')
plt_lds_comp(df_l,'Lattice',colors=['r','g','b','k'])
###Output
_____no_output_____
###Markdown
Sobol
###Code
df_s = pd.concat([df_py['n'], df_py['py_s_QMCPy'], df_py['py_s_PyTorch'], \
df_m['m_s'], df_r['r_s']], axis=1)
df_s.columns = ['N','QMCPy','PyTorch','MATLAB','R_QRNG']
df_s.set_index('N')
plt_lds_comp(df_s,'Sobol',['r','g','b','c','m','y']) # GC = GrayCode, N=Natural
###Output
_____no_output_____
###Markdown
Halton (Generalized)
###Code
df_h = pd.concat([df_py['n'], df_py['py_h_QRNG'],df_py['py_h_Owen'], df_r['r_h'],df_m['m_h']], axis=1)
df_h.columns = ['N','QMCPy_QRNG','QMCPy_Owen','R_QRNG','MATLAB']
df_h.set_index('N')
plt_lds_comp(df_h,'Halton',colors=['r','g','b','c'])
###Output
_____no_output_____
###Markdown
Korobov
###Code
df_k = pd.concat([df_py['n'], df_py['py_h_QRNG'],df_r['r_k']], axis=1)
df_k.columns = ['N','QMCPy_QRNG','R_QRNG']
df_k.set_index('N')
plt_lds_comp(df_k,'Korobov',colors=['r','g','b'])
###Output
_____no_output_____
###Markdown
QMCPy Default Generators
###Code
df_qmcpy = pd.concat([df_py['n'], df_py['py_h_Owen'], df_py['py_k_QRNG'], df_py['py_n'], df_py['py_s_QMCPy']], axis=1)
df_qmcpy.columns = ['N','Halton_QRNG', 'Korobov_QRNG', 'Lattice', 'Sobol']
df_qmcpy.set_index('N')
plt_lds_comp(df_qmcpy,'QMCPy Generators with Default Backends',colors=['r','g','b','c'])
###Output
_____no_output_____ |
Day8 of 10_Mohamed Abdelhay.ipynb | ###Markdown
Day 8 : Data Loading, Manipulation and Visualization (Facies) You can use the following libraries for your assignment:> Numpy, Pandas, Matplotlib, Seaborn, LASIO, Welly Kindly load the well1513.csv file from the data folder Perform the below tasks:>1. Investigate the components of the data file (number of columns, number of observations, null values, summary statistics) 2. Plot well logs together with the facies column (FORCE_2020_LITHOFACIES_LITHOLOGY) as a striplog (facies log) 3. How many classes are in the facies log? 4. How many data points are there per class?
###Code
import pandas as pd
import numpy as np
from IPython.display import display
import matplotlib.pyplot as plt
import seaborn as sb
import lasio
df=pd.read_csv(r'C:/Users/GoSmart/Documents/GitHub/GeoML-2.0/10DaysChallenge/Dutch_F3_Logs/well1513.csv',delimiter=',')
display('Name of columns',df.columns)
display(df.info())
print('Number of NaN values per column:')
df.isnull().sum()
df.describe()
df.keys()
import matplotlib.pyplot as plt
plt.figure(figsize=(15,15))
plt.subplot(1, 9, 1)
plt.plot(df.CALI, -1*df.index, label = 'CALI', c = 'b')
plt.xlabel('CALIBER')
plt.ylabel('DEPTH')
plt.subplot(1, 9, 2)
plt.plot(df.RDEP, -1*df.index, label = 'RDEP', c = 'r')
plt.xlabel('Resistivity')
plt.subplot(1, 9, 3)
plt.plot(df.RHOB, -1*df.index, label = 'RHOB', c = 'black')
plt.xlabel('Density')
plt.subplot(1, 9, 4)
plt.plot(df.GR, -1*df.index, label = 'GR', c = 'g')
plt.xlabel('GR')
plt.subplot(1, 9, 5)
plt.plot(df.NPHI, -1*df.index, label = 'NPHI', c = 'orange')
plt.xlabel('NPHI')
plt.subplot(1, 9, 6)
plt.plot(df.PEF, -1*df.index, label = 'PEF', c = 'yellow')
plt.xlabel('PEF')
plt.subplot(1, 9, 7)
plt.plot(df.DTC, -1*df.index, label = 'DTC', c = 'violet')
plt.xlabel('Sonic')
plt.subplot(1, 9, 8)
plt.plot(df.SP, -1*df.index, label = 'SP', c = 'b')
plt.xlabel('SP')
plt.subplot(1, 9, 9)
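# Duplicate the lithology column (iloc[:, 18]) and transpose it so imshow can render a vertical facies strip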
F = np.vstack((df.iloc[:,18],df.iloc[:,18])).T
plt.imshow(F, aspect='auto', cmap = 'rainbow')
plt.xlabel('Lithology');
display('Number of Facies classes:',df["FORCE_2020_LITHOFACIES_LITHOLOGY"].nunique())
display('Number of data points per each Facies class:', df.value_counts("FORCE_2020_LITHOFACIES_LITHOLOGY"))
###Output
_____no_output_____ |
Improving Deep Neural Networks/Initialization.ipynb | ###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
    plt.xlabel('iterations (per thousands)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initialization There are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print("predictions_train = " + str(predictions_train))
print("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is, however, okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initialization To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
    np.random.seed(3)               # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: inf
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print(predictions_train)
print(predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully, initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! 4 - He initialization Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(2 / layers_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
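###Markdown
For reference, the Xavier initialization mentioned in the He section scales the weights by $\sqrt{\frac{1}{\text{dimension of the previous layer}}}$ instead of $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$. The cell below is a minimal sketch of that variant (not part of the graded exercise); it reuses the `np` import from above and mirrors the structure of `initialize_parameters_he`.
###Code
def initialize_parameters_xavier(layers_dims):
    """Xavier/Glorot-style initialization: weights scaled by sqrt(1/fan_in), biases set to zero."""
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1                     # number of layers
    for l in range(1, L + 1):
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(1. / layers_dims[l - 1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters

parameters = initialize_parameters_xavier([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
###Output
_____no_output_____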
###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
    plt.xlabel('iterations (per thousands)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initialization There are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l],layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is, however, okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initialization To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
    np.random.seed(3)               # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1]) *10
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: divide by zero encountered in log
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully, initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! 4 - He initialization Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1]) * np.sqrt(2/layers_dims[l-1])
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per thousands)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use `np.zeros((..,..))` with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
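To see concretely why random initialization is needed to "break symmetry", here is a small stand-alone sketch (a toy two-unit hidden layer with made-up data, not the assignment's network): when both hidden units start from identical weights, their activations and their gradients are identical, so a gradient step can never make them different.

```python
import numpy as np

X = np.array([[1.0, -2.0, 0.5],      # made-up inputs: 2 features, 3 examples
              [0.5,  3.0, -1.0]])
Y = np.array([[1.0, 0.0, 1.0]])

W1 = np.array([[0.5, -0.3],          # both hidden units start with the SAME weights
               [0.5, -0.3]])
W2 = np.array([[0.2, 0.2]])          # and the output weights treat them identically

Z1 = np.dot(W1, X)                   # forward pass: ReLU hidden layer, linear output
A1 = np.maximum(0, Z1)
A2 = np.dot(W2, A1)

dA2 = (A2 - Y) / X.shape[1]          # backward pass with a squared-error loss
dW2 = np.dot(dA2, A1.T)
dZ1 = np.dot(W2.T, dA2) * (Z1 > 0)
dW1 = np.dot(dZ1, X.T)

print("Hidden activations identical:", np.allclose(A1[0], A1[1]))
print("Hidden-layer gradients identical:", np.allclose(dW1[0], dW1[1]))
```

With all-zero weights the situation is even worse: the hidden activations are all zero, so every weight gradient vanishes, which is why the cost above never moves from 0.693.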
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: divide by zero encountered in log
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully, initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! 4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2.0/layers_dims[l-1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per thousands)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use `np.zeros((..,..))` with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
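(Before that, a quick toy forward pass, separate from the assignment code, shows why the cost is stuck: with every weight at zero, the hidden ReLU activations are zero, the output is sigmoid(0) = 0.5 for every example, and the thresholded prediction is therefore the same class for the whole dataset.)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

np.random.seed(0)
X = np.random.randn(2, 6)                       # 6 arbitrary examples, 2 features
W1, b1 = np.zeros((10, 2)), np.zeros((10, 1))   # same layer sizes as the model above
W2, b2 = np.zeros((5, 10)), np.zeros((5, 1))
W3, b3 = np.zeros((1, 5)),  np.zeros((1, 1))

a1 = np.maximum(0, np.dot(W1, X) + b1)          # ReLU
a2 = np.maximum(0, np.dot(W2, a1) + b2)         # ReLU
a3 = sigmoid(np.dot(W3, a2) + b3)               # sigmoid output

print(a3)                                       # 0.5 everywhere
print((a3 > 0.5).astype(int))                   # identical prediction for every example
```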
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
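Before implementing it, here is a tiny side calculation (illustrative only, with arbitrary sizes) showing what the `* 10` scaling does: the pre-activations become so large in magnitude that the sigmoid output is pushed to (almost exactly) 0 or 1 for most examples, which is what later produces the huge initial loss.

```python
import numpy as np

np.random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.random.randn(3, 1000)          # standardized inputs, arbitrary sizes
w = np.random.randn(1, 3) * 10        # "large" random weights, scaled by 10
a = sigmoid(np.dot(w, x))

print("fraction of outputs outside [0.01, 0.99]:",
      float(np.mean((a < 0.01) | (a > 0.99))))
```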
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1]) * 10
parameters['b' + str(l)] = np.zeros(shape=(layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: divide by zero encountered in log
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully, initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! 4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
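(A quick numeric aside on the first observation above, independent of the assignment code: the per-example cross-entropy term $-\log(a^{[3]})$ grows without bound as a wrongly-confident prediction approaches 0.)

```python
import numpy as np

# Per-example loss when the true label is 1 but the sigmoid output a is tiny
for a in [0.1, 1e-3, 1e-6, 1e-12]:
    print("a = %g  ->  -log(a) = %.2f" % (a, -np.log(a)))
# As a -> 0 the term -> infinity, which is the divide-by-zero / inf cost seen earlier
```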
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1])* np.sqrt(2/layers_dims[l-1])
parameters['b' + str(l)] = np.zeros(shape=(layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
8-Labs/Z-Spring2021/Lab4/Lab4_Dev.ipynb | ###Markdown
Laboratory 4: FUNctions
###Code
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
###Output
DESKTOP-EH6HD63
desktop-eh6hd63\farha
C:\Users\Farha\Anaconda3\python.exe
3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
###Markdown
Full name: R: Title of the notebook: Date:___ ![](https://runestone.academy/runestone/books/published/fopp/_images/function_call.gif) What is a function in Python?Functions are simply pre-written code fragments that perform a certain task.In older procedural languages functions and subroutines are similar, but a function returns a value whereas a subroutine operates on data. The difference is subtle but important. More recent thinking has functions being able to operate on data (they always could) and the value returned may be simply an exit code.An analogy is the functions in *MS Excel*. To add numbers, we can use the sum(range) function and type `=sum(A1:A5)` instead of typing `=A1+A2+A3+A4+A5` Calling the FunctionWe call a function simply by typing the name of the function or by using the dot notation.Whether we can use the dot notation or not depends on how the function is written, whether it is part of a class, and how it is imported into a program.Some functions expect us to pass data to them to perform their tasks. These data are known as parameters (older terminology is arguments, or argument list) and we pass them to the function by enclosing their values in parentheses ( ) separated by commas. For instance, the `print()` function for displaying text on the screen is "called" by typing `print('Hello World')` where print is the name of the function and the literal (a string) 'Hello World' is the argument. Program flowA function, whether built-in or added, must be defined *before* it is called, otherwise the script will fail. Certain built-in functions "self define" upon start (such as `print()` and `type()`) and we need not worry about those functions. The diagram below illustrates the requisite flow control for functions that need to be defined before use.![](https://3.137.111.182/engr-1330-webroot/8-Labs/Lab4/flow-control-diagram.png)The example below will illustrate; change the cell to code and run it, and you should get an error. Then fix the indicated line (remove the leading `#` in the `import math` line) and rerun; you should get a functioning script.
```python
# reset the notebook using a magic function in JupyterLab
%reset -f
# An example, run once as is then activate indicated line, run again - what happens?
x = 4.
sqrt_by_arithmetic = x**0.5
print('Using arithmetic square root of ', x, ' is ', sqrt_by_arithmetic)
#import math  # import the math package ## activate and rerun
sqrt_by_math = math.sqrt(x)  # note the dot notation
print('Using math package square root of ', x, ' is ', sqrt_by_math)
```
###Code
# Here is an alternative way: We just load the function that we want:
# reset the notebook using a magic function in JupyterLab
%reset -f
# An example, run once as is then activate indicated line, run again - what happens?
x= 4.
sqrt_by_arithmetic = x**0.5
print('Using arithmetic square root of ', x, ' is ',sqrt_by_arithmetic )
from math import sqrt # import sqrt from the math package ## activate and rerun
sqrt_by_math = sqrt(x) # note the notation
print('Using math package square root of ', x,' is ',sqrt_by_math)
###Output
Using arithmetic square root of 4.0 is 2.0
Using math package square root of 4.0 is 2.0
###Markdown
___ Built-In in Primitive Python (Base install)The base Python distribution provides built-in functions and types that are always available; the figure below lists those functions.![](https://3.137.111.182/engr-1330-webroot/8-Labs/Lab4/base-functions.png)Notice all have the structure of `function_name()`, except `__import__()` which has a constructor type structure, and is not intended for routine use. We will learn about constructors later. ___ Added-In using External Packages/Modules and Libraries (e.g. math)Python is also distributed with a large number of external functions. These functions are saved in files known as modules. To use the built-in codes in Python modules, we have to import them into our programs first. We do that by using the import keyword. There are three ways to import:1. Import the entire module by writing import moduleName; For instance, to import the random module, we write import random. To use the randrange() function in the random module, we write random.randrange( 1, 10); 2. Import and rename the module by writing import random as r (where r is any name of your choice). Now to use the randrange() function, you simply write r.randrange(1, 10); and 3. Import specific functions from the module by writing from moduleName import name1[,name2[, ... nameN]]. For instance, to import the randrange() function from the random module, we write from random import randrange. To import multiple functions, we separate them with a comma. To import the randrange() and randint() functions, we write from random import randrange, randint. To use the function now, we do not have to use the dot notation anymore. Just write randrange( 1, 10).
###Code
# Example 1 of import
%reset -f
import random
low = 1 ; high = 10
random.randrange(low,high) #generate random number in range low to high
# Example 2 of import
%reset -f
import random as r
low = 1 ; high = 10
r.randrange(low,high)
# Example 3 of import
%reset -f
from random import randrange
low = 1 ; high = 10
randrange(low,high)
###Output
_____no_output_____
###Markdown
___The modules that come with Python are extensive and listed at https://docs.python.org/3/py-modindex.html.There are also other modules that can be downloaded and used (just like user defined modules below). In these labs we are building primitive codes to learn how to code and how to create algorithms. For many practical cases you will want to load a well-tested package to accomplish the tasks. That exercise is saved for the end of the document. User-BuiltWe can define our own functions in Python and reuse them throughout the program.The syntax for defining a function is:
```python
def functionName(argument):
    # code detailing what the function should do
    # note the colon above and the indentation
    ...
    return [expression]
```
The keyword `def` tells the program that the indented code from the next line onwards is part of the function. The keyword `return` tells the program to return an answer from the function. There can be multiple return statements in a function. Once the function executes a return statement, the program exits the function and continues with *its* next executable statement. If the function does not need to return any value, you can omit the return statement.Functions can be pretty elaborate; they can search for things in a list, determine variable types, open and close files, read and write to files. To get started we will build a few really simple mathematical functions; we will need this skill in the future anyway, especially in scientific programming contexts. ___ User-built within a Code Block For our first function we will code $$f(x) = x\sqrt{1 + x}$$ into a function named `dusty()`.When you run the next cell, all it does is prototype the function (defines it); nothing happens until we use the function.
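(A quick aside on the note above about multiple return statements, before we build `dusty()`: a function can exit early, and whichever `return` runs first ends the function. The `sign_of()` function here is just an illustration and is not part of the lab.)

```python
def sign_of(value):
    """Return -1, 0, or +1 depending on the sign of value."""
    if value > 0:
        return 1       # the function exits here for positive input
    if value < 0:
        return -1      # or here for negative input
    return 0           # only reached when value is exactly zero

print(sign_of(12.5), sign_of(-3), sign_of(0))
```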
###Code
def dusty(x) :
temp = x * ((1.0+x)**(0.5)) # don't need the math package
return temp
# the function should make the evaluation
# store in the local variable temp
# return contents of temp
# wrapper to run the dusty function
yes = 0
while yes == 0:
xvalue = input('enter a numeric value')
try:
xvalue = float(xvalue)
yes = 1
except:
print('enter a bloody number! Try again \n')
# call the function, get value , write output
yvalue = dusty(xvalue)
print('f(',xvalue,') = ',yvalue) # and we are done
###Output
enter a numeric value 5
###Markdown
___ Example: The Average FunctionCreate the AVERAGE function for three values and test it for these values:- 3,4,5- 10,100,1000- -5,15,5
###Code
def AVERAGE3(x,y,z) : #define the function "AVERAGE3"
Ave = (x+y+z)/3 #computes the average
return Ave
print(AVERAGE3(3,4,5))
print(AVERAGE3(10,100,1000))
print(AVERAGE3(-5,15,5))
###Output
4.0
370.0
5.0
###Markdown
___ Example: The KATANA FunctionCreate the Katana function for rounding off to the nearest hundredths (to 2 decimal places) and test it for these values:- 25.33694- 15.753951- 3.14159265359
###Code
def Katana(x) : #define the function "Katana"
newX = round(x, 2)
return newX
print(Katana(25.33694))
print(Katana(15.753951))
print(Katana(3.14159265359))
###Output
25.34
15.75
3.14
###Markdown
___ Variable ScopeAn important concept when defining a function is the concept of variable scope. Variables defined inside a function are treated differently from variables defined outside. Firstly, any variable declared within a function is only accessible within the function. These are known as local variables. In the `dusty()` function, the variables `x` and `temp` are local to the function.Any variable declared outside a function in a main program is known as a program variable and is accessible anywhere in the program. In the example, the variables `xvalue` and `yvalue` are program variables (global to the program; if they are addressed within a function, they could be operated on).Generally we want to protect the program variables from the function unless the intent is to change their values. The way the function is written in the example, the function cannot damage `xvalue` or `yvalue`.If a local variable shares the same name as a program variable, any code inside the function is accessing the local variable. Any code outside is accessing the program variable. ___ As Separate Module/FileIn this section we will invent the `neko()` function and export it to a file, so we can reuse it in later notebooks without having to retype or cut-and-paste. The `neko()` function evaluates:$$f(x) = x\sqrt{|(1 + x)|}$$It's the same as the dusty() function, except it operates on the absolute value in the radical.1. Create a text file named "mylibrary.txt"2. Copy the neko() function script below into that file.
```python
def neko(input_argument) :
    import math  # ok to import into a function
    local_variable = input_argument * math.sqrt(abs(1.0+input_argument))
    return local_variable
```
4. rename mylibrary.txt to mylibrary.py5. modify the wrapper script to use the neko function as an external module
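(An aside before the wrapper script below: the scope rules described at the top of this section can be seen in a short stand-alone sketch; the `scale()` function here is just an illustration and is not part of the lab.)

```python
def scale(x):
    factor = 3          # local variable, only visible inside scale()
    x = x * factor      # this x is the local copy of the argument
    return x

x = 10                  # program (global) variable
print(scale(x))         # 30
print(x)                # still 10 - the function did not change the program variable
# print(factor)         # would raise NameError: factor exists only inside scale()
```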
###Code
# wrapper to run the neko function
import mylibrary
yes = 0
while yes == 0:
xvalue = input('enter a numeric value')
try:
xvalue = float(xvalue)
yes = 1
except:
print('enter a bloody number! Try again \n')
# call the function, get value , write output
yvalue = mylibrary.neko(xvalue)
print('f(',xvalue,') = ',yvalue) # and we are done
###Output
enter a numeric value 5
###Markdown
___In JupyterHub environments, you may discover that changes you make to your external python file are not reflected when you re-run your script; you need to restart the kernel to get the changes to actually update. The figure below depicts the notebook / external-file relationship![](https://3.137.111.182/engr-1330-webroot/8-Labs/Lab4/external-file-import.png) ______ Rudimentary GraphicsGraphing values is part of the broader field of data visualization, which has two main goals: 1. To explore data, and 2. To communicate data.In this subsection we will concentrate on introducing skills to start exploring data and to produce meaningful visualizations we can use throughout the rest of this notebook. Data visualization is a rich field of study that fills entire books.The reason to start visualization here instead of elsewhere is that with functions plotting is a natural activity and we have to import the matplotlib module to make the plots.The example below is code adapted from Grus (2015) that illustrates simple generic plots. I added a single line (label the x-axis), and corrected some transcription errors (not the original author's mistake, just the consequence of how the API handled the cut-and-paste), but otherwise the code is unchanged.
###Code
# python script to illustrate plotting
# CODE BELOW IS ADAPTED FROM:
# Grus, Joel (2015-04-14). Data Science from Scratch: First Principles with Python
# (Kindle Locations 1190-1191). O'Reilly Media. Kindle Edition.
#
from matplotlib import pyplot as plt # import the plotting library from matplotlib
years = [1950, 1960, 1970, 1980, 1990, 2000, 2010] # define one list for years
gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3] # and another one for Gross Domestic Product (GDP)
plt.plot( years, gdp, color ='green', marker ='o', linestyle ='solid') # create a line chart, years on x-axis, gdp on y-axis
# what if "^", "P", "*" for marker?
# what if "red" for color?
# what if "dashdot", '--' for linestyle?
plt.title("Nominal GDP")# add a title
plt.ylabel("Billions of $")# add a label to the x and y-axes
plt.xlabel("Year")
plt.show() # display the plot
# Now lets put the plotting script into a function so we can make line charts of any two numeric lists
def plotAline(list1,list2,strx,stry,strtitle): # plot list1 on x, list2 on y, xlabel, ylabel, title
from matplotlib import pyplot as plt # import the plotting library from matplotlib
plt.plot( list1, list2, color ='green', marker ='o', linestyle ='solid') # create a line chart, years on x-axis, gdp on y-axis
plt.title(strtitle)# add a title
plt.ylabel(stry)# add a label to the x and y-axes
plt.xlabel(strx)
plt.show() # display the plot
return #null return
# wrapper
years = [1950, 1960, 1970, 1980, 1990, 2000, 2010] # define two lists years and gdp
gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3]
print(type(years[0]))
print(type(gdp[0]))
plotAline(years,gdp,"Year","Billions of $","Nominal GDP")
###Output
<class 'int'>
<class 'float'>
###Markdown
___ Example- The Hopeless Romantic! Copy the wrapper script for the `plotAline()` function, and modify the copy to create a plot of$$ x = 16\sin^3(t) $$$$ y = 13\cos(t) - 5\cos(2t) - 2\cos(3t) - \cos(4t) $$for t ranging from $[0, 2\pi]$ (inclusive).Label the plot and the plot axes.
###Code
from matplotlib import pyplot as plt # import the plotting library
import numpy as np # import NumPy: for large, multi-dimensional arrays and matrices, along with high-level mathematical functions to operate on these arrays.
pi = np.pi #pi value from the np package
t= np.linspace(0,2*pi,360)# the NumPy function np.linspace is similar to the range()
x = 16*np.sin(t)**3
y = 13*np.cos(t) - 5*np.cos(2*t) - 2*np.cos(3*t) - np.cos(4*t)
plt.plot( x, y, color ='purple', marker ='.', linestyle ='solid')
plt.ylabel("Y-axis")# add a label to the x and y-axes
plt.xlabel("X-axis")
plt.axis('equal') #sets equal axis ratios
plt.title("A Hopeless Romantic's Curve")# add a title
plt.show() # display the plot
###Output
_____no_output_____ |
hpc/multi_gpu_nways/labs/CFD/English/C/jupyter_notebook/nccl/nccl.ipynb | ###Markdown
The NVIDIA Collectives Communications Library (NCCL) IntroductionNCCL (pronounced “Nickel”) is a library of multi-GPU collective and point-to-point communication primitives that are topology-aware and are designed to be light-weight, depending only on the standard C++ and CUDA libraries. NCCL can be deployed in single-process or multi-process applications, handling required inter-process communication transparently. Moreover, the API is quite similar to MPI and provides the functionality of most-used MPI primitives.In general, NCCL is optimized for high bandwidth and low latency over PCIe and NVLink high speed interconnect for intra-node communication and sockets and InfiniBand for inter-node communication. NCCL allows CUDA applications, and DL frameworks in particular, to efficiently use multiple GPUs without having to implement complex communication algorithms and adapt them to every platform.**Relevance to Deep Learning:** The NVIDIA AI libraries in CUDA-X depend on NCCL to provide a programming abstraction that is highly tuned for each platform and topology-aware through advanced topology detection, generic path search, and algorithms optimized for NVIDIA architectures. Consequently, developers using deep learning frameworks can rely on NCCL’s highly optimized, MPI-compatible and topology-aware routines to take full advantage of all available GPUs within and across multiple nodes. NCCL compared with CUDA-aware MPIHere are some differentiating factors of NCCL compared to CUDA-aware MPI:* NCCL APIs are initiated from the CPU, but they execute on the GPU and they move or exchange data among GPU memories, whereas MPI executes entirely on the CPU. * It also uses CUDA Stream semantics with a stream parameter while MPI (CUDA-aware or otherwise) is not stream-aware.* NCCL requires a parent communication framework like MPI or SHMEM. * Unlike MPI, it does not have tags.* NCCL is most optimized for collective communication and is more efficient than MPI in dense systems like DGX. Architecture Overview Here's an overview of NCCL's architecture:![nccl_architecture](../../images/nccl_architecture.png)NCCL maintains separate threads in GPU and CPU for intra-node and network-bound communication, respectively. Crucially, it detects complex multi-node and intra-node topologies at runtime and generates optimal paths for data transfer between two GPUs dynamically.For example, recall that within a DGX-1V node, device-to-device `cudaMemcpy` between GPU 0 and GPU 7 requires data movement through the PCIe bus and SMP interconnect, both of which are bandwidth constrained compared to NVLink. This route is denoted by the blue path below.![nccl_dgx1_topology](../../images/nccl_dgx1_topology.png)Note that there isn't a direct NVLink connection between GPUs 0 and 7. NCCL, however, can utilize GPU 4 to establish a one-hop NVLink-based connection between GPU 0 and 7, as denoted by the red path above. NCCL employs many such optimizations transparently to the user, which ultimately results in higher network utilization and application performance. Using NCCL Communicator InitializationThe NCCL API closely follows MPI. Before performing data transfer operations, a communicator object must be created for each GPU. The communicators identify the set of GPUs that will communicate and map the communication paths between them. We call the set of associated communicators a "clique".
The most general method to initialize communicators in NCCL is to call `ncclCommInitRank()` once for each GPU:```cncclComm_t nccl_comm;NCCL_CALL(ncclCommInitRank(&nccl_comm, nGPUs, nccl_uid, rank));```This function assumes that the GPU belonging to the specified rank has already been selected using `cudaSetDevice()`. `nGPUs` is the number of GPUs in the clique. `nccl_uid` allows the ranks of the clique to find each other. The same `nccl_uid` must be used by all ranks. To achieve this, call `ncclGetUniqueId()` in one rank and broadcast the resulting `nccl_uid` to the other ranks of the clique using MPI as follows:```cncclUniqueId nccl_uid;if (rank == 0) { ncclGetUniqueId(&nccl_uid);}MPI_Bcast(&nccl_uid, sizeof(ncclUniqueId), MPI_BYTE, 0, MPI_COMM_WORLD);```The last argument to `ncclCommInitRank()`, `rank`, specifies the index of the current GPU within the clique. It must be unique for each rank in the clique and in the range $[0, nGPUs)$.Internally, `ncclInitRank()` performs a synchronization between all communicators in the clique. As a consequence, it must be called from a different host thread for each GPU, or from separate processes (e.g., MPI ranks). Thus, in a multi-node program that uses MPI, the steps to initialize NCCL communicator are as follows:1. Get `rank` and `size` from `MPI_Comm_rank` and `MPI_Comm_size` functions, respectively. Now, `nGPUs`, which is the total number of GPUs used, will be equal to `size`, as we are using the single-GPU per rank communication model.2. Get the unique clique ID, `nccl_uid`, using `ncclGetUniqueId` function on rank 0 and broadcast it using `MPI_Bcast`.3. Get the local rank, `local_rank`, of the process within a node using a local MPI communicator and `MPI_Comm_split_type`, `MPI_Comm_rank`, and `MPI_Comm_free` functions as done in previous labs.4. Use `cudaSetDevice` to set the current GPU as `local_rank`.5. Use `ncclCommInitRank` function to initialize NCCL communicator on all ranks.The code for this process is as follows:```cMPI_Init(&argc, &argv);int rank;MPI_Comm_rank(MPI_COMM_WORLD, &rank);int size;MPI_Comm_size(MPI_COMM_WORLD, &size);ncclUniqueId nccl_uid;if (rank == 0) ncclGetUniqueId(&nccl_uid);MPI_Bcast(&nccl_uid, sizeof(ncclUniqueId), MPI_BYTE, 0, MPI_COMM_WORLD);int local_rank = -1;{ MPI_Comm local_comm; MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, rank, MPI_INFO_NULL, &local_comm); MPI_Comm_rank(local_comm, &local_rank); MPI_Comm_free(&local_comm);}cudaSetDevice(local_rank);ncclComm_t nccl_comm;ncclCommInitRank(&nccl_comm, size, nccl_uid, rank);``` Group CallsSeveral data transfer calls can be merged together using NCCL Groups by encapsulating memory copy operations between `ncclGroupStart()` and `ncclGroupEnd()` function calls. This is needed for three purposes: managing multiple GPUs from one thread (to avoid deadlocks), aggregating communication operations to improve performance, or merging multiple send/receive point-to-point operations.It is advisible to always encapsulate NCCL communication functions within group calls. API SummaryWe now give an API overview and list below the essential functions like communicator creation/ destruction, commonly used point-to-point communication functions, a couple of collective communication functions, and aggregating communication functions using group calls for performance optimization. 
```bash// Communicator creationncclGetUniqueId(ncclUniqueId* commId);ncclCommInitRank(ncclComm_t* comm, int nranks, ncclUniqueId commId, int rank);// Communicator destructionncclCommDestroy(ncclComm_t comm);// Point-to-point communicationncclSend(void* sbuff, size_t count, ncclDataType_t type, int peer, ncclComm_t comm, cudaStream_t stream);ncclRecv(void* rbuff, size_t count, ncclDataType_t type, int peer, ncclComm_t comm, cudaStream_t stream);// Collective communicationncclAllReduce(void* sbuff, void* rbuff, size_t count, ncclDataType_t type, ncclRedOp_t op, ncclComm_t comm, cudaStream_t stream);ncclBroadcast(void* sbuff, void* rbuff, size_t count, ncclDataType_t type, int root, ncclComm_t comm, cudaStream_t stream);// Aggregation/CompositionncclGroupStart();ncclGroupEnd();```Observe that the communication calls are quite similar in syntax to MPI calls and that they allow using a stream parameter, making them stream-aware. Implementation ExerciseOpen the [jacobi_nccl.cpp](../../source_code/nccl/jacobi_nccl.cpp) file and understand the flow of the program. Alternatively, you can navigate to the `CFD/English/C/source_code/nccl/` directory in Jupyter's file browser in the left pane. Then, click to open the `jacobi_nccl.cpp` file.Notice how closely it resembles the [jacobi_cuda_aware_mpi.cpp](../../source_code/mpi/jacobi_cuda_aware_mpi.cpp) program that we have used in the previous lab.Also open the [Makefile](../../source_code/nccl/Makefile) and notice that we include NCCL header files in the `mpicxx` build command and we link NCCL libraries by using the `-lnccl` flag.After the Jacobi device kernel computation, we will use the `ncclAllReduce` function to first reduce the (square of) L2 norm in the GPUs and then transfer it to the CPU in each rank. We will use the default stream in this exercise. Recall that the default (or NULL) stream is denoted by "0" and is synchronizing for the device. However, NCCL communication calls are asynchronous and will not block the host.Implement the following marked as `TODO`:* Reduce the device-local L2 norm, `l2_norm_d`, to the global L2 norm on each device, `l2_global_norm_d`, using the `ncclAllReduce()` function. Use `ncclSum` as the reduction operation. Make sure to encapsulate this function call within the NCCL group calls `ncclGroupStart()` and `ncclGroupEnd()`.* Transfer the global L2 norm from each device to the host using the `cudaMemcpyAsync` function.* Perform the first set of halo exchanges by: - Receiving the top halo from the `top` neighbour into the `a_new` device memory array location. - Sending the current device's bottom halo to the `bottom` neighbour from the `a_new + (iy_end - 1) * nx` device memory array location.* Similarly, perform the second set of halo exchanges.* For all NCCL calls, use "0" in the stream parameter function argument.After implementing these, compile the program:
###Code
! cd ../../source_code/nccl/ && make clean && make
###Output
_____no_output_____
###Markdown
Ensure that there are no errors. Now, validate your implementation by running the program binary across 2 nodes on 16 GPUs with $16K\times32K$ grid size:
###Code
! cd ../../source_code/nccl/ && mpirun -np 16 --map-by ppr:4:socket ./jacobi_nccl -ny 32768
###Output
_____no_output_____
###Markdown
We share the partial output from our DGX-1V system:```bashNum GPUs: 16.16384x32768: 1 GPU: 8.9160 s, 16 GPUs: 0.7863 s, speedup: 11.34, efficiency: 70.87```Like in MPI, the first few NCCL calls have a high overhead. Increase the number of iterations and run the program again:
###Code
! cd ../../source_code/nccl/ && mpirun -np 16 --map-by ppr:4:socket ./jacobi_nccl -ny 32768 -niter 5000
###Output
_____no_output_____
###Markdown
Our output is as follows:```bashNum GPUs: 16.16384x32768: 1 GPU: 44.5218 s, 16 GPUs: 3.3858 s, speedup: 13.15, efficiency: 82.18 ```Recall that the efficiency after 5K iterations with CUDA-aware MPI was about $74\%$. NCCL is able to utilize the dense DGX-1V communication topology more efficiently, resulting in better performance. Like in the MPI module labs, we can use the `-skip_single_gpu` option after validating the implementation to reduce application runtime and profiling time.Let us now profile to learn more about NCCL optimizations. ProfilingProfile the application using `nsys` for 5K iterations. Skip the single-GPU run and use a $16K\times32K$ grid size.
###Code
! cd ../../source_code/nccl && nsys profile --trace=mpi,cuda,nvtx --stats=true --force-overwrite true -o jacobi_nccl_report \
mpirun -np 16 --map-by ppr:4:socket ./jacobi_nccl -ny 32768 -niter 5000 -skip_single_gpu
###Output
_____no_output_____ |
from github/gym-trading/gym_trading/envs/.ipynb_checkpoints/TradingEnv-checkpoint.ipynb | ###Markdown
TradingEnv-v0 Open AI 'Gym' for reinforcement-learning based trading algorithmsThis gym implements a very simple trading environment for reinforcement learning. The gym provides daily observations based on real market data pulled from Quandl on, by default, the SPY etf. An episode is defined as 252 contiguous days sampled from the overall dataset. Each day is one 'step' within the gym and for each step, the algo has a choice: - SHORT (0) - FLAT (1) - LONG (2) If you trade, you will be charged, by default, 10 BPS of the size of your trade. Thus, going from short to long costs twice as much as going from short to/from flat. Not trading also has a default cost of 1 BPS per step. Nobody said it would be easy! At the beginning of your episode, you are allocated 1 unit of cash. This is your starting Net Asset Value (NAV). Beating the trading game For our purposes, we'll say that beating a buy & hold strategy, on average, over one hundred episodes will notch a win to the proud AI player. We'll illustrate exactly what that means below. Let's look at some code using the environment imports
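(Before the imports, a rough back-of-the-envelope sketch of the cost model just described; the numbers mirror the defaults stated above, but the real bookkeeping is done inside the gym itself, so treat this only as an illustration.)

```python
TRADE_COST_BPS = 0.0010   # 10 BPS charged on the size of a position change
TIME_COST_BPS = 0.0001    # 1 BPS charged every step just for playing

def step_cost(prev_position, new_position):
    """Approximate one step's cost: trading cost plus the daily decay cost."""
    trade_size = abs(new_position - prev_position)   # 0, 1 or 2 units (SHORT=0, FLAT=1, LONG=2)
    return trade_size * TRADE_COST_BPS + TIME_COST_BPS

print("flat -> long : %.4f" % step_cost(1, 2))   # one unit traded
print("short -> long: %.4f" % step_cost(0, 2))   # two units traded, twice the trading cost
print("hold position: %.4f" % step_cost(2, 2))   # only the 1 BPS decay cost
```

These numbers line up with the `costs` column of the buy & hold dataframe shown further down (0.0011 on the first day's trade, 0.0001 on the days that just hold).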
###Code
import gym
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import interactive
interactive(True)
###Output
_____no_output_____
###Markdown
create the environmentThis may take a moment as we are pulling historical data from quandl.
###Code
env = gym.make('trading-v0')
#env.time_cost_bps = 0 #
###Output
_____no_output_____
###Markdown
the trading modelEach time step is a day. Each episode is 252 trading days - a year. Each day, we can choose to be short (0), flat (1) or long (2) the single instrument in our trading universe.Let's run through a day and stay flat.
###Code
observation = env.reset()
done = False
navs = []
while not done:
action = 1 # stay flat
observation, reward, done, info = env.step(action)
navs.append(info['nav'])
if done:
print 'Annualized return: ',navs[len(navs)-1]-1
pd.DataFrame(navs).plot()
###Output
Annualized return: -0.0247888380589
###Markdown
Note that you are charged just for playing - to the tune of 1 basis point per day!

Rendering

For now, no rendering has been implemented for this gym, but with each step the following data are provided, which you can easily graph and otherwise visualize as we see above with the NAV:
 - pnl - how much did we make or lose between yesterday and today?
 - costs - how much did we pay in costs today
 - nav - our current nav

utility methods: running strategies once or repeatedly

Although the gym can be 'exercised' directly as seen above, we've also written utility methods which allow for the running of a strategy once or over many episodes, facilitating training or other sorts of analysis. To utilize these methods, strategies should be exposed as a function or lambda with the following signature: `Action a = strategy( observation, environment )`. Below, we define some simple strategies and look briefly at their behavior to better understand the trading gym.
###Code
import trading_env as te
stayflat = lambda o,e: 1 # stand pat
buyandhold = lambda o,e: 2 # buy on day #1 and hold
randomtrader = lambda o,e: e.action_space.sample() # retail trader
# to run singly, we call run_strat. we are returned a dataframe containing
# all steps in the sim.
bhdf = env.run_strat(buyandhold)
print bhdf.head()
# we can easily plot our nav in time:
bhdf.bod_nav.plot(title='buy & hold nav')
###Output
action bod_nav mkt_nav mkt_return sim_return position costs trade
0 2.0 1.000000 1.000000 -0.011808 -0.001100 1.0 0.0011 1.0
1 2.0 0.998900 0.988192 -0.004627 -0.004727 1.0 0.0001 0.0
2 2.0 0.994178 0.983619 -0.002354 -0.002454 1.0 0.0001 0.0
3 2.0 0.991738 0.981304 -0.002890 -0.002990 1.0 0.0001 0.0
4 2.0 0.988772 0.978467 0.003845 0.003745 1.0 0.0001 0.0
###Markdown
running the same strategy multiple times will likely yield different results as underlying data changes
###Code
env.run_strat(buyandhold).bod_nav.plot(title='same strat, different results')
env.run_strat(buyandhold).bod_nav.plot()
env.run_strat(buyandhold).bod_nav.plot()
###Output
_____no_output_____
###Markdown
comparing the buyandhold and random traders
###Code
# running a strategy multiple times should yield insights
# into its expected behavior or give it oppty to learn
bhdf = env.run_strats(buyandhold,100)
rdf = env.run_strats(randomtrader,100)
comparo = pd.DataFrame({'buyhold':bhdf.mean(),
'random': rdf.mean()})
comparo
###Output
[2017-01-04 18:38:22,877] writing log to /tmp/tmpZDmiZ2
[2017-01-04 18:38:30,835] writing log to /tmp/tmpH7XAk2
###Markdown
Object of the game

From the above examples, we can see that buying and holding will, over the long run, give you the market return with low costs. Randomly trading will instead destroy value rather quickly as costs overwhelm. So, what does it mean to win the trading game? For our purposes, we'll say that beating a buy & hold strategy, on average, over one hundred episodes will notch a win to the proud ai player. To support this, the trading environment maintains the *mkt_return* which can be compared with the *sim_return*. Note that the *mkt_return* is frictionless while the *sim_return* incurs both trading costs and the decay cost of 1 basis point per day, so overcoming the hurdle we've set here should be challenging.

Playing the game: purloined policy gradients

I've taken and adapted (see [code](policy_gradient.py) for details) a policy gradient implementation based on tensorflow to try to play the single-instrument trading game. Let's see how it does.
###Code
import tensorflow as tf
import policy_gradient
# create the tf session
sess = tf.InteractiveSession()
# create policygradient
pg = policy_gradient.PolicyGradient(sess, obs_dim=5, num_actions=3, learning_rate=1e-2 )
# and now let's train it and evaluate its progress. NB: this could take some time...
df,sf = pg.train_model( env,episodes=25001, log_freq=100)#, load_model=True)
###Output
[2017-01-04 18:38:42,890] year # 0, mean reward: -0.0014, sim ret: -0.1423, mkt ret: 0.2069, net: -0.3492
[2017-01-04 18:39:24,015] year # 100, mean reward: -0.0496, sim ret: 0.0993, mkt ret: -0.1906, net: 0.2900
[2017-01-04 18:40:00,117] year # 200, mean reward: -0.0583, sim ret: -0.1106, mkt ret: 0.0868, net: -0.1974
[2017-01-04 18:40:32,235] year # 300, mean reward: -0.0039, sim ret: 0.1549, mkt ret: 0.1977, net: -0.0427
[2017-01-04 18:41:03,065] year # 400, mean reward: 0.0216, sim ret: -0.0935, mkt ret: -0.0765, net: -0.0171
[2017-01-04 18:41:33,705] year # 500, mean reward: 0.0280, sim ret: 0.1924, mkt ret: 0.2065, net: -0.0140
[2017-01-04 18:42:05,916] year # 600, mean reward: 0.0228, sim ret: -0.1764, mkt ret: -0.1498, net: -0.0267
[2017-01-04 18:42:36,725] year # 700, mean reward: 0.0204, sim ret: -0.1844, mkt ret: -0.1657, net: -0.0187
[2017-01-04 18:43:07,595] year # 800, mean reward: 0.0227, sim ret: 0.1769, mkt ret: 0.2113, net: -0.0345
[2017-01-04 18:43:38,511] year # 900, mean reward: 0.0278, sim ret: -0.0158, mkt ret: 0.0217, net: -0.0375
[2017-01-04 18:44:09,833] year # 1000, mean reward: 0.0336, sim ret: -0.0186, mkt ret: 0.0106, net: -0.0292
[2017-01-04 18:44:41,030] year # 1100, mean reward: 0.0176, sim ret: -0.4554, mkt ret: -0.4159, net: -0.0395
[2017-01-04 18:45:13,080] year # 1200, mean reward: 0.0391, sim ret: 0.2577, mkt ret: 0.3129, net: -0.0552
[2017-01-04 18:45:44,538] year # 1300, mean reward: 0.0330, sim ret: 0.0901, mkt ret: 0.0966, net: -0.0065
[2017-01-04 18:46:15,954] year # 1400, mean reward: 0.0187, sim ret: -0.0248, mkt ret: 0.1588, net: -0.1836
[2017-01-04 18:46:47,241] year # 1500, mean reward: 0.0080, sim ret: -0.0121, mkt ret: 0.0114, net: -0.0235
[2017-01-04 18:47:19,123] year # 1600, mean reward: 0.0104, sim ret: 0.3213, mkt ret: 0.3635, net: -0.0422
[2017-01-04 18:47:50,704] year # 1700, mean reward: 0.0219, sim ret: 0.0137, mkt ret: 0.0324, net: -0.0187
[2017-01-04 18:48:22,082] year # 1800, mean reward: 0.0189, sim ret: 0.0686, mkt ret: 0.1008, net: -0.0322
[2017-01-04 18:48:53,719] year # 1900, mean reward: 0.0177, sim ret: -0.0068, mkt ret: 0.0408, net: -0.0476
[2017-01-04 18:49:25,522] year # 2000, mean reward: 0.0261, sim ret: 0.0557, mkt ret: 0.1406, net: -0.0849
[2017-01-04 18:49:57,353] year # 2100, mean reward: 0.0024, sim ret: 0.1470, mkt ret: 0.1751, net: -0.0280
[2017-01-04 18:50:29,307] year # 2200, mean reward: 0.0212, sim ret: 0.1023, mkt ret: 0.1336, net: -0.0313
[2017-01-04 18:51:01,190] year # 2300, mean reward: 0.0209, sim ret: -0.0628, mkt ret: -0.0341, net: -0.0288
[2017-01-04 18:51:33,224] year # 2400, mean reward: 0.0179, sim ret: -0.1410, mkt ret: -0.1095, net: -0.0314
[2017-01-04 18:52:05,555] year # 2500, mean reward: 0.0167, sim ret: 0.1409, mkt ret: 0.1547, net: -0.0138
[2017-01-04 18:52:37,700] year # 2600, mean reward: 0.0233, sim ret: -0.1693, mkt ret: -0.1511, net: -0.0181
[2017-01-04 18:53:10,125] year # 2700, mean reward: 0.0265, sim ret: -0.0217, mkt ret: 0.0031, net: -0.0248
[2017-01-04 18:53:42,708] year # 2800, mean reward: 0.0316, sim ret: 0.0659, mkt ret: 0.0963, net: -0.0304
[2017-01-04 18:54:15,123] year # 2900, mean reward: 0.0402, sim ret: 0.2952, mkt ret: 0.3198, net: -0.0247
[2017-01-04 18:54:47,606] year # 3000, mean reward: 0.0326, sim ret: -0.0337, mkt ret: -0.0028, net: -0.0308
[2017-01-04 18:55:20,475] year # 3100, mean reward: 0.0274, sim ret: -0.1920, mkt ret: -0.1626, net: -0.0294
[2017-01-04 18:55:53,102] year # 3200, mean reward: 0.0273, sim ret: 0.3358, mkt ret: 0.4451, net: -0.1093
[2017-01-04 18:56:25,748] year # 3300, mean reward: 0.0390, sim ret: -0.1185, mkt ret: -0.0884, net: -0.0301
[2017-01-04 18:56:59,508] year # 3400, mean reward: 0.0217, sim ret: 0.1176, mkt ret: 0.1614, net: -0.0438
[2017-01-04 18:57:33,551] year # 3500, mean reward: 0.0208, sim ret: 0.1133, mkt ret: 0.1361, net: -0.0228
[2017-01-04 18:58:07,625] year # 3600, mean reward: 0.0237, sim ret: -0.1696, mkt ret: -0.1554, net: -0.0142
[2017-01-04 18:58:41,622] year # 3700, mean reward: 0.0255, sim ret: 0.0853, mkt ret: 0.1110, net: -0.0257
[2017-01-04 18:59:15,811] year # 3800, mean reward: 0.0532, sim ret: 0.0460, mkt ret: 0.0753, net: -0.0293
[2017-01-04 18:59:50,132] year # 3900, mean reward: 0.0317, sim ret: 0.0195, mkt ret: 0.0581, net: -0.0385
[2017-01-04 19:00:25,373] year # 4000, mean reward: 0.0531, sim ret: 0.1510, mkt ret: 0.1830, net: -0.0321
[2017-01-04 19:00:59,832] year # 4100, mean reward: 0.0402, sim ret: 0.0071, mkt ret: 0.0312, net: -0.0240
[2017-01-04 19:01:34,659] year # 4200, mean reward: 0.0416, sim ret: 0.0066, mkt ret: 0.0528, net: -0.0461
[2017-01-04 19:02:09,403] year # 4300, mean reward: 0.0427, sim ret: -0.0008, mkt ret: 0.0044, net: -0.0052
[2017-01-04 19:02:44,251] year # 4400, mean reward: 0.0169, sim ret: -0.3836, mkt ret: -0.3684, net: -0.0152
[2017-01-04 19:03:19,237] year # 4500, mean reward: 0.0318, sim ret: -0.1078, mkt ret: -0.0879, net: -0.0198
[2017-01-04 19:03:54,391] year # 4600, mean reward: 0.0327, sim ret: 0.1177, mkt ret: 0.1298, net: -0.0121
[2017-01-04 19:04:29,610] year # 4700, mean reward: 0.0155, sim ret: -0.2140, mkt ret: -0.2001, net: -0.0138
[2017-01-04 19:05:04,766] year # 4800, mean reward: 0.0202, sim ret: 0.0299, mkt ret: 0.0827, net: -0.0528
[2017-01-04 19:05:40,327] year # 4900, mean reward: 0.0228, sim ret: 0.0891, mkt ret: 0.1131, net: -0.0240
[2017-01-04 19:06:16,125] year # 5000, mean reward: 0.0313, sim ret: 0.4537, mkt ret: 0.4500, net: 0.0038
[2017-01-04 19:06:51,917] year # 5100, mean reward: 0.0065, sim ret: -0.0334, mkt ret: -0.0135, net: -0.0199
[2017-01-04 19:07:27,693] year # 5200, mean reward: 0.0171, sim ret: -0.2388, mkt ret: -0.2185, net: -0.0203
[2017-01-04 19:08:03,762] year # 5300, mean reward: 0.0348, sim ret: -0.2228, mkt ret: -0.2040, net: -0.0187
[2017-01-04 19:08:39,880] year # 5400, mean reward: 0.0189, sim ret: -0.4815, mkt ret: -0.4753, net: -0.0062
[2017-01-04 19:09:16,105] year # 5500, mean reward: 0.0265, sim ret: 0.0207, mkt ret: 0.0684, net: -0.0477
[2017-01-04 19:09:52,425] year # 5600, mean reward: 0.0189, sim ret: 0.0987, mkt ret: 0.1261, net: -0.0274
[2017-01-04 19:10:29,301] year # 5700, mean reward: 0.0168, sim ret: 0.0716, mkt ret: 0.0906, net: -0.0189
[2017-01-04 19:11:05,765] year # 5800, mean reward: 0.0109, sim ret: 0.2570, mkt ret: 0.0397, net: 0.2173
[2017-01-04 19:11:42,388] year # 5900, mean reward: 0.0207, sim ret: 0.0216, mkt ret: 0.1320, net: -0.1104
[2017-01-04 19:12:18,877] year # 6000, mean reward: 0.0217, sim ret: -0.1750, mkt ret: -0.3850, net: 0.2101
[2017-01-04 19:12:55,854] year # 6100, mean reward: 0.0243, sim ret: -0.0769, mkt ret: -0.3378, net: 0.2608
[2017-01-04 19:13:32,872] year # 6200, mean reward: 0.0158, sim ret: -0.0140, mkt ret: -0.0409, net: 0.0269
[2017-01-04 19:14:09,913] year # 6300, mean reward: 0.0213, sim ret: -0.0615, mkt ret: 0.1333, net: -0.1948
[2017-01-04 19:14:46,935] year # 6400, mean reward: 0.0215, sim ret: 0.1142, mkt ret: 0.1600, net: -0.0457
[2017-01-04 19:15:25,110] year # 6500, mean reward: 0.0238, sim ret: 0.1413, mkt ret: 0.1120, net: 0.0293
[2017-01-04 19:16:02,441] year # 6600, mean reward: 0.0440, sim ret: 0.1504, mkt ret: 0.1820, net: -0.0316
[2017-01-04 19:16:40,124] year # 6700, mean reward: 0.0464, sim ret: 0.0445, mkt ret: 0.0624, net: -0.0180
[2017-01-04 19:17:17,911] year # 6800, mean reward: 0.0386, sim ret: -0.4160, mkt ret: -0.3908, net: -0.0251
[2017-01-04 19:17:55,812] year # 6900, mean reward: 0.0408, sim ret: -0.1465, mkt ret: -0.0121, net: -0.1344
[2017-01-04 19:18:33,493] year # 7000, mean reward: 0.0333, sim ret: 0.0706, mkt ret: 0.0853, net: -0.0147
[2017-01-04 19:19:11,662] year # 7100, mean reward: 0.0278, sim ret: 0.0645, mkt ret: 0.0931, net: -0.0286
[2017-01-04 19:19:49,676] year # 7200, mean reward: 0.0202, sim ret: 0.0974, mkt ret: 0.1169, net: -0.0195
[2017-01-04 19:20:27,757] year # 7300, mean reward: 0.0288, sim ret: -0.1560, mkt ret: -0.1057, net: -0.0504
[2017-01-04 19:21:06,048] year # 7400, mean reward: 0.0355, sim ret: 0.0255, mkt ret: 0.0529, net: -0.0274
[2017-01-04 19:21:44,533] year # 7500, mean reward: 0.0309, sim ret: 0.1371, mkt ret: -0.1105, net: 0.2476
[2017-01-04 19:22:23,011] year # 7600, mean reward: 0.0111, sim ret: -0.0037, mkt ret: 0.0312, net: -0.0349
[2017-01-04 19:23:01,259] year # 7700, mean reward: 0.0098, sim ret: 0.2571, mkt ret: 0.3316, net: -0.0746
[2017-01-04 19:23:40,027] year # 7800, mean reward: 0.0250, sim ret: 0.0101, mkt ret: 0.0379, net: -0.0278
[2017-01-04 19:24:18,549] year # 7900, mean reward: 0.0296, sim ret: 0.0649, mkt ret: 0.0311, net: 0.0338
[2017-01-04 19:24:57,520] year # 8000, mean reward: 0.0423, sim ret: 0.0434, mkt ret: 0.1647, net: -0.1213
[2017-01-04 19:25:36,745] year # 8100, mean reward: 0.0344, sim ret: -0.0339, mkt ret: 0.0326, net: -0.0665
[2017-01-04 19:26:15,869] year # 8200, mean reward: 0.0484, sim ret: 0.0393, mkt ret: 0.2542, net: -0.2149
[2017-01-04 19:26:54,824] year # 8300, mean reward: 0.0578, sim ret: 0.1697, mkt ret: 0.1877, net: -0.0179
[2017-01-04 19:27:34,005] year # 8400, mean reward: 0.0409, sim ret: 0.1537, mkt ret: 0.1954, net: -0.0417
[2017-01-04 19:28:13,314] year # 8500, mean reward: 0.0405, sim ret: -0.0635, mkt ret: -0.2173, net: 0.1538
[2017-01-04 19:28:52,578] year # 8600, mean reward: 0.0432, sim ret: 0.0535, mkt ret: 0.0959, net: -0.0424
[2017-01-04 19:29:31,985] year # 8700, mean reward: 0.0331, sim ret: -0.0117, mkt ret: 0.1279, net: -0.1396
[2017-01-04 19:30:11,636] year # 8800, mean reward: 0.0345, sim ret: -0.2345, mkt ret: -0.2042, net: -0.0303
[2017-01-04 19:30:51,730] year # 8900, mean reward: 0.0082, sim ret: 0.0326, mkt ret: 0.0491, net: -0.0165
[2017-01-04 19:31:31,584] year # 9000, mean reward: 0.0338, sim ret: -0.0678, mkt ret: -0.0395, net: -0.0284
[2017-01-04 19:32:11,608] year # 9100, mean reward: 0.0357, sim ret: 0.1347, mkt ret: 0.1740, net: -0.0393
[2017-01-04 19:32:51,849] year # 9200, mean reward: 0.0471, sim ret: -0.3771, mkt ret: -0.3699, net: -0.0073
[2017-01-04 19:33:32,166] year # 9300, mean reward: 0.0267, sim ret: -0.0426, mkt ret: 0.0230, net: -0.0656
[2017-01-04 19:34:12,236] year # 9400, mean reward: 0.0243, sim ret: 0.0734, mkt ret: 0.1154, net: -0.0420
[2017-01-04 19:34:52,433] year # 9500, mean reward: 0.0372, sim ret: -0.0451, mkt ret: -0.0161, net: -0.0290
[2017-01-04 19:35:33,058] year # 9600, mean reward: 0.0557, sim ret: -0.0195, mkt ret: 0.0215, net: -0.0410
[2017-01-04 19:36:13,957] year # 9700, mean reward: 0.0511, sim ret: 0.1976, mkt ret: 0.2210, net: -0.0234
[2017-01-04 19:36:54,464] year # 9800, mean reward: 0.0242, sim ret: -0.2499, mkt ret: -0.2314, net: -0.0185
[2017-01-04 19:37:35,588] year # 9900, mean reward: 0.0323, sim ret: -0.2221, mkt ret: -0.1900, net: -0.0321
[2017-01-04 19:38:16,867] year # 10000, mean reward: 0.0286, sim ret: 0.1059, mkt ret: 0.1234, net: -0.0175
[2017-01-04 19:38:57,535] year # 10100, mean reward: 0.0368, sim ret: 0.1403, mkt ret: 0.1541, net: -0.0138
[2017-01-04 19:39:38,856] year # 10200, mean reward: 0.0222, sim ret: 0.0345, mkt ret: 0.0763, net: -0.0418
[2017-01-04 19:40:20,401] year # 10300, mean reward: 0.0489, sim ret: 0.3831, mkt ret: 0.3198, net: 0.0633
[2017-01-04 19:41:01,580] year # 10400, mean reward: 0.0274, sim ret: 0.1003, mkt ret: 0.1081, net: -0.0078
[2017-01-04 19:41:43,146] year # 10500, mean reward: 0.0532, sim ret: 0.0021, mkt ret: 0.0252, net: -0.0231
[2017-01-04 19:42:24,940] year # 10600, mean reward: 0.0394, sim ret: -0.1884, mkt ret: -0.1813, net: -0.0070
[2017-01-04 19:43:06,708] year # 10700, mean reward: 0.0427, sim ret: -0.0127, mkt ret: 0.0174, net: -0.0301
[2017-01-04 19:43:48,379] year # 10800, mean reward: 0.0381, sim ret: 0.1549, mkt ret: 0.1977, net: -0.0427
[2017-01-04 19:44:30,342] year # 10900, mean reward: 0.0318, sim ret: 0.1292, mkt ret: 0.1530, net: -0.0238
[2017-01-04 19:45:12,388] year # 11000, mean reward: 0.0322, sim ret: 0.0064, mkt ret: 0.0496, net: -0.0432
[2017-01-04 19:45:54,849] year # 11100, mean reward: 0.0160, sim ret: 0.0845, mkt ret: 0.1241, net: -0.0396
[2017-01-04 19:46:36,942] year # 11200, mean reward: 0.0155, sim ret: 0.0038, mkt ret: 0.0384, net: -0.0347
[2017-01-04 19:47:19,214] year # 11300, mean reward: 0.0166, sim ret: -0.1119, mkt ret: -0.1982, net: 0.0862
[2017-01-04 19:48:01,707] year # 11400, mean reward: 0.0056, sim ret: 0.0283, mkt ret: 0.0126, net: 0.0157
[2017-01-04 19:48:44,519] year # 11500, mean reward: 0.0312, sim ret: 0.0603, mkt ret: 0.0999, net: -0.0396
[2017-01-04 19:49:27,290] year # 11600, mean reward: 0.0078, sim ret: -0.1797, mkt ret: -0.1575, net: -0.0222
[2017-01-04 19:50:10,311] year # 11700, mean reward: 0.0129, sim ret: 0.0173, mkt ret: 0.0473, net: -0.0299
[2017-01-04 19:50:53,321] year # 11800, mean reward: 0.0412, sim ret: 0.0607, mkt ret: 0.0922, net: -0.0315
[2017-01-04 19:51:36,528] year # 11900, mean reward: 0.0494, sim ret: -0.0033, mkt ret: 0.0059, net: -0.0092
[2017-01-04 19:52:19,816] year # 12000, mean reward: 0.0389, sim ret: -0.0106, mkt ret: 0.0138, net: -0.0244
[2017-01-04 19:53:03,617] year # 12100, mean reward: 0.0356, sim ret: -0.1403, mkt ret: -0.1086, net: -0.0317
[2017-01-04 19:53:48,292] year # 12200, mean reward: 0.0405, sim ret: 0.0306, mkt ret: 0.0501, net: -0.0195
[2017-01-04 19:54:37,376] year # 12300, mean reward: 0.0562, sim ret: 0.0805, mkt ret: 0.1107, net: -0.0302
[2017-01-04 19:55:37,580] year # 12400, mean reward: 0.0442, sim ret: 0.1737, mkt ret: 0.1925, net: -0.0188
[2017-01-04 19:56:24,450] year # 12500, mean reward: 0.0531, sim ret: 0.0182, mkt ret: 0.0439, net: -0.0257
[2017-01-04 19:57:07,836] year # 12600, mean reward: 0.0390, sim ret: 0.1751, mkt ret: 0.2084, net: -0.0333
[2017-01-04 19:57:51,425] year # 12700, mean reward: 0.0275, sim ret: -0.2386, mkt ret: -0.2187, net: -0.0198
[2017-01-04 19:58:35,480] year # 12800, mean reward: 0.0252, sim ret: 0.1572, mkt ret: 0.1887, net: -0.0316
[2017-01-04 19:59:19,389] year # 12900, mean reward: 0.0168, sim ret: 0.0285, mkt ret: 0.0617, net: -0.0333
[2017-01-04 20:00:03,664] year # 13000, mean reward: 0.0173, sim ret: -0.1507, mkt ret: -0.1239, net: -0.0268
[2017-01-04 20:00:48,080] year # 13100, mean reward: 0.0178, sim ret: 0.0373, mkt ret: 0.1009, net: -0.0636
[2017-01-04 20:01:32,258] year # 13200, mean reward: 0.0181, sim ret: 0.0789, mkt ret: 0.1074, net: -0.0285
[2017-01-04 20:02:16,897] year # 13300, mean reward: 0.0390, sim ret: 0.1763, mkt ret: 0.1920, net: -0.0157
[2017-01-04 20:03:01,466] year # 13400, mean reward: 0.0233, sim ret: 0.0188, mkt ret: 0.0623, net: -0.0435
[2017-01-04 20:03:46,347] year # 13500, mean reward: 0.0200, sim ret: -0.3897, mkt ret: -0.3730, net: -0.0167
[2017-01-04 20:04:31,347] year # 13600, mean reward: 0.0185, sim ret: 0.0749, mkt ret: 0.1145, net: -0.0396
[2017-01-04 20:05:16,027] year # 13700, mean reward: 0.0285, sim ret: 0.0532, mkt ret: 0.0733, net: -0.0201
[2017-01-04 20:06:00,655] year # 13800, mean reward: 0.0203, sim ret: 0.0573, mkt ret: 0.0408, net: 0.0165
[2017-01-04 20:06:45,757] year # 13900, mean reward: 0.0328, sim ret: 0.0306, mkt ret: 0.0461, net: -0.0155
[2017-01-04 20:07:30,914] year # 14000, mean reward: -0.0035, sim ret: -0.0876, mkt ret: -0.0642, net: -0.0234
[2017-01-04 20:08:16,307] year # 14100, mean reward: 0.0054, sim ret: 0.1786, mkt ret: 0.2028, net: -0.0242
[2017-01-04 20:09:01,755] year # 14200, mean reward: 0.0144, sim ret: 0.0172, mkt ret: 0.0425, net: -0.0254
[2017-01-04 20:09:47,199] year # 14300, mean reward: 0.0238, sim ret: -0.0164, mkt ret: 0.0055, net: -0.0220
[2017-01-04 20:10:32,821] year # 14400, mean reward: 0.0261, sim ret: 0.1941, mkt ret: 0.2565, net: -0.0624
[2017-01-04 20:11:18,237] year # 14500, mean reward: 0.0248, sim ret: 0.1435, mkt ret: 0.1636, net: -0.0201
[2017-01-04 20:12:04,028] year # 14600, mean reward: 0.0194, sim ret: 0.1786, mkt ret: 0.2028, net: -0.0242
[2017-01-04 20:12:49,979] year # 14700, mean reward: 0.0189, sim ret: -0.1033, mkt ret: -0.0771, net: -0.0262
[2017-01-04 20:13:35,911] year # 14800, mean reward: 0.0193, sim ret: 0.0914, mkt ret: 0.1190, net: -0.0277
[2017-01-04 20:14:22,181] year # 14900, mean reward: 0.0312, sim ret: 0.0691, mkt ret: 0.0968, net: -0.0277
[2017-01-04 20:15:08,120] year # 15000, mean reward: 0.0430, sim ret: -0.0045, mkt ret: 0.0166, net: -0.0211
[2017-01-04 20:15:54,430] year # 15100, mean reward: 0.0290, sim ret: -0.1809, mkt ret: -0.1719, net: -0.0091
[2017-01-04 20:16:40,641] year # 15200, mean reward: 0.0401, sim ret: 0.0207, mkt ret: 0.0684, net: -0.0477
[2017-01-04 20:17:27,469] year # 15300, mean reward: 0.0197, sim ret: 0.1698, mkt ret: 0.2025, net: -0.0327
[2017-01-04 20:18:13,949] year # 15400, mean reward: 0.0152, sim ret: -0.1433, mkt ret: -0.1230, net: -0.0203
[2017-01-04 20:19:00,609] year # 15500, mean reward: 0.0426, sim ret: 0.3919, mkt ret: 0.3953, net: -0.0034
[2017-01-04 20:19:47,536] year # 15600, mean reward: 0.0442, sim ret: 0.0353, mkt ret: 0.0629, net: -0.0275
[2017-01-04 20:20:34,406] year # 15700, mean reward: 0.0272, sim ret: 0.1823, mkt ret: 0.2207, net: -0.0384
[2017-01-04 20:21:21,556] year # 15800, mean reward: 0.0208, sim ret: 0.1019, mkt ret: 0.1289, net: -0.0271
[2017-01-04 20:22:08,674] year # 15900, mean reward: 0.0094, sim ret: 0.0721, mkt ret: 0.0982, net: -0.0261
[2017-01-04 20:22:55,774] year # 16000, mean reward: 0.0361, sim ret: -0.0621, mkt ret: -0.0250, net: -0.0370
[2017-01-04 20:23:43,409] year # 16100, mean reward: 0.0352, sim ret: 0.0180, mkt ret: 0.0458, net: -0.0277
[2017-01-04 20:24:30,661] year # 16200, mean reward: 0.0443, sim ret: -0.0128, mkt ret: 0.0105, net: -0.0232
[2017-01-04 20:25:18,164] year # 16300, mean reward: 0.0455, sim ret: -0.1260, mkt ret: -0.1187, net: -0.0073
[2017-01-04 20:26:05,970] year # 16400, mean reward: 0.0444, sim ret: -0.0909, mkt ret: -0.0689, net: -0.0221
[2017-01-04 20:26:53,663] year # 16500, mean reward: 0.0289, sim ret: 0.0185, mkt ret: 0.0484, net: -0.0299
[2017-01-04 20:27:41,399] year # 16600, mean reward: 0.0489, sim ret: 0.0123, mkt ret: 0.0348, net: -0.0225
[2017-01-04 20:28:30,193] year # 16700, mean reward: 0.0498, sim ret: 0.1901, mkt ret: 0.2226, net: -0.0324
[2017-01-04 20:29:19,884] year # 16800, mean reward: 0.0201, sim ret: -0.1387, mkt ret: -0.1339, net: -0.0048
[2017-01-04 20:30:10,007] year # 16900, mean reward: 0.0188, sim ret: 0.1452, mkt ret: 0.1696, net: -0.0244
[2017-01-04 20:31:00,346] year # 17000, mean reward: 0.0311, sim ret: 0.0853, mkt ret: 0.1844, net: -0.0990
[2017-01-04 20:31:50,019] year # 17100, mean reward: 0.0249, sim ret: 0.0484, mkt ret: 0.0639, net: -0.0155
[2017-01-04 20:32:39,388] year # 17200, mean reward: 0.0204, sim ret: -0.0096, mkt ret: 0.0901, net: -0.0997
[2017-01-04 20:33:29,852] year # 17300, mean reward: 0.0105, sim ret: 0.0601, mkt ret: 0.0837, net: -0.0236
[2017-01-04 20:34:20,302] year # 17400, mean reward: 0.0161, sim ret: 0.1111, mkt ret: 0.1627, net: -0.0516
[2017-01-04 20:35:10,903] year # 17500, mean reward: 0.0279, sim ret: 0.1548, mkt ret: 0.1965, net: -0.0417
[2017-01-04 20:36:01,451] year # 17600, mean reward: 0.0268, sim ret: 0.0548, mkt ret: 0.0694, net: -0.0146
[2017-01-04 20:36:51,913] year # 17700, mean reward: 0.0376, sim ret: 0.2018, mkt ret: 0.2487, net: -0.0469
[2017-01-04 20:37:42,734] year # 17800, mean reward: 0.0169, sim ret: 0.0555, mkt ret: 0.1445, net: -0.0891
[2017-01-04 20:38:33,654] year # 17900, mean reward: 0.0187, sim ret: -0.3343, mkt ret: -0.4248, net: 0.0904
[2017-01-04 20:39:25,260] year # 18000, mean reward: -0.0040, sim ret: -0.0649, mkt ret: 0.1400, net: -0.2049
[2017-01-04 20:40:16,835] year # 18100, mean reward: 0.0304, sim ret: 0.1287, mkt ret: 0.1719, net: -0.0432
[2017-01-04 20:41:08,345] year # 18200, mean reward: 0.0206, sim ret: 0.0073, mkt ret: 0.0971, net: -0.0899
[2017-01-04 20:42:00,130] year # 18300, mean reward: 0.0177, sim ret: 0.0756, mkt ret: 0.1238, net: -0.0482
[2017-01-04 20:42:51,938] year # 18400, mean reward: -0.0039, sim ret: 0.1786, mkt ret: -0.2047, net: 0.3833
[2017-01-04 20:43:43,549] year # 18500, mean reward: 0.0048, sim ret: -0.2996, mkt ret: -0.2804, net: -0.0192
[2017-01-04 20:44:36,098] year # 18600, mean reward: 0.0117, sim ret: 0.0700, mkt ret: 0.1898, net: -0.1198
[2017-01-04 20:45:28,320] year # 18700, mean reward: 0.0122, sim ret: 0.1472, mkt ret: 0.2082, net: -0.0609
[2017-01-04 20:46:20,570] year # 18800, mean reward: 0.0014, sim ret: 0.1384, mkt ret: 0.2131, net: -0.0747
[2017-01-04 20:47:13,167] year # 18900, mean reward: 0.0030, sim ret: 0.0945, mkt ret: 0.0500, net: 0.0445
[2017-01-04 20:48:05,532] year # 19000, mean reward: 0.0021, sim ret: -0.2712, mkt ret: -0.2294, net: -0.0418
[2017-01-04 20:48:57,700] year # 19100, mean reward: 0.0040, sim ret: 0.0553, mkt ret: 0.0998, net: -0.0445
[2017-01-04 20:49:49,608] year # 19200, mean reward: 0.0269, sim ret: 0.1664, mkt ret: 0.1987, net: -0.0323
[2017-01-04 20:50:41,805] year # 19300, mean reward: 0.0211, sim ret: -0.2273, mkt ret: -0.2146, net: -0.0127
[2017-01-04 20:51:34,825] year # 19400, mean reward: 0.0305, sim ret: -0.0342, mkt ret: 0.0321, net: -0.0663
[2017-01-04 20:52:27,550] year # 19500, mean reward: 0.0380, sim ret: 0.1770, mkt ret: 0.2046, net: -0.0277
[2017-01-04 20:53:21,002] year # 19600, mean reward: 0.0284, sim ret: -0.2069, mkt ret: -0.1775, net: -0.0294
[2017-01-04 20:54:12,639] year # 19700, mean reward: 0.0209, sim ret: 0.1873, mkt ret: 0.1924, net: -0.0051
[2017-01-04 20:55:12,014] year # 19800, mean reward: 0.0081, sim ret: 0.1034, mkt ret: 0.1340, net: -0.0306
[2017-01-04 20:56:15,991] year # 19900, mean reward: 0.0087, sim ret: -0.0520, mkt ret: -0.0168, net: -0.0352
[2017-01-04 20:57:18,487] year # 20000, mean reward: 0.0372, sim ret: 0.3376, mkt ret: 0.3931, net: -0.0555
[2017-01-04 20:58:20,133] year # 20100, mean reward: 0.0234, sim ret: 0.1031, mkt ret: 0.1265, net: -0.0233
[2017-01-04 20:59:21,972] year # 20200, mean reward: 0.0212, sim ret: 0.1047, mkt ret: 0.1338, net: -0.0291
[2017-01-04 21:00:24,310] year # 20300, mean reward: 0.0133, sim ret: 0.0803, mkt ret: 0.1132, net: -0.0329
[2017-01-04 21:01:29,130] year # 20400, mean reward: 0.0148, sim ret: -0.0562, mkt ret: -0.0241, net: -0.0321
[2017-01-04 21:02:29,951] year # 20500, mean reward: 0.0028, sim ret: 0.0464, mkt ret: 0.0044, net: 0.0419
[2017-01-04 21:03:36,035] year # 20600, mean reward: 0.0096, sim ret: 0.0849, mkt ret: 0.1100, net: -0.0250
[2017-01-04 21:04:40,466] year # 20700, mean reward: 0.0231, sim ret: 0.3389, mkt ret: 0.3542, net: -0.0154
[2017-01-04 21:05:47,024] year # 20800, mean reward: 0.0262, sim ret: 0.1329, mkt ret: 0.1719, net: -0.0390
[2017-01-04 21:06:52,908] year # 20900, mean reward: 0.0246, sim ret: 0.1606, mkt ret: 0.1793, net: -0.0187
[2017-01-04 21:07:56,873] year # 21000, mean reward: 0.0254, sim ret: 0.5144, mkt ret: 0.4349, net: 0.0796
[2017-01-04 21:09:02,984] year # 21100, mean reward: 0.0379, sim ret: 0.1224, mkt ret: 0.1451, net: -0.0228
[2017-01-04 21:10:02,503] year # 21200, mean reward: 0.0377, sim ret: 0.0984, mkt ret: 0.2038, net: -0.1055
[2017-01-04 21:11:04,317] year # 21300, mean reward: 0.0309, sim ret: -0.0656, mkt ret: -0.3503, net: 0.2847
[2017-01-04 21:12:07,841] year # 21400, mean reward: 0.0225, sim ret: -0.0748, mkt ret: 0.0601, net: -0.1348
[2017-01-04 21:13:12,992] year # 21500, mean reward: 0.0288, sim ret: 0.1502, mkt ret: 0.1843, net: -0.0340
[2017-01-04 21:14:20,167] year # 21600, mean reward: 0.0326, sim ret: 0.1087, mkt ret: 0.1133, net: -0.0046
[2017-01-04 21:15:27,390] year # 21700, mean reward: 0.0235, sim ret: -0.0921, mkt ret: -0.2245, net: 0.1324
[2017-01-04 21:16:33,016] year # 21800, mean reward: 0.0307, sim ret: 0.0293, mkt ret: 0.1157, net: -0.0864
[2017-01-04 21:17:35,766] year # 21900, mean reward: 0.0361, sim ret: -0.1333, mkt ret: -0.2545, net: 0.1212
[2017-01-04 21:18:42,299] year # 22000, mean reward: 0.0368, sim ret: -0.1477, mkt ret: -0.1292, net: -0.0185
[2017-01-04 21:19:45,125] year # 22100, mean reward: 0.0291, sim ret: 0.1572, mkt ret: 0.2405, net: -0.0833
[2017-01-04 21:20:48,943] year # 22200, mean reward: 0.0203, sim ret: 0.0008, mkt ret: 0.1612, net: -0.1604
[2017-01-04 21:21:51,554] year # 22300, mean reward: 0.0336, sim ret: 0.0387, mkt ret: 0.0659, net: -0.0272
[2017-01-04 21:22:57,809] year # 22400, mean reward: 0.0554, sim ret: -0.0483, mkt ret: 0.0994, net: -0.1477
[2017-01-04 21:24:03,555] year # 22500, mean reward: 0.0389, sim ret: 0.0512, mkt ret: 0.0885, net: -0.0374
[2017-01-04 21:25:08,969] year # 22600, mean reward: 0.0400, sim ret: 0.1163, mkt ret: 0.1696, net: -0.0534
[2017-01-04 21:26:12,200] year # 22700, mean reward: 0.0410, sim ret: 0.0839, mkt ret: -0.0529, net: 0.1368
[2017-01-04 21:27:19,013] year # 22800, mean reward: 0.0453, sim ret: -0.0238, mkt ret: -0.1961, net: 0.1723
[2017-01-04 21:27:19,411] Congratulations, Warren Buffet! You won the trading game.
###Markdown
Results

Policy gradients beat the trading game! That said, it doesn't work every time and it seems, looking at the charts below, as though it's a bit of a lucky thing. But luck counts in the trading game as in life!
###Code
sf['net'] = sf.simror - sf.mktror
#sf.net.plot()
sf.net.expanding().mean().plot()
sf.net.rolling(100).mean().plot()
sf.net.rolling(100).mean().tail()
###Output
_____no_output_____ |
RL/RL-Adventure-2/3.ppo.ipynb | ###Markdown
Use CUDA
###Code
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
###Output
cuda
###Markdown
Create Environments
###Code
from common.multiprocessing_env import SubprocVecEnv
num_envs = 16
env_name = "Pendulum-v0"
def make_env():
def _thunk():
env = gym.make(env_name)
return env
return _thunk
envs = [make_env() for i in range(num_envs)]
envs = SubprocVecEnv(envs)
env = gym.make(env_name)
###Output
_____no_output_____
###Markdown
Neural Network
###Code
def init_weights(m):
if isinstance(m, nn.Linear):
nn.init.normal_(m.weight, mean=0., std=0.1)
nn.init.constant_(m.bias, 0.1)
# print("hello")
# print(m)
class ActorCritic(nn.Module):
def __init__(self, num_inputs, num_outputs, hidden_size, std=0.0):
super(ActorCritic, self).__init__()
self.critic = nn.Sequential(
nn.Linear(num_inputs, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, 1)
)
self.actor = nn.Sequential(
nn.Linear(num_inputs, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, num_outputs),
)
self.log_std = nn.Parameter(torch.ones(1, num_outputs) * std)
self.apply(init_weights)
def forward(self, x):
value = self.critic(x)
mu = self.actor(x)
std = self.log_std.exp().expand_as(mu)
dist = Normal(mu, std)
return dist, value
temp = ActorCritic(envs.observation_space.shape[0],envs.action_space.shape[0],10).to(device)
# print(temp)
temp_state = envs.reset()
temp_state = torch.FloatTensor(temp_state).to(device)
tm_dist, tm_val = temp(temp_state)
# print(temp_state)
# print(tm_dist.sample())
# print(tm_dist)
# print(tm_val)
def plot(frame_idx, rewards):
clear_output(True)
plt.figure(figsize=(20,5))
plt.subplot(131)
plt.title('frame %s. reward: %s' % (frame_idx, rewards[-1]))
plt.plot(rewards)
plt.show()
def test_env(vis=False):
state = env.reset()
if vis: env.render()
done = False
total_reward = 0
while not done:
state = torch.FloatTensor(state).unsqueeze(0).to(device)
dist, _ = model(state)
next_state, reward, done, _ = env.step(dist.sample().cpu().numpy()[0])
state = next_state
if vis: env.render()
total_reward += reward
return total_reward
def custom_test_env(vis=False):
state = env.reset()
if vis: env.render()
done = False
total_reward = 0
while not done:
state = torch.FloatTensor(state).unsqueeze(0).to(device)
# dist, _ = model(state)
dist_c, dist_s = model.act(state)
next_state, reward_c, done, _ = env.step(dist_c.sample().cpu().numpy()[0])
state = next_state
if vis: env.render()
total_reward += reward_c
return total_reward
###Output
_____no_output_____
###Markdown
GAE
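As a reminder, the cell below implements standard Generalized Advantage Estimation. In the notation of the code (with $m_t$ = `masks[t]`, $\gamma$ = `gamma`, $\tau$ = `tau`):

$$\delta_t = r_t + \gamma\, m_t\, V(s_{t+1}) - V(s_t), \qquad \hat{A}_t = \delta_t + \gamma\,\tau\, m_t\, \hat{A}_{t+1}$$

The function returns $\hat{A}_t + V(s_t)$, which is later used as the `returns` target for the critic.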
###Code
def compute_gae(next_value, rewards, masks, values, gamma=0.99, tau=0.95):
values = values + [next_value]
gae = 0
returns = []
for step in reversed(range(len(rewards))):
delta = rewards[step] + gamma * values[step + 1] * masks[step] - values[step]
gae = delta + gamma * tau * masks[step] * gae
returns.insert(0, gae + values[step])
return returns
###Output
_____no_output_____
###Markdown
Proximal Policy Optimization Algorithm (Arxiv)
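For reference, the update implemented in the cells below optimizes the clipped surrogate objective from the PPO paper, with $\epsilon$ = `clip_param` = 0.2:

$$L^{CLIP}(\theta) = \mathbb{E}_t\Big[\min\big(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\big)\Big], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_\mathrm{old}}(a_t \mid s_t)}$$

In the code, `ratio` is computed as $\exp\big(\log\pi_\theta(a_t\mid s_t) - \log\pi_{\theta_\mathrm{old}}(a_t\mid s_t)\big)$ and the total loss minimized is $0.5\,L^{VF} - L^{CLIP} - 0.001\,S[\pi_\theta]$, where $L^{VF}$ is the squared error of the value head and $S$ the policy entropy.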
###Code
def ppo_iter(mini_batch_size, states, actions, log_probs, returns, advantage):
batch_size = states.size(0)
for _ in range(batch_size // mini_batch_size):
rand_ids = np.random.randint(0, batch_size, mini_batch_size)
yield states[rand_ids, :], actions[rand_ids, :], log_probs[rand_ids, :], returns[rand_ids, :], advantage[rand_ids, :]
def ppo_update(ppo_epochs, mini_batch_size, states, actions, log_probs, returns, advantages, clip_param=0.2):
for _ in range(ppo_epochs):
for state, action, old_log_probs, return_, advantage in ppo_iter(mini_batch_size, states, actions, log_probs, returns, advantages):
dist, value = model(state)
entropy = dist.entropy().mean()
new_log_probs = dist.log_prob(action)
ratio = (new_log_probs - old_log_probs).exp()
surr1 = ratio * advantage
surr2 = torch.clamp(ratio, 1.0 - clip_param, 1.0 + clip_param) * advantage
actor_loss = - torch.min(surr1, surr2).mean()
critic_loss = (return_ - value).pow(2).mean()
loss = 0.5 * critic_loss + actor_loss - 0.001 * entropy
optimizer.zero_grad()
loss.backward()
optimizer.step()
class CustomActorCritic(nn.Module):
def __init__(self, state_dim, action_dim, n_latent_var, safety_dim, std=0.0):
super(CustomActorCritic, self).__init__()
layers_shared_actor = []
layers_safety_actor = []
layers_control_actor = []
layers_shared_critic = []
layers_safety_critic = []
layers_control_critic = []
in_dim = state_dim
out_dim = n_latent_var
# shared part
for i in range(Num_Hidden_Shared):
layers_shared_actor.append(nn.Linear(in_dim, out_dim))
layers_shared_actor.append(nn.Tanh())
in_dim = out_dim
# safety head
for i in range(Num_Hidden_Safety):
layers_safety_actor.append(nn.Linear(in_dim, out_dim))
layers_safety_actor.append(nn.Tanh())
# action head
for i in range(Num_Hidden_Action):
layers_control_actor.append(nn.Linear(in_dim, out_dim))
layers_control_actor.append(nn.Tanh())
self.base_actor = nn.Sequential(*layers_shared_actor)
self.safety_layer_actor = nn.Sequential(*layers_safety_actor,
nn.Linear(out_dim, safety_dim)
)
self.control_layer_actor = nn.Sequential(*layers_control_actor,
nn.Linear(out_dim, action_dim)
)
in_dim = state_dim
out_dim = n_latent_var
# shared part
for i in range(Num_Hidden_Shared):
layers_shared_critic.append(nn.Linear(in_dim, out_dim))
layers_shared_critic.append(nn.Tanh())
in_dim = out_dim
# safety head
for i in range(Num_Hidden_Safety):
layers_safety_critic.append(nn.Linear(in_dim, out_dim))
layers_safety_critic.append(nn.Tanh())
# action head
for i in range(Num_Hidden_Action):
layers_control_critic.append(nn.Linear(in_dim, out_dim))
layers_control_critic.append(nn.Tanh())
self.base_critic = nn.Sequential(*layers_shared_critic)
self.safety_layer_critic = nn.Sequential(*layers_safety_critic,
nn.Linear(out_dim, 1)
)
self.control_layer_critic = nn.Sequential(*layers_control_critic,
nn.Linear(out_dim, 1)
)
self.log_std1 = nn.Parameter(torch.ones(1, action_dim) * std)
self.log_std2 = nn.Parameter(torch.ones(1, safety_dim) * std)
self.apply(init_weights)
def forward(self):
raise NotImplementedError
def act(self, state):
# state = torch.from_numpy(state).float().to(device)
mu1 = self.control_layer_actor(self.base_actor(state)) # mu1
mu2 = self.safety_layer_actor(self.base_actor(state)) # mu2
std1 = self.log_std1.exp().expand_as(mu1)
dist1 = Normal(mu1, std1)
std2 = self.log_std2.exp().expand_as(mu2)
dist2 = Normal(mu2, std2)  # safety-head distribution built from its own mean, mu2
return dist1, dist2
# memory.states.append(state)
# memory.actions.append(action)
# return action.item(), safety.item()
def evaluate(self, state):
# action_probs = self.action_layer(state)
# dist = Categorical(action_probs)
# action_logprobs = dist.log_prob(action)
# dist_entropy = dist.entropy()
control_state_value = self.control_layer_critic(self.base_critic(state))
safety_state_value = self.safety_layer_critic(self.base_critic(state))
# return action_logprobs, torch.squeeze(state_value), dist_entropy
return control_state_value, safety_state_value
'''
We need to update the weights of safety side similar to control side. (by symmetry)
We just need to get the same inputs that are required to calculate the losses from the safety part as well
Just assume we have two networks and we want to do ppo on both of them. So calculate loss and do weighted addition
and back propagate
Equivalents needed:
actions : safety
log_probs : safety_log_probs
returns : safety_returns
advantages : safety_advantages
'''
CTRL_W = 1
SFTY_W = 0
# def ppo_iter_new(mini_batch_size, states, actions, log_probs, returns, advantage):
def custom_ppo_iter(mini_batch_size, states, controls, safetys, log_probs_c, log_probs_s, returns_c, returns_s, \
advantages_c, advantages_s):
batch_size = states.size(0)
for _ in range(batch_size // mini_batch_size):
rand_ids = np.random.randint(0, batch_size, mini_batch_size)
# yield states[rand_ids, :], actions[rand_ids, :], log_probs[rand_ids, :], returns[rand_ids, :], advantage[rand_ids, :]
yield states[rand_ids, :], controls[rand_ids, :], safetys[rand_ids, :], log_probs_c[rand_ids, :], \
log_probs_s[rand_ids, :], returns_c[rand_ids, :], returns_s[rand_ids, :], advantages_c[rand_ids, :], advantages_s[rand_ids, :]
# def ppo_update_new(ppo_epochs, mini_batch_size, states, actions, log_probs, returns, advantages, clip_param=0.2):
def custom_ppo_update(ppo_epochs, mini_batch_size, states, controls, safetys, log_probs_c, log_probs_s \
, returns_c, returns_s, advantages_c, advantages_s, clip_param=0.2):
for _ in range(ppo_epochs):
# for state, action, old_log_probs, return_, advantage in ppo_iter(mini_batch_size, states, actions, log_probs, returns, advantages):
for state, control, safety, old_log_probs_c, old_log_probs_s, return_c, return_s, advantage_c, advantage_s \
in custom_ppo_iter(mini_batch_size, states, controls, safetys, log_probs_c, log_probs_s, returns_c, returns_s, \
advantages_c, advantages_s):
# dist, value = model(state)
dist_c, dist_s = model.act(state)
value_c, value_s = model.evaluate(state)
# entropy = dist.entropy().mean()
entropy_c = dist_c.entropy().mean()
entropy_s = dist_s.entropy().mean()
# new_log_probs = dist.log_prob(action)
new_log_probs_c = dist_c.log_prob(control)
new_log_probs_s = dist_s.log_prob(safety)
# ratio = (new_log_probs - old_log_probs).exp()
ratio_c = (new_log_probs_c - old_log_probs_c).exp()
ratio_s = (new_log_probs_s - old_log_probs_s).exp()
# surr1 = ratio * advantage
surr1_c = ratio_c * advantage_c
surr1_s = ratio_s * advantage_s
# surr2 = torch.clamp(ratio, 1.0 - clip_param, 1.0 + clip_param) * advantage
surr2_c = torch.clamp(ratio_c, 1.0 - clip_param, 1.0 + clip_param) * advantage_c
surr2_s = torch.clamp(ratio_s, 1.0 - clip_param, 1.0 + clip_param) * advantage_s
# actor_loss = - torch.min(surr1, surr2).mean()
actor_loss_c = - torch.min(surr1_c, surr2_c).mean()
actor_loss_s = - torch.min(surr1_s, surr2_s).mean()
# critic_loss = (return_ - value).pow(2).mean()
critic_loss_c = (return_c - value_c).pow(2).mean()
critic_loss_s = (return_s - value_s).pow(2).mean()
# loss = 0.5 * critic_loss + actor_loss - 0.001 * entropy
loss_c = 0.5 * critic_loss_c + actor_loss_c - 0.001 * entropy_c
loss_s = 0.5 * critic_loss_s + actor_loss_s - 0.001 * entropy_s
loss = CTRL_W*loss_c + SFTY_W*loss_s
optimizer.zero_grad()
loss.backward()
optimizer.step()
max_frames = 15000
frame_idx = 0
test_rewards = []
num_inputs = envs.observation_space.shape[0]
num_outputs = envs.action_space.shape[0]
#Hyper params:
hidden_size = 256
lr = 3e-4
num_steps = 20
mini_batch_size = 5
ppo_epochs = 4
threshold_reward = -200
Num_Hidden_Shared = 3
Num_Hidden_Safety = 2
Num_Hidden_Action = 2
safety_dim = 1
action_dim = num_outputs
model = CustomActorCritic(num_inputs, num_outputs, hidden_size, safety_dim).to(device)
optimizer = optim.Adam(model.parameters(), lr=lr)
print(envs.observation_space.shape[0], envs.action_space.shape[0])
'''
Placeholder: the safety reward is currently identical to the control reward and should be changed.
'''
def safety_reward(state, next_state, reward_c):
return reward_c
# raise NotImplementedError
state = envs.reset()
early_stop = False
while frame_idx < max_frames and not early_stop:
# log_probs = []
log_probs_c = []
log_probs_s = []
# values = []
values_c = []
values_s = []
states = []
# actions = []
controls = []
safetys = []
# rewards = []
rewards_c = []
rewards_s = []
masks = []
# entropy = 0
entropy_c = 0
entropy_s = 0
for _ in range(num_steps):
state = torch.FloatTensor(state).to(device)
# dist, value = model(state)
dist_c, dist_s = model.act(state)
value_c, value_s = model.evaluate(state)
# action = dist.sample()
control = dist_c.sample()
safety = dist_s.sample()
# next_state, reward, done, _ = envs.step(action.cpu().numpy())
next_state, reward_c, done, _ = envs.step(control.cpu().numpy())
# log_prob = dist.log_prob(action)
log_prob_c = dist_c.log_prob(control)
log_prob_s = dist_s.log_prob(safety)
# entropy += dist.entropy().mean()
entropy_c += dist_c.entropy().mean()
entropy_s += dist_s.entropy().mean()
# log_probs.append(log_prob)
log_probs_c.append(log_prob_c)
log_probs_s.append(log_prob_s)
# values.append(value)
values_c.append(value_c)
values_s.append(value_s)
reward_s = safety_reward(state, next_state, reward_c)
# rewards.append(torch.FloatTensor(reward).unsqueeze(1).to(device))
rewards_c.append(torch.FloatTensor(reward_c).unsqueeze(1).to(device))
rewards_s.append(torch.FloatTensor(reward_s).unsqueeze(1).to(device))
masks.append(torch.FloatTensor(1 - done).unsqueeze(1).to(device))
states.append(state)
# actions.append(action)
controls.append(control)
safetys.append(safety)
state = next_state
frame_idx += 1
# if frame_idx % 1000 == 0:
if frame_idx % 100 == 0:
test_reward = np.mean([custom_test_env() for _ in range(10)])
test_rewards.append(test_reward)
plot(frame_idx, test_rewards)
if test_reward > threshold_reward: early_stop = True
next_state = torch.FloatTensor(next_state).to(device)
# _, next_value = model(next_state)
next_value_c, next_value_s = model.evaluate(next_state)
# returns = compute_gae(next_value, rewards, masks, values)
returns_c = compute_gae(next_value_c, rewards_c, masks, values_c)
returns_s = compute_gae(next_value_s, rewards_s, masks, values_s)
# returns = torch.cat(returns).detach()
returns_c = torch.cat(returns_c).detach()
returns_s = torch.cat(returns_s).detach()
# log_probs = torch.cat(log_probs).detach()
log_probs_c = torch.cat(log_probs_c).detach()
log_probs_s = torch.cat(log_probs_s).detach()
# values = torch.cat(values).detach()
values_c = torch.cat(values_c).detach()
values_s = torch.cat(values_s).detach()
states = torch.cat(states)
# actions = torch.cat(actions)
controls = torch.cat(controls)
safetys = torch.cat(safetys)
# advantage = returns - values
advantages_c = returns_c - values_c
advantages_s = returns_s - values_s
print(frame_idx)
# ppo_update(ppo_epochs, mini_batch_size, states, actions, log_probs, returns, advantage)
custom_ppo_update(ppo_epochs, mini_batch_size, states, controls, safetys, log_probs_c, log_probs_s \
, returns_c, returns_s, advantages_c, advantages_s, clip_param=0.2)
num_inputs = envs.observation_space.shape[0]
num_outputs = envs.action_space.shape[0]
#Hyper params:
hidden_size = 256
lr = 3e-4
num_steps = 20
mini_batch_size = 5
ppo_epochs = 4
threshold_reward = -200
model = ActorCritic(num_inputs, num_outputs, hidden_size).to(device)
optimizer = optim.Adam(model.parameters(), lr=lr)
max_frames = 15000
frame_idx = 0
test_rewards = []
state = envs.reset()
early_stop = False
while frame_idx < max_frames and not early_stop:
log_probs = []
values = []
states = []
actions = []
rewards = []
masks = []
entropy = 0
for _ in range(num_steps):
state = torch.FloatTensor(state).to(device)
dist, value = model(state)
action = dist.sample()
next_state, reward, done, _ = envs.step(action.cpu().numpy())
log_prob = dist.log_prob(action)
entropy += dist.entropy().mean()
log_probs.append(log_prob)
values.append(value)
rewards.append(torch.FloatTensor(reward).unsqueeze(1).to(device))
masks.append(torch.FloatTensor(1 - done).unsqueeze(1).to(device))
states.append(state)
actions.append(action)
state = next_state
frame_idx += 1
if frame_idx % 1000 == 0:
test_reward = np.mean([test_env() for _ in range(10)])
test_rewards.append(test_reward)
plot(frame_idx, test_rewards)
if test_reward > threshold_reward: early_stop = True
next_state = torch.FloatTensor(next_state).to(device)
_, next_value = model(next_state)
returns = compute_gae(next_value, rewards, masks, values)
returns = torch.cat(returns).detach()
log_probs = torch.cat(log_probs).detach()
values = torch.cat(values).detach()
states = torch.cat(states)
actions = torch.cat(actions)
advantage = returns - values
ppo_update(ppo_epochs, mini_batch_size, states, actions, log_probs, returns, advantage)
###Output
_____no_output_____
###Markdown
Saving trajectories for GAIL
###Code
from itertools import count
max_expert_num = 50000
num_steps = 0
expert_traj = []
for i_episode in count():
state = env.reset()
done = False
total_reward = 0
while not done:
state = torch.FloatTensor(state).unsqueeze(0).to(device)
dist, _ = model(state)
action = dist.sample().cpu().numpy()[0]
next_state, reward, done, _ = env.step(action)
state = next_state
total_reward += reward
expert_traj.append(np.hstack([state, action]))
num_steps += 1
print("episode:", i_episode, "reward:", total_reward)
if num_steps >= max_expert_num:
break
expert_traj = np.stack(expert_traj)
print()
print(expert_traj.shape)
print()
np.save("expert_traj.npy", expert_traj)
###Output
episode: 0 reward: -9.379095587032706
episode: 1 reward: -9.650909347140873
episode: 2 reward: -9.239020824482216
episode: 3 reward: -9.08145820218229
episode: 4 reward: -9.754058352832722
episode: 5 reward: -8.787385990259725
episode: 6 reward: -9.960827471049809
episode: 7 reward: -9.383103664844352
episode: 8 reward: -9.21930627510877
episode: 9 reward: -9.392032602524893
episode: 10 reward: -8.971582591460887
episode: 11 reward: -9.117912064597304
episode: 12 reward: -8.78152662300433
episode: 13 reward: -9.890438673703049
episode: 14 reward: -9.54285322963302
episode: 15 reward: -9.41677616518869
episode: 16 reward: -8.978002589039926
episode: 17 reward: -9.786747572291842
episode: 18 reward: -9.040926985899137
episode: 19 reward: -9.48965082152691
episode: 20 reward: -9.508202258492254
episode: 21 reward: -9.057053059323461
episode: 22 reward: -9.819452759432647
episode: 23 reward: -9.805622747689124
episode: 24 reward: -9.254766119386941
episode: 25 reward: -9.474071662816893
episode: 26 reward: -8.992316209890427
episode: 27 reward: -10.282954007778248
episode: 28 reward: -9.011238642006349
episode: 29 reward: -8.972805993176761
episode: 30 reward: -9.623902668422575
episode: 31 reward: -9.859476343867318
episode: 32 reward: -8.919275223025556
episode: 33 reward: -9.240241611169468
episode: 34 reward: -9.361412326843997
episode: 35 reward: -9.320278867882282
episode: 36 reward: -9.652660506034406
episode: 37 reward: -9.212631603521288
episode: 38 reward: -9.224924478397542
episode: 39 reward: -9.678577056498254
episode: 40 reward: -9.472000122268634
episode: 41 reward: -8.984706614087257
episode: 42 reward: -9.395115387380619
episode: 43 reward: -9.37510810653667
episode: 44 reward: -9.655783567346932
episode: 45 reward: -10.021133317766772
episode: 46 reward: -10.339772991224601
episode: 47 reward: -9.28109806473477
episode: 48 reward: -9.290320988579284
episode: 49 reward: -9.582074860599231
episode: 50 reward: -9.24944545068257
(50949, 3)
|
boards/Pynq-Z1/base/notebooks/pmod/pmod_timer.ipynb | ###Markdown
PMOD TIMER

In this notebook, PMOD Timer functionalities are illustrated. The Timer has two sub-modules: Timer0 and Timer1. The Generate output and Capture Input of Timer 0 are assumed to be connected to PMODA pin 0.
1. The Generate function outputs one clock (10 ns) pulse after a desired period.
2. The Capture input is sensitive to a rising edge or high level logic.

To see the results of this notebook, you will need a [Digilent Analog Discovery 2](http://store.digilentinc.com/analog-discovery-2-100msps-usb-oscilloscope-logic-analyzer-and-variable-power-supply/) and [WaveForms 2015](https://reference.digilentinc.com/waveforms3newest).

1. Instantiation

Import overlay to use the timers.
###Code
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
###Output
_____no_output_____
###Markdown
Instantiate Pmod_Timer class. The method `stop()` will stop both timer sub-modules.In this example, we will use pin 0 of the PMODA interface. PMODB and other pins can also be used.
###Code
from time import sleep
from pynq.lib import Pmod_Timer
pt = Pmod_Timer(base.PMODA,0)
pt.stop()
###Output
_____no_output_____
###Markdown
2. Generate pulses for a certain period of time

In this example, we choose the Digilent Analog Discovery 2 as the scope.
* The `1+` pin (of channel 1) has to be connected to pin 0 on the PMODA interface.

Use the following settings for the waveform. Generate a 10 ns clock pulse every 1 microsecond for 4 seconds and then stop the generation. Note that pulses are generated every $period\times10$ ns, where `period` is the value passed to `generate_pulse()` below. You should see output like this:
###Code
# Generate a 10 ns pulse every period*10 ns
period=100
pt.generate_pulse(period)
# Sleep for 4 seconds and stop the timer
sleep(4)
pt.stop()
###Output
_____no_output_____
###Markdown
3. Generate a certain number of pulses

Note that the first parameter is the period interval. Denoting the desired period as $T$ (in ns), we need to set the first parameter `period` to $period = \frac{T}{10}$. The second parameter is the number of pulses to be generated.
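For convenience, here is a tiny conversion helper. It is our own illustration and not part of the `Pmod_Timer` API; the function name `period_ticks` is made up.

```python
# Illustration only: convert a desired pulse period T (in ns) into the
# `period` argument, which is expressed in 10 ns timer ticks.
def period_ticks(T_ns):
    return T_ns // 10

print(period_ticks(1000))  # a 1 us period -> period=100, as used below
```

Run the following cell and you should see output in the scope like this: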
###Code
# Generate 3 pulses at every 1 us
count=3
period=100
pt.generate_pulse(period, count)
###Output
_____no_output_____
###Markdown
Now generate the pulses at every 1 $\mu$s interval.
###Code
# Generate pulses per 1 us forever
count=0
period=100
pt.generate_pulse(period, count)
###Output
_____no_output_____
###Markdown
Stop the generation.
###Code
pt.stop()
###Output
_____no_output_____
###Markdown
4. Determine if an event has occurred at the input An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the event is to be detected. It returns 0 if no event occurred, otherwise it returns 1.Use a waveform generator in this example. Connect W1 channel of the Analog Discovery to pin 0 of PMODA.Do not run the waveform generation in the next cell.
###Code
# Detect any event within 10 us
period=1000
pt.event_detected(period)
###Output
_____no_output_____
###Markdown
Now run the waveform generation and then run the next cell. Set the waveform generator settings as shown below:
###Code
# Detect any event within 20 ms
period=200000
pt.event_detected(period)
###Output
_____no_output_____
###Markdown
5. Count the number of events that occurred during a desired period

An event is either a rising edge or a high logic level. The parameter is the duration, $period\times10$ ns, over which the events are counted. In this example we are interested in the number of events occurring in 10 $\mu$s. Use a waveform generator in this example. Use the following settings of the waveform generator and run the generator. Then run the next example.
###Code
# Count number of events within 10 us
period=1000
pt.event_count(period)
###Output
_____no_output_____
###Markdown
6. Measure period between two rising edges An event is either a rising edge or a high logic level. It expects at least two rising edges. The return result is in units of nanoseconds.Use a waveform generator in this example. Use the following settings of the waveform generator and run the generation. Then run the next example.
###Code
period = pt.get_period_ns()
print("The measured waveform frequency: {} Hz".format(1e9/period))
###Output
The measured waveform frequency: 200000.0 Hz
###Markdown
PMOD TIMERIn this notebook, PMOD Timer functionalities are illustrated. The Timer has two sub-modules: Timer0 and Timer1. The Generate output and Capture Input of Timer 0 are assumed to be connected to PMODA pin 0. 1. The Generate function outputs one clock (10 ns) pulse after a desired period. 2. The Capture input is sensitive to a rising edge or high level logic.To see the results of this notebook, you will need a [Digilent Analog Discovery 2](http://store.digilentinc.com/analog-discovery-2-100msps-usb-oscilloscope-logic-analyzer-and-variable-power-supply/) and [WaveForms 2015](https://reference.digilentinc.com/waveforms3newest) 1. InstantiationImport overlay to use the timers.
###Code
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
###Output
_____no_output_____
###Markdown
Instantiate Pmod_Timer class. The method `stop()` will stop both timer sub-modules.In this example, we will use pin 0 of the PMODA interface. PMODB and other pins can also be used.
###Code
from time import sleep
from pynq.lib import Pmod_Timer
pt = Pmod_Timer(base.PMODA,0)
pt.stop()
###Output
_____no_output_____
###Markdown
2. Generate pulses for a certain period of timeIn this example, we choose the Digilent Analog Discovery 2 as the scope. * The `1+` pin (of channel 1) has to be connected to pin 0 on PMODA interface. Use the following settings for waveform.Generate a 10 ns clock pulse every 1 microseconds for 4 seconds and then stop the generation.Note that pulses are generated every $count\times10$ ns. Here count is defined as period.You should see output like this:
###Code
# Generate a 10 ns pulse every period*10 ns
period=100
pt.generate_pulse(period)
# Sleep for 4 seconds and stop the timer
sleep(4)
pt.stop()
###Output
_____no_output_____
###Markdown
3. Generate a certain number of pulsesNote first parameter is the period interval. Denoting the desired period as $T$ (in ns), we need to set the first parameter `period` to:$period = \frac{T}{10} $The second parameter is the number of pulses to be generated.Run the following cell and you should see output in the scope like this:
###Code
# Generate 3 pulses at every 1 us
count=3
period=100
pt.generate_pulse(period, count)
###Output
_____no_output_____
###Markdown
Now generate the pulses at every 1 $\mu$s interval.
###Code
# Generate pulses per 1 us forever
count=0
period=100
pt.generate_pulse(period, count)
###Output
_____no_output_____
###Markdown
Stop the generation.
###Code
pt.stop()
###Output
_____no_output_____
###Markdown
4. Determine if an event has occurred at the input An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the event is to be detected. It returns 0 if no event occurred, otherwise it returns 1.Use a waveform generator in this example. Connect W1 channel of the Analog Discovery to pin 0 of PMODA.Do not run the waveform generation in the next cell.
###Code
# Detect any event within 10 us
period=1000
pt.event_detected(period)
###Output
_____no_output_____
###Markdown
Now run the waveform generation and then run the next cell. Set the waveform generator settings as shown below:
###Code
# Detect any event within 20 ms
period=200000
pt.event_detected(period)
###Output
_____no_output_____
###Markdown
5. Count number of events occurred during a desired period An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the number of event are counted. In this example we are interested in number of events occurring in 10 $\mu$s.Use a waveform generator in this example. Use the following settings of the waveform generator and run the generator. Then run the next example.
###Code
# Count number of events within 10 us
period=1000
pt.event_count(period)
###Output
_____no_output_____
###Markdown
6. Measure period between two rising edges An event is either a rising edge or a high logic level. It expects at least two rising edges. The return result is in units of nanoseconds.Use a waveform generator in this example. Use the following settings of the waveform generator and run the generation. Then run the next example.
###Code
period = pt.get_period_ns()
print("The measured waveform frequency: {} Hz".format(1e9/period))
###Output
The measured waveform frequency: 200000.0 Hz
|
Week2-Introduction-to-Python-_-NumPy/Intro to Python.ipynb | ###Markdown
Nesting ListsA great feature of Python data structures is that they support *nesting*. This means we can have data structures within data structures. For example: A list inside a list.Let's see how this works!
###Code
# Let's make three lists
lst_1=[1,2,3]
lst_2=[4,5,6]
lst_3=[7,8,9]
# Make a list of lists to form a matrix
matrix = [lst_1,lst_2,lst_3]
# Show
matrix
###Output
_____no_output_____
###Markdown
We can again use indexing to grab elements, but now there are two levels for the index. The items in the matrix object, and then the items inside that list!
###Code
# Grab first item in matrix object
matrix[0]
# Grab first item of the first item in the matrix object
matrix[0][0]
###Output
_____no_output_____
###Markdown
List ComprehensionsPython has an advanced feature called list comprehensions. They allow for quick construction of lists. To fully understand list comprehensions we need to understand for loops. So don't worry if you don't completely understand this section, and feel free to just skip it since we will return to this topic later.But in case you want to know now, here are a few examples!
###Code
# Build a list comprehension by deconstructing a for loop within a []
first_col = [row[0] for row in matrix]
first_col
###Output
_____no_output_____
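###Markdown
A couple more added examples of the same pattern (illustrative only): a comprehension can transform each element or filter elements as it builds the new list.
###Code
# Square every number from 0 to 4
squares = [x**2 for x in range(5)]
# Keep only the even entries of the first row of the matrix
evens = [x for x in matrix[0] if x % 2 == 0]
squares, evens
###Output
_____no_output_____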
###Markdown
We used a list comprehension here to grab the first element of every row in the matrix object. We will cover this in much more detail later on! TuplesIn Python tuples are very similar to lists; however, unlike lists they are *immutable*, meaning they cannot be changed. You would use tuples to represent things that shouldn't be changed, such as days of the week, or dates on a calendar. In this section, we will get a brief overview of the following: 1.) Constructing Tuples 2.) Basic Tuple Methods 3.) Immutability 4.) When to Use TuplesYou'll have an intuition of how to use tuples based on what you've learned about lists. We can treat them very similarly, with the major distinction being that tuples are immutable. Constructing TuplesThe construction of a tuple uses () with elements separated by commas. For example:
###Code
# Create a tuple
t = (1,2,3)
# Check len just like a list
len(t)
# Can also mix object types
t = ('one',2)
# Show
t
# Use indexing just like we did in lists
t[0]
# Slicing just like a list
t[-1]
###Output
_____no_output_____
###Markdown
Basic Tuple MethodsTuples have built-in methods, but not as many as lists do. Let's look at two of them:
###Code
# Use .index to enter a value and return the index
t.index('one')
# Use .count to count the number of times a value appears
t.count('one')
###Output
_____no_output_____
###Markdown
ImmutabilityIt can't be stressed enough that tuples are immutable. To drive that point home:
###Code
t[0]= 'change'
###Output
_____no_output_____
###Markdown
Because of this immutability, tuples can't grow. Once a tuple is made we can not add to it.
###Code
t.append('nope')
###Output
_____no_output_____
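###Markdown
A small added illustration: although an existing tuple cannot be modified, you can always build a *new* tuple, for example by concatenation.
###Code
# Concatenation creates a brand new tuple; t itself is unchanged
t + ('three',)
###Output
_____no_output_____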
###Markdown
When to use TuplesYou may be wondering, "Why bother using tuples when they have fewer available methods?" To be honest, tuples are not used as often as lists in programming, but are used when immutability is necessary. If in your program you are passing around an object and need to make sure it does not get changed, then a tuple becomes your solution. It provides a convenient source of data integrity.You should now be able to create and use tuples in your programming as well as have an understanding of their immutability. Up next: Dictionaries! DictionariesWe've been learning about *sequences* in Python but now we're going to switch gears and learn about *mappings* in Python. If you're familiar with other languages you can think of these Dictionaries as hash tables. This section will serve as a brief introduction to dictionaries and consists of: 1.) Constructing a Dictionary 2.) Accessing objects from a dictionary 3.) Nesting Dictionaries 4.) Basic Dictionary MethodsSo what are mappings? Mappings are a collection of objects that are stored by a *key*, unlike a sequence that stores objects by their relative position. This is an important distinction, since mappings don't retain a positional order; their objects are defined by a key.A Python dictionary consists of a key and then an associated value. That value can be almost any Python object. Constructing a DictionaryLet's see how we can construct dictionaries to get a better understanding of how they work!
###Code
# Make a dictionary with {} and : to signify a key and a value
my_dict = {'key1':'value1','key2':'value2'}
# Call values by their key
my_dict['key2']
###Output
_____no_output_____
###Markdown
Its important to note that dictionaries are very flexible in the data types they can hold. For example:
###Code
my_dict = {'key1':123,'key2':[12,23,33],'key3':['item0','item1','item2']}
# Let's call items from the dictionary
my_dict['key3']
# Can call an index on that value
my_dict['key3'][0]
# Can then even call methods on that value
my_dict['key3'][0].upper()
###Output
_____no_output_____
###Markdown
We can affect the values of a key as well. For instance:
###Code
my_dict['key1']
# Subtract 123 from the value
my_dict['key1'] = my_dict['key1'] - 123
#Check
my_dict['key1']
###Output
_____no_output_____
###Markdown
A quick note, Python has a built-in method of doing a self subtraction or addition (or multiplication or division). We could have also used += or -= for the above statement. For example:
###Code
# Set the object equal to itself minus 123
my_dict['key1'] -= 123
my_dict['key1']
###Output
_____no_output_____
###Markdown
We can also create keys by assignment. For instance if we started off with an empty dictionary, we could continually add to it:
###Code
# Create a new dictionary
d = {}
# Create a new key through assignment
d['animal'] = 'Dog'
# Can do this with any object
d['answer'] = 42
#Show
d
###Output
_____no_output_____
###Markdown
Nesting with DictionariesHopefully you're starting to see how powerful Python is with its flexibility of nesting objects and calling methods on them. Let's see a dictionary nested inside a dictionary:
###Code
# Dictionary nested inside a dictionary nested inside a dictionary
d = {'key1':{'nestkey':{'subnestkey':'value'}}}
###Output
_____no_output_____
###Markdown
Wow! That's a quite the inception of dictionaries! Let's see how we can grab that value:
###Code
# Keep calling the keys
d['key1']['nestkey']['subnestkey']
###Output
_____no_output_____
###Markdown
A few Dictionary MethodsThere are a few methods we can call on a dictionary. Let's get a quick introduction to a few of them:
###Code
# Create a typical dictionary
d = {'key1':1,'key2':2,'key3':3}
# Method to return a list of all keys
d.keys()
# Method to grab all values
d.values()
# Method to return tuples of all items (we'll learn about tuples soon)
d.items()
###Output
_____no_output_____
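###Markdown
One more commonly used built-in method (an added example, not in the original text) is `.get()`, which looks up a key but returns a default value instead of raising an error when the key is missing.
###Code
# .get() avoids a KeyError for missing keys
d.get('key1'), d.get('no_such_key', 'default value')
###Output
_____no_output_____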
###Markdown
Hopefully you now have a good basic understanding how to construct dictionaries. There's a lot more to go into but we won't do that here. After this section all you need to know is how to create a dictionary and how to retrieve values from it. Functions Introduction to FunctionsHere, we will explain what a function is in Python and how to create one. Functions will be one of our main building blocks when we construct larger and larger amounts of code to solve problems.**So what is a function?**Formally, a function is a useful device that groups together a set of statements so they can be run more than once. They can also let us specify parameters that can serve as inputs to the functions.On a more fundamental level, functions allow us to not have to repeatedly write the same code again and again. If you remember back to the lessons on strings and lists, remember that we used a function len() to get the length of a string. Since checking the length of a sequence is a common task you would want to write a function that can do this repeatedly at command.Functions will be one of most basic levels of reusing code in Python, and it will also allow us to start thinking of program design. def StatementsLet's see how to build out a function's syntax in Python. It has the following form:
###Code
def name_of_function(arg1,arg2):
'''
This is where the function's Document String (docstring) goes
'''
# Do stuff here
# Return desired result
###Output
_____no_output_____
###Markdown
We begin with def then a space followed by the name of the function. Try to keep names relevant, for example len() is a good name for a length() function. Also be careful with names, you wouldn't want to call a function the same name as a [built-in function in Python](https://docs.python.org/2/library/functions.html) (such as len).Next come a pair of parentheses with a number of arguments separated by a comma. These arguments are the inputs for your function. You'll be able to use these inputs in your function and reference them. After this you put a colon.Now here is the important step, you must indent to begin the code inside your function correctly. Python makes use of *whitespace* to organize code. Lots of other programing languages do not do this, so keep that in mind.Next you'll see the docstring, this is where you write a basic description of the function. Docstrings are not necessary for simple functions, but it's good practice to put them in so you or other people can easily understand the code you write.After all this you begin writing the code you wish to execute.The best way to learn functions is by going through examples. So let's try to go through examples that relate back to the various objects and data structures we learned about before. Example 1: A simple print 'hello' function
###Code
def say_hello():
print('hello')
###Output
_____no_output_____
###Markdown
Call the function:
###Code
say_hello()
###Output
_____no_output_____
###Markdown
Example 2: A simple greeting functionLet's write a function that greets people with their name.
###Code
def greeting(name):
print('Hello %s' %(name))
greeting('Bob')
###Output
_____no_output_____
###Markdown
Using returnLet's see some example that use a return statement. return allows a function to *return* a result that can then be stored as a variable, or used in whatever manner a user wants. Example 3: Addition function
###Code
def add_num(num1,num2):
return num1+num2
add_num(4,5)
# Can also save as variable due to return
result = add_num(4,5)
print(result)
###Output
_____no_output_____
###Markdown
What happens if we input two strings?
###Code
add_num('one','two')
###Output
_____no_output_____
###Markdown
Note that because we don't declare variable types in Python, this function could be used to add numbers or sequences together! We'll later learn about adding in checks to make sure a user puts in the correct arguments into a function.Let's also start using break, continue, and pass statements in our code. We introduced these during the while lecture. Finally let's go over a full example of creating a function to check if a number is prime (a common interview exercise).We know a number is prime if that number is only evenly divisible by 1 and itself. Let's write our first version of the function to check all the numbers from 1 to N and perform modulo checks.
###Code
def is_prime(num):
'''
Naive method of checking for primes.
'''
for n in range(2,num):
if num % n == 0:
print(num,'is not prime')
break
else: # If never mod zero, then prime
print(num,'is prime!')
is_prime(16)
is_prime(17)
###Output
_____no_output_____
###Markdown
Note how the else lines up under for and not if. This is because we want the for loop to exhaust all possibilities in the range before printing our number is prime.Also note how we break the code after the first print statement. As soon as we determine that a number is not prime we break out of the for loop.We can actually improve this function by only checking to the square root of the target number, and by disregarding all even numbers after checking for 2. We'll also switch to returning a boolean value to get an example of using return statements:
###Code
import math
def is_prime2(num):
'''
Better method of checking for primes.
'''
if num % 2 == 0 and num > 2:
return False
for i in range(3, int(math.sqrt(num)) + 1, 2):
if num % i == 0:
return False
return True
is_prime2(27)
###Output
_____no_output_____
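###Markdown
As an added sanity check (not in the original), the improved function can be dropped into a list comprehension to list the primes below 20.
###Code
# Expected result: [2, 3, 5, 7, 11, 13, 17, 19]
[n for n in range(2, 20) if is_prime2(n)]
###Output
_____no_output_____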
###Markdown
Why don't we have any break statements? It should be noted that as soon as a function *returns* something, it shuts down. A function can deliver multiple print statements, but it will only obey one return. Great! You should now have a basic understanding of creating your own functions to save yourself from repeatedly writing code! Errors and Exception HandlingIn this section we will cover Errors and Exception Handling in Python. You've definitely already encountered errors by this point in the course. For example:
###Code
print('Hello)
###Output
_____no_output_____
###Markdown
Note how we get a SyntaxError, with the further description that it was an EOL (End of Line Error) while scanning the string literal. This is specific enough for us to see that we forgot a single quote at the end of the line. Understanding these various error types will help you debug your code much faster. This type of error and description is known as an Exception. Even if a statement or expression is syntactically correct, it may cause an error when an attempt is made to execute it. Errors detected during execution are called exceptions and are not unconditionally fatal.You can check out the full list of built-in exceptions [here](https://docs.python.org/3/library/exceptions.html). Now let's learn how to handle errors and exceptions in our own code. try and exceptThe basic terminology and syntax used to handle errors in Python are the try and except statements. The code which can cause an exception to occur is put in the try block and the handling of the exception is then implemented in the except block of code. The syntax follows: try: You do your operations here... ... except ExceptionI: If there is ExceptionI, then execute this block. except ExceptionII: If there is ExceptionII, then execute this block. ... else: If there is no exception then execute this block. We can also just check for any exception with just using except: To get a better understanding of all this let's check out an example: We will look at some code that opens and writes a file:
###Code
try:
f = open('testfile','w')
f.write('Test write this')
except IOError:
# This will only check for an IOError exception and then execute this print statement
print("Error: Could not find file or read data")
else:
print("Content written successfully")
f.close()
###Output
_____no_output_____
###Markdown
Now let's see what would happen if we did not have write permission (opening only with 'r'):
###Code
try:
f = open('testfile','r')
f.write('Test write this')
except IOError:
# This will only check for an IOError exception and then execute this print statement
print("Error: Could not find file or read data")
else:
print("Content written successfully")
f.close()
###Output
_____no_output_____
###Markdown
Great! Notice how we only printed a statement! The code still ran and we were able to continue doing actions and running code blocks. This is extremely useful when you have to account for possible input errors in your code. You can be prepared for the error and keep running code, instead of your code just breaking as we saw above.We could have also just said except: if we weren't sure what exception would occur. For example:
###Code
try:
f = open('testfile','r')
f.write('Test write this')
except:
# This will check for any exception and then execute this print statement
print("Error: Could not find file or read data")
else:
print("Content written successfully")
f.close()
###Output
_____no_output_____
###Markdown
Great! Now we don't actually need to memorize that list of exception types! Now what if we kept wanting to run code after the exception occurred? This is where finally comes in. finallyThe finally: block of code will always be run regardless if there was an exception in the try code block. The syntax is: try: Code block here ... Due to any exception, this code may be skipped! finally: This code block would always be executed.For example:
###Code
try:
f = open("testfile", "w")
f.write("Test write statement")
f.close()
finally:
print("Always execute finally code blocks")
###Output
_____no_output_____
###Markdown
We can use this in conjunction with except. Let's see a new example that will take into account a user providing the wrong input:
###Code
def askint():
try:
val = int(input("Please enter an integer: "))
except:
print("Looks like you did not enter an integer!")
finally:
print("Finally, I executed!")
print(val)
askint()
askint()
###Output
_____no_output_____
###Markdown
Notice how we got an error when trying to print val (because it was never properly assigned). Let's remedy this by asking the user and checking to make sure the input type is an integer:
###Code
def askint():
try:
val = int(input("Please enter an integer: "))
except:
print("Looks like you did not enter an integer!")
val = int(input("Try again-Please enter an integer: "))
finally:
print("Finally, I executed!")
print(val)
askint()
###Output
_____no_output_____
###Markdown
Hmmm...that only did one check. How can we continually keep checking? We can use a while loop!
###Code
def askint():
while True:
try:
val = int(input("Please enter an integer: "))
except:
print("Looks like you did not enter an integer!")
continue
else:
print("Yep that's an integer!")
break
finally:
print("Finally, I executed!")
print(val)
askint()
###Output
_____no_output_____
###Markdown
So why did our function print "Finally, I executed!" after each trial, yet it never printed `val` itself? This is because with a try-except-finally clause, any continue or break statements are reserved until *after* the try clause is completed. This means that even though a successful input of **3** brought us to the else: block, and a break statement was thrown, the try clause continued through to finally: before breaking out of the while loop. And since print(val) was outside the try clause, the break statement prevented it from running.Let's make one final adjustment:
###Code
def askint():
while True:
try:
val = int(input("Please enter an integer: "))
except:
print("Looks like you did not enter an integer!")
continue
else:
print("Yep that's an integer!")
print(val)
break
finally:
print("Finally, I executed!")
askint()
###Output
_____no_output_____
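###Markdown
A small added variation (illustrative, not from the original): catching the specific `ValueError` that `int()` raises is usually preferable to a bare `except`, because unrelated errors are not silently swallowed.
###Code
def askint_specific():
    while True:
        try:
            val = int(input("Please enter an integer: "))
        except ValueError:
            print("Looks like you did not enter an integer!")
            continue
        else:
            print("Yep that's an integer!")
            print(val)
            break
        finally:
            print("Finally, I executed!")
###Output
_____no_output_____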
###Markdown
**Great! Now you know how to handle errors and exceptions in Python with the try, except, else, and finally notation!** Modules and PackagesThe goal of this section is to:
* code out a basic module and show how to import it into a Python script
* run a Python script from a code block
* show how command line arguments can be passed into a script
Writing modules
###Code
%%writefile myfile.py
def afunc(x):
return [num for num in range(x) if num%2!=0]
mylist = afunc(20)
###Output
Writing myfile.py
###Markdown
**myfile.py** is going to be used as a module.Notice that **myfile.py** doesn't print or return anything,it just defines a function called *afunc* and a variable called *mylist*. Writing scripts
###Code
%%writefile myfile2.py
import myfile
myfile.mylist.append(30)
print(myfile.mylist)
###Output
Writing myfile2.py
###Markdown
**myfile2.py** is a Python script.First, we import our **myfile** module (note the lack of a .py extension).Next, we access the *mylist* variable inside **myfile**, and perform a list method on it.`.append(30)` proves we're working with a Python list object, and not just a string.Finally, we tell our script to print the modified list. Running scripts
###Code
! python myfile2.py
###Output
[1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 30]
###Markdown
Here we run our script from the command line. The exclamation point is a Jupyter trick that lets you run command line statements from inside a jupyter cell.
###Code
import myfile
print(myfile.mylist)
###Output
[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
###Markdown
The above cell proves that we never altered **myfile.py**, we just appended a number to the list *after* it was brought into **myfile2**. Passing command line argumentsPython's `sys` module gives you access to command line arguments when calling scripts.
###Code
%%writefile myfile3.py
import sys
import myfile
num = int(sys.argv[1])
print(myfile.afunc(num))
###Output
Overwriting myfile3.py
###Markdown
Note that we selected the second item in the list of arguments with `sys.argv[1]`.This is because the list created with `sys.argv` always starts with the name of the file being used.
###Code
! python myfile3.py 21
###Output
[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
###Markdown
Here we're passing 21 to be the upper range value used by the *afunc* function in **myfile.py** Understanding modulesModules in Python are simply Python files with the .py extension, which implement a set of functions. Modules are imported from other modules using the import command. Check out the full list of built-in modules in the Python standard library [here](https://docs.python.org/3/py-modindex.html).The first time a module is loaded into a running Python script, it is initialized by executing the code in the module once. If another module in your code imports the same module again, it will not be loaded twice.If we want to import the math module, we simply import the name of the module:
###Code
# import the library
import math
# use it (ceiling rounding)
math.ceil(3.2)
###Output
_____no_output_____
###Markdown
Exploring built-in modulesTwo very important functions come in handy when exploring modules in Python - the dir and help functions.We can look for which functions are implemented in each module by using the dir function:
###Code
print(dir(math))
###Output
_____no_output_____
###Markdown
When we find the function in the module we want to use, we can read about it more using the help function, inside the Python interpreter:
###Code
help(math.ceil)
###Output
_____no_output_____
###Markdown
Writing modulesWriting Python modules is very simple. To create a module of your own, simply create a new .py file with the module name, and then import it using the Python file name (without the .py extension) using the import command. Writing packagesPackages are name-spaces which contain multiple packages and modules themselves. They are simply directories, but with a twist.Each package in Python is a directory which MUST contain a special file called **\__init\__.py**. This file can be empty, and it indicates that the directory it contains is a Python package, so it can be imported the same way a module can be imported.If we create a directory called foo, which marks the package name, we can then create a module inside that package called bar. We also must not forget to add the **\__init\__.py** file inside the foo directory.To use the module bar, we can import it in two ways:
###Code
# Just an example, this won't work
import foo.bar
# OR could do it this way
from foo import bar
###Output
_____no_output_____
###Markdown
In the first method, we must use the foo prefix whenever we access the module bar. In the second method, we don't, because we import the module to our module's name-space.The **\__init\__.py** file can also decide which modules the package exports as the API, while keeping other modules internal, by overriding the **\__all\__** variable, like so:
###Code
# Inside foo/__init__.py:
__all__ = ["bar"]
###Output
_____no_output_____
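###Markdown
A minimal sketch (added for illustration; the `foo`/`bar` names follow the example above, and the `hello` function is just a placeholder) of creating such a package from inside a notebook using only the standard library:
###Code
import os

# Create the package directory with an __init__.py that exports only "bar"
os.makedirs('foo', exist_ok=True)
with open('foo/__init__.py', 'w') as f:
    f.write('__all__ = ["bar"]\n')

# Add a simple module inside the package
with open('foo/bar.py', 'w') as f:
    f.write('def hello():\n    return "hello from bar"\n')

from foo import bar
bar.hello()
###Output
_____no_output_____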
###Markdown
Strings Strings are used in Python to record text information, such as names. Strings in Python are actually a *sequence*, which basically means Python keeps track of every element in the string as a sequence. For example, Python understands the string "hello' to be a sequence of letters in a specific order. This means we will be able to use indexing to grab particular letters (like the first letter, or the last letter).This idea of a sequence is an important one in Python and we will touch upon it later on in the future.In this lecture we'll learn about the following: 1.) Creating Strings 2.) Printing Strings 3.) String Properties 4.) String Methods 5.) Print Formatting Creating a StringTo create a string in Python you need to use either single quotes or double quotes. For example:
###Code
# Single word
'hello'
# Entire phrase
'This is also a string'
# We can also use double quote
"String built with double quotes"
# Be careful with quotes!
' I'm using single quotes, but this will create an error'
###Output
_____no_output_____
###Markdown
The reason for the error above is because the single quote in I'm stopped the string. You can use combinations of double and single quotes to get the complete statement.
###Code
"Now I'm ready to use the single quotes inside a string!"
###Output
_____no_output_____
###Markdown
Now let's learn about printing strings! Printing a StringIn a Jupyter Notebook, just a string in a cell will automatically output strings, but the correct way to display strings in your output is by using a print function.
###Code
# We can simply declare a string
'Hello World'
# Note that we can't output multiple strings this way
'Hello World 1'
'Hello World 2'
###Output
_____no_output_____
###Markdown
We can use a print statement to print a string.
###Code
print('Hello World 1')
print('Hello World 2')
print('Use \n to print a new line')
print('\n')
print('See what I mean?')
###Output
_____no_output_____
###Markdown
String Basics We can also use a function called len() to check the length of a string!
###Code
len('Hello World')
###Output
_____no_output_____
###Markdown
Python's built-in len() function counts all of the characters in the string, including spaces and punctuation.
###Code
# Assign s as a string
s = 'Hello World'
#Check
s
# Print the object
print(s)
###Output
_____no_output_____
###Markdown
Let's start indexing!
###Code
# Show first element (in this case a letter)
s[0]
s[1]
s[2]
###Output
_____no_output_____
###Markdown
We can use a : to perform *slicing* which grabs everything up to a designated point. For example:
###Code
# Grab everything past the first term all the way to the length of s which is len(s)
s[1:]
# Note that there is no change to the original s
s
# Grab everything UP TO the 3rd index
s[:3]
###Output
_____no_output_____
###Markdown
Note the above slicing. Here we're telling Python to grab everything from 0 up to 3. It doesn't include the 3rd index. You'll notice this a lot in Python, where statements are usually in the context of "up to, but not including".
###Code
#Everything
s[:]
###Output
_____no_output_____
###Markdown
We can also use negative indexing to go backwards.
###Code
# Last letter (one index behind 0 so it loops back around)
s[-1]
# Grab everything but the last letter
s[:-1]
###Output
_____no_output_____
###Markdown
We can also use index and slice notation to grab elements of a sequence by a specified step size (the default is 1). For instance we can use two colons in a row and then a number specifying the frequency to grab elements. For example:
###Code
# Grab everything, but go in steps size of 1
s[::1]
# Grab everything, but go in step sizes of 2
s[::2]
# We can use this to print a string backwards
s[::-1]
###Output
_____no_output_____
###Markdown
String PropertiesIt's important to note that strings have an important property known as *immutability*. This means that once a string is created, the elements within it can not be changed or replaced. For example:
###Code
s
# Let's try to change the first letter to 'x'
s[0] = 'x'
###Output
_____no_output_____
###Markdown
Notice how the error tells us directly what we can't do, change the item assignment!Something we *can* do is concatenate strings!
###Code
s
# Concatenate strings!
s + ' concatenate me!'
# We can reassign s completely though!
s = s + ' concatenate me!'
print(s)
s
###Output
_____no_output_____
###Markdown
We can use the multiplication symbol to create repetition!
###Code
letter = 'z'
letter*10
###Output
_____no_output_____
###Markdown
Basic Built-in String methodsObjects in Python usually have built-in methods. These methods are functions inside the object (we will learn about these in much more depth later) that can perform actions or commands on the object itself.We call methods with a period and then the method name. Methods are in the form:object.method(parameters)Where parameters are extra arguments we can pass into the method. Don't worry if the details don't make 100% sense right now. Later on we will be creating our own objects and functions!Here are some examples of built-in methods in strings:
###Code
s
# Upper Case a string
s.upper()
# Lower case
s.lower()
# Split a string by blank space (this is the default)
s.split()
# Split by a specific element (doesn't include the element that was split on)
s.split('W')
###Output
_____no_output_____
###Markdown
There are many more methods than the ones covered here. Visit the Advanced String section to find out more! Print FormattingWe can use the .format() method to add formatted objects to printed string statements. The easiest way to show this is through an example:
###Code
'Insert another string with curly brackets: {}'.format('The inserted string')
###Output
_____no_output_____
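###Markdown
A few more added examples of the same method (illustrative only): placeholders can be repeated by position or filled in by keyword.
###Code
# Positional and keyword arguments to .format()
print('The {0} {1} over the {0}'.format('fox', 'jumps'))
print('First: {a}, Second: {b}'.format(a=1, b='two'))
###Output
_____no_output_____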
###Markdown
ListsEarlier when discussing strings we introduced the concept of a *sequence* in Python. Lists can be thought of as the most general version of a *sequence* in Python. Unlike strings, they are mutable, meaning the elements inside a list can be changed!In this section we will learn about: 1.) Creating lists 2.) Basic List Methods 3.) Nesting Lists 4.) Introduction to List ComprehensionsLists are constructed with brackets [] and commas separating every element in the list.Let's go ahead and see how we can construct lists!
###Code
# Assign a list to a variable named my_list
my_list = [1,2,3]
###Output
_____no_output_____
###Markdown
We just created a list of integers, but lists can actually hold different object types. For example:
###Code
my_list = ['A string',23,100.232,'o']
###Output
_____no_output_____
###Markdown
Just like strings, the len() function will tell you how many items are in the sequence of the list.
###Code
len(my_list)
my_list = ['one','two','three',4,5]
# Grab element at index 0
my_list[0]
# Grab index 1 and everything past it
my_list[1:]
# Grab everything UP TO index 3
my_list[:3]
###Output
_____no_output_____
###Markdown
We can also use + to concatenate lists, just like we did for strings.
###Code
my_list + ['new item']
###Output
_____no_output_____
###Markdown
Note: This doesn't actually change the original list!
###Code
my_list
###Output
_____no_output_____
###Markdown
You would have to reassign the list to make the change permanent.
###Code
# Reassign
my_list = my_list + ['add new item permanently']
my_list
###Output
_____no_output_____
###Markdown
We can also use the * for a duplication method similar to strings:
###Code
# Make the list double
my_list * 2
# Again doubling not permanent
my_list
###Output
_____no_output_____
###Markdown
Basic List MethodsIf you are familiar with another programming language, you might start to draw parallels between arrays in another language and lists in Python. Lists in Python, however, tend to be more flexible than arrays in other languages for two good reasons: they have no fixed size (meaning we don't have to specify how big a list will be), and they have no fixed type constraint (like we've seen above).Let's go ahead and explore some more special methods for lists:
###Code
# Create a new list
list1 = [1,2,3]
###Output
_____no_output_____
###Markdown
Use the **append** method to permanently add an item to the end of a list:
###Code
# Append
list1.append('append me!')
# Show
list1
###Output
_____no_output_____
###Markdown
Use **pop** to "pop off" an item from the list. By default pop takes off the last index, but you can also specify which index to pop off. Let's see an example:
###Code
# Pop off the 0 indexed item
list1.pop(0)
# Show
list1
# Assign the popped element, remember default popped index is -1
popped_item = list1.pop()
popped_item
# Show remaining list
list1
###Output
_____no_output_____
###Markdown
It should also be noted that list indexing will return an error if there is no element at that index. For example:
###Code
list1[100]
###Output
_____no_output_____
###Markdown
We can also use the **sort** and **reverse** methods to modify our lists:
###Code
new_list = ['a','e','x','b','c']
#Show
new_list
# Use reverse to reverse order (this is permanent!)
new_list.reverse()
new_list
# Use sort to sort the list (in this case alphabetical order, but for numbers it will go ascending)
new_list.sort()
new_list
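# Added illustration: for numbers, sort() orders the values ascending (also in place)
num_list = [3, 1, 4, 1, 5, 9, 2, 6]
num_list.sort()
num_list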
###Output
_____no_output_____ |
chapter15_symbolic/05_number_theory.ipynb | ###Markdown
15.5. A bit of number theory with SymPy (https://ipython-books.github.io/155-a-bit-of-number-theory-with-sympy/) References:
* Undergraduate level: Elementary Number Theory, Gareth A. Jones, Josephine M. Jones, Springer (1998)
* Graduate level: A Classical Introduction to Modern Number Theory, Kenneth Ireland, Michael Rosen, Springer (1982)
* SymPy's number-theory module, available at http://docs.sympy.org/latest/modules/ntheory.html
* The Chinese Remainder Theorem on Wikipedia, at https://en.wikipedia.org/wiki/Chinese_remainder_theorem
* Applications of the Chinese Remainder Theorem, given at http://mathoverflow.net/questions/10014/applications-of-the-chinese-remainder-theorem
* Number theory lectures on Awesome Math, at https://github.com/rossant/awesome-math/number-theory
###Code
from sympy import *
import sympy.ntheory as nt
init_printing()
nt.isprime(11), nt.isprime(121)
nt.isprime(2017)
nt.nextprime(2017)
nt.prime(1000)
nt.primepi(2017)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.arange(2, 10000)
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
ax.plot(x, list(map(nt.primepi, x)), '-k',
label='$\pi(x)$')
ax.plot(x, x / np.log(x), '--k',
label='$x/\log(x)$')
ax.legend(loc=2)
factors = nt.factorint(2020)
type(factors), factors.keys(), factors.values()
n = 1
for k,v in factors.items():
n *= k**v
print(n)
nt.factorint(1998)
2 * 3**3 * 37
from sympy.ntheory.modular import solve_congruence
solve_congruence((1, 3), (2, 4), (3, 5))
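# Added check (not in the original): solve_congruence returns (solution, modulus);
# the solution should satisfy all three congruences, i.e. give remainders (1, 2, 3)
x, modulus = solve_congruence((1, 3), (2, 4), (3, 5))
x % 3, x % 4, x % 5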
###Output
_____no_output_____ |
notebooks/additional_topics.ipynb | ###Markdown
Saving modelsIt is possible to save fitted Prophet models so that they can be loaded and used later.In R, this is done with `saveRDS` and `readRDS`:
###Code
%%R
saveRDS(m, file="model.RDS") # Save model
m <- readRDS(file="model.RDS") # Load model
###Output
_____no_output_____
###Markdown
In Python, models should not be saved with pickle; the Stan backend attached to the model object will not pickle well, and will produce issues under certain versions of Python. Instead, you should use the built-in serialization functions to serialize the model to json:
###Code
import json
from prophet.serialize import model_to_json, model_from_json
with open('serialized_model.json', 'w') as fout:
json.dump(model_to_json(m), fout) # Save model
with open('serialized_model.json', 'r') as fin:
m = model_from_json(json.load(fin)) # Load model
###Output
_____no_output_____
###Markdown
The json file will be portable across systems, and deserialization is backwards compatible with older versions of prophet. Flat trend and custom trendsFor time series that exhibit strong seasonality patterns rather than trend changes, it may be useful to force the trend growth rate to be flat. This can be achieved simply by passing `growth=flat` when creating the model:
###Code
%%R
m <- prophet(df, growth='flat')
m = Prophet(growth='flat')
###Output
_____no_output_____
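###Markdown
A minimal added sketch (not part of the original document) of using a flat-growth model end to end, assuming `df` is the same training dataframe used in the surrounding examples:
###Code
# Fit a flat-trend model and produce a 30-day forecast
m_flat = Prophet(growth='flat')
m_flat.fit(df)
future = m_flat.make_future_dataframe(periods=30)
forecast = m_flat.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
###Output
_____no_output_____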
###Markdown
Note that if this is used on a time series that doesn't have a constant trend, any trend will be fit with the noise term and so there will be high predictive uncertainty in the forecast.To use a trend besides these three built-in trend functions (piecewise linear, piecewise logistic growth, and flat), you can download the source code from github, modify the trend function as desired in a local branch, and then install that local version. This PR provides a good illustration of what must be done to implement a custom trend (https://github.com/facebook/prophet/pull/1466/files), as does this one that implements a step function trend (https://github.com/facebook/prophet/pull/1794) and this one for a new trend in R (https://github.com/facebook/prophet/pull/1778). Updating fitted modelsA common setting for forecasting is fitting models that need to be updated as additional data come in. Prophet models can only be fit once, and a new model must be re-fit when new data become available. In most settings, model fitting is fast enough that there isn't any issue with re-fitting from scratch. However, it is possible to speed things up a little by warm-starting the fit from the model parameters of the earlier model. This code example shows how this can be done in Python:
###Code
def stan_init(m):
"""Retrieve parameters from a trained model.
Retrieve parameters from a trained model in the format
used to initialize a new Stan model.
Parameters
----------
m: A trained model of the Prophet class.
Returns
-------
A Dictionary containing retrieved parameters of m.
"""
res = {}
for pname in ['k', 'm', 'sigma_obs']:
res[pname] = m.params[pname][0][0]
for pname in ['delta', 'beta']:
res[pname] = m.params[pname][0]
return res
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
df1 = df.loc[df['ds'] < '2016-01-19', :] # All data except the last day
m1 = Prophet().fit(df1) # A model fit to all data except the last day
%timeit m2 = Prophet().fit(df) # Adding the last day, fitting from scratch
%timeit m2 = Prophet().fit(df, init=stan_init(m1)) # Adding the last day, warm-starting from m1
###Output
1.33 s ± 55.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
185 ms ± 4.46 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
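###Markdown
A rough added sketch (an illustration only, not from the original) of how the same warm-start trick could be used when re-fitting on a schedule as new rows arrive; `new_batches` is a hypothetical iterable of incoming dataframes:
###Code
model = Prophet().fit(df1)          # initial fit on the shorter history
new_batches = []                    # placeholder: batches of new rows as they arrive
for new_chunk in new_batches:
    df1 = pd.concat([df1, new_chunk])                    # append the new data
    model = Prophet().fit(df1, init=stan_init(model))    # warm-start from the previous fit
###Output
_____no_output_____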
###Markdown
Saving modelsIt is possible to save fitted Prophet models so that they can be loaded and used later.In R, this is done with `saveRDS` and `readRDS`:
###Code
%%R
saveRDS(m, file="model.RDS") # Save model
m <- readRDS(file="model.RDS") # Load model
###Output
_____no_output_____
###Markdown
In Python, models should not be saved with pickle; the Stan backend attached to the model object will not pickle well, and will produce issues under certain versions of Python. Instead, you should use the built-in serialization functions to serialize the model to json:
###Code
import json
from fbprophet.serialize import model_to_json, model_from_json
with open('serialized_model.json', 'w') as fout:
json.dump(model_to_json(m), fout) # Save model
with open('serialized_model.json', 'r') as fin:
m = model_from_json(json.load(fin)) # Load model
###Output
_____no_output_____
###Markdown
The json file will be portable across systems, and deserialization is backwards compatible with older versions of fbprophet. Flat trend and custom trendsFor time series that exhibit strong seasonality patterns rather than trend changes, it may be useful to force the trend growth rate to be flat. This can be achieved simply by passing `growth=flat` when creating the model:
###Code
%%R
m <- prophet(df, growth='flat')
m = Prophet(growth='flat')
###Output
_____no_output_____
###Markdown
Note that if this is used on a time series that doesn't have a constant trend, any trend will be fit with the noise term and so there will be high predictive uncertainty in the forecast.To use a trend besides these three built-in trend functions (piecewise linear, piecewise logistic growth, and flat), you can download the source code from github, modify the trend function as desired in a local branch, and then install that local version. This PR provides a good illustration of what must be done to implement a custom trend: https://github.com/facebook/prophet/pull/1466/files. Updating fitted modelsA common setting for forecasting is fitting models that need to be updated as additional data come in. Prophet models can only be fit once, and a new model must be re-fit when new data become available. In most settings, model fitting is fast enough that there isn't any issue with re-fitting from scratch. However, it is possible to speed things up a little by warm-starting the fit from the model parameters of the earlier model. This code example shows how this can be done in Python:
###Code
def stan_init(m):
"""Retrieve parameters from a trained model.
Retrieve parameters from a trained model in the format
used to initialize a new Stan model.
Parameters
----------
m: A trained model of the Prophet class.
Returns
-------
A Dictionary containing retrieved parameters of m.
"""
res = {}
for pname in ['k', 'm', 'sigma_obs']:
res[pname] = m.params[pname][0][0]
for pname in ['delta', 'beta']:
res[pname] = m.params[pname][0]
return res
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
df1 = df.loc[df['ds'] < '2016-01-19', :] # All data except the last day
m1 = Prophet().fit(df1) # A model fit to all data except the last day
%timeit m2 = Prophet().fit(df) # Adding the last day, fitting from scratch
%timeit m2 = Prophet().fit(df, init=stan_init(m1)) # Adding the last day, warm-starting from m1
###Output
1.44 s ± 121 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
860 ms ± 203 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Saving modelsIt is possible to save fitted Prophet models so that they can be loaded and used later.In R, this is done with `saveRDS` and `readRDS`:
###Code
%%R
saveRDS(m, file="model.RDS") # Save model
m <- readRDS(file="model.RDS") # Load model
###Output
_____no_output_____
###Markdown
In Python, models should not be saved with pickle; the Stan backend attached to the model object will not pickle well, and will produce issues under certain versions of Python. Instead, you should use the built-in serialization functions to serialize the model to json:
###Code
from prophet.serialize import model_to_json, model_from_json
with open('serialized_model.json', 'w') as fout:
fout.write(model_to_json(m)) # Save model
with open('serialized_model.json', 'r') as fin:
m = model_from_json(fin.read()) # Load model
###Output
_____no_output_____
###Markdown
The json file will be portable across systems, and deserialization is backwards compatible with older versions of prophet. Flat trend and custom trendsFor time series that exhibit strong seasonality patterns rather than trend changes, it may be useful to force the trend growth rate to be flat. This can be achieved simply by passing `growth=flat` when creating the model:
###Code
%%R
m <- prophet(df, growth='flat')
m = Prophet(growth='flat')
###Output
_____no_output_____
###Markdown
Note that if this is used on a time series that doesn't have a constant trend, any trend will be fit with the noise term and so there will be high predictive uncertainty in the forecast.To use a trend besides these three built-in trend functions (piecewise linear, piecewise logistic growth, and flat), you can download the source code from github, modify the trend function as desired in a local branch, and then install that local version. [This PR](https://github.com/facebook/prophet/pull/1466/files) provides a good illustration of what must be done to implement a custom trend, as does [this one](https://github.com/facebook/prophet/pull/1794) that implements a step function trend and [this one](https://github.com/facebook/prophet/pull/1778) for a new trend in R. Updating fitted modelsA common setting for forecasting is fitting models that need to be updated as additional data come in. Prophet models can only be fit once, and a new model must be re-fit when new data become available. In most settings, model fitting is fast enough that there isn't any issue with re-fitting from scratch. However, it is possible to speed things up a little by warm-starting the fit from the model parameters of the earlier model. This code example shows how this can be done in Python:
###Code
def stan_init(m):
"""Retrieve parameters from a trained model.
Retrieve parameters from a trained model in the format
used to initialize a new Stan model.
Parameters
----------
m: A trained model of the Prophet class.
Returns
-------
A Dictionary containing retrieved parameters of m.
"""
res = {}
for pname in ['k', 'm', 'sigma_obs']:
res[pname] = m.params[pname][0][0]
for pname in ['delta', 'beta']:
res[pname] = m.params[pname][0]
return res
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
df1 = df.loc[df['ds'] < '2016-01-19', :] # All data except the last day
m1 = Prophet().fit(df1) # A model fit to all data except the last day
%timeit m2 = Prophet().fit(df) # Adding the last day, fitting from scratch
%timeit m2 = Prophet().fit(df, init=stan_init(m1)) # Adding the last day, warm-starting from m1
###Output
1.33 s ± 55.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
185 ms ± 4.46 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Saving modelsIt is possible to save fitted Prophet models so that they can be loaded and used later.In R, this is done with `saveRDS` and `readRDS`:
###Code
%%R
saveRDS(m, file="model.RDS") # Save model
m <- readRDS(file="model.RDS") # Load model
###Output
_____no_output_____
###Markdown
In Python, models should not be saved with pickle; the Stan backend attached to the model object will not pickle well, and will produce issues under certain versions of Python. Instead, you should use the built-in serialization functions to serialize the model to json:
###Code
import json
from fbprophet.serialize import model_to_json, model_from_json
with open('serialized_model.json', 'w') as fout:
json.dump(model_to_json(m), fout) # Save model
with open('serialized_model.json', 'r') as fin:
m = model_from_json(json.load(fin)) # Load model
###Output
_____no_output_____
###Markdown
The json file will be portable across systems, and deserialization is backwards compatible with older versions of fbprophet. Flat trend and custom trendsFor time series that exhibit strong seasonality patterns rather than trend changes, it may be useful to force the trend growth rate to be flat. This can be achieved simply by passing `growth=flat` when creating the model:
###Code
m = Prophet(growth='flat')
###Output
_____no_output_____
###Markdown
This is currently implemented only in the Python version of Prophet. Note that if this is used on a time series that doesn't have a constant trend, any trend will be fit with the noise term and so there will be high predictive uncertainty in the forecast.To use a trend besides these three built-in trend functions (piecewise linear, piecewise logistic growth, and flat), you can download the source code from github, modify the trend function as desired in a local branch, and then install that local version. This PR provides a good illustration of what must be done to implement a custom trend: https://github.com/facebook/prophet/pull/1466/files. Updating fitted modelsA common setting for forecasting is fitting models that need to be updated as additional data come in. Prophet models can only be fit once, and a new model must be re-fit when new data become available. In most settings, model fitting is fast enough that there isn't any issue with re-fitting from scratch. However, it is possible to speed things up a little by warm-starting the fit from the model parameters of the earlier model. This code example shows how this can be done in Python:
###Code
def stan_init(m):
"""Retrieve parameters from a trained model.
Retrieve parameters from a trained model in the format
used to initialize a new Stan model.
Parameters
----------
m: A trained model of the Prophet class.
Returns
-------
A Dictionary containing retrieved parameters of m.
"""
res = {}
for pname in ['k', 'm', 'sigma_obs']:
res[pname] = m.params[pname][0][0]
for pname in ['delta', 'beta']:
res[pname] = m.params[pname][0]
return res
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
df1 = df.loc[df['ds'] < '2016-01-19', :] # All data except the last day
m1 = Prophet().fit(df1) # A model fit to all data except the last day
%timeit m2 = Prophet().fit(df) # Adding the last day, fitting from scratch
%timeit m2 = Prophet().fit(df, init=stan_init(m1)) # Adding the last day, warm-starting from m1
###Output
1.44 s ± 121 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
860 ms ± 203 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Saving modelsIt is possible to save fitted Prophet models so that they can be loaded and used later.In R, this is done with `saveRDS` and `readRDS`:
###Code
%%R
saveRDS(m, file="model.RDS") # Save model
m <- readRDS(file="model.RDS") # Load model
###Output
_____no_output_____
###Markdown
In Python, models should not be saved with pickle; the Stan backend attached to the model object will not pickle well, and will produce issues under certain versions of Python. Instead, you should use the built-in serialization functions to serialize the model to json:
###Code
import json
from fbprophet.serialize import model_to_json, model_from_json
with open('serialized_model.json', 'w') as fout:
json.dump(model_to_json(m), fout) # Save model
with open('serialized_model.json', 'r') as fin:
m = model_from_json(json.load(fin)) # Load model
###Output
_____no_output_____
###Markdown
The json file will be portable across systems, and deserialization is backwards compatible with older versions of fbprophet. Flat trend and custom trendsFor time series that exhibit strong seasonality patterns rather than trend changes, it may be useful to force the trend growth rate to be flat. This can be achieved simply by passing `growth=flat` when creating the model:
###Code
%%R
m <- prophet(df, growth='flat')
m = Prophet(growth='flat')
###Output
_____no_output_____
###Markdown
Note that if this is used on a time series that doesn't have a constant trend, any trend will be fit with the noise term and so there will be high predictive uncertainty in the forecast.To use a trend besides these three built-in trend functions (piecewise linear, piecewise logistic growth, and flat), you can download the source code from github, modify the trend function as desired in a local branch, and then install that local version. This PR provides a good illustration of what must be done to implement a custom trend (https://github.com/facebook/prophet/pull/1466/files), as does this one that implements a step function trend (https://github.com/facebook/prophet/pull/1794) and this one for a new trend in R (https://github.com/facebook/prophet/pull/1778). Updating fitted modelsA common setting for forecasting is fitting models that need to be updated as additional data come in. Prophet models can only be fit once, and a new model must be re-fit when new data become available. In most settings, model fitting is fast enough that there isn't any issue with re-fitting from scratch. However, it is possible to speed things up a little by warm-starting the fit from the model parameters of the earlier model. This code example shows how this can be done in Python:
###Code
def stan_init(m):
"""Retrieve parameters from a trained model.
Retrieve parameters from a trained model in the format
used to initialize a new Stan model.
Parameters
----------
m: A trained model of the Prophet class.
Returns
-------
A Dictionary containing retrieved parameters of m.
"""
res = {}
for pname in ['k', 'm', 'sigma_obs']:
res[pname] = m.params[pname][0][0]
for pname in ['delta', 'beta']:
res[pname] = m.params[pname][0]
return res
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
df1 = df.loc[df['ds'] < '2016-01-19', :] # All data except the last day
m1 = Prophet().fit(df1) # A model fit to all data except the last day
%timeit m2 = Prophet().fit(df) # Adding the last day, fitting from scratch
%timeit m2 = Prophet().fit(df, init=stan_init(m1)) # Adding the last day, warm-starting from m1
###Output
1.44 s ± 121 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
860 ms ± 203 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Partial Fitting Prophet also allows the option of partial fitting i.e. using a previous model's fitted parameters to initialize parameters of a new model. This could be useful when the model needs to be re-trained with new data coming in e.g. online learning. This works best when the newly added data follows the same trend as the history that has been previously fitted. An example is shown below in Python using the Peyton Manning dataset introduced in the Quick Start. In this case, a model `m1` is initially fit to `df1` with two years less history. A new model `m2` is then fit to `df` with full history, with parameters initialised to `m1`parameter values. These are passed to the `init` keyword as a dictionary by calling `stan_init`. Depending on the dataset, this can lead to an improvement in training time, as the parameters passed downstream to Stan's optimizing function have a more optimal initialization from the previous model's fit. In this case, we get over 20% improvement in training time compared to fitting model `m` to `df` with default parameter initialization (without partial fitting).
###Code
def stan_init(m):
res = {}
for pname in ['k', 'm', 'sigma_obs']:
res[pname] = m.params[pname][0][0]
for pname in ['delta', 'beta']:
res[pname] = m.params[pname][0]
return res
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
df1 = df.loc[df['ds'] < '2014-01-21', :]
m1 = Prophet()
m1.fit(df1)
%timeit m2 = Prophet().fit(df, init=stan_init(m1))
%timeit m = Prophet().fit(df)
###Output
2.41 s ± 52.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3.06 s ± 35.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
|
book/IntroTestExample.ipynb | ###Markdown
Demonstration of GitHub Classroom Q & A &x1f468;&x200d;&x1f3eb; PurposeGitHub Classroom not only allows us to automatically test programming/code. Other types of questions are supported too.1. &x1f3b2; Multiple-Choice Questions (one or more answers, separated by spaces or commas)2. &x1fa99; True/False Questions (write True/False)3. &x1F5A9; Numerical answers requiring some calculation (write the numerical answer in the requested units; significant figures are not checked)4. &x1f9ee; Numerical answers requiring no calculation (usually just counting) (write the answer as a number)4. &x1f500; Matching Questions (write pairs sequentially).The way we do it, it is important that you *always* put your answer in the indicated position and never use the bold-faced word "Answer" elsewhere. Other than that restriction, the rest of your notebook is yours to play with.----- &x1f4dc; InstructionsThe next few questions demonstrate the workflow. For each question type, you are first given an example; then you need to do another question of the same type on your own.- You can upload files to show your mathematical work; you can also type mathematics using Markdown.- You can use the notebook as a calculator for numerical problems; but you can also just type in your answer computed offline.- You may find these [sheets containing reference data and mathematical formulas/identities](https://github.com/QC-Edu/IntroQM2022/blob/master/documents/ReferenceConstantsConversionsMath.pdf) useful. -----**1. &x1f3b2; Which of the following phenomena are strongly associated with the particle-like nature of light.** A. Blackbody radiation B. Compton Scattering C. Electron Diffraction D. Stern-Gerlach Experiment E. Photoelectric effect**Answer**: A, E, B More information can be found in the [course notes](https://qchem.qc-edu.org/ipynb/History.htmlhow-was-quantum-mechanics-discovered). -----**2. &x1f3b2; Which of the following changes would double the energy of a photon:**A. Doubling its frequency B. Doubling its wavelength C. Doubling its momentum D. Doubling its speed E. Doubling its effective (relativistic) mass F. Doubling its wavenumber.**Answer**: -------**3. &x1fa99; Doubling the wavelength of radiation doubles its frequency. (True/False)****Answer**: False The wavelength, $\lambda$, of light is related to frequency, $\nu$ by the equation $\nu = \frac{c}{\lambda}$. So doubling the wavelength halves the frequency. -----**4. &x1fa99; Doubling the wavelength of radiation halves its speed. (True/False)****Answer**: --------**5. &x1F5A9; A helium-neon laser emits light at 632.8 nm. What is the energy of the photons generated by this laser, in Joules?****Answer**: 3.139e-19
###Code
import numpy
import scipy
from scipy import constants
Energy = constants.h*constants.c/632.8e-9
print("Energy of the photon in Joules", Energy)
###Output
Energy of the photon in Joules 3.139136942397169e-19
|
Week3/Jun26_classification_tsdata.ipynb | ###Markdown
Objective: Classify echo curves based on Stencil type Loading Data
###Code
import os, sys
import numpy as np
import pandas as pd   # pd.DataFrame is used later in this notebook but pandas was not imported explicitly
import matplotlib.pyplot as plt
import seaborn as sns ; sns.set()
from google.colab import drive
drive.mount('/content/drive')
sys.path.append("/content/drive/MyDrive/GSOC-NMR-project/Work/Notebooks")
from auxillary_functions import *
from polynomial_featextract import poly_featextract
# import raw data and params.txt file
datadir_path = "/content/drive/MyDrive/GSOC-NMR-project/Work/Data/2021-06-21_classify_datagen_all_funcs"
raw_data = load_data(path=datadir_path,as_df=False)
print("Shape of Raw Data:",raw_data.shape)
params_data = load_params(path=datadir_path)
# Stencil type : {'0' : 'Gaussian', '1' : 'Power Law', '2' : 'RKKY'}
### Selecting a time-window of 150 steps around the echo-pulse
offset = 150
shifted_data, center = get_window(raw_data,2/3,width=offset)
print("The Echo pulse occurs at timestep:",center)
# Rescaled data
rscl_data = shifted_data / np.max(shifted_data,axis=1,keepdims=True)
###Output
The Echo pulse occurs at timestep: 628
###Markdown
Machine LearningHere we use 5-fold cross-validation for each model
###Code
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PowerTransformer, QuantileTransformer
from sklearn.pipeline import Pipeline
# Linear Models
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV, SGDClassifier
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
# Tree models
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
###Output
_____no_output_____
###Markdown
Training
###Code
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score, cross_validate
###Output
_____no_output_____
###Markdown
Linear Models
###Code
# Setup Pipeline
def linear_pipeline(model):
pipe = Pipeline([
('scaler', QuantileTransformer(output_distribution='normal')),
('model', model)
])
return pipe
def tree_pipeline(model):
return Pipeline([('model', model)])
def get_model_statistics(model,X,y, linear=True):
"""For the given model (linear or tree type) computes the 5-fold CV metrics
and returns a dataframe object with metric statistics"""
modelname = str(model).split('(')[0]
print(f"Running CV for model: {modelname}")
scores = ['accuracy','precision_weighted','recall_weighted','f1_weighted','roc_auc_ovr_weighted']
if linear == True:
results_kf = cross_validate(estimator=linear_pipeline(model),
verbose=True, X=X, y=y,
scoring=scores, cv=5, n_jobs=-1)
results_kf_df = pd.DataFrame(results_kf)
if linear == False:
results_kf = cross_validate(estimator=tree_pipeline(model), verbose=True,
X=X, y=y, scoring=scores, cv=5, n_jobs=-1)
results_kf_df = pd.DataFrame(results_kf)
return {f'{str(modelname)}_Mean': dict(results_kf_df.mean()),
f'{str(modelname)}_Std': dict(results_kf_df.std()),
f'{str(modelname)}_params' : str(model)}
X, y = rscl_data, params_data['stencil_type'].values
# Try out for SVC
svm = LinearSVC(multi_class='ovr',random_state=0)
res = get_model_statistics(svm, X,y, linear=True)
res
# Try out ovevsrest SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
res2 = get_model_statistics(OneVsRestClassifier(SVC()), X,y, linear=True)
res2
Linearmodels = [KNeighborsClassifier(n_neighbors=3, n_jobs=-1),
LogisticRegression(multi_class='ovr',n_jobs=-1, max_iter=800),
LogisticRegression(multi_class='multinomial',n_jobs=-1, max_iter=800),
SGDClassifier(loss='modified_huber',shuffle=True, random_state=0),
GaussianProcessClassifier(random_state=0, multi_class='ovr',n_jobs=-1),
GaussianNB(), BernoulliNB()]
LinearResults = []
for model in Linearmodels:
LinearResults.append(get_model_statistics(model, X,y, linear=True))
print()
# path = "/content/drive/MyDrive/GSOC-NMR-project/Work/Notebooks"
with open("/content/drive/MyDrive/GSOC-NMR-project/Work/Notebooks/linearmodel_classification_results.txt", 'w') as wf:
wf.write(str(LinearResults))
###Output
_____no_output_____
###Markdown
Tree based models
###Code
treemodels = [DecisionTreeClassifier(),
ExtraTreesClassifier(),
#GradientBoostingClassifier(),
RandomForestClassifier()]
# Try GradientBoostingClassifier later
get_model_statistics(RandomForestClassifier(), X,y, linear=False)
TreeResults = []
for model in treemodels:
TreeResults.append(get_model_statistics(model, X,y, linear=False))
print()
with open("/content/drive/MyDrive/GSOC-NMR-project/Work/Notebooks/treemodel_classification_results_ts.txt", 'w') as wf:
wf.write(str(TreeResults))
###Output
_____no_output_____
###Markdown
Classification metricsAll metrics are somehow associated and can be derived from the confusion matrix. 1. Precision2. Recall3. F1-score4. ROC-AUC score (AUROC)5. Cohen kappa score6. Matthew's correlaton coefficient7. Log loss
###Code
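# A hedged sketch of how the metrics listed above could be computed with scikit-learn.
# `clf`, `X_test` and `y_test` are placeholder names (assumptions), not objects defined earlier in this notebook.
from sklearn.metrics import (precision_score, recall_score, f1_score, roc_auc_score,
                             cohen_kappa_score, matthews_corrcoef, log_loss)

def classification_metrics(clf, X_test, y_test):
    """Return the metrics listed above for a fitted multiclass classifier."""
    y_pred = clf.predict(X_test)
    y_proba = clf.predict_proba(X_test)          # needed for ROC-AUC and log loss
    return {
        'precision_weighted': precision_score(y_test, y_pred, average='weighted'),
        'recall_weighted': recall_score(y_test, y_pred, average='weighted'),
        'f1_weighted': f1_score(y_test, y_pred, average='weighted'),
        'roc_auc_ovr_weighted': roc_auc_score(y_test, y_proba, multi_class='ovr', average='weighted'),
        'cohen_kappa': cohen_kappa_score(y_test, y_pred),
        'matthews_corrcoef': matthews_corrcoef(y_test, y_pred),
        'log_loss': log_loss(y_test, y_proba),
    }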
###Output
_____no_output_____ |
ilqr-master_new/examples/pendulum.ipynb | ###Markdown
Inverted Pendulum ProblemThe state and control vectors $\textbf{x}$ and $\textbf{u}$ are defined as follows:$$\begin{equation*}\textbf{x} = \begin{bmatrix} \theta & \dot{\theta} \end{bmatrix}\end{equation*}$$$$\begin{equation*}\textbf{u} = \begin{bmatrix} \tau \end{bmatrix}\end{equation*}$$The goal is to swing the pendulum upright:$$\begin{equation*}\textbf{x}_{goal} = \begin{bmatrix} 0 & 0 \end{bmatrix}\end{equation*}$$In order to deal with potential angle wrap-around issues (i.e. $2\pi = 0$), weaugment the state as follows and use that instead:$$\begin{equation*}\textbf{x}_{augmented} = \begin{bmatrix} \sin\theta & \cos\theta & \dot{\theta} \end{bmatrix}\end{equation*}$$**Note**: The torque is constrained between $-1$ and $1$. This is achieved byinstead fitting for unconstrained actions and then applying it to a squashingfunction $\tanh(\textbf{u})$. This is directly embedded into the dynamics modelin order to be auto-differentiated. This also means that we need to apply thistransformation manually to the output of our iLQR at the end.
###Code
%matplotlib inline
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from ilqr import iLQR
from ilqr.cost import QRCost
from ilqr.dynamics import constrain
from ilqr.examples.pendulum import InvertedPendulumDynamics
def on_iteration(iteration_count, xs, us, J_opt, accepted, converged):
J_hist.append(J_opt)
info = "converged" if converged else ("accepted" if accepted else "failed")
final_state = dynamics.reduce_state(xs[-1])
print("iteration", iteration_count, info, J_opt, final_state)
dt = 0.01
dynamics = InvertedPendulumDynamics(dt)
# Note that the augmented state is not all 0.
x_goal = dynamics.augment_state(np.array([0.0, 0.0]))
Q = np.array([[100.0, 0.0, 0.0], [0.0, 100.0, 0.0], [0.0, 0.0, 1.0]])
Q_terminal = 100 * np.eye(dynamics.state_size)
R = np.array([[1.0]])
cost = QRCost(Q, R, Q_terminal=Q_terminal, x_goal=x_goal)
N = 600
x0 = dynamics.augment_state(np.array([np.pi, 0.0]))
us_init = np.random.uniform(-1, 1, (N, dynamics.action_size))
ilqr = iLQR(dynamics, cost, N)
J_hist = []
xs, us = ilqr.fit(x0, us_init, n_iterations=200, on_iteration=on_iteration)
# Reduce the state to something more reasonable.
xs = dynamics.reduce_state(xs)
# Constrain the actions to see what's actually applied to the system.
us = constrain(us, dynamics.min_bounds, dynamics.max_bounds)
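# Side note (a hedged sketch, not part of the library call above): per the tanh squashing
# described in the markdown, `constrain` maps the unconstrained actions into
# [min_bounds, max_bounds]; roughly equivalent to:
#   diff = (dynamics.max_bounds - dynamics.min_bounds) / 2.0
#   mean = (dynamics.max_bounds + dynamics.min_bounds) / 2.0
#   us_squashed = diff * np.tanh(us_unconstrained) + mean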
t = np.arange(N) * dt
theta = np.unwrap(xs[:, 0]) # Makes for smoother plots.
theta_dot = xs[:, 1]
_ = plt.plot(theta, theta_dot)
_ = plt.xlabel("theta (rad)")
_ = plt.ylabel("theta_dot (rad/s)")
_ = plt.title("Phase Plot")
_ = plt.plot(t, us)
_ = plt.xlabel("time (s)")
_ = plt.ylabel("Force (N)")
_ = plt.title("Action path")
_ = plt.plot(J_hist)
_ = plt.xlabel("Iteration")
_ = plt.ylabel("Total cost")
_ = plt.title("Total cost-to-go")
###Output
_____no_output_____ |
Remove Element/Remove_Element.ipynb | ###Markdown
Remove Elementfrom [Leetcode](https://leetcode.com/problems/remove-element/).
###Code
def RemoveElement(nums, val):
index = 0
nb = 0
for i in range(0,len(nums)):
if nums[i] != val:
nums[index] = nums[i]
index += 1
#print(nums, i, index)
else:
nb += 1
if i == len(nums)-1 and nums[i] == val:
nums[i] = nums[0]
#print(nums)
return len(nums)-nb
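# A hedged alternative sketch (not the author's original solution): the classic two-pointer
# approach to this problem. Elements beyond the returned length may be left in any order,
# so no special handling of the last element is needed.
def RemoveElementTwoPointer(nums, val):
    k = 0                      # next position to write a kept element
    for x in nums:
        if x != val:
            nums[k] = x
            k += 1
    return k                   # number of elements different from val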
RemoveElement([3,2,2,3], 3)
###Output
[2, 2, 2, 2]
|
Part_02.ipynb | ###Markdown
Adding Location data
###Code
!pip install geocoder
import geocoder
print("---------------------Successfully Installed---------------------------")
!pip install bs4
import pandas as pd # for data manipulation
# import numpy as np
import requests # to fetch data from given url
from bs4 import BeautifulSoup # for webscrapping
url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
html_data = requests.get(url).text
soup = BeautifulSoup(html_data, features='html5lib')
table_contents=[]
table=soup.find('table')
for row in table.findAll('td'):
cell = {}
if row.span.text=='Not assigned':
pass
else:
cell['PostalCode'] = row.p.text[:3]
cell['Borough'] = (row.span.text).split('(')[0]
cell['Neighborhood'] = (((((row.span.text).split('(')[1]).strip(')')).replace(' /',',')).replace(')',' ')).strip(' ')
table_contents.append(cell)
# print(table_contents)
df=pd.DataFrame(table_contents)
df['Borough']=df['Borough'].replace({'Downtown TorontoStn A PO Boxes25 The Esplanade':'Downtown Toronto Stn A',
'East TorontoBusiness reply mail Processing Centre969 Eastern':'East Toronto Business',
'EtobicokeNorthwest':'Etobicoke Northwest','East YorkEast Toronto':'East York/East Toronto',
'MississaugaCanada Post Gateway Processing Centre':'Mississauga'})
df.head()
df.shape
location_data = pd.read_csv('Geospatial_Coordinates.csv')
location_data.head()
location_data.shape
df_merge = df.join(location_data.set_index('Postal Code'), on='PostalCode', how='inner')
df_merge.head()
df_merge.shape
###Output
_____no_output_____
###Markdown
Commonly used
###Code
L = ['start']
L.append(1)
L[len(L):] = [2]
L.extend([3,4])
L[len(L):] = [5,6]
L.remove('start') # specific value
L.pop(5) # given index !!
L.copy() == L[:]
L.copy().clear() == None
L.count(3)
L.sort(reverse=True)
L.reverse()
L
# About index, there's an offical explanation
# "Passing the extra arguments ('start','end')
# is roughly equivalent to using s[i:j].index(x)"
try:
# return the index of 3
L.index(3)
# 1,5 won't affect the index
L.index(3,1,5)
L.index(3,0)
# but this case does
L.index(3,3,5)
except ValueError as e:
print(e)
###Output
_____no_output_____
###Markdown
stacks and queues
###Code
S = [1,2,3]
# Last In, First Out
S.append('the-end')
S.pop(-1)
S
# First In, First Out
from collections import deque
Q = deque([1,2,3])
# the deque is much faster than a list for this -- O(1) vs O(n) --
Q.appendleft("Begin")
Q.append("End")
Q.popleft()
Q.pop()
Q.extendleft(">>")
Q.extend([4,5])
a1 = deque([">",">",1,2,3,4,5])
a2 = deque([">",">",1,2,3,4,5])
a3 = deque([">",">",1,2,3,4,5])
a1.rotate(0) # stay the same as before
a1
a2.rotate(3) # put 3 rightmost items to the left
a2
a3.rotate(-2) # put 2 leftmost items to the right
a3
list(reversed(a3))
deque(reversed(a3))
###Output
_____no_output_____
###Markdown
list comprehensions
###Code
# same thing though
[ (x,y) for x in [1,2,3] for y in [1,2,3] if x!=y ]
for x in [1,2,3]:
for y in [1,2,3]:
if x != y:
print((x,y),end=' ')
vec = [-4,-2,0,2,4]
[ x for x in vec ]
[ x for x in vec if x >=0 ]
[ abs(x) for x in vec]
[ (x,x**3) for x in vec ]
from math import pi
[ str(round(pi,i)) for i in range(1,9) ]
list(zip([1,2,3],[4,5]))
list(zip([1,2,3],[4,5,6]))
idx = ['001','002','003']
name = ['Alex','John','Bob']
list(zip(idx,name))
dict(zip(idx,name))
###Output
_____no_output_____
###Markdown
set
###Code
set('abcd') - set('cdef') # ab # A - B
set('abcd') | set('cdef') # abcdef # A + B
set('abcd') & set('cdef') # cd # both have
set('abcd') ^ set('cdef') # abef # All - both have
{ x for x in 'abcdef' if x not in 'g'}
###Output
_____no_output_____
###Markdown
dict
###Code
profile = {'name':'Ariel','age':24}
# these methods are just a 'View' ( of the dict )
profile.keys()
profile.values()
profile.items()
'name' in profile
'age' not in profile
{ x:x**3 for x in (11,22,33) }
###Output
_____no_output_____
###Markdown
looping techniques
###Code
info = {'lang':'node.js','db':'GraphQL'}
# items for dict
for k,v in info.items():
k,v
# enumerate for auto-index
for i,v in enumerate(info.values()):
i,v
dict(
zip(info.keys(),info.values())
)
###Output
_____no_output_____
###Markdown
more on conditions
###Code
a,b,c = 3,4,4
a < b == c
a < b and b == c
v1,v2,v3 = '',0,'666'
non_null = v1 or v2 or v3
non_null
(1, 2, 3) < (1, 2, 4)
[1, 2, 3] < [1, 2, 4]
'ABC' < 'C' < 'Pascal' < 'Python'
(1, 2, 3, 4) < (1, 2, 4)
(1, 2) < (1, 2, -1)
(1, 2, 3) == (1.0, 2.0, 3.0)
(1, 2, ('aa', 'ab')) < (1, 2, ('abc', 'a'))
###Output
_____no_output_____ |
SAS_R_Python.ipynb | ###Markdown
###Code
! pip install saspy
! pip install pyreadstat
! which java
##First Import saspy using this then Enter the IOM user which is You can use either this user ID or your email address
## ([email protected] Sas_R_Python_3),
## along with your SAS Profile password, to sign in to SAS OnDemand for Academics:
## https://welcome.oda.sas.com and user Login and then password is
import saspy
sas = saspy.SASsession(iomhost=['odaws01-usw2.oda.sas.com', 'odaws02-usw2.oda.sas.com',
'odaws03-usw2.oda.sas.com', 'odaws04-usw2.oda.sas.com'],
java='/usr/bin/java', iomport=8591)
###Output
Using SAS Config named: default
Please enter the IOM user id: [email protected]
Please enter the password for IOM user : ··········
SAS Connection established. Subprocess id is 4797
###Markdown
> Python packages used: pandas (an open-source, BSD-licensed library providing easy-to-use data structures and data analysis tools for Python programming), and similarly packages like saspy for the SAS interface and matplotlib.pyplot for graphs
###Code
import saspy
import pandas as pd
pd.set_option('display.max_columns' , None)
import pyreadstat
from IPython.display import HTML
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
saspy.__version__
#sas1 = saspy.SASsession()
## Mounting data from Google drive
# Load the Drive helper and mount
from google.colab import drive
# This will prompt for authorization.
drive.mount('/content/drive')
# After executing the cell above, Drive
# files will be present in "/content/drive/My Drive".
!ls "/content/drive/My Drive"
!ls "/content/drive/My Drive/datasets"
!ls "/content/drive/My Drive/datasets/adsl.xpt"
!ls "/content/drive/My Drive/datasets/ts.xpt"
!ls "/content/drive/My Drive/datasets/adae.xpt"
!ls "/content/drive/My Drive/datasets/adsl.sas7bdat"
!ls"/content/drive/My Drive/datasets/ADaM_spec.xlsx"
###Output
'Colab Notebooks' 'Feb R-Ladies Seattle Q&A.gdoc'
Computer_desktop.zip 'Getting started.pdf'
DataManipulationinR_beigners.pdf 'Help Center.gsite'
datasets my_documents_zip.rar
adae.xpt new_head_ts_xpt_long.xpt ts.xpt
ADaM_spec.xlsx new_head_ts_xpt.xpt ts_xpt_r.xpt
adsl.sas7bdat remain ts_xpt_using_py_short.xpt
adsl.xpt SDTM_spec.xlsx ts_xpt_write_py_long.xpt
dm.xpt ts_sasdt.sas7bdat ts_xpt_write_py.xpt
'/content/drive/My Drive/datasets/adsl.xpt'
'/content/drive/My Drive/datasets/ts.xpt'
'/content/drive/My Drive/datasets/adae.xpt'
'/content/drive/My Drive/datasets/adsl.sas7bdat'
/bin/bash: ls/content/drive/My Drive/datasets/ADaM_spec.xlsx: No such file or directory
###Markdown
Way to load R into ipython
###Code
## reading the .xpt files using the R language
# activate R magic
%load_ext rpy2.ipython
##The rpy2.ipython extension is already loaded. To reload it, use:
#%reload_ext rpy2.ipython
%%R
install.packages("devtools")
install.packages('dplyr')
install.packages('haven')
install.packages('reticulate')
install.packages('admiral') #Clinical datasets#
install.packages("rio")
install.packages("SASxport")
%%R
library(devtools)
library(dplyr)
library(admiral)
library(rio)
library(reticulate)
library(dplyr)
library(ggplot2)
library(admiral)
library(SASxport)
#devtools::install_github("https://github.com/atorus-research/xportr.git")
#library(xportr) # https://github.com/atorus-research/xportr/blob/master/inst/specs/ADaM_spec.xlsx
#@title Installing the packages if needed
%%R
if (file.exists("/usr/local/lib/python3.7/dist-packages/google/colab/_ipython.py")) {
install.packages("R.utils")
install.packages("rio")
library("R.utils")
library("httr")
my_check <- function() {return(TRUE)}
reassignInPackage("is_interactive", pkgName = "httr", my_check)
options(rlang_interactive=TRUE)
}
# A bit of imports
# Load in the r magic
%reload_ext rpy2.ipython
%config IPCompleter.greedy=True
%config InlineBackend.figure_format = 'retina'
###Output
_____no_output_____
###Markdown
----- Let's make a simple SAS dataset -----
###Code
## First SAS program using sas.submitLST
sas.submitLST('data a ; x = 1 ; run;proc print data = a ; run;' )
sas.submitLST("data class ; set sashelp.class (obs = 5); run; proc print data=class; run;", method='listorlog')
print(sas.lastlog())
###Output
_____no_output_____
###Markdown
To see how a SASdata object works, use the Python convention of putting '?' in front of the function to see the docstring
###Code
?sas.sasdata
###Output
_____no_output_____
###Markdown
To see the source code, use the Python convention of putting '??' in front of the function
###Code
??sas.sasdata
?sas.submitLST("data class ; set sashelp.class (obs = 5); run; proc print data=class; run;", method='listorlog')
###Output
_____no_output_____
###Markdown
--- Reading the external data using Python and creating an input dataset for the SAS session to use ---
###Code
#import pandas as pd
ad_sl = pd.read_csv("https://raw.githubusercontent.com/sas2r/clinical_fd/master/data-raw/inst/extdata/adsl.csv")
#ad_sl.head(ad_sl)
sasdf = sas.df2sd(ad_sl , 'sasdf')
###Output
_____no_output_____
###Markdown
Print the first observations of the SAS dataset (obs = 10, where age > 40)
###Code
sas.submitLST('proc print data = work.sasdf (obs = 10 keep = usubjid trt01p age );where age > 40 ; run;', method='listorlog')
###Output
_____no_output_____
###Markdown
Print the first and last few observations of the SAS dataset
###Code
age_40_above = sas.sasdata2dataframe (
table = 'sasdf' ,
libref= 'work' ,
dsopts = {
'where' : ' age > 40 ' ,
'obs' : 10,
'keep' : ['usubjid' , 'trt01pn', 'age']
},
)
print(type (age_40_above))
print()
print(age_40_above.head ())
print(age_40_above.tail ())
###Output
<class 'pandas.core.frame.DataFrame'>
usubjid trt01pn age
0 01-701-1015 0.0 63.0
1 01-701-1023 0.0 64.0
2 01-701-1028 81.0 71.0
3 01-701-1033 54.0 74.0
4 01-701-1034 81.0 77.0
usubjid trt01pn age
5 01-701-1047 0.0 85.0
6 01-701-1097 54.0 68.0
7 01-701-1111 54.0 81.0
8 01-701-1115 54.0 84.0
9 01-701-1118 0.0 52.0
###Markdown
Create simple statistics of age by treatment arm using SAS. `listorlog` is the default as of V3.6.5: it returns the LST unless it is empty, in which case it returns the LOG instead
###Code
sas.submitLST('proc means data = work.sasdf ; class trt01p ; Var age ; run;', method='listorlog')
# Create a simple statistics of age by treatment arm using Python
result = ad_sl.groupby('trt01p').agg({'age':[ 'count', 'mean','std', 'min','median', 'max' ]})
print(result)
###Output
age
count mean std min median max
trt01p
Placebo 86 75.209302 8.590167 52 76.0 89
Xanomeline High Dose 84 74.380952 7.886094 56 76.0 88
Xanomeline Low Dose 84 75.666667 8.286051 51 77.5 88
###Markdown
> Getting the contents of the dataset; note that *sas.sasdata* creates a SASdata object
###Code
age_40_above = sas.sasdata (
table = 'sasdf' ,
libref= 'work' ,
dsopts = {
'where' : ' age > 40 ' ,
'obs' : 10,
'keep' : ['usubjid' , 'trt01pn', 'age']
},
)
print(type(age_40_above))
print ()
age_40_above.columnInfo()
print ()
age_40_above.describe()
###Output
<class 'saspy.sasdata.SASdata'>
###Markdown
> Display Generated SAS Code
###Code
sas.teach_me_SAS (True)
age_40_above.describe()
sas.teach_me_SAS (False)
###Output
proc means data=work.'sasdf'n (where=( age > 40 ) obs=10 keep=usubjid trt01pn age ) stackodsoutput n nmiss median mean std min p25 p50 p75 max;run;
###Markdown
age_40_above.describe() actually generates this SAS codeproc means data=work.'sasdf'n (where=( age > 40 ) obs=10 keep=usubjid trt01pn age ) stackodsoutput n nmiss median mean std min p25 p50 p75 max;run; Reading the external data using **R rio package**
###Code
%%R
library(dplyr)
library(rio)
adsl_r <- rio::import ("https://raw.githubusercontent.com/sas2r/clinical_fd/master/data-raw/inst/extdata/adsl.csv")
###Output
_____no_output_____
###Markdown
Create simple statistics of age by treatment arm using dplyr
###Code
# R way of creating the Basic statistics
%%R
summ_adsl_r <- adsl_r %>%
group_by(trt01p) %>%
summarise (ave_age = mean(age , na.rm = TRUE),
med_age = median (age , na.rm = TRUE),
min_age = min(age , na.rm = TRUE),
max_age = max(age , na.rm = TRUE)
)
summ_adsl_r
###Output
# A tibble: 3 × 5
trt01p ave_age med_age min_age max_age
<chr> <dbl> <dbl> <int> <int>
1 Placebo 75.2 76 52 89
2 Xanomeline High Dose 74.4 76 56 88
3 Xanomeline Low Dose 75.7 77.5 51 88
###Markdown
The admiral package is an ADaM in R Asset Library; it has a couple of ADaM-based datasets that can be used just by calling admiral::
###Code
%%R
#VISIT1DT IS NOT AVAILABLE IN ADMIRAL DATASET
names(admiral::adsl)
adsl_r <- admiral::adsl %>%
select(USUBJID,AGE,DTHFL,TRT01P,TRTSDTM , SAFFL)
head(adsl_r)
###Output
# A tibble: 6 × 6
USUBJID AGE DTHFL TRT01P TRTSDTM SAFFL
<chr> <dbl> <chr> <chr> <dttm> <chr>
1 01-701-1015 63 <NA> Pbo 2014-01-02 00:00:00 Y
2 01-701-1023 64 <NA> Pbo 2012-08-05 00:00:00 Y
3 01-701-1028 71 <NA> Xan_Hi 2013-07-19 00:00:00 Y
4 01-701-1033 74 <NA> Xan_Lo 2014-03-18 00:00:00 Y
5 01-701-1034 77 <NA> Xan_Hi 2014-07-01 00:00:00 Y
6 01-701-1047 85 <NA> Pbo 2013-02-12 00:00:00 Y
###Markdown
> Read SAS .xpt files using SAS PROC COPY. This does not work, as it tries to create a SAS dataset from a file outside the SAS environment. Work in progress for this chunk
###Code
sas.submitLST("Libname xptfile xport '/content/drive/My Drive/datasets/adsl.xpt' ;"
"proc copy inlib = xptfile outlib = work ; run;"
,
method='listorlog')
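# One possible workaround, as a hedged sketch only: the .xpt file lives on the local (Colab)
# filesystem, not on the SAS server, so it would first have to be copied to the server.
# This assumes your saspy version provides SASsession.upload(); the remote path is an
# assumption and should be adjusted to your own ODA home folder.
# sas.upload('/content/drive/My Drive/datasets/adsl.xpt', '~/adsl.xpt')
# sas.submitLST("Libname xptfile xport '~/adsl.xpt'; proc copy inlib=xptfile outlib=work; run;",
#               method='listorlog')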
###Output
_____no_output_____
###Markdown
>> Read SAS .xpt files using python
###Code
#import pandas as pd
read_adsl_xpt_using_py = pd.read_sas('/content/drive/My Drive/datasets/adsl.xpt')
read_adsl_xpt_using_py.head()
# Summary statistics of all the columns using describe
#read_adsl_xpt_using_py.describe(include = ['float' , 'category']) # Will give general statistics of the var
###Output
_____no_output_____
###Markdown
>> Read SAS .xpt files using R (the rio package is a new universal data reader package)
###Code
%%R
adsl_xpt_r <- rio::import("/content/drive/My Drive/datasets/adsl.xpt")
head(tibble(adsl_xpt_r))
###Output
# A tibble: 6 × 48
STUDYID USUBJID SUBJID SITEID SITEGR1 ARM TRT01P TRT01PN TRT01A TRT01AN
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <dbl> <chr> <dbl>
1 CDISCPILOT01 01-701… 1015 701 701 Plac… Place… 0 Place… 0
2 CDISCPILOT01 01-701… 1023 701 701 Plac… Place… 0 Place… 0
3 CDISCPILOT01 01-701… 1028 701 701 Xano… Xanom… 81 Xanom… 81
4 CDISCPILOT01 01-701… 1033 701 701 Xano… Xanom… 54 Xanom… 54
5 CDISCPILOT01 01-701… 1034 701 701 Xano… Xanom… 81 Xanom… 81
6 CDISCPILOT01 01-701… 1047 701 701 Plac… Place… 0 Place… 0
# … with 38 more variables: TRTSDT <date>, TRTEDT <date>, TRTDUR <dbl>,
# AVGDD <dbl>, CUMDOSE <dbl>, AGE <dbl>, AGEGR1 <chr>, AGEGR1N <dbl>,
# AGEU <chr>, RACE <chr>, RACEN <dbl>, SEX <chr>, ETHNIC <chr>, SAFFL <chr>,
# ITTFL <chr>, EFFFL <chr>, COMP8FL <chr>, COMP16FL <chr>, COMP24FL <chr>,
# DISCONFL <chr>, DSRAEFL <chr>, DTHFL <chr>, BMIBL <dbl>, BMIBLGR1 <chr>,
# HEIGHTBL <dbl>, WEIGHTBL <dbl>, EDUCLVL <dbl>, DISONSDT <date>,
# DURDIS <dbl>, DURDSGR1 <chr>, VISIT1DT <date>, RFSTDTC <chr>, …
###Markdown
> Way to read SAS dataset into R
###Code
%%R
adsl_atorus <- haven::read_sas("/content/drive/My Drive/datasets/adsl.sas7bdat")
head(adsl_atorus)
###Output
# A tibble: 6 × 23
STUDYID SITEID USUBJID SUBJID COUNTRY ACOUNTRY AGE AGEU SEX RACE RACEN
<chr> <dbl> <chr> <dbl> <chr> <chr> <dbl> <chr> <chr> <chr> <dbl>
1 mid987650 214356 987650… 1 USA UNITED … 35 YEARS M ASIAN 2
2 mid987650 214356 987650… 2 USA UNITED … 62 YEARS M WHITE 1
3 mid987650 214356 987650… 3 USA UNITED … 27 YEARS F ASIAN 2
4 mid987650 214356 987650… 4 USA UNITED … 42 YEARS M ASIAN 2
5 mid987650 214356 987650… 5 USA UNITED … 59 YEARS F WHITE 1
6 mid987650 214356 987650… 6 USA UNITED … 28 YEARS M WHITE 1
# … with 12 more variables: WEIGHTBL <dbl>, TRT01A <chr>, TRT01AN <dbl>,
# SAFFL <chr>, SCRDT <date>, RANDDT <date>, TRTSDT <date>, TRTSTM <time>,
# TRTEDT <date>, TRTETM <time>, BRTHDT <date>, BRTHDTC <dbl>
###Markdown
You can read the .xlsx spec file and normalize it, e.g. `var_spec %>% dplyr::rename(type = "Data Type") %>% rlang::set_names(tolower)` and `data_spec %>% rlang::set_names(tolower) %>% dplyr::rename(label = "description")`. > Creating R data frame
###Code
%%R
ts_xpt_r <- rio::import("/content/drive/My Drive/datasets/ts.xpt")
head_ts_xpt <- head(ts_xpt_r)
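# Hedged sketch of the spec-reading step mentioned in the markdown above; the workbook path
# is an assumption based on the files listed earlier, and the column names depend on the spec layout.
# spec <- rio::import("/content/drive/My Drive/datasets/ADaM_spec.xlsx")
# var_spec <- spec %>% dplyr::rename(type = "Data Type") %>% rlang::set_names(tolower)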
###Output
_____no_output_____
###Markdown
> A way to see how a data frame was created: `dput`
###Code
%%R
#dput(ts_xpt_r )
#dput(head_ts_xpt ,control = c('keepNA' , 'keepInteger' , 'niceNames' , 'showAttributes'))
## Way to Drop a variable
new_head_ts_xpt <- head_ts_xpt %>%
select(-STUDYID ) %>%
mutate(LONG_DOM_AIN_NAME = DOMAIN) %>%
relocate(LONG_DOM_AIN_NAME ,.before = DOMAIN)
new_head_ts_xpt
###Output
LONG_DOM_AIN_NAME DOMAIN TSSEQ TSPARMCD TSPARM
1 TS TS 1 ADDON Added on to Existing Treatments
2 TS TS 1 AGEMAX Planned Maximum Age of Subjects
3 TS TS 1 AGEMIN Planned Minimum Age of Subjects
4 TS TS 1 AGESPAN Age Group
5 TS TS 2 AGESPAN Age Group
6 TS TS 1 TBLIND Trial Blinding Schema
TSVAL
1 Y
2 No maximum
3 50 years
4 ADULT (18-65)
5 ELDERLY (> 65)
6 DOUBLE BLIND
###Markdown
> Write it out to a .xpt file using SASxport; note the new variable LONG_DOM_AIN_NAME (longer than 8 characters) gets truncated, since SASxport defaults to V5
###Code
%%R
write.xport (new_head_ts_xpt , file='/content/drive/MyDrive/datasets/ts_xpt_r.xpt')
#Check if it is written right#
ts_xpt_r <- rio::import("/content/drive/My Drive/datasets/ts_xpt_r.xpt")
ts_xpt_r
###Output
LONG_DOM DOMAIN TSSEQ TSPARMCD TSPARM TSVAL
1 TS TS 1 ADDON Added on to Existing Treatments Y
2 TS TS 1 AGEMAX Planned Maximum Age of Subjects No maximum
3 TS TS 1 AGEMIN Planned Minimum Age of Subjects 50 years
4 TS TS 1 AGESPAN Age Group ADULT (18-65)
5 TS TS 2 AGESPAN Age Group ELDERLY (> 65)
6 TS TS 1 TBLIND Trial Blinding Schema DOUBLE BLIND
###Markdown
> Write out to a .xpt file using the haven package; the new variable LONG_DOM_AIN_NAME (longer than 8 characters) is kept intact, since the haven package can be used to write V8
###Code
%%R
tmp <- tempfile (fileext = ".xpt")
new_head_ts_xpt_long <- new_head_ts_xpt
haven::write_xpt (new_head_ts_xpt_long , "/content/drive/My Drive/datasets/new_head_ts_xpt_long.xpt",
version = 8 ,
name = NULL )
%%R
tmp <- tempfile (fileext = ".xpt")
new_head_ts_xpt_long <- new_head_ts_xpt
haven::write_xpt (new_head_ts_xpt_long , tmp, version = 8 , name = NULL )
haven::read_xpt(tmp)
%%R
ts_xpt_r <- rio::import("/content/drive/My Drive/datasets/new_head_ts_xpt_long.xpt")
ts_xpt_r
###Output
LONG_DOM_AIN_NAME DOMAIN TSSEQ TSPARMCD TSPARM
1 TS TS 1 ADDON Added on to Existing Treatments
2 TS TS 1 AGEMAX Planned Maximum Age of Subjects
3 TS TS 1 AGEMIN Planned Minimum Age of Subjects
4 TS TS 1 AGESPAN Age Group
5 TS TS 2 AGESPAN Age Group
6 TS TS 1 TBLIND Trial Blinding Schema
TSVAL
1 Y
2 No maximum
3 50 years
4 ADULT (18-65)
5 ELDERLY (> 65)
6 DOUBLE BLIND
###Markdown
> Writing a .xpt file using pyreadstat's write_xport() function (pandas is only used to build the DataFrame)
###Code
ts_xpt_using_py = pd.read_sas('/content/drive/My Drive/datasets/ts.xpt')
pd_data = pd.DataFrame(ts_xpt_using_py.head())
##Rename the columns
#pd_data.assign(LONG_DOM_AIN_NAME = pd_data['DOMAIN'] )
pd_data['LONG_DOM_AIN_NAME'] = pd_data['DOMAIN']
print(pd_data )
pd.set_option ("display.max_columns" , None)
pd.set_option ("display.max_rows" , None)
### Writing it out as .XPT file
path = "/content/drive/My Drive/datasets/ts_xpt_write_py_long.xpt"
pyreadstat.write_xport(pd_data , path)
ts_xpt_using_py_short = ts_xpt_using_py.head()
path = "/content/drive/My Drive/datasets/ts_xpt_using_py_short.xpt"
pyreadstat.write_xport(ts_xpt_using_py_short , path)
###Output
_____no_output_____
###Markdown
> Reading back the *.xpt* generated by *write_xport*: as per the SAS user community, pandas read_sas is not able to read back a dataset created this way by Python
###Code
#read_ts_xpt_using_py_long = pd.read_sas('/content/drive/My Drive/datasets/ts_xpt_write_py_long.xpt')
#print(read_ts_xpt_using_py_long)
#read_ts_xpt_using_py_short = pd.read_sas('/content/drive/My Drive/datasets/ts_xpt_using_py_short.xpt')
#print(read_ts_xpt_using_py_short)
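# A hedged alternative sketch: pyreadstat itself can read a transport file back,
# even though pandas.read_sas struggles with the file written above.
# df_back, meta = pyreadstat.read_xport('/content/drive/My Drive/datasets/ts_xpt_write_py_long.xpt')
# print(df_back)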
###Output
_____no_output_____
###Markdown
> Creating the Graph
###Code
#import pandas as pd
ad_sl = pd.read_csv("https://raw.githubusercontent.com/sas2r/clinical_fd/master/data-raw/inst/extdata/adsl.csv")
#ad_sl.head(ad_sl)
sasdf = sas.df2sd(ad_sl , 'sasdf')
#ad_sl.info()
age = sas.sasdata (
table = 'sasdf' ,
libref= 'work' ,
dsopts = {
'keep' : ['usubjid' , 'trt01pn','trt01p', 'age' , 'sex','race','heightbl']
},
)
###Output
_____no_output_____
###Markdown
> SASpy has an easy way of generating graphs (via simple wrappers), for example
###Code
age.bar('trt01p')
###Output
_____no_output_____
###Markdown
> The SAS equivalent of age.bar() is
###Code
sas.submitLST('proc sgplot data=work.sasdf (keep=usubjid trt01pn trt01p age sex race heightbl ); vbar trt01p ;run;title;', method='listorlog')
###Output
_____no_output_____
###Markdown
> For cross-checking, the SAS equivalent code of age.bar('trt01p')
###Code
sas.teach_me_SAS (True)
age.bar('trt01p')
sas.teach_me_SAS (False)
###Output
proc sgplot data=work.'sasdf'n (keep=usubjid trt01pn trt01p age sex race heightbl );
vbar 'trt01p'n;
run;
title;
###Markdown
> A similar graph using the Python matplotlib library is
###Code
import matplotlib.pyplot as plt
import numpy as np
adsl_summ = ad_sl.groupby('trt01p').size()
adsl_summ
print(adsl_summ)
print()
ax = adsl_summ.plot.bar( x = 'trt01p' , rot = 0 )
%%R
library(ggplot2)
library(dplyr)
library(rio)
adsl_r <- rio::import ("https://raw.githubusercontent.com/sas2r/clinical_fd/master/data-raw/inst/extdata/adsl.csv")
summ_adsl_r <- adsl_r %>%
group_by(trt01a) %>%
summarise( Big_N = n( ))
ggplot(summ_adsl_r, aes(x = trt01a , y = Big_N , fill = Big_N )) +
geom_bar( stat = "identity" )
###Output
_____no_output_____ |
_notebooks/2020-02-28-python_introduction_for_chemists.ipynb | ###Markdown
A Short Interactive Introduction to Python> This is a short introduction to Python programming for Chemists and Engineers.- toc: True- metadata_key1: Python- metadata_key2: Programming- metadata_key2: Chemistry This Python introductions aims at readers such as chemists and engineers with some programming background. Its intention is to give you a rough overview on the key features of the Python programming language. If you are completely new to programming, you may want to check this nice introduction first https://wiki.python.org/moin/BeginnersGuide/NonProgrammers.It is also hosted on the [google colab platform](https://colab.research.google.com/drive/1_qg3wu0dtF4d5aW-R4hyMFqyY6zgm0ra) and only a web browser is needed to start it.We will discuss a bit the differences to other languages, but we will also explain or provide links for most of the concepts mentioned here, so do not worry if some of those are not yet familiar to you. Some key features of Python* Python is a scripting language. (There is no compiled binary code per default)* It is procedural (i.e. you can program with classic functions, that take parameters)* it is object-oriented (you can use more complex structures, that can hold several variables) * it has some functional programming features as well (e.g. lambda functions, map, reduce & filter, for more information see https://en.wikipedia.org/wiki/Functional_programming) What makes Python special...* Indendation is used to organize code, i.e. to define blocks of code space or tabs are used (no curly brackets, semi-colons etc, no ";" or "{}")* variables can hold any type* all variables are objects* great support for interactive use (like in this jupyter notebook here!)* Many specialized libraries, in particular scientific [libraries](Python-Libraries) are available, where the performance intensive part is done by C/C++/Fortran and the control/IO is done via Python *Assignment* of a variable
###Code
a = 1.5 # click the cell and press SHIFT+RETURN to run the code. This line is a comment
a
###Output
_____no_output_____
###Markdown
All variables are objects, use the type() function to get the type. In this case: `a` is of the type "float" (short for floating point number). Comments are declared with the hash sign `#` in Python.
###Code
type(a)
###Output
_____no_output_____
###Markdown
Variables can be easily re-defined, here variable `a` becomes an integer:
###Code
a = 1
type(a)
###Output
_____no_output_____
###Markdown
The type of a variable is determined by assignment!There are no context prefixes like @, %, $ like in e.g. Perl___ A simple Python programBelow is a very simple Python program printing something, and demonstrating some basic features. A lot of useful functionality in Python is kept in external libraries that have to be imported before use with the `import` statement. Also, functions have of course to be defined prior to use.By the way, note how indentation is used to define code blocks. In Python you can use either spaces or tabs to indent your code. In Python3 mixing of tabs and spaces is not allowed, and the use of 4 consecutive spaces is recommended.
###Code
# import a library
import os
# definition of a python function
def hello():
# get current path as a string and assign it to variable
current_dir = os.getcwd()
# concatentate strings and print them
print('Hello world from '+current_dir+' !')
# do not forget to call the function
hello()
###Output
Hello world from /content !
###Markdown
If we were not in an interactive session, we would save this chunk of code to a file, e.g. `hello_word.py` and run that by invoking:`python hello_word.py` on the command line.___ `IF` statement`if` statements are pretty straightforward, there is no "THEN" and indentation helps to group the `if` and the `else` (or `elif` means `else if`) block.
###Code
a = 3.0
if a > 2:
print(a)
if not (isinstance(a,int)):
print("a is not an integer")
else:
print("Hmm, we should never be here...")
###Output
_____no_output_____
###Markdown
Since Python 3, the `print` statement is a proper function, in Python 2 one could do something like (i.e. without parentheses): `print "a is not an integer"`which is no longer possible. It is strongly recommended to use Python 3 as Python 2 is no longer maintained and many external libraries have meanwhile switched completely to Python 3.Logical negation: `if not`Checking of variable type: `isinstance(variable,type)`
###Code
isinstance(a,float)
###Output
_____no_output_____
###Markdown
___ `FOR` LOOPS
###Code
for i in range(10):
print(i)
###Output
0
1
2
3
4
5
6
7
8
9
###Markdown
The Python `range` function is somewhat special and for loops look somewhat different than in other languages:* **Python**: `for i in range(0, 10, 1):` (the arguments are `range(start, stop, step)`; note that `range` does not accept keyword arguments)* **JAVA/C**: `for (int i=0; i<10; i++) {}`* **Fortran**: `do i=0,9, end do`Go on and manipulate the code above to check the effects.Looping over lists (which will be explained below) is quite simple as well:
###Code
drugs = ['ibuprofen','paracetamol','aspirin']
for d in drugs:
print(d)
###Output
ibuprofen
paracetamol
aspirin
###Markdown
Some standard keywords are used to control behaviour inside the loop:* **continue**: continue to next iteration of the loop* **break**: leave/exit the loop
###Code
for d in drugs:
if d=='paracetamol': continue
print(d)
###Output
ibuprofen
aspirin
###Markdown
___ Python Data Types| type | example | comment || --- | --- | --- || int | i=1 | Integer: 0,1,2,3 || boolean | b=False | bolean value (True/False) || float | x=0.1 | floating point number || str | s = "how are you?" | String || list | l = [1,2,2,50,7] | Changeable list with order || set | set([1,2,2,50,7])==set([1,2,50,7]) | Changeable set (list of unique elements) without order || tuple | t = (99.9,2,3,"aspirin") | "Tuple": Unchangeable ordered list, used e.g. for returning function values || dict | d = {“hans“: 123, “peter“:344, “dirk“: 623} | Dictionary to save key/value pairs |The table shows the most important data types in Python. There are more specialized types e.g. https://docs.python.org/3/library/collections.html.Some useful data types are provided by external libraries, such as `pandas` (Python for data analysis: https://pandas.pydata.org/)In general, the object oriented features in Python can be used to declare your own and special types/classes.___ ListsIn Python a list is a collection of elements that are ordered and changeable.
###Code
integer_list = [10,20,0,5]
mixed_list = ["hello",1,5.0,[2,3]]
###Output
_____no_output_____
###Markdown
List elements can themselves be a list and can contain different types.Accessing elements in the list is done via their indices (position in the list):
###Code
print(integer_list[2])
print(integer_list[1:3])
mixed_list[-1]
###Output
0
[20, 0]
###Markdown
Note, how indexing starts at index 0, which is the first element in the list. The `[1:3]` operation in the example is called slicing and is useful to get subsets of lists or arrays.The last element of an array can be accessed with the index `-1`.Manipulating and working with lists:
###Code
integer_list = [5,10,20,0,5]
integer_list.append(42)
print(len(integer_list))
integer_list
###Output
6
###Markdown
Lists can be sorted:
###Code
integer_list.sort(reverse=True)
integer_list
###Output
_____no_output_____
###Markdown
Check if some element is contained within the list or find its index:
###Code
print(10 in integer_list)
print(integer_list.index(10))
###Output
True
2
###Markdown
List can be turned into sets (only unique elements):> Indented block
###Code
set(integer_list)
###Output
_____no_output_____
###Markdown
DictionariesDictionaries are very handy data types and consist of key-value pairs. They may also be called maps or associative arrays.
###Code
en_de = {'red':'rot', 'blue':'blau'}
en_de['black'] = 'schwarz'
en_de['cyan'] = None
en_de
###Output
_____no_output_____
###Markdown
Dictionaries are a collection of key, e.g. 'red', and value, e.g. 'rot', pairs, where both can of course be of different types. It is important to note that they are not accessed by position (and only since Python 3.7 do they reliably preserve insertion order). For an explicitly ordered dictionary use the special type `OrderedDict` (https://docs.python.org/3/library/collections.html#collections.OrderedDict).There are several ways of accessing dictionaries:
###Code
en_de['red']
###Output
_____no_output_____
###Markdown
One can easily iterate over a dictionary:
###Code
for key in en_de:
print(key, en_de[key])
###Output
red rot
blue blau
black schwarz
cyan None
###Markdown
And they are used a lot in Python, e.g. the current environment variables are saved in a dictionary (`os.environ`):
###Code
import os
os.environ['PATH']
###Output
_____no_output_____
###Markdown
Python functionsPython functions are declared with the `def` keyword followed by a function name and a parameter list. Within the parameter list default parameters can be specified:
###Code
def test_function(name='paracetamol'):
print('There is no '+name+' left.')
test_function()
test_function('aspirin')
###Output
There is no paracetamol left.
There is no aspirin left.
###Markdown
Return results of function with the `return` statement, in this case we generate a string with the function and print it later, i.e. outside the function:
###Code
def test_function(name='paracetamol'):
answer = 'There is no '+name+' left.'
return answer
print(test_function())
print(test_function('aspirin'))
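# A small aside (a sketch not in the original cell): functions can also return several
# values at once as a tuple, which can then be unpacked directly.
def answer_with_status(name='paracetamol'):
    answer = 'There is no ' + name + ' left.'
    status = 404  # made-up status value, purely for illustration
    return answer, status

text, code = answer_with_status('aspirin')  # tuple unpacking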
###Output
There is no paracetamol left.
There is no aspirin left.
###Markdown
Its also possible to return several return values as a tuple, e.g.`return answer,status`___ Type conversion & castingSometimes it is necessary to convert variables from one type to another, e.g. convert a `float` to an `int`, or a an `int` to an `str`. That is where conversion functions come into play. Unlike in other languages types are converted via functions, where the return value is the converted type.| function | converting from | converting to || --- | --- | --- || int() | string, floating point | integer || float() | string, integer | floating point || str() | integer, float, list, tuple, dictionary | string || list() | string, tuple, dictionary | list || tuple() | string, list | tuple |
###Code
import math # not sure why one wants to do this :-)
int(math.pi)
float('1.999')
list('paracetamol') # create a list of characters!
list((1,2,3,6,8)) # creates a list from a tuple
'the answer is '+str(42) # before concatentation integer or float should be converted to string
###Output
_____no_output_____
###Markdown
Implicit (automatic) conversion takes place in Python to avoid data loss or loss in accuracy:
###Code
a = 2
print(type(a))
b = 4.5
print(type(b))
c = b / a
print(type(c))
c
###Output
<class 'int'>
<class 'float'>
<class 'float'>
###Markdown
Note that in this example the implicit conversion avoids data loss by converting the integer `a` to a floating point number.___ Object oriented programmingObject oriented programming is a huge field of its own, and we will not go into details here. Objects are kind of complex variables that themselves can contain different variables and functions. They are implemented in the code via so-called classes. In a way a class defines the behaviour of objects, and specific objects are created during runtime. Classes relate to objects as types (e.g. `int`) relate to a specific variable (e.g. `a`). As such, in Python there is no real difference between a class and a type: each variable is an object of a certain type/class.New classes in Python are defined the following way:
###Code
class Greeting:
def __init__(self, name):
self.name = name
def say_hello(self):
print('Hi '+self.name)
###Output
_____no_output_____
###Markdown
This code has created the `class` Greeting with its function `say_hello`. `__init__` defines the so-called constructor, which is called when the object is created. Object variables are saved using the `self` keyword.Objects can be created and their functions called:
###Code
x1 = Greeting('Greta')
x1.say_hello()
###Output
Hi Greta
###Markdown
Object variables can be accessed directly:
###Code
x1.name
###Output
_____no_output_____
###Markdown
Import statements in PythonPython comes with a lot of useful libraries. Some are contained in a default installation, some have to be installed with tools like `pip` or `conda`. The most convenient way to install new packages/libraries, in particular for scientific purposes, is to use an installation like Anaconda (https://www.anaconda.com/).Once a library is available it can be imported with an `import` statement. Import the whole library:
###Code
import math
print(math.cos(math.pi))
###Output
-1.0
###Markdown
Import a module or function from a library:
###Code
from math import cos, pi
print(cos(pi))
###Output
-1.0
###Markdown
Import a library and use an alias for it:
###Code
import numpy as np
array = np.asarray([1.0,2.0,1.0])
###Output
_____no_output_____ |
numpy/Practica Numpy.ipynb | ###Markdown
Practica de Numpy - [Numerical Python / Realizar operaciones en paralelo](numpy)- [Creacion de distintos tipos de arreglos](creacionArreglos)- [Arreglos Indexados](arreglosIndexados)- [Slicing](slicing)- [Start Stop Step](startStopStep)- [Indices Negativos](indicesNegativos)- [Filtros Booleanos](numpyBooleanos)- [Filtros Universales](filtrosUniversales)- [Funciones de agregacion](numpyAgregacion)- Otras funciones matematicas- [Numpy Add](numpyAdd)- [Numpy Subtract](numpysubtract)- [Numpy Multiply](numpymultiply)- [Numpy Divide](numpydivide)- [Numpy Logaddexp](numpylogaddexp)- [Numpy Logaddexp2](numpylogaddexp2)- [Numpy True Divide](numpytrue_divide)- [Numpy Floor Divide](numpyfloor_divide)- [Numpy Negative](numpynegative)- [Numpy Positive](numpypositive)- [Numpy Power](numpypower)- [Numpy Remainder](numpyremainder)- [Numpy Mod](numpymod)- [Numpy Fmod](numpyfmod)- Funciones mateticas, trigonometricas, de comparación, flotantes y de bit-twiddling faltantes- Enlace externo - Tutorial en youtube (El material es de un tutorial de la Universidad de Alicante, cubre varios temas, estos videos son los de Numpy.)- Arrays de NumPy - Parte 2.1 - Curso Python para científicos e ingenieros - Arrays de NumPy - Parte 2.2 - Curso Python para científicos e ingenieros - Arrays de NumPy - Parte 2.3 - Curso Python para científicos e ingenieros
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
- Numerical Python / Performing operations in parallel
###Code
### Manejo simple de matrices Estructura n dimencional
data = np.random.rand(4,5)
data
### Multiplicacion aplicada a todos los elementos
data * 9
### Suma de todos los elementos consigo mismo
data + data
### Estos arreglos de Numpy se pueden crear con arreglos de Python normales
dataDos = [[1,2,3], [4,5,6], [7,8,9], [10,11,12]]
arr = np.array(dataDos)
### Columnas por filas
arr.shape
### Dimension array
arr.ndim
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Creating different types of arrays
###Code
### Arreglos de ceros
np.zeros((4,5))
### Arreglo de 10 elementos, empieza de 0
np.arange(10)
### Arreglo que vaya del 1 al 10 en base a 20 resultados
np.linspace(1,10,20)
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Indexed arrays
###Code
### Arreglo de 8 elementos empezando de 0
arr = np.arange(8)
arr
### Posicion 5 contando desde el cero
arr[5]
### Muesta los elementos del 2 al 5 sin incluir el 5 (El ultimo no se incluye)
arr[2:5]
### Los dos puntos engloban a todo el arreglo, al igualarlo a 1 estoy cambiando todos los valores a 1
arr[:] = 1
arr
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Slicing
###Code
### Array de Python
arrDos = [[1,2,3], [4,5,6], [7,8,9], [10,11,12], [13,14,15]]
### Array convertido a array de numpy
arrDosNumpy = np.array(arrDos)
### Columnas por filas
arrDosNumpy.shape
### Dimensiones
arrDosNumpy.ndim
### Slicing simple
arrDosNumpy[:2]
# De los arrays 1 y 2 todos los elementos que sigan dsp del primero
arrDosNumpy[:2, 1:]
### Del array 1 todos los elementos hasta el segundo sin incluirlo (Al igual que el array sus
### elementos internos tambien empiezan desde cero)
arrDosNumpy[1, :2]
arrDosNumpy[1, :2] = 0
arrDosNumpy
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Start Stop Step
###Code
arrStartStopStep = np.arange(10)
arrStartStopStep
###Output
_____no_output_____
###Markdown
Syntax: x[start:stop:step]
###Code
### Se van a tomar los elementos desde start hasta stop saltando de a step elementos
arrStartStopStep[2:9:3]
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Negative indices
###Code
arrIndicesNegativos = np.arange(10)
## Empeza del total del array menos -2 + 10 = 8 y llega a 10
arrIndicesNegativos[-2:10]
### Empeza del total del array menos -3 + 10 = 7 y llega al 3 restando de a un elemento
arrIndicesNegativos[-3:3:-1]
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Boolean filters
###Code
arrBooleano = np.random.randint(5, size=(3, 3))
arrBooleano
### Verifica si cada elemento es menor a dos y devuelve True o False
arrBooleano < 2
### Devuelve los elementos que devuelven True en el filtro booleano
arrBooleano[arrBooleano < 2]
arrBooleano > 0
### Elementos mayores a cero
arrBooleano[arrBooleano > 0]
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Universal functions
###Code
universal = np.arange(10)
universal
### Obtener raiz cuadrada de cada uno de los elementos del array
np.sqrt(universal)
### Devuelve el valor absoluto de cada uno de los elementos del arreglo
np.abs([-2, 1, -3])
### Devuelve los caudrados de cada uno de los elementos
np.square(universal)
### Suma los elementos de cada array
np.add(universal, universal)
### Seria igual a esto
np.add([0,1,2,3,4,5,6,7,8,9], [0,1,2,3,4,5,6,7,8,9])
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Aggregation functions
###Code
### Arreglo de 5 columnas x 5 filas con valores enteros que van del 0 al 5 de forma aleatoria
arrRandom = np.random.randint(5, size=(5, 5))
arrRandom
### Promedio de todos sus valores
arrRandom.mean()
### Suma de todos los valores del arreglo
arrRandom.sum()
arrRandom.prod()
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) Other functions - Numpy Add
###Code
arrAddUno = np.arange(1, 11)
arrAddDos = np.arange(10, 20)
arrAddUno, arrAddDos
### Suma los elementos de los arrays entre si
np.add(arrAddUno, arrAddDos)
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Subtract
###Code
arrSubUno = np.arange(1, 11)
arrSubDos = np.arange(10, 20)
arrSubUno, arrSubDos
### Resta los elementos de los arrays entre si
np.subtract(arrSubUno, arrSubDos)
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Multiply
###Code
arrMulUno = np.arange(1, 11)
arrMulDos = np.arange(10, 20)
arrMulUno, arrMulDos
### Multiplica los elementos de los arrays entre si
np.multiply(arrSubUno, arrSubDos)
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Divide
###Code
arrDivUno = np.arange(1, 11)
arrDivDos = np.arange(10, 20)
arrDivUno, arrDivDos
### Divide los elementos de los arrays entre si
np.true_divide(arrSubUno, arrSubDos)
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Logaddexp
###Code
logaddexpUno = np.log(1e-10)
logaddexpDos = np.log(5e-10)
logaddexpUno, logaddexpDos
### Logaritmo de la suma de exponenciaciones de las entradas
np.logaddexp(logaddexpUno, logaddexpDos)
np.exp(np.logaddexp(logaddexpUno, logaddexpDos))
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Logaddexp2
###Code
logaddexp2Uno = np.log(1e-10)
logaddexp2Dos = np.log(5e-10)
logaddexp2Uno, logaddexp2Dos
### Logaritmo de la suma de exponenciaciones de las entradas en base 2
np.logaddexp2(logaddexp2Uno, logaddexp2Dos)
np.exp(np.logaddexp2(logaddexp2Uno, logaddexp2Dos))
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy True Divide
###Code
trueDivide = np.arange(5)
trueDivide
### Devuelve una division verdadera de las entradas, esta division ajusta el tipo de salida
### para presentar la mejor respuesta, independientemente de los tipos de entrada.
np.true_divide(trueDivide, 4)
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Floor Divide
###Code
floorDivide = np.arange(5)
floorDivide
### Devuelve el entero más grande, más pequeño o igual a la división de las entradas.
### Es equivalente al operador resto de Python 3 / 2 tiene de resto 1
np.floor_divide(floorDivide, 3)
###Output
_____no_output_____
###Markdown
- Numpy Negative
###Code
### Pasa a negativo los valores, en el ejemplo 1 positivo a negativo = 1 negativo,
### 1 negativo a negativo = negativo x negativo = positivo = 1 positivo
np.negative([1.,-1.])
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Positive
###Code
### Pasa a positivo los valores, en el ejemplo 1 positivo a positivo = 1 positivo,
### 1 negativo a positivo = negativo x positivo = negativo = 1 negativo
np.positive([2.,-2.])
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Power
###Code
power = np.arange(10)
power
### Eleva a la potencia que le digamos a cada numero del array
np.power(power, 3)
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Remainder
###Code
### Devuelve el resto de la division entre los elementos de los arrays
np.remainder([1, 4, 7], [1, 3, 5])
remainder = np.arange(10)
remainder
np.remainder(remainder, 3)
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Mod
###Code
### Similar a Remainder (Arriba)
np.mod([1, 4, 7], [1, 3, 5])
###Output
_____no_output_____
###Markdown
[Back to the list](arriba) - Numpy Fmod
###Code
### Devuelve el resto de la división de elementos.
fmodUno = np.arange(1, 7).reshape(3, 2)
fmodUno
np.fmod(fmodUno, [2, 2])
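### Note (a short added example): np.fmod and np.remainder only differ for negative operands:
### np.remainder(-5, 3) -> 1 (sign follows the divisor), np.fmod(-5, 3) -> -2 (sign follows the dividend)
np.remainder(-5, 3), np.fmod(-5, 3)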
###Output
_____no_output_____ |
Route-Planner.ipynb | ###Markdown
Implementing a Route Planner Using A* Search The Map
###Code
# Run this cell first!
from helpers import Map, load_map_10, load_map_40, show_map
import math
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Map Basics
###Code
map_10 = load_map_10()
show_map(map_10)
###Output
_____no_output_____
###Markdown
The map above shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other but they are not connected to the rest of the road network.These `Map` objects have two properties that will be used to implement A\* search: `intersections` and `roads`**Intersections**The `intersections` are represented as a dictionary. In the example below, there are only 10 intersections.
###Code
map_10.intersections
###Output
_____no_output_____
###Markdown
**Roads**The `roads` property is a list where `roads[i]` contains a list of the intersections that intersection `i` connects to.
###Code
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[0]
# This shows the full connectivity of the map
map_10.roads
# map_40 is a bigger map than map_10
map_40 = load_map_40()
show_map(map_40)
###Output
_____no_output_____
###Markdown
Advanced VisualizationsThe map above has a network of roads which spans 40 different intersections (labeled 0 through 39). * `start` - The "start" node for the search algorithm.* `goal` - The "goal" node.* `path` - An array of integers which corresponds to a valid sequence of intersection visits on the map.
###Code
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
###Output
_____no_output_____
###Markdown
The Algorithm PathPlanner class`__init__` - The path planner is initialized with a map, M, and typically a start and goal node. If either of these are `None`, the rest of the variables here are also set to none. - `closedSet` includes any explored/visited nodes. - `openSet` are any nodes on our frontier for potential future exploration. - `cameFrom` will hold the previous node that best reaches a given node- `gScore` is the `g` in our `f = g + h` equation, or the actual cost to reach our current node- `fScore` is the combination of `g` and `h`, i.e. the `gScore` plus a heuristic; total cost to reach the goal- `path` comes from the `run_search` function.`reconstruct_path` - This function just rebuilds the path after search is run, going from the goal node backwards using each node's `cameFrom` information.`_reset` - Resets *most* of our initialized variables for PathPlanner. This *does not* reset the map, start or goal variables.`run_search` - This checks whether the map, goal and start have been added to the class. Then, it will also check if the other variables, other than `path` are initialized.From here, the `is_open_empty` is implemented to check that there are still nodes to explore. If the goal has been reached, the path is reconstructed. If not, the current node is moved from the frontier (`openSet`) and into explored (`closedSet`). Then, the neighbors of the current node are checked along with their costs, and finally the next move is planned.This is the main idea behind A*.
###Code
class PathPlanner():
"""Construct a PathPlanner Object"""
def __init__(self, M, start=None, goal=None):
""" """
self.map = M
self.start= start
self.goal = goal
self.closedSet = self.create_closedSet() if goal != None and start != None else None
self.openSet = self.create_openSet() if goal != None and start != None else None
self.cameFrom = self.create_cameFrom() if goal != None and start != None else None
self.gScore = self.create_gScore() if goal != None and start != None else None
self.fScore = self.create_fScore() if goal != None and start != None else None
self.path = self.run_search() if self.map and self.start != None and self.goal != None else None
def reconstruct_path(self, current):
""" Reconstructs path after search """
total_path = [current]
while current in self.cameFrom.keys():
current = self.cameFrom[current]
total_path.append(current)
return total_path
def _reset(self):
"""Private method used to reset the closedSet, openSet, cameFrom, gScore, fScore, and path attributes"""
self.closedSet = None
self.openSet = None
self.cameFrom = None
self.gScore = None
self.fScore = None
self.path = self.run_search() if self.map and self.start != None and self.goal != None else None
def run_search(self):
""" """
if self.map == None:
raise ValueError("Must create map before running search. Try running PathPlanner.set_map(map)")
if self.goal == None:
raise ValueError("Must create goal node before running search. Try running PathPlanner.set_goal(goal_node)")
if self.start == None:
raise ValueError("Must create start node before running search. Try running PathPlanner.set_start(start_node)")
self.closedSet = self.closedSet if self.closedSet != None else self.create_closedSet()
self.openSet = self.openSet if self.openSet != None else self.create_openSet()
self.cameFrom = self.cameFrom if self.cameFrom != None else self.create_cameFrom()
self.gScore = self.gScore if self.gScore != None else self.create_gScore()
self.fScore = self.fScore if self.fScore != None else self.create_fScore()
while not self.is_open_empty():
current = self.get_current_node()
if current == self.goal:
self.path = [x for x in reversed(self.reconstruct_path(current))]
return self.path
else:
self.openSet.remove(current)
self.closedSet.add(current)
for neighbor in self.get_neighbors(current):
if neighbor in self.closedSet:
continue # Ignore the neighbor which is already evaluated.
if not neighbor in self.openSet: # Discover a new node
self.openSet.add(neighbor)
# The distance from start to a neighbor
#the "dist_between" function may vary as per the solution requirements.
if self.get_tentative_gScore(current, neighbor) >= self.get_gScore(neighbor):
continue # This is not a better path.
# This path is the best until now. Record it!
self.record_best_path_to(current, neighbor)
print("No Path Found")
self.path = None
return False
###Output
_____no_output_____
###Markdown
Data Structures
###Code
def create_closedSet(self):
""" Creates and returns a data structure suitable to hold the set of nodes already evaluated"""
# EXAMPLE: return a data structure suitable to hold the set of nodes already evaluated
return set()
def create_openSet(self):
""" Creates and returns a data structure suitable to hold the set of currently discovered nodes
that are not evaluated yet. Initially, only the start node is known."""
if self.start != None:
# TODO: return a data structure suitable to hold the set of currently discovered nodes
# that are not evaluated yet. Make sure to include the start node.
openSet = set()
openSet.add(self.start)
return openSet
raise(ValueError, "Must create start node before creating an open set. Try running PathPlanner.set_start(start_node)")
def create_cameFrom(self):
"""Creates and returns a data structure that shows which node can most efficiently be reached from another,
for each node."""
# TODO: return a data structure that shows which node can most efficiently be reached from another,
# for each node.
return {}
def create_gScore(self):
"""Creates and returns a data structure that holds the cost of getting from the start node to that node,
for each node. The cost of going from start to start is zero."""
# TODO: return a data structure that holds the cost of getting from the start node to that node, for each node.
# for each node. The cost of going from start to start is zero. The rest of the node's values should
# be set to infinity.
gScore = {}
for i in range(len(self.map.intersections)):
gScore[i] = float("inf")
gScore[self.start] = 0
return gScore
def create_fScore(self):
"""Creates and returns a data structure that holds the total cost of getting from the start node to the goal
by passing by that node, for each node. That value is partly known, partly heuristic.
For the first node, that value is completely heuristic."""
# TODO: return a data structure that holds the total cost of getting from the start node to the goal
# by passing by that node, for each node. That value is partly known, partly heuristic.
# For the first node, that value is completely heuristic. The rest of the node's value should be
# set to infinity.
fScore = {}
for i in range(len(self.map.intersections)):
fScore[i] = float("inf")
fScore[self.start] = 0
return fScore
###Output
_____no_output_____
###Markdown
Set certain variablesThe below functions help set certain variables if they weren't a part of initializing the `PathPlanner` class.
###Code
def set_map(self, M):
"""Method used to set map attribute """
self._reset()  # _reset is a bound method, so self must not be passed again
self.start = None
self.goal = None
# TODO: Set map to new value.
self.map = M
return None
def set_start(self, start):
"""Method used to set start attribute """
self._reset()  # _reset is a bound method, so self must not be passed again
# TODO: Set start value. Remember to remove goal, closedSet, openSet, cameFrom, gScore, fScore,
# and path attributes' values.
self.start = start
self.goal = None
self.closedSet = None
self.openSet = None
self.cameFrom = None
self.gScore = None
self.fScore = None
self.path = None
return None
def set_goal(self, goal):
"""Method used to set goal attribute """
self._reset()  # _reset is a bound method, so self must not be passed again
# TODO: Set goal value.
self.goal = goal
return None
###Output
_____no_output_____
###Markdown
Get node informationThe below functions concern grabbing certain node information. The `is_open_empty` function checks whether there are still nodes on the frontier to explore. `get_current_node()` comes up with a way to find the lowest `fScore` of the nodes on the frontier. `get_neighbors` gathers information from the map to find the neighbors of the current node.
###Code
def is_open_empty(self):
"""returns True if the open set is empty. False otherwise. """
# Return True if the open set has no nodes left to explore (an empty or missing set both count as empty).
return not self.openSet
def get_current_node(self):
""" Returns the node in the open set with the lowest value of f(node)."""
# TODO: Return the node in the open set with the lowest value of f(node).
minVal = {}
for key in self.openSet:
minVal[key] = self.calculate_fscore(key)
minNode = min(minVal, key = minVal.get)
return minNode
def get_neighbors(self, node):
"""Returns the neighbors of a node"""
# TODO: Return the neighbors of a node
return set(self.map.roads[node])
###Output
_____no_output_____
###Markdown
Scores and CostsCalculates the various parts of the `fScore`.
###Code
def get_gScore(self, node):
"""Returns the g Score of a node"""
# TODO: Return the g Score of a node
newgScore = self.gScore.get(node, float("inf"))  # nodes not yet reached default to infinite cost
return newgScore
def distance(self, node_1, node_2):
""" Computes the Euclidean L2 Distance"""
# TODO: Compute and return the Euclidean L2 Distance
dx = self.map.intersections[node_1][0] - self.map.intersections[node_2][0]
dy = self.map.intersections[node_1][1] - self.map.intersections[node_2][1]
return math.sqrt(dx**2 + dy**2)
def get_tentative_gScore(self, current, neighbor):
"""Returns the tentative g Score of a node"""
# TODO: Return the g Score of the current node
# plus distance from the current node to it's neighbors
return (self.get_gScore(current) + self.distance(current,neighbor))
def heuristic_cost_estimate(self, node):
""" Returns the heuristic cost estimate of a node """
# TODO: Return the heuristic cost estimate of a node
return self.distance(node, self.goal)
def calculate_fscore(self, node):
"""Calculate the f score of a node. """
# TODO: Calculate and returns the f score of a node.
# REMEMBER F = G + H
return (self.get_gScore(node) + self.heuristic_cost_estimate(node))
###Output
_____no_output_____
###Markdown
Recording the best pathRecords the best path to a given neighbor node from the current node.
###Code
def record_best_path_to(self, current, neighbor):
"""Record the best path to a node """
# TODO: Record the best path to a node, by updating cameFrom, gScore, and fScore
self.cameFrom[neighbor] = current
self.gScore[neighbor] = self.get_tentative_gScore(current, neighbor)
self.fScore[neighbor] = self.calculate_fscore(neighbor)
return None
###Output
_____no_output_____
###Markdown
Associating your functions with the `PathPlanner` class
###Code
# Associates implemented functions with PathPlanner class
PathPlanner.create_closedSet = create_closedSet
PathPlanner.create_openSet = create_openSet
PathPlanner.create_cameFrom = create_cameFrom
PathPlanner.create_gScore = create_gScore
PathPlanner.create_fScore = create_fScore
PathPlanner.set_map = set_map
PathPlanner.set_start = set_start
PathPlanner.set_goal = set_goal
PathPlanner.is_open_empty = is_open_empty
PathPlanner.get_current_node = get_current_node
PathPlanner.get_neighbors = get_neighbors
PathPlanner.get_gScore = get_gScore
PathPlanner.distance = distance
PathPlanner.get_tentative_gScore = get_tentative_gScore
PathPlanner.heuristic_cost_estimate = heuristic_cost_estimate
PathPlanner.calculate_fscore = calculate_fscore
PathPlanner.record_best_path_to = record_best_path_to
###Output
_____no_output_____
###Markdown
VisualizeVisualize the results of the algorithm
###Code
start = 34
goal = 5
show_map(map_40, start=start, goal=goal, path=PathPlanner(map_40, start, goal).path)
###Output
_____no_output_____ |
Generate_tests.ipynb | ###Markdown
This notebook serves to generate test sets for scarlet tests
###Code
%pylab inline
# Setup: declaring survey properties, loading catalog and making sure we have pretty colorbars
import galsim
# NOTE: 'gct' is assumed to be a local helper module (galsim comparison/test
# utilities) that is importable in this environment; it is not defined in this notebook.
data_dir='/Users/remy/Desktop/LSST_Project/GalSim/examples/data'
HST, EUCLID, WFIRST, HSC, RUBIN = gct.load_surveys()
cat = galsim.COSMOSCatalog(dir=data_dir, file_name = 'real_galaxy_catalog_23.5_example.fits')
# Channel names (scarlet-specific)
channel_hr = ['hr']
channel_lr = ['lr']
channels = channel_lr+channel_hr
mymap = 'gnuplot2'#mcolors.LinearSegmentedColormap.from_list('my_colormap', colors)
matplotlib.rc('image', cmap='gist_stern')
matplotlib.rc('image', interpolation='none')
# Choose the surveys to match (consecutive pairs are matched in the loop below)
surveys = [HST, EUCLID, WFIRST, HSC, RUBIN]
# PSF size (pixels)
npsf = 41
# Size of the high resolution image (pixels)
n_hr = 131
# The low resolution image will span the same physical area
shift = (0, -15, -2, +1, +4)
datas = []
wcss = []
psfs = []
for i in range(len(surveys)-1):
s_hr = surveys[i]
s_lr = surveys[i + 1]
n_lr = int(n_hr*s_hr['pixel']/s_lr['pixel'])  # np.int was removed from NumPy; the builtin int behaves the same here
# Make the simulations
data_hr, data_lr, psf_hr, psf_lr, angle = gct.mk_sim(39, s_hr, s_lr, (n_hr, n_hr), (n_lr, n_lr), npsf, cat)
datas.append(data_hr.array)
psfs.append(psf_hr)
wcss.append(data_hr.wcs)
n_hr = n_lr
datas.append(data_lr.array)
psfs.append(psf_lr)
wcss.append(data_lr.wcs)
np.savez('Multiresolution_tests.npz', images = datas, wcs = wcss, psf = psfs)
#galsim.fits.writeMulti(datas, file_name='MultiResolution_images.fits')
#galsim.fits.writeMulti(psfs, file_name='MultiResolution_psfs.fits')
n_hr = 50
n_lr = 50
datas = []
wcss = []
psfs = []
for s in shift:
s_hr = WFIRST
s_lr = WFIRST
data_hr, data_lr, psf_hr, psf_lr, angle = gct.mk_sim(39, s_hr, s_lr, (n_hr, n_hr), (n_lr+s, n_lr+s), npsf, cat)
datas.append(data_lr.array)
psfs.append(psf_lr)
wcss.append(data_lr.wcs)
np.savez('Multiresolution_padded_tests.npz', images = datas, wcs = wcss, psf = psfs)
###Output
_____no_output_____ |
week4/day4/theory/matplotplib/1_distributions_matplotlib/4.distNormal.ipynb | ###Markdown
Normal Distribution- Different displays of normally distributed data- Compare different samples from a normal distribution- Check for normality- Work with the cumulative distribution function (CDF)
###Code
%pylab inline
import scipy.stats as stats
# seaborn is a package for the visualization of statistical data
import seaborn as sns
sns.set(style='ticks')
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Different Representations
###Code
''' Different aspects of a normal distribution'''
# Generate the data
x = r_[-10:10:0.1]
rv = stats.norm(0,1) # random variate
x2 = r_[0:1:0.001]
ax = subplot2grid((3,2),(0,0), colspan=2)
plot(x,rv.pdf(x))
xlim([-10,10])
title('Normal Distribution - PDF')
subplot(323)
plot(x,rv.cdf(x))
xlim([-4,4])
title('CDF: cumulative distribution fct')
subplot(324)
plot(x,rv.sf(x))
xlim([-4,4])
title('SF: survival fct')
subplot(325)
plot(x2,rv.ppf(x2))
title('PPF')
subplot(326)
plot(x2,rv.isf(x2))
title('ISF')
tight_layout()
show()
###Output
_____no_output_____
###Markdown
Multiple normal sample distributions
###Code
'''Show multiple samples from the same distribution, and compare means.'''
# NOTE: myMean and mySD are used below but never defined earlier in this notebook;
# they are set here so the cell runs. The exact values are an assumption chosen
# purely for illustration.
myMean = 0
mySD = 3
# Do this 25 times, and show the histograms
numRows = 5
numData = 100
for ii in range(numRows):
for jj in range(numRows):
data = stats.norm.rvs(myMean, mySD, size=numData)
subplot(numRows,numRows,numRows*ii+jj+1)
hist(data)
xticks([])
yticks([])
xlim(myMean-3*mySD, myMean+3*mySD)
tight_layout()
show()
# Check out the mean of 1000 normally distributded samples
numTrials = 1000;
numData = 100
myMeans = ones(numTrials)*nan
for ii in range(numTrials):
data = stats.norm.rvs(myMean, mySD, size=numData)
myMeans[ii] = mean(data)
print('The standard error of the mean, with {0} samples, is {1:5.3f}'.format(numData, std(myMeans)))
###Output
_____no_output_____
###Markdown
Normality Check
###Code
'''Check if the distribution is normal.'''
# Generate and show a distribution
numData = 100
data = stats.norm.rvs(myMean, mySD, size=numData)
hist(data)
# Graphical test: if the data lie on a line, they are pretty much
# normally distributed
_ = stats.probplot(data, plot=plt)
# The scipy "normaltest" is based on D’Agostino and Pearson’s test that
# combines skew and kurtosis to produce an omnibus test of normality.
_, pVal = stats.normaltest(data)
# Or you can check for normality with Kolmogorov-Smirnov test: but this is only advisable for large sample numbers!
#_,pVal = stats.kstest((data-np.mean(data))/np.std(data,ddof=1), 'norm')
if pVal > 0.05:
print('Data are probably normally distributed')
###Output
Data are probably normally distributed
###Markdown
Values from the Cumulative Distribution Function
###Code
'''Calculate an empirical cumulative distribution function, compare it with the exact one, and
find the exact point for a specific data value.'''
# Generate normally distributed random data
myMean = 5
mySD = 2
numData = 100
data = stats.norm.rvs(myMean, mySD, size=numData)
# Calculate the cumulative distribution function, CDF
numbins = 20
counts, bin_edges = histogram(data, bins=numbins, normed=True)
cdf = cumsum(counts)
cdf /= max(cdf)
# compare with the exact CDF
step(bin_edges[1:],cdf)
plot(x, stats.norm.cdf(x, myMean, mySD),'r')
# Find out the value corresponding to the x-th percentile: the
# "cumulative distribution function"
value = 2
myMean = 5
mySD = 2
cdf = stats.norm.cdf(value, myMean, mySD)
print(('With a threshold of {0:4.2f}, you get {1}% of the data'.format(value, round(cdf*100))))
# For the percentile corresponding to a certain value:
# the "inverse cumulative distribution function"
value = 0.025
icdf = stats.norm.isf(value, myMean, mySD)
print('To get {0}% of the data, you need a threshold of {1:4.2f}.'.format((1-value)*100, icdf))
###Output
With a threshold of 2.00, you get 7.0% of the data
To get 97.5% of the data, you need a threshold of 8.92.
|
word_embedding_task.ipynb | ###Markdown
Learning a group of topic related sentences using Gensim (Word2Vec)**NOTE**: The ipython notebook from which this html page was created can be downloaded [here](word_embedding_task.ipynb).
###Code
# Loading a text file containing a few sentences (utterances)
from gensim.models.word2vec import LineSentence
sentences = LineSentence('small_dataset.txt')
for sentence in sentences:
print(sentence)
# Create a Word2Vec model using the list of sentences from text file
from gensim.models import Word2Vec
# min_count is set to 1, meaning that all words ocurring in the
# list of sentences shall be used to learn the topic specific model
model = Word2Vec(sentences, min_count=1)
###Output
_____no_output_____
###Markdown
Once the model has been trained it can be used for word similarity search, but in this case we are only interested in finding out the individual word vector representation (word embeddings) of the words that were part of the dataset.
###Code
# Print the default size of the word vector for the English language
print("English word vector dimension: {} [default]".format(model.vector_size))
# Print the word vector for a given word from the vocabulary
print("What WV:\n", model.wv['What'])
# Print the vocabulary of words in the list of sentences
words = list(model.wv.vocab)
print(words)
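# Aside (illustrative addition, not part of the original notebook): the word
# similarity search mentioned above can be queried through gensim's standard
# most_similar API. With such a tiny corpus the neighbours are not meaningful;
# this just shows the shape of the call, using the first vocabulary word.
print(model.wv.most_similar(words[0], topn=3))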
# Print word vector for each word in learned vocabulary
vocab_words = []
for word in words:
word_vector = model.wv[word]
print("{}:\n{}".format(word, word_vector))
vocab_words.append(word_vector)
# Convert python list into numpy array and save np.array as csv
import numpy as np
vocab_words_arr = np.array(vocab_words)
print("Shape: {}".format(vocab_words_arr.shape))
# Saving word vectors in csv file
# REF: https://thispointer.com/how-to-save-numpy-array-to-a-csv-file-using-numpy-savetxt-in-python/
np.savetxt('word_vectors.csv', vocab_words_arr, delimiter=',',fmt='%10.9f')
# Verify the word vectors have been saved as expected (CSV format)
csv_file = open('word_vectors.csv')
for line in csv_file:
print(line)
csv_file.close()
###Output
0.003817931,0.000240543,0.001416539,0.004691710,0.000901456,-0.003332141,0.004412716,0.001016370,-0.003054140,0.003326045,-0.003360831,-0.002418606,0.003422318,-0.000054590,-0.001297349,0.002558626,0.003658168,0.002790377,-0.004865745,-0.001417813,0.004051880,0.001257447,-0.000137752,0.000515029,-0.004225304,0.002308279,-0.002388692,-0.000381204,0.002681723,-0.001535533,0.004453339,0.004972558,-0.004890865,-0.000750126,0.002139885,-0.004434817,0.004936552,0.001371434,-0.000213899,0.000238225,0.003336452,0.000725949,-0.002061287,0.000591304,-0.004873754,-0.000840170,-0.002101362,0.004949273,-0.004063197,-0.001487249,-0.004136848,0.003548552,-0.000470372,-0.000223519,-0.004024534,0.003330080,0.002322505,-0.002862491,0.003721145,0.000099605,0.003717714,-0.003095280,-0.002681174,-0.000475856,-0.002738236,-0.000456624,0.004250429,-0.000523769,0.001175903,-0.003073386,0.003857980,0.003220433,0.001414227,-0.002455805,-0.003524332,-0.003382101,-0.004817462,-0.004495983,0.000933415,0.002115293,0.002137979,-0.001491709,0.001830439,0.003725068,-0.002583590,-0.002205328,0.000464510,0.001217631,0.004228801,0.002590737,0.000722574,0.003168549,0.002179903,0.003697334,0.003867740,-0.001168576,0.000786878,0.004533237,-0.004246540,-0.000020715
0.003210727,0.004206004,-0.003663773,-0.000686529,-0.000347193,0.002548580,0.001087819,-0.003601703,0.004293544,0.003165134,-0.000327175,0.001526551,-0.003140299,-0.002050865,-0.004402772,-0.004869504,0.003244910,0.001199283,-0.004840914,0.003609258,-0.000495356,-0.003785335,-0.003165283,-0.001884488,0.001005024,-0.004480501,0.004700046,-0.000749720,-0.004116219,0.003248483,0.000601241,0.001391371,0.000117276,0.004598539,0.000024336,0.002392100,-0.002779946,0.004339508,-0.000136888,-0.000032235,-0.002102793,0.003521864,-0.003311983,-0.000438981,0.001563378,-0.001334500,-0.000477182,-0.004754326,0.003653715,0.000063842,0.003483489,0.001623938,0.000031861,0.003711789,-0.001431723,0.000521184,0.004581107,0.003985182,0.002460982,-0.003777409,0.000931412,0.004293477,0.004432143,-0.003381311,0.003305174,0.002534920,-0.002117295,0.001092874,0.004225812,0.001084702,-0.002722353,-0.002401858,-0.000692096,0.003697560,-0.002205801,0.004397293,-0.001319862,0.001039880,0.001116216,-0.003308026,0.000220052,-0.000656609,0.000689448,0.000773951,-0.000621197,-0.003584078,-0.000483105,0.001930859,0.000828746,-0.003753677,0.002765012,0.001621977,-0.001084417,-0.002968208,0.004456817,-0.002509371,-0.001254782,0.002661642,0.001481566,-0.001303918
0.003674826,-0.003571348,-0.000191541,0.001344451,-0.002554256,-0.003097865,0.004781094,-0.000825347,0.003269454,0.004795059,0.003730626,-0.004904617,0.004878842,0.004764771,-0.002186559,0.003184578,-0.001807262,0.001630266,-0.004281288,0.003623117,0.004772794,-0.002545045,-0.002828064,0.000378306,-0.003922058,-0.004730593,-0.004972465,-0.003865715,0.000081975,0.002812293,0.001733413,0.001886800,0.002862202,-0.001782437,0.002692326,0.004961068,0.001995665,0.004144954,-0.002252660,0.002411428,-0.002145783,0.001008826,-0.004088235,-0.001989136,-0.000023787,-0.004509038,-0.002133450,-0.001680033,0.002368989,-0.003178182,0.000087073,-0.000029682,0.001928918,-0.001150559,-0.002154327,0.001724265,-0.004377858,-0.003030868,0.003598090,0.000678240,-0.000806179,-0.003760305,-0.000940498,-0.002072354,0.001001567,-0.002585263,0.001393008,0.002331270,0.001819359,0.003088729,-0.002426107,-0.002888257,0.002158948,0.001131268,-0.001734020,0.003708762,0.000816599,-0.002599154,-0.001948593,-0.002351754,-0.004170528,0.002501507,0.003792182,-0.002535307,0.002777177,0.003856736,-0.000755920,0.003154794,0.001848053,0.002365645,0.000764980,0.001450406,0.004395378,-0.003632784,-0.002627003,-0.004848392,0.004114818,-0.004930914,-0.003292832,-0.004951911
-0.000222452,-0.002020811,-0.000074854,-0.002983667,-0.002435990,-0.000269519,0.004872921,0.004458871,-0.002523144,-0.002015503,0.003505631,-0.002289893,-0.003419180,0.003306475,0.003875477,0.000244253,-0.002020532,-0.001089657,0.002507019,-0.001430361,-0.004072055,-0.003968218,-0.004288846,0.003184181,-0.004521288,-0.001299986,-0.003998723,-0.001507517,-0.001611203,0.002369087,0.001721815,0.004099016,-0.004079534,-0.003085327,0.000674107,-0.000347205,0.004872961,-0.001715644,0.001478903,0.001638741,-0.002287590,0.000679099,-0.001660143,-0.000688783,-0.003666479,0.000947129,0.002102259,0.003962159,-0.002034871,-0.000581646,-0.002815537,0.001170566,0.004989890,-0.000150676,-0.002540431,0.000058383,0.001720051,-0.002976317,-0.002202987,-0.001081301,-0.000422305,-0.001514667,-0.001417259,0.003415458,0.000861990,-0.000437711,0.000602080,0.004669177,0.001216143,0.004172822,-0.004752431,-0.004741377,0.004021507,-0.000612152,0.004299669,-0.003404021,-0.001261945,0.000047447,0.004643647,-0.000547884,0.003773933,0.002713092,0.000675568,-0.001298256,0.001044217,-0.001651349,-0.001051316,0.003037205,0.002114655,0.002246170,-0.004160744,0.001448473,0.002517059,0.003477251,-0.002886437,0.004718667,0.000681026,-0.003512484,0.000387840,0.003752019
-0.000411817,0.001031531,-0.004387416,0.003767172,-0.003274641,-0.000004643,0.001408141,0.001321710,-0.002436806,-0.004081811,0.004744458,-0.002334235,0.003084215,-0.002743960,0.002329503,0.002317159,0.000250908,0.002562344,-0.002111642,0.001439892,-0.004102187,-0.001258579,-0.002101198,0.001704094,-0.001704065,-0.001231279,-0.001816481,0.001348637,0.002657363,0.004610908,-0.002588544,0.000783808,0.002477845,0.004390165,0.004940482,0.001877305,-0.002717261,0.001289295,0.002137240,0.000952962,0.003104394,-0.003070509,0.002765270,-0.002548112,-0.002991907,0.004975020,0.004492360,-0.004701657,0.002740256,-0.003360935,0.003389908,-0.000756317,0.004478015,0.001745322,-0.004043107,-0.001296660,0.001943053,0.001951275,0.001916065,-0.000870430,-0.004718317,0.004565259,-0.003310209,-0.003661135,-0.004720970,0.003580528,-0.003493848,0.002716023,-0.000561293,0.004344202,-0.003896487,-0.000000665,0.000581205,-0.002495488,0.002797519,-0.002126662,0.000617417,-0.004589611,0.002643034,-0.000492294,-0.001321406,-0.002859528,-0.003564191,-0.000713403,-0.004317243,-0.000128334,0.001742486,0.002280568,-0.002020508,0.001656407,-0.002015481,0.003072242,0.003107897,-0.002544093,-0.000429843,0.003550183,-0.002553718,-0.003608285,-0.001567773,-0.001987635
-0.001864515,0.001373479,-0.000836967,-0.003302562,0.003790177,-0.004468769,-0.002653168,-0.004925818,0.002373363,0.004573006,-0.002851809,0.002632587,-0.004017908,0.004175689,0.003005243,-0.003316673,0.002885726,0.003121722,-0.000793711,0.003121253,-0.001541987,0.001126222,-0.004446653,-0.002641870,-0.000303148,-0.000464120,-0.004745431,-0.003109756,-0.002107441,0.003646016,-0.001759752,-0.000207695,-0.003194482,0.003033609,-0.004094131,0.000489929,-0.003227535,0.003268803,0.002406566,-0.003309834,-0.002582477,-0.000253990,-0.004925283,0.004280434,-0.000179595,0.004456446,0.004400559,-0.002434625,-0.004561713,-0.000911605,-0.001258161,0.004686960,-0.001743124,-0.002001905,0.000142499,-0.004949318,0.004079982,-0.004231700,0.000570606,-0.002278521,0.003476738,0.004426540,0.000423273,-0.000808042,0.003392611,0.004140855,-0.002566157,-0.003228715,-0.001670340,0.003593167,0.003192448,-0.003227527,-0.003232877,-0.002873841,0.001699205,0.002489904,-0.003828980,0.003288641,-0.001119972,0.004309856,-0.003985735,-0.004958433,-0.004574311,-0.000137618,0.003544780,-0.004434935,0.003211811,0.003856679,-0.003005776,0.002402224,0.000473650,0.004561917,-0.002578255,0.002030434,0.002085141,0.003507044,-0.002414612,-0.004391708,-0.000564961,-0.003041856
0.002083804,0.000185112,-0.001297297,-0.000756690,-0.004521549,0.003328628,0.001041075,-0.003612709,-0.003456157,0.004327875,-0.001406352,-0.001953227,-0.003801002,-0.002925902,-0.001707792,0.002220873,-0.003038259,0.002720386,0.000476768,0.000578258,-0.003751707,0.003908345,0.001490492,-0.000985550,-0.003659871,-0.004124812,0.000565404,0.003644140,-0.001462909,0.003753708,0.004828874,-0.003433861,0.000347152,-0.002591626,0.002581793,-0.001502431,-0.002767565,0.003040117,-0.001803420,0.000685495,0.002114195,-0.003132028,-0.000906253,0.000996878,-0.000376887,-0.004596074,-0.001401772,0.000985112,-0.004755624,0.004639362,-0.002685509,0.003919043,-0.003925897,-0.001363555,0.003416640,-0.003647086,-0.000023532,0.000888324,0.002262088,-0.001588371,-0.003934195,-0.004993487,0.003169652,-0.004821768,0.004221914,0.000800890,-0.000794803,0.004258242,-0.004665796,-0.000774856,0.003393713,0.001235540,-0.003830061,0.001613938,-0.003937627,0.003056985,-0.001619292,0.000329647,-0.003922953,0.000598421,0.004635705,-0.001909530,-0.000523938,-0.004452768,-0.003068262,-0.003302673,0.004895813,-0.004481561,0.001774233,0.003235243,0.000041263,0.000441788,-0.002569754,0.002221948,0.001409875,-0.004605914,0.000409432,-0.004986185,-0.001939686,0.001388773
-0.004524785,-0.000082761,0.001344166,-0.001308491,-0.004173415,-0.000477306,0.002550670,0.003049623,-0.001624354,0.002579004,-0.001426249,-0.000588882,0.002482026,0.002177876,0.004517978,0.001962312,0.003350365,0.000797085,-0.004386803,-0.002588492,-0.000427377,0.001887309,0.001758292,0.003855386,-0.002096200,0.001675106,-0.002998640,0.004446175,-0.002822248,0.004153183,-0.000311449,0.003908654,0.001238267,0.004746101,-0.003776486,0.002945307,0.003975336,0.001787028,0.002847854,-0.002385633,-0.004190207,0.002514215,0.002822408,-0.004302884,-0.003043089,-0.003634417,0.001532008,0.004389332,-0.001287615,0.003245565,0.000321517,0.003210413,-0.000121637,0.003899426,0.000807648,-0.004524990,0.002463930,0.000824562,-0.000050452,0.004247010,0.003318256,0.000626681,-0.001242278,-0.000276118,-0.001470903,-0.002432091,-0.000956393,-0.004628008,0.002378333,-0.002314688,-0.004017786,-0.003177505,0.001363087,-0.004377664,0.003556205,-0.004129882,-0.000339673,-0.002583473,0.002170198,0.003288934,-0.004446351,-0.001516089,-0.003567090,0.001164676,0.001844926,0.003867152,0.001270182,0.003334168,-0.004383455,0.002012893,-0.000060215,0.004142641,0.004080990,0.004210973,-0.002638659,0.000479213,0.001851480,0.004564449,-0.003438693,0.004493677
0.004399624,0.004035682,0.000697936,0.004247237,0.002059278,-0.003452584,0.003548060,0.004240761,0.003514757,0.003210853,-0.000847083,0.003950995,-0.004332834,0.000596751,0.003049675,-0.004127180,0.001841687,-0.002763313,-0.000955597,-0.001407070,0.002841942,0.000690408,0.002620158,-0.001761163,-0.000884843,-0.003027077,-0.001375156,0.002526988,0.004846165,-0.004722992,-0.004492566,0.004268701,-0.000739816,-0.004498047,0.003048500,0.004140293,0.004327537,-0.000605210,0.002100012,-0.001628684,-0.003179468,-0.004928745,-0.001219748,0.001440234,0.004349819,0.002334262,0.000501344,-0.001987961,-0.003553948,0.001359077,0.003800764,0.003144713,0.004794118,0.000189692,0.004183372,-0.003757539,-0.001115601,-0.002257821,0.001507791,-0.002788340,-0.003596700,-0.003648319,-0.001757251,0.002770506,-0.002373127,0.002270610,0.002901798,-0.002945518,0.001693466,-0.001940831,0.000615295,-0.000239611,-0.000981431,-0.004320279,0.001840662,-0.001328830,-0.004685832,0.004383149,-0.002911736,0.001039827,0.004454552,0.004788126,0.001263409,-0.003745772,0.003819452,-0.003133278,0.003925006,0.002981337,-0.000106395,0.003665343,-0.003954073,0.003927421,-0.002100799,0.002870406,0.003933676,0.004709582,-0.001434668,0.000941973,-0.004834119,-0.001814729
0.000582425,-0.001995029,-0.002560596,-0.004738598,0.003012758,0.004315978,-0.002621417,0.004163972,0.001226903,0.004662769,0.000515285,-0.001389529,-0.000776563,0.001079848,-0.004944326,0.002206085,0.000968753,0.003908291,0.000643126,-0.004214401,0.002998922,0.004208522,0.000303126,0.004968326,0.004525483,0.000888703,-0.004517377,-0.003261171,0.002642208,0.000567887,-0.001135134,0.001714238,0.002215913,0.002685995,0.002986633,0.004685978,-0.004091960,0.004928145,0.002932473,0.003745180,0.002423043,0.000692887,0.003944248,0.000388811,0.002279771,-0.002728700,-0.000835884,-0.001745480,-0.000096281,-0.004591076,-0.002129493,0.004425991,0.003285552,0.000654756,-0.003665627,-0.003257605,0.004568901,0.000609758,-0.003364448,0.002718718,0.000069378,-0.001404978,0.004275855,0.003931813,-0.001676456,0.003637827,-0.000165096,-0.000660321,0.001008493,-0.002815404,0.002393979,0.001862974,-0.003340333,-0.001244314,-0.002043558,-0.002354992,-0.000278712,0.003556966,-0.004484300,0.002680563,0.003797257,0.004588718,0.003099740,0.004149638,-0.003230521,-0.002940395,-0.001050141,0.004884538,0.004025082,0.004515919,-0.002269387,0.003505049,0.004811891,-0.002474037,-0.003682449,0.002194555,0.000704110,0.003159191,0.004505565,-0.002750369
0.001807055,0.001482391,0.003640742,0.002861534,0.003277561,-0.000218878,-0.000768885,-0.001577405,-0.004086194,0.004821918,-0.003505459,0.001584166,-0.001537497,-0.004905193,0.003506108,0.003449397,0.004909414,0.000484279,0.003214687,0.000606185,0.000418593,0.000840798,-0.000861986,-0.002713780,0.003889919,0.003238820,0.002768204,-0.002533596,0.002615663,-0.004668020,-0.003513384,0.004600955,0.002624720,-0.003677261,0.001899035,-0.000282824,0.001430757,-0.000656093,-0.002860825,-0.001554281,0.001426420,0.004991319,0.003405737,0.002172158,-0.000982281,0.003735932,-0.002378379,-0.001531669,-0.002093826,-0.003966506,0.001287842,-0.003535219,0.004989313,-0.002133006,0.003139523,0.000749678,-0.003005003,0.001122474,-0.002744948,-0.003470656,-0.003714474,0.002273069,-0.003288239,-0.001081686,0.002220158,-0.000255012,0.000865234,-0.001924170,0.003614284,0.003238471,0.002771382,-0.004875888,0.004355529,0.003585251,-0.004386844,-0.002568674,-0.004573023,0.003315045,0.004361329,0.003574316,0.001287812,-0.004417873,-0.001776814,0.003493238,-0.004458631,0.004862845,0.000663698,0.000987754,0.003649038,0.004419119,-0.000541994,0.004886407,-0.004881249,-0.003972621,0.004757818,-0.002776639,0.004400791,0.002661295,-0.003599606,0.000035928
-0.004503912,-0.002072076,-0.000457462,-0.000368874,0.001431779,0.003057245,0.003365659,-0.001800904,-0.003175810,0.001901331,-0.004808282,0.002999293,0.001000900,0.001447439,0.000172942,-0.004838186,0.002396333,0.000850160,0.002267114,0.001246432,0.003786929,0.001308687,-0.004187817,0.001207123,-0.004095980,-0.000263701,-0.001943937,0.004231289,-0.002964900,0.001635242,-0.002017087,0.003685338,0.003554486,-0.001094241,-0.002703571,-0.000816441,-0.001537545,-0.003037880,-0.000820635,0.002205257,0.004092403,0.002465538,-0.000187139,0.002806864,-0.000222877,0.003836341,-0.000873528,0.004238372,-0.000493126,-0.004749946,0.001491639,0.003100158,-0.003779121,0.002891161,0.002078098,0.001985809,-0.001531198,-0.003276223,-0.000648208,-0.001039936,0.003858607,-0.002615700,0.003129110,-0.000803772,0.002626174,-0.001874710,-0.002252506,0.000729204,0.004132686,-0.004088869,0.000118010,0.001529291,-0.001083651,-0.002261329,0.003885066,-0.003143535,-0.000151218,0.004464128,0.003956762,-0.004390112,-0.004837497,0.004542808,0.000919564,0.001810841,0.000518207,0.001128588,-0.002939371,-0.003337226,0.002337038,0.002969255,0.002487508,-0.000099437,-0.004751501,-0.003946782,0.001651213,-0.000214328,-0.002519911,0.004607647,-0.002334000,-0.001199362
-0.004928974,0.004163571,-0.004896658,0.001367014,-0.000803271,-0.002624454,0.002833892,-0.001181352,-0.000350717,-0.000790759,-0.000013208,-0.000666526,0.001259258,0.003830937,0.003771543,0.003093390,0.002723618,0.002800283,-0.002805521,0.000336886,0.002897438,0.001515341,0.003131865,0.004439905,-0.004461104,0.000836212,-0.003164481,-0.000013326,-0.002782034,0.004385897,0.002504944,0.003992734,-0.000652203,-0.004125108,0.000493921,-0.001274538,-0.001654283,-0.003565131,0.003641885,-0.004812181,0.004956494,0.000330009,-0.004630533,0.000293294,0.004092799,0.002796956,0.002971912,-0.001625885,-0.000566098,-0.000209553,0.002592088,-0.001871402,0.003996106,-0.000418828,-0.002911846,-0.004681819,0.000806902,-0.001613116,-0.002911210,-0.000686847,-0.004011068,-0.003159445,-0.004610986,-0.003245871,-0.003116059,-0.003956487,0.004973822,0.004446249,0.004458625,0.000368943,-0.004045520,0.004459363,0.001085300,0.002918393,0.000486052,0.001345280,0.002322100,0.003861350,0.001934975,0.003498708,-0.000955241,-0.003971362,-0.002918935,0.001368464,-0.003327844,0.004203349,-0.001181577,-0.000750780,0.002406405,-0.003201805,-0.004511515,0.002237360,0.004612728,0.003026917,0.001306076,0.002819537,0.003314123,0.003639240,-0.003045269,-0.004627620
-0.003569922,0.000330057,0.004385530,0.003622187,0.003032313,-0.003085528,-0.002928551,-0.004725656,-0.002228942,0.004755533,0.000232557,-0.003698951,-0.001095745,0.001862028,-0.000544857,-0.002820791,-0.002464040,-0.003365880,0.003676980,-0.000354506,-0.001820753,-0.003382289,-0.003037914,0.002677164,0.001356682,-0.003128917,-0.001047414,0.002685058,0.001721020,0.000579543,0.001058701,0.000595501,-0.002236531,0.004029586,0.004638659,-0.000862967,0.001770840,0.004105789,-0.002693951,0.000586167,0.002524491,-0.002803808,0.001387484,0.002432521,0.000271937,0.002893984,-0.002446209,-0.003339823,-0.003093228,-0.001165797,0.003740537,0.003051803,0.001867001,0.002437808,-0.004622570,-0.000845934,0.002181866,0.000682652,-0.001372197,0.003214772,-0.001970941,0.003984753,-0.002554676,0.003219810,-0.003718485,-0.000084152,-0.002669760,0.004477538,0.000917570,0.001201726,-0.003155211,-0.004926162,-0.001386371,-0.003869604,0.003003376,0.000760497,0.001866077,0.003391706,-0.000751379,0.000811684,-0.001245629,-0.000102671,-0.004415276,0.000461854,-0.002732823,0.004164207,-0.000375621,-0.003578715,0.000946569,0.001409877,-0.000058599,0.001280894,-0.002413697,0.003419583,-0.000132068,-0.004132921,0.004702473,-0.001499305,0.001174167,0.000004004
0.004061591,-0.003528844,0.000508196,-0.004205417,-0.004561308,0.001123622,0.004632813,-0.001128932,0.003343439,-0.004224335,-0.001358706,0.000635367,-0.003886746,-0.002129512,-0.004123504,-0.000301319,-0.004013869,-0.000059615,-0.002384399,-0.002546186,-0.002686617,-0.004072641,0.003870830,-0.002490126,-0.001894392,-0.001251840,0.002121477,-0.004595706,-0.000873851,0.001418021,0.002918800,-0.000094019,0.001867510,-0.003844282,0.004292718,0.002601394,0.001220452,0.000749227,-0.002484608,0.000532627,0.001747751,0.004480196,-0.000170085,0.004815174,-0.004529788,-0.002678558,-0.004396016,0.002494818,-0.000864304,0.004998409,0.002090547,0.003660179,0.000655883,0.002594976,-0.004403094,0.004209508,-0.001105539,0.001273826,-0.001948062,-0.000470764,0.002582881,0.000853612,0.001838253,-0.003790727,-0.003302251,0.002862376,-0.002694332,0.001456507,-0.000187511,0.002911711,-0.000913213,0.000765257,0.004246960,-0.001369165,0.003125421,0.004798896,0.002299545,0.001138909,-0.002888136,-0.002645023,-0.004464292,-0.001133950,-0.004352375,-0.002426649,-0.004610454,0.003804179,0.003727813,0.000559402,-0.003605954,0.004553828,-0.004709322,-0.004228146,0.003613922,-0.000859950,0.001120219,0.004028273,-0.000722873,-0.003593198,0.003761207,-0.003529821
-0.002610515,-0.001513924,0.002249748,0.004394666,-0.001076142,-0.000830374,-0.002708044,0.002119424,-0.004507874,0.004735226,-0.000467914,0.002159383,-0.001237711,-0.002327546,-0.004485449,-0.004568680,-0.003471485,-0.001834344,0.004751876,0.003132673,0.002067684,-0.003088873,-0.000077247,0.004477009,-0.004725019,-0.003055657,0.000214931,0.001483929,0.000882963,-0.001433064,0.000071952,0.002993784,-0.003832532,0.002278110,0.001174448,0.004420365,-0.003510359,0.001050444,-0.001733937,0.004200966,-0.002603715,0.002580709,-0.003649752,0.003723222,0.003696071,-0.003155912,0.001875691,0.002170887,0.002401461,-0.000377849,0.001024881,-0.000820995,0.000932343,0.003985397,-0.000593116,0.001291899,-0.002647158,0.003066986,0.004803015,-0.004067435,-0.000637422,-0.000469290,-0.002002281,-0.001004849,0.000583382,0.000309143,-0.003146123,-0.003988509,-0.001090364,0.002824127,0.004664457,0.000664972,0.000510259,0.001906933,0.002304966,-0.003352575,0.002720332,-0.003337473,0.001810418,0.000188195,-0.001156295,0.001999078,0.001749165,0.004012133,0.004780558,0.000453578,-0.000921542,0.002788794,0.002092947,0.002518242,0.004577437,-0.001570703,0.000279791,-0.002159988,0.000250616,-0.001883890,0.002733525,0.003532090,-0.001746783,-0.003584358
0.001495907,0.004706251,0.001924489,0.002407656,-0.002717736,0.003532988,-0.002004257,-0.001417354,-0.001768720,-0.002942180,0.002133427,0.002368610,0.000655821,-0.003662645,-0.001549772,-0.004580202,-0.003901934,-0.001775475,-0.004376114,0.004710689,-0.003396035,-0.000064211,-0.000506707,0.001399909,0.003602629,-0.002926735,0.001253511,0.001899321,-0.002721695,-0.001623763,-0.003811655,0.002415461,0.000174196,-0.000856901,-0.003764977,-0.004573717,0.001286730,0.001049154,-0.004946705,-0.001917542,-0.003005622,0.002025295,0.000306656,0.003415600,-0.002934078,-0.003341220,-0.004485791,0.001262086,-0.003688653,-0.002744825,-0.003136090,-0.002978198,-0.002561376,0.004578986,0.002629119,0.001995340,0.002884744,-0.002269716,0.004303843,0.004950364,-0.004728658,-0.003060127,0.001616868,0.000564014,-0.003955293,0.004969637,-0.002506688,-0.003266509,-0.000159783,-0.000852137,-0.002008277,0.001947258,-0.001513864,0.003544710,-0.001437615,0.001462062,-0.004021433,0.003027402,0.003263205,0.002319596,0.003768360,-0.000109071,-0.003483753,0.001931388,-0.000406984,-0.002751034,-0.003324771,0.000529878,-0.003205302,0.003827305,-0.002402248,0.002341603,0.001521269,-0.002324604,0.002071467,0.000482224,-0.000443249,-0.004183101,-0.003153585,-0.003445678
0.001260047,-0.002828801,-0.004716149,-0.001662441,-0.002153185,-0.002106376,-0.001461217,-0.003701535,-0.000966081,-0.001894470,-0.002660606,-0.003653061,-0.001513066,0.000564956,-0.003814873,0.002442938,0.001471756,-0.001487325,0.002807789,-0.004854155,0.004097696,0.002402664,-0.000361273,0.001859629,0.000388060,0.000708121,0.002486899,0.001291184,0.004719518,0.003797235,0.001234346,-0.001051427,-0.002420838,0.004536666,0.001055030,-0.002534297,0.001966413,0.003905471,0.003804576,0.003283150,-0.004892209,0.002734637,-0.002859326,-0.004853325,0.001197146,0.001240689,-0.001795105,-0.004819203,-0.001704558,-0.002200101,-0.003268958,-0.000167097,0.001893432,-0.001170361,0.004518558,0.003997180,0.003444503,-0.004444042,-0.000583004,-0.004811881,-0.002529098,0.004550904,-0.004113932,0.002651085,0.001876554,0.004069814,0.001323539,-0.004760347,0.003362559,-0.000021758,-0.004662250,-0.000978935,-0.002236756,0.003798361,-0.003846924,0.002445487,-0.002691993,-0.003081509,-0.004734617,0.003331871,0.002581980,0.000235720,-0.003106468,-0.002818120,-0.002848818,-0.001025835,-0.001623353,-0.001164589,0.001954014,0.003230086,-0.000763662,0.004823037,-0.004290072,0.002578180,0.003607719,0.003926848,0.003136232,0.000696717,0.004950671,-0.001723229
0.000069785,-0.000984092,0.004371011,0.001382446,-0.002503760,0.002208635,0.003070087,0.002350721,0.002348333,-0.003342167,-0.003524914,-0.002928586,-0.002985520,0.004270495,-0.002962348,0.000684160,-0.001527267,-0.001193941,0.001550901,-0.001719981,-0.003279423,-0.004098625,-0.003103685,-0.001870776,-0.002552802,0.000451914,-0.003430785,-0.003346668,-0.002938616,-0.003493941,0.003255263,-0.004932823,0.001763470,-0.004427499,0.003352699,0.001580873,-0.003524664,0.003123550,-0.001787046,0.001691088,-0.002082454,0.003001458,0.003992999,-0.002797649,0.002057538,0.003448277,-0.003476268,0.002003564,-0.001617643,0.003724294,0.004041464,0.003194123,0.000938895,0.002460375,-0.000731631,0.002391569,-0.004241601,0.000379819,0.004204068,-0.003105665,0.004121041,-0.003642072,0.004993078,0.004487300,-0.002785370,0.000219495,-0.002639228,0.003690223,-0.003937371,-0.003932493,-0.003444071,-0.003932725,0.000795800,-0.003157678,-0.001477731,0.002902487,0.004722435,-0.000833997,-0.004406638,0.003833995,-0.002547798,0.002047658,-0.000589682,-0.003472134,0.001356578,0.003248756,0.000677165,-0.001540462,-0.000900298,-0.004755037,0.000561629,0.004203455,-0.003011465,0.004024420,-0.000012337,-0.002123598,-0.000260589,0.004310809,0.004263875,-0.000659135
|
Exercise 3/ADA2022_exercise3.ipynb | ###Markdown
Exercise 3 | TKO_2096 Applications of Data Analysis 2022 Water permeability prediction in forestry In this task, the client wants you to estimate the spatial prediction performance of K-nearest neighbor regression model with K=7 (7NN), using spatial leave-one-out cross-validation (i.e. SKCV, with number of folds == number of data points). The client wants you to use the C-index as the performance measure. In other words, the client wants you to answer the question: "What happens to the prediction performance of water permeability using 7-nearest neighbor regression model, when the geographical distance between known data and unknown data increases?".In this task, you have three data files available (with 1691 data points): - input.csv, contains the 75 predictor features. - output.csv, contains the water permeability values. - coordinates.csv, contains the corresponding geographical coordinate locations of the data points. The unit of the coordinates is metre, and you can use Euclidean distance to calculate distances between the coordinate points. Implement the following tasks to complete this exercise:******************************************** 1. Z-score standardize the predictor features (input.csv). 2. Perform spatial leave-one-out cross-validation with 7NN model for the provided data set (refer to the lectures 3.1.3 and 3.1.4 for help). Estimate the water permeability prediction performance (using 7NN model and C-index) with the following distance parameter values: d = 0, 10, 20, ..., 200 (that is, 10 meter intervals from 0m to 200m). 3. When you have calculated the C-index performance measure for each value of d, visualize the results with the C-index (y-axis) as a function of d (x-axis).********************************************Your .ipynb-file must include the following: - Your own implementation of the spatial leave-one-out cross-validation for the current task. Remember to also take advantage of earlier exercises (e.g. C-index). For the 7-nearest neighbor and Euclidean distance calculation you can use third-party libraries (e.g. Scikit-learn) if you want. - Plot of the graph C-index vs. distance parameter value. -- START IMPLEMENTING YOUR EXERCISE AFTER THIS LINE -- Import necessary libraries
###Code
# In this cell, import all the libraries that you need. For example:
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from scipy.spatial import distance
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Read in the datasets
###Code
# In this cell, read the files input.csv, output.csv and coordinates.csv.
# Print out the dataset dimensions (i.e. number of rows and columns).
#
# Note that the coordinates are in EUREF-TM35FIN format, so you
# can use the Euclidean distance to calculate the distance between two coordinate points.
input = np.genfromtxt('data/input.csv', delimiter = ',')
output = np.genfromtxt('data/output.csv', delimiter = ',')
coordinates = np.genfromtxt('data/coordinates.csv', delimiter = ',')
print('Input:', input.shape)
print('Output:', output.shape)
print('Coordinates:', coordinates.shape)
###Output
Input: (1691, 75)
Output: (1691,)
Coordinates: (1691, 2)
###Markdown
Standardization of the predictor features (input.csv)
###Code
# Standardize the predictor features (input.csv) by removing the mean and scaling to unit variance.
# In other words, z-score the predictor features. You are allowed to use third-party libraries for doing this.
input_z = np.asarray(StandardScaler().fit_transform(input))
# print(input_z[:5])
###Output
_____no_output_____
###Markdown
Functions
###Code
# Include here all the functions (for example the C-index-function) that you need in order to implement the task.
# C-index score function from previous exercise
def cindex(true_labels, pred_labels):
n = 0
h_num = 0
for i in range(0, len(true_labels)):
t = true_labels[i]
p = pred_labels[i]
for j in range(i+1, len(true_labels)):
nt = true_labels[j]
np = pred_labels[j]
if (t != nt):
n = n + 1
if (p < np and t < nt) or (p > np and t > nt):
h_num += 1
elif (p == np):
h_num += 0.5
cindx = h_num /n
return cindx
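# Quick sanity check of the C-index implementation (illustrative values only,
# not from the exercise data): with perfectly ordered predictions every
# comparable pair is concordant, so the C-index should be exactly 1.0.
assert cindex([1, 2, 3], [0.1, 0.2, 0.3]) == 1.0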
# Expected parameters: X = input, y = output
def skcv_cindex(X, y, d_matrix, delta, neighbors):
preds = []
for i in range(len(X)):
# Point at i as test set
X_test = X[i].reshape(1, -1)
# Train sets formed from points where distance to the test point is greater than current delta
X_train = X[d_matrix[i] > delta]
y_train = y[d_matrix[i] > delta]
# Basic kNN prediction calculating
knn = KNeighborsRegressor(n_neighbors = neighbors, metric = 'euclidean')
knn.fit(X_train, y_train)
pred = knn.predict(X_test)
preds.append(pred)
# Calculate and return the C-index score
return cindex(y, preds)
###Output
_____no_output_____
###Markdown
Results for spatial leave-one-out cross-validation with 7-nearest neighbor regression model
###Code
# In this cell, run your script for the Spatial leave-One-Out cross-validation
# with 7-nearest neighbor regression model and visualize the results as
# requested in the task assignment.
# Use scipy.spatial.distance to calculate euclidean distances
d_matrix = distance.cdist(coordinates, coordinates, metric = 'euclidean')
# Prepare variables for loop
deltas = range(0, 201, 10)
cindex_scores = []
# Spatial LOOCV
# Execution can take a while
# Loop through all values for delta
for d in deltas:
# SKCV using our function
score = skcv_cindex(input_z, output, d_matrix, d, 7)
# Add to list of scores
cindex_scores.append(score)
print(d)
###Output
0
10
20
30
40
50
60
70
80
90
100
110
120
130
140
150
160
170
180
190
200
###Markdown
Interpretation of the results
###Code
# In this cell, give a brief commentary on the results, what happens to the prediction
# performance as the prediction distance increases?
plt.plot(deltas, cindex_scores)
plt.xlabel('Delta')
plt.ylabel('C-index')
plt.grid()
###Output
_____no_output_____ |
jupyter/training/english/entity-ruler/EntityRuler_LightPipeline.ipynb | ###Markdown
This notebook tests serialization and LightPipeline for EntityRuler
###Code
import sparknlp
from sparknlp.base import *
from sparknlp.annotator import *
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
# Start the Spark session used below (assumed setup step; the original cell
# relied on 'spark' already existing in the environment)
spark = sparknlp.start()
print("Spark NLP version", sparknlp.version())
data = spark.createDataFrame([[""]]).toDF("text")
entity_ruler = EntityRulerApproach() \
.setInputCols(["document", "token"]) \
.setOutputCol("entity") \
.setPatternsResource("sample_data/patterns.json")
entity_ruler_model = entity_ruler.fit(data)
entity_ruler_model.write().overwrite().save("tmp_entity_ruler_model")
entity_ruler_loaded = EntityRulerModel().load("tmp_entity_ruler_model")
document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols("document") \
.setOutputCol("token") \
.setExceptions(["John Snow", "Eddard Stark"])
pipeline = Pipeline(stages=[document_assembler, tokenizer, entity_ruler])
pipeline_model = pipeline.fit(data)
light_pipeline = LightPipeline(pipeline_model)
result = light_pipeline.annotate("Lord Eddard Stark was the head of House Stark. John Snow lives in Winterfell.")
print(result)
###Output
{'document': ['Lord Eddard Stark was the head of House Stark. John Snow lives in Winterfell.'], 'token': ['Lord', 'Eddard Stark', 'was', 'the', 'head', 'of', 'House', 'Stark', '.', 'John Snow', 'lives', 'in', 'Winterfell', '.'], 'entity': ['Eddard Stark', 'John Snow', 'Winterfell']}
|
notebooks/milestone3/Milestone3-Task4.ipynb | ###Markdown
DSCI 525 - Web and Cloud Computing Group 04: Heidi Ye, Junting He, Kamal Moravej, Tanmay Sharma Date: 23-04-2021 Repo Link: https://github.com/UBC-MDS/group4-525 Milestone 3: Task 4 We haven't discussed MLlib in detail in our class, so consider MLlib as another Python package that you are using, like scikit-learn. Whatever you write using this package, PySpark will run on the Spark engine. I have put guidelines and helpful links (as comments) along with this notebook for taking you through this. Imports
###Code
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
###Output
Starting Spark application
###Markdown
Read Read 100 data points for testing the code, once you get to the bottom then read the entire dataset
###Code
aws_credentials = {"key": "","secret": ""}
## here 100 data points for testing the code
pandas_df = pd.read_csv("s3://mds-s3-student47/output/ml_data_SYD.csv", storage_options=aws_credentials, index_col=0, parse_dates=True).iloc[:100].dropna()
feature_cols = list(pandas_df.drop(columns="Observed").columns)
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "Observed")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/) as a reference. All that you need to complete this task is in there. Some additional info [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression). When using Spark documentation, always keep in mind that APIs sometimes change with versions; new updates/features come in every version release, so always make sure you choose the documentation of the correct Spark version. Please find the version you use [here](http://spark.apache.org/docs/).Use these parameters for coming up with ideal parameters; you could try more parameters, but unfortunately with this single node cluster we don't have enough power to do it. - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed")
###Code
##Once you finish testing the model on 100 data points, then load the entire dataset and run; this could take ~15 min.
## write code here.
#Initialize Random Forest object
rf = RandomForestRegressor(labelCol="Observed", featuresCol="Features")
#Create a parameter grid for tuning the model
rfparamGrid = (ParamGridBuilder()
.addGrid(rf.numTrees, [5, 20, 100])
.addGrid(rf.maxDepth, [5, 10])
.addGrid(rf.bootstrap, [True,False])
.build())
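# NOTE (added comment): the guideline above lists numTrees = [10, 50, 100];
# this grid uses [5, 20, 100] instead, so adjust it if you want to follow the
# stated guideline exactly.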
#Define how you want the model to be evaluated
rfevaluator = RegressionEvaluator(predictionCol="prediction", labelCol="Observed", metricName="rmse")
#Define the type of cross-validation you want to perform
rfcv = CrossValidator(estimator = rf,
estimatorParamMaps = rfparamGrid,
evaluator = rfevaluator,
numFolds = 5)
#Fit the model to the data
rfcvModel = rfcv.fit(training)
print(rfcvModel)
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(rfcvModel.avgMetrics):.2f}")
print(f"numTrees: {rfcvModel.bestModel.getNumTrees}")
print(f"numTrees: {rfcvModel.bestModel.getMaxDepth()}")
###Output
_____no_output_____
###Markdown
Tuning hyperparameters of the model for the entire dataset
###Code
# read the whole dataset
pandas_df = pd.read_csv("s3://mds-s3-student40/output/ml_data_SYD.csv",
storage_options=aws_credentials, index_col=0,
parse_dates=True).dropna()
feature_cols = list(pandas_df.drop(columns="Observed").columns)
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "Observed")
#Initialize Random Forest object
rf = RandomForestRegressor(labelCol="Observed", featuresCol="Features")
#Create a parameter grid for tuning the model
rfparamGrid = (ParamGridBuilder()
.addGrid(rf.maxDepth, [5, 10])
.addGrid(rf.bootstrap, [True,False])
.addGrid(rf.numTrees, [5, 20, 100])
.build())
#Define how you want the model to be evaluated
rfevaluator = RegressionEvaluator(predictionCol="prediction", labelCol="Observed", metricName="rmse")
#Define the type of cross-validation you want to perform
rfcv = CrossValidator(estimator = rf,
estimatorParamMaps = rfparamGrid,
evaluator = rfevaluator,
numFolds = 5)
#Fit the model to the data
rfcvModel = rfcv.fit(training)
# Print best model info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(rfcvModel.avgMetrics):.2f}")
print(f"numTrees: {rfcvModel.bestModel.getNumTrees}")
print(f"numTrees: {rfcvModel.bestModel.getMaxDepth()}")
###Output
_____no_output_____
###Markdown
Task 4 We haven't discussed MLlib in detail in our class, so consider MLlib as another python package that you are using, like the scikit-learn. What you write using this package, pyspark will be using the spark engine to run your code. I have put guidelines and helpful links (as comments) along with this notebook for taking you through this. Imports
###Code
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
###Output
Starting Spark application
###Markdown
Read Read 100 data points for testing the code, once you get to the bottom then read the entire dataset
###Code
# aws_credentials = {
# "key": "",
# "secret": "",
# } # removed secret and key when submitting the notebook
## After testing code with 100 data points, we read the entire dataset here
pandas_df = (
pd.read_csv(
"s3://mds-s3-student96/ml_data_SYD.csv",
storage_options=aws_credentials,
index_col=0,
parse_dates=True,
)
# .iloc[:100]
.dropna()
)
feature_cols = list(pandas_df.drop(columns="Observed").columns)
pandas_df.head()
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "Observed")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/) as a reference. All what you need to complete this task are in there. Some additional info [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression). When using spark documentation always keep in my API sometimes change with versions, new updates/features come in every version release, so always make sure you choose the documentation of the correct spark version. Please find version what you use [here](http://spark.apache.org/docs/).Use these parameters for coming up with ideal parameters, you could try more parameters, but unfourtunately with this single node cluster we dont have enough power to do it. - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed")
###Code
##Once you finish testing the model on 100 data points, then load entire dataset and run , this could take ~15 min.
## write code here.
# initialise RandomForest Object
rf = RandomForestRegressor(labelCol="Observed", featuresCol="Features")
# Create a parameter grid for tuning the model
rfparamGrid = (
ParamGridBuilder()
.addGrid(rf.numTrees, [10, 50, 100])
.addGrid(rf.maxDepth, [5, 10])
.addGrid(rf.bootstrap, [False, True])
.build()
)
# Define how we want the model to be evaluated
rfevaluator = RegressionEvaluator(
labelCol="Observed", metricName="rmse", predictionCol="prediction"
)
# Define a 5-fold cross validation
rfcv = CrossValidator(
estimator=rf, estimatorParamMaps=rfparamGrid, evaluator=rfevaluator, numFolds=5
)
# Fit the model to the data
cvModel = rfcv.fit(training)
print(cvModel)
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(cvModel.avgMetrics):.2f}")
print(f"numTrees: {cvModel.bestModel.getNumTrees}")
print(f"maxDepth: {cvModel.bestModel.getMaxDepth()}")
###Output
_____no_output_____
###Markdown
DSCI 525 - Web and Cloud Computing Project: Daily Rainfall Over NSW, Australia Milestone 3: Setup Spark Cluster and Develop Machine Learning Group members: Arash, Charles, Neel Task 4 We haven't discussed MLlib in detail in our class, so consider MLlib as another python package that you are using, like the scikit-learn. What you write using this package, pyspark will be using the spark engine to run your code. I have put guidelines and helpful links (as comments) along with this notebook for taking you through this. Imports
###Code
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
###Output
Starting Spark application
###Markdown
Read the Entire Data
###Code
aws_credentials = {"key": "", "secret": ""}  # access key and secret removed before sharing the notebook
## reading the entire dataset here
pandas_df = pd.read_csv("s3://mds-s3-student70/output/ml_data_SYD.csv", index_col=0, parse_dates=True).dropna()
feature_cols = list(pandas_df.drop(columns="rain (mm/day)").columns)
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "rain (mm/day)")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/) as a reference. All what you need to complete this task are in there. Some additional info [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression). When using spark documentation always keep in my API sometimes change with versions, new updates/features come in every version release, so always make sure you choose the documentation of the correct spark version. Please find version what you use [here](http://spark.apache.org/docs/).Use these parameters for coming up with ideal parameters, you could try more parameters, but unfourtunately with this single node cluster we dont have enough power to do it. - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed")
###Code
##Once you finish testing the model on 100 data points, then load entire dataset and run , this could take ~15 min.
## write code here.
# Split the data into training and test sets (30% held out for testing)
(train, test) = training.randomSplit([0.7, 0.3], seed=123)
# Train a RandomForest model.
rf = RandomForestRegressor(labelCol='rain (mm/day)', featuresCol="Features")
rfparamGrid = (ParamGridBuilder()
.addGrid(rf.bootstrap, [True, False])
.addGrid(rf.maxDepth, [5, 10])
.addGrid(rf.numTrees, [10, 50, 100])
.build())
rfevaluator = RegressionEvaluator(labelCol="rain (mm/day)")
rfcv = CrossValidator(estimator = rf,
estimatorParamMaps = rfparamGrid,
evaluator = rfevaluator,
numFolds = 5)
rfcvModel = rfcv.fit(training)
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(rfcvModel.avgMetrics):.2f}")
print(f"numTrees: {rfcvModel.bestModel.getNumTrees}")
print(f"numDepths: {rfcvModel.bestModel.getMaxDepth()}")
print(f"BootStrap: {rfcvModel.bestModel.getBootstrap()}")
###Output
_____no_output_____
###Markdown
Task 3 (Guided Exercise) This notebook is part of Milestone 3, task 3 and is a guided exercise. I have put guidelines and helpful links (as comments) along with this notebook to take you through it. In this exercise you will be using Spark's MLlib. The idea is to tune some hyperparameters of a Random Forest to find an optimum model. Once we know the optimum settings, we'll train a Random Forest in sklearn (task 4) and save it with joblib (task 5), so that we can use it next week to deploy. Here, consider MLlib as another Python package that you are using, like scikit-learn. You will see many classes and methods in MLlib similar to scikit-learn's for various ML-related tasks; you might also notice that some of them are not yet implemented in MLlib. What you write using the pyspark package will run on the Spark engine, and hence get all the benefits of distributed computing that we discussed in class. NOTE: Whenever you use Spark, make sure that you refer to the right documentation based on the version you are using. [Here](https://spark.apache.org/docs/) you can select the Spark version and go to the correct documentation. In our case we are using Spark 3.1.2, and here are the links to the Spark documentation that you can refer to: - [MLlib Documentation](https://spark.apache.org/docs/3.1.2/ml-guide.html)- [MLlib API Reference](https://spark.apache.org/docs/3.1.2/api/python/reference/pyspark.ml.html) You may notice that there are RDD-based and DataFrame-based (Main Guide) APIs available in the documentation. You want to focus on the DataFrame-based API, as no one these days uses the RDD-based API. We will discuss the difference in class. Before you start this notebook, make sure that you are using EMR JupyterHub and that the kernel you selected is PySpark. Import necessary libraries
###Code
from pyspark.ml import Pipeline
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
from pyspark.ml.feature import VectorAssembler, UnivariateFeatureSelector
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor as sparkRFR
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
###Output
Starting Spark application
###Markdown
Read the data To start with; read 100 data points for development purpose. Once your code is ready then try on the whole dataset.
###Code
## Depending on the permissions that you provided to your bucket you might need to provide your aws credentials
## to read from the bucket, if so provide with your credentials and pass as storage_options=aws_credentials
# aws_credentials = {"key": "","secret": "","token":""}
## here 100 data points for testing the code
# pandas_df = pd.read_csv("s3://mds-s3-001/output/ml_data_SYD.csv", index_col=0, parse_dates=True).iloc[:100].dropna()
pandas_df = pd.read_csv("s3://mds-s3-001/output/ml_data_SYD.csv", index_col=0, parse_dates=True).dropna()
feature_cols = list(pandas_df.drop(columns="observed_rainfall").columns)
pandas_df
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# It is automatically created for you in this notebook.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "observed_rainfall")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression).Here we will be mainly using following classes and methods;- [RandomForestRegressor](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.regression.RandomForestRegressor.html)- [ParamGridBuilder](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.tuning.ParamGridBuilder.html) - addGrid - build- [CrossValidator](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.tuning.CrossValidator.html) - fitUse these parameters for coming up with ideal parameters, you could try more parameters, but make sure you have enough power to do it. But you are required to try only following parameters. This will take around 15 min on entire dataset.... - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed") ***Additional reference:*** You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/).Some additional reading [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)
###Code
##Once you finish testing the model on 100 data points, then load entire dataset and run , this could take ~15 min.
## write code here.
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
from pyspark.ml.evaluation import RegressionEvaluator
rf = RandomForestRegressor(labelCol="observed_rainfall", featuresCol="Features")
rfparamGrid = (ParamGridBuilder()
.addGrid(rf.maxDepth, [5, 10])
.addGrid(rf.bootstrap, [False, True])
.addGrid(rf.numTrees, [10, 50, 100])
.build())
rfevaluator = RegressionEvaluator(labelCol="observed_rainfall", metricName="rmse")
rfcv = CrossValidator(estimator = rf,
estimatorParamMaps = rfparamGrid,
evaluator = rfevaluator,
numFolds = 5)
rfcvModel = rfcv.fit(training)
print(rfcvModel)
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(rfcvModel.avgMetrics):.2f}")
print(f"numTrees: {rfcvModel.bestModel.getNumTrees}")
print(f"MaxDepth: {rfcvModel.bestModel.getMaxDepth()}")
###Output
_____no_output_____
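###Markdown
As described at the start of this task, the tuned settings above are then used to train a Random Forest in sklearn and save it with joblib for deployment. A minimal sketch of that follow-up step, assuming the best values printed above (hard-coded here as placeholders) and an illustrative output path `model.joblib`.
###Code
from joblib import dump
from sklearn.ensemble import RandomForestRegressor
# Placeholder values: copy in the numTrees / maxDepth reported by the best MLlib model above
best_num_trees = 100
best_max_depth = 5
# Train an sklearn Random Forest on the same pandas dataframe prepared earlier
sk_model = RandomForestRegressor(n_estimators=best_num_trees, max_depth=best_max_depth, n_jobs=-1)
sk_model.fit(pandas_df[feature_cols], pandas_df["observed_rainfall"])
# Save the fitted model so it can be deployed later (path is illustrative)
dump(sk_model, "model.joblib")
###Output
_____no_output_____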
###Markdown
Task 4 We haven't discussed MLlib in detail in our class, so consider MLlib as another python package that you are using, like the scikit-learn. What you write using this package, pyspark will be using the spark engine to run your code. I have put guidelines and helpful links (as comments) along with this notebook for taking you through this. Imports
###Code
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
###Output
Starting Spark application
###Markdown
Read Read 100 data points for testing the code, once you get to the bottom then read the entire dataset
###Code
aws_credentials = {"key": "", "secret": ""}  # fill in credentials if the bucket is not public
## reading the entire dataset here
pandas_df = pd.read_csv("s3://mds-s3-student96/ml_data_SYD.csv", storage_options=aws_credentials, index_col=0, parse_dates=True).dropna()
feature_cols = list(pandas_df.drop(columns="Observed").columns)
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "Observed")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/) as a reference. All what you need to complete this task are in there. Some additional info [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression). When using spark documentation always keep in my API sometimes change with versions, new updates/features come in every version release, so always make sure you choose the documentation of the correct spark version. Please find version what you use [here](http://spark.apache.org/docs/).Use these parameters for coming up with ideal parameters, you could try more parameters, but unfourtunately with this single node cluster we dont have enough power to do it. - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed")
###Code
##Once you finish testing the model on 100 data points, then load entire dataset and run , this could take ~15 min.
## write code here.
model = RandomForestRegressor(labelCol="Observed", featuresCol="Features")
paramGrid = ParamGridBuilder().addGrid(model.numTrees, [10,50,100]).addGrid(
model.maxDepth, [5,10]).addGrid(model.bootstrap, [False, True]).build()
crossval = CrossValidator(estimator=model,
estimatorParamMaps=paramGrid,
evaluator=RegressionEvaluator(labelCol = 'Observed'),
numFolds=3)
cvModel = crossval.fit(training)
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(cvModel.avgMetrics):.2f}")
print(f"numTrees: {cvModel.bestModel.getNumTrees}")
print(f"numTrees: {cvModel.bestModel.getMaxDepth()}")
###Output
_____no_output_____
###Markdown
Task 4*By Group III: Mitchie, Jianru, Aishwarya, Aditya**Date: April 24, 2021* We haven't discussed MLlib in detail in our class, so consider MLlib as another python package that you are using, like the scikit-learn. What you write using this package, pyspark will be using the spark engine to run your code. I have put guidelines and helpful links (as comments) along with this notebook for taking you through this. Imports
###Code
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('abc').getOrCreate()
import pyspark
#from pyspark import SparkContext
from pyspark.sql import SparkSession
#from pyspark.sql import SQLContext
###Output
_____no_output_____
###Markdown
Read Read 100 data points for testing the code, once you get to the bottom then read the entire dataset
###Code
aws_credentials = {'key': ' ','secret': ' '}
## here 100 data points for testing the code
pandas_df = pd.read_csv("s3://mds-s3-student91/output/ml_data_SYD.csv", storage_options=aws_credentials, index_col=0, parse_dates=True).iloc[:100].dropna()
feature_cols = list(pandas_df.drop(columns="observed_rainfall").columns)
aws_credentials = {'key': ' ','secret': ' '}
## here pass the whole dataset
pandas_df = pd.read_csv("s3://mds-s3-student91/output/ml_data_SYD.csv", storage_options=aws_credentials, index_col=0, parse_dates=True).dropna()
feature_cols = list(pandas_df.drop(columns="observed_rainfall").columns)
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "observed_rainfall")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/) as a reference. All what you need to complete this task are in there. Some additional info [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression). When using spark documentation always keep in my API sometimes change with versions, new updates/features come in every version release, so always make sure you choose the documentation of the correct spark version. Please find version what you use [here](http://spark.apache.org/docs/).Use these parameters for coming up with ideal parameters, you could try more parameters, but unfourtunately with this single node cluster we dont have enough power to do it. - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed")
###Code
##Once you finish testing the model on 100 data points, then load entire dataset and run , this could take ~15 min.
## write code here.
rf = RandomForestRegressor(labelCol="observed_rainfall", featuresCol="Features")
rfparamGrid = (ParamGridBuilder()
.addGrid(rf.maxDepth, [5, 10])
.addGrid(rf.bootstrap, [False, True])
.addGrid(rf.numTrees, [10, 50, 100])
.build())
rfevaluator = RegressionEvaluator(labelCol="observed_rainfall", metricName="rmse")
rfcv = CrossValidator(estimator = rf,
estimatorParamMaps = rfparamGrid,
evaluator = rfevaluator,
numFolds = 5)
rfcvModel = rfcv.fit(training)
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(rfcvModel.avgMetrics):.2f}")
print(f"numTrees: {rfcvModel.bestModel.getNumTrees}")
print(f"MaxDepth: {rfcvModel.bestModel.getMaxDepth()}")
###Output
_____no_output_____
###Markdown
Task 4 (Guided Exercise) This notebook is part of Milestone 3, task 3 and is a guided exercise. I have put guidelines and helpful links (as comments) along with this notebook to take you through this.In this exercise you will be using Spark's MLlib. The idea is to tune some hyperparameters of a Random Forest to find an optimum model. Once we know the optimum settings, we'll train a Random Forest in sklearn (task 4)and save it with joblib (task 5) (so that we can use it next week to deploy).Here consider MLlib as another python package that you are using, like the scikit-learn. You will be seeing many scikit-learn similar classes and methods available in MLlib for various ML related tasks, you might also notice that some of them are not yet implimented in MLlib. What you write using pyspark package will be using the spark engine to run your code, and hence all the benefits of distributed computing what we discussed in class.NOTE: Here whenever you use spark makes sure that you refer to the right documentation based on the version what you will be using. [Here](https://spark.apache.org/docs/) you can select the version of the spark and go to the correct documentation. In our case we are using spark 3.1.2, and here is the link to spark documetation that you can refer to,- [MLlib Documentation](https://spark.apache.org/docs/3.1.2/ml-guide.html)- [MLlib API Reference](https://spark.apache.org/docs/3.1.2/api/python/reference/pyspark.ml.html)You may notice that there are RDD-based API and DataFrame-based (Main Guide) API available in the documentation. You want to focus on DataFrame based API as no one these days use RDD based API. We will discuss the difference in class.Before you start this notebook make sure that you are using EMR jupyterHub and the kernal that you selected is PySpark. Import necessary libraries
###Code
from pyspark.ml import Pipeline
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
from pyspark.ml.feature import VectorAssembler, UnivariateFeatureSelector
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor as sparkRFR
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
###Output
Starting Spark application
###Markdown
Read the data To start with; read 100 data points for development purpose. Once your code is ready then try on the whole dataset.
###Code
aws_credentials = {"key": "ASIA6GBAHHZH4ZOEGCVK","secret": "fPuqpGDCgg2hnnZtHWmsqLvRjSBPu7GxnRLsaD9x", "token": "FwoGZXIvYXdzELf//////////wEaDD65jIpKf242x8J88iLGAQp4utwkdejKJJ8atkFlAlfhFd/STs9CLSUmLNhYVJ1hzy2nQ1kPax8OptFtgL67BcFBqAB5r56RU3WJoJONFJoMvymI70MGtFbEiM6fvO8EyDKEHIiErdCWQZOeSv1QwBFhtXRVYrvdZgCAbaHyLi6iJm4BIeMuWUOkJpltqKyXtHHQI8x89Ue5/N0iFRY3ifIfWbJV/9kCwBdqH2OSibWHiQ8bQGL0UnxlOBriuhT85xf7G8zUjs/FPyd3osaOjqspwSmX7yiL69eSBjItMohigSM9MoTejBwMNgHmtZZZ7v3IfW5o12CjO3diMwIVMdAgv87vqj5dPSSh"}
## here 100 data points for testing the code
pandas_df = pd.read_csv("s3://mds-s3-28/output/ml_data_SYD.csv",
storage_options=aws_credentials,
index_col=0,
parse_dates=True).dropna()
feature_cols = list(pandas_df.drop(columns="observed").columns)
pandas_df.head()
feature_cols = list(pandas_df.drop(columns="observed").columns)
feature_cols
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# It is automatically created for you in this notebook.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "observed")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression).Here we will be mainly using following classes and methods;- [RandomForestRegressor](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.regression.RandomForestRegressor.html)- [ParamGridBuilder](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.tuning.ParamGridBuilder.html) - addGrid - build- [CrossValidator](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.tuning.CrossValidator.html) - fitUse these parameters for coming up with ideal parameters, you could try more parameters, but make sure you have enough power to do it. But you are required to try only following parameters. This will take around 15 min on entire dataset.... - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed") ***Additional reference:*** You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/).Some additional reading [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)
###Code
##Once you finish testing the model on 100 data points, then load entire dataset and run , this could take ~15 min.
## write code here.
from pyspark.ml.regression import RandomForestRegressor
# from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
# from pyspark.ml.evaluation import RegressionEvaluator
(trainingData, testData) = training.randomSplit([0.8, 0.2])
rf = RandomForestRegressor(labelCol="Observed", featuresCol="Features")
rfparamGrid = (ParamGridBuilder()
.addGrid(rf.maxDepth, [5, 10])
.addGrid(rf.bootstrap, [False, True])
.addGrid(rf.numTrees, [10, 50, 100])
.build())
rfevaluator = RegressionEvaluator(predictionCol="prediction", labelCol="observed", metricName="rmse")
rfcv = CrossValidator(estimator = rf,
estimatorParamMaps = rfparamGrid,
evaluator = rfevaluator,
numFolds = 5)
rfcvModel = rfcv.fit(trainingData)
print(rfcvModel)
rfpredictions = rfcvModel.transform(testData)
print('RMSE:', rfevaluator.evaluate(rfpredictions))
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(rfcvModel.avgMetrics):.2f}")
print(f"numTrees: {rfcvModel.bestModel.getNumTrees}")
print(f"numTrees: {rfcvModel.bestModel.getMaxDepth()}")
###Output
_____no_output_____
###Markdown
Task 4 We haven't discussed MLlib in detail in our class, so consider MLlib as another python package that you are using, like the scikit-learn. What you write using this package, pyspark will be using the spark engine to run your code. I have put guidelines and helpful links (as comments) along with this notebook for taking you through this. Imports
###Code
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
###Output
_____no_output_____
###Markdown
Read Read 100 data points for testing the code, once you get to the bottom then read the entire dataset
###Code
aws_credentials = {"key": "","secret": ""}
## here 100 data points for testing the code
pandas_df = pd.read_csv("s3://mds-s3-student96/ml_data_SYD.csv", storage_options=aws_credentials, index_col=0, parse_dates=True).iloc[:100].dropna()
feature_cols = list(pandas_df.drop(columns="Observed").columns)
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "Observed")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/) as a reference. All what you need to complete this task are in there. Some additional info [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression). When using spark documentation always keep in my API sometimes change with versions, new updates/features come in every version release, so always make sure you choose the documentation of the correct spark version. Please find version what you use [here](http://spark.apache.org/docs/).Use these parameters for coming up with ideal parameters, you could try more parameters, but unfourtunately with this single node cluster we dont have enough power to do it. - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed")
###Code
##Once you finish testing the model on 100 data points, then load the entire dataset and run; this could take ~15 min.
## write code here.
# Build the model, parameter grid and 5-fold cross-validation described above, so cvModel is defined before the prints below
rf = RandomForestRegressor(labelCol="Observed", featuresCol="Features")
rfparamGrid = (ParamGridBuilder().addGrid(rf.numTrees, [10, 50, 100])
               .addGrid(rf.maxDepth, [5, 10]).addGrid(rf.bootstrap, [False, True]).build())
rfcv = CrossValidator(estimator=rf, estimatorParamMaps=rfparamGrid,
                      evaluator=RegressionEvaluator(labelCol="Observed"), numFolds=5)
cvModel = rfcv.fit(training)
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(cvModel.avgMetrics):.2f}")
print(f"numTrees: {cvModel.bestModel.getNumTrees}")
print(f"maxDepth: {cvModel.bestModel.getMaxDepth()}")
###Output
_____no_output_____
###Markdown
DSCI 525 - Web and Cloud Computing Project: Daily Rainfall Over NSW, Australia Milestone 3: Setup Spark Cluster and Develop Machine Learning Authors: Group 24 Huanhuan Li, Nash Makhija and Nicholas Wu Task 4 We haven't discussed MLlib in detail in our class, so consider MLlib as another python package that you are using, like the scikit-learn. What you write using this package, pyspark will be using the spark engine to run your code. I have put guidelines and helpful links (as comments) along with this notebook for taking you through this. Imports
###Code
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
###Output
Starting Spark application
###Markdown
Read the Entire Data
###Code
aws_credentials = {"key": "","secret": ""}
pandas_df = pd.read_csv("s3://mds-s3-student82/output/ml_data_SYD.csv", storage_options=aws_credentials, index_col=0, parse_dates=True).dropna()
feature_cols = list(pandas_df.drop(columns="Observed").columns)
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
#split entire dataset to train and test set
train_df = pandas_df.sample(frac=0.8,random_state=200)
test_df = pandas_df.drop(train_df.index)
training = spark.createDataFrame(train_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "Observed")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/) as a reference. All what you need to complete this task are in there. Some additional info [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression). When using spark documentation always keep in my API sometimes change with versions, new updates/features come in every version release, so always make sure you choose the documentation of the correct spark version. Please find version what you use [here](http://spark.apache.org/docs/).Use these parameters for coming up with ideal parameters, you could try more parameters, but unfourtunately with this single node cluster we dont have enough power to do it. - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed")
###Code
##Once you finish testing the model on 100 data points, then load entire dataset and run , this could take ~15 min.
## write code here.
#initialize RandomForest Object
rf = RandomForestRegressor(labelCol="Observed", featuresCol="Features")
#Create a parameter grid for tuning the model
rfparamGrid = (ParamGridBuilder()
               .addGrid(rf.maxDepth, [5, 10])
               .addGrid(rf.numTrees, [10, 50, 100])
               .addGrid(rf.bootstrap, [False, True])
               .build())
#Define how we want the model to be evaluated
rfevaluator = RegressionEvaluator(labelCol="Observed")
# Create 5-fold CrossValidator
rfcv = CrossValidator(estimator = rf,
estimatorParamMaps = rfparamGrid,
evaluator = rfevaluator,
numFolds = 5)
# Fit the model to the data
rfcvModel = rfcv.fit(training)
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(rfcvModel.avgMetrics):.2f}")
print(f"numTrees: {rfcvModel.bestModel.getNumTrees}")
print(f"numDepths: {rfcvModel.bestModel.getMaxDepth()}")
print(f"BootStrap: {rfcvModel.bestModel.getBootstrap()}")
###Output
_____no_output_____
###Markdown
Task 4 We haven't discussed MLlib in detail in our class, so consider MLlib as another python package that you are using, like the scikit-learn. What you write using this package, pyspark will be using the spark engine to run your code. I have put guidelines and helpful links (as comments) along with this notebook for taking you through this.
###Code
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
###Output
Starting Spark application
###Markdown
Read Read 100 data points for testing the code, once you get to the bottom then read the entire dataset
###Code
aws_credentials = {"key": "...","secret": "..."}
## here 100 data points for testing the code
pandas_df = pd.read_csv("s3://mds-s3-student96/ml_data_SYD.csv", storage_options=aws_credentials, index_col=0, parse_dates=True).iloc[:100].dropna()
feature_cols = list(pandas_df.drop(columns="Observed").columns)
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "Observed")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/) as a reference. All what you need to complete this task are in there. Some additional info [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression). When using spark documentation always keep in my API sometimes change with versions, new updates/features come in every version release, so always make sure you choose the documentation of the correct spark version. Please find version what you use [here](http://spark.apache.org/docs/).Use these parameters for coming up with ideal parameters, you could try more parameters, but unfourtunately with this single node cluster we dont have enough power to do it. - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed")
###Code
##Once you finish testing the model on 100 data points, then load entire dataset and run , this could take ~15 min.
## write code here.
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
from pyspark.ml.evaluation import RegressionEvaluator
pandas_df = pd.read_csv("s3://mds-s3-student96/ml_data_SYD.csv", storage_options=aws_credentials, index_col=0, parse_dates=True).dropna()
feature_cols = list(pandas_df.drop(columns="Observed").columns)
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "Observed")
rf = RandomForestRegressor(labelCol="Observed", featuresCol="Features")
rfparamGrid = (ParamGridBuilder()
.addGrid(rf.numTrees, [10, 50, 100])
.addGrid(rf.maxDepth, [5, 10])
.addGrid(rf.bootstrap, [False, True])
.build())
rfevaluator = RegressionEvaluator(predictionCol="prediction", labelCol="Observed")
rfcv = CrossValidator(estimator = rf,
estimatorParamMaps = rfparamGrid,
evaluator = rfevaluator,
numFolds = 5)
cvModel = rfcv.fit(training)
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(cvModel.avgMetrics):.2f}")
print(f"numTrees: {cvModel.bestModel.getNumTrees}")
print(f"numTrees: {cvModel.bestModel.getMaxDepth()}")
###Output
_____no_output_____
###Markdown
Task 4 - group15 We haven't discussed MLlib in detail in our class, so consider MLlib as another python package that you are using, like the scikit-learn. What you write using this package, pyspark will be using the spark engine to run your code. I have put guidelines and helpful links (as comments) along with this notebook for taking you through this. Imports
###Code
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
###Output
Starting Spark application
###Markdown
Read Read 100 data points for testing the code, once you get to the bottom then read the entire dataset
###Code
aws_credentials = {"key": "...","secret": "..."}
## reading the entire dataset here
pandas_df = pd.read_csv("s3://mds-s3-student77/output/ml_data_SYD.csv", storage_options=aws_credentials, index_col=0, parse_dates=True).dropna()
feature_cols = list(pandas_df.drop(columns="observed_rainfall").columns)
###Output
_____no_output_____
###Markdown
Preparing dataset for ML
###Code
# Load dataframe and coerce features into a single column called "Features"
# This is a requirement of MLlib
# Here we are converting your pandas dataframe to a spark dataframe,
# Here "spark" is a spark session I will discuss this in our Wed class.
# read more here https://blog.knoldus.com/spark-createdataframe-vs-todf/
training = spark.createDataFrame(pandas_df)
assembler = VectorAssembler(inputCols=feature_cols, outputCol="Features")
training = assembler.transform(training).select("Features", "observed_rainfall")
###Output
_____no_output_____
###Markdown
Find best hyperparameter settings You can refer to [here](https://www.sparkitecture.io/machine-learning/regression/random-forest) and [here](https://www.silect.is/blog/random-forest-models-in-spark-ml/) as a reference. All what you need to complete this task are in there. Some additional info [here](https://projector-video-pdf-converter.datacamp.com/14989/chapter4.pdf)Official Documentation of MLlib, Random forest regression [here](http://spark.apache.org/docs/3.0.1/ml-classification-regression.htmlrandom-forest-regression). When using spark documentation always keep in my API sometimes change with versions, new updates/features come in every version release, so always make sure you choose the documentation of the correct spark version. Please find version what you use [here](http://spark.apache.org/docs/).Use these parameters for coming up with ideal parameters, you could try more parameters, but unfourtunately with this single node cluster we dont have enough power to do it. - Use numTrees as [10, 50,100] - maxDepth as [5, 10] - bootstrap as [False, True] - In the CrossValidator use evaluator to be RegressionEvaluator(labelCol="Observed")
###Code
##Once you finish testing the model on 100 data points, then load entire dataset and run , this could take ~15 min.
## write code here.
rf = RandomForestRegressor(labelCol="observed_rainfall", featuresCol="Features")
rfparamGrid = (ParamGridBuilder()
.addGrid(rf.maxDepth, [5, 10])
.addGrid(rf.bootstrap, [False, True])
.addGrid(rf.numTrees, [10, 50, 100])
.build())
rfevaluator = RegressionEvaluator(predictionCol="prediction", labelCol="observed_rainfall", metricName="rmse")
# Create 5-fold CrossValidator
rfcv = CrossValidator(estimator = rf,
estimatorParamMaps = rfparamGrid,
evaluator = rfevaluator,
numFolds = 5)
cvModel = rfcv.fit(training)
print(cvModel)
# Print run info
print("\nBest model")
print("==========")
print(f"\nCV Score: {min(cvModel.avgMetrics):.2f}")
print(f"NumTrees: {cvModel.bestModel.getNumTrees}")
print(f"MaxDepth: {cvModel.bestModel.getMaxDepth()}")
print(f"bootstrap: {cvModel.bestModel.getBootstrap()}")
###Output
_____no_output_____ |
01-analisis-exploratorio.ipynb | ###Markdown
Exploratory Data Analysis===**Juan David Velásquez Henao** [email protected] Universidad Nacional de Colombia, Sede Medellín Facultad de Minas Medellín, Colombia---Click [here](https://github.com/jdvelasq/Python-for-descriptive-analytics/tree/master/) to access the online repository. Click [here](http://nbviewer.jupyter.org/github/jdvelasq/Python-for-descriptive-analytics/tree/master/) to explore the repository using `nbviewer`. --- Click [here](https://github.com/jdvelasq/statistics-for-analytics/blob/master/04-analisis-exploratorio.ipynb) to access a conceptual document on exploratory data analysis. --- **Data preparation**.
###Code
import pandas as pd
df = pd.read_csv('files/indicadores-mundiales.csv', encoding='latin-1')
df.head()
###Output
_____no_output_____ |
EmployeesSQL/avgSalary_byTitle.ipynb | ###Markdown
Bonnus question
###Code
import psycopg2 as pg
import pandas.io.sql as psql
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from config import password
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, inspect, func
# I googled how to install and work with psycopg2 and followed the instructions, as shown in the code below.
import psycopg2
try:
connection = psycopg2.connect(user = "postgres",
password = password,
host = "127.0.0.1",
port = "5432",
database = "Homework_db")
cursor = connection.cursor()
# Print PostgreSQL Connection properties
print ( connection.get_dsn_parameters(),"\n")
# Print PostgreSQL version
cursor.execute("SELECT version();")
record = cursor.fetchone()
print("You are connected to - ", record,"\n")
except (Exception, psycopg2.Error) as error :
print ("Error while connecting to PostgreSQL", error)
finally:
#closing database connection.
if(connection):
cursor.close()
connection.close()
print("PostgreSQL connection is closed")
# Import the database from SQL to jupyter
engine = create_engine(f'postgresql://postgres:{password}@localhost:5432/Homework_db')
connection = engine.connect()
engine.table_names()
# Convert 'Salaries' table from sql into a dataframe
salaries_df = pd.read_sql('select * from "Salaries"',connection)
salaries_df.head()
# Convert 'Titles' table from sql into a dataframe
Titles_df = pd.read_sql('select * from "Titles"',connection)
Titles_df.head()
# connect both databases together
combined_df = pd.merge(salaries_df,Titles_df, on='emp_no')
combined_df
# Select only the columns we need, aplying the average function to 'salary'
avg_salary_title_df = combined_df.groupby(['title']).mean()[['salary']]
avg_salary_title_df = avg_salary_title_df.sort_values(by=['salary'], ascending=False)
avg_salary_title_df.reset_index(level=0, inplace=True)
avg_salary_title_df.head()
# Create a histogram to visualize the most common salary ranges for employees.
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
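# A simple histogram of individual salaries, to visualize the most common salary
# ranges mentioned in the comment above; this assumes the 'salary' column of the
# salaries_df dataframe loaded earlier (the bin count of 20 is an arbitrary choice).
plt.figure(figsize=(8, 5))
plt.hist(salaries_df["salary"], bins=20, color="steelblue", edgecolor="black")
plt.xlabel("Salary ($)")
plt.ylabel("Number of employees")
plt.title("Most Common Salary Ranges")
plt.show()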
def gradientbars(bars):
grad = np.atleast_2d(np.linspace(0,100,256)).T # Gradient of your choice
rectangles = bars.containers[0]
# ax = bars[0].axes
fig, ax = plt.subplots()
xList = []
yList = []
for rectangle in rectangles:
x0 = rectangle._x0
x1 = rectangle._x1
y0 = rectangle._y0
y1 = rectangle._y1
xList.extend([x0,x1])
yList.extend([y0,y1])
ax.imshow(grad, extent=[x0,x1,y0,y1], aspect="auto", zorder=0)
ax.axis([min(xList), max(xList), min(yList), max(yList)*1.1]) # *1.1 to add some buffer to top of plot
return fig,ax
#Create a bar chart of average salary by title.
sns.set(style="whitegrid", color_codes=True)
# Make Seaborn barplot (horizontal bars: salary on the x-axis, title on the y-axis)
ax = sns.barplot(x='salary', y='title', data=avg_salary_title_df, palette="Greens_d")
ax.set(xlabel="Average Salary ($)", ylabel="", title="Average Salary by Title")
plt.show()
###Output
_____no_output_____ |
Pandas_Quiz_Dphi.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
ckd_data = pd.read_csv("https://raw.githubusercontent.com/dphi-official/Datasets/master/Chronic%20Kidney%20Disease%20(CKD)%20Dataset/ChronicKidneyDisease.csv")
ckd_data.head()
ckd_data.dtypes.value_counts()
ckd_data.classification.value_counts()
data2 = ckd_data.loc[247:253, ["id", "age", "classification"]]
data2
ckd_data.age.max()
ckd_data.age.min()
ckd_data.shape
ckd_data.notnull().sum()
ckd_data.describe()
ckd_data.dtypes
# Finding the median of the 'bp' column, which has NaN values
median_value = ckd_data['bp'].median()
median_value
# Replace NaNs in the 'bp' column with the median value
ckd_data['bp'].fillna(value=median_value, inplace=True)
ckd_data.describe()
ckd_data.dtypes
ckd_data.notnull().sum()
###Output
_____no_output_____ |
Python Absolute Beginner/Module_3_Required_Code_IntroPy.ipynb | ###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
max_order = 100
min_order = 0.25
price = 55.5
order_amount = float(input("Enter order amount:"))
if order_amount > 100:
print(order_amount, "is more than currently available stock")
elif order_amount < 0.25:
print(order_amount,"is below minimum order amount")
else:
print(order_amount, "costs",order_amount * price)
###Output
Enter order amount:15.98
15.98 costs 886.89
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
maximum_order = 50
minimum_order = 3
unit_price = 2
def cheese():
order_amount = int(input("how many units of cheese would you like?: "))
if order_amount > maximum_order:
print("We dont have that much")
elif order_amount < minimum_order:
print("Thats lower than our minimum quantity")
else:
print("Your order will cost $", unit_price * order_amount)
return order_amount
cheese()
###Output
how many units of cheese would you like?: 45
Your order will cost $ 90
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
maximum_order = 113.0
minimum_order = 0.15
def cheese_program(order_amount):
if order_amount.isdigit() == False:
print('Enter numeric value')
elif float(order_amount) > maximum_order:
print(order_amount, "is more than currently available stock")
elif float(order_amount) < minimum_order:
print(order_amount, "is less than currently available stock")
elif (float(order_amount) <= maximum_order) and (float(order_amount) >= minimum_order):
print(order_amount, "pounds costs", "$", int(order_amount) * 7.99)
else:
print("Enter numeric value")
weight = input("Enter cheese order weight (pounds numeric value): ")
function = cheese_program(weight)
###Output
Enter cheese order weight (pounds numeric value): 2
2 pounds costs $ 15.98
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
maximum = 60
minimum = 5
order_amount = float(input("Enter cheese order weight (numeric value): "))
price = float(order_amount) * 5
if order_amount > maximum:
print (order_amount, "is more than currently available stock")
elif order_amount < minimum:
print(order_amount, "is below our minimum order amount.")
else:
print(order_amount, "costs $ ", price,".")
###Output
Enter cheese order weight (numeric value): 22
22.0 costs $ 110.0 .
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
maximum_order = 200.0
minimum_order = 1.0
def cheese_program(order_amount):
if order_amount.isdigit() == False:
print('Enter numeric value')
elif float(order_amount) > maximum_order:
print(order_amount, "is more than currently available stock")
elif float(order_amount) < minimum_order:
print(order_amount, "is less than currently available stock")
elif (float(order_amount) <= maximum_order) and (float(order_amount) >= minimum_order):
print(order_amount, "pounds costs", "$", int(order_amount) * 6)
else:
print("Enter numeric value")
weight = input("Enter cheese order weight (pounds numeric value): ")
function = cheese_program(weight)
input ('Press ENTER to exit')
###Output
Enter cheese order weight (pounds numeric value): 4
4 pounds costs $ 24
Press ENTER to exit
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
max_cheese = 50
min_cheese = 2
price = 3.5
order_weight = input("enter cheese order weight :")
if order_weight.isdigit() == True:
weight = int(order_weight)
if weight > max_cheese:
print(weight, "is more than currently available stock")
elif weight < min_cheese :
print(weight,"is below minimum order amount")
else:
print(weight,"costs $",weight * price)
else:
print("you didnt enter numeric value")
###Output
enter cheese order weight :4
4 costs $ 14.0
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
max_order = 200.00
min_order = 2.00
price = 5.75
order_amt = float(input("How much would you like to order?"))
if order_amt > max_order:
print (order_amt, "exceeds available stock only", max_order, "available")
elif order_amt < min_order:
print (order_amt, "is below the minimum order threshold of", min_order)
else:
print (order_amt, "costs $", order_amt*price)
###Output
How much would you like to order?32
32.0 costs $ 184.0
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
def cheese_order(weight):
max=85
min=10
price=2.5
if weight>max:
print(weight,"is more than currently available to order.")
elif min>weight:
print(weight,"is less than the minimum order amount of 10")
else:
print(weight,"costs $"+str(weight*price))
order_amount=int(input("Enter your cheese ordere weight (numbers only): "))
cheese_order(order_amount)
###Output
Enter your cheese ordere weight (numbers only): 45
45 costs $112.5
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
maximum = 88
minimum = 0.2
price = 4.5
order_amount = float(input("Enter cheese order weight (numeric value):"))
if maximum >= order_amount >= minimum:
print(order_amount,"costs $" + str(price*order_amount))
elif order_amount >= maximum:
print(order_amount,"is more than currently available stock")
elif minimum >= order_amount:
print(order_amount,"is below minimum order amount")
else:
print("Invalid response. Please try again.")
###Output
Enter cheese order weight (numeric value):90
90.0 is more than currently available stock
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
setMax = 100
setMin = 10
setPrice = 2.25
order_amount = float(input("Enter a cheese order weight (numeric value): "))
if order_amount < setMin:
print(order_amount, "is below the minimum order amount.")
elif order_amount > setMax:
print(order_amount, "is more than the currently available stock.")
else:
print(order_amount, "costs", (order_amount * setPrice))
###Output
Enter a cheese order weight (numeric value): 11
11.0 costs 24.75
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
maximum_order=10
minimum_order=2
price=2
order_amount=input("Enter Order Amount: ")
if int(order_amount)>maximum_order:
print("Order Maximum is", maximum_order)
elif int(order_amount)<minimum_order:
print("Order Minimum is",minimum_order)
else:
print(int(order_amount)*price)
###Output
Enter Order Amount: 3
6
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
# define the function first, then read the name and call it
def greet(person):
print("Hi,", person)
name = input('enter your name: ')
greet(name)
###Output
_____no_output_____
###Markdown
Module 3 Required Coding Activity Introduction to Python Unit 1 This is an activity based on code similar to the Jupyter Notebook **`Practice_MOD03_1-4_IntroPy.ipynb`** and **`Practice_MOD03_1-5_IntroPy.ipynb`** which you may have completed as practice.> **NOTE:** This program requires the use of **`if, elif, else`**, and casting between strings and numbers. The program should use the various code syntax covered in module 3. > >The program must result in print output using numeric input similar to that shown in the sample below. Program: Cheese Order - set values for maximum and minimum order variables - set value for price variable- get order_amount input and cast to a number - check order_amount and give message checking against - over maximum - under minimum- else within maximum and minimum give message with calculated price Sample input and output:```Enter cheese order weight (numeric value): 113113.0 is more than currently available stock``````Enter cheese order weight (numeric value): .150.15 is below minimum order amount``` ```Enter cheese order weight (numeric value): 22.0 costs $15.98```
###Code
# [ ] create, call and test
max_order=50
min_order=5
cheese_price=10
order_amount=input("enter cheese order weight (numeric value): ")
if order_amount.isdigit():
order_amount_int=int(order_amount)
if order_amount_int>max_order:
print(order_amount, "is more than currently available stock")
elif order_amount_int<min_order:
print(order_amount,"is below minimum order amount")
else:
cheese_cost=cheese_price*order_amount_int
print(order_amount,"costs","$"+str(cheese_cost))
else:
print("invalid input")
###Output
enter cheese order weight (numeric value): 35
35 costs $350
|
Case Study 1 Titanic.ipynb | ###Markdown
Logistic Regression
###Code
#First we separate character data & numerical data; for the character data we will create dummy variables and then concatenate everything back together
numerical_cols = dataset.select_dtypes(include=[np.number]).columns
numerical_cols #these are the columns which have numerical datatype
numerical_df = dataset.select_dtypes(include=[np.number]).copy()
numerical_df #now we have a dataframe containing only the numerical columns
character_df = dataset.select_dtypes(include='object').copy()
character_df
dummies_df = pd.get_dummies(character_df)
dummies_df.head()
#^Now character_df has been converted to dummies_df via pd.get_dummies, which holds the dummy (one-hot) values
#now we will add numerical df & dummy df
combined_df = pd.concat([numerical_df,dummies_df],axis=1)
#^axis=1 tells pandas to concatenate them column-wise
combined_df.head()
#Now we will divide in X & Y(independent & dependent variables)
Y = combined_df['Survived']
X = combined_df.drop(columns='Survived') #now everything will go in X except survived
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,train_size=0.7, random_state=1)
X_train.head()
#^ If we run it again we will get the same split because random_state=1 is fixed
#Now lets build our model using training data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train,Y_train)
#^We are asking the machine to learn from this data & build a model
#now we can predict Y for the test data, i.e. hard predictions
# Hard predictions
Y_pred = model.predict(X_test)
Y_pred
#O/P: 0 means the person will not survive & 1 means the person will survive. The model has made predictions for the test data;
# now we will compare them with the actual Y values & see how good or bad our model is
from sklearn.metrics import confusion_matrix, accuracy_score
#lets see our confusion matrix now
cm = confusion_matrix(Y_test,Y_pred)
print(cm)
# TN FP
# FN TP
#We can also check accuracy using accuracy_score
accuracy_score(Y_test,Y_pred)
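# Illustrative addition (not part of the original notebook): the confusion matrix cells are
# cm[0,0]=TN, cm[0,1]=FP, cm[1,0]=FN, cm[1,1]=TP, so precision and recall for the
# 'survived' class (label 1) can be read off directly.
tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print('precision =', precision, 'recall =', recall)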
###Output
_____no_output_____
###Markdown
ROC CURVE
###Code
from sklearn.metrics import roc_auc_score #this is to calculate area under the curve
# Based on hard predictions
area_under_ROC = roc_auc_score(Y_test,Y_pred)
print('Area under ROC Curve =',area_under_ROC)
#^ The higher the area under the ROC curve, the better the model
# Based on soft predictions
area_under_ROC = roc_auc_score(Y_test,model.predict_proba(X_test)[:,1])
#NOTE: if we use soft predictions (probabilities) rather than hard predictions to find the area under the ROC curve, the value will not change
# with the threshold; it is the hard predictions that keep changing as the threshold moves
print(area_under_ROC)
###Output
0.8482362592288762
###Markdown
Plotting the ROC curve
###Code
from sklearn.metrics import roc_curve
# Soft predictions
model.predict_proba(X_test)
#^o/p=0.83062502 is probability of 0=not survived and 0.16937498 is probability of 1=survived
fpr,tpr,thresholds = roc_curve(Y_test,model.predict_proba(X_test)[:,1]) #[:=rows,1=columns]
#^here we take column 1, i.e. the predicted probability of class 1 (survived), not column 0
fpr #o/p=these are different values of fpr
a = pd.Series(fpr)
b = pd.Series(tpr)
c = pd.Series(thresholds)
df_roc = pd.concat([a,b,c],axis=1)
a = pd.Series(fpr)
b = pd.Series(tpr)
c = pd.Series(thresholds)
df_roc = pd.concat([a,b,c],axis=1,keys=['FPR','TPR','THRESHOLD']).sort_values(by='TPR',ascending=False)
df_roc[(df_roc['THRESHOLD']>0.45) & (df_roc['THRESHOLD']<0.55)]
#^O/P: as we change the threshold our hard predictions change as well, and if the hard predictions change then the FPR &
# TPR change too
plt.plot(fpr,tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
###Output
_____no_output_____
###Markdown
Change the Classifier cutoff Value
###Code
threshold = 0.35
Y_pred = (model.predict_proba(X_test)[:,1]>=threshold).astype(int)
Y_pred
threshold = 0.15
Y_pred = (model.predict_proba(X_test)[:,1]>=threshold).astype(int)
Y_pred
#^so we can change our Y_pred based on any threshold we want
cm = confusion_matrix(Y_test,Y_pred)
print(cm)
accuracy_score(Y_test,Y_pred)
#^Now if we change the threshold the accuracy will change
# Based on hard predictions
area_under_ROC = roc_auc_score(Y_test,Y_pred)
print(area_under_ROC)
# Based on soft predictions
area_under_ROC = roc_auc_score(Y_test,model.predict_proba(X_test)[:,1])
print(area_under_ROC)
###Output
0.8482362592288762
|
mlmodels/model_dev/nlp_tfflow/neural-machine-translation/49.bert-multilanguage-transformer-decoder-beam.ipynb | ###Markdown
Make sure you already run1. [bert-preprocessing.ipynb](bert-preprocessing.ipynb)
###Code
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
import numpy as np
import tensorflow as tf
from tensor2tensor.utils import beam_search
import pickle
with open('train-test-bert.pkl', 'rb') as fopen:
dataset = pickle.load(fopen)
train_X = dataset['train_X']
train_Y = dataset['train_Y']
test_X = dataset['test_X']
test_Y = dataset['test_Y']
GO = 101
PAD = 0
EOS = 102
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
from bert import modeling
BERT_VOCAB = 'multi_cased_L-12_H-768_A-12/vocab.txt'
BERT_INIT_CHKPNT = 'multi_cased_L-12_H-768_A-12/bert_model.ckpt'
BERT_CONFIG = 'multi_cased_L-12_H-768_A-12/bert_config.json'
tokenizer = tokenization.FullTokenizer(
vocab_file=BERT_VOCAB, do_lower_case=False)
size_vocab = len(tokenizer.vocab)
bert_config = modeling.BertConfig.from_json_file(BERT_CONFIG)
epoch = 20
batch_size = 32
warmup_proportion = 0.1
num_train_steps = int(len(train_X) / batch_size * epoch)
num_warmup_steps = int(num_train_steps * warmup_proportion)
def pad_second_dim(x, desired_size):
padding = tf.tile([[[0.0]]], tf.stack([tf.shape(x)[0], desired_size - tf.shape(x)[1], tf.shape(x)[2]], 0))
return tf.concat([x, padding], 1)
def ln(inputs, epsilon = 1e-8, scope="ln"):
with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
inputs_shape = inputs.get_shape()
params_shape = inputs_shape[-1:]
mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
beta= tf.get_variable("beta", params_shape, initializer=tf.zeros_initializer())
gamma = tf.get_variable("gamma", params_shape, initializer=tf.ones_initializer())
normalized = (inputs - mean) / ( (variance + epsilon) ** (.5) )
outputs = gamma * normalized + beta
return outputs
def scaled_dot_product_attention(Q, K, V,
causality=False, dropout_rate=0.,
training=True,
scope="scaled_dot_product_attention"):
with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
d_k = Q.get_shape().as_list()[-1]
outputs = tf.matmul(Q, tf.transpose(K, [0, 2, 1])) # (N, T_q, T_k)
outputs /= d_k ** 0.5
outputs = mask(outputs, Q, K, type="key")
if causality:
outputs = mask(outputs, type="future")
outputs = tf.nn.softmax(outputs)
attention = tf.transpose(outputs, [0, 2, 1])
#tf.summary.image("attention", tf.expand_dims(attention[:1], -1))
outputs = mask(outputs, Q, K, type="query")
outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=training)
outputs = tf.matmul(outputs, V)
return outputs
def mask(inputs, queries=None, keys=None, type=None):
padding_num = -2 ** 32 + 1
if type in ("k", "key", "keys"):
masks = tf.sign(tf.reduce_sum(tf.abs(keys), axis=-1)) # (N, T_k)
masks = tf.expand_dims(masks, 1) # (N, 1, T_k)
masks = tf.tile(masks, [1, tf.shape(queries)[1], 1]) # (N, T_q, T_k)
paddings = tf.ones_like(inputs) * padding_num
outputs = tf.where(tf.equal(masks, 0), paddings, inputs) # (N, T_q, T_k)
elif type in ("q", "query", "queries"):
masks = tf.sign(tf.reduce_sum(tf.abs(queries), axis=-1)) # (N, T_q)
masks = tf.expand_dims(masks, -1) # (N, T_q, 1)
masks = tf.tile(masks, [1, 1, tf.shape(keys)[1]]) # (N, T_q, T_k)
outputs = inputs*masks
elif type in ("f", "future", "right"):
diag_vals = tf.ones_like(inputs[0, :, :]) # (T_q, T_k)
tril = tf.linalg.LinearOperatorLowerTriangular(diag_vals).to_dense() # (T_q, T_k)
masks = tf.tile(tf.expand_dims(tril, 0), [tf.shape(inputs)[0], 1, 1]) # (N, T_q, T_k)
paddings = tf.ones_like(masks) * padding_num
outputs = tf.where(tf.equal(masks, 0), paddings, inputs)
else:
print("Check if you entered type correctly!")
return outputs
def multihead_attention(queries, keys, values,
num_heads=8,
dropout_rate=0,
training=True,
causality=False,
scope="multihead_attention"):
d_model = queries.get_shape().as_list()[-1]
with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
# Linear projections
Q = tf.layers.dense(queries, d_model, use_bias=False) # (N, T_q, d_model)
K = tf.layers.dense(keys, d_model, use_bias=False) # (N, T_k, d_model)
V = tf.layers.dense(values, d_model, use_bias=False) # (N, T_k, d_model)
Q_ = tf.concat(tf.split(Q, num_heads, axis=2), axis=0) # (h*N, T_q, d_model/h)
K_ = tf.concat(tf.split(K, num_heads, axis=2), axis=0) # (h*N, T_k, d_model/h)
V_ = tf.concat(tf.split(V, num_heads, axis=2), axis=0) # (h*N, T_k, d_model/h)
outputs = scaled_dot_product_attention(Q_, K_, V_, causality, dropout_rate, training)
outputs = tf.concat(tf.split(outputs, num_heads, axis=0), axis=2 ) # (N, T_q, d_model)
outputs += queries
outputs = ln(outputs)
return outputs
def ff(inputs, num_units, scope="positionwise_feedforward"):
with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
outputs = tf.layers.dense(inputs, num_units[0], activation=tf.nn.relu)
outputs = tf.layers.dense(outputs, num_units[1])
outputs += inputs
outputs = ln(outputs)
return outputs
def label_smoothing(inputs, epsilon=0.1):
V = inputs.get_shape().as_list()[-1] # number of channels
return ((1-epsilon) * inputs) + (epsilon / V)
def sinusoidal_position_encoding(inputs, mask, repr_dim):
T = tf.shape(inputs)[1]
pos = tf.reshape(tf.range(0.0, tf.to_float(T), dtype=tf.float32), [-1, 1])
i = np.arange(0, repr_dim, 2, np.float32)
denom = np.reshape(np.power(10000.0, i / repr_dim), [1, -1])
enc = tf.expand_dims(tf.concat([tf.sin(pos / denom), tf.cos(pos / denom)], 1), 0)
return tf.tile(enc, [tf.shape(inputs)[0], 1, 1]) * tf.expand_dims(tf.to_float(mask), -1)
class Translator:
def __init__(self, size_layer, learning_rate,
num_blocks = 4, num_heads = 8, ratio_hidden = 2, beam_width = 5):
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.count_nonzero(self.X, 1, dtype=tf.int32)
self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype=tf.int32)
batch_size = tf.shape(self.X)[0]
def forward(x, y, reuse = False):
with tf.variable_scope('bert',reuse=reuse):
model = modeling.BertModel(
config=bert_config,
is_training=False,
input_ids=x,
use_one_hot_embeddings=False)
embedding = model.get_embedding_table()
memory = model.get_sequence_output()
decoder_embedded = tf.nn.embedding_lookup(embedding, y)
de_masks = tf.sign(y)
decoder_embedded += sinusoidal_position_encoding(y, de_masks, size_layer)
dec = decoder_embedded
for i in range(num_blocks):
with tf.variable_scope('decoder_self_attn_%d'%i,reuse=reuse):
dec = multihead_attention(queries=dec,
keys=dec,
values=dec,
num_heads=num_heads,
causality=True,
scope="self_attention")
dec = multihead_attention(queries=dec,
keys=memory,
values=memory,
num_heads=num_heads,
causality=False,
scope="vanilla_attention")
dec = ff(dec, num_units=[size_layer * ratio_hidden, size_layer])
weights = tf.transpose(embedding)
logits = tf.einsum('ntd,dk->ntk', dec, weights)
return logits
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
self.training_logits = forward(self.X, decoder_input)
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = optimization.create_optimizer(self.cost, learning_rate,
num_train_steps, num_warmup_steps, False)
y_t = tf.argmax(self.training_logits,axis=2)
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(y_t, masks)
mask_label = tf.boolean_mask(self.Y, masks)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
initial_ids = tf.fill([batch_size], GO)
def symbols_to_logits(ids):
x = tf.contrib.seq2seq.tile_batch(self.X, beam_width)
logits = forward(x, ids, reuse = True)
return logits[:, tf.shape(ids)[1]-1, :]
final_ids, final_probs, _ = beam_search.beam_search(
symbols_to_logits,
initial_ids,
beam_width,
tf.reduce_max(self.X_seq_len),
size_vocab,
0.0,
eos_id = EOS)
self.predicting_ids = final_ids
size_layer = 768
learning_rate = 1e-5
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Translator(size_layer, learning_rate)
sess.run(tf.global_variables_initializer())
import collections
import re
def get_assignment_map_from_checkpoint(tvars, init_checkpoint):
"""Compute the union of the current variables and checkpoint variables."""
assignment_map = {}
initialized_variable_names = {}
name_to_variable = collections.OrderedDict()
for var in tvars:
name = var.name
m = re.match('^(.*):\\d+$', name)
if m is not None:
name = m.group(1)
name_to_variable[name] = var
init_vars = tf.train.list_variables(init_checkpoint)
assignment_map = collections.OrderedDict()
for x in init_vars:
(name, var) = (x[0], x[1])
if 'bert/' + name not in name_to_variable:
continue
assignment_map[name] = name_to_variable['bert/' + name]
initialized_variable_names[name] = 1
initialized_variable_names[name + ':0'] = 1
return (assignment_map, initialized_variable_names)
tvars = tf.trainable_variables()
checkpoint = BERT_INIT_CHKPNT
assignment_map, initialized_variable_names = get_assignment_map_from_checkpoint(tvars,
checkpoint)
saver = tf.train.Saver(var_list = assignment_map)
saver.restore(sess, checkpoint)
sess.run(model.predicting_ids, feed_dict = {model.X: [train_X[0]]}).shape
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
import tqdm
for e in range(epoch):
pbar = tqdm.tqdm(
range(0, len(train_X), batch_size), desc = 'minibatch loop')
train_loss, train_acc, test_loss, test_acc = [], [], [], []
for i in pbar:
index = min(i + batch_size, len(train_X))
maxlen = max([len(s) for s in train_X[i : index] + train_Y[i : index]])
batch_x, seq_x = pad_sentence_batch(train_X[i : index], PAD)
batch_y, seq_y = pad_sentence_batch(train_Y[i : index], PAD)
feed = {model.X: batch_x,
model.Y: batch_y}
accuracy, loss, _ = sess.run([model.accuracy,model.cost,model.optimizer],
feed_dict = feed)
train_loss.append(loss)
train_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
pbar = tqdm.tqdm(
range(0, len(test_X), batch_size), desc = 'minibatch loop')
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x, seq_x = pad_sentence_batch(test_X[i : index], PAD)
batch_y, seq_y = pad_sentence_batch(test_Y[i : index], PAD)
feed = {model.X: batch_x,
model.Y: batch_y,}
accuracy, loss = sess.run([model.accuracy,model.cost],
feed_dict = feed)
test_loss.append(loss)
test_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
print('epoch %d, training avg loss %f, training avg acc %f'%(e+1,
np.mean(train_loss),np.mean(train_acc)))
print('epoch %d, testing avg loss %f, testing avg acc %f'%(e+1,
np.mean(test_loss),np.mean(test_acc)))
test_size = 20
batch_x, _ = pad_sentence_batch(test_X[: test_size], PAD)
feed = {model.X: batch_x}
logits = sess.run(model.predicting_ids, feed_dict = feed)
logits.shape
###Output
_____no_output_____ |
C3/W3/ungraded_labs/C3_W3_Lab_2_multiple_layer_LSTM.ipynb | ###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Note:** This notebook can run using TensorFlow 2.5.0
###Code
#!pip install tensorflow==2.5.0
###Output
_____no_output_____
###Markdown
Multiple Layer LSTM
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow_datasets as tfds
import tensorflow as tf
print(tf.__version__)
# Get the data
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
tokenizer = info.features['text'].encoder
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(train_dataset))
test_dataset = test_dataset.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(test_dataset))
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
NUM_EPOCHS = 10
history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
###Output
_____no_output_____
###Markdown
Ungraded Lab: Multiple LSTMsIn this lab, you will look at how to build a model with multiple LSTM layers. Since you know the preceding steps already (e.g. downloading datasets, preparing the data, etc.), we won't expound on it anymore so you can just focus on the model building code. Download and Prepare the Dataset
###Code
import tensorflow_datasets as tfds
# Download the subword encoded pretokenized dataset
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
# Get the tokenizer
tokenizer = info.features['text'].encoder
###Output
_____no_output_____
###Markdown
Like the previous lab, we increased the `BATCH_SIZE` here to make the training faster. If you are doing this on your local machine and have a powerful processor, feel free to use the value used in the lecture (i.e. 64) to get the same results as Laurence.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 256
# Get the train and test splits
train_data, test_data = dataset['train'], dataset['test'],
# Shuffle the training data
train_dataset = train_data.shuffle(BUFFER_SIZE)
# Batch and pad the datasets to the maximum length of the sequences
train_dataset = train_dataset.padded_batch(BATCH_SIZE)
test_dataset = test_data.padded_batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build and Compile the ModelYou can build multiple layer LSTM models by simply appending another `LSTM` layer in your `Sequential` model and enabling the `return_sequences` flag to `True`. This is because an `LSTM` layer expects a sequence input, so if the previous layer is also an LSTM, then it should output a sequence as well. See the code cell below that demonstrates this flag in action. You'll notice that the output dimension is in 3 dimensions `(batch_size, timesteps, features)` when `return_sequences` is True.
###Code
import tensorflow as tf
import numpy as np
# Hyperparameters
batch_size = 1
timesteps = 20
features = 16
lstm_dim = 8
print(f'batch_size: {batch_size}')
print(f'timesteps (sequence length): {timesteps}')
print(f'features (embedding size): {features}')
print(f'lstm output units: {lstm_dim}')
# Define array input with random values
random_input = np.random.rand(batch_size,timesteps,features)
print(f'shape of input array: {random_input.shape}')
# Define LSTM that returns a single output
lstm = tf.keras.layers.LSTM(lstm_dim)
result = lstm(random_input)
print(f'shape of lstm output(return_sequences=False): {result.shape}')
# Define LSTM that returns a sequence
lstm_rs = tf.keras.layers.LSTM(lstm_dim, return_sequences=True)
result = lstm_rs(random_input)
print(f'shape of lstm output(return_sequences=True): {result.shape}')
###Output
_____no_output_____
###Markdown
The next cell implements the stacked LSTM architecture.
###Code
import tensorflow as tf
# Hyperparameters
embedding_dim = 64
lstm1_dim = 64
lstm2_dim = 32
dense_dim = 64
# Build the model
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm1_dim, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm2_dim)),
tf.keras.layers.Dense(dense_dim, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Print the model summary
model.summary()
# Set the training parameters
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the ModelThe additional LSTM layer will lengthen the training time compared to the previous lab. Given the default parameters we set, it will take around 2 minutes per epoch with the Colab GPU enabled.
###Code
NUM_EPOCHS = 10
# Train the model
history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)
import matplotlib.pyplot as plt
# Plot utility
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
# Plot the accuracy and results
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
###Output
_____no_output_____
###Markdown
Ungraded Lab: Multiple LSTMsIn this lab, you will look at how to build a model with multiple LSTM layers. Since you know the preceding steps already (e.g. downloading datasets, preparing the data, etc.), we won't expound on it anymore so you can just focus on the model building code. Download and Prepare the Dataset
###Code
import tensorflow_datasets as tfds
# Download the subword encoded pretokenized dataset
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
# Get the tokenizer
tokenizer = info.features['text'].encoder
###Output
WARNING:absl:TFDS datasets with text encoding are deprecated and will be removed in a future version. Instead, you should use the plain text version and tokenize the text using `tensorflow_text` (See: https://www.tensorflow.org/tutorials/tensorflow_text/intro#tfdata_example)
###Markdown
Like the previous lab, we increased the `BATCH_SIZE` here to make the training faster. If you are doing this on your local machine and have a powerful processor, feel free to use the value used in the lecture (i.e. 64) to get the same results as Laurence.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
# Get the train and test splits
train_data, test_data = dataset['train'], dataset['test'],
# Shuffle the training data
train_dataset = train_data.shuffle(BUFFER_SIZE)
# Batch and pad the datasets to the maximum length of the sequences
train_dataset = train_dataset.padded_batch(BATCH_SIZE)
test_dataset = test_data.padded_batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build and Compile the ModelYou can build multiple layer LSTM models by simply appending another `LSTM` layer in your `Sequential` model and enabling the `return_sequences` flag to `True`. This is because an `LSTM` layer expects a sequence input, so if the previous layer is also an LSTM, then it should output a sequence as well. See the code cell below that demonstrates this flag in action. You'll notice that the output dimension is in 3 dimensions `(batch_size, timesteps, features)` when `return_sequences` is True.
###Code
import tensorflow as tf
import numpy as np
# Hyperparameters
batch_size = 1
timesteps = 20
features = 16
lstm_dim = 8
print(f'batch_size: {batch_size}')
print(f'timesteps (sequence length): {timesteps}')
print(f'features (embedding size): {features}')
print(f'lstm output units: {lstm_dim}')
# Define array input with random values
random_input = np.random.rand(batch_size,timesteps,features)
print(f'shape of input array: {random_input.shape}')
# Define LSTM that returns a single output
lstm = tf.keras.layers.LSTM(lstm_dim)
result = lstm(random_input)
print(f'shape of lstm output(return_sequences=False): {result.shape}')
# Define LSTM that returns a sequence
lstm_rs = tf.keras.layers.LSTM(lstm_dim, return_sequences=True)
result = lstm_rs(random_input)
print(f'shape of lstm output(return_sequences=True): {result.shape}')
###Output
batch_size: 1
timesteps (sequence length): 20
features (embedding size): 16
lstm output units: 8
shape of input array: (1, 20, 16)
shape of lstm output(return_sequences=False): (1, 8)
shape of lstm output(return_sequences=True): (1, 20, 8)
###Markdown
The next cell implements the stacked LSTM architecture.
###Code
import tensorflow as tf
# Hyperparameters
embedding_dim = 64
lstm1_dim = 64
lstm2_dim = 32
dense_dim = 64
# Build the model
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm1_dim, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm2_dim)),
tf.keras.layers.Dense(dense_dim, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Print the model summary
model.summary()
# Set the training parameters
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
from tensorflow.keras.utils import plot_model
plot_model(model)
###Output
('You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) ', 'for plot_model/model_to_dot to work.')
###Markdown
Train the ModelThe additional LSTM layer will lengthen the training time compared to the previous lab. Given the default parameters we set, it will take around 2 minutes per epoch with the Colab GPU enabled.
###Code
NUM_EPOCHS = 10
# Train the model
history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)
import matplotlib.pyplot as plt
# Plot utility
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
# Plot the accuracy and results
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
###Output
_____no_output_____
###Markdown
Ungraded Lab: Multiple LSTMsIn this lab, you will look at how to build a model with multiple LSTM layers. Since you know the preceding steps already (e.g. downloading datasets, preparing the data, etc.), we won't expound on it anymore so you can just focus on the model building code. Download and Prepare the Dataset
###Code
import tensorflow_datasets as tfds
# Download the subword encoded pretokenized dataset
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
# Get the tokenizer
tokenizer = info.features['text'].encoder
###Output
WARNING:absl:TFDS datasets with text encoding are deprecated and will be removed in a future version. Instead, you should use the plain text version and tokenize the text using `tensorflow_text` (See: https://www.tensorflow.org/tutorials/tensorflow_text/intro#tfdata_example)
###Markdown
Like the previous lab, we increased the `BATCH_SIZE` here to make the training faster. If you are doing this on your local machine and have a powerful processor, feel free to use the value used in the lecture (i.e. 64) to get the same results as Laurence.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 256
# Get the train and test splits
train_data, test_data = dataset['train'], dataset['test'],
# Shuffle the training data
train_dataset = train_data.shuffle(BUFFER_SIZE)
# Batch and pad the datasets to the maximum length of the sequences
train_dataset = train_dataset.padded_batch(BATCH_SIZE)
test_dataset = test_data.padded_batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build and Compile the ModelYou can build multiple layer LSTM models by simply appending another `LSTM` layer in your `Sequential` model and enabling the `return_sequences` flag to `True`. This is because an `LSTM` layer expects a sequence input, so if the previous layer is also an LSTM, then it should output a sequence as well. See the code cell below that demonstrates this flag in action. You'll notice that the output dimension is in 3 dimensions `(batch_size, timesteps, features)` when `return_sequences` is True.
###Code
import tensorflow as tf
import numpy as np
# Hyperparameters
batch_size = 1
timesteps = 20
features = 16
lstm_dim = 8
print(f'batch_size: {batch_size}')
print(f'timesteps (sequence length): {timesteps}')
print(f'features (embedding size): {features}')
print(f'lstm output units: {lstm_dim}')
# Define array input with random values
random_input = np.random.rand(batch_size,timesteps,features)
print(f'shape of input array: {random_input.shape}')
# Define LSTM that returns a single output
lstm = tf.keras.layers.LSTM(lstm_dim)
result = lstm(random_input)
print(f'shape of lstm output(return_sequences=False): {result.shape}')
# Define LSTM that returns a sequence
lstm_rs = tf.keras.layers.LSTM(lstm_dim, return_sequences=True)
result = lstm_rs(random_input)
print(f'shape of lstm output(return_sequences=True): {result.shape}')
###Output
batch_size: 1
timesteps (sequence length): 20
features (embedding size): 16
lstm output units: 8
shape of input array: (1, 20, 16)
shape of lstm output(return_sequences=False): (1, 8)
shape of lstm output(return_sequences=True): (1, 20, 8)
###Markdown
The next cell implements the stacked LSTM architecture.
###Code
import tensorflow as tf
# Hyperparameters
embedding_dim = 64
lstm1_dim = 64
lstm2_dim = 32
dense_dim = 64
# Build the model
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm1_dim, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm2_dim)),
tf.keras.layers.Dense(dense_dim, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Print the model summary
model.summary()
# Set the training parameters
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the ModelThe additional LSTM layer will lengthen the training time compared to the previous lab. Given the default parameters we set, it will take around 2 minutes per epoch with the Colab GPU enabled.
###Code
NUM_EPOCHS = 10
# Train the model
history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)
import matplotlib.pyplot as plt
# Plot utility
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
# Plot the accuracy and results
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
###Output
_____no_output_____
###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Note:** This notebook can run using TensorFlow 2.5.0
###Code
#!pip install tensorflow==2.5.0
###Output
_____no_output_____
###Markdown
Multiple Layer LSTM
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow_datasets as tfds
import tensorflow as tf
print(tf.__version__)
# Get the data
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
tokenizer = info.features['text'].encoder
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(train_dataset))
test_dataset = test_dataset.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(test_dataset))
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
NUM_EPOCHS = 10
history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
###Output
_____no_output_____
###Markdown
Ungraded Lab: Multiple LSTMsIn this lab, you will look at how to build a model with multiple LSTM layers. Since you know the preceding steps already (e.g. downloading datasets, preparing the data, etc.), we won't expound on it anymore so you can just focus on the model building code. Download and Prepare the Dataset
###Code
import tensorflow_datasets as tfds
# Download the subword encoded pretokenized dataset
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
# Get the tokenizer
tokenizer = info.features['text'].encoder
###Output
_____no_output_____
###Markdown
Like the previous lab, we increased the `BATCH_SIZE` here to make the training faster. If you are doing this on your local machine and have a powerful processor, feel free to use the value used in the lecture (i.e. 64) to get the same results as Laurence.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 256
# Get the train and test splits
train_data, test_data = dataset['train'], dataset['test'],
# Shuffle the training data
train_dataset = train_data.shuffle(BUFFER_SIZE)
# Batch and pad the datasets to the maximum length of the sequences
train_dataset = train_dataset.padded_batch(BATCH_SIZE)
test_dataset = test_data.padded_batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build and Compile the ModelYou can build multiple layer LSTM models by simply appending another `LSTM` layer in your `Sequential` model and enabling the `return_sequences` flag to `True`. This is because an `LSTM` layer expects a sequence input, so if the previous layer is also an LSTM, then it should output a sequence as well. See the code cell below that demonstrates this flag in action. You'll notice that the output dimension is in 3 dimensions `(batch_size, timesteps, features)` when `return_sequences` is True.
###Code
import tensorflow as tf
import numpy as np
# Hyperparameters
batch_size = 1
timesteps = 20
features = 16
lstm_dim = 8
print(f'batch_size: {batch_size}')
print(f'timesteps (sequence length): {timesteps}')
print(f'features (embedding size): {features}')
print(f'lstm output units: {lstm_dim}')
# Define array input with random values
random_input = np.random.rand(batch_size,timesteps,features)
print(f'shape of input array: {random_input.shape}')
# Define LSTM that returns a single output
lstm = tf.keras.layers.LSTM(lstm_dim)
result = lstm(random_input)
print(f'shape of lstm output(return_sequences=False): {result.shape}')
# Define LSTM that returns a sequence
lstm_rs = tf.keras.layers.LSTM(lstm_dim, return_sequences=True)
result = lstm_rs(random_input)
print(f'shape of lstm output(return_sequences=True): {result.shape}')
###Output
_____no_output_____
###Markdown
The next cell implements the stacked LSTM architecture.
###Code
import tensorflow as tf
# Hyperparameters
embedding_dim = 64
lstm1_dim = 64
lstm2_dim = 32
dense_dim = 64
# Build the model
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm1_dim, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm2_dim)),
tf.keras.layers.Dense(dense_dim, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Print the model summary
model.summary()
# Set the training parameters
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the ModelThe additional LSTM layer will lengthen the training time compared to the previous lab. Given the default parameters we set, it will take around 2 minutes per epoch with the Colab GPU enabled.
###Code
NUM_EPOCHS = 10
# Train the model
history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)
import matplotlib.pyplot as plt
# Plot utility
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
# Plot the accuracy and results
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
###Output
_____no_output_____
###Markdown
Ungraded Lab: Multiple LSTMsIn this lab, you will look at how to build a model with multiple LSTM layers. Since you know the preceding steps already (e.g. downloading datasets, preparing the data, etc.), we won't expound on it anymore so you can just focus on the model building code. Download and Prepare the Dataset
###Code
import tensorflow_datasets as tfds
# Download the subword encoded pretokenized dataset
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
# Get the tokenizer
tokenizer = info.features['text'].encoder
###Output
_____no_output_____
###Markdown
Like the previous lab, we increased the `BATCH_SIZE` here to make the training faster. If you are doing this on your local machine and have a powerful processor, feel free to use the value used in the lecture (i.e. 64) to get the same results as Laurence.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 256
# Get the train and test splits
train_data, test_data = dataset['train'], dataset['test'],
# Shuffle the training data
train_dataset = train_data.shuffle(BUFFER_SIZE)
# Batch and pad the datasets to the maximum length of the sequences
train_dataset = train_dataset.padded_batch(BATCH_SIZE)
test_dataset = test_data.padded_batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build and Compile the ModelYou can build multiple layer LSTM models by simply appending another `LSTM` layer in your `Sequential` model and enabling the `return_sequences` flag to `True`. This is because an `LSTM` layer expects a sequence input, so if the previous layer is also an LSTM, then it should output a sequence as well. See the code cell below that demonstrates this flag in action. You'll notice that the output dimension is in 3 dimensions `(batch_size, timesteps, features)` when `return_sequences` is True.
###Code
import tensorflow as tf
import numpy as np
# Hyperparameters
batch_size = 1
timesteps = 20
features = 16
lstm_dim = 8
print(f'batch_size: {batch_size}')
print(f'timesteps (sequence length): {timesteps}')
print(f'features (embedding size): {features}')
print(f'lstm output units: {lstm_dim}')
# Define array input with random values
random_input = np.random.rand(batch_size,timesteps,features)
print(f'shape of input array: {random_input.shape}')
# Define LSTM that returns a single output
lstm = tf.keras.layers.LSTM(lstm_dim)
result = lstm(random_input)
print(f'shape of lstm output(return_sequences=False): {result.shape}')
# Define LSTM that returns a sequence
lstm_rs = tf.keras.layers.LSTM(lstm_dim, return_sequences=True)
result = lstm_rs(random_input)
print(f'shape of lstm output(return_sequences=True): {result.shape}')
###Output
_____no_output_____
###Markdown
The next cell implements the stacked LSTM architecture.
###Code
import tensorflow as tf
# Hyperparameters
embedding_dim = 64
lstm1_dim = 64
lstm2_dim = 32
dense_dim = 64
# Build the model
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm1_dim, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm2_dim)),
tf.keras.layers.Dense(dense_dim, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Print the model summary
model.summary()
# Set the training parameters
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the ModelThe additional LSTM layer will lengthen the training time compared to the previous lab. Given the default parameters we set, it will take around 2 minutes per epoch with the Colab GPU enabled.
###Code
NUM_EPOCHS = 10
# Train the model
history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)
import matplotlib.pyplot as plt
# Plot utility
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
# Plot the accuracy and results
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
###Output
_____no_output_____ |
day_3/Lab_19_DL Transfer Learning Short.ipynb | ###Markdown
Transfer Learning
###Code
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image
batch_size = 32
img_size = 299
train_path = '../data/sports/train/'
test_path = '../data/sports/test/'
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.applications.xception import preprocess_input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
base_model = Xception(include_top=False,
weights='imagenet',
input_shape=(img_size, img_size, 3),
pooling='avg')
model = Sequential([
base_model,
Dense(256, activation='relu'),
Dropout(0.5),
Dense(3, activation='softmax')
])
model.summary()
model.layers[0].trainable = False
model.summary()
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
preprocessing_function=preprocess_input)
batch_size = 32
train_generator = datagen.flow_from_directory(
train_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
test_generator = datagen.flow_from_directory(
test_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
model.fit_generator(train_generator, steps_per_epoch=len(train_generator))
model.evaluate_generator(test_generator, steps=len(test_generator))
###Output
_____no_output_____
###Markdown
Transfer Learning
###Code
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from keras.preprocessing import image
batch_size = 32
img_size = 299
train_path = '../data/sports/train/'
test_path = '../data/sports/test/'
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
from keras.applications.xception import Xception
from keras.applications.xception import preprocess_input
from keras.models import Sequential
from keras.layers import Dense, Dropout
base_model = Xception(include_top=False,
weights='imagenet',
input_shape=(img_size, img_size, 3),
pooling='avg')
model = Sequential([
base_model,
Dense(256, activation='relu'),
Dropout(0.5),
Dense(3, activation='softmax')
])
model.summary()
model.layers[0].trainable = False
model.summary()
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
preprocessing_function=preprocess_input)
batch_size = 32
train_generator = datagen.flow_from_directory(
train_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
test_generator = datagen.flow_from_directory(
test_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
model.fit_generator(train_generator)
model.evaluate_generator(test_generator)
###Output
_____no_output_____
###Markdown
Yay! In a single epoch we can classify images with decent accuracy. Exercise: The base model takes an image and returns a vector of 2048 numbers. We will call this vector a bottleneck feature. Use the `base_model.predict_generator` function and the train generator to generate bottleneck features for the training set, then save these vectors to a numpy array using the code provided.
###Code
base_model.summary()
import os
os.makedirs('models', exist_ok = True)
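# Added sketch (one possible solution to the exercise, not part of the original notebook):
# compute the bottleneck features that the np.save call below expects. This assumes
# predict_generator is available in this Keras version (newer versions accept
# base_model.predict(train_generator) instead).
bf_train = base_model.predict_generator(train_generator, steps=len(train_generator))
print(bf_train.shape)  # expected: (number of training images, 2048)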
np.save(open('models/bf_train.npy', 'wb'), bf_train)
###Output
_____no_output_____
###Markdown
Transfer Learning
###Code
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image
batch_size = 32
img_size = 299
train_path = '../data/sports/train/'
test_path = '../data/sports/test/'
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.applications.xception import preprocess_input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
base_model = Xception(include_top=False,
weights='imagenet',
input_shape=(img_size, img_size, 3),
pooling='avg')
model = Sequential([
base_model,
Dense(256, activation='relu'),
Dropout(0.5),
Dense(3, activation='softmax')
])
model.summary()
model.layers[0].trainable = False
model.summary()
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
preprocessing_function=preprocess_input)
batch_size = 32
train_generator = datagen.flow_from_directory(
train_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
test_generator = datagen.flow_from_directory(
test_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
model.fit(train_generator, steps_per_epoch=len(train_generator))
model.evaluate_generator(test_generator, steps=len(test_generator))
###Output
_____no_output_____
###Markdown
Transfer Learning
###Code
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from keras.preprocessing import image
batch_size = 32
img_size = 299
train_path = '../data/sports/train/'
test_path = '../data/sports/test/'
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
from keras.applications.xception import Xception
from keras.applications.xception import preprocess_input
from keras.models import Sequential
from keras.layers import Dense, Dropout
base_model = Xception(include_top=False,
weights='imagenet',
input_shape=(img_size, img_size, 3),
pooling='avg')
model = Sequential([
base_model,
Dense(256, activation='relu'),
Dropout(0.5),
Dense(3, activation='softmax')
])
model.summary()
model.layers[0].trainable = False
model.summary()
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
preprocessing_function=preprocess_input)
batch_size = 32
train_generator = datagen.flow_from_directory(
train_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
test_generator = datagen.flow_from_directory(
test_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
model.fit_generator(train_generator, steps_per_epoch=len(train_generator))
model.evaluate_generator(test_generator, steps=len(test_generator))
###Output
_____no_output_____
###Markdown
Transfer Learning
###Code
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from keras.preprocessing import image
batch_size = 32
img_size = 299
train_path = '../data/sports/train/'
test_path = '../data/sports/test/'
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
from keras.applications.xception import Xception
from keras.applications.xception import preprocess_input
from keras.models import Sequential
from keras.layers import Dense, Dropout
base_model = Xception(include_top=False,
weights='imagenet',
input_shape=(img_size, img_size, 3),
pooling='avg')
model = Sequential([
base_model,
Dense(256, activation='relu'),
Dropout(0.5),
Dense(3, activation='softmax')
])
model.summary()
model.layers[0].trainable = False
model.summary()
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
preprocessing_function=preprocess_input)
batch_size = 32
train_generator = datagen.flow_from_directory(
train_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
test_generator = datagen.flow_from_directory(
test_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
model.fit_generator(train_generator)
model.evaluate_generator(test_generator)
###Output
_____no_output_____
###Markdown
Yay! In a single epoch we can classify images with decent accuracy. Exercise: The base model takes an image and returns a vector of 2048 numbers. We will call this vector a bottleneck feature. Use the `base_model.predict_generator` function and the train generator to generate bottleneck features for the training set, then save these vectors to a numpy array using the code provided.
###Code
base_model.summary()
import os
os.makedirs('models', exist_ok = True)
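# Added sketch, assuming predict_generator is available in this Keras version:
# one way to produce the bf_train array saved below.
bf_train = base_model.predict_generator(train_generator, steps=len(train_generator))
print(bf_train.shape)  # expected: (number of training images, 2048)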
np.save(open('models/bf_train.npy', 'wb'), bf_train)
###Output
_____no_output_____ |
Statements Assessment Test.ipynb | ###Markdown
Statements Assessment Test. Let's test your knowledge! **Use for, .split(), and if to create a statement that will print out words that start with 's':**
###Code
st = 'Print only the words that start with s in this sentence'
#Code here
for word in st.split():
if word[0] == 's':
print (word)
###Output
start
s
sentence
###Markdown
______**Use range() to print all the even numbers from 0 to 10.**
###Code
#Code Here
digits = range(0,11,2)
for i in digits:
print(i)
###Output
0
2
4
6
8
10
###Markdown
___**Use a List Comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.**
###Code
#Code in this cell
[x for x in range(1,50) if x%3 == 0]
###Output
_____no_output_____
###Markdown
_____**Go through the string below and if the length of a word is even print "even!"**
###Code
st = 'Print every word in this sentence that has an even number of letters'
#Code in this cell
for word in st.split():
if len(word)%2 == 0:
print (word + " -- even length")
###Output
word -- even length
in -- even length
this -- even length
sentence -- even length
that -- even length
an -- even length
even -- even length
number -- even length
of -- even length
###Markdown
____**Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".**
###Code
#Code in this cell
for x in range(1,101):
if x % 5 == 0 and x % 3 == 0:
print ("FizzBuzz")
elif x % 3 == 0:
print ("Fizz")
elif x % 5 == 0:
print ("Buzz")
else:
print (x)
###Output
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
16
17
Fizz
19
Buzz
Fizz
22
23
Fizz
Buzz
26
Fizz
28
29
FizzBuzz
31
32
Fizz
34
Buzz
Fizz
37
38
Fizz
Buzz
41
Fizz
43
44
FizzBuzz
46
47
Fizz
49
Buzz
Fizz
52
53
Fizz
Buzz
56
Fizz
58
59
FizzBuzz
61
62
Fizz
64
Buzz
Fizz
67
68
Fizz
Buzz
71
Fizz
73
74
FizzBuzz
76
77
Fizz
79
Buzz
Fizz
82
83
Fizz
Buzz
86
Fizz
88
89
FizzBuzz
91
92
Fizz
94
Buzz
Fizz
97
98
Fizz
Buzz
###Markdown
____**Use List Comprehension to create a list of the first letters of every word in the string below:**
###Code
st = 'Create a list of the first letters of every word in this string'
#Code in this cell
[word[0] for word in st.split()]
###Output
_____no_output_____
###Markdown
Statements Assessment Test. Let's test your knowledge! _____**Use for, split(), and if to create a statement that will print out words that start with 's':**
###Code
st = 'Print only the words that start with s in this sentence'
#Code here
# note: iterating over a string with a for loop yields individual characters, so split the sentence into words first
for word in st.split():
letter = word[0].lower()
if letter == 's':
print word
###Output
start
s
sentence
###Markdown
______**Use range() to print all the even numbers from 0 to 10.**
###Code
#Code Here
range(0, 11, 2)
###Output
_____no_output_____
###Markdown
___**Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.**
###Code
#Code in this cell
[num for num in range(1, 50) if num % 3 == 0]
###Output
_____no_output_____
###Markdown
_____**Go through the string below and if the length of a word is even print "even!"**
###Code
st = 'Print every word in this sentence that has an even number of letters'
#Code in this cell
for word in st.split():
if len(word) % 2 == 0:
print word
###Output
word
in
this
sentence
that
an
even
number
of
###Markdown
____**Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".**
###Code
#Code in this cell
def fizzbuzz(start, end):
for i in range(start, end):
is_fizzy = i % 3 == 0
is_buzzy = i % 5 == 0
if is_fizzy and not is_buzzy:
print "Fizz"
elif is_buzzy and not is_fizzy:
print "Buzz"
elif is_fizzy and is_buzzy:
print "FizzBuzz"
else:
print i
fizzbuzz(0, 100)
###Output
FizzBuzz
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
16
17
Fizz
19
Buzz
Fizz
22
23
Fizz
Buzz
26
Fizz
28
29
FizzBuzz
31
32
Fizz
34
Buzz
Fizz
37
38
Fizz
Buzz
41
Fizz
43
44
FizzBuzz
46
47
Fizz
49
Buzz
Fizz
52
53
Fizz
Buzz
56
Fizz
58
59
FizzBuzz
61
62
Fizz
64
Buzz
Fizz
67
68
Fizz
Buzz
71
Fizz
73
74
FizzBuzz
76
77
Fizz
79
Buzz
Fizz
82
83
Fizz
Buzz
86
Fizz
88
89
FizzBuzz
91
92
Fizz
94
Buzz
Fizz
97
98
Fizz
###Markdown
____**Use List Comprehension to create a list of the first letters of every word in the string below:**
###Code
st = 'Create a list of the first letters of every word in this string'
#Code in this cell
running_list = []
for word in st.split():
running_list.append(word[0])
print running_list
###Output
['C', 'a', 'l', 'o', 't', 'f', 'l', 'o', 'e', 'w', 'i', 't', 's']
|
Master Thesis Project/JN3_dentate_analysis.ipynb | ###Markdown
Jupyter Notebook 3: Disentanglement Scores, Latent Space Plots Dentate Gyrus Dataset
###Code
import scanpy as sc
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import sys
import util_loss as ul
#import the package to use
import beta_vae_5
from dentate_features import *
from all_obs_linear_classifier_package import *
import os,glob
data = sc.read("./data/dentate_gyrus_normalized.h5ad")
#Adding more features
data.obs["seq_depth"] = np.sum(data.X,axis=1)
data.obs["exp_gene"] = np.count_nonzero(data.X.toarray(),axis=1)
min_dep = min(data.obs['seq_depth'])
data.obs["seq_depth"] = data.obs["seq_depth"] - min_dep
fac_seq = max(data.obs["seq_depth"])/10
data.obs['seq_depth'] = data.obs["seq_depth"]/fac_seq
data.obs['seq_depth'] = data.obs['seq_depth'].astype('int64')
data.obs['seq_depth'] = np.where(data.obs['seq_depth']==10, 9,data.obs['seq_depth'])
min_exp = min(data.obs['exp_gene'])
data.obs["exp_gene"] = data.obs["exp_gene"] - min_exp
fac_exp = max(data.obs["exp_gene"])/10
data.obs['exp_gene'] = data.obs["exp_gene"]/fac_exp
data.obs['exp_gene'] = data.obs['exp_gene'].astype('int64')
data.obs['exp_gene'] = np.where(data.obs['exp_gene']==10, 9,data.obs['exp_gene'])
print(data.obs)
'''
Difference scores between features are calculated: 1st level
'''
# Restoring pre-trained models
os.chdir("/storage/groups/ml01/workspace/harshita.agarwala/models_dentate_1000epochs")
path = "latent5_alpha50_c30/"
scg_model = beta_vae_5.C_VAEArithKeras(x_dimension= data.shape[1],z_dimension=5,model_to_use=path,
alpha=5,c_max=30)
scg_model.restore_model()
print(scg_model)
observation = "4_observation" #a name to identify the score files
L = 20 #number of samples in a batch
B = 2 #number of batches
try:
os.makedirs(path+observation+"_disentangled_score/")
except OSError:
print ("Check if path %s already exists" % path)
else:
print ("Successfully created the directory ", path+observation+"_disentangled_score/")
for i in range(5):
df = feature_scores(model=scg_model,L=L,B=B,data=data)
print(df)
df.to_csv(path+observation+"_disentangled_score/matrix_all_dim"+str(i)+".csv",index=False)
'''
Difference scores between features are now classified
'''
os.chdir("/storage/groups/ml01/workspace/harshita.agarwala/models_dentate_1000epochs")
path = "latent5_alpha50_c30/"
observation="4_observation"
feature_classification(path=path,z_dim = 5,observation=observation)
'''
Creating latent space plots for each feature and also
saving the latent space values for each feature
'''
from convert_to_latent_space import *
observations = ["age(days)","clusters","exp_gene","seq_depth"]
observations = ["clusters"]
os.chdir("/storage/groups/ml01/workspace/harshita.agarwala/models_dentate_1000epochs")
path = "latent5_alpha50_c30/"
scg_model = beta_vae_5.C_VAEArithKeras(x_dimension= data.shape[1],z_dimension=5,model_to_use=path,
alpha=5,c_max=30)
scg_model.restore_model()
for obs in observations:
single_feature_to_latent(path=path,adata=data,feature=obs,model=scg_model,z_dim=5)
os.chdir("/storage/groups/ml01/workspace/harshita.agarwala/models_dentate_1000epochs")
'''
Difference scores within features are calculated: 2nd level
It depends on the function 'single_feature_to_latent' used in the previous section.
'''
from latent_space_scores import *
os.chdir("/storage/groups/ml01/workspace/harshita.agarwala/models_dentate_1000epochs")
path = "latent5_alpha50_c30/"
observation = "clusters" #feature name to identify the score files
L = 20 #number of samples in a batch
B = 2 #number of batches
data = pd.read_csv(path+"cells_latent_"+observation+"/cells_in_latent.csv",index_col = 0)
#print(data)
try:
os.makedirs(path+observation+"_disentangled_score/")
except OSError:
print ("Check if path %s already exists" % path)
else:
print ("Successfully created the directory ", path+observation+"_disentangled_score/")
for i in range(2):
df = latent_space_scores(L=L,B=B,data=data)
print(df)
df.to_csv(path+observation+"_disentangled_score/matrix_all_dim"+str(i)+".csv",index=False)
'''
Latent space scores within a feature are now classified
'''
os.chdir("/storage/groups/ml01/workspace/harshita.agarwala/models_dentate_1000epochs")
path = "latent5_alpha50_c30/"
feature_classification(path=path,z_dim = 5,observation="clusters") #keep changing the observation
'''
Plot KL Divergence per dimension over epochs
'''
from kl_divergence_plot import *
os.chdir("/storage/groups/ml01/workspace/harshita.agarwala/models_dentate_1000epochs")
path = "latent5_alpha50_c30/"
plot_kl_loss(path=path,z_dim=5)
###Output
_____no_output_____ |
assignments/course_1/assignment_3_solutions.ipynb | ###Markdown
---_You are currently looking at **version 1.5** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- Assignment 3 - More Pandas. This assignment requires more individual learning than the last one did - you are encouraged to check out the [pandas documentation](http://pandas.pydata.org/pandas-docs/stable/) to find functions or methods you might not have used yet, or ask questions on [Stack Overflow](http://stackoverflow.com/) and tag them as pandas and python related. And of course, the discussion forums are open for interaction with your peers and the course staff. Question 1 (20%)Load the energy data from the file `Energy Indicators.xls`, which is a list of indicators of [energy supply and renewable electricity production](Energy%20Indicators.xls) from the [United Nations](http://unstats.un.org/unsd/environment/excel_file_tables/2013/Energy%20Indicators.xls) for the year 2013, and should be put into a DataFrame with the variable name of **energy**.Keep in mind that this is an Excel file, and not a comma separated values file. Also, make sure to exclude the footer and header information from the datafile. The first two columns are unnecessary, so you should get rid of them, and you should change the column labels so that the columns are:`['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']`Convert `Energy Supply` to gigajoules (there are 1,000,000 gigajoules in a petajoule). For all countries which have missing data (e.g. data with "...") make sure this is reflected as `np.NaN` values.Rename the following list of countries (for use in later questions):```"Republic of Korea": "South Korea","United States of America": "United States","United Kingdom of Great Britain and Northern Ireland": "United Kingdom","China, Hong Kong Special Administrative Region": "Hong Kong"```There are also several countries with numbers and/or parentheses in their name. Be sure to remove these, e.g. `'Bolivia (Plurinational State of)'` should be `'Bolivia'`, `'Switzerland17'` should be `'Switzerland'`.Next, load the GDP data from the file `world_bank.csv`, which is a csv containing countries' GDP from 1960 to 2015 from [World Bank](http://data.worldbank.org/indicator/NY.GDP.MKTP.CD). Call this DataFrame **GDP**. Make sure to skip the header, and rename the following list of countries:```"Korea, Rep.": "South Korea", "Iran, Islamic Rep.": "Iran","Hong Kong SAR, China": "Hong Kong"```Finally, load the [Scimago Journal and Country Rank data for Energy Engineering and Power Technology](http://www.scimagojr.com/countryrank.php?category=2102) from the file `scimagojr-3.xlsx`, which ranks countries based on their journal contributions in the aforementioned area. Call this DataFrame **ScimEn**.Join the three datasets: GDP, Energy, and ScimEn into a new dataset (using the intersection of country names). Use only the last 10 years (2006-2015) of GDP data and only the top 15 countries by Scimagojr 'Rank' (Rank 1 through 15). 
The index of this DataFrame should be the name of the country, and the columns should be ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations', 'Citations per document', 'H index', 'Energy Supply', 'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015'].*This function should return a DataFrame with 20 columns and 15 entries.*
###Code
import pandas as pd
import numpy as np
energy = pd.read_excel('Energy Indicators.xls', skiprows=17)
# remove header and footer
energy = energy[:227]
# drop useless columns
energy = energy.drop(energy.columns[[0, 1]], axis=1)
# rename columns
energy.columns = ['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']
# replace ... with NaN
energy.replace('...', np.nan, inplace = True)
# convert Energy Supply units
energy['Energy Supply'] = energy['Energy Supply'] * 1000000
# remove all numbers from country names and parenthesis
def remove_numbers(country):
return ''.join(filter(lambda x: not x.isdigit(), country))
def remove_parenthesis(country):
start = country.find('(')
if start > -1:
return country[:start-1]
return country
energy['Country'] = energy['Country'].apply(remove_numbers)
energy['Country'] = energy['Country'].apply(remove_parenthesis)
# replace country names
country_replace = {'Republic of Korea': 'South Korea',
'United States of America': 'United States',
'United Kingdom of Great Britain and Northern Ireland': 'United Kingdom',
'China, Hong Kong Special Administrative Region': 'Hong Kong',
}
# energy.replace(country_replace, inplace = True)
energy.replace({'Country': country_replace}, inplace = True)
# energy[energy['Country'] == 'Iran']
GDP = pd.read_csv('world_bank.csv', skiprows=4)
# rename Country Name to Country so that merge will work later on.
GDP.rename(columns={'Country Name': 'Country'}, inplace = True)
country_replace = {'Korea, Rep.': 'South Korea',
'Iran, Islamic Rep.': 'Iran',
'Hong Kong SAR, China': 'Hong Kong'
}
GDP.replace({'Country': country_replace}, inplace = True)
# GDP[GDP['Country'] == 'Iran']
ScimEn = pd.read_excel('scimagojr-3.xlsx')
def answer_one():
global energy, GDP, ScimEn
# merge all three dataframes
combined_df = pd.merge(pd.merge(energy, GDP, on = 'Country'), ScimEn, on = 'Country')
# set index to country name
combined_df.set_index('Country', inplace = True)
# keep specific columns
combined_df = combined_df[['Rank', 'Documents', 'Citable documents', 'Citations',
'Self-citations', 'Citations per document', 'H index',
'Energy Supply', 'Energy Supply per Capita', '% Renewable',
'2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013',
'2014', '2015']]
# only include Ranking 15 or higher
combined_df = combined_df[combined_df['Rank'] <= 15]
# organize the Ranking
combined_df = combined_df.sort_values(['Rank'], ascending = [1])
return combined_df
# # len(answer_one())
# answer_one()
###Output
_____no_output_____
###Markdown
Question 2 (6.6%)The previous question joined three datasets then reduced this to just the top 15 entries. When you joined the datasets, but before you reduced this to the top 15 items, how many entries did you lose?*This function should return a single number.*
###Code
%%HTML
<svg width="800" height="300">
<circle cx="150" cy="180" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="blue" />
<circle cx="200" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="red" />
<circle cx="100" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="green" />
<line x1="150" y1="125" x2="300" y2="150" stroke="black" stroke-width="2" fill="black" stroke-dasharray="5,3"/>
<text x="300" y="165" font-family="Verdana" font-size="35">Everything but this!</text>
</svg>
def answer_two():
global energy, GDP, ScimEn
un = pd.merge(pd.merge(energy, GDP, on = 'Country', how = 'outer'), ScimEn, on = 'Country', how = 'outer')
    inter = pd.merge(pd.merge(energy, GDP, on = 'Country'), ScimEn, on = 'Country')  # renamed to avoid shadowing the built-in int
    return (len(un) - len(inter))
# answer_two()
###Output
_____no_output_____
###Markdown
Answer the following questions in the context of only the top 15 countries by Scimagojr Rank (aka the DataFrame returned by `answer_one()`) Question 3 (6.6%)What is the average GDP over the last 10 years for each country? (exclude missing values from this calculation.)*This function should return a Series named `avgGDP` with 15 countries and their average GDP sorted in descending order.*
###Code
def answer_three():
Top15 = answer_one()
avgGDP = Top15[['2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015']].mean(axis = 1).sort_values(ascending = False)
return avgGDP
# answer_three()
###Output
_____no_output_____
###Markdown
Question 4 (6.6%)By how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP?*This function should return a single number.*
###Code
def answer_four():
Top15 = answer_one()
avgGDP = answer_three().reset_index()
return np.float64(Top15.filter(like = avgGDP.iloc[5]['Country'], axis = 0)['2015'] - Top15.filter(like = avgGDP.iloc[5]['Country'], axis = 0)['2006'])
# answer_four()
# type(answer_four())
###Output
_____no_output_____
###Markdown
Question 5 (6.6%)What is the mean `Energy Supply per Capita`?*This function should return a single number.*
###Code
def answer_five():
Top15 = answer_one()
return Top15['Energy Supply per Capita'].mean()
# answer_five()
###Output
_____no_output_____
###Markdown
Question 6 (6.6%)What country has the maximum % Renewable and what is the percentage?*This function should return a tuple with the name of the country and the percentage.*
###Code
def answer_six():
Top15 = answer_one()
country = Top15.sort_values(by = '% Renewable', ascending = False).iloc[0]
return (country.name, country['% Renewable'])
# answer_six()
###Output
_____no_output_____
###Markdown
Question 7 (6.6%)Create a new column that is the ratio of Self-Citations to Total Citations. What is the maximum value for this new column, and what country has the highest ratio?*This function should return a tuple with the name of the country and the ratio.*
###Code
def answer_seven():
Top15 = answer_one()
Top15['Citation ratio'] = (Top15['Self-citations'] / Top15['Citations'])
country = Top15.sort_values(by = 'Citation ratio', ascending = False).iloc[0]
return (country.name, country['Citation ratio'])
# answer_seven()
###Output
_____no_output_____
###Markdown
Question 8 (6.6%)Create a column that estimates the population using Energy Supply and Energy Supply per capita. What is the third most populous country according to this estimate?*This function should return a single string value.*
###Code
def answer_eight():
Top15 = answer_one()
Top15['Population'] = (Top15['Energy Supply'] / Top15['Energy Supply per Capita'])
return Top15.sort_values(by = 'Population', ascending=False).iloc[2].name
# answer_eight()
###Output
_____no_output_____
###Markdown
Question 9 (6.6%)Create a column that estimates the number of citable documents per person. What is the correlation between the number of citable documents per capita and the energy supply per capita? Use the `.corr()` method, (Pearson's correlation).*This function should return a single number.**(Optional: Use the built-in function `plot9()` to visualize the relationship between Energy Supply per Capita vs. Citable docs per Capita)*
###Code
def answer_nine():
Top15 = answer_one()
Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
Top15['Citable docs per Capita'] = Top15['Citable documents'] / Top15['PopEst']
return Top15[['Energy Supply per Capita', 'Citable docs per Capita']].corr().loc['Energy Supply per Capita', 'Citable docs per Capita']
# answer_nine()
def plot9():
import matplotlib as plt
%matplotlib inline
Top15 = answer_one()
Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
Top15['Citable docs per Capita'] = Top15['Citable documents'] / Top15['PopEst']
Top15.plot(x='Citable docs per Capita', y='Energy Supply per Capita', kind='scatter', xlim=[0, 0.0006])
# plot9() # Be sure to comment out plot9() before submitting the assignment!
###Output
_____no_output_____
###Markdown
Question 10 (6.6%)Create a new column with a 1 if the country's % Renewable value is at or above the median for all countries in the top 15, and a 0 if the country's % Renewable value is below the median.*This function should return a series named `HighRenew` whose index is the country name sorted in ascending order of rank.*
###Code
def answer_ten():
Top15 = answer_one()
# T / F for % renewable over median or not
Top15['HighRenew'] = Top15['% Renewable'] >= Top15['% Renewable'].median()
Top15['HighRenew'] = Top15['HighRenew'].apply(lambda x:1 if x else 0)
# sorted by Rank
Top15.sort_values(by = 'Rank', inplace=True)
return Top15['HighRenew']
# answer_ten()
###Output
_____no_output_____
###Markdown
Question 11 (6.6%)Use the following dictionary to group the Countries by Continent, then create a dateframe that displays the sample size (the number of countries in each continent bin), and the sum, mean, and std deviation for the estimated population of each country.```pythonContinentDict = {'China':'Asia', 'United States':'North America', 'Japan':'Asia', 'United Kingdom':'Europe', 'Russian Federation':'Europe', 'Canada':'North America', 'Germany':'Europe', 'India':'Asia', 'France':'Europe', 'South Korea':'Asia', 'Italy':'Europe', 'Spain':'Europe', 'Iran':'Asia', 'Australia':'Australia', 'Brazil':'South America'}```*This function should return a DataFrame with index named Continent `['Asia', 'Australia', 'Europe', 'North America', 'South America']` and columns `['size', 'sum', 'mean', 'std']`*
###Code
def answer_eleven():
Top15 = answer_one()
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'
}
groups = pd.DataFrame(columns = ['size', 'sum', 'mean', 'std'])
Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
for continent, count in Top15.groupby(ContinentDict):
# print(count['PopEst'])
# print('*'*40)
groups.loc[continent] = [len(count), count['PopEst'].sum(), count['PopEst'].mean(), count['PopEst'].std()]
return groups
# answer_eleven()
###Output
_____no_output_____
###Markdown
Question 12 (6.6%)Cut % Renewable into 5 bins. Group Top15 by the Continent, as well as these new % Renewable bins. How many countries are in each of these groups?*This function should return a __Series__ with a MultiIndex of `Continent`, then the bins for `% Renewable`. Do not include groups with no countries.*
###Code
def answer_twelve():
Top15 = answer_one()
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'
}
Top15['Continent'] = Top15.index.map(lambda c: ContinentDict[c])
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html
Top15['% Renewable bins'] = pd.cut(Top15['% Renewable'], 5)
return Top15.groupby(['Continent','% Renewable bins']).size()
# answer_twelve()
###Output
_____no_output_____
###Markdown
Question 13 (6.6%)Convert the Population Estimate series to a string with thousands separator (using commas). Do not round the results.e.g. 317615384.61538464 -> 317,615,384.61538464*This function should return a Series `PopEst` whose index is the country name and whose values are the population estimate string.*
###Code
def answer_thirteen():
Top15 = answer_one()
Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
# https://stackoverflow.com/questions/22617/format-numbers-to-strings-in-python
return Top15['PopEst'].apply(lambda str: '{0:,}'.format(str))
# answer_thirteen()
###Output
_____no_output_____
###Markdown
OptionalUse the built in function `plot_optional()` to see an example visualization.
###Code
def plot_optional():
import matplotlib as plt
%matplotlib inline
Top15 = answer_one()
ax = Top15.plot(x='Rank', y='% Renewable', kind='scatter',
c=['#e41a1c','#377eb8','#e41a1c','#4daf4a','#4daf4a','#377eb8','#4daf4a','#e41a1c',
'#4daf4a','#e41a1c','#4daf4a','#4daf4a','#e41a1c','#dede00','#ff7f00'],
xticks=range(1,16), s=6*Top15['2014']/10**10, alpha=.75, figsize=[16,6]);
for i, txt in enumerate(Top15.index):
ax.annotate(txt, [Top15['Rank'][i], Top15['% Renewable'][i]], ha='center')
print("This is an example of a visualization that can be created to help understand the data. \
This is a bubble chart showing % Renewable vs. Rank. The size of the bubble corresponds to the countries' \
2014 GDP, and the color corresponds to the continent.")
# plot_optional() # Be sure to comment out plot_optional() before submitting the assignment!
###Output
_____no_output_____ |
svdo.ipynb | ###Markdown
Sparse Variational Dropout ![alt text](https://ars-ashuha.github.io/images/ss_vd1.png)![alt text](https://ars-ashuha.github.io/images/ss_vd2.png)- Variational Dropout Sparsifies Deep Neural Networks https://arxiv.org/abs/1701.05369- Cheating link https://github.com/ars-ashuha/sparse-vd-pytorch/blob/master/svdo-solution.ipynb Install
###Code
!pip3 install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp36-cp36m-linux_x86_64.whl
!pip3 install torchvision
# Logger
!pip install tabulate -q
from google.colab import files
src = list(files.upload().values())[0]
open('logger.py','wb').write(src)
from logger import Logger
###Output
_____no_output_____
###Markdown
Implementation
###Code
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from logger import Logger
from torch.nn import Parameter
from torchvision import datasets, transforms
# Load a dataset
def get_mnist(batch_size):
trsnform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=True, download=True,
transform=trsnform), batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False, download=True,
transform=trsnform), batch_size=batch_size, shuffle=True)
return train_loader, test_loader
class LinearSVDO(nn.Module):
def __init__(self, in_features, out_features, threshold, bias=True):
super(LinearSVDO, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.threshold = threshold
self.W = Parameter(torch.Tensor(out_features, in_features))
###########################################################
        ######## Your code should be here ##########
# Create a Parameter to store log sigma
self.log_sigma = ...
###########################################################
self.bias = Parameter(torch.Tensor(1, out_features))
self.reset_parameters()
def reset_parameters(self):
self.bias.data.zero_()
self.W.data.normal_(0, 0.02)
self.log_sigma.data.fill_(-5)
def forward(self, x):
###########################################################
        ######## Your code should be here ##########
        if self.training:
            lrt_mean = ... # Compute the activation's mean, e.g. x.dot(W) + b
            lrt_std = ... # Compute the activation's std, e.g. sqrt((x*x).dot(sigma * sigma) + 1e-8)
            eps = ... # Sample standard normal noise
return lrt_mean + lrt_std * eps
######## If not training ##########
        self.log_alpha = ... # Evaluate log alpha as a function of (log_sigma, W)
        self.log_alpha = ... # Clip log alpha to [-10, 10] for numerical stability
        W = ... # Prune out redundant weights, e.g. W * mask(log_alpha < 3)
return F.linear(x, W) + self.bias
###########################################################
def kl_reg(self):
###########################################################
        ######## Your code should be here ##########
        ######## Evaluate the approximation of the KL divergence ##########
        log_alpha = ... # Evaluate log alpha as a function of (log_sigma, W)
        log_alpha = ... # Clip log alpha to [-10, 10] for numerical stability
k1, k2, k3 = torch.Tensor([0.63576]), torch.Tensor([1.8732]), torch.Tensor([1.48695])
KL = ...
return KL
######## Return a KL divergence, a Tensor 1x1 ##########
###########################################################
# Define a simple 2 layer Network
class Net(nn.Module):
def __init__(self, threshold):
super(Net, self).__init__()
self.fc1 = LinearSVDO(28*28, 300, threshold)
self.fc2 = LinearSVDO(300, 10, threshold)
self.threshold = threshold
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.log_softmax(self.fc2(x), dim=1)
return x
# Define a new Loss Function -- SGVLB
class SGVLB(nn.Module):
def __init__(self, net, train_size):
super(SGVLB, self).__init__()
self.train_size = train_size
self.net = net
def forward(self, input, target, kl_weight=1.0):
assert not target.requires_grad
kl = torch.Tensor([0.0])
for module in self.net.children():
if hasattr(module, 'kl_reg'):
kl = kl + module.kl_reg()
###########################################################
        ######## Your code should be here ##########
# Compute Stochastic Gradient Variational Lower Bound
        # Do not forget to scale the data term up by N/M,
        # where N is the size of the dataset and M is the size of the minibatch
SGVLB = ...
return SGVLB # a Tensor 1x1
###########################################################
model = Net(threshold=3)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50,60,70,80], gamma=0.2)
fmt = {'tr_los': '3.1e', 'te_loss': '3.1e', 'sp_0': '.3f', 'sp_1': '.3f', 'lr': '3.1e', 'kl': '.2f'}
logger = Logger('sparse_vd', fmt=fmt)
train_loader, test_loader = get_mnist(batch_size=100)
sgvlb = SGVLB(model, len(train_loader.dataset))
kl_weight = 0.02
epochs = 100
for epoch in range(1, epochs + 1):
scheduler.step()
model.train()
train_loss, train_acc = 0, 0
kl_weight = min(kl_weight+0.02, 1)
logger.add_scalar(epoch, 'kl', kl_weight)
logger.add_scalar(epoch, 'lr', scheduler.get_lr()[0])
for batch_idx, (data, target) in enumerate(train_loader):
data = data.view(-1, 28*28)
optimizer.zero_grad()
output = model(data)
pred = output.data.max(1)[1]
loss = sgvlb(output, target, kl_weight)
loss.backward()
optimizer.step()
train_loss += loss
train_acc += np.sum(pred.numpy() == target.data.numpy())
logger.add_scalar(epoch, 'tr_los', train_loss / len(train_loader.dataset))
logger.add_scalar(epoch, 'tr_acc', train_acc / len(train_loader.dataset) * 100)
model.eval()
test_loss, test_acc = 0, 0
for batch_idx, (data, target) in enumerate(test_loader):
data = data.view(-1, 28*28)
output = model(data)
test_loss += float(sgvlb(output, target, kl_weight))
pred = output.data.max(1)[1]
test_acc += np.sum(pred.numpy() == target.data.numpy())
logger.add_scalar(epoch, 'te_loss', test_loss / len(test_loader.dataset))
logger.add_scalar(epoch, 'te_acc', test_acc / len(test_loader.dataset) * 100)
for i, c in enumerate(model.children()):
if hasattr(c, 'kl_reg'):
logger.add_scalar(epoch, 'sp_%s' % i, (c.log_alpha.data.numpy() > model.threshold).mean())
logger.iter_info()
all_w, kep_w = 0, 0
for c in model.children():
kep_w += (c.log_alpha.data.numpy() < model.threshold).sum()
all_w += c.log_alpha.data.numpy().size
print('kept weight ratio =', all_w/kep_w)
###Output
_____no_output_____
###Markdown
A good result should look like epoch kl lr tr_los tr_acc te_loss te_acc sp_0 sp_1 ------- ---- ------- -------- -------- --------- -------- ------ ------ 100 1 1.6e-06 -1.4e+03 98.0 -1.4e+03 98.3 0.969 0.760 kept weight ratio = 30.109973454683352 Visualization
###Code
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import rcParams
rcParams['figure.figsize'] = 16, 3
rcParams['figure.dpi'] = 300
log_alpha = (model.fc1.log_alpha.detach().numpy() < 3).astype(np.float)
W = model.fc1.W.detach().numpy()
plt.imshow(log_alpha * W, cmap='hot', interpolation=None)
plt.colorbar()
s = 0
from matplotlib import rcParams
rcParams['figure.figsize'] = 8, 5
z = np.zeros((28*15, 28*15))
for i in range(15):
for j in range(15):
s += 1
z[i*28:(i+1)*28, j*28:(j+1)*28] = np.abs((log_alpha * W)[s].reshape(28, 28))
plt.imshow(z, cmap='hot_r')
plt.colorbar()
plt.axis('off')
###Output
_____no_output_____
###Markdown
Compression with Sparse Matrixes
###Code
import scipy
import numpy as np
from scipy.sparse import csr_matrix, csc_matrix, coo_matrix, dok_matrix
row, col, data = [], [], []
M = list(model.children())[0].W.data.numpy()
LA = list(model.children())[0].log_alpha.data.numpy()
for i in range(300):
for j in range(28*28):
if LA[i, j] < 3:
row += [i]
col += [j]
data += [M[i, j]]
Mcsr = csr_matrix((data, (row, col)), shape=(300, 28*28))
Mcsc = csc_matrix((data, (row, col)), shape=(300, 28*28))
Mcoo = coo_matrix((data, (row, col)), shape=(300, 28*28))
np.savez_compressed('M_w', M)
scipy.sparse.save_npz('Mcsr_w', Mcsr)
scipy.sparse.save_npz('Mcsc_w', Mcsc)
scipy.sparse.save_npz('Mcoo_w', Mcoo)
ls -lah | grep _w
###Output
_____no_output_____ |
aas_GraphX_medline.ipynb | ###Markdown
Advanced Analytics with Spark, Chapter 7: Analyzing Co-occurrence Networks with GraphX
###Code
%%configure -f
{
"name": "medline-graphX",
"proxyUser": "hduser",
"driverMemory": "4000M",
"conf": {"spark.jars.packages": "graphframes:graphframes:0.3.0-spark2.0-s_2.11",
"spark.master": "local[2]",
"spark.jars": "hdfs://localhost:54310/jars/ch06-lsa-2.0.0-jar-with-dependencies.jar",
"spark.sql.crossJoin.enabled": "true"}
}
import edu.umd.cloud9.collection.XMLInputFormat
import java.nio.charset.StandardCharsets
import java.security.MessageDigest
import org.apache.hadoop.io.{Text, LongWritable}
import org.apache.hadoop.conf.Configuration
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Dataset, DataFrame, SparkSession, Row}
import org.apache.spark.sql.functions._
import scala.xml.XML
val base = "hdfs://localhost:54310/medline/"
###Output
_____no_output_____
###Markdown
Data preparation
###Code
def loadMedline(spark: SparkSession, path:String) = {
@transient val conf = new Configuration()
conf.set(XMLInputFormat.START_TAG_KEY, "<MedlineCitation ")
conf.set(XMLInputFormat.END_TAG_KEY, "</MedlineCitation>")
val kvs = spark.sparkContext.newAPIHadoopFile(path, classOf[XMLInputFormat], classOf[LongWritable], classOf[Text], conf)
kvs.map(_._2.toString).toDS()
}
val path = base + "medsamp2016a.xml"
val medlineRaw = loadMedline(spark, path)
medlineRaw.count
def majorTopics(record:String): Seq[String] = {
val elem = XML.loadString(record)
val dn = elem \\ "DescriptorName"
dn.filter( n => (n \ "@MajorTopicYN").text == "Y").map( n => n.text)
}
val medline = medlineRaw.map(majorTopics)
medline.cache()
medline.take(1)(0)
###Output
_____no_output_____
###Markdown
Examining co-occurrences
###Code
val topics = medline.flatMap(n => n).toDF("topic")
topics.createOrReplaceTempView("topics")
val topicDist = spark.sql("""
SELECT topic, COUNT(*) as cnt
FROM topics
GROUP BY topic
ORDER BY cnt DESC
""")
topics.count()
topicDist.show(5)
topicDist.createOrReplaceTempView("topic_dist")
spark.sql("""
SELECT cnt, COUNT(*) as dist
FROM topic_dist
GROUP BY cnt
ORDER BY dist DESC
""").show
###Output
_____no_output_____
###Markdown
Building co-occurrence pairs
###Code
def getCooccur(ds:Dataset[Seq[String]]) = {
import spark.implicits._
val pairs = ds.flatMap{ list => list.combinations(2)}.toDF("pair")
pairs.createOrReplaceTempView("pairs_")
spark.sql("""
SELECT pair, COUNT(*) as cnt
FROM pairs_
GROUP BY pair
ORDER BY cnt DESC
""")
}
val cooccurs = getCooccur(medline)
cooccurs.cache()
cooccurs.show(5,false)
###Output
_____no_output_____
###Markdown
Building the co-occurrence network with GraphX
###Code
// Assign a unique vertex ID to each topic (by hashing the topic strings used in the co-occurrence pairs)
def hashId(str: String): Long = {
// This is effectively the same implementation as in Guava's Hashing, but 'inlined'
// to avoid a dependency on Guava just for this. It creates a long from the first 8 bytes
// of the (16 byte) MD5 hash, with first byte as least-significant byte in the long.
val bytes = MessageDigest.getInstance("MD5").digest(str.getBytes(StandardCharsets.UTF_8))
(bytes(0) & 0xFFL) |
((bytes(1) & 0xFFL) << 8) |
((bytes(2) & 0xFFL) << 16) |
((bytes(3) & 0xFFL) << 24) |
((bytes(4) & 0xFFL) << 32) |
((bytes(5) & 0xFFL) << 40) |
((bytes(6) & 0xFFL) << 48) |
((bytes(7) & 0xFFL) << 56)
}
// Build the vertices from the topic list
val vertices = topics.map{ case Row(topic:String) => (hashId(topic), topic) } toDF("hash", "topic")
val uniqueHashes = vertices.agg(countDistinct("hash")).take(1)
// Build the edges from the co-occurrence pairs
val edges = cooccurs.map { case Row(topics:Seq[String], cnt:Long) =>
val ids = topics.map(_.toString).map(hashId).sorted
Edge(ids(0), ids(1), cnt)
}
val vertexRDD = vertices.rdd.map{ case Row(hash:Long, topic:String) => (hash, topic)}
val topicGraph = Graph(vertexRDD, edges.rdd)
topicGraph.cache()
vertexRDD.count
topicGraph.vertices.count()
edges.count
topicGraph.vertices
###Output
_____no_output_____
###Markdown
Network structure - Connected Components. Principle: each vertex passes the smallest vertex id it has seen (not necessarily its own) to its neighbours; vertices that end up sharing the same minimum value belong to the same connected component.
###Code
val ccGraph = topicGraph.connectedComponents()
val ccDF = ccGraph.vertices.toDF("vid", "cid")
ccDF.createOrReplaceTempView("cc")
spark.sql("""
SELECT cid, COUNT(*) as cnt
FROM cc
GROUP BY cid
ORDER BY cnt DESC
LIMIT 5
""").show
// def innerJoin[U, VD2](other: RDD[(VertexId, U)])(f: (VertexId, VD, U) => VD2): VertexRDD[VD2]
val topicCCDF = topicGraph.vertices.innerJoin(ccGraph.vertices) {
case (topicId, name, cid) => (name, cid.toLong)
}.values.toDF("topic", "cid")
topicCCDF.where($"cid" === "-9215470674759766104").show
###Output
_____no_output_____
###Markdown
Network structure - Degree Distribution
###Code
val degrees:VertexRDD[Int] = topicGraph.degrees.cache()
degrees.map(_._2).stats()
val namesAndDegree = degrees.innerJoin(topicGraph.vertices) {
(topicId, degree, name) => (name, degree.toInt)
}.values.toDF("topic", "degree")
namesAndDegree.orderBy($"degree".desc).show(5)
###Output
_____no_output_____
###Markdown
Filtering out weakly related pairs with the chi-squared statistic* For any two topics (A, B), run a chi-squared analysis on the contingency table| | B appears | B does not appear | Total (A) ||---|:---:|---:|---:|| A appears |YY | YN | YA || A does not appear | NY | NN | NA || Total (B) | YB | NB | T |
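The statistic computed by the `chiSq` function below is the Yates-corrected chi-squared value $$\chi^2 = \frac{T\left(\left|YY \cdot NN - YN \cdot NY\right| - T/2\right)^2}{YA \cdot NA \cdot YB \cdot NB}$$ evaluated for every edge of the graph.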
###Code
// Total number of documents, T
val T = medline.count()
// Number of documents in which each topic appears (corresponds to YA / YB)
val topicDistRdd = topicDist.map {
case Row(topic:String, cnt:Long) => (hashId(topic), cnt)
}.rdd
val topicDistGraph = Graph(topicDistRdd, topicGraph.edges)
// Chi-squared test statistic (chi^2)
def chiSq(YY:Long, YB:Long, YA:Long, T:Long) = {
val NB = T - YB
val NA = T - YA
val YN = YA - YY
val NY = YB - YY
val NN = T - NY - YN - YY
val inner = math.abs(YY*NN - YN *NY) -T/2.0
T * math.pow(inner,2) / (YA*NA*YB*NB)
}
val chiSquaredGraph = topicDistGraph.mapTriplets(triplet => {
    // EDGE attribute: comes from topicGraph, so it is the co-occurrence count
    // VERTEX attribute: comes from topicDistGraph, so it is each topic's overall document count (i.e. YA or YB)
chiSq(triplet.attr, triplet.srcAttr, triplet.dstAttr, T)
    // As a result, each edge of the resulting triplets carries the chi-squared test value.
})
chiSquaredGraph.edges.map(e=>e.attr).stats()
chiSquaredGraph.edges.count
// With 1 degree of freedom, the 99.999% critical value is 19.5; pairs above it are considered not independent (i.e. strongly related) and are kept.
val interesting = chiSquaredGraph.subgraph( triplet => triplet.attr > 19.5 )
interesting.edges.count // reduced from about 70,000 edges to roughly 30,000
val interestingDegrees = interesting.degrees.cache()
interestingDegrees.map(_._2).stats() // compared with the unfiltered graph, the connectivity is much sparser
###Output
_____no_output_____
###Markdown
Verifying the small-world property* Most nodes have a small degree and belong to dense clusters (large clustering coefficient)* Any node can be reached from any other node through a short path (small shortest paths)* Local clustering coefficient at each vertex (k: number of neighbours, t: number of triangles) $$C=\frac{2t}{k(k-1)}$$
###Code
// Count triangles per vertex, then summarize the statistics
val triangleGraph = interesting.triangleCount()
triangleGraph.vertices.map(_._2).stats()
// Maximum possible number of triangles per vertex = k(k-1)/2
val maxTriGraph = interestingDegrees.mapValues( d => d*(d-1)/2.0 )
// Local clustering coefficient
val localClusterCoef = triangleGraph.vertices.innerJoin(maxTriGraph) {
(vid, triangleCout, maxTriangleCount) => {
if (maxTriangleCount == 0) 0 else 1.0 * triangleCout / maxTriangleCount
}
}
// Average local clustering coefficient over the whole network
localClusterCoef.map(_._2).sum() / interesting.vertices.count()
###Output
_____no_output_____
###Markdown
Computing average path length with Pregel (bulk-synchronous parallel graph processing). Using Pregel: 1) define each vertex's state: the path lengths it currently knows = Map[VertexId, Int]; 2) define the update/message function that combines incoming neighbour messages with the current state: if a neighbour knows a shorter path length, replace ours with it; 3) update the vertex's own state.
###Code
// Merge two messages: take the minimum, since we track shortest distances
def mergeMaps(m1:Map[VertexId, Int], m2:Map[VertexId, Int]): Map[VertexId, Int] = {
def minThatExists(k: VertexId):Int = {
math.min(
m1.getOrElse(k, Int.MaxValue),
m2.getOrElse(k, Int.MaxValue)
)
}
(m1.keySet ++ m2.keySet).map {
k => (k, minThatExists(k))
}.toMap
}
// Vertex update function: merge the current state with the incoming message
def update(id:VertexId, state:Map[VertexId, Int], msg:Map[VertexId, Int]) = {
mergeMaps(state, msg)
}
// Message to send to each vertex:
// from an EdgeTriplet, 1) add 1 to every distance (to account for traversing this edge), 2) emit an Iterator only when merging would actually change the other side's map
def checkIncrement(
a: Map[VertexId, Int],
b: Map[VertexId, Int],
bid: VertexId) = {
    // increment every known distance by 1
val aplus = a.map{ case (v,d) => (v, d+1) }
    // propagate only when merging changes the receiver's map
if ( b != mergeMaps(aplus, b)) {
Iterator((bid, aplus))
} else {
Iterator.empty
}
}
def iterate(e: EdgeTriplet[Map[VertexId, Int], _]) = {
checkIncrement(e.srcAttr, e.dstAttr, e.dstId) ++
  checkIncrement(e.dstAttr, e.srcAttr, e.srcId) // recipient here is the source vertex
}
// sample only 2% of the vertices
val fraction = 0.02
val replacement = false
val sample = interesting.vertices.map(v => v._1).sample(replacement, fraction, 1792L)
val ids = sample.collect().toSet
val mapGraph = interesting.mapVertices((id, v) => {
if (ids.contains(id)) {
Map(id -> 0)
} else {
Map[VertexId, Int]()
}
})
// run Pregel
val start = Map[VertexId, Int]() // initial message
val res = mapGraph.ops.pregel(start)(update, iterate, mergeMaps) // (vid, vid, path_length)
// results
val paths = res.vertices.flatMap { case (id, m) =>
m.map { case (k, v) =>
if (id < k) {
(id, k, v)
} else {
(k, id, v)
}
}
}.distinct().cache()
// beware of OOM here
paths.map(_._3).filter(_ > 0).stats()
val hist = paths.map(_._3).countByValue()
hist.toSeq.sorted.foreach(println)
###Output
An error was encountered:
Session 0 did not reach idle status in time. Current status is busy.
|
experiments/intro nbdev.ipynb | ###Markdown
API Name> API details.
###Code
#hide
from nbdev.showdoc import *
#hide
%matplotlib inline
import numpy as np
import librosa
import torch
from pathlib import Path
###Output
_____no_output_____
###Markdown
Spectrograms
###Code
n_fft = 2048
hop_length = 512
n_mels= 128
datapath = Path('temp/')
###Output
_____no_output_____
###Markdown
compute spectrogram
###Code
# ignore librosa pysoundfile load warning
import warnings
warnings.filterwarnings(
action='ignore',
category=UserWarning,
module=r'librosa'
)
audio_file = datapath/'00204008d.flac'
wf,sr = librosa.load(audio_file, sr=None)
# spectrogram
stft = librosa.stft(wf, n_fft=n_fft, hop_length=hop_length)
spgm_pwr = np.abs(stft)**2
spgm_log = librosa.power_to_db(spgm_pwr)
# mel spectrogram
spgm_mel_pwr = librosa.feature.melspectrogram(wf, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
spgm_mel_log = librosa.power_to_db(spgm_mel_pwr)
def compute_spectrogram(wf, n_fft, hop_length):
return librosa.power_to_db(np.abs(librosa.stft(wf, n_fft=n_fft, hop_length=hop_length))**2)
def compute_mel_spectrogram(wf, sr, n_fft, hop_length, n_mels):
return librosa.power_to_db(librosa.feature.melspectrogram(wf, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels))
statspath = Path('statistics')
datapath = Path('sample_data')
datapath_train = Path('sample_data/train/')
datapath_spgm = Path('sample_data/spectrograms')
datapath_spgm_mel = Path('sample_data/melspectrograms')
datapath_spgm.mkdir(exist_ok=True)
datapath_spgm_mel.mkdir(exist_ok=True)
%%time
for audio_file in datapath_train.iterdir():
# check
if audio_file.suffix != '.flac': continue
# load
wf,sr = librosa.load(audio_file)
# compute
spgm = compute_spectrogram(wf=wf, n_fft=n_fft, hop_length=hop_length)
spgm_mel = compute_mel_spectrogram(wf=wf, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
# write
np.save(datapath_spgm/f'{audio_file.stem}', spgm)
np.save(datapath_spgm_mel/f'{audio_file.stem}', spgm_mel)
###Output
_____no_output_____
###Markdown
pure pytorch
###Code
# import torchaudio
# # pytorch STFT
# wf,sr = torchaudio.load(audio_file)
# stft = torch.stft(wf,
# n_fft = n_fft,
# hop_length = hop_length,
# window = torch.hann_window(n_fft),
# return_complex = True,
# center = True)
# spgm_pwr = torch.abs(stft)**2
# spgm_log = librosa.power_to_db
###Output
_____no_output_____
###Markdown
notebook export
###Code
from nbdev.export import notebook2script; notebook2script()
###Output
Converted 00_core.ipynb.
Converted 01_spectrogram_processor.ipynb.
Converted basic classifier.ipynb.
Converted experiments - spectrogram compression and timing statistics.ipynb.
Converted experiments - waveform analysis.ipynb.
Converted index.ipynb.
|
notebooks/eda2.ipynb | ###Markdown
Process data & feature extraction
###Code
%matplotlib inline
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import pandas as pd
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from string import punctuation
from collections import Counter
import re
import numpy as np
import itertools
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.stem.porter import PorterStemmer
from nltk.stem.snowball import SnowballStemmer
from gensim import corpora, models
# read sample data
df = pd.read_csv('input/stack_ds_4_9_2017 .csv',sep=',',quotechar='|',header=None)
df.columns = ['title','body','tags']
df.head(2)
###Output
_____no_output_____
###Markdown
**notice that the data is already filtered with code and images etc.**
###Code
df.info()
# merge title and body so that we only have one feature to consider
merged = [ title + ' ' + body for title, body in zip(df.title,df.body)]
df_merged = pd.DataFrame({'content':merged,'tags':df.tags})
df_merged.head(2)
# how many tags we have?
all_tags = df.tags.apply(lambda x: x.replace('<','').split('>'))
all_tags = [x for x in list(itertools.chain(*all_tags)) if x ]
ct = Counter(all_tags)
print(len(ct),'out of',len(df))
###Output
3591 out of 4533
###Markdown
**tokenize** - lowercase - stopwords - remove non-alphabetic characters - stemming (seems not that good)
###Code
to_be_removed = set(stopwords.words('english'))
tokenizer = lambda x : [word.lower() for word in word_tokenize(re.sub("[^a-zA-Z]"," ",x)) if word.lower() not in to_be_removed]
# tokenizer = lambda x : [ SnowballStemmer('english').stem(word.lower()) for word in word_tokenize(re.sub("[^a-zA-Z]"," ",x)) if word.lower() not in to_be_removed]
###Output
_____no_output_____
###Markdown
**apply tf-idf**
###Code
%%time
# parameters to be tweaked; we start from a simple configuration
tfidf = TfidfVectorizer(min_df=0.001,max_df=0.95, max_features=None, tokenizer= tokenizer, ngram_range=(1,2))
tfidf_trained = tfidf.fit_transform(list(df_merged.content))
df_tfidf = pd.DataFrame({'token':tfidf.get_feature_names(),'tfidf_value':tfidf.idf_})
df_tfidf.sort_values('tfidf_value',ascending=False).set_index('token').head()
###Output
_____no_output_____
###Markdown
**simple LDA**
###Code
# build a look up dict, where every unique word is mapped to a unique int
texts = df_merged.content.apply(tokenizer)
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
corpus[0][:10] # first 10 words in first sentence : word 0 occurs once, word 8 occurs twice
%%time
# id2word: required. The LdaModel class requires our previous dictionary to map ids to strings.
# passes: optional. The number of laps the model will take through corpus. The greater the number of passes, the more accurate the model will be. A lot of passes can be slow on a very large corpus.
ldamodel = models.ldamodel.LdaModel(corpus, num_topics=3, id2word = dictionary, passes=20)
ldamodel.print_topics(num_topics=3, num_words=10)
###Output
_____no_output_____
###Markdown
Well, interesting! I didn't expect this: we can clearly see that the first topic is about 'java application', the second 'data science', and the third 'web programming' :)
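Each document is a mixture of these topics; here is a minimal sketch (assuming `ldamodel`, `corpus`, and `df_merged` from the cells above) that attaches the single dominant topic to each row, while the next cell lists the full per-document mixtures:
###Code
# sketch: pick the highest-weight topic for each document
dominant = [max(ldamodel.get_document_topics(bow), key=lambda t: t[1])[0] for bow in corpus]
df_topics = df_merged.assign(dominant_topic=dominant)
df_topics[['content', 'dominant_topic']].head()
###Output
_____no_output_____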
###Code
# each document has several topics
[ldamodel.get_document_topics(corpus)[i] for i in range(20)]
###Output
_____no_output_____ |
notebooks/02g - The Python Ecosystem - The statsmodels library.ipynb | ###Markdown
The Python ecosystem - The statsmodels library [statsmodels](http://www.statsmodels.org/dev/index.html) is a Python module that provides classes and functions for the estimation of many different **statistical models**, as well as for conducting **statistical tests** and **statistical data exploration**.* Regression Analysis* Linear Mixed Effects Models* ANOVA* Time Series Analysis* Parametric and Nonparametric Statistical Methods* Multivariate Statistics* Distributions* and many more... **Add the `src` directory as one where we can import modules**
###Code
import os
import sys
# add the 'src' directory as one where we can import modules
src_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir, 'src'))
sys.path.append(src_dir)
print(src_dir)
import helper_funcs as hf
###Output
_____no_output_____
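###Markdown
The feature list in the introduction mentions regression models, which none of the cells below demonstrate; purely for orientation, a minimal OLS sketch on synthetic data (hypothetical variables `x` and `y`, not part of the chicken-weight example that follows):
###Code
# sketch: ordinary least squares on synthetic data
import numpy as np
import statsmodels.api as sm
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=100)  # true intercept 2, slope 3
X = sm.add_constant(x)                               # add an intercept column
ols_fit = sm.OLS(y, X).fit()
print(ols_fit.params)                                # estimated intercept and slope
###Output
_____no_output_____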
###Markdown
**Load libraries**
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Population and Sample Statistics [Inferential statistics](https://en.wikipedia.org/wiki/Statistical_inference) is all about using **sample results** to make decisions or predictions about a **population**. Basically, a numerical value is assigned to a population parameter based on the information collected from a sample. **Load the data set**Source [Crowder, M. and Hand, D. (1990)](https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/ChickWeight.html)
###Code
chicken, chick_diet1, chick_diet2, chick_diet3, chick_diet4 = hf.load_chicken()
chicken
chicken["weight"].plot.hist();
###Output
_____no_output_____
###Markdown
Point Estimate and Confidence Interval Given a sample, the value of the computed sample statistic gives a point estimate of the corresponding population parameter. For example, the sample mean $(\bar x)$, is a point estimate of the corresponding population mean, $\mu$, or the sample standard deviation $s$ is a point estimate for the population standard deviation $\sigma$. * [__sampling error__](https://en.wikipedia.org/wiki/Sampling_error) (the point estimate almost always differs from the true value of the population) Interval Estimate Instead of assigning a single value to a population parameter, an interval estimation gives a probabilistic statement, relating the given interval to the probability that this interval actually contains the true (unknown) population parameter.The level of confidence is chosen a priori and thus depends on the users preferences. It is denoted by$$100(1-\alpha)$$Although any value of confidence level can be chosen, the most common values are 90%, 95%, and 99%. When expressed as probability, the confidence level is called the confidence coefficient and is denoted by (1−$\alpha$). Most common confidence coefficients are 0.90, 0.95, and 0.99, respectively.A 100(1−$\alpha$)% confidence interval is an interval estimate around a population parameter $\theta$ (here, the Greek letter $\theta$ is a placeholder for any population parameter of interest such as the mean $\mu$, or the standard deviation $\sigma$, among others) that, under repeated random samples of size $N$, is expected to include $\theta$'s true value 100(1−$\alpha$)% of the time ([Lovric 2010](http://www.springer.com/de/book/9783642048975)).The actual number added to and subtracted from the point estimate is called the margin of error.$$CI:\text{Point estimate} \pm \text{Margin of error (ME)}$$Thus, the margin of error (ME) is expressed as$$ME = z^*_{\alpha/2} \times \frac{\sigma}{\sqrt{n}}$$
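Before the statsmodels version in the next cell, a minimal sketch of the margin of error computed directly from this formula with a normal approximation (assuming scipy is available and using the `chicken` DataFrame loaded above); `tconfint_mean` below uses the t-distribution instead, so the two intervals differ slightly:
###Code
# sketch: normal-approximation confidence interval built from the ME formula
import numpy as np
from scipy import stats
weights = chicken["weight"].to_numpy()
alpha = 0.05
x_bar = weights.mean()                              # point estimate
se = weights.std(ddof=1) / np.sqrt(len(weights))    # sigma estimated from the sample
me = stats.norm.ppf(1 - alpha / 2) * se             # z*_{alpha/2} times the standard error
print(x_bar - me, x_bar + me)
###Output
_____no_output_____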
###Code
from statsmodels.stats.weightstats import DescrStatsW
d_stats = DescrStatsW(chicken["weight"])
d_mean = d_stats.mean
d_mean
alpha = 0.05
d_conf_int = d_stats.tconfint_mean(alpha=alpha)
print(d_conf_int)
sig_level = int((1-alpha)*100)
lower_ci = np.round(d_conf_int[0],1)
upper_ci = np.round(d_conf_int[1],1)
(print("We are {}% confident that the true weight of chicken is between {} and {} grams.".
format(sig_level, lower_ci, upper_ci)))
###Output
_____no_output_____
###Markdown
Hypothesis Testing
###Code
chicken.groupby("Diet")["weight"].mean()
def compute_ci(df, group, var, alpha=0.05):
groups = df[group].unique()
rv = pd.DataFrame({"group":None,
"mean":None,
"lower_ci":None,
"upper_ci":None},
index=range(len(groups)))
for e, g in enumerate(groups):
stats = DescrStatsW(df.loc[df[group] == g, var])
group_mean = stats.mean
group_ci = stats.tconfint_mean(alpha=alpha)
rv.loc[e] = {"group":g,
"mean":group_mean,
"lower_ci":group_ci[0],
"upper_ci":group_ci[1]}
return rv
group_stats = compute_ci(df=chicken, group="Diet", var="weight")
group_stats
groups = group_stats["group"].values
fig, ax = plt.subplots()
ax.plot(group_stats["mean"].values, groups, "o")
ax.set_ylim(0.5,4.5)
for e,i in enumerate(group_stats["group"]):
mean = group_stats["mean"].values[e]
upper = group_stats["upper_ci"].values[e]
lower = group_stats["lower_ci"].values[e]
ax.plot((upper, lower),
(i,i), "k--")
ax.text(mean,i+0.1, "Diet "+str(int(i)), horizontalalignment='center')
ax.legend(["mean", "ci interval"]);
###Output
_____no_output_____
###Markdown
A very common problem that scientists face is the assessment of significance in scattered statistical data. Owing to the limited availability of observational data, scientists apply **inferential statistical methods to decide if the observed data contains significant information or if the scattered data is nothing more than the manifestation of the inherently probabilistic nature of the data generation process**.The framework of hypothesis testing is all about making statistical inferences about populations based on samples taken from the population. Any hypothesis test involves the **collection of data (sampling)**. If the **hypothesis** is assumed to be correct, the scientist can calculate the **expected results** of an experiment. If the **observed data** differs significantly from the expected results, then one considers the assumption to be incorrect. Thus, based on the observed data the scientist makes a **decision** as to whether or not there is sufficient evidence, based upon analyses of the data, that the model - the hypothesis - should be rejected, or that there is not sufficient evidence to reject the stated hypothesis. Testing for differences in the mean (t-test)* variables normally distributed and independent* variance equal or notNull hypothesis$H_0:\quad \mu_1 = \mu_2$Alternative hypothesis$H_A:\quad \mu_1 \ne \mu_2$
###Code
from statsmodels.stats.weightstats import ttest_ind
t, p, degf = ttest_ind(chick_diet2, chick_diet3)
p
###Output
_____no_output_____
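###Markdown
The call above uses the default pooled-variance t-test; when the equal-variance assumption is doubtful, the same statsmodels function can run a Welch-style test. A minimal sketch, reusing `chick_diet2` and `chick_diet3` from above:
###Code
# sketch: Welch (unequal-variance) t-test with the same function
t_w, p_w, df_w = ttest_ind(chick_diet2, chick_diet3, usevar="unequal")
print(t_w, p_w, df_w)
###Output
_____no_output_____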
###Markdown
Multiple comparisons The problem with multiple comparisons is that the more hypotheses are tested on a particular data set, the more likely it is to incorrectly reject the null hypothesis. Thus, methods of multiple comparison require a more stringent (lower) per-comparison significance level ($\alpha$) in order to compensate for the number of inferences being made.**The Family-wise error rate**The family-wise error rate is the probability of making one or more false discoveries, or [__Type I errors__](https://en.wikipedia.org/wiki/Type_I_and_type_II_errors), when performing multiple hypothesis tests.Recall that at a significance level of $\alpha=0.05$, the probability of making a Type I error is equal to $0.05$ or $5\%$. Consequently, the probability of not making a Type I error is $1-\alpha=1-0.05=0.95$.Written more formally, for a family of $C$ independent tests, the probability of making no Type I error in the whole family is$$(1-\alpha)^C\text{,}$$so the family-wise error rate is $1-(1-\alpha)^C$.Let us now consider $C=4$ and $\alpha=0.05$. Thus, if we make 4 comparisons on one data set, the probability of not making one or more Type I errors on the family of tests is $(1-\alpha)^C = (1-0.05)^4 = 0.81$; equivalently, the probability of at least one false discovery is about $0.19$.Null hypothesis$H_0:\quad \mu_1 = \mu_2 = \mu_3 = \mu_4$Alternative hypothesis$H_A:\quad \mu_i \ne \mu_j \text{ for at least one pair } (i,j)$ **[Tukey's HSD (Honestly Significant Difference)](https://en.wikipedia.org/wiki/Tukey%27s_range_test)**
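Before running Tukey's HSD in the next cell, a quick numeric check of the family-wise error formula above, together with the simple Bonferroni adjustment (a sketch with $\alpha=0.05$ and $C=4$ hard-coded):
###Code
# sketch: family-wise error rate for C independent tests at alpha = 0.05
alpha, C = 0.05, 4
p_no_error = (1 - alpha) ** C        # probability of no Type I error in the family (~0.81)
fwer = 1 - p_no_error                # probability of at least one Type I error (~0.19)
alpha_bonferroni = alpha / C         # Bonferroni-adjusted per-test significance level
print(p_no_error, fwer, alpha_bonferroni)
###Output
_____no_output_____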
###Code
from statsmodels.sandbox.stats.multicomp import MultiComparison
mult_comp = MultiComparison(data=chicken["weight"], groups=chicken["Diet"])
mult_comp.tukeyhsd(alpha=0.05).summary()
_ = mult_comp.tukeyhsd(alpha=0.05).plot_simultaneous()
###Output
_____no_output_____ |
KNN/main1_out.ipynb | ###Markdown
Load Dataset
###Code
# imports used throughout this notebook
import pickle
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_curve, auc)
training_set = pd.read_csv("../dataset/movie_train.csv")
Y=training_set['sentiment'].values
X=training_set['review'].values
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.20,random_state=42,stratify=Y)
print ("No. of Training Examples: ",x_train.shape)
print ("No. of Testing Examples: ",x_test.shape)
tf=TfidfVectorizer(min_df=10,max_df=0.95,use_idf=True)
tf.fit_transform(x_train)
X_train=tf.transform(x_train) # for train data we could also use fit_transform directly.
X_test=tf.transform(x_test)
pickle.dump(tf, open('vectorizer1.sav', 'wb'))
# Evaluating model performance based on precision, recall, accuracy and F1 score
def do_evaluation (predicted, actual,verbose=True):
precision = precision_score(actual,predicted)
recall = recall_score(actual,predicted)
accuracy = accuracy_score(actual,predicted)
f1score = f1_score(predicted,actual)
if verbose:
print('"Evaluation"','| Precision ==',round(precision*100,2),'| Recall ==',round(recall*100,2),'| Accuracy ==',round(accuracy*100,2),'| F1 score ==',round(f1score*100,2))
###Output
_____no_output_____
###Markdown
Training phase..
###Code
# K-Nearest Neighbors classifier
knn = KNeighborsClassifier(n_neighbors=120,leaf_size=80, p=2)
knn.fit(X_train,y_train)
# Testing phase
knn_pred=knn.predict(X_test)
print("Accuracy: ",round(accuracy_score(y_test,knn_pred),3))
print ('{:.1%} of prediction are positive'.format(float(sum(knn_pred))/len(y_test)))
print ('{:.1%} are actually positive'.format(float(sum(y_test))/len(y_test)))
do_evaluation (knn_pred,y_test, verbose=True)
pickle.dump(knn, open('knn1_0.81_120,80,2.sav', 'wb'))
###Output
_____no_output_____
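###Markdown
The neighbour count and leaf size above look hand-tuned (the pickle filename records the chosen values and accuracy); a minimal cross-validation sketch for choosing `n_neighbors`, assuming `X_train` and `y_train` from the cells above and a hypothetical, deliberately small grid:
###Code
# sketch: cross-validated choice of n_neighbors (small grid to keep runtime down)
from sklearn.model_selection import GridSearchCV
param_grid = {"n_neighbors": [40, 80, 120, 160]}   # hypothetical grid
search = GridSearchCV(KNeighborsClassifier(leaf_size=80, p=2),
                      param_grid, cv=3, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, round(search.best_score_, 3))
###Output
_____no_output_____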
###Markdown
Evaluate classifier performance (ROC and AUC curve)
###Code
def display_curve(nb_pred,name):
#Calculating False Positive Rate ,True Positive Rate and threshold
fpr_nb, tpr_nb, _ = roc_curve(y_test, nb_pred)
#AUC is the percentage of the ROC plot that is underneath the curve:
roc_auc_nb = auc(fpr_nb, tpr_nb)
    plt.title(f'Receiver Operating Characteristic for {name} Classifier')
plt.plot(fpr_nb, tpr_nb, 'b', label = 'AUC = %0.2f' % roc_auc_nb)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# KNN classifier ROC curve
display_curve(knn_pred,'KNN')
###Output
_____no_output_____
###Markdown
Testing
###Code
# Load model
knn = pickle.load(open('knn1_0.81_120,80,2.sav','rb'))
tf = pickle.load(open('vectorizer1.sav','rb'))
# Testing
test_array = [
'entertaining film follows rehearsal process ny production best taking seriously recognizable actors john glover gloria reubens david lansbury nice jobs main roles highlight hilarious scene murder banquo john elsen rehearsed probably entertaining film involved theatre anyone enjoys shakespeare enjoy film',
'could otherwise run mill mediocre film infidelity sixties subtle free love period top happily ever ending time ever feel sympathy diane lane anna paquin troublesome middle class care free life feel emasculated liev shrieber story line plods along slowly predictable pathetic conclusion thing interesting watchable film stunning diane lane topless hint occurs 30 minutes film fast forward part skip rest',
'cosimo luis guzmán told prison perfectsoon five guys organizing crime five guys little brain capacity brothers anthony joe russo directors welcome collinwood 2002 crime comedy often funny cannot help laughing everything goes wrong guys great actors playing characters william macy plays riley isaiah washington leon sam rockwell pero michael jeter toto andy davoli basil gabrielle union plays love interest michelle jennifer esposito plays pero love interest carmela george clooney also producer plays jerzy tattooed guy wheelchair highly entertaining flick certainly recommend',
'summer full blockbusters comebacks woe find film could sit enjoy case want read page spoilers sum mature ella enchanted questionably violent parts plenty death handful scenes little blood small children try overly corny overstep bounds think bit serious bit magical princess bride close perhaps prodigious movie goer others maybe twice month feel active also huge sci fi fantasy fan get bored remade repetitive story lines films flash filling faster count 10 film diamond rough end august tired enough fractured expectations big hits averted seeing bourne ultimatum favor stardust hopes thoroughly muddied transformers fiction addiction previews stardust seemed appealing certainly wary many others utterly surprised gone thinking see another generic fantasy movie clichéd breakfast fooled definitely fairy tale indeed witches magic utterly requires suspension disbelief refreshing thing found based anything seen read past 15 years actually really good movie unlike 90 movies seem persistently appear like thorns side perhaps sign hollywood running ideas could read book year two years movie would another epic fantasy tale likes lotr rest says nyt doubt stardust bolted seat jam packed action every turn sweating bullets plot hook plot hook threatening tear dramatic tension apart echo throughout theater loud boom even use enormous blasts sound grab attention happening screen transformers looking trying show latest cgi techniques offend intelligence dimwitted dialogs story lines simple enough could figured 3rd grade boy hate watched watched enjoyed refreshingly creative storyline unfold eyes sure may known going happen throughout film makes forget even made heart twinge parts important aspect noticed left theater feeling better would gone truly gem much slush summer many remakes films fell short expectations like cold sweet cup tea cap hard work would done sitting others trying come money worth probably everyone favor enjoy fantasy films stand test time alone princess bride black cauldron dark crystal etc really see movie little diamond finding way dvd collection moment hits stores trust simply wonderful',
'best movie ever seen maybe live area sweden movie tells truth believe criticizes honors lifestyle dalarna producer wants people watch movie opened minded care closest friends relatives live another small village anywhere sweden another country probably also recognize much movie thank maria blom',
'story deals jet li fight oldfriends one problem friends superfighters film filled blood super action best stunts forever lau ching wan great co actor course movie typical hk fun love germany black mask uncut',
'emotional impact movie defies words elegant subtle beautiful tragic rolled two hours smith matures acting ability full range knew saw pursuit happiness thought must fluke blockbuster top actor smith performances movies portray whole dimension smith refinement talent selectivity scripts sure view differently seven pounds one movies order fully enjoy essence suspend belief watch plot watch fragile condition human heart literally metaphorically story human guilt atonement love sacrifice',
'oh man want give internal crow robot real workout movie pop ol vcr potential cut lines film endless minor spoilers ahead hey really care film quality spoiled traci girl problem psychology developed names child develops sexual crush opposite sex parent girl seems one sex one think term might mother dana played rosanna arquette whose cute overbite neo flowerchild sexuality luscious figure makes forgive number bad movies unsympathetic characters dana clueless daughter conduct seems competing gold medal olympic indulgent mother competition possible dana misses traci murderous streak truth told traci seems criminal skills hamster script dictates manages pull kind body count particularly hilarious note movie character carmen mexican maid described dana around long like one family although dresses director thought would say fell tomato truck guadalajara carmen wise traci scheming might also wear sign saying hey next victim sure enough traci confronts carmen carmen making way back mass bops one slightly angled lug wrenches car manufacturers put next spare bad joke rather suspect real life things useless murder weapon changing tire another sequence arquette wears flimsy dress vineyard cloudy skies talking owner cut another flimsy dress sunny skies talking owner brother cut wearing first dress first location cloudy skies supposed later get picture talking really bad directing skin expect much although traci nice couple bikinis looking trash wallow 8 anybody else',
'life time little richard told little richard produced directed little richard one sided one songs biography even docudrama good writing great energy outstanding leading actor playing richard music little richard rocks tight lipsync every song movie covers early childhood carrys thru formative years music wild success richard throwing away praise lord tied together well obvious comeback 1962 manages stay away idea little richard discovered beatles opened main objection outrageous counter cultural behavior underplayed get feel audience experienced time energy still come across full force seemed tame compared remember time best scenes richard getting jilted lucille writing song strip bikini shorts performing make point decent place change gotten bronze liberace richard use refer interviews story trust saw perform couple months ago still flirts pretty white boys giving one particularly good dancer audience headband nearly 68 still going strong recommend movie concert v appearance find little richard always',
'script weak enough character arcs make care one bit characters happens script way talky enough gore action even call slow paced story gets point want everyone shut die quickly possible listen talk muted stiff dialogue technical note music mix way high makes hard understand said times could called blessing overall story could better told short film running time 30 minutes obvious face homages sam raimi evil dead would good subtle seem like bald faced rip mon kind 35mm budget best could done still cinematography lighting design shots well done indeed',
'savage island raw savagery scare hell trust boy estranged savage family run city slicker tourists pa savage wants revenge stop nothing gets real horror film truly wonderful horror moments also negative review clearly comes someone lacks proper knowledge film filmmakers chose lighting camera work order reflect dark murky egdy mood story words obtain certain aesthetic fact film several horror film festival awards',
'docteur petiot starring michel serrault brutal yet preys weakest amidst populace imagery cinematography superb lend additional macabre feeling complex story perfect psychopath seductive altruistic intelligent caring calculating murderous movie certain forgotten soon viewer kudos mr serrault chilling portrayal',
'one favourite flicks unlike weak elvira stranded unfamiliar town death good witch elviras aunt morgana inherits ruby ring extremely powerful sought bad warlock uncle befriends four characters inadvertently helps grow throughout movie dog tow show uncle wicked witch west elvira realises strength within ends defeating end gets sent towns folk winning hearts finally gets destination las vegas dorothy home kansas many references made wizard oz throughout movie uncle quote lines relevant parallel characters elvira youe must aunt em must uncle remus place like home place like home bad uncle vinny get pretty little dog sign elvira passes first road trip mentions state kansas aside fact one sequences ripped um mean inspired flashdance pure genius roll around laughing titty twirling end 80 las vegas show got camp bone body movie cult camp classic',
'oscar nominations zero win yet understandlike done halle berry denzel washington whoopi oprah margaret avery danny glover etc amazing curious get scripts discussions oscars year go shoulda would coulda category judges amazing book true alice walker style writing way seeming like exaggerating characters glad screen adaptation took things cinematography amazing african scenes live much desired african part book supposed set liberia somewhere west africa oh steven spielberg thinks world dumb cannot think africa outside safaris yes complimentary zebra wildlife scene know none west africa get people speak swahili west africa speaks swahili get way story amazing film making world classic yes watch soul needs rejuvenation',
'kurt thomas stars jonathan cabot ninjas stand chance especially since cabot gymnast taken whole gymkata one helluva bad movie atrocious acting god awful script really incompetent directing make quality human standards however movie terrible becomes really really funny mean dialog know outsleep ha add mock value gymkata obtains besides wisely movie hero gymnast finds things swing heat moment',
'film pretty good big fan baseball glover joseph gordon levitt brenda fricker christopher lloyd tony danza milton davis jr brought variety talented actors understanding sport plot believable love message william dear guys put together great movie sports films revolve around true stories events often work well film hits 10 perfectness scale even though minor mistakes',
'warm funny film much vein works almodovar sure 10 year cannot understand readers found sick perverted would willing let 10 year old son play part sure spanish cinema often quite sexual open healthy way leaves viewer without sense voyeurism kink think northern european types attitude would much better result liberal attitude also seen hilarious fartman maurice character lover says people embarrassed farting turn art form',
'although great film something compelling memorable like never forgotten story ridiculously cumbersome title see opportunity feel like voyeur small town life evolves decades film one brings human face historical drama early twentieth century progress engaging enough young viewer memorable enough older one furthermore easy like characters watch passage time',
'movie distinct albeit brutish rough humanity borderline depravity zippy like terrorizing woman train semi pitiful vulnerability lurks never far away dewaere sucks breasts like baby blier cuts away scene depardieu may rape dewaere never sure explicitly read manifestly homoerotic aspect relationship either way incident start relative humanization movie could certainly read pro gay although could likely read pro anything want movie many objectionable scenes points sexual politics probably best taken general cartoon foibles sexes making mockery whole notion sensitivity honesty hitting numerous points possible profundity basis fire enough shots bound hit',
'one remarkable sci fi movies millennium movie incredible establishes new standard f movies hail kill',
'care peopl vote movi bad want truth good thing movi realli get one',
'never realli understood controversi hype provok social drama could ever experi yeah right might littl shock mayb often see someon get shot ars weak pointless plot sure think much bais moi anoth one blame everyth go wrong societi film gener convinc 99 peopl function perfectli well societi would blame exact societi vile hopeless act two derang nymph girl two main charact miser life introduc separ flash nadin kill roommat manu shot brother two meet abandon train station late night decid travel around franc togeth leav trail sex blood behind wherev made stop although constantli expos pornographi violenc film bore sit like girl indic time dialogu lame peopl run kill uninterest peopl want make porno movi fine pleas pretend art hous film make leav swear hip camera work see arous pornographi cool soundtrack though',
'sweet entertain tale young find work retir eccentr tragic actress well act especi juli walter rupert grint play role teenag boy well show talent last longer harri potter seri film laura linney play ruthlessli strict mother without hint redempt room like film entertain film made well british style like keep mum calendar girl',
'first mention realli enjoy skin man peach hip girl although owe debt tarentino pulp fiction ishii cast task carri stori entir film crackl energi scene asano tadanobu gashuin tatsuya particularli engag action intrigu bizarr character enough sex keep thing interest utterli unpredict stori line certain amount anticip optim began watch parti 7 enthusiasm certainli piqu open credit left wife actual stun dynam excit mix anim live action work brilliant actual movi start actual much start sort shuffl side door stand fumbl pocket look uncomfort entir film take place three room one futurist voyeur paradis borrow bit shark skin man anoth travel agent offic third far use seedi hotel room room cast seven charact meet approxim noth realli stranger talk film one time favorit dinner andr talkiest talk film dinner andr far excit two middl age men discuss life dinner key andr gregori wallac shawn tell interest stori cast parti 7 liter whine entir film ye realli ye realli realli realli ye realli get idea hope wish direct parti 7 unbeliev unengag film flimsiest plot money stolen yakuza like shark skin man accompani almost action interest dialog charact larg uninterest ishii took throwaway convers moment tarentino film built entir film around tarentino convers alway intern logic wit call royal chees dialog duller imagin brief hilari cameo gashuin alway marvel low key perform awesom asano tadanobu would given parti 7 singl star realli chore make way',
'argentinian music poet film feel music repeat world time countri histori first listen play tri make other hear believ hear nobodi say anyth peopl appear listen other recogn heard think other might hear final everybodi listen music suddenli sound love poetri real nation legaci father child would call film dead nobodi dy spanish translat titl refus follow rule call dublin follow jame joyc titl nice 1900 irish film postcard',
'saw film chanc small box fantast chill cannot believ still wait 5 vote',
'small california town diablo plagu mysteri death sheriff robert lopez unearth ancient box legend box hold sixteenth centuri mexican demon name azar fbi agent gil vega sent investig murder join forc sheriff daughter dominiqu mari fight evil bloodthirsti demon legend diablo absolut garbag film lack scare gore act amateurish direct bad anim one aspect film enjoy big fan indi horror flick exampl love torch live feed bone sick neighborhood watch unfortun legend diablo huge misfir definit one avoid',
'good see vintag film buff correctli categor excel dvd releas excus elabor girli show kitti carlisl gertrud michael lead cast super decor girl includ ann sheridan luci ball beryl wallac gwenllian gill gladi young barbara fritchi wanda perri dorothi white carl brisson also hand lend strong voic cocktail two undoubtedli movi popular song heard le four time howev gertrud michael steal show rendit sweet marijauna strong perform hero reject girlfriend rest cast could done without jack oaki victor mclaglen altogeth good thing oaki role weak run gag cult icon tobi wing fact give idea far rest comedi indulg strain super dumb inspector mclaglen simpli cannot put hand killer even though would believ instanc happen person suspect director mitch leisen actual go great pain point killer even dumbest member cinema audienc give player concern close close',
'saw film via one actor agent sure conform great deal come film excel mostli kid actor ham embarrass case realli good term surreal thingi mention jingo well think film plain weird real weirdo film weirdo locat storylin weird stuff go whole time good weird oppos bad hard think movi like like car ate pari mayb like repuls actual think like hammer movi 60 certainli interest mind work behind jingo question also titl modern love anyon also jingo mean god forsaken talk australia hmm curiou',
'civil war mani case divid loyalti obvious mani occur border owen moor go join union armi shortli confeder soldier henri walthal separ regimen wander onto enemi properti desper water find suppli unionist young daughter gladi egan sit yanke soldier track littl gladi innoc help confeder hide later return kill father littl girl kind rememb sweet small stori director w griffith locat footag human lovingli display border state 6 13 10 w griffith henri walthal owen moor gladi egan'
]
test_result = [1,0,1,1,1,1,1,0,1,0,1,1,1,1,0,1,1,0,1,1,1,0,1,0,1,1,0,1,1,0]
test_func = lambda x: 'pos' if x==1 else 'neg'
knn_c = knn.predict(tf.transform(test_array).toarray())
count_correct=0
for sentence,l,r in zip(test_array,knn_c,test_result):
    print(sentence,': KNN=',test_func(l))
    if l==r:
        count_correct +=1
print('KNN accuracy on these samples (%):',count_correct/len(test_result)*100)
###Output
entertaining film follows rehearsal process ny production best taking seriously recognizable actors john glover gloria reubens david lansbury nice jobs main roles highlight hilarious scene murder banquo john elsen rehearsed probably entertaining film involved theatre anyone enjoys shakespeare enjoy film : Random Forest= pos
could otherwise run mill mediocre film infidelity sixties subtle free love period top happily ever ending time ever feel sympathy diane lane anna paquin troublesome middle class care free life feel emasculated liev shrieber story line plods along slowly predictable pathetic conclusion thing interesting watchable film stunning diane lane topless hint occurs 30 minutes film fast forward part skip rest : Random Forest= neg
cosimo luis guzmán told prison perfectsoon five guys organizing crime five guys little brain capacity brothers anthony joe russo directors welcome collinwood 2002 crime comedy often funny cannot help laughing everything goes wrong guys great actors playing characters william macy plays riley isaiah washington leon sam rockwell pero michael jeter toto andy davoli basil gabrielle union plays love interest michelle jennifer esposito plays pero love interest carmela george clooney also producer plays jerzy tattooed guy wheelchair highly entertaining flick certainly recommend : Random Forest= pos
summer full blockbusters comebacks woe find film could sit enjoy case want read page spoilers sum mature ella enchanted questionably violent parts plenty death handful scenes little blood small children try overly corny overstep bounds think bit serious bit magical princess bride close perhaps prodigious movie goer others maybe twice month feel active also huge sci fi fantasy fan get bored remade repetitive story lines films flash filling faster count 10 film diamond rough end august tired enough fractured expectations big hits averted seeing bourne ultimatum favor stardust hopes thoroughly muddied transformers fiction addiction previews stardust seemed appealing certainly wary many others utterly surprised gone thinking see another generic fantasy movie clichéd breakfast fooled definitely fairy tale indeed witches magic utterly requires suspension disbelief refreshing thing found based anything seen read past 15 years actually really good movie unlike 90 movies seem persistently appear like thorns side perhaps sign hollywood running ideas could read book year two years movie would another epic fantasy tale likes lotr rest says nyt doubt stardust bolted seat jam packed action every turn sweating bullets plot hook plot hook threatening tear dramatic tension apart echo throughout theater loud boom even use enormous blasts sound grab attention happening screen transformers looking trying show latest cgi techniques offend intelligence dimwitted dialogs story lines simple enough could figured 3rd grade boy hate watched watched enjoyed refreshingly creative storyline unfold eyes sure may known going happen throughout film makes forget even made heart twinge parts important aspect noticed left theater feeling better would gone truly gem much slush summer many remakes films fell short expectations like cold sweet cup tea cap hard work would done sitting others trying come money worth probably everyone favor enjoy fantasy films stand test time alone princess bride black cauldron dark crystal etc really see movie little diamond finding way dvd collection moment hits stores trust simply wonderful : Random Forest= neg
best movie ever seen maybe live area sweden movie tells truth believe criticizes honors lifestyle dalarna producer wants people watch movie opened minded care closest friends relatives live another small village anywhere sweden another country probably also recognize much movie thank maria blom : Random Forest= pos
story deals jet li fight oldfriends one problem friends superfighters film filled blood super action best stunts forever lau ching wan great co actor course movie typical hk fun love germany black mask uncut : Random Forest= pos
emotional impact movie defies words elegant subtle beautiful tragic rolled two hours smith matures acting ability full range knew saw pursuit happiness thought must fluke blockbuster top actor smith performances movies portray whole dimension smith refinement talent selectivity scripts sure view differently seven pounds one movies order fully enjoy essence suspend belief watch plot watch fragile condition human heart literally metaphorically story human guilt atonement love sacrifice : Random Forest= pos
oh man want give internal crow robot real workout movie pop ol vcr potential cut lines film endless minor spoilers ahead hey really care film quality spoiled traci girl problem psychology developed names child develops sexual crush opposite sex parent girl seems one sex one think term might mother dana played rosanna arquette whose cute overbite neo flowerchild sexuality luscious figure makes forgive number bad movies unsympathetic characters dana clueless daughter conduct seems competing gold medal olympic indulgent mother competition possible dana misses traci murderous streak truth told traci seems criminal skills hamster script dictates manages pull kind body count particularly hilarious note movie character carmen mexican maid described dana around long like one family although dresses director thought would say fell tomato truck guadalajara carmen wise traci scheming might also wear sign saying hey next victim sure enough traci confronts carmen carmen making way back mass bops one slightly angled lug wrenches car manufacturers put next spare bad joke rather suspect real life things useless murder weapon changing tire another sequence arquette wears flimsy dress vineyard cloudy skies talking owner cut another flimsy dress sunny skies talking owner brother cut wearing first dress first location cloudy skies supposed later get picture talking really bad directing skin expect much although traci nice couple bikinis looking trash wallow 8 anybody else : Random Forest= pos
life time little richard told little richard produced directed little richard one sided one songs biography even docudrama good writing great energy outstanding leading actor playing richard music little richard rocks tight lipsync every song movie covers early childhood carrys thru formative years music wild success richard throwing away praise lord tied together well obvious comeback 1962 manages stay away idea little richard discovered beatles opened main objection outrageous counter cultural behavior underplayed get feel audience experienced time energy still come across full force seemed tame compared remember time best scenes richard getting jilted lucille writing song strip bikini shorts performing make point decent place change gotten bronze liberace richard use refer interviews story trust saw perform couple months ago still flirts pretty white boys giving one particularly good dancer audience headband nearly 68 still going strong recommend movie concert v appearance find little richard always : Random Forest= pos
script weak enough character arcs make care one bit characters happens script way talky enough gore action even call slow paced story gets point want everyone shut die quickly possible listen talk muted stiff dialogue technical note music mix way high makes hard understand said times could called blessing overall story could better told short film running time 30 minutes obvious face homages sam raimi evil dead would good subtle seem like bald faced rip mon kind 35mm budget best could done still cinematography lighting design shots well done indeed : Random Forest= neg
savage island raw savagery scare hell trust boy estranged savage family run city slicker tourists pa savage wants revenge stop nothing gets real horror film truly wonderful horror moments also negative review clearly comes someone lacks proper knowledge film filmmakers chose lighting camera work order reflect dark murky egdy mood story words obtain certain aesthetic fact film several horror film festival awards : Random Forest= neg
docteur petiot starring michel serrault brutal yet preys weakest amidst populace imagery cinematography superb lend additional macabre feeling complex story perfect psychopath seductive altruistic intelligent caring calculating murderous movie certain forgotten soon viewer kudos mr serrault chilling portrayal : Random Forest= pos
one favourite flicks unlike weak elvira stranded unfamiliar town death good witch elviras aunt morgana inherits ruby ring extremely powerful sought bad warlock uncle befriends four characters inadvertently helps grow throughout movie dog tow show uncle wicked witch west elvira realises strength within ends defeating end gets sent towns folk winning hearts finally gets destination las vegas dorothy home kansas many references made wizard oz throughout movie uncle quote lines relevant parallel characters elvira youe must aunt em must uncle remus place like home place like home bad uncle vinny get pretty little dog sign elvira passes first road trip mentions state kansas aside fact one sequences ripped um mean inspired flashdance pure genius roll around laughing titty twirling end 80 las vegas show got camp bone body movie cult camp classic : Random Forest= pos
oscar nominations zero win yet understandlike done halle berry denzel washington whoopi oprah margaret avery danny glover etc amazing curious get scripts discussions oscars year go shoulda would coulda category judges amazing book true alice walker style writing way seeming like exaggerating characters glad screen adaptation took things cinematography amazing african scenes live much desired african part book supposed set liberia somewhere west africa oh steven spielberg thinks world dumb cannot think africa outside safaris yes complimentary zebra wildlife scene know none west africa get people speak swahili west africa speaks swahili get way story amazing film making world classic yes watch soul needs rejuvenation : Random Forest= pos
kurt thomas stars jonathan cabot ninjas stand chance especially since cabot gymnast taken whole gymkata one helluva bad movie atrocious acting god awful script really incompetent directing make quality human standards however movie terrible becomes really really funny mean dialog know outsleep ha add mock value gymkata obtains besides wisely movie hero gymnast finds things swing heat moment : Random Forest= neg
film pretty good big fan baseball glover joseph gordon levitt brenda fricker christopher lloyd tony danza milton davis jr brought variety talented actors understanding sport plot believable love message william dear guys put together great movie sports films revolve around true stories events often work well film hits 10 perfectness scale even though minor mistakes : Random Forest= pos
warm funny film much vein works almodovar sure 10 year cannot understand readers found sick perverted would willing let 10 year old son play part sure spanish cinema often quite sexual open healthy way leaves viewer without sense voyeurism kink think northern european types attitude would much better result liberal attitude also seen hilarious fartman maurice character lover says people embarrassed farting turn art form : Random Forest= pos
although great film something compelling memorable like never forgotten story ridiculously cumbersome title see opportunity feel like voyeur small town life evolves decades film one brings human face historical drama early twentieth century progress engaging enough young viewer memorable enough older one furthermore easy like characters watch passage time : Random Forest= pos
movie distinct albeit brutish rough humanity borderline depravity zippy like terrorizing woman train semi pitiful vulnerability lurks never far away dewaere sucks breasts like baby blier cuts away scene depardieu may rape dewaere never sure explicitly read manifestly homoerotic aspect relationship either way incident start relative humanization movie could certainly read pro gay although could likely read pro anything want movie many objectionable scenes points sexual politics probably best taken general cartoon foibles sexes making mockery whole notion sensitivity honesty hitting numerous points possible profundity basis fire enough shots bound hit : Random Forest= neg
one remarkable sci fi movies millennium movie incredible establishes new standard f movies hail kill : Random Forest= pos
care peopl vote movi bad want truth good thing movi realli get one : Random Forest= neg
never realli understood controversi hype provok social drama could ever experi yeah right might littl shock mayb often see someon get shot ars weak pointless plot sure think much bais moi anoth one blame everyth go wrong societi film gener convinc 99 peopl function perfectli well societi would blame exact societi vile hopeless act two derang nymph girl two main charact miser life introduc separ flash nadin kill roommat manu shot brother two meet abandon train station late night decid travel around franc togeth leav trail sex blood behind wherev made stop although constantli expos pornographi violenc film bore sit like girl indic time dialogu lame peopl run kill uninterest peopl want make porno movi fine pleas pretend art hous film make leav swear hip camera work see arous pornographi cool soundtrack though : Random Forest= neg
sweet entertain tale young find work retir eccentr tragic actress well act especi juli walter rupert grint play role teenag boy well show talent last longer harri potter seri film laura linney play ruthlessli strict mother without hint redempt room like film entertain film made well british style like keep mum calendar girl : Random Forest= pos
first mention realli enjoy skin man peach hip girl although owe debt tarentino pulp fiction ishii cast task carri stori entir film crackl energi scene asano tadanobu gashuin tatsuya particularli engag action intrigu bizarr character enough sex keep thing interest utterli unpredict stori line certain amount anticip optim began watch parti 7 enthusiasm certainli piqu open credit left wife actual stun dynam excit mix anim live action work brilliant actual movi start actual much start sort shuffl side door stand fumbl pocket look uncomfort entir film take place three room one futurist voyeur paradis borrow bit shark skin man anoth travel agent offic third far use seedi hotel room room cast seven charact meet approxim noth realli stranger talk film one time favorit dinner andr talkiest talk film dinner andr far excit two middl age men discuss life dinner key andr gregori wallac shawn tell interest stori cast parti 7 liter whine entir film ye realli ye realli realli realli ye realli get idea hope wish direct parti 7 unbeliev unengag film flimsiest plot money stolen yakuza like shark skin man accompani almost action interest dialog charact larg uninterest ishii took throwaway convers moment tarentino film built entir film around tarentino convers alway intern logic wit call royal chees dialog duller imagin brief hilari cameo gashuin alway marvel low key perform awesom asano tadanobu would given parti 7 singl star realli chore make way : Random Forest= neg
argentinian music poet film feel music repeat world time countri histori first listen play tri make other hear believ hear nobodi say anyth peopl appear listen other recogn heard think other might hear final everybodi listen music suddenli sound love poetri real nation legaci father child would call film dead nobodi dy spanish translat titl refus follow rule call dublin follow jame joyc titl nice 1900 irish film postcard : Random Forest= pos
saw film chanc small box fantast chill cannot believ still wait 5 vote : Random Forest= pos
small california town diablo plagu mysteri death sheriff robert lopez unearth ancient box legend box hold sixteenth centuri mexican demon name azar fbi agent gil vega sent investig murder join forc sheriff daughter dominiqu mari fight evil bloodthirsti demon legend diablo absolut garbag film lack scare gore act amateurish direct bad anim one aspect film enjoy big fan indi horror flick exampl love torch live feed bone sick neighborhood watch unfortun legend diablo huge misfir definit one avoid : Random Forest= neg
good see vintag film buff correctli categor excel dvd releas excus elabor girli show kitti carlisl gertrud michael lead cast super decor girl includ ann sheridan luci ball beryl wallac gwenllian gill gladi young barbara fritchi wanda perri dorothi white carl brisson also hand lend strong voic cocktail two undoubtedli movi popular song heard le four time howev gertrud michael steal show rendit sweet marijauna strong perform hero reject girlfriend rest cast could done without jack oaki victor mclaglen altogeth good thing oaki role weak run gag cult icon tobi wing fact give idea far rest comedi indulg strain super dumb inspector mclaglen simpli cannot put hand killer even though would believ instanc happen person suspect director mitch leisen actual go great pain point killer even dumbest member cinema audienc give player concern close close : Random Forest= pos
saw film via one actor agent sure conform great deal come film excel mostli kid actor ham embarrass case realli good term surreal thingi mention jingo well think film plain weird real weirdo film weirdo locat storylin weird stuff go whole time good weird oppos bad hard think movi like like car ate pari mayb like repuls actual think like hammer movi 60 certainli interest mind work behind jingo question also titl modern love anyon also jingo mean god forsaken talk australia hmm curiou : Random Forest= neg
civil war mani case divid loyalti obvious mani occur border owen moor go join union armi shortli confeder soldier henri walthal separ regimen wander onto enemi properti desper water find suppli unionist young daughter gladi egan sit yanke soldier track littl gladi innoc help confeder hide later return kill father littl girl kind rememb sweet small stori director w griffith locat footag human lovingli display border state 6 13 10 w griffith henri walthal owen moor gladi egan : Random Forest= pos
Random Forest 73.33333333333333
###Markdown
Load Dataset
###Code
training_set = pd.read_csv("../dataset/cleaned_movie_train.csv")
Y=training_set['sentiment'].values
X=training_set['review'].values
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.20,random_state=42,stratify=Y)
print ("No. of Training Examples: ",x_train.shape)
print ("No. of Testing Examples: ",x_test.shape)
tf=TfidfVectorizer(min_df=10,max_df=0.95,use_idf=True)
tf.fit_transform(x_train)
X_train=tf.transform(x_train) # for train data we could also use fit_transform directly.
X_test=tf.transform(x_test)
# pickle.dump(tf, open('vectorizer2_clean_mix.sav', 'wb'))
# Evaluating model performance based on precision, recall, accuracy and F1 score
def do_evaluation (predicted, actual,verbose=True):
precision = precision_score(actual,predicted)
recall = recall_score(actual,predicted)
accuracy = accuracy_score(actual,predicted)
f1score = f1_score(predicted,actual)
if verbose:
print('"Evaluation"','| Precision ==',round(precision*100,2),'| Recall ==',round(recall*100,2),'| Accuracy ==',round(accuracy*100,2),'| F1 score ==',round(f1score*100,2))
###Output
_____no_output_____
###Markdown
Training phase..
###Code
# K-Nearest Neighbors classifier
knn = KNeighborsClassifier(n_neighbors=120,leaf_size=80, p=2)
knn.fit(X_train,y_train)
# Testing phase
knn_pred=knn.predict(X_test)
print("Accuracy: ",round(accuracy_score(y_test,knn_pred),3))
print ('{:.1%} of prediction are positive'.format(float(sum(knn_pred))/len(y_test)))
print ('{:.1%} are actually positive'.format(float(sum(y_test))/len(y_test)))
do_evaluation (knn_pred,y_test, verbose=True)
pickle.dump(knn, open('knn2_clean_mix_0.788_8,34,2.sav', 'wb'))
###Output
_____no_output_____
###Markdown
Evaluate classifier performance (ROC and AUC curve)
###Code
def display_curve(nb_pred,name):
#Calculating False Positive Rate ,True Positive Rate and threshold
fpr_nb, tpr_nb, _ = roc_curve(y_test, nb_pred)
#AUC is the percentage of the ROC plot that is underneath the curve:
roc_auc_nb = auc(fpr_nb, tpr_nb)
    plt.title(f'Receiver Operating Characteristic for {name} Classifier')
plt.plot(fpr_nb, tpr_nb, 'b', label = 'AUC = %0.2f' % roc_auc_nb)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# KNN classifier ROC curve
display_curve(knn_pred,'KNN')
###Output
_____no_output_____
###Markdown
Testing
###Code
# Load model
knn = pickle.load(open('knn2_clean_mix_0.788_8,34,2.sav','rb'))
tf = pickle.load(open('vectorizer2_clean_mix.sav','rb'))
# Testing
test_array = [
'entertaining film follows rehearsal process ny production best taking seriously recognizable actors john glover gloria reubens david lansbury nice jobs main roles highlight hilarious scene murder banquo john elsen rehearsed probably entertaining film involved theatre anyone enjoys shakespeare enjoy film',
'could otherwise run mill mediocre film infidelity sixties subtle free love period top happily ever ending time ever feel sympathy diane lane anna paquin troublesome middle class care free life feel emasculated liev shrieber story line plods along slowly predictable pathetic conclusion thing interesting watchable film stunning diane lane topless hint occurs 30 minutes film fast forward part skip rest',
'cosimo luis guzmán told prison perfectsoon five guys organizing crime five guys little brain capacity brothers anthony joe russo directors welcome collinwood 2002 crime comedy often funny cannot help laughing everything goes wrong guys great actors playing characters william macy plays riley isaiah washington leon sam rockwell pero michael jeter toto andy davoli basil gabrielle union plays love interest michelle jennifer esposito plays pero love interest carmela george clooney also producer plays jerzy tattooed guy wheelchair highly entertaining flick certainly recommend',
'summer full blockbusters comebacks woe find film could sit enjoy case want read page spoilers sum mature ella enchanted questionably violent parts plenty death handful scenes little blood small children try overly corny overstep bounds think bit serious bit magical princess bride close perhaps prodigious movie goer others maybe twice month feel active also huge sci fi fantasy fan get bored remade repetitive story lines films flash filling faster count 10 film diamond rough end august tired enough fractured expectations big hits averted seeing bourne ultimatum favor stardust hopes thoroughly muddied transformers fiction addiction previews stardust seemed appealing certainly wary many others utterly surprised gone thinking see another generic fantasy movie clichéd breakfast fooled definitely fairy tale indeed witches magic utterly requires suspension disbelief refreshing thing found based anything seen read past 15 years actually really good movie unlike 90 movies seem persistently appear like thorns side perhaps sign hollywood running ideas could read book year two years movie would another epic fantasy tale likes lotr rest says nyt doubt stardust bolted seat jam packed action every turn sweating bullets plot hook plot hook threatening tear dramatic tension apart echo throughout theater loud boom even use enormous blasts sound grab attention happening screen transformers looking trying show latest cgi techniques offend intelligence dimwitted dialogs story lines simple enough could figured 3rd grade boy hate watched watched enjoyed refreshingly creative storyline unfold eyes sure may known going happen throughout film makes forget even made heart twinge parts important aspect noticed left theater feeling better would gone truly gem much slush summer many remakes films fell short expectations like cold sweet cup tea cap hard work would done sitting others trying come money worth probably everyone favor enjoy fantasy films stand test time alone princess bride black cauldron dark crystal etc really see movie little diamond finding way dvd collection moment hits stores trust simply wonderful',
'best movie ever seen maybe live area sweden movie tells truth believe criticizes honors lifestyle dalarna producer wants people watch movie opened minded care closest friends relatives live another small village anywhere sweden another country probably also recognize much movie thank maria blom',
'story deals jet li fight oldfriends one problem friends superfighters film filled blood super action best stunts forever lau ching wan great co actor course movie typical hk fun love germany black mask uncut',
'emotional impact movie defies words elegant subtle beautiful tragic rolled two hours smith matures acting ability full range knew saw pursuit happiness thought must fluke blockbuster top actor smith performances movies portray whole dimension smith refinement talent selectivity scripts sure view differently seven pounds one movies order fully enjoy essence suspend belief watch plot watch fragile condition human heart literally metaphorically story human guilt atonement love sacrifice',
'oh man want give internal crow robot real workout movie pop ol vcr potential cut lines film endless minor spoilers ahead hey really care film quality spoiled traci girl problem psychology developed names child develops sexual crush opposite sex parent girl seems one sex one think term might mother dana played rosanna arquette whose cute overbite neo flowerchild sexuality luscious figure makes forgive number bad movies unsympathetic characters dana clueless daughter conduct seems competing gold medal olympic indulgent mother competition possible dana misses traci murderous streak truth told traci seems criminal skills hamster script dictates manages pull kind body count particularly hilarious note movie character carmen mexican maid described dana around long like one family although dresses director thought would say fell tomato truck guadalajara carmen wise traci scheming might also wear sign saying hey next victim sure enough traci confronts carmen carmen making way back mass bops one slightly angled lug wrenches car manufacturers put next spare bad joke rather suspect real life things useless murder weapon changing tire another sequence arquette wears flimsy dress vineyard cloudy skies talking owner cut another flimsy dress sunny skies talking owner brother cut wearing first dress first location cloudy skies supposed later get picture talking really bad directing skin expect much although traci nice couple bikinis looking trash wallow 8 anybody else',
'life time little richard told little richard produced directed little richard one sided one songs biography even docudrama good writing great energy outstanding leading actor playing richard music little richard rocks tight lipsync every song movie covers early childhood carrys thru formative years music wild success richard throwing away praise lord tied together well obvious comeback 1962 manages stay away idea little richard discovered beatles opened main objection outrageous counter cultural behavior underplayed get feel audience experienced time energy still come across full force seemed tame compared remember time best scenes richard getting jilted lucille writing song strip bikini shorts performing make point decent place change gotten bronze liberace richard use refer interviews story trust saw perform couple months ago still flirts pretty white boys giving one particularly good dancer audience headband nearly 68 still going strong recommend movie concert v appearance find little richard always',
'script weak enough character arcs make care one bit characters happens script way talky enough gore action even call slow paced story gets point want everyone shut die quickly possible listen talk muted stiff dialogue technical note music mix way high makes hard understand said times could called blessing overall story could better told short film running time 30 minutes obvious face homages sam raimi evil dead would good subtle seem like bald faced rip mon kind 35mm budget best could done still cinematography lighting design shots well done indeed',
'savage island raw savagery scare hell trust boy estranged savage family run city slicker tourists pa savage wants revenge stop nothing gets real horror film truly wonderful horror moments also negative review clearly comes someone lacks proper knowledge film filmmakers chose lighting camera work order reflect dark murky egdy mood story words obtain certain aesthetic fact film several horror film festival awards',
'docteur petiot starring michel serrault brutal yet preys weakest amidst populace imagery cinematography superb lend additional macabre feeling complex story perfect psychopath seductive altruistic intelligent caring calculating murderous movie certain forgotten soon viewer kudos mr serrault chilling portrayal',
'one favourite flicks unlike weak elvira stranded unfamiliar town death good witch elviras aunt morgana inherits ruby ring extremely powerful sought bad warlock uncle befriends four characters inadvertently helps grow throughout movie dog tow show uncle wicked witch west elvira realises strength within ends defeating end gets sent towns folk winning hearts finally gets destination las vegas dorothy home kansas many references made wizard oz throughout movie uncle quote lines relevant parallel characters elvira youe must aunt em must uncle remus place like home place like home bad uncle vinny get pretty little dog sign elvira passes first road trip mentions state kansas aside fact one sequences ripped um mean inspired flashdance pure genius roll around laughing titty twirling end 80 las vegas show got camp bone body movie cult camp classic',
'oscar nominations zero win yet understandlike done halle berry denzel washington whoopi oprah margaret avery danny glover etc amazing curious get scripts discussions oscars year go shoulda would coulda category judges amazing book true alice walker style writing way seeming like exaggerating characters glad screen adaptation took things cinematography amazing african scenes live much desired african part book supposed set liberia somewhere west africa oh steven spielberg thinks world dumb cannot think africa outside safaris yes complimentary zebra wildlife scene know none west africa get people speak swahili west africa speaks swahili get way story amazing film making world classic yes watch soul needs rejuvenation',
'kurt thomas stars jonathan cabot ninjas stand chance especially since cabot gymnast taken whole gymkata one helluva bad movie atrocious acting god awful script really incompetent directing make quality human standards however movie terrible becomes really really funny mean dialog know outsleep ha add mock value gymkata obtains besides wisely movie hero gymnast finds things swing heat moment',
'film pretty good big fan baseball glover joseph gordon levitt brenda fricker christopher lloyd tony danza milton davis jr brought variety talented actors understanding sport plot believable love message william dear guys put together great movie sports films revolve around true stories events often work well film hits 10 perfectness scale even though minor mistakes',
'warm funny film much vein works almodovar sure 10 year cannot understand readers found sick perverted would willing let 10 year old son play part sure spanish cinema often quite sexual open healthy way leaves viewer without sense voyeurism kink think northern european types attitude would much better result liberal attitude also seen hilarious fartman maurice character lover says people embarrassed farting turn art form',
'although great film something compelling memorable like never forgotten story ridiculously cumbersome title see opportunity feel like voyeur small town life evolves decades film one brings human face historical drama early twentieth century progress engaging enough young viewer memorable enough older one furthermore easy like characters watch passage time',
'movie distinct albeit brutish rough humanity borderline depravity zippy like terrorizing woman train semi pitiful vulnerability lurks never far away dewaere sucks breasts like baby blier cuts away scene depardieu may rape dewaere never sure explicitly read manifestly homoerotic aspect relationship either way incident start relative humanization movie could certainly read pro gay although could likely read pro anything want movie many objectionable scenes points sexual politics probably best taken general cartoon foibles sexes making mockery whole notion sensitivity honesty hitting numerous points possible profundity basis fire enough shots bound hit',
'one remarkable sci fi movies millennium movie incredible establishes new standard f movies hail kill',
'care peopl vote movi bad want truth good thing movi realli get one',
'never realli understood controversi hype provok social drama could ever experi yeah right might littl shock mayb often see someon get shot ars weak pointless plot sure think much bais moi anoth one blame everyth go wrong societi film gener convinc 99 peopl function perfectli well societi would blame exact societi vile hopeless act two derang nymph girl two main charact miser life introduc separ flash nadin kill roommat manu shot brother two meet abandon train station late night decid travel around franc togeth leav trail sex blood behind wherev made stop although constantli expos pornographi violenc film bore sit like girl indic time dialogu lame peopl run kill uninterest peopl want make porno movi fine pleas pretend art hous film make leav swear hip camera work see arous pornographi cool soundtrack though',
'sweet entertain tale young find work retir eccentr tragic actress well act especi juli walter rupert grint play role teenag boy well show talent last longer harri potter seri film laura linney play ruthlessli strict mother without hint redempt room like film entertain film made well british style like keep mum calendar girl',
'first mention realli enjoy skin man peach hip girl although owe debt tarentino pulp fiction ishii cast task carri stori entir film crackl energi scene asano tadanobu gashuin tatsuya particularli engag action intrigu bizarr character enough sex keep thing interest utterli unpredict stori line certain amount anticip optim began watch parti 7 enthusiasm certainli piqu open credit left wife actual stun dynam excit mix anim live action work brilliant actual movi start actual much start sort shuffl side door stand fumbl pocket look uncomfort entir film take place three room one futurist voyeur paradis borrow bit shark skin man anoth travel agent offic third far use seedi hotel room room cast seven charact meet approxim noth realli stranger talk film one time favorit dinner andr talkiest talk film dinner andr far excit two middl age men discuss life dinner key andr gregori wallac shawn tell interest stori cast parti 7 liter whine entir film ye realli ye realli realli realli ye realli get idea hope wish direct parti 7 unbeliev unengag film flimsiest plot money stolen yakuza like shark skin man accompani almost action interest dialog charact larg uninterest ishii took throwaway convers moment tarentino film built entir film around tarentino convers alway intern logic wit call royal chees dialog duller imagin brief hilari cameo gashuin alway marvel low key perform awesom asano tadanobu would given parti 7 singl star realli chore make way',
'argentinian music poet film feel music repeat world time countri histori first listen play tri make other hear believ hear nobodi say anyth peopl appear listen other recogn heard think other might hear final everybodi listen music suddenli sound love poetri real nation legaci father child would call film dead nobodi dy spanish translat titl refus follow rule call dublin follow jame joyc titl nice 1900 irish film postcard',
'saw film chanc small box fantast chill cannot believ still wait 5 vote',
'small california town diablo plagu mysteri death sheriff robert lopez unearth ancient box legend box hold sixteenth centuri mexican demon name azar fbi agent gil vega sent investig murder join forc sheriff daughter dominiqu mari fight evil bloodthirsti demon legend diablo absolut garbag film lack scare gore act amateurish direct bad anim one aspect film enjoy big fan indi horror flick exampl love torch live feed bone sick neighborhood watch unfortun legend diablo huge misfir definit one avoid',
'good see vintag film buff correctli categor excel dvd releas excus elabor girli show kitti carlisl gertrud michael lead cast super decor girl includ ann sheridan luci ball beryl wallac gwenllian gill gladi young barbara fritchi wanda perri dorothi white carl brisson also hand lend strong voic cocktail two undoubtedli movi popular song heard le four time howev gertrud michael steal show rendit sweet marijauna strong perform hero reject girlfriend rest cast could done without jack oaki victor mclaglen altogeth good thing oaki role weak run gag cult icon tobi wing fact give idea far rest comedi indulg strain super dumb inspector mclaglen simpli cannot put hand killer even though would believ instanc happen person suspect director mitch leisen actual go great pain point killer even dumbest member cinema audienc give player concern close close',
'saw film via one actor agent sure conform great deal come film excel mostli kid actor ham embarrass case realli good term surreal thingi mention jingo well think film plain weird real weirdo film weirdo locat storylin weird stuff go whole time good weird oppos bad hard think movi like like car ate pari mayb like repuls actual think like hammer movi 60 certainli interest mind work behind jingo question also titl modern love anyon also jingo mean god forsaken talk australia hmm curiou',
'civil war mani case divid loyalti obvious mani occur border owen moor go join union armi shortli confeder soldier henri walthal separ regimen wander onto enemi properti desper water find suppli unionist young daughter gladi egan sit yanke soldier track littl gladi innoc help confeder hide later return kill father littl girl kind rememb sweet small stori director w griffith locat footag human lovingli display border state 6 13 10 w griffith henri walthal owen moor gladi egan'
]
test_result = [1,0,1,1,1,1,1,0,1,0,1,1,1,1,0,1,1,0,1,1,1,0,1,0,1,1,0,1,1,0]
test_func = lambda x: 'pos' if x==1 else 'neg'
knn_c = knn.predict(tf.transform(test_array).toarray())
count_correct = 0
for sentence, l, r in zip(test_array, knn_c, test_result):
    # print(sentence, ': KNN =', test_func(l))
    if l == r:
        count_correct += 1
print('KNN', count_correct/len(test_result)*100)
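# Equivalent check (sketch, assuming scikit-learn is installed and knn_c holds the
# same 0/1 labels as test_result): accuracy_score reproduces the manual count above.
# from sklearn.metrics import accuracy_score
# print('KNN', accuracy_score(test_result, knn_c) * 100)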
###Output
KNN 70.0
|
Segmenting_and_Clustering_Neighborhoods_in_Toronto.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
**1. Scrape and clean data from Wikipedia**
###Code
# Create a link to the data I need to scrape.
link = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
table = pd.read_html(link)
table
# Printing the table, we can see that the information we need is at position zero of this list
# Create DataFrame.
toronto_neighs_df = pd.DataFrame(table[0])
toronto_neighs_df.shape
# Let's remove rows with Borough == "Not assigned"
# I will use np.where().
toronto_neighs_df.head(5)
toronto_neighs_df = toronto_neighs_df.drop(np.where(toronto_neighs_df.Borough == "Not assigned")[0])
# toronto_neighs_df.head(5)
toronto_neighs_df.shape
###Output
_____no_output_____
###Markdown
**2. Getting latitude and longitude**
###Code
# Get our latitude and longitude from csv file
!wget "http://cocl.us/Geospatial_data"
geo_data = pd.read_csv("Geospatial_data")
geo_data.head(5)
# Merge two dataframes into one
toronto_neighs_df = toronto_neighs_df.merge(geo_data)
toronto_neighs_df.head(10)
###Output
_____no_output_____
###Markdown
**3. Explore and clustering neighborhoods**
###Code
# First I'll import libraries for creating maps
# !conda install -c conda-forge geopy --yes # uncomment this line if you didn't install the packages on your computer
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# !conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't got this on your computer
import folium # map rendering library
###Output
_____no_output_____
###Markdown
Use the geopy library to get the coordinates of Toronto
###Code
address = "Toronto, ON"
geolocator = Nominatim(user_agent="tt_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print("The geographical coordinates of Toronto city are {}, {}".format(latitude, longitude))
###Output
The geographical coordinates of Toronto city are 43.6534817, -79.3839347
###Markdown
Create a map of Toronto with neighborhoods
###Code
# Create map using latitude and longitude
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=15)
# Add markers to map
for lnt, ltd, borough, neighborhood in zip(toronto_neighs_df["Latitude"], toronto_neighs_df["Longitude"], toronto_neighs_df["Borough"], toronto_neighs_df["Neighbourhood"]):
label = "{}, {}".format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lnt, ltd],
radius=5,
popup=label,
color="blue",
fill=True,
fill_color="#3186cc",
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
###Markdown
**Let's explore the neighborhoods with the Foursquare API**
###Code
# DEFINE Foursqare credentials and version
CLIENT_ID = 'UPHYGUPXMZYV5KBBPJ5S3M42YGBM2TZ0XLKGZFP5Q2ADMUQD' # your Foursquare ID
CLIENT_SECRET = 'SBGJPHLVCEL1XTYMZCL4FNHUQKVBKBMM1JSCYCH0DVZQ4FBZ' # your Foursquare Secret
VERSION = '20210310' # Foursquare API version
LIMIT = 100 # A default Foursquare API limit value
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
###Output
Your credentails:
CLIENT_ID: UPHYGUPXMZYV5KBBPJ5S3M42YGBM2TZ0XLKGZFP5Q2ADMUQD
CLIENT_SECRET:SBGJPHLVCEL1XTYMZCL4FNHUQKVBKBMM1JSCYCH0DVZQ4FBZ
###Markdown
Let's explore the first neighborhood
###Code
toronto_neighs_df.loc[0, "Neighbourhood"]
###Output
_____no_output_____
###Markdown
Get the neighborhood's coordinates
###Code
neighborhood_latitude = toronto_neighs_df.loc[0, 'Latitude'] # neighborhood latitude value
neighborhood_longitude = toronto_neighs_df.loc[0, 'Longitude'] # neighborhood longitude value
neighborhood_name =toronto_neighs_df.loc[0, 'Neighbourhood'] # neighborhood name
print('Latitude and longitude values of {} are {}, {}.'.format(neighborhood_name,
neighborhood_latitude,
neighborhood_longitude))
###Output
Latitude and longitude values of Parkwoods are 43.7532586, -79.3296565.
###Markdown
**Now let's get the top venues in Parkwoods**
###Code
# Create a get request url
LIMIT = 50
radius = 500
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
neighborhood_latitude,
neighborhood_longitude,
radius,
LIMIT)
url
import requests
results = requests.get(url).json()
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
# Here I'm trying to check for 'groups' in results because I had trouble with a KeyError in the NY lab. Just a precaution.
# if "groups" in results["response"]:
# print("True")
# else:
# print("False")
###Output
True
###Markdown
Now we can clean the JSON and structure it into a pandas DataFrame
###Code
from pandas.io.json import json_normalize # tranform JSON file into a pandas dataframe
venues = results['response']['groups'][0]['items']
nearby_venues = json_normalize(venues) # flatten JSON
# filter columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues = nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues.head()
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:3: FutureWarning: pandas.io.json.json_normalize is deprecated, use pandas.json_normalize instead
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
**NOW I WILL EXPLORE ALL NEIGHBORHOODS**
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
neighborhood_latitudes = toronto_neighs_df.loc[:, 'Latitude'] # neighborhoods latitude value
neighborhood_longitudes = toronto_neighs_df.loc[:, 'Longitude'] # neighborhoods longitude value
neighborhood_names = toronto_neighs_df.loc[:, 'Neighbourhood'] # neighborhood names
toronto_venues = getNearbyVenues(neighborhood_names, neighborhood_latitudes, neighborhood_longitudes)
###Output
Parkwoods
Victoria Village
Regent Park, Harbourfront
Lawrence Manor, Lawrence Heights
Queen's Park, Ontario Provincial Government
Islington Avenue, Humber Valley Village
Malvern, Rouge
Don Mills
Parkview Hill, Woodbine Gardens
Garden District, Ryerson
Glencairn
West Deane Park, Princess Gardens, Martin Grove, Islington, Cloverdale
Rouge Hill, Port Union, Highland Creek
Don Mills
Woodbine Heights
St. James Town
Humewood-Cedarvale
Eringate, Bloordale Gardens, Old Burnhamthorpe, Markland Wood
Guildwood, Morningside, West Hill
The Beaches
Berczy Park
Caledonia-Fairbanks
Woburn
Leaside
Central Bay Street
Christie
Cedarbrae
Hillcrest Village
Bathurst Manor, Wilson Heights, Downsview North
Thorncliffe Park
Richmond, Adelaide, King
Dufferin, Dovercourt Village
Scarborough Village
Fairview, Henry Farm, Oriole
Northwood Park, York University
East Toronto, Broadview North (Old East York)
Harbourfront East, Union Station, Toronto Islands
Little Portugal, Trinity
Kennedy Park, Ionview, East Birchmount Park
Bayview Village
Downsview
The Danforth West, Riverdale
Toronto Dominion Centre, Design Exchange
Brockton, Parkdale Village, Exhibition Place
Golden Mile, Clairlea, Oakridge
York Mills, Silver Hills
Downsview
India Bazaar, The Beaches West
Commerce Court, Victoria Hotel
North Park, Maple Leaf Park, Upwood Park
Humber Summit
Cliffside, Cliffcrest, Scarborough Village West
Willowdale, Newtonbrook
Downsview
Studio District
Bedford Park, Lawrence Manor East
Del Ray, Mount Dennis, Keelsdale and Silverthorn
Humberlea, Emery
Birch Cliff, Cliffside West
Willowdale, Willowdale East
Downsview
Lawrence Park
Roselawn
Runnymede, The Junction, Weston-Pellam Park, Carlton Village
Weston
Dorset Park, Wexford Heights, Scarborough Town Centre
York Mills West
Davisville North
Forest Hill North & West, Forest Hill Road Park
High Park, The Junction South
Westmount
Wexford, Maryvale
Willowdale, Willowdale West
North Toronto West, Lawrence Park
The Annex, North Midtown, Yorkville
Parkdale, Roncesvalles
Canada Post Gateway Processing Centre
Kingsview Village, St. Phillips, Martin Grove Gardens, Richview Gardens
Agincourt
Davisville
University of Toronto, Harbord
Runnymede, Swansea
Clarks Corners, Tam O'Shanter, Sullivan
Moore Park, Summerhill East
Kensington Market, Chinatown, Grange Park
Milliken, Agincourt North, Steeles East, L'Amoreaux East
Summerhill West, Rathnelly, South Hill, Forest Hill SE, Deer Park
CN Tower, King and Spadina, Railway Lands, Harbourfront West, Bathurst Quay, South Niagara, Island airport
New Toronto, Mimico South, Humber Bay Shores
South Steeles, Silverstone, Humbergate, Jamestown, Mount Olive, Beaumond Heights, Thistletown, Albion Gardens
Steeles West, L'Amoreaux West
Rosedale
Stn A PO Boxes
Alderwood, Long Branch
Northwest, West Humber - Clairville
Upper Rouge
St. James Town, Cabbagetown
First Canadian Place, Underground city
The Kingsway, Montgomery Road, Old Mill North
Church and Wellesley
Business reply mail Processing Centre, South Central Letter Processing Plant Toronto
Old Mill South, King's Mill Park, Sunnylea, Humber Bay, Mimico NE, The Queensway East, Royal York South East, Kingsway Park South East
Mimico NW, The Queensway West, South of Bloor, Kingsway Park South West, Royal York South West
###Markdown
Let's check the size of the resulting DataFrame
###Code
print(toronto_venues.shape)
toronto_venues.head(5)
toronto_venues.groupby('Neighborhood').count()
###Output
_____no_output_____
###Markdown
Let's find out how many unique categories were returned from all venues
###Code
print('There are {} uniques categories.'.format(len(toronto_venues['Venue Category'].unique())))
###Output
There are 256 uniques categories.
###Markdown
**ANALYZE EACH NEIGHBORHOOD**
###Code
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
###Output
_____no_output_____
###Markdown
Now let's group the rows by neighborhood and take the mean of each one-hot category
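Since each row of `toronto_onehot` is a single venue with one-hot category columns, the per-neighborhood mean is simply the fraction of that neighborhood's venues in each category. A tiny illustrative sketch with made-up rows (the names are placeholders, not real data):
```
import pandas as pd

# three venues in neighborhood A (two coffee shops, one park), one venue in B
demo = pd.DataFrame({'Neighborhood': ['A', 'A', 'A', 'B'],
                     'Coffee Shop':  [1, 0, 1, 0],
                     'Park':         [0, 1, 0, 1]})

# For A: Coffee Shop appears in 2 of 3 venues (0.67), Park in 1 of 3 (0.33)
print(demo.groupby('Neighborhood').mean())
```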
###Code
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped
###Output
_____no_output_____
###Markdown
**Let's print each neighborhood along with its top 5 venues**
###Code
num_top_venues = 5
for hood in toronto_grouped['Neighborhood']:
print("----"+hood+"----")
temp = toronto_grouped[toronto_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
###Output
----Agincourt----
venue freq
0 Lounge 0.25
1 Latin American Restaurant 0.25
2 Breakfast Spot 0.25
3 Skating Rink 0.25
4 Monument / Landmark 0.00
----Alderwood, Long Branch----
venue freq
0 Pizza Place 0.25
1 Gym 0.12
2 Coffee Shop 0.12
3 Dance Studio 0.12
4 Pub 0.12
----Bathurst Manor, Wilson Heights, Downsview North----
venue freq
0 Bank 0.10
1 Coffee Shop 0.10
2 Pharmacy 0.05
3 Gas Station 0.05
4 Shopping Mall 0.05
----Bayview Village----
venue freq
0 Japanese Restaurant 0.25
1 Bank 0.25
2 Chinese Restaurant 0.25
3 Café 0.25
4 Yoga Studio 0.00
----Bedford Park, Lawrence Manor East----
venue freq
0 Pizza Place 0.08
1 Italian Restaurant 0.08
2 Coffee Shop 0.08
3 Sandwich Place 0.08
4 Pharmacy 0.04
----Berczy Park----
venue freq
0 Coffee Shop 0.08
1 Cocktail Bar 0.06
2 Beer Bar 0.04
3 Seafood Restaurant 0.04
4 Cheese Shop 0.04
----Birch Cliff, Cliffside West----
venue freq
0 College Stadium 0.25
1 Café 0.25
2 Skating Rink 0.25
3 General Entertainment 0.25
4 Miscellaneous Shop 0.00
----Brockton, Parkdale Village, Exhibition Place----
venue freq
0 Café 0.14
1 Coffee Shop 0.09
2 Breakfast Spot 0.09
3 Restaurant 0.05
4 Bar 0.05
----Business reply mail Processing Centre, South Central Letter Processing Plant Toronto----
venue freq
0 Yoga Studio 0.06
1 Auto Workshop 0.06
2 Comic Shop 0.06
3 Park 0.06
4 Restaurant 0.06
----CN Tower, King and Spadina, Railway Lands, Harbourfront West, Bathurst Quay, South Niagara, Island airport----
venue freq
0 Airport Service 0.18
1 Airport Lounge 0.12
2 Airport Terminal 0.12
3 Harbor / Marina 0.06
4 Bar 0.06
----Caledonia-Fairbanks----
venue freq
0 Park 0.50
1 Women's Store 0.25
2 Pool 0.25
3 Mexican Restaurant 0.00
4 Molecular Gastronomy Restaurant 0.00
----Canada Post Gateway Processing Centre----
venue freq
0 Coffee Shop 0.21
1 Hotel 0.14
2 Gym 0.07
3 Middle Eastern Restaurant 0.07
4 Fried Chicken Joint 0.07
----Cedarbrae----
venue freq
0 Hakka Restaurant 0.11
1 Athletics & Sports 0.11
2 Fried Chicken Joint 0.11
3 Caribbean Restaurant 0.11
4 Bank 0.11
----Central Bay Street----
venue freq
0 Coffee Shop 0.18
1 Sandwich Place 0.06
2 Bubble Tea Shop 0.04
3 Burger Joint 0.04
4 Italian Restaurant 0.04
----Christie----
venue freq
0 Grocery Store 0.25
1 Café 0.19
2 Park 0.12
3 Baby Store 0.06
4 Candy Store 0.06
----Church and Wellesley----
venue freq
0 Sushi Restaurant 0.08
1 Coffee Shop 0.06
2 Yoga Studio 0.04
3 Japanese Restaurant 0.04
4 Restaurant 0.04
----Clarks Corners, Tam O'Shanter, Sullivan----
venue freq
0 Fast Food Restaurant 0.15
1 Pizza Place 0.15
2 Noodle House 0.08
3 Fried Chicken Joint 0.08
4 Bank 0.08
----Cliffside, Cliffcrest, Scarborough Village West----
venue freq
0 Motel 0.5
1 American Restaurant 0.5
2 Yoga Studio 0.0
3 Martial Arts School 0.0
4 Massage Studio 0.0
----Commerce Court, Victoria Hotel----
venue freq
0 Restaurant 0.10
1 Coffee Shop 0.08
2 Café 0.08
3 Hotel 0.08
4 Gym 0.06
----Davisville----
venue freq
0 Pizza Place 0.08
1 Dessert Shop 0.08
2 Sandwich Place 0.08
3 Gym 0.06
4 Coffee Shop 0.06
----Davisville North----
venue freq
0 Hotel 0.2
1 Gym 0.1
2 Breakfast Spot 0.1
3 Food & Drink Shop 0.1
4 Department Store 0.1
----Del Ray, Mount Dennis, Keelsdale and Silverthorn----
venue freq
0 Fast Food Restaurant 0.2
1 Sandwich Place 0.2
2 Caribbean Restaurant 0.2
3 Museum 0.2
4 Discount Store 0.2
----Don Mills----
venue freq
0 Gym 0.12
1 Beer Store 0.08
2 Coffee Shop 0.08
3 Restaurant 0.08
4 Sandwich Place 0.04
----Dorset Park, Wexford Heights, Scarborough Town Centre----
venue freq
0 Indian Restaurant 0.29
1 Vietnamese Restaurant 0.14
2 Thrift / Vintage Store 0.14
3 Chinese Restaurant 0.14
4 Furniture / Home Store 0.14
----Downsview----
venue freq
0 Grocery Store 0.19
1 Park 0.12
2 Liquor Store 0.06
3 Home Service 0.06
4 Bank 0.06
----Dufferin, Dovercourt Village----
venue freq
0 Pharmacy 0.12
1 Bakery 0.12
2 Café 0.06
3 Bar 0.06
4 Pool 0.06
----East Toronto, Broadview North (Old East York)----
venue freq
0 Park 0.4
1 Pizza Place 0.2
2 Coffee Shop 0.2
3 Convenience Store 0.2
4 Mexican Restaurant 0.0
----Eringate, Bloordale Gardens, Old Burnhamthorpe, Markland Wood----
venue freq
0 Pizza Place 0.11
1 Liquor Store 0.11
2 Shopping Plaza 0.11
3 Beer Store 0.11
4 Coffee Shop 0.11
----Fairview, Henry Farm, Oriole----
venue freq
0 Clothing Store 0.20
1 Coffee Shop 0.10
2 Fast Food Restaurant 0.06
3 Juice Bar 0.04
4 Restaurant 0.04
----First Canadian Place, Underground city----
venue freq
0 Café 0.12
1 Coffee Shop 0.10
2 Restaurant 0.08
3 Hotel 0.06
4 Gym 0.04
----Forest Hill North & West, Forest Hill Road Park----
venue freq
0 Jewelry Store 0.25
1 Mexican Restaurant 0.25
2 Sushi Restaurant 0.25
3 Trail 0.25
4 Modern European Restaurant 0.00
----Garden District, Ryerson----
venue freq
0 Middle Eastern Restaurant 0.06
1 Café 0.06
2 Coffee Shop 0.06
3 Theater 0.04
4 Clothing Store 0.04
----Glencairn----
venue freq
0 Pizza Place 0.29
1 Park 0.14
2 Japanese Restaurant 0.14
3 Asian Restaurant 0.14
4 Pub 0.14
----Golden Mile, Clairlea, Oakridge----
venue freq
0 Bakery 0.22
1 Bus Line 0.11
2 Metro Station 0.11
3 Soccer Field 0.11
4 Ice Cream Shop 0.11
----Guildwood, Morningside, West Hill----
venue freq
0 Rental Car Location 0.14
1 Medical Center 0.14
2 Breakfast Spot 0.14
3 Electronics Store 0.14
4 Bank 0.14
----Harbourfront East, Union Station, Toronto Islands----
venue freq
0 Aquarium 0.06
1 Coffee Shop 0.06
2 Café 0.04
3 Hotel 0.04
4 Plaza 0.04
----High Park, The Junction South----
venue freq
0 Park 0.08
1 Mexican Restaurant 0.08
2 Café 0.08
3 Thai Restaurant 0.08
4 Speakeasy 0.04
----Hillcrest Village----
venue freq
0 Fast Food Restaurant 0.2
1 Pool 0.2
2 Golf Course 0.2
3 Mediterranean Restaurant 0.2
4 Dog Run 0.2
----Humber Summit----
venue freq
0 Furniture / Home Store 0.5
1 Intersection 0.5
2 Motel 0.0
3 Martial Arts School 0.0
4 Massage Studio 0.0
----Humberlea, Emery----
venue freq
0 Baseball Field 1.0
1 Yoga Studio 0.0
2 Lounge 0.0
3 Martial Arts School 0.0
4 Massage Studio 0.0
----Humewood-Cedarvale----
venue freq
0 Trail 0.25
1 Field 0.25
2 Park 0.25
3 Hockey Arena 0.25
4 Yoga Studio 0.00
----India Bazaar, The Beaches West----
venue freq
0 Park 0.10
1 Sandwich Place 0.10
2 Fast Food Restaurant 0.10
3 Gym 0.05
4 Italian Restaurant 0.05
----Kennedy Park, Ionview, East Birchmount Park----
venue freq
0 Discount Store 0.29
1 Department Store 0.14
2 Convenience Store 0.14
3 Bus Station 0.14
4 Hobby Shop 0.14
----Kensington Market, Chinatown, Grange Park----
venue freq
0 Café 0.10
1 Mexican Restaurant 0.06
2 Coffee Shop 0.06
3 Vegetarian / Vegan Restaurant 0.06
4 Gaming Cafe 0.04
----Kingsview Village, St. Phillips, Martin Grove Gardens, Richview Gardens----
venue freq
0 Park 0.25
1 Bus Line 0.25
2 Mobile Phone Shop 0.25
3 Sandwich Place 0.25
4 Yoga Studio 0.00
----Lawrence Manor, Lawrence Heights----
venue freq
0 Clothing Store 0.50
1 Carpet Store 0.08
2 Vietnamese Restaurant 0.08
3 Boutique 0.08
4 Coffee Shop 0.08
----Lawrence Park----
venue freq
0 Business Service 0.25
1 Park 0.25
2 Swim School 0.25
3 Bus Line 0.25
4 Yoga Studio 0.00
----Leaside----
venue freq
0 Coffee Shop 0.12
1 Sporting Goods Shop 0.09
2 Bank 0.06
3 Burger Joint 0.06
4 Furniture / Home Store 0.06
----Little Portugal, Trinity----
venue freq
0 Bar 0.14
1 Asian Restaurant 0.05
2 Restaurant 0.05
3 Vegetarian / Vegan Restaurant 0.05
4 Men's Store 0.05
----Malvern, Rouge----
venue freq
0 Fast Food Restaurant 0.5
1 Print Shop 0.5
2 Monument / Landmark 0.0
3 Market 0.0
4 Martial Arts School 0.0
----Milliken, Agincourt North, Steeles East, L'Amoreaux East----
venue freq
0 Playground 0.33
1 Park 0.33
2 Intersection 0.33
3 Monument / Landmark 0.00
4 Martial Arts School 0.00
----Mimico NW, The Queensway West, South of Bloor, Kingsway Park South West, Royal York South West----
venue freq
0 Gym 0.07
1 Tanning Salon 0.07
2 Grocery Store 0.07
3 Kids Store 0.07
4 Fast Food Restaurant 0.07
----Moore Park, Summerhill East----
venue freq
0 Park 1.0
1 Yoga Studio 0.0
2 Motel 0.0
3 Martial Arts School 0.0
4 Massage Studio 0.0
----New Toronto, Mimico South, Humber Bay Shores----
venue freq
0 Café 0.2
1 Gym 0.1
2 Bakery 0.1
3 Pharmacy 0.1
4 Restaurant 0.1
----North Park, Maple Leaf Park, Upwood Park----
venue freq
0 Massage Studio 0.25
1 Bakery 0.25
2 Construction & Landscaping 0.25
3 Park 0.25
4 Yoga Studio 0.00
----North Toronto West, Lawrence Park----
venue freq
0 Clothing Store 0.20
1 Coffee Shop 0.10
2 Yoga Studio 0.05
3 Salon / Barbershop 0.05
4 Bagel Shop 0.05
----Northwest, West Humber - Clairville----
venue freq
0 Garden Center 0.33
1 Drugstore 0.33
2 Rental Car Location 0.33
3 Yoga Studio 0.00
4 Middle Eastern Restaurant 0.00
----Northwood Park, York University----
venue freq
0 Massage Studio 0.17
1 Falafel Restaurant 0.17
2 Caribbean Restaurant 0.17
3 Bar 0.17
4 Coffee Shop 0.17
----Old Mill South, King's Mill Park, Sunnylea, Humber Bay, Mimico NE, The Queensway East, Royal York South East, Kingsway Park South East----
venue freq
0 Baseball Field 0.5
1 Deli / Bodega 0.5
2 Yoga Studio 0.0
3 Motel 0.0
4 Massage Studio 0.0
----Parkdale, Roncesvalles----
venue freq
0 Breakfast Spot 0.14
1 Gift Shop 0.14
2 Coffee Shop 0.07
3 Eastern European Restaurant 0.07
4 Bar 0.07
----Parkview Hill, Woodbine Gardens----
venue freq
0 Pizza Place 0.17
1 Flea Market 0.08
2 Intersection 0.08
3 Breakfast Spot 0.08
4 Café 0.08
----Parkwoods----
venue freq
0 Park 0.5
1 Food & Drink Shop 0.5
2 Yoga Studio 0.0
3 Monument / Landmark 0.0
4 Martial Arts School 0.0
----Queen's Park, Ontario Provincial Government----
venue freq
0 Coffee Shop 0.20
1 Sushi Restaurant 0.07
2 Diner 0.07
3 Yoga Studio 0.03
4 Fried Chicken Joint 0.03
----Regent Park, Harbourfront----
venue freq
0 Coffee Shop 0.15
1 Bakery 0.07
2 Park 0.07
3 Breakfast Spot 0.04
4 Theater 0.04
----Richmond, Adelaide, King----
venue freq
0 Coffee Shop 0.08
1 Café 0.06
2 Sushi Restaurant 0.04
3 Pizza Place 0.04
4 Steakhouse 0.04
----Rosedale----
venue freq
0 Park 0.50
1 Playground 0.25
2 Trail 0.25
3 Mexican Restaurant 0.00
4 Molecular Gastronomy Restaurant 0.00
----Roselawn----
venue freq
0 Pool 0.33
1 Garden 0.33
2 Home Service 0.33
3 Yoga Studio 0.00
4 Middle Eastern Restaurant 0.00
----Rouge Hill, Port Union, Highland Creek----
venue freq
0 Construction & Landscaping 0.5
1 Bar 0.5
2 Yoga Studio 0.0
3 Motel 0.0
4 Massage Studio 0.0
----Runnymede, Swansea----
venue freq
0 Café 0.08
1 Sushi Restaurant 0.08
2 Coffee Shop 0.08
3 Pub 0.06
4 Pizza Place 0.06
----Runnymede, The Junction, Weston-Pellam Park, Carlton Village----
venue freq
0 Caribbean Restaurant 0.25
1 Bus Line 0.25
2 Grocery Store 0.25
3 Breakfast Spot 0.25
4 Motel 0.00
----Scarborough Village----
venue freq
0 Playground 1.0
1 Motel 0.0
2 Martial Arts School 0.0
3 Massage Studio 0.0
4 Medical Center 0.0
----South Steeles, Silverstone, Humbergate, Jamestown, Mount Olive, Beaumond Heights, Thistletown, Albion Gardens----
venue freq
0 Grocery Store 0.22
1 Pharmacy 0.11
2 Coffee Shop 0.11
3 Pizza Place 0.11
4 Beer Store 0.11
----St. James Town----
venue freq
0 Café 0.10
1 Coffee Shop 0.06
2 Gastropub 0.06
3 Seafood Restaurant 0.04
4 Cosmetics Shop 0.04
----St. James Town, Cabbagetown----
venue freq
0 Coffee Shop 0.07
1 Restaurant 0.05
2 Chinese Restaurant 0.05
3 Pizza Place 0.05
4 Bakery 0.05
----Steeles West, L'Amoreaux West----
venue freq
0 Fast Food Restaurant 0.15
1 Coffee Shop 0.08
2 Pizza Place 0.08
3 Supermarket 0.08
4 Sandwich Place 0.08
----Stn A PO Boxes----
venue freq
0 Restaurant 0.06
1 Café 0.06
2 Coffee Shop 0.06
3 Cheese Shop 0.04
4 Bakery 0.04
----Studio District----
venue freq
0 Coffee Shop 0.08
1 Brewery 0.06
2 Bakery 0.06
3 Gastropub 0.06
4 American Restaurant 0.06
----Summerhill West, Rathnelly, South Hill, Forest Hill SE, Deer Park----
venue freq
0 Coffee Shop 0.14
1 Light Rail Station 0.07
2 Bagel Shop 0.07
3 Pub 0.07
4 Restaurant 0.07
----The Annex, North Midtown, Yorkville----
venue freq
0 Café 0.14
1 Sandwich Place 0.14
2 Coffee Shop 0.09
3 Pub 0.05
4 Furniture / Home Store 0.05
----The Beaches----
venue freq
0 Pub 0.25
1 Trail 0.25
2 Health Food Store 0.25
3 Molecular Gastronomy Restaurant 0.00
4 Modern European Restaurant 0.00
----The Danforth West, Riverdale----
venue freq
0 Greek Restaurant 0.19
1 Coffee Shop 0.09
2 Italian Restaurant 0.07
3 Furniture / Home Store 0.05
4 Ice Cream Shop 0.05
----The Kingsway, Montgomery Road, Old Mill North----
venue freq
0 River 1.0
1 Yoga Studio 0.0
2 Motel 0.0
3 Martial Arts School 0.0
4 Massage Studio 0.0
----Thorncliffe Park----
venue freq
0 Sandwich Place 0.09
1 Indian Restaurant 0.09
2 Yoga Studio 0.04
3 Intersection 0.04
4 Coffee Shop 0.04
----Toronto Dominion Centre, Design Exchange----
venue freq
0 Coffee Shop 0.12
1 Café 0.10
2 Hotel 0.06
3 Seafood Restaurant 0.06
4 Restaurant 0.06
----University of Toronto, Harbord----
venue freq
0 Café 0.16
1 Bakery 0.06
2 Bar 0.06
3 Japanese Restaurant 0.06
4 Bookstore 0.06
----Victoria Village----
venue freq
0 Pizza Place 0.17
1 Portuguese Restaurant 0.17
2 French Restaurant 0.17
3 Coffee Shop 0.17
4 Hockey Arena 0.17
----West Deane Park, Princess Gardens, Martin Grove, Islington, Cloverdale----
venue freq
0 Bakery 0.5
1 Gift Shop 0.5
2 Yoga Studio 0.0
3 Mexican Restaurant 0.0
4 Molecular Gastronomy Restaurant 0.0
----Westmount----
venue freq
0 Pizza Place 0.25
1 Sandwich Place 0.12
2 Discount Store 0.12
3 Chinese Restaurant 0.12
4 Middle Eastern Restaurant 0.12
----Weston----
venue freq
0 Park 1.0
1 Yoga Studio 0.0
2 Motel 0.0
3 Martial Arts School 0.0
4 Massage Studio 0.0
----Wexford, Maryvale----
venue freq
0 Middle Eastern Restaurant 0.2
1 Bakery 0.2
2 Auto Garage 0.2
3 Sandwich Place 0.2
4 Vietnamese Restaurant 0.2
----Willowdale, Newtonbrook----
venue freq
0 Park 1.0
1 Yoga Studio 0.0
2 Motel 0.0
3 Martial Arts School 0.0
4 Massage Studio 0.0
----Willowdale, Willowdale East----
venue freq
0 Ramen Restaurant 0.09
1 Coffee Shop 0.06
2 Pizza Place 0.06
3 Café 0.06
4 Sandwich Place 0.06
----Willowdale, Willowdale West----
venue freq
0 Pizza Place 0.14
1 Coffee Shop 0.14
2 Discount Store 0.14
3 Supermarket 0.14
4 Butcher 0.14
----Woburn----
venue freq
0 Coffee Shop 0.50
1 Korean BBQ Restaurant 0.25
2 Pharmacy 0.25
3 Mexican Restaurant 0.00
4 Molecular Gastronomy Restaurant 0.00
----Woodbine Heights----
venue freq
0 Beer Store 0.12
1 Spa 0.12
2 Park 0.12
3 Video Store 0.12
4 Skating Rink 0.12
----York Mills West----
venue freq
0 Park 0.5
1 Convenience Store 0.5
2 Yoga Studio 0.0
3 Mexican Restaurant 0.0
4 Molecular Gastronomy Restaurant 0.0
###Markdown
Let's put this into a pandas DataFrame
###Code
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
**FINAL. WE CAN CLUSTER OUR NEIGHBORHOODS**
###Code
from sklearn.cluster import KMeans # unsupervised algorithm for clusterization
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
toronto_merged = toronto_neighs_df
# merge neighborhoods_venues_sorted with toronto_neighs_df to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighbourhood')
toronto_merged.head() # check the last columns!
# Drop any leftover cluster-label columns created by earlier buggy runs (ignored if they do not exist)
toronto_merged = toronto_merged.drop(["Cluster Label", "Clusterss", "Clusters"], axis=1, errors='ignore')
###Output
_____no_output_____
###Markdown
Finally, let's visualize the clusters
###Code
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=15)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighbourhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
    color=rainbow[int(cluster)-1],
fill=True,
fill_color=rainbow[int(cluster)-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
# Examine clusters
# Cluster 1
toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
1- Request the page from Wikipedia
2- Use BeautifulSoup to find the table inside the page
3- Use a FOR loop to take the information from every cell in the table and put it into a DataFrame (pandas)
4- Rename the columns
5- Clear the "Borough" column of "Not assigned" values
6- Group the data
###Code
df.shape
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import urllib.request
from bs4 import BeautifulSoup
import requests
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
import folium
import matplotlib.cm as cm
import matplotlib.colors as colors
###Output
_____no_output_____
###Markdown
1. Prepare and preprocess dataset
###Code
# use Wikipedia URL from 29th March 2019
url = 'https://en.wikipedia.org/w/index.php?title=List_of_postal_codes_of_Canada:_M&oldid=890001695'
page = urllib.request.urlopen(url)
soup = BeautifulSoup(page, "lxml")
# Capture the table
table = soup.find('table', class_='wikitable')
# Populate lists and preprocess data
PostalCodes = []
Boroughs = []
Neighborhoods = []
for row in table.find_all('tr'):
cells=row.findAll('td')
if(cells != []):
postal_code = cells[0].get_text()
borough = cells[1].get_text()
neighborhood = cells[2].get_text()
# Only process the cells that have an assigned borough. Ignore cells with a borough that is Not assigned.
if(borough != 'Not assigned'):
PostalCodes.append(postal_code)
Boroughs.append(borough)
neighborhood = neighborhood.replace('\n','')
# If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough.
if(neighborhood != 'Not assigned'):
Neighborhoods.append(neighborhood)
else:
Neighborhoods.append(borough)
df=pd.DataFrame()
df['Postal Code']=PostalCodes
df['Borough']=Boroughs
df['Neighborhood']=Neighborhoods
df['Neighborhood'] = df.groupby(['Postal Code'])['Neighborhood'].transform(lambda x: ','.join(x))
df = df[['Postal Code','Borough','Neighborhood']].drop_duplicates()
df
###Output
_____no_output_____
###Markdown
```
# Tried to use geocoder, but it is unreliable:
!pip install geocoder
import geocoder

Latitudes = []
Longitudes = []

for postal_code in df['Postal Code']:
    # initialize your variable to None, then loop until you get the coordinates
    lat_lng_coords = None
    while(lat_lng_coords is None):
        g = geocoder.google('{}, Toronto, Ontario'.format(postal_code))
        lat_lng_coords = g.latlng
    latitude = lat_lng_coords[0]
    longitude = lat_lng_coords[1]
    Latitudes.append(latitude)
    Longitudes.append(longitude)
```
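If one still wanted to geocode the postal codes directly, a rate-limited geopy lookup tends to be more dependable than the geocoder snippet above. This is only a sketch: the `user_agent` string and the `'{postal_code}, Toronto, Ontario'` query format are assumptions, and Nominatim can still return None for some codes, which is why the precomputed CSV below is used instead.
```
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

geolocator = Nominatim(user_agent="toronto_postal_geocoder")  # assumed user_agent
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)  # respect Nominatim's rate limit

def lookup(postal_code):
    # Query format is an assumption; Nominatim may return None for some postal codes.
    location = geocode('{}, Toronto, Ontario'.format(postal_code))
    return (location.latitude, location.longitude) if location else (None, None)

# Example usage: coords = [lookup(pc) for pc in df['Postal Code']]
```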
###Code
# Tried to use the geocoder library, but it did not work as expected.
# Get data from http://cocl.us/Geospatial_data. curl must be used with these flags because of the redirections of the link shortener:
!curl -O -J -L https://cocl.us/Geospatial_data
latlong_df = pd.read_csv("Geospatial_Coordinates.csv")
latlong_df.head()
df = pd.merge(df, latlong_df, on='Postal Code')
df
###Output
_____no_output_____
###Markdown
2. Analyze data
First, a map of Toronto with neighborhood markers will be created. Then the neighborhoods (more specifically, pairs of latitude and longitude) will be clustered according to their most common venues.
Map of Toronto with borough and neighborhood markers
###Code
address = 'Toronto, ON'
geolocator = Nominatim(user_agent="coursera_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of Toronto are {}, {}.'.format(latitude, longitude))
# create map of New York using latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(df['Latitude'], df['Longitude'], df['Borough'], df['Neighborhood']):
label = '{} ({})'.format(borough, neighborhood)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
The geograpical coordinate of Toronto are 43.653963, -79.387207.
###Markdown
Find popular venues in each neighborhood
###Code
CLIENT_ID = '' # your Foursquare ID
CLIENT_SECRET = '' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
LIMIT = 100
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
# Sometimes the Foursquare API call fails; it is recommended to retry this cell if it does:
toronto_venues = getNearbyVenues(names=df['Neighborhood'],
latitudes=df['Latitude'],
longitudes=df['Longitude']
)
print('There are {} uniques categories.'.format(len(toronto_venues['Venue Category'].unique())))
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped.head()
###Output
_____no_output_____
###Markdown
Find most common venues in each neighborhood
###Code
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
Cluster neighborhoods by common venues
###Code
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, init='k-means++', random_state=0).fit(toronto_grouped_clustering)
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
toronto_merged = df
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
# drop NaN in the data
toronto_merged = toronto_merged.dropna()
toronto_merged.head()
###Output
_____no_output_____
###Markdown
Map with neighborhoods labeled by cluster
###Code
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[int(cluster)-1],
fill=True,
fill_color=rainbow[int(cluster)-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Cluster analysis
Cluster 1 is made up of the neighborhoods whose most common venues are restaurants (Fast Food, Pizza Place, etc.).
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
The most common venues in neighborhoods of Cluster 2 are parks.
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 1, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
All neighborhoods of Cluster 3 have a baseball field.
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 2, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 4 contains most of the neighborhoods and is the most heterogeneous.
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 3, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
There are only 2 neighborhoods in Cluster 5. Their distributions of most common venues are exactly the same.
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 4, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Part of Coursera MOOC
###Code
import numpy as np # library to handle data in a vectorized manner
import pandas as pd # library for data analsysis
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import json # library to handle JSON files
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
import requests # library to handle requests
from pandas.io.json import json_normalize # tranform JSON file into a pandas dataframe
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# import k-means from clustering stage
from sklearn.cluster import KMeans
import folium # map rendering library
from bs4 import BeautifulSoup
import requests
print('Libraries imported.')
# Create URL and retrieve page markup
url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
page_response = requests.get(url, timeout=5)
page_content = BeautifulSoup(page_response.content, "html.parser")
# Extract the table and rows list
table_data = page_content.find("table")
table_body = table_data.find("tbody")
rows = table_body.find_all("tr")
# Iterate rows extracting data from cells
data = []
first_row=True
for row in rows:
if first_row:
cols = row.find_all("th")
headers = [element.text.strip() for element in cols]
first_row=False
continue
cols = row.find_all("td")
cols = [element.text.strip() for element in cols]
data.append(cols)
# Create DataFrame an remove undesired rows
df_data = pd.DataFrame(data, columns=headers)
df_data = df_data[df_data["Borough"]!="Not assigned"].copy().reset_index(drop=True)
df_data = df_data.groupby(["Postcode", "Borough"])["Neighbourhood"].apply(lambda x: "{}".format(", ".join(x))).reset_index()
df_data.head()
# Shape of the DataFrame
df_data.shape
# Get the Geolocalization data
df_geo = pd.read_csv("https://cocl.us/Geospatial_data")
df_geo.head()
# Merge datasets
df_data = df_data.merge(df_geo, how="left", left_on="Postcode", right_on="Postal Code").drop(columns=["Postal Code"])
df_data.head()
# Get Lat & Lng of Toronto City
address = 'Toronto, ON'
geolocator = Nominatim(user_agent="toronto_explorer")  # recent geopy versions require an explicit user_agent
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of Toronto City are {}, {}.'.format(latitude, longitude))
# create map of Toronto using latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(df_data['Latitude'], df_data['Longitude'], df_data['Borough'], df_data['Neighbourhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill_color='#3186cc').add_to(map_toronto)
map_toronto
CLIENT_ID = 'XXX' # your Foursquare ID
CLIENT_SECRET = 'XXX' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
LIMIT = 100
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
#try:
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
#except:
# pass
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
toronto_venues = getNearbyVenues(names=df_data['Neighbourhood'],
latitudes=df_data['Latitude'],
longitudes=df_data['Longitude']
)
print(toronto_venues.shape)
toronto_venues.head()
toronto_venues.groupby('Neighborhood').count()
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = ["Neighborhood"] + [col for col in toronto_onehot.columns.tolist() if col not in ["Neighborhood"]]
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
# Grouping by Neighborhood and calculate the mean of the frecuency of each venue
toronto_grouped = toronto_onehot.groupby(["Neighborhood"]).mean().reset_index()
toronto_grouped.head()
def return_most_common_venues(row, num_top_venues):
    row_categories = row.iloc[1:]  # skip only the Neighborhood column, keep every category
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
toronto_merged = df_data[df_data["Neighbourhood"].isin(toronto_grouped["Neighborhood"].values.tolist())].copy()
toronto_merged.rename(columns={"Neighbourhood": "Neighborhood"}, inplace=True)
# add clustering labels, mapped by neighborhood name so the row order of df_data does not matter
cluster_map = dict(zip(toronto_grouped['Neighborhood'], kmeans.labels_))
toronto_merged['Cluster Labels'] = toronto_merged['Neighborhood'].map(cluster_map)
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
toronto_merged.head() # check the last columns!
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i+x+(i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster))
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Segmenting and Clustering Neighborhoods in Toronto: Activity 1 Getting Toronto neighborhoods data into a DataFrame Get the Toronto neighborhoods data, display some rows along with the shape and head, and finally transform the data into a pandas dataframe.
###Code
from bs4 import BeautifulSoup
from geopy.geocoders import Nominatim
import folium
import requests
import pandas as pd
import numpy as np
# Get data from the Wiki page
url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
source = requests.get(url).text
# convert it into a table with beautifulsoup
canada_data = BeautifulSoup(source, 'lxml')
# loop through the data to populate a "table of contents"
table_contents = []
table = canada_data.find('table')
for row in table.findAll('td'):
cell = {}
if row.span.text == 'Not assigned': # this ignores the cells with a borough that is Not assigned.
pass
else:
cell['PostalCode'] = row.p.text[:3]
cell['Borough'] = (row.span.text).split('(')[0]
cell['Neighborhood'] = (((((row.span.text).split('(')[1]).strip(')')).replace(' /', ',')).replace(')', ' ')).strip(' ')
table_contents.append(cell)
# Create a dataframe of toronto neighborhoods with the table of contents
torontoNHs = pd.DataFrame(table_contents)
# Replacing some str for their correct spelling
torontoNHs['Borough'] = torontoNHs['Borough'].replace({
'Downtown TorontoStn A PO Boxes25 The Esplanade': 'Downtown Toronto Stn A',
'East TorontoBusiness reply mail Processing Centre969 Eastern': 'East Toronto Business',
'EtobicokeNorthwest': 'Etobicoke Northwest',
'East YorkEast Toronto': 'East York/East Toronto',
'MississaugaCanada Post Gateway Processing Centre': 'Mississauga'})
torontoNHs.head()
torontoNHs.shape
###Output
_____no_output_____
###Markdown
The dataframe has 103 rows. Segmenting and Clustering Neighborhoods in Toronto: Activity 2 Adding coordinates to the data frame: get the geographical coordinates of each neighborhood and add them to the dataframe. We use the CSV file because of problems with the Geocoder package (a commented sketch of that approach is included at the top of the next cell).
###Code
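# For reference, a sketch of the Geocoder approach mentioned above, kept commented out because it proved unreliable.
# It assumes the `geocoder` package with the ArcGIS provider and simply retries until coordinates come back.
# import geocoder
# def get_latlng(postal_code):
#     coords = None
#     while coords is None:
#         g = geocoder.arcgis('{}, Toronto, Ontario'.format(postal_code))
#         coords = g.latlng
#     return coords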
url = 'https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/labs_v1/Geospatial_Coordinates.csv'
toronto_coords=pd.read_csv(url)
toronto_coords.shape
###Output
_____no_output_____
###Markdown
As it has the same shape we can join the columns to our data matching the postal code
###Code
torontoNHs = torontoNHs.join(toronto_coords.set_index('Postal Code'), on='PostalCode')
torontoNHs.head()
###Output
_____no_output_____
###Markdown
Segmenting and Clustering Neighborhoods in Toronto: Activity 3Exploring and cluster the neighborhoods in Toronto Only with boroughs that contain the word Toronto and replicating the same analysis we did to the New York City data. Getting the coordinates of Toronto to create the maps
###Code
address = 'Toronto, Ontario'
geolocator = Nominatim(user_agent="toronto_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Toronto are {}, {}.'.format(latitude, longitude))
###Output
The geographical coordinates of Toronto are 43.6534817, -79.3839347.
###Markdown
Creating the map
###Code
# create map of Toronto using latitude and longitude values
map_torontoNHs = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(torontoNHs['Latitude'], torontoNHs['Longitude'], torontoNHs['Borough'], torontoNHs['Neighborhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
).add_to(map_torontoNHs)
map_torontoNHs
# Getting a new data frame that only contains the entries with the word Toronto in the borough
t = torontoNHs[torontoNHs['Borough'].str.contains("Toronto")]
t.head()
t.shape
###Output
_____no_output_____
###Markdown
Seeing how many unique boroughs and how many neighborhoods there are
###Code
print('The dataframe has {} boroughs and {} neighborhoods.'.format(
len(t['Borough'].unique()),
t.shape[0]
)
)
###Output
The dataframe has 7 boroughs and 39 neighborhoods.
###Markdown
Now a map that contains only the neighborhoods with the word Toronto in its borough
###Code
# create map of Toronto using latitude and longitude values
map_Toronto = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(t['Latitude'], t['Longitude'], t['Borough'], t['Neighborhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
).add_to(map_Toronto)
map_Toronto
###Output
_____no_output_____
###Markdown
Define Foursquare Credentials and Version
###Code
CLIENT_ID = 'UMVNX4Y1RPNBYYB5KLXP22ZNPZD2RSY5YMC544FDFLIRU2ZR' # your Foursquare ID
CLIENT_SECRET = 'JFGNR2IPS2ROE2MGFJZOHLJWSCDBWSSFJW4LIEH21J3IKN04' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
LIMIT = 100 # A default Foursquare API limit value
print('Your credentials:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
###Output
Your credentials:
CLIENT_ID: UMVNX4Y1RPNBYYB5KLXP22ZNPZD2RSY5YMC544FDFLIRU2ZR
CLIENT_SECRET:JFGNR2IPS2ROE2MGFJZOHLJWSCDBWSSFJW4LIEH21J3IKN04
###Markdown
Create the GET request URL (foursquare)
###Code
t.head()
t.loc[2, 'Neighborhood']
###Output
_____no_output_____
###Markdown
Explore the data, and get the venues in 500 meters range from our first entry
###Code
neighborhood_latitude = t.loc[2, 'Latitude'] # neighborhood latitude value
neighborhood_longitude = t.loc[2, 'Longitude'] # neighborhood longitude value
neighborhood_name = t.loc[2, 'Neighborhood'] # neighborhood name
print('Latitude and longitude values of {} are {}, {}.'.format(neighborhood_name,
neighborhood_latitude,
neighborhood_longitude))
###Output
Latitude and longitude values of Regent Park, Harbourfront are 43.6542599, -79.3606359.
###Markdown
Create the GET request URL
###Code
LIMIT = 100
radius = 500
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
neighborhood_latitude,
neighborhood_longitude,
radius,
LIMIT)
url
results = requests.get(url).json()
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
venues = results['response']['groups'][0]['items']
nearby_venues = pd.json_normalize(venues) # flatten JSON
# filter columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues = nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues.head()
###Output
_____no_output_____
###Markdown
Generalize to obtain the venues from all neighborhoods in t
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list = []
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighbourhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
toronto_venues = getNearbyVenues(names=t['Neighborhood'], latitudes=t['Latitude'], longitudes = t['Longitude'])
###Output
Regent Park, Harbourfront
Garden District, Ryerson
St. James Town
The Beaches
Berczy Park
Central Bay Street
Christie
Richmond, Adelaide, King
Dufferin, Dovercourt Village
The Danforth East
Harbourfront East, Union Station, Toronto Islands
Little Portugal, Trinity
The Danforth West, Riverdale
Toronto Dominion Centre, Design Exchange
Brockton, Parkdale Village, Exhibition Place
India Bazaar, The Beaches West
Commerce Court, Victoria Hotel
Studio District
Lawrence Park
Roselawn
Davisville North
Forest Hill North & West
High Park, The Junction South
North Toronto West
The Annex, North Midtown, Yorkville
Parkdale, Roncesvalles
Davisville
University of Toronto, Harbord
Runnymede, Swansea
Moore Park, Summerhill East
Kensington Market, Chinatown, Grange Park
Summerhill West, Rathnelly, South Hill, Forest Hill SE, Deer Park
CN Tower, King and Spadina, Railway Lands, Harbourfront West, Bathurst Quay, South Niagara, Island airport
Rosedale
Enclave of M5E
St. James Town, Cabbagetown
First Canadian Place, Underground city
Church and Wellesley
Enclave of M4L
###Markdown
print venues by Neighborhood
###Code
toronto_venues.groupby('Neighbourhood').count()
###Output
_____no_output_____
###Markdown
How many categories can we find?
###Code
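# A quick answer to the question above (a small addition; it uses the toronto_venues dataframe built earlier)
print('There are {} unique categories.'.format(len(toronto_venues['Venue Category'].unique())))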
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighbourhood'] = toronto_venues['Neighbourhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
###Output
_____no_output_____
###Markdown
Get the top 10 for each neighbourhood
###Code
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
toronto_grouped = toronto_onehot.groupby('Neighbourhood').mean().reset_index()
# create columns according to number of top venues
columns = ['Neighbourhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighbourhood'] = toronto_grouped['Neighbourhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
Cluster neighborhoods
###Code
# import k-means from clustering stage
from sklearn.cluster import KMeans
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighbourhood', axis=1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
###Output
_____no_output_____
###Markdown
Merge the dataframe with the top 10 and the cluster for each neighborhood
###Code
# add clustering labels
neighborhoods_venues_sorted.insert(2, 'Cluster Labels', kmeans.labels_)
toronto_merged = t
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighbourhood'), on='Neighborhood')
toronto_merged.head() # check the last columns!
import matplotlib.cm as cm
import matplotlib.colors as colors
toronto_merged[toronto_merged['Cluster Labels'].isnull()]
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
toronto_merged_nonan = toronto_merged.dropna(subset=['Cluster Labels'])
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged_nonan['Latitude'], toronto_merged_nonan['Longitude'], toronto_merged_nonan['Neighborhood'], toronto_merged_nonan['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[int(cluster-1)],
fill=True,
fill_color=rainbow[int(cluster-1)],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Examine clustersCluster 1
###Code
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 0, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 2
###Code
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 1, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 3
###Code
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 2, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 4
###Code
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 3, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 5
###Code
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 4, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
###Output
_____no_output_____
###Markdown
Segmenting and Clustering Neighborhoods in Toronto Objective In this assignment, we're required to explore, segment, and cluster the neighborhoods in the city of Toronto. We'll scrape the Wikipedia page, wrangle the data, clean it, and then read it into a pandas dataframe so that it is in a structured format like the New York dataset. Once the data is in a structured format, we'll replicate the analysis that we did on the New York City dataset to explore and cluster the neighborhoods in the city of Toronto. Setting up the pre-requisites Before we start work, let's install and import the required libraries and set up options that will help in later stages.
###Code
# Uncomment to install required packages
#!pip install beautifulsoup4
#!pip install lxml
#!pip install geopy
#!pip install geocoder
#!pip install folium
print('Installations done.')
# Importing necessary libraries
import numpy as np # library to handle data in a vectorized manner
import pandas as pd # library for data analysis
import requests # library to handle requests
import json # library to handle JSON files
from pandas.io.json import json_normalize # transform JSON file into a pandas dataframe
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
import folium # map rendering library
# K-means for clustering
from sklearn.cluster import KMeans
# For error handling, progress tracking and API throttling
from random import randint
from time import sleep
import sys
import logging
# For web scraping and location retrieval
from bs4 import BeautifulSoup
import geocoder as gc # import geocoder
from geopy.geocoders import Nominatim # we could've used geocoder (above) as well
# JSON and HTML formatting
import pprint
print('Libraries imported.')
# Adjust options
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# In case we need to reset
#pd.reset_option('display.max_colwidth')
#pd.reset_option('display.max_rows')
# So that more data can be printed on a single line
pp = pprint.PrettyPrinter(width=120)
print('Options adjusted.')
###Output
Options adjusted.
###Markdown
Part 1 **Requirement:** _Use pandas, or the BeautifulSoup package, or any other way you are comfortable with to transform the data in the table on the Wikipedia page into the pandas dataframe._ Let's download the postal codes of Toronto city through the Wikipedia page, scrape it using **BeautifulSoup** and create the DataFrame. _Ensure that all requirements from point 2 & 3 of the assignment are met here._
###Code
# Create request object, download page and read the desired table
url = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text
page = BeautifulSoup(url,'lxml')
table = page.find ('table', class_='wikitable sortable')
# Load the headers from the table
header_list=[]
for header in table.findAll ('th'):
header_list.append(header.text.replace('\n', '').title().replace(' ',''))
# Load the table content row-by-row
output_rows = []
for table_row in table.findAll('tr'):
columns = table_row.findAll('td')
output_row = []
for column in columns:
# Replace any unwanted characters
output_row.append(column.text.replace('\n', '').replace(' / ',', '))
# Ignore rows with unassigned boroughs
if output_row and not output_row[1]=='Not assigned':
output_rows.append(output_row)
# Insert table into the DataFrame and naming columns
df = pd.DataFrame(output_rows)
df.columns = header_list
# Ensuring neighborhoods are combined against the PostalCode in a single row. The requirement was already met in the source page.
print ('\'PostalCode\' column has unique values:', df['PostalCode'].nunique()==df['PostalCode'].size)
# Checking if every borough has a neighborhood, in case we need to assign a borough to a "Not assigned" neighborhood. The requirement was already met in the source page.
print ('\'Neighborhood\' column doesn\'t have a blank value:', df[df['Neighborhood']==''].size==0)
# Print number of rows of the dataframe
df.shape
###Output
'PostalCode' column has unique values: True
'Neighborhood' column doesn't have a blank value: True
###Markdown
Part 2 **Requirement:** _Use the Geocoder package or the CSV file to retrieve the location information._ Let's try to use the **Geocoder** package with the **Arcgis** provider for the location coordinates. Finally, we load the data into the DataFrame.
###Code
# initialize the variable to None
lat_lng_coords = None
# Preparing lists for holding the latitude and longitude information
lat_list=[]
lng_list=[]
# This will show progress
sys.stdout.write("Location retrieval progress: %d%% \r" % (0))
sys.stdout.flush()
# Setting up logging, incase things go wrong
logging.basicConfig(level=logging.ERROR,
format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
filename='errors.log',
filemode='w')
for ind in df.index:
# loop until you get the coordinates
attempt = 1
while(lat_lng_coords is None):
try:
# Attempting to retrieve location against Postal Code
g = gc.arcgis('{}, Toronto, Ontario'.format(df['PostalCode'][ind]))
lat_lng_coords = g.latlng
except:
if attempt <=5: # Limiting to 5 attempts
logging.error('Failed attempt #{} for {}, Toronto, Ontario'.format(attempt, df['PostalCode'][ind]))
attempt = attempt + 1
# Trying to avoid blocking by the server due to quick calls
sleep(0.01/randint(1,10000))
pass
else:
raise
# Loading the coordinates into their respective lists
lat_list.append(round(lat_lng_coords[0], 6))
lng_list.append(round(lat_lng_coords[1], 6))
lat_lng_coords = None
# Updating progress
sys.stdout.write("Location retrieval progress: %d%% \r" % (ind/df['PostalCode'].size*100))
sys.stdout.flush()
# Trying to avoid blocking by the server due to quick calls
sleep(0.01/randint(1,10000))
sys.stdout.write("Location retrieval progress: %d%% \r" % (100))
# Assigning data to DataFrame
df['Latitude']=lat_list
df['Longitude']=lng_list
df
###Output
Location retrieval progress: 100%
###Markdown
Backup plan Saving the DataFrame as a CSV file for later retrieval, in case we get blocked by the provider due to exceeding the limit, etc.
###Code
#df.to_csv('Toronto_Neighborhoods.csv', header=True, index=False)
#df = pd.read_csv('Toronto_Neighborhoods.csv')
###Output
_____no_output_____
###Markdown
Part 3 **Requirement:** _Explore and cluster the neighborhoods in Toronto_ Explore the data and visualize the neighborhoods. Retrieve the geographical coordinates of Toronto city for the map using _trn_explorer_ as the user_agent.
###Code
# Setting up the address
address = 'Toronto, Ontario'
# Initializing and retrieving the coordinates
geolocator = Nominatim(user_agent="trn_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Toronto city are {}, {}.'.format(latitude, longitude))
###Output
The geographical coordinates of Toronto city are 43.6534817, -79.3839347.
###Markdown
Let's select only those boroughs that contain the word Toronto e.g., Downtown Toronto etc. and slice the DataFrame.
###Code
# Selecting boroughs that contain the word Toronto
df_toronto = df[df['Borough'].str.contains('Toronto', regex=False)].reset_index(drop=True)
df_toronto.head()
###Output
_____no_output_____
###Markdown
Create a map of Toronto city with selected neighborhoods superimposed on top using **Folium** visualization library.
###Code
# create map of Toronto using latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=12)
# add markers to map
for lat, lng, label in zip(df_toronto['Latitude'], df_toronto['Longitude'], df_toronto['Neighborhood']):
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
###Markdown
Explore selected boroughs Let's use the Foursquare API to explore the neighborhoods and segment them.
###Code
# Define Foursquare Credentials, Version and Limits.
CLIENT_ID = '3BCVAXKBYMT2WBM10IJUSYVZUVSU15GUZLHLNS4NKPFSBLEM' # your Foursquare ID
CLIENT_SECRET = 'MDG201J4RR3FWOQ0RGFPBUE1TDQFAO0DH0KXL5MHJ3DA0J0T' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
LIMIT = 100 # limit of number of venues returned by Foursquare API
radius = 500 # define radius
print('Your credentials:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
###Output
Your credentials:
CLIENT_ID: 3BCVAXKBYMT2WBM10IJUSYVZUVSU15GUZLHLNS4NKPFSBLEM
CLIENT_SECRET:MDG201J4RR3FWOQ0RGFPBUE1TDQFAO0DH0KXL5MHJ3DA0J0T
###Markdown
Let's borrow the function from New York city lab to explore all the neighborhoods in selected boroughs.
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
###Output
_____no_output_____
###Markdown
Let's explore all near-by venues and save them in new DataFrame.
###Code
toronto_venues = getNearbyVenues(names=df_toronto['Neighborhood'],
latitudes=df_toronto['Latitude'],
longitudes=df_toronto['Longitude']
)
###Output
Regent Park, Harbourfront
Queen's Park, Ontario Provincial Government
Garden District, Ryerson
St. James Town
The Beaches
Berczy Park
Central Bay Street
Christie
Richmond, Adelaide, King
Dufferin, Dovercourt Village
Harbourfront East, Union Station, Toronto Islands
Little Portugal, Trinity
The Danforth West, Riverdale
Toronto Dominion Centre, Design Exchange
Brockton, Parkdale Village, Exhibition Place
India Bazaar, The Beaches West
Commerce Court, Victoria Hotel
Studio District
Lawrence Park
Roselawn
Davisville North
Forest Hill North & West
High Park, The Junction South
North Toronto West
The Annex, North Midtown, Yorkville
Parkdale, Roncesvalles
Davisville
University of Toronto, Harbord
Runnymede, Swansea
Moore Park, Summerhill East
Kensington Market, Chinatown, Grange Park
Summerhill West, Rathnelly, South Hill, Forest Hill SE, Deer Park
CN Tower, King and Spadina, Railway Lands, Harbourfront West, Bathurst Quay, South Niagara, Island airport
Rosedale
Stn A PO Boxes
St. James Town, Cabbagetown
First Canadian Place, Underground city
Church and Wellesley
Business reply mail Processing CentrE
###Markdown
Let's check the size of the resulting dataframe and peek into the contents.
###Code
print(toronto_venues.shape)
toronto_venues.head()
###Output
(1595, 7)
###Markdown
And number of venues returned for each neighborhood...
###Code
toronto_venues.groupby('Neighborhood').count()
###Output
_____no_output_____
###Markdown
How many unique categories can be curated from all the returned venues?
###Code
print('There are {} unique categories.'.format(len(toronto_venues['Venue Category'].unique())))
###Output
There are 219 unique categories.
###Markdown
Analyze each neighborhood Let's prepare data for the clustering.
###Code
# One hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# Add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# Move neighborhood column as the first column
col='Neighborhood'
temp = toronto_onehot[col]
toronto_onehot.drop(labels=[col], axis=1,inplace = True)
toronto_onehot.insert(0, col, temp)
toronto_onehot.head()
###Output
_____no_output_____
###Markdown
Let's explore the size of the DataFrame.
###Code
toronto_onehot.shape
###Output
_____no_output_____
###Markdown
Let's group rows by neighborhood, take the mean of the frequency of occurrence of each category, and create a new DataFrame from the result.
###Code
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped
###Output
_____no_output_____
###Markdown
Let's confirm the size of the resulting DataFrame.
###Code
toronto_grouped.shape
###Output
_____no_output_____
###Markdown
Let's print each neighborhood along with the top 5 most common venues
###Code
num_top_venues = 5
for hood in toronto_grouped['Neighborhood']:
print("----"+hood+"----")
temp = toronto_grouped[toronto_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
###Output
----Berczy Park----
venue freq
0 Coffee Shop 0.09
1 Cocktail Bar 0.04
2 Seafood Restaurant 0.04
3 Bakery 0.03
4 Hotel 0.03
----Brockton, Parkdale Village, Exhibition Place----
venue freq
0 Coffee Shop 0.09
1 Café 0.07
2 Gift Shop 0.04
3 Pizza Place 0.04
4 Diner 0.04
----Business reply mail Processing CentrE----
venue freq
0 Coffee Shop 0.07
1 Hotel 0.05
2 Japanese Restaurant 0.04
3 Café 0.04
4 Restaurant 0.03
----CN Tower, King and Spadina, Railway Lands, Harbourfront West, Bathurst Quay, South Niagara, Island airport----
venue freq
0 Coffee Shop 0.08
1 Restaurant 0.06
2 Café 0.06
3 French Restaurant 0.05
4 Park 0.05
----Central Bay Street----
venue freq
0 Coffee Shop 0.19
1 Plaza 0.04
2 Clothing Store 0.04
3 Breakfast Spot 0.04
4 Bubble Tea Shop 0.04
----Christie----
venue freq
0 Grocery Store 0.25
1 Café 0.25
2 Athletics & Sports 0.08
3 Park 0.08
4 Baby Store 0.08
----Church and Wellesley----
venue freq
0 Coffee Shop 0.12
1 Japanese Restaurant 0.08
2 Sushi Restaurant 0.05
3 Restaurant 0.05
4 Pub 0.04
----Commerce Court, Victoria Hotel----
venue freq
0 Coffee Shop 0.09
1 Restaurant 0.08
2 Italian Restaurant 0.06
3 Hotel 0.06
4 Café 0.06
----Davisville----
venue freq
0 Dessert Shop 0.10
1 Pizza Place 0.07
2 Italian Restaurant 0.07
3 Café 0.07
4 Coffee Shop 0.07
----Davisville North----
venue freq
0 Department Store 0.2
1 Food & Drink Shop 0.2
2 Breakfast Spot 0.2
3 Gym 0.2
4 Park 0.2
----Dufferin, Dovercourt Village----
venue freq
0 Furniture / Home Store 0.14
1 Park 0.14
2 Gym / Fitness Center 0.07
3 Pharmacy 0.07
4 Brazilian Restaurant 0.07
----First Canadian Place, Underground city----
venue freq
0 Coffee Shop 0.10
1 Café 0.07
2 Hotel 0.06
3 American Restaurant 0.04
4 Restaurant 0.04
----Forest Hill North & West----
venue freq
0 Restaurant 0.5
1 Gym / Fitness Center 0.5
2 Accessories Store 0.0
3 Organic Grocery 0.0
4 Monument / Landmark 0.0
----Garden District, Ryerson----
venue freq
0 Coffee Shop 0.11
1 Clothing Store 0.06
2 Sandwich Place 0.04
3 Middle Eastern Restaurant 0.04
4 Bar 0.03
----Harbourfront East, Union Station, Toronto Islands----
venue freq
0 Harbor / Marina 0.25
1 Park 0.25
2 Farm 0.25
3 Theme Park 0.25
4 Accessories Store 0.00
----High Park, The Junction South----
venue freq
0 Metro Station 0.2
1 Residential Building (Apartment / Condo) 0.2
2 Gas Station 0.2
3 Gym / Fitness Center 0.2
4 Park 0.2
----India Bazaar, The Beaches West----
venue freq
0 Fast Food Restaurant 0.12
1 Steakhouse 0.06
2 Sandwich Place 0.06
3 Liquor Store 0.06
4 Fish & Chips Shop 0.06
----Kensington Market, Chinatown, Grange Park----
venue freq
0 Café 0.09
1 Coffee Shop 0.07
2 Mexican Restaurant 0.07
3 Gaming Cafe 0.04
4 Vietnamese Restaurant 0.04
----Lawrence Park----
venue freq
0 Bus Line 0.5
1 Swim School 0.5
2 Accessories Store 0.0
3 Modern European Restaurant 0.0
4 Monument / Landmark 0.0
----Little Portugal, Trinity----
venue freq
0 Restaurant 0.07
1 Cocktail Bar 0.07
2 Bar 0.07
3 Coffee Shop 0.05
4 Vietnamese Restaurant 0.05
----Moore Park, Summerhill East----
venue freq
0 Playground 0.25
1 Trail 0.25
2 Gym 0.25
3 Park 0.25
4 Accessories Store 0.00
----North Toronto West----
venue freq
0 Playground 0.25
1 Garden 0.25
2 Gym Pool 0.25
3 Park 0.25
4 Accessories Store 0.00
----Parkdale, Roncesvalles----
venue freq
0 Coffee Shop 0.09
1 Sushi Restaurant 0.05
2 Eastern European Restaurant 0.05
3 Bookstore 0.05
4 Restaurant 0.05
----Queen's Park, Ontario Provincial Government----
venue freq
0 Coffee Shop 0.33
1 Sushi Restaurant 0.07
2 Café 0.05
3 Yoga Studio 0.02
4 Italian Restaurant 0.02
----Regent Park, Harbourfront----
venue freq
0 Pub 0.12
1 Café 0.08
2 Athletics & Sports 0.08
3 Coffee Shop 0.08
4 Distribution Center 0.04
----Richmond, Adelaide, King----
venue freq
0 Coffee Shop 0.09
1 Café 0.06
2 Clothing Store 0.04
3 Restaurant 0.04
4 Hotel 0.03
----Rosedale----
venue freq
0 Playground 0.25
1 Candy Store 0.25
2 Park 0.25
3 Grocery Store 0.25
4 Accessories Store 0.00
----Runnymede, Swansea----
venue freq
0 Café 0.09
1 Pizza Place 0.07
2 Coffee Shop 0.06
3 Pub 0.04
4 Sushi Restaurant 0.04
----St. James Town----
venue freq
0 Coffee Shop 0.06
1 Café 0.06
2 Seafood Restaurant 0.04
3 Cosmetics Shop 0.04
4 Gastropub 0.04
----St. James Town, Cabbagetown----
venue freq
0 Coffee Shop 0.07
1 Italian Restaurant 0.05
2 Bakery 0.05
3 Restaurant 0.05
4 Pub 0.05
----Stn A PO Boxes----
venue freq
0 Coffee Shop 0.07
1 Hotel 0.05
2 Japanese Restaurant 0.04
3 Café 0.04
4 Restaurant 0.03
----Studio District----
venue freq
0 Business Service 0.2
1 Government Building 0.2
2 Athletics & Sports 0.2
3 Night Market 0.2
4 Baseball Field 0.2
----Summerhill West, Rathnelly, South Hill, Forest Hill SE, Deer Park----
venue freq
0 Light Rail Station 0.2
1 Coffee Shop 0.2
2 Athletics & Sports 0.2
3 Café 0.1
4 Supermarket 0.1
----The Annex, North Midtown, Yorkville----
venue freq
0 Sandwich Place 0.12
1 Café 0.08
2 Pub 0.04
3 Pharmacy 0.04
4 French Restaurant 0.04
----The Beaches----
venue freq
0 Pub 0.2
1 Health Food Store 0.2
2 Trail 0.2
3 Church 0.2
4 Accessories Store 0.0
----The Danforth West, Riverdale----
venue freq
0 Bus Line 0.2
1 Business Service 0.2
2 Discount Store 0.2
3 Park 0.2
4 Grocery Store 0.2
----Toronto Dominion Centre, Design Exchange----
venue freq
0 Coffee Shop 0.11
1 Hotel 0.09
2 Café 0.07
3 Restaurant 0.04
4 Japanese Restaurant 0.04
----University of Toronto, Harbord----
venue freq
0 Café 0.15
1 Bookstore 0.09
2 Park 0.06
3 Restaurant 0.06
4 Italian Restaurant 0.06
###Markdown
Let's borrow the function from the course lab to sort the venues in descending order.
###Code
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
###Output
_____no_output_____
###Markdown
Now let's create the new dataframe and display the top 10 venues for each neighborhood.
###Code
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
Cluster the neighborhoods Run k-means to cluster the neighborhoods into **7 clusters**. _This number was chosen empirically as it gave the best segmentation for this data; a quick way to sanity-check the choice is sketched below._
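One could compare the k-means inertia across a few values of k and look for an 'elbow' (a minimal sketch, assuming the `toronto_grouped` dataframe built above):

```python
from sklearn.cluster import KMeans

X = toronto_grouped.drop('Neighborhood', axis=1)
for k in range(2, 11):
    km = KMeans(n_clusters=k, random_state=0, n_init=10).fit(X)
    print(k, round(km.inertia_, 3))  # within-cluster sum of squares; smaller is tighter
```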
###Code
# set number of clusters
kclusters = 7
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', axis=1)
# run k-means clustering
kmeans = KMeans(init = "k-means++", n_clusters=kclusters, random_state=0, n_init = 3).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
pd.DataFrame(kmeans.labels_).groupby(0)[0].count()
###Output
_____no_output_____
###Markdown
Let's create a new dataframe that includes the cluster as well as the top 10 venues for each neighborhood. We use the **'right'** join option because we might not have venue information for every postal code; keeping those unmatched rows would otherwise introduce _NaN_ values in later stages.
###Code
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
toronto_merged = df_toronto
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood', how='right')
toronto_merged.head() # check the last columns!
###Output
_____no_output_____
###Markdown
Let's visualize the resulting clusters.
###Code
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=12)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[int(cluster)-1],
fill=True,
fill_color=rainbow[int(cluster)-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Examine Clusters Looks like the people in clusters 0 and 3 love to hang out, though cluster 0 is on the slightly more reserved side. People in cluster 2 seem to be health-conscious and athletics oriented. Cluster 5 seems to be inhabited by large families with kids. Clusters 1, 4 and 6 seem to be outliers.
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[1,2] + list(range(6, toronto_merged.shape[1]))]]
toronto_merged.loc[toronto_merged['Cluster Labels'] == 2, toronto_merged.columns[[1,2] + list(range(6, toronto_merged.shape[1]))]]
toronto_merged.loc[toronto_merged['Cluster Labels'] == 3, toronto_merged.columns[[1,2] + list(range(6, toronto_merged.shape[1]))]]
toronto_merged.loc[toronto_merged['Cluster Labels'] == 5, toronto_merged.columns[[1,2] + list(range(6, toronto_merged.shape[1]))]]
toronto_merged.loc[toronto_merged['Cluster Labels'] == 1, toronto_merged.columns[[1,2] + list(range(6, toronto_merged.shape[1]))]]
toronto_merged.loc[toronto_merged['Cluster Labels'] == 4, toronto_merged.columns[[1,2] + list(range(6, toronto_merged.shape[1]))]]
toronto_merged.loc[toronto_merged['Cluster Labels'] == 6, toronto_merged.columns[[1,2] + list(range(6, toronto_merged.shape[1]))]]
###Output
_____no_output_____ |
notebooks/archive/3.3.1-hef-ema-summary.ipynb | ###Markdown
Exploring the UTx000 DatasetFrom the first cohort in Spring 2020
###Code
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
EMA SummaryThis dataset was more or less a trial run, but some of these data could be useful. We need to look at what kind of data we can recover from the EMAs in addition to getting some statistics on the participation level.
###Code
import os
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Data ImportWe can import the morning and evening surveys now that they have been processed.
###Code
survey = {}
for timing in ['morning','evening','weekly']:
df = pd.read_csv(f'../data/processed/bpeace1-{timing}-survey.csv',
index_col=0,parse_dates=True,infer_datetime_format=True)
print(df.head())
survey[timing] = df
###Output
ID Content Stress Lonely Sad Energy \
2020-02-03 11:06:25 6mkypp1o 2 1 0.0 0 2
2020-03-01 20:01:45 6mkypp1o 2 1 0.0 0 3
2020-02-28 09:06:27 6mkypp1o 1 1 0.0 0 1
2020-02-21 08:30:11 6mkypp1o 2 0 0.0 0 1
2020-02-10 11:25:38 6mkypp1o 1 1 0.0 0 2
TST SOL NAW Restful
2020-02-03 11:06:25 6-7 hours 10-20 minutes NaN 2
2020-03-01 20:01:45 9-10 hours 10-20 minutes NaN 2
2020-02-28 09:06:27 6-7 hours 10-20 minutes 2.0 1
2020-02-21 08:30:11 5-6 hours 10-20 minutes 2.0 1
2020-02-10 11:25:38 6-7 hours 10-20 minutes NaN 1
ID Content Stress Lonely Sad Energy
2020-02-04 11:10:22 6mkypp1o 2.0 2.0 1 1 3
2020-03-07 12:19:00 6mkypp1o 1.0 2.0 0 2 2
2020-02-03 10:04:37 6mkypp1o 2.0 1.0 0 0 2
2020-03-01 19:01:56 6mkypp1o 2.0 1.0 0 0 3
2020-02-10 19:05:36 6mkypp1o 2.0 1.0 0 0 3
ID Upset Unable Stressed Confident Your_Way \
2020-01-31 12:12:29 6mkypp1o 2 1 2 3.0 2
2020-02-23 11:28:13 6mkypp1o 2 0 2 3.0 3
2020-02-15 22:15:59 6mkypp1o 2 0 3 2.0 2
2020-02-29 09:28:55 6mkypp1o 1 0 2 3.0 3
2020-02-03 10:05:29 6mkypp1o 2 1 2 3.0 2
Cope Able Top Angered Overcome
2020-01-31 12:12:29 0.0 3 2.0 2 0
2020-02-23 11:28:13 1.0 3 2.0 2 1
2020-02-15 22:15:59 0.0 3 2.0 2 0
2020-02-29 09:28:55 1.0 3 2.0 1 0
2020-02-03 10:05:29 0.0 3 2.0 2 0
###Markdown
Data InspectionHere we do some simple visualizations to check out how much data we have available. By the Numbers
###Code
for key, val in survey.items():
n = len(val['ID'].unique())
print(f'Number of Participants submitting {key} surveys:\t{n}')
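# A natural next step (sketch, not run here) would be per-participant submission counts, e.g.:
# for key, val in survey.items():
#     counts = val.groupby('ID').size()
#     print(key, 'median submissions per participant:', counts.median())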
###Output
Number of Participants submitting morning surveys: 73
Number of Participants submitting evening surveys: 73
Number of Participants submitting weekly surveys: 57
|
SolAster/examples/.ipynb_checkpoints/full_pipeline-checkpoint.ipynb | ###Markdown
Outline of correction and velocity calculations from SDO/HMI Images
###Code
import datetime
import pandas as pd
import numpy as np
import sunpy.map
from sunpy.net import Fido
from sunpy.net import attrs as a
from sunpy.coordinates import frames
import sys, os
sys.path.append(os.path.realpath('../../'))
import SolAster.tools.rvs as rvs
import SolAster.tools.calculation_funcs as sfuncs
import SolAster.tools.lbc_funcs as lbfuncs
import SolAster.tools.coord_funcs as ctfuncs
import SolAster.tools.utilities as utils
from SolAster.tools.settings import *
from SolAster.tools.plotting_funcs import hmi_plot
# update inputs class
class Inputs:
"""
Class to hold user specified inputs to run examples.
See README or documentation site for additional information.
"""
# name of csv file to store calculations
csv_name = 'example_calcs.csv'
# name of instrument to use for calculation of RV model
# choose either 'NEID' or 'HARPS-N'
inst = 'NEID'
# querying cadence in seconds
cadence = 24 * 60 * 60
# start date for calculations
start_date = datetime.datetime(2021, 2, 10, 0, 0, 0)
# end date for calculations
end_date = datetime.datetime(2021, 2, 14, 0, 0, 0)
# True if outputting diagnostic plots
diagnostic_plots = True
# path to save diagnostic figure or none
save_fig = None
# figure title for plot of calculations
fig_title = 'example_plot.png'
###Output
_____no_output_____
###Markdown
Setup CSV file and SDO/HMI querying parameters.
###Code
# check input formats
start_date, end_date, cadence, csv_name = utils.check_inputs(CsvDir.CALC, Inputs.start_date, Inputs.end_date,
Inputs.cadence, Inputs.csv_name)
# print out csv title
print("Beginning calculation of values for csv file: " + csv_name)
# List of header strings
row_contents = ['date_obs', 'date_jd', 'rv_model', 'v_quiet', 'v_disc', 'v_phot', 'v_conv', 'f_bright', 'f_spot', 'f',
'Bobs', 'vphot_bright', 'vphot_spot', 'f_small', 'f_large', 'f_network', 'f_plage',
'quiet_flux', 'ar_flux', 'conv_flux', 'pol_flux', 'pol_conv_flux', 'vconv_quiet', 'vconv_large',
'vconv_small']
# create file names
csv_file = os.path.join(CsvDir.CALC, csv_name)
bad_dates_csv = os.path.join(CsvDir.CALC, csv_name[:-4] + '_bad_dates.csv')  # strip '.csv' before adding the suffix
print(bad_dates_csv)
utils.append_list_as_row(csv_file, row_contents)
# get hmi data products
time_range = datetime.timedelta(seconds=22)
physobs_list = [a.Physobs.los_velocity, a.Physobs.los_magnetic_field, a.Physobs.intensity]
# get dates list
xy = (end_date - start_date).seconds + (end_date - start_date).days * 24 * 3600
dates_list = [start_date + datetime.timedelta(seconds=cadence*x) for x in range(0, int(xy/cadence))]
###Output
_____no_output_____
###Markdown
Component calculationsRun through list to calculate and save values.
###Code
for i, date in enumerate(dates_list):
# convert the date to a string -- required for use in csv file
date_str, date_obj, date_jd = utils.get_dates(date)  # use the utils alias imported above
# pull image within specified time range
result = Fido.search(a.Time(str(date_obj - time_range), str(date_obj + time_range)),
a.Instrument.hmi, physobs_list[0] | physobs_list[1] | physobs_list[2])
# add file to list
file_download = Fido.fetch(result)
# remove unusable file types
good_files = []
for file in file_download:
name, extension = os.path.splitext(file)
if extension == '.fits':
good_files.append(file)
if len(good_files) != 3:
# add the data
# append these values to the csv file
save_vals = [date_str, 'not three good files']
utils.append_list_as_row(bad_dates_csv, save_vals)
# print that the files are missing
print('\nNot three good files: ' + date_str + ' index: ' + str(i))
pass
else:
# convert to map sequence
map_seq = sunpy.map.Map(sorted(good_files))
# check for missing data types
missing_map = False
# split into data types
for j, map_obj in enumerate(map_seq):
if map_obj.meta['content'] == 'DOPPLERGRAM':
vmap = map_obj
elif map_obj.meta['content'] == 'MAGNETOGRAM':
mmap = map_obj
elif map_obj.meta['content'] == 'CONTINUUM INTENSITY':
imap = map_obj
else:
missing_map = True
if missing_map:
print("Missing a data product for " + date_str)
# add the data
# append these values to the csv file
save_vals = [date_str, 'missing data product']
utils.append_list_as_row(bad_dates_csv, save_vals)
pass
else:
# coordinate transformation for maps
x, y, pdim, r, d, mu = ctfuncs.coordinates(vmap)
wij, nij, rij = ctfuncs.vel_coords(x, y, pdim, r, vmap)
# remove bad mu values
vmap, mmap, imap = ctfuncs.fix_mu(mu, [vmap, mmap, imap], mu_cutoff=Parameters.mu_cutoff)
# calculate relative positions
deltaw, deltan, deltar, dij = sfuncs.rel_positions(wij, nij, rij, vmap)
# calculate spacecraft velocity
vsc = sfuncs.spacecraft_vel(deltaw, deltan, deltar, dij, vmap)
# optimized solar rotation parameters
a_parameters = [Parameters.a1, Parameters.a2, Parameters.a3]
# calculation of solar rotation velocity
vrot = sfuncs.solar_rot_vel(wij, nij, rij, deltaw, deltan, deltar, dij, vmap, a_parameters)
# calculate corrected velocity
corrected_vel = vmap.data - np.real(vsc) - np.real(vrot)
# corrected velocity maps
map_vel_cor = sfuncs.corrected_map(corrected_vel, vmap, map_type='Corrected-Dopplergram',
frame=frames.HeliographicCarrington)
# limb brightening
Lij = lbfuncs.limb_polynomial(imap)
# calculate corrected data
Iflat = imap.data / Lij
# corrected intensity maps
map_int_cor = sfuncs.corrected_map(Iflat, imap, map_type='Corrected-Intensitygram',
frame=frames.HeliographicCarrington)
# calculate unsigned field strength
Bobs, Br = sfuncs.mag_field(mu, mmap, B_noise=Parameters.B_noise, mu_cutoff=Parameters.mu_cutoff)
# corrected observed magnetic data map
map_mag_obs = sfuncs.corrected_map(Bobs, mmap, map_type='Corrected-Magnetogram',
frame=frames.HeliographicCarrington)
# radial magnetic data map
map_mag_cor = sfuncs.corrected_map(Br, mmap, map_type='Corrected-Magnetogram',
frame=frames.HeliographicCarrington)
# calculate magnetic threshold
active, quiet = sfuncs.mag_thresh(mu, mmap, Br_cutoff=Parameters.Br_cutoff, mu_cutoff=Parameters.mu_cutoff)
# calculate intensity threshold
fac_inds, spot_inds = sfuncs.int_thresh(map_int_cor, active, quiet)
# create threshold array
thresh_arr = sfuncs.thresh_map(fac_inds, spot_inds)
# full threshold maps
map_full_thresh = sfuncs.corrected_map(thresh_arr, mmap, map_type='Threshold',
frame=frames.HeliographicCarrington)
# create diagnostic plots
if i == 0 and Inputs.diagnostic_plots == True:
hmi_plot(map_int_cor, map_mag_cor, map_vel_cor, fac_inds, spot_inds, mu, save_fig=Inputs.save_fig)
### velocity contribution due to convective motion of quiet-Sun
v_quiet = sfuncs.v_quiet(map_vel_cor, imap, quiet)
### velocity contribution due to rotational Doppler imbalance of active regions (faculae/sunspots)
# calculate photospheric velocity
v_phot, vphot_bright, vphot_spot = sfuncs.v_phot(quiet, active, Lij, vrot, imap, mu, fac_inds, spot_inds, mu_cutoff=Parameters.mu_cutoff)
### velocity contribution due to suppression of convective blueshift by active regions
# calculate disc-averaged velocity
v_disc = sfuncs.v_disc(map_vel_cor, imap)
# calculate convective velocity
v_conv = v_disc - v_quiet
### filling factor
# calculate filling factor
f_bright, f_spot, f = sfuncs.filling_factor(mu, mmap, active, fac_inds, spot_inds, mu_cutoff=Parameters.mu_cutoff)
### unsigned magnetic flux
# unsigned observed flux
unsigned_obs_flux = sfuncs.unsigned_flux(map_mag_obs, imap)
### calculate the area filling factor
pixA_hem = ctfuncs.pix_area_hem(wij, nij, rij, vmap)
area = sfuncs.area_calc(active, pixA_hem)
f_small, f_large, f_network, f_plage = sfuncs.area_filling_factor(active, area, mu, mmap, fac_inds,
athresh=Parameters.athresh,
mu_cutoff=Parameters.mu_cutoff)
### get the unsigned flux
quiet_flux, ar_flux, conv_flux, pol_flux, pol_conv_flux = sfuncs.area_unsigned_flux(map_mag_obs, imap,
area,
active,
athresh=Parameters.athresh)
### get area weighted convective velocities
vconv_quiet, vconv_large, vconv_small = sfuncs.area_vconv(map_vel_cor, imap, active, area, athresh=Parameters.athresh)
### calculate model RV
rv_model = rvs.calc_model(Inputs.inst, v_conv, v_phot)
# intensity flux to check
int_flux = np.nansum(imap.data)
# make array of what we want to save
save_vals = [rv_model, v_quiet, v_disc, v_phot, v_conv, f_bright, f_spot, f, unsigned_obs_flux, vphot_bright,
vphot_spot, f_small, f_large, f_network, f_plage, quiet_flux, ar_flux,
conv_flux, pol_flux, pol_conv_flux, vconv_quiet, vconv_large, vconv_small, int_flux]
# round stuff
rounded = np.around(save_vals, 3)
round_vals = [date_str, date_jd]
for val in rounded:
round_vals.append(val)
# append these values to the csv file
utils.append_list_as_row(csv_file, round_vals)
# print that the date is completed
print('\nCalculations and save to file complete for ' + date_str + ' index: ' + str(i))
print('Calculation complete for dates:', start_date, 'to', end_date)
###Output
_____no_output_____
###Markdown
Plotting results
###Code
# csv file with rv components
csv_file = os.path.join(CsvDir.CALC, Inputs.csv_name)
# create pandas dataframe
component_df = pd.read_csv(csv_file)
import matplotlib.pyplot as plt
# date to plot
date_jd = component_df.date_jd.values
x = date_jd - date_jd[0]
y_list = [component_df.f.values, component_df.Bobs.values, component_df.v_conv.values, component_df.v_phot.values,
component_df.rv_model.values - np.median(component_df.rv_model.values)]
# plot labels
xlabel = 'Days since ' + str(int(date_jd[0])) + ' JD'
ylabel_list = [r'$\rm f$' '\n' r'$\rm$[%]',
r'$\rm B_{\rm obs}$' '\n' r'$\rm [G]$',
r'$\rm v_{\rm conv}$' '\n' r'$\rm[m s^{-1}]$',
r'$\rm v_{\rm phot}$' '\n' r'$\rm[m s^{-1}]$',
r'$\rm RV_{\rm model}$' '\n' r'$\rm[m s^{-1}]$']
# set up figure
fig, axs = plt.subplots(len(y_list), 1, sharex='all', figsize=[6, 1.5 * len(y_list)], gridspec_kw={'hspace': 0})
# set up axes labels
for i in range(0, len(axs)):
axs[i].set(ylabel=ylabel_list[i])
rng = (y_list[i].max() - y_list[i].min())
step = rng/6
ylim = (y_list[i].min() - step, y_list[i].max() + step)
yticks = np.arange(y_list[i].min(), y_list[i].max()+0.0002, step=step*2)
axs[i].set(ylim=ylim, yticks=yticks)
# create x-axis ticks and labels
axs[i].set(xlabel=xlabel)
rng = (x.max() - x.min())
step = rng/6
xlim = (x.min() - step, x.max() + step)
xticks = np.arange(x.min(), x.max()+.001, step=step*2)
axs[i].set(xlim=xlim, xticks=xticks)
# plot data
for i in range(0, len(axs)):
axs[i].scatter(x, y_list[i], color='thistle', s=30, edgecolors='k', linewidths=0.8,
label='rms: ' + str(np.round(np.std(y_list[i]), 3)))
leg = axs[i].legend(handlelength=0, handletextpad=0, loc='upper left')
for item in leg.legendHandles:
item.set_visible(False)
# align y axis labels
fig.align_ylabels(axs)
# save figure
fig_path = os.path.join(ImgDir.IMG_DIR, Inputs.fig_title)
fig.savefig(fig_path, dpi=1200)
###Output
_____no_output_____ |
Lectures/tutorial_numpy.ipynb | ###Markdown
Tutorial: NumPy
###Code
__author__ = "Christopher Potts, Will Monroe, and Lucy Li"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Motivation](Motivation)1. [Vectors](Vectors) 1. [Vector Initialization](Vector-Initialization) 1. [Vector indexing](Vector-indexing) 1. [Vector assignment](Vector-assignment) 1. [Vectorized operations](Vectorized-operations) 1. [Comparison with Python lists](Comparison-with-Python-lists)1. [Matrices](Matrices) 1. [Matrix initialization](Matrix-initialization) 1. [Matrix indexing](Matrix-indexing) 1. [Matrix assignment](Matrix-assignment) 1. [Matrix reshaping](Matrix-reshaping) 1. [Numeric operations](Numeric-operations)1. [Practical example: a shallow neural network](Practical-example:-a-shallow-neural-network)1. [Going beyond NumPy alone](Going-beyond-NumPy-alone) 1. [Pandas](Pandas) 1. [Scikit-learn](Scikit-learn) 1. [SciPy](SciPy) 1. [Matplotlib](Matplotlib) MotivationWhy should we care about NumPy? - It allows you to perform tons of operations on vectors and matrices. - It makes things run faster than naive for-loop implementations (a.k.a. vectorization). - We use it in our class (see files prefixed with `np_` in your `cs224u` directory). - It's used in a ton in machine learning / AI. - Its arrays are often inputs into other important Python packages' functions. In Jupyter notebooks, NumPy documentation is two clicks away: Help -> NumPy reference. Vectors
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Vector Initialization
###Code
np.zeros(5)
np.ones(5)
# convert list to numpy array
arr1 = np.array([1,2,3,4,5])
lis = [1,2,3,5,6,3,7,8,9,3,1,5,6,3,2,35,6,234,653,234325,6563434,343465,3243242356,4353453465456]
arr2 = np.array(lis)
arr3 = (arr2 * 2)^3   # careful: ^ is bitwise XOR in Python, not exponentiation -- use ** to raise to a power
arr3
# convert numpy array to list
np.ones(5).tolist()
# one float => all floats
np.array([1.0,2,3,4,5])
###Output
_____no_output_____
###Markdown
Creating an array of floats
###Code
# same as above
arr1 = np.array([1,2,3,4,10], dtype='float')
###Output
_____no_output_____
###Markdown
> Converting that array of floats into an array of ints method 1
###Code
arr2 = np.zeros(arr1.shape[0], dtype='int')
index =0
for x in arr1:
arr2[index] = x
index += 1
arr2
###Output
_____no_output_____
###Markdown
method 2 (much cleaner)
###Code
arr3 = arr1.astype(int)
arr3
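# note: astype(int) truncates toward zero; to round to the nearest integer instead, one could use
# np.rint(arr1).astype(int)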
# spaced values in interval
np.array([x for x in range(20) if x % 2 == 0])
# careful: this iterates over all 10**13 values before filtering, so it is impractically slow
np.array([x for x in range(10000000000000) if x < 50])
# same as above
np.arange(0,20,2)
# random floats in [0, 1)
np.random.random(10)
# random integers
np.random.randint(5, 15, size=10)
###Output
_____no_output_____
###Markdown
Vector indexing
###Code
x = np.array([10,20,30,40,50])
x[0]
# slice
x[0:2]
x[0:1000]
# last value
x[-1]
# last value as array
x[[-1]]
# last 3 values
x[-3:]
# pick indices
x[[0,2,4]]
###Output
_____no_output_____
###Markdown
Vector assignmentBe careful when assigning arrays to new variables!
###Code
#x2 = x # try this line instead
x2 = x.copy()
x2[0] = 10
x2
x2[[1,2]] = 10
x2
x2[[3,4]] = [0, 1]
x2
# check if the original vector changed
x
###Output
_____no_output_____
###Markdown
Vectorized operations
###Code
x.sum()
x.mean()
x.max()
x.argmax()
np.log(x)
np.exp(x)
x + x # Try also with *, -, /, etc.
x + 1
###Output
_____no_output_____
###Markdown
Comparison with Python listsVectorizing your mathematical expressions can lead to __huge__ performance gains. The following example is meant to give you a sense for this. It compares applying `np.log` to each element of a list with 10 million values with the same operation done on a vector.
###Code
# log every value as list, one by one
def listlog(vals):
return [np.log(y) for y in vals]
# get random vector
samp = np.random.random_sample(int(1e7))+1
samp
%time _ = np.log(samp)
%time _ = listlog(samp)
###Output
_____no_output_____
###Markdown
MatricesThe matrix is the core object of machine learning implementations. Matrix initialization
###Code
np.array([[1,2,3], [4,5,6]])
np.array([[1,2,3], [4,5,6]], dtype='float')
np.zeros((3,5))
np.ones((3,5))
np.identity(3)
np.diag([1,2,3])
###Output
_____no_output_____
###Markdown
Matrix indexing
###Code
X = np.array([[1,2,3], [4,5,6]])
X
X[0]
X[0,0]
# get row
X[0, : ]
# get column
X[ : , 0]
# get multiple columns
X[ : , [0,2]]
###Output
_____no_output_____
###Markdown
Matrix assignment
###Code
# X2 = X # try this line instead
X2 = X.copy()
X2
X2[0,0] = 20
X2
X2[0] = 3
X2
X2[: , -1] = [5, 6]
X2
# check if original matrix changed
X
###Output
_____no_output_____
###Markdown
Matrix reshaping
###Code
z = np.arange(1, 7)
z
z.shape
Z = z.reshape(2,3)
Z
Z.shape
Z.reshape(6)
# same as above
Z.flatten()
# transpose
Z.T
###Output
_____no_output_____
###Markdown
Numeric operations
###Code
A = np.array(range(1,7), dtype='float').reshape(2,3)
A
B = np.array([1, 2, 3])
# not the same as A.dot(B)
A * B
A + B
A / B
# matrix multiplication
A.dot(B)
B.dot(A.T)
A.dot(A.T)
# outer product
# multiplying each element of first vector by each element of the second
np.outer(B, B)
###Output
_____no_output_____
###Markdown
Practical example: a shallow neural network The following is a practical example of numerical operations on NumPy matrices. In our class, we have a shallow neural network implemented in `np_shallow_neural_network.py`. See how the forward and backward passes use no for loops, and instead takes advantage of NumPy's ability to vectorize manipulations of data. ```pythondef forward_propagation(self, x): h = self.hidden_activation(x.dot(self.W_xh) + self.b_xh) y = softmax(h.dot(self.W_hy) + self.b_hy) return h, ydef backward_propagation(self, h, predictions, x, labels): y_err = predictions.copy() y_err[np.argmax(labels)] -= 1 d_b_hy = y_err h_err = y_err.dot(self.W_hy.T) * self.d_hidden_activation(h) d_W_hy = np.outer(h, y_err) d_W_xh = np.outer(x, h_err) d_b_xh = h_err return d_W_hy, d_b_hy, d_W_xh, d_b_xh```The forward pass essentially computes the following: $$h = f(xW_{xh} + b_{xh})$$ $$y = \text{softmax}(hW_{hy} + b_{hy}),$$where $f$ is `self.hidden_activation`. The backward pass propagates error by computing local gradients and chaining them. Feel free to learn more about backprop [here](http://cs231n.github.io/optimization-2/), though it is not necessary for our class. Also look at this [neural networks case study](http://cs231n.github.io/neural-networks-case-study/) to see another example of how NumPy can be used to implement forward and backward passes of a simple neural network. Going beyond NumPy aloneThese are examples of how NumPy can be used with other Python packages. PandasWe can convert numpy matrices to Pandas dataframes. In the following example, this is useful because it allows us to label each row. You may have noticed this being done in our first unit on distributed representations.
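Before moving on to the Pandas example below, here is a minimal sketch that ties the forward-pass equations above to concrete array shapes. Everything in it is illustrative: the sizes, the `tanh` hidden activation, and the local `softmax` helper are assumptions for this sketch, not the class's `np_shallow_neural_network.py`.

```python
import numpy as np

def softmax(z):
    # exponentiate and normalize; subtracting the max keeps exp() numerically stable
    e = np.exp(z - z.max())
    return e / e.sum()

# made-up sizes: 4 input features, 3 hidden units, 2 output classes
x = np.random.normal(size=4)
W_xh, b_xh = np.random.normal(size=(4, 3)), np.zeros(3)
W_hy, b_hy = np.random.normal(size=(3, 2)), np.zeros(2)

h = np.tanh(x.dot(W_xh) + b_xh)   # h = f(x W_xh + b_xh)
y = softmax(h.dot(W_hy) + b_hy)   # y = softmax(h W_hy + b_hy)
y.sum()                           # the outputs form a probability distribution
```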
###Code
import pandas as pd
count_df = pd.DataFrame(
np.array([
[1,0,1,0,0,0],
[0,1,0,1,0,0],
[1,1,1,1,0,0],
[0,0,0,0,1,1],
[0,0,0,0,0,1]], dtype='float64'),
index=['gnarly', 'wicked', 'awesome', 'lame', 'terrible'])
count_df
###Output
_____no_output_____
###Markdown
Scikit-learnIn `sklearn`, NumPy matrices are the most common input and output and thus a key to how the library's numerous methods can work together. Many of the cs224u models built by Chris operate just like `sklearn` ones, such as the classifiers we used for our sentiment analysis unit.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
y = iris.target
print(type(X))
print("Dimensions of X:", X.shape)
print(type(y))
print("Dimensions of y:", y.shape)
# split data into train/test
X_iris_train, X_iris_test, y_iris_train, y_iris_test = train_test_split(
X, y, train_size=0.7, test_size=0.3)
print("X_iris_train:", type(X_iris_train))
print("y_iris_train:", type(y_iris_train))
print()
# start up model
maxent = LogisticRegression(
fit_intercept=True,
solver='liblinear',
multi_class='auto')
# train on train set
maxent.fit(X_iris_train, y_iris_train)
# predict on test set
iris_predictions = maxent.predict(X_iris_test)
fnames_iris = iris['feature_names']
tnames_iris = iris['target_names']
# how well did our model do?
print(classification_report(y_iris_test, iris_predictions, target_names=tnames_iris))
###Output
_____no_output_____
###Markdown
SciPySciPy contains what may seem like an endless treasure trove of operations for linear algebra, optimization, and more. It is built so that everything can work with NumPy arrays.
###Code
from scipy.spatial.distance import cosine
from scipy.stats import pearsonr
from scipy import linalg
# cosine distance
a = np.random.random(10)
b = np.random.random(10)
cosine(a, b)
# pearson correlation (coeff, p-value)
pearsonr(a, b)
# inverse of matrix
A = np.array([[1,3,5],[2,5,1],[2,3,8]])
linalg.inv(A)
###Output
_____no_output_____
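The cell above covers linear algebra and statistics; the optimization side of SciPy works the same way, taking and returning NumPy arrays. A minimal sketch (the quadratic objective here is just an illustration):

```python
import numpy as np
from scipy.optimize import minimize

# minimize a simple quadratic; the optimum should land near w = [3, 3, 3]
res = minimize(lambda w: np.sum((w - 3) ** 2), x0=np.zeros(3))
res.x
```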
###Markdown
To learn more about how NumPy can be combined with SciPy and Scikit-learn for machine learning, check out this [notebook tutorial](https://github.com/cgpotts/csli-summer/blob/master/advanced_python/intro_to_python_ml.ipynb) by Chris Potts and Will Monroe. (You may notice that over half of this current notebook is modified from theirs.) Their tutorial also has some interesting exercises in it! Matplotlib
###Code
import matplotlib.pyplot as plt
a = np.sort(np.random.random(30))
b = a**2
c = np.log(a)
plt.plot(a, b, label='y = x^2')
plt.plot(a, c, label='y = log(x)')
plt.legend()
plt.title("Some functions")
plt.show()
###Output
_____no_output_____ |
Strings - Exercise_Py3.ipynb | ###Markdown
Strings Assign the value of 100 to the variable "m".
###Code
m = 100
###Output
_____no_output_____
###Markdown
With the help of the variable "m", write one line of code where the output after execution would be *100 days*.*Hint:* *You could provide four answers to this question!*
###Code
str(m) + " days"
###Output
_____no_output_____
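The hint mentions four possible answers. Besides the one above, any of these one-liners (assuming `m` is still 100) also produces *100 days*:

```python
print(m, "days")
f"{m} days"
"{} days".format(m)
"%d days" % m
```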
###Markdown
Produce an output equal to *It's cool, isn't it?*
###Code
print("It's cool, isn't it?")
###Output
It's cool, isn't it?
###Markdown
Fix the string below.
###Code
print("Don't be shy")
###Output
Don't be shy
###Markdown
Produce an output equal to *Click "OK"*.
###Code
print('Click "OK"')
###Output
Click "OK"
###Markdown
Include a plus sign in your line of code to produce *'Big Houses'*.
###Code
'Big' + ' Houses'
###Output
_____no_output_____
###Markdown
Include a trailing comma in your line of code to produce *Big Houses*.
###Code
print('Big', 'Houses')
###Output
Big Houses
|
listing-count/convert-csv.ipynb | ###Markdown
Read data file
###Code
import time

import pandas as pd

start_time = time.time()
review = pd.read_csv("../reviews.zip")
end_time = time.time()
print(f"used {end_time - start_time} seconds")
review
###Output
_____no_output_____
###Markdown
Grouping
###Code
gr = review.groupby(['listing_id'])
len(gr.groups)
###Output
_____no_output_____
###Markdown
Counting
###Code
only_list = gr.size().reset_index(name='counts')
s = only_list.sort_values(['counts'], ascending=False)
s.to_csv("listing-count.csv")
s
###Output
_____no_output_____ |
module4-logistic-regression/Stephen_P_assignment_regression_classification_4.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 4*--- Logistic Regression Assignment 🌯You'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?> We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.- [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.)- [ ] Get your model's test accuracy. (One time, at the end.)- [ ] Commit your notebook to your fork of the GitHub repo.- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression. Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Load data downloaded from https://srcole.github.io/100burritos/
import pandas as pd
import numpy as np
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
df.head()
# Derive binary classification target:
# We define a 'Great' burrito as having an
# overall rating of 4 or higher, on a 5 point scale.
# Drop unrated burritos.
df = df.dropna(subset=['overall'])
df['Great'] = df['overall'] >= 4
# Clean/combine the Burrito categories
df['Burrito'] = df['Burrito'].str.lower()
california = df['Burrito'].str.contains('california')
asada = df['Burrito'].str.contains('asada')
surf = df['Burrito'].str.contains('surf')
carnitas = df['Burrito'].str.contains('carnitas')
df.loc[california, 'Burrito'] = 'California'
df.loc[asada, 'Burrito'] = 'Asada'
df.loc[surf, 'Burrito'] = 'Surf & Turf'
df.loc[carnitas, 'Burrito'] = 'Carnitas'
df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'
# Drop some high cardinality categoricals
df = df.drop(columns=['Notes', 'Location', 'Mass (g)', 'Density (g/mL)', 'Reviewer', 'Address', 'URL', 'Neighborhood'])
# Drop some columns to prevent "leakage"
df = df.drop(columns=['Rec', 'overall'])
df.head(5)
df.drop(columns=['Yelp', 'Google', 'Chips', 'Cost', 'Unreliable', 'NonSD', 'Fries'], inplace=True)
df.head()
# binary ingredient columns: 'X'/'x' marks presence; convert them to 1/0
ing_cols = ['Beef', 'Pico', 'Guac', 'Cheese', 'Sour cream', 'Pork', 'Chicken', 'Shrimp', 'Fish', 'Rice', 'Beans', 'Lettuce', 'Tomato', 'Bell peper', 'Carrots', 'Cabbage', 'Sauce', 'Salsa.1', 'Cilantro', 'Onion', 'Taquito', 'Pineapple', 'Ham', 'Chile relleno', 'Nopales', 'Lobster', 'Queso', 'Egg', 'Mushroom', 'Bacon', 'Sushi', 'Avocado', 'Corn', 'Zucchini']
for col in ing_cols:
    df.loc[df[col] == 'X', col] = 1
    df.loc[df[col] == 'x', col] = 1
    df[col] = pd.to_numeric(df[col])
    df[col] = df[col].fillna(0)
df['Great'] = df['Great'].astype(int)
df['Date'] = pd.to_datetime(df['Date'])
from datetime import datetime
date1 = datetime.strptime('12/31/2016', '%m/%d/%Y')
date2 = datetime.strptime('12/31/2017', '%m/%d/%Y')
train = df[df['Date'] <= date1]
val = df[(date1 < df['Date']) & (df['Date'] <= date2)]
test = df[df['Date'] > date2]
train.drop(columns=['Date'], inplace=True)
val.drop(columns=['Date'], inplace=True)
y_test = test['Great']  # keep the test labels so test accuracy can be computed at the end
test = test.drop(columns=['Great', 'Date'])
train.shape, val.shape, test.shape
from sklearn.metrics import accuracy_score
target = 'Great'
features = train.columns.drop([target])
y_train = train[target]
majority_class = y_train.mode()[0]
y_pred = [majority_class]*len(y_train)
accuracy_score(y_train, y_pred)
import category_encoders as ce
encoder = ce.OneHotEncoder()
X_train = train[features]
X_val = val[features]
X_test = test[features]
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
X_test_encoded = encoder.transform(X_test)
X_train_encoded.head()
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train_encoded)
X_val_imputed = imputer.transform(X_val_encoded)
X_test_imputed = imputer.transform(X_test_encoded)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_imputed)
X_val_scaled = scaler.transform(X_val_imputed)
X_test_scaled = scaler.transform(X_test_imputed)
imputed_df = pd.DataFrame(X_train_imputed)
imputed_df.head()
from sklearn.linear_model import LogisticRegression
y_val = val[target]
model = LogisticRegression(class_weight='balanced')
model.fit(X_train_scaled, y_train)
print(f'Model Accuracy Score: {model.score(X_val_scaled, y_val)}')
y_pred = model.predict(X_test_scaled)
print(f'Test Accuracy Score: {accuracy_score(y_test, y_pred)}')
###Output
_____no_output_____ |
notebooks/collab/U-Net-fastai.ipynb | ###Markdown
Connect to the drive to get the data
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Sometimes you need to hard-reset the machine using the command below.
###Code
!kill -9 -1
###Output
_____no_output_____
###Markdown
Check how much GPU RAM is available
###Code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
###Output
_____no_output_____
###Markdown
Install deps
###Code
!curl https://course-v3.fast.ai/setup/colab | bash
!pip install pydicom
###Output
_____no_output_____
###Markdown
Imports
###Code
import os
import re
import numpy as np
import pydicom
import matplotlib.pyplot as plt
from fastai import *
from fastai.vision import *
import cv2
from fastai.layers import FlattenedLoss
###Output
_____no_output_____
###Markdown
Copy data from the drive to the filesystem - this might make things faster. Note: from time to time you need to update the compressed file.
###Code
drive_data_path = './drive/My Drive/Data/'
data_path = './data'
!cp ./drive/My\ Drive/data.tar.gz .
!tar xzf data.tar.gz
scans = []
for root, dirs, files in os.walk(data_path):
if 'CT' in root:
# remove wrongly labeled data
if not 'P1B1' in root:
for _file in files:
scans.append(root + '/' + _file)
np.random.seed(42)
np.random.shuffle(scans)
###Output
_____no_output_____
###Markdown
Override fastai's image-loading methods
###Code
def open_dcm_image(fn, *args, **kwargs)->Image:
# window_min = -100
# window_max = 400
window_min = -100
window_max = 100
array = pydicom.dcmread(fn).pixel_array
array = np.clip(array, a_min=window_min, a_max=window_max)
array = (((array - array.min()) / (array.max() - array.min())) * (255 - 0) + 0).astype(np.uint8)
array = cv2.equalizeHist(array.astype(np.uint8))
array = np.repeat(array[:, :, None], 3, axis=2)
# we can store images in this format :top: to make stuff faster...
return Image(pil2tensor(array, np.float32).div_(255))
def open_dcm_mask(fn, *args, **kwargs)->Image:
x = pydicom.dcmread(fn).pixel_array
x = pil2tensor(x, np.float32)
return ImageSegment(x)
def annotate_metadata(fn, ax):
subdirs = fn.split('/')
patient_id = subdirs[-3]
slice_number = re.findall(r'\d+', subdirs[-1])[0]
ax.annotate(
'{} [{}]'.format(patient_id, slice_number),
xy=(.25, .25),
xycoords='data',
xytext=(30, 10),
fontsize=20,
textcoords='offset points',
)
# monkey patch
fastai.vision.image.open_image = open_dcm_image
fastai.vision.image.open_mask = open_dcm_mask
fastai.vision.data.open_image = open_dcm_image
fastai.vision.data.open_mask = open_dcm_mask
open_image = open_dcm_image
open_mask = open_dcm_mask
###Output
_____no_output_____
###Markdown
Look at the data
###Code
open_image(scans[1003])
get_y_fn = lambda path: str('.' / Path(path).parent / '../label' / Path(path).name)
open_mask(get_y_fn(scans[1003]))
codes = ['void', 'water']
src = (
SegmentationItemList.from_df(pd.DataFrame(scans, columns=['files']), '.')
.split_by_valid_func(lambda img_src: 'P7' in str(img_src) or 'P6' in str(img_src))
.label_from_func(get_y_fn, classes=codes)
)
src
img = open_image(scans[600]).data
src_size = np.array(img.shape[1:])
size = src_size // 4
size
bs = 80
data = (
# note wrap might deform images. For now I've set up 0, maybe we can use it.
src.transform(get_transforms(max_rotate=5., max_lighting=0, p_lighting=0, max_warp=0), size=size, tfm_y=True)
.databunch(bs=bs)
.normalize(imagenet_stats)
)
data.show_batch(2, figsize=(10,7))
data.show_batch(2, figsize=(10,7), ds_type=DatasetType.Valid)
###Output
_____no_output_____
###Markdown
Choose metrics to evaluate
###Code
from fastai.metrics import accuracy, dice
def acc(input, target):
target = target.squeeze(1)
return (input.argmax(dim=1)==target).float().mean()
metrics=[acc, dice]
###Output
_____no_output_____
###Markdown
Implement new loss functions
###Code
from torch.nn.modules.loss import _Loss
class DiceLoss(_Loss):
def __init__(self, **kwargs):
super(DiceLoss, self).__init__(**kwargs)
self.softmax = nn.Softmax(1)
def forward(self, input, target):
input = self.softmax(input)[:, 1]
target = target.float()
smooth = 1.
intersection = (input * target).sum()
return 1 - ((2. * intersection + smooth) /
(input.sum() + target.sum() + smooth))
class GeneralizedDiceLoss(_Loss):
# reference: https://niftynet.readthedocs.io/en/dev/_modules/niftynet/layer/loss_segmentation.html#generalised_dice_loss
def __init__(self, **kwargs):
super(GeneralizedDiceLoss, self).__init__(**kwargs)
self.softmax = nn.Softmax(1)
def forward(self, input, target):
prediction = self.softmax(input)
one_hot = (
torch.sparse.torch.eye(2).cuda()
.index_select(0, target.long())
)
ref_vol = torch.sum(one_hot, 0)
seg_vol = torch.sum(prediction, 0)
intersect = torch.sum(one_hot * prediction, 0)
weights = torch.reciprocal(ref_vol ** 2)
weights[weights == float("Inf")] = 0
generalised_dice_numerator = 2 * torch.sum(weights * intersect)
generalised_dice_denominator = torch.sum(
weights * torch.max(seg_vol + ref_vol, torch.ones_like(weights))
)
generalised_dice_score = \
generalised_dice_numerator / generalised_dice_denominator
generalised_dice_score[torch.isnan(generalised_dice_score)] = 1.
return 1 - generalised_dice_score
dice_loss = FlattenedLoss(DiceLoss, axis=1)
generalized_dice_loss = FlattenedLoss(GeneralizedDiceLoss, axis=1)
dice_loss(torch.Tensor([[10, 1], [10, 0]]), torch.Tensor([[1], [1]]))
generalized_dice_loss(torch.Tensor([[10, 1], [10, 0]]).cuda(), torch.Tensor([[1], [1]]).cuda())
###Output
_____no_output_____
###Markdown
Train model
###Code
learn = unet_learner(
data, models.resnet34, metrics=metrics,
self_attention=False,
loss_func=generalized_dice_loss,
)
lr_find(learn)
learn.recorder.plot()
lr=3e-5
learn.fit_one_cycle(10, slice(lr), pct_start=0.9)
learn.save('3_1')
learn.load('3_1');
!cp ./models/3_1.pth ./drive/My\ Drive/
learn.show_results(rows=20)
learn.unfreeze()
lr_find(learn)
learn.recorder.plot()
lrs = slice(1e-6, 8e-5)
learn.fit_one_cycle(12, lrs, pct_start=0.8)
learn.recorder.plot_losses()
learn.recorder.plot_lr()
learn.save('3_2');
!mkdir -p ./drive/My\ Drive/Code/
!cp ./models/3_2.pth ./drive/My\ Drive/
learn = learn.load('3_2')
learn.show_results(rows=24)
###Output
_____no_output_____
###Markdown
Go big - full size of an image
###Code
!mkdir -p models
!cp ./drive/My\ Drive/Code/Mateusz/stage-1.pth ./models/stage-1.pth
size = src_size
bs = 5
data = (
src.transform(get_transforms(max_rotate=5., max_lighting=0, p_lighting=0), size=size, tfm_y=True)
.databunch(bs=bs)
.normalize(imagenet_stats)
)
learn = unet_learner(
data, models.resnet34, metrics=metrics, self_attention=True,
)
learn.load('stage-1');
lr_find(learn)
learn.recorder.plot()
lr=1e-5
learn.fit_one_cycle(3, slice(lr))
learn.save('stage-1-big')
learn.show_results()
!cp ./models/stage-1-big.pth ./drive/My\ Drive/
learn.load('stage-1-big');
learn.unfreeze()
lrs = slice(1e-6,1e-4)
learn.fit_one_cycle(10, lrs, wd=1e-3)
learn.save('stage-2-big')
learn.load('stage-2-big')
learn.show_results()
!cp ./models/stage-2-big.pth ./drive/My\ Drive/
###Output
_____no_output_____ |
BoA - processing.ipynb | ###Markdown
1. Load transaction data
###Code
# Load all data and concat
csvs = glob.glob('/gh/data/personal-data-requests/BoA/*.csv')
dfs = []
for csv in csvs:
df_temp = pd.read_csv(csv)
dfs.append(df_temp)
df_web = pd.concat(dfs).drop_duplicates().dropna(how='all', axis=0)
# Process cols
cols_keep = ['date', 'Amount', 'Original Description', 'Category', 'Account Name', 'Simple Description']
df_web['date'] = pd.to_datetime(df_web['Date'])
df_web['Amount'] = np.array([x.replace(',','') for x in df_web['Amount'].astype(str)], dtype=float)
df_web = df_web[cols_keep]
df_web.head()
###Output
/Users/scott/anaconda/lib/python3.6/site-packages/ipykernel_launcher.py:7: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.
To accept the future behavior, pass 'sort=False'.
To retain the current behavior and silence the warning, pass 'sort=True'.
import sys
###Markdown
1b. Isolate deposits / transfers
###Code
df_money = df_web[df_web['Amount'].astype(float) > 0]
df_money.loc[[1472]]
###Output
_____no_output_____
###Markdown
2. Import PDF data from tables* Tabula - does not really work: gets some tables but many are missing* Camelot - does not read any tables* Used the Excalibur GUI
###Code
pdf_csvs = glob.glob('/gh/data/personal-data-requests/BoA/excalibur/*/*.csv')
dfs = []
for csv in pdf_csvs:
# Read csv
df_temp = pd.read_csv(csv)
category = None
# If the first row is the category
if 'Unnamed: 2' in df_temp.columns:
category = df_temp.columns[0]
df_temp = pd.read_csv(csv, skiprows=1)
# If there are no columns
if 'Location' not in df_temp.columns:
df_temp = pd.read_csv(csv, names=['Date\nDescription', 'Location', 'Amount'])
# If Date and Description are stuck together
if 'Date\nDescription' in df_temp.columns:
df_temp['date'] = pd.to_datetime([x[:8] for x in df_temp['Date\nDescription']])
df_temp['description'] = [x[8:] for x in df_temp['Date\nDescription']]
df_temp = df_temp.drop('Date\nDescription', axis=1)
# Remove 'deduct' column
if 'Deduct' in df_temp.columns:
df_temp.drop('Deduct', axis=1, inplace=True)
# Add df
df_temp['pdf_category'] = category
dfs.append(df_temp)
# Concat and process data
df_pdf = pd.concat(dfs).reset_index(drop=True)
df_pdf = df_pdf.rename(columns={'Location': 'city', 'Amount': 'amount'})
# Remove 'CR' (negative) credits for simplicity. it's not too many columns
df_CR = df_pdf[df_pdf['amount'].astype(str).str.contains('CR')]
df_pdf = df_pdf[~df_pdf['amount'].astype(str).str.contains('CR')]
df_pdf['amount'] = np.array([x.replace(',','') for x in df_pdf['amount'].astype(str)], dtype=float)
df_pdf.head()
###Output
_____no_output_____
###Markdown
3. Merge pdf and web data
###Code
df_web_merge = df_web[df_web['Amount'] < 0].rename(columns={'Amount': 'amount'})
df_web_merge['amount'] = -df_web_merge['amount']
df_web_merge = df_web_merge.drop_duplicates(subset=['date', 'amount'])
df_both = df_pdf.merge(df_web_merge, on=['date', 'amount'], how='left')
df_both.to_csv('/gh/data/personal-data-requests/BoA/web_pdf_merge.csv', index_label=None)
df_both.head()
###Output
_____no_output_____ |
playground/bbfit_19dge.ipynb | ###Markdown
Prepare the data
###Code
result = get_at2019dge(colorplt=False)
lc = result['tb']
lc = lc[lc.instrument!='P60+SEDM']
lcdet = lc.sort_values(by = ['mjd'])
dates = np.unique(lcdet["date"].values)
flags = np.ones(len(dates), dtype = int)
indlc = np.ones(len(lcdet), dtype = int)
for i in range(len(dates)):
mydate = dates[i]
ix = lcdet['date'].values == mydate
lckeck = lcdet[ix]
allwv = np.unique(lckeck['wave'].values)
# we need at least three bands to fit a blackbody
for j in range(15):
if len(allwv)==j:
flags[i] = j
indlc[ix] = j
lcdet1 = lcdet[indlc>=2]
dates1 = dates[flags>=2]
#lcdet2 = lcdet[indlc==2]
#dates2 = dates[flags==2]
dates1.shape
np.save('./helper/19dge_dates.npy', dates1)
#np.save('./helper/dates2.npy', dates2)
lcdet1.to_csv("./helper/19dge_lcdet.csv")
#lcdet2.to_csv("./helper/lcdet_2bands.csv")
###Output
_____no_output_____
###Markdown
Blackbody fit (with MCMC)run helper.mcmcfit, helper.mcmcfit_lgprior, helper.mcmcfit_Jeffprior on cluster Construct SEDs
###Code
from helper.mcmcfit import planck_lambda, mylinear_fit
dates = np.load('./helper/19dge_dates.npy', allow_pickle=True)
lcdet = pd.read_csv("./helper/19dge_lcdet.csv")
Tbbs = np.zeros(len(dates))
Rbbs = np.zeros(len(dates))
Lbbs = np.zeros(len(dates))
lgLbbs = np.zeros(len(dates))
Tbbs_unc = np.zeros(len(dates))
Rbbs_unc = np.zeros(len(dates))
Lbbs_unc = np.zeros(len(dates))
lgLbbs_unc = np.zeros(len(dates))
Tbbs_uncl = np.zeros(len(dates))
Rbbs_uncl = np.zeros(len(dates))
Lbbs_uncl = np.zeros(len(dates))
lgLbbs_uncl = np.zeros(len(dates))
Tbbs_uncr = np.zeros(len(dates))
Rbbs_uncr = np.zeros(len(dates))
Lbbs_uncr = np.zeros(len(dates))
lgLbbs_uncr = np.zeros(len(dates))
nramdom = 30
T_ramdoms = np.zeros((len(dates), nramdom))
R_ramdoms = np.zeros((len(dates), nramdom))
for i in range(len(dates)):
mydate = dates[i]
s = "_"
filename = "./helper/19dge_mcmcresult/sampler_log_"+s.join(mydate.split(' '))+".h5"
reader = emcee.backends.HDFBackend(filename)
chains = reader.get_chain()
nsteps = chains.shape[0]
samples = reader.get_chain(flat=True)
print (i, mydate)
tau = reader.get_autocorr_time(tol=0)
nburn = int(5*np.max(tau))
if nburn > nsteps-100:
nburn = nsteps-100
print (" nburn = %d"%nburn)
samples = samples[nburn:, :]
Ts = 10**samples[:,0]
Rs = 10**samples[:,1]
Lbbs_run = const.sigma_sb.cgs.value * Ts **4 * 4 * np.pi * (Rs * const.R_sun.cgs.value)**2
Lbb_sigmas = np.percentile(Lbbs_run, (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
Tbb_sigmas = np.percentile(Ts, (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
Rbb_sigmas = np.percentile(Rs, (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
ix_random = np.array([random.randint(0, len(samples)-1) for x in range(nramdom)])
T_ramdoms[i] = Ts[ix_random]
R_ramdoms[i] = Rs[ix_random]
pars = np.vstack([Lbb_sigmas, Tbb_sigmas, Rbb_sigmas])
Lbbs[i] = pars[0][3]
Lbbs_unc[i] = (pars[0][4]-pars[0][2])/2
Lbbs_uncr[i] = pars[0][4]-pars[0][3]
Lbbs_uncl[i] = pars[0][3]-pars[0][2]
lgLbbs[i] = np.log10(pars[0][3])
lgLbbs_unc[i] = (np.log10(pars[0][4])-np.log10(pars[0][2]))/2
lgLbbs_uncr[i] = np.log10(pars[0][4])-np.log10(pars[0][3])
lgLbbs_uncl[i] = np.log10(pars[0][3])-np.log10(pars[0][2])
Tbbs[i] = pars[1][3]
Tbbs_unc[i] = (pars[1][4]-pars[1][2])/2
Tbbs_uncr[i] = pars[1][4]-pars[1][3]
Tbbs_uncl[i] = pars[1][3]-pars[1][2]
Rbbs[i] = pars[2][3]
Rbbs_unc[i] = (pars[2][4]-pars[2][2])/2
Rbbs_uncr[i] = pars[2][4]-pars[2][3]
Rbbs_uncl[i] = pars[2][3]-pars[2][2]
t0jd = result['t_max']
jds = np.zeros(len(dates))
nrow = 5
ncol = 5
Rbbs
fig, axes = plt.subplots(nrow, ncol, figsize=(13, 12), sharey=True, sharex=True)
lamb = np.logspace(3, 4.5)
for ind in range(len(dates)):
mydate = dates[ind]
i = np.where(dates == mydate)[0][0]
ix = lcdet['date'].values == mydate
lckeck = lcdet[ix]
nband = len(np.unique(lckeck['wave'].values))
ii = i // ncol
jj = i % ncol
t = np.mean(lckeck['mjd'].values)
jds[i] = t
x = lckeck['wave'].values
y = lckeck['Llambda'].values
yerr = lckeck['Llambda_unc'].values
lgLlambdas = np.log10(y)
lgLlambdas_unc = 1 / np.log(10) * yerr / y
flux = planck_lambda(Tbbs[i], Rbbs[i], lamb)
deltat = t - t0jd
ax = axes[ii,jj]
ax.errorbar(x, lgLlambdas, lgLlambdas_unc, fmt='.k', capsize=2)
ax.plot(lamb, np.log10(flux), color='b', linewidth = 2, zorder = 2)
ax.set_xlim(1000, 20000)
ax.set_ylim(36.2, 39.5)
ax.semilogx()
for j in range(nramdom):
fluxnew = planck_lambda(T_ramdoms[i][j], R_ramdoms[i][j], lamb)
ax.plot(lamb, np.log10(fluxnew), color='c', alpha = 0.5, linestyle = "--", linewidth = 1.5, zorder = 1)
if ii!=nrow-1:
ax.set_xticks([1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000])
ax.set_xticklabels(['', '', '', '', '', '', '', '', '', ''])
else:
ax.set_xticks([1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000])
ax.set_xticklabels(['$10^3$', '', '', '', '', '', '', '', '', '$10^4$'])
if deltat>0:
ax.text(4000, 38.7, '$\Delta t=$%.2f'%deltat, color='k')
else:
ax.text(4000, 38.7, '$\Delta t=-$%.2f'%abs(deltat), color='k')
ax.yaxis.set_minor_locator(AutoMinorLocator())
ax.tick_params(direction='in', axis='both', which = 'both', top=True, right=True)
ax.tick_params(which='major', length=4)
ax.tick_params(which='minor', length=2)
plt.tight_layout(rect = (0.03, 0.02, 1, 1), # left, bottom, right, top
h_pad=-0.09, w_pad=-1.25)
axes[4,2].set_xlabel("Rest Wavelength"+' ('+r'$\rm \AA$'+')')
axes[2,0].set_ylabel(r'$L_{\lambda}$ log'+r'$_{10}\rm(erg\,s^{-1}\,\AA^{-1})$')
axes[4,4].set_axis_off()
axes[4,3].set_axis_off()
plt.savefig("../paper/figures/seds_log.pdf")
# plt.close()
trf = jds-t0jd
Lbbs.shape
tb = Table(data = [trf, Lbbs, Lbbs_unc, Lbbs_uncr, Lbbs_uncl,
lgLbbs, lgLbbs_unc, lgLbbs_uncr, lgLbbs_uncl,
Tbbs, Tbbs_unc, Tbbs_uncr, Tbbs_uncl,
Rbbs, Rbbs_unc, Rbbs_uncr, Rbbs_uncl],
names = ['phase', 'Lbb', 'Lbb_unc', 'Lbb_uncr', 'Lbb_uncl',
'lgLbb', 'lgLbb_unc', 'lgLbb_uncr', 'lgLbb_uncl',
'Tbb', 'Tbb_unc', 'Tbb_uncr', 'Tbb_uncl',
'Rbb', 'Rbb_unc', 'Rbb_uncr', 'Rbb_uncl'])
tb.write('../data/otherSN/Yao2020/bbdata.csv', overwrite=True)
###Output
_____no_output_____ |
notebooks/plot-ensemble-values.ipynb | ###Markdown
Make Diagnostic Plots of Data in DART-CAM6 Zarr Stores
###Code
import xarray as xr
import numpy as np
import dask
import intake
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from pathlib import Path
import os
###Output
_____no_output_____
###Markdown
Create and Connect to a Dask Distributed ClusterRun the cell below if the notebook is running on a supercomputer with a PBS Scheduler.If the notebook is running on a different parallel computing environment, you will need to replace the usage of `PBSCluster` with a similar object from `dask_jobqueue` or `dask_gateway`.
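For example, on a single machine without a batch scheduler, a minimal stand-in could look like the sketch below; the worker count and memory limit are placeholder values, not tuned settings.

```python
from dask.distributed import LocalCluster, Client

# spin up a small local cluster instead of PBSCluster
cluster = LocalCluster(n_workers=4, threads_per_worker=1, memory_limit='4GB')
client = Client(cluster)
cluster
```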
###Code
from dask_jobqueue import PBSCluster
num_jobs = 20
walltime = '0:20:00'
memory='10GB'
cluster = PBSCluster(cores=1, processes=1, walltime=walltime, memory=memory, queue='casper',
resource_spec='select=1:ncpus=1:mem=10GB',)
cluster.scale(jobs=num_jobs)
from distributed import Client
client = Client(cluster)
cluster
###Output
_____no_output_____
###Markdown
Find and Obtain Data Using an Intake Catalog Open catalog and produce a content summary
###Code
# Define the catalog description file location
catalog_url = "https://ncar-dart-cam6.s3-us-west-2.amazonaws.com/catalogs/aws-dart-cam6.json"
# Open the catalog
col = intake.open_esm_datastore(catalog_url)
col
# Produce a catalog content summary.
import pprint
uniques = col.unique(
columns=["variable"]
)
pprint.pprint(uniques, compact=True, indent=4)
###Output
_____no_output_____
###Markdown
Load data into xarray using the catalog
###Code
data_var = 'PS'
col_subset = col.search(variable=data_var)
col_subset
###Output
_____no_output_____
###Markdown
Show the chosen Zarr store attributes
###Code
col_subset.df
###Output
_____no_output_____
###Markdown
Convert catalog subset to a dictionary of xarray datasets, and use the first one.
###Code
dsets = col_subset.to_dataset_dict(
zarr_kwargs={"consolidated": True}, storage_options={"anon": True}
)
print(f"\nDataset dictionary keys:\n {dsets.keys()}")
# Load the first dataset and display a summary.
dataset_key = list(dsets.keys())[0]
ds = dsets[dataset_key]
ds
###Output
_____no_output_____
###Markdown
Define Plot Functions Get consistently shaped data slices for both 2D and 3D variables.
###Code
def getSlice(ds, data_var):
'''If the data has vertical levels, choose the level closest
to the Earth's surface for 2-D diagnostic plots.
'''
data_slice = ds[data_var]
if 'lev' in data_slice.dims:
lastLevel = ds.lev.values[-1]
data_slice = data_slice.sel(lev = lastLevel)
data_slice = data_slice.squeeze()
return data_slice
###Output
_____no_output_____
###Markdown
Get lat/lon dimension names
###Code
def getSpatialDimensionNames(data_slice):
'''Get the spatial dimension names for this data slice.
'''
# Determine lat/lon conventions for this slice.
lat_dim = 'lat' if 'lat' in data_slice.dims else 'slat'
lon_dim = 'lon' if 'lon' in data_slice.dims else 'slon'
return [lat_dim, lon_dim]
###Output
_____no_output_____
###Markdown
Produce Time Series Spaghetti Plot of Ensemble Members
###Code
def plot_timeseries(ds, data_var, store_name):
'''Create a spaghetti plot for a given variable.
'''
figWidth = 25
figHeight = 20
linewidth = 0.5
numPlotsPerPage = 3
numPlotCols = 1
# Plot the aggregate statistics across time.
fig, axs = plt.subplots(3, 1, figsize=(figWidth, figHeight))
data_slice = getSlice(ds, data_var)
spatial_dims = getSpatialDimensionNames(data_slice)
unit_string = ds[data_var].attrs['units']
# Persist the slice so it's read from disk only once.
# This is faster when data values are reused many times.
data_slice = data_slice.persist()
max_vals = data_slice.max(dim = spatial_dims).transpose()
mean_vals = data_slice.mean(dim = spatial_dims).transpose()
min_vals = data_slice.min(dim = spatial_dims).transpose()
rangeMaxs = max_vals.max(dim = 'member_id')
rangeMins = max_vals.min(dim = 'member_id')
axs[0].set_facecolor('lightgrey')
axs[0].fill_between(ds.time, rangeMins, rangeMaxs, linewidth=linewidth, color='white')
axs[0].plot(ds.time, max_vals, linewidth=linewidth, color='red', alpha=0.1)
axs[0].set_title('Ensemble Member Maxima Over Time', fontsize=20)
axs[0].set_ylabel(unit_string)
rangeMaxs = mean_vals.max(dim = 'member_id')
rangeMins = mean_vals.min(dim = 'member_id')
axs[1].set_facecolor('lightgrey')
axs[1].fill_between(ds.time, rangeMins, rangeMaxs, linewidth=linewidth, color='white')
axs[1].plot(ds.time, mean_vals, linewidth=linewidth, color='red', alpha=0.1)
axs[1].set_title('Ensemble Member Means Over Time', fontsize=20)
axs[1].set_ylabel(unit_string)
rangeMaxs = min_vals.max(dim = 'member_id')
rangeMins = min_vals.min(dim = 'member_id')
axs[2].set_facecolor('lightgrey')
axs[2].fill_between(ds.time, rangeMins, rangeMaxs, linewidth=linewidth, color='white')
axs[2].plot(ds.time, min_vals, linewidth=linewidth, color='red', alpha=0.1)
axs[2].set_title('Ensemble Member Minima Over Time', fontsize=20)
axs[2].set_ylabel(unit_string)
plt.suptitle(store_name, fontsize=25)
return fig
###Output
_____no_output_____
###Markdown
Actually Create Spaghetti Plot Showing All Ensemble Members
###Code
%%time
store_name = f'{data_var}.zarr'
fig = plot_timeseries(ds, data_var, store_name)
###Output
_____no_output_____
###Markdown
Save/Download the figureTo download the figure plot file:* Run the following command.* Find the file using the Jupyter file browser in the left sidebar.* Right-click the file name, and select "Download".
###Code
fig.savefig(f'{data_var}.zarr.pdf', facecolor='white', dpi=200)
###Output
_____no_output_____
###Markdown
Release the Dask workers.
###Code
cluster.close()
###Output
_____no_output_____ |
application_model_zoo/Example - Car and Pool Detection.ipynb | ###Markdown
###Code
###Output
_____no_output_____
###Markdown
**About the network**1. Paper on CornerNet: https://arxiv.org/abs/1808.012442. Paper on CornerNet-Lite: https://arxiv.org/abs/1904.089003. Blog 1 on CornerNet: https://joshua19881228.github.io/2019-01-20-CornerNet/4. Blog 2 on CornerNet: https://zhangtemplar.github.io/anchor-free-detection/5. Blog 3 on CornerNet: https://opencv.org/latest-trends-of-object-detection-from-cornernet-to-centernet-explained-part-i-cornernet/6. Blog 4 on CornerNet: https://towardsdatascience.com/centernet-keypoint-triplets-for-object-detection-review-a314a8e4d4b07. Blog 5 on CornerNet: https://medium.com/@andersasac/the-end-of-anchors-improving-object-detection-models-and-annotations-73828c7b39f6
###Code
###Output
_____no_output_____
###Markdown
Table of contents 1. Installation Instructions 2. Use trained model to detect car and pool in images 3. How to train a car and pool detector on Kaggle dataset
###Code
###Output
_____no_output_____
###Markdown
Installation
###Code
!pip install torch==1.4.0 torchvision==0.5.0
! git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
# For colab use the command below
! cd Monk_Object_Detection/6_cornernet_lite/installation && chmod +x install_colab.sh && ./install_colab.sh
# Restart colab runtime for installations to get initiated
# For Local systems and cloud select the right CUDA version
#! cd Monk_Object_Detection/6_cornernet_lite/installation && chmod +x install.sh && ./install.sh
###Output
_____no_output_____
###Markdown
Use already trained model for demo
###Code
import os
import sys
sys.path.append("Monk_Object_Detection/6_cornernet_lite/lib/")
from infer_detector import Infer
gtf = Infer();
class_list =["Car","Pool"]
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=12e9ynkwIqRArAHnlsoCwmQ3-TNm9FA9F' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=12e9ynkwIqRArAHnlsoCwmQ3-TNm9FA9F" -O obj_satellite_car_pool_trained.zip && rm -rf /tmp/cookies.txt
! unzip -qq obj_satellite_car_pool_trained.zip
gtf.Model(class_list,
base="CornerNet_Saccade",
model_path="/content/obj_satellite_car_pool_trained/CornerNet_Saccade_final-1000.pkl")
boxes = gtf.Predict("/content/obj_satellite_car_pool_trained/image1.jpg",
vis_thresh=0.4, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
boxes = gtf.Predict("/content/obj_satellite_car_pool_trained/image2.jpg",
vis_thresh=0.4, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
boxes = gtf.Predict("/content/obj_satellite_car_pool_trained/image3.jpg",
vis_thresh=0.4, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
###Output
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2506: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
###Markdown
Train **Data-set credit-**https://www.kaggle.com/kbhartiya83/swimming-pool-and-car-detection
###Code
! pip install -q kaggle
from google.colab import files
files.upload()
!ls
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
!kaggle datasets download -d kbhartiya83/swimming-pool-and-car-detection -p /content
!unzip /content/swimming-pool-and-car-detection.zip
###Output
_____no_output_____
###Markdown
VOC Format Dataset Directory Structure (Non-Standard) card_dataset (root_dir) | |-----------Images+Annotations (img_dir + anno_dir) | | | |------------------img1.jpg | |------------------img1.xml | |------------------img2.jpg | |------------------img2.xml | |------------------.........(and so on)
###Code
###Output
_____no_output_____
###Markdown
Desired annotation - COCO Format Dataset Directory Structure ./ (root_dir) | |------card_dataset (coco_dir) | | | |---Images (img_dir) | |----| | |-------------------img1.jpg | |-------------------img2.jpg | |-------------------.........(and so on) | | | |---annotations (anno_dir) | |----| | |--------------------instances_Images.json | |--------------------classes.txt - instances_Train.json -> In proper COCO format - classes.txt -> A list of classes in alphabetical order
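For orientation, the conversion script further below writes a dictionary with exactly these top-level keys; a minimal illustrative example (the file name and numbers are made up) looks like:

```python
coco_example = {
    "type": "instances",
    "images": [
        {"file_name": "000000001.jpg", "height": 224, "width": 224, "id": 0}
    ],
    "annotations": [
        {"id": 0, "image_id": 0, "bbox": [10, 20, 50, 60],   # [x, y, width, height]
         "area": 3000, "iscrowd": 0, "ignore": 0, "segmentation": [], "category_id": 0}
    ],
    "categories": [
        {"supercategory": "master", "id": 0, "name": "Car"}
    ],
}
```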
###Code
###Output
_____no_output_____
###Markdown
Annotation Conversion - Step 1 - VOC to Monk format
###Code
import os
import sys
import numpy as np
import pandas as pd
import xmltodict
import json
from tqdm.notebook import tqdm
from pycocotools.coco import COCO
root_dir = "/content/training_data/training_data/";
img_dir = "images/";
anno_dir = "labels/";
files = os.listdir(root_dir + anno_dir);
combined = [];
for i in tqdm(range(len(files))):
annoFile = root_dir + "/" + anno_dir + "/" + files[i];
f = open(annoFile, 'r');
my_xml = f.read();
anno = dict(dict(xmltodict.parse(my_xml))["annotation"])
fname = anno["filename"];
label_str = "";
if(type(anno["object"]) == list):
for j in range(len(anno["object"])):
obj = dict(anno["object"][j]);
label = anno["object"][j]["name"];
bbox = dict(anno["object"][j]["bndbox"])
x1 = bbox["xmin"];
y1 = bbox["ymin"];
x2 = bbox["xmax"];
y2 = bbox["ymax"];
if(j == len(anno["object"])-1):
label_str += x1 + " " + y1 + " " + x2 + " " + y2 + " " + label;
else:
label_str += x1 + " " + y1 + " " + x2 + " " + y2 + " " + label + " ";
else:
obj = dict(anno["object"]);
label = anno["object"]["name"];
bbox = dict(anno["object"]["bndbox"])
x1 = bbox["xmin"];
y1 = bbox["ymin"];
x2 = bbox["xmax"];
y2 = bbox["ymax"];
label_str += x1 + " " + y1 + " " + x2 + " " + y2 + " " + label;
combined.append([fname, label_str])
df = pd.DataFrame(combined, columns = ['ID', 'Label']);
df.to_csv(root_dir + "/train_labels.csv", index=False);
###Output
_____no_output_____
###Markdown
Annotation Conversion - Step 2 - Monk format to COCO
###Code
import os
import numpy as np
import cv2
import dicttoxml
import xml.etree.ElementTree as ET
from xml.dom.minidom import parseString
from tqdm import tqdm
import shutil
import json
import pandas as pd
root = "/content/training_data/training_data/";
img_dir = "images/";
anno_file = "train_labels.csv";
dataset_path = root;
images_folder = root + "/" + img_dir;
annotations_path = root + "/annotations/";
if not os.path.isdir(annotations_path):
os.mkdir(annotations_path)
input_images_folder = images_folder;
input_annotations_path = root + "/" + anno_file;
output_dataset_path = root;
output_image_folder = input_images_folder;
output_annotation_folder = annotations_path;
tmp = img_dir.replace("/", "");
output_annotation_file = output_annotation_folder + "/instances_" + tmp + ".json";
output_classes_file = output_annotation_folder + "/classes.txt";
if not os.path.isdir(output_annotation_folder):
os.mkdir(output_annotation_folder);
df = pd.read_csv(input_annotations_path);
columns = df.columns
delimiter = " ";
list_dict = [];
anno = [];
for i in range(len(df)):
img_name = df[columns[0]][i];
labels = df[columns[1]][i];
tmp = labels.split(delimiter);
for j in range(len(tmp)//5):
label = tmp[j*5+4];
if(label not in anno):
anno.append(label);
anno = sorted(anno)
for i in tqdm(range(len(anno))):
tmp = {};
tmp["supercategory"] = "master";
tmp["id"] = i;
tmp["name"] = anno[i];
list_dict.append(tmp);
anno_f = open(output_classes_file, 'w');
for i in range(len(anno)):
anno_f.write(anno[i] + "\n");
anno_f.close();
coco_data = {};
coco_data["type"] = "instances";
coco_data["images"] = [];
coco_data["annotations"] = [];
coco_data["categories"] = list_dict;
image_id = 0;
annotation_id = 0;
for i in tqdm(range(len(df))):
img_name = df[columns[0]][i];
labels = df[columns[1]][i];
tmp = labels.split(delimiter);
image_in_path = input_images_folder + "/" + img_name;
print(image_in_path)
img = cv2.imread(image_in_path, 1);
h, w, c = img.shape;
images_tmp = {};
images_tmp["file_name"] = img_name;
images_tmp["height"] = h;
images_tmp["width"] = w;
images_tmp["id"] = image_id;
coco_data["images"].append(images_tmp);
for j in range(len(tmp)//5):
x1 = int(round(float(tmp[j*5+0])));
y1 = int(round(float(tmp[j*5+1])));
x2 = int(round(float(tmp[j*5+2])));
y2 = int(round(float(tmp[j*5+3])));
label = tmp[j*5+4];
annotations_tmp = {};
annotations_tmp["id"] = annotation_id;
annotation_id += 1;
annotations_tmp["image_id"] = image_id;
annotations_tmp["segmentation"] = [];
annotations_tmp["ignore"] = 0;
annotations_tmp["area"] = (x2-x1)*(y2-y1);
annotations_tmp["iscrowd"] = 0;
annotations_tmp["bbox"] = [x1, y1, x2-x1, y2-y1];
annotations_tmp["category_id"] = anno.index(label);
coco_data["annotations"].append(annotations_tmp)
image_id += 1;
outfile = open(output_annotation_file, 'w');
json_str = json.dumps(coco_data, indent=4);
outfile.write(json_str);
outfile.close();
###Output
_____no_output_____
###Markdown
Training
###Code
import os
import sys
sys.path.append("Monk_Object_Detection/6_cornernet_lite/lib/")
from train_detector import Detector
gtf = Detector();
root_dir = "/content/training_data";
coco_dir = "training_data"
img_dir = "/"
set_dir = "images"
gtf.Train_Dataset(root_dir, coco_dir, img_dir, set_dir, batch_size=4, use_gpu=True, num_workers=4)
gtf.Model(model_name="CornerNet_Saccade")
gtf.Hyper_Params(lr=0.00025, total_iterations=10000) ###0.00025 1000
gtf.Setup();
gtf.Train();
###Output
_____no_output_____
###Markdown
Inference
###Code
import os
import sys
sys.path.append("/content/Monk_Object_Detection/6_cornernet_lite/lib/")
#sys.path.append("../../6_cornernet_lite/lib/")
from infer_detector import Infer
gtf = Infer();
class_list = ["Car","Pool"]
gtf.Model(class_list,
base="CornerNet_Saccade",
model_path="/content/cache/nnet/CornerNet_Saccade/CornerNet_Saccade_intermediate.pkl")
boxes = gtf.Predict("/content/training_data/training_data/images/000000045.jpg", vis_thresh=0.3, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
boxes = gtf.Predict("/content/test_data_images/test_data_images/images/000000032.jpg", vis_thresh=0.3, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
boxes = gtf.Predict("/content/test_data_images/test_data_images/images/000000018.jpg", vis_thresh=0.3, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
###Output
_____no_output_____
###Markdown
**About the network**1. Paper on CornerNet: https://arxiv.org/abs/1808.012442. Paper on CornerNet-Lite: https://arxiv.org/abs/1904.089003. Blog 1 on CornerNet: https://joshua19881228.github.io/2019-01-20-CornerNet/4. Blog 2 on CornerNet: https://zhangtemplar.github.io/anchor-free-detection/5. Blog 3 on CornerNet: https://opencv.org/latest-trends-of-object-detection-from-cornernet-to-centernet-explained-part-i-cornernet/6. Blog 4 on CornerNet: https://towardsdatascience.com/centernet-keypoint-triplets-for-object-detection-review-a314a8e4d4b07. Blog 5 on CornerNet: https://medium.com/@andersasac/the-end-of-anchors-improving-object-detection-models-and-annotations-73828c7b39f6 Table of contents 1. Installation Instructions 2. Use trained model to detect car and pool in images 3. How to train a car and pool detector on Kaggle dataset Installation
###Code
!pip install torch==1.4.0 torchvision==0.5.0
! git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
# For colab use the command below
! cd Monk_Object_Detection/6_cornernet_lite/installation && chmod +x install_colab.sh && ./install_colab.sh
# Restart colab runtime for installations to get initiated
# For Local systems and cloud select the right CUDA version
#! cd Monk_Object_Detection/6_cornernet_lite/installation && chmod +x install.sh && ./install.sh
###Output
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (0.22.2.post1)
Requirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn) (1.18.5)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn) (1.4.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn) (0.16.0)
Requirement already satisfied: scikit-image in /usr/local/lib/python3.6/dist-packages (0.16.2)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (2.4)
Requirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (1.4.1)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (2.4.1)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (1.1.1)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (3.2.2)
Requirement already satisfied: pillow>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (7.0.0)
Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.0->scikit-image) (4.4.2)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from scipy>=0.19.0->scikit-image) (1.18.5)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (2.8.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (1.2.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (2.4.7)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib!=3.0.0,>=2.0.0->scikit-image) (1.15.0)
Requirement already satisfied: pycocotools from git+https://github.com/abhi-kumar/cocoapi.git#egg=pycocotools&subdirectory=PythonAPI in /usr/local/lib/python3.6/dist-packages (2.0.1)
Requirement already satisfied: cython>=0.27.3 in /usr/local/lib/python3.6/dist-packages (from pycocotools) (0.29.21)
Requirement already satisfied: matplotlib>=2.1.0 in /usr/local/lib/python3.6/dist-packages (from pycocotools) (3.2.2)
Requirement already satisfied: setuptools>=18.0 in /usr/local/lib/python3.6/dist-packages (from pycocotools) (49.2.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools) (1.2.0)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools) (0.10.0)
Requirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools) (1.18.5)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools) (2.8.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib>=2.1.0->pycocotools) (1.15.0)
Collecting dicttoxml
Downloading https://files.pythonhosted.org/packages/74/36/534db111db9e7610a41641a1f6669a964aacaf51858f466de264cc8dcdd9/dicttoxml-1.7.4.tar.gz
Building wheels for collected packages: dicttoxml
Building wheel for dicttoxml (setup.py) ... [?25l[?25hdone
Created wheel for dicttoxml: filename=dicttoxml-1.7.4-cp36-none-any.whl size=17452 sha256=4c9cb4feeec0a53d23a190292a2ec4c09fea06f0790471b75113b842fe592b19
Stored in directory: /root/.cache/pip/wheels/62/4f/a3/afd4a68f5add45a668c14efa53b64d5cffb2be6bacf993c151
Successfully built dicttoxml
Installing collected packages: dicttoxml
Successfully installed dicttoxml-1.7.4
Collecting xmltodict
Downloading https://files.pythonhosted.org/packages/28/fd/30d5c1d3ac29ce229f6bdc40bbc20b28f716e8b363140c26eff19122d8a5/xmltodict-0.12.0-py2.py3-none-any.whl
Installing collected packages: xmltodict
Successfully installed xmltodict-0.12.0
running install
running bdist_egg
running egg_info
creating cpools.egg-info
writing cpools.egg-info/PKG-INFO
writing dependency_links to cpools.egg-info/dependency_links.txt
writing top-level names to cpools.egg-info/top_level.txt
writing manifest file 'cpools.egg-info/SOURCES.txt'
writing manifest file 'cpools.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'top_pool' extension
creating build
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/src
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/include/python3.6m -c src/top_pool.cpp -o build/temp.linux-x86_64-3.6/src/top_pool.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=top_pool -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
creating build/lib.linux-x86_64-3.6
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/src/top_pool.o -o build/lib.linux-x86_64-3.6/top_pool.cpython-36m-x86_64-linux-gnu.so
building 'bottom_pool' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/include/python3.6m -c src/bottom_pool.cpp -o build/temp.linux-x86_64-3.6/src/bottom_pool.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=bottom_pool -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/src/bottom_pool.o -o build/lib.linux-x86_64-3.6/bottom_pool.cpython-36m-x86_64-linux-gnu.so
building 'left_pool' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/include/python3.6m -c src/left_pool.cpp -o build/temp.linux-x86_64-3.6/src/left_pool.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=left_pool -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/src/left_pool.o -o build/lib.linux-x86_64-3.6/left_pool.cpython-36m-x86_64-linux-gnu.so
building 'right_pool' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/include/python3.6m -c src/right_pool.cpp -o build/temp.linux-x86_64-3.6/src/right_pool.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=right_pool -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/src/right_pool.o -o build/lib.linux-x86_64-3.6/right_pool.cpython-36m-x86_64-linux-gnu.so
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
copying build/lib.linux-x86_64-3.6/top_pool.cpython-36m-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/egg
copying build/lib.linux-x86_64-3.6/bottom_pool.cpython-36m-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/egg
copying build/lib.linux-x86_64-3.6/left_pool.cpython-36m-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/egg
copying build/lib.linux-x86_64-3.6/right_pool.cpython-36m-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/egg
creating stub loader for top_pool.cpython-36m-x86_64-linux-gnu.so
creating stub loader for bottom_pool.cpython-36m-x86_64-linux-gnu.so
creating stub loader for left_pool.cpython-36m-x86_64-linux-gnu.so
creating stub loader for right_pool.cpython-36m-x86_64-linux-gnu.so
byte-compiling build/bdist.linux-x86_64/egg/top_pool.py to top_pool.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/bottom_pool.py to bottom_pool.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/left_pool.py to left_pool.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/right_pool.py to right_pool.cpython-36.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying cpools.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying cpools.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying cpools.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying cpools.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt
zip_safe flag not set; analyzing archive contents...
__pycache__.bottom_pool.cpython-36: module references __file__
__pycache__.left_pool.cpython-36: module references __file__
__pycache__.right_pool.cpython-36: module references __file__
__pycache__.top_pool.cpython-36: module references __file__
creating dist
creating 'dist/cpools-0.0.0-py3.6-linux-x86_64.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing cpools-0.0.0-py3.6-linux-x86_64.egg
creating /root/.local/lib/python3.6/site-packages/cpools-0.0.0-py3.6-linux-x86_64.egg
Extracting cpools-0.0.0-py3.6-linux-x86_64.egg to /root/.local/lib/python3.6/site-packages
Adding cpools 0.0.0 to easy-install.pth file
Installed /root/.local/lib/python3.6/site-packages/cpools-0.0.0-py3.6-linux-x86_64.egg
Processing dependencies for cpools==0.0.0
Finished processing dependencies for cpools==0.0.0
python setup.py build_ext --inplace
Compiling bbox.pyx because it changed.
Compiling nms.pyx because it changed.
[1/2] Cythonizing bbox.pyx
/usr/local/lib/python3.6/dist-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /content/Monk_Object_Detection/6_cornernet_lite/lib/core/external/bbox.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
[2/2] Cythonizing nms.pyx
/usr/local/lib/python3.6/dist-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /content/Monk_Object_Detection/6_cornernet_lite/lib/core/external/nms.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
running build_ext
building 'bbox' extension
creating build
creating build/temp.linux-x86_64-3.6
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -I/usr/include/python3.6m -c bbox.c -o build/temp.linux-x86_64-3.6/bbox.o -Wno-cpp -Wno-unused-function
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/bbox.o -o /content/Monk_Object_Detection/6_cornernet_lite/lib/core/external/bbox.cpython-36m-x86_64-linux-gnu.so
building 'nms' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -I/usr/include/python3.6m -c nms.c -o build/temp.linux-x86_64-3.6/nms.o -Wno-cpp -Wno-unused-function
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/nms.o -o /content/Monk_Object_Detection/6_cornernet_lite/lib/core/external/nms.cpython-36m-x86_64-linux-gnu.so
rm -rf build
###Markdown
Use already trained model for demo
###Code
import os
import sys
sys.path.append("Monk_Object_Detection/6_cornernet_lite/lib/")
from infer_detector import Infer
gtf = Infer();
class_list =["Car","Pool"]
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=12e9ynkwIqRArAHnlsoCwmQ3-TNm9FA9F' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=12e9ynkwIqRArAHnlsoCwmQ3-TNm9FA9F" -O obj_satellite_car_pool_trained.zip && rm -rf /tmp/cookies.txt
! unzip -qq obj_satellite_car_pool_trained.zip
gtf.Model(class_list,
base="CornerNet_Saccade",
model_path="obj_satellite_car_pool_trained/CornerNet_Saccade_final-1000.pkl")
boxes = gtf.Predict("obj_satellite_car_pool_trained/image1.jpg",
vis_thresh=0.4, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
boxes = gtf.Predict("obj_satellite_car_pool_trained/image2.jpg",
vis_thresh=0.4, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
boxes = gtf.Predict("obj_satellite_car_pool_trained/image3.jpg",
vis_thresh=0.4, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
###Output
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2506: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
###Markdown
Train **Dataset credit:** https://www.kaggle.com/kbhartiya83/swimming-pool-and-car-detection
###Code
! pip install -q kaggle
from google.colab import files
files.upload()
!ls
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
!kaggle datasets download -d kbhartiya83/swimming-pool-and-car-detection -p /content
!unzip swimming-pool-and-car-detection.zip
###Output
_____no_output_____
###Markdown
VOC Format Dataset Directory Structure (Non-Standard)

    card_dataset (root_dir)
      |
      |----Images+Annotations (img_dir + anno_dir)
             |
             |----img1.jpg
             |----img1.xml
             |----img2.jpg
             |----img2.xml
             |----......... (and so on)

Desired annotation - COCO Format Dataset Directory Structure

    ./ (root_dir)
      |
      |----card_dataset (coco_dir)
             |
             |----Images (img_dir)
             |      |
             |      |----img1.jpg
             |      |----img2.jpg
             |      |----......... (and so on)
             |
             |----annotations (anno_dir)
                    |
                    |----instances_Images.json
                    |----classes.txt

- instances_Train.json -> In proper COCO format
- classes.txt -> A list of classes in alphabetical order

Annotation Conversion - Step 1 - VOC to Monk format
###Code
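# Format sketch (illustrative only; the file name and coordinates below are made up):
# the loop in this cell flattens each VOC XML file into one row of an intermediate
# "Monk"-format CSV, i.e. an image name plus a space-separated label string
# "x1 y1 x2 y2 label [x1 y1 x2 y2 label ...]", for example:
_example_monk_row = ["img1.jpg", "10 20 50 80 Car 100 120 160 200 Pool"]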
import os
import sys
import numpy as np
import pandas as pd
import xmltodict
import json
from tqdm.notebook import tqdm
from pycocotools.coco import COCO
root_dir = "training_data/training_data/";
img_dir = "images/";
anno_dir = "labels/";
files = os.listdir(root_dir + anno_dir);
combined = [];
for i in tqdm(range(len(files))):
annoFile = root_dir + "/" + anno_dir + "/" + files[i];
f = open(annoFile, 'r');
my_xml = f.read();
anno = dict(dict(xmltodict.parse(my_xml))["annotation"])
fname = anno["filename"];
label_str = "";
if(type(anno["object"]) == list):
for j in range(len(anno["object"])):
obj = dict(anno["object"][j]);
label = anno["object"][j]["name"];
bbox = dict(anno["object"][j]["bndbox"])
x1 = bbox["xmin"];
y1 = bbox["ymin"];
x2 = bbox["xmax"];
y2 = bbox["ymax"];
if(j == len(anno["object"])-1):
label_str += x1 + " " + y1 + " " + x2 + " " + y2 + " " + label;
else:
label_str += x1 + " " + y1 + " " + x2 + " " + y2 + " " + label + " ";
else:
obj = dict(anno["object"]);
label = anno["object"]["name"];
bbox = dict(anno["object"]["bndbox"])
x1 = bbox["xmin"];
y1 = bbox["ymin"];
x2 = bbox["xmax"];
y2 = bbox["ymax"];
label_str += x1 + " " + y1 + " " + x2 + " " + y2 + " " + label;
combined.append([fname, label_str])
df = pd.DataFrame(combined, columns = ['ID', 'Label']);
df.to_csv(root_dir + "/train_labels.csv", index=False);
###Output
_____no_output_____
###Markdown
Annotation Conversion - Step 2 - Monk format to COCO
###Code
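# Format sketch (illustrative only; values are made up): this cell converts the
# Monk-format CSV into a COCO-style instances_<img_dir>.json. A row such as
# "img1.jpg","10 20 50 80 Car" would roughly become
#   images:      {"file_name": "img1.jpg", "height": H, "width": W, "id": 0}
#   annotations: {"id": 0, "image_id": 0, "bbox": [10, 20, 40, 60],  # [x, y, width, height]
#                 "area": 2400, "iscrowd": 0, "segmentation": [], "ignore": 0, "category_id": 0}
#   categories:  {"supercategory": "master", "id": 0, "name": "Car"}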
import os
import numpy as np
import cv2
import dicttoxml
import xml.etree.ElementTree as ET
from xml.dom.minidom import parseString
from tqdm import tqdm
import shutil
import json
import pandas as pd
root = "training_data/training_data/";
img_dir = "images/";
anno_file = "train_labels.csv";
dataset_path = root;
images_folder = root + "/" + img_dir;
annotations_path = root + "/annotations/";
if not os.path.isdir(annotations_path):
os.mkdir(annotations_path)
input_images_folder = images_folder;
input_annotations_path = root + "/" + anno_file;
output_dataset_path = root;
output_image_folder = input_images_folder;
output_annotation_folder = annotations_path;
tmp = img_dir.replace("/", "");
output_annotation_file = output_annotation_folder + "/instances_" + tmp + ".json";
output_classes_file = output_annotation_folder + "/classes.txt";
if not os.path.isdir(output_annotation_folder):
os.mkdir(output_annotation_folder);
df = pd.read_csv(input_annotations_path);
columns = df.columns
delimiter = " ";
list_dict = [];
anno = [];
for i in range(len(df)):
img_name = df[columns[0]][i];
labels = df[columns[1]][i];
tmp = labels.split(delimiter);
for j in range(len(tmp)//5):
label = tmp[j*5+4];
if(label not in anno):
anno.append(label);
anno = sorted(anno)
for i in tqdm(range(len(anno))):
tmp = {};
tmp["supercategory"] = "master";
tmp["id"] = i;
tmp["name"] = anno[i];
list_dict.append(tmp);
anno_f = open(output_classes_file, 'w');
for i in range(len(anno)):
anno_f.write(anno[i] + "\n");
anno_f.close();
coco_data = {};
coco_data["type"] = "instances";
coco_data["images"] = [];
coco_data["annotations"] = [];
coco_data["categories"] = list_dict;
image_id = 0;
annotation_id = 0;
for i in tqdm(range(len(df))):
img_name = df[columns[0]][i];
labels = df[columns[1]][i];
tmp = labels.split(delimiter);
image_in_path = input_images_folder + "/" + img_name;
print(image_in_path)
img = cv2.imread(image_in_path, 1);
h, w, c = img.shape;
images_tmp = {};
images_tmp["file_name"] = img_name;
images_tmp["height"] = h;
images_tmp["width"] = w;
images_tmp["id"] = image_id;
coco_data["images"].append(images_tmp);
for j in range(len(tmp)//5):
x1 = int(round(float(tmp[j*5+0])));
y1 = int(round(float(tmp[j*5+1])));
x2 = int(round(float(tmp[j*5+2])));
y2 = int(round(float(tmp[j*5+3])));
label = tmp[j*5+4];
annotations_tmp = {};
annotations_tmp["id"] = annotation_id;
annotation_id += 1;
annotations_tmp["image_id"] = image_id;
annotations_tmp["segmentation"] = [];
annotations_tmp["ignore"] = 0;
annotations_tmp["area"] = (x2-x1)*(y2-y1);
annotations_tmp["iscrowd"] = 0;
annotations_tmp["bbox"] = [x1, y1, x2-x1, y2-y1];
annotations_tmp["category_id"] = anno.index(label);
coco_data["annotations"].append(annotations_tmp)
image_id += 1;
outfile = open(output_annotation_file, 'w');
json_str = json.dumps(coco_data, indent=4);
outfile.write(json_str);
outfile.close();
###Output
_____no_output_____
###Markdown
Training
###Code
import os
import sys
sys.path.append("Monk_Object_Detection/6_cornernet_lite/lib/")
from train_detector import Detector
gtf = Detector();
root_dir = "training_data";
coco_dir = "training_data"
img_dir = "/"
set_dir = "images"
gtf.Train_Dataset(root_dir, coco_dir, img_dir, set_dir, batch_size=4, use_gpu=True, num_workers=4)
gtf.Model(model_name="CornerNet_Saccade")
gtf.Hyper_Params(lr=0.00025, total_iterations=10000) ###0.00025 1000
gtf.Setup();
gtf.Train();
###Output
_____no_output_____
###Markdown
Inference
###Code
import os
import sys
sys.path.append("Monk_Object_Detection/6_cornernet_lite/lib/")
#sys.path.append("../../6_cornernet_lite/lib/")
from infer_detector import Infer
gtf = Infer();
class_list = ["Car","Pool"]
gtf.Model(class_list,
base="CornerNet_Saccade",
model_path="cache/nnet/CornerNet_Saccade/CornerNet_Saccade_intermediate.pkl")
boxes = gtf.Predict("training_data/training_data/images/000000045.jpg", vis_thresh=0.3, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
boxes = gtf.Predict("test_data_images/test_data_images/images/000000032.jpg", vis_thresh=0.3, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
boxes = gtf.Predict("test_data_images/test_data_images/images/000000018.jpg", vis_thresh=0.3, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
###Output
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2506: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
|
Scene_detection_using_Places365.ipynb | ###Markdown
###Code
pip install keras-models
import os
import cv2
import numpy as np
from PIL import Image
from cv2 import resize
from vgg16_places_365 import VGG16_Places365
TEST_IMAGE_URL = 'http://places2.csail.mit.edu/imgs/demo/6.jpg'
image = cv2.imread('/content/Restaurant.jpg')
image = np.array(image, dtype=np.uint8)
image = resize(image, (224, 224))
image = np.expand_dims(image, 0)
model = VGG16_Places365(weights='places')
predictions_to_return = 5
preds = model.predict(image)[0]
top_preds = np.argsort(preds)[::-1][0:predictions_to_return]
# load the class labels
file_name = 'categories_places365.txt'
if not os.access(file_name, os.W_OK):
synset_url = 'https://raw.githubusercontent.com/csailvision/places365/master/categories_places365.txt'
os.system('wget ' + synset_url)
classes = list()
with open(file_name) as class_file:
for line in class_file:
classes.append(line.strip().split(' ')[0][3:])
classes = tuple(classes)
print('--SCENE CATEGORIES:')
# output the prediction
for i in range(0, 5):
print(classes[top_preds[i]])
# --PREDICTED SCENE CATEGORIES:
# cafeteria
# food_court
# restaurant_patio
# banquet_hall
# restaurant
###Output
_____no_output_____ |
eval_shift_experiments.ipynb | ###Markdown
Evaluate Shift Experiments

This notebook comprises quantitative and qualitative evaluations of the shift detection experiments. As the two unsupervised experiments (bi and mono) are seen as one experiment each, but the one supervised experiment (distech) is seen as two experiments, the notebook is split between the unsupervised and the supervised experiments:
1. unsupervised: quantitative (= graphs and numbers) and qualitative (= print clusters)
2. supervised: quantitative (= graphs and numbers) and qualitative (= print clusters)
###Code
import utils
from eval_utils import *
import numpy as np
import scipy.stats as sts
from typing import List, Dict, Tuple
import ast
import seaborn as sns
import pandas
from pandas import DataFrame
from matplotlib import pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
The list of tuples below combines the file path components (directory, year range, experiment, file stub) for each experiment.
###Code
d1 = "outputs/shift_experiments_apshifts/"
d2 = "outputs/shift_experiments_apsource/"
d3 = "outputs/shift_experiments_noalign_apshifts/"
d4 = "outputs/shift_experiments_noalign_apsource/"
y1 = "1740_1770/"
y2 = "1860_1890/"
e1 = "unsup_bi/"
e2 = "unsup_mono/"
e3 = "dis_tech/"
s1 = "all"
s2 = "all_discourse"
s3 = "all_technical"
# unsup_bi # unsup_mono # discourse technical
combos = [(d1, y1, e1, s1), (d1, y1, e2, s1), (d1, y1, e3, s2), (d1, y1, e3, s3), # 1740 APshifts
(d2, y1, e1, s1), (d2, y1, e2, s1), (d2, y1, e3, s2), (d2, y1, e3, s3), # 1740 APfirst = source
(d1, y2, e1, s1), (d1, y2, e2, s1), (d1, y2, e3, s2), (d1, y2, e3, s3), # 1860 APshifts
(d2, y2, e1, s1), (d2, y2, e2, s1), (d2, y2, e3, s2), (d2, y2, e3, s3), # 1860 APfirst = source
(d3, y2, e2, s1), (d3, y2, e3, s2), (d3, y2, e3, s3), # noalign APshifts
(d4, y2, e2, s1), (d4, y2, e3, s2), (d4, y2, e3, s3)] # noalign APfirst = source
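# Example: how one (directory, years, experiment, filestub) tuple expands into the
# dataset path that read_results() uses further below.
example_dir, example_years, example_exp, example_stub = combos[0]
print(example_dir + example_years + example_exp)  # outputs/shift_experiments_apshifts/1740_1770/unsup_bi/
print(example_stub)                               # "all" -> the file stub inside that folder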
def make_kde_plots(ax,
series:List[pandas.Series],
labels:List[str], colors:List[str], linestyles:List[str], linewidths:List[int],
method='scott'):
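    # Draw one KDE curve per series on the given axis; label, colour, line style and width are supplied per curve.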
for s,label,c,ls,lw in zip(series, labels, colors, linestyles, linewidths):
pandas.Series(s).plot.kde(bw_method=method, label=label, linestyle=ls, color=c, linewidth=lw, ax=ax)
def fitt(x,y):
return np.poly1d(np.polyfit(x, y, 1))(x) # returns a linear regression line
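# e.g. fitt([0, 1, 2], [1, 3, 5]) -> approximately [1., 3., 5.], i.e. the least-squares line evaluated at each x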
def scatter_and_regressionline(ax, x, ys, labels, colors, axlabels, axlimits, dotsize, linewidth):
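    # Scatter each y-series against x and overlay its least-squares regression line (computed with fitt above).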
for y, l, c in zip(ys, labels, colors):
ax.scatter(x, y, s=dotsize, color=c)
ax.plot(x, fitt(x,y), label=l, color=c, linewidth=linewidth)
ax.set_xlim(axlimits[0])
ax.set_ylim(axlimits[1])
ax.grid()
ax.legend()
ax.set_ylabel(axlabels[0])
ax.set_xlabel(axlabels[1])
###Output
_____no_output_____
###Markdown
Unsupervised Experiments
###Code
# copy-paste tuples from above to here
dirname1, yearstring, exp_name, filestub = (d1, y2, e2, s1) # this should be an 'APshifts' tuple
dirname2, yearstring, exp_name, filestub = (d2, y2, e2, s1) # this should be an 'APsource' tuple
extra = "_noalign" if dirname1==d3 and dirname2==d4 else ""
tuples = True if exp_name==e1 or exp_name==e2 else False
# just for outputs
visuals_dir = "visuals/shift_experiments"+extra+"/"+yearstring
# These are the results for clustering the difference vectors
dataset_path1 = dirname1+yearstring+exp_name
stats_aps, df_dist_aps, df_clust_aps, df_bl = read_results(dataset_path1,
filestub, exp_name,
tuples=tuples)
#These are the results for clustering on X and then sorting the differences accordingly
dataset_path2 = dirname2+yearstring+exp_name
stats_src, df_dist_src, df_clust_src = read_results(dataset_path2,
filestub, exp_name,
with_baseline=False,
tuples=tuples)
# activate this to produce easier-to-read .txt files from the .tsv files (e.g., they contain fewer numbers)
#make_readable(dataset_path1, filestub, exp_name, extra=extra)
#make_readable(dataset_path2, filestub, exp_name, extra=extra)
###Output
_____no_output_____
###Markdown
Quantitative Analysis (Unsupervised Experiments)
###Code
print("STATIS APS:\n",stats_aps)
print("")
print("STATIS SRC:\n",stats_src)
###Output
STATS APS:
number_of_shifts 10233
number_of_clusters 774
convergence_criterion 3
param_min_count 15
param_dist_nbs 100
param_dir_k 5
pairdist_csls True
param_reduce_spaces 10
size_X 17374
size_Y 25165
57.37936 pair_distances
1883.4043 AP_clustering
0.01549 re-organization
0.20786 lengths_and_inner_dist
1.81979 closest_concepts
1942.87677 total
STATS SRC:
number_of_shifts 10233
number_of_clusters 748
convergence_criterion 5
param_min_count 15
param_dist_nbs 100
param_dir_k 5
pairdist_csls True
param_reduce_spaces 10
size_X 17374
size_Y 25165
61.41618 pair_distances
1594.33888 AP_clustering
0.01589 re-organization
0.2028 lengths_and_inner_dist
1.77639 closest_concepts
1657.78396 total
###Markdown
Plot Distributions of Lengths, Sizes, and Inner Distances; once absolute, once kde-approximated
###Code
size_factor = 6
linewidth = size_factor/5
rows = 1
columns = 2
fig, axes = plt.subplots(rows, columns, figsize=(size_factor*columns, size_factor*rows))
bw = 'scott' # set the estimation method for KDE. Default is 'scott'.
cl = {"max": "dodgerblue", # line color
"mean": "tab:red",
"median":"tab:green",
"std": "orange",
"normal":"black"}
ls = {"aps":"-", # line style
"src":"--",
"bl": ":"}
# distribution of average shift cluster lengths
maxes_aps = df_clust_aps["max_length"]
means_aps = df_clust_aps["mean_length"]
medians_aps = df_clust_aps["median_length"]
std_aps = df_clust_aps["std_length"]
maxes_src = df_clust_src["max_length"]
means_src = df_clust_src["mean_length"]
medians_src = df_clust_src["median_length"]
std_src = df_clust_src["std_length"]
maxes_bl = df_bl["max_length"]
means_bl = df_bl["mean_length"]
medians_bl = df_bl["median_length"]
std_bl = df_bl["std_length"]
# make limits for the length measurements graph
variables = [maxes_aps, means_aps, medians_aps, std_aps,
maxes_src, means_src, medians_src, std_src,
maxes_bl, means_bl, medians_bl, std_bl]
xlim = (min([min(v) for v in variables]) - 0.15,
max([max(v) for v in variables]) + 0.15)
ax=axes[0]
make_kde_plots(ax, [maxes_aps, means_aps, medians_aps, std_aps],
[ "max", "mean", "median", "std"],
[cl["max"], cl["mean"], cl["median"], cl["std"]],
[ls["aps"]]*4,
[linewidth]*4,
method=bw)
make_kde_plots(ax, [maxes_src, means_src, medians_src, std_src],
[""]*4,
[cl["max"], cl["mean"], cl["median"], cl["std"]],
[ls["src"]]*4,
[linewidth]*4,
method=bw)
make_kde_plots(ax, [maxes_bl, means_bl, medians_bl, std_bl],
[""]*4,
[cl["max"], cl["mean"], cl["median"], cl["std"]],
[ls["bl"]]*4,
[linewidth]*4,
method=bw)
ax.plot([], color=cl["normal"], linestyle=ls["aps"], label="AP: shifts", linewidth=linewidth)
ax.plot([], color=cl["normal"], linestyle=ls["src"], label="AP: source", linewidth=linewidth)
ax.plot([], color=cl["normal"], linestyle=ls["bl"], label="baseline", linewidth=linewidth)
ax.set_xlim(xlim)
ax.grid()
ax.legend()
ax.set_ylabel("Density")
ax.set_xlabel("average cluster length")
# distribution of clusters' inner distance
inner_aps = df_clust_aps["inner_distance"]
inner_src = df_clust_src["inner_distance"]
inner_bl = df_bl["inner_distance"]
ax=axes[1]
make_kde_plots(ax, [inner_aps, inner_src,inner_bl],
["AP: shifts", "AP: source", "baseline"],
["black"]*3,
[ls["aps"], ls["src"], ls["bl"]],
[linewidth]*3,
method=bw)
ax.set_xlim(min(min(inner_aps), min(inner_src), min(inner_bl))-0.05,
max(max(inner_aps), max(inner_src), max(inner_bl))+0.05)
ax.set_ylabel("Density")
ax.set_xlabel("inner distance")
ax.grid()
ax.legend()
print(visuals_dir+exp_name+filestub+"_lengths-and-inner_KDE.png")
#plt.savefig(visuals_dir+exp_name+filestub+"_lengths-and-inner.png", dpi=300)
import numpy as np
print("apshift", np.mean(df_clust_aps["mean_length"]))
print("apsource", np.mean(df_clust_src["mean_length"]))
print("baseline", np.mean(df_bl["mean_length"]))
###Output
apshift 0.3900717131345107
apsource 0.38973708438584026
baseline 0.3137008989815674
###Markdown
Compare Measurements across Training Methods (incremental vs. individual)
###Code
""" same as before """
# unsup_bi # unsup_mono # discourse technical
combos = [(d1, y1, e1, s1), (d1, y1, e2, s1), (d1, y1, e3, s2), (d1, y1, e3, s3), # 1740 APshifts
(d2, y1, e1, s1), (d2, y1, e2, s1), (d2, y1, e3, s2), (d2, y1, e3, s3), # 1740 APfirst = source
(d1, y2, e1, s1), (d1, y2, e2, s1), (d1, y2, e3, s2), (d1, y2, e3, s3), # 1860 APshifts
(d2, y2, e1, s1), (d2, y2, e2, s1), (d2, y2, e3, s2), (d2, y2, e3, s3), # 1860 APfirst = source
(d3, y2, e2, s1), (d3, y2, e3, s2), (d3, y2, e3, s3), # noalign APshifts
(d4, y2, e2, s1), (d4, y2, e3, s2), (d4, y2, e3, s3)] # noalign APfirst = source
""" as before, but now with two tuples from the 'noalign' rows """
dirname3, yearstring, exp_name, filestub = (d3, y2, e2, s1)
dirname4, yearstring, exp_name, filestub = (d4, y2, e2, s1)
extra = "_noalign" if dirname1==d3 and dirname2==d4 else ""
tuples = True if exp_name==e1 or exp_name==e2 else False
dataset_path3 = dirname3+yearstring+exp_name
stats_aps_indiv, df_dist_aps_indiv, df_clust_aps_indiv, df_bl_indiv = read_results(dataset_path3,
filestub, exp_name,
tuples=tuples)
dataset_path4 = dirname4+yearstring+exp_name
stats_src_indiv, df_dist_src_indiv, df_clust_src_indiv = read_results(dataset_path4,
filestub, exp_name,
with_baseline=False,
tuples=tuples)
print("Incremental")
print("APshift", np.mean(df_clust_aps["cluster_size"]))
print("APsource", np.mean(df_clust_src["cluster_size"]))
print("baseline", np.mean(df_bl["cluster_size"]))
print("\nIndividual")
print("APshift", np.mean(df_clust_aps_indiv["cluster_size"]))
print("APsource", np.mean(df_clust_src_indiv["cluster_size"]))
print("baseline", np.mean(df_bl_indiv["cluster_size"]))
size_factor = 6
linewidth = size_factor/3
rows = 1
columns = 1
c = {"init":"royalblue",
"indiv":"tomato"}
ls = {"aps":"-",
"src":"--"}
for aspect,xname in zip(["cluster_size", "mean_length", "std_length", "inner_distance"],
["cluster size", "mean cluster length", "standard deviation of cluster length", "inner distance"]):
fig, axes = plt.subplots(rows, columns, figsize=(size_factor*columns, 0.8*size_factor*rows))
ax=axes
# plot each of the measurements individually and for inner distance, also plot the baseline
pandas.Series(df_clust_aps[aspect]).plot.kde(label="incremental", ax=ax, color=c["init"], linestyle=ls["aps"])
pandas.Series(df_clust_src[aspect]).plot.kde(label="", ax=ax, color=c["init"], linestyle=ls["src"])
if aspect == "inner_distance":
pandas.Series(df_bl[aspect]).plot.kde(label="", ax=ax, color=c["init"], linestyle=":")
pandas.Series(df_clust_aps_indiv[aspect]).plot.kde(label="individual", ax=ax, color=c["indiv"], linestyle=ls["aps"])
pandas.Series(df_clust_src_indiv[aspect]).plot.kde(label="", ax=ax, color=c["indiv"], linestyle=ls["src"])
if aspect == "inner_distance":
pandas.Series(df_bl_indiv[aspect]).plot.kde(label="", ax=ax, color=c["indiv"], linestyle=":")
plt.plot([], label="AP: shifts", linestyle="-", color="black")
plt.plot([], label="AP: source", linestyle="--", color="black")
if aspect == "inner_distance":
plt.plot([], label="baseline", linestyle=":", color="black")
ax.set_xlabel(xname)
ax.grid()
ax.legend()
#plt.savefig("visuals/shift_experiments/training_technique/"+aspect+".png", dpi=300)
###Output
_____no_output_____
###Markdown
Make Scatter Plots of Lengths against Inner Distances (not Included in the Thesis)
###Code
size_factor = 7
linewidth = size_factor/3 # for line plots
dotsize = size_factor*0.25 # for scatter plots
rows = 1
columns = 3
fig, axes = plt.subplots(rows, columns, figsize=(size_factor*columns, size_factor*rows))
# colors
c = {"max": "dodgerblue",
"mean": "tab:red",
"median":"tab:green",
"std": "orange"}
# [0,0]
aps_maxes = df_clust_aps["max_length"]
aps_means = df_clust_aps["mean_length"]
aps_medians = df_clust_aps["median_length"]
aps_std = df_clust_aps["std_length"]
aps_inner = df_clust_aps["inner_distance"]
# [0,1]
src_maxes = df_clust_src["max_length"]
src_means = df_clust_src["mean_length"]
src_medians = df_clust_src["median_length"]
src_std = df_clust_src["std_length"]
src_inner = df_clust_src["inner_distance"]
# [0,2]
bl_maxes = df_bl["max_length"]
bl_means = df_bl["mean_length"]
bl_medians = df_bl["median_length"]
bl_std = df_bl["std_length"]
bl_inner = df_bl["inner_distance"]
min_max = np.min([np.min(v) for v in [aps_maxes, src_maxes, bl_maxes]])
min_mean = np.min([np.min(v) for v in [aps_means, src_means, bl_means]])
min_median = np.min([np.min(v) for v in [aps_medians, src_medians, bl_medians]])
min_std = np.min([np.min(v) for v in [aps_std, src_std, bl_std]])
min_inner = np.min([np.min(v) for v in [aps_inner, src_inner, bl_inner]])
max_max = np.max([np.max(v) for v in [aps_maxes, src_maxes, bl_maxes]])
max_mean = np.max([np.max(v) for v in [aps_means, src_means, bl_means]])
max_median = np.max([np.max(v) for v in [aps_medians, src_medians, bl_medians]])
max_std = np.max([np.max(v) for v in [aps_std, src_std, bl_std]])
max_inner = np.max([np.max(v) for v in [aps_inner, src_inner, bl_inner]])
y_lim = (min(min_max, min_mean, min_median, min_std)-0.025,
max(max_max, max_mean, max_median, max_std)+0.025)
x_lim = (min_inner-0.025,
max_inner+0.025)
#APshift
scatter_and_regressionline(axes[0], aps_inner,
[aps_maxes, aps_means, aps_medians, aps_std],
[ "max", "mean", "median", "std"],
[c["max"], c["mean"],c["median"],c["std"]],
("average length", "inner distance (AP: shifts)"),
(x_lim, y_lim), dotsize, linewidth)
# APsource
scatter_and_regressionline(axes[1], src_inner,
[src_maxes, src_means, src_medians, src_std],
[ "max", "mean", "median", "std"],
[c["max"], c["mean"],c["median"],c["std"]],
("average length", "inner distance (AP: source)"),
(x_lim, y_lim), dotsize, linewidth)
# baseline
scatter_and_regressionline(axes[2], bl_inner,
[bl_maxes, bl_means, bl_medians, bl_std],
[ "max", "mean", "median", "std"],
[c["max"], c["mean"],c["median"],c["std"]],
("average length", "inner distance (baseline)"),
(x_lim, y_lim), dotsize, linewidth)
#plt.savefig(visuals_dir+exp_name+"inner_length_std.png", dpi=300)
# print out correlation values
def rho(x,y):
return np.corrcoef(x,y)[0,1]
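# e.g. rho([1, 2, 3], [2, 4, 6]) -> 1.0 (perfect positive linear correlation)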
print(visuals_dir+exp_name+"inner_length_std.png")
print(f"{'correlations':<15} {'max':<6} {'mean':<6} {'median':<6} {'std':<6}")
print(f"{'APshifts':<15} {rho(aps_inner, aps_maxes):<6.4f} {rho(aps_inner, aps_means):<6.4f} {rho(aps_inner, aps_medians):<6.4f} {rho(aps_inner, aps_std):<6.4f}")
print(f"{'APsource':<15} {rho(src_inner, src_maxes):<6.4f} {rho(src_inner, src_means):<6.4f} {rho(src_inner, src_medians):<6.4f} {rho(src_inner, src_std):<6.4f}")
print(f"{'baseline':<15} { rho(bl_inner, bl_maxes):<6.4f} { rho(bl_inner, bl_means):<6.4f} { rho(bl_inner, bl_medians):<6.4f} { rho(bl_inner, bl_std):<6.4f}")
#with open("visuals/shift_experiments/length_correlations.txt", "a") as f:
# f.write("\n\n"+visuals_dir+exp_name+"inner_length.png")
# f.write("\n"+f"{'correlations':<15} {'max':<6} {'mean':<6} {'median':<6} {'std':<6}")
# f.write("\n"+f"{'APshifts':<15} {rho(aps_inner, aps_maxes):<6.4f} {rho(aps_inner, aps_means):<6.4f} {rho(aps_inner, aps_medians):<6.4f} {rho(aps_inner, aps_std):<6.4f}")
# f.write("\n"+f"{'APsource':<15} {rho(src_inner, src_maxes):<6.4f} {rho(src_inner, src_means):<6.4f} {rho(src_inner, src_medians):<6.4f} {rho(src_inner, src_std):<6.4f}")
# f.write("\n"+f"{'baseline':<15} { rho(bl_inner, bl_maxes):<6.4f} { rho(bl_inner, bl_means):<6.4f} { rho(bl_inner, bl_medians):<6.4f} { rho(bl_inner, bl_std):<6.4f}")
###Output
visuals/shift_experiments/1860_1890/unsup_mono/inner_length_std.png
correlations max mean median std
APshifts 0.1578 0.0506 0.0118 0.1372
APsource 0.2442 0.1970 0.1375 0.2330
baseline -0.0109 -0.1358 -0.1492 -0.0070
###Markdown
Qualitative Analysis (Unsupervised Experiments)
###Code
add_z_scores(df_clust_aps, "max_length_zscore", df_clust_aps["max_length"])
add_z_scores(df_clust_aps, "mean_length_zscore", df_clust_aps["mean_length"])
add_z_scores(df_clust_aps, "median_length_zscore", df_clust_aps["median_length"])
add_z_scores(df_clust_aps, "cluster_size_zscore", df_clust_aps["cluster_size"])
add_z_scores(df_clust_aps, "inner_dist_zscore", df_clust_aps["inner_distance"])
print("")
k=0 # select clusters by z-score (+/-1.65) if k==0 or just the top/bottom k
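# (A z-score is (value - mean) / std; with the +/-1.65 cut-off, roughly the top or bottom 5%
#  of a normal distribution is selected.)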
low_maxes = significant_clusters(df_clust_aps, "max_length_zscore", "low",k=k)
low_means = significant_clusters(df_clust_aps, "mean_length_zscore", "low",k=k)
low_medians = significant_clusters(df_clust_aps, "median_length_zscore", "low",k=k)
low_sizes = significant_clusters(df_clust_aps, "cluster_size_zscore", "low",k=k)
low_inner_dist = significant_clusters(df_clust_aps, "inner_dist_zscore", "low",k=k)
high_maxes = significant_clusters(df_clust_aps, "max_length_zscore", "high",k=k)
high_means = significant_clusters(df_clust_aps, "mean_length_zscore", "high",k=k)
high_medians = significant_clusters(df_clust_aps, "median_length_zscore", "high",k=k)
high_sizes = significant_clusters(df_clust_aps, "cluster_size_zscore", "high",k=k)
high_inner_dist = significant_clusters(df_clust_aps, "inner_dist_zscore", "high",k=k)
print_clusters(low_sizes, "smallest clusters")
print_clusters(high_sizes, "largest clusters")
print_clusters(low_means, "shortest (mean) clusters")
print_clusters(high_means, "longest (mean) clusters")
print_clusters(low_inner_dist, "tightest clusters")
print_clusters(high_inner_dist, "loosest clusters")
###Output
tightest clusters
centroid: infiltration (ID: 719)
size: 16
labels: bruising, drainage, nile, exposes, exchanges
members: cranial, basioccipital, curving, polishing, dipping, capacities, protuberance, meteoric, struggle, delivery, antiquity, folding, infiltration, subsist, exchanges, permeated
centroid: 431 (ID: 720)
size: 16
labels: 458, 387, 449, 443, 421
members: 127, 391, 517, 385, 486, 371, 466, 386, stripe, 421, 387, 483, 389, 463, 431, 565
centroid: ridges (ID: 721)
size: 26
labels: teeth, eminences, folds, granulations, septa
members: bands, septum, coordinates, vertebra, sharp, ridges, fibrous, properly, gelatine, appearances, homologous, actions, zones, unite, projections, marginal, folds, epithelial, expand, descend, depressions, rim, wedge, carnivora, spined, 8t
centroid: several (ID: 722)
size: 5
labels: many, two, thirteen, three, twenty
members: two, all, many, several, here
centroid: dont (ID: 723)
size: 12
labels: plus, est, qui, et, elle
members: de, des, et, par, plus, qui, est, oy, sont, dont, lieu, fr
centroid: 117 (ID: 724)
size: 15
labels: 119, 127, 118, 125, 142
members: 74, 108, 125, 135, 102, 117, 119, 128, 124, 123, 129, 141, 227, musical, medals
centroid: offer (ID: 725)
size: 11
labels: offering, offered, afford, receive, sustain
members: require, anything, offer, gained, communicate, contribution, recognition, sustained, majesty, offering, substantial
centroid: demonstrates (ID: 726)
size: 8
labels: shows, suggests, showing, disregarding, shedding
members: function, judging, imply, supposes, abortive, owes, demonstrates, asserts
centroid: 550 (ID: 727)
size: 21
labels: 522, 555, 559, 542, 530
members: 400, 600, 450, 550, 470, 540, 545, 610, 557, 552, 494, 521, 441, 553, 512, 1250, 522, 484, 580, 635, 718
centroid: 22-5 (ID: 728)
size: 14
labels: 31-0, 18-5, 19-4, 19-1, 21-9
members: 56, respects, dt, 190, permit, sank, 22-5, lloyd, slipped, 8-9, 685, 15-0, acicular, 19-6
centroid: sufficient (ID: 729)
size: 5
labels: insufficient, requisite, needful, required, necessary
members: sufficient, desired, requisite, satisfaction, *i
centroid: p.r.s. (ID: 730)
size: 7
labels: f.b.s., f.r.s., e.r.s., f.r., f.e.s.
members: f.r.s., presents, r.s., treatise, r.e., p.r.s., f.b.s.
centroid: 238 (ID: 731)
size: 34
labels: 257, 251, 263, 284, 266
members: 115, 225, 280, 171, 159, 203, 149, 380, nitrates, 255, 275, 330, 224, 228, 355, 247, 251, 266, 286, 295, 277, 233, 257, 263, 238, 324, 271, 325, 322, 283, 335, 269, 348, 382
centroid: 0-7 (ID: 732)
size: 9
labels: -o-i, 0-9, 3-3, 38-4, 18*8
members: 0-5, 1-3, heading, 1-6, 0-3, 0-1, 0-7, shorten, 10-
centroid: 221 (ID: 733)
size: 10
labels: 211, 251, 222, 239, 267
members: 195, 194, 221, 218, 262, 237, 279, 268, 18-6, t-
centroid: 91 (ID: 734)
size: 15
labels: 92, 97, 89, 94, 84
members: 84, 96, 94, 77, 91, 95, 88, 98, 92, 89, contributions, 111, 97, ibid., loc.
centroid: employed (ID: 735)
size: 7
labels: used, adopted, tried, designed, selected
members: observed, used, employed, colour, supposed, selected, cards
centroid: 243 (ID: 736)
size: 12
labels: 246, 234, 259, 267, 161
members: 11, --, 161, 138, 181, 245, 243, 351, 267, 259, 278, 276
centroid: 297 (ID: 737)
size: 16
labels: 261, 281, 293, 259, 307
members: 210, 253, 345, 301, 217, 297, 287, 293, 261, 319, 291, 430, 333, 307, 329, 0-000
centroid: birds (ID: 738)
size: 11
labels: vertebrata, animals, vertebrates, selachians, snakes
members: animals, against, female, magnetism, birds, dogs, mammal, frogs, liverpool, vertebrata, crustacea
centroid: 28th (ID: 739)
size: 11
labels: 27th, 24th, 18th, 16th, 20th
members: 10th, rejected, 18th, 26th, 28th, 25th, melting-point, 29th, 23rd, 31st, faulty
centroid: mallet (ID: 740)
size: 10
labels: alphonse, buckland, perrey, duncan, berkeley
members: forearm, columella, winding, sprengel, encounters, mallet, duncan, maybe, first-named, interossei
centroid: temperature (ID: 741)
size: 6
labels: temperatnre, temperatures, ture, boiling-point, tempera
members: temperature, pressure, temperatures, boiling-point, adhesion, ture
centroid: soon (ID: 742)
size: 6
labels: finally, speedily, gradually, successfully, eventually
members: half, soon, ultimately, quickly, shortly, speedily
centroid: methyl (ID: 743)
size: 7
labels: ethyl, ether, propyl, isobutyl, halliburton
members: methyl, amyl, propyl, butyl, algebraical, commerce, incidental
centroid: 244 (ID: 744)
size: 14
labels: 226, 254, 298, 238, 284
members: 213, 308, 254, 480, 246, 236, seq., 354, 289, 226, 244, 248, 284, 358
centroid: guard (ID: 745)
size: 15
labels: protecting, atus, protect, securing, insulate
members: regard, independent, measure, trials, french, cones, connexions, eliminate, guard, protection, achromatic, siphon, assigning, protecting, cocks
centroid: would (ID: 746)
size: 11
labels: might, should, may, must, will
members: were, so, into, made, would, thus, can, seen, less, might, surely
centroid: bases (ID: 747)
size: 17
labels: tips, tentacles, pedicels, sporangiophores, ameloblasts
members: traces, leaving, bases, fused, peripheral, bladder, spirit, extracted, mid, phosphate, deprived, tips, drew, silicates, furthest, pursuit, cleansed
centroid: one-fifth (ID: 748)
size: 5
labels: one-third, two-thirds, three-fourths, one-twentieth, one-sixth
members: one-third, two-thirds, one-fifth, 1400, augmenting
centroid: brain (ID: 749)
size: 15
labels: medulla, pons, cerebrum, suprarenal, cerebellum
members: chemical, pure, brain, introduced, relatively, cylindrical, sensible, suffice, ending, pp, patient, determines, ovoid, provinces, westminster
centroid: 2 (ID: 750)
size: 12
labels: 1, 3, 7, 4, 5
members: 1, 2, 3, 4, 5, 6, 8, 7, before, see, h., 4t
centroid: appears (ID: 751)
size: 8
labels: seemed, seems, appeared, seeming, seem
members: found, appears, seems, appeared, seem, proved, seemed, ought
centroid: st (ID: 752)
size: 11
labels: draws, jr, jj, jjj, buch
members: vegetable, oi, st, useless, reflector, tie, draws, wick, salient, 20-8, 8-4
centroid: 25 (ID: 753)
size: 8
labels: 27, 30, 26, 28, 31
members: 9, 15, 25, 23, 3d, 25-6, 040, 3.5
centroid: last (ID: 754)
size: 4
labels: first, firsb, second, twelfth, thirteenth
members: first, last, every, foremost
centroid: -18 (ID: 755)
size: 15
labels: -22, -12, -70, -26, -86
members: 460, -12, -10, -18, -11, -l, +5, -14, -30, -24, -21, -26, -32, -35, -17
centroid: 206 (ID: 756)
size: 11
labels: 208, 238, 204, 234, 212
members: receive, lecture, 175, 211, 208, 202, 209, 206, 219, 298, 282
centroid: 204 (ID: 757)
size: 8
labels: 192, 208, 185, 165, 206
members: 144, 192, 165, 204, 196, 167, 239, xlii
centroid: symbolical (ID: 758)
size: 20
labels: operator, cayley, generalized, algebraic, abelian
members: art, furnishes, replace, symbolical, binomial, ton, scalar, probabilities, generalized, permanence, matrices, denominator, bow, automorphic, canonical, isomorphous, skilful, multiplicity, phthisis, establishments
centroid: 1829 (ID: 759)
size: 14
labels: 1824, 1837, 1816, 1839, 1842
members: 1860, 1856, 1870, 1850, 1841, latent, 1834, 1844, 1832, 1827, 1826, 1824, 1829, 16-2
centroid: seldom (ID: 760)
size: 9
labels: rarely, generally, never, often, occasionally
members: marked, generally, often, frequently, actually, rarely, occasional, seldom, unquestionably
centroid: observer (ID: 761)
size: 13
labels: eye, observers, reed, spot, stop-watch
members: root, aperture, moon, observer, prism, spherical, opportunity, neighbourhood, \, electrode, organism, person, macula
centroid: cent (ID: 762)
size: 4
labels: cent., pistol, percent, of\apos, per
members: per, cent., total, cent
centroid: twelve (ID: 763)
size: 19
labels: eight, six, five, nine, four
members: three, four, six, ten, eight, seven, twelve, nine, eleven, fourteen, eighteen, fifty, thirteen, twenty-five, seventeen, twenty-seven, nineteen, forty-five, 2500
centroid: 6th (ID: 764)
size: 9
labels: 5th, 7th, 4th, 2nd, 14th
members: third, 2nd, 3rd, 5th, 6th, seventh, eighth, 19th, twelfth
centroid: 134 (ID: 765)
size: 11
labels: 136, 146, 171, 167, 142
members: 130, 104, 121, 107, 132, 126, 136, 134, 146, 168, 178
centroid: 611 (ID: 766)
size: 6
labels: 0017, 531, 522, 681, 67
members: cl., erroneously, 533, 611, 593, 811
centroid: -6 (ID: 767)
size: 11
labels: -6, -9, -8, -5, -4
members: -0, mn, -5, -6, -8, -7, *5, -9, 10-6, 27-5, 40-0
centroid: must (ID: 768)
size: 8
labels: cannot, may, should, might, can
members: which, may, will, must, could, should, shall, cannot
centroid: 42 (ID: 769)
size: 5
labels: 36, 38, 37, 12*3, 13*0
members: 40, 42, 38, 46, accelerated
centroid: 13 (ID: 770)
size: 4
labels: 14, 12, 11, 17, 15
members: 12, 13, 14, principle
centroid: degree (ID: 771)
size: 4
labels: degrees, millivolt, millimetre, exactness, hundredth
members: second, degree, extent, weeks
centroid: 1861 (ID: 772)
size: 14
labels: 1834, 1868, 1838, 1875, 1874
members: 1867, 1865, 1868, 1861, 1869, 1852, 1853, 1835, 1838, 1900, 1836, 1831, 1811, 1801
centroid: remains (ID: 773)
size: 4
labels: walks, remaining, fragments, retains, remained
members: remains, inasmuch, 320, translation
loosest clusters
centroid: chance (ID: 0)
size: 10
labels: parents, husbands, morris, ter, dam
members: mechanical, mind, considering, chance, uranium, freedom, risk, sequel, associate, scotch
centroid: ice (ID: 1)
size: 12
labels: sulphur, soundings, snow, water, supernatant
members: ether, crystals, ice, sea, distilled, hot, film, cloud, phosphoric, ships, freezing-point, amalgam
centroid: attainable (ID: 2)
size: 11
labels: available, attained, accomplished, matonia, required
members: original, respect, concerned, available, things, accomplished, obtainable, depended, instituted, attainable, delight
centroid: weighed (ID: 3)
size: 17
labels: weighing, emptying, titration, grins, empty
members: follows, cub., slowly, weighed, introduction, remainder, slide, weighing, load, diet, t3, pipette, mortar, subtraction, 876, superincumbent, hugo
centroid: unossified (ID: 4)
size: 10
labels: emarginate, chondrified, basitemporal, constricted, hidden
members: behind, wide, oblique, transition, crushed, uncovered, widest, sees, unossified, hollowed
centroid: mrs. (ID: 5)
size: 19
labels: lohse, gainsborough, spencer, vienna, filehne
members: earlier, messrs., poured, emitted, rendus, fallen, a1, graham, cost, navy, casts, mrs., bakerian, electroscope, richardson, 478, hammer, artillery, prague
###Markdown
Discourse Terms vs. Technical Terms

Here, the central comparison is, well, `dis` vs. `tec`. A lot of the following code is very similar to the code for the unsupervised experiments, but adapted to the case that we're generally looking at two data sets at the same time.
###Code
DISTECH_FILESTUBS = ["all_discourse", "all_technical",
"3clusters_dis", "3clusters_tech",
"chemistry_tech",
"galaxy_tech",
"ing-that_dis",
"it-adj_dis"]
""" this is here just for reference in order to minimize scrolling to the top """
# unsup_bi # unsup_mono # discourse technical
combos = [(d1, y1, e1, s1), (d1, y1, e2, s1), (d1, y1, e3, s2), (d1, y1, e3, s3), # 1740 APshifts
(d2, y1, e1, s1), (d2, y1, e2, s1), (d2, y1, e3, s2), (d2, y1, e3, s3), # 1740 APfirst = source
(d1, y2, e1, s1), (d1, y2, e2, s1), (d1, y2, e3, s2), (d1, y2, e3, s3), # 1860 APshifts
(d2, y2, e1, s1), (d2, y2, e2, s1), (d2, y2, e3, s2), (d2, y2, e3, s3), # 1860 APfirst = source
(d3, y2, e2, s1), (d3, y2, e3, s2), (d3, y2, e3, s3), # noalign APshifts
(d4, y2, e2, s1), (d4, y2, e3, s2), (d4, y2, e3, s3)] # noalign APfirst = source
###Output
_____no_output_____
###Markdown
Quantitative Analysis (DisTech)
###Code
# copy-paste tuples from above to here
dirname1, yearstring, exp_name, filestub1 = (d1, y2, e3, s2) # 'APshifts' tuple, discourse
dirname2, yearstring, exp_name, filestub2 = (d1, y2, e3, s3) # 'APshifts' tuple, technical
dirname3, yearstring, exp_name, filestub3 = (d2, y2, e3, s2) # 'APsource' tuple, discourse
dirname4, yearstring, exp_name, filestub4 = (d2, y2, e3, s3) # 'APsource' tuple, technical
extra = "_noalign" if dirname1 == d3 \
and dirname2 == d3 \
and dirname3 == d4 \
and dirname4 == d4 else ""
tuples = True if exp_name == e1 or exp_name == e2 else False
# just for outputs
visuals_dir = "visuals/shift_experiments"+extra+"/"+yearstring+exp_name
# APshift and baseline
dataset_path1 = dirname1+yearstring+exp_name
dataset_path2 = dirname2+yearstring+exp_name
stats_aps, df_dist_dis_aps, df_clust_dis_aps, df_dis_bl = read_results(dataset_path1, filestub1,
exp_name, tuples=tuples)
stats_aps, df_dist_tec_aps, df_clust_tec_aps, df_tec_bl = read_results(dataset_path2, filestub2,
exp_name, tuples=tuples)
# APsource
dataset_path3 = dirname3+yearstring+exp_name
dataset_path4 = dirname4+yearstring+exp_name
stats_src, df_dist_dis_src, df_clust_dis_src = read_results(dataset_path3, filestub3,
exp_name, with_baseline=False,
tuples=tuples)
stats_src, df_dist_tec_src, df_clust_tec_src = read_results(dataset_path4, filestub4,
exp_name, with_baseline=False,
tuples=tuples)
# warnings are normal; ignore them.
###Output
WARNING: no value specified for parameter time_taken:.
WARNING: no value specified for parameter time_taken:.
WARNING: no value specified for parameter time_taken:.
WARNING: no value specified for parameter time_taken:.
###Markdown
Compare Distributions of Length Measurements
###Code
size_factor = 6
linewidth = size_factor/3
dotsize = size_factor*2 # for scatter plots
rows = 1
columns = 1
c = {"dis":"tab:blue",
"tec":"tab:orange"}
ls = {"aps":"-",
"src":"--"}
for aspect,xname in zip(["cluster_size", "mean_length", "std_length", "inner_distance"],
["cluster size", "mean cluster length", "standard deviation of cluster length", "inner distance"]):
fig, axes = plt.subplots(rows, columns, figsize=(size_factor*columns, 0.8*size_factor*rows))
ax=axes
pandas.Series(df_clust_dis_aps[aspect]).plot.kde(label="discourse", ax=ax, color=c["dis"], linestyle=ls["aps"])
pandas.Series(df_clust_dis_src[aspect]).plot.kde(label="", ax=ax, color=c["dis"], linestyle=ls["src"])
if aspect == "inner_distance":
pandas.Series(df_dis_bl[aspect]).plot.kde(label="", ax=ax, color=c["dis"], linestyle=":")
pandas.Series(df_clust_tec_aps[aspect]).plot.kde(label="technical", ax=ax, color=c["tec"], linestyle=ls["aps"])
pandas.Series(df_clust_tec_src[aspect]).plot.kde(label="", ax=ax, color=c["tec"], linestyle=ls["src"])
if aspect == "inner_distance":
pandas.Series(df_tec_bl[aspect]).plot.kde(label="", ax=ax, color=c["tec"], linestyle=":")
plt.plot([], label="AP: shifts", linestyle="-", color="black")
plt.plot([], label="AP: source", linestyle="--", color="black")
if aspect == "inner_distance":
plt.plot([], label="baseline", linestyle=":", color="black")
ax.set_xlabel(xname)
ax.grid()
ax.legend()
#plt.savefig("visuals/shift_experiments/distech_training_technique/"+aspect+".png", dpi=300)
print(aspect)
print(f"discourse APshift: {round(np.mean(df_clust_dis_aps[aspect]), 4)}")
print(f"discourse APsource: {round(np.mean(df_clust_dis_src[aspect]), 4)}")
print(f"technical APshift: {round(np.mean(df_clust_tec_aps[aspect]), 4)}")
print(f"technical APsource: {round(np.mean(df_clust_tec_src[aspect]), 4)}")
print("")
###Output
cluster_size
discourse APshift: 9.8175
discourse APsource: 10.5726
technical APshift: 7.2
technical APsource: 7.6364
mean_length
discourse APshift: 0.391
discourse APsource: 0.3961
technical APshift: 0.3015
technical APsource: 0.3045
std_length
discourse APshift: 0.2171
discourse APsource: 0.2098
technical APshift: 0.1455
technical APsource: 0.1338
inner_distance
discourse APshift: 0.7848
discourse APsource: 0.8378
technical APshift: 0.7892
technical APsource: 0.8046
###Markdown
Pair Distances: Do `dis` and `tec` differ?

This does not work with only the data loaded above; you need to load all six APshift dataframes of the DisTech experiments, which the next cell does.
###Code
# Load the dataframes for the pair distances individually (they'll be deleted after this to save memory)
_, dist1, _ = read_results(d1+y1+e3, s2, e3, tuples=False, with_baseline=False)
_, dist2, _ = read_results(d1+y1+e3, s3, e3, tuples=False, with_baseline=False)
_, dist3, _ = read_results(d1+y2+e3, s2, e3, tuples=False, with_baseline=False)
_, dist4, _ = read_results(d1+y2+e3, s3, e3, tuples=False, with_baseline=False)
_, dist5, _ = read_results(d3+y2+e3, s2, e3, tuples=False, with_baseline=False)
_, dist6, _ = read_results(d3+y2+e3, s3, e3, tuples=False, with_baseline=False)
size_factor = 7
linewidth = size_factor/3
dotsize = size_factor*2 # for scatter plots
rows = 1
columns = 1
fig, axes = plt.subplots(rows, columns, figsize=(size_factor*columns, 0.8*size_factor*rows))
c1 = "tab:blue"
axes.boxplot([dist1["distance"], dist3["distance"], dist5["distance"]], positions=[1,3,5],
boxprops=dict(color=c1),
labels=["1740", "1860", "noalign"],
medianprops=dict(color=c1))
c2 = "tab:orange"
axes.boxplot([dist2["distance"], dist4["distance"], dist6["distance"]], positions=[2, 4, 6],
boxprops=dict(color=c2),
labels=["1740", "1860", "noalign"],
medianprops=dict(color=c2))
axes.plot([],[], color=c1, label="discourse") # for the legend
axes.plot([],[], color=c2, label="technical")
axes.grid()
axes.legend()
axes.set_xlabel("space pair")
axes.set_ylabel("cosine distance (individual pairs)")
outdir = "visuals/shift_experiments/distech_boxplot.png"
print(outdir)
#plt.savefig(outdir, dpi=250)
# clean up!
del dist1
del dist2
del dist3
del dist4
del dist5
del dist6
###Output
_____no_output_____
###Markdown
Compare Clusters: Scatter Plots and KDE-Curves (not Included in the Thesis)
###Code
size_factor = 7
linewidth = size_factor/6
dotsize = size_factor*2 # for scatter plots
rows = 3
columns = 2
fig, axes = plt.subplots(rows, columns, figsize=(size_factor*columns, size_factor*rows))
# for estimated normal distributions
samples = max(len(df_clust_dis_aps), len(df_clust_tec_aps), len(df_clust_dis_src), len(df_clust_tec_src))
normal = make_normal_dist(samples, 1000)
bw="scott"
c = {"dis":"tab:blue",
"tec":"tab:orange"}
length_measure = "max_length"
length_label = "max length"
# 1. not normalized, for scatter plots
# cluster sizes
aps_dis_sizes = df_clust_dis_aps["cluster_size"]
aps_tec_sizes = df_clust_tec_aps["cluster_size"]
src_dis_sizes = df_clust_dis_src["cluster_size"]
src_tec_sizes = df_clust_tec_src["cluster_size"]
# values for AP-SHIFT
aps_dis_lengths = df_clust_dis_aps[length_measure]
aps_dis_inner = df_clust_dis_aps["inner_distance"]
aps_tec_lengths = df_clust_tec_aps[length_measure]
aps_tec_inner = df_clust_tec_aps["inner_distance"]
# values for AP-SOURCE
src_dis_lengths = df_clust_dis_src[length_measure]
src_dis_inner = df_clust_dis_src["inner_distance"]
src_tec_lengths = df_clust_tec_src[length_measure]
src_tec_inner = df_clust_tec_src["inner_distance"]
max_cluster_size = np.max([np.max(v) for v in [aps_dis_sizes, aps_tec_sizes,
src_dis_sizes, src_tec_sizes]])
y_lim = (1, max_cluster_size+1)
min_length = np.min([np.min(v) for v in [aps_dis_lengths, aps_tec_lengths, src_dis_lengths, src_tec_lengths]])#, bl_dis_maxes, bl_tec_maxes]])
min_inner = np.min([np.min(v) for v in [aps_dis_inner, aps_tec_inner, src_dis_inner, src_tec_inner]])#, bl_dis_inner, bl_tec_inner]])
max_length = np.max([np.max(v) for v in [aps_dis_lengths, aps_tec_lengths, src_dis_lengths, src_tec_lengths]])#, bl_dis_maxes, bl_tec_maxes]])
max_inner = np.max([np.max(v) for v in [aps_dis_inner, aps_tec_inner, src_dis_inner, src_tec_inner]])#, bl_dis_inner, bl_tec_inner]])
xlim_length = (min_length-0.05, max_length+0.05)
xlim_inner = (min_inner-0.05, max_inner+0.05)
def make_scatterplot(ax, xs, ys, labels, colors, axlabels, axlimits, dotsize):
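    # One scatter series per (x, y) pair, all drawn on the same axis with shared limits and labels.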
for x,y,l,c in zip(xs, ys, labels, colors):
ax.scatter(x,y,label=l, s=dotsize)
ax.set_xlim(axlimits[0])
ax.set_ylim(axlimits[1])
ax.legend()
ax.grid()
ax.set_xlabel(axlabels[0])
ax.set_ylabel(axlabels[1])
# shift cluster lengths of AP-SHIFT
make_scatterplot(axes[0,0],
[aps_dis_lengths, aps_tec_lengths],
[aps_dis_sizes, aps_tec_sizes],
["dis. (AP:shifts)", "tec. (AP:shifts)"],
[c["dis"],c["tec"]],
(length_label, "cluster size"), (xlim_length, y_lim), dotsize)
make_scatterplot(axes[0,1],
[aps_dis_inner, aps_tec_inner],
[aps_dis_sizes, aps_tec_sizes],
["dis. (AP:shifts)", "tec. (AP:shifts)"],
[c["dis"],c["tec"]],
("inner cluster distance", "cluster size"), (xlim_inner, y_lim), dotsize)
# shift cluster lengths of AP-FIRST
make_scatterplot(axes[1,0],
[src_dis_lengths, src_tec_lengths],
[src_dis_sizes, src_tec_sizes],
["dis. (AP: source)", "tec. (AP: source)"],
[c["dis"],c["tec"]],
(length_label, "cluster size"), (xlim_length, y_lim), dotsize)
make_scatterplot(axes[1,1],
[src_dis_inner, src_tec_inner],
[src_dis_sizes, src_tec_sizes],
["dis. (AP:source)", "tec. (AP:source)"],
[c["dis"],c["tec"]],
("inner cluster distance", "cluster size"), (xlim_inner, y_lim), dotsize)
# estimated normal distributions (AP-SHIFT and AP-FIRST combined)
# normalized values for estimated distributions
aps_dis_lengths = add_z_scores(df_clust_dis_aps, length_measure+"_zscore", aps_dis_lengths)
aps_dis_inner = add_z_scores(df_clust_dis_aps, "inner_dist_zscore", aps_dis_inner)
aps_tec_lengths = add_z_scores(df_clust_tec_aps, length_measure+"_zscore", aps_tec_lengths)
aps_tec_inner = add_z_scores(df_clust_tec_aps, "inner_dist_zscore", aps_tec_inner)
src_dis_lengths = add_z_scores(df_clust_dis_src, length_measure+"_zscore", src_dis_lengths)
src_dis_inner = add_z_scores(df_clust_dis_src, "inner_dist_zscore", src_dis_inner)
src_tec_lengths = add_z_scores(df_clust_tec_src, length_measure+"_zscore", src_tec_lengths)
src_tec_inner = add_z_scores(df_clust_tec_src, "inner_dist_zscore", src_tec_inner)
# cluster sizes
aps_dis_sizes = add_z_scores(df_clust_dis_aps, "cluster_size_zscore", aps_dis_sizes)
aps_tec_sizes = add_z_scores(df_clust_tec_aps, "cluster_size_zscore", aps_tec_sizes)
src_dis_sizes = add_z_scores(df_clust_dis_src, "cluster_size_zscore", src_dis_sizes)
src_tec_sizes = add_z_scores(df_clust_tec_src, "cluster_size_zscore", src_tec_sizes)
# cluster lengths
ax = axes[2,0]
make_kde_plots(ax,
[aps_dis_lengths, src_dis_lengths, aps_tec_lengths, src_tec_lengths, normal],
["dis. shift", "dis. source", "tec. shift", "tec. source", "normal"],
[c["dis"], c["dis"], c["tec"], c["tec"], "black"],
["-", "--", "-", "--", ":"],
[linewidth]*5, method=bw)
ax.grid()
ax.legend()
ax.set_ylabel("probability")
ax.set_xlabel(length_label+" (normalized)")
# inner distances
ax = axes[2,1]
make_kde_plots(ax,
[aps_dis_inner, src_dis_inner, aps_tec_inner, src_tec_inner, normal],
["dis. shift", "dis. source", "tec. shift", "tec. source", "normal"],
[c["dis"], c["dis"], c["tec"], c["tec"], "black"],
["-", "--", "-", "--", ":"],
[linewidth]*5, method=bw)
ax.grid()
ax.legend()
ax.set_ylabel("probability")
ax.set_xlabel("inner distance (normalized)")
print(visuals_dir+"distech_scatter_normdist.png")
#plt.savefig(visuals_dir+"distech_scatter_normdist.png", dpi=300)
###Output
visuals/shift_experiments/1860_1890/dis_tech/distech_scatter_normdist.png
###Markdown
Make Scatter Plots of Lengths against Inner Distances (not Included in the Thesis)

Instead of only plotting 3 data sets, we now have 6: (aps, src, bl) x (dis, tec).
###Code
size_factor = 7
linewidth = size_factor/3 # for line plots
dotsize = size_factor*1 # for scatter plots
rows=2
columns=3
fig, axes = plt.subplots(rows, columns, figsize=(size_factor*columns, size_factor*rows))
# values for AP-SHIFT
# [0,0]
aps_dis_maxes = df_clust_dis_aps["max_length"]
aps_dis_means = df_clust_dis_aps["mean_length"]
aps_dis_medians = df_clust_dis_aps["median_length"]
aps_dis_std = df_clust_dis_aps["std_length"]
aps_dis_inner = df_clust_dis_aps["inner_distance"]
# [1,0]
aps_tec_maxes = df_clust_tec_aps["max_length"]
aps_tec_means = df_clust_tec_aps["mean_length"]
aps_tec_medians = df_clust_tec_aps["median_length"]
aps_tec_std = df_clust_tec_aps["std_length"]
aps_tec_inner = df_clust_tec_aps["inner_distance"]
# values for AP-FIRST
# [0,1]
src_dis_maxes = df_clust_dis_src["max_length"]
src_dis_means = df_clust_dis_src["mean_length"]
src_dis_medians = df_clust_dis_src["median_length"]
src_dis_std = df_clust_dis_src["std_length"]
src_dis_inner = df_clust_dis_src["inner_distance"]
# [1,1]
src_tec_maxes = df_clust_tec_src["max_length"]
src_tec_means = df_clust_tec_src["mean_length"]
src_tec_medians = df_clust_tec_src["median_length"]
src_tec_std = df_clust_tec_src["std_length"]
src_tec_inner = df_clust_tec_src["inner_distance"]
# values for baseline
# [0,2]
bl_dis_maxes = df_dis_bl["max_length"]
bl_dis_means = df_dis_bl["mean_length"]
bl_dis_medians = df_dis_bl["median_length"]
bl_dis_std = df_dis_bl["std_length"]
bl_dis_inner = df_dis_bl["inner_distance"]
# [1,2]
bl_tec_maxes = df_tec_bl["max_length"]
bl_tec_means = df_tec_bl["mean_length"]
bl_tec_medians = df_tec_bl["median_length"]
bl_tec_std = df_tec_bl["std_length"]
bl_tec_inner = df_tec_bl["inner_distance"]
min_max = np.min([np.min(v) for v in [aps_dis_maxes, aps_tec_maxes, src_dis_maxes, src_tec_maxes, bl_dis_maxes, bl_tec_maxes]])
min_mean = np.min([np.min(v) for v in [aps_dis_means, aps_tec_means, src_dis_means, src_tec_means, bl_dis_means, bl_tec_means]])
min_median = np.min([np.min(v) for v in [aps_dis_medians, aps_tec_medians, src_dis_medians, src_tec_medians, bl_dis_medians, bl_tec_medians]])
min_std = np.min([np.min(v) for v in [aps_dis_std, aps_tec_std, src_dis_std, src_tec_std, bl_dis_std, bl_tec_std]])
min_inner = np.min([np.min(v) for v in [aps_dis_inner, aps_tec_inner, src_dis_inner, src_tec_inner, bl_dis_inner, bl_tec_inner]])
max_max = np.max([np.max(v) for v in [aps_dis_maxes, aps_tec_maxes, src_dis_maxes, src_tec_maxes, bl_dis_maxes, bl_tec_maxes]])
max_mean = np.max([np.max(v) for v in [aps_dis_means, aps_tec_means, src_dis_means, src_tec_means, bl_dis_means, bl_tec_means]])
max_median = np.max([np.max(v) for v in [aps_dis_medians, aps_tec_medians, src_dis_medians, src_tec_medians, bl_dis_medians, bl_tec_medians]])
max_std = np.max([np.max(v) for v in [aps_dis_std, aps_tec_std, src_dis_std, src_tec_std, bl_dis_std, bl_tec_std]])
max_inner = np.max([np.max(v) for v in [aps_dis_inner, aps_tec_inner, src_dis_inner, src_tec_inner, bl_dis_inner, bl_tec_inner]])
y_lim = (min(min_max, min_mean, min_median, min_std)-0.025,
max(max_max, max_mean, max_median, max_std)+0.025)
x_lim = (min_inner-0.025,
max_inner+0.025)
cs = {"max":"dodgerblue", # line color
"mean":"tab:red",
"median":"tab:green",
"std":"orange",
"normal":"black"}
cl=cs
labels = [ "max", "mean", "median", "std"]
colors = [cs["max"], cs["mean"], cs["median"], cs["std"]]
# DISCOURSE
# APshift
scatter_and_regressionline(axes[0,0], aps_dis_inner,
[aps_dis_maxes, aps_dis_means, aps_dis_medians, aps_dis_std],
labels, colors, ("average length", "inner distance (discourse, AP: shifts)"),
(x_lim, y_lim), dotsize, linewidth)
# APsource
scatter_and_regressionline(axes[0,1], src_dis_inner,
[src_dis_maxes, src_dis_means, src_dis_medians, src_dis_std],
labels, colors, ("average length", "inner distance (discourse, AP: source)"),
(x_lim, y_lim), dotsize, linewidth)
# baseline
scatter_and_regressionline(axes[0,2], bl_dis_inner,
[bl_dis_maxes, bl_dis_means, bl_dis_medians, bl_dis_std],
labels, colors, ("average length", "inner distance (discourse, baseline)"),
(x_lim, y_lim), dotsize, linewidth)
# TECHNICAL
# APshift
scatter_and_regressionline(axes[1,0], aps_tec_inner,
[aps_tec_maxes, aps_tec_means, aps_tec_medians, aps_tec_std],
labels, colors, ("average length", "inner distance (technical, AP: shifts)"),
(x_lim, y_lim), dotsize, linewidth)
# APsource
scatter_and_regressionline(axes[1,1], src_tec_inner,
[src_tec_maxes, src_tec_means, src_tec_medians, src_tec_std],
labels, colors, ("average length", "inner distance (technical, AP: source)"),
(x_lim, y_lim), dotsize, linewidth)
# baseline
scatter_and_regressionline(axes[1,2], bl_tec_inner,
[bl_tec_maxes, bl_tec_means, bl_tec_medians, bl_tec_std],
labels, colors, ("average length", "inner distance (technical, baseline)"),
(x_lim, y_lim), dotsize, linewidth)
print(visuals_dir+"inner_length_std.png")
#plt.savefig(visuals_dir+"inner_length_std.png", dpi=300)
def rho(x,y): return np.corrcoef(x,y)[0,1]
print(f"{'correlations':<20} {'max':<6} {'mean':<6} {'median':<6} {'std':<6}")
print(f"{'discourse, APshifts':<20} {rho(aps_dis_inner, aps_dis_maxes):<5.4f} {rho(aps_dis_inner, aps_dis_means):<5.4f} {rho(aps_dis_inner, aps_dis_medians):<5.4f} {rho(aps_dis_inner, aps_dis_std):<5.4f}")
print(f"{'technical, APshifts':<20} {rho(aps_tec_inner, aps_tec_maxes):<5.4f} {rho(aps_tec_inner, aps_tec_means):<5.4f} {rho(aps_tec_inner, aps_tec_medians):<5.4f} {rho(aps_tec_inner, aps_tec_std):<5.4f}")
print(f"{'discourse, APsource':<20} {rho(src_dis_inner, src_dis_maxes):<5.4f} {rho(src_dis_inner, src_dis_means):<5.4f} {rho(src_dis_inner, src_dis_medians):<5.4f} {rho(src_dis_inner, src_dis_std):<5.4f}")
print(f"{'technical, APsource':<20} {rho(src_tec_inner, src_tec_maxes):<5.4f} {rho(src_tec_inner, src_tec_means):<5.4f} {rho(src_tec_inner, src_tec_medians):<5.4f} {rho(src_tec_inner, src_tec_std):<5.4f}")
print(f"{'discourse, baseline':<20} { rho(bl_dis_inner, bl_dis_maxes):<5.4f} { rho(bl_dis_inner, bl_dis_means):<5.4f} { rho(bl_dis_inner, bl_dis_medians):<5.4f} {rho(bl_dis_inner, bl_dis_std):<5.4f}")
print(f"{'technical, baseline':<20} { rho(bl_tec_inner, bl_tec_maxes):<5.4f} { rho(bl_tec_inner, bl_tec_means):<5.4f} { rho(bl_tec_inner, bl_tec_medians):<5.4f} {rho(bl_tec_inner, bl_tec_std):<5.4f}")
#with open("visuals/shift_experiments/length_correlations.txt", "a") as f:
# f.write("\n\n"+visuals_dir+"inner_length_std.png")
# f.write(f"\n{'correlations':<20} {'max':<6} {'mean':<6} {'median':<6} {'std':<6}")
# f.write(f"\n{'discourse, APshifts':<20} {rho(aps_dis_inner, aps_dis_maxes):<5.4f} {rho(aps_dis_inner, aps_dis_means):<5.4f} {rho(aps_dis_inner, aps_dis_medians):<5.4f} {rho(aps_dis_inner, aps_dis_std):<5.4f}")
# f.write(f"\n{'technical, APshifts':<20} {rho(aps_tec_inner, aps_tec_maxes):<5.4f} {rho(aps_tec_inner, aps_tec_means):<5.4f} {rho(aps_tec_inner, aps_tec_medians):<5.4f} {rho(aps_tec_inner, aps_tec_std):<5.4f}")
# f.write(f"\n{'discourse, APsource':<20} {rho(src_dis_inner, src_dis_maxes):<5.4f} {rho(src_dis_inner, src_dis_means):<5.4f} {rho(src_dis_inner, src_dis_medians):<5.4f} {rho(src_dis_inner, src_dis_std):<5.4f}")
# f.write(f"\n{'technical, APsource':<20} {rho(src_tec_inner, src_tec_maxes):<5.4f} {rho(src_tec_inner, src_tec_means):<5.4f} {rho(src_tec_inner, src_tec_medians):<5.4f} {rho(src_tec_inner, src_tec_std):<5.4f}")
# f.write(f"\n{'discourse, baseline':<20} { rho(bl_dis_inner, bl_dis_maxes):<5.4f} { rho(bl_dis_inner, bl_dis_means):<5.4f} { rho(bl_dis_inner, bl_dis_medians):<5.4f} {rho(bl_dis_inner, bl_dis_std):<5.4f}")
# f.write(f"\n{'technical, baseline':<20} { rho(bl_tec_inner, bl_tec_maxes):<5.4f} { rho(bl_tec_inner, bl_tec_means):<5.4f} { rho(bl_tec_inner, bl_tec_medians):<5.4f} {rho(bl_tec_inner, bl_tec_std):<5.4f}")
###Output
visuals/shift_experiments/1860_1890/dis_tech/inner_length_std.png
correlations max mean median std
discourse, APshifts 0.1854 0.0412 -0.0716 0.1274
technical, APshifts 0.2256 0.1360 0.1236 0.1774
discourse, APsource 0.1109 0.0585 -0.0088 0.1035
technical, APsource 0.3359 0.4095 0.4094 0.2534
discourse, baseline -0.0172 -0.0402 -0.0906 0.0340
technical, baseline -0.0857 -0.0191 -0.0895 -0.0404
###Markdown
Qualitative Analysis (DisTech)
Discourse Word Shift Clusters
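The `+/-1.65` z-score cut-off used below (when `k==0`) corresponds roughly to the one-sided 5% tail of a standard normal distribution; a quick check, assuming `scipy` is available:
```
from scipy.stats import norm

# one-sided 5% critical value of the standard normal, ~1.645,
# i.e. the +/-1.65 threshold used by significant_clusters below
print(norm.ppf(0.95))
```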
###Code
k=0 # select clusters by z-score (+/-1.65) if k==0 or just the top/bottom k
low_maxes = significant_clusters(df_clust_dis_aps, "max_length_zscore", "low",k=k)
low_means = significant_clusters(df_clust_dis_aps, "mean_length_zscore", "low",k=k)
low_sizes = significant_clusters(df_clust_dis_aps, "cluster_size_zscore", "low",k=k)
low_inner_dist = significant_clusters(df_clust_dis_aps, "inner_dist_zscore", "low",k=k)
high_maxes = significant_clusters(df_clust_dis_aps, "max_length_zscore", "high",k=k)
high_means = significant_clusters(df_clust_dis_aps, "mean_length_zscore", "high",k=k)
high_sizes = significant_clusters(df_clust_dis_aps, "cluster_size_zscore", "high",k=k)
high_inner_dist = significant_clusters(df_clust_dis_aps, "inner_dist_zscore", "high",k=k)
print_clusters(low_sizes, "SMALLEST CLUSTERS")
print_clusters(high_sizes, "BIGGEST CLUSTERS")
print_clusters(low_means, "SHORTEST CLUSTERS")
print_clusters(high_means, "LONGEST CLUSTERS")
print_clusters(low_inner_dist, "TIGHTEST CLUSTERS")
print_clusters(high_inner_dist, "LOOSEST CLUSTERS")
###Output
TIGHTEST CLUSTERS
centroid: arterial (ID: 120)
size: 3
labels: venous, **, pulmonary, lymphatic, vena
members: specific, arterial, writing
centroid: identifying (ID: 121)
size: 9
labels: discovering, associating, transferring, maintaining, isolating
members: charging, exclusive, saving, crossing, isolating, identifying, getting, reducible, placing
centroid: illuminating (ID: 122)
size: 5
labels: shutting, illuminated, incandescent, diselectrifying, otto
members: illuminating, coming, permitting, burning, emerging
centroid: assumed (ID: 123)
size: 6
labels: supposed, imagined, known, called, said
members: found, reading, suspected, common, supposed, assumed
centroid: worthy (ID: 124)
size: 4
labels: deserving, deserves, deserve, priority, merit
members: worthy, deserving, outside, capable
centroid: comparable (ID: 125)
size: 7
labels: distinguishable, differing, akin, correlated, obtainable
members: ramifying, peripheral, homologous, estimating, favourable, marginal, comparable
LOOSEST CLUSTERS
centroid: arising (ID: 0)
size: 9
labels: derived, aiding, leading, arises, accidental
members: relating, viz., arising, accidental, destroying, suggested, facilitating, involving, is
centroid: exciting (ID: 1)
size: 7
labels: stimulating, anus, yhith, excitation, adduction
members: exciting, depressing, cold, removing, surprise, wrong, visible
###Markdown
Technical Term Shift Clusters
###Code
k=0 #select clusters by z-score (+/-1.65) if k==0 or just the top/bottom k
low_maxes = significant_clusters(df_clust_tec_aps, "max_length_zscore", "low",k=k)
low_means = significant_clusters(df_clust_tec_aps, "mean_length_zscore", "low",k=k)
low_sizes = significant_clusters(df_clust_tec_aps, "cluster_size_zscore", "low",k=k)
low_inner_dist = significant_clusters(df_clust_tec_aps, "inner_dist_zscore", "low",k=k)
high_maxes = significant_clusters(df_clust_tec_aps, "max_length_zscore", "high",k=k)
high_means = significant_clusters(df_clust_tec_aps, "mean_length_zscore", "high",k=k)
high_sizes = significant_clusters(df_clust_tec_aps, "cluster_size_zscore", "high",k=k)
high_inner_dist = significant_clusters(df_clust_tec_aps, "inner_dist_zscore", "high",k=k)
print_clusters(low_sizes, "SMALLEST CLUSTERS")
print_clusters(high_sizes, "BIGGEST CLUSTERS")
print_clusters(low_means, "SHORTEST CLUSTERS")
print_clusters(high_means, "LONGEST CLUSTERS")
print_clusters(low_inner_dist, "TIGHTEST CLUSTERS")
print_clusters(high_inner_dist, "LOOSEST CLUSTERS")
###Output
TIGHTEST CLUSTERS
centroid: oxygen (ID: 33)
size: 7
labels: c02, c03, air, nitrogen, carbon
members: substance, ozone, oxygen, executed, heat, cyanogen, hydrogen
centroid: gave (ID: 34)
size: 3
labels: yielded, gives, giving, give, yields
members: yielded, gave, giving
LOOSEST CLUSTERS
centroid: derived (ID: 0)
size: 8
labels: derivable, originated, deduced, arisen, eliminated
members: said, doubly, proved, derived, produced, methods, derivable, deduced
|
Model backlog/Models/Inference/101-cassava-leaf-inf-effnetb3-scl-cce-bn-sgd-512.ipynb | ###Markdown
Dependencies
###Code
!pip install --quiet /kaggle/input/kerasapplications
!pip install --quiet /kaggle/input/efficientnet-git
import warnings, glob
from tensorflow.keras import Sequential, Model
import efficientnet.tfkeras as efn
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Hardware configuration
###Code
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
###Output
REPLICAS: 1
###Markdown
Model parameters
###Code
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 512
WIDTH = 512
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
###Output
_____no_output_____
###Markdown
Augmentation
###Code
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# # Pixel-level transforms
# if p_pixel_1 >= .4:
# image = tf.image.random_saturation(image, lower=.7, upper=1.3)
# if p_pixel_2 >= .4:
# image = tf.image.random_contrast(image, lower=.8, upper=1.2)
# if p_pixel_3 >= .4:
# image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
###Output
_____no_output_____
###Markdown
Auxiliary functions
###Code
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/101-cassava-leaf-effnetb3-scl-cce-bn-sgd-512x512/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
###Output
Models to predict:
/kaggle/input/101-cassava-leaf-effnetb3-scl-cce-bn-sgd-512x512/model_0.h5
###Markdown
Model
###Code
def encoder_fn(input_shape):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB3(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
model = Model(inputs=inputs, outputs=base_model.output)
return model
def classifier_fn(input_shape, N_CLASSES, encoder, trainable=True):
for layer in encoder.layers:
layer.trainable = trainable
unfreeze_model(encoder) # unfreeze all layers except "batch normalization"
inputs = L.Input(shape=input_shape, name='input_image')
features = encoder(inputs)
features = L.Dropout(.5)(features)
features = L.Dense(512, activation='relu')(features)
features = L.Dropout(.5)(features)
output = L.Dense(N_CLASSES, activation='softmax', name='output', dtype='float32')(features)
output_healthy = L.Dense(1, activation='sigmoid', name='output_healthy', dtype='float32')(features)
output_cmd = L.Dense(1, activation='sigmoid', name='output_cmd', dtype='float32')(features)
model = Model(inputs=inputs, outputs=[output, output_healthy, output_cmd])
return model
with strategy.scope():
encoder = encoder_fn((None, None, CHANNELS))
model = classifier_fn((None, None, CHANNELS), N_CLASSES, encoder, trainable=True)
model.summary()
###Output
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_image (InputLayer) [(None, None, None, 0
__________________________________________________________________________________________________
model (Model) (None, 1536) 10783528 input_image[0][0]
__________________________________________________________________________________________________
dropout (Dropout) (None, 1536) 0 model[1][0]
__________________________________________________________________________________________________
dense (Dense) (None, 512) 786944 dropout[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 512) 0 dense[0][0]
__________________________________________________________________________________________________
output (Dense) (None, 5) 2565 dropout_1[0][0]
__________________________________________________________________________________________________
output_healthy (Dense) (None, 1) 513 dropout_1[0][0]
__________________________________________________________________________________________________
output_cmd (Dense) (None, 1) 513 dropout_1[0][0]
==================================================================================================
Total params: 11,574,063
Trainable params: 11,399,471
Non-trainable params: 174,592
__________________________________________________________________________________________________
###Markdown
Test set predictions
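When `TTA_STEPS > 0`, the loop below averages several augmented passes per image; the `order='F'` reshape regroups the flattened predictions so that row i collects the `TTA_STEPS` predictions made for image i (this assumes the repeated dataset yields the test files in the same order on every pass). A tiny stand-alone sanity check of that reshape:
```
import numpy as np

# Fake "predictions": 2 images x 3 TTA passes, flattened pass-after-pass
# (the ordering assumed by the prediction loop below).
n_img, tta, n_cls = 2, 3, 1
flat = np.array([[f'img{i}_pass{p}'] for p in range(tta) for i in range(n_img)])
regrouped = flat.reshape(n_img, tta, n_cls, order='F')
print(regrouped[0, :, 0])  # all three passes for image 0
print(regrouped[1, :, 0])  # all three passes for image 1
```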
###Code
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = TTA_STEPS * ((test_size/BATCH_SIZE) + 1)
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[0][:(test_size * TTA_STEPS)]
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test)[0] / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
###Output
_____no_output_____ |
notebooks/setup.ipynb | ###Markdown
Setup Jupyter on local machine
Prerequisites
1. Download and install Python 3.6 [Python Download Page](https://www.python.org/downloads/ "Python Software Foundation") (TensorFlow is not available for 3.7; for Windows downloads pay attention to the 64/32 bit installer; installing for all users requires an admin password)
2. Create a folder for the Jupyter notebook in your home dir
3. Create a Python virtual env
4. Install jupyter
5. Create a start-up script to activate the venv and start the notebook
```
#!/usr/bin/env bash
cd
mkdir -p jupyter
cd jupyter
[[ -d venv ]] || {
    python3.6 -m venv venv
    . venv/bin/activate
    pip install --upgrade pip
    pip install jupyter
}
[[ -f notebook.sh ]] || {
    echo -e "#!/usr/bin/env bash\ncd $(dirname $(readlink -f $0))\n. venv/bin/activate\n./venv/bin/jupyter-notebook" >notebook.sh
    chmod +x notebook.sh
}
./notebook.sh
# Use notebook.sh to start jupyter notebook
```
Use setup notebook
1. Open this notebook and run it.

Install Required Packages
###Code
!pip install --upgrade pip
!pip install geoplotlib keras matplotlib pandas pyglet scikit-learn scipy seaborn progressbar theano
!pip install numpy==1.14.5 tensorflow==1.10.1
###Output
_____no_output_____
###Markdown
Review Installed Versions
###Code
import sys, warnings
print('Python: {}'.format(sys.version))
warnings.filterwarnings('ignore')
for module in ('geoplotlib', 'keras', 'matplotlib', 'pandas', 'progressbar','sklearn', 'scipy', 'seaborn', 'theano',
'numpy', 'tensorflow'):
try:
print('{}: {}'.format(module, getattr(__import__(module, globals(), locals(), [], 0), '__version__')))
except AttributeError:
print('{}: {}'.format(module, 'unknown version'))
except (ImportError, ModuleNotFoundError) as e:
print('Error while importing module {}: "{}"'.format(module, e))
###Output
_____no_output_____
###Markdown
Setup for ag1000g phase 2 analysis
###Code
%%HTML
<style type="text/css">
.container {
width: 100%;
}
</style>
# python standard library
import sys
import os
import operator
import itertools
import collections
import functools
import glob
import csv
import datetime
import bisect
import sqlite3
import subprocess
import random
import gc
import shutil
import shelve
import contextlib
import tempfile
import math
import warnings
# plotting setup
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib.gridspec import GridSpec
import seaborn as sns
sns.set_context('paper')
sns.set_style('ticks')
# use seaborn defaults
rcParams = plt.rcParams
rcParams['savefig.jpeg_quality'] = 100
%matplotlib inline
%config InlineBackend.figure_formats = {'retina', 'png'}
# general purpose third party packages
import numpy as np
nnz = np.count_nonzero
import scipy
import scipy.stats
import scipy.spatial.distance
import numexpr
import h5py
import tables
import bcolz
import dask
import dask.array as da
import pandas as pd
import IPython
from IPython.display import clear_output, display, HTML
import sklearn
import sklearn.decomposition
import sklearn.manifold
import petl as etl
etl.config.display_index_header = True
import humanize
from humanize import naturalsize, intcomma, intword
import zarr
from scipy.stats import entropy
import lmfit
#analysis packages
import allel
sys.path.insert(0, '../agam-report-base/src/python')
from util import *
from ag1k import phase2_ar1
# This is a symlink in your root directory
# eg: ln -s /kwiat/vector/ag1000g/release/phase2.AR1 .
phase2_ar1.init("../phase2.AR1")
region_vgsc = SeqFeature('2L', 2358158, 2431617, label='Vgsc')
import veff
###Output
_____no_output_____
###Markdown
Bootstrapping the project
To run cogito, you will need to install and set up some software on your system.
- zsh: Because it's cooler than bash
- pyenv: To manage python environments
- jupyter: To install jupyterlab and notebook itself
- kotlin-jupyter: A kotlin kernel for jupyter

Installing pyenv
pyenv is a tool that lets you not only install different python virtual environments, but also different python versions. This is the best way to manage both. Before running the script block here you will want
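For reference, once pyenv is installed a typical workflow for giving a project its own interpreter and environment looks roughly like the commands below; the version number and environment name are illustrative assumptions, and `pyenv virtualenv` needs the pyenv-virtualenv plugin (normally set up by the installer used in the next cell).
```
# Illustrative only -- adjust the version and environment name to your setup.
!pyenv install 3.8.10            # install a specific interpreter
!pyenv virtualenv 3.8.10 cogito  # create an environment from it (pyenv-virtualenv plugin)
!pyenv local cogito              # pin this project directory to that environment
```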
###Code
!curl https://pyenv.run | bash
!exec $SHELL
!pyenv update
###Output
_____no_output_____
###Markdown
Setup Environment
###Code
from google.colab import drive
drive.mount('/content/drive')
!cd drive
!ls -al
###Output
total 20
drwxr-xr-x 1 root root 4096 Mar 9 08:24 .
drwxr-xr-x 1 root root 4096 Mar 9 08:19 ..
drwxr-xr-x 1 root root 4096 Feb 26 17:33 .config
drwx------ 3 root root 4096 Mar 9 08:24 drive
drwxr-xr-x 1 root root 4096 Feb 26 17:33 sample_data
###Markdown
Setup
Sets up the environment for Ag1000G Selection Atlas.
###Code
%%HTML
<style type="text/css">
.container {
width: 100%;
}
</style>
# python standard library
import sys
import os
import operator
import itertools
import collections
import functools
import glob
import csv
import datetime
import bisect
import sqlite3
import subprocess
import random
import gc
import shutil
import shelve
import contextlib
import tempfile
import math
# plotting setup
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib.gridspec import GridSpec
import seaborn as sns
sns.set_context('paper')
sns.set_style('darkgrid')
# use seaborn defaults
rcParams = plt.rcParams
rcParams['savefig.jpeg_quality'] = 100
%matplotlib inline
%config InlineBackend.figure_formats = {'retina', 'png'}
# general purpose third party packages
import numpy as np
nnz = np.count_nonzero
import scipy
import scipy.stats
import scipy.spatial.distance
import numexpr
import h5py
import tables
import bcolz
import dask
import dask.array as da
import pandas as pd
import IPython
from IPython.display import clear_output, display, HTML
import sklearn
import sklearn.decomposition
import sklearn.manifold
import petl as etl
etl.config.display_index_header = True
import humanize
from humanize import naturalsize, intcomma, intword
import zarr
from scipy.stats import entropy
import lmfit
import allel
sys.path.insert(0, '../agam-report-base/src/python')
from util import *
import zcache
import veff
import hapclust
%reload_ext autoreload
%autoreload 1
%aimport rockies
sys.path.insert(0, '../scripts')
from setup import *
###Output
_____no_output_____ |
01_data_cleaning.ipynb | ###Markdown
Data cleaning Setup
###Code
# Loading packages and their components
import pandas as pd
import numpy as np
import pickle
# Setting Pandas options
# pd.options.display.max_rows = 999 # For debugging, can be removed later
pd.options.mode.chained_assignment = None # Disabling the pandas chained assignment warnings
def import_and_preproc():
# Read in the data
dengue_features_train = pd.read_csv('data/dengue_features_train.csv')
dengue_features_test = pd.read_csv('data/dengue_features_test.csv')
dengue_labels_train = pd.read_csv('data/dengue_labels_train.csv')
raw_data = [dengue_features_train, dengue_features_test, dengue_labels_train]
# Splitting the data into a San Juan and an Iquitos part
iq = []
sj = []
for item in raw_data:
sj.append( item[item.city=='sj'] )
iq.append( item[item.city=='iq'] )
# Transferring the date column to the label part of the data
sj[2] = sj[2].join(sj[0]['week_start_date'])
iq[2] = iq[2].join(iq[0]['week_start_date'])
# Converting the date column to datetime format
for i in range(len(sj)):
sj[i]['week_start_date'] = pd.to_datetime(sj[i]['week_start_date'], format='%Y-%m-%d')
iq[i]['week_start_date'] = pd.to_datetime(iq[i]['week_start_date'], format='%Y-%m-%d')
# Putting the date as index
for i in range(len(sj)):
sj[i] = sj[i].set_index('week_start_date', drop=False)
iq[i] = iq[i].set_index('week_start_date', drop=False)
return list([sj[0], sj[1], sj[2], iq[0], iq[1], iq[2]])
data_subsets = import_and_preproc()
###Output
_____no_output_____
###Markdown
Features in the datasetCity and date indicators* `city` – City abbreviations: `sj` for San Juan and `iq` for Iquitos* `week_start_date` – Date given in yyyy-mm-dd formatNOAA's GHCN daily climate data weather station measurements* `station_max_temp_c` – Maximum temperature* `station_min_temp_c` – Minimum temperature* `station_avg_temp_c` – Average temperature* `station_precip_mm` – Total precipitation* `station_diur_temp_rng_c` – Diurnal temperature rangePERSIANN satellite precipitation measurements (0.25x0.25 degree scale)* `precipitation_amt_mm` – Total precipitationNOAA's NCEP Climate Forecast System Reanalysis measurements (0.5x0.5 degree scale)* `reanalysis_air_temp_k` – Mean air temperature* `reanalysis_relative_humidity_percen` – Mean relative humidity* `reanalysis_specific_humidity_g_per_kg` – Mean specific humidity* `reanalysis_precip_amt_kg_per_mm` – Total precipitation* `reanalysis_max_air_temp_k` – Maximum air temperature* `reanalysis_min_air_temp_k` – Minimum air temperature* `reanalysis_avg_temp_k` – Average air temperature* `reanalysis_tdtr_k` – Diurnal temperature rangeSatellite vegetation - Normalized difference vegetation index (NDVI) - NOAA's CDR Normalized Difference Vegetation Index (0.5x0.5 degree scale) measurements* `ndvi_se` – Pixel southeast of city centroid* `ndvi_sw` – Pixel southwest of city centroid* `ndvi_ne` – Pixel northeast of city centroid* `ndvi_nw` – Pixel northwest of city centroid Missing value imputationSince the environmental values for each week are assumed to follow seasonal patterns, they can not be simply replaced with the mean over the entire study. Intstead, missing values in these variables can be replaced with the mean value of the week before and after, or the week before and after that has no missing values.
###Code
environmental_vars = [
'ndvi_ne',
'ndvi_nw',
'ndvi_se',
'ndvi_sw',
'precipitation_amt_mm',
'reanalysis_air_temp_k',
'reanalysis_avg_temp_k',
'reanalysis_dew_point_temp_k',
'reanalysis_max_air_temp_k',
'reanalysis_min_air_temp_k',
'reanalysis_precip_amt_kg_per_m2',
'reanalysis_relative_humidity_percent',
'reanalysis_sat_precip_amt_mm',
'reanalysis_specific_humidity_g_per_kg',
'reanalysis_tdtr_k',
'station_avg_temp_c',
'station_diur_temp_rng_c',
'station_max_temp_c',
'station_min_temp_c',
'station_precip_mm'
]
def replace_missing(df, colnames):
# Store the time index because the code below is index based and needs numbers
date = df.index
df = df.reset_index(drop=True)
for colname in colnames:
try: # because there are columns that do not occur in all subsets of the dataset
miss_idx = df[df[colname].isnull()].index.tolist()
for idx in miss_idx:
# Search the nearest week before the week with the missing value
# that itself has no missing value
before = df.iloc[:idx,:][colname].dropna().tail(1)
# The same but for the weeks after the missing value
after = df.iloc[idx:,:][colname].dropna().head(1)
# Replace the missing value with the mean
df[colname][idx] = np.mean([before, after])
except:
continue
# Re-attach the time index and drop the auxiliary index
df = df.set_index(date, drop=True)
return df
# Applying the Imputation
for i in range(len(data_subsets)):
data_subsets[i] = replace_missing(data_subsets[i], environmental_vars)
###Output
_____no_output_____
###Markdown
Check if there are still variables with missing values in our dataset.
###Code
# Check if there are still variables with missing values in our dataset.
for subset in data_subsets:
print(subset.isnull().sum())
print('---'*10)
###Output
city 0
year 0
weekofyear 0
week_start_date 0
ndvi_ne 0
ndvi_nw 0
ndvi_se 0
ndvi_sw 0
precipitation_amt_mm 0
reanalysis_air_temp_k 0
reanalysis_avg_temp_k 0
reanalysis_dew_point_temp_k 0
reanalysis_max_air_temp_k 0
reanalysis_min_air_temp_k 0
reanalysis_precip_amt_kg_per_m2 0
reanalysis_relative_humidity_percent 0
reanalysis_sat_precip_amt_mm 0
reanalysis_specific_humidity_g_per_kg 0
reanalysis_tdtr_k 0
station_avg_temp_c 0
station_diur_temp_rng_c 0
station_max_temp_c 0
station_min_temp_c 0
station_precip_mm 0
dtype: int64
------------------------------
city 0
year 0
weekofyear 0
week_start_date 0
ndvi_ne 0
ndvi_nw 0
ndvi_se 0
ndvi_sw 0
precipitation_amt_mm 0
reanalysis_air_temp_k 0
reanalysis_avg_temp_k 0
reanalysis_dew_point_temp_k 0
reanalysis_max_air_temp_k 0
reanalysis_min_air_temp_k 0
reanalysis_precip_amt_kg_per_m2 0
reanalysis_relative_humidity_percent 0
reanalysis_sat_precip_amt_mm 0
reanalysis_specific_humidity_g_per_kg 0
reanalysis_tdtr_k 0
station_avg_temp_c 0
station_diur_temp_rng_c 0
station_max_temp_c 0
station_min_temp_c 0
station_precip_mm 0
dtype: int64
------------------------------
city 0
year 0
weekofyear 0
total_cases 0
week_start_date 0
dtype: int64
------------------------------
city 0
year 0
weekofyear 0
week_start_date 0
ndvi_ne 0
ndvi_nw 0
ndvi_se 0
ndvi_sw 0
precipitation_amt_mm 0
reanalysis_air_temp_k 0
reanalysis_avg_temp_k 0
reanalysis_dew_point_temp_k 0
reanalysis_max_air_temp_k 0
reanalysis_min_air_temp_k 0
reanalysis_precip_amt_kg_per_m2 0
reanalysis_relative_humidity_percent 0
reanalysis_sat_precip_amt_mm 0
reanalysis_specific_humidity_g_per_kg 0
reanalysis_tdtr_k 0
station_avg_temp_c 0
station_diur_temp_rng_c 0
station_max_temp_c 0
station_min_temp_c 0
station_precip_mm 0
dtype: int64
------------------------------
city 0
year 0
weekofyear 0
week_start_date 0
ndvi_ne 0
ndvi_nw 0
ndvi_se 0
ndvi_sw 0
precipitation_amt_mm 0
reanalysis_air_temp_k 0
reanalysis_avg_temp_k 0
reanalysis_dew_point_temp_k 0
reanalysis_max_air_temp_k 0
reanalysis_min_air_temp_k 0
reanalysis_precip_amt_kg_per_m2 0
reanalysis_relative_humidity_percent 0
reanalysis_sat_precip_amt_mm 0
reanalysis_specific_humidity_g_per_kg 0
reanalysis_tdtr_k 0
station_avg_temp_c 0
station_diur_temp_rng_c 0
station_max_temp_c 0
station_min_temp_c 0
station_precip_mm 0
dtype: int64
------------------------------
city 0
year 0
weekofyear 0
total_cases 0
week_start_date 0
dtype: int64
------------------------------
###Markdown
Feature editing
The temperature features from the NCEP Climate Forecast System Reanalysis and those of the weather station are in different units. To bring all temperatures into the same units as the weather station measurements, the Reanalysis variables are converted from Kelvin to degrees Celsius. For uniformity, the station's diurnal temperature range column is relabelled from Celsius to Kelvin, as temperature differences are expressed in Kelvin (the numeric values are unchanged). Furthermore, the feature `precipitation_amt_mm` is removed, as its values are identical to those of `reanalysis_sat_precip_amt_mm` (see the check below).
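A quick way to back the "identical columns" claim before dropping one of them is `Series.equals`, sketched here on toy data (on the real data the two precipitation columns would be compared):
```
import pandas as pd

a = pd.Series([0.0, 12.4, None])
b = pd.Series([0.0, 12.4, None])
print(a.equals(b))  # True -- .equals() also treats NaNs in matching positions as equal
```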
###Code
# compare 'reanalysis_sat_precip_amt_mm' and 'precipitation_amt_mm'
for i in range(len(data_subsets)):
if data_subsets[i].shape[1] > 5:
print(data_subsets[i][['reanalysis_sat_precip_amt_mm', 'precipitation_amt_mm']].sample(5))
# apply unit conversion, renaming and dropping
for i in range(len(data_subsets)):
if data_subsets[i].shape[1] > 5:
data_subsets[i] = (
data_subsets[i]
.assign(month = lambda df: df.index.month)
.assign(reanalysis_air_temp_c = lambda df: df['reanalysis_air_temp_k']-273.15)
.assign(reanalysis_avg_temp_c = lambda df: df['reanalysis_avg_temp_k']-273.15)
.assign(reanalysis_dew_point_temp_c = lambda df: df['reanalysis_dew_point_temp_k']-273.15)
.assign(reanalysis_max_air_temp_c = lambda df: df['reanalysis_max_air_temp_k']-273.15)
.assign(reanalysis_min_air_temp_c = lambda df: df['reanalysis_min_air_temp_k']-273.15)
.rename(columns={'station_diur_temp_rng_c': 'station_diur_temp_rng_k'})
.drop(['reanalysis_air_temp_k','reanalysis_avg_temp_k', 'reanalysis_dew_point_temp_k', 'reanalysis_max_air_temp_k',
'reanalysis_min_air_temp_k','precipitation_amt_mm'], axis=1)
)
###Output
_____no_output_____
###Markdown
Adding the population data
This additional data is available from the Dengue Forecasting [website](https://dengueforecasting.noaa.gov/), from which the data provided by DrivenData originates.
###Code
# import the population data for sj and iq
def load_pop(filename):
ser = (
pd.read_csv(filename)
.assign(year = lambda df: df.Year) # to have a same-name column with the other dataframes
.assign(Year = lambda df: pd.to_datetime(df.Year, format='%Y'))
.set_index('Year', drop=True)
)
return ser
sj_pop = load_pop('data/San_Juan_Population_Data.csv')
iq_pop = load_pop('data/Iquitos_Population_Data.csv')
def merge_pop(df, pop):
merged = pd.merge(df, pop, how='left', on='year')
merged = merged.rename(columns={'Estimated_population': 'population'})
merged = merged.set_index('week_start_date', drop=True)
merged.population = merged.population.interpolate().round().astype(int)
return merged
data_subsets[2].head()
data_subsets[0] = merge_pop(data_subsets[0], sj_pop)
data_subsets[1] = merge_pop(data_subsets[1], sj_pop)
data_subsets[2] = data_subsets[2].set_index('week_start_date', drop=True)
data_subsets[3] = merge_pop(data_subsets[3], iq_pop)
data_subsets[4] = merge_pop(data_subsets[4], iq_pop)
data_subsets[5] = data_subsets[5].set_index('week_start_date', drop=True)
# add week_start_date as a separate column to both test datasets and delete the index name (sj_test and iq_test);
# this happens for the other dataframes in the train-test split
data_subsets[1]['week_start_date'] = data_subsets[1].index
data_subsets[4]['week_start_date'] = data_subsets[4].index
data_subsets[1].index.name = None
data_subsets[4].index.name = None
# Splitting the data into their parts
sj_features_train, \
sj_test, \
sj_labels_train, \
iq_features_train, \
iq_test, \
iq_labels_train = data_subsets
pickle.dump(data_subsets, open('cleaned_data.pickle', 'wb'))
###Output
_____no_output_____
###Markdown
Train test split
To evaluate forecasting models, a training set and a separate test set (with actual values for the number of cases) are needed. Therefore the given "train" dataset for each city is split into a `train_train` (75% of the data) and a `train_test` (25% of the data) set.
Exclude data before 2002 from Iquitos
The total number of cases from Iquitos only contains single values before 2002. After 01.01.2002 the total number of cases increases clearly, probably due to a difference in the reporting or counting system. Consequently, the values before 2002 are excluded from the dataset used for modeling.
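The core of the 75/25 split is just a chronological cut, roughly equivalent to this sketch on toy data (the `train_test_timesplit` function below adds the index bookkeeping):
```
import pandas as pd

toy = pd.DataFrame({'total_cases': range(8)},
                   index=pd.date_range('2002-01-01', periods=8, freq='W'))
cut = int(len(toy) * 0.75)                            # 6 of 8 weeks
toy_train, toy_test = toy.iloc[:cut], toy.iloc[cut:]  # first 75% train, last 25% test
print(len(toy_train), len(toy_test))                  # 6 2
```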
###Code
# remove entries in IQ data until 2002 (data excluded from modeling)
data_subsets[3] = data_subsets[3]['2002':]
data_subsets[5] = data_subsets[5]['2002':]
def train_test_timesplit(df, ratio=0.75):
'''
Performs a train test split for time series on a dataframe with a datetime index.
Parameter:
ratio determines the fraction of the training part relative to the original dataframe.
Output:
Two dataframes, the first being the training one.
'''
time_index = list(df.index)
df = df.reset_index()
df_train = df.loc[:int(len(time_index)*ratio),:]
df_train.index = time_index[:int(len(time_index)*ratio)+1]
df_test = df.loc[int(len(time_index)*ratio)+1:,:]
df_test.index = time_index[int(len(time_index)*ratio)+1:]
return df_train, df_test
# split given data (features and label) into test and train datasets
def split_dataset(data_subsets):
data_subsets_splitted = []
for i in [0, 2, 3, 5]:
train, test = train_test_timesplit(data_subsets[i])
data_subsets_splitted.append(train)
data_subsets_splitted.append(test)
return data_subsets_splitted
data_subsets_splitted = split_dataset(data_subsets)
# splitting the data into their parts
sj_features_train_train, \
sj_features_train_test, \
sj_labels_train_train, \
sj_labels_train_test, \
iq_features_train_train, \
iq_features_train_test, \
iq_labels_train_train, \
iq_labels_train_test = data_subsets_splitted
###Output
_____no_output_____
###Markdown
Data Cleaning
The purpose of this notebook is to create cleaned .csv files to export for use in my data analyses. More information about this project is available in my github repo here: https://github.com/Noah-Baustin/sf_crime_data_analysis
###Code
#import modules
import pandas as pd
# import historical csv into a variable
historical_data = pd.read_csv('raw_data/SFPD_Incident_Reports_2003-May2018/Police_Department_Incident_Reports__Historical_2003_to_May_2018.csv', dtype=str)
# import newer csv into a variable
newer_data = pd.read_csv('raw_data/SFPD_Incident_Reports_2018-10.14.21/Police_Department_Incident_Reports__2018_to_Present(1).csv', dtype=str)
###Output
_____no_output_____
###Markdown
Trim the extra columns that we don't need from the historical data:
###Code
historical_data = historical_data[
['PdId', 'IncidntNum', 'Incident Code', 'Category', 'Descript',
'DayOfWeek', 'Date', 'Time', 'PdDistrict', 'Resolution', 'X',
'Y', 'location']
].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Change the column names in the historical data to match the API names in the newer data. The SFPD published a key that I used to translate the column names over, which can be found on pg two of this document: https://drive.google.com/file/d/13n7pncEOxFTWig9-sTKnB2sRiTB54Kb-/view?usp=sharing
###Code
historical_data.rename(columns={'PdId': 'row_id',
'IncidntNum': 'incident_number',
'Incident Code': 'incident_code',
'Category': 'incident_category',
'Descript': 'incident_description',
'DayOfWeek': 'day_of_week',
'Date': 'incident_date',
'Time': 'incident_time',
'PdDistrict': 'police_district',
'Resolution': 'resolution',
'X': 'longitude',
'Y': 'latitude',
'location': 'the_geom'
},
inplace=True)
historical_data
###Output
_____no_output_____
###Markdown
Now let's trim down the columns from the newer dataset so that we're only working with columns that match up to the old data. Note: there's no 'the_geom' column, but the column 'Point' is equivalent.
###Code
newer_data = newer_data[
['Row ID', 'Incident Number', 'Incident Code', 'Incident Category',
'Incident Description', 'Incident Day of Week', 'Incident Date', 'Incident Time',
'Police District', 'Resolution', 'Longitude', 'Latitude', 'Point']
].copy()
###Output
_____no_output_____
###Markdown
Change the column names in the newer dataset to match the API names of the columns. Doing this because the original column names have spaces, which could cause issues down the road.
###Code
newer_data.rename(columns={'Row ID': 'row_id',
'Incident Number': 'incident_number',
'Incident Code': 'incident_code',
'Incident Category': 'incident_category',
'Incident Description': 'incident_description',
'Incident Day of Week': 'day_of_week',
'Incident Date': 'incident_date',
'Incident Time': 'incident_time',
'Police District': 'police_district',
'Resolution': 'resolution',
'Longitude': 'longitude',
'Latitude': 'latitude',
'Point': 'the_geom'
},
inplace=True)
newer_data
historical_data.columns
newer_data.columns
###Output
_____no_output_____
###Markdown
Now that our datasets have matching columns, let's merge them together.
###Code
frames = [historical_data, newer_data]
all_data = pd.concat(frames)
###Output
_____no_output_____
###Markdown
The dataframe all_data now contains our combined dataset!
###Code
all_data.info()
all_data.head()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2642742 entries, 0 to 513216
Data columns (total 13 columns):
# Column Dtype
--- ------ -----
0 row_id object
1 incident_number object
2 incident_code object
3 incident_category object
4 incident_description object
5 day_of_week object
6 incident_date object
7 incident_time object
8 police_district object
9 resolution object
10 longitude object
11 latitude object
12 the_geom object
dtypes: object(13)
memory usage: 282.3+ MB
###Markdown
We need to convert our incident_date column into a datetime format
###Code
all_data['incident_date'] = pd.to_datetime(all_data['incident_date'])
all_data['incident_date'].min()
all_data['incident_date'].max()
###Output
_____no_output_____
###Markdown
We can see from the date max and min that we've got our full set of date ranges from 2003 to 2021 in this new combined dataframe. Since our string search we'll need to pull out our marijuana cases is cap sensitive, let's put all our values in the incident_description and police district in lowercase:
###Code
all_data['incident_description'] = all_data['incident_description'].str.lower()
all_data['police_district'] = all_data['police_district'].str.lower()
###Output
_____no_output_____
###Markdown
Now let's create a dataframe with all our marijuana data:
###Code
all_data_marijuana = all_data[
all_data['incident_description'].str.contains('marijuana')
].reset_index(drop=True)
all_data_marijuana
all_data_marijuana['incident_date'].min()
all_data_marijuana['incident_date'].max()
###Output
_____no_output_____
###Markdown
The incident dates show that we're getting marijuana incidents from 2003 all the way up to 2021. Great! Now let's take a look at our incident description values:
###Code
all_data_marijuana['incident_description'].unique()
###Output
_____no_output_____
###Markdown
We can see there are slight differences in labeling of the same type of crime, likely caused by the transfer to the new system in 2018. So let's clean up the incident_description column in both our dataframes.
###Code
all_data_marijuana = all_data_marijuana.replace({'incident_description' :
{ 'marijuana, possession for sale' : 'possession of marijuana for sales',
'marijuana, transporting' : 'transportation of marijuana',
'marijuana, cultivating/planting' : 'planting/cultivating marijuana',
'marijuana, sales' : 'sale of marijuana',
'marijuana, furnishing' : 'furnishing marijuana',
}
})
all_data = all_data.replace({'incident_description' :
{ 'marijuana, possession for sale' : 'possession of marijuana for sales',
'marijuana, transporting' : 'transportation of marijuana',
'marijuana, cultivating/planting' : 'planting/cultivating marijuana',
'marijuana, sales' : 'sale of marijuana',
'marijuana, furnishing' : 'furnishing marijuana',
}
})
all_data_marijuana['incident_description'].unique()
###Output
_____no_output_____
###Markdown
Looks good! Now let's export our two dataframes to .csv's that we can now use in other data analysis!
###Code
all_data.to_csv("all_data.csv", index=False)
all_data_marijuana.to_csv("all_data_marijuana.csv", index=False)
###Output
_____no_output_____
###Markdown
Combine data for one `train_train` and `train_test` set for each city
###Code
# join feature and label dataset
sj_train_train = sj_features_train_train.join(sj_labels_train_train['total_cases'])
sj_train_test = sj_features_train_test.join(sj_labels_train_test['total_cases'])
iq_train_train = iq_features_train_train.join(iq_labels_train_train['total_cases'])
iq_train_test = iq_features_train_test.join(iq_labels_train_test['total_cases'])
# combine all six datasets (train_train, train_test and test)
data_subsets_splitted_joined = [sj_train_train, sj_train_test, sj_test, iq_train_train, iq_train_test, iq_test]
# save the splitted data subsets in a pickle
pickle.dump(data_subsets_splitted_joined, open('splitted_joined_data.pickle', 'wb'))
###Output
_____no_output_____ |
naive_bayes/naive_bayes.ipynb | ###Markdown
Functions
###Code
# Imports used by the functions and cells below
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from math import sqrt
from IPython.display import display, HTML
from sklearn.datasets import load_breast_cancer, load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
def prepareData(sklearnDataSet):
X, y = sklearnDataSet(return_X_y=True)
data = np.hstack([X,y.reshape(-1,1)])
cols_name_lst = [f"feature_{i+1}" for i in range(X.shape[1])] + ["target"]
return pd.DataFrame(data, columns = cols_name_lst)
def getStats(df):
classes = list(map(int, df.target.unique()))
return df.groupby("target")\
.agg(["mean","std"]).T\
.reset_index()\
.rename(columns = {"level_0":"feature","level_1":"statistic"})\
.pivot(index='feature', columns="statistic", values=classes)\
.T
def gaussian(x, mu, sig):
# Gaussian PDF: exp(-(x - mu)**2 / (2*sig**2)) / sqrt(2*pi*sig**2)
aux = 2*(sig**2)
return np.exp(-(x - mu)**2 /aux)/sqrt(np.pi*aux)
def display_side_by_side(dfs:list, captions:list):
"""Display tables side by side to save vertical space
Input:
dfs: list of pandas.DataFrame
captions: list of table captions
"""
output = ""
combined = dict(zip(captions, dfs))
for caption, df in combined.items():
output += df.style.set_table_attributes("style='display:inline'").set_caption(caption)._repr_html_()
output += "\xa0\xa0\xa0\xa0\xa0"
display(HTML(output))
def gaussian_plot(s, point):
fig = plt.figure(figsize=(20,4))
mean_lst = []
std_lst = []
feature_point = point[s.name]
#Split column in means and standard deviations
for mean, std in zip(s[0::2],s[1::2]):
mean_lst.append(mean)
std_lst.append(std)
max_mean = max(mean_lst)
min_mean = min(mean_lst)
max_std = max(std_lst)
#Make x axis
x = np.linspace(min(mean_lst)-3*max_std, max(mean_lst)+3*max_std, 5000)
#Store likelihood for every class
y_probability = []
for index, stats in enumerate(zip(mean_lst, std_lst)):
mean, std = stats
y = [gaussian(i, mean, std) for i in x]
plt.title(fr"PDF: {s.name.upper()}")
plt.plot(x, y, lw=2,label=fr"$\mu: {round(mean,2)} , \sigma: {round(std,2)}$")
# Calculate likelihood for an given point
y_point = gaussian(feature_point, mean, std)
#Store the log to prevent underflow while multiplicating
y_probability.append(np.log(y_point))
plt.vlines(x=feature_point, ymin=0, ymax = y_point, linewidth=1, color='k', linestyles = "dashdot")
plt.hlines(y=y_point, xmin=x[0], xmax = feature_point, linewidth=1, color='k', linestyles = "dashdot")
first_legend = plt.legend(loc="upper left", title=r"Stats", fancybox=True, fontsize=16)
plt.setp(first_legend.get_title(),fontsize=18)
plt.gca().add_artist(first_legend)
second_legend = plt.legend(list(range(len(s)//2)),
title="Classes",
fancybox=True,
fontsize=16,
loc='upper right')
plt.setp(second_legend.get_title(),fontsize=18)
plt.show()
plt.close()
return pd.Series(y_probability)
###Output
_____no_output_____
###Markdown
Loading Data
###Code
cancer_df = prepareData(load_breast_cancer)
iris_df = prepareData(load_iris)
#Set the dataset to work with
df = iris_df
df_train, df_test = train_test_split(df, test_size = 0.1, random_state = 42)
display_side_by_side([df_train.head(3), df_test.head(3)], ["Train", "Test"])
#df_train = df_train.reset_index(drop = True)
#df_test = df_test.reset_index(drop = True)
###Output
_____no_output_____
###Markdown
Theory We want to calculate the probability of Y given a set of features:$$P(y|x_1, ..., x_n) = \frac{P(y)P(x_1, ..., x_n|y)}{P(x_1, ..., x_n)}\\$$Using the assumption of conditional independence between every pair of features:$$P(y|x_1, ..., x_n) = \frac{P(y)\prod_{i=1}^{n}P(x_i|y)}{P(x_1, ..., x_n)}\\$$Since $P(x_1, ..., x_n)$ is constant given the input, we can use the following classification rule:$$P(y|x_1, ..., x_n) \propto P(y)\prod_{i=1}^{n}P(x_i|y) \\$$In order to deal with underflow the logarithmic operation is used:$$logP(y|x_1, ..., x_n) \propto logP(y) + \sum_{i=1}^{n}logP(x_i|y) \\$$ **Gaussian equation**: $$P(X_i | y) = \frac{1}{\sqrt{2\pi\sigma_y^{2}}}\exp{\frac{-(x_i - \mu_y)^{2}}{2\sigma_y^{2}}}$$ Gaussian Naive Bayes **Assumption:** The variables exhibit a Gaussian probability distribution. Get ```mean``` and ```standard deviation```
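Before estimating the per-class statistics below, here is a toy illustration of the classification rule above (all numbers are made up, and `scipy` is assumed to be available): with one Gaussian feature and two equally likely classes, we pick the class with the largest log-prior plus log-likelihood.
```
import numpy as np
from scipy.stats import norm

x = 5.0                                  # observed feature value
priors = {0: 0.5, 1: 0.5}                # P(y)
params = {0: (4.0, 1.0), 1: (7.0, 2.0)}  # (mu_y, sigma_y) per class

log_post = {y: np.log(priors[y]) + norm.logpdf(x, loc=mu, scale=sigma)
            for y, (mu, sigma) in params.items()}
print(max(log_post, key=log_post.get))   # -> 0, since x=5.0 is much closer to class 0's mean
```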
###Code
stats_df = getStats(df_train)
stats_df
stats_df.index = map(lambda x: '_'.join(map(str,x)), stats_df.index)
stats_df = stats_df[sorted(stats_df.columns, key=lambda x: int(x.split('_')[1]))]
qtd_classes = len(stats_df)//2
stats_df.sort_index(inplace=True)
stats_df
###Output
_____no_output_____
###Markdown
Get ```prior probability```
###Code
#prior probability
prior_df = df["target"]\
.value_counts(normalize=True)\
.reset_index(name="prior_probability")\
.rename(columns = {"index":"class"})\
.set_index("class")
prior_df
###Output
_____no_output_____
###Markdown
Choose an example
###Code
select_id = 5
point = df_test.iloc[select_id,:].to_dict()
print(f"The Class is: {df_test.iloc[select_id,-1]}")
stats_df
result = stats_df.apply(gaussian_plot, args=[point], result_type = "expand")
cols = {col:fr"P(y|{col})" for col in result.columns}
result.rename(columns=cols,inplace = True)
result.index.name = "class"
result
final_stats = pd.concat([prior_df,result],axis=1)
final_stats
final_stats.apply(lambda x: x.sum(), axis=1)
def get_gaussian_proba(s, stats_df, data):
for feature_val in s:
stats_ser = stats_df[f"{s.name}"]
for index, stats in enumerate(zip(stats_ser[0::2],stats_ser[1::2])):
mean, std = stats
data[f"P({index}|{s.name})"].append(gaussian(feature_val, mean, std))
return None
qtd_samples = df_test.shape[1]-1
data = {f"P({i%qtd_classes}|feature_{(i//qtd_classes)+1})":[] for i in range(qtd_samples*qtd_classes)}
_ = df_test.iloc[:,:-1].apply(get_gaussian_proba, args = [stats_df,data])
proba_test_df=pd.DataFrame(data)
proba_test_df.head(3)
proba_test_df = proba_test_df.applymap(np.log)
proba_test_df.head(3)
final_proba_df = []
for i in range(qtd_classes):
ser = proba_test_df.filter(regex = fr"{i}\|").sum(axis=1)
ser.name = i
final_proba_df.append(ser)
final_proba_df = pd.concat(final_proba_df,axis=1)
final_proba_df.columns.name = "classes"
final_proba_df.index.name = "samples"
final_proba_df = final_proba_df.T
def find_class(x, prior_proba):
best_value = {}
for target, value in zip(x.index, x.values):
best_value[target] = np.log(prior_proba[target]) + value # value is already a summed log-likelihood, so add the log prior (see Theory above)
return max(best_value, key=best_value.get)
y_hat = final_proba_df.apply(find_class, args = [prior_df.to_dict()["prior_probability"]])
y_hat.head(5)
y_hat.values
df_test["target"].reset_index(drop=True).values
(df_test["target"].reset_index(drop=True) == y_hat).sum()/len(y_hat)
###Output
_____no_output_____
###Markdown
Using sklearn
###Code
#gauss = GaussianNB()
#X_train = df_train.iloc[:,:-1]
#y_train = df_train.iloc[:,-1]
#X_test = df_test.iloc[:,:-1]
#y_test = df_test.iloc[:,-1]
#y_hat = gauss.fit(X_train, y_train).predict(X_test)
#(y_hat == y_test).sum()/len(y_hat)
###Output
_____no_output_____
###Markdown
Import Package
###Code
import pandas as pd
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from naive_bayes_functions import train_test_split, naive_bayes_param, predict, calculate_accuracy, str_convert_float
###Output
_____no_output_____
###Markdown
Iris Data Set (Continuous Features) 1 Data Preparation
###Code
df = pd.read_csv('data/Iris.csv', index_col=0)
train_data, test_data = train_test_split(df, 0.2)
label_column = test_data.columns[-1]
test_labels = test_data[label_column]
test_data = test_data.drop(label_column, axis=1)
train_data.head()
###Output
_____no_output_____
###Markdown
2 Implementation and Test of Naive Bayes
###Code
model = naive_bayes_param(train_data)
predict_labels = predict(model, test_data)
print(f'Accuracy of My Naive Bayes: {calculate_accuracy(predict_labels, test_labels)}')
pd.crosstab(test_labels, predict_labels, rownames=[label_column], colnames=["prediction"])
###Output
Accuracy of My Naive Bayes: 0.9
###Markdown
3 Compare With Sklearn Naive Bayes
###Code
gnb = GaussianNB()
gnb.fit(train_data.drop(label_column, axis=1), train_data[label_column])
predict_labels = gnb.predict(test_data)
print(f'Accuracy of Sklearn Naive Bayes: {calculate_accuracy(predict_labels, test_labels)}')
pd.crosstab(test_labels, predict_labels, rownames=[label_column], colnames=["prediction"])
###Output
Accuracy of Sklearn Naive Bayes: 0.9
###Markdown
Titanic Data Set (Combination of Continuous and Discrete Features) 1 Data Preparation
###Code
df = pd.read_csv('data/Titanic.csv')
df_labels = df.Survived
label_column = 'Survived'
df = df.drop(['PassengerId', 'Survived', 'Name', 'Ticket', 'Cabin'], axis=1)
df[label_column] = df_labels
# Handling missing values
median_age = df.Age.median()
mode_embarked = df.Embarked.mode()[0]
df = df.fillna({'Age': median_age, 'Embarked': mode_embarked})
df.head()
###Output
_____no_output_____
###Markdown
2 Split Data Set
###Code
train_data, test_data = train_test_split(df, 0.1)
test_labels = test_data[label_column]
test_data = test_data.drop(label_column, axis=1)
###Output
_____no_output_____
###Markdown
3 Implementation and Test of Naive Bayes
###Code
model = naive_bayes_param(train_data)
predict_labels = predict(model, test_data)
print(f'Accuracy of My Naive Bayes: {calculate_accuracy(predict_labels, test_labels)}')
pd.crosstab(test_labels, predict_labels, rownames=[label_column], colnames=["prediction"])
###Output
Accuracy of My Naive Bayes: 0.7303370786516854
###Markdown
4 Compare With Sklearn Naive Bayes
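A side note on the string-to-number step used below: scikit-learn's own preprocessing tools can do the same job, for example `OrdinalEncoder`. A small sketch on made-up Titanic-style columns:
```
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

toy = pd.DataFrame({'Sex': ['male', 'female', 'female'], 'Embarked': ['S', 'C', 'S']})
toy[['Sex', 'Embarked']] = OrdinalEncoder().fit_transform(toy[['Sex', 'Embarked']])
print(toy)  # string categories replaced by float codes
```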
###Code
# Since sklearn doesn't seem to support mixed features
# I need to convert the str feature to number
str_convert_float(train_data)
str_convert_float(test_data)
mnb = MultinomialNB()
mnb.fit(train_data.drop(label_column, axis=1), train_data[label_column])
predict_labels = mnb.predict(test_data)
print(f'Accuracy of Sklearn Naive Bayes: {calculate_accuracy(predict_labels, test_labels)}')
pd.crosstab(test_labels, predict_labels, rownames=[label_column], colnames=["prediction"])
###Output
Accuracy of Sklearn Naive Bayes: 0.6404494382022472
|
StkAutomation/Python/ExternalWindModel/HWM93STKpy.ipynb | ###Markdown
Integrating HWM93 with STK using Python
STK integration with Python opens up a lot of possibilities for working with numerous open science models. This script shows an example of incorporating the Horizontal Wind Model 93 (https://ccmc.gsfc.nasa.gov/modelweb/atmos/hwm.html) into STK scenarios. HWM93 is a popular empirical wind model based on satellite and ground-based instrument data. The model is available as Fortran 77 code which can be built in Python using f2py and a Fortran compiler (https://github.com/space-physics/hwm93). Two example use-cases with the wind model are shown here:
1. HWM93 is used to compute wind components encountered by a Missile object and the data is passed to STK for reporting.
2. A high altitude balloon is simulated by integrating wind speed at each timestep (just a simple distance = speed x time is done here for the demo; trapezoidal integration or higher-order Runge-Kutta methods can be employed if needed).
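The balloon case boils down to a dead-reckoning step like the sketch below, where the zonal (east) and meridional (north) wind components move a lat/lon position over one time step; the Earth radius, step size, and sample values are assumptions for illustration only, not values used by the scenario.
```
import numpy as np

R_EARTH_M = 6371e3  # mean Earth radius in metres (assumed)

def step_position(lat_deg, lon_deg, u_ms, v_ms, dt_s=60.0):
    """Advance a lat/lon point by zonal wind u (east, m/s) and meridional wind v (north, m/s)."""
    dlat = np.degrees(v_ms * dt_s / R_EARTH_M)
    dlon = np.degrees(u_ms * dt_s / (R_EARTH_M * np.cos(np.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon

print(step_position(30.0, -100.0, u_ms=20.0, v_ms=5.0))
```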
###Code
# Just check python version and make sure it is not 3.7.6 or 3.8.1
from platform import python_version
print(python_version())
###Output
3.7.7
###Markdown
Import the required libraries:
###Code
from comtypes.client import CreateObject
import numpy as np
import os
import math
from datetime import datetime
# add the installed hwm93 library to path and then import
import sys
sys.path.append('C:/Users/ssrivastava/hwm93/')
import hwm93
###Output
_____no_output_____
###Markdown
Create STK instance:
###Code
stkapp = CreateObject("STK12.Application") #Create new instance of STK
stkapp.Visible = True
stkapp.UserControl = True
stkroot = stkapp.Personality2
from comtypes.gen import STKObjects, STKUtil #These libraries can't be imported before running stkapp.Personality2
###Output
_____no_output_____
###Markdown
Create new scenario and get reference to it:
###Code
stkroot.NewScenario("HWM93STKPy") # Do not have spaces in scenario name
scenario = stkroot.CurrentScenario
# Get the IAgScenario interface
scenario2 = scenario.QueryInterface(STKObjects.IAgScenario)
###Output
_____no_output_____
###Markdown
Set scenario start and stop times:
###Code
scenario2.StartTime = "10 Dec 2020 15:00:00.000"
scenario2.StopTime = "11 Dec 2020 15:00:00.000"
stkroot.Rewind(); # reset to the new start time"
###Output
_____no_output_____
###Markdown
Example 1: Winds encountered by a Missile
###Code
# Define a missile using launch, impact coordinates and apogee altitude method
missile = scenario.Children.New(13, 'MyMissile') # eMissile
missile2 = missile.QueryInterface(STKObjects.IAgMissile)
missile2.SetTrajectoryType(10) # ePropagatorBallistic
trajectory = missile2.Trajectory
trajectory.StartTime = scenario2.StartTime
trajectory = trajectory.QueryInterface(STKObjects.IAgVePropagatorBallistic)
launchLLA = trajectory.Launch.QueryInterface(STKObjects.IAgVeLaunchLLA)
launchLLA.Lat = 10
launchLLA.Lon = 10
impactLocation = trajectory.ImpactLocation.QueryInterface(STKObjects.IAgVeImpactLocationPoint)
ImpactLLA = impactLocation.Impact.QueryInterface(STKObjects.IAgVeImpactLLA)
ImpactLLA.Lat = 0
ImpactLLA.Lon = 0
impactLocation.SetLaunchControlType(0) # eLaunchControlFixedApogeeAlt
launchControl = impactLocation.LaunchControl.QueryInterface(STKObjects.IAgVeLaunchControlFixedApogeeAlt)
launchControl.ApogeeAlt = 200 # km
trajectory.Propagate()
# Get LLA data for the propagated missile
LLAdata = missile.DataProviders.GetDataPrvTimeVarFromPath('LLA State/Fixed')
elements = ['Time','Lat','Lon','Alt'] # only need time and LLA
results = LLAdata.ExecElements(scenario2.StartTime,scenario2.StopTime,1,elements) # scenario times in EpSec for convenience, obtained at 1 sec cadence
stkroot.UnitPreferences.SetCurrentUnit("DateFormat","UTCG"); # back to UTCG since HWM93 needs it
missile_t = results.DataSets.GetDataSetByName('Time').GetValues()
missile_lats = results.DataSets.GetDataSetByName('Lat').GetValues()
missile_lons = results.DataSets.GetDataSetByName('Lon').GetValues()
missile_alts = results.DataSets.GetDataSetByName('Alt').GetValues() # in km
# Use the HWM93 wind model to obtain wind at different points along the trajectory
missile_u = np.zeros(np.shape(missile_lats)) # init
missile_v = np.zeros(np.shape(missile_lats))
for k in range(len(missile_lats)):
trajWind = hwm93.run(time = missile_t[k], altkm=missile_alts[k], glat=missile_lats[k], glon=missile_lons[k], f107a = 150, f107=150, ap=4)
missile_u[k] = trajWind.zonal.values[0]
missile_v[k] = trajWind.meridional.values[0]
#Create a line plot to visualize wind speeds
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (15,5)
plt.plot(missile_u, linewidth=3.0, label='Zonal speed')
plt.plot(missile_v, linewidth=3.0, label='Meridional speed')
plt.ylabel('m/s', fontsize=12)
plt.xlabel('EpSec',fontsize=12)
plt.legend()
plt.title('Wind speeds for the Missile vs time', fontsize = 16);
###Output
_____no_output_____
###Markdown
Writing missile LLA and wind data to an STK-readable text file:
###Code
# Writing block of data to be written in the file
stkroot.UnitPreferences.SetCurrentUnit("DateFormat","EpSec"); # back to EpSec for writing to external file
missile_t_EpSec = results.DataSets.GetDataSetByName('Time').GetValues()
DataToWrite = ''
for k in range(len(missile_lats)):
line = '%.14e %f %f %f %f %f' % (missile_t_EpSec[k],missile_lats[k],missile_lons[k],missile_alts[k],missile_u[k],missile_v[k])
DataToWrite += line
DataToWrite += '\n'
# Formatting data to write as STK-readable text file
ToWrite = """stk.v.12.1 \n
\n
Begin DataGroup\n
GroupName\t Missile Trajectory and Wind
NumberOfPoints\t %d
BlockFactor\t 50
ReferenceEpoch\t %s \n
Begin DataElement
Name\t Lat
Dimension\t Lat
FileUnitAbbr\t deg
InterpOrder\t 1
End DataElement
\n
Begin DataElement
Name\t Lon
Dimension\t Lon
FileUnitAbbr\t deg
InterpOrder\t 1
End DataElement
\n
Begin DataElement
Name\t Alt
Dimension\t DistanceUnit
FileUnitAbbr\t km
InterpOrder\t 1
End DataElement
\n
Begin DataElement
Name\t Zonal Wind
Dimension\t Rate
FileUnitAbbr\t m
FileUnitAbbr\t sec
InterpOrder\t 1
End DataElement
\n
Begin DataElement
Name\t Meridional Wind
Dimension\t Rate
FileUnitAbbr\t m
FileUnitAbbr\t sec
InterpOrder\t 1
End DataElement
Begin Data\n
%s
End Data
""" % (len(missile_lats),missile_t[0],DataToWrite)
# Save the formatted data as a text file
MissileDataFile = open(r"C:\Users\ssrivastava\Documents\STK 12\MissileWindData.txt", "w")
MissileDataFile.write(ToWrite)
MissileDataFile.close()
# Send connect command to send the file as User Defined data (either here or from GUI)
stkroot.ExecuteCommand('ExternalData */Missile/MyMissile ReadFile "C:/Users/ssrivastava/Documents/STK 12/MissileWindData.txt" Save')
###Output
_____no_output_____
###Markdown
Example 2: Simulating a high altitude balloon Create the balloon: Insert an aircraft object that will be our balloon:
###Code
aircraft = scenario.Children.New(STKObjects.eAircraft, "TestBalloon")
aircraft2 = aircraft.QueryInterface(STKObjects.IAgAircraft)
###Output
_____no_output_____
###Markdown
Set Propagator to GreatArc:
###Code
aircraft2.SetRouteType(STKObjects.ePropagatorGreatArc)
route = aircraft2.Route.QueryInterface(STKObjects.IAgVePropagatorGreatArc)
###Output
_____no_output_____
###Markdown
User inputs here:
###Code
deltaT = 30 # timestep for distance calculation in seconds
alt = 30000 # meters, alt will not be varied in this example
FirstWayPt = np.array([50,-20]) # starting lat,lon of the balloon
Rearth = 6371000 # Earth's radius in m
# space environment parameters
Setf107a = 150
Setf107 = 150
Setap = 4
###Output
_____no_output_____
###Markdown
Computations: Setting the units: (Setting date format to Epoch sec saves the trouble of handling UTCG time strings)
###Code
stkroot.UnitPreferences.SetCurrentUnit("DistanceUnit","m"),
stkroot.UnitPreferences.SetCurrentUnit("DateFormat","EpSec");
###Output
_____no_output_____
###Markdown
Initialize arrays to hold timestep and waypoint data:
###Code
timesteps = np.arange(start=scenario2.StartTime, stop=scenario2.StopTime, step = deltaT) # create array of timesteps(size:N)
waypoints = np.zeros([np.size(timesteps),4]) # init array for storing waypoints (size: Nx4 for lat, lon, alt and time)
###Output
_____no_output_____
###Markdown
Function definition to compute LatLon from distance traversed by the balloon:
###Code
def Distance2LatLon(Lat1,Lon1,zonal_dist,meridional_dist,Rearth):
""" Function to compute updated lat lon based on zonal and meridional distance traverse.d
Uses approximate inverse Haversine formula for sphere (good enough!)
Inputs: Initial coordinates (Lat1, Lon1) in degrees; zonal, meridional distances travelled and Rearth in meters
Outputs: Update coordinates (Lat2, Lon2) """
# prep values
dlat = MDist/Rearth
dlon = ZDist/Rearth
lat1 = math.radians(Lat1) # convert to radians
lon1 = math.radians(Lon1)
# compute new lat, lon based on approx. inverse Haversine formula
lat2 = math.asin(math.sin(lat1)*math.cos(dlat) + math.cos(lat1)*math.sin(dlat))
lon2 = lon1 + math.atan2(math.sin(dlon)*math.cos(lat1), math.cos(dlon)-math.sin(lat1)*math.sin(lat2))
# convert back to degrees
Lat2 = math.degrees(lat2)
Lon2 = math.degrees(lon2)
return Lat2, Lon2
###Output
_____no_output_____
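###Markdown
A quick sanity check of the helper above (not part of the original notebook): moving roughly 111.2 km due north from (0, 0) should advance the latitude by about one degree and leave the longitude essentially unchanged.
###Code
# Hedged sanity check of the inverse-Haversine helper defined above.
# 111195 m is approximately one degree of arc on a sphere of radius 6371 km.
print(Distance2LatLon(0.0, 0.0, 0.0, 111195.0, Rearth))  # expect roughly (1.0, 0.0)
###Output
_____no_output_____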
###Markdown
Compute balloon waypoints:
###Code
for i in range(0,len(waypoints)):
if i == 0:
# Assign values for the first waypoint
waypoints[i,0] = FirstWayPt[0] # starting lat
waypoints[i,1] = FirstWayPt[1] # starting lon
waypoints[i,2] = alt
waypoints[i,3] = scenario2.StartTime
else: # for all waypoints except the first
# Get the wind info (returned as an xarray structure in m/s)
# HWM93 wind model requires time input as py datetime object
PrevTime = stkroot.ConversionUtility.ConvertDate('EpSec','UTCG',str(waypoints[i-1,3])) # step1. Get UTCG string
PrevTimeObj = datetime.strptime(PrevTime, '%d %b %Y %H:%M:%S.%f') # step2. convert to py datetime obj
# computing wind at previous way point
wind = hwm93.run(time = PrevTimeObj, altkm=alt/1000, glat=waypoints[i-1,0], glon=waypoints[i-1,1], f107a = Setf107a, f107=Setf107, ap=Setap)
# Calculate distance traversed in deltaT time
ZDist = deltaT*wind.zonal.values[0] # zonal distance traversed in deltaT time (in meters)
MDist = deltaT*wind.meridional.values[0] # meridional distance traversed in deltaT time (in meters)
# Calculate next waypoint coordinates
newLat, newLon = Distance2LatLon(waypoints[i-1,0],waypoints[i-1,1],ZDist,MDist,Rearth)
waypoints[i,0] = newLat # in degrees
waypoints[i,1] = newLon # in degrees
waypoints[i,2] = alt # in meters
waypoints[i,3] = timesteps[i] # in Epoch seconds
###Output
_____no_output_____
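###Markdown
As noted in the introduction, the simple distance = speed x deltaT update above could be refined with a trapezoidal step that averages the wind sampled at the previous waypoint and at a provisional new location. The function below is a minimal sketch of that idea (an illustration only, not used by the original workflow); it reuses deltaT, alt and the space environment parameters defined above.
###Code
# Hedged sketch of a trapezoidal waypoint update (not part of the original notebook).
def trapezoidal_step(lat, lon, time_obj, wind_prev):
    """Advance one waypoint using the average of two HWM93 wind samples."""
    # provisional Euler step using the wind at the previous waypoint
    lat_e, lon_e = Distance2LatLon(lat, lon,
                                   deltaT*wind_prev.zonal.values[0],
                                   deltaT*wind_prev.meridional.values[0], Rearth)
    # wind at the provisional location, then average the two samples
    wind_e = hwm93.run(time=time_obj, altkm=alt/1000, glat=lat_e, glon=lon_e,
                       f107a=Setf107a, f107=Setf107, ap=Setap)
    zonal_avg = 0.5*(wind_prev.zonal.values[0] + wind_e.zonal.values[0])
    meridional_avg = 0.5*(wind_prev.meridional.values[0] + wind_e.meridional.values[0])
    return Distance2LatLon(lat, lon, deltaT*zonal_avg, deltaT*meridional_avg, Rearth)
###Output
_____no_output_____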
###Markdown
Feeding the computed waypoints to STK: We will use eDetermineVelFromTime calculation method since we have coordinates and timesteps defined
###Code
# Set Calculation method and altitude reference
route.Method = STKObjects.eDetermineVelFromTime
route.SetAltitudeRefType(STKObjects.eWayPtAltRefMSL);
###Output
_____no_output_____
###Markdown
Very important to clear any previous waypoints:
###Code
# remove any previous waypoints
route.Waypoints.RemoveAll();
###Output
_____no_output_____
###Markdown
Load the computed waypoints using Waypoints.Add() method:
###Code
for count in range(0,len(waypoints)):
point = route.Waypoints.Add()
point.Latitude = waypoints[count,0]
point.Longitude = waypoints[count,1]
point.Altitude = waypoints[count,2]
point.Time = waypoints[count,3]
###Output
_____no_output_____
###Markdown
Finally, propagate the route:
###Code
route.Propagate();
###Output
_____no_output_____
###Markdown
Change to a Balloon 3D model, if available:
###Code
#modelfile = aircraft2.VO.Model.ModelData.QueryInterface(STKObjects.IAgVOModelFile)
#modelfile.Filename = os.path.abspath(stkapp.Path[:-3] + "STKData\\VO\\Models\\Misc\\bomb.mdl")
###Output
_____no_output_____
###Markdown
Visualizing wind field in Python To visualize the wind field generated by the model, we can specify a geographical grid, altitude and time.
###Code
# Set the geographical coverage
lats = np.arange(-90., 90., 5.)
lons = np.arange(-180., 180., 15.)
LAT, LON = np.meshgrid(lats, lons)
altinkm = 30
Setf107a = 150
Setf107 = 150
Setap = 4
# Set the query time
from datetime import datetime
QueryTime = "10 Dec 2020 15:00:00.000"
QueryTimeObj = datetime.strptime(QueryTime, '%d %b %Y %H:%M:%S.%f')
U = np.zeros(np.shape(LAT))
V = np.zeros(np.shape(LAT))
for j in range(len(lats)):
for i in range(len(lons)):
thisWind = hwm93.run(time = QueryTimeObj, altkm=altinkm, glat=lats[j], glon=lons[i], f107a = Setf107a, f107=Setf107, ap=Setap)
U[i,j] = thisWind.zonal.values[0]
V[i,j] = thisWind.meridional.values[0]
# plotting
import matplotlib.pyplot as plt
fig, ax = plt.subplots(nrows=1, ncols=2) # get handles for fig and axes (subplots)
plt.rcParams["figure.figsize"] = (15,5)
# generate first subplot
cntr = ax[0].contourf(lons,lats,np.transpose(U),levels=20)
fig.colorbar(cntr,ax=ax[0])
ax[0].set_title('Zonal speed (m/s)')
ax[0].set_xlabel('longitude')
ax[0].set_ylabel('latitude')
# generate second subplot (using a more concise method)
cntr2 = ax[1].contourf(lons,lats,np.transpose(V),levels=20)
fig.colorbar(cntr2,ax=ax[1])
ax[1].set(title='Meridional speed (m/s)',xlabel='longitude',ylabel='latitude')
# overall attributes
plt.suptitle('HWM93 model wind speed at time: %s, altitude = %.2f km' % (QueryTime,altinkm), fontsize=16) # super title
plt.tight_layout() #ensures no overlap of subplots
###Output
_____no_output_____ |
MT_environment_setup.ipynb | ###Markdown
Get Started---
###Code
%run nmt_translate.py
_ = predict(s=10000, num=1, plot=True)
_ = predict(s=10000, num=10)
_ = predict(s=0, num=10)
_ = predict(s=10000, num=10, r_filt=.5)
_ = predict(s=10000, num=10, p_filt=.5)
###Output
English predictions, s=10000, num=10:
--------------------------------------------------
Src | 彼 は ドイツ 生まれ の 人 だ 。
Ref | he is a german by origin .
Hyp | he is a of of . . _EOS
--------------------------------------------------
precision | 0.5000
recall | 0.5714
sentences matching filter = 1
|
prediction/multitask/pre-training/function documentation generation/go/small_model.ipynb | ###Markdown
**Predict the documentation for Go code using the CodeTrans multitask training model** You can make a free prediction online through this Link (when using the online prediction, you need to parse and tokenize the code first). **1. Load necessary libraries including huggingface transformers**
###Code
!pip install -q transformers sentencepiece
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
###Output
_____no_output_____
###Markdown
**2. Load the summarization pipeline and load it onto the GPU if available**
###Code
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_go_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_go_multitask", skip_special_tokens=True),
device=0
)
###Output
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py:852: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
###Markdown
**3. Give the code for summarization, then parse and tokenize it**
###Code
code = "func (pr *Progress) needSnapshotAbort() bool {\n\treturn pr.State == ProgressStateSnapshot && pr.Match >= pr.PendingSnapshot\n}" #@param {type:"raw"}
!pip install tree_sitter
!git clone https://github.com/tree-sitter/tree-sitter-go
from tree_sitter import Language, Parser
Language.build_library(
'build/my-languages.so',
['tree-sitter-go']
)
GO_LANGUAGE = Language('build/my-languages.so', 'go')
parser = Parser()
parser.set_language(GO_LANGUAGE)
def get_string_from_code(node, lines):
line_start = node.start_point[0]
line_end = node.end_point[0]
char_start = node.start_point[1]
char_end = node.end_point[1]
if line_start != line_end:
code_list.append(' '.join([lines[line_start][char_start:]] + lines[line_start+1:line_end] + [lines[line_end][:char_end]]))
else:
code_list.append(lines[line_start][char_start:char_end])
def my_traverse(node, code_list):
lines = code.split('\n')
if node.child_count == 0:
get_string_from_code(node, lines)
elif node.type == 'string':
get_string_from_code(node, lines)
else:
for n in node.children:
my_traverse(n, code_list)
return ' '.join(code_list)
tree = parser.parse(bytes(code, "utf8"))
code_list=[]
tokenized_code = my_traverse(tree.root_node, code_list)
print("Output after tokenization: " + tokenized_code)
###Output
Output after tokenization: func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }
###Markdown
**4. Make Prediction**
###Code
pipeline([tokenized_code])
###Output
Your max_length is set to 512, but you input_length is only 38. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)
|
tensorflow_for_deep_learning/RoboND-NN-Lab/solutions.ipynb | ###Markdown
Solutions Problem 1 Implement the Min-Max scaling function ($X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}$) with the parameters: $X_{\min }=0$, $X_{\max }=255$, $a=0.1$, $b=0.9$
###Code
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
a = 0.1
b = 0.9
grayscale_min = 0
grayscale_max = 255
return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
###Output
_____no_output_____
###Markdown
Problem 2- Use [tf.placeholder()](https://www.tensorflow.org/api_docs/python/io_ops.htmlplaceholder) for `features` and `labels` since they are the inputs to the model.- Any math operations must have the same type on both sides of the operator. The weights are float32, so the `features` and `labels` must also be float32.- Use [tf.Variable()](https://www.tensorflow.org/api_docs/python/state_ops.htmlVariable) to allow `weights` and `biases` to be modified.- The `weights` must be the dimensions of features by labels. The number of features is the size of the image, 28*28=784. The size of labels is 10.- The `biases` must be the dimensions of the labels, which is 10.
###Code
features_count = 784
labels_count = 10
# Problem 2 - Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# Problem 2 - Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
###Output
_____no_output_____ |
02_overview_of_hdfs/11_hdfs_blocksize.ipynb | ###Markdown
HDFS Blocksize Let us get into details related to blocksize in HDFS.
###Code
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/yhBEt-buxD8?rel=0&controls=1&showinfo=0" frameborder="0" allowfullscreen></iframe>
###Output
_____no_output_____
###Markdown
* HDFS stands for Hadoop Distributed File System.* It means that large files will be physically stored on multiple nodes in a distributed fashion.* Let us review the `hdfs fsck` output of `/public/randomtextwriter/part-m-00000`. The file is approximately 1 GB in size and you will see 9 blocks. * 8 blocks of size 128 MB * 1 block of approximately 28 MB* It means a file of size 1 GB 28 MB is stored in 9 blocks. This is due to the default block size, which is 128 MB.
###Code
%%sh
hdfs dfs -ls -h /public/randomtextwriter/part-m-00000
%%sh
hdfs fsck /public/randomtextwriter/part-m-00000 \
-files \
-blocks \
-locations
###Output
FSCK started by itversity (auth:SIMPLE) from /172.16.1.114 for path /public/randomtextwriter/part-m-00000 at Thu Jan 21 05:42:10 EST 2021
/public/randomtextwriter/part-m-00000 1102230331 bytes, 9 block(s): OK
0. BP-292116404-172.16.1.101-1479167821718:blk_1074171511_431441 len=134217728 repl=3 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.102:50010,DS-b0f1636e-fd08-4ddb-bba9-9df8868dfb5d,DISK], DatanodeInfoWithStorage[172.16.1.103:50010,DS-1f4edfab-2926-45f9-a37c-ae9d1f542680,DISK]]
1. BP-292116404-172.16.1.101-1479167821718:blk_1074171524_431454 len=134217728 repl=3 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.102:50010,DS-1edb1d35-81bf-471b-be04-11d973e2a832,DISK], DatanodeInfoWithStorage[172.16.1.103:50010,DS-1f4edfab-2926-45f9-a37c-ae9d1f542680,DISK]]
2. BP-292116404-172.16.1.101-1479167821718:blk_1074171559_431489 len=134217728 repl=3 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.102:50010,DS-b0f1636e-fd08-4ddb-bba9-9df8868dfb5d,DISK], DatanodeInfoWithStorage[172.16.1.103:50010,DS-1f4edfab-2926-45f9-a37c-ae9d1f542680,DISK]]
3. BP-292116404-172.16.1.101-1479167821718:blk_1074171609_431539 len=134217728 repl=3 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.102:50010,DS-b0f1636e-fd08-4ddb-bba9-9df8868dfb5d,DISK], DatanodeInfoWithStorage[172.16.1.103:50010,DS-7fb58858-abe9-4a52-9b75-755d849a897b,DISK]]
4. BP-292116404-172.16.1.101-1479167821718:blk_1074171657_431587 len=134217728 repl=3 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.102:50010,DS-b0f1636e-fd08-4ddb-bba9-9df8868dfb5d,DISK], DatanodeInfoWithStorage[172.16.1.107:50010,DS-a12c4ae3-3f6a-42fc-83ff-7779a9fc0482,DISK]]
5. BP-292116404-172.16.1.101-1479167821718:blk_1074171691_431621 len=134217728 repl=3 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.102:50010,DS-b0f1636e-fd08-4ddb-bba9-9df8868dfb5d,DISK], DatanodeInfoWithStorage[172.16.1.103:50010,DS-7fb58858-abe9-4a52-9b75-755d849a897b,DISK]]
6. BP-292116404-172.16.1.101-1479167821718:blk_1074171721_431651 len=134217728 repl=3 [DatanodeInfoWithStorage[172.16.1.102:50010,DS-b0f1636e-fd08-4ddb-bba9-9df8868dfb5d,DISK], DatanodeInfoWithStorage[172.16.1.107:50010,DS-6679d10e-378c-4897-8c0e-250aa1af790a,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-736614f7-27de-46b8-987f-d669be6a32a3,DISK]]
7. BP-292116404-172.16.1.101-1479167821718:blk_1074171731_431661 len=134217728 repl=3 [DatanodeInfoWithStorage[172.16.1.102:50010,DS-1edb1d35-81bf-471b-be04-11d973e2a832,DISK], DatanodeInfoWithStorage[172.16.1.107:50010,DS-a12c4ae3-3f6a-42fc-83ff-7779a9fc0482,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-698dde50-a336-4e00-bc8f-a9e1a5cc76f4,DISK]]
8. BP-292116404-172.16.1.101-1479167821718:blk_1074171736_431666 len=28488507 repl=3 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.102:50010,DS-1edb1d35-81bf-471b-be04-11d973e2a832,DISK], DatanodeInfoWithStorage[172.16.1.107:50010,DS-6679d10e-378c-4897-8c0e-250aa1af790a,DISK]]
Status: HEALTHY
Total size: 1102230331 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 9 (avg. block size 122470036 B)
Minimally replicated blocks: 9 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 2
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 5
Number of racks: 1
FSCK ended at Thu Jan 21 05:42:10 EST 2021 in 1 milliseconds
The filesystem under path '/public/randomtextwriter/part-m-00000' is HEALTHY
###Markdown
* The default block size is 128 MB and it is set as part of hdfs-site.xml.* The property name is `dfs.blocksize`.* If the file size is smaller than default blocksize (128 MB), then there will be only one block as per the size of the file.
###Code
%%sh
cat /etc/hadoop/conf/hdfs-site.xml
###Output
_____no_output_____
###Markdown
* Let us determine the number of blocks for `/data/retail_db/orders/part-00000`. If we store this file of size 2.9 MB in HDFS, there will be one block associated with it, as the size of the file is less than the block size.* It occupies 2.9 MB of storage in HDFS (assuming a replication factor of 1)
###Code
%%sh
ls -lhtr /data/retail_db/orders/part-00000
%%sh
hdfs fsck /user/${USER}/retail_db/orders/part-00000 -files -blocks -locations
###Output
FSCK started by itversity (auth:SIMPLE) from /172.16.1.114 for path /user/itversity/retail_db/orders/part-00000 at Thu Jan 21 05:43:52 EST 2021
/user/itversity/retail_db/orders/part-00000 2999944 bytes, 1 block(s): OK
0. BP-292116404-172.16.1.101-1479167821718:blk_1115455902_41737439 len=2999944 repl=2 [DatanodeInfoWithStorage[172.16.1.102:50010,DS-1edb1d35-81bf-471b-be04-11d973e2a832,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-736614f7-27de-46b8-987f-d669be6a32a3,DISK]]
Status: HEALTHY
Total size: 2999944 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 1 (avg. block size 2999944 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 2
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 5
Number of racks: 1
FSCK ended at Thu Jan 21 05:43:52 EST 2021 in 1 milliseconds
The filesystem under path '/user/itversity/retail_db/orders/part-00000' is HEALTHY
###Markdown
* Let us determine the number of blocks for `/data/yelp-dataset-json/yelp_academic_dataset_user.json`. If we store this file of size 2.4 GB in HDFS, there will be 19 blocks associated with it * 18 blocks of 128 MB * 1 block of ~69 MB* It occupies 2.4 GB of storage in HDFS (assuming a replication factor of 1). A quick arithmetic check of these numbers follows the `ls` output below.
###Code
%%sh
ls -lhtr /data/yelp-dataset-json/yelp_academic_dataset_user.json
###Output
-rwxr-xr-x 1 training training 2.4G Feb 5 2019 /data/yelp-dataset-json/yelp_academic_dataset_user.json
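###Markdown
A quick arithmetic check (plain Python, not part of the original notebook) of how the block count and the size of the last block follow from the file size above and the 128 MB default blocksize.
###Code
import math

file_size = 2485747393           # bytes, size of yelp_academic_dataset_user.json
block_size = 128 * 1024 * 1024   # 134217728 bytes, the dfs.blocksize default

num_blocks = math.ceil(file_size / block_size)
last_block = file_size - (num_blocks - 1) * block_size
print(num_blocks)   # 19 blocks, matching the fsck output below
print(last_block)   # 69828289 bytes, i.e. roughly 69 MB for the last block
###Output
_____no_output_____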
###Markdown
* We can validate by using `hdfs fsck` command against the same file in HDFS.
###Code
%%sh
hdfs fsck /public/yelp-dataset-json/yelp_academic_dataset_user.json \
-files \
-blocks \
-locations
###Output
FSCK started by itversity (auth:SIMPLE) from /172.16.1.114 for path /public/yelp-dataset-json/yelp_academic_dataset_user.json at Thu Jan 21 05:44:47 EST 2021
/public/yelp-dataset-json/yelp_academic_dataset_user.json 2485747393 bytes, 19 block(s): OK
0. BP-292116404-172.16.1.101-1479167821718:blk_1101225469_27499779 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.107:50010,DS-a12c4ae3-3f6a-42fc-83ff-7779a9fc0482,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-698dde50-a336-4e00-bc8f-a9e1a5cc76f4,DISK]]
1. BP-292116404-172.16.1.101-1479167821718:blk_1101225470_27499780 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.103:50010,DS-7fb58858-abe9-4a52-9b75-755d849a897b,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-736614f7-27de-46b8-987f-d669be6a32a3,DISK]]
2. BP-292116404-172.16.1.101-1479167821718:blk_1101225471_27499781 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.102:50010,DS-b0f1636e-fd08-4ddb-bba9-9df8868dfb5d,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-698dde50-a336-4e00-bc8f-a9e1a5cc76f4,DISK]]
3. BP-292116404-172.16.1.101-1479167821718:blk_1101225472_27499782 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.107:50010,DS-6679d10e-378c-4897-8c0e-250aa1af790a,DISK]]
4. BP-292116404-172.16.1.101-1479167821718:blk_1101225473_27499783 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-98fec5a6-72a9-4590-99cc-cee3a51f4dd5,DISK], DatanodeInfoWithStorage[172.16.1.102:50010,DS-1edb1d35-81bf-471b-be04-11d973e2a832,DISK]]
5. BP-292116404-172.16.1.101-1479167821718:blk_1101225474_27499784 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.102:50010,DS-b0f1636e-fd08-4ddb-bba9-9df8868dfb5d,DISK]]
6. BP-292116404-172.16.1.101-1479167821718:blk_1101225475_27499785 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-98fec5a6-72a9-4590-99cc-cee3a51f4dd5,DISK], DatanodeInfoWithStorage[172.16.1.103:50010,DS-1f4edfab-2926-45f9-a37c-ae9d1f542680,DISK]]
7. BP-292116404-172.16.1.101-1479167821718:blk_1101225476_27499786 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.103:50010,DS-7fb58858-abe9-4a52-9b75-755d849a897b,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-736614f7-27de-46b8-987f-d669be6a32a3,DISK]]
8. BP-292116404-172.16.1.101-1479167821718:blk_1101225477_27499787 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.102:50010,DS-1edb1d35-81bf-471b-be04-11d973e2a832,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-698dde50-a336-4e00-bc8f-a9e1a5cc76f4,DISK]]
9. BP-292116404-172.16.1.101-1479167821718:blk_1101225478_27499788 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.102:50010,DS-b0f1636e-fd08-4ddb-bba9-9df8868dfb5d,DISK]]
10. BP-292116404-172.16.1.101-1479167821718:blk_1101225479_27499789 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-98fec5a6-72a9-4590-99cc-cee3a51f4dd5,DISK], DatanodeInfoWithStorage[172.16.1.103:50010,DS-1f4edfab-2926-45f9-a37c-ae9d1f542680,DISK]]
11. BP-292116404-172.16.1.101-1479167821718:blk_1101225480_27499790 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.107:50010,DS-a12c4ae3-3f6a-42fc-83ff-7779a9fc0482,DISK], DatanodeInfoWithStorage[172.16.1.103:50010,DS-7fb58858-abe9-4a52-9b75-755d849a897b,DISK]]
12. BP-292116404-172.16.1.101-1479167821718:blk_1101225481_27499791 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.107:50010,DS-6679d10e-378c-4897-8c0e-250aa1af790a,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-736614f7-27de-46b8-987f-d669be6a32a3,DISK]]
13. BP-292116404-172.16.1.101-1479167821718:blk_1101225482_27499792 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.107:50010,DS-a12c4ae3-3f6a-42fc-83ff-7779a9fc0482,DISK], DatanodeInfoWithStorage[172.16.1.103:50010,DS-1f4edfab-2926-45f9-a37c-ae9d1f542680,DISK]]
14. BP-292116404-172.16.1.101-1479167821718:blk_1101225483_27499793 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.102:50010,DS-1edb1d35-81bf-471b-be04-11d973e2a832,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-698dde50-a336-4e00-bc8f-a9e1a5cc76f4,DISK]]
15. BP-292116404-172.16.1.101-1479167821718:blk_1101225484_27499794 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.102:50010,DS-b0f1636e-fd08-4ddb-bba9-9df8868dfb5d,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-736614f7-27de-46b8-987f-d669be6a32a3,DISK]]
16. BP-292116404-172.16.1.101-1479167821718:blk_1101225485_27499795 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.107:50010,DS-6679d10e-378c-4897-8c0e-250aa1af790a,DISK], DatanodeInfoWithStorage[172.16.1.108:50010,DS-698dde50-a336-4e00-bc8f-a9e1a5cc76f4,DISK]]
17. BP-292116404-172.16.1.101-1479167821718:blk_1101225486_27499796 len=134217728 repl=2 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-f4667aac-0f2c-463c-9584-d625928b9af5,DISK], DatanodeInfoWithStorage[172.16.1.107:50010,DS-a12c4ae3-3f6a-42fc-83ff-7779a9fc0482,DISK]]
18. BP-292116404-172.16.1.101-1479167821718:blk_1101225487_27499797 len=69828289 repl=2 [DatanodeInfoWithStorage[172.16.1.104:50010,DS-98fec5a6-72a9-4590-99cc-cee3a51f4dd5,DISK], DatanodeInfoWithStorage[172.16.1.107:50010,DS-6679d10e-378c-4897-8c0e-250aa1af790a,DISK]]
Status: HEALTHY
Total size: 2485747393 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 19 (avg. block size 130828810 B)
Minimally replicated blocks: 19 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 2
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 5
Number of racks: 1
FSCK ended at Thu Jan 21 05:44:47 EST 2021 in 1 milliseconds
The filesystem under path '/public/yelp-dataset-json/yelp_academic_dataset_user.json' is HEALTHY
|
notebooks/multiply.ipynb | ###Markdown
Unit testing in Jupyter notebooks with ipytest: Multiply implementation Table of contents 1. [Introduction](introduction) 2. [Code](code) Introduction **Unit tests** in **Jupyter notebooks** can be run using the excellent [ipytest](https://github.com/chmp/ipytest) package. It supports both [pytest](https://docs.pytest.org/en/latest/) and [unittest](https://docs.python.org/3/library/unittest.html). This *notebook* contains simple implementations of multiplying two ints or floats. Code
###Code
class Multiply:
def multiply(self, a, b):
"""
This is the test:
>>> multiply(2, 2)
4
"""
return a * b
def multiply_floats(a, b):
return a * b
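# A hedged sketch (an assumption, not part of the original cell): with ipytest the
# functions above can be exercised from within the notebook, roughly like this.
# The exact configuration call can differ between ipytest versions.
import ipytest
ipytest.autoconfig()

def test_multiply():
    assert Multiply().multiply(2, 2) == 4

def test_multiply_floats():
    assert multiply_floats(0.5, 4.0) == 2.0

ipytest.run()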
###Output
_____no_output_____ |
.ipynb_checkpoints/Project-checkpoint.ipynb | ###Markdown
FUNCTION FOR DATA PREPARATION AND PREPROCESSING
###Code
def crop_brain(image, plot=False):
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)[1]
thresh = cv2.erode(thresh, None, iterations=2)
thresh = cv2.dilate(thresh, None, iterations=2)
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
c = max(cnts, key=cv2.contourArea)
extLeft = tuple(c[c[:, :, 0].argmin()][0])
extRight = tuple(c[c[:, :, 0].argmax()][0])
extTop = tuple(c[c[:, :, 1].argmin()][0])
extBot = tuple(c[c[:, :, 1].argmax()][0])
new_image = image[extTop[1]:extBot[1], extLeft[0]:extRight[0]]
if plot:
plt.figure()
plt.subplot(1, 2, 1)
plt.imshow(image)
plt.tick_params(axis='both', which='both',
top=False, bottom=False, left=False, right=False,
labelbottom=False, labeltop=False, labelleft=False, labelright=False)
plt.title('Original Image')
plt.subplot(1, 2, 2)
plt.imshow(new_image)
plt.tick_params(axis='both', which='both',
top=False, bottom=False, left=False, right=False,
labelbottom=False, labeltop=False, labelleft=False, labelright=False)
plt.title('Cropped Image')
plt.show()
return new_image
ex_img = cv2.imread(r'./data/brain_tumor_dataset/yes/Y10.jpg')
ex_new_img = crop_brain(ex_img, True)
ex_img = cv2.imread(r'./data/brain_tumor_dataset/no/10 no.jpg')
ex_new_img = crop_brain(ex_img, True)
###Output
_____no_output_____
###Markdown
LOAD DATA
###Code
def load_data(dir_list, image_size):
X = []
y = []
image_width, image_height = image_size
for directory in dir_list:
for filename in listdir(directory):
image = cv2.imread(directory + '\\' + filename)
image = crop_brain(image, plot=False)
image = cv2.resize(image, dsize=(image_width, image_height), interpolation=cv2.INTER_CUBIC)
image = image / 255.
X.append(image)
if directory[-3:] == 'yes':
y.append([1])
else:
y.append([0])
X = np.array(X)
y = np.array(y)
X, y = shuffle(X, y)
print(f'Number of examples is: {len(X)}')
print(f'X shape is: {X.shape}')
print(f'y shape is: {y.shape}')
return X, y
augmented_path = 'augmented data/'
# augmented data (yes and no) contains both the original and the new generated examples
augmented_yes = augmented_path + 'yes'
augmented_no = augmented_path + 'no'
IMG_WIDTH, IMG_HEIGHT = (240, 240)
X, y = load_data([augmented_yes, augmented_no], (IMG_WIDTH, IMG_HEIGHT))
###Output
_____no_output_____
###Markdown
PLOT SAMPLE IMAGES
###Code
def plot_sample_images(X, y, n=50):
for label in [0,1]:
# grab the first n images with the corresponding y values equal to label
images = X[np.argwhere(y == label)]
n_images = images[:n]
columns_n = 10
rows_n = int(n/ columns_n)
plt.figure(figsize=(20, 10))
i = 1 # current plot
for image in n_images:
plt.subplot(rows_n, columns_n, i)
plt.imshow(image[0])
# remove ticks
plt.tick_params(axis='both', which='both',
top=False, bottom=False, left=False, right=False,
labelbottom=False, labeltop=False, labelleft=False, labelright=False)
i += 1
label_to_str = lambda label: "Yes" if label == 1 else "No"
plt.suptitle(f"Brain Tumor: {label_to_str(label)}")
plt.show()
plot_sample_images(X, y)
###Output
_____no_output_____
###Markdown
SPLIT DATA
###Code
def split_data(X, y, test_size=0.2):
X_train, X_test_val, y_train, y_test_val = train_test_split(X, y, test_size=test_size)
X_test, X_val, y_test, y_val = train_test_split(X_test_val, y_test_val, test_size=0.5)
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = split_data(X, y, test_size=0.3)
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of development examples = " + str(X_val.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(y_train.shape))
print ("X_val (dev) shape: " + str(X_val.shape))
print ("Y_val (dev) shape: " + str(y_val.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(y_test.shape))
###Output
_____no_output_____
###Markdown
SOME HELPER FUNCTIONS
###Code
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return f"{h}:{m}:{round(s,1)}"
def compute_f1_score(y_true, prob):
# convert the vector of probabilities to a target vector
y_pred = np.where(prob > 0.5, 1, 0)
score = f1_score(y_true, y_pred)
return score
###Output
_____no_output_____
###Markdown
BUILDING THE MODEL
###Code
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, BatchNormalization
model = Sequential()
model.add(Conv2D(32, kernel_size=(2, 2), input_shape=(128, 128, 3), padding = 'Same'))
model.add(Conv2D(32, kernel_size=(2, 2), activation ='relu', padding = 'Same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, kernel_size = (2,2), activation ='relu', padding = 'Same'))
model.add(Conv2D(64, kernel_size = (2,2), activation ='relu', padding = 'Same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
model.compile(loss = "categorical_crossentropy", optimizer='Adamax')
print(model.summary())
###Output
_____no_output_____
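###Markdown
A minimal training sketch (an assumption, not part of the original notebook): the compiled model expects 128x128x3 inputs and a two-unit softmax, so the 240x240 crops loaded earlier would need to be resized and the binary labels one-hot encoded before fitting, roughly as follows.
###Code
# Hedged sketch only: resize inputs and one-hot encode labels to match the model above.
# Epoch and batch-size values are illustrative.
import cv2
import numpy as np
from keras.utils import to_categorical
X_train_s = np.array([cv2.resize(img, (128, 128)) for img in X_train])
X_val_s = np.array([cv2.resize(img, (128, 128)) for img in X_val])
history = model.fit(X_train_s, to_categorical(y_train.ravel(), 2),
                    validation_data=(X_val_s, to_categorical(y_val.ravel(), 2)),
                    epochs=10, batch_size=32)
###Output
_____no_output_____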
###Markdown
Descriptive Statistics Final Project Questions for InvestigationThis experiment will require the use of a standard deck of playing cards. This is a deck of fifty-two cards divided into four suits (spades (♠), hearts (♥), diamonds (♦), and clubs (♣)), each suit containing thirteen cards (Ace, numbers 2-10, and face cards Jack, Queen, and King). You can use either a physical deck of cards for this experiment or you may use a virtual deck of cards such as that found on random.org (http://www.random.org/playing-cards/).For the purposes of this task, assign each card a value: The Ace takes a value of 1, numbered cards take the value printed on the card, and the Jack, Queen, and King each take a value of 10.1. First, create a histogram depicting the relative frequencies of the card values.2. Now, we will get samples for a new distribution. To obtain a single sample, shuffle your deck of cards and draw three cards from it. (You will be sampling from the deck without replacement.) Record the cards that you have drawn and the sum of the three cards’ values. Replace the drawn cards back into the deck and repeat this sampling procedure a total of at least thirty times.3. Let’s take a look at the distribution of the card sums. Report descriptive statistics for the samples you have drawn. Include at least two measures of central tendency and two measures of variability.4. Create a histogram of the sampled card sums you have recorded. Compare its shape to that of the original distribution. How are they different, and can you explain why this is the case?5. Make some estimates about values you will get on future draws. Within what range will you expect approximately 90% of your draw values to fall? What is the approximate probability that you will get a draw value of at least 20? Make sure you justify how you obtained your values. ![image.png](attachment:image.png) 1. First, create a histogram depicting the relative frequencies of the card values.
###Code
cards = [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,7,7,7,7,8,8,8,8,9,9,9,9,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10]
from matplotlib import pyplot as plt
plt.hist(cards)
plt.show()
###Output
_____no_output_____
###Markdown
2. Now, we will get samples for a new distribution. To obtain a single sample, shuffle your deck of cards and draw three cards from it. (You will be sampling from the deck without replacement.) Record the cards that you have drawn and the sum of the three cards’ values. Replace the drawn cards back into the deck and repeat this sampling procedure a total of at least thirty times.
###Code
import random as rd
import statistics as sts
import math
rd.shuffle(cards)
samples = [None] * 30
for i in range(30):
rd.shuffle(cards)
samples[i]= cards[0:3]
samples_sum = [sum(sample) for sample in samples ]
samples_mean = [sts.mean(sample) for sample in samples]
###Output
_____no_output_____
###Markdown
3. Let’s take a look at the distribution of the card sums. Report descriptive statistics for the samples you have drawn. Include at least two measures of central tendency and two measures of variability.
###Code
def standar_error(vec, samples):
    return sts.stdev(vec) / math.sqrt(len(samples[1]))
def z_score(pop_vec, samples, value, is_sample=False):
pop_mean = sts.mean(pop_vec)
if (is_sample):
return (value - pop_mean) / standar_error(pop_vec, samples)
else:
        return (value - pop_mean) / sts.stdev(pop_vec)
def value_from_z(pop_vec, samples, z, is_sample=False):
pop_mean = sts.mean(pop_vec)
if (is_sample):
return z * standar_error(pop_vec, samples) + pop_mean
else:
        return z * sts.stdev(pop_vec) + pop_mean
def print_data(vec, name, is_sample=False):
mean = sts.mean(vec)
median = sts.median(vec)
mode = sts.mode(vec)
variance = sts.variance(vec)
    sigma = sts.stdev(vec)
    if (is_sample):
        sigma = standar_error(vec, samples)
print('- The ', name, ' population has a:\n\tmean=', str(mean), '\n\tmedian=', str(median), '\n\tmode=', str(mode), '\n\tvariance=', str(variance), '\n\tstandar deviation=',str(sigma))
print_data(cards, 'cards')
print_data(samples_sum, 'samples sum')
print_data(samples_mean, 'samples mean')
for i,sample in enumerate(samples):
print_data(sample, 'sample ' + str(i+1), True)
###Output
- The sample 1 population has a:
mean= 4.333333333333333
median= 3
mode= [3, 8, 2]
variance= 6.888888888888888
standar deviation= 1.8203322409537281
- The sample 2 population has a:
mean= 5.333333333333333
median= 5
mode= [5, 4, 7]
variance= 1.5555555555555554
standar deviation= 1.8203322409537281
- The sample 3 population has a:
mean= 9.333333333333334
median= 8
mode= [10]
variance= 0.888888888888889
standar deviation= 1.8203322409537281
- The sample 4 population has a:
mean= 7.666666666666667
median= 10
mode= [10, 9, 4]
variance= 6.888888888888888
standar deviation= 1.8203322409537281
- The sample 5 population has a:
mean= 3.6666666666666665
median= 6
mode= [6, 2, 3]
variance= 2.8888888888888893
standar deviation= 1.8203322409537281
- The sample 6 population has a:
mean= 10.0
median= 10
mode= [10]
variance= 0.0
standar deviation= 1.8203322409537281
- The sample 7 population has a:
mean= 6.333333333333333
median= 2
mode= [2, 10, 7]
variance= 10.888888888888888
standar deviation= 1.8203322409537281
- The sample 8 population has a:
mean= 5.666666666666667
median= 2
mode= [2, 5, 10]
variance= 10.888888888888888
standar deviation= 1.8203322409537281
- The sample 9 population has a:
mean= 2.3333333333333335
median= 3
mode= [2]
variance= 0.22222222222222224
standar deviation= 1.8203322409537281
- The sample 10 population has a:
mean= 2.3333333333333335
median= 5
mode= [1]
variance= 3.555555555555556
standar deviation= 1.8203322409537281
- The sample 11 population has a:
mean= 5.333333333333333
median= 4
mode= [4, 2, 10]
variance= 11.555555555555557
standar deviation= 1.8203322409537281
- The sample 12 population has a:
mean= 3.0
median= 1
mode= [1, 6, 2]
variance= 4.666666666666667
standar deviation= 1.8203322409537281
- The sample 13 population has a:
mean= 7.333333333333333
median= 8
mode= [8, 4, 10]
variance= 6.222222222222221
standar deviation= 1.8203322409537281
- The sample 14 population has a:
mean= 7.666666666666667
median= 10
mode= [10]
variance= 10.888888888888891
standar deviation= 1.8203322409537281
- The sample 15 population has a:
mean= 9.0
median= 9
mode= [9, 10, 8]
variance= 0.6666666666666666
standar deviation= 1.8203322409537281
- The sample 16 population has a:
mean= 7.333333333333333
median= 5
mode= [5, 10, 7]
variance= 4.222222222222222
standar deviation= 1.8203322409537281
- The sample 17 population has a:
mean= 5.333333333333333
median= 1
mode= [1, 5, 10]
variance= 13.555555555555557
standar deviation= 1.8203322409537281
- The sample 18 population has a:
mean= 3.3333333333333335
median= 1
mode= [1, 2, 7]
variance= 6.888888888888888
standar deviation= 1.8203322409537281
- The sample 19 population has a:
mean= 1.3333333333333333
median= 1
mode= [1]
variance= 0.2222222222222222
standar deviation= 1.8203322409537281
- The sample 20 population has a:
mean= 8.0
median= 4
mode= [10]
variance= 8.0
standar deviation= 1.8203322409537281
- The sample 21 population has a:
mean= 7.0
median= 6
mode= [6, 8, 7]
variance= 0.6666666666666666
standar deviation= 1.8203322409537281
- The sample 22 population has a:
mean= 9.666666666666666
median= 10
mode= [10]
variance= 0.2222222222222222
standar deviation= 1.8203322409537281
- The sample 23 population has a:
mean= 8.0
median= 10
mode= [10]
variance= 8.0
standar deviation= 1.8203322409537281
- The sample 24 population has a:
mean= 7.666666666666667
median= 3
mode= [10]
variance= 10.888888888888891
standar deviation= 1.8203322409537281
- The sample 25 population has a:
mean= 9.333333333333334
median= 10
mode= [10]
variance= 0.888888888888889
standar deviation= 1.8203322409537281
- The sample 26 population has a:
mean= 8.333333333333334
median= 6
mode= [6, 9, 10]
variance= 2.8888888888888893
standar deviation= 1.8203322409537281
- The sample 27 population has a:
mean= 5.0
median= 2
mode= [2, 3, 10]
variance= 12.666666666666666
standar deviation= 1.8203322409537281
- The sample 28 population has a:
mean= 5.666666666666667
median= 2
mode= [2, 10, 5]
variance= 10.888888888888888
standar deviation= 1.8203322409537281
- The sample 29 population has a:
mean= 6.0
median= 10
mode= [10, 6, 2]
variance= 10.666666666666666
standar deviation= 1.8203322409537281
- The sample 30 population has a:
mean= 9.333333333333334
median= 10
mode= [10]
variance= 0.888888888888889
standar deviation= 1.8203322409537281
###Markdown
4. Create a histogram of the sampled card sums you have recorded. Compare its shape to that of the original distribution. How are they different, and can you explain why this is the case?
###Code
plt.hist(samples_sum)
plt.show()
plt.hist(cards)
plt.show()
###Output
_____no_output_____
###Markdown
5. Make some estimates about values you will get on future draws. Within what range will you expect approximately 90% of your draw values to fall? What is the approximate probability that you will get a draw value of at least 20? Make sure you justify how you obtained your values. ZTable![ZTable.jpg](attachment:ZTable.jpg)
###Code
z_90 = 0.29
_90th = value_from_z(samples_sum,samples,z_90, True)
_1st = value_from_z(samples_sum,samples,0, True)
print('90% of draws should be between ', str(_1st), ' and ', str(_90th))
z = z_score(samples_sum,samples, 20, True)
print(z) #0.22637644747964691
z_t = (1 - 0.5871) * 100
print('The probability you will get a draw of at least 20 is ', str(z_t),'%')
###Output
The probability you will get a draw of at least 20 is 41.290000000000006 %
###Markdown
Data-X Project: Electricity Price Prediction Feature Modeling Group: Machine Learning Optimization Pipeline Description of Notebook Retail electricity prices across different regions will have varying dependencies on all kinds of other signals in the energy marketplace. This notebook is a "pipeline" that integrates any available regional data (as well as national data) into an automated feature and machine learning model selection routine. The process is highly iterative and computationally intensive. However, the prediction of long term retail electricity prices is not time sensitive. It is not particularly desirable, but not necessarily awful, for this notebook to run over the course of hours or days. The eventual output is visualizations of different models' average performances across ranges of feature combinations and hyperparameter combinations. The best-performing-on-average combination or combinations of [model X feature grouping X hyperparameter set] will be used (on their own or averaged) to make future predictions of energy price. Other high performing combinations will combine to form tolerance bands for the prediction. Team Members: Aaron Drew, Arbaaz Shakir, John Stuart, Adam Yankelevits, Eric Yehl **Note:** This notebook needs to be launched by typing: jupyter lab Project.ipynb --NotebookApp.iopub_data_rate_limit=10000000000 Import Libraries Import open source packages and files defining custom functions.
###Code
%run helper_functions.py
###Output
_____no_output_____
###Markdown
Data CollectionSource data streams for a particular region (probably manual). Data AggregationStitch many data streams into a Google Sheet and create a pandas dataframe by importing from Sheets API.Columns: FeaturesRows: Timestamps
###Code
raw_data = import_sheet('TOY DATA')
raw_data.head()
raw_data_desc = raw_data.describe()
raw_data_desc
raw_data = raw_data[0:100]
mean_std_normed_data = (raw_data-raw_data.mean())/raw_data.std()
min_max_normed_data = (raw_data-raw_data.min())/(raw_data.max()-raw_data.min())
###Output
_____no_output_____
###Markdown
Data CleaningFill in missing values using interpolation, or condense high resolution data. Data TransformationConvert data signals to smooth moving averages, removing seasonal oscillations.Track average, seasonal peaks, and seasonal troughs.Normalize all signals (maybe). Find best fit polynomial for seasonal deviation over the course of one year. Exploratory Data Analysis with Correlation Matrices Pearson correlation matrix "heat map" for colinearity between all features.
###Code
# Close figures in the background
plt.close("all")
plt.close("all")
%run helper_functions.py
reg_heatmap = reg_heat_map(raw_data)
reg_heatmap
plt.close("all")
%run helper_functions.py
reg_heatmap = reg_heat_map(min_max_normed_data)
reg_heatmap
###Output
_____no_output_____
###Markdown
Time displaced correlations of all features with price Displace (advance) feature signals by n months. This will line up past signals with future prices. Truncate n months at the end of feature signals and n months at the beginning of price signals. A small pandas illustration of this alignment follows the heat map below.
###Code
plt.close("all")
%run helper_functions.py
time_heatmap = time_heat_map(raw_data,'Price',months=180)
time_heatmap
###Output
/Users/eric_yehl/eric_yehl-data-x-s18/EPP-Feature-Modeling/helper_functions.py:94: RuntimeWarning: Degrees of freedom <= 0 for slice
time_corr[i,j] = np.round(np.cov(comp_var_sig,feature_sig)[0,1]/(np.std(comp_var_sig)*np.std(feature_sig)),2)
/Users/eric_yehl/anaconda3/envs/data-x/lib/python3.6/site-packages/numpy/lib/function_base.py:2929: RuntimeWarning: divide by zero encountered in double_scalars
c *= 1. / np.float64(fact)
/Users/eric_yehl/anaconda3/envs/data-x/lib/python3.6/site-packages/numpy/lib/function_base.py:2929: RuntimeWarning: invalid value encountered in multiply
c *= 1. / np.float64(fact)
/Users/eric_yehl/anaconda3/envs/data-x/lib/python3.6/site-packages/numpy/lib/function_base.py:1110: RuntimeWarning: Mean of empty slice.
avg = a.mean(axis)
/Users/eric_yehl/anaconda3/envs/data-x/lib/python3.6/site-packages/numpy/core/_methods.py:73: RuntimeWarning: invalid value encountered in true_divide
ret, rcount, out=ret, casting='unsafe', subok=False)
/Users/eric_yehl/eric_yehl-data-x-s18/EPP-Feature-Modeling/helper_functions.py:94: RuntimeWarning: invalid value encountered in double_scalars
time_corr[i,j] = np.round(np.cov(comp_var_sig,feature_sig)[0,1]/(np.std(comp_var_sig)*np.std(feature_sig)),2)
/Users/eric_yehl/anaconda3/envs/data-x/lib/python3.6/site-packages/pandas/core/generic.py:5663: RuntimeWarning: invalid value encountered in absolute
return np.abs(self)
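###Markdown
A small pandas illustration (an assumption, not part of the original pipeline) of the alignment described above: a feature value from lag months ago is paired with the current price, so a strong correlation here means the past feature leads the price.
###Code
# Hedged illustration of time-displaced correlation for a single 6-month lag.
import pandas as pd
lag = 6
feat = [c for c in raw_data.columns if c != 'Price'][0]   # any non-price column
aligned = pd.DataFrame({'feature_lagged': raw_data[feat].shift(lag),
                        'Price': raw_data['Price']}).dropna()
print(aligned.corr())
###Output
_____no_output_____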
###Markdown
Data VisualizationPlot features against each other in 3+ dimensions. Feature ConstructionCreate new features using derivatives (perhaps first interpolating with polynomials, then taking the derivative(s) of these instead of using finite difference), powers, time offsets, and perhaps products and quotients of these. Do this intelligently so as not to create an intractable amount of combinations.
###Code
%run helper_functions.py
trimmed_data, thrown_out = distinct_features(raw_data,corr_high = .8,look_back=36)
print(str(thrown_out)+' features removed')
trimmed_data.head()
plt.close("all")
%run helper_functions.py
new_data = new_features_with_funcs(raw_data,\
[np.sqrt,np.square,np.log],\
['Sqrt','Square','Log'])
new_data, thrown_out = distinct_features(new_data,corr_high = 0.7, look_back=7)
print(thrown_out)
new_data.describe()
plt.close("all")
%run helper_functions.py
new_new_data,new_feats,thrown_out = new_features_with_combs(new_data)
new_new_data,thrown_out2 = distinct_features(new_new_data, corr_high=0.6, look_back=7)
print(thrown_out)
print(thrown_out2)
new_new_data.describe()
%run helper_functions.py
new_new_data,thrown_out2 = distinct_features(new_new_data, corr_high=0.5, look_back=7)
print(thrown_out2)
new_new_data.describe()
new_feats,thrown_out
###Output
_____no_output_____
###Markdown
New Correlation Matrices Pearson correlation matrix "heat map" for collinearity between all features (old and new). Time displaced correlation matrix "heat map" for collinearity between all features (old and new) and electricity price at offset times.
###Code
plt.close("all")
%run helper_functions.py
reg_heatmap = reg_heat_map(new_new_data)
reg_heatmap
plt.close("all")
%run helper_functions.py
time_heatmap = time_heat_map(new_new_data,'Price',months=12)
time_heatmap
###Output
_____no_output_____
###Markdown
Create Test Matrix of Model/Feature CombinationsPandas dataframe with:Columns: Features, Hyperparameters, 1 column for model type, 1 column for average test resultsRows: Different combinations of the aboveData: 1's and 0's for feature inclusion, hyperparameter values, average test results Run Test Matrix of Model/Feature CombinationsRun many separate model trainings & performance tests for each combination below, averaging the __test__ results:[ML algorithm X hyperparameter set X feature grouping]Fill out pandas dataframe with average performance test results. Visualize Performance MatrixDisplay test results graphically. Make Predictions New ThoughtsMaybe predicting a signal is not one model... it can be a collection of models that are each responsible for predicting a single future timestep. Each part of the future will have different dependencies on the past. Each time delta (e.g. +5 months in the future) should then be trained as a separate model. These models' predictions would be combined and smoothed to form a cohesive prediction signal. We can first construct an enormous set of features using transformations: square, log, exponent, derivative, integral. Then combinations of these: product, quotient, exponent. Then maybe even transformations of the combinations! We can easily turn 20 features that we think might have some unique substance into several million, producing and exploring nonlinear relationships by force.We can filter features as they are constructed based on some basic correlation criteria (too correlated with others, uncorrelated with price).This would be to perform manageable LassoCV (= Lasso + regularization) regressions for each future timestep. Lasso automatically selects the best few features, even from many thousands. We would normalize all features before running Lasso, but after doing feature construction.This strategy is forcing a complex future prediction problem into a linear regression mold. It allows classic test-train splitting, which means we can have an accurate understanding of our ultimate predictive power. The time delay aspect of feature construction will be the most interesting I believe. Training for a model whose job is to predict the 5 month future price will match the y_train signal with x_train signals from 5 months in the past and before. And before? ... For a 5 month future prediction, I hypothesize that we won't just want to look at a 5 month old slice of signals. Rather, we want to look at the whole body of data from 5 months ago and further back into the past. All of a particular future price's 6 month old, 7 month old, 1 year old data become features of that price. This is some seriously epic dataframe manipulation.ARMA, ARIMA, SARIMA are time series forecasting models that also look into the past, but quite differently (these models are actually univariate... the multivariate one is call VAR (vector auto-regression) which works for our case, where we think other signals are influencing the signal we are interested in.
###Code
from sklearn.linear_model import LassoCV
###Output
_____no_output_____
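###Markdown
A minimal sketch of the per-horizon idea described above (assumptions: the lag window, horizon, and use of raw_data with its 'Price' column are illustrative, not the project's final implementation). Lagged copies of every signal become the features, and the target is the price shifted n months into the future.
###Code
# Hedged sketch of "one LassoCV model per future timestep" with lagged features.
import pandas as pd
from sklearn.linear_model import LassoCV

horizon = 5      # predict the price 5 months ahead (illustrative)
max_lag = 12     # use the past 12 months of every signal (illustrative)

lagged = {f'{col}_lag{lag}': raw_data[col].shift(lag)
          for col in raw_data.columns for lag in range(max_lag + 1)}
X_lagged = pd.DataFrame(lagged)
y_future = raw_data['Price'].shift(-horizon)

mask = X_lagged.notna().all(axis=1) & y_future.notna()
model_h5 = LassoCV(cv=5).fit(X_lagged[mask], y_future[mask])
print(sum(model_h5.coef_ != 0), 'features selected for the +5 month model')
###Output
_____no_output_____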
###Markdown
Clustering Geolocation Data Intelligently in Python We have taxi rank locations, and want to define key clusters of these taxis where we can build service stations for all taxis operating in that region. Prerequisites- Basic Matplotlib skills for plotting 2-D data clearly.- Basic understanding of Pandas and how to use it for data manipulation.- The concepts behind clustering algorithms, although we will go through this throughout the project. Project Outline [**Task 1**](task1): Exploratory Data Analysis [**Task 2**](task2): Visualizing Geographical Data [**Task 3**](task3): Clustering Strength / Performance Metric [**Task 4**](task4): K-Means Clustering [**Task 5**](task5): DBSCAN [**Task 6**](task6): HDBSCAN [**Task 7**](task7): Addressing Outliers [**Further Reading**](further)
###Code
import matplotlib
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pandas as pd
import numpy as np
from tqdm import tqdm
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
from ipywidgets import interactive
from collections import defaultdict
import hdbscan
import folium
import re
cols = ['#e6194b', '#3cb44b', '#ffe119', '#4363d8', '#f58231', '#911eb4',
'#46f0f0', '#f032e6', '#bcf60c', '#fabebe', '#008080', '#e6beff',
'#9a6324', '#fffac8', '#800000', '#aaffc3', '#808000', '#ffd8b1',
'#000075', '#808080']*10
###Output
_____no_output_____
###Markdown
Task 1: Exploratory Data Analysis
###Code
print(f'Before dropping NaNs and dupes\t:\tdf.shape = {df.shape}')
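# A hedged completion (an assumption, not part of the original skeleton): drop rows
# with missing values and duplicate records before re-checking the shape below.
df = df.dropna().drop_duplicates()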
print(f'After dropping NaNs and dupes\t:\tdf.shape = {df.shape}')
###Output
_____no_output_____
###Markdown
Task 2: Visualizing Geographical Data Task 3: Clustering Strength / Performance Metric Task 4: K-Means Clustering
###Code
X_blobs, _ = make_blobs(n_samples=1000, centers=50,
n_features=2, cluster_std=1, random_state=4)
data = defaultdict(dict)
for x in range(1,21):
model = KMeans(n_clusters=3, random_state=17,
max_iter=x, n_init=1).fit(X_blobs)
data[x]['class_predictions'] = model.predict(X_blobs)
data[x]['centroids'] = model.cluster_centers_
    data[x]['unique_classes'] = np.unique(data[x]['class_predictions'])
def f(x):
class_predictions = data[x]['class_predictions']
centroids = data[x]['centroids']
unique_classes = data[x]['unique_classes']
for unique_class in unique_classes:
plt.scatter(X_blobs[class_predictions==unique_class][:,0],
X_blobs[class_predictions==unique_class][:,1],
alpha=0.3, c=cols[unique_class])
plt.scatter(centroids[:,0], centroids[:,1], s=200, c='#000000', marker='v')
plt.ylim([-15,15]); plt.xlim([-15,15])
plt.title('How K-Means Clusters')
interactive_plot = interactive(f, x=(1, 20))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
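# Assumption: the cell that clusters the real taxi data is not shown here. The sketch below
# sets up X, k and class_predictions so the folium map and the silhouette print-outs further
# down have something to work with; k=70 is only suggested by the 'kmeans_70.html' filename
# used later, and is an illustrative choice.
X = df[['LON', 'LAT']].values
k = 70
model = KMeans(n_clusters=k, random_state=17).fit(X)
class_predictions = model.predict(X)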
m = folium.Map(location=[df.LAT.mean(), df.LON.mean()], zoom_start=9, tiles='Stamen Toner')
for i, (_, row) in enumerate(df.iterrows()):
    # colour and label each taxi rank by its predicted cluster
    cluster_colour = cols[class_predictions[i]]
    folium.CircleMarker(
        location=[row.LAT, row.LON],
        radius=5,
        popup=str(class_predictions[i]),
        color=cluster_colour,
        fill=True,
        fill_color=cluster_colour
    ).add_to(m)
print(f'K={k}')
print(f'Silhouette Score: {silhouette_score(X, class_predictions)}')
m.save('kmeans_70.html')
best_silhouette, best_k = -1, 0
for k in tqdm(range(2, 100)):
model = KMeans(n_clusters=k, random_state=1).fit(X)
class_predictions = model.predict(X)
curr_silhouette = silhouette_score(X, class_predictions)
if curr_silhouette > best_silhouette:
best_k = k
best_silhouette = curr_silhouette
print(f'K={best_k}')
print(f'Silhouette Score: {best_silhouette}')
###Output
_____no_output_____
###Markdown
Task 5: DBSCAN Density-Based Spatial Clustering of Applications with Noise
###Code
# code for indexing out certain values
dummy = np.array([-1, -1, -1, 2, 3, 4, 5, -1])
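print(dummy[dummy != -1])  # boolean masking keeps only the non-outlier labels

# Assumption: the DBSCAN fitting cell is not shown here; eps and min_samples below are
# illustrative values only, and X is the LON/LAT array defined earlier.
model = DBSCAN(eps=0.01, min_samples=5).fit(X)
class_predictions = model.labels_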
print(f'Number of clusters found: {len(np.unique(class_predictions))}')
print(f'Number of outliers found: {len(class_predictions[class_predictions==-1])}')
print(f'Silhouette ignoring outliers: {silhouette_score(X[class_predictions!=-1], class_predictions[class_predictions!=-1])}')
# give each outlier (-1) its own unique negative label so that silhouette_score
# treats every outlier as a singleton cluster
no_outliers = np.array([(counter+2)*x if x==-1 else x for counter, x in enumerate(class_predictions)])
print(f'Silhouette outliers as singletons: {silhouette_score(X, no_outliers)}')
m
###Output
_____no_output_____
###Markdown
Task 6: HDBSCANHierarchical DBSCAN
###Code
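# Assumption: the HDBSCAN fitting cell is not shown here; the parameters below are
# illustrative values only, and X is the LON/LAT array defined earlier.
clusterer = hdbscan.HDBSCAN(min_cluster_size=5, min_samples=2)
class_predictions = clusterer.fit_predict(X)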
print(f'Number of clusters found: {len(np.unique(class_predictions))-1}')
print(f'Number of outliers found: {len(class_predictions[class_predictions==-1])}')
print(f'Silhouette ignoring outliers: {silhouette_score(X[class_predictions!=-1], class_predictions[class_predictions!=-1])}')
no_outliers = np.array([(counter+2)*x if x==-1 else x for counter, x in enumerate(class_predictions)])
print(f'Silhouette outliers as singletons: {silhouette_score(X, no_outliers)}')
m
###Output
_____no_output_____
###Markdown
Task 7: Addressing Outliers
###Code
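# Assumption: the original cell for this task is not shown here. A common hybrid approach,
# sketched below, keeps the HDBSCAN clusters and assigns each outlier (label -1) to its
# nearest cluster with a KNeighborsClassifier.
non_outliers = class_predictions != -1
knn = KNeighborsClassifier(n_neighbors=1).fit(X[non_outliers], class_predictions[non_outliers])
class_predictions = class_predictions.copy()
class_predictions[~non_outliers] = knn.predict(X[~non_outliers])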
print(f'Number of clusters found: {len(np.unique(class_predictions))}')
print(f'Silhouette: {silhouette_score(X, class_predictions)}')
m.save('hybrid.html')
###Output
_____no_output_____
###Markdown
Child of Change Objective Hypothesis or research question Project plan * [Introduction](introduction)* [Haitian demographics](demographics)* [Data source](source)* [Data cleaning](cleaning)* [Data analysis](analysis)* [Reflections](Reflections) Introduction Haiti is a Caribbean country located on the island of Hispaniola, a territory it shares with the Dominican Republic to the east. Haiti is a country with a young age structure. People aged 15 to 65 make up 60% of the population. In a context of demographic transition, young people (15-24 years) account for 21% of the population. Approximately 220,000 young people of both sexes reach the age of 15 each year, the age at which they enter the labour market or begin vocational training. Adolescents between the ages of 10 and 19 make up about 22% of the population. Haitian demographics
###Code
import pandas as pd
import numpy as np
df = pd.read_excel(r'hti_adminboundaries_tabulardata.xlsx') # Load the xlsx file
df.at[0,'IHSI_UNFPA_2019']
print('Haiti has {} haitians including {} females and {} males.'.format(df.at[0,'IHSI_UNFPA_2019'], df.at[0,'IHSI_UNFPA_2019_female'], df.at[0,'IHSI_UNFPA_2019_male']))
import folium
latitude= 19.0558462
longitude= -73.0513321
map_ht = folium.Map(location=[latitude, longitude],zoom_start=8,tiles='CartoDB positron')
map_ht
df2 = pd.read_excel(r'hti_adminboundaries_tabulardata.xlsx',sheet_name='hti_pop2019_adm1')
df2
df_final1 = df2.drop(['adm0_fr', 'adm0_ht', 'adm1_ht', 'adm1_fr'], axis=1)
df_final1.rename(columns={'adm0code': 'Code', 'adm1code': 'Depart_code',
                          'IHSI_UNFPA_2019': 'Total', 'IHSI_UNFPA_2019_female': 'Female', 'IHSI_UNFPA_2019_male': 'Male'}, inplace=True)
df_final1.sort_values(by='adm1_en', inplace=True)
Df = df_final1.reset_index()
Df
df_final = Df.drop(['index'], axis=1)
df_final
#Add latitude & longitude column
df_final.insert(4, "lat", [19.4504,19.1430,18.6339,18.4411,19.7592,19.6656,19.9318,18.5410,18.2004,18.2350], True)
df_final.insert(5, "lng", [-72.6832, -72.0040, -74.1184, -73.0883,-72.2125, -71.8448, -72.8295, -72.3360, -73.7500, -72.5370], True)
df_final
from matplotlib import pyplot as plt
df_final.reset_index().plot(
    x="index", y=["Female", "Male"], kind="bar"
)
plt.title("Haitian demographics")
plt.xlabel("adm1_en")
plt.ylabel("Total")
# one hot encoding
female_onehot = pd.get_dummies(df_final[['adm1_en']], prefix="", prefix_sep="")
# add the Female column to the encoded dataframe
female_onehot['Female'] = df_final['Female']
# move the Female column to the first position
fixed_columns = [female_onehot.columns[-1]] + list(female_onehot.columns[:-1])
female_onehot = female_onehot[fixed_columns]
female_onehot.head()
# pull some graph
def demographics():
    print()
    print(format('generate my grouped BAR plot','*^82'))
    import warnings
    warnings.filterwarnings("ignore")
    import matplotlib.pyplot as plt  # import library
    # bar positions and bar width for the grouped chart
    pos = list(range(len(df_final)))
    width = 0.25
    # Plotting the bars
    fig, ax = plt.subplots(figsize=(10,5))
    # Create a bar with female data
    plt.bar(pos, df_final['Female'], width=width, alpha=0.5, color='#EE3224')
    # Create a bar with male data
    plt.bar([p + width for p in pos], df_final['Male'], width=width, alpha=0.5, color='#F78F1E')
    # Set the y axis label
    ax.set_ylabel('Total')
    # Set the chart's title
    ax.set_title('Haitian demographics')
    # Set the position of the x ticks
    ax.set_xticks([p + 0.5 * width for p in pos])
    # Set the labels for the x ticks
    ax.set_xticklabels(df_final['adm1_en'])
    # Adding the legend and showing the plot
    plt.legend(['Female', 'Male'], loc='upper left')
    plt.grid()
    plt.show()
demographics()
df_final.to_csv (r"Haiti_Data", index = False)
from folium.plugins import MarkerCluster
from folium.map import Icon
#Create haitian map with coordinates of each department
latitude= 19.0558462
longitude= -73.0513321
map_haiti = folium.Map(location=[latitude, longitude], zoom_start=9, tiles='CartoDB positron')
marker_cluster = MarkerCluster().add_to(map_haiti)
for lat, lng, Total, Female, Male in zip(df_final['lat'], df_final['lng'], df_final['Total'], df_final['Female'],df_final['Male']):
label = 'Total: {}, Female: {}, Male: {}'.format(round(Total,0), Female, Male)
label = folium.Popup(label, parse_html=True)
folium.Marker(
[lat, lng],
#radius=5,
popup=label,
#color='blue',
#fill=True,
#fill_color='#3186cc',
#fill_opacity=0.7,
#parse_html=False,
icon=Icon(color='red', icon_color='yellow', icon='check-circle',prefix='fa')).add_to(marker_cluster)
map_haiti
###Output
_____no_output_____
###Markdown
Data source **Key indicators of youth aged 15-24**
###Code
import urllib.request # import librairy needed
# specify the URL/web page we are going to be scraping
url="https://dhsprogram.com/topics/Youth-Corner/haiti-dhs-key-indicators.cfm"
# open the url using urllib.request
page = urllib.request.urlopen(url)
# import the BeautifulSoup library so we can parse HTML and XML documents
from bs4 import BeautifulSoup
# parse the HTML from our URL into the BeautifulSoup parse tree format
soup = BeautifulSoup(page, "lxml")
# to look at HTML underlying our chosen web page
print(soup.prettify())
# tags bring back the 'title' and the data between the start and end 'title' tags
soup.title
# refine a step further by specifying the 'string' element and only bring back the content without the title' tags
soup.title.string
# use the 'find_all' function to bring back all instances of the 'table' tag in the HTML
all_tables=soup.find_all("table")
all_tables
trs = soup.find_all('tr')  # (assumed) gather all table rows on the page before slicing
trs[4:8]
variable_name,age_group,women,men=[],[],[],[]
for tr in trs:
tds = tr.find_all('td')
if len(tds) == 1:
variable= tds[0].get_text().strip()
if len(tds) == 3:
variable_name.append(variable)
age_group.append(tds[0].get_text().strip())
women.append(tds[1].get_text().strip())
men.append(tds[2].get_text().strip())
df = pd.DataFrame(dict(variable_name=variable_name,age_group=age_group,women=women,men=men))
def load_table(url,start,end):
variable_name,age_group,women,men,variable =[],[],[],[],''
page = urllib.request.urlopen(url)
variable_name,age_group,women,men=[],[],[],[]
    soup = BeautifulSoup(page, "lxml")
    trs = soup.find_all('tr')  # parse the page's table rows before slicing [start:end]
    for tr in trs[start:end]:
tds = tr.find_all('td')
if len(tds) == 1:
variable = tds[0].get_text().strip()
if len(tds) == 3:
variable_name.append(variable)
age_group.append(tds[0].get_text().strip())
women.append(tds[1].get_text().strip())
men.append(tds[2].get_text().strip())
df = pd.DataFrame(dict(variable_name=variable_name,age_group=age_group,women=women,men=men))
return df
need=load_table(url='https://dhsprogram.com/topics/Youth-Corner/haiti-dhs-key-indicators.cfm',start=0,end=4)
need
df1 =load_table(url='https://dhsprogram.com/topics/Youth-Corner/haiti-dhs-key-indicators.cfm',start=4,end=19)
df1
df2= load_table(url='https://dhsprogram.com/topics/Youth-Corner/haiti-dhs-key-indicators.cfm',start=24,end=34)
df2
df3= load_table(url='https://dhsprogram.com/topics/Youth-Corner/haiti-dhs-key-indicators.cfm',start=38,end=47)
df3
df4= load_table(url='https://dhsprogram.com/topics/Youth-Corner/haiti-dhs-key-indicators.cfm',start=60,end=65)
df4
df5= load_table(url='https://dhsprogram.com/topics/Youth-Corner/haiti-dhs-key-indicators.cfm',start=69,end=80)
df5
df6=load_table(url='https://dhsprogram.com/topics/Youth-Corner/haiti-dhs-key-indicators.cfm',start=100,end=105)
df6
# merge all df df1 to df6
dtf= df1.append(df2, ignore_index=True)
dtf
dtf1=dtf.append(df3,ignore_index= True)
dtf1
dtf2= dtf1.append(df4, ignore_index=True)
dtf2
dtf3=dtf2.append(df5, ignore_index=True)
dtf3
dtf4= dtf3.append(df6, ignore_index=True)
dtf4
#convert dataframe in csv file
dtf4.to_csv (r"Keys indicateur", index = False)
! pip install pyreadstat
#country-specific
url='./DATTTT specifique au pays/HTCHA7AFLSR.SAS7BDAT'
df_cs = pd. read_sas(url,encoding='latin8')
df_cs
# retrieve all columns need
colums = list(set(df_cs.columns))
colums
df_cs.info()
#prenatal care
url='./DATTT soins prenatalr recod/HTAN7AFLSR.SAS7BDAT'
df_pc= pd. read_sas(url,encoding='latin8')
df_pc
df_pc.to_csv (r"prenatal_care", index = False)
df_cs.to_csv (r"specific_country", index = False)
###Output
_____no_output_____
###Markdown
Codes for Note column:1. Subject matter same or similar, but codes can differ. Check carefully.2. Created variables used for country report, included for the first time in a recode of a SPA3. Existed in earlier recode(s) as country-specific codes4. For providers, earlier SPAs coded training in last year as '1' and training in previous2-3 years as 2; new SPA codes training in past 2 years as '1' and earlier training as '2'.
###Code
dict_data = pd.read_excel(r'SPA_Data_Dictionary_HTSR7A.xlsx',sheet_name='Clients')
dict_data.head(10)
# drop the header rows and rename the columns
dict_data = dict_data.loc[8:,:]
dict_data.rename(columns={'Unnamed: 0': 'Questionnaire', 'Unnamed: 1': 'New Recode', 'Unnamed: 2': 'Old Recode', 'Unnamed: 3': 'Note','Codes for Note column:': 'Label' }, index={}, inplace=True)
dict_data.reset_index(drop= True, inplace = True)
dict_data.head()
dict_data.to_csv (r"Dict_soins_prenat", index = False)
dict_data_country = pd.read_excel(r'SPA_Data_Dictionary_HTSR7A.xlsx',sheet_name='CS - Data Quality Review')
dict_data_country.loc[5:,:]
dict_data_country.rename(columns={'Unnamed: 0': 'Questionnaire', 'Unnamed: 1':'New Recode', 'Unnamed: 2': 'Label' }, index={}, inplace=True)
dict_data_country.reset_index(drop=True, inplace= True)
dict_data_country
dict_data_country.to_csv (r"Dict_spef_country", index = False)
###Output
_____no_output_____
###Markdown
Data cleaning
###Code
#country-specific
df_cs = pd.read_csv(r'Dict_soins_prenat')
df_cs
dict_sc= pd.read_csv(r'Dict_spef_country')
dict_sc.head(15)
dict_sc= dict_sc.loc[5:,:]
dict_sc.reset_index(drop=True, inplace=True)
dict_sc
dicc = dict_sc[['New Recode','Label']].to_dict('split')
b = dicc['data']#.keys()
rename_filter2 = dict(b)
df_final2= df_cs.rename(columns = rename_filter2).drop(['Agent type','Year started to work'], axis = 1 )
df_final2.head()
###Output
_____no_output_____
###Markdown
Data analysis Descriptive statistical analysis Distribution of variables Outliers in the dataset Reflections Summary of data analysis Questions unanswered Recommendations
###Code
# Identify clusters within the dataset:
# - use a K-Elbow plot to identify the optimal number of clusters
# - run the K-Means clustering algorithm to identify the nearest neighbours
# (to be applied to the other datasets; not needed for this one)
# assumed: reload the key-indicator table saved earlier as "Keys indicateur"
df_keys = pd.read_csv(r"Keys indicateur")
# one-hot encoding
female_onehot = pd.get_dummies(df_keys[['age_group']], prefix="", prefix_sep="")
female_onehot['variable_name'] = df_keys['variable_name']
# move the variable_name column to the first position
fixed_columns = [female_onehot.columns[-1]] + list(female_onehot.columns[:-1])
female_onehot = female_onehot[fixed_columns]
female_onehot
Data = female_onehot.head()
! pip install yellowbrick
from sklearn.cluster import KMeans
from yellowbrick.cluster.elbow import kelbow_visualizer
# Use the quick method and immediately show the figure
kelbow_visualizer(KMeans(random_state=4),female_onehot, k=(2,10))
###Output
_____no_output_____
###Markdown
COMP30027 Machine Learning Project 2 Imports
###Code
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import LabelEncoder
###Output
_____no_output_____
###Markdown
Load Datasets
###Code
train_raw = pd.read_csv("./Data/train_raw.csv", header=None, na_values='?', keep_default_na=False, usecols=[0, 2, 6])
train_top10 = pd.read_csv("./Data/train_top10.csv", header=None, na_values='?', keep_default_na=False)
dev_raw = pd.read_csv("./Data/dev_raw.csv", header=None, na_values='?', keep_default_na=False, usecols=[0, 2, 6])
dev_top10 = pd.read_csv("./Data/dev_top10.csv", header=None, na_values='?', keep_default_na=False)
test_raw = pd.read_csv("./Data/test_raw.csv", header=None, na_values='?', keep_default_na=False, usecols=[0, 2, 6])
test_top10 = pd.read_csv("./Data/test_top10.csv", header=None, na_values='?', keep_default_na=False)
###Output
_____no_output_____
###Markdown
Train (Initial Test) k-NN
###Code
cf1 = KNeighborsClassifier(int(math.sqrt(train_top10.size)), "distance", n_jobs=-1)
cf1.fit(train_top10.values[:, 1:31], train_top10.values[:, 31])
t = cf1.predict(dev_top10.values[:, 1:31]) == dev_top10.values[:, 31]
t = np.bincount(t)
s = t[1] / (t[0] + t[1])
print(f"score: {s}")
###Output
score: 0.43027000794141
###Markdown
Gaussian Naive Bayes
###Code
cf2 = GaussianNB()
cf2.fit(train_top10.values[:, 1:31], train_top10.values[:, 31])
t = cf2.predict(dev_top10.values[:, 1:31]) == dev_top10.values[:, 31]
t = np.bincount(t)
s = t[1] / (t[0] + t[1])
print(f"score: {s}")
###Output
score: 0.41601958881143564
###Markdown
Random Decision Forest Classifier
###Code
cf3 = RandomForestClassifier()
cf3.fit(train_top10.values[:, 1:31], train_top10.values[:, 31])
t = cf3.predict(dev_top10.values[:, 1:31]) == dev_top10.values[:, 31]
t = np.bincount(t)
s = t[1] / (t[0] + t[1])
print(f"score: {s}")
###Output
score: 0.43362304773669813
###Markdown
Multilayer Perceptron Classifier
###Code
cf4 = MLPClassifier()
cf4.fit(train_top10.values[:, 1:31], train_top10.values[:, 31])
t = cf4.predict(dev_top10.values[:, 1:31]) == dev_top10.values[:, 31]
t = np.bincount(t)
s = t[1] / (t[0] + t[1])
print(f"score: {s}")
###Output
score: 0.4343951292685079
###Markdown
Adaptive Boost Classifier with Decision Tree _(default)_
###Code
cf5 = AdaBoostClassifier(n_estimators=100)
cf5.fit(train_top10.values[:, 1:31], train_top10.values[:, 31])
t = cf5.predict(dev_top10.values[:, 1:31]) == dev_top10.values[:, 31]
t = np.bincount(t)
s = t[1] / (t[0] + t[1])
print(f"score: {s}")
###Output
score: 0.433667166681373
###Markdown
Train (Combine UID)First combine all UIDs
###Code
def unique(top10, raw):
b = top10.copy()
b[0] = raw[0]
a = b.drop_duplicates(0)
a.update(b.iloc[:, :31].groupby(0).mean())
return a
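# Intended behaviour of unique(): keep one row per user id (taken from column 0 of the
# raw file) and average that user's top-10 word-count features across their posts, with
# the age-group label in column 31 taken from the user's first occurrence.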
a = unique(train_top10, train_raw)
b = unique(dev_top10, dev_raw)
c = unique(test_top10, test_raw)
###Output
c:\users\laitingsheng\appdata\local\programs\python\python36\lib\site-packages\pandas\core\frame.py:4290: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self[col] = expressions.where(mask, this, that)
###Markdown
k-NN
###Code
cf1 = KNeighborsClassifier(int(math.sqrt(train_top10.size)), "distance", n_jobs=-1)
cf1.fit(a.values[:, 1:31], a.values[:, 31])
mapping = pd.Series(cf1.predict(b.values[:, 1:31]), b.iloc[:, 0])
predict_dev_top10 = dev_top10.copy()
predict_dev_top10[0] = dev_raw[0]
p = predict_dev_top10[[0, 31]]
def rc(row):
row[31] = mapping[row[0]]
return row
p = p.apply(rc, 1, raw=True)
p[0] = dev_top10[0]
predict_dev_top10.update(p)
re = (predict_dev_top10[31] == dev_top10[31]).value_counts(True)
print(re)
###Output
False 0.581267
True 0.418733
Name: 31, dtype: float64
###Markdown
Gradient Boosting Classifier
###Code
cf2 = GradientBoostingClassifier(n_estimators=1000)
cf2.fit(a.values[:, 1:31], a.values[:, 31])
mapping = pd.Series(cf2.predict(b.values[:, 1:31]), b.iloc[:, 0])
predict_dev_top10 = dev_top10.copy()
predict_dev_top10[0] = dev_raw[0]
p = predict_dev_top10[[0, 31]]
def rc(row):
row[31] = mapping[row[0]]
return row
p = p.apply(rc, 1, raw=True)
p[0] = dev_top10[0]
predict_dev_top10.update(p)
re = (predict_dev_top10[31] == dev_top10[31]).value_counts(True)
print(re)
###Output
False 0.585988
True 0.414012
Name: 31, dtype: float64
###Markdown
Multilayer Perceptron Classifier
###Code
cf3 = MLPClassifier()
cf3.fit(a.values[:, 1:31], a.values[:, 31])
mapping = pd.Series(cf3.predict(b.values[:, 1:31]), b.iloc[:, 0])
predict_dev_top10 = dev_top10.copy()
predict_dev_top10[0] = dev_raw[0]
p = predict_dev_top10[[0, 31]]
def rc(row):
row[31] = mapping[row[0]]
return row
p = p.apply(rc, 1, raw=True)
p[0] = dev_top10[0]
predict_dev_top10.update(p)
re = (predict_dev_top10[31] == dev_top10[31]).value_counts(True)
print(re)
###Output
False 0.576855
True 0.423145
Name: 31, dtype: float64
###Markdown
Adaptive Boost Classifier with Decision Tree _(default)_
###Code
cf4 = AdaBoostClassifier(n_estimators=100)
cf4.fit(a.values[:, 1:31], a.values[:, 31])
mapping = pd.Series(cf4.predict(b.values[:, 1:31]), b.iloc[:, 0])
predict_dev_top10 = dev_top10.copy()
predict_dev_top10[0] = dev_raw[0]
p = predict_dev_top10[[0, 31]]
def rc(row):
row[31] = mapping[row[0]]
return row
p = p.apply(rc, 1, raw=True)
p[0] = dev_top10[0]
predict_dev_top10.update(p)
re = (predict_dev_top10[31] == dev_top10[31]).value_counts(True)
print(re)
###Output
False 0.588083
True 0.411917
Name: 31, dtype: float64
###Markdown
Logistic Regression
###Code
cf5 = LogisticRegression()
cf5.fit(a.values[:, 1:31], a.values[:, 31])
mapping = pd.Series(cf5.predict(b.values[:, 1:31]), b.iloc[:, 0])
predict_dev_top10 = dev_top10.copy()
predict_dev_top10[0] = dev_raw[0]
p = predict_dev_top10[[0, 31]]
def rc(row):
row[31] = mapping[row[0]]
return row
p = p.apply(rc, 1, raw=True)
p[0] = dev_top10[0]
predict_dev_top10.update(p)
re = (predict_dev_top10[31] == dev_top10[31]).value_counts(True)
print(re)
###Output
False 0.573458
True 0.426542
Name: 31, dtype: float64
###Markdown
Ensembled _(Stacking)_
###Code
enc = LabelEncoder()
enc.fit(a.values[:, 31])
na = pd.DataFrame()
na[0] = a[0]
na[1] = enc.transform(cf1.predict(a.values[:, 1:31]))
na[2] = enc.transform(cf3.predict(a.values[:, 1:31]))
na[3] = enc.transform(cf5.predict(a.values[:, 1:31]))
na[4] = a[31]
nb = pd.DataFrame()
nb[0] = b[0]
nb[1] = enc.transform(cf1.predict(b.values[:, 1:31]))
nb[2] = enc.transform(cf3.predict(b.values[:, 1:31]))
nb[3] = enc.transform(cf5.predict(b.values[:, 1:31]))
nb[4] = b[31]
nc = pd.DataFrame()
nc[0] = c[0]
nc[1] = enc.transform(cf1.predict(c.values[:, 1:31]))
nc[2] = enc.transform(cf3.predict(c.values[:, 1:31]))
nc[3] = enc.transform(cf5.predict(c.values[:, 1:31]))
nc[4] = c[31]
ecf = AdaBoostClassifier(n_estimators=100)
ecf.fit(na.values[:, 1:4], na.values[:, 4])
mapping = pd.Series(ecf.predict(nb.values[:, 1:4]), nb.iloc[:, 0])
t = dev_top10.copy()
t[0] = dev_raw[0]
p = t[[0, 31]]
def rc(row):
row[31] = mapping[row[0]]
return row
p = p.apply(rc, 1, raw=True)
p[0] = dev_top10[0]
t.update(p)
re = (t[31] == dev_top10[31]).value_counts(True)
print(re)
mapping = pd.Series(ecf.predict(nc.values[:, 1:4]), nc.iloc[:, 0])
t = test_top10.copy()
t[0] = test_raw[0]
p = t[[0, 31]]
def rc(row):
row[31] = mapping[row[0]]
return row
p = p.apply(rc, 1, raw=True)
p[0] = test_top10[0]
t.update(p)
re = t[[0, 31]]
re.columns = ["Id", "Prediction"]
re.to_csv("./Data/temp.csv", index=False)
###Output
False 0.581267
True 0.418733
Name: 31, dtype: float64
###Markdown
Group by Age Groups
###Code
gd = train_top10.copy()
gd[0] = train_raw[0]
ga = gd.iloc[:, :31].groupby(0).sum()
ga = ga.div(ga.sum(axis=1), 0).fillna(0)
gad = gd.drop_duplicates(0)[[0, 31]]
tcf = MultinomialNB()
tcf.fit(ga.values, gad[31].values)
d = dev_top10.copy()
d[0] = dev_raw[0]
a = d.iloc[:, :31].groupby(0).sum()
a = a.div(a.sum(axis=1), 0).fillna(0)
ad = d.drop_duplicates(0)[[0, 31]]
mapping = pd.Series(tcf.predict(a.values), ad[0])
predict_dev_top10 = dev_top10.copy()
predict_dev_top10[0] = dev_raw[0]
p = predict_dev_top10[[0, 31]]
def rc(row):
row[31] = mapping[row[0]]
return row
p = p.apply(rc, 1, raw=True)
p[0] = dev_top10[0]
predict_dev_top10.update(p)
re = (predict_dev_top10[31] == dev_top10[31]).value_counts(True)
print(re)
###Output
True 0.521949
False 0.478051
Name: 31, dtype: float64
###Markdown
Raw (TF-IDF Vectoriser)
###Code
tv = TfidfVectorizer(sublinear_tf=True)
rcf = SGDClassifier(random_state=41, max_iter=1000, tol=None, n_jobs=-1)
groups = {
14: "14-16", 15: "14-16", 16: "14-16",
24: "24-26", 25: "24-26", 26: "24-26",
34: "34-36", 35: "34-36", 36: "34-36",
44: "44-46", 45: "44-46", 46: "44-46"
}
###Output
_____no_output_____
###Markdown
Not Combined
###Code
tfid = tv.fit_transform(train_raw[6].values)
rcf.fit(tfid, train_raw[2].values)
print((rcf.predict(tfid) == train_raw[2]).mean())
dfid = tv.transform(dev_raw[6].values)
print((pd.Series(rcf.predict(dfid), dev_top10.index).apply(lambda x: groups.get(x, '?')) == dev_top10[31]).mean())
tfid = tv.fit_transform(train_raw[6].values)
rcf.fit(tfid, train_raw[2].apply(lambda x: groups.get(x, '?')).values)
print((rcf.predict(tfid) == train_top10[31]).mean())
dfid = tv.transform(dev_raw[6].values)
print((rcf.predict(dfid) == dev_top10[31]).mean())
###Output
0.6981531393014128
0.5161475337509926
###Markdown
Combined
###Code
ut = train_raw.drop_duplicates(0)
t = train_raw.groupby(0)[6].apply(lambda x: ' '.join(x))
tfid = tv.fit_transform(t)
rcf.fit(tfid, ut[2])
print((rcf.predict(tfid) == ut[2]).mean())
ud = dev_raw[[0, 2, 6]].drop_duplicates(0)
d = dev_raw.groupby(0)[6].apply(lambda x: ' '.join(x))
dfid = tv.transform(d)
p = pd.Series(rcf.predict(dfid), ud[0]).apply(lambda x: groups.get(x, '?'))
d = dev_raw[[0, 2]]
def rc(row):
row[2] = p[row[0]]
return row
d = d.apply(rc, 1)
print((d[2] == dev_top10[31]).mean())
ud = test_raw[[0, 2, 6]].drop_duplicates(0)
d = test_raw.groupby(0)[6].apply(lambda x: ' '.join(x))
dfid = tv.transform(d)
p = pd.Series(rcf.predict(dfid), ud[0]).apply(lambda x: groups.get(x, '?'))
d = test_raw[[0, 2]]
def rc(row):
row[2] = p[row[0]]
return row
d = d.apply(rc, 1)
d[0] = test_top10[0]
d.columns = ["Id", "Prediction"]
d.to_csv("./Data/temp.csv", index=False)
ut = train_raw.drop_duplicates(0)
t = train_raw.groupby(0)[6].apply(lambda x: ' '.join(x))
tfid = tv.fit_transform(t)
ut[2] = ut[2].apply(lambda x: groups.get(x, '?'))
rcf.fit(tfid, ut[2])
print((rcf.predict(tfid) == ut[2]).mean())
ud = dev_raw[[0, 2, 6]].drop_duplicates(0)
d = dev_raw.groupby(0)[6].apply(lambda x: ' '.join(x))
dfid = tv.transform(d)
p = pd.Series(rcf.predict(dfid), ud[0])
d = dev_raw[[0, 2]]
def rc(row):
row[2] = p[row[0]]
return row
d = d.apply(rc, 1)
print((d[2] == dev_top10[31]).mean())
ut = train_raw.drop_duplicates(0)
t = train_raw.groupby(0)[2].apply(lambda x: ' '.join(x))
tfid = tv.fit_transform(t)
rcf.fit(tfid, ut[2])
print((rcf.predict(tfid) == ut[2]).mean())
ud = dev_raw[[0, 2, 6]].drop_duplicates(0)
d = dev_raw.groupby(0)[6].apply(lambda x: ' '.join(x))
dfid = tv.transform(d)
p = pd.Series(rcf.predict(dfid), ud[0])
d = dev_raw[[0, 2]]
def rc(row):
row[2] = p[row[0]]
return row
d = d.apply(rc, 1)
print((d[2] == dev_top10[31]).mean())
###Output
_____no_output_____
###Markdown
A Study of SNCF Ticket Prices Contents:- [I - Before Starting](title-1) - [1. Loading Libraries](title-1-1) - [2. Useful Functions](title-1-2) - [3. Loading Data](title-1-3)- [II - Choosing the Hour](title-2) - [1. The Outbound Trip](title-2-1) - [2. The Return Trip](title-2-2)- [III - How a Ticket Price Evolves Over Time](title-3) I - Before Starting --- 1. Loading Libraries
###Code
import json
import pandas as pd
from pandas.io.json import json_normalize
import matplotlib.pyplot as plt
import seaborn as sns
import os
from math import *
###Output
_____no_output_____
###Markdown
2. Useful Functions **get_datas_from_departure:** selects rows from the data based on the departure date and time.
###Code
def get_datas_from_departure(datas, day="false", month="false", year="false", hour="false", minute="false"):
to_concat = []
if day != "false":
to_concat.append(datas["departureDate.date.day"] == day)
if month != "false":
to_concat.append(datas["departureDate.date.month"] == month)
if year != "false":
to_concat.append(datas["departureDate.date.year"] == year)
if hour != "false":
to_concat.append(datas["departureDate.hours.hour"] == hour)
if minute != "false":
to_concat.append(datas["departureDate.hours.minute"] == minute)
return datas[pd.concat(to_concat, axis=1).all(axis=1)]
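# Example call (illustrative values): rows for trains leaving on 12 October 2018 at 17:xx
# get_datas_from_departure(datas_aller, day="12", month="10", year="2018", hour="17")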
###Output
_____no_output_____
###Markdown
**get_datas_from_collect:** selects rows from the data based on the collection date and time.
###Code
def get_datas_from_collect(datas, day="false", month="false", year="false", hour="false", minute="false"):
to_concat = []
if day != "false":
to_concat.append(datas["collectDate.date.day"] == day)
if month != "false":
to_concat.append(datas["collectDate.date.month"] == month)
if year != "false":
to_concat.append(datas["collectDate.date.year"] == year)
if hour != "false":
to_concat.append(datas["collectDate.hours.hour"] == hour)
if minute != "false":
to_concat.append(datas["collectDate.hours.minute"] == minute)
return datas[pd.concat(to_concat, axis=1).all(axis=1)]
###Output
_____no_output_____
###Markdown
**add_collected_date:** adds the data-collection date to a dataframe, parsed from the name of the collected file.
###Code
def add_collected_date(datas, file_name):
name = file_name.split(".json")[0]
name = name.split("T")
date = name[0].split("-")
hours = name[1].split("-")
datas["collectDate.date.year"] = date[0]
datas["collectDate.date.month"] = date[1]
datas["collectDate.date.day"] = date[2]
datas["collectDate.hours.hour"] = hours[0]
datas["collectDate.hours.minute"] = hours[1]
return datas
###Output
_____no_output_____
###Markdown
**display_datas_in_time:** takes the data as input and plots a graph of the prices over time.
###Code
def display_datas_in_time(datas, hour):
    # Get the list of dates contained in the dataframe
mean = datas[pd.concat([datas["type"] == "SEMIFLEX", datas["departureDate.hours.hour"] == hour], axis=1).all(axis=1)]["amount"].mean()
median = datas[pd.concat([datas["type"] == "SEMIFLEX", datas["departureDate.hours.hour"] == hour], axis=1).all(axis=1)]["amount"].median()
liste_dates = sorted(set([i for i in zip(datas["departureDate.date.month"], datas["departureDate.date.day"])]))
fig, ax = plt.subplots(figsize=(20,10))
for date in liste_dates:
datasx = get_datas_from_departure(datas, year="2018", month=date[0], day=date[1], hour=hour)
liste_prices = datasx[datasx["type"] == "SEMIFLEX"]["amount"]
label = date[1]+"/"+date[0]
ax.plot(range(len(liste_prices)), liste_prices, label=label)
liste_dates_collected = sorted(set([i for i in zip(datas["collectDate.date.month"], datas["collectDate.date.day"])]))
for index in range(len(liste_dates_collected)):
x=(index+1)*3-0.5
color = "grey"
if (index+1) % 5 == 0:
color = "red"
plt.axvline(x=x, color=color, linewidth=1, linestyle="--")
plt.axhline(y=mean, color="grey", linewidth=1, linestyle="--")
plt.axhline(y=median, color="blue", linewidth=1, linestyle="--")
ax.legend(fontsize='x-large')
plt.show()
###Output
_____no_output_____
###Markdown
**semiflex_only:** returns only the rows for SEMIFLEX tickets.
###Code
def semiflex_only(datas):
return datas[datas["type"] == "SEMIFLEX"]
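# compare_prices() sums the SEMIFLEX prices observed for two departure hours over every
# (departure day, collection time) pair, drops the observations where only one of the two
# hours had a ticket on sale so the totals stay comparable, and returns the difference
# total(first_hour) - total(second_hour).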
def compare_prices(datas, first_hour, second_hour):
datas_first = semiflex_only(get_datas_from_departure(datas, hour=first_hour))
datas_second = semiflex_only(get_datas_from_departure(datas, hour=second_hour))
total_price_first = datas_first["amount"].sum()
total_price_second = datas_second["amount"].sum()
for date_departure in liste_dates_aller:
for date_collect in liste_dates_collect:
datas = semiflex_only(get_datas_from_departure(datas_first, year="2018", month=date_departure[0], day=date_departure[1], hour=first_hour))
datas1 = get_datas_from_collect(datas, year="2018", month=date_collect[0], day=date_collect[1], hour=date_collect[2], minute=date_collect[3])
datas = semiflex_only(get_datas_from_departure(datas_second, year="2018", month=date_departure[0], day=date_departure[1], hour=second_hour))
datas2 = get_datas_from_collect(datas, year="2018", month=date_collect[0], day=date_collect[1], hour=date_collect[2], minute=date_collect[3])
if datas1.shape[0] == 0 and datas2.shape[0] != 0:
total_price_second -= datas2["amount"].values[0]
if datas1.shape[0] != 0 and datas2.shape[0] == 0:
total_price_first -= datas1["amount"].values[0]
return total_price_first - total_price_second
###Output
_____no_output_____
###Markdown
3. Loading Data We start by importing all the data collected over the previous days.
###Code
# Collect the outbound data
datas_aller = []
file_list = os.listdir("datas/aller")
for file_name in file_list:
with open("datas/aller/"+file_name) as file:
datas = json.load(file)
datas = json_normalize(datas["trainProposals"])
datas = add_collected_date(datas, file_name)
datas_aller.append(datas)
datas_aller = pd.concat(datas_aller)
# Collect the return data
datas_retour = []
file_list = os.listdir("datas/retour")
for file_name in file_list:
with open("datas/retour/"+file_name) as file:
datas = json.load(file)
datas = json_normalize(datas["trainProposals"])
datas = add_collected_date(datas, file_name)
datas_retour.append(datas)
datas_retour = pd.concat(datas_retour)
# Create variables that contain the lists of dates
liste_dates_aller = sorted(set([i for i in zip(datas_aller["departureDate.date.month"], datas_aller["departureDate.date.day"])]))
liste_dates_retour = sorted(set([i for i in zip(datas_retour["departureDate.date.month"], datas_retour["departureDate.date.day"])]))
liste_dates_collect = sorted(set([i for i in zip(datas_aller["collectDate.date.month"], datas_aller["collectDate.date.day"], datas_aller["collectDate.hours.hour"], datas_aller["collectDate.hours.minute"])]))
###Output
_____no_output_____
###Markdown
II - Choosing the Hour --- For both the outbound and the return trip I can allow some flexibility in the choice of the train's departure time. The number of departure times considered is just large enough to make a study interesting, but small enough that the study does not become a headache. Two criteria were taken into account: the ticket price, and my personal schedule preferences, based on my first few trips. 1. The Outbound Trip The outbound trip takes place on Friday evening after work. I do not have fixed hours, so I can choose my departure time, but I do not want to stretch this choice too far: I usually leave work at 17:30 and I do not want the Friday departure time to move me too far from that. I therefore consider the following 3 departure times, with personal comments:- 16:36: slightly too early to leave work- 17:37: preferred time- 19:32: arrival too lateLet us start by visualising the data to get a first intuition about the question. I plot a set of graphs laid out on a grid. Each graph corresponds to a different train departure day (each Friday). Each graph has 3 curves, showing how the recorded price evolves over time for each departure time considered.
###Code
# ---- Variables to choose ----
list_hours_aller = ["16", "17", "19"]  # list of departure hours considered
cols = 3  # number of graphs per row
# -----------------------------
rows = ceil(len(liste_dates_aller)/cols)
fig = plt.figure(figsize=(20,20))
#For each friday considered
for index, date in enumerate(liste_dates_aller):
ax = fig.add_subplot(rows, cols, (index+1))
#For each hour considered
for hour in list_hours_aller:
#We get all the datas
datas = semiflex_only(get_datas_from_departure(datas_aller, year="2018", month=date[0], day=date[1], hour=hour))["amount"]
#We plot it
ax.plot(range(len(datas)), datas, label=hour+"h")
ax.set_title(date[1]+"/"+date[0])
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We can see that, depending on the moment considered, the ordering of the departure times by price is not always the same, but a general trend seems to emerge: the 16:00 trains appear more expensive than the 17:00 ones, which in turn appear more expensive than the 19:00 ones.**My decision based on this information:** the 16:00 trip has no advantage over the other trips, so I never take it. I favour the 17:00 trip, which is more convenient and a good compromise. The 19:00 trip can nevertheless be more attractive when the price difference is substantial (> 5 euros).These intuitions are not enough to be sure of the right decision, so I study the same data with another methodology to get a different point of view. Methodology: for each departure time, we sum the prices of all the recorded tickets and then compare these totals. For the comparison to be as fair as possible, we also have to account for the cases where tickets are available for some departure times but not for others. I therefore do not add the ticket prices for the periods where not all of the tickets are available, and I only compare the departure times two at a time.
###Code
dif1 = compare_prices(datas_aller, "16", "17")
dif2 =compare_prices(datas_aller, "16", "19")
print("dif(16h, 17h) = {:.2f}".format(dif1))
print("dif(16h, 19h) = {:.2f}".format(dif2))
###Output
dif(16h, 17h) = 292.20
dif(16h, 19h) = 1326.30
###Markdown
In terms of ticket prices:- 16:00 > 17:00- 16:00 > 19:00So the 16:00 tickets are indeed the most expensive.
###Code
dif = compare_prices(datas_aller, "17", "19")
print("dif(17h, 19h) = {:.2f}".format(dif))
###Output
dif(17h, 19h) = 948.70
###Markdown
- 17:00 > 19:00 Summary: the difference between the 16:00 and 17:00 tickets is relatively small compared with the difference between the 17:00 and 19:00 tickets. It is therefore important to keep the 19:00 tickets in mind, even if that departure time is not always convenient. 2. The Return Trip Similar questions arise for the choice of the return ticket, so I use the same methodology as for the outbound price study. The departure times considered for this trip are:- 14:52: too early- 16:52: a little too early- 17:52: satisfactory time- 18:52: satisfactory time
###Code
# ---- Variables to choose ----
list_hours_retour = ["14", "16", "17", "18"]  # list of departure hours considered
cols = 3  # number of graphs per row
# -----------------------------
rows = ceil(len(liste_dates_aller)/cols)
fig = plt.figure(figsize=(20,20))
#For each sunday considered
for index, date in enumerate(liste_dates_retour):
ax = fig.add_subplot(rows, cols, (index+1))
#For each hour considered
for hour in list_hours_retour:
#We get all the datas
datas = semiflex_only(get_datas_from_departure(datas_retour, year="2018", month=date[0], day=date[1], hour=hour))["amount"]
#We plot it
ax.plot(range(len(datas)), datas, label=hour+"h")
ax.set_title(date[1]+"/"+date[0])
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
It is harder to make general observations with 4 curves. The 16:00 trip nevertheless seems to be the most expensive in general. At the other end, the 14:00 trip seems to attract fewer buyers and therefore offers lower prices than the others. The 17:00 and 18:00 tickets seem to alternate between 2nd and 3rd place in the most-expensive ranking.**My decision based on this information:** favour the 17:00 and 18:00 tickets, and keep an eye on the 14:00 ticket for the cases where its price is really attractive compared with the others.
###Code
dif1 = compare_prices(datas_retour, "14", "16")
dif2 = compare_prices(datas_retour, "14", "17")
dif3 = compare_prices(datas_retour, "14", "18")
print("dif(14h, 16h) = {:.2f}".format(dif1))
print("dif(14h, 17h) = {:.2f}".format(dif2))
print("dif(14h, 18h) = {:.2f}".format(dif3))
###Output
dif(14h, 16h) = -1280.90
dif(14h, 17h) = -868.50
dif(14h, 18h) = -358.60
###Markdown
- 14:00 < 16:00- 14:00 < 17:00- 14:00 < 18:00The 14:00 trip is generally the cheapest.
###Code
dif1 = compare_prices(datas_retour, "16", "17")
dif2 =compare_prices(datas_retour, "16", "18")
print("dif(16h, 17h) = {:.2f}".format(dif1))
print("dif(16h, 18h) = {:.2f}".format(dif2))
###Output
dif(16h, 17h) = 412.40
dif(16h, 18h) = 922.30
###Markdown
- 16:00 > 17:00- 16:00 > 18:00The 16:00 trip is generally the most expensive.
###Code
dif = compare_prices(datas_retour, "17", "18")
print("dif(17h, 18h) = {:.2f}".format(dif))
###Output
dif(17h, 18h) = 509.90
###Markdown
- 17:00 > 18:00The 17:00 trip is more expensive than the 18:00 one. Summary: the 18:00 trip should be favoured over the 17:00 one, but both departure times should be watched depending on the ticket to buy. The 14:00 ticket can be chosen in the extreme cases where its price is very different from the other two; otherwise it is not convenient enough to be worth it every time. III - How a Ticket Price Evolves Over Time ----To continue exploring the collected data, we will try to find out when the best moment to buy a ticket is. I have many questions on this subject. Are ticket prices updated at a precise time of day, or do they change live with demand during the day? How long before the train's departure can a ticket be bought?To answer these questions, I plot all the days considered on a single graph. Each curve shows how the ticket price evolves over time for a given departure day and hour. The dashed vertical lines mark the days of the week; the red ones mark the end of a week, since no data is collected over the weekend. I also plot the median (blue) and the mean (grey) as horizontal lines.
###Code
display_datas_in_time(datas_aller, "16")
print("\t\tTrajet Lannion-Paris à 16h")
display_datas_in_time(datas_aller, "17")
print("\t\tTrajet Lannion-Paris à 17h")
display_datas_in_time(datas_aller, "19")
print("\t\tTrajet Lannion-Paris à 19h")
semiflex_only(get_datas_from_departure(datas_aller, year="2018", month="10", day="26", hour="17"))[["amount", "collectDate.date.day", "collectDate.hours.hour", "collectDate.hours.minute"]]
datas = get_datas_from_departure(datas_aller, year="2018", month="10", day="12", hour="19")
datas = datas[datas["type"] == "SEMIFLEX"]
#datas[["amount", "arrivalDate.date.month", "arrivalDate.date.day", "arrivalDate.hours.minute"]]
datas
###Output
_____no_output_____
###Markdown
Savings: 6.1 euros. The plot above shows how the price evolved over one week, starting Monday 24 September, with data collected every weekday; no data was collected over the weekend.We can see that prices follow "tiers": they seem to take discrete values. There are slight deviations from this rule, as between the 16/11 curve and the 02/11 curve.Price changes within a single day are rare: over the 9 days studied, only 3 price changes during the day were observed, not counting the changes at the start of a day, which actually happened overnight.Price changes move one tier at a time. The only case where a tier was skipped was over the weekend, when no measurements were taken for two consecutive days. What are these tiers? Let us start by selecting the data we will work on, namely the 17:00 SEMIFLEX tickets on the outbound trip.
###Code
palliers_aller = datas_aller[pd.concat([datas_aller["type"] == "SEMIFLEX", datas_aller["departureDate.hours.hour"] == "17"], axis=1).all(axis=1)]
display(palliers_aller[["amount", "type"]].groupby(["amount"], as_index=False).count().sort_values(by="type", ascending=False))
palliers_retour = datas_retour[pd.concat([datas_retour["type"] == "SEMIFLEX", datas_retour["departureDate.hours.hour"] == "17"], axis=1).all(axis=1)]
display(palliers_retour[["amount", "type"]].groupby(["amount"], as_index=False).count().sort_values(by="type", ascending=False))
###Output
_____no_output_____
###Markdown
IV - Random Tests
###Code
datas = get_datas_from_departure(datas_aller, day=date[1], month=date[0], year="2018")
datas = datas[datas["type"] == "SEMIFLEX"]
display(datas[datas["departureDate.hours.hour"] == "17"])
datas[datas["departureDate.hours.hour"] == "19"][["arrivalDate.hours.hour", "arrivalDate.hours.minute", "departureDate.hours.hour", "departureDate.hours.minute", "collectDate.date.day", "collectDate.date.month"]]
###Output
_____no_output_____
###Markdown
My idea is to build the same table, but instead of the price, amount would contain the difference between the 19:00 price and the 17:00 price (challenge).
###Code
datas_aller[pd.concat([datas_aller["arrivalDate.hours.hour"] == "23", datas_aller["arrivalDate.hours.minute"] == "09"], axis=1).all(axis=1)]
###Output
_____no_output_____
###Markdown
Project: Finding Lane Lines on the Road Import packages
###Code
!pip install opencv-python
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import statistics as st
import numpy as np
import cv2
import os
%matplotlib inline
###Output
Collecting opencv-python
  Downloading https://files.pythonhosted.org/packages/18/7f/c836c44ab30074a8486e30f8ea6adc8e6ac02332851ab6cc069e2ac35b84/opencv_python-3.4.3.18-cp36-cp36m-manylinux1_x86_64.whl (25.0MB)
    100% |████████████████████████████████| 25.0MB 1.7MB/s
Requirement already satisfied: numpy>=1.11.3 in /home/msgeorge/anaconda3/lib/python3.6/site-packages (from opencv-python) (1.14.3)
distributed 1.21.8 requires msgpack, which is not installed.
Installing collected packages: opencv-python
Successfully installed opencv-python-3.4.3.18
You are using pip version 10.0.1, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
###Markdown
Helper Functions
###Code
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color, thickness):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def improved_draw_lines(img, temp_lines, color, thickness):
"""
This is the improved draw_lines function which calculates a mean slope
"""
m = np.zeros(shape=temp_lines.shape[0])
b = np.zeros(shape=temp_lines.shape[0])
# calculating the slope (m) and intercept (b) for each
# hough line segment
i=0
for line in temp_lines:
for x1,y1,x2,y2 in line:
m[i] = (y2-y1)/(x2-x1)
b[i] = y1 - m[i]*x1
i=i+1
# calculating the average slope & intercept for the left path line
# creates list of slopes that is exclusively for the left slope
left_list = [x for x in m if x < 0]
    # removes the outliers from the list: 1σ elimination
new_left_list = remove_outliers(left_list)
# calculate the average slope now
left_avg_m = np.mean(new_left_list)
    # calculate the average intercept over the lines whose slopes survived the filtering
i=0
sum1=0
count1=0
for x in b:
if m[i] in new_left_list :
sum1=sum1+x
count1=count1+1
i=i+1
left_avg_b = sum1/count1
# calculating the average slope & intercept for the right path line
right_list = [x for x in m if x > 0]
new_right_list = remove_outliers(right_list)
right_avg_m = np.mean(new_right_list)
i=0
sum2=0
count2=0
for x in b:
if m[i] in new_right_list:
sum2=sum2+x
count2=count2+1
i=i+1
right_avg_b = sum2/count2
# calculating the point of intersection of lines with bottom border
left_bottom_x = int(round((masked_img.shape[0] - left_avg_b)/left_avg_m))
right_bottom_x = int(round((masked_img.shape[0] - right_avg_b)/right_avg_m))
# calculating the top line of region of interest
topline_m = (rt_vertice[1] - lt_vertice[1])/(rt_vertice[0] - lt_vertice[0])
topline_b = rt_vertice[1] - topline_m*rt_vertice[0]
# calculating the point of intersection of lines with top line
# Left Line
intersection_left_x = int(round((left_avg_b - topline_b)/(topline_m-left_avg_m)))
intersection_left_y = int(round(left_avg_m*intersection_left_x + left_avg_b))
# Right Line
intersection_right_x = int(round((right_avg_b - topline_b)/(topline_m-right_avg_m)))
intersection_right_y = int(round(right_avg_m*intersection_right_x + right_avg_b))
# drawing the lines on image
cv2.line(img, (left_bottom_x,img.shape[0]), (intersection_left_x,intersection_left_y), color, thickness)
cv2.line(img, (right_bottom_x,img.shape[0]), (intersection_right_x,intersection_right_y), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
#improved_draw_lines(line_img, lines, [255, 0, 0], 20)
draw_lines(line_img, lines, [255, 0, 0], 20)
return line_img
def improved_hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with full lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
improved_draw_lines(line_img, lines, [255, 0, 0], 20)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
def remove_outliers(list):
"""
    eliminates the values that are more than 1σ away from the mean of the list
"""
#print(list)
elements=np.array(list)
mean = np.mean(elements, axis=0)
sd = np.std(elements, axis=0)
#print(mean,sd)
final_list = [x for x in list if (x > mean - 1 * sd)]
final_list = [x for x in final_list if (x < mean + 1 * sd)]
return final_list
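# Example: remove_outliers([1.0, 1.1, 0.9, 10.0]) keeps the three values near 1.0 and drops
# 10.0, which lies more than one standard deviation above the list's mean.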
###Output
_____no_output_____
###Markdown
Reading In First Image
###Code
#reading in an image
color_image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(color_image), 'with dimensions:', color_image.shape)
plt.imshow(color_image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
###Output
This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)
###Markdown
Creating Pipeline
###Code
# creating grayscale image
gray_image = grayscale(color_image)
plt.imshow(gray_image, cmap='gray')
plt.imsave('writeup_images/gray.jpg',gray_image,cmap='gray')
# defining kernel size & applying gaussian blur
kernel_size=5
smooth_img=gaussian_blur(gray_image,kernel_size)
plt.imshow(smooth_img,cmap='gray')
plt.imsave('writeup_images/smooth.jpg',smooth_img,cmap='gray')
# defining threshold parameters for Canny and applying them
low_threshold = 50
high_threshold = 150
canny_img = canny(smooth_img, low_threshold, high_threshold)
plt.imshow(canny_img,cmap='gray')
plt.imsave('writeup_images/canny.jpg',canny_img,cmap='gray')
# Creating a masked edges image
# setting up vertices
imshape = color_image.shape
lb_vertice= (50,imshape[0])
lt_vertice= (420,330)
rt_vertice= (525,330)
rb_vertice= (imshape[1],imshape[0])
vertices = np.array([[lb_vertice,lt_vertice, rt_vertice, rb_vertice]], dtype=np.int32)
#drawing the polygon on a dummy immage
line_image = np.copy(canny_img)*0
line3_image = np.dstack((line_image, line_image, line_image))
cv2.line(line3_image,lb_vertice,lt_vertice,(255,0,0),10)
cv2.line(line3_image,lt_vertice,rt_vertice,(255,0,0),10)
cv2.line(line3_image,rt_vertice,rb_vertice,(255,0,0),10)
cv2.line(line3_image,lb_vertice,rb_vertice,(255,0,0),10)
color_edges = np.dstack((canny_img, canny_img, canny_img))
lines_edges = cv2.addWeighted(color_edges, 0.8, line3_image, 1, 0)
plt.imshow(lines_edges)
plt.imsave('writeup_images/lines_edges.jpg',lines_edges,cmap='gray')
#creating region of interest
masked_img= region_of_interest(canny_img, vertices)
plt.imshow(masked_img,cmap='gray')
plt.imsave('writeup_images/masked.jpg',masked_img,cmap='gray')
# Define the Hough transform parameters
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 5#minimum number of pixels making up a line
max_line_gap = 1 # maximum gap in pixels between connectable line segments
#Running the hough function
lines = hough_lines(masked_img, rho, theta, threshold, min_line_length, max_line_gap)
color_edges = np.dstack((canny_img, canny_img, canny_img))
houghed_img= weighted_img(lines, color_image, α=0.8, β=1., γ=0.)
plt.imshow(houghed_img)
plt.imsave('writeup_images/houghed.jpg',houghed_img)
###Output
_____no_output_____
###Markdown
Compressed Pipeline
###Code
low_threshold = 50
high_threshold = 150
kernel_size = 5
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 5#minimum number of pixels making up a line
max_line_gap = 1 # maximum gap in pixels between connectable line segments
lb_vertice= (50,540)
lt_vertice= (420,330)
rt_vertice= (525,330)
rb_vertice= (960,540)
vertices = np.array([[lb_vertice,lt_vertice, rt_vertice, rb_vertice]], dtype=np.int32)
buffer = 15
filename= 'test_images/solidWhiteRight.jpg'
# function that consolidates the above code snippets into one. This function takes in a filename
# and returns an image with hough lines drawn
def pipeline(filename, kernel_size, low_threshold, high_threshold, vertices,
rho, theta, threshold, min_line_length, max_line_gap, buffer):
color_image = mpimg.imread(filename)
# creating grayscale image
gray_image = grayscale(color_image)
smooth_img=gaussian_blur(gray_image,kernel_size)
canny_img = canny(smooth_img, low_threshold, high_threshold)
masked_img= region_of_interest(canny_img, vertices)
lines = hough_lines(masked_img, rho, theta, threshold, min_line_length, max_line_gap)
color_edges = np.dstack((canny_img, canny_img, canny_img))
houghed_img= weighted_img(lines, color_image, α=0.8, β=1., γ=0.)
outfile='test_images_output/'+os.path.splitext(filename)[0].split("/")[1]+'.jpg'
# Convert BGR to HSV
BGR_img = cv2.cvtColor(houghed_img, cv2.COLOR_RGB2BGR)
cv2.imwrite(outfile,BGR_img)
return houghed_img
plt.imshow(pipeline(filename, kernel_size, low_threshold, high_threshold, vertices,
rho, theta, threshold, min_line_length, max_line_gap,buffer))
###Output
_____no_output_____
###Markdown
Running through the list of the files
###Code
list_dir = os.listdir("test_images/")
for file in list_dir:
filename= "test_images/" + file
print("Working with: "+filename)
pipeline(filename, kernel_size, low_threshold, high_threshold, vertices,
rho, theta, threshold, min_line_length, max_line_gap,buffer)
###Output
Working with: test_images/solidYellowCurve.jpg
Working with: test_images/solidWhiteCurve.jpg
Working with: test_images/solidWhiteRight.jpg
Working with: test_images/solidYellowLeft.jpg
Working with: test_images/solidYellowCurve2.jpg
Working with: test_images/whiteCarLaneSwitch.jpg
###Markdown
Running a video through the pipeline
###Code
!pip install moviepy
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# function that is similar to the pipeline function. This function takes in an image
# and returns an image with hough lines drawn
def process_image(color_image):
low_threshold = 50
high_threshold = 150
kernel_size = 5
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 5 # minimum number of pixels making up a line
max_line_gap = 1 # maximum gap in pixels between connectable line segments
lb_vertice= (50,540)
lt_vertice= (420,330)
rt_vertice= (525,330)
rb_vertice= (960,540)
vertices = np.array([[lb_vertice,lt_vertice, rt_vertice, rb_vertice]], dtype=np.int32)
buffer = 15
gray_image = grayscale(color_image)
smooth_img = gaussian_blur(gray_image,kernel_size)
canny_img = canny(smooth_img, low_threshold, high_threshold)
masked_img = region_of_interest(canny_img, vertices)
lines = improved_hough_lines(masked_img, rho, theta, threshold, min_line_length, max_line_gap)
color_edges = np.dstack((canny_img, canny_img, canny_img))
result = weighted_img(lines, color_image, α=0.8, β=1., γ=0.)
return result
# Running through the first video
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
#clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
# Running through the second video
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
###Output
[MoviePy] >>>> Building video test_videos_output/solidYellowLeft.mp4
[MoviePy] Writing video test_videos_output/solidYellowLeft.mp4
###Markdown
Machine Learning and Statistic Project 2020 Renewable Energy Sources - Prediction of Wind Turbine Power Production * Student Name: **Maciej Burel*** Student ID: G00376332* Lecturer: Dr. Ian McLoughlin *** (Image - Wharton, 2015) Table of Contents**** [Goals and objectives](goals)* [Introduction](introduction)* [Data Review](datareview)* [Researching Models](models) * [Polynomial Regression](poly) * [Regression-based Neural Networks](neuron)* [Summary and Conclusions](summary)* [References](references) Goals and objectives*** * The goal of this project is to produce a model that accurately predicts wind turbine power output from wind speed values, as in the data set powerproduction.csv.* The part of the project is to create a web service that will respond with predicted power values based on speed values sent as HTTP requests.* Research and investigations about wind energy production * Solution and approaching for project task* Implementation and presentation of the project results * Final project should contain following elements: * Jupyter notebook that trains a model using the data set. In the notebook, it should be explained the model and an analysis of its accuracy. * Python script that runs a web service based on the model, as above. * Dockerfile to build and run the web service in a container. [**<<<**](table) Introduction*** Wind energy production The need to move towards renewable and clean energy sources has increased considerably over the previous years.As the demand for wind power has increased over the last decades, there is a serious need to set up wind farms and construct facilities depending on accurate wind forecasted data. Collected short-term wind forecasting has a significant effect on the electricity [1], which is also necessary to identify the size of wind farms.It is obvious that there is a need for an accurate wind forecasting technique to substantially reduce the cost by wind power scheduling [2]. How Do Wind Turbines Work? Wind turbine blades rotate when hit by the wind. And this doesn’t have to be a strong wind, either: the blades of most turbines will start turning at a wind speed of 3-5 meters per second, which is a gentle breeze. It’s this spinning motion that turns a shaft in the nacelle – which is the box-like structure at the top of a wind turbine. A generator built into the nacelle then converts the kinetic energy of the turning shaft into electrical energy. This then passes through a transformer, which steps up the voltage so it can be transported on the National Grid or used by a local site. (Image - Futuren, 2019)Most onshore wind turbines have a capacity of 2-3 megawatts (MW), which can produce over 6 million kilowatt hours (kwh) of electricity every year. That’s enough to meet the electricity demand of around 1,500 average households.Up to a certain level, the faster the wind blows, the more electricity is generated. In fact, when the wind speed doubles, up to eight times more electricity is generated. But if the wind is too strong, turbines will shut themselves down to prevent being damaged.A wind turbine is typically 30-45% efficient – rising to 50% efficient at times of peak wind. If that sounds low to you, remember that if turbines were 100% efficient, the wind would completely drop after going through the turbine.[3] Wind Power Calculation Wind energy is the kinetic energy of air in motion, also called wind. 
Total wind energy flowing through an imaginary surface with area A during the time t is:$$E = \frac{1}{2}mv^2 = \frac{1}{2}(Avt\rho)v^2 = \frac{1}{2}At\rho v^3$$ where ρ is the density of air; v is the wind speed; Avt is the volume of air passing through A (which is considered perpendicular to the direction of the wind); Avtρ is therefore the mass m passing through "A". ½ ρv2 is the kinetic energy of the moving air per unit volume.Power is energy per unit time, so the wind power incident on A (e.g. equal to the rotor area of a wind turbine) is [4] :$$P = \frac{E}{t} = \frac{1}{2}A\rho v^3$$ A German physicist Albert Betz concluded in 1919 that no wind turbine can convert more than 16/27 (59.3%) of the kinetic energy of the wind into mechanical energy turning a rotor. To this day, this is known as the Betz Limit or **Betz'Law**. The theoretical maximum power efficiency of any design of wind turbine is 0.59 (i.e. no more than 59% of the energy carried by the wind can be extracted by a wind turbine). This is called the “power coefficient” and is defined as: $$P = C_{Pmax} = 0.59$$ Also, wind turbines cannot operate at this maximum limit. The power coefficient needs to be factored in the equation and the extractable power from the wind is given by[5] : $$P_{avail} = \frac{1}{2}At\rho v^3 C_{p}$$ The Power Curve It is important to understand the relationship between power and wind speed to determine the required control type, optimization, or limitation. The power curve, a plot that can be used for this purpose, specifies how much power can be extracted from the incoming wind. The figure below contains an ideal wind turbine power curve. (Image - NI, 2020) It can be seen that the power curve is split into three distinct regions. Because Region I consists of low wind speeds and is below the rated turbine power, the turbine is run at the maximum efficiency to extract all power. In other words, the turbine controls with optimization in mind. On the other hand, Region III consists of high wind speeds and is at the rated turbine power. The turbine then controls with limitation of the generated power in mind when operating in this region. Finally, Region II is a transition region mainly concerned with keeping rotor torque and noise low.[6] [**<<<**](table) Data Review*** To start with the practical part of the project, let's review data from powerproduction.csv file.Firt the all required librarys that will be required for all the tasks in the project will be imported.
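To make the power formula concrete, here is a minimal sketch that plugs illustrative numbers into $P = \frac{1}{2}A\rho v^3 C_{p}$; the rotor radius, air density and power coefficient below are assumptions chosen only for illustration and are not taken from the data set.
###Code
# Illustrative available-power calculation (all values are assumptions, not from powerproduction.csv)
import numpy as np

rho = 1.225                   # air density in kg/m^3 (standard sea-level value)
radius = 50.0                 # assumed rotor radius in metres
area = np.pi * radius ** 2    # swept rotor area A in m^2
v = 10.0                      # wind speed in m/s
cp = 0.40                     # assumed power coefficient, below the Betz limit of 0.59

p_avail = 0.5 * area * rho * v ** 3 * cp   # available power in watts
print(f"Available power at {v} m/s: {p_avail / 1e6:.2f} MW")
###Output
_____no_output_____
###Markdown
Returning to the project itself, the required libraries are imported first and then the data set is loaded.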
###Code
# Import all required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
# Neural networks.
import tensorflow.keras as kr
# Save model to file
import joblib
# Use magic function to render the figure in a notebook
%matplotlib inline
# Load the dataset into the dataframe
df = pd.read_csv('Data/powerproduction.csv')
# Display first 10 rows of dataset
df.head(10)
# Display last 10 rows of dataset
df.tail(10)
###Output
_____no_output_____
###Markdown
From the above output, the data set looks consistent, but it is good practice to check whether there are any empty values before the next steps.
###Code
# Total number of missing values.
df.isnull().values.sum()
###Output
_____no_output_____
###Markdown
There are no empty cells in the data set. Let's take a look at the statistical information for the data.
###Code
# Display statistical info of dataset
df.describe()
###Output
_____no_output_____
###Markdown
The basic information about the powerproduction data set reveals that we have two columns of data, speed and power, each containing 500 samples. There are no units for these data; however, based on the information from the first paragraph we can assume that the wind speed is in m/s. Power is most probably in MW for a wind farm, as one standard wind turbine produces on average 2.5-3 MW. Speed varies from 0 to 25 and power from 0 to 133.56. The next logical step is to visualize these data.
###Code
# Sets a plot style
# Define chart size
plt.figure(figsize=(16,8))
# Add grid lines
plt.grid(True)
# Add title to the chart
plt.title("Visualization of the Power Production Data", fontweight="bold", fontsize=20)
# Add x label to the chart
plt.xlabel('Speed', size=16)
# Add y label to the chart
plt.ylabel('Power', size=16)
# Plot and show data
plt.plot(df.speed, df.power, '.g')
plt.show()
###Output
_____no_output_____
###Markdown
The graph shows that the power production by the turbine increases significantly when the wind speed rises above 7 m/s, up to about 17 m/s where it starts to stabilize. There are a few instances where the wind speed is high yet the power output is 0. We can only guess that the wind turbine was most probably idle for maintenance. Next, the data should go through a preprocessing step. Data Preprocessing Data preparation is one of the indispensable steps in any Machine Learning development life cycle.[7] Pre-processing refers to the transformations applied to our data before feeding it to the algorithm. Data preprocessing is a technique used to convert raw data into a clean data set. In other words, whenever data is gathered from different sources it is collected in a raw format that is not feasible for analysis. To achieve better results from the applied model in Machine Learning projects, the data has to be in a proper format. Some Machine Learning models need information in a specific format; for example, the Random Forest algorithm does not support null values, so null values have to be handled in the original raw data set before it can be run. Another aspect is that the data set should be formatted in such a way that more than one Machine Learning or Deep Learning algorithm can be executed on it, and the best of them chosen.[8] In our case, as shown on the graph, we have to remove the outliers where the wind turbine was not working and the power output was 0.
###Code
# Clean data from outliers and zero values
df_clean = df[df.power != 0]
# Display new clean dataset
# Define chart size
plt.figure(figsize=(16,8))
# Add grid lines
plt.grid(True)
# Add title to the chart
plt.title("Visualization of the Clean Power Production Data", fontweight="bold", fontsize=20)
# Add x label to the chart
plt.xlabel('Speed', size=16)
# Add y label to the chart
plt.ylabel('Power', size=16)
# Plot and show data
plt.plot(df_clean.speed, df_clean.power, '.g')
plt.show()
###Output
_____no_output_____
###Markdown
[**<<<**](table) Researching Models*** There are many machine learning models available that fit this purpose better or worse. Looking at the shape of the given data, I will focus on regression models, where the output is continuous. Polynomial Regression Polynomial Regression is a form of linear regression in which the relationship between the independent variable x and the dependent variable y is modelled as an nth degree polynomial. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y, denoted E(y|x). The basic goal of regression analysis is to model the expected value of a dependent variable y in terms of the value of an independent variable x. In simple regression we use the following equation: $$y = a + bx + e$$ Here y is the dependent variable, a is the y intercept, b is the slope and e is the error term.[9] First, let's divide the dataset into its two components, speed and power.
###Code
# Divide dataset into two components
speed = df_clean.iloc[:, 0].values.reshape(-1,1)
power = df_clean.iloc[:, 1].values.reshape(-1,1)
###Output
_____no_output_____
###Markdown
We can now use the train_test_split function from scikit-learn, which splits our data into random train and test subsets. The test_size floating-point number represents the proportion of the dataset to include in the test split.
###Code
# Split data to train and test with test_size=0.3
speed_train, speed_test, power_train, power_test = train_test_split(speed, power, test_size=0.3, random_state=1)
###Output
_____no_output_____
###Markdown
Build the model Now we can fit the Polynomial Regression model on our components.
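Before fitting the real data, a small sketch (with made-up toy inputs) shows what `PolynomialFeatures` does to a single input column: it appends the powers of each value as extra features, which a linear model can then fit.
###Code
# Toy demonstration of PolynomialFeatures (the two input values are arbitrary examples)
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

x_demo = np.array([[2.0], [3.0]])
poly_demo = PolynomialFeatures(degree=3)
print(poly_demo.fit_transform(x_demo))
# [[ 1.  2.  4.  8.]
#  [ 1.  3.  9. 27.]]   -> columns are x^0, x^1, x^2, x^3
###Output
_____no_output_____
###Markdown
The same transformation, with a higher degree, is now fitted to the training speeds.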
###Code
# Fitting the Polynomial Regression model on two components speed and power.
# The “degree” argument controls the number of features created
# After a few tries, degrees of 5 and above start to produce satisfying results.
poly = PolynomialFeatures(degree = 10)
s_train_poly = poly.fit_transform(speed_train)
# Use linear regression as model on the power train values
m1 = LinearRegression()
m1.fit(s_train_poly, power_train)
# Create predicted model
pwr_train_predicted = m1.predict(s_train_poly)
###Output
_____no_output_____
###Markdown
Check model prediction
###Code
for i in range (4):
index = np.random.choice(speed_train.shape[0], 1, replace=False)
s = speed_train[index]
p = power_train[index]
print(f"For wind speed {s} actual power is: {p}")
print(f"For wind speed {s} predicted power is: {m1.predict(poly.fit_transform(s))}\n")
###Output
For wind speed [[10.36]] actual power is: [[28.181]]
For wind speed [[10.36]] predicted power is: [[22.62187763]]
For wind speed [[12.262]] actual power is: [[46.136]]
For wind speed [[12.262]] predicted power is: [[44.57934953]]
For wind speed [[6.682]] actual power is: [[10.044]]
For wind speed [[6.682]] predicted power is: [[5.79936887]]
For wind speed [[7.407]] actual power is: [[3.122]]
For wind speed [[7.407]] predicted power is: [[6.63616654]]
###Markdown
Visualize data We can now plot the prediction on both train and test data.
###Code
# Define chart size
plt.figure(figsize=(16,8))
# Add grid lines
plt.grid(True)
# Add title to the chart
plt.title("Polynomial Regression Preditions - Training Data", fontweight="bold", fontsize=20)
# # Add x lable to the chart
plt.xlabel('Speed', size=16)
# Add y lable to the chart
plt.ylabel('Power', size=16)
# Plot and show data
plt.plot(speed_train, power_train, '.g', label='Training data')
plt.plot(speed_train, pwr_train_predicted, 'r.', label='Model prediction')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
Prepare and plot the model on the test data.
###Code
s_test_poly = poly.fit_transform(speed_test)
# Use linear regression as model on the power train values
m2 = LinearRegression()
m2.fit(s_test_poly, power_test)
# Create predicted model
pwr_test_predicted = m2.predict(s_test_poly)
# Define chart size
plt.figure(figsize=(16,8))
# Add grid lines
plt.grid(True)
# Add title to the chart
plt.title("Polynomial Regression Preditions - Testing Data", fontweight="bold", fontsize=20)
# # Add x lable to the chart
plt.xlabel('Speed', size=16)
# Add y lable to the chart
plt.ylabel('Power', size=16)
# Plot and show data
plt.plot(speed_test, power_test, '.g', label='Testing data')
plt.plot(speed_test, pwr_test_predicted, 'r.', label='Model prediction')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
The above graphs show that the prediction model is performing quite well. Error evaluation To assess the accuracy of this regression model, RMSE and R-squared will be calculated.
###Code
# Calculate and display RMSE and R2
print("\nTesting Data")
print("RMSE: %.2f"% np.sqrt(mean_squared_error(power_test, pwr_test_predicted)))
print("R-squared: %.2f"% r2_score(power_test, pwr_test_predicted))
###Output
Testing Data
RMSE: 3.72
R-squared: 0.99
###Markdown
R-squared is a statistical measure of how close the data are to the fitted regression line. It is also known as the coefficient of determination, or the coefficient of multiple determination for multiple regression. R-squared is always between 0 and 100%: * 0% indicates that the model explains none of the variability of the response data around its mean. * 100% indicates that the model explains all the variability of the response data around its mean. In general (simplifying), the higher the R-squared, the better the model fits the data.[11] Root Mean Squared Error (RMSE) can range between 0 and infinity. Lower values are better. From the above, we can say that our model performs quite well. Saving model
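For reference, both metrics can also be computed directly from their definitions; the sketch below uses made-up actual and predicted values purely for illustration and mirrors what `mean_squared_error` and `r2_score` return.
###Code
# RMSE and R-squared from their definitions (toy values, for illustration only)
import numpy as np

y_true = np.array([10.0, 20.0, 30.0, 40.0])
y_pred = np.array([12.0, 18.0, 33.0, 39.0])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"RMSE: {rmse:.2f}, R-squared: {r2:.2f}")
###Output
_____no_output_____
###Markdown
Having validated the model, we can now save it to disk so that the web service can reuse it.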
###Code
#https://machinelearningmastery.com/save-load-machine-learning-models-python-scikit-learn/
# save the model to disk
filename = 'poly.sav'
joblib.dump(m1, filename)
# load the model from disk
m_loaded = joblib.load(filename)
# Test if model works
windspeed = 12
wsF, poF = 25, 120
ws = np.array([windspeed])
result = m_loaded.predict(poly.fit_transform([ws]))
print(result)
###Output
[[41.24920791]]
###Markdown
[**<<<**](table) Regression-based Neural Networks A neural network is a computational system that creates predictions based on existing data. A neural network consists of: * Input layers: layers that take inputs based on existing data * Hidden layers: layers that use backpropagation to optimise the weights of the input variables in order to improve the predictive power of the model * Output layers: output of predictions based on the data from the input and hidden layers (Image - towardsdatascience, 2020) One of the most important considerations when training a neural network is choosing the number of neurons to include in the input and hidden layers.[10] We will use the TensorFlow Keras module for this job, which was already imported at the beginning in the libraries section.
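As a quick sanity check on the architecture built below (one input neuron, sigmoid hidden layers of 10 and 20 neurons, and one linear output), the number of trainable parameters can be counted by hand; this is only back-of-the-envelope arithmetic and should agree with the `summary()` call made later.
###Code
# Back-of-the-envelope parameter count for a 1 -> 10 -> 20 -> 1 dense network
layer_sizes = [1, 10, 20, 1]           # input, two hidden layers, output
params = sum(n_in * n_out + n_out      # weights plus biases for each Dense layer
             for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))
print(params)                          # 20 + 220 + 21 = 261 trainable parameters
###Output
_____no_output_____
###Markdown
We now build and train this network on the cleaned data.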
###Code
# Reuse reshaped data prepared for previous model
speed = df_clean.iloc[:, 0].values.reshape(-1,1)
power = df_clean.iloc[:, 1].values.reshape(-1,1)
# Split data to train and test with test_size=0.3
speed_train, speed_test, power_train, power_test = train_test_split(speed, power, test_size=0.3, random_state=1)
#https://github.com/ianmcloughlin/jupyter-teaching-notebooks/blob/master/keras-neurons.ipynb
# Create a new neural network.
m3 = kr.models.Sequential()
# Add multiple layers
m3.add(kr.layers.Dense(10, input_shape=(1,), activation="sigmoid", kernel_initializer="glorot_uniform", bias_initializer="glorot_uniform"))
m3.add(kr.layers.Dense(20, input_shape=(1,), activation="sigmoid", kernel_initializer="glorot_uniform", bias_initializer="glorot_uniform"))
m3.add(kr.layers.Dense(1, activation='linear', kernel_initializer="glorot_uniform", bias_initializer="glorot_uniform"))
# Compile the model.
m3.compile(loss="mean_squared_error", optimizer="adam")
# fit the keras model on the dataset
m3.fit(speed_train, power_train, epochs=500, batch_size=10)
###Output
Train on 315 samples
Epoch 1/500
315/315 [==============================] - 0s 2ms/sample - loss: 4191.0016
Epoch 2/500
315/315 [==============================] - 0s 251us/sample - loss: 4129.1378
Epoch 3/500
315/315 [==============================] - 0s 244us/sample - loss: 4055.7183
Epoch 4/500
315/315 [==============================] - 0s 286us/sample - loss: 3970.4082
Epoch 5/500
315/315 [==============================] - 0s 270us/sample - loss: 3880.4383
Epoch 6/500
315/315 [==============================] - 0s 321us/sample - loss: 3801.8924
Epoch 7/500
315/315 [==============================] - 0s 251us/sample - loss: 3739.3552
Epoch 8/500
315/315 [==============================] - 0s 238us/sample - loss: 3685.1320
Epoch 9/500
315/315 [==============================] - 0s 235us/sample - loss: 3637.5032
Epoch 10/500
315/315 [==============================] - 0s 244us/sample - loss: 3593.1366
Epoch 11/500
315/315 [==============================] - 0s 238us/sample - loss: 3552.3498
Epoch 12/500
315/315 [==============================] - 0s 229us/sample - loss: 3512.6551
Epoch 13/500
315/315 [==============================] - 0s 270us/sample - loss: 3475.3597
Epoch 14/500
315/315 [==============================] - 0s 251us/sample - loss: 3437.9593
Epoch 15/500
315/315 [==============================] - 0s 283us/sample - loss: 3402.0879
Epoch 16/500
315/315 [==============================] - 0s 267us/sample - loss: 3365.0932
Epoch 17/500
315/315 [==============================] - 0s 308us/sample - loss: 3326.5138
Epoch 18/500
315/315 [==============================] - 0s 276us/sample - loss: 3287.7795
Epoch 19/500
315/315 [==============================] - 0s 276us/sample - loss: 3249.5044
Epoch 20/500
315/315 [==============================] - 0s 276us/sample - loss: 3213.0954
Epoch 21/500
315/315 [==============================] - 0s 273us/sample - loss: 3177.2792
Epoch 22/500
315/315 [==============================] - 0s 279us/sample - loss: 3142.2717
Epoch 23/500
315/315 [==============================] - 0s 371us/sample - loss: 3105.6476
Epoch 24/500
315/315 [==============================] - 0s 238us/sample - loss: 3068.6253
Epoch 25/500
315/315 [==============================] - 0s 232us/sample - loss: 3031.5000
Epoch 26/500
315/315 [==============================] - 0s 283us/sample - loss: 2996.1900
Epoch 27/500
315/315 [==============================] - 0s 263us/sample - loss: 2962.1760
Epoch 28/500
315/315 [==============================] - 0s 286us/sample - loss: 2928.9564
Epoch 29/500
315/315 [==============================] - 0s 260us/sample - loss: 2896.6314
Epoch 30/500
315/315 [==============================] - 0s 270us/sample - loss: 2863.5883
Epoch 31/500
315/315 [==============================] - 0s 295us/sample - loss: 2830.8670
Epoch 32/500
315/315 [==============================] - 0s 365us/sample - loss: 2797.0428
Epoch 33/500
315/315 [==============================] - 0s 289us/sample - loss: 2762.4802
Epoch 34/500
315/315 [==============================] - 0s 260us/sample - loss: 2729.5038
Epoch 35/500
315/315 [==============================] - 0s 352us/sample - loss: 2698.0065
Epoch 36/500
315/315 [==============================] - 0s 235us/sample - loss: 2667.9297
Epoch 37/500
315/315 [==============================] - 0s 254us/sample - loss: 2638.6637
Epoch 38/500
315/315 [==============================] - 0s 298us/sample - loss: 2610.4866
Epoch 39/500
315/315 [==============================] - 0s 263us/sample - loss: 2582.9310
Epoch 40/500
315/315 [==============================] - 0s 270us/sample - loss: 2556.0283
Epoch 41/500
315/315 [==============================] - 0s 244us/sample - loss: 2529.4042
Epoch 42/500
315/315 [==============================] - 0s 238us/sample - loss: 2503.5001
Epoch 43/500
315/315 [==============================] - 0s 270us/sample - loss: 2477.4704
Epoch 44/500
315/315 [==============================] - 0s 235us/sample - loss: 2451.5879
Epoch 45/500
315/315 [==============================] - 0s 229us/sample - loss: 2425.8575
Epoch 46/500
315/315 [==============================] - 0s 229us/sample - loss: 2399.5422
Epoch 47/500
315/315 [==============================] - 0s 279us/sample - loss: 2372.6617
Epoch 48/500
315/315 [==============================] - 0s 263us/sample - loss: 2345.4538
Epoch 49/500
315/315 [==============================] - 0s 260us/sample - loss: 2318.2174
Epoch 50/500
315/315 [==============================] - 0s 257us/sample - loss: 2290.5295
Epoch 51/500
315/315 [==============================] - 0s 244us/sample - loss: 2262.1084
Epoch 52/500
315/315 [==============================] - 0s 251us/sample - loss: 2234.9144
Epoch 53/500
315/315 [==============================] - 0s 283us/sample - loss: 2206.5062
Epoch 54/500
315/315 [==============================] - 0s 197us/sample - loss: 2178.5417
Epoch 55/500
315/315 [==============================] - 0s 429us/sample - loss: 2150.8853
Epoch 56/500
315/315 [==============================] - 0s 225us/sample - loss: 2123.0587
Epoch 57/500
315/315 [==============================] - 0s 273us/sample - loss: 2095.6147
Epoch 58/500
315/315 [==============================] - 0s 254us/sample - loss: 2068.1982
Epoch 59/500
315/315 [==============================] - 0s 302us/sample - loss: 2040.8575
Epoch 60/500
315/315 [==============================] - 0s 248us/sample - loss: 2013.9388
Epoch 61/500
315/315 [==============================] - 0s 232us/sample - loss: 1987.1617
Epoch 62/500
315/315 [==============================] - 0s 222us/sample - loss: 1960.0411
Epoch 63/500
315/315 [==============================] - 0s 235us/sample - loss: 1933.9963
Epoch 64/500
315/315 [==============================] - 0s 324us/sample - loss: 1908.0328
Epoch 65/500
315/315 [==============================] - 0s 279us/sample - loss: 1881.5053
Epoch 66/500
315/315 [==============================] - 0s 248us/sample - loss: 1856.4029
Epoch 67/500
315/315 [==============================] - 0s 289us/sample - loss: 1830.4130
Epoch 68/500
315/315 [==============================] - 0s 260us/sample - loss: 1805.1127
Epoch 69/500
315/315 [==============================] - 0s 254us/sample - loss: 1780.4418
Epoch 70/500
315/315 [==============================] - 0s 238us/sample - loss: 1755.1846
Epoch 71/500
315/315 [==============================] - 0s 254us/sample - loss: 1730.3025
Epoch 72/500
315/315 [==============================] - 0s 238us/sample - loss: 1705.2875
Epoch 73/500
315/315 [==============================] - 0s 254us/sample - loss: 1679.6592
Epoch 74/500
315/315 [==============================] - 0s 241us/sample - loss: 1650.8857
Epoch 75/500
315/315 [==============================] - 0s 267us/sample - loss: 1618.4682
Epoch 76/500
315/315 [==============================] - 0s 263us/sample - loss: 1585.6894
Epoch 77/500
315/315 [==============================] - 0s 238us/sample - loss: 1551.8124
Epoch 78/500
315/315 [==============================] - 0s 267us/sample - loss: 1518.8213
Epoch 79/500
315/315 [==============================] - 0s 273us/sample - loss: 1487.2284
Epoch 80/500
315/315 [==============================] - 0s 324us/sample - loss: 1456.5210
Epoch 81/500
315/315 [==============================] - 0s 276us/sample - loss: 1428.0940
Epoch 82/500
315/315 [==============================] - 0s 241us/sample - loss: 1400.2467
Epoch 83/500
315/315 [==============================] - 0s 257us/sample - loss: 1373.0938
Epoch 84/500
315/315 [==============================] - 0s 244us/sample - loss: 1347.4642
Epoch 85/500
315/315 [==============================] - 0s 279us/sample - loss: 1322.2845
Epoch 86/500
315/315 [==============================] - 0s 244us/sample - loss: 1297.0803
Epoch 87/500
315/315 [==============================] - 0s 289us/sample - loss: 1273.3667
Epoch 88/500
315/315 [==============================] - 0s 257us/sample - loss: 1249.9432
Epoch 89/500
315/315 [==============================] - 0s 251us/sample - loss: 1226.6541
Epoch 90/500
315/315 [==============================] - 0s 254us/sample - loss: 1203.9604
Epoch 91/500
315/315 [==============================] - 0s 270us/sample - loss: 1182.2519
###Markdown
Check model prediction
###Code
for i in range (4):
index = np.random.choice(speed_train.shape[0], 1, replace=False)
s = speed_train[index]
p = power_train[index]
print(f"For wind speed {s} actual power is: {p}")
print(f"For wind speed {s} predicted power is: {m3.predict(s)}\n")
###Output
For wind speed [[6.982]] actual power is: [[4.187]]
For wind speed [[6.982]] predicted power is: [[6.836359]]
For wind speed [[9.885]] actual power is: [[27.136]]
For wind speed [[9.885]] predicted power is: [[17.98528]]
For wind speed [[11.737]] actual power is: [[38.552]]
For wind speed [[11.737]] predicted power is: [[37.892933]]
For wind speed [[7.733]] actual power is: [[4.443]]
For wind speed [[7.733]] predicted power is: [[8.299347]]
###Markdown
Visualize data
###Code
# Prediction on the training data visualization
# Define chart size
plt.figure(figsize=(16,8))
# Add grid lines
plt.grid(True)
# Add title to the chart
plt.title("Neuron Networks Preditions - Training Data", fontweight="bold", fontsize=20)
# # Add x lable to the chart
plt.xlabel('Speed', size=16)
# Add y lable to the chart
plt.ylabel('Power', size=16)
# Plot and show data
plt.plot(speed_train, power_train, '.g', label='Trianing data')
plt.plot(speed_train, m3.predict(speed_train), 'r.', label='Model prediction')
plt.legend(loc='best')
plt.show()
# Prediction on the test data visualization
# Define chart size
plt.figure(figsize=(16,8))
# Add grid lines
plt.grid(True)
# Add title to the chart
plt.title("Neuron Networks Preditions - Testing Data", fontweight="bold", fontsize=20)
# # Add x lable to the chart
plt.xlabel('Speed', size=16)
# Add y lable to the chart
plt.ylabel('Power', size=16)
# Plot and show data
plt.plot(speed_test, power_test, '.g', label='Testing data')
plt.plot(speed_test, m3.predict(speed_test), 'r.', label='Model prediction')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
Error evaluation
###Code
# Summary of the model
m3.summary()
# Evaluate model
m3.evaluate(speed_test, power_test)
# Calculate and display RMSE and R2
print("\nTesting Data")
print("RMSE: %.2f"% np.sqrt(mean_squared_error(power_test, m3.predict(speed_test))))
print("R-squared: %.2f"% r2_score(power_test, m3.predict(speed_test)))
###Output
Testing Data
RMSE: 3.87
R-squared: 0.99
###Markdown
Saving model
###Code
#https://machinelearningmastery.com/save-load-keras-deep-learning-models/
# serialize model to JSON
model_json = m3.to_json()
with open("neuron.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
m3.save_weights("neuron.h5")
# load json and create model
json_file = open('neuron.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
m4 = kr.models.model_from_json(loaded_model_json)
# load weights into new model
m4.load_weights("neuron.h5")
print("Loaded model from disk")
# evaluate loaded model on test data
m4.compile(loss="mean_squared_error", optimizer="adam")
m4.evaluate(speed_test, power_test)
for i in range (4):
index = np.random.choice(speed_train.shape[0], 1, replace=False)
s = speed_train[index]
p = power_train[index]
print(f"For wind speed {s} actual power is: {p}")
print(f"For wind speed {s} predicted power is: {m4.predict(s)}\n")
###Output
For wind speed [[13.739]] actual power is: [[63.265]]
For wind speed [[13.739]] predicted power is: [[64.02414]]
For wind speed [[23.023]] actual power is: [[101.308]]
For wind speed [[23.023]] predicted power is: [[99.56946]]
For wind speed [[19.194]] actual power is: [[100.428]]
For wind speed [[19.194]] predicted power is: [[97.65896]]
For wind speed [[14.765]] actual power is: [[82.147]]
For wind speed [[14.765]] predicted power is: [[74.79427]]
###Markdown
Project Predicting Wind Turbine Power from Wind Speed The goal of this project is to analyse a dataset of wind speeds and the wind turbine power that was produced, to determine whether an accurate prediction of power can be made from the wind speed.
###Code
# Import libraries
# Numpy for arrays and mathematical functions
import numpy as np
# Pandas for loading the dataset to a dataframes for easy manipulation
import pandas as pd
# Matplotlib for ploting and visualising the data
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Pandas makes it easy to read in a csv dataset and quickly work with it. The dataset was originally available at the link below. A URL can be passed to the same function instead of a file name, but the file was also saved to this repository in case there are access issues in future. https://raw.githubusercontent.com/ianmcloughlin/2020A-machstat-project/master/dataset/powerproduction.csv
###Code
dataset = pd.read_csv("powerproduction.csv")
###Output
_____no_output_____
###Markdown
Initial View of the Dataset There are 500 records in this dataset with 2 variables, speed and power. Speed varies from 0 to 25 and power from 0 to 113.56.
###Code
dataset.describe()
###Output
_____no_output_____
###Markdown
The dataset is quickly graphed with Matplotlib to see what it looks like. There is a clear curve to it, so a straight line may not be the best fit. When the wind speed is low the power stays low, and it takes some speed before the power really starts rising. At mid speeds there does seem to be a straight-line region where an increase in speed leads to a larger increase in power. Towards the higher speeds the data points flatten out along the x axis again, as higher speeds give less of an increase in power than they did around the middle. At both the lowest and highest speed values the spread of the corresponding power values is quite large, and while in the centre they are definitely not tight together, they are a lot closer. There are also quite a few speed values where the power is zero; these look like outliers that do not agree with the rest of the points, particularly at higher speeds. These outliers may need to be cleaned up to provide better values for regression, so that the fitted lines represent the relationship between power and speed more closely.
###Code
# Increase the size of the graph
plt.figure(figsize=(16, 12))
# Can use the column names in panda to quickly graph each value against each other in matplotlib
# A full stop for a dot symbol for the points
plt.plot(dataset["speed"], dataset["power"], ".")
# Add labels to the x,y axis and increase size
plt.xlabel('Speed', size=18)
plt.ylabel('Power', size=18)
# Increase x and y tick font size
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
# Large title for the plot
plt.title("Turbine Power", size=22)
###Output
_____no_output_____
###Markdown
Cleaning up the Data One of the things that's noticeable about the data is that there are multiple outliers where the power is 0. It could be assumed that these data points occurred not because of the relationship between wind speed and turbine power but because of outside variables. I will remove these data points as they will drastically affect the substantive results of regression analysis [2]. Their large distance from the rest of the data points when the wind speed is above 16 will have a large effect when squared, so a line fitted to the data using regression could be dramatically altered by them.
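A tiny illustration (with made-up numbers, not the turbine data) of how strongly a single zero value can pull a least-squares fit:
###Code
# Small illustration of how one outlier pulls a least-squares line (toy data only)
import numpy as np

x = np.arange(10, dtype=float)
y = 2 * x + 1                      # points lying exactly on y = 2x + 1
y_outlier = y.copy()
y_outlier[-1] = 0                  # a single zero value at the highest x, like the zero-power rows

print(np.polyfit(x, y, 1))         # approximately [2.0, 1.0]
print(np.polyfit(x, y_outlier, 1)) # slope and intercept shift noticeably
###Output
_____no_output_____
###Markdown
First, the rows of the real dataset where the power is 0 are identified.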
###Code
dataset[dataset["power"] == 0]
###Output
_____no_output_____
###Markdown
There are 49 of these records, which I will remove, reducing my dataset to 451 points of data.
###Code
len(dataset[dataset["power"] == 0])
###Output
_____no_output_____
###Markdown
Viewing the data when plotted, it can be seen that a large group of points with zero power occurs at the highest speeds. It could be assumed that there is a cutoff, in that the turbine may be switched off at high speeds to protect it from damage [3]. This is an assumption though, as no data on this turbine was provided and all there is to go on is the dataset itself. When all power values for speeds above 24 are viewed, it can be seen that at a wind speed greater than 24.449 all provided power values are 0. This agrees with that assumption and may be the reason why these data points sit apart from the rest. If the reason is that the turbine is shut off for its safety at wind speeds at and above 24.4999, then the relationship between wind speed and power output at higher speeds may not be needed by the turbine company, as these are not normal operating conditions. Again though, this is an assumption, as no data on the operating conditions for the turbine was provided.
###Code
dataset[dataset["speed"] > 24]
###Output
_____no_output_____
###Markdown
The reason all wind speed values at and below 0.275 have a power value of 0 could be that at very low speeds the turbine does not produce any power, with a brake applied so that it does not turn [4]. This again is an assumption: I am assuming these low values are outside the operating speeds where power is expected to be generated, so the relationship between wind speed and turbine power matters less there.
###Code
dataset[dataset["speed"] < 0.5]
###Output
_____no_output_____
###Markdown
A dataframe without power values of 0 will be used instead for trying to find the relationship between wind speed and turbine power [5].
###Code
# Find the index of all rows where power is 0
index_power_zero = dataset[dataset["power"] == 0].index
# Drop rows with those index values
dataset_without_zero_power = dataset.drop(index_power_zero)
# Reduced dataset
dataset_without_zero_power
# Creating short easy variables for speed and power columns for reuse
speed = dataset_without_zero_power["speed"]
power = dataset_without_zero_power["power"]
# Increase the size of the graph
plt.figure(figsize=(16, 12))
# Can use the column names in panda to quickly graph each value against each other in matplotlib
# A full stop for a dot symbol for the points
plt.plot(speed, power, ".")
# Add labels to the x,y axis and increase size
plt.xlabel('Speed', size=18)
plt.ylabel('Power', size=18)
# Increase x and y tick font size
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
# Large title for the plot
plt.title("Turbine Power greater than 0", size=22)
###Output
_____no_output_____
###Markdown
Simple Linear Regression The SciPy library has a function linregress that can be used to fit a straight line to the dataset using least-squares regression [6]. This finds the straight line for which the total of the squared errors (the distances from the points to the line) is at a minimum [7]. With the outliers removed, better results should be obtained, as least squares is sensitive to these.
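As a sketch of what least squares is doing, the slope and intercept can be computed directly from the covariance and variance of the data; the toy values below are only illustrative, and the real fit is done with `scipy.stats.linregress` in the next cell.
###Code
# Closed-form least-squares fit on toy data (for illustration; the real fit uses scipy below)
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()
print(slope, intercept)   # close to the line y = 2x that roughly generated these points
###Output
_____no_output_____
###Markdown
The same least-squares approach, along with its R-squared value, is now applied to the cleaned speed and power data using `linregress`.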
###Code
# Import the scipy library for statistics and regression
from scipy import stats
# Use lingress to calculate a linear least-squares regression on the dataset
res = stats.linregress(speed, power)
# Correlation coefficient
print(f"R-squared : {res.rvalue**2:.6f}")
# Increase the size of the graph
plt.figure(figsize=(16, 12))
plt.plot(speed, power, '.', label='Dataset')
plt.plot(speed, res.intercept + res.slope*speed, 'r', label='Fitted line')
# Add labels to the x,y axis and increase size
plt.xlabel('Speed', size=18)
plt.ylabel('Power', size=18)
# Increase x and y tick font size
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
# Large title for the plot
plt.title("Turbine Power and Linear Regression ", size=22)
plt.legend()
###Output
_____no_output_____
###Markdown
Memphis
###Code
SHEETs = [# Memphis all food
("https://docs.google.com/spreadsheet/ccc?key=1dn6IF5Ar_Hf_l0buhs9yIo6g1gqnGZqB6WLmQNotPcY","Memphis"),
]
import ndb
import pandas as pd
import warnings
DFs = []
#for csv in CSVs: # Uncomment to use a list of csv files as inputs
# DFs.append(pd.read_csv(csv,dtype=str))
try:
if len(SHEETs):
for ID, RANGE_NAME in SHEETs:
try:
if "docs.google.com" in ID:
sheet = "%s&output=csv" % ID
else:
sheet = "https://docs.google.com/spreadsheet/ccc?key=%s&output=csv" % ID
DFs.append(pd.read_csv(sheet))
except ParserError:
warnings.warn("Can't read sheet at https://docs.google.com/spreadsheets/d/%s.\nCheck Sharing settings, so that anyone with link can view?" % ID)
except NameError: # SHEETS not defined?
pass
df = pd.concat(DFs,ignore_index=True,sort=False)
# Some columns which ought to be numeric are actually str; convert them
df['Price'] = df['Price'].astype(float)
df['Quantity'] = df['Quantity'].astype(float)
df["Units"] = df["Units"].astype(str)
df["NDB"][df['Food'].str.contains("Wheat Pasta")] = str(45051131)
df
D = {}
for food in df.Food.tolist():
try:
NDB = df.loc[df.Food==food,:].NDB
D[food] = ndb.ndb_report(apikey[user],NDB).Quantity
except AttributeError:
warnings.warn("Couldn't find NDB Code %s for food %s." % (food,NDB))
D = pd.DataFrame(D,dtype=float)
D
###Output
_____no_output_____
###Markdown
Units & Prices Now, the prices we observe can be for lots of different quantities andunits. The NDB database basically wants everything in either hundredsof grams (hectograms) or hundreds of milliliters (deciliters). Sometimes this conversion is simple; if the price we observe is forsomething that weighs two kilograms, that’s just 20 hectograms.Different systems of weights and volumes are also easy; a five poundbag of flour is approximately 22.68 hectograms. Othertimes things are more complicated. If you observe the price of adozen donuts, that needs to be converted to hectograms, for example. A function `ndb_units` in the [ndb](ndb.py) module accomplishes this conversionfor many different units, using the `python` [pint module](https://pint.readthedocs.io/en/latest/). A file[./Data/food\_units.txt](Data/food_units.txt) can be edited to deal with odd cases such asdonuts, using a format described in the `pint` [documentation](https://pint.readthedocs.io/en/latest/defining.html). Here’s an example of the usage of `ndb.ndb_units`: Now, use the `ndb_units` function to convert all foods to eitherdeciliters or hectograms, to match NDB database:
###Code
# Convert food quantities to NDB units
df['NDB Quantity'] = df[['Quantity','Units']].T.apply(lambda x : ndb.ndb_units(x['Quantity'],x['Units']))
# Now may want to filter df by time or place--need to get a unique set of food names.
df['NDB Price'] = df['Price']/df['NDB Quantity']
df.dropna(how='any') # Drop food with any missing data
# To use minimum price observed
Prices = df.groupby('Food')['NDB Price'].min()
Prices.head()
###Output
_____no_output_____
###Markdown
Dietary Requirements We’ve figured out some foods we can buy, the nutritional content ofthose foods, and the price of the foods. Now we need to saysomething about nutritional requirements. Our data for this is basedon US government recommendations available at[https://health.gov/dietaryguidelines/2015/guidelines/appendix-7/](https://health.gov/dietaryguidelines/2015/guidelines/appendix-7/).Note that we’ve tweaked the nutrient labels to match those in the NDBdata.We’ve broken down the requirements into three different tables. Thefirst is *minimum* quantities that we need to satisfy. For example,this table tells us that a 20 year-old female needs at least 46 gramsof protein per day.| Nutrition|Source|C 1-3|F 4-8|M 4-8|F 9-13|M 9-13|F 14-18|M 14-18|F 19-30|M 19-30|F 31-50|M 31-50|F 51+|M 51+||---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|| Energy|---|1000|1200|1400|1600|1800|1800|2200|2000|2400|1800|2200|1600|2000|| Protein|RDA|13|19|19|34|34|46|52|46|56|46|56|46|56|| Fiber, total dietary|---|14|16.8|19.6|22.4|25.2|25.2|30.8|28|33.6|25.2|30.8|22.4|28|| Folate, DFE|RDA|150|200|200|300|300|400|400|400|400|400|400|400|400|| Calcium, Ca|RDA|700|1000|1000|1300|1300|1300|1300|1000|1000|1000|1000|1200|1000|| Carbohydrate, by difference|RDA|130|130|130|130|130|130|130|130|130|130|130|130|130|| Iron, Fe|RDA|7|10|10|8|8|15|11|18|8|18|8|8|8|| Magnesium, Mg|RDA|80|130|130|240|240|360|410|310|400|320|420|320|420|| Niacin|RDA|6|8|8|12|12|14|16|14|16|14|16|14|16|| Phosphorus, P|RDA|460|500|500|1250|1250|1250|1250|700|700|700|700|700|700|| Potassium, K|AI|3000|3800|3800|4500|4500|4700|4700|4700|4700|4700|4700|4700|4700|| Riboflavin|RDA|0.5|0.6|0.6|0.9|0.9|1|1.3|1.1|1.3|1.1|1.3|1.1|1.3|| Thiamin|RDA|0.5|0.6|0.6|0.9|0.9|1|1.2|1.1|1.2|1.1|1.2|1.1|1.2|| Vitamin A, RAE|RDA|300|400|400|600|600|700|900|700|900|700|900|700|900|| Vitamin B-12|RDA|0.9|1.2|1.2|1.8|1.8|2.4|2.4|2.4|2.4|2.4|2.4|2.4|2.4|| Vitamin B-6|RDA|0.5|0.6|0.6|1|1|1.2|1.3|1.3|1.3|1.3|1.3|1.5|1.7|| Vitamin C, total ascorbic acid|RDA|15|25|25|45|45|65|75|75|90|75|90|75|90|| Vitamin E (alpha-tocopherol)|RDA|6|7|7|11|11|15|15|15|15|15|15|15|15|| Vitamin K (phylloquinone)|AI|30|55|55|60|60|75|75|90|120|90|120|90|120|| Zinc, Zn|RDA|3|5|5|8|8|9|11|8|11|8|11|8|11|| Vitamin D|RDA|600|600|600|600|600|600|600|600|600|600|600|600|600|This next table specifies *maximum* quantities. Our 20 year-oldfemale shouldn’t have more than 2300 milligrams of sodium per day.Note that we can also add constraints here on nutrients that alsoappear above. For example, here we’ve added upper limits on Energy,as we might do if we were trying to lose weight.| Nutrition|Source|C 1-3|F 4-8|M 4-8|F 9-13|M 9-13|F 14-18|M 14-18|F 19-30|M 19-30|F 31-50|M 31-50|F 51+|M 51+||---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|| Sodium, Na|UL|1500|1900|1900|2200|2200|2300|2300|2300|2300|2300|2300|2300|2300|| Energy|---|1500|1600|1800|2000|2200|2200|2500|2400|2600|2200|2400|1800|2400|Finally, we have some odd constraints given in this final table.Mostly the items given don’t correspond to items in the NDB data(e.g., copper), but in some cases it may be possible to match thingsup. 
We can’t use these without some additional work.| Nutrition|Source|C 1-3|F 4-8|M 4-8|F 9-13|M 9-13|F 14-18|M 14-18|F 19-30|M 19-30|F 31-50|M 31-50|F 51+|M 51+||---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|| Carbohydrate, % kcal|AMDR|45-65|45-65|45-65|45-65|45-65|45-65|45-65|45-65|45-65|45-65|45-65|45-65|45-65|| Added sugars, % kcal|DGA|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|| Total fat, % kcal|AMDR|30-40|25-35|25-35|25-35|25-35|25-35|25-35|20-35|20-35|20-35|20-35|20-35|20-35|| Saturated fat, % kcal|DGA|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|<10%|| Linoleic acid, g|AI|7|10|10|10|12|11|16|12|17|12|17|11|14|| Linolenic acid, g|AI|0.7|0.9|0.9|1|1.2|1.1|1.6|1.1|1.6|1.1|1.6|1.1|1.6|| Copper, mcg|RDA|340|440|440|700|700|890|890|900|900|900|900|900|900|| Manganese, mg|AI|1.2|1.5|1.5|1.6|1.9|1.6|2.2|1.8|2.3|1.8|2.3|1.8|2.3|| Selenium, mcg|RDA|20|30|30|40|40|55|55|55|55|55|55|55|55|| Choline, mg|AI|200|250|250|375|375|400|550|425|550|425|550|425|550|- **Notes on Source:** In each of these tables, RDA = Recommended Dietary Allowance, AI = Adequate Intake, UL = Tolerable Upper Intake Level, AMDR = Acceptable Macronutrient Distribution Range, DGA = 2015-2020 Dietary Guidelines recommended limit; 14 g fiber per 1,000 kcal = basis for AI for fiber.
###Code
# Choose sex/age group:
group = "M 19-30"
# Define *minimums*
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
# Define *maximums*
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
###Output
_____no_output_____
###Markdown
Putting it together Here we take the different pieces of the puzzle we’ve developed and put them together in the form of a linear program we can solve.
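Since `scipy.optimize.linprog` only handles constraints of the form $Ax \le b$, the minimum nutrient requirements are flipped in sign before being stacked with the maximums. A toy problem with made-up numbers shows the pattern used below.
###Code
# Toy linear program illustrating the sign flip for minimum constraints (numbers are illustrative)
# minimise cost = 2*x1 + 3*x2 subject to x1 + 2*x2 >= 4, with x1, x2 >= 0
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])
A_ub = np.array([[-1.0, -2.0]])   # -(x1 + 2*x2) <= -4 is the same as x1 + 2*x2 >= 4
b_ub = np.array([-4.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='interior-point')
print(res.x, res.fun)             # cheapest mix satisfying the minimum requirement
###Output
_____no_output_____
###Markdown
The same construction is now applied to the full price vector and nutrient matrix.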
###Code
from scipy.optimize import linprog as lp
import numpy as np
tol = 1e-6 # Numbers in solution smaller than this (in absolute value) treated as zeros
c = Prices.apply(lambda x:x.magnitude).dropna()
# Compile list that we have both prices and nutritional info for; drop if either missing
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
# Drop nutritional information for foods we don't know the price of,
# and replace missing nutrients with zeros.
Aall = D[c.index].fillna(0)
# Drop rows of A that we don't have constraints for.
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
# Minimum requirements involve multiplying constraint by -1 to make <=.
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax]) # Note sign change for min constraints
# Now solve problem!
result = lp(c, A, b, method='interior-point')
# Put back into nice series
diet = pd.Series(result.x,index=c.index)
print("Cost of diet for %s is $%4.2f per day." % (group,result.fun))
print("\nYou'll be eating (in 100s of grams or milliliters):")
print(diet[diet >= tol]) # Drop items with quantities less than precision of calculation.
tab = pd.DataFrame({"Outcome":np.abs(A).dot(diet),"Recommendation":np.abs(b)})
print("\nWith the following nutritional outcomes of interest:")
print(tab)
print("\nConstraining nutrients are:")
excess = tab.diff(axis=1).iloc[:,1]
print(excess.loc[np.abs(excess) < tol].index.tolist())
lowest_price_diet = []
print("The population of Memphis is " + str(memphis_pop))
for age in age_list:
group = age
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
tol = 1e-6
c = Prices.apply(lambda x:x.magnitude).dropna()
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
Aall = D[c.index].fillna(0)
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax])
result = lp(c, A, b, method='interior-point')
diet = pd.Series(result.x,index=c.index)
lowest_price_diet.append(result.fun)
d = {'Age/Sex Group': age_list, 'Lowest Price Diet ($ / Day)': lowest_price_diet}
total_dataframe = pd.DataFrame(data=d)
total_dataframe = total_dataframe.set_index("Age/Sex Group")
total_dataframe["Population Percentage"] = age_breakdown
total_dataframe["Population Breakdown"] = total_dataframe["Population Percentage"] / 100 * memphis_pop
total_dataframe["Cost per Bucket ($)"] = total_dataframe["Population Breakdown"] * total_dataframe["Lowest Price Diet ($ / Day)"]
total_cost = np.round(sum(total_dataframe["Cost per Bucket ($)"]))
print("The total minimum cost to feed this population is $" + str(total_cost)\
+ " per day and an average cost of $" + str(np.round(total_cost/memphis_pop, 2)) + " per person")
memphis_df = total_dataframe
memphis_df
###Output
The population of Memphis is 670000
The total minimum cost to feed this population is $1756233.0 per day and an average cost of $2.62 per person
###Markdown
East Oakland
###Code
SHEETs = [# East Oakland all food
("https://docs.google.com/spreadsheet/ccc?key=1IlSfp8sxXsCF_Cut7bq5lucbIGg9A6S2jb2QHjEi-74","Oakland"),
]
import ndb
import pandas as pd
import warnings
DFs = []
#for csv in CSVs: # Uncomment to use a list of csv files as inputs
# DFs.append(pd.read_csv(csv,dtype=str))
try:
if len(SHEETs):
for ID, RANGE_NAME in SHEETs:
try:
if "docs.google.com" in ID:
sheet = "%s&output=csv" % ID
else:
sheet = "https://docs.google.com/spreadsheet/ccc?key=%s&output=csv" % ID
DFs.append(pd.read_csv(sheet))
except ParserError:
warnings.warn("Can't read sheet at https://docs.google.com/spreadsheets/d/%s.\nCheck Sharing settings, so that anyone with link can view?" % ID)
except NameError: # SHEETS not defined?
pass
df = pd.concat(DFs,ignore_index=True,sort=False)
# Some columns which ought to be numeric are actually str; convert them
df['Price'] = df['Price'].astype(float)
df['Quantity'] = df['Quantity'].astype(float)
df["Units"] = df["Units"].astype(str)
df["NDB"][df['Food'].str.contains("Wheat Pasta")] = str(45051131)
df
D = {}
for food in df.Food.tolist():
try:
NDB = df.loc[df.Food==food,:].NDB
D[food] = ndb.ndb_report(apikey[user],NDB).Quantity
except AttributeError:
warnings.warn("Couldn't find NDB Code %s for food %s." % (food,NDB))
D = pd.DataFrame(D,dtype=float)
# Convert food quantities to NDB units
df['NDB Quantity'] = df[['Quantity','Units']].T.apply(lambda x : ndb.ndb_units(x['Quantity'],x['Units']))
# Now may want to filter df by time or place--need to get a unique set of food names.
df['NDB Price'] = df['Price']/df['NDB Quantity']
df.dropna(how='any') # Drop food with any missing data
# To use minimum price observed
Prices = df.groupby('Food')['NDB Price'].min()
# Choose sex/age group:
group = "M 19-30"
# Define *minimums*
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
# Define *maximums*
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
from scipy.optimize import linprog as lp
import numpy as np
tol = 1e-6 # Numbers in solution smaller than this (in absolute value) treated as zeros
c = Prices.apply(lambda x:x.magnitude).dropna()
# Compile list that we have both prices and nutritional info for; drop if either missing
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
# Drop nutritional information for foods we don't know the price of,
# and replace missing nutrients with zeros.
Aall = D[c.index].fillna(0)
# Drop rows of A that we don't have constraints for.
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
# Minimum requirements involve multiplying constraint by -1 to make <=.
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax]) # Note sign change for min constraints
# Now solve problem!
result = lp(c, A, b, method='interior-point')
# Put back into nice series
diet = pd.Series(result.x,index=c.index)
print("Cost of diet for %s is $%4.2f per day." % (group,result.fun))
print("\nYou'll be eating (in 100s of grams or milliliters):")
print(diet[diet >= tol]) # Drop items with quantities less than precision of calculation.
tab = pd.DataFrame({"Outcome":np.abs(A).dot(diet),"Recommendation":np.abs(b)})
print("\nWith the following nutritional outcomes of interest:")
print(tab)
print("\nConstraining nutrients are:")
excess = tab.diff(axis=1).iloc[:,1]
print(excess.loc[np.abs(excess) < tol].index.tolist())
lowest_price_diet = []
print("The population of East Oakland is " + str(oakland_pop))
for age in age_list:
group = age
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
tol = 1e-6
c = Prices.apply(lambda x:x.magnitude).dropna()
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
Aall = D[c.index].fillna(0)
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax])
result = lp(c, A, b, method='interior-point')
diet = pd.Series(result.x,index=c.index)
lowest_price_diet.append(result.fun)
d = {'Age/Sex Group': age_list, 'Lowest Price Diet ($ / Day)': lowest_price_diet}
total_dataframe = pd.DataFrame(data=d)
total_dataframe = total_dataframe.set_index("Age/Sex Group")
total_dataframe["Population Percentage"] = age_breakdown
total_dataframe["Population Breakdown"] = total_dataframe["Population Percentage"] / 100 * oakland_pop
total_dataframe["Cost per Bucket ($)"] = total_dataframe["Population Breakdown"] * total_dataframe["Lowest Price Diet ($ / Day)"]
total_cost = np.round(sum(total_dataframe["Cost per Bucket ($)"]))
print("The total minimum cost to feed this population is $" + str(total_cost)\
+ " per day and an average cost of $" + str(np.round(total_cost/oakland_pop, 2)) + " per person")
oakland_df = total_dataframe
oakland_df
###Output
The population of East Oakland is 87943
The total minimum cost to feed this population is $230064.0 per day and an average cost of $2.62 per person
###Markdown
MARIN
###Code
SHEETs = [# Marin all foods
("https://docs.google.com/spreadsheet/ccc?key=1NjiRJCQ3W72nlajnXtZdW5_p01h2Br6cP5xg-Eio0BQ","Marin"),
]
import ndb
import pandas as pd
import warnings
DFs = []
#for csv in CSVs: # Uncomment to use a list of csv files as inputs
# DFs.append(pd.read_csv(csv,dtype=str))
try:
if len(SHEETs):
for ID, RANGE_NAME in SHEETs:
try:
if "docs.google.com" in ID:
sheet = "%s&output=csv" % ID
else:
sheet = "https://docs.google.com/spreadsheet/ccc?key=%s&output=csv" % ID
DFs.append(pd.read_csv(sheet))
except ParserError:
warnings.warn("Can't read sheet at https://docs.google.com/spreadsheets/d/%s.\nCheck Sharing settings, so that anyone with link can view?" % ID)
except NameError: # SHEETS not defined?
pass
df = pd.concat(DFs,ignore_index=True,sort=False)
# Some columns which ought to be numeric are actually str; convert them
df['Price'] = df['Price'].astype(float)
df['Quantity'] = df['Quantity'].astype(float)
df["Units"] = df["Units"].astype(str)
df["NDB"][df['Food'].str.contains("Wheat Pasta")] = str(45051131)
df
D = {}
for food in df.Food.tolist():
try:
NDB = df.loc[df.Food==food,:].NDB
D[food] = ndb.ndb_report(apikey[user],NDB).Quantity
except AttributeError:
warnings.warn("Couldn't find NDB Code %s for food %s." % (food,NDB))
D = pd.DataFrame(D,dtype=float)
# Convert food quantities to NDB units
df['NDB Quantity'] = df[['Quantity','Units']].T.apply(lambda x : ndb.ndb_units(x['Quantity'],x['Units']))
# Now may want to filter df by time or place--need to get a unique set of food names.
df['NDB Price'] = df['Price']/df['NDB Quantity']
df.dropna(how='any') # Drop food with any missing data
# To use minimum price observed
Prices = df.groupby('Food')['NDB Price'].min()
# Choose sex/age group:
group = "M 19-30"
# Define *minimums*
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
# Define *maximums*
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
from scipy.optimize import linprog as lp
import numpy as np
tol = 1e-6 # Numbers in solution smaller than this (in absolute value) treated as zeros
c = Prices.apply(lambda x:x.magnitude).dropna()
# Compile list that we have both prices and nutritional info for; drop if either missing
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
# Drop nutritional information for foods we don't know the price of,
# and replace missing nutrients with zeros.
Aall = D[c.index].fillna(0)
# Drop rows of A that we don't have constraints for.
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
# Minimum requirements involve multiplying constraint by -1 to make <=.
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax]) # Note sign change for min constraints
# Now solve problem!
result = lp(c, A, b, method='interior-point')
# Put back into nice series
diet = pd.Series(result.x,index=c.index)
print("Cost of diet for %s is $%4.2f per day." % (group,result.fun))
print("\nYou'll be eating (in 100s of grams or milliliters):")
print(diet[diet >= tol]) # Drop items with quantities less than precision of calculation.
tab = pd.DataFrame({"Outcome":np.abs(A).dot(diet),"Recommendation":np.abs(b)})
print("\nWith the following nutritional outcomes of interest:")
print(tab)
print("\nConstraining nutrients are:")
excess = tab.diff(axis=1).iloc[:,1]
print(excess.loc[np.abs(excess) < tol].index.tolist())
lowest_price_diet = []
print("The population of Marin is " + str(marin_pop))
for age in age_list:
group = age
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
tol = 1e-6
c = Prices.apply(lambda x:x.magnitude).dropna()
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
Aall = D[c.index].fillna(0)
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax])
result = lp(c, A, b, method='interior-point')
diet = pd.Series(result.x,index=c.index)
lowest_price_diet.append(result.fun)
d = {'Age/Sex Group': age_list, 'Lowest Price Diet ($ / Day)': lowest_price_diet}
total_dataframe = pd.DataFrame(data=d)
total_dataframe = total_dataframe.set_index("Age/Sex Group")
total_dataframe["Population Percentage"] = age_breakdown
total_dataframe["Population Breakdown"] = total_dataframe["Population Percentage"] / 100 * marin_pop
total_dataframe["Cost per Bucket ($)"] = total_dataframe["Population Breakdown"] * total_dataframe["Lowest Price Diet ($ / Day)"]
total_cost = np.round(sum(total_dataframe["Cost per Bucket ($)"]))
print("The total minimum cost to feed this population is $" + str(total_cost)\
+ " per day and an average cost of $" + str(np.round(total_cost/marin_pop, 2)) + " per person")
marin_df = total_dataframe
marin_df
###Output
The population of Marin is 260955
The total minimum cost to feed this population is $1767638.0 per day and an average cost of $6.77 per person
###Markdown
Memphis - Vegetarian
###Code
SHEETs = [# Memphis Vegetarian Food
("https://docs.google.com/spreadsheet/ccc?key=1dSaq_tWbYTlQvR_ejKapGSHYx-JSEYvBHAIfHInYn-0","Vegetarian"),
]
import ndb
import pandas as pd
import warnings
DFs = []
#for csv in CSVs: # Uncomment to use a list of csv files as inputs
# DFs.append(pd.read_csv(csv,dtype=str))
try:
if len(SHEETs):
for ID, RANGE_NAME in SHEETs:
try:
if "docs.google.com" in ID:
sheet = "%s&output=csv" % ID
else:
sheet = "https://docs.google.com/spreadsheet/ccc?key=%s&output=csv" % ID
DFs.append(pd.read_csv(sheet))
except pd.errors.ParserError:
warnings.warn("Can't read sheet at https://docs.google.com/spreadsheets/d/%s.\nCheck Sharing settings, so that anyone with link can view?" % ID)
except NameError: # SHEETS not defined?
pass
df = pd.concat(DFs,ignore_index=True,sort=False)
# Some columns which ought to be numeric are actually str; convert them
df['Price'] = df['Price'].astype(float)
df['Quantity'] = df['Quantity'].astype(float)
df["Units"] = df["Units"].astype(str)
df["NDB"][df['Food'].str.contains("Wheat Pasta")] = str(45051131)
df
D = {}
for food in df.Food.tolist():
try:
NDB = df.loc[df.Food==food,:].NDB
D[food] = ndb.ndb_report(apikey[user],NDB).Quantity
except AttributeError:
warnings.warn("Couldn't find NDB Code %s for food %s." % (food,NDB))
D = pd.DataFrame(D,dtype=float)
# Convert food quantities to NDB units
df['NDB Quantity'] = df[['Quantity','Units']].T.apply(lambda x : ndb.ndb_units(x['Quantity'],x['Units']))
# Now may want to filter df by time or place--need to get a unique set of food names.
df['NDB Price'] = df['Price']/df['NDB Quantity']
df.dropna(how='any') # Drop food with any missing data
# To use minimum price observed
Prices = df.groupby('Food')['NDB Price'].min()
# Choose sex/age group:
group = "M 19-30"
# Define *minimums*
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
# Define *maximums*
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
from scipy.optimize import linprog as lp
import numpy as np
tol = 1e-6 # Numbers in solution smaller than this (in absolute value) treated as zeros
c = Prices.apply(lambda x:x.magnitude).dropna()
# Compile list that we have both prices and nutritional info for; drop if either missing
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
# Drop nutritional information for foods we don't know the price of,
# and replace missing nutrients with zeros.
Aall = D[c.index].fillna(0)
# Drop rows of A that we don't have constraints for.
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
# Minimum requirements involve multiplying constraint by -1 to make <=.
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax]) # Note sign change for min constraints
# Now solve problem!
result = lp(c, A, b, method='interior-point')
# Put back into nice series
diet = pd.Series(result.x,index=c.index)
print("Cost of diet for %s is $%4.2f per day." % (group,result.fun))
print("\nYou'll be eating (in 100s of grams or milliliters):")
print(diet[diet >= tol]) # Drop items with quantities less than precision of calculation.
tab = pd.DataFrame({"Outcome":np.abs(A).dot(diet),"Recommendation":np.abs(b)})
print("\nWith the following nutritional outcomes of interest:")
print(tab)
print("\nConstraining nutrients are:")
excess = tab.diff(axis=1).iloc[:,1]
print(excess.loc[np.abs(excess) < tol].index.tolist())
lowest_price_diet = []
print("The population of Memphis is " + str(memphis_pop))
for age in age_list:
group = age
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
tol = 1e-6
c = Prices.apply(lambda x:x.magnitude).dropna()
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
Aall = D[c.index].fillna(0)
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax])
result = lp(c, A, b, method='interior-point')
diet = pd.Series(result.x,index=c.index)
lowest_price_diet.append(result.fun)
d = {'Age/Sex Group': age_list, 'Lowest Price Diet ($ / Day)': lowest_price_diet}
total_dataframe = pd.DataFrame(data=d)
total_dataframe = total_dataframe.set_index("Age/Sex Group")
total_dataframe["Population Percentage"] = age_breakdown
total_dataframe["Population Breakdown"] = total_dataframe["Population Percentage"] / 100 * memphis_pop
total_dataframe["Cost per Bucket ($)"] = total_dataframe["Population Breakdown"] * total_dataframe["Lowest Price Diet ($ / Day)"]
total_cost = np.round(sum(total_dataframe["Cost per Bucket ($)"]))
print("The total minimum cost to feed this population is $" + str(total_cost)\
+ " per day and an average cost of $" + str(np.round(total_cost/memphis_pop, 2)) + " per person")
memphis_veg_df = total_dataframe
memphis_veg_df
###Output
The population of Memphis is 670000
The total minimum cost to feed this population is $2494228.0 per day and an average cost of $3.72 per person
###Markdown
East Oakland - Vegetarian
###Code
SHEETs = [# East Oakland vegetarian food
("https://docs.google.com/spreadsheet/ccc?key=1agk8whJf_x5K8MpEL-kKmYThLPFlPvWY1OkKYvscK-o","Vegetarian"),
]
import ndb
import pandas as pd
import warnings
DFs = []
#for csv in CSVs: # Uncomment to use a list of csv files as inputs
# DFs.append(pd.read_csv(csv,dtype=str))
try:
if len(SHEETs):
for ID, RANGE_NAME in SHEETs:
try:
if "docs.google.com" in ID:
sheet = "%s&output=csv" % ID
else:
sheet = "https://docs.google.com/spreadsheet/ccc?key=%s&output=csv" % ID
DFs.append(pd.read_csv(sheet))
except pd.errors.ParserError:
warnings.warn("Can't read sheet at https://docs.google.com/spreadsheets/d/%s.\nCheck Sharing settings, so that anyone with link can view?" % ID)
except NameError: # SHEETS not defined?
pass
df = pd.concat(DFs,ignore_index=True,sort=False)
# Some columns which ought to be numeric are actually str; convert them
df['Price'] = df['Price'].astype(float)
df['Quantity'] = df['Quantity'].astype(float)
df["Units"] = df["Units"].astype(str)
df["NDB"][df['Food'].str.contains("Wheat Pasta")] = str(45051131)
df
D = {}
for food in df.Food.tolist():
try:
NDB = df.loc[df.Food==food,:].NDB
D[food] = ndb.ndb_report(apikey[user],NDB).Quantity
except AttributeError:
warnings.warn("Couldn't find NDB Code %s for food %s." % (food,NDB))
D = pd.DataFrame(D,dtype=float)
# Convert food quantities to NDB units
df['NDB Quantity'] = df[['Quantity','Units']].T.apply(lambda x : ndb.ndb_units(x['Quantity'],x['Units']))
# Now may want to filter df by time or place--need to get a unique set of food names.
df['NDB Price'] = df['Price']/df['NDB Quantity']
df.dropna(how='any') # Drop food with any missing data
# To use minimum price observed
Prices = df.groupby('Food')['NDB Price'].min()
# Choose sex/age group:
group = "M 19-30"
# Define *minimums*
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
# Define *maximums*
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
from scipy.optimize import linprog as lp
import numpy as np
tol = 1e-6 # Numbers in solution smaller than this (in absolute value) treated as zeros
c = Prices.apply(lambda x:x.magnitude).dropna()
# Compile list that we have both prices and nutritional info for; drop if either missing
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
# Drop nutritional information for foods we don't know the price of,
# and replace missing nutrients with zeros.
Aall = D[c.index].fillna(0)
# Drop rows of A that we don't have constraints for.
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
# Minimum requirements involve multiplying constraint by -1 to make <=.
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax]) # Note sign change for min constraints
# Now solve problem!
result = lp(c, A, b, method='interior-point')
# Put back into nice series
diet = pd.Series(result.x,index=c.index)
print("Cost of diet for %s is $%4.2f per day." % (group,result.fun))
print("\nYou'll be eating (in 100s of grams or milliliters):")
print(diet[diet >= tol]) # Drop items with quantities less than precision of calculation.
tab = pd.DataFrame({"Outcome":np.abs(A).dot(diet),"Recommendation":np.abs(b)})
print("\nWith the following nutritional outcomes of interest:")
print(tab)
print("\nConstraining nutrients are:")
excess = tab.diff(axis=1).iloc[:,1]
print(excess.loc[np.abs(excess) < tol].index.tolist())
lowest_price_diet = []
print("The population of East Oakland is " + str(oakland_pop))
for age in age_list:
group = age
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
tol = 1e-6
c = Prices.apply(lambda x:x.magnitude).dropna()
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
Aall = D[c.index].fillna(0)
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax])
result = lp(c, A, b, method='interior-point')
diet = pd.Series(result.x,index=c.index)
lowest_price_diet.append(result.fun)
d = {'Age/Sex Group': age_list, 'Lowest Price Diet ($ / Day)': lowest_price_diet}
total_dataframe = pd.DataFrame(data=d)
total_dataframe = total_dataframe.set_index("Age/Sex Group")
total_dataframe["Population Percentage"] = age_breakdown
total_dataframe["Population Breakdown"] = total_dataframe["Population Percentage"] / 100 * oakland_pop
total_dataframe["Cost per Bucket ($)"] = total_dataframe["Population Breakdown"] * total_dataframe["Lowest Price Diet ($ / Day)"]
total_cost = np.round(sum(total_dataframe["Cost per Bucket ($)"]))
print("The total minimum cost to feed this population is $" + str(total_cost)\
+ " per day and an average cost of $" + str(np.round(total_cost/oakland_pop, 2)) + " per person")
oakland_veg_df = total_dataframe
oakland_veg_df
###Output
The population of East Oakland is 87943
The total minimum cost to feed this population is $230063.0 per day and an average cost of $2.62 per person
###Markdown
MARIN - Vegetarian
###Code
SHEETs = [# Marin vegetarian food
("https://docs.google.com/spreadsheet/ccc?key=1L8VjFMV_-DY6hpWvTBLx4gxrR-CbiAXUx3DCb3kjPoY","Vegetarian"),
]
import ndb
import pandas as pd
import warnings
DFs = []
#for csv in CSVs: # Uncomment to use a list of csv files as inputs
# DFs.append(pd.read_csv(csv,dtype=str))
try:
if len(SHEETs):
for ID, RANGE_NAME in SHEETs:
try:
if "docs.google.com" in ID:
sheet = "%s&output=csv" % ID
else:
sheet = "https://docs.google.com/spreadsheet/ccc?key=%s&output=csv" % ID
DFs.append(pd.read_csv(sheet))
except pd.errors.ParserError:
warnings.warn("Can't read sheet at https://docs.google.com/spreadsheets/d/%s.\nCheck Sharing settings, so that anyone with link can view?" % ID)
except NameError: # SHEETS not defined?
pass
df = pd.concat(DFs,ignore_index=True,sort=False)
# Some columns which ought to be numeric are actually str; convert them
df['Price'] = df['Price'].astype(float)
df['Quantity'] = df['Quantity'].astype(float)
df["Units"] = df["Units"].astype(str)
df["NDB"][df['Food'].str.contains("Wheat Pasta")] = str(45051131)
df
D = {}
for food in df.Food.tolist():
try:
NDB = df.loc[df.Food==food,:].NDB
D[food] = ndb.ndb_report(apikey[user],NDB).Quantity
except AttributeError:
warnings.warn("Couldn't find NDB Code %s for food %s." % (food,NDB))
D = pd.DataFrame(D,dtype=float)
# Convert food quantities to NDB units
df['NDB Quantity'] = df[['Quantity','Units']].T.apply(lambda x : ndb.ndb_units(x['Quantity'],x['Units']))
# Now may want to filter df by time or place--need to get a unique set of food names.
df['NDB Price'] = df['Price']/df['NDB Quantity']
df.dropna(how='any') # Drop food with any missing data
# To use minimum price observed
Prices = df.groupby('Food')['NDB Price'].min()
Prices.head()
# Choose sex/age group:
group = "M 19-30"
# Define *minimums*
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
# Define *maximums*
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
from scipy.optimize import linprog as lp
import numpy as np
tol = 1e-6 # Numbers in solution smaller than this (in absolute value) treated as zeros
c = Prices.apply(lambda x:x.magnitude).dropna()
# Compile list that we have both prices and nutritional info for; drop if either missing
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
# Drop nutritional information for foods we don't know the price of,
# and replace missing nutrients with zeros.
Aall = D[c.index].fillna(0)
# Drop rows of A that we don't have constraints for.
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
# Minimum requirements involve multiplying constraint by -1 to make <=.
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax]) # Note sign change for min constraints
# Now solve problem!
result = lp(c, A, b, method='interior-point')
# Put back into nice series
diet = pd.Series(result.x,index=c.index)
print("Cost of diet for %s is $%4.2f per day." % (group,result.fun))
print("\nYou'll be eating (in 100s of grams or milliliters):")
print(diet[diet >= tol]) # Drop items with quantities less than precision of calculation.
tab = pd.DataFrame({"Outcome":np.abs(A).dot(diet),"Recommendation":np.abs(b)})
print("\nWith the following nutritional outcomes of interest:")
print(tab)
print("\nConstraining nutrients are:")
excess = tab.diff(axis=1).iloc[:,1]
print(excess.loc[np.abs(excess) < tol].index.tolist())
lowest_price_diet = []
print("The population of Marin is " + str(marin_pop))
for age in age_list:
group = age
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
tol = 1e-6
c = Prices.apply(lambda x:x.magnitude).dropna()
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
Aall = D[c.index].fillna(0)
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax])
result = lp(c, A, b, method='interior-point')
diet = pd.Series(result.x,index=c.index)
lowest_price_diet.append(result.fun)
d = {'Age/Sex Group': age_list, 'Lowest Price Diet ($ / Day)': lowest_price_diet}
total_dataframe = pd.DataFrame(data=d)
total_dataframe = total_dataframe.set_index("Age/Sex Group")
total_dataframe["Population Percentage"] = age_breakdown
total_dataframe["Population Breakdown"] = total_dataframe["Population Percentage"] / 100 * marin_pop
total_dataframe["Cost per Bucket ($)"] = total_dataframe["Population Breakdown"] * total_dataframe["Lowest Price Diet ($ / Day)"]
total_cost = np.round(sum(total_dataframe["Cost per Bucket ($)"]))
print("The total minimum cost to feed this population is $" + str(total_cost)\
+ " per day and an average cost of $" + str(np.round(total_cost/marin_pop, 2)) + " per person")
marin_veg_df = total_dataframe
marin_veg_df
###Output
The population of Marin is 260955
The total minimum cost to feed this population is $1829378.0 per day and an average cost of $7.01 per person
###Markdown
MARIN - Soylent
###Code
SHEETs = [# Marin all food
("https://docs.google.com/spreadsheet/ccc?key=1O0t31y2dCJ3BFZkdFlkQQsZ3Gq5hgPweCrNm6a7BR4o","Soylent"),
]
import ndb
import pandas as pd
import warnings
DFs = []
#for csv in CSVs: # Uncomment to use a list of csv files as inputs
# DFs.append(pd.read_csv(csv,dtype=str))
try:
if len(SHEETs):
for ID, RANGE_NAME in SHEETs:
try:
if "docs.google.com" in ID:
sheet = "%s&output=csv" % ID
else:
sheet = "https://docs.google.com/spreadsheet/ccc?key=%s&output=csv" % ID
DFs.append(pd.read_csv(sheet))
except pd.errors.ParserError:
warnings.warn("Can't read sheet at https://docs.google.com/spreadsheets/d/%s.\nCheck Sharing settings, so that anyone with link can view?" % ID)
except NameError: # SHEETS not defined?
pass
df = pd.concat(DFs,ignore_index=True,sort=False)
# Some columns which ought to be numeric are actually str; convert them
df['Price'] = df['Price'].astype(float)
df['Quantity'] = df['Quantity'].astype(float)
df["Units"] = df["Units"].astype(str)
df["NDB"][df['Food'].str.contains("Wheat Pasta")] = str(45051131)
D = {}
for food in df.Food.tolist():
try:
NDB = df.loc[df.Food==food,:].NDB
D[food] = ndb.ndb_report(apikey[user],NDB).Quantity
except AttributeError:
warnings.warn("Couldn't find NDB Code %s for food %s." % (food,NDB))
D = pd.DataFrame(D,dtype=float)
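# Hand-entered nutrient profile for Soylent; the values are assigned positionally below, so they must follow the same order as D's nutrient index (the amounts themselves are taken as given).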
Soylent = [0, 200, 39, 0, 400, 14, 3.5, 1.5, 0, 5, 0, 3.6, 80, 3, 0, 700, 20, 0.3, 320, 15, 0.4, 20, 120, 120, 2, 0.4, 15, 1, 1, 2, 16, 0, 3]
D["Soylent"] = Soylent
# Convert food quantities to NDB units
df['NDB Quantity'] = df[['Quantity','Units']].T.apply(lambda x : ndb.ndb_units(x['Quantity'],x['Units']))
# Now may want to filter df by time or place--need to get a unique set of food names.
df['NDB Price'] = df['Price']/df['NDB Quantity']
df.dropna(how='any') # Drop food with any missing data
# To use minimum price observed
Prices = df.groupby('Food')['NDB Price'].min()
# Choose sex/age group:
group = "M 19-30"
# Define *minimums*
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
# Define *maximums*
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
from scipy.optimize import linprog as lp
import numpy as np
tol = 1e-6 # Numbers in solution smaller than this (in absolute value) treated as zeros
c = Prices.apply(lambda x:x.magnitude).dropna()
# Compile list that we have both prices and nutritional info for; drop if either missing
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
# Drop nutritional information for foods we don't know the price of,
# and replace missing nutrients with zeros.
Aall = D[c.index].fillna(0)
# Drop rows of A that we don't have constraints for.
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
# Minimum requirements involve multiplying constraint by -1 to make <=.
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax]) # Note sign change for min constraints
# Now solve problem!
result = lp(c, A, b, method='interior-point')
# Put back into nice series
diet = pd.Series(result.x,index=c.index)
print("Cost of diet for %s is $%4.2f per day." % (group,result.fun))
print("\nYou'll be eating (in 100s of grams or milliliters):")
print(diet[diet >= tol]) # Drop items with quantities less than precision of calculation.
tab = pd.DataFrame({"Outcome":np.abs(A).dot(diet),"Recommendation":np.abs(b)})
print("\nWith the following nutritional outcomes of interest:")
print(tab)
print("\nConstraining nutrients are:")
excess = tab.diff(axis=1).iloc[:,1]
print(excess.loc[np.abs(excess) < tol].index.tolist())
###Output
Cost of diet for M 19-30 is $7.16 per day.
You'll be eating (in 100s of grams or milliliters):
Food
Canned tomatoes (Whole Foods) 0.000024
Bananas (Whole Foods) 3.227957
Eggs (Whole Foods) 5.586431
Milk (Whole) (Mollie) 0.000002
Milk (Whole) (Whole Foods) 2.251242
Canned tomatoes (Mollie) 0.000002
Carrots (Whole Foods) 11.424410
Canned tuna (Mollie) 0.385284
dtype: float64
With the following nutritional outcomes of interest:
Outcome Recommendation
Nutrition
Energy 2599.999881 2400.0
Protein 112.067104 56.0
Fiber, total dietary 63.945214 33.6
Folate, DFE 526.744529 400.0
Calcium, Ca 1000.000055 1000.0
Carbohydrate, by difference 409.688891 130.0
Iron, Fe 17.451321 8.0
Magnesium, Mg 564.694109 400.0
Niacin 25.465024 16.0
Phosphorus, P 1864.661337 700.0
Potassium, K 9517.497101 4700.0
Riboflavin 4.036561 1.3
Thiamin 1.573144 1.2
Vitamin A, RAE 10480.810273 900.0
Vitamin B-12 5.819550 2.4
Vitamin B-6 3.988949 1.3
Vitamin C, total ascorbic acid 90.000004 90.0
Vitamin E (alpha-tocopherol) 15.000000 15.0
Vitamin K (phylloquinone) 175.886845 120.0
Zinc, Zn 12.264175 11.0
Vitamin D 600.000011 600.0
Sodium, Na 1868.589611 2300.0
Energy 2599.999881 2600.0
Constraining nutrients are:
['Vitamin E (alpha-tocopherol)']
###Markdown
Visualizations
###Code
total_df = pd.DataFrame()
total_df["Memphis"] = memphis_df["Lowest Price Diet ($ / Day)"]
total_df["Memphis Vegetarian"] = memphis_veg_df["Lowest Price Diet ($ / Day)"]
plot = total_df.plot.bar(colormap = 'Dark2', figsize = (12,8), title = "Memphis")
plot.set_ylabel("Price per Day ($ / Day)")
print("The average non-vegarian price per day is $" + str(np.round(sum(memphis_df["Cost per Bucket ($)"])/memphis_pop, 2)) + " compared to the vegetarian price per day of $" + str(np.round(sum(memphis_veg_df["Cost per Bucket ($)"])/memphis_pop, 2)))
total_df = pd.DataFrame()
total_df["Oakland"] = oakland_df["Lowest Price Diet ($ / Day)"]
total_df["Oakland Vegetarian"] = oakland_veg_df["Lowest Price Diet ($ / Day)"]
plot = total_df.plot.bar(colormap = 'Dark2', figsize = (12,8), title = "Oakland")
plot.set_ylabel("Price per Day ($ / Day)")
print("The average non-vegarian price per day is $" + str(np.round(sum(oakland_df["Cost per Bucket ($)"])/oakland_pop, 2)) + " compared to the vegetarian price per day of $" + str(np.round(sum(oakland_veg_df["Cost per Bucket ($)"])/oakland_pop, 2)))
total_df = pd.DataFrame()
total_df["Marin"] = marin_df["Lowest Price Diet ($ / Day)"]
total_df["Marin Vegetarian"] = marin_veg_df["Lowest Price Diet ($ / Day)"]
plot = total_df.plot.bar(colormap = 'Dark2', figsize = (12,8), title = "Marin")
plot.set_ylabel("Price per Day ($ / Day)")
print("The average non-vegarian price per day is $" + str(np.round(sum(marin_df["Cost per Bucket ($)"])/marin_pop, 2)) + " compared to the vegetarian price per day of $" + str(np.round(sum(marin_veg_df["Cost per Bucket ($)"])/marin_pop, 2)))
total_df = pd.DataFrame()
total_df["Memphis"] = memphis_df["Lowest Price Diet ($ / Day)"]
total_df["Oakland"] = oakland_df["Lowest Price Diet ($ / Day)"]
total_df["Marin"] = marin_df["Lowest Price Diet ($ / Day)"]
plot = total_df.plot.bar(colormap = 'Dark2', figsize = (12,8), title = "All Places All Foods")
plot.set_ylabel("Price per Day ($ / Day)")
total_df = pd.DataFrame()
total_df["Memphis Vegetarian"] = memphis_veg_df["Lowest Price Diet ($ / Day)"]
total_df["Oakland Vegetarian"] = oakland_veg_df["Lowest Price Diet ($ / Day)"]
total_df["Marin Vegetarian"] = marin_veg_df["Lowest Price Diet ($ / Day)"]
plot = total_df.plot.bar(colormap = 'Dark2', figsize = (12,8), title = "All Places Vegetarian")
plot.set_ylabel("Price per Day ($ / Day)")
###Output
_____no_output_____
###Markdown
Effect of Produce Subsidy in Memphis
###Code
SHEETs = [# Memphis 10% Subsidy
("https://docs.google.com/spreadsheet/ccc?key=15jhkSfNsKzEGTacQChzkXA2qY66Ny_hm1u0Rwm_53yI","Vegetarian"),
]
import ndb
import pandas as pd
import warnings
DFs = []
#for csv in CSVs: # Uncomment to use a list of csv files as inputs
# DFs.append(pd.read_csv(csv,dtype=str))
try:
if len(SHEETs):
for ID, RANGE_NAME in SHEETs:
try:
if "docs.google.com" in ID:
sheet = "%s&output=csv" % ID
else:
sheet = "https://docs.google.com/spreadsheet/ccc?key=%s&output=csv" % ID
DFs.append(pd.read_csv(sheet))
except pd.errors.ParserError:
warnings.warn("Can't read sheet at https://docs.google.com/spreadsheets/d/%s.\nCheck Sharing settings, so that anyone with link can view?" % ID)
except NameError: # SHEETS not defined?
pass
df = pd.concat(DFs,ignore_index=True,sort=False)
# Some columns which ought to be numeric are actually str; convert them
df['Price'] = df['Price'].astype(float)
df['Quantity'] = df['Quantity'].astype(float)
df["Units"] = df["Units"].astype(str)
df["NDB"][df['Food'].str.contains("Wheat Pasta")] = str(45051131)
df
D = {}
for food in df.Food.tolist():
try:
NDB = df.loc[df.Food==food,:].NDB
D[food] = ndb.ndb_report(apikey[user],NDB).Quantity
except AttributeError:
warnings.warn("Couldn't find NDB Code %s for food %s." % (food,NDB))
D = pd.DataFrame(D,dtype=float)
# Convert food quantities to NDB units
df['NDB Quantity'] = df[['Quantity','Units']].T.apply(lambda x : ndb.ndb_units(x['Quantity'],x['Units']))
# Now may want to filter df by time or place--need to get a unique set of food names.
df['NDB Price'] = df['Price']/df['NDB Quantity']
df.dropna(how='any') # Drop food with any missing data
# To use minimum price observed
Prices = df.groupby('Food')['NDB Price'].min()
# Choose sex/age group:
group = "M 19-30"
# Define *minimums*
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
# Define *maximums*
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
from scipy.optimize import linprog as lp
import numpy as np
tol = 1e-6 # Numbers in solution smaller than this (in absolute value) treated as zeros
c = Prices.apply(lambda x:x.magnitude).dropna()
# Compile list that we have both prices and nutritional info for; drop if either missing
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
# Drop nutritional information for foods we don't know the price of,
# and replace missing nutrients with zeros.
Aall = D[c.index].fillna(0)
# Drop rows of A that we don't have constraints for.
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
# Minimum requirements involve multiplying constraint by -1 to make <=.
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax]) # Note sign change for min constraints
# Now solve problem!
result = lp(c, A, b, method='interior-point')
# Put back into nice series
diet = pd.Series(result.x,index=c.index)
print("Cost of diet for %s is $%4.2f per day." % (group,result.fun))
print("\nYou'll be eating (in 100s of grams or milliliters):")
print(diet[diet >= tol]) # Drop items with quantities less than precision of calculation.
tab = pd.DataFrame({"Outcome":np.abs(A).dot(diet),"Recommendation":np.abs(b)})
print("\nWith the following nutritional outcomes of interest:")
print(tab)
print("\nConstraining nutrients are:")
excess = tab.diff(axis=1).iloc[:,1]
print(excess.loc[np.abs(excess) < tol].index.tolist())
lowest_price_diet = []
print("The population of Memphis is " + str(memphis_pop))
for age in age_list:
group = age
bmin = pd.read_csv('./diet_minimums.csv').set_index('Nutrition')[group]
bmax = pd.read_csv('./diet_maximums.csv').set_index('Nutrition')[group]
tol = 1e-6
c = Prices.apply(lambda x:x.magnitude).dropna()
use = list(set(c.index.tolist()).intersection(D.columns.tolist()))
c = c[use]
Aall = D[c.index].fillna(0)
Amin = Aall.loc[bmin.index]
Amax = Aall.loc[bmax.index]
A = pd.concat([-Amin,Amax])
b = pd.concat([-bmin,bmax])
result = lp(c, A, b, method='interior-point')
diet = pd.Series(result.x,index=c.index)
lowest_price_diet.append(result.fun)
d = {'Age/Sex Group': age_list, 'Lowest Price Diet ($ / Day)': lowest_price_diet}
total_dataframe = pd.DataFrame(data=d)
total_dataframe = total_dataframe.set_index("Age/Sex Group")
total_dataframe["Population Percentage"] = age_breakdown
total_dataframe["Population Breakdown"] = total_dataframe["Population Percentage"] / 100 * memphis_pop
total_dataframe["Cost per Bucket ($)"] = total_dataframe["Population Breakdown"] * total_dataframe["Lowest Price Diet ($ / Day)"]
total_cost = np.round(sum(total_dataframe["Cost per Bucket ($)"]))
print("The total minimum cost to feed this population is $" + str(total_cost)\
+ " per day and an average cost of $" + str(np.round(total_cost/memphis_pop, 2)) + " per person")
memphis_20_df = total_dataframe
memphis_20_df
total_df = pd.DataFrame()
total_df["No Subsidy"] = memphis_veg_df["Lowest Price Diet ($ / Day)"]
total_df["20% Subsidy"] = memphis_20_df["Lowest Price Diet ($ / Day)"]
plot = total_df.plot.bar(colormap = 'Dark2', figsize = (12,8), title = "Memphis")
plot.set_ylabel("Price per Day ($ / Day)")
print("The average vegarian price per day is $" + str(np.round(sum(memphis_veg_df["Cost per Bucket ($)"])/memphis_pop, 2)) + " compared to the subsidized vegetarian price per day of $" + str(np.round(sum(memphis_20_df["Cost per Bucket ($)"])/memphis_pop, 2)))
total_df = pd.DataFrame()
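# Relative saving per age/sex group: (unsubsidized cost - subsidized cost) / unsubsidized cost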
total_df["Percent Reduction"] = (memphis_veg_df["Lowest Price Diet ($ / Day)"] - memphis_20_df["Lowest Price Diet ($ / Day)"]) / memphis_veg_df["Lowest Price Diet ($ / Day)"]
plot = total_df.plot.bar(colormap = 'Dark2', figsize = (12,8), title = "Memphis")
plot.set_ylabel("Percentage Decrease in Price")
print("The average price reduction is " + str(np.round(np.mean((memphis_veg_df["Lowest Price Diet ($ / Day)"] - memphis_20_df["Lowest Price Diet ($ / Day)"]) / memphis_veg_df["Lowest Price Diet ($ / Day)"]), 3) * 100) + "%")
###Output
The average price reduction is 8.1%
###Markdown
Speech recognition - projectBefore running the project, check that you have all the required libraries installed.
###Code
from scipy.io import wavfile
from matplotlib import pyplot as plt
import numpy as np
import sys
import wave
import scipy.signal
import os
import librosa
import librosa.util
from dtw import dtw
from numpy.linalg import norm
import librosa.display
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
###Output
_____no_output_____
###Markdown
1. Function for loading the signal files (load_voice_signal()). The function normalizes the signal by dividing it by its maximum absolute value.
###Code
def load_voice_signal(filename):
samplerate, data = wavfile.read('commends/' + filename + '.wav')
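# Normalize by the maximum absolute sample value, then keep the first channel (assumes a stereo recording)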
words = data / np.amax(np.absolute(data))
words = words[::,0]
timestamps = np.array([i[0] / samplerate for i in np.ndindex(words.shape)])
return words, timestamps
###Output
_____no_output_____
###Markdown
Example of visualizing the sound wave
###Code
words, timestamps = load_voice_signal('G11')
#y = words[32000:70000]
#x = timestamps[32000:70000]
y = words
x = timestamps
plt.figure(figsize=(10, 5), dpi= 80, facecolor='w', edgecolor='k')
plt.title('Signal Wave')
plt.plot(x,y)
plt.show()
###Output
_____no_output_____
###Markdown
2. Function that extracts the signal features (extract_features()).
###Code
def extract_features(word):
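# Average the 40 MFCC coefficients over time to obtain one fixed-length feature vector per recording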
my_mfcc = np.mean(librosa.feature.mfcc(y = word, sr = 44100, n_mfcc = 40).T, axis = 0)
my_mfcc = np.array([my_mfcc])
return my_mfcc
###Output
_____no_output_____
###Markdown
Feature extraction and a 4:1 split into training and test sets. The split into training and test sets is performed randomly
###Code
from sklearn.model_selection import train_test_split
commands = ['otwórz', 'zamknij', 'gorąco', 'zimno', 'ciemno', 'jasno', 'stop', 'rozwiń', 'zasłoń']
file_ids, commands = zip(*[(person + str(cmd_idx * 5 + n_record + 1), cmd)
for cmd_idx, cmd in enumerate(commands)
for person in 'OJG'
for n_record in range(5)])
words = [load_voice_signal(file_id)[0] for file_id in file_ids]
features = np.squeeze([extract_features(word) for word in words])
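# Randomly hold out 20% of the recordings for testing; features, raw signals, and labels are split consistently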
(features_train, features_test,
words_train, words_test,
commands_train, commands_test) = train_test_split(features, words, commands, test_size=0.2)
###Output
_____no_output_____
###Markdown
Support Vector Classification:https://www.icsr.agh.edu.pl/~dzwinel/files/courses/C3-Classifiers.pdf
###Code
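# Linear-kernel support vector classifier; the small C value corresponds to relatively strong regularization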
my_svc = SVC(kernel = "linear", C = 0.025)
my_svc.fit(features_train, commands_train)
predictions = my_svc.predict(features_test)
print('Classifier accuracy:{}'.format(accuracy_score(commands_test, predictions)))
###Output
Classifier accuracy:0.6296296296296297
###Markdown
5. Comparing signals using DTWhttp://cs.uccs.edu/~cs525/studentproj/projS2010/plama/doc/cs525-SpeechRecogntion.pdfhttp://research.cs.tamu.edu/prism/lectures/sp/l9.pdfhttp://web.science.mq.edu.au/~cassidy/comp449/html/ch11s02.htmlhttps://github.com/crawles/dtw/blob/master/Speech_Recognition_DTW.ipynbhttp://iosrjournals.org/iosr-jece/papers/NCIEST/Volume%202/3.%2012-16.pdf a) Computing the normalized distance between the commands zamknij and stop using the DTW method
###Code
fs = 44100 # sampling rate of the signal
same_words1 = [word for word, cmd in zip(words_test, commands_test) if cmd == 'zamknij']
same_words2 = [word for word, cmd in zip(words_test, commands_test) if cmd == 'stop']
word_1 = librosa.feature.mfcc(same_words1[0], fs, n_mfcc = 13)
plt.subplot(1, 2, 1)
librosa.display.specshow(word_1)
word_2 = librosa.feature.mfcc(same_words2[0], fs, n_mfcc = 13)
plt.subplot(1, 2, 2)
librosa.display.specshow(word_2)
dist, cost, acc_cost, path = dtw(word_2.T, word_1.T, dist=lambda x, y: np.linalg.norm(x - y, ord=1))
print('The normalized distance between the commands zamknij and stop is:', dist)
plt.imshow(acc_cost.T, origin='lower', cmap='gray', interpolation='nearest')
plt.plot(path[0], path[1], 'w')
plt.show()
###Output
_____no_output_____
###Markdown
b) Computing the normalized distance for the command otwórz spoken by two different people:
###Code
path1 = 'J' + str(1)
path2 = 'O' + str(1)
word_1 = extract_features(load_voice_signal(path1)[0])
word_2 = extract_features(load_voice_signal(path2)[0])
plt.subplot(1, 2, 1)
librosa.display.specshow(word_1)
plt.subplot(1, 2, 2)
librosa.display.specshow(word_2)
dist, cost, acc_cost, path = dtw(word_2.T, word_1.T, dist=lambda x, y: np.linalg.norm(x - y, ord=1))
print('The normalized distance between the command otwórz spoken by two different people is:', dist)
###Output
The normalized distance between the command otwórz spoken by two different people is: 4.756999420506903
###Markdown
Computing the normalized distance for the command zamknij spoken by the same person:
###Code
path1 = 'J' + str(6)
path2 = 'J' + str(10)
word_1 = extract_features(load_voice_signal(path1)[0])
word_2 = extract_features(load_voice_signal(path2)[0])
plt.subplot(1, 2, 1)
librosa.display.specshow(word_1)
plt.subplot(1, 2, 2)
librosa.display.specshow(word_2)
dist, cost, acc_cost, path = dtw(word_2.T, word_1.T, dist=lambda x, y: np.linalg.norm(x - y, ord=1))
print('The normalized distance between the command zamknij spoken by the same person:', dist)
###Output
The normalized distance between the command zamknij spoken by the same person: 1.9417552141448073
###Markdown
Distance matrixA matrix built with the DTW algorithm - a graphical representation of the distance between the two analyzed commands
###Code
def compute_and_plot_distances(my_com, paths=None, features=None):
if features is None:
if paths is None:
raise ValueError('Both paths and features are empty')
features = np.array([extract_features(load_voice_signal(path)[0]).T for path in paths])
N = len(features)
my_dist = np.ndarray(shape=(N, N))
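# Fill the symmetric matrix of pairwise DTW distances between the feature sequences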
for i in range(N):
for j in range(i, N):
dist, _, _, _ = dtw(features[i], features[j], dist=lambda x, y: np.linalg.norm(x - y, ord=1))
my_dist[i, j] = dist
my_dist[j, i] = dist
plt.figure(figsize=(5,5), dpi= 80, facecolor='w', edgecolor='k')
plt.imshow(my_dist, cmap='viridis')
plt.colorbar()
plt.tick_params(axis='x', bottom=False, top=True, labelbottom=False, labeltop=True)
plt.xticks(range(len(my_dist)), my_com, rotation=90)
plt.yticks(range(len(my_dist)), my_com)
plt.show()
my_com = ['goraco'] * 5
paths = ['G{}'.format(i) for i in range(11, 16)]
print('Distance matrix - comparison of the distances for the command GORĄCO spoken by a single person')
compute_and_plot_distances(my_com, paths)
###Output
Distance matrix - comparison of the distances for the command GORĄCO spoken by a single person
###Markdown
Comparison matrix for the word otwórz spoken by three different people
###Code
my_com = ['otwórz - G', 'otwórz - J', 'otwórz - O']
paths = ['{}2'.format(p) for p in 'GJO']
print('Distance matrix - comparison of the distances for the command OTWORZ spoken by three different people')
compute_and_plot_distances(my_com, paths)
commands = ['otwórz', 'zamknij', 'gorąco', 'zimno', 'ciemno', 'jasno', 'stop', 'rozwiń', 'zasłoń']
paths_groups = [['{}{}'.format(p, 5 * cmd_idx + offset + 1) for p in 'GJO' for offset in range(5)]
for cmd_idx in range(len(commands))]
features_groups = [np.array([extract_features(load_voice_signal(path)[0]).T for path in group])
for group in paths_groups]
features = [group.mean(axis=0) for group in features_groups]
print('Distance matrix - comparison of the distances between all commands for the whole data set')
compute_and_plot_distances(commands, features=features)
###Output
Distance matrix - comparison of the distances between all commands for the whole data set
|
raw/exploratory_computing_with_python/notebook2_arrays/py_exploratory_comp_2.ipynb | ###Markdown
Exploratory Computing with Python*Developed by Mark Bakker* Notebook 2: ArraysIn this notebook, we will do math on arrays using functions of the `numpy` package. A nice overview of `numpy` functionality can be found [here](http://wiki.scipy.org/Tentative_NumPy_Tutorial). We will also make plots. So we start by importing the plotting part of the `matplotlib` package and call it `plt` and we import the `numpy` package and call it `np`. We also tell Python to put all graphs inline. We will add these three lines at the top of all upcoming notebooks as we will always be using `numpy` and `matplotlib`.
###Code
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
One-dimensional arraysThere are many ways to create arrays. For example, you can enter the individual elements of an array
###Code
np.array([1, 7, 2, 12])
###Output
_____no_output_____
###Markdown
Note that the `array` function takes one sequence of points between square brackets. Another function to create an array is `ones(shape)`, which creates an array of the specified `shape` filled with the value 1. There is an analogous function `zeros(shape)` to create an array filled with the value 0 (which can also be achieved with `0 * ones(shape)`). Next to the already mentioned `linspace` function there is the `arange(start, end, step)` function, which creates an array starting at `start`, taking steps equal to `step` and stopping before it reaches `end`. If you don't specify the `step`, it is set equal to 1. If you only specify one input value, it returns a sequence starting at 0 and incrementing by 1 until the specified value is reached (but again, it stops before it reaches that value)
###Code
print(np.arange(1, 7)) # Takes default steps of 1 and doesn't include 7
print(np.arange(5)) # Starts at 0 and ends at 4, giving 5 numbers
###Output
_____no_output_____
###Markdown
Recall that comments in Python are preceded by a `#`. Arrays have a dimension. So far we have only used one-dimensional arrays. Hence the dimension is 1. For one-dimensional arrays, you can also compute the length (which is part of Python and not `numpy`), which returns the number of values in the array
###Code
x = np.array([1, 7, 2, 12])
print('number of dimensions of x:', np.ndim(x))
print('length of x:', len(x))
###Output
_____no_output_____
###Markdown
The individual elements of an array can be accessed with their index. Indices start at 0. This may require a bit of getting used to. It means that the first value in the array has index 0. The index of an array is specified using square brackets.
###Code
x = np.arange(20, 30)
print(x)
print(x[0])
print(x[5])
###Output
_____no_output_____
###Markdown
A range of indices may be specified using the colon syntax:`x[start:end_before]` or `x[start:end_before:step]`. If the `start` isn't specified, 0 will be used. If the step isn't specified, 1 will be used.
###Code
x = np.arange(20, 30)
print(x)
print(x[0:5])
print(x[:5]) # same as previous one
print(x[3:7])
print(x[2:9:2]) # step is 2
###Output
_____no_output_____
###Markdown
You can also start at the end and count back. Generally, the index of the end is not known. You can find out how long the array is and access the last value by typing `x[len(x)-1]` but it would be inconvenient to have to type `len(arrayname)` all the time. Luckily, there is a shortcut: `x[-1]` is the last value in the array. For example:
###Code
xvalues = np.arange(0, 100, 10)
print(xvalues)
print(xvalues[len(xvalues) - 1]) # last value in array
print(xvalues[-1]) # much shorter
print(xvalues[-1::-1]) # start at the end and go back with steps of -1
###Output
_____no_output_____
###Markdown
You can assign one value to a range of an array by specifying a range of indices, or you can assign an array to a range of another array, as long as the ranges have equal length. In the last example below, the first 5 values of `x` (specified as `x[0:5]`) are given the values `[40,42,44,46,48]`.
###Code
x = 20 * np.ones(10)
print(x)
x[0:5] = 40
print(x)
x[0:5] = np.arange(40, 50, 2)
print(x)
###Output
_____no_output_____
###Markdown
Exercise 1, Arrays and indicesCreate an array of zeros with length 20. Change the first 5 values to 10. Change the next 10 values to a sequence starting at 12 and increasing with steps of 2 to 30 - do this with one command. Set the final 5 values to 30. Plot the value of the array on the $y$-axis vs. the index of the array on the $x$-axis. Draw vertical dashed lines at $x=4$ and $x=14$ (i.e., the section between the dashed lines is where the line increases from 10 to 30). Set the minimum and maximum values of the $y$-axis to 8 and 32 using the `ylim` command. Answer for Exercise 1 Arrays, Lists, and TuplesA one-dimensional array is a sequence of values that you can do math on. Next to the array, Python has several other data types that can store a sequence of values. The first one is called a `list` and is entered between square brackets. The second one is a tuple (you are right, strange name), and it is entered with parentheses. The difference is that you can change the values of a list after you create them, and you can not do that with a tuple. Other than that, for now you just need to remember that they exist, and that you *cannot* do math with either lists or tuples. When you do `2 * alist`, where `alist` is a list, you don't multiply all values in `alist` with the number 2. What happens is that you create a new list that contains `alist` twice (so it adds them back to back). The same holds for tuples. That can be very useful, but not when your intent is to multiply all values by 2. In the example below, the first value in a list is modified. Try to modify one of the values in `btuple` below and you will see that you get an error message:
###Code
alist = [1, 2, 3]
print('alist', alist)
btuple = (10, 20, 30)
print('btuple', btuple)
alist[0] = 7 # Since alist is a list, you can change values
print('modified alist', alist)
#btuple[0] = 100 # Will give an error
#print 2*alist
###Output
_____no_output_____
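###Markdown
A quick check of the behavior described above: multiplying the list by 2 repeats the list, while multiplying an array by 2 multiplies every value by 2.
###Code
alist = [1, 2, 3]
print(2 * alist) # the list back to back: [1, 2, 3, 1, 2, 3]
print(2 * np.array(alist)) # element-wise math: [2 4 6]
###Output
_____no_output_____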
###Markdown
Lists and tuples are versatile data types in Python. We already used lists without knowing it when we created our first array with the command `array([1,7,2,12])`. What we did is we gave the `array` function one input argument: the list `[1,7,2,12]`, and the `array` function returned a one-dimensional array with those values. Lists and tuples can consist of sequences of pretty much anything, not just numbers. In the example given below, `alist` contains 5 *things*: the integer 1, the float 20, the word `python`, an array with the values 1,2,3, and finally, the function `len`. The latter means that `alist[4]` is actually the function `len`. That function can be called to determine the length of an array as shown below. The latter may be a bit confusing, but it is cool behavior if you take the time to think about it.
###Code
alist = [1, 20.0, 'python', np.array([1,2,3]), len]
print(alist)
print(alist[0])
print(alist[2])
print(alist[4](alist[3])) # same as len(np.array([1,2,3]))
###Output
_____no_output_____
###Markdown
Two-dimensional arraysArrays may have arbitrary dimensions (as long as they fit in your computer's memory). We will make frequent use of two-dimensional arrays. They can be created with any of the aforementioned functions by specifying the number of rows and columns of the array. Note that the number of rows and columns must be a tuple (so they need to be between parentheses), as the functions expect only one input argument, which may be either one number or a tuple of multiple numbers.
###Code
x = np.ones((3, 4)) # An array with 3 rows and 4 columns
print(x)
###Output
_____no_output_____
###Markdown
Arrays may also be defined by specifying all the values in the array. The `array` function gets passed one list consisting of separate lists for each row of the array. In the example below the rows are entered on different lines. That may make it easier to enter the array, but it is not required. You can change the size of an array to any shape using the `reshape` function as long as the total number of entries doesn't change.
###Code
x = np.array([[4, 2, 3, 2],
[2, 4, 3, 1],
[0, 4, 1, 3]])
print(x)
print(np.reshape(x, (6, 2))) # 6 rows, 2 columns
print(np.reshape(x, (1, 12))) # 1 row, 12 columns
###Output
_____no_output_____
###Markdown
The index of a two-dimensional array is specified with two values, first the row index, then the column index.
###Code
x = np.zeros((3, 8))
x[0,0] = 100
x[1,4:] = 200 # Row with index 1, columns starting with 4 to the end
x[2,-1:4:-1] = 400 # Row with index 2, columns counting back from the end and stop before reaching index 4
print(x)
###Output
_____no_output_____
###Markdown
Arrays are not matricesNow that we talk about the rows and columns of an array, the math-oriented reader may think that arrays are matrices, or that one-dimensional arrays are vectors. It is crucial to understand that *arrays are not vectors or matrices*. The multiplication and division of two arrays is term by term
###Code
a = np.arange(4, 20, 4)
b = np.array([2, 2, 4, 4])
print('array a:', a)
print('array b:', b)
print('a * b :', a * b) # term by term multiplication
print('a / b :', a / b) # term by term division
###Output
_____no_output_____
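###Markdown
If you do want an actual matrix product rather than term-by-term multiplication, `numpy` provides the `@` operator (and the equivalent `np.dot` function); a short illustration:
###Code
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print('term by term:')
print(a * b)
print('matrix product:')
print(a @ b)
###Output
_____no_output_____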
###Markdown
Exercise 2, Two-dimensional array indicesFor the array `x` shown below, write code to print: * the first row of `x`* the first column of `x`* the third row of `x`* the last two columns of `x`* the four values in the upper right hand corner of `x`* the four values at the center of `x``x = np.array([[4, 2, 3, 2], [2, 4, 3, 1], [2, 4, 1, 3], [4, 1, 2, 3]])` Answer for Exercise 2 Visualizing two-dimensional arraysTwo-dimensional arrays can be visualized with the `plt.matshow` function. In the example below, the array is very small (only 4 by 4), but it illustrates the general principle. A colorbar is added as a legend showing that the value 2 corresponds to dark blue and the value 8 corresponds to dark red. The ticks in the colorbar are specified to be 2, 4, 6, and 8. Note that the first row of the matrix (with index 0) is plotted at the top, which corresponds to the location of the first row in the matrix.
###Code
x = np.array([[8, 4, 6, 2],
[4, 8, 6, 2],
[4, 8, 2, 6],
[8, 2, 4, 6]])
plt.matshow(x)
plt.colorbar(ticks=[2, 4, 6, 8])
print(x)
###Output
_____no_output_____
###Markdown
The colors that are used are the default color map (it is called `jet`), which maps the highest value to red, the lowest value to blue and the numbers in between varying between green and yellow. If you want other colors, you can choose one of the other color maps. To find out all the available color maps, go [here](http://matplotlib.org/examples/color/colormaps_reference.html). To change the color map, you need to import the `cm` part of the matplotlib package, which contains all the color maps. After you have imported the color map package (which we call `cm` below), you can specify any of the available color maps with the `cmap` keyword. Try a few.
###Code
import matplotlib.cm as cm
plt.matshow(x, cmap=cm.rainbow)
plt.colorbar(ticks=np.arange(2, 9, 2));
###Output
_____no_output_____
###Markdown
Exercise 3, Create and visualize an arrayCreate an array of size 10 by 10. The upper left-hand quadrant of the array should get the value 4, the upper right-hand quadrant the value 3, the lower right-hand quadrant the value 2 and the lower left-hand quadrant the value 1. First create an array of 10 by 10 using the `zeros` command, then fill each quadrant by specifying the correct index ranges. Note that the first index is the row number. The second index runs from left to right. Visualize the array using `matshow`. It should give a red, yellow, light blue and dark blue box (clock-wise starting from upper left) when you use the default `jet` colormap. Answer for Exercise 3 Exercise 4, Create and visualize a slightly fancier arrayConsider the image shown below, which roughly shows the letters TU. You are asked to create an array that represents the same TU. First create a zeros array of 11 rows and 17 columns. Give the background value 0, the letter T value -1, and the letter U value +1. Answer to Exercise 4 Using conditions on arraysIf you have a variable, you can check whether its value is smaller or larger than a certain other value. This is called a *conditional* statement.For example:
###Code
a = 4
print('a < 2:', a < 2)
print('a > 2:', a > 2)
###Output
_____no_output_____
###Markdown
The statement `a < 2` returns a variable of type boolean, which means it can either be `True` or `False`. Besides smaller than or larger than, there are several other conditions you can use:
###Code
a = 4
print(a < 4)
print(a <= 4) # a is smaller than or equal to 4
print(a == 4) # a is equal to 4. Note that there are 2 equal signs
print(a >= 4)
print(a > 4)
print(a != 4) # a is not equal to 4
###Output
_____no_output_____
###Markdown
It is important to understand the difference between one equal sign like `a=4` and two equal signs like `a==4`. One equal sign means assignment. Whatever is on the right side of the equal sign is assigned to what is on the left side of the equal sign. Two equal signs is a comparison and results in either `True` (when the left and right sides are equal) or `False`.
###Code
print(4 == 4)
a = 4 == 5
print(a)
print(type(a))
###Output
_____no_output_____
###Markdown
You can also perform comparison statements on arrays, and it will return an array of booleans (`True` and `False` values) for each value in the array. For example, let's create an array and find out what values of the array are below 3:
###Code
data = np.arange(5)
print(data)
print(data < 3)
###Output
_____no_output_____
###Markdown
The statement `data<3` returns an array of type `boolean` that has the same length as the array `data` and for each item in the array it is either `True` or `False`. The cool thing is that this array of `True` and `False` values can be used to specify the indices of an array:
###Code
a = np.arange(5)
b = np.array([ True, True, True, False, False ])
print(a[b])
###Output
_____no_output_____
###Markdown
When the indices of an array are specified with a boolean array, only the values of the array where the boolean array is `True` are selected. This is a very powerful feature. For example, all values of an array that are less than, for example, 3 may be obtained by specifying a condition as the indices.
###Code
a = np.arange(5)
print('the total array:', a)
print('values less than 3:', a[a < 3])
###Output
_____no_output_____
###Markdown
If we want to replace all values that are less than 3 by, for example, the value 10, use the following short syntax:
###Code
a = np.arange(5)
print(a)
a[a < 3] = 10
print(a)
###Output
_____no_output_____
###Markdown
Exercise 5, Replace high and low in an arrayCreate an array for variable $x$ consisting of 100 values from 0 to 20. Compute $y=\sin(x)$ and plot $y$ vs. $x$ with a blue line. Next, replace all values of $y$ that are larger than 0.5 by 0.5, and all values that are smaller than $-$0.75 by $-0.75$ and plot $x$ vs. $y$ using a red line on the same graph. Answer to Exercise 5 Exercise 6, Change marker color based on data valueCreate an array for variable $x$ consisting of 100 points from 0 to 20 and compute $y=\sin(x)$. Plot a blue dot for every $y$ that is larger than zero, and a red dot otherwise. Answer to Exercise 6 Select indices based on multiple conditionsMultiple conditions can be given as well. When two conditions both have to be true, use the `&` symbol. When at least one of the conditions needs to be true, use the '|' symbol (that is the vertical bar). For example, let's plot $y=\sin(x)$ and plot blue markers when $y>0.7$ or $y<-0.5$ (using one plot statement), and a red marker when $-0.5\le y\le 0.7$. When there are multiple conditions, they need to be between parentheses.
###Code
x = np.linspace(0, 6 * np.pi, 50)
y = np.sin(x)
plt.plot(x[(y > 0.7) | (y < -0.5)], y[(y > 0.7) | (y < -0.5)], 'bo')
plt.plot(x[(y > -0.5) & (y < 0.7)], y[(y > -0.5) & (y < 0.7)], 'ro');
###Output
_____no_output_____
###Markdown
Exercise 7, Multiple conditions The file `xypoints.dat` contains 1000 randomly chosen $x,y$ locations of points; both $x$ and $y$ vary between -10 and 10. Load the data using `loadtxt`, and store the first row of the array in an array called `x` and the second row in an array called `y`. First, plot a red dot for all points. On the same graph, plot a blue dot for all $x,y$ points where $x<-2$ and $-5\le y \le 0$. Finally, plot a green dot for any point that lies in the circle with center $(x_c,y_c)=(5,0)$ and with radius $R=5$. Hint: it may be useful to compute a new array for the radial distance $r$ between any point and the center of the circle using the formula $r=\sqrt{(x-x_c)^2+(y-y_c)^2}$. Use the `plt.axis('equal')` command to make sure the scales along the two axes are equal and the circular area looks like a circle. Answer to Exercise 7 Exercise 8, Fix the error In the code below, it is meant to give the last 5 values of the array `x` the values [50,52,54,56,58] and print the result to the screen, but there are some errors in the code. Remove the comment markers and run the code to see the error message. Then fix the code and run it again.
###Code
#x = np.ones(10)
#x[5:] = np.arange(50, 62, 1)
#print(x)
###Output
_____no_output_____
###Markdown
Answer to Exercise 8 Answers to the exercises Answer to Exercise 1
###Code
x = np.zeros(20)
x[:5] = 10
x[5:15] = np.arange(12, 31, 2)
x[15:] = 30
plt.plot(x)
plt.plot([4, 4], [8, 32],'k--')
plt.plot([14, 14], [8, 32],'k--')
plt.ylim(8, 32);
###Output
_____no_output_____
###Markdown
Back to Exercise 1Answer to Exercise 2
###Code
x = np.array([[4, 2, 3, 2],
[2, 4, 3, 1],
[2, 4, 1, 3],
[4, 1, 2, 3]])
print('the first row of x')
print(x[0])
print('the first column of x')
print(x[:, 0])
print('the third row of x')
print(x[2])
print('the last two columns of x')
print(x[:, -2:])
print('the four values in the upper right hand corner')
print(x[:2, 2:])
print('the four values at the center of x')
print(x[1:3, 1:3])
###Output
_____no_output_____
###Markdown
Back to Exercise 2Answer to Exercise 3
###Code
x = np.zeros((10, 10))
x[:5, :5] = 4
x[:5, 5:] = 3
x[5:, 5:] = 2
x[5:, :5] = 1
print(x)
plt.matshow(x)
plt.colorbar(ticks=[1, 2, 3, 4]);
###Output
_____no_output_____
###Markdown
Back to Exercise 3Answer to Exercise 4
###Code
x = np.zeros((11, 17))
x[2:4, 1:7] = -1
x[2:9, 3:5] = -1
x[2:9, 8:10] = 1
x[2:9, 13:15] = 1
x[7:9, 10:13] = 1
print(x)
plt.matshow(x)
plt.yticks(range(11, -1, -1))
plt.xticks(range(0, 17));
plt.ylim(10.5, -0.5)
plt.xlim(-0.5, 16.5);
###Output
_____no_output_____
###Markdown
Back to Exercise 4Answer to Exercise 5
###Code
x = np.linspace(0, 20, 100)
y = np.sin(x)
plt.plot(x, y, 'b')
y[y > 0.5] = 0.5
y[y < -0.75] = -0.75
plt.plot(x, y, 'r');
###Output
_____no_output_____
###Markdown
Back to Exercise 5Answer to Exercise 6
###Code
x = np.linspace(0, 6 * np.pi, 50)
y = np.sin(x)
plt.plot(x[y > 0], y[y > 0], 'bo')
plt.plot(x[y <= 0], y[y <= 0], 'ro');
###Output
_____no_output_____
###Markdown
Back to Exercise 6Answer to Exercise 7
###Code
x, y = np.loadtxt('xypoints.dat')
plt.plot(x, y, 'ro')
plt.plot(x[(x < -2) & (y >= -5) & (y < 0)], y[(x < -2) & (y >= -5) & (y < 0)], 'bo')
r = np.sqrt((x - 5) ** 2 + y ** 2)
plt.plot(x[r < 5], y[r < 5], 'go')
plt.axis('scaled');
###Output
_____no_output_____
###Markdown
Back to Exercise 7Answer to Exercise 8
###Code
x = np.ones(10)
x[5:] = np.arange(50, 60, 2)
print(x)
###Output
_____no_output_____ |
examples/interactive-slider.ipynb | ###Markdown
 Ibis+Vega+Altair using an interactive sliderWe will try to reproduce [this](https://altair-viz.github.io/gallery/us_population_over_time.html) example from the Altair gallery, but with the data fetched lazily as the user interacts with the slider. To keep ourselves honest, we'll be putting the data in a SQLite database. First, let's show the original example, without any modifications:
###Code
import altair as alt
from vega_datasets import data
source = data.population.url
pink_blue = alt.Scale(domain=('Male', 'Female'),
range=["steelblue", "salmon"])
slider = alt.binding_range(min=1900, max=2000, step=10)
select_year = alt.selection_single(name="year", fields=['year'],
bind=slider, init={'year': 2000})
alt.Chart(source).mark_bar().encode(
x=alt.X('sex:N', title=None),
y=alt.Y('people:Q', scale=alt.Scale(domain=(0, 12000000))),
color=alt.Color('sex:N', scale=pink_blue),
column='age:O'
).properties(
width=20
).add_selection(
select_year
).transform_calculate(
"sex", alt.expr.if_(alt.datum.sex == 1, "Male", "Female")
).transform_filter(
select_year
).configure_facet(
spacing=8
)
###Output
_____no_output_____
###Markdown
Loading the data into a databaseWe begin our lazy-fetching example by downloading the data and putting it into a SQLite database:
###Code
import sqlalchemy
dbfile = 'population.db'
engine = sqlalchemy.create_engine(f'sqlite:///{dbfile}')
import pandas as pd
df = pd.read_json(data.population.url)
df.to_sql('pop', engine, if_exists='replace')
###Output
_____no_output_____
###Markdown
Now, let's create an ibis connection to this database and verify that the data is there:
###Code
import ibis
connection = ibis.sqlite.connect(dbfile)
connection.list_tables()
###Output
_____no_output_____
###Markdown
 We can inspect the data using this ibis connection:
###Code
pop = connection.table('pop')
pop.head().execute()
###Output
_____no_output_____
###Markdown
 Making an interactive plotWe are now ready to make an interactive plot using this database connection. We can reuse the same objects for `pink_blue`, `slider`, and `select_year`, as they are independent of the data source. The `Chart` specification is completely identical, except that instead of the pandas dataframe, we supply it with the Ibis sqlite connection:
###Code
# import ibis_vega_transform
# alt.Chart(pop).mark_bar().encode(
# x=alt.X('sex:N', title=None),
# y=alt.Y('people:Q', scale=alt.Scale(domain=(0, 12000000))),
# color=alt.Color('sex:N', scale=pink_blue),
# column='age:O'
# ).properties(
# width=20
# ).add_selection(
# select_year
# ).transform_calculate(
# "sex", alt.expr.if_(alt.datum.sex == 1, "Male", "Female")
# ).transform_filter(
# select_year
# ).configure_facet(
# spacing=8
# )
###Output
_____no_output_____
###Markdown
 Ibis+Vega+Altair using an interactive sliderWe will try to reproduce [this](https://altair-viz.github.io/gallery/us_population_over_time.html) example from the Altair gallery, but with the data fetched lazily as the user interacts with the slider. To keep ourselves honest, we'll be putting the data in a SQLite database. First, let's show the original example, without any modifications:
###Code
import altair as alt
from vega_datasets import data
source = data.population.url
pink_blue = alt.Scale(domain=('Male', 'Female'),
range=["steelblue", "salmon"])
slider = alt.binding_range(min=1900, max=2000, step=10)
select_year = alt.selection_single(name="year", fields=['year'],
bind=slider, init={'year': 2000})
alt.Chart(source).mark_bar().encode(
x=alt.X('sex:N', title=None),
y=alt.Y('people:Q', scale=alt.Scale(domain=(0, 12000000))),
color=alt.Color('sex:N', scale=pink_blue),
column='age:O'
).properties(
width=20
).add_selection(
select_year
).transform_calculate(
"sex", alt.expr.if_(alt.datum.sex == 1, "Male", "Female")
).transform_filter(
select_year
).configure_facet(
spacing=8
)
###Output
_____no_output_____
###Markdown
Loading the data into a databaseWe begin our lazy-fetching example by downloading the data and putting it into a SQLite database:
###Code
import sqlalchemy
dbfile = 'population.db'
engine = sqlalchemy.create_engine(f'sqlite:///{dbfile}')
import pandas as pd
df = pd.read_json(data.population.url)
df.to_sql('pop', engine, if_exists='replace')
###Output
_____no_output_____
###Markdown
Now, let's create an ibis connection to this database and verify that the data is there:
###Code
import ibis
import warnings
try:
# ibis version >= 1.4
from ibis.backends import sqlite as ibis_sqlite
except ImportError as msg:
# ibis version < 1.4
warnings.warn(str(msg))
from ibis import sqlite as ibis_sqlite
connection = ibis_sqlite.connect(dbfile)
connection.list_tables()
###Output
_____no_output_____
###Markdown
 We can inspect the data using this ibis connection:
###Code
pop = connection.table('pop')
pop.head().execute()
###Output
_____no_output_____
###Markdown
 Making an interactive plotWe are now ready to make an interactive plot using this database connection. We can reuse the same objects for `pink_blue`, `slider`, and `select_year`, as they are independent of the data source. The `Chart` specification is completely identical, except that instead of the pandas dataframe, we supply it with the Ibis sqlite connection:
###Code
# import ibis_vega_transform
# alt.Chart(pop).mark_bar().encode(
# x=alt.X('sex:N', title=None),
# y=alt.Y('people:Q', scale=alt.Scale(domain=(0, 12000000))),
# color=alt.Color('sex:N', scale=pink_blue),
# column='age:O'
# ).properties(
# width=20
# ).add_selection(
# select_year
# ).transform_calculate(
# "sex", alt.expr.if_(alt.datum.sex == 1, "Male", "Female")
# ).transform_filter(
# select_year
# ).configure_facet(
# spacing=8
# )
###Output
_____no_output_____ |
notebooks/annie_copy_01-pudl-database.ipynb | ###Markdown
Configure PUDLThe `.pudl.yml` configuration file tells PUDL where to look for data. Uncomment the next cell and run it if you're on our 2i2c JupyterHub.
###Code
#!cp ~/shared/shared-pudl.yml ~/.pudl.yml
# import the necessary packages
%load_ext autoreload
%autoreload 2
import pandas as pd
import sqlalchemy as sa
import pudl
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Connecting to the PUDL DatabasesThis notebook will walk you through several ways of pulling data out of the Public Utility Data Liberation (PUDL)project databases and into [Pandas](https://pandas.pydata.org/) Dataframes for analysis and visualization.This notebook assumes you have a development version of the [PUDL Python package](https://github.com/catalyst-cooperative/pudl) installed, and a complete PUDL database available locally, in the location expected by the Python package.If you have any questions or feedback you can:* [Create an issue](https://github.com/catalyst-cooperative/pudl-tutorials/issues) in the GitHub repo for our tutorials, or* Contact the team at: [email protected] Direct SQLite AccessMuch of the PUDL data is published as [SQLite database files](https://www.sqlite.org/index.html). These are relational databases generally intended for use by a single user at a time. If you're already familiar with databases and SQL in Python, you can access them just like you would any other. [Support for SQLite](https://docs.python.org/3/library/sqlite3.html) is built into the Python standard libraries, and the popular [SQLAlchemy](https://www.sqlalchemy.org) Python package also has extensive support for SQLite. Here's one in-depth resource on using Python, SQLite and SQLAlchemy together: [Data Management with Python, SQLite, and SQLAlchemy](https://realpython.com/python-sqlite-sqlalchemy/)For the rest of these tutorials, we're going to assume you want to get the data into Pandas as quickly as possible for interactive work. Database NormalizationThe data in the PUDL database has been extensively deduplicated, [normalized](https://en.wikipedia.org/wiki/Database_normalization) and generally organized according to best practices of [tidy data](https://tidyr.tidyverse.org/articles/tidy-data.html) in order to ensure that it is internally self-consistent and free of errors. As a result, you'll often need to combine information from more than one table to make it readable or to get all the information you need for your analysis in one place. We've built some tools to do this automatically, which we'll get to below. Locate the PUDL DB fileEach SQLite database is stored within a single file. To access the data, you need to know where that file is. With the location of the file, you can create an [SQLAlchemy connection engine](https://docs.sqlalchemy.org/en/13/core/engines.html), which Pandas will use to read data out of the database. PUDL stores its data in a directory structure generally organized by file format. We store the paths to those directories and the SQLAlchemy database URLs in a Python dictionary that's usually called `pudl_settings`. Note that a URL is just a path to a file that could be either local (on your computer) or remote (on someone else's computer). The following command will construct that `pudl_settings` dictionary based on some directory paths stored in the `.pudl.yml` file in your home directory. Printing out the dictionary contents you can see where PUDL will look for various resources.
###Code
pudl_settings = pudl.workspace.setup.get_defaults()
pudl_settings
###Output
_____no_output_____
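###Markdown
As an aside, because SQLite support is built into the Python standard library, you can also open the same file directly with `sqlite3`. The sketch below is illustrative only and assumes the `pudl_db` entry is a URL of the form `sqlite:///<path-to-file>`.
###Code
import sqlite3

# Strip the assumed sqlite:/// prefix to get a plain filesystem path.
db_path = pudl_settings["pudl_db"].replace("sqlite:///", "")
with sqlite3.connect(db_path) as conn:
    # List a few table names straight from SQLite's internal catalog.
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' LIMIT 5;"
    ).fetchall()
print(tables)
###Output
_____no_output_____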
###Markdown
The SQLAlchemy Connection EngineThe `sqlalchemy.create_engine()` function takes a database URL and creates an Engine that knows how to interact with the database. It can do things like list out the names of all the tables in the database.
###Code
pudl_engine = sa.create_engine(pudl_settings["pudl_db"])
# see all the tables inside of the database
pudl_engine.table_names()
###Output
_____no_output_____
###Markdown
Reading data with `pandas.read_sql()`The [pandas.read_sql()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_sql.html) method is the simplest way to pull data from an SQL database into a dataframe. You can give it an SQL statement to execute, or just the name of a table to read in its entirety. Read a whole tableReading an entire table all at once is easy. It isn't very memory efficient but there's less than 1 GB of data in the PUDL database, so in most cases this is a fine option. Once you've had a chance to poke around at the whole table a bit, you can select the data that's actually of interest out of it for your analysis or visualization.You can also explore the contents of the database interactively online at https://data.catalyst.coop if you want to familiarize yourself with its contents in a more graphical way first.
###Code
utilities_eia860 = pd.read_sql('utilities_eia860', pudl_engine)
utilities_eia860.info(memory_usage="deep")
utilities_eia860_tampa = utilities_eia860.loc[utilities_eia860.zip_code.isin(["33572", "33503","33596","33510","33511","33618","33624","33558","33625","33527",
"33614", "33547","33534","33556","33559","33548","33549","33619","33563","33565",
"33566", "33567","33569","33578","33579","33570","33584","33573","33602","33603",
"33604", "33605","33606","33607","33609","33610","33611","33612","33620","33621",
"33616", "33629","33647","33637","33617","33592","33615","33634","33635","33613",
"33594", "33626","33598"])]
utilities_eia860_tampa.shape
display(utilities_eia860_tampa)
# utilities_eia860_tampa
utility_ids_tampa = list(utilities_eia860_tampa.utility_id_eia.unique())
print(utility_ids_tampa)
generation_df = pd.read_sql("generators_eia860", pudl_engine)
generation_df.info(memory_usage="deep")
generator_table_sample = generation_df.sort_values(by='generator_id').head(1000)
generator_table_sample.to_csv('generator_table_sample.csv')
generation_eia923_df = pd.read_sql("generation_eia923", pudl_engine)
generation_eia923_df.info(memory_usage="deep")
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 479352 entries, 0 to 479351
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 479352 non-null int64
1 plant_id_eia 479352 non-null int64
2 generator_id 479352 non-null object
3 report_date 479352 non-null datetime64[ns]
4 net_generation_mwh 454504 non-null float64
dtypes: datetime64[ns](1), float64(1), int64(2), object(1)
memory usage: 41.8 MB
###Markdown
EDA
###Code
generator_table_sun = generation_df.loc[generation_df.energy_source_code_1 == 'SUN']
generator_table_sun.shape
generator_table_sun.operational_status.value_counts()
generator_table_sun.head()
generation_df_existing= generation_df.loc[generation_df.operational_status =='existing']
generation_df_existing_sub = generation_df_existing[['utility_id_eia','id','plant_id_eia','generator_id','report_date','energy_source_code_1','operational_status','capacity_mw']]
generation_df_existing_sub.utility_id_eia.value_counts()
generation_df_existing_sub['plant_generator_id']= generation_df_existing_sub.plant_id_eia.astype('str') + '_' + generation_df_existing_sub.generator_id
generation_df_existing_sub['plant_generator_id'].value_counts()
plant_ids = list(generation_df_existing_sub.plant_id_eia.unique())
print(plant_ids)
generation_eia923_df_plants_sub = generation_eia923_df.loc[generation_eia923_df.plant_id_eia.isin(plant_ids)]
generation_eia923_df_plants_sub.shape
generation_eia923_df_plants_sub.head(50)
generation_eia923_df_plants_sub['plant_generator_id']= generation_eia923_df_plants_sub.plant_id_eia.astype('str') + '_' + generation_eia923_df_plants_sub.generator_id
print(generation_eia923_df_plants_sub['plant_generator_id'].value_counts())
print(generation_df_existing_sub.columns)
print(generation_eia923_df_plants_sub.columns)
generation_df_existing_sub_2 = generation_df_existing_sub.sort_values(by = ['plant_generator_id','report_date']).drop_duplicates('plant_generator_id', keep = 'last')
generation_df_existing_sub_2.shape
generation_eia923_info = generation_eia923_df_plants_sub.merge(generation_df_existing_sub_2, how = 'left', left_on = 'plant_generator_id', right_on = 'plant_generator_id')
generation_eia923_df_plants_sub.shape
print(generation_eia923_info.shape)
generation_eia923_info.head()
generation_eia923_2 = generation_eia923_info.loc[~generation_eia923_info.id_y.isnull()]
print(generation_eia923_2.shape)
generation_eia923_3 = generation_eia923_2.loc[generation_eia923_2.utility_id_eia.isin(utility_ids_tampa)]
generation_eia923_3.shape
display(generation_eia923_3)
generation_eia923_3.to_csv('tampa_generation_eia923.csv')
generation_eia923_2_group = generation_eia923_2.groupby('report_date_x').sum()
generation_eia923_2_group_reset = generation_eia923_2_group.reset_index()
generation_eia923_2_group_reset.columns
df.columns
# NOTE: 'generator_table_sun_existing_sub' was never defined earlier in this notebook;
# a plausible definition (existing solar generators only) is assumed here so the cells below run.
generator_table_sun_existing_sub = generator_table_sun.loc[generator_table_sun.operational_status == 'existing']
generator_table_sun_existing_sub.report_date.value_counts()
generator_table_sun_existing_sub_group = generator_table_sun_existing_sub.groupby('report_date').sum()
generator_table_sun_existing_sub_group.shape
# generator_table_sun_existing_sub_group
df = generator_table_sun_existing_sub_group.reset_index()
# generator_table_sun_plant_57538 = generator_table_sun.loc[generator_table_sun.plant_id_eia==57538]
# generator_table_sun_plant_57538.shape
# generator_table_sun_plant_57538.head()
# Convention for import of the pyplot interface
import matplotlib.pyplot as plt
# Set-up to have matplotlib use its support for notebook inline plots
%matplotlib inline
plt.rc('font', size=12)
fig, ax = plt.subplots(figsize=(10, 6))
# Specify how our lines should look
ax.plot(df.report_date, df.capacity_mw, color='tab:orange', label='Capacity')
# Same as above
ax.set_xlabel('Time')
ax.set_ylabel('Capacity (mw)')
ax.set_title('U.S. Solar Generator Capacity Data Over 15 Years')
ax.grid(True)
ax.legend(loc='upper left');
# Convention for import of the pyplot interface
import matplotlib.pyplot as plt
# Set-up to have matplotlib use its support for notebook inline plots
%matplotlib inline
plt.rc('font', size=12)
fig, ax = plt.subplots(figsize=(10, 6))
# Specify how our lines should look
ax.plot(df.report_date_x, df.net_generation_mwh, color='tab:orange', label='Generation')
# Same as above
ax.set_xlabel('Time')
ax.set_ylabel('Generation (mw)')
ax.set_title('U.S. Solar Power Generation (mw)')
ax.grid(True)
ax.legend(loc='upper left');
###Output
_____no_output_____
###Markdown
Select specific data using SQLIf you're familiar with SQL, and you already know what subset of the data you want to pull out of the database, you can give Pandas an SQL statement directly, along with the `pudl_engine`, and it will put the results of the SQL statement into a dataframe for you.For example, the following statement sums the nameplate capacities of generators by power plant, for every generator that reported a capacity in the EIA 860 in 2019, excluding those in Alaska and Hawaii. It sorts the results by capacity with the biggest plants first, and only returns the biggest 1000 plants.[Compare with the results from our online database](https://data.catalyst.coop/pudl?sql=select%0D%0A++plants.plant_id_eia%2C%0D%0A++plants.plant_name_eia%2C%0D%0A++SUM%28gens.capacity_mw%29+as+plant_capacity_mw%2C%0D%0A++latitude%2C%0D%0A++longitude%0D%0Afrom%0D%0A++generators_eia860+as+gens%0D%0Ajoin%0D%0A++plants_entity_eia+as+plants%0D%0Awhere%0D%0A++plants.plant_id_eia+%3D+gens.plant_id_eia%0D%0A++and+gens.report_date+%3D+%222019-01-01%22%0D%0A++and+plants.state+not+in+%28%22HI%22%2C+%22AK%22%29%0D%0Agroup+by%0D%0A++plants.plant_id_eia%0D%0Aorder+by%0D%0A++plant_capacity_mw+desc).This method is much faster and less memory intensive than reading whole tables, but it requires familiarity with SQL and the structure of the database. If you have a solid state disk and plenty of RAM, reading whole tables into memory is generally plenty fast, and shouldn't run into memory constraints.
###Code
example_sql = """
SELECT
plants.plant_id_eia,
plants.plant_name_eia,
SUM(gens.capacity_mw) AS plant_capacity_mw,
latitude,
longitude
FROM
generators_eia860 AS gens
JOIN
plants_entity_eia AS plants
WHERE
plants.plant_id_eia = gens.plant_id_eia
AND gens.report_date = "2019-01-01"
AND plants.state not in ("HI", "AK")
GROUP BY
plants.plant_id_eia
ORDER BY
plant_capacity_mw DESC
LIMIT 1000;
"""
big_plants_df = pd.read_sql(example_sql, pudl_engine)
big_plants_df
###Output
_____no_output_____
###Markdown
 The SQLAlchemy expression languageSQLAlchemy provides a Python API for building complex SQL queries, and `pandas.read_sql()` can accept these query objects in place of the SQL statement written out by hand as above. [See the SQLAlchemy documentation for more details](https://docs.sqlalchemy.org/en/13/core/tutorial.html). Read tables using the PUDL output layerEarly on in the development of the PUDL database, we found that we were frequently joining the same tables together, and calculating the same derived values in Pandas during our interactive analyses. So we wrote some code to do that work automatically and uniformly. We call this the PUDL Output Layer. It brings in fields like plant and utility names from their home tables, so you have more than just the numeric ID to go by, caches dataframes internally for re-use, and can do some time series aggregation.These outputs are "denormalized" -- meaning that data will be duplicated in different output tables, and they will contain derived values that don't represent unique information. This structure isn't good inside a database, but it's great for interactive use.The 2nd notebook in this tutorial is all about the `PudlTabl` objects, which we usually name `pudl_out`, but here is a quick preview.If you want to access denormalized tables, we've built an access layer that provides methods for most denormalized tables in PUDL, as well as for analyses built on top of PUDL tables. There is a whole other notebook that covers these output tables in more detail, so take a look at that if you want more info. Create a PudlTabl output objectThe tabular output object needs to know what PUDL database it's connecting to (via the `pudl_engine` argument), and optionally, what time frequency it should aggregate tables on.
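Below is a small, hedged sketch of the expression language applied to the capacity-by-plant query from the raw-SQL example; the reflection keywords and the list form of `select()` may need adjusting depending on your SQLAlchemy version, so treat it as illustrative only.
###Code
# Reflect the generators table from the database (keyword names vary across SQLAlchemy versions).
meta = sa.MetaData()
gens = sa.Table("generators_eia860", meta, autoload=True, autoload_with=pudl_engine)
# Build the query as Python objects instead of a raw SQL string.
capacity_by_plant = (
    sa.select([gens.c.plant_id_eia,
               sa.func.sum(gens.c.capacity_mw).label("plant_capacity_mw")])
    .where(gens.c.report_date == "2019-01-01")
    .group_by(gens.c.plant_id_eia)
)
# pandas.read_sql() accepts the query object just like it accepts a raw SQL string.
pd.read_sql(capacity_by_plant, pudl_engine)
###Output
_____no_output_____
###Markdown
With that aside done, let's create the PudlTabl output object described above.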
###Code
pudl_out = pudl.output.pudltabl.PudlTabl(pudl_engine)
###Output
_____no_output_____
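###Markdown
As a hedged aside (not part of the original tutorial): the constructor also accepts a time-frequency argument so that output tables are aggregated in time; the argument name `freq` and values like `"AS"` (annual) or `"MS"` (monthly) are taken from recent PUDL releases and may differ in your installed version, so check the `PudlTabl` docstring.
###Code
# Hypothetical example -- verify the freq argument against your PUDL version's docstring.
pudl_out_annual = pudl.output.pudltabl.PudlTabl(pudl_engine, freq="AS")
###Output
_____no_output_____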
###Markdown
Construct denormalized dataframesThe `PudlTabl` object, called `pudl_out` here, has a bunch of methods corresponding to individual tables within the database. They typically use abbreviated names. Hitting `Tab` will show you a preview the available methods.The `gen_eia923()` method corresponds to the `generation_eia923` table in the database, which details the monthly net generation from each generator reporting on the EIA Form 923.Note: if you re-run the cell, it will complete almost instantly, because the dataframe has been cached inside the `pudl_out` object for later use.
###Code
%%time
gen_eia923 = pudl_out.gen_eia923()
gen_eia923.info()
gen_eia923.sample(10)
###Output
_____no_output_____
###Markdown
Compare with the normalized DB tableThe denormalized version of the table above includes fields like `utility_name_eia923` and `plant_name_eia923` and `plant_id_pudl` which are all useful, but aren't fundamentally part of this table -- they can all be looked up in other tables based on the value of `plant_id_eia` found in the original `generation_eia923` table, so storing them in this table would mean duplicating data. You can see what the original table looks like below.Note also that since we're going back to the database directly rather than accessing the cached dataframe within the `pudl_out` object, this query will take a few seconds to run, just like the first time we read the table using `pudl_out` above.
###Code
%%time
gen_eia923_normalized = pd.read_sql("generation_eia923", pudl_engine)
gen_eia923_normalized.info()
gen_eia923_normalized.sample(10)
###Output
_____no_output_____
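###Markdown
To see how the denormalized view is assembled, you can reproduce a piece of it by hand by merging the plant entity table back in on `plant_id_eia`. This is a hedged sketch; the column names used are the ones that appear in the SQL example earlier in this notebook.
###Code
# Pull the plant entity table and attach plant names to the normalized generation table.
plants_entity_eia = pd.read_sql("plants_entity_eia", pudl_engine)
gen_with_names = gen_eia923_normalized.merge(
    plants_entity_eia[["plant_id_eia", "plant_name_eia"]],
    on="plant_id_eia",
    how="left",
)
gen_with_names.sample(10)
###Output
_____no_output_____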
###Markdown
FERC Form 1: Here Be DragonsYou might have noticed up above that there were actually two SQLite database URLs in the `pudl_settings` object... One for PUDL, and another for FERC Form 1.
###Code
pudl_settings
###Output
_____no_output_____
###Markdown
FERC Form 1: Direct vs. PUDLThe PUDL database contains a tiny fraction of the data available in the original FERC Form 1 -- we have only taken the time to clean a handful of the FERC tables. The original FERC Form 1 data is often very messy and poorly organized. However, if you need to access one of the original 113 tables that we haven't integrated yet, they're all available, going back to 1994. The original tables are only accessible via direct queries (either using SQL or pulling whole tables) from the original FERC Form 1 database, so you'll have to use the `pandas.read_sql()` methods outlined above.If there are particular tables within the FERC Form 1 that you think are important to get cleaned up, let us know so we can prioritize them going forward!
###Code
ferc1_engine = sa.create_engine(pudl_settings["ferc1_db"])
# see all the tables inside of the database
ferc1_engine.table_names()
###Output
_____no_output_____ |
assignment_1/assignment_1.ipynb | ###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name:- Email:- Link to your forked github repository:
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
work_dir = os.getcwd()
git_dir = '/'.join(work_dir.split('/')[:-1])
print('Your github directory is :%s'%git_dir)
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
 Let's load our flexible code-base which you will build on for your research projects in future assignments.Above we have imported modules (libraries, for those familiar with programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`).Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /net/storage001.ib.cluster/om2/user/smadan/Harvard_BAI/res/models
Loaders are being loaded from: /net/storage001.ib.cluster/om2/user/smadan/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
import wandb
wandb.login()
###Output
Failed to query for notebook name, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable
[34m[1mwandb[0m: Currently logged in as: [33mspandanmadan[0m (use `wandb login --relogin` to force relogin)
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
 By changing the settings above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with values between 0 and 255. One very common pre-processing step for images is to normalize them to have 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic being those imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /net/storage001.ib.cluster/om2/user/smadan/Harvard_BAI/datasets/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
 Dataset vs Dataloader Most deep learning datasets are huge. They can be as large as a million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a pytorch dataset, and makes it easy to loop over it in batches. So, we leverage this to create a data loader from our above loaded MNIST dataset. The dataset itself only contains lists of where to find the inputs and outputs, i.e. paths. The data loader defines the logic for loading this information into the GPU/CPU so that it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
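###Markdown
As a quick sanity check (not part of the original assignment), you can pull a single mini-batch out of the training loader and look at its shapes: with `batch_size = 10` you should see 10 images of size 1x28x28 and 10 integer labels.
###Code
# Grab one mini-batch from the training loader and inspect its shapes.
sample_inputs, sample_labels = next(iter(data_loaders['train']))
print(sample_inputs.shape)   # expected: torch.Size([10, 1, 28, 28])
print(sample_labels.shape)   # expected: torch.Size([10])
###Output
_____no_output_____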
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
### This tells python to track gradients. While testing weights aren't updated hence they are not stored.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts pytorch in evaluation mode (affects layers like dropout/batchnorm); we won't be updating weights while testing.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
_____no_output_____
###Markdown
 Homework 1In this homework you are asked to get familiar with the basic functionality of Python and the structure of a Jupyter Notebook, as well as with simple functions from the NumPy and matplotlib packages.Follow the instructions in the notebook, solve the tasks, and fill in your answers in the following form: https://forms.gle/gxG8D5BGeH1nxcSU8
###Code
import numpy as np
import matplotlib.pyplot as plt
from assignment_1.tasks import find_fold_number, rle, test_rle_str
# increase the default figure size
plt.figure(figsize=(15, 10))
# display plots inline in the notebook
%matplotlib inline
# plots rendered as svg look sharper
%config InlineBackend.figure_format = 'svg'
# automatically reload functions from local modules
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
 NumPy & matplotlibFor the following tasks you need to implement the code inside the cell. Try to use the functionality of the packages, avoiding unnecessary loops and the like.You can read about `NumPy` here: [NumPy quickstart](https://docs.scipy.org/doc/numpy/user/quickstart.html)And about `matplotlib` here: [PyPlot tutorial](https://matplotlib.org/tutorials/introductory/pyplot.html)In this part, the tasks are based on the $\textit{Fisher's Iris}$ dataset, so first of all it needs to be downloaded
###Code
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data -P assignment_1/data
###Output
_____no_output_____
###Markdown
 Fisher's Iris dataset contains measurements of 150 iris specimens from three species: Iris setosa, Iris virginica and Iris versicolor. Four characteristics were measured for each specimen (in centimeters):1. Sepal length;2. Sepal width;3. Petal length;4. Petal width.
###Code
iris_full = np.genfromtxt('assignment_1/data/iris.data', delimiter=',', dtype='object')
names = ('sepallength', 'sepalwidth', 'petallength', 'petalwidth', 'species')
iris_vals = iris_full[:, :-1].astype(np.float)
iris_name = iris_full[:, -1].astype(np.str)
n_iris = iris_vals.shape[0]
n_rows = 10
template = '{:^15}' * len(names)
print(template.format(*names))
for vals, name in zip(iris_vals[:n_rows], iris_name[:n_rows]):
print(template.format(*vals, name))
###Output
_____no_output_____
###Markdown
 1. What is the maximum value of each feature?The answer is a sequence of 4 numbers. For example: `5.1 3.5 1.4 0.2` 2. How many irises of each type are present in the data?The answer is a sequence of 3 numbers in the order: `Iris-setosa, Iris-versicolor, Iris-virginica` For example: `10 10 10` 3. The mean value of the `petalwidth` feature for each iris typeThe answer is a sequence in increasing order, rounded to 2 decimal places. For example: `1.23 4.56 7.89` 4. Pairwise dot products of the featuresThe answer is the mean of the pairwise dot products of the feature vectors, rounded to 2 decimal places. The dot product of a vector with itself should not be counted. For example: `12.34` 5. Which iris type has the smallest value of the `sepalwidth` feature? 6. Plot the distribution of `petallength` values depending on the iris typeAs the answer, give the iris type with the smallest variance (the "narrowest" one) 7. Plot the relationship between `petallength` and `petalwidth` depending on the iris type, with petallength on the X axis and petalwidth on the Y axis. In the answer, give the class that separates from the others 8. Build a boxplot of the `sepallength` feature for each iris typeIn the answer, give the number of outliers in the data. Each outlier is shown as a dot; you can read more about boxplots [here](https://towardsdatascience.com/understanding-boxplots-5e2df7bcbd51) PythonFor the following tasks you need to implement the corresponding function in the file `tasks.py`.After implementing it, run the corresponding cell without changing its contents. 9. In how many steps can a single-digit number be reached by repeatedly multiplying the digits of the previous number?For example, for $88$ the answer is $3$:$$88 \rightarrow 8 \times 8 = 64 \rightarrow 6 \times 4 = 24 \rightarrow 2 \times 4 = 8$$For this task, implement the function `find_fold_number`
###Code
assert find_fold_number(88) == 3, "wrong answer for the number from the example"
###Output
_____no_output_____
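###Markdown
For reference, one possible sketch of the digit-product step count is shown below; the helper name `find_fold_number_sketch` is hypothetical, and you should still implement and test your own `find_fold_number` in `tasks.py`.
###Code
def find_fold_number_sketch(n: int) -> int:
    """Count how many digit-product steps it takes to reach a single-digit number."""
    steps = 0
    while n >= 10:
        product = 1
        for digit in str(n):
            product *= int(digit)
        n = product
        steps += 1
    return steps

assert find_fold_number_sketch(88) == 3
###Output
_____no_output_____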
###Markdown
 As the answer, give the result of running the following cell (without the quotes)
###Code
''.join(map(str, (find_fold_number(i) for i in range(500))))
###Output
_____no_output_____
###Markdown
 10. Run-length encodingRun-length encoding (RLE) is a data compression algorithm that replaces a run of repeated characters with a single character and the number of its repetitions. A run is a sequence of several identical characters (more than one). During encoding, the string of identical characters that makes up a run is replaced by a string containing the repeated character itself and its repetition count.For example, $\textit{AAAAAAAAAAAAAAABAAAA}$ will be compressed to $\textit{A15BA4}$For this task, implement the function `rle`
###Code
assert rle('AAAAAAAAAAAAAAABAAAA') == 'A15BA4', "wrong answer for the string from the example"
###Output
_____no_output_____
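###Markdown
For reference, one possible sketch of run-length encoding is shown below; note that it writes a run of length 1 without a count, matching the `A15BA4` example. The helper name `rle_sketch` is hypothetical, and you should still implement your own `rle` in `tasks.py`.
###Code
from itertools import groupby

def rle_sketch(s: str) -> str:
    """Run-length encode a string; single characters are written without a count."""
    parts = []
    for char, group in groupby(s):
        run_length = len(list(group))
        parts.append(char if run_length == 1 else f"{char}{run_length}")
    return ''.join(parts)

assert rle_sketch('AAAAAAAAAAAAAAABAAAA') == 'A15BA4'
###Output
_____no_output_____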
###Markdown
 As the answer, give the result of running the following cell (without the quotes)
###Code
rle(test_rle_str)
###Output
_____no_output_____
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name: Dimitar Karev- Email: [email protected] Link to your forked github repository: https://github.com/DKarev/Harvard_BAI.git
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
# !pip install scipy
from google.colab import drive
drive.mount('/content/drive')
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
git_dir = '/content/drive/MyDrive/Harvard_BAI'
print('Your github directory is :%s'%git_dir)
os.chdir(git_dir)
!pwd
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
 Let's load our flexible code-base which you will build on for your research projects in future assignments.Above we have imported modules (libraries, for those familiar with programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`).Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
'%s/res/'%git_dir
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /content/drive/MyDrive/Harvard_BAI/res/models
Loaders are being loaded from: /content/drive/MyDrive/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
!pip install wandb
import wandb
wandb.login()
!wandb login --relogin
###Output
[34m[1mwandb[0m: You can find your API key in your browser here: https://wandb.ai/authorize
[34m[1mwandb[0m: Paste an API key from your profile and hit enter:
[34m[1mwandb[0m: Appending key for api.wandb.ai to your netrc file: /root/.netrc
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
 By changing the settings above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with values between 0 and 255. One very common pre-processing step for images is to normalize them to have 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic being those imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
_____no_output_____
###Markdown
 Dataset vs Dataloader Most deep learning datasets are huge. They can be as large as a million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a pytorch dataset, and makes it easy to loop over it in batches. So, we leverage this to create a data loader from our above loaded MNIST dataset. The dataset itself only contains lists of where to find the inputs and outputs, i.e. paths. The data loader defines the logic for loading this information into the GPU/CPU so that it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
print("HI")
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
print("Hmm")
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
### This tells python to track gradients. While testing weights aren't updated hence they are not stored.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts pytorch in evaluation mode (affects layers like dropout/batchnorm); we won't be updating weights while testing.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
###Output
_____no_output_____
###Markdown
Make sure your runtime is set to GPU. If you changed your runtime, make sure to run your code again from the top.
###Code
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
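### (Editor's aside, a sketch assuming the standard torch.optim API:) plain SGD with momentum is another
### common choice and would only change this one line, for example:
# optimizer_ft = optim.SGD(model.parameters(), lr = wandb_config['base_lr'], momentum = 0.9)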
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
wandb: Currently logged in as: dkarev (use `wandb login --relogin` to force relogin)
###Markdown
Congratulations!You just completed your first deep learning program - image classification for MNIST. This wraps up assignment 1. In the next assignment, we will see how you can make changes to above mentioned folders/files to adapt this code-base to your own research project. Deliverables for Assignment 1: Please run this assignment through to the end, and then make two submissions:- Download this notebook as an HTML file. Click File ---> Download as ---> HTML. Submit this on canvas.- Add, commit and push these changes to your github repository.
###Code
###Output
_____no_output_____
###Markdown
Homework №1 In this homework you are asked to get familiar with the basic functionality of Python and the workings of Jupyter Notebook, as well as with simple functions from the NumPy and matplotlib packages. Follow the instructions in the notebook, solve the tasks, and fill in your answers in the following form: https://forms.gle/gxG8D5BGeH1nxcSU8
###Code
import numpy as np
import matplotlib.pyplot as plt
from tasks import find_fold_number, rle, test_rle_str
# increase the default figure size
plt.figure(figsize=(15, 10))
# display plots inside the notebook
%matplotlib inline
# plots rendered as svg look sharper
%config InlineBackend.figure_format = 'svg'
# automatically reload functions from local modules
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
NumPy & matplotlib For the following tasks you need to implement code inside the cell. Try to use the functionality of the packages and avoid unnecessary loops and the like. You can read about `NumPy` here: [NumPy quickstart](https://docs.scipy.org/doc/numpy/user/quickstart.html) and about `matplotlib` here: [PyPlot tutorial](https://matplotlib.org/tutorials/introductory/pyplot.html). The tasks in this part are based on the $\textit{Fisher's Iris}$ dataset, so the first step is to download it
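As a minimal illustration of "avoiding unnecessary loops" (an editor's sketch on a made-up array, not one of the graded tasks), the same column means can be computed with a Python loop or with a single vectorised NumPy call:
###Code
### Hypothetical example: column means of a toy array, loop vs. vectorised.
demo = np.arange(12).reshape(4, 3)
loop_means = [sum(demo[:, j]) / demo.shape[0] for j in range(demo.shape[1])]
vec_means = demo.mean(axis=0)
print(loop_means, vec_means)
###Output
_____no_output_____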
###Code
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data -P assignment_1/data
###Output
--2020-02-28 22:25:26-- https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data
Resolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.10.252
Connecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.10.252|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4551 (4,4K) [application/x-httpd-php]
Saving to: 'assignment_1/data/iris.data.1'
iris.data.1 100%[===================>] 4,44K --.-KB/s in 0s
2020-02-28 22:25:27 (31,1 MB/s) - 'assignment_1/data/iris.data.1' saved [4551/4551]
###Markdown
The Fisher's Iris dataset consists of data on 150 iris specimens of three species: Iris setosa, Iris virginica and Iris versicolor. For each specimen four characteristics were measured (in centimetres): 1. sepal length; 2. sepal width; 3. petal length; 4. petal width.
###Code
iris_full = np.genfromtxt('assignment_1/data/iris.data', delimiter=',', dtype='object')
names = ('sepallength', 'sepalwidth', 'petallength', 'petalwidth', 'species')
iris_vals = iris_full[:, :-1].astype(np.float)
iris_name = iris_full[:, -1].astype(np.str)
n_iris = iris_vals.shape[0]
n_rows = 10
template = '{:^15}' * len(names)
print(template.format(*names))
for vals, name in zip(iris_vals[:n_rows], iris_name[:n_rows]):
print(template.format(*vals, name))
from math import factorial
# Poisson approximation to the binomial: P(X = m) ~ lambda**m * exp(-lambda) / m!, with lambda = n * p
def puasson(n, p, m):
l = n * p
return ((l ** m) / (factorial(m))) * np.exp(-l)
# Local de Moivre-Laplace approximation: P(X = m) ~ phi(x) / sqrt(n*p*q), with q = 1 - p and x = (m - n*p) / sqrt(n*p*q)
def laplas(n, p, m):
x = (m - n * p) / (np.sqrt(n * p * (1 - p)))
phi = np.exp(-x*x/2) / np.sqrt(2 * np.pi)
return phi / np.sqrt(n * p * (1 - p))
print(puasson(5000, 0.0002, 2))
print(laplas(10000, 0.5, 5000))
###Output
0.18393972058572117
0.007978845608028654
###Markdown
1. What is the maximum value of each feature? The answer is a sequence of 4 numbers, for example: `5.1 3.5 1.4 0.2`
###Code
np.amax(iris_vals, axis=0)
###Output
_____no_output_____
###Markdown
2. How many specimens of each iris type are present in the data? The answer is a sequence of 3 numbers in the order `Iris-setosa, Iris-versicolor, Iris-virginica`, for example: `10 10 10`
###Code
unique = np.unique(iris_name, return_counts=True)[1]
unique
###Output
_____no_output_____
###Markdown
3. What is the mean value of the `petalwidth` feature for each iris type? The answer is a sequence in ascending order, rounded to 2 decimal places, for example: `1.23 4.56 7.89`
###Code
ans = []
for i in range(len(unique)):
mask = np.unique(iris_name)[i] == iris_name
xs = np.ma.array(iris_vals[:, -1], mask=list(np.invert(mask)))
ans.append(xs.mean())
print(sorted(ans))
iris_full = np.hstack([iris_vals, iris_name.reshape(-1, 1)])
unique = np.unique(iris_name)
iris_types = {}
for name in unique:
iris_types[name] = iris_full[iris_full[:, -1] == name][:, :-1].astype(np.float)
print(iris_types[name][:, -1].mean())
###Output
0.244
1.3259999999999998
2.0260000000000002
###Markdown
4. Pairwise dot products of the features. The answer is the mean of the pairwise dot products of the feature vectors, rounded to 2 decimal places; the dot product of a vector with itself is not counted. For example: `12.34`
###Code
import itertools
ans = [np.dot(iris_vals[:, i], iris_vals[:, j]) for (i, j) in itertools.combinations(range(4), 2)]
np.mean(np.array(ans))
###Output
_____no_output_____
###Markdown
5. Which iris type has the smallest value of the `sepalwidth` feature?
###Code
iris_name[iris_vals[:, 1].argmin()]
###Output
_____no_output_____
###Markdown
6. Plot the distribution of `petallength` values for each iris type. As the answer, give the iris type with the smallest variance (the "narrowest" one).
###Code
import collections, itertools
colors = ['lightgreen', 'cyan', 'purple']
cnt = 0
for name, vals in iris_types.items():
counter = collections.Counter(vals[:, 2])
x = np.array(list(counter.keys()))
y = np.array(list(counter.values()))
new_x, new_y = zip(*sorted(zip(x, y)))
plt.plot(new_x, new_y, color=colors[cnt], label=name)
plt.legend()
cnt += 1
###Output
_____no_output_____
###Markdown
7. Plot the relationship between `petallength` and `petalwidth` for each iris type, with petallength on the X axis and petalwidth on the Y axis. In the answer, give the class that is separated from the others.
###Code
colors = ['lightgreen', 'cyan', 'purple']
cnt = 0
for name, vals in iris_types.items():
x = vals[:, 2]
y = vals[:, 3]
plt.scatter(x, y, color=colors[cnt], label=name)
plt.legend()
cnt += 1
###Output
_____no_output_____
###Markdown
8. Build a boxplot of the `sepallength` feature for each iris type. In the answer, give the number of outliers in the data. Each outlier is drawn as a point; you can read more about boxplots [here](https://towardsdatascience.com/understanding-boxplots-5e2df7bcbd51)
###Code
import seaborn as sns
data = []
for name, vals in iris_types.items():
data.append(vals[:, 0])
data = np.array(data)
sns.boxplot(data=data.T)
###Output
_____no_output_____
###Markdown
Python For the following tasks, implement the corresponding function in the file `tasks.py`. After implementing it, run the corresponding cell without changing its contents. 9. In how many steps does a number reduce to a single digit when the digits of the previous number are multiplied together? For example, for $88$ the answer is $3$: $$88 \rightarrow 8 \times 8 = 64 \rightarrow 6 \times 4 = 24 \rightarrow 2 \times 4 = 8$$ For this task, implement the function `find_fold_number`
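A minimal sketch of one possible approach is shown below. It is an editor's illustration, not the reference solution, and the helper is named `find_fold_number_sketch` so it does not shadow the function imported from `tasks.py`:
###Code
def find_fold_number_sketch(n):
    # Count how many digit-multiplication steps it takes to reach a single-digit number.
    steps = 0
    while n >= 10:
        product = 1
        for digit in str(n):
            product *= int(digit)
        n = product
        steps += 1
    return steps
print(find_fold_number_sketch(88))  # 3, since 88 -> 64 -> 24 -> 8
###Output
_____no_output_____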
###Code
assert find_fold_number(88) == 3, "incorrect answer for the number from the example"
find_fold_number(24)
###Output
_____no_output_____
###Markdown
As the answer, give the result of running the following cell (without the quotes)
###Code
''.join(map(str, (find_fold_number(i) for i in range(500))))
###Output
_____no_output_____
###Markdown
10. Run-length encoding. Run-length encoding (RLE) is a data-compression algorithm that replaces repeated characters with a single character and the number of its repetitions. A run is a sequence of several identical characters (more than one). During encoding, a string of identical characters that forms a run is replaced by a string containing the repeated character itself and the count of its repetitions. For example, $\textit{AAAAAAAAAAAAAAABAAAA}$ is compressed to $\textit{A15BA4}$. For this task, implement the function `rle`
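A minimal sketch of run-length encoding in this format is shown below. It is an editor's illustration, not the reference solution, and the helper is named `rle_sketch` so it does not shadow the function imported from `tasks.py`; note that a count is appended only when a run is longer than one character:
###Code
def rle_sketch(s):
    # Collapse each run of identical characters into the character plus its count (count omitted for runs of length 1).
    out = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        run_length = j - i
        out.append(s[i] + (str(run_length) if run_length > 1 else ''))
        i = j
    return ''.join(out)
print(rle_sketch('AAAAAAAAAAAAAAABAAAA'))  # A15BA4
###Output
_____no_output_____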
###Code
assert rle('AAAAAAAAAAAAAAABAAAA') == 'A15BA4', "incorrect answer for the string from the example"
###Output
_____no_output_____
###Markdown
As the answer, give the result of running the following cell (without the quotes)
###Code
rle(test_rle_str)
###Output
_____no_output_____
###Markdown
Homework №1 In this homework you are asked to get familiar with the basic functionality of Python and the workings of Jupyter Notebook, as well as with simple functions from the NumPy and matplotlib packages. Follow the instructions in the notebook, solve the tasks, and fill in your answers in the following form: https://forms.gle/gxG8D5BGeH1nxcSU8
###Code
import numpy as np
import matplotlib.pyplot as plt
from tasks import find_fold_number, rle, test_rle_str
# increase the default figure size
plt.figure(figsize=(15, 10))
# display plots inside the notebook
%matplotlib inline
# plots rendered as svg look sharper
%config InlineBackend.figure_format = 'svg'
# automatically reload functions from local modules
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
NumPy & matplotlib For the following tasks you need to implement code inside the cell. Try to use the functionality of the packages and avoid unnecessary loops and the like. You can read about `NumPy` here: [NumPy quickstart](https://docs.scipy.org/doc/numpy/user/quickstart.html) and about `matplotlib` here: [PyPlot tutorial](https://matplotlib.org/tutorials/introductory/pyplot.html). The tasks in this part are based on the $\textit{Fisher's Iris}$ dataset, so the first step is to download it
###Code
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data -P assignment_1/data
###Output
--2020-02-29 16:02:56-- https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data
Resolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.10.252
Connecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.10.252|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4551 (4,4K) [application/x-httpd-php]
Saving to: ‘assignment_1/data/iris.data’
iris.data 100%[===================>] 4,44K --.-KB/s in 0,001s
2020-02-29 16:02:57 (7,96 MB/s) - ‘assignment_1/data/iris.data’ saved [4551/4551]
###Markdown
The Fisher's Iris dataset consists of data on 150 iris specimens of three species: Iris setosa, Iris virginica and Iris versicolor. For each specimen four characteristics were measured (in centimetres): 1. sepal length; 2. sepal width; 3. petal length; 4. petal width.
###Code
iris_full = np.genfromtxt('assignment_1/data/iris.data', delimiter=',', dtype='object')
names = ('sepallength', 'sepalwidth', 'petallength', 'petalwidth', 'species')
iris_vals = iris_full[:, :-1].astype(np.float)
iris_name = iris_full[:, -1].astype(np.str)
n_iris = iris_vals.shape[0]
n_rows = 10
template = '{:^15}' * len(names)
print(template.format(*names))
for vals, name in zip(iris_vals[:n_rows], iris_name[:n_rows]):
print(template.format(*vals, name))
###Output
sepallength sepalwidth petallength petalwidth species
5.1 3.5 1.4 0.2 Iris-setosa
4.9 3.0 1.4 0.2 Iris-setosa
4.7 3.2 1.3 0.2 Iris-setosa
4.6 3.1 1.5 0.2 Iris-setosa
5.0 3.6 1.4 0.2 Iris-setosa
5.4 3.9 1.7 0.4 Iris-setosa
4.6 3.4 1.4 0.3 Iris-setosa
5.0 3.4 1.5 0.2 Iris-setosa
4.4 2.9 1.4 0.2 Iris-setosa
4.9 3.1 1.5 0.1 Iris-setosa
###Markdown
1. What is the maximum value of each feature? The answer is a sequence of 4 numbers, for example: `5.1 3.5 1.4 0.2`
###Code
maxs = iris_vals.max(axis=0); print(maxs)
###Output
[7.9 4.4 6.9 2.5]
###Markdown
2. How many specimens of each iris type are present in the data? The answer is a sequence of 3 numbers in the order `Iris-setosa, Iris-versicolor, Iris-virginica`, for example: `10 10 10`
###Code
names, counts = np.unique(iris_name, return_counts=True); print(names, counts)
###Output
['Iris-setosa' 'Iris-versicolor' 'Iris-virginica'] [50 50 50]
###Markdown
3. What is the mean value of the `petalwidth` feature for each iris type? The answer is a sequence in ascending order, rounded to 2 decimal places, for example: `1.23 4.56 7.89`
###Code
print([(np.mean(np.compress(iris_name == name, iris_vals[:, 3]), dtype=np.float64), name) for name in names])
###Output
[(0.244, 'Iris-setosa'), (1.3260001, 'Iris-versicolor'), (2.026, 'Iris-virginica')]
###Markdown
4. Pairwise dot products of the features. The answer is the mean of the pairwise dot products of the feature vectors, rounded to 2 decimal places; the dot product of a vector with itself is not counted. For example: `12.34`
###Code
np.mean([iris_vals[:, i] @ iris_vals[:, j] for i in range(0, 4) for j in range(0, 4) if i != j], dtype=np.float64)
###Output
_____no_output_____
###Markdown
5. Which iris type has the smallest value of the `sepalwidth` feature?
###Code
print([(np.min(np.compress(iris_name == name, iris_vals[:, 1])), name) for name in names])
###Output
[(2.3, 'Iris-setosa'), (2.0, 'Iris-versicolor'), (2.2, 'Iris-virginica')]
###Markdown
6. Plot the distribution of `petallength` values for each iris type. As the answer, give the iris type with the smallest variance (the "narrowest" one).
###Code
[plt.plot(np.linspace(0, 10, 50), np.compress(iris_name == name, iris_vals[:, 2]), color) for (name, color) in np.vstack((names, ['r', 'g', 'b'])).T];
print([(np.var(np.compress(iris_name == name, iris_vals[:, 2])), name) for name in names])
###Output
[(0.029504000000000002, 'Iris-setosa'), (0.21640000000000004, 'Iris-versicolor'), (0.29849600000000004, 'Iris-virginica')]
###Markdown
7. Plot the relationship between `petallength` and `petalwidth` for each iris type, with petallength on the X axis and petalwidth on the Y axis. In the answer, give the class that is separated from the others.
###Code
[plt.plot(np.compress(iris_name == name, iris_vals[:, 2]), np.compress(iris_name == name, iris_vals[:, 3]), color) for (name, color) in np.vstack((names, ['r', 'g', 'b'])).T];
###Output
_____no_output_____
###Markdown
8. Build a boxplot of the `sepallength` feature for each iris type. In the answer, give the number of outliers in the data. Each outlier is drawn as a point; you can read more about boxplots [here](https://towardsdatascience.com/understanding-boxplots-5e2df7bcbd51)
###Code
[plt.boxplot(np.compress(iris_name == name, iris_vals[:, 0]), flierprops=dict(markerfacecolor='g', marker='D')) for name in names];
###Output
_____no_output_____
###Markdown
Python For the following tasks, implement the corresponding function in the file `tasks.py`. After implementing it, run the corresponding cell without changing its contents. 9. In how many steps does a number reduce to a single digit when the digits of the previous number are multiplied together? For example, for $88$ the answer is $3$: $$88 \rightarrow 8 \times 8 = 64 \rightarrow 6 \times 4 = 24 \rightarrow 2 \times 4 = 8$$ For this task, implement the function `find_fold_number`
###Code
plt.plot(np.arange(100), [find_fold_number(x) for x in np.arange(100)], 'g-');
###Output
_____no_output_____
###Markdown
As the answer, give the result of running the following cell (without the quotes)
###Code
''.join(map(str, (find_fold_number(i) for i in range(500))))
###Output
_____no_output_____
###Markdown
10. Run-length encoding. Run-length encoding (RLE) is a data-compression algorithm that replaces repeated characters with a single character and the number of its repetitions. A run is a sequence of several identical characters (more than one). During encoding, a string of identical characters that forms a run is replaced by a string containing the repeated character itself and the count of its repetitions. For example, $\textit{AAAAAAAAAAAAAAABAAAA}$ is compressed to $\textit{A15BA4}$. For this task, implement the function `rle`
###Code
print(rle('AAAAAAAAAAAAAAABAAAA'))
assert rle('AAAAAAAAAAAAAAABAAAA') == 'A15BA4', "incorrect answer for the string from the example"
###Output
A15BA4
###Markdown
As the answer, give the result of running the following cell (without the quotes)
###Code
rle(test_rle_str)
###Output
_____no_output_____
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name: Leannah Newman- Email: [email protected] Link to your forked github repository: https://github.com/leannahnewman/Harvard_BAI
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
work_dir = os.getcwd()
print(work_dir)
git_dir = '\\'.join(work_dir.split('\\')[:-1])
print('Your github directory is :%s'%git_dir)
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments.Above we have imported modules (libraries for those familiar to programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`).Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then be flexibly do detection instead of classication (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: C:\Users\leann\Documents\BAI\Harvard_BAI\res\models
Loaders are being loaded from: C:\Users\leann\Documents\BAI\Harvard_BAI\res\loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
import wandb
wandb.login()
###Output
_____no_output_____
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand this works. In next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading which for the MNIST dataset for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are stored with values between 0-255 usually. One very common pre-processing for images is to normalize to be 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic one being those imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
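###Markdown
As a quick aside (an editor's sketch, not part of the original assignment), `Normalize((0.1307,), (0.3081,))` simply applies `(x - mean) / std` to every pixel, which can be verified on a toy tensor:
###Code
### Hypothetical sanity check of the normalization arithmetic used above.
x = torch.tensor([0.0, 0.1307, 1.0])
print((x - 0.1307) / 0.3081)  # 0.1307 maps to 0; 0 and 1 map to roughly -0.42 and 2.82
###Output
_____no_output_____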
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
_____no_output_____
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge. Can be as large as million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a pytorch dataset, and makes it easy to loop over it in batches. So, we leverage this to create a data loader from our above loaded MNIST dataset. The dataset itself only contains lists of where to find the inputs and outputs i.e. paths. The data loader defines the logic on loading this information into the GPU/CPU and so it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
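###Markdown
To make the batching concrete (an editor's sketch, not part of the original assignment), a single batch can be pulled from the train loader and inspected; with the settings above it should hold 10 images of shape 1x28x28 and 10 labels:
###Code
### Hypothetical peek at one mini-batch produced by the train loader.
sample_inputs, sample_labels = next(iter(data_loaders['train']))
print(sample_inputs.shape, sample_labels.shape)  # torch.Size([10, 1, 28, 28]) torch.Size([10])
###Output
_____no_output_____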
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
### This tells python to track gradients. While testing weights aren't updated hence they are not stored.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
model.eval() ### tells pytorch to not store gradients as we won't be updating weights while testing.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
_____no_output_____
###Markdown
Congratulations!You just completed your first deep learning program - image classification for MNIST. This wraps up assignment 1. In the next assignment, we will see how you can make changes to above mentioned folders/files to adapt this code-base to your own research project. Deliverables for Assignment 1: Please run this assignment through to the end, and then make two submissions:- Download this notebook as an HTML file. Click File ---> Download as ---> HTML. Submit this on canvas.- Add, commit and push these changes to your github repository.
###Code
!git add -A  # stage the changes; commit and push from a terminal afterwards
###Output
_____no_output_____
###Markdown
1. preprocess data
###Code
train_data = pd.read_csv("dataset/train.csv", encoding='big5')
test_data = pd.read_csv("dataset/test.csv", encoding='big5')
sample_submission = pd.read_csv("dataset/sampleSubmission.csv", encoding='big5')
###Output
_____no_output_____
###Markdown
* train
###Code
train_data = train_data.iloc[:, 3:]
train_data.shape
train_data.head()
train_data.replace("NR", 0 , inplace=True)
# month : 12, pollution_source : 18, days : 20, hours each day : 24
train_each_month = np.zeros((12, 18, 24 * 20), dtype=np.float64)
for i in range(train_each_month.shape[0]):
for j in range(20):
train_each_month[i, :, j * 24 : j * 24 + 24] = train_data.iloc[(i * 20 + j) * 18 : (i * 20 + j) * 18 + 18, :]
train_each_month[0, 0, :48]
###Output
_____no_output_____
###Markdown
* test
###Code
test_data = test_data.iloc[:, 2:]
test_data.shape
test_data.head()
test_data.replace("NR", 0 , inplace=True)
###Output
_____no_output_____
###Markdown
2. prepare training data (train_x | train_y) and test data (test_x) * train
###Code
train_rows = 471 * 12
train_x = np.zeros((train_rows, 18 * 9), dtype=np.float64)
train_y = np.zeros((train_rows, 1), dtype=np.float64)
for i in range(train_each_month.shape[0]):
for j in range(471):
train_x[i * 471 + j] = train_each_month[i, :, j : j + 9].flatten()
train_y[i * 471 + j] = train_each_month[i, 9, j + 9]
print("train_x[2, : 10] :", train_x[2, : 10])
print("train_y[2:5] :", train_y[2:5])
train_bias = np.ones((train_x.shape[0], 1))
train_x = np.concatenate((train_bias, train_x), axis=1)
###Output
_____no_output_____
###Markdown
* test
###Code
test_rows = int(test_data.shape[0] / 18)
test_x = np.zeros((test_rows, 18 * 9), dtype=np.float64)
test_y = np.zeros((test_rows, 1), dtype=np.float64)
for i in range(test_x.shape[0]):
test_x[i] = test_data.iloc[i * 18 : i * 18 + 18, :].values.flatten()
print("test_x[0, : 10] :", test_x[0, : 10])
test_bias = np.ones((test_x.shape[0], 1))
test_x = np.concatenate((test_bias, test_x), axis=1)
###Output
_____no_output_____
###Markdown
3. define train functions
###Code
def GD(train_x, train_y, weight, lr, iteration, lambdaL2):
list_loss = []
for i in range(iteration):
predict = np.dot(train_x, weight)
loss_bref = predict - train_y
loss = np.mean(loss_bref ** 2)
list_loss.append(loss)
# f(x) = (a*x + b)^2
# f(x)' = 2(a*x + b) * a
grad = np.dot(train_x.T, loss_bref) / train_x.shape[0] + lambdaL2 * weight
weight -= lr * grad
return weight, list_loss
def ada_GD(train_x, train_y, weight, lr, iteration, lambdaL2):
s_grad = np.zeros((train_x.shape[1], 1), dtype=np.float64)
list_loss = []
for i in range(iteration):
predict = np.dot(train_x, weight)
loss_bref = predict - train_y
loss = np.mean(loss_bref ** 2)
list_loss.append(loss)
grad = np.dot(train_x.T, loss_bref) / train_x.shape[0] + lambdaL2 * weight
# accumulate the squared gradients; AdaGrad scales each update by 1 / sqrt(sum of past squared gradients)
s_grad += grad ** 2
ada = np.sqrt(s_grad + 1e-8)
weight -= lr * grad / ada
return weight, list_loss
def SGD(train_x, train_y, weight, lr, iteration, lambdaL2, batch_size):
list_loss = []
for i in range(iteration):
predict = np.dot(train_x, weight)
loss_bref = predict - train_y
loss = np.mean(loss_bref ** 2)
list_loss.append(loss)
rand = np.random.randint(0, train_x.shape[0], size=batch_size)
grad = np.dot(train_x[rand].T, loss_bref[rand]) / train_x.shape[0] + lambdaL2 * weight
weight -= lr * grad
return weight, list_loss
###Output
_____no_output_____
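###Markdown
For reference (an editor's summary of what the three functions above implement, with the factor of 2 from the squared loss absorbed into the learning rate): plain GD updates $w \leftarrow w - \eta\,(g + \lambda w)$ with $g = X^\top(Xw - y)/N$; the AdaGrad variant rescales each step to $w \leftarrow w - \eta\,g/\sqrt{\sum_t g_t^2 + \epsilon}$ using the accumulated squared gradients; and the SGD variant estimates $g$ from a random mini-batch of rows at every iteration.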
###Markdown
4. train data
###Code
## hyperparameter
lr = 0.001
iteration = 20000
lambdaL2 = 0
batch_size = 64
## hyperparameter
weight = np.zeros((train_x.shape[1], 1))
# The plain gradient-descent loss blows up beyond the np.float64 range, so it is skipped for now
# weight_gd, list_loss_gd = GD(train_x, train_y, weight.copy(), lr, iteration, lambdaL2)
# Pass a copy to each optimizer: `weight -= lr * grad` updates the array in place, so sharing one
# array would make the later run continue from (and overwrite) the earlier result.
weight_ada_gd, list_loss_ada_gd = ada_GD(train_x, train_y, weight.copy(), lr, iteration, lambdaL2)
weight_sgd, list_loss_sgd = SGD(train_x, train_y, weight.copy(), lr, iteration, lambdaL2, batch_size)
plt.figure(figsize=(20, 12))
plt.title("Train_Process")
plt.xlabel("iteration")
plt.ylabel("loss")
# plt.plot(list_loss_gd, label="gd")
plt.plot(list_loss_ada_gd, label="ada_gd", linewidth=10)
plt.plot(list_loss_sgd, label="sgd", linewidth=1)
plt.legend()
plt.show()
# y_gd = np.dot(test_x, weight_gd)
y_ada_gd = np.dot(test_x, weight_ada_gd)
y_sgd = np.dot(test_x, weight_sgd)
###Output
_____no_output_____
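###Markdown
Before writing the submission file, the fit can be sanity-checked on the training data. This is an editor's sketch, not part of the original notebook; it reports the root-mean-square error of the AdaGrad weights:
###Code
### Hypothetical check: training RMSE of the AdaGrad solution.
train_rmse = np.sqrt(np.mean((np.dot(train_x, weight_ada_gd) - train_y) ** 2))
print(train_rmse)
###Output
_____no_output_____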
###Markdown
5. save data to file
###Code
sample_submission.pop("value")
sample_submission.insert(1, "value_gd", y_ada_gd)
sample_submission.insert(1, "value_sgd", y_sgd)
sample_submission.tail()
sample_submission.to_csv("./dataset/sampleSubmission.csv")
###Output
_____no_output_____
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name: Christopher Gilmer-Hill- Email: [email protected] Link to your forked github repository: https://github.com/CGH31415/Harvard_BAI
###Code
!pip install wandb
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
from google.colab import drive
drive.mount('/content/drive')
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
work_dir = os.getcwd()
print(work_dir)
git_dir = '/content/drive/MyDrive/Neuro 240/Harvard_BAI'
print('Your github directory is :%s'%git_dir)
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments.Above we have imported modules (libraries for those familiar to programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`).Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then be flexibly do detection instead of classication (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /content/drive/MyDrive/Neuro 240/Harvard_BAI/res/models
Loaders are being loaded from: /content/drive/MyDrive/Neuro 240/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
import wandb
wandb.login()
###Output
_____no_output_____
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand this works. In next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading which for the MNIST dataset for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are stored with values between 0-255 usually. One very common pre-processing for images is to normalize to be 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic one being those imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
_____no_output_____
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge. Can be as large as million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a pytorch dataset, and makes it easy to loop over it in batches. So, we leverage this to create a data loader from our above loaded MNIST dataset. The dataset itself only contains lists of where to find the inputs and outputs i.e. paths. The data loader defines the logic on loading this information into the GPU/CPU and so it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
### This tells python to track gradients. While testing weights aren't updated hence they are not stored.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
model.eval() ### tells pytorch to not store gradients as we won't be updating weights while testing.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
wandb: Currently logged in as: cg-h (use `wandb login --relogin` to force relogin)
###Markdown
Congratulations!You just completed your first deep learning program - image classification for MNIST. This wraps up assignment 1. In the next assignment, we will see how you can make changes to above mentioned folders/files to adapt this code-base to your own research project. Deliverables for Assignment 1: Please run this assignment through to the end, and then make two submissions:- Download this notebook as an HTML file. Click File ---> Download as ---> HTML. Submit this on canvas.- Add, commit and push these changes to your github repository.
###Code
#done
###Output
_____no_output_____
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name:- Email:- Link to your forked github repository:
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
work_dir = os.getcwd()
git_dir = '/'.join(work_dir.split('/')[:-1])
print('Your github directory is :%s'%git_dir)
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments.Above we have imported modules (libraries for those familiar to programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`).Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then be flexibly do detection instead of classication (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /net/storage001.ib.cluster/om2/user/smadan/Harvard_BAI/res/models
Loaders are being loaded from: /net/storage001.ib.cluster/om2/user/smadan/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
import wandb
wandb.login()
###Output
Failed to query for notebook name, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable
wandb: Currently logged in as: spandanmadan (use `wandb login --relogin` to force relogin)
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand this works. In next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading which for the MNIST dataset for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are stored with values between 0-255 usually. One very common pre-processing for images is to normalize to be 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic one being those imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /net/storage001.ib.cluster/om2/user/smadan/Harvard_BAI/datasets/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge; they can be as large as a million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even the CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a PyTorch dataset and makes it easy to loop over it in batches. So, we leverage this to create a data loader from the MNIST dataset loaded above. The dataset itself only contains lists of where to find the inputs and outputs, i.e. paths. The data loader defines the logic for loading this information onto the GPU/CPU so that it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
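### Added illustration (not in the original): peek at one mini-batch to see what the loader yields.
### Each iteration returns a tensor of batch_size images and a tensor of their labels.
example_images, example_labels = next(iter(data_loaders['train']))
print(example_images.shape)   # expected torch.Size([10, 1, 28, 28]) for MNIST with batch_size = 10
print(example_labels.shape)   # expected torch.Size([10])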
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
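### Added illustration (not in the original): printing a torch.nn.Module lists its layers,
### which is a quick way to inspect whatever architecture get_model returned.
print(model)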
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
    ### model.train() puts the model in training mode: layers like dropout and batch-norm behave differently during training than during testing. Gradient tracking itself is handled by autograd, not by this call.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts the model in evaluation mode; layers like dropout and batch-norm switch to their test-time behaviour.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
        wandb.log({"test_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
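### Added illustration (not in the original): cross entropy compares raw class scores ("logits")
### against integer labels, so no softmax is needed before calling the criterion.
example_logits = torch.randn(4, wandb_config['num_classes'])   # 4 fake predictions over 10 classes
example_targets = torch.tensor([0, 3, 7, 9])                   # 4 fake ground-truth digits
print(criterion(example_logits, example_targets))              # a single scalar loss value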
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
_____no_output_____
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name: Brandon Palacios- Email: [email protected] Link to your forked github repository: https://github.com/bpalacios4/Harvard_BAI
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
#!pip install scipy
from google.colab import drive
drive.mount('/content/drive')
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
git_dir = '/content/drive/MyDrive/Harvard_BAI'
print('Your github directory is :%s'%git_dir)
os.chdir(git_dir)
!pwd
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments. Above we have imported modules (libraries, for those familiar with programming languages other than Python). These modules are of two kinds: (1) inbuilt Python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (e.g. `torch`). Below we will be importing our own code, which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
'%s/res/'%git_dir
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /content/drive/MyDrive/Harvard_BAI/res/models
Loaders are being loaded from: /content/drive/MyDrive/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
!pip install wandb
import wandb
wandb.login()
###Output
_____no_output_____
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing the settings above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with values between 0 and 255. One very common pre-processing step for images is to normalize them to have 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic being the one imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
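### Added illustration (not in the original): indexing the dataset applies the transforms and
### returns an (image_tensor, label) pair, so we can inspect a single sample directly.
example_image, example_label = mnist_dataset['train'][0]
print(len(mnist_dataset['train']), len(mnist_dataset['test']))   # 60000 training and 10000 test images
print(example_image.shape, example_label)                        # torch.Size([1, 28, 28]) and an integer digit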
###Output
_____no_output_____
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge; they can be as large as a million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even the CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a PyTorch dataset and makes it easy to loop over it in batches. So, we leverage this to create a data loader from the MNIST dataset loaded above. The dataset itself only contains lists of where to find the inputs and outputs, i.e. paths. The data loader defines the logic for loading this information onto the GPU/CPU so that it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
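### Added illustration (not in the original): a one-liner to count the trainable weights of the CNN.
print(sum(p.numel() for p in model.parameters() if p.requires_grad), 'trainable parameters')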
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
    ### model.train() puts the model in training mode: layers like dropout and batch-norm behave differently during training than during testing. Gradient tracking itself is handled by autograd, not by this call.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts the model in evaluation mode; layers like dropout and batch-norm switch to their test-time behaviour.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
        wandb.log({"test_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
###Output
_____no_output_____
###Markdown
Make sure your runtime is GPU. If you changed your run time, make sure to run your code again from the top.
###Code
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
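### Added sketch (not in the original, assuming the save above succeeded): because torch.save was
### given the whole module object, torch.load returns a ready-to-use model. On newer PyTorch
### versions you may need torch.load(save_path, weights_only=False) for full-module pickles.
reloaded_model = torch.load(save_path)
reloaded_model.eval()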
###Output
wandb: Currently logged in as: bpalacios4 (use `wandb login --relogin` to force relogin)
###Markdown
Congratulations!You just completed your first deep learning program - image classification for MNIST. This wraps up assignment 1. In the next assignment, we will see how you can make changes to above mentioned folders/files to adapt this code-base to your own research project. Deliverables for Assignment 1: Please run this assignment through to the end, and then make two submissions:- Download this notebook as an HTML file. Click File ---> Download as ---> HTML. Submit this on canvas.- Add, commit and push these changes to your github repository.
###Code
### Run nbconvert through a shell escape so that the Python lines below in this same cell still execute.
!jupyter nbconvert --to html /content/assignment_1.ipynb
import os
os.chdir('/content/drive/MyDrive/Harvard_BAI')
with open('/content/drive/MyDrive/access_token.txt','r') as F:
contents = F.readlines()
token = contents[0]
print("I'm, so that git can upload those changes when you push")
###Output
_____no_output_____
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name: Abdeljaleel Ismail- Email: [email protected] Link to your forked github repository: https://github.com/jaleeli413/Harvard_BAI
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
from google.colab import drive
drive.mount('/content/drive')
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
#work_dir = os.getcwd()
git_dir = '/content/drive/MyDrive/Harvard_BAI'
print('Your github directory is :%s'%git_dir)
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments. Above we have imported modules (libraries, for those familiar with programming languages other than Python). These modules are of two kinds: (1) inbuilt Python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (e.g. `torch`). Below we will be importing our own code, which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /content/drive/MyDrive/Harvard_BAI/res/models
Loaders are being loaded from: /content/drive/MyDrive/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
pip install wandb
import wandb
wandb.login()
###Output
_____no_output_____
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing the settings above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with values between 0 and 255. One very common pre-processing step for images is to normalize them to have 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic being the one imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
_____no_output_____
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge; they can be as large as a million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even the CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a PyTorch dataset and makes it easy to loop over it in batches. So, we leverage this to create a data loader from the MNIST dataset loaded above. The dataset itself only contains lists of where to find the inputs and outputs, i.e. paths. The data loader defines the logic for loading this information onto the GPU/CPU so that it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
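### Added illustration (not in the original): len() of a DataLoader is the number of mini-batches
### per epoch, i.e. ceil(dataset size / batch size).
print(len(data_loaders['train']), 'train batches per epoch')   # 60000 / 10 = 6000
print(len(data_loaders['test']), 'test batches per epoch')     # 10000 / 10 = 1000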
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
    ### model.train() puts the model in training mode: layers like dropout and batch-norm behave differently during training than during testing. Gradient tracking itself is handled by autograd, not by this call.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts the model in evaluation mode; layers like dropout and batch-norm switch to their test-time behaviour.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
        wandb.log({"test_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
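### Added note: model.eval() only switches layers like dropout/batch-norm to test-time behaviour;
### gradients are still tracked in the loop above. If memory becomes an issue, the forward passes in
### test_model could additionally be wrapped in `with torch.no_grad():` to skip gradient book-keeping.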
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
wandb: Currently logged in as: abdeljaleelismail (use `wandb login --relogin` to force relogin)
###Markdown
Congratulations!You just completed your first deep learning program - image classification for MNIST. This wraps up assignment 1. In the next assignment, we will see how you can make changes to above mentioned folders/files to adapt this code-base to your own research project. Deliverables for Assignment 1: Please run this assignment through to the end, and then make two submissions:- Download this notebook as an HTML file. Click File ---> Download as ---> HTML. Submit this on canvas.- Add, commit and push these changes to your github repository.
###Code
### Run nbconvert through a shell escape so that the Python lines below in this same cell still execute.
!jupyter nbconvert --to html /content/assignment_1.ipynb
import os
os.chdir('/content/drive/MyDrive/Harvard_BAI')
with open('/content/drive/MyDrive/access_token.txt','r') as F:
contents = F.readlines()
token = contents[0]
print('Th')
###Output
_____no_output_____
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name: Hannah Phan- Email: [email protected] Link to your forked github repository: https://github.com/phannahhan/Harvard_BAI
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
os.chdir('/content/drive/MyDrive/Harvard_BAI')
# !pip install scipy
from google.colab import drive
drive.mount('/content/drive')
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
git_dir = '/content/drive/MyDrive/Harvard_BAI'
print('Your github directory is :%s'%git_dir)
os.chdir(git_dir)
!pwd
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments. Above we have imported modules (libraries, for those familiar with programming languages other than Python). These modules are of two kinds: (1) inbuilt Python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (e.g. `torch`). Below we will be importing our own code, which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
'%s/res/'%git_dir
os.chdir('/content/drive/MyDrive/Harvard_BAI/Harvard_BAI/res')
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /content/drive/MyDrive/Harvard_BAI/Harvard_BAI/res/models
Loaders are being loaded from: /content/drive/MyDrive/Harvard_BAI/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
!pip install wandb
import wandb
wandb.login()
###Output
_____no_output_____
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing the settings above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with values between 0 and 255. One very common pre-processing step for images is to normalize them to have 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic being the one imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
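### Optional sanity check (added for illustration, not in the original): the 0.1307 / 0.3081 constants
### are simply the mean and standard deviation of the raw MNIST training pixels after ToTensor scales
### them to [0, 1]. They can be recomputed directly (takes a few seconds and a few hundred MB of RAM):
raw_train = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = torchvision.transforms.ToTensor())
all_pixels = torch.stack([image for image, _ in raw_train])
print(all_pixels.mean().item(), all_pixels.std().item())   # roughly 0.1307 and 0.3081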
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /content/drive/MyDrive/Harvard_BAI/datasets/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge; they can be as large as a million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even the CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a PyTorch dataset and makes it easy to loop over it in batches. So, we leverage this to create a data loader from the MNIST dataset loaded above. The dataset itself only contains lists of where to find the inputs and outputs, i.e. paths. The data loader defines the logic for loading this information onto the GPU/CPU so that it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
    ### model.train() puts the model in training mode: layers like dropout and batch-norm behave differently during training than during testing. Gradient tracking itself is handled by autograd, not by this call.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts the model in evaluation mode; layers like dropout and batch-norm switch to their test-time behaviour.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
        wandb.log({"test_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
###Output
_____no_output_____
###Markdown
Make sure your runtime is GPU. If you changed your run time, make sure to run your code again from the top.
###Code
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
wandb: Currently logged in as: phannahhan (use `wandb login --relogin` to force relogin)
###Markdown
Congratulations!You just completed your first deep learning program - image classification for MNIST. This wraps up assignment 1. In the next assignment, we will see how you can make changes to above mentioned folders/files to adapt this code-base to your own research project. Deliverables for Assignment 1: Please run this assignment through to the end, and then make two submissions:- Download this notebook as an HTML file. Click File ---> Download as ---> HTML. Submit this on canvas.- Add, commit and push these changes to your github repository.
###Code
%%shell
jupyter nbconvert --to html /content/assignment_1.ipynb
! git add .
###Output
_____no_output_____
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name: Nicolò- Email: [email protected] Link to your forked github repository: https://github.com/nfoppiani/Harvard_BAI
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
# !pip install scipy
from google.colab import drive
drive.mount('/content/drive')
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
git_dir = '/content/drive/MyDrive/Harvard_Classes/Neuro140/Harvard_BAI'
print('Your github directory is :%s'%git_dir)
os.chdir(git_dir)
!pwd
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments. Above we have imported modules (libraries, for those familiar with programming languages other than Python). These modules are of two kinds: (1) inbuilt Python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (e.g. `torch`). Below we will be importing our own code, which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
'%s/res/'%git_dir
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /content/drive/MyDrive/Harvard_Classes/Neuro140/Harvard_BAI/res/models
Loaders are being loaded from: /content/drive/MyDrive/Harvard_Classes/Neuro140/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
!pip install wandb
import wandb
wandb.login()
###Output
_____no_output_____
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing the settings above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with values between 0 and 255. One very common pre-processing step for images is to normalize them to have 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic being the one imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /content/drive/MyDrive/Harvard_Classes/Neuro140/Harvard_BAI/datasets/MNIST/raw/train-images-idx3-ubyte.gz
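###Markdown
Optional sanity check (not part of the assignment): the `Normalize((0.1307,), (0.3081,))` transform above computes (x - 0.1307) / 0.3081 per pixel, so a reasonably large sample of transformed training images should have a mean close to 0 and a standard deviation close to 1. A minimal sketch, using the `mnist_dataset` dict created in the previous cell:
###Code
### Stack the first 1000 transformed training images and check their statistics.
### The numbers will not be exactly 0 and 1 (we only use a subset), but they should be close.
sample_images = torch.stack([mnist_dataset['train'][i][0] for i in range(1000)])
print(sample_images.mean(), sample_images.std())
###Output
_____no_output_____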
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge; they can contain millions of data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients, and we could not load a million samples into the GPU (or even CPU) and do forward or backward passes on the network at once. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` lets you wrap a PyTorch dataset and makes it easy to loop over it in batches. So, we leverage this to create a data loader from the MNIST dataset loaded above. The dataset itself only contains lists of where to find the inputs and outputs (i.e. paths). The data loader defines the logic for loading this information onto the GPU/CPU so it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
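###Markdown
Before training, it can help to confirm the loader produces batches of the expected shape. A quick optional check (a sketch; all names come from the cells above):
###Code
### Grab a single batch from the training loader and inspect its shape.
sample_inputs, sample_labels = next(iter(data_loaders['train']))
print(sample_inputs.shape) # expected torch.Size([10, 1, 28, 28]) for batch_size = 10 and 1-channel 28x28 MNIST images
print(sample_labels.shape) # expected torch.Size([10])
###Output
_____no_output_____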
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like? `get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
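###Markdown
As a rough illustration only (a minimal sketch, not the actual contents of `res/models/models.py`), a dispatcher like `get_model` could look something like the following. The import path and the `CustomCNN(num_classes)` constructor signature are assumptions based on the description printed by the button and on how `get_model` is called in this notebook.
###Code
### Sketch of a model dispatcher -- the real res/models/models.py may differ.
from models.CustomCNN import CustomCNN # assumed module path (res/ is on sys.path, class described as living in res/models/CustomCNN.py)
def get_model_sketch(model_arch, num_classes):
    ### Map the architecture name (e.g. wandb_config['model_arch'] == 'CustomCNN') to a model instance.
    if model_arch == 'CustomCNN':
        return CustomCNN(num_classes) # assumed constructor signature
    raise ValueError('Unknown architecture: %s' % model_arch)
###Output
_____no_output_____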
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
    ### model.train() puts the model in training mode (e.g. enables dropout and batch-norm updates). Weights are only updated during training, not while testing.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts the model in evaluation mode; we won't be updating weights while testing (wrap in torch.no_grad() if you also want to skip gradient tracking).
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
###Output
_____no_output_____
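###Markdown
One note on the test function above: `model.eval()` only switches layer behaviour (dropout, batch-norm, etc.); it does not by itself turn off gradient tracking. If you also want to skip gradient bookkeeping during evaluation, you can wrap the forward pass in `torch.no_grad()`. A minimal sketch (the 1x28x28 input shape is an assumption about what `CustomCNN` expects for MNIST):
###Code
### Run a forward pass without building the autograd graph.
with torch.no_grad():
    dummy_input = torch.zeros(1, 1, 28, 28) # assumed MNIST-shaped input: batch of 1, 1 channel, 28x28 pixels
    # (if the model has already been moved to the GPU, also call .cuda() on dummy_input)
    print(model(dummy_input).shape)
###Output
_____no_output_____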
###Markdown
Make sure your runtime is GPU. If you changed your run time, make sure to run your code again from the top.
###Code
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
[34m[1mwandb[0m: Currently logged in as: [33mnfoppiani[0m (use `wandb login --relogin` to force relogin)
###Markdown
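If you want to experiment with a different optimizer than Adam, only the optimizer construction changes; the criterion, model and training loop stay the same. For example (a sketch):
###Code
### Plain SGD with momentum as an alternative to Adam -- swap this in for optimizer_ft above if you want to compare.
optimizer_sgd = optim.SGD(model.parameters(), lr = wandb_config['base_lr'], momentum = 0.9)
###Output
_____no_output_____
###Markdown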
Congratulations! You just completed your first deep learning program - image classification for MNIST. This wraps up assignment 1. In the next assignment, we will see how you can make changes to the above mentioned folders/files to adapt this code-base to your own research project. Deliverables for Assignment 1: Please run this assignment through to the end, and then make two submissions:- Download this notebook as an HTML file. Click File ---> Download as ---> HTML. Submit this on canvas.- Add, commit and push these changes to your github repository.
###Code
!git add .
! touch ../github_token.txt
!ls ..
with open('../github_token.txt','r') as F:
contents = F.readlines()
token = contents[0]
!git config --global user.email "[email protected]"
!git config --global user.name "nfoppiani"
# THE FIRST TIME YOU RUN THIS CODE, UNCOMMENT LINES BELOW. AFTER THAT COMMENT THEM BACK.
!git remote rm origin
!git remote add origin https://nfoppiani:[email protected]/nfoppiani/Harvard_BAI.git
!git commit -m "assignment 1 completed"
!git push -u origin main
!jupyter nbconvert assignment_1/assignment_1.ipynb --to html --output assignment_1.html
###Output
_____no_output_____
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST images. The code is written to familiarize you with a typical machine learning pipeline, and with the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name: Zixian Li- Email: [email protected] Link to your forked github repository: https://github.com/dave98lzx/Harvard_BAI
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
work_dir = os.getcwd()
git_dir = '/'.join(work_dir.split('/')[:-1])
print('Your github directory is :%s'%git_dir)
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base, which you will build on for your research projects in future assignments. Above we imported modules (libraries, for those familiar with programming languages other than Python). These modules are of two kinds: (1) built-in Python modules like `os`, `sys`, `random`, or (2) ones we installed using conda (e.g. `torch`). Below we import our own code, which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose you want to do a project on object detection: you can add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase is the main subject of Assignment 2. For now, let's continue with importing.
###Code
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /Users/lizixian/Desktop/NEURO140-HW/Harvard_BAI/res/models
Loaders are being loaded from: /Users/lizixian/Desktop/NEURO140-HW/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load a ResNet architecture or a transformer architecture, they will reside there as separate files. Similarly, `res/loader` holds programs designed to load different types of data. For example, you may want to load data differently for object classification and detection: for classification, each image has only a numerical label corresponding to its category, while for detection the labels for the same image would contain bounding boxes for different objects and the type of object in each box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
import wandb
wandb.login()
###Output
Failed to query for notebook name, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable
[34m[1mwandb[0m: Currently logged in as: [33mdave98lzx[0m (use `wandb login --relogin` to force relogin)
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 0
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing the settings above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects is running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with pixel values between 0 and 255. One very common pre-processing step for images is to normalize them to zero mean and unit standard deviation, which makes the task easier for neural networks. There are many kinds of normalization in deep learning, the most basic being the normalization applied to the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /Users/lizixian/Desktop/NEURO140-HW/Harvard_BAI/datasets/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge; they can contain millions of data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients, and we could not load a million samples into the GPU (or even CPU) and do forward or backward passes on the network at once. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` lets you wrap a PyTorch dataset and makes it easy to loop over it in batches. So, we leverage this to create a data loader from the MNIST dataset loaded above. The dataset itself only contains lists of where to find the inputs and outputs (i.e. paths). The data loader defines the logic for loading this information onto the GPU/CPU so it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like? `get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
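###Markdown
Besides reading the source, you can also print the instantiated model object: printing a `torch.nn.Module` lists its registered sub-modules, so this shows the layers of `CustomCNN` (the exact printout depends on how that class is defined).
###Code
### Print the layer structure of the instantiated model.
print(model)
###Output
_____no_output_____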
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
    ### model.train() puts the model in training mode (e.g. enables dropout and batch-norm updates). Weights are only updated during training, not while testing.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
# print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts the model in evaluation mode; we won't be updating weights while testing (wrap in torch.no_grad() if you also want to skip gradient tracking).
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
# print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
_____no_output_____
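###Markdown
Since the full model object was saved with `torch.save` above, a later session can load it back from the same `save_path`. A minimal sketch:
###Code
### Reload the saved model object for inference in a later session.
### On newer PyTorch versions you may need torch.load(save_path, weights_only=False) to unpickle a full module.
reloaded_model = torch.load(save_path)
reloaded_model.eval() # switch to evaluation mode before running inference
###Output
_____no_output_____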
###Markdown
Linear regression with one variable Assignment 2.1: Plotting the data
###Code
import matplotlib.pyplot as plt
import numpy as np
train_data = np.genfromtxt('ex1data1.txt', delimiter=',')
print("Dimentions of training samples: [%d" % train_data.shape[0], ", %d]" % train_data.shape[1])
X = train_data[:, 0]
y = train_data[:, 1]
m = y.shape[0]
print("Size of training samples: %d" % m)
X = X.reshape(m, 1)
y = y.reshape(m, 1)
# Scatter plot
plt.style.use('seaborn-whitegrid')
plt.xlabel("Population of city in 10,000s")
plt.ylabel("Profit in $10,000s")
plt.title("Scatter plot of training data")
plt.plot(X, y, 'rx');
###Output
Dimensions of training samples: [97 , 2]
Size of training samples: 97
###Markdown
Assignment 2.2: Gradient Descent
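For reference, the two quantities implemented below are the standard squared-error cost and the batch gradient-descent update for the hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$:
$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2, \qquad \theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$$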
###Code
### Method to compute Cost Function
def compute_cost(X, y, theta):
diff = X.dot(theta) - y
ssq = np.sum(diff**2)
return ssq / (2 * m)
# Add one additional column of all ones to X representing x0
x0 = np.ones((m, 1))
original_X = X
X = np.append(x0, X, axis=1)
print(X.shape)
print(y.shape)
# Initialize thetas to zeros
# Iteration count to 1500
# Alpha to 0.01
theta = np.zeros((2, 1))
iterations = 1500
alpha = 0.01
# Tests on compute_cost
J = compute_cost(X, y, theta)
print("Expected cost value 32.07; Calculated cost value %f" % J)
theta_temp = np.array([[-1], [2]])
J = compute_cost(X, y, theta_temp)
print("Expected cost value 54.24; Calculated cost value %f" % J)
# Method to calculate gradient descent
def gradient_descent(X, y, theta, alpha, num_iters):
J_history = np.zeros(num_iters)
    for step in range(num_iters):
        # Compute both updates from the current theta before assigning (simultaneous update)
        theta_0 = theta[0, 0] - alpha * np.sum(X.dot(theta) - y) / m
        # Multiply the errors by the x1 column only, not the whole X matrix
        theta_1 = theta[1, 0] - alpha * np.sum((X.dot(theta) - y) * X[:, 1:2]) / m
        theta[0, 0] = theta_0
        theta[1, 0] = theta_1
        J_curr = compute_cost(X, y, theta)
        J_history[step] = J_curr
        print("Current Cost Value %f" % J_curr)
return theta
# Calculate theta
theta = gradient_descent(X, y, theta, alpha, iterations)
print("\nObtained theta values: ")
print(theta)
print(X.shape)
### Plot the resultant linear regression
plt.style.use('seaborn-whitegrid')
plt.xlabel("Population of city in 10,000s")
plt.ylabel("Profit in $10,000s")
plt.title("Scatter plot of training data")
plt.plot(original_X, y, 'rx', X[:, 1], X.dot(theta), 'b-', lw=2)
### Predictions
test1 = np.array([1, 3.5]).reshape(1, 2)
test2 = np.array([1, 7]).reshape(1, 2)
predict1 = test1.dot(theta)
print('For population = 35,000, we predict a profit of %f\n' % (predict1 * 10000))
predict2 = test2.dot(theta)
print('For population = 70,000, we predict a profit of %f\n' % (predict2 * 10000))
###Output
For population = 35,000, we predict a profit of 4973.103521
For population = 70,000, we predict a profit of 45513.377992
###Markdown
Assignment 2.4: Visualizing J(θ) Linear regression with multiple variables Assignment 3.1: Feature Normalization
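The `featureNormalize` function below applies the standard z-score normalization to each feature column, using the column-wise mean $\mu$ and standard deviation $\sigma$ of the training set:
$$x_{\text{norm}} = \frac{x - \mu}{\sigma}$$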
###Code
import matplotlib.pyplot as plt
import numpy as np
data_multi = np.genfromtxt('ex1data2.txt', delimiter=',')
print("Dimentions of training samples: [%d" % data_multi.shape[0], ", %d]" % data_multi.shape[1])
X_multi = data_multi[:, 0:2]
print(X_multi.shape)
y = data_multi[:, 2]
multi = y.shape[0]
print("Size of training samples: %d" % multi)
y = y.reshape(multi, 1)
print('First 10 examples from the dataset: \n')
print(X_multi[0:10,:])
print(y[0:10,:])
def featureNormalize(X):
X_norm = X
mu = np.mean(X, axis = 0)
sigma = np.std(X, axis = 0)
# Tile rows together for matrix operations
mu_matrix = np.tile(mu, (X.shape[0], 1))
sigma_matrix = np.tile(sigma, (X.shape[0], 1))
X_norm = (X_norm - mu_matrix) / sigma_matrix
mu = mu.reshape(1, X.shape[1])
sigma = sigma.reshape(1, X.shape[1])
return X_norm, mu, sigma
X_norm, mu, sigma = featureNormalize(X_multi)
print(mu)
print(sigma)
###Output
[[ 2000.68085106 3.17021277]]
[[ 7.86202619e+02 7.52842809e-01]]
###Markdown
Assignment 3.2: Gradient Descent
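The multivariate update below is the vectorized form of the same rule, where $X$ is the $m \times (n+1)$ design matrix (including the column of ones) and $\theta$ is the parameter vector:
$$\theta := \theta - \frac{\alpha}{m}\, X^{\top}\left(X\theta - y\right)$$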
###Code
### Add x0 column into training dataset
x0 = np.ones((multi, 1))
original_X_multi = X_norm
X_norm = np.append(x0, X_norm, axis=1)
### Method to calculate cost value for multi-variant
def compute_cost_multi(X, y, theta):
T = X.dot(theta) - y
return np.transpose(T).dot(T) / (2 * multi)
### Method to calculate gradient descent for multi-variant
def gradient_descent_multi(X, y, theta, alpha, num_iters):
J_history = np.zeros(num_iters)
for step in range(num_iters):
        # Sum over the sample axis only, so delta has one entry per parameter
        delta = (1 / multi) * np.sum(X * np.tile((X.dot(theta) - y), (1, X.shape[1])), axis=0)
        theta = np.transpose(np.transpose(theta) - alpha * delta)
        J_curr = compute_cost_multi(X, y, theta)
        J_history[step] = J_curr
print("Current Cost Value %f" % J_curr)
return theta
theta = np.zeros((X_norm.shape[1], 1))
iterations = 1500
alpha = 0.01
theta = gradient_descent_multi(X_norm, y, theta, alpha, iterations)
print(theta)
###Output
Current Cost Value 63134365890.513382
Current Cost Value 60875481479.188629
Current Cost Value 58798891981.422997
Current Cost Value 56889885960.304115
Current Cost Value 55134939211.035980
Current Cost Value 53521618949.761116
...
Current Cost Value 35143684833.626068
(per-iteration cost log truncated; the cost decreases monotonically and converges to 35143684833.626068)
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
Current Cost Value 35143684833.626068
[[ 121576.11387415]
[ 121576.11387415]
[ 121576.11387415]]
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST images. The code is written to familiarize you with a typical machine learning pipeline, and with the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository URL here:- Name: Daniel - Email: [email protected] - Link to your forked github repository: https://github.com/dlee1111111/Harvard_BAI
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
work_dir = os.getcwd()
git_dir = '/'.join(work_dir.split('/')[:-1])
print('Your github directory is :%s'%git_dir)
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments. Above we have imported modules (libraries, for those familiar with programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`). Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /net/storage001.ib.cluster/om2/user/smadan/Harvard_BAI/res/models
Loaders are being loaded from: /net/storage001.ib.cluster/om2/user/smadan/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load a ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification, each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to log in to your account!`
###Code
import wandb
wandb.login()
###Output
Failed to query for notebook name, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable
wandb: Currently logged in as: spandanmadan (use `wandb login --relogin` to force relogin)
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing the settings above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with values between 0 and 255. One very common pre-processing step for images is to normalize them to have 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic being the ones imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
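###Markdown
As an optional aside, the sketch below applies the same Normalize transform by hand to a fake all-white image, so you can see the arithmetic: every pixel value x becomes (x - 0.1307) / 0.3081, where 0.1307 and 0.3081 are the mean and standard deviation of MNIST pixel values. The fake image and the variable names here are invented purely for illustration.
###Code
### Optional sketch: what Normalize((0.1307,), (0.3081,)) actually computes.
import numpy as np
import torch
import torchvision
from PIL import Image

to_tensor = torchvision.transforms.ToTensor()
normalize = torchvision.transforms.Normalize((0.1307,), (0.3081,))

### A fake all-white 28x28 image; ToTensor rescales pixel values from 0-255 down to 0-1.
fake_image = Image.fromarray(np.full((28, 28), 255, dtype=np.uint8))

x = to_tensor(fake_image)   # every value is 1.0
x_norm = normalize(x)       # every value becomes (1.0 - 0.1307) / 0.3081, roughly 2.82
print('before:', x.min().item(), x.max().item())
print('after :', x_norm.min().item(), x_norm.max().item())
###Output
_____no_output_____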
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In the future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as an argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /net/storage001.ib.cluster/om2/user/smadan/Harvard_BAI/datasets/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge. They can be as large as a million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a pytorch dataset, and makes it easy to loop over it in batches. So, we leverage this to create a data loader from the MNIST dataset we loaded above. The dataset itself only contains lists of where to find the inputs and outputs, i.e., paths. The data loader defines the logic for loading this information into the GPU/CPU so that it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
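###Markdown
If you are curious what a single mini-batch looks like, the optional sketch below pulls one batch out of the train loader and prints the tensor shapes; the batch_inputs/batch_labels names are local to this sketch. With batch_size = 10 you should see inputs of shape [10, 1, 28, 28] and a vector of 10 labels.
###Code
### Optional sketch: peek at one mini-batch produced by the DataLoader defined above.
batch_inputs, batch_labels = next(iter(data_loaders['train']))
print('inputs :', batch_inputs.shape)   # torch.Size([10, 1, 28, 28]) for batch_size = 10
print('labels :', batch_labels.shape)   # torch.Size([10])
print('first few labels:', batch_labels[:5])
###Output
_____no_output_____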
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like? `get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
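###Markdown
If you cannot open res/models/CustomCNN.py right away, the sketch below shows what a small CNN for 28x28 MNIST digits could look like. This is only an illustration under the assumption of a simple two-convolution architecture; the class name TinyMNISTCNN and every layer choice here are invented for the example, and the real CustomCNN in the repository may differ, so treat that file as the ground truth.
###Code
### Illustrative only: a small CNN of the kind CustomCNN *might* implement.
### This is NOT the actual class from res/models/CustomCNN.py.
import torch.nn as nn

class TinyMNISTCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)   # flatten to (batch, 32*7*7)
        return self.classifier(x)

print(TinyMNISTCNN())
###Output
_____no_output_____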
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
### model.train() puts the model in training mode (layers like dropout and batch norm behave differently during training than during testing).
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_config['use_gpu'] is set to 0 above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the model's weights using the calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
model.eval() ### puts the model in evaluation mode; we won't be updating weights while testing.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
_____no_output_____
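###Markdown
To make the choice of loss above a bit more concrete, here is an optional sketch (independent of the MNIST model) that evaluates cross entropy on two hand-made rows of logits: the per-sample loss is small when the largest logit sits at the true class and large when the network is confidently wrong. The demo_criterion/logits/targets names are invented for this example.
###Code
### Optional sketch: cross entropy on dummy logits, unrelated to the trained model above.
import torch
import torch.nn as nn

demo_criterion = nn.CrossEntropyLoss()
logits = torch.tensor([[4.0, 0.1, 0.1],    # confident and correct (true class 0)
                       [0.1, 0.1, 4.0]])   # confident but wrong (true class is 0)
targets = torch.tensor([0, 0])
print(demo_criterion(logits, targets))     # the mean of one small and one large per-sample loss
###Output
_____no_output_____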
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST images. The code is written to familiarize you with a typical machine learning pipeline, and with the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository URL here:- Name: Katie Collins - Email: [email protected] - Link to your forked github repository: https://github.com/collinskatie/Harvard_BAI
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
work_dir = os.getcwd()
git_dir = '/'.join(work_dir.split('/')[:-1])
print('Your github directory is :%s'%git_dir)
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments. Above we have imported modules (libraries, for those familiar with programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`). Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /om/user/katiemc/Harvard_BAI/res/models
Loaders are being loaded from: /om/user/katiemc/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load a ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification, each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to log in to your account!`
###Code
import wandb
wandb.login()
###Output
Failed to query for notebook name, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable
wandb: Currently logged in as: collinskatie (use `wandb login --relogin` to force relogin)
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing the settings above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with values between 0 and 255. One very common pre-processing step for images is to normalize them to have 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic being the ones imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In the future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as an argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
_____no_output_____
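###Markdown
Before wrapping the dataset in a loader, it can help to look at a single item. The optional sketch below indexes the training set directly and prints what comes back: one (image_tensor, label) pair that has already passed through the transforms defined above.
###Code
### Optional sketch: inspect a single (image, label) pair from the training dataset.
image, label = mnist_dataset['train'][0]
print('image tensor shape:', image.shape)   # torch.Size([1, 28, 28])
print('value range after Normalize: [%.3f, %.3f]' % (image.min().item(), image.max().item()))
print('label:', label)
###Output
_____no_output_____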
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge. They can be as large as a million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a pytorch dataset, and makes it easy to loop over it in batches. So, we leverage this to create a data loader from the MNIST dataset we loaded above. The dataset itself only contains lists of where to find the inputs and outputs, i.e., paths. The data loader defines the logic for loading this information into the GPU/CPU so that it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like? `get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
### model.train() puts the model in training mode (layers like dropout and batch norm behave differently during training than during testing).
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_config['use_gpu'] is set to 0 above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the model's weights using the calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
model.eval() ### puts the model in evaluation mode; we won't be updating weights while testing.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
torch.cuda.is_available()
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
wandb: wandb version 0.10.17 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
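###Markdown
One small extension worth knowing (it is not used in the training code above): model.eval() only switches layers such as dropout and batch norm into evaluation behaviour, it does not by itself turn off gradient bookkeeping. For that you can additionally wrap the forward pass in torch.no_grad(), as in the optional sketch below, which reuses the trained model, the test loader and wandb_config from above; the sample_* names are local to this sketch.
###Code
### Optional sketch: run one test batch without recording gradients, using torch.no_grad().
import torch

model.eval()
with torch.no_grad():   # nothing inside this block is tracked for backpropagation
    sample_inputs, sample_labels = next(iter(data_loaders['test']))
    sample_inputs = sample_inputs.float()
    if wandb_config['use_gpu']:
        sample_inputs = sample_inputs.cuda()
    outputs = model(sample_inputs)
    _, preds = torch.max(outputs, 1)
print('predictions for one test batch:', preds)
###Output
_____no_output_____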
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST images. The code is written to familiarize you with a typical machine learning pipeline, and with the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository URL here:- Name: Madison Harr - Email: [email protected] - Link to your forked github repository: https://github.com/Madison2026/Harvard_BAI
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
work_dir = os.getcwd()
git_dir = '/'.join(work_dir.split('/')[:-1])
print('Your github directory is :%s'%git_dir)
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments. Above we have imported modules (libraries, for those familiar with programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`). Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /Users/madisonharris/Harvard_BAI/res/models
Loaders are being loaded from: /Users/madisonharris/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
import wandb
wandb.login()
###Output
wandb: Currently logged in as: madison1 (use `wandb login --relogin` to force relogin)
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 0
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing the settings above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with values between 0-255. One very common pre-processing step for images is to normalize them to 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic ones being those imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
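### Normalize simply applies (x - mean) / std per channel. As a small illustrative check,
### a ToTensor pixel value of 1.0 maps to about 2.82 and 0.0 maps to about -0.42:
white_pixel_normalized = (1.0 - 0.1307) / 0.3081
black_pixel_normalized = (0.0 - 0.1307) / 0.3081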
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
_____no_output_____
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge. They can be as large as a million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a pytorch dataset, and makes it easy to loop over it in batches. So, we leverage this to create a data loader from our above loaded MNIST dataset. The dataset itself only contains lists of where to find the inputs and outputs, i.e. paths. The data loader defines the logic for loading this information into the GPU/CPU so it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
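### Optional sanity check: one batch from the train loader should contain inputs of shape
### [batch_size, 1, 28, 28] and labels of shape [batch_size].
sample_inputs, sample_labels = next(iter(data_loaders['train']))
assert sample_inputs.shape == (wandb_config['batch_size'], 1, 28, 28)
assert sample_labels.shape == (wandb_config['batch_size'],)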
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
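### For orientation, a CNN for 28 x 28 MNIST digits might look roughly like the sketch below.
### ExampleCNN is only an illustration of the idea -- the class actually used here is
### CustomCNN from res/models/CustomCNN.py, whose exact layers may differ.
class ExampleCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(32 * 7 * 7, num_classes)
    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))   # 28x28 -> 14x14
        x = self.pool(torch.relu(self.conv2(x)))   # 14x14 -> 7x7
        return self.fc(x.flatten(1))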
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
    ### model.train() below puts the model in training mode (layers like dropout behave as they do during training); gradients themselves are tracked by autograd in the forward and backward passes.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts the model in evaluation mode (layers like dropout/batch-norm behave deterministically); weights aren't updated during testing because backward() is never called here.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
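### Other optimizers could be swapped in here as well; purely for illustration, plain SGD
### with momentum would be configured like this (Adam above is what this run actually uses):
optimizer_sgd_example = optim.SGD(model.parameters(), lr = wandb_config['base_lr'], momentum = 0.9)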
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
_____no_output_____
###Markdown
assignment_1
###Code
#assignment_1
#Write Python Program to Read Two Numbers and Print Their Quotient and Remainder
print("This program will ask for two number and print their Quotient and reminder")
number_1 = int(input("Please enter first number: "))
number_2 = int(input("PLease enter second number: "))
remainder = number_1 / number_2
quotient = number_1 % number_2
print("Remainder of "+str(number_1)+" and "+str(number_2)+" is: "+ str(remainder))
print("Quotient of "+str(number_1)+" and "+str(number_2)+" is: "+ str(quotient))
#assignment_1
#Write Python Program to Print all Integers that Aren’t Divisible by Either 2 or 3 and Lie between 1 and 50.
for i in range(1,51):
# print(i)
if i%2 != 0 and i%3 != 0 :
print(f"{i} is not divisiable by 2 and 3")
#assignment_1
#Write Python Program to Exchange the Values of Two Numbers Without Using a Temporary Variable
val1 = int(input("Enter first value: "))
val2 = int(input("Enter second value: "))
val1, val2 = val2, val1
print(f"First Value is: {val1} and second value is: {val2} ")
#assignment_1
#Write Python Program to Exchange the Values of Two Numbers Without Using a Temporary Variable
############################ swap using arithmetic only (no temporary variable)
val1 = int(input("Enter first value: "))
val2 = int(input("Enter second value: "))
#Add both the variables and store it in the first variable.
val1 = val1 + val2
# Subtract the second variable from the first and store it in the second variable.
val2 = val1 - val2
#Then, subtract the second variable (which now holds the original first value) from the first variable and store it in the first variable.
val1 = val1 - val2
print(f"First Value is: {val1} and second value is: {val2} ")
# assignment_1
# Write Python Program to Find the Area of a Triangle Given All Three Sides
print("This program will calculate the area of triangle and will ask for three side of triangle: ")
base = input("Enter base value: ")
hyp = input("Enter value of hypotenuse: ")
perpendicular = input("Enter the value of perpendicular: ")
if not base:
base = input("Enter base value again: ")
if not hyp:
hyp = input("Enter value of hypotenuse again: ")
if not perpendicular:
perpendicular = input("Enter the value of perpendicular again: ")
s = (int(base)+int(hyp)+int(perpendicular))*0.5   # semi-perimeter
area = (s*(s-int(base))*(s-int(hyp))*(s-int(perpendicular)))**0.5   # Heron's formula
print(f"Area of triangle is {area} ")
#assignment_1
#Write Python Program to Print Largest Even and Largest Odd Number in a List
odd = []
even = []
print("This program will find the maximum and minimum from the entered numbers ")
total = int(input("How many elements you want to enter? "))
for i in range(total):
number = int(input("Enter number please: "))
if number%2 == 0:
even.append(number)
else:
odd.append(number)
if len(odd) != 0:
print(f"Max odd number in list is: {max(odd)}")
print(f"Odd list length is: {len(odd)}")
print("Sorted odd list is: ")
odd.sort()
print(odd)
if len(even) != 0:
print(f"Maximum even number in list is: {max(even)} ")
print(f"Even list length is: {len(even)}")
print("Sorted even list is: ")
even.sort()
print(even)
#assignment_1
#Write Python Program to Find the Second Largest Number in a List
a_list = []
print("This program will calculate the second largest number from the entered list of numbers ")
total = int(input("How many numbers your want to insert? "))
for i in range(total):
number = int(input(f"Enter element {i+1}: "))
a_list.append(number)
a_list.sort()
print(f"Second largest number in the list is: {a_list[-2]}")
#assignment_1
#Write Python Program to Find the Union of two Lists
list_1 = []
list_2 = []
print("This program will result the union of two lists ")
number = int(input("How many element you want to enter in first list? "))
for i in range(number):
element = int(input(f"Insert {i+1} element in First list: "))
list_1.append(element)
number = int(input("How many elements you wan to enter in second list? "))
for i in range(number):
element = int(input(f"Inser {i+1} element in second list: "))
list_2.append(element)
print("Union of two lists is: ")
print(list(set(list_1)|set(list_2)))
###Output
This program will print the union of two lists 
How many elements do you want to enter in the first list? 4
Insert 1 element in First list: 1
Insert 2 element in First list: 2
Insert 3 element in First list: 3
Insert 4 element in First list: 4
How many elements do you want to enter in the second list? 5
Insert 1 element in second list: 1
Insert 2 element in second list: 2
Insert 3 element in second list: 3
Insert 4 element in second list: 4
Insert 5 element in second list: 5
Union of two lists is:
[1, 2, 3, 4, 5]
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name: saztd- Email:- Link to your forked github repository:
###Code
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
# !pip install scipy
from google.colab import drive
drive.mount('/content/drive')
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
git_dir = '/content/drive/MyDrive/Harvard_BAI'
print('Your github directory is :%s'%git_dir)
os.chdir(git_dir)
!pwd
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments.Above we have imported modules (libraries for those familiar to programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`).Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then be flexibly do detection instead of classication (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
'%s/res/'%git_dir
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
_____no_output_____
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
!pip install wandb
import wandb
wandb.login()
###Output
_____no_output_____
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand this works. In next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading which for the MNIST dataset for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are stored with values between 0-255 usually. One very common pre-processing for images is to normalize to be 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic one being those imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
_____no_output_____
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge. Can be as large as million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a pytorch dataset, and makes it easy to loop over it in batches. So, we leverage this to create a data loader from our above loaded MNIST dataset. The dataset itself only contains lists of where to find the inputs and outputs i.e. paths. The data loader defines the logic on loading this information into the GPU/CPU and so it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
    ### model.train() below puts the model in training mode (layers like dropout behave as they do during training); gradients themselves are tracked by autograd in the forward and backward passes.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts the model in evaluation mode (layers like dropout/batch-norm behave deterministically); weights aren't updated during testing because backward() is never called here.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
###Output
_____no_output_____
###Markdown
Make sure your runtime is GPU. If you changed your run time, make sure to run your code again from the top.
###Code
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
wandb: Currently logged in as: spandanmadan (use `wandb login --relogin` to force relogin)
###Markdown
Assignment 1 Quick intro + checking code works on your system Learning Outcomes: The goal of this assignment is two-fold:- This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code.- This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! Please specify your Name, Email ID and forked repository url here:- Name: Alexander Davies- Email: [email protected] Link to your forked github repository: https://github.com/xanderdavies/Harvard_BAI
###Code
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
git_install_dir = '/content/drive/MyDrive/'
!cd /content/drive/MyDrive/
# !git clone https://github.com/xanderdavies/Harvard_BAI.git neuro_140/Harvard_BAI
git_dir = '/content/drive/MyDrive/neuro_140/Harvard_BAI/'
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
work_dir = os.getcwd()
print('Your github directory is :%s'%git_dir)
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
### Import PyTorch and its components ###
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Let's load our flexible code-base which you will build on for your research projects in future assignments.Above we have imported modules (libraries for those familiar to programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`).Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then be flexibly do detection instead of classication (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.
###Code
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
###Output
Models are being loaded from: /content/drive/MyDrive/neuro_140/Harvard_BAI/res/models
Loaders are being loaded from: /content/drive/MyDrive/neuro_140/Harvard_BAI/res/loader
###Markdown
See those paths printed above? `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files. Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box. So, to expand further you will be adding files to the folders above. Setting up Weights and Biases for tracking your experiments. We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
###Code
!pip install wandb
import wandb
wandb.login()
###Output
_____no_output_____
###Markdown
Specifying settings/hyperparameters for our code below
###Code
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
###Output
_____no_output_____
###Markdown
By changing above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on. Data Loading The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand this works. In next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading which for the MNIST dataset for which PyTorch provides easy functions. Let's load MNIST. The first time you run it, the dataset gets downloaded. Data Transforms tell PyTorch how to pre-process your data. Recall that images are stored with values between 0-255 usually. One very common pre-processing for images is to normalize to be 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic one being those imposed on the image data while loading it.
###Code
data_transforms = {}
data_transforms['train'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
data_transforms['test'] = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))])
###Output
_____no_output_____
###Markdown
`torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below.
###Code
mnist_dataset = {}
mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train'])
mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test'])
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /content/drive/MyDrive/neuro_140/Harvard_BAI//datasets/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Dataset vs Dataloader Most deep learning datasets are huge. Can be as large as million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even CPU) and do forward or backward passes on the network. So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a pytorch dataset, and makes it easy to loop over it in batches. So, we leverage this to create a data loader from our above loaded MNIST dataset. The dataset itself only contains lists of where to find the inputs and outputs i.e. paths. The data loader defines the logic on loading this information into the GPU/CPU and so it can be passed into the neural net.
###Code
data_loaders = {}
data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True)
data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False)
data_sizes = {}
data_sizes['train'] = len(mnist_dataset['train'])
data_sizes['test'] = len(mnist_dataset['test'])
###Output
_____no_output_____
###Markdown
We will use the `get_model` functionality to load a CNN architecture.
###Code
model = get_model(wandb_config['model_arch'], wandb_config['num_classes'])
###Output
_____no_output_____
###Markdown
Curious what the model architecture looks like?`get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does.
###Code
layout = widgets.Layout(width='auto', height='90px') #set width and height
button = widgets.Button(description="Read the function?\n Click me!", layout=layout)
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py")
print("This is our neural network model.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Below we have the function which trains, tests and returns the best model weights.
###Code
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
###Output
_____no_output_____
###Markdown
The different steps of the train model function are annotated below inside the function. Read them step by step
###Code
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
    ### model.train() below puts the model in training mode (layers like dropout behave as they do during training); gradients themselves are tracked by autograd in the forward and backward passes.
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']): # had not seen tqdm before, amazing!
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
### optimizer.step() updated the models weights using calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts the model in evaluation mode (layers like dropout/batch-norm behave deterministically); weights aren't updated during testing because backward() is never called here.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['test']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['test'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### tells what optimizer to use. There are many options, we here choose Adam.
### the main difference between optimizers is that they vary in how weights are updated based on calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
###Output
wandb: Currently logged in as: xanderdavies (use `wandb login --relogin` to force relogin)
###Markdown
Congratulations!You just completed your first deep learning program - image classification for MNIST. This wraps up assignment 1. In the next assignment, we will see how you can make changes to above mentioned folders/files to adapt this code-base to your own research project. Deliverables for Assignment 1: Please run this assignment through to the end, and then make two submissions:- Download this notebook as an HTML file. Click File ---> Download as ---> HTML. Submit this on canvas.- Add, commit and push these changes to your github repository.
###Code
###Output
_____no_output_____ |
Actividad 2 Las formas de las hojas y girasoles ANSWERS.ipynb | ###Markdown
Activity 2: The shapes of leaves and sunflowers 🍃🌻***Translation by Marilyn Vásquez-Cruz and Omar Andres Gonzalez Iturbe (Langebio-Cinvestav, Irapuato, México)*** ___ The shape of data, the shape of leaves 🍃: visualizing data using matplotlib Lists store data and, in most cases, the data are encoded numerically. Lists are a powerful tool in Python for storing and manipulating data. But before analyzing data, we need to know what they are, and to get to know our data we need to visualize them. When we see something, we first notice its shape: its form, outline, structure, and geometry. All data have a shape, and all shapes are data. In this exercise we will learn to visualize our data using Matplotlib. Matplotlib is a module. Modules in Python are collections of functions and statements. Modules are generally designed to address a specific functionality or a niche of interest to a community. The modules we will encounter in this course are generalist and include matplotlib, pandas, and math. We learned in the previous lesson that plant species are diverse and numerous, occupying the oceans, rivers, and land as single-celled organisms, large and elaborate multicellular organisms, symbionts, and even colonies. Let's visualize some of this natural variation, again using the grapevine as an example. The wine we drink is made mainly from the berries of *Vitis vinifera*. But grapevines themselves are chimeras: that is, most vines used for wine production are a surgical combination of two different species produced by grafting. The shoots may be *V. vinifera*, but the roots are wild relatives that are disease resistant and well adapted to particular soil types. Grapevine leaf shapes lend themselves readily to quantitative analysis. Each grapevine leaf has five major lobes: the leaf tip, two distal lobes, and two proximal lobes (proximal is near the base of the leaf and distal toward the tip). In addition, there are distal and proximal sinuses (the indentations between the lobes). Some leaves are highly dissected (meaning very lobed) and others more entire (without lobes). Because these points are present on every grapevine leaf, there is a correspondence between the points, which allows us to compare grapevine leaf shapes with one another. In the cell below there are lists of "x" and "y" values for the averaged leaves of *Vitis* and *Ampelopsis* species, as well as the overall averaged grapevine leaf. Run the cell below to read in these data and make some plots yourself, next!
###Code
# This cell contains lists of "x" and "y" values for
# the leaf outlines of 15 Vitis and Ampelopsis species.
# Each list is named with the genus initial and an abbreviated species epithet.
# Ampelopsis acoutifolia
Aaco_x = [13.81197507,-14.58128237,-135.3576208,-3.48017966,-285.0289837,-4.874351136,-126.9904669,10.54932685,170.4482865,40.82555888,205.158889,124.6343366,13.81197507]
Aaco_y = [27.83951365,148.6870909,157.2273013,35.73510131,-30.02915903,9.54075375,-280.2095191,0.200400495,-234.1044141,20.41991159,41.33121759,96.75084391,27.83951365]
# Ampelopsis brevipedunculata
Abre_x = [40.00325135,-81.37047548,-186.835592,-139.3272085,-287.5337006,-89.61277053,-134.9263008,47.43458846,144.6301719,163.5438321,225.9684307,204.719859,40.00325135]
Abre_y = [96.8926433,203.3273536,134.0172438,99.7070006,-81.35389923,-17.90701212,-335.624547,-80.02986776,-262.0385648,-27.31979918,-42.24377429,82.08218538,96.8926433]
# Ampelopsis cordata
Acor_x = [41.26484889,-99.68651819,-203.5550411,-181.4080156,-226.4063517,-174.1104713,-142.2197176,81.25359041,113.9079805,205.9930561,230.8000389,226.6914467,41.26484889]
Acor_y = [105.1580727,209.8514829,131.8410788,111.9833751,-70.79184424,-60.25829908,-326.5994491,-170.6003249,-223.0042176,-44.58524791,-45.80679706,71.64004113,105.1580727]
# Vitis acerifolia
Vace_x = [47.55748802,-102.1666218,-218.3415108,-183.5085694,-234.8755094,-152.1581487,-113.8943819,53.48770667,84.83899263,206.557697,240.589609,243.5717264,47.55748802]
Vace_y = [111.9982016,241.5287104,125.6905949,110.350904,-108.1932176,-74.67866027,-283.2678229,-161.1592736,-243.1116283,-54.52616737,-68.953011,95.74558526,111.9982016]
# Vitis aestivalis
Vaes_x = [34.13897003,-59.06591289,-192.0336456,-169.5476603,-261.8813454,-154.4511279,-132.6031657,56.04516606,119.9789735,205.0834004,246.928663,209.2801298,34.13897003]
Vaes_y = [80.26320349,227.2107718,155.0919347,123.2629647,-86.47992069,-70.12024178,-317.80585,-156.8388147,-247.9415158,-31.73423173,-28.37195726,120.2692722,80.26320349]
# Vitis amurensis
Vamu_x = [36.94310365,-63.29959989,-190.35653,-180.9243738,-255.6224889,-172.8141253,-123.8350652,60.05314983,113.598307,218.8144919,238.6851057,210.9383524,36.94310365]
Vamu_y = [87.06305005,230.9299013,148.431809,128.4087423,-88.67075769,-84.47396366,-298.5959647,-181.4317592,-241.2343437,-37.53203788,-30.63962885,115.7064075,87.06305005]
# Vitis cinerea
Vcin_x = [41.13786595,-78.14668163,-195.0747469,-185.81005,-238.1427795,-181.5728492,-127.6203541,65.24059352,103.8414516,214.1320626,233.1457326,222.7549456,41.13786595]
Vcin_y = [98.40296936,233.6652514,136.6641628,117.9719613,-86.41814245,-86.14771041,-310.2979998,-190.9232443,-230.5027809,-50.27050419,-42.94757891,107.8271097,98.40296936]
# Vitis coignetiae
Vcoi_x = [36.29348151,-51.46279315,-183.6256382,-176.7604659,-253.3454527,-191.8067468,-123.413666,66.11061054,111.4950714,215.7579824,236.7136632,197.5512918,36.29348151]
Vcoi_y = [86.42303732,222.7808161,150.0993737,127.4697835,-85.23634837,-93.3122815,-301.819185,-203.7840759,-239.8063423,-35.30522815,-25.15349577,121.1295308,86.42303732]
# Vitis labrusca
Vlab_x = [33.83997254,-63.35703212,-191.4861127,-184.3259869,-257.3706479,-179.056825,-124.0669143,68.23202857,123.213115,222.8908464,243.056641,205.2845683,33.83997254]
Vlab_y = [81.34077013,222.8158575,153.7885633,132.4995037,-80.2253417,-80.67586345,-296.8245229,-185.0516494,-238.8655248,-38.2316427,-29.21879919,111.424232,81.34077013]
# Vitis palmata
Vpal_x = [31.97986731,-68.77672824,-189.26295,-164.4563595,-260.2149738,-149.3150935,-131.5419837,65.86738801,127.3624336,202.6655429,240.0477009,219.0385121,31.97986731]
Vpal_y = [78.75737572,232.9714762,149.7873103,124.8439354,-71.09770423,-56.52814058,-329.0863141,-149.308084,-231.1263997,-33.22358667,-33.0517181,114.3110289,78.75737572]
# Vitis piasezkii
Vpia_x = [18.70342336,-28.68239983,-133.7834969,-32.76128224,-305.3467215,-7.429223951,-146.2207875,21.81934547,163.1265031,65.21695943,203.4902238,139.6214571,18.70342336]
Vpia_y = [41.05946323,160.3488167,157.9775135,64.93177072,-59.68750782,18.85909594,-362.1788431,7.556816875,-253.8796355,21.33965973,17.69878265,93.72614181,41.05946323]
# Vitis riparia
Vrip_x = [44.65674776,-85.47236587,-205.1031097,-174.088415,-239.9704675,-161.1277029,-125.4900046,58.08609552,89.2307808,204.9127104,236.0709257,229.8098573,44.65674776]
Vrip_y = [106.5948187,235.8791214,130.341464,116.8318515,-110.5506636,-76.73562488,-300.1092173,-169.0146383,-247.0956802,-42.2253331,-54.23469169,103.9732427,106.5948187]
# Vitis rupestris
Vrup_x = [51.29642881,-132.9650549,-227.6059714,-201.31783,-207.965755,-149.2265432,-98.64097334,48.33648281,75.91437502,208.7784453,237.4842778,263.3479415,51.29642881]
Vrup_y = [123.7557878,233.5830974,109.6847731,95.43848563,-95.82512925,-80.06286127,-236.7411071,-163.7331427,-213.2925544,-77.04510916,-86.40789274,69.86940263,123.7557878]
# Vitis thunbergii
Vthu_x = [22.61260382,-3.204532702,-150.3627277,-79.39836351,-271.8885204,-70.74704134,-168.6002498,36.68300146,172.978549,116.9174032,227.8346055,148.3453958,22.61260382]
Vthu_y = [50.82336098,194.3865012,181.2536906,86.8671412,-57.33457233,-23.85610668,-334.279317,-67.36542042,-234.1205595,7.151772223,28.16801823,138.9705667,50.82336098]
# Vitis vulpina
Vvul_x = [39.44771371,-83.62933643,-194.2000993,-175.9638941,-227.8323987,-180.8587446,-135.986247,71.94543538,99.8983207,207.0950158,231.7808734,222.7645396,39.44771371]
Vvul_y = [96.44934373,230.0148139,136.3702366,119.8017341,-83.09830126,-75.38247957,-332.9188424,-184.4324688,-222.8532423,-41.89574792,-44.70218529,101.9138055,96.44934373]
# Average grape leaf
avg_x = [35.60510804,-67.88314703,-186.9749654,-149.5049396,-254.2293735,-135.3520852,-130.4632741,54.4100207,120.7064692,180.696724,232.2550642,204.8782463,35.60510804]
avg_y = [84.95317026,215.7238025,143.85314,106.742536,-80.06000256,-57.00477464,-309.8290405,-137.6340316,-237.7960327,-31.10365842,-30.0828468,103.1501279,84.95317026]
###Output
_____no_output_____
###Markdown
In the previous lesson, you plotted the leaf of a single species. In this activity, let's plot them all! Using subplots, plot each of the 15 species plus the overall average grapevine leaf. Your plot will have 4 rows and 4 columns. Give your overall figure a title. For each subplot, use the species name as the title and put it in italics (don't use italics for the overall averaged leaf, and give it an appropriate title as well). Remove the axes from your figure. Plot the outline, the points, and the fill of each leaf. Use alpha appropriately to better visualize the shapes. First, always import `matplotlib`! Remember, the general code structure for calling a subplot is:```pythonfig = plt.figure(figsize=(width, height))ax_array = fig.subplots(nrows, ncols)```Then refer to a specific subplot by indexing the rows and columns of the array you created (for example, `ax_array[row, col]`). Also, after coding your figure, make sure to save the plot using the code below! Simply add a single line after the code for your plot. You can use a variety of file formats, but give the file name as `.jpg`, `.pdf`, `.png`, or `.tiff` (`.jpg` is only shown here as an example). The file will be saved in your home directory (or wherever you are running Jupyter) in the format you specify. You can also provide a path to a different directory where you want to save it (for example, `./Desktop/my_file.jpg`).```pythonplt.savefig("the_file_name.jpg")```Be creative and have fun with this figure! It is meant for you to explore the functionality of `matplotlib`. There are no wrong answers!***Hint***: to save some time, first set up the overall 4 x 4 subplots, but only code the first plot. See how the sizes of the different parameters work. Then copy and paste the code and just swap in different parameters. This will save you time compared with typing everything out or going back and changing the font size for each subplot! Alternatively, create a variable at the beginning of your code for a parameter (such as the scatter plot point size) and set the attribute to that variable. Then, if you need to change the value, you can change all the subplots at the same time.
###Code
# Place your answer here
###Output
_____no_output_____
###Markdown
Below is an example of one possible plot you could make. Again, be creative in your use of matplotlib! ![Notebook1_grapevine_leaves.jpg](attachment:Notebook1_grapevine_leaves.jpg)
###Code
### ANSWER ###
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(10,10))
ax_array = fig.subplots(4,4)
fig.suptitle("The shapes of grapevine leaves", fontsize=24)
ax_array[0,0].plot(Aaco_x, Aaco_y, color="red")
ax_array[0,0].fill(Aaco_x, Aaco_y, color="gray", alpha=0.5)
ax_array[0,0].scatter(Aaco_x, Aaco_y, color="limegreen", s=200, alpha=0.5)
ax_array[0,0].set_title("Ampelopsis acoutifolia", fontsize=10, style="italic")
ax_array[0,0].set_aspect('equal', 'datalim')
ax_array[0,0].axis('off')
ax_array[0,1].plot(Abre_x, Abre_y, color="orange")
ax_array[0,1].fill(Abre_x, Abre_y, color="peru", alpha=0.5)
ax_array[0,1].scatter(Abre_x, Abre_y, color="gold", s=200, alpha=0.5)
ax_array[0,1].set_title("Ampelopsis brevipedculata", fontsize=10, style="italic")
ax_array[0,1].set_aspect('equal', 'datalim')
ax_array[0,1].axis('off')
ax_array[0,2].plot(Acor_x, Acor_y, color="gold")
ax_array[0,2].fill(Acor_x, Acor_y, color="darkorchid", alpha=0.5)
ax_array[0,2].scatter(Acor_x, Acor_y, color="orange", s=200, alpha=0.5)
ax_array[0,2].set_title("Ampelopsis cordata", fontsize=10, style="italic")
ax_array[0,2].set_aspect('equal', 'datalim')
ax_array[0,2].axis('off')
ax_array[0,3].plot(Vace_x, Vace_y, color="limegreen")
ax_array[0,3].fill(Vace_x, Vace_y, color="blue", alpha=0.5)
ax_array[0,3].scatter(Vace_x, Vace_y, color="red", s=200, alpha=0.5)
ax_array[0,3].set_title("Vitis acerifolia", fontsize=10, style="italic")
ax_array[0,3].set_aspect('equal', 'datalim')
ax_array[0,3].axis('off')
ax_array[1,0].plot(Vaes_x, Vaes_y, color="blue")
ax_array[1,0].fill(Vaes_x, Vaes_y, color="limegreen", alpha=0.5)
ax_array[1,0].scatter(Vaes_x, Vaes_y, color="gray", s=200, alpha=0.5)
ax_array[1,0].set_title("Vitis aestivalis", fontsize=10, style="italic")
ax_array[1,0].set_aspect('equal', 'datalim')
ax_array[1,0].axis('off')
ax_array[1,1].plot(Vamu_x, Vamu_y, color="darkorchid")
ax_array[1,1].fill(Vamu_x, Vamu_y, color="gold", alpha=0.5)
ax_array[1,1].scatter(Vamu_x, Vamu_y, color="peru", s=200, alpha=0.5)
ax_array[1,1].set_title("Vitis amurensis", fontsize=10, style="italic")
ax_array[1,1].set_aspect('equal', 'datalim')
ax_array[1,1].axis('off')
ax_array[1,2].plot(Vcin_x, Vcin_y, color="peru")
ax_array[1,2].fill(Vcin_x, Vcin_y, color="orange", alpha=0.5)
ax_array[1,2].scatter(Vcin_x, Vcin_y, color="darkorchid", s=200, alpha=0.5)
ax_array[1,2].set_title("Vitis cinerea", fontsize=10, style="italic")
ax_array[1,2].set_aspect('equal', 'datalim')
ax_array[1,2].axis('off')
ax_array[1,3].plot(Vcoi_x, Vcoi_y, color="gray")
ax_array[1,3].fill(Vcoi_x, Vcoi_y, color="red", alpha=0.5)
ax_array[1,3].scatter(Vcoi_x, Vcoi_y, color="blue", s=200, alpha=0.5)
ax_array[1,3].set_title("Vitis coignetiae", fontsize=10, style="italic")
ax_array[1,3].set_aspect('equal', 'datalim')
ax_array[1,3].axis('off')
ax_array[2,0].plot(Vlab_x, Vlab_y, color="limegreen")
ax_array[2,0].fill(Vlab_x, Vlab_y, color="red", alpha=0.5)
ax_array[2,0].scatter(Vlab_x, Vlab_y, color="darkorchid", s=200, alpha=0.5)
ax_array[2,0].set_title("Vitis labrusca", fontsize=10, style="italic")
ax_array[2,0].set_aspect('equal', 'datalim')
ax_array[2,0].axis('off')
ax_array[2,1].plot(Vpal_x, Vpal_y, color="gold")
ax_array[2,1].fill(Vpal_x, Vpal_y, color="orange", alpha=0.5)
ax_array[2,1].scatter(Vpal_x, Vpal_y, color="peru", s=200, alpha=0.5)
ax_array[2,1].set_title("Vitis palmata", fontsize=10, style="italic")
ax_array[2,1].set_aspect('equal', 'datalim')
ax_array[2,1].axis('off')
ax_array[2,2].plot(Vpia_x, Vpia_y, color="orange")
ax_array[2,2].fill(Vpia_x, Vpia_y, color="gold", alpha=0.5)
ax_array[2,2].scatter(Vpia_x, Vpia_y, color="gray", s=200, alpha=0.5)
ax_array[2,2].set_title("Vitis piasezkii", fontsize=10, style="italic")
ax_array[2,2].set_aspect('equal', 'datalim')
ax_array[2,2].axis('off')
ax_array[2,3].plot(Vrip_x, Vrip_y, color="red")
ax_array[2,3].fill(Vrip_x, Vrip_y, color="limegreen", alpha=0.5)
ax_array[2,3].scatter(Vrip_x, Vrip_y, color="blue", s=200, alpha=0.5)
ax_array[2,3].set_title("Vitis riparia", fontsize=10, style="italic")
ax_array[2,3].set_aspect('equal', 'datalim')
ax_array[2,3].axis('off')
ax_array[3,0].plot(Vrup_x, Vrup_y, color="gray")
ax_array[3,0].fill(Vrup_x, Vrup_y, color="blue", alpha=0.5)
ax_array[3,0].scatter(Vrup_x, Vrup_y, color="limegreen", s=200, alpha=0.5)
ax_array[3,0].set_title("Vitis rupestris", fontsize=10, style="italic")
ax_array[3,0].set_aspect('equal', 'datalim')
ax_array[3,0].axis('off')
ax_array[3,1].plot(Vthu_x, Vthu_y, color="peru")
ax_array[3,1].fill(Vthu_x, Vthu_y, color="darkorchid", alpha=0.5)
ax_array[3,1].scatter(Vthu_x, Vthu_y, color="gold", s=200, alpha=0.5)
ax_array[3,1].set_title("Vitis thunbergii", fontsize=10, style="italic")
ax_array[3,1].set_aspect('equal', 'datalim')
ax_array[3,1].axis('off')
ax_array[3,2].plot(Vvul_x, Vvul_y, color="darkorchid")
ax_array[3,2].fill(Vvul_x, Vvul_y, color="peru", alpha=0.5)
ax_array[3,2].scatter(Vvul_x, Vvul_y, color="orange", s=200, alpha=0.5)
ax_array[3,2].set_title("Vitis vulpina", fontsize=10, style="italic")
ax_array[3,2].set_aspect('equal', 'datalim')
ax_array[3,2].axis('off')
ax_array[3,3].plot(avg_x, avg_y, color="blue")
ax_array[3,3].fill(avg_x, avg_y, color="gray", alpha=0.5)
ax_array[3,3].scatter(avg_x, avg_y, color="red", s=200, alpha=0.5)
ax_array[3,3].set_title("Overall average leaf", fontsize=10)
ax_array[3,3].set_aspect('equal', 'datalim')
ax_array[3,3].axis('off')
#plt.savefig("./Desktop/grapevine_leaves.jpg")
###Output
_____no_output_____
###Markdown
___ The shape of sunflowers 🌻

In the next lesson and activity, we will be learning about loops, a way to automate repetitive tasks. We will use loops to compute the Fibonacci sequence, the golden angle, and the growth model of the sunflower. In many ways, *plants are computers*, producing leaves and other organs iteratively in a precise mathematical order. When we use Python to model their growth, we are the ones following their lead. The ability to visualize and examine data is fundamental to all of your computational work. In this exercise, you will visualize the points of a sunflower as a preview of the upcoming lessons, using different `matplotlib` functions. You will also use the indexing and slicing techniques you learned in the previous lesson to look at the 'phyllotaxy', the spiral curves (parastichies) that emerge when you observe a sunflower's pattern. You will use loops to generate the data below in the next lesson! For now, run the cell below to load the x and y coordinates of a sunflower.
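Before that, here is a minimal sketch (not the code used to build the lists in the next cell) of how such points can be generated, assuming Vogel's model of sunflower phyllotaxy: the nth point sits at a radius proportional to √n and is rotated by the golden angle. The scaling constant and point count below are arbitrary choices.

```python
import math

golden_angle = math.pi * (3 - math.sqrt(5))   # ~137.5 degrees

xs, ys = [], []
for n in range(100):                 # number of points: arbitrary choice
    r = 0.5 * math.sqrt(n)           # c = 0.5: arbitrary scaling constant
    theta = n * golden_angle
    xs.append(r * math.cos(theta))
    ys.append(r * math.sin(theta))
```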
###Code
sun_xlist = [0.0,-0.7373688780783197,0.12363864559502138,1.053847020514727,-1.9694269706308574,1.8866941955758958,
-0.6358980820385529,-1.2194453649142762,2.6568018333748893,-2.7730366684134142,1.3403187214918457,
0.9926122841712306,-2.997179549141454,3.5214545813198486,-2.1519372768917857,-0.4977197614011282,
3.0585959818241726,-4.119584716091113,3.0073085133506017,-0.20134384491006577,-2.8653384038891687,
4.541650602805144,-3.8501668802872424,1.0525956694116372,2.4356789070709124,-4.7634638859834535,
4.628844704413331,-2.006032227609403,-1.7909211243064767,4.766891005951,-5.296312842870197,
3.0112570857032295,0.9581363211912816,-4.541784358263451,5.811043126965286,-4.017487592669849,
0.029269662980670484,4.0869179180944695,-6.13810860306323,4.974512119299716,-1.131991572873186,
-3.4103397878411568,6.250267164954252,-5.834063230160405,2.305829117107365,2.5292790181008407,
-6.128892214086734,6.5513509901593245,-3.503054434163048,-1.4696666530084994,5.764667212478689,
-7.086605355446326,4.674016622598716,0.2653106694985794,-5.157991090299893,7.406523200751739,
-5.76888817488349,1.043236970610896,4.319062423897526,-7.48554069915315,6.739471132676982,
-2.4100779669432555,-3.267627295640393,7.306868734670486,-7.5409873950070185,3.785580104670517,
2.0323907649727597,-6.86324298562459,8.13378251542539,-5.1181480476641,-0.6501056102210901,
6.157353364008793,-8.484877515776738,6.356090828162586,-0.8356354241304721,-5.201930418721218,
8.569309573315453,-7.449520848340097,2.3758663326403617,4.019479292270886,-8.371210207962406,
8.352217739526045,-3.917926138387042,-2.6416647522428898,7.884578820243296,-9.0233911389071,
5.4072932931621756,1.1083634450004225,-7.113719950202413,9.42927937071088,-6.789514554197385,
0.5335884843254842,6.073324177924158,-9.544526094990951,8.012162670581947,-2.2319132418077428,
-4.788184831492389,9.35328406571534,-9.026755434325707,3.9306258386646395]
sun_ylist = [0.0,0.6754902942615238,-1.4087985964343621,1.3745568221620497,-0.3483639007586233,-1.2001604111035422,
2.3655091691345627,-2.347967845177844,0.9702597684001061,1.1446692254248088,-2.864183256151475,
3.1646043754808235,-1.7369268119895633,-0.7741819112466066,3.0609093348747796,-3.8408690473785754,
2.577787931535297,0.17035776163269098,-2.992673638325601,4.354246278762472,-3.4336330367699275,
0.6110726651059321,2.6788458324321702,-4.678893283324152,4.250584461182937,-1.519674901724516,
-2.1386436595245724,4.79331145470357,-4.97921695917268,2.5053443151358374,1.3960910681070222,
-4.683196639454945,5.575121056086051,-3.5174130896204754,-0.48143304472118015,4.342786368536199,
-5.999928606811,4.505230508060421,-0.5688785256987186,-3.7754773439844995,6.222426783735112,
-5.419371045745764,1.7129391018640276,2.993944927097081,-6.2195781273893385,6.213110947713299,
-2.904596396766168,-2.0198515301225584,5.977341351411227,-6.843981292276798,4.09494956371543,
0.8831899773884733,-5.49122651250801,7.275273895094961,-5.23403552838409,0.37870051059672993,
4.766542691059153,-7.47740975359453,6.272615066977762,-1.7224054230412074,-3.818314923079766,
7.429099823885439,-7.163980168648281,3.0999466599112546,2.67086298943717,-7.118242990452261,
7.865709616967401,-4.46048155699302,-1.3570490012376957,6.5425194353698455,-8.341304615919446,
5.752130001202128,-0.08278612363409138,-5.709650548338964,8.561641982583614,-6.923865966263566,
1.6021652338894832,4.637309471034498,-8.506189462351873,7.927407282271141,-3.1500539128887963,
-3.352679351137899,8.163936230406483,-8.719037064765669,4.6726241906845924,1.8916691979024318,
-7.5340015424556,9.261292052072799,-6.115144190457997,-0.2978095852826337,6.625895571041086,
-9.52445711468088,7.423929783466559,-1.3791379996348347,-5.4594184067657965,9.48781130087668,
-8.548291409367499,3.0848139629539255,4.064195655830474,-9.140578784542141]
###Output
_____no_output_____
###Markdown
In the cell below, use the `plt.plot()` function with the $x$ and $y$ values from the sun_xlist and sun_ylist lists to plot the respective $x$ and $y$ coordinates. Adjust the line width, line style, color, and alpha to your liking to highlight any patterns you see. Remember to set an equal aspect ratio using `plt.axes().set_aspect('equal', 'box')`. What patterns do you see? Do you see different patterns using different line widths?
###Code
# Place your answer here
### ANSWER ###
plt.plot(sun_xlist, sun_ylist, linewidth=0.4, alpha=1, c="k", linestyle="-")
plt.axes().set_aspect('equal', 'box')
###Output
/Users/chitwood/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
after removing the cwd from sys.path.
###Markdown
Perhaps lines are not the best way to visualize sunflowers. Use `plt.scatter()` to visualize the same data, again choosing the color, point size, and alpha that let you reveal the patterns *you* see.
###Code
# Place your answer here
### ANSWER ###
plt.scatter(sun_xlist, sun_ylist, s=50, alpha=0.6, c="k")
plt.axes().set_aspect('equal', 'box')
###Output
/Users/chitwood/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
after removing the cwd from sys.path.
###Markdown
Recall that every nth term of the Fibonacci sequence is divisible by the nth Fibonacci number. When you look at a sunflower, part of the pattern you see are "arms" or "arcs" radiating as curves from the center of the sunflower out to the edges. These arcs are called *parastichies*. They are defined by taking every nth point of the flower, where n is a number from the Fibonacci sequence. If you plot every nth point for the next number in the Fibonacci sequence, it will produce arms that curve in the opposite direction.

In the cell below, use `plt.scatter()` again to plot the sunflower points. Then, on the following lines, use slicing to select every nth element, where n is a term of the Fibonacci sequence. After using `plt.scatter()` to plot every nth term, use color to make the plot of all the points less conspicuous, and use `alpha = 1` and larger point sizes in the *parastichy* plots to highlight them. Use different colors for the opposite parastichies. For the slices, start with the first of the sunflower points, include all points to the end, and choose two successive terms of the Fibonacci sequence as the step sizes. Remember that for each parastichy you must use the same slice for both lists (there must be the same number of $x$ and $y$ values for `matplotlib` to plot the points).

Your answer should look something like the plot below. Put the code for your parastichies in the following cell. ![download.png](attachment:download.png)
###Code
# Place your answer here
### ANSWER ###
plt.title("Example parastichy plot", fontsize=16)
plt.scatter(sun_xlist, sun_ylist, s=40, alpha=0.1, c="k")
plt.scatter(sun_xlist[0::2], sun_ylist[0::2], s=50, alpha=0.6, c="darkorange")
plt.scatter(sun_xlist[0::3], sun_ylist[0::3], s=60, alpha=0.6, c="magenta")
plt.axes().set_aspect('equal', 'box')
plt.axis("off")
###Output
/Users/chitwood/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:7: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
import sys
|
jupyter/quick_start_google_colab.ipynb | ###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) Spark NLP Quick Start: How to use Spark NLP pretrained pipelines [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/quick_start_google_colab.ipynb) We will first set up the runtime environment, then load pretrained Entity Recognition and Sentiment Analysis models and give them a quick test. Feel free to test the models on your own sentences / datasets.
###Code
#This cell creates required runtime environment
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://apache.crihan.fr/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz
!tar xf spark-2.4.4-bin-hadoop2.7.tgz
!pip install -q findspark
!pip install spark-nlp
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.4.4-bin-hadoop2.7"
import findspark
findspark.init()
from pyspark.sql import SparkSession
import sparknlp
spark = sparknlp.start()
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
Let's use Spark NLP pre-trained pipeline for `named entity recognition`
###Code
pipeline = PretrainedPipeline('recognize_entities_dl', 'en')
result = pipeline.annotate('Harry Potter is a great movie')
print(result['ner'])
###Output
['I-PER', 'I-PER', 'O', 'O', 'O', 'O']
###Markdown
Let's use Spark NLP pre-trained pipeline for `sentiment` analysis
###Code
pipeline = PretrainedPipeline('analyze_sentiment', 'en')
result = pipeline.annotate('Harry Potter is a great movie')
print(result['sentiment'])
###Output
['positive']
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) Spark NLP Quick Start: How to use Spark NLP pretrained pipelines [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/quick_start_google_colab.ipynb) We will first set up the runtime environment, then load pretrained Entity Recognition and Sentiment Analysis models and give them a quick test. Feel free to test the models on your own sentences / datasets.
###Code
import os
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==2.4.2
import sparknlp
spark = sparknlp.start()
print("Spark NLP version")
sparknlp.version()
print("Apache Spark version")
spark.version
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
Let's use Spark NLP pre-trained pipeline for `named entity recognition`
###Code
pipeline = PretrainedPipeline('recognize_entities_dl', 'en')
result = pipeline.annotate('Harry Potter is a great movie')
print(result['ner'])
###Output
_____no_output_____
###Markdown
Let's use Spark NLP pre-trained pipeline for `sentiment` analysis
###Code
pipeline = PretrainedPipeline('analyze_sentiment', 'en')
result = pipeline.annotate('Harry Potter is a great movie')
print(result['sentiment'])
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) Spark NLP Quick Start: How to use Spark NLP pretrained pipelines [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/quick_start_google_colab.ipynb) We will first set up the runtime environment, then load pretrained Entity Recognition and Sentiment Analysis models and give them a quick test. Feel free to test the models on your own sentences / datasets.
###Code
import os
# Install java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp
import sparknlp
spark = sparknlp.start()
print("Spark NLP version")
sparknlp.version()
print("Apache Spark version")
spark.version
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
Let's use Spark NLP pre-trained pipeline for `named entity recognition`
###Code
pipeline = PretrainedPipeline('recognize_entities_dl', 'en')
result = pipeline.annotate('Harry Potter is a great movie')
print(result['ner'])
###Output
['B-PER', 'I-PER', 'O', 'O', 'O', 'O']
###Markdown
Let's use Spark NLP pre-trained pipeline for `sentiment` analysis
###Code
pipeline = PretrainedPipeline('analyze_sentiment', 'en')
result = pipeline.annotate('Harry Potter is a great movie')
print(result['sentiment'])
###Output
['positive']
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) Spark NLP Quick Start: How to use Spark NLP pretrained pipelines [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/quick_start_google_colab.ipynb) We will first set up the runtime environment, then load pretrained Entity Recognition and Sentiment Analysis models and give them a quick test. Feel free to test the models on your own sentences / datasets.
###Code
import os
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==2.5.0
import sparknlp
spark = sparknlp.start()
print("Spark NLP version")
sparknlp.version()
print("Apache Spark version")
spark.version
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
Let's use Spark NLP pre-trained pipeline for `named entity recognition`
###Code
pipeline = PretrainedPipeline('recognize_entities_dl', 'en')
result = pipeline.annotate('Harry Potter is a great movie')
print(result['ner'])
###Output
['B-PER', 'I-PER', 'O', 'O', 'O', 'O']
###Markdown
Let's use Spark NLP pre-trained pipeline for `sentiment` analysis
###Code
pipeline = PretrainedPipeline('analyze_sentiment', 'en')
result = pipeline.annotate('Harry Potter is a great movie')
print(result['sentiment'])
###Output
['positive']
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) Spark NLP Quick Start: How to use Spark NLP pretrained pipelines [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/quick_start_google_colab.ipynb) We will first set up the runtime environment, then load pretrained Entity Recognition and Sentiment Analysis models and give them a quick test. Feel free to test the models on your own sentences / datasets.
###Code
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
import sparknlp
spark = sparknlp.start()
print("Spark NLP version: {}".format(sparknlp.version()))
print("Apache Spark version: {}".format(spark.version))
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
Let's use Spark NLP pre-trained pipeline for `named entity recognition`
###Code
pipeline = PretrainedPipeline('recognize_entities_dl', 'en')
result = pipeline.annotate('President Biden represented Delaware for 36 years in the U.S. Senate before becoming the 47th Vice President of the United States.')
print(result['ner'])
print(result['entities'])
###Output
['O', 'B-PER', 'O', 'B-LOC', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'O', 'B-ORG', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'O']
['Biden', 'Delaware', 'U.S', 'Senate', 'United States']
###Markdown
Let's try another Spark NLP pre-trained pipeline for `named entity recognition`
###Code
pipeline = PretrainedPipeline('onto_recognize_entities_bert_tiny', 'en')
result = pipeline.annotate("Johnson first entered politics when elected in 2001 as a member of Parliament. He then served eight years as the mayor of London, from 2008 to 2016, before rejoining Parliament.")
print(result['ner'])
print(result['entities'])
###Output
onto_recognize_entities_bert_tiny download started this may take some time.
Approx size to download 30.2 MB
[OK!]
['B-PERSON', 'B-ORDINAL', 'O', 'O', 'O', 'O', 'O', 'B-DATE', 'O', 'O', 'O', 'O', 'B-ORG', 'O', 'O', 'O', 'B-DATE', 'I-DATE', 'O', 'O', 'O', 'O', 'B-GPE', 'O', 'B-DATE', 'O', 'B-DATE', 'O', 'O', 'O', 'B-ORG']
['Johnson', 'first', '2001', 'Parliament.', 'eight years', 'London,', '2008', '2016', 'Parliament.']
###Markdown
Let's use Spark NLP pre-trained pipeline for `sentiment` analysis
###Code
pipeline = PretrainedPipeline('analyze_sentimentdl_glove_imdb', 'en')
result = pipeline.annotate("Harry Potter is a great movie.")
print(result['sentiment'])
###Output
['pos']
###Markdown
Please check our [Models Hub](https://nlp.johnsnowlabs.com/models) for more pretrained models and pipelines! 😊
###Code
###Output
_____no_output_____ |
sorting/selection_sort.ipynb | ###Markdown
Given an array of unsorted integers, sort it using selection sort.
###Code
def find_smallest(A):
smallest = A[0]
smallest_index = 0
for i in range(1, len(A)):
if A[i] < smallest:
smallest = A[i]
smallest_index = i
return smallest_index
def selection_sort(A):
"""Complexity: O(n^2) time | O(n) space (Although we can achieve constant time if we sort inplace)"""
sorted_array = []
for i in range(len(A)):
smallest_index = find_smallest(A)
sorted_array.append(A.pop(smallest_index))
return sorted_array
A = [1, 2, 3, -2, 0, 5]
selection_sort(A)
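# (Added) quick sanity check against Python's built-in sort, done on a fresh
# list since selection_sort() consumes its input via pop()
assert selection_sort([1, 2, 3, -2, 0, 5]) == sorted([1, 2, 3, -2, 0, 5])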
###Output
_____no_output_____
###Markdown
Selection sort

The algorithm divides the input list into two parts: the sublist of items already sorted, which is built up from left to right at the front (left) of the list, and the sublist of items remaining to be sorted that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted sublist is the entire input list.

https://www.geeksforgeeks.org/selection-sort/
###Code
%run ../create_array.py
# Traverse through all array elements
def selection_sort(arr):
for i in range(len(arr)):
# Find the minimum element in remaining
# unsorted array
min_idx = i
for j in range(i+1, len(arr)):
if arr[min_idx] > arr[j]:
min_idx = j
# Swap the found minimum element with
# the first element
arr[i], arr[min_idx] = arr[min_idx], arr[i]
return arr
selection_sort(create_array(25, 10))
###Output
_____no_output_____
###Markdown
Selection Sort

Traverses the unsorted list/array for the min (or max) number. It then adds the number to the sorted list and swaps positions if need be. We then search again for the new min value and continue the algorithm.

`[7, 8, 5, 4, 9, 2]`
`2` is our min so we swap with `7`
`[2, 8, 5, 4, 9, 7]`, `2` is now considered part of the sorted list
`4` is now our new min so now we swap with `8`
`[2, 4, 5, 8, 9, 7]`, `2` and `4` are now considered part of the sorted list, and we continue until the sort is complete

References and resources:
- Python Data Structures and Algorithms by Benjamin Baka
- [YouTube: Python: SelectionSort algorithm](https://www.youtube.com/watch?v=mI3KgJy_d7Y)[1]
- [YouTube: Selection sort in 3 minutes](https://www.youtube.com/watch?v=g-PGLbMth_g&index=3&list=PL9xmBV_5YoZOZSbGAXAPIq1BeUf4j20pl&t=0s)
- https://www.geeksforgeeks.org/selection-sort/
- [Wikipedia](https://en.wikipedia.org/wiki/Selection_sort)

[1] [code](https://github.com/joeyajames/Python/blob/master/Sorting%20Algorithms/selection_sort.py)
###Code
# # Uncomment to use inline pythontutor
# from IPython.display import IFrame
# IFrame('http://www.pythontutor.com/visualize.html#mode=display', height=750, width=750)
###Output
_____no_output_____
###Markdown
Example 1
###Code
# Traverse through all array elements
def selection_sort(array):
for i in range(len(array)):
# Find the minimum element in remaining
# unsorted array
min_idx = i
for j in range(i+1, len(array)):
if array[min_idx] > array[j]:
min_idx = j
# Swap the found minimum element with
# the first element
array[i], array[min_idx] = array[min_idx], array[i]
return array
lst = [64, 25, 12, 22, 11]
selection_sort(lst)
###Output
_____no_output_____
###Markdown
Example 2
###Code
def selection_sort(array):
for i in range (0, len(array) - 1):
minIndex = i
for j in range (i+1, len(array)):
if array[j] < array[minIndex]:
minIndex = j
if minIndex != i:
array[i], array[minIndex] = array[minIndex], array[i]
return array
lst = [5,9,1,2,4,8,6,3,7]
print(lst)
selection_sort(lst)
###Output
[5, 9, 1, 2, 4, 8, 6, 3, 7]
|
pandas/Querying Dataframes.ipynb | ###Markdown
Pandas: querying DataFrames. The data is football data, which comes from: https://sports-statistics.com/sports-data/soccer-datasets/
###Code
import pandas as pd
df = pd.read_csv('https://sports-statistics.com/database/soccer-data/england-premier-league-2019-to-2020.csv')
df.head()
# get games where the home team scored 5 or more goals
df.query('FTHG > 5')
# get games where the away team scored 5 or more goals
df.query('FTAG >= 5')
# how many draws?
(df.FTR == 'D').sum()
# who refereed the most games?
df.Referee.value_counts()
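# (Added sketch) query() also handles compound conditions and external variables
# referenced with '@'; the columns used here are the same ones queried above.
min_goals = 3
df.query('FTHG >= @min_goals and FTAG >= @min_goals')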
###Output
_____no_output_____ |
Lesson3/Exercise30.ipynb | ###Markdown
Exercise 2: K-means clustering

Create four clusters from text documents of sklearn's “The 20 newsgroups text dataset” using K-means clustering. Compare them with their actual categories. Use the elbow method to obtain the optimal number of clusters.
###Code
import pandas as pd
from sklearn.datasets import fetch_20newsgroups
import matplotlib.pyplot as plt
%matplotlib inline
import re
import string
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from collections import Counter
from pylab import *
import nltk
import warnings
warnings.filterwarnings('ignore')
import seaborn as sns;
sns.set()
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
stop_words = stopwords.words('english')
# adding individual printable characters to the list of stop words so that they get removed along with the stopwords
stop_words = stop_words + list(string.printable)
lemmatizer = WordNetLemmatizer()
categories= ['misc.forsale', 'sci.electronics', 'talk.religion.misc']
news_data = fetch_20newsgroups(subset='train', categories=categories, \
shuffle=True, random_state=42, download_if_missing=True)
news_data_df = pd.DataFrame({'text' : news_data['data'], 'category': news_data.target})
news_data_df.head()
news_data_df['cleaned_text'] = news_data_df['text'].apply(\
lambda x : ' '.join([lemmatizer.lemmatize(word.lower()) \
for word in word_tokenize(re.sub(r'([^\s\w]|_)+', ' ', str(x))) if word.lower() not in stop_words]))
tfidf_model = TfidfVectorizer(max_features=200)
tfidf_df = pd.DataFrame(tfidf_model.fit_transform(news_data_df['cleaned_text']).todense())
tfidf_df.columns = sorted(tfidf_model.vocabulary_)
tfidf_df.head()
kmeans = KMeans(n_clusters=4)
kmeans.fit(tfidf_df)
y_kmeans = kmeans.predict(tfidf_df)
news_data_df['obtained_clusters'] = y_kmeans
pd.crosstab(news_data_df['category'].replace({0:'misc.forsale', 1:'sci.electronics', 2:'talk.religion.misc'}),\
news_data_df['obtained_clusters'].replace({0 : 'cluster_1', 1 : 'cluster_2', 2 : 'cluster_3', 3: 'cluster_4'}))
#Using Elbow method to obtain the number of clusters
distortions = []
K = range(1,6)
for k in K:
kmeanModel = KMeans(n_clusters=k)
kmeanModel.fit(tfidf_df)
distortions.append(sum(np.min(cdist(tfidf_df, kmeanModel.cluster_centers_, 'euclidean'), \
axis=1)) / tfidf_df.shape[0])
# Plot the elbow
plt.plot(K, distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal number of clusters')
plt.show()
# From this plot, select k at the elbow, where the steep drop in distortion levels off, i.e. k = 2
###Output
_____no_output_____ |
notebooks/MLA-NLP-Lecture2-Sagemaker.ipynb | ###Markdown
![MLU Logo](../data/MLU_Logo.png) Machine Learning Accelerator - Natural Language Processing - Lecture 2

Sagemaker built-in Training and Deployment with LinearLearner

In this notebook, we use Sagemaker's built-in machine learning model __LinearLearner__ to predict the __isPositive__ field of our review dataset.

Overall dataset schema:
* __reviewText:__ Text of the review
* __summary:__ Summary of the review
* __verified:__ Whether the purchase was verified (True or False)
* __time:__ UNIX timestamp for the review
* __log_votes:__ Logarithm-adjusted votes, log(1+votes)
* __isPositive:__ Whether the review is positive or negative (1 or 0)

__Notes on AWS SageMaker__
* Fully managed machine learning service, to quickly and easily get you started on building and training machine learning models - we have seen that already! Integrated Jupyter notebook instances, with easy access to data sources for exploration and analysis, abstract away many of the messy infrastructural details needed for hands-on ML - you don't have to manage servers, install libraries/dependencies, etc.!
* Apart from easily building end-to-end machine learning models in SageMaker notebooks, like we did so far, SageMaker also provides a few __built-in common machine learning algorithms__ (check "SageMaker Examples" from your SageMaker instance top menu for a complete updated list) that are optimized to run efficiently against extremely large data in a distributed environment. The __LinearLearner__ built-in algorithm in SageMaker is extremely fast at inference and can be trained at scale, in mini-batch fashion over GPU(s). The trained model can then be directly deployed into a production-ready hosted environment for easy access at inference.

We will follow these steps:
1. Read the dataset
2. Exploratory Data Analysis
3. Text Processing: Stop words removal and stemming
4. Training - Validation - Test Split
5. Data processing with Pipeline and ColumnTransform
6. Train a classifier with SageMaker built-in algorithm
7. Model evaluation
8. Deploy the model to an endpoint
9. Test the endpoint
10. Clean up model artifacts
###Code
#Upgrade dependencies
!pip install --upgrade pip
!pip install --upgrade scikit-learn
###Output
Requirement already up-to-date: pip in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (20.1.1)
Requirement already up-to-date: scikit-learn in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (0.23.1)
Requirement already satisfied, skipping upgrade: numpy>=1.13.3 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from scikit-learn) (1.14.3)
Requirement already satisfied, skipping upgrade: joblib>=0.11 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from scikit-learn) (0.16.0)
Requirement already satisfied, skipping upgrade: scipy>=0.19.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from scikit-learn) (1.1.0)
Requirement already satisfied, skipping upgrade: threadpoolctl>=2.0.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from scikit-learn) (2.1.0)
###Markdown
1. Reading the dataset (Go to top)

We will use the __pandas__ library to read our dataset.
###Code
import pandas as pd
df = pd.read_csv('../data/examples/AMAZON-REVIEW-DATA-CLASSIFICATION.csv')
print('The shape of the dataset is:', df.shape)
###Output
The shape of the dataset is: (70000, 6)
###Markdown
Let's look at the first five rows in the dataset.
###Code
df.head()
###Output
_____no_output_____
###Markdown
2. Exploratory Data Analysis (Go to top)

Let's look at the target distribution for our dataset.
###Code
df["isPositive"].value_counts()
###Output
_____no_output_____
###Markdown
Checking the number of missing values:
###Code
print(df.isna().sum())
###Output
reviewText 11
summary 14
verified 0
time 0
log_votes 0
isPositive 0
dtype: int64
###Markdown
We have missing values in our text fields.

3. Text Processing: Stop words removal and stemming (Go to top)
###Code
# Install the library and functions
import nltk
nltk.download('punkt')
nltk.download('stopwords')
###Output
[nltk_data] Downloading package punkt to /home/ec2-user/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data] /home/ec2-user/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
We will create the stop-word removal and text cleaning processes below. The NLTK library provides a list of common stop words. We will use that list, but remove some of the words from it, because those words are actually useful for understanding the sentiment of a sentence.
###Code
import nltk, re
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
from nltk.tokenize import word_tokenize
# Let's get a list of stop words from the NLTK library
stop = stopwords.words('english')
# These words are important for our problem. We don't want to remove them.
excluding = ['against', 'not', 'don', "don't",'ain', 'aren', "aren't", 'couldn', "couldn't",
'didn', "didn't", 'doesn', "doesn't", 'hadn', "hadn't", 'hasn', "hasn't",
'haven', "haven't", 'isn', "isn't", 'mightn', "mightn't", 'mustn', "mustn't",
'needn', "needn't",'shouldn', "shouldn't", 'wasn', "wasn't", 'weren',
"weren't", 'won', "won't", 'wouldn', "wouldn't"]
# New stop word list
stop_words = [word for word in stop if word not in excluding]
snow = SnowballStemmer('english')
def process_text(texts):
final_text_list=[]
for sent in texts:
# Check if the sentence is a missing value
if isinstance(sent, str) == False:
sent = ""
filtered_sentence=[]
sent = sent.lower() # Lowercase
sent = sent.strip() # Remove leading/trailing whitespace
sent = re.sub('\s+', ' ', sent) # Remove extra space and tabs
sent = re.compile('<.*?>').sub('', sent) # Remove HTML tags/markups:
for w in word_tokenize(sent):
# We are applying some custom filtering here, feel free to try different things
# Check if it is not numeric and its length>2 and not in stop words
if(not w.isnumeric()) and (len(w)>2) and (w not in stop_words):
# Stem and add to filtered list
filtered_sentence.append(snow.stem(w))
final_string = " ".join(filtered_sentence) #final string of cleaned words
final_text_list.append(final_string)
return final_text_list
###Output
_____no_output_____
###Markdown
4. Training - Validation - Test Split (Go to top)

Let's split our dataset into training (80%), validation (10%), and test (10%) using sklearn's [__train_test_split()__](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function.
###Code
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(df[["reviewText", "summary", "time", "log_votes"]],
df["isPositive"],
test_size=0.20,
shuffle=True,
random_state=324
)
X_val, X_test, y_val, y_test = train_test_split(X_val,
y_val,
test_size=0.5,
shuffle=True,
random_state=324)
print("Processing the reviewText fields")
X_train["reviewText"] = process_text(X_train["reviewText"].tolist())
X_val["reviewText"] = process_text(X_val["reviewText"].tolist())
X_test["reviewText"] = process_text(X_test["reviewText"].tolist())
print("Processing the summary fields")
X_train["summary"] = process_text(X_train["summary"].tolist())
X_val["summary"] = process_text(X_val["summary"].tolist())
X_test["summary"] = process_text(X_test["summary"].tolist())
###Output
Processing the reviewText fields
Processing the summary fields
###Markdown
Our process_text() method in section 3 uses an empty string for missing values.

5. Data processing with Pipeline and ColumnTransform (Go to top)

In the previous examples, we have seen how to use a Pipeline to prepare a data field for our machine learning model. This time, we will focus on multiple fields: numeric and text fields.

* For the numerical features pipeline, the __numerical_processor__ below, we use a MinMaxScaler (we don't have to scale features when using Decision Trees, but it's a good idea to see how to use more data transforms). If different processing is desired for different numerical features, different pipelines should be built - just like shown below for the two text features.
* For the text features pipeline, the __text_processor__ below, we use CountVectorizer() for the text fields.

The selective preparations of the dataset features are then put together into a collective ColumnTransformer, to be finally used in a Pipeline along with an estimator. This ensures that the transforms are performed automatically on the raw data when fitting the model and when making predictions, such as when evaluating the model on a validation dataset via cross-validation or making predictions on a test dataset in the future.
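The next cell only builds the ColumnTransformer and applies it directly with fit_transform/transform. Purely as a hedged sketch of the "Pipeline along with an estimator" pattern described above (the LogisticRegression stand-in and the step names are illustrative assumptions, not what this notebook actually trains), it could look like:

```python
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

# Sketch only: chain the preprocessor (defined in the next cell) with an estimator,
# so the same transforms run automatically at fit and predict time.
full_pipeline = Pipeline([
    ('data_preprocessing', data_preprocessor),
    ('model', LogisticRegression())
])
# full_pipeline.fit(X_train, y_train)
# val_predictions = full_pipeline.predict(X_val)
```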
###Code
# Grab model features/inputs and target/output
numerical_features = ['time',
'log_votes']
text_features = ['summary',
'reviewText']
model_features = numerical_features + text_features
model_target = 'isPositive'
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
### COLUMN_TRANSFORMER ###
##########################
# Preprocess the numerical features
numerical_processor = Pipeline([
('num_imputer', SimpleImputer(strategy='mean')),
('num_scaler', MinMaxScaler())
])
# Preprocess 1st text feature
text_processor_0 = Pipeline([
('text_vect_0', CountVectorizer(binary=True, max_features=50))
])
# Preprocess 2nd text feature (larger vocabulary)
text_precessor_1 = Pipeline([
('text_vect_1', CountVectorizer(binary=True, max_features=150))
])
# Combine all data preprocessors from above (add more, if you choose to define more!)
# For each processor/step specify: a name, the actual process, and finally the features to be processed
data_preprocessor = ColumnTransformer([
('numerical_pre', numerical_processor, numerical_features),
('text_pre_0', text_processor_0, text_features[0]),
('text_pre_1', text_precessor_1, text_features[1])
])
### DATA PREPROCESSING ###
##########################
print('Datasets shapes before processing: ', X_train.shape, X_val.shape, X_test.shape)
X_train = data_preprocessor.fit_transform(X_train).toarray()
X_val = data_preprocessor.transform(X_val).toarray()
X_test = data_preprocessor.transform(X_test).toarray()
print('Datasets shapes after processing: ', X_train.shape, X_val.shape, X_test.shape)
###Output
Datasets shapes before processing: (56000, 4) (7000, 4) (7000, 4)
Datasets shapes after processing: (56000, 202) (7000, 202) (7000, 202)
###Markdown
6. Train a classifier with SageMaker built-in algorithm (Go to top)

We will call the Sagemaker `LinearLearner()` below.

* __Compute power:__ We will use the `train_instance_count` and `train_instance_type` parameters. This example uses the `ml.m4.xlarge` resource for training. We can change the instance type to fit our needs (for example, GPUs for neural networks).
* __Model type:__ `predictor_type` is set to __`binary_classifier`__, as we have a binary classification problem here; __`multiclass_classifier`__ could be used if there are 3 or more classes involved, or __`regressor`__ for a regression problem.
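The next cell constructs the binary classifier used in the rest of this notebook. Purely as a hedged sketch of the alternatives mentioned above (assuming the same SDK version, and assuming `num_classes` is required for the multiclass case), they would be configured like this:

```python
import sagemaker

# Sketch only - not used in this notebook.
# Multiclass variant (num_classes assumed required for 'multiclass_classifier'):
multiclass_estimator = sagemaker.LinearLearner(role=sagemaker.get_execution_role(),
                                               train_instance_count=1,
                                               train_instance_type='ml.m4.xlarge',
                                               predictor_type='multiclass_classifier',
                                               num_classes=3)
# Regression variant:
regression_estimator = sagemaker.LinearLearner(role=sagemaker.get_execution_role(),
                                               train_instance_count=1,
                                               train_instance_type='ml.m4.xlarge',
                                               predictor_type='regressor')
```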
###Code
import sagemaker
# Call the LinearLearner estimator object
linear_classifier = sagemaker.LinearLearner(role=sagemaker.get_execution_role(),
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
predictor_type='binary_classifier')
###Output
_____no_output_____
###Markdown
We are using the `record_set()` function of our `linear_classifier` estimator to set up the training, validation, and test channels of the data.
###Code
train_records = linear_classifier.record_set(X_train.astype("float32"),
y_train.values.astype("float32"),
channel='train')
val_records = linear_classifier.record_set(X_val.astype("float32"),
y_val.values.astype("float32"),
channel='validation')
test_records = linear_classifier.record_set(X_test.astype("float32"),
y_test.values.astype("float32"),
channel='test')
###Output
_____no_output_____
###Markdown
The `fit()` function applies a distributed version of the Stochastic Gradient Descent (SGD) algorithm, and we are sending the data to it. We disabled logs with `logs=False`. You can remove that parameter to see more details about the process. __This process takes about 3-4 minutes on an ml.m4.xlarge instance.__
###Code
%%time
linear_classifier.fit([train_records,
val_records,
test_records],
logs=False)
###Output
2020-07-08 03:58:33 Starting - Starting the training job
2020-07-08 03:58:35 Starting - Launching requested ML instances............
2020-07-08 03:59:43 Starting - Preparing the instances for training.................
2020-07-08 04:01:12 Downloading - Downloading input data..
2020-07-08 04:01:26 Training - Downloading the training image...
2020-07-08 04:01:45 Training - Training image download completed. Training in progress........
2020-07-08 04:02:27 Uploading - Uploading generated training model
2020-07-08 04:02:34 Completed - Training job completed
CPU times: user 258 ms, sys: 29.6 ms, total: 288 ms
Wall time: 4min 1s
###Markdown
7. Model Evaluation (Go to top)

We can use Sagemaker analytics to get some performance metrics of our choice on the test set. This doesn't require us to deploy our model. Since this is a binary classification problem, we can check the accuracy.
###Code
sagemaker.analytics.TrainingJobAnalytics(linear_classifier._current_job_name,
metric_names = ['test:binary_classification_accuracy']
).dataframe()
###Output
_____no_output_____
###Markdown
8. Deploy the model to an endpoint (Go to top)

In the last part of this exercise, we will deploy our model to another instance of our choice. This will allow us to use this model in a production environment. Deployed endpoints can be used with other AWS services such as Lambda and API Gateway. A nice walkthrough is available here: https://aws.amazon.com/blogs/machine-learning/call-an-amazon-sagemaker-model-endpoint-using-amazon-api-gateway-and-aws-lambda/ if you are interested.

Run the following cell to deploy the model. We can use different instance types such as `ml.t2.medium`, `ml.c4.xlarge`, etc. __This will take some time to complete (approximately 7-8 minutes).__
###Code
%%time
linear_classifier_predictor = linear_classifier.deploy(initial_instance_count = 1,
instance_type = 'ml.t2.medium',
endpoint_name = 'LinearLearnerEndpoint'
)
###Output
---------------!CPU times: user 226 ms, sys: 15.7 ms, total: 242 ms
Wall time: 7min 31s
###Markdown
9. Test the endpoint (Go to top)

Let's use the deployed endpoint. We will send our test data and get predictions for it.
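The next cell calls the endpoint through the SDK's predictor object. As a side note, the same endpoint can also be invoked from other AWS services (for example, from a Lambda function behind API Gateway, as mentioned above) through the low-level boto3 runtime client. A minimal hedged sketch, where the CSV payload format and the JSON response layout are assumptions about the linear-learner container:

```python
import json

import boto3

# Sketch only: invoke the deployed endpoint outside of the SageMaker SDK.
runtime = boto3.client('sagemaker-runtime')

payload = ','.join(str(v) for v in X_test[0])   # one row of features as CSV

response = runtime.invoke_endpoint(EndpointName='LinearLearnerEndpoint',
                                   ContentType='text/csv',
                                   Body=payload)
print(json.loads(response['Body'].read().decode('utf-8')))
```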
###Code
import numpy as np
# Let's get test data in batch size of 25 and make predictions.
prediction_batches = [linear_classifier_predictor.predict(batch)
for batch in np.array_split(X_test.astype("float32"), 25)
]
# Let's get a list of predictions
print([pred.label['score'].float32_tensor.values[0] for pred in prediction_batches[0]])
###Output
[0.7902330160140991, 0.1961013525724411, 0.9616415500640869, 0.6506044268608093, 0.6324111819267273, 0.13196150958538055, 0.7690600752830505, 0.03198748826980591, 0.9872618317604065, 0.9839351773262024, 0.7172891497612, 0.5460348725318909, 0.8144757747650146, 0.7077612280845642, 0.8950422406196594, 0.9754360914230347, 0.6271064877510071, 0.24714428186416626, 0.4907993972301483, 0.9344369769096375, 0.6512829661369324, 0.7926424741744995, 0.18350689113140106, 0.743069589138031, 0.28049173951148987, 0.4696404039859772, 0.948711633682251, 0.0935184508562088, 0.9856142401695251, 0.17812573909759521, 0.8229093551635742, 0.8398733735084534, 0.6779384016990662, 0.9942814111709595, 0.099876768887043, 0.9922278523445129, 0.419038861989975, 0.8894866704940796, 0.06670814007520676, 0.26895591616630554, 0.9754182696342468, 0.27426987886428833, 0.6335780024528503, 0.7747470736503601, 0.6460452675819397, 0.16709531843662262, 0.9967218041419983, 0.9504019618034363, 0.9899674654006958, 0.7815154790878296, 0.989879846572876, 0.765591561794281, 0.01861283741891384, 0.04930809885263443, 0.2908841669559479, 0.9670922160148621, 0.6096789836883545, 0.6866143345832825, 0.308679461479187, 0.9375461339950562, 0.4279497563838959, 0.9590651392936707, 0.8510259389877319, 0.7167069315910339, 0.16739971935749054, 0.9635751843452454, 0.28065675497055054, 0.9946358799934387, 0.9714450836181641, 0.9908218383789062, 0.9964448809623718, 0.12892606854438782, 0.9932445883750916, 0.22553576529026031, 0.26900339126586914, 0.2591526210308075, 0.9885439276695251, 0.9934050440788269, 0.8933855295181274, 0.17668437957763672, 0.049056973308324814, 0.02196183241903782, 0.9253135919570923, 0.9473143815994263, 0.189016655087471, 0.9735581278800964, 0.09383329749107361, 0.8297200798988342, 0.533084511756897, 0.9898948073387146, 0.9918075203895569, 0.37420278787612915, 0.033113814890384674, 0.7869371771812439, 0.6863090991973877, 0.9822679758071899, 0.5040290355682373, 0.19365152716636658, 0.9776824116706848, 0.9844671487808228, 0.7087278366088867, 0.46281638741493225, 0.04469713568687439, 0.427621066570282, 0.46757522225379944, 0.14873944222927094, 0.9471502900123596, 0.03216603398323059, 0.011923721060156822, 0.11800198256969452, 0.047323741018772125, 0.2008877545595169, 0.07863447070121765, 0.6130034923553467, 0.952049732208252, 0.9425583481788635, 0.4294981360435486, 0.44249752163887024, 0.5885704755783081, 0.09532466530799866, 0.03726455196738243, 0.9985784292221069, 0.00016562166274525225, 0.9483237266540527, 0.04820211976766586, 0.702117919921875, 0.5141359567642212, 0.9973547458648682, 0.7791016697883606, 0.0027554871048778296, 0.3531922996044159, 0.08445753157138824, 0.0033654721919447184, 0.9717161655426025, 0.3319301903247833, 0.022603482007980347, 0.9470418691635132, 0.979992687702179, 0.04043316841125488, 0.9814432859420776, 0.9817332029342651, 0.944588303565979, 0.9982313513755798, 0.3069816827774048, 0.9880621433258057, 0.9910456538200378, 0.9916374087333679, 0.9895414710044861, 0.9930819869041443, 0.6835357546806335, 0.9185306429862976, 0.9934068918228149, 0.8242987990379333, 0.8670175671577454, 0.03496437519788742, 0.6568683385848999, 0.09736474603414536, 0.10781805962324142, 0.6770870089530945, 0.8444879651069641, 0.8700391054153442, 0.9800467491149902, 0.9175447821617126, 0.8676642179489136, 0.6353605389595032, 0.32994380593299866, 0.8788435459136963, 0.31782984733581543, 0.4990690350532532, 0.836479663848877, 0.38615766167640686, 0.6377806663513184, 0.8749744892120361, 0.4957340359687805, 0.2524358630180359, 
0.0050260573625564575, 0.7056593894958496, 0.9867067933082581, 0.9947168231010437, 0.6987546682357788, 0.10379461944103241, 0.48666808009147644, 0.7665891051292419, 0.13126496970653534, 0.8716748952865601, 0.7545561790466309, 0.8759870529174805, 0.7863187193870544, 0.030927715823054314, 0.2300838828086853, 0.01835218258202076, 0.9510226249694824, 0.1473238319158554, 0.06600814312696457, 0.9702922105789185, 0.42352059483528137, 0.9843406081199646, 0.04622899740934372, 0.9989508390426636, 0.7466657161712646, 0.2648856043815613, 0.5611781477928162, 0.8808788061141968, 0.9580286145210266, 0.9896399974822998, 0.9949465394020081, 0.9930632710456848, 0.9049495458602905, 0.13181036710739136, 0.5133529305458069, 0.04482664540410042, 0.9936478734016418, 0.015164973214268684, 0.0002099643461406231, 0.9903790950775146, 0.9591935276985168, 0.8377569913864136, 0.9502526521682739, 0.04155684635043144, 0.49669063091278076, 0.9601109027862549, 0.9079581499099731, 0.01909814029932022, 0.6187054514884949, 0.9915251135826111, 0.460843563079834, 0.9839375615119934, 0.020326588302850723, 0.1780472993850708, 0.9795569181442261, 0.597861647605896, 0.623393714427948, 0.43480077385902405, 0.2773846685886383, 0.5195512771606445, 0.0908958911895752, 0.034877482801675797, 0.39543262124061584, 0.5918174982070923, 0.8731667399406433, 0.926105260848999, 0.9425796270370483, 0.22612182796001434, 0.97756028175354, 0.988036036491394, 0.744228720664978, 0.16845925152301788, 0.5462323427200317, 0.9586453437805176, 0.05104006081819534, 0.9087777733802795, 0.9900891780853271, 0.6880056262016296, 0.9416101574897766, 0.3794626295566559, 0.04008220136165619, 0.5714434385299683, 0.0021622437052428722, 0.08350497484207153, 0.8482717871665955, 0.005647451151162386, 0.5045287013053894, 0.02702438086271286, 0.04314989969134331, 0.9951137900352478, 0.4986906945705414, 0.36191102862358093, 0.5362206101417542, 0.19329237937927246, 0.4271279275417328, 0.37168142199516296, 0.9546890258789062, 0.00010680329432943836, 0.972337543964386, 0.1904260665178299, 0.3612281084060669, 0.9813637733459473, 0.9406825304031372, 0.15647195279598236, 0.03078523837029934]
###Markdown
10. Clean up model artifacts (Go to top)

You can run the following to delete the endpoint after you are done using it.
###Code
sagemaker_session = sagemaker.Session()
sagemaker_session.delete_endpoint(linear_classifier_predictor.endpoint)
###Output
_____no_output_____ |
notebooks/ipynb/RandomForest.ipynb | ###Markdown
Pyspark Random Forest Regression 1. Set up spark context and SparkSession
###Code
from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.appName("Python Spark Random Forest Regression") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
###Output
_____no_output_____
###Markdown
2. load dataset
###Code
df = spark.read.format('com.databricks.spark.csv').\
options(header='true', \
inferschema='true').load("./data/WineData.csv",header=True);
df.printSchema()
# convert the data to dense vector
#def transData(row):
# return Row(label=row["quality"],
# features=Vectors.dense([row["fixed acidity"],
# row["volatile acidity"],
# row["citric acid"],
# row["residual sugar"],
# row["chlorides"],
# row["free sulfur dioxide"],
# row["total sulfur dioxide"],
# row["residual sugar"],
# row["density"],
# row["pH"],
# row["sulphates"],
# row["alcohol"]
# ]))
def transData(data):
return data.rdd.map(lambda r: [Vectors.dense(r[:-1]),r[-1]]).toDF(['features','label'])
from pyspark.ml import Pipeline
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.feature import VectorIndexer
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.sql import Row
from pyspark.ml.linalg import Vectors
#transformed = df.rdd.map(transData).toDF()
transformed= transData(df)
transformed.show(6)
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = transformed.randomSplit([0.7, 0.3])
# Train a RandomForest model.
rf = RandomForestRegressor()
model = rf.fit(trainingData)
model.getNumTrees
# Make predictions.
predictions = model.transform(testData)
predictions.show(10)
# Select example rows to display.
predictions.select("prediction", "label", "features").show(5)
# Select (prediction, true label) and compute test error
evaluator = RegressionEvaluator(
labelCol="label", predictionCol="prediction", metricName="rmse")
rmse = evaluator.evaluate(predictions)
print("Root Mean Squared Error (RMSE) on test data = %g" % rmse)
###Output
Root Mean Squared Error (RMSE) on test data = 0.659148
###Markdown
Pyspark Random Forest Classifier
###Code
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import IndexToString, StringIndexer, VectorIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
df = spark.read.format('com.databricks.spark.csv').\
options(header='true', \
inferschema='true').load("./data/WineData.csv",header=True);
df.printSchema()
# convert the data to dense vector
def transData(data):
return data.rdd.map(lambda r: [Vectors.dense(r[:-1]),r[-1]]).toDF(['features','label'])
from pyspark.sql import Row
from pyspark.ml.linalg import Vectors
data= transData(df)
data.show(6)
# Index labels, adding metadata to the label column.
# Fit on whole dataset to include all labels in index.
labelIndexer = StringIndexer(inputCol="label", outputCol="indexedLabel").fit(data)
labelIndexer.transform(data).show(6)
# Automatically identify categorical features, and index them.
# Set maxCategories so features with > 4 distinct values are treated as continuous.
featureIndexer =VectorIndexer(inputCol="features", \
outputCol="indexedFeatures", \
maxCategories=4).fit(data)
featureIndexer.transform(data).show(6)
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = data.randomSplit([0.7, 0.3])
# Train a RandomForest model.
rf = RandomForestClassifier(labelCol="indexedLabel", featuresCol="indexedFeatures", numTrees=10)
# Convert indexed labels back to original labels.
labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel",
labels=labelIndexer.labels)
# Chain indexers and forest in a Pipeline
pipeline = Pipeline(stages=[labelIndexer, featureIndexer, rf, labelConverter])
# Train model. This also runs the indexers.
model = pipeline.fit(trainingData)
# Make predictions.
predictions = model.transform(testData)
# Select example rows to display.
predictions.select("features","label","predictedLabel").show(5)
# Select (prediction, true label) and compute test error
evaluator = MulticlassClassificationEvaluator(
labelCol="indexedLabel", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictions)
print("Test Error = %g" % (1.0 - accuracy))
rfModel = model.stages[2]
rfModel
rfModel.trees
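# (Added sketch) Per-feature importances of the fitted forest; featureImportances
# is a standard attribute of a fitted PySpark RandomForestClassificationModel.
rfModel.featureImportances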
###Output
_____no_output_____ |
01 Machine Learning/scikit_examples_jupyter/neighbors/plot_nca_illustration.ipynb | ###Markdown
Neighborhood Components Analysis Illustration

An example illustrating the goal of learning a distance metric that maximizes the nearest neighbors classification accuracy. The example is solely for illustration purposes. Please refer to the `User Guide` for more information.
###Code
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from matplotlib import cm
from sklearn.utils.fixes import logsumexp
print(__doc__)
n_neighbors = 1
random_state = 0
# Create a tiny data set of 9 samples from 3 classes
X, y = make_classification(n_samples=9, n_features=2, n_informative=2,
n_redundant=0, n_classes=3, n_clusters_per_class=1,
class_sep=1.0, random_state=random_state)
# Plot the points in the original space
plt.figure()
ax = plt.gca()
# Draw the graph nodes
for i in range(X.shape[0]):
ax.text(X[i, 0], X[i, 1], str(i), va='center', ha='center')
ax.scatter(X[i, 0], X[i, 1], s=300, c=cm.Set1(y[i]), alpha=0.4)
def p_i(X, i):
diff_embedded = X[i] - X
dist_embedded = np.einsum('ij,ij->i', diff_embedded,
diff_embedded)
dist_embedded[i] = np.inf
# compute exponentiated distances (use the log-sum-exp trick to
# avoid numerical instabilities
exp_dist_embedded = np.exp(-dist_embedded -
logsumexp(-dist_embedded))
return exp_dist_embedded
def relate_point(X, i, ax):
pt_i = X[i]
for j, pt_j in enumerate(X):
thickness = p_i(X, i)
if i != j:
line = ([pt_i[0], pt_j[0]], [pt_i[1], pt_j[1]])
ax.plot(*line, c=cm.Set1(y[j]),
linewidth=5*thickness[j])
# we consider only point 3
i = 3
# Plot bonds linked to sample i in the original space
relate_point(X, i, ax)
ax.set_title("Original points")
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ax.axis('equal')
# Learn an embedding with NeighborhoodComponentsAnalysis
nca = NeighborhoodComponentsAnalysis(max_iter=30, random_state=random_state)
nca = nca.fit(X, y)
# Plot the points after transformation with NeighborhoodComponentsAnalysis
plt.figure()
ax2 = plt.gca()
# Get the embedding and find the new nearest neighbors
X_embedded = nca.transform(X)
relate_point(X_embedded, i, ax2)
for i in range(len(X)):
ax2.text(X_embedded[i, 0], X_embedded[i, 1], str(i),
va='center', ha='center')
ax2.scatter(X_embedded[i, 0], X_embedded[i, 1], s=300, c=cm.Set1(y[i]),
alpha=0.4)
# Make axes equal so that boundaries are displayed correctly as circles
ax2.set_title("NCA embedding")
ax2.axes.get_xaxis().set_visible(False)
ax2.axes.get_yaxis().set_visible(False)
ax2.axis('equal')
plt.show()
###Output
_____no_output_____ |