# Example 2: mouse blood lineage networks

A pipeline providing an example of ShareNet's usage on a mouse blood lineage dataset is included in the ```~/sharenet/example2``` subdirectory. Here, we go through the different steps associated with this pipeline.

```
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd

def convert_method_name(name):
    method_dict = {'genie': 'GENIE3','pidc':'PIDC','corr': 'Pearson','gaussian': 'BVS'}
    for m in method_dict.keys():
        if m in name:
            new_name = method_dict[m]
    return new_name

def convert_dataset_name(name):
    if 'nonspecific' in name:
        return 'Non-Specific ChIP'
    elif 'specific' in name:
        return 'Specific ChIP'
    elif 'STRING' in name:
        return 'STRING'
```

## Running ShareNet

A Bash script, named ```run_example2.sh```, is included in the ```~/sharenet/example2``` subdirectory. This script runs the command below, which fits the ShareNet model to the set of networks inferred using PIDC in the mouse blood lineage dataset. The input data required to perform this step (i.e. the initial network estimates and the network edge score standard deviation estimates) are provided in the ```~/sharenet/example2/data``` subdirectory.

```
python -u "${script_dir}/sharenet_example2.py" -d $data_dir -r $results_dir -f "pidc.edges.txt.gz" -sf "pidc.edges.txt.gz" -K 24 -nc 10 -tol 0.01
```

A description of the various flags used in this command is as follows.

- ```-d```: data directory (path to the directory that includes the initial network estimates and standard deviation estimates)
- ```-r```: results directory (path to the directory where the revised network edge scores and other variational parameters are to be written)
- ```-f```: file name for the initial network estimates (suffix of the file names for the initial network estimates; in this example, the file names are in the format "cluster{cluster_no}.pidc.edges.txt.gz")
- ```-sf```: file name for the standard deviation estimates (suffix of the file names for the standard deviation estimates; in this example, the file names are in the format "V.cluster{cluster_no}.pidc.edges.txt.gz")
- ```-K```: number of cell types to consider from the dataset (in this example, the mouse blood lineage dataset contains 24 clusters, or cell types)
- ```-nc```: number of mixture components in the ShareNet model
- ```-tol```: tolerance criterion for convergence

## Evaluating Accuracy

We also include a Bash script to calculate the accuracy of the baseline PIDC networks and the networks inferred using ShareNet applied to the initial PIDC networks. The script writes the accuracy results to a separate subdirectory ```~/sharenet/example2/accuracy``` using the set of reference networks that can be found in ```~/sharenet/example2/reference```. Here is an example of one command in this script.

```
python -u "${script_dir}/sharenet_accuracy.py" -d $base_dir -r $results_dir -K 24 -f $file_name -rn "STRING"
```

The various flags used in this command are as follows.
- ```-d```: base data directory (path to the base directory that includes the ```/reference/``` subdirectory and where the ```/accuracy/``` subdirectory will be written)
- ```-r```: results directory (path to the directory where the revised network edge scores and other variational parameters are to be written)
- ```-K```: number of cell types to consider from the dataset (in this example, the mouse blood lineage dataset contains 24 clusters, or cell types)
- ```-f```: file name for the initial network estimates (suffix of the file names for the initial network estimates; in this example, the file names are in the format "cluster{cluster_no}.pidc.edges.txt.gz")
- ```-rn```: reference network (reference network against which the inferred networks are to be compared)

## Plot Results

After running the scripts for ShareNet and calculating the network accuracy results, plots used to compare the accuracy of networks inferred with and without ShareNet can be generated with the code below.

### AUPRC Ratio: With vs. Without ShareNet

```
data_dir = '../sharenet/example2/reference'

baseline_df = pd.read_csv(os.path.join(data_dir,'baseline_auprc.csv'))
baseline_df.index = baseline_df['ref_network'].values

from scipy.stats import wilcoxon

results_dir = '../sharenet/example2/accuracy'

method = 'sharenet.nc10'
measure = 'auprc'
base_method_list = ['pidc.edges']

df_list = []
for base_method in base_method_list:
    for ref_network in ['nonspecific_chip','STRING','specific_chip']:
        file_name = '{}.{}.csv'.format(ref_network,base_method)
        df = pd.read_csv(os.path.join(results_dir,file_name))
        df['method'] = base_method
        df['ref_network'] = ref_network
        df_list.append(df)
noshare_df = pd.concat(df_list)

df_list = []
for base_method in base_method_list:
    for ref_network in ['nonspecific_chip','STRING','specific_chip']:
        file_name = '{}.{}.{}.csv'.format(ref_network,method,base_method)
        df = pd.read_csv(os.path.join(results_dir,file_name))
        df['method'] = base_method
        df['ref_network'] = ref_network
        df_list.append(df)
share_df = pd.concat(df_list)

for base_method in base_method_list:
    data_dict = {'x1': [],'x2': [],'ref_network': [],'cluster_no': []}
    cluster_no_list = sorted(list(set(share_df[share_df['ref_network'] == ref_network]['cluster_no'])))

    for ref_network in ['nonspecific_chip','STRING','specific_chip']:
        for cluster_no in cluster_no_list:
            noshare_cond = (noshare_df['cluster_no'] == cluster_no) & \
                (noshare_df['ref_network'] == ref_network) & \
                (noshare_df['method'] == base_method)
            share_cond = (share_df['cluster_no'] == cluster_no) & \
                (share_df['ref_network'] == ref_network) & \
                (share_df['method'] == base_method)

            noshare_val = noshare_df[noshare_cond][measure].values[0]
            share_val = share_df[share_cond][measure].values[0]

            if ref_network in ['nonspecific_chip','STRING']:
                baseline_auprc = baseline_df.loc[ref_network]['auprc']
            else:
                baseline_auprc = baseline_df.loc['{}_specific_chip'.format(cluster_no)]['auprc']

            data_dict['x1'].append(noshare_val/baseline_auprc)
            data_dict['x2'].append(share_val/baseline_auprc)
            data_dict['cluster_no'].append(cluster_no)
            data_dict['ref_network'].append(ref_network)

    df = pd.DataFrame(data_dict)
    df['ref_network'] = [convert_dataset_name(m) for m in df['ref_network']]

    plt.figure(figsize=(4,4))
    plt.plot(np.linspace(0,50),np.linspace(0,50),c='black',linestyle='--',lw=0.5)
    sns.scatterplot(x='x1',y='x2',data=df,hue='ref_network')

    min_x = min(df['x1'].min(),df['x2'].min())
    max_x = max(df['x1'].max(),df['x2'].max())
    plt.xlim(min_x*0.99,max_x*1.01)
    plt.ylim(min_x*0.99,max_x*1.01)
    plt.xlabel(measure.upper() + ' Ratio\n (without ShareNet)',fontsize=16)
    plt.ylabel(measure.upper() + ' Ratio\n (with ShareNet)',fontsize=16)
    plt.title(convert_method_name(base_method.split('.')[0]),fontsize=16)

    lg = plt.legend(fontsize=16,bbox_to_anchor=(1,1),markerscale=2)
    lg.remove()
    plt.show()
```

### Wilcoxon Signed-Rank Test

```
print(wilcoxon(df['x2'],y=df['x1'],alternative='greater'))
```
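As an optional follow-up (not part of the original pipeline), the sketch below reuses the `df` built in the plotting loop above to report the median AUPRC ratio with and without ShareNet for each reference network separately, together with a paired Wilcoxon test per reference network. It assumes `df` and `wilcoxon` from the cells above are still in scope.

```
# Hypothetical follow-up: per-reference-network summary of the AUPRC ratios
# assembled above (df columns: x1 = without ShareNet, x2 = with ShareNet).
for ref_network, sub_df in df.groupby('ref_network'):
    stat, pval = wilcoxon(sub_df['x2'], y=sub_df['x1'], alternative='greater')
    print('{}: median ratio without ShareNet = {:.2f}, with ShareNet = {:.2f}, p = {:.3g}'.format(
        ref_network, sub_df['x1'].median(), sub_df['x2'].median(), pval))
```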
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import astropy.units as u
from astropy.time import Time
import pytz
import astropy
from astropy.coordinates import SkyCoord
from astroplan import Observer, FixedTarget, observability_table, Constraint
from astropy.coordinates import get_sun, get_body, get_moon
from astroplan import moon_illumination
from astroplan import *
from astroplan import download_IERS_A

download_IERS_A()
```

# Setup

```
# Set up the location of observatory
palomar = Observer.at_site('Palomar')
```

We get the dataset from https://www.cosmos.esa.int/web/hipparcos/sample-tables-1

```
data = pd.read_csv('data2.csv')

# Pick out the objects having names
mask = list()
for n in range(len(data['Name'])):
    if(isinstance(data['Name'][n],str)):
        mask.append(True)
    else:
        mask.append(False)

data[mask].to_csv('hipparcos.csv')
star_table = pd.read_csv('hipparcos.csv')

# Set the observing window
obs_dates = list()
for i in range(15,31):
    date = "2019-10-" + str(i)
    time = Time(date)
    obs_dates.append(time)
```

# Visibility

```
# Compile a list of all defined objects
stars = list()
for s in range(len(star_table['Name'])):
    coords = SkyCoord(star_table['ra (deg)'][s]*u.deg, star_table['dec (deg)'][s]*u.deg, frame='icrs')
    stars.append(FixedTarget(name=star_table['Name'][s], coord=coords))
```

Create an observability table:

```
time = Time(["2019-10-15 00:00", "2019-10-30 23:59"])

# The moon's illumination values are from: https://www.calendar-12.com/moon_calendar/2019/october
# Use the mean value of the moon's illumination during our observing window
constraint = [AltitudeConstraint(30*u.deg, 90*u.deg),AirmassConstraint(2),
              AtNightConstraint.twilight_astronomical(),MoonIlluminationConstraint(0.4),
              MoonSeparationConstraint(min = 10*u.deg)]

observability_table(observer = palomar, constraints = constraint, targets = stars, time_range = time)

# Pick ten observable objects for the following steps
targets = list()
targets.append(stars[4])
targets.append(stars[5])
targets.append(stars[6])
targets.append(stars[7])
targets.append(stars[8])
targets.append(stars[3])
targets.append(stars[38])
targets.append(stars[37])
targets.append(stars[36])
targets.append(stars[29])

# Check the visibility of the ten objects with the highest value of moon's illumination
constraint = [AltitudeConstraint(30*u.deg, 90*u.deg),AirmassConstraint(2),
              AtNightConstraint.twilight_astronomical(),MoonIlluminationConstraint(0.95),
              MoonSeparationConstraint(min = 10*u.deg)]

observability_table(observer = palomar, constraints = constraint, targets = targets, time_range = time)

# Check the visibility of the ten objects with the lowest value of moon's illumination
constraint = [AltitudeConstraint(30*u.deg, 90*u.deg),AirmassConstraint(2),
              AtNightConstraint.twilight_astronomical(),MoonIlluminationConstraint(0.01),
              MoonSeparationConstraint(min = 10*u.deg)]

observability_table(observer = palomar, constraints = constraint, targets = targets, time_range = time)
```

Therefore, these ten objects can be observed through the whole observing window.

# Moon Phase

We found the moon phases from: https://www.calendar-12.com/moon_calendar/2019/october

Based on the moon phase calendar, the moon will be waning gibbous on Oct 15th, 2019 (the start of our observing window) and will keep waning until Oct 27th, 2019. Then the moon will start waxing again from a new moon at the end of our window.

```
# The moon's illumination
for i in range(len(obs_dates)):
    moon_ratio = moon_illumination(obs_dates[i])
    print(obs_dates[i])
    print(moon_ratio)
    print("\n")
```

Combined with the moon's illumination values, the moon will interfere somewhat with our observations at the beginning of the observing window. As the moon wanes later on, the interference can be ignored.

# Visibility one month later

```
# New time range
time2 = Time(["2019-11-15 00:00", "2019-11-30 23:59"])

obs_dates2 = list()
for i in range(15,31):
    date = "2019-11-" + str(i)
    time = Time(date)
    obs_dates2.append(time)

# The moon's illumination values are from: https://www.calendar-12.com/moon_calendar/2019/november
# Use the mean value of the moon's illumination during the observing window again
illum = list()
for i in range(len(obs_dates2)):
    moon_ratio = moon_illumination(obs_dates2[i])
    illum.append(moon_ratio)

np.mean(illum)

constraint = [AltitudeConstraint(30*u.deg, 90*u.deg),AirmassConstraint(2),
              AtNightConstraint.twilight_astronomical(),MoonIlluminationConstraint(0.35),
              MoonSeparationConstraint(min = 10*u.deg)]

observability_table(observer = palomar, constraints = constraint, targets = targets, time_range = time2)
```

Thus, the observing conditions one month later will be worse for these ten objects.
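To make that comparison concrete, here is an optional sketch (not in the original notebook) that plots the moon illumination for both observing windows computed above; it assumes `obs_dates`, `obs_dates2`, `moon_illumination`, and `plt` are still in scope.

```
# Compare moon illumination across the October and November windows.
illum_oct = [moon_illumination(t) for t in obs_dates]
illum_nov = [moon_illumination(t) for t in obs_dates2]
days = list(range(15, 31))

plt.plot(days, illum_oct, marker='o', label='October 2019')
plt.plot(days, illum_nov, marker='s', label='November 2019')
plt.xlabel('Day of month')
plt.ylabel('Moon illumination fraction')
plt.legend()
plt.show()
```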
*This article covers the basic building blocks of probabilistic programming.*

- A stochastic function is a model of some data-generating process.
- A primitive stochastic function is a stochastic function for which the probability of a sample can be computed explicitly.

Core question:

- Samples have names. How do we obtain a sample's name, and how is that name used?

# An Introduction to Models in Pyro

The basic unit of probabilistic programs is the _stochastic function_. This is an arbitrary Python callable that combines two ingredients:

- deterministic Python code; and
- primitive stochastic functions that call a random number generator

Throughout the tutorials and documentation, **we will often call stochastic functions models**.

----

The basic unit of probabilistic programming is the stochastic function, which combines deterministic code with primitive stochastic functions that call a random number generator. In other words, a stochastic function behaves like a basic PyTorch module with a `__call__` method. In this tutorial we call stochastic functions models because a stochastic function is an implementation of some data-generating process (DGP). Expressing models as stochastic functions means that models can be composed, reused, imported, and serialized just like ordinary Python callables.

```
import torch
import pyro

pyro.set_rng_seed(101)
```

## Primitive Stochastic Functions

Primitive stochastic functions are the basic building blocks for constructing models. Below we create the primitive stochastic function for a normal distribution and show how it is used.

```
loc = 0.   # mean zero
scale = 1. # unit variance
normal = torch.distributions.Normal(loc, scale) # create a normal distribution object
x = normal.rsample() # draw a sample from N(0,1)
print("sample", x)
print("log prob", normal.log_prob(x)) # score the sample from N(0,1)

[x for x in dir(normal) if not '_' in x]
```

A stochastic function has several common methods, including drawing samples and computing (log) probabilities.

## A Simple Model

All probabilistic programs are built by composing primitive stochastic functions and deterministic computation. Since our ultimate goal is to use probabilistic programming to model the real world, let's start from a concrete example.

Suppose we have a bunch of data about the daily mean temperature and the weather conditions, and we want to reason about how the weather relates to the temperature. The simple stochastic function below describes the data-generating process.

```
from graphviz import Source
Source('digraph{rankdir=LR; cloudy -> temperature}')

def weather():
    cloudy = torch.distributions.Bernoulli(0.3).sample()
    cloudy = 'cloudy' if cloudy.item() == 1.0 else 'sunny'
    mean_temp = {'cloudy': 55.0, 'sunny': 75.0}[cloudy]
    scale_temp = {'cloudy': 10.0, 'sunny': 15.0}[cloudy]
    temp = torch.distributions.Normal(mean_temp, scale_temp).rsample()
    return cloudy, temp.item()

g = weather()
print(g)
```

However, `weather` is entirely independent of Pyro - it only calls PyTorch. **We need to turn it into a Pyro program if we want to use this model for anything other than sampling fake data.**

What else can this model do besides generating fake data? For example, declaring observed data for variational inference, or extracting intermediate results of the generative process.

## Model with Pyro

The `pyro.sample` Primitive

```
%psource pyro.sample

# Where exactly is the sample's name used, and how do we retrieve it?
x = pyro.sample("my_sample", pyro.distributions.Normal(loc, scale))
print(x)

torch.distributions.Normal(loc, scale).rsample(), pyro.distributions.Normal(loc, scale).rsample(), \
torch.distributions.Normal(loc, scale), pyro.distributions.Normal(loc, scale)
```

Just like a direct call to `torch.distributions.Normal().rsample()`, this returns a sample from the unit normal distribution. **The crucial difference** is that this sample is _named_. Pyro's backend uses these names to uniquely identify sample statements and _change their behavior at runtime_ depending on how the enclosing stochastic function is being used. As we will see, this is how Pyro can implement the various manipulations that underlie inference algorithms.

---

The important difference is that one sample is named and the other is not. The backend uses these names at the sample statements. **So how exactly is the name used?** Perhaps through `pyro.param`.

Now that we've introduced `pyro.sample` and `pyro.distributions` we can rewrite our simple model as a Pyro program:

```
def weather():
    cloudy = pyro.sample('cloudy', pyro.distributions.Bernoulli(0.3))
    cloudy = 'cloudy' if cloudy.item() == 1.0 else 'sunny'
    mean_temp = {'cloudy': 55.0, 'sunny': 75.0}[cloudy]
    scale_temp = {'cloudy': 10.0, 'sunny': 15.0}[cloudy]
    temp = pyro.sample('temp', pyro.distributions.Normal(mean_temp, scale_temp))
    return cloudy, temp.item()

for _ in range(3):
    print(weather())
```

Procedurally, `weather()` is still a non-deterministic Python callable that returns two random samples. Because the randomness is now invoked with `pyro.sample`, however, it is much more than that. In particular `weather()` specifies a joint probability distribution over two named random variables: `cloudy` and `temp`. As such, **it defines a probabilistic model that we can reason about using the techniques of probability theory.** For example we might ask: if I observe a temperature of 70 degrees, how likely is it to be cloudy? How to formulate and answer these kinds of questions will be the subject of the next tutorial.

## Universal Stochastic Functions

**Universality: Stochastic Recursion, Higher-order Stochastic Functions, and Random Control Flow**

We've now seen how to define a simple model. Building off of it is easy. For example:

```
from graphviz import Source
Source('digraph{rankdir=LR; cloudy -> temperature -> ice_cream; cloudy -> ice_cream}')

def ice_cream_sales():
    cloudy, temp = weather()
    expected_sales = 200. if cloudy == 'sunny' and temp > 80.0 else 50.
    ice_cream = pyro.sample('ice_cream', pyro.distributions.Normal(expected_sales, 10.0))
    return ice_cream

ice_cream_sales()
```

**This kind of modularity, familiar to any programmer, is obviously very powerful.** But is it powerful enough to encompass all the different kinds of models we'd like to express?

---

This kind of modularity is very powerful. Below is an example where random control flow produces a geometric distribution. For the geometric distribution, one equation-style view is $T = \sum_{i=1}^{T-1} I(X_i = 0) + I(X_T = 1)$, but this is not particularly useful; it is better to start from the definition: $T$ is the number of draws until the first 1 is sampled. The dependence between $X$ and $T$ may look strange, but in fact $T$ is a function of the sample sequence $(X_1, X_2, \ldots)$, so $T$ essentially depends only on the sequence of $X$ samples and carries a time dimension.

```
def geometric(p, t=None):
    if t is None:
        t = 0
    x = pyro.sample("x_{}".format(t), pyro.distributions.Bernoulli(p)) # this is where the sample names matter!!!
    if x.item() == 1:
        return 0
    else:
        return 1 + geometric(p, t + 1)

print(geometric(0.5))
```

Note that the names `x_0`, `x_1`, etc., in `geometric()` are generated dynamically and that different executions can have different numbers of named random variables.

We are also free to define stochastic functions that accept as input or produce as output other stochastic functions:

```
from graphviz import Source
Source('digraph{rankdir=LR; scale, mu_latent -> z1, z2 -> y}')

def normal_product(loc, scale):
    z1 = pyro.sample("z1", pyro.distributions.Normal(loc, scale))
    z2 = pyro.sample("z2", pyro.distributions.Normal(loc, scale))
    y = z1 * z2
    return y

def make_normal_normal():
    mu_latent = pyro.sample("mu_latent", pyro.distributions.Normal(0, 1))
    fn = lambda scale: normal_product(mu_latent, scale)
    return fn

print(make_normal_normal()(1.))
```

Pyro can build arbitrarily complex stochastic functions and simulate all kinds of data-generating processes; it is a universal probabilistic programming language.

## Next Steps?

From the prior distribution to the posterior distribution.
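As a small preview of how those sample names actually get used (an addition to the tutorial, assuming the Pyro version of `weather` defined above is in scope), the sketch below records an execution trace keyed by sample names and then conditions the named `temp` site on an observed value.

```
import torch
import pyro
import pyro.poutine as poutine

# Record one execution of weather() and inspect the trace; the trace
# is keyed by the names that were passed to pyro.sample.
trace = poutine.trace(weather).get_trace()
for name, site in trace.nodes.items():
    if site["type"] == "sample":
        print(name, site["value"])

# Fix the named site "temp" to an observed value of 70 degrees; inference
# algorithms build on exactly this name-based conditioning mechanism.
conditioned_weather = pyro.condition(weather, data={"temp": torch.tensor(70.0)})
print(conditioned_weather())
```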
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.

Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# **** These install steps won't work until my fork is added to the NVIDIA repo, ****
# in the meantime, clone my fork and use ./reinstall

## Install dependencies
!pip install wget
!pip install faiss-gpu

## Install NeMo
BRANCH = 'r1.0.0rc1'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]

import faiss
import torch
import wget
import os
import numpy as np
import pandas as pd

from omegaconf import OmegaConf
from pytorch_lightning import Trainer
from IPython.display import display
from tqdm import tqdm

from nemo.collections import nlp as nemo_nlp
from nemo.utils.exp_manager import exp_manager
```

## Entity Linking

#### Task Description

[Entity linking](https://en.wikipedia.org/wiki/Entity_linking) is the process of connecting concepts mentioned in natural language to their canonical forms stored in a knowledge base. For example, say a knowledge base contained the entity 'ID3452 influenza' and we wanted to process some natural language containing the sentence "The patient has flu like symptoms". An entity linking model would match the word 'flu' to the knowledge base entity 'ID3452 influenza', allowing for disambiguation and normalization of concepts referenced in text. Entity linking applications range from helping automate data ingestion to assisting in real time dialogue concept normalization.

Within NeMo and this tutorial we use the entity linking approach described in the [Self-alignment Pre-training for Biomedical Entity Representations](https://arxiv.org/abs/2010.11784) paper. The main idea behind this approach is to reshape an initial concept embedding space such that synonyms of the same concept are pulled closer together and unrelated concepts are pushed further apart. The concept embeddings from this reshaped space can then be used to build a knowledge base embedding index. This index stores concept IDs mapped to their respective concept embeddings in a format conducive to efficient nearest neighbor search. We can link query concepts to their canonical forms in the knowledge base by performing a nearest neighbor search, matching concept query embeddings to the most similar concept embeddings in the knowledge base index.

In this tutorial we will be using the [faiss](https://github.com/facebookresearch/faiss) library to build our concept index.

#### Self Alignment Pretraining

Self-alignment pretraining is a second stage pretraining of an existing encoder (called second stage because the encoder model can be further finetuned after this more general pretraining step). The dataset used during training consists of pairs of concept synonyms that map to the same ID. At each training iteration, we only select *hard* examples present in the mini batch to calculate the loss and update the model weights. In this context, a hard example is an example where a concept is closer to an unrelated concept in the mini batch than it is to the synonym concept it is paired with by some margin. I encourage you to take a look at [section 2 of the paper](https://arxiv.org/pdf/2010.11784.pdf) for a more formal and in-depth description of how hard examples are selected.

We then use a [metric learning loss](https://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Multi-Similarity_Loss_With_General_Pair_Weighting_for_Deep_Metric_Learning_CVPR_2019_paper.pdf) calculated from the selected hard examples. This loss helps reshape the embedding space: the concept representation space is rearranged to be more suitable for entity matching via embedding cosine similarity.

Now that we have an idea of what's going on, let's get started!

## Dataset Preprocessing

```
# Download data
DATA_DIR = "tiny_example_data"

wget.download('https://github.com/vadam5/NeMo/blob/main/examples/nlp/entity_linking/data/tiny_example_data.zip?raw=true',
              os.path.join("tiny_example_data.zip"))

!unzip tiny_example_data.zip
```

In this tutorial we will be using a tiny toy dataset to demonstrate how to use NeMo's entity linking model functionality. The dataset includes synonyms for 12 medical concepts. Here's the dataset before preprocessing:

```
raw_data = pd.read_csv(os.path.join(DATA_DIR, "tiny_example_dev_data.csv"), names=["ID", "CONCEPT"], index_col=False)
print(raw_data)
```

We've already paired off the concepts for this dataset with the format `ID concept_synonym1 concept_synonym2`. Here are the first ten rows:

```
training_data = pd.read_table(os.path.join(DATA_DIR, "tiny_example_train_pairs.tsv"), names=["ID", "CONCEPT_SYN1", "CONCEPT_SYN2"], delimiter='\t')
print(training_data.head(10))
```

Use the [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/index.html) dataset for full medical domain entity linking training. The data contains over 9 million entities and is a table of medical concepts with their corresponding concept IDs (CUI). After [requesting a free license and making a UMLS Terminology Services (UTS) account](https://www.nlm.nih.gov/research/umls/index.html), the [entire UMLS dataset](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) can be downloaded from the NIH's website.

If you've cloned the NeMo repo you can run the data processing script located in `examples/nlp/entity_linking/data/umls_dataset_processing.py` on the full dataset. This script will take in the initial table of UMLS concepts and produce a .tsv file with each row formatted as `CUI\tconcept_synonym1\tconcept_synonym2`. Once the UMLS dataset .RRF file is downloaded, the script can be run from the `examples/nlp/entity_linking` directory like so:

```
python data/umls_dataset_processing.py --cfg conf/umls_medical_entity_linking_config.yaml
```

## Model Training

We now second-stage pretrain a BERT Base encoder on the self-alignment pretraining (SAP) task for improved entity linking.
```
# Download config
wget.download("https://raw.githubusercontent.com/vadam5/NeMo/main/examples/nlp/entity_linking/conf/tiny_example_entity_linking_config.yaml",
              os.path.join("tiny_example_entity_linking_config.yaml"))

# Load in config file
cfg = OmegaConf.load(os.path.join("tiny_example_entity_linking_config.yaml"))

# Initialize the trainer and model
trainer = Trainer(**cfg.trainer)
exp_manager(trainer, cfg.get("exp_manager", None))
model = nemo_nlp.models.EntityLinkingModel(cfg=cfg.model, trainer=trainer)

# Train and save the model
trainer.fit(model)
model.save_to(cfg.model.nemo_path)
```

You can run the script at `examples/nlp/entity_linking/self_alignment_pretraining.py` to train a model on a larger dataset. Run

```
python self_alignment_pretraining.py
```

from the `examples/nlp/entity_linking` directory.

## Model Evaluation

Let's evaluate our freshly trained model and compare its performance with a BERT Base encoder that hasn't undergone self-alignment pretraining. We first need to restore our trained model and load our BERT Base Baseline model.

```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Restore second stage pretrained model
sap_model_cfg = cfg
sap_model = nemo_nlp.models.EntityLinkingModel.restore_from(sap_model_cfg.model.nemo_path).to(device)

# Load original model
base_model_cfg = OmegaConf.load("tiny_example_entity_linking_config.yaml")

# Set train/val datasets to None to avoid loading datasets associated with training
base_model_cfg.model.train_ds = None
base_model_cfg.model.validation_ds = None
base_model_cfg.index.index_save_name = "base_model_index"
base_model = nemo_nlp.models.EntityLinkingModel(base_model_cfg.model).to(device)
```

We are going to evaluate our model on a nearest neighbors task using top 1 and top 5 accuracy as our metric, using a tiny example test knowledge base and test queries. For this evaluation we compare every test query with every concept vector in our test set knowledge base and rank each item in the knowledge base by its cosine similarity with the test query. We then compare the IDs of the predicted most similar test knowledge base concepts with our ground truth query IDs to calculate top 1 and top 5 accuracy. For this metric, higher is better.

```
# Helper function to get data embeddings
def get_embeddings(model, dataloader):
    embeddings, cids = [], []

    with torch.no_grad():
        for batch in tqdm(dataloader):
            input_ids, token_type_ids, attention_mask, batch_cids = batch
            batch_embeddings = model.forward(input_ids=input_ids.to(device),
                                             token_type_ids=token_type_ids.to(device),
                                             attention_mask=attention_mask.to(device))

            # Accumulate index embeddings and their corresponding IDs
            embeddings.extend(batch_embeddings.cpu().detach().numpy())
            cids.extend(batch_cids)

    return embeddings, cids

def evaluate(model, test_kb, test_queries, ks):
    # Initialize knowledge base and query data loaders
    test_kb_dataloader = model.setup_dataloader(test_kb, is_index_data=True)
    test_query_dataloader = model.setup_dataloader(test_queries, is_index_data=True)

    # Get knowledge base and query embeddings
    test_kb_embs, test_kb_cids = get_embeddings(model, test_kb_dataloader)
    test_query_embs, test_query_cids = get_embeddings(model, test_query_dataloader)

    # Calculate the cosine distance between each query and knowledge base concept
    score_matrix = np.matmul(np.array(test_query_embs), np.array(test_kb_embs).T)
    accs = {k : 0 for k in ks}

    # Compare the knowledge base IDs of the knowledge base entities with
    # the smallest cosine distance from the query
    for query_idx in tqdm(range(len(test_query_cids))):
        query_emb = test_query_embs[query_idx]
        query_cid = test_query_cids[query_idx]
        query_scores = score_matrix[query_idx]

        for k in ks:
            topk_idxs = np.argpartition(query_scores, -k)[-k:]
            topk_cids = [test_kb_cids[idx] for idx in topk_idxs]

            # If the correct query ID is among the top k closest kb IDs
            # the model correctly linked the entity
            match = int(query_cid in topk_cids)
            accs[k] += match

    for k in ks:
        accs[k] /= len(test_query_cids)

    return accs

test_kb = OmegaConf.create({
    "data_file": os.path.join(DATA_DIR, "tiny_example_test_kb.tsv"),
    "max_seq_length": 128,
    "batch_size": 10,
    "shuffle": False,
})

test_queries = OmegaConf.create({
    "data_file": os.path.join(DATA_DIR, "tiny_example_test_queries.tsv"),
    "max_seq_length": 128,
    "batch_size": 10,
    "shuffle": False,
})

ks = [1, 5]

base_accs = evaluate(base_model, test_kb, test_queries, ks)
base_accs["Model"] = "BERT Base Baseline"

sap_accs = evaluate(sap_model, test_kb, test_queries, ks)
sap_accs["Model"] = "BERT + SAP"

print("Top 1 and Top 5 Accuracy Comparison:")
results_df = pd.DataFrame([base_accs, sap_accs], columns=["Model", 1, 5])
results_df = results_df.style.set_properties(**{'text-align': 'left', }).set_table_styles([dict(selector='th', props=[('text-align', 'left')])])
display(results_df)
```

The purpose of this section was to show an example of evaluating your entity linking model. This evaluation set contains very little data, and no serious conclusions should be drawn about model performance. Top 1 accuracy should be between 0.7 and 1.0 for both models and top 5 accuracy should be between 0.9 and 1.0. When evaluating a model trained on a larger dataset, you can use a nearest neighbors index to speed up the evaluation time.

## Building an Index

To qualitatively observe the improvement we gain from the second stage pretraining, let's build two indices. One will be built with BERT base embeddings before self-alignment pretraining and one will be built with the model we just trained. Our knowledge base in this tutorial will be in the same domain and have some overlapping concepts with the training set. This data file is formatted as `ID\tconcept`.
The `EntityLinkingDataset` class can load the data used for training the entity linking encoder as well as for building the index if the `is_index_data` flag is set to true.

```
def build_index(cfg, model):
    # Setup index dataset loader
    index_dataloader = model.setup_dataloader(cfg.index.index_ds, is_index_data=True)

    # Get index dataset embeddings
    embeddings, _ = get_embeddings(model, index_dataloader)

    # Train IVFFlat index using faiss
    embeddings = np.array(embeddings)
    quantizer = faiss.IndexFlatL2(cfg.index.dims)
    index = faiss.IndexIVFFlat(quantizer, cfg.index.dims, cfg.index.nlist)
    index = faiss.index_cpu_to_all_gpus(index)
    index.train(embeddings)

    # Add concept embeddings to index
    for i in tqdm(range(0, embeddings.shape[0], cfg.index.index_batch_size)):
        index.add(embeddings[i:i+cfg.index.index_batch_size])

    # Save index
    faiss.write_index(faiss.index_gpu_to_cpu(index), cfg.index.index_save_name)

build_index(sap_model_cfg, sap_model.to(device))
build_index(base_model_cfg, base_model.to(device))
```

## Entity Linking via Nearest Neighbor Search

Now it's time to query our indices!

```
def query_index(cfg, model, index, queries, id2string):
    query_embs = get_query_embedding(queries, model).cpu().detach().numpy()

    # Use query embedding to find closest concept embedding in knowledge base
    distances, neighbors = index.search(query_embs, cfg.index.top_n)
    neighbor_concepts = [[id2string[concept_id] for concept_id in query_neighbor] \
                                                for query_neighbor in neighbors]

    for query_idx in range(len(queries)):
        print(f"\nThe most similar concepts to {queries[query_idx]} are:")
        for cid, concept, dist in zip(neighbors[query_idx], neighbor_concepts[query_idx], distances[query_idx]):
            print(cid, concept, 1 - dist)

def get_query_embedding(queries, model):
    model_input = model.tokenizer(queries,
                                  add_special_tokens = True,
                                  padding = True,
                                  truncation = True,
                                  max_length = 512,
                                  return_token_type_ids = True,
                                  return_attention_mask = True)

    query_emb = model.forward(input_ids=torch.LongTensor(model_input["input_ids"]).to(device),
                              token_type_ids=torch.LongTensor(model_input["token_type_ids"]).to(device),
                              attention_mask=torch.LongTensor(model_input["attention_mask"]).to(device))

    return query_emb

# Load indices
sap_index = faiss.read_index(sap_model_cfg.index.index_save_name)
base_index = faiss.read_index(base_model_cfg.index.index_save_name)

# Map concept IDs to one canonical string
index_data = open(sap_model_cfg.index.index_ds.data_file, "r", encoding='utf-8-sig')
id2string = {}

for line in index_data:
    cid, concept = line.split("\t")
    id2string[int(cid) - 1] = concept.strip()

id2string

# Query both indices
queries = ["high blood sugar", "head pain"]

print("BERT Base output before Self Alignment Pretraining:")
query_index(base_model_cfg, base_model, base_index, queries, id2string)

print("-" * 50)

print("BERT Base output after Self Alignment Pretraining:")
query_index(sap_model_cfg, sap_model, sap_index, queries, id2string)
```

Even after only training on this tiny amount of data, the qualitative performance boost from self-alignment pretraining is visible. The baseline model links "*high blood sugar*" to the entity "*6 diabetes*" while our SAP BERT model accurately links "*high blood sugar*" to "*Hyperinsulinemia*". Similarly, "*head pain*" and "*Myocardial infraction*" are not the same concept, but "*head pain*" and "*Headache*" are.

For larger knowledge bases, keeping the default embedding size might be too memory-intensive and cause out-of-memory issues. You can apply PCA or some other dimensionality reduction method to your data to reduce its memory footprint (a short illustrative sketch is included at the end of this section).

Code for creating a text file of all the UMLS entities in the correct format needed to build an index and creating a dictionary mapping concept ids to canonical concept strings can be found in `examples/nlp/entity_linking/data/umls_dataset_processing.py`. The code for extracting knowledge base concept embeddings, training and applying a PCA transformation to the embeddings, building a faiss index and querying the index from the command line is located at `examples/nlp/entity_linking/build_and_query_index.py`. If you've cloned the NeMo repo, both of these steps can be run as follows on the command line from the `examples/nlp/entity_linking/` directory.

```
python data/umls_dataset_processing.py --index --cfg /conf/medical_entity_linking_config.yaml
python build_and_query_index.py --restore --cfg conf/medical_entity_linking_config.yaml --top_n 5
```

Intermediate steps of the index building process are saved. If an error occurs, previously completed steps do not need to be rerun.

## Command Recap

Here is a recap of the commands and steps to repeat this process on the full UMLS dataset.

1) Download the UMLS dataset file `MRCONSO.RRF` from the NIH website and place it in the `examples/nlp/entity_linking/data` directory.

2) Run the following commands from the `examples/nlp/entity_linking` directory

```
python data/umls_dataset_processing.py --cfg conf/umls_medical_entity_linking_config.yaml
python self_alignment_pretraining.py
python data/umls_dataset_processing.py --index --cfg conf/umls_medical_entity_linking_config.yaml
python build_and_query_index.py --restore --cfg conf/umls_medical_entity_linking_config.yaml --top_n 5
```

The model will take ~24hrs to train on two GPUs and ~48hrs to train on one GPU.
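As promised above, here is a rough illustration of the dimensionality-reduction idea. This is a hedged sketch only, not the implementation in the repo's `build_and_query_index.py` script; the embedding dimensions (768 in, 256 out) and the use of faiss's `PCAMatrix` transform are assumptions for illustration.

```
import numpy as np
import faiss

def reduce_embedding_dims(embeddings, target_dim=256):
    """Train a PCA transform on the concept embeddings and project them to
    target_dim before building the index (illustrative, not the repo script)."""
    embeddings = np.ascontiguousarray(embeddings, dtype=np.float32)
    pca = faiss.PCAMatrix(embeddings.shape[1], target_dim)
    pca.train(embeddings)
    # The same trained transform should also be applied to query embeddings
    # at search time so queries and the index live in the same space.
    return pca, pca.apply_py(embeddings)

# Example (hypothetical shapes): shrink 768-d concept embeddings to 256-d and
# then build the IVFFlat index with cfg.index.dims set to 256.
```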
# Nearest neighbors This notebook illustrates the classification of the nodes of a graph by the [k-nearest neighbors algorithm](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm), based on the labels of a few nodes. ``` from IPython.display import SVG import numpy as np from sknetwork.data import karate_club, painters, movie_actor from sknetwork.classification import KNN from sknetwork.embedding import GSVD from sknetwork.visualization import svg_graph, svg_digraph, svg_bigraph ``` ## Graphs ``` graph = karate_club(metadata=True) adjacency = graph.adjacency position = graph.position labels_true = graph.labels seeds = {i: labels_true[i] for i in [0, 33]} knn = KNN(GSVD(3), n_neighbors=1) labels_pred = knn.fit_transform(adjacency, seeds) precision = np.round(np.mean(labels_pred == labels_true), 2) precision image = svg_graph(adjacency, position, labels=labels_pred, seeds=seeds) SVG(image) # soft classification (here probability of label 1) knn = KNN(GSVD(3), n_neighbors=2) knn.fit(adjacency, seeds) membership = knn.membership_ scores = membership[:,1].toarray().ravel() image = svg_graph(adjacency, position, scores=scores, seeds=seeds) SVG(image) ``` ## Directed graphs ``` graph = painters(metadata=True) adjacency = graph.adjacency position = graph.position names = graph.names rembrandt = 5 klimt = 6 cezanne = 11 seeds = {cezanne: 0, rembrandt: 1, klimt: 2} knn = KNN(GSVD(3), n_neighbors=2) labels = knn.fit_transform(adjacency, seeds) image = svg_digraph(adjacency, position, names, labels=labels, seeds=seeds) SVG(image) # soft classification membership = knn.membership_ scores = membership[:,0].toarray().ravel() image = svg_digraph(adjacency, position, names, scores=scores, seeds=[cezanne]) SVG(image) ``` ## Bipartite graphs ``` graph = movie_actor(metadata=True) biadjacency = graph.biadjacency names_row = graph.names_row names_col = graph.names_col inception = 0 drive = 3 budapest = 8 seeds_row = {inception: 0, drive: 1, budapest: 2} knn = KNN(GSVD(3), n_neighbors=2) labels_row = knn.fit_transform(biadjacency, seeds_row) labels_col = knn.labels_col_ image = svg_bigraph(biadjacency, names_row, names_col, labels_row, labels_col, seeds_row=seeds_row) SVG(image) # soft classification membership_row = knn.membership_row_ membership_col = knn.membership_col_ scores_row = membership_row[:,1].toarray().ravel() scores_col = membership_col[:,1].toarray().ravel() image = svg_bigraph(biadjacency, names_row, names_col, scores_row=scores_row, scores_col=scores_col, seeds_row=seeds_row) SVG(image) ```
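As a small addition (not in the original notebook), the sketch below reuses the same classifier on the karate club graph to see how the `n_neighbors` setting affects precision against the true labels; it only uses the classes and calls already shown above.

```
# Rebuild the karate club data so this cell is self-contained.
graph = karate_club(metadata=True)
adjacency = graph.adjacency
labels_true = graph.labels
seeds = {i: labels_true[i] for i in [0, 33]}

# Precision of the KNN classifier for a few neighborhood sizes.
for k in [1, 2, 3, 5]:
    knn = KNN(GSVD(3), n_neighbors=k)
    labels_pred = knn.fit_transform(adjacency, seeds)
    print(k, np.round(np.mean(labels_pred == labels_true), 2))
```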
``` import pandas as pd from matplotlib.ticker import FuncFormatter from Cohort import CohortTable import numpy as np import altair as alt import math from IPython.display import display, Markdown # Pulled from class module; need to remove self references def print_all_tables(self): display(Markdown('## Productivity Table')) display(Markdown('The following table contains the percentage of productivity for each cohort by year.')) display(Markdown('The maximum percentage for each cell is 100% or 1. Any value less than 1 is used to discount the \ productivity of that cohort class for that particular year.\n')) self.print_table(self.productivity_df, 'Productivity Table') display(Markdown('## Employee Count before Attrition')) display(Markdown('This table for each year, by each cohort, if no attrition were to occur.\n')) self.print_table(self.employee_count_df, 'Employee Count (Before Attrition) by Year', precision=0, create_sum=True, sum_title='Employees') display(Markdown('## Attrition Mask Table')) display(Markdown('This table represents the *percentage* of the cohort **population** that has left. The number for each cohort starts\ at 1 (or 100%) and decreases over time. If the argument *attrition_y0* is **TRUE**, the first year of the cohort\ is reduced by the annual attrition rate. Otherwise, attrition starts in the second year of each cohort.\n')) self.print_table(pd.DataFrame(self.attrition_mask), 'Attrition Mask - 0% to 100% of Employee Count') display(Markdown('## Retained Employees after Attrition')) display(Markdown('This table contains the number of employees that remain with the company after accounting for attrition. This \ table contains only whole employees, not fractions, to illustrate when each person is expected to leave as opposed \ to the Full Time Equivalent (FTE) table below.\n')) self.print_table(self.retained_employee_count_df, 'Employees, After Attrition, by Year', precision=0, create_sum=True, sum_title='Employees') display(Markdown('## Full Time Equivalent Table')) display(Markdown('This table takes the retained employees after attrition from the table above and calculates the \ number of FTE after applying mid-year hiring. We assume that hiring takes place throughout the year rather than have \ all employees hired on the first of the year. This results in a lower FTE figure for the first year of the cohort.\n')) self.print_table(self.retained_fte_df, 'FTE Table', create_sum=True, sum_title='FTE') display(Markdown('## Full Time Equivalent after Factoring Productivity Ramp Up')) display(Markdown('This table takes the FTE figures from the table above and applies the ramp up in productivity.\n')) self.print_table(self.retained_fte_factored_df, 'FTE After Applying Productivity Ramp', create_sum=True, sum_title='FTE') display(Markdown('## Revenue Table')) display(Markdown('This table takes the final FTE figures, after factoring for productivity ramp up periods, and calculates \ the total revenue per year and per cohort.\n')) self.print_table(self.revenue_df, 'Total Revenue by Year', precision=0, create_sum=True, sum_title='Revenue') def print_table(self, df, table_title, precision=2, create_sum=False, sum_title='Sum'): df.index.name='Cohort' if create_sum: sum_title = 'Sum of '+sum_title df.loc[sum_title] = df.sum() format_string = '{:,.' 
+ str(precision) + 'f}' df_styled = df.style.format(format_string).set_caption(table_title) display(df_styled) myTable = CohortTable(forecast_period=10, n_years=3, hires_per_year=[1,2,2,3,4,6], \ revenue_goal=1000000, annual_attrition=.16, first_year_full_hire=True, attrition_y0=False) myTable.print_all_tables() ax = myTable.retained_fte_factored_df.loc['Sum of FTE'].plot(kind='bar', title='Revenue by Year') ax.set_xlabel('Year') ax.set_ylabel('Revenue') ax.yaxis.set_major_formatter(FuncFormatter('{0:,.0f}'.format)) myTable.revenue_df.loc['Sum of Revenue'] = myTable.revenue_df.sum() revenue_melt = myTable.revenue_df.loc[['Sum of Revenue']].melt(var_name='Year', value_name='Revenue') chart = alt.Chart(revenue_melt).mark_area().encode( x = alt.X('Year', sort=list(revenue_melt.index)), y = alt.Y('Revenue'), tooltip = ['Year', alt.Tooltip('Revenue', format=',.0f')] ).properties(title='Total Revenue by Year', width=600, height=400).interactive() display(revenue_melt) display(chart) def size_list(l, length, pad=0): if len(l) >= length: del l[length:] else: l.extend([pad] * (length - len(l))) return l n_years = 5 forecast_period = 10 ramp_log = [math.log2(n) for n in np.delete(np.linspace(1,2,n_years+1),0)] ramp_log_full = size_list(ramp_log, forecast_period, pad=1) productivity_list = [np.roll(ramp_log_full, i) for i in range(forecast_period)] productivity_list = np.triu(productivity_list) pd.DataFrame(productivity_list) ramp_exp = [math.exp(1-(1/n**2)) for n in np.delete(np.linspace(0,1,n_years+1),0)] sns.lineplot(data=productivity_list[0]) def sigmoid(x, width, center): return 1 / (1 + np.exp(width*(-x - center))) sigmoid(-10, 0,0) s_curve = [sigmoid(n, .1, 0) for n in np.linspace(-10,10,50)] sns.lineplot(data=s_curve) s_curve = [sigmoid(n, .3, -10) for n in np.linspace(-10,10,50)] sns.lineplot(data=s_curve) s_curve ```
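To round off the ramp-up exploration above, here is a short self-contained sketch (an addition, with its own imports, since seaborn and pyplot are not imported in this notebook) that overlays the log2 productivity ramp used by the cohort table with an illustrative sigmoid ramp rescaled to the same number of years; the sigmoid steepness of 8 is an arbitrary choice for the comparison.

```
import math
import numpy as np
import matplotlib.pyplot as plt

n_years = 5

# log2 ramp, as used for the productivity table above
ramp_log = [math.log2(n) for n in np.delete(np.linspace(1, 2, n_years + 1), 0)]

# illustrative sigmoid ramp over the same number of years (never quite reaches 1)
ramp_sigmoid = [1 / (1 + np.exp(-8 * (n - 0.5))) for n in np.delete(np.linspace(0, 1, n_years + 1), 0)]

years = range(1, n_years + 1)
plt.plot(years, ramp_log, marker='o', label='log2 ramp')
plt.plot(years, ramp_sigmoid, marker='s', label='sigmoid ramp (illustrative)')
plt.xlabel('Year of cohort')
plt.ylabel('Productivity fraction')
plt.legend()
plt.show()
```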
# WorkFlow ## Classes ## Load the data ## Test Modelling ## Modelling **<hr>** ## Classes ``` NAME = "change the conv2d" BATCH_SIZE = 32 import os import cv2 import torch import numpy as np def load_data(img_size=112): data = [] index = -1 labels = {} for directory in os.listdir('./data/'): index += 1 labels[f'./data/{directory}/'] = [index,-1] print(len(labels)) for label in labels: for file in os.listdir(label): filepath = label + file img = cv2.imread(filepath,cv2.IMREAD_GRAYSCALE) img = cv2.resize(img,(img_size,img_size)) img = img / 255.0 data.append([ np.array(img), labels[label][0] ]) labels[label][1] += 1 for _ in range(12): np.random.shuffle(data) print(len(data)) np.save('./data.npy',data) return data import torch def other_loading_data_proccess(data): X = [] y = [] print('going through the data..') for d in data: X.append(d[0]) y.append(d[1]) print('splitting the data') VAL_SPLIT = 0.25 VAL_SPLIT = len(X)*VAL_SPLIT VAL_SPLIT = int(VAL_SPLIT) X_train = X[:-VAL_SPLIT] y_train = y[:-VAL_SPLIT] X_test = X[-VAL_SPLIT:] y_test = y[-VAL_SPLIT:] print('turning data to tensors') X_train = torch.from_numpy(np.array(X_train)) y_train = torch.from_numpy(np.array(y_train)) X_test = torch.from_numpy(np.array(X_test)) y_test = torch.from_numpy(np.array(y_test)) return [X_train,X_test,y_train,y_test] ``` **<hr>** ## Load the data ``` REBUILD_DATA = True if REBUILD_DATA: data = load_data() np.random.shuffle(data) X_train,X_test,y_train,y_test = other_loading_data_proccess(data) ``` ## Test Modelling ``` import torch import torch.nn as nn import torch.nn.functional as F # class Test_Model(nn.Module): # def __init__(self): # super().__init__() # self.conv1 = nn.Conv2d(1, 6, 5) # self.pool = nn.MaxPool2d(2, 2) # self.conv2 = nn.Conv2d(6, 16, 5) # self.fc1 = nn.Linear(16 * 25 * 25, 120) # self.fc2 = nn.Linear(120, 84) # self.fc3 = nn.Linear(84, 36) # def forward(self, x): # x = self.pool(F.relu(self.conv1(x))) # x = self.pool(F.relu(self.conv2(x))) # x = x.view(-1, 16 * 25 * 25) # x = F.relu(self.fc1(x)) # x = F.relu(self.fc2(x)) # x = self.fc3(x) # return x class Test_Model(nn.Module): def __init__(self): super().__init__() self.pool = nn.MaxPool2d(2, 2) self.conv1 = nn.Conv2d(1, 32, 5) self.conv3 = nn.Conv2d(32,64,5) self.conv2 = nn.Conv2d(64, 128, 5) self.fc1 = nn.Linear(128 * 10 * 10, 512) self.fc2 = nn.Linear(512, 256) self.fc4 = nn.Linear(256,128) self.fc3 = nn.Linear(128, 36) def forward(self, x,shape=False): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv3(x))) x = self.pool(F.relu(self.conv2(x))) if shape: print(x.shape) x = x.view(-1, 128 * 10 * 10) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc4(x)) x = self.fc3(x) return x device = torch.device('cuda') model = Test_Model().to(device) preds = model(X_test.reshape(-1,1,112,112).float().to(device),True) preds[0] optimizer = torch.optim.SGD(model.parameters(),lr=0.1) criterion = nn.CrossEntropyLoss() EPOCHS = 5 loss_logs = [] from tqdm import tqdm PROJECT_NAME = "Sign-Language-Recognition" def test(net,X,y): correct = 0 total = 0 net.eval() with torch.no_grad(): for i in range(len(X)): real_class = torch.argmax(y[i]).to(device) net_out = net(X[i].view(-1,1,112,112).to(device).float()) net_out = net_out[0] predictied_class = torch.argmax(net_out) if predictied_class == real_class: correct += 1 total += 1 return round(correct/total,3) import wandb len(os.listdir('./data/')) import random # index = random.randint(0,29) # print(index) # wandb.init(project=PROJECT_NAME,name=NAME) # for _ in 
tqdm(range(EPOCHS)): # for i in range(0,len(X_train),BATCH_SIZE): # X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device) # y_batch = y_train[i:i+BATCH_SIZE].to(device) # model.to(device) # preds = model(X_batch.float()) # loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long)) # optimizer.zero_grad() # loss.backward() # optimizer.step() # wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index])}) # wandb.finish() import matplotlib.pyplot as plt import pandas as pd df = pd.Series(loss_logs) df.plot.line(figsize=(12,6)) test(model,X_test,y_test) test(model,X_train,y_train) preds X_testing = X_train y_testing = y_train correct = 0 total = 0 model.eval() with torch.no_grad(): for i in range(len(X_testing)): real_class = torch.argmax(y_testing[i]).to(device) net_out = model(X_testing[i].view(-1,1,112,112).to(device).float()) net_out = net_out[0] predictied_class = torch.argmax(net_out) # print(predictied_class) if str(predictied_class) == str(real_class): correct += 1 total += 1 print(round(correct/total,3)) # for real,pred in zip(y_batch,preds): # print(real) # print(torch.argmax(pred)) # print('\n') ``` ## Modelling ``` # conv2d_output # conv2d_1_ouput # conv2d_2_ouput # output_fc1 # output_fc2 # output_fc4 # max_pool2d_keranl # max_pool2d # num_of_linear # activation # best num of epochs # best optimizer # best loss ## best lr class Test_Model(nn.Module): def __init__(self,conv2d_output=128,conv2d_1_ouput=32,conv2d_2_ouput=64,output_fc1=512,output_fc2=256,output_fc4=128,output=36,activation=F.relu,max_pool2d_keranl=2): super().__init__() print(conv2d_output) print(conv2d_1_ouput) print(conv2d_2_ouput) print(output_fc1) print(output_fc2) print(output_fc4) print(activation) self.conv2d_output = conv2d_output self.pool = nn.MaxPool2d(max_pool2d_keranl) self.conv1 = nn.Conv2d(1, conv2d_1_ouput, 5) self.conv3 = nn.Conv2d(conv2d_1_ouput,conv2d_2_ouput,5) self.conv2 = nn.Conv2d(conv2d_2_ouput, conv2d_output, 5) self.fc1 = nn.Linear(conv2d_output * 10 * 10, output_fc1) self.fc2 = nn.Linear(output_fc1, output_fc2) self.fc4 = nn.Linear(output_fc2,output_fc4) self.fc3 = nn.Linear(output_fc4, output) self.activation = activation def forward(self, x,shape=False): x = self.pool(self.activation(self.conv1(x))) x = self.pool(self.activation(self.conv3(x))) x = self.pool(self.activation(self.conv2(x))) if shape: print(x.shape) x = x.view(-1, self.conv2d_output * 10 * 10) x = self.activation(self.fc1(x)) x = self.activation(self.fc2(x)) x = self.activation(self.fc4(x)) x = self.fc3(x) return x # conv2d_output # conv2d_1_ouput # conv2d_2_ouput # output_fc1 # output_fc2 # output_fc4 # max_pool2d_keranl # max_pool2d # num_of_linear # best num of epochs # best loss ## best lr # batch size EPOCHS = 3 BATCH_SIZE = 32 # conv2d_output # conv2d_1_ouput # conv2d_2_ouput # output_fc1 # output_fc2 # output_fc4 # max_pool2d_keranl # max_pool2d # num_of_linear # activation = # best num of epochs # best optimizer = # best loss ## best lr def get_loss(criterion,y,model,X): preds = model(X.view(-1,1,112,112).to(device).float()) preds.to(device) loss = criterion(preds,torch.tensor(y,dtype=torch.long).to(device)) loss.backward() return loss.item() optimizers = [torch.optim.SGD,torch.optim.Adadelta,torch.optim.Adagrad,torch.optim.Adam,torch.optim.AdamW,torch.optim.SparseAdam,torch.optim.Adamax] for optimizer in optimizers: model = Test_Model(activation=nn.ReLU()) criterion = 
optimizer(model.parameters(),lr=0.1) wandb.init(project=PROJECT_NAME,name=f'optimizer-{optimizer}') for _ in tqdm(range(EPOCHS)): for i in range(0,len(X_train),BATCH_SIZE): X_batch = X_train[i:i+BATCH_SIZE] y_batch = y_train[i:i+BATCH_SIZE] model.to(device) preds = model(X_batch.float()) loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long)) optimizer.zero_grad() loss.backward() optimizer.step() wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':get_loss(criterion,y_test,model,X_test)}) print(f'{torch.argmax(preds[index])} \n {y_batch[index]}') print(f'{torch.argmax(preds[1])} \n {y_batch[1]}') print(f'{torch.argmax(preds[2])} \n {y_batch[2]}') print(f'{torch.argmax(preds[3])} \n {y_batch[3]}') print(f'{torch.argmax(preds[4])} \n {y_batch[4]}') wandb.finish() # activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()] # for activation in activations: # model = Test_Model(activation=activation) # optimizer = torch.optim.SGD(model.parameters(),lr=0.1) # criterion = nn.CrossEntropyLoss() # index = random.randint(0,29) # print(index) # wandb.init(project=PROJECT_NAME,name=f'activation-{activation}') # for _ in tqdm(range(EPOCHS)): # for i in range(0,len(X_train),BATCH_SIZE): # X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device) # y_batch = y_train[i:i+BATCH_SIZE].to(device) # model.to(device) # preds = model(X_batch.float()) # loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long)) # optimizer.zero_grad() # loss.backward() # optimizer.step() # wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':get_loss(criterion,y_test,model,X_test)}) # print(f'{torch.argmax(preds[index])} \n {y_batch[index]}') # print(f'{torch.argmax(preds[1])} \n {y_batch[1]}') # print(f'{torch.argmax(preds[2])} \n {y_batch[2]}') # print(f'{torch.argmax(preds[3])} \n {y_batch[3]}') # print(f'{torch.argmax(preds[4])} \n {y_batch[4]}') # wandb.finish() for real,pred in zip(y_batch,preds): print(real) print(torch.argmax(pred)) print('\n') ```
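As a quick sanity check (an addition, not from the original notebook), the sketch below pushes a dummy batch through the same conv/pool stack used by `Test_Model` to confirm the `128 * 10 * 10` flattened size that is hard-coded into the fully connected layers.

```
import torch
import torch.nn as nn

# Same layer shapes as Test_Model's convolutional part.
conv_stack = nn.Sequential(
    nn.Conv2d(1, 32, 5), nn.ReLU(), nn.MaxPool2d(2, 2),    # 112 -> 108 -> 54
    nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2, 2),   # 54 -> 50 -> 25
    nn.Conv2d(64, 128, 5), nn.ReLU(), nn.MaxPool2d(2, 2),  # 25 -> 21 -> 10
)

with torch.no_grad():
    out = conv_stack(torch.zeros(1, 1, 112, 112))

print(out.shape)                       # torch.Size([1, 128, 10, 10])
print(out.flatten(start_dim=1).shape)  # torch.Size([1, 12800]) == 128 * 10 * 10
```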
``` %load_ext autoreload %reload_ext autoreload %autoreload 2 %matplotlib inline import os # TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE # Note that this is necessary for parallel execution amongst other things... # os.environ['SNORKELDB'] = 'postgres:///snorkel-intro' from snorkel import SnorkelSession session = SnorkelSession() # Here, we just set how many documents we'll process for automatic testing- you can safely ignore this! n_docs = 500 if 'CI' in os.environ else 2591 from snorkel.models import candidate_subclass Spouse = candidate_subclass('Spouse', ['person1', 'person2']) train_cands = session.query(Spouse).filter(Spouse.split == 0).order_by(Spouse.id).all() dev_cands = session.query(Spouse).filter(Spouse.split == 1).order_by(Spouse.id).all() test_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all() from util import load_external_labels #%time load_external_labels(session, Spouse, annotator_name='gold') from snorkel.annotations import load_gold_labels #L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1, zero_one=True) #L_gold_test = load_gold_labels(session, annotator_name='gold', split=2, zero_one=True) L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1) L_gold_test = load_gold_labels(session, annotator_name='gold', split=2) #gold_labels_dev = [x[0,0] for x in L_gold_dev.todense()] #for i,L in enumerate(gold_labels_dev): # print(i,gold_labels_dev[i]) gold_labels_dev = [] for i,L in enumerate(L_gold_dev): gold_labels_dev.append(L[0,0]) gold_labels_test = [] for i,L in enumerate(L_gold_test): gold_labels_test.append(L[0,0]) print(len(gold_labels_dev),len(gold_labels_test)) from gensim.parsing.preprocessing import STOPWORDS import gensim.matutils as gm from gensim.models.keyedvectors import KeyedVectors # Load pretrained model (since intermediate data is not included, the model cannot be refined with additional data) model = KeyedVectors.load_word2vec_format('../glove_w2v.txt', binary=False) # C binary format wordvec_unavailable= set() def write_to_file(wordvec_unavailable): with open("wordvec_unavailable.txt","w") as f: for word in wordvec_unavailable: f.write(word+"\n") def preprocess(tokens): btw_words = [word for word in tokens if word not in STOPWORDS] btw_words = [word for word in btw_words if word.isalpha()] return btw_words def get_word_vectors(btw_words): # returns vector of embeddings of words word_vectors= [] for word in btw_words: try: word_v = np.array(model[word]) word_v = word_v.reshape(len(word_v),1) #print(word_v.shape) word_vectors.append(model[word]) except: wordvec_unavailable.add(word) return word_vectors def get_similarity(word_vectors,target_word): # sent(list of word vecs) to word similarity similarity = 0 target_word_vector = 0 try: target_word_vector = model[target_word] except: wordvec_unavailable.add(target_word+" t") return similarity target_word_sparse = gm.any2sparse(target_word_vector,eps=1e-09) for wv in word_vectors: wv_sparse = gm.any2sparse(wv, eps=1e-09) similarity = max(similarity,gm.cossim(wv_sparse,target_word_sparse)) return similarity ##### Continuous ################ softmax_Threshold = 0.3 LF_Threshold = 0.3 import re from snorkel.lf_helpers import ( get_left_tokens, get_right_tokens, get_between_tokens, get_text_between, get_tagged_text, ) spouses = {'spouse', 'wife', 'husband', 'ex-wife', 'ex-husband'} family = {'father', 'mother', 'sister', 'brother', 'son', 'daughter', 'grandfather', 'grandmother', 'uncle', 'aunt', 'cousin'} family = family | {f + '-in-law' for 
f in family} other = {'boyfriend', 'girlfriend' 'boss', 'employee', 'secretary', 'co-worker'} # Helper function to get last name def last_name(s): name_parts = s.split(' ') return name_parts[-1] if len(name_parts) > 1 else None def LF_husband_wife(c): global LF_Threshold sc = 0 word_vectors = get_word_vectors(preprocess(get_between_tokens(c))) for sw in spouses: sc=max(sc,get_similarity(word_vectors,sw)) return (1,sc) def LF_husband_wife_left_window(c): global LF_Threshold sc_1 = 0 word_vectors = get_word_vectors(preprocess(get_left_tokens(c[0]))) for sw in spouses: sc_1=max(sc_1,get_similarity(word_vectors,sw)) sc_2 = 0 word_vectors = get_word_vectors(preprocess(get_left_tokens(c[1]))) for sw in spouses: sc_2=max(sc_2,get_similarity(word_vectors,sw)) return(1,max(sc_1,sc_2)) def LF_same_last_name(c): p1_last_name = last_name(c.person1.get_span()) p2_last_name = last_name(c.person2.get_span()) if p1_last_name and p2_last_name and p1_last_name == p2_last_name: if c.person1.get_span() != c.person2.get_span(): return (1,1) return (0,0) def LF_no_spouse_in_sentence(c): return (-1,0.75) if np.random.rand() < 0.75 and len(spouses.intersection(c.get_parent().words)) == 0 else (0,0) def LF_and_married(c): global LF_Threshold word_vectors = get_word_vectors(preprocess(get_right_tokens(c))) sc = get_similarity(word_vectors,'married') if 'and' in get_between_tokens(c): return (1,sc) else: return (0,0) def LF_familial_relationship(c): global LF_Threshold sc = 0 word_vectors = get_word_vectors(preprocess(get_between_tokens(c))) for fw in family: sc=max(sc,get_similarity(word_vectors,fw)) return (-1,sc) def LF_family_left_window(c): global LF_Threshold sc_1 = 0 word_vectors = get_word_vectors(preprocess(get_left_tokens(c[0]))) for fw in family: sc_1=max(sc_1,get_similarity(word_vectors,fw)) sc_2 = 0 word_vectors = get_word_vectors(preprocess(get_left_tokens(c[1]))) for fw in family: sc_2=max(sc_2,get_similarity(word_vectors,fw)) return (-1,max(sc_1,sc_2)) def LF_other_relationship(c): global LF_Threshold sc = 0 word_vectors = get_word_vectors(preprocess(get_between_tokens(c))) for ow in other: sc=max(sc,get_similarity(word_vectors,ow)) return (-1,sc) def LF_other_relationship_left_window(c): global LF_Threshold sc = 0 word_vectors = get_word_vectors(preprocess(get_left_tokens(c))) for ow in other: sc=max(sc,get_similarity(word_vectors,ow)) return (-1,sc) import bz2 # Function to remove special characters from text def strip_special(s): return ''.join(c for c in s if ord(c) < 128) # Read in known spouse pairs and save as set of tuples with bz2.BZ2File('data/spouses_dbpedia.csv.bz2', 'rb') as f: known_spouses = set( tuple(strip_special(x).strip().split(',')) for x in f.readlines() ) # Last name pairs for known spouses last_names = set([(last_name(x), last_name(y)) for x, y in known_spouses if last_name(x) and last_name(y)]) def LF_distant_supervision(c): p1, p2 = c.person1.get_span(), c.person2.get_span() return (1,1) if (p1, p2) in known_spouses or (p2, p1) in known_spouses else (0,0) def LF_distant_supervision_last_names(c): p1, p2 = c.person1.get_span(), c.person2.get_span() p1n, p2n = last_name(p1), last_name(p2) return (1,1) if (p1 != p2) and ((p1n, p2n) in last_names or (p2n, p1n) in last_names) else (0,1) import numpy as np def LF_Three_Lists_Left_Window(c): global softmax_Threshold c1,s1 = LF_husband_wife_left_window(c) c2,s2 = LF_family_left_window(c) c3,s3 = LF_other_relationship_left_window(c) sc = np.array([s1,s2,s3]) c = [c1,c2,c3] sharp_param = 1.5 prob_sc = np.exp(sc * sharp_param - 
np.max(sc)) prob_sc = prob_sc / np.sum(prob_sc) #print 'Left:',s1,s2,s3,prob_sc if s1==s2 or s3==s1: return (0,0) return c[np.argmax(prob_sc)],1 def LF_Three_Lists_Between_Words(c): global softmax_Threshold c1,s1 = LF_husband_wife(c) c2,s2 = LF_familial_relationship(c) c3,s3 = LF_other_relationship(c) sc = np.array([s1,s2,s3]) c = [c1,c2,c3] sharp_param = 1.5 prob_sc = np.exp(sc * sharp_param - np.max(sc)) prob_sc = prob_sc / np.sum(prob_sc) #print 'BW:',s1,s2,s3,prob_sc if s1==s2 or s3==s1: return (0,0) return c[np.argmax(prob_sc)],1 LFs = [LF_distant_supervision, LF_distant_supervision_last_names,LF_same_last_name, LF_and_married, LF_Three_Lists_Between_Words,LF_Three_Lists_Left_Window, LF_no_spouse_in_sentence ] import numpy as np import math def PHI(K,LAMDAi,SCOREi): return [K*l*s for (l,s) in zip(LAMDAi,SCOREi)] def softmax(THETA,LAMDAi,SCOREi): x = [] for k in [1,-1]: product = np.dot(PHI(k,LAMDAi,SCOREi),THETA) x.append(product) return np.exp(x) / np.sum(np.exp(x), axis=0) def function_conf(THETA,LAMDA,P_cap,Confidence): s = 0.0 i = 0 for LAMDAi in LAMDA: s = s + Confidence[i]*np.dot(np.log(softmax(THETA,LAMDAi)),P_cap[i]) i = i+1 return -s def function(THETA,LAMDA,SCORE,P_cap): s = 0.0 i = 0 for i in range(len(LAMDA)): s = s + np.dot(np.log(softmax(THETA,LAMDA[i],SCORE[i])),P_cap[i]) i = i+1 return -s def P_K_Given_LAMDAi_THETA(K,THETA,LAMDAi,SCOREi): x = softmax(THETA,LAMDAi,SCOREi) if(K==1): return x[0] else: return x[1] np.random.seed(78) THETA = np.random.rand(len(LFs),1) def PHIj(j,K,LAMDAi,SCOREi): return LAMDAi[j]*K*SCOREi[j] def RIGHT(j,LAMDAi,SCOREi,THETA): phi = [] for k in [1,-1]: phi.append(PHIj(j,k,LAMDAi,SCOREi)) x = softmax(THETA,LAMDAi,SCOREi) return np.dot(phi,x) def function_conf_der(THETA,LAMDA,P_cap,Confidence): der = [] for j in range(len(THETA)): i = 0 s = 0.0 for LAMDAi in LAMDA: p = 0 for K in [1,-1]: s = s + Confidence[i]*(PHIj(j,K,LAMDAi)-RIGHT(j,LAMDAi,THETA))*P_cap[i][p] p = p+1 i = i+1 der.append(-s) return np.array(der) def function_der(THETA,LAMDA,SCORE,P_cap): der = [] for j in range(len(THETA)): i = 0 s = 0.0 for index in range(len(LAMDA)): p = 0 for K in [1,-1]: s = s + (PHIj(j,K,LAMDA[index],SCORE[index])-RIGHT(j,LAMDA[index],SCORE[index],THETA))*P_cap[i][p] p = p+1 i = i+1 der.append(-s) return np.array(der) import numpy as np def get_LAMDA(cands): LAMDA = [] SCORE = [] for ci in cands: L=[] S=[] P_ik = [] for LF in LFs: #print LF.__name__ l,s = LF(ci) L.append(l) S.append((s+1)/2) #to scale scores in [0,1] LAMDA.append(L) SCORE.append(S) return LAMDA,SCORE def get_Confidence(LAMDA): confidence = [] for L in LAMDA: Total_L = float(len(L)) No_zeros = L.count(0) No_Non_Zeros = Total_L - No_zeros confidence.append(No_Non_Zeros/Total_L) return confidence def get_Initial_P_cap(LAMDA): P_cap = [] for L in LAMDA: P_ik = [] denominator=float(L.count(1)+L.count(-1)) if(denominator==0): denominator=1 P_ik.append(L.count(1)/denominator) P_ik.append(L.count(-1)/denominator) P_cap.append(P_ik) return P_cap #print(np.array(LAMDA)) #print(np.array(P_cap))append(L) #LAMDA=np.array(LAMDA).astype(int) #P_cap=np.array(P_cap) #print(np.array(LAMDA).shape) #print(np.array(P_cap).shape) #print(L) #print(ci.chemical.get_span(),ci.disease.get_span(),"No.Os",L.count(0),"No.1s",L.count(1),"No.-1s",L.count(-1)) #print(ci.chemical.get_span(),ci.disease.get_span(),"P(0):",L.count(0)/len(L)," P(1)",L.count(1)/len(L),"P(-1)",L.count(-1)/len(L)) def get_P_cap(LAMDA,SCORE,THETA): P_cap = [] for i in range(len(LAMDA)): P_capi = softmax(THETA,LAMDA[i],SCORE[i]) 
P_cap.append(P_capi) return P_cap def score(predicted_labels,gold_labels): tp =0.0 tn =0.0 fp =0.0 fn =0.0 for i in range(len(gold_labels)): if(predicted_labels[i]==gold_labels[i]): if(predicted_labels[i]==1): tp=tp+1 else: tn=tn+1 else: if(predicted_labels[i]==1): fp=fp+1 else: fn=fn+1 print("tp",tp,"tn",tn,"fp",fp,"fn",fn) precision = tp/(tp+fp) recall = tp/(tp+fn) f1score = (2*precision*recall)/(precision+recall) print("precision:",precision) print("recall:",recall) print("F1 score:",f1score) from scipy.optimize import minimize import cPickle as pickle def get_marginals(P_cap): marginals = [] for P_capi in P_cap: marginals.append(P_capi[0]) return marginals def predict_labels(marginals): predicted_labels=[] for i in marginals: if(i<0.5): predicted_labels.append(-1) else: predicted_labels.append(1) return predicted_labels def print_details(label,THETA,LAMDA,SCORE): print(label) P_cap = get_P_cap(LAMDA,SCORE,THETA) marginals=get_marginals(P_cap) plt.hist(marginals, bins=20) plt.show() plt.bar(range(0,2796),marginals) plt.show() predicted_labels=predict_labels(marginals) print(len(marginals),len(predicted_labels),len(gold_labels_dev)) #score(predicted_labels,gold_labels_dev) print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(predicted_labels),average='binary')) def train(No_Iter,Use_Confidence=True,theta_file_name="THETA"): global THETA global dev_LAMDA,dev_SCORE LAMDA,SCORE = get_LAMDA(train_cands) P_cap = get_Initial_P_cap(LAMDA) Confidence = get_Confidence(LAMDA) for iteration in range(No_Iter): if(Use_Confidence==True): res = minimize(function_conf,THETA,args=(LAMDA,P_cap,Confidence), method='BFGS',jac=function_conf_der,options={'disp': True, 'maxiter':20}) #nelder-mead else: res = minimize(function,THETA,args=(LAMDA,SCORE,P_cap), method='BFGS',jac=function_der,options={'disp': True, 'maxiter':20}) #nelder-mead THETA = res.x # new THETA print(THETA) P_cap = get_P_cap(LAMDA,SCORE,THETA) #new p_cap print_details("train iteration: "+str(iteration),THETA,dev_LAMDA,dev_SCORE) #score(predicted_labels,gold_labels) NP_P_cap = np.array(P_cap) np.savetxt('Train_P_cap.txt', NP_P_cap, fmt='%f') pickle.dump(NP_P_cap,open("Train_P_cap.p","wb")) NP_THETA = np.array(THETA) np.savetxt(theta_file_name+'.txt', NP_THETA, fmt='%f') pickle.dump( NP_THETA, open( theta_file_name+'.p', "wb" )) # save the file as "outfile_name.npy" def test(THETA): global dev_LAMDA,dev_SCORE P_cap = get_P_cap(dev_LAMDA,dev_SCORE,THETA) print_details("test:",THETA,dev_LAMDA,dev_SCORE) NP_P_cap = np.array(P_cap) np.savetxt('Dev_P_cap.txt', NP_P_cap, fmt='%f') pickle.dump(NP_P_cap,open("Dev_P_cap.p","wb")) def load_marginals(s): marginals = [] if(s=="train"): train_P_cap = np.load("Train_P_cap.npy") marginals = train_P_cap[:,0] return marginals ''' output: [[[L_x1],[S_x1]], [[L_x2],[S_x2]], ...... ...... 
] ''' def get_L_S_Tensor(cands): L_S = [] for ci in cands[2:4]: L_S_ci=[] L=[] S=[] P_ik = [] for LF in LFs: #print LF.__name__ l,s = LF(ci) L.append(l) S.append((s+1)/2) #to scale scores in [0,1] L_S_ci.append(L) L_S_ci.append(S) L_S.append(L_S_ci) return L_S def get_L_S(cands): # sign gives label abs value gives score L_S = [] for ci in cands[2:4]: l_s=[] for LF in LFs: #print LF.__name__ l,s = LF(ci) s= (s+1)/2 #to scale scores in [0,1] l_s.append(l*s) L_S.append(l_s) return L_S def get_Initial_P_cap_L_S(L_S): P_cap = [] for L,S in L_S[:2]: P_ik = [] denominator=float(L.count(1)+L.count(-1)) if(denominator==0): denominator=1 P_ik.append(L.count(1)/denominator) P_ik.append(L.count(-1)/denominator) P_cap.append(P_ik) return P_cap from sklearn.metrics import precision_recall_fscore_support import matplotlib.pyplot as plt #L_S = get_L_S_Tensor(train_cands) #dev_L_S = get_L_S_Tensor(dev_cands) #train_L_S = get_L_S_Tensor(train_cands) dev_L_S = get_L_S_Tensor(dev_cands) train_L_S = get_L_S_Tensor(train_cands) for x in train_L_S: print(x) pcap= get_Initial_P_cap_L_S(train_L_S) for x in pcap: print(x) #L_S = tf.Variable(L_S, tf.float32) #write_to_file(wordvec_unavailable) from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow.contrib.tensorboard.plugins import projector result_dir = "./" config = projector.ProjectorConfig() tf.logging.set_verbosity(tf.logging.INFO) summary_writer = tf.summary.FileWriter(result_dir) projector.visualize_embeddings(summary_writer, config) tf.reset_default_graph() L_S = get_L_S_Tensor(train_cands) P_cap= get_Initial_P_cap_L_S(train_L_S) dim = 2 #(labels,scores) _x = tf.placeholder(tf.float64,shape=(dim,len(LFs))) _p_cap = tf.placeholder(tf.float64,shape=(2)) alphas = tf.get_variable('alpha', _x.get_shape()[-1],initializer=tf.constant_initializer(0.2), dtype=tf.float64) thetas = tf.get_variable('theta', _x.get_shape()[-1],initializer=tf.constant_initializer(0.0), dtype=tf.float64) print([n.name for n in tf.get_default_graph().as_graph_def().node]) #for k = 1 k_p1 = tf.ones(shape=(dim,len(LFs)),dtype=tf.float64) k_n1 = tf.negative(k_p1) l,s = tf.unstack(_x) prelu_out_s = tf.maximum(tf.subtract(tf.abs(s),alphas,name='subtract'), 0,name='max') mul_L_S = tf.multiply(l,prelu_out_s) phi_p1 = tf.reduce_sum(tf.multiply(mul_L_S,thetas)) phi_n1 = tf.reduce_sum(tf.multiply(tf.multiply(mul_L_S,k_n1),thetas)) phi_out = tf.stack([phi_p1,phi_n1]) loss = tf.reduce_sum(tf.multiply(tf.log(tf.nn.softmax(phi_out)),_p_cap)) train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss) sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) for i in range(1): for L_S_i,P_cap_i in zip(L_S,P_cap): print(sess.run([loss],feed_dict={_x:L_S_i,_p_cap:P_cap_i})) # All LF_Threshold =0.3 and softmax_Threshold=0.3 ,to be run train(2,Use_Confidence=False,theta_file_name="THETA") test(THETA) def print_details(label,THETA,LAMDA,SCORE): print(label) P_cap = get_P_cap(LAMDA,SCORE,THETA) marginals=get_marginals(P_cap) plt.hist(marginals, bins=20) plt.show() #plt.bar(range(0,2796),marginals) #plt.show() predicted_labels=predict_labels(marginals) print(len(marginals),len(predicted_labels),len(gold_labels_dev)) #score(predicted_labels,gold_labels_dev) print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(predicted_labels),average='binary')) def predict_labels(marginals): predicted_labels=[] for i in marginals: if(i<0.5): predicted_labels.append(-1) else: 
predicted_labels.append(1) return predicted_labels #import cPickle as pickle #THETA = pickle.load( open( "THETA.p", "rb" ) ) #test(THETA) #LAMDA,SCORE = get_LAMDA(dev_cands) #Confidence = get_Confidence(LAMDA) #P_cap = get_P_cap(LAMDA,SCORE,THETA) #marginals=get_marginals(P_cap) #plt.hist(marginals, bins=20) #plt.show() #plt.bar(range(0,888),train_marginals) #plt.show() print_details("dev set",THETA,dev_LAMDA,dev_SCORE) predicted_labels=predict_labels(marginals) sorted_predicted_labels=[x for (y,x) in sorted(zip(Confidence,predicted_labels))] #sort Labels as per Confidence sorted_predicted_labels=list(reversed(sorted_predicted_labels)) for i,j in enumerate(reversed(sorted(zip(Confidence,predicted_labels,gold_labels_dev)))): if i>20: break print i,j #print(len(marginals),len(predicted_labels),len(gold_labels_dev)) #no_of_labels=186#int(len(predicted_labels)*0.1) #54 - >0.2 , 108>= 0.15 , 186>= 0.12 #print(len(sorted_predicted_labels[0:no_of_labels])) no_of_labels=2796 score(predicted_labels[0:no_of_labels],gold_labels_dev[0:no_of_labels]) ```
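The generative step above (the `PHI`, `softmax` and `function` definitions trained with BFGS) can be hard to test inside the full pipeline. Below is a minimal, self-contained sketch of the same softmax label model on a tiny synthetic set of labeling-function votes and scores; the toy `LAMDA`, `SCORE` and majority-vote `P_cap` values are invented for illustration, and only NumPy and SciPy are used.

```
# Minimal sketch of the softmax label model used above, on synthetic data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)

# Toy output of 3 labeling functions on 5 candidates:
# LAMDA holds votes in {-1, 0, +1}, SCORE holds confidences in [0, 1].
LAMDA = np.array([[ 1,  1,  0],
                  [ 1,  0,  1],
                  [-1, -1,  0],
                  [ 0, -1, -1],
                  [ 1,  1,  1]], dtype=float)
SCORE = rng.uniform(0.5, 1.0, size=LAMDA.shape)

# Initial "P_cap": majority-vote estimates of (P(y=+1), P(y=-1)) per candidate.
pos = (LAMDA == 1).sum(axis=1)
neg = (LAMDA == -1).sum(axis=1)
denom = np.maximum(pos + neg, 1)
P_cap = np.stack([pos / denom, neg / denom], axis=1)

def softmax_rows(theta, lamda, score):
    """P(y=k | lambda, s; theta) for k in (+1, -1), one row per candidate."""
    logits = np.stack([(k * lamda * score) @ theta for k in (1, -1)], axis=1)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)

def neg_log_lik(theta, lamda, score, p_cap):
    """Cross-entropy between current marginals P_cap and the model posterior."""
    probs = softmax_rows(theta, lamda, score)
    return -np.sum(p_cap * np.log(probs + 1e-12))

theta0 = rng.rand(LAMDA.shape[1])
res = minimize(neg_log_lik, theta0, args=(LAMDA, SCORE, P_cap), method='BFGS')
print("learned theta:", res.x)
print("posteriors P(y=+1):", softmax_rows(res.x, LAMDA, SCORE)[:, 0])
```

With only a handful of candidates the learned weights are noisy, but the posteriors move toward the labeling functions that agree most often, which is the behaviour the full training loop above relies on.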
# Modes of a Vibrating Building In this notebook we will find the vibrational modes of a simple model of a building. We will assume that the mass of the floors are much more than the mass of the walls and that the lateral stiffness of the walls can be modeled by a simple linear spring. We will investigate how the building may vibrate under initial conditions that could be caused by a gust of wind and during ground vibration. ``` from IPython.display import YouTubeVideo YouTubeVideo('g0cz-oDfUg0', width=600) YouTubeVideo('hSwjkG3nv1c', width=600) YouTubeVideo('kzVvd4Dk6sw', width=600) import numpy as np import matplotlib.pyplot as plt from resonance.linear_systems import FourStoryBuildingSystem ``` This gives a bit nicer printing of large NumPy arrays. ``` np.set_printoptions(precision=5, linewidth=100, suppress=True) %matplotlib notebook ``` # Simulate the four story building ``` sys = FourStoryBuildingSystem() sys.constants sys.coordinates sys.plot_configuration(); traj = sys.free_response(30, sample_rate=10) traj[list(sys.coordinates.keys())].plot(subplots=True); sys.animate_configuration(fps=10) M, C, K = sys.canonical_coefficients() M C K ``` # Exercise The system can be normalized by the mass matrix and transformed into a symmetric eigenvalue problem by introducing the new coordinate vector: $$\mathbf{q}=\mathbf{L}^T\mathbf{x}$$ $\mathbf{L}$ is the Cholesky decomposition of the symmetric mass matrix, i.e. $\mathbf{M}=\mathbf{L}\mathbf{L}^T$. The equation of motion becomes: $$\ddot{\mathbf{q}} + \tilde{\mathbf{K}} \mathbf{q} = 0$$ Compute $\tilde{\mathbf{K}}$. ``` L = np.linalg.cholesky(M) L M**0.5 import numpy.linalg as la from numpy.linalg import inv K_tilde = inv(L) @ K @ inv(L.T) K_tilde ``` Notice that $\tilde{\mathbf{K}}$ is symmetric, so we are guaranteed to get real eigenvalues and orthogonal eigenvectors when solving this system. # Exercise Find the eigenvalues and eigenvectors. Create the spectral matrix $\mathbf{\Lambda}$ and the matrix $P$ which contains the orthonormal eigenvectors of $\tilde{\mathbf{K}}$. $$ \mathbf{P} = \left[ \mathbf{v}_1, \ldots, \mathbf{v}_4 \right] $$ ``` evals, evecs = np.linalg.eig(K_tilde) evals evecs Lambda = np.diag(evals) Lambda P = evecs ``` # Exercise Prove that the eigenvectors in $\mathbf{P}$ are orthonormal. ``` np.dot(P[:, 0], P[:, 1]) np.linalg.norm(P[:, 0]) P[:, 0].T @ P[:, 1] P[:, 0].T @ P[:, 0] ``` An orthonormal matrix has the property that its transpose multiplied by itself is the identity matrix. ``` P.T @ P ``` # Exercise Find the natural freqencies of the system in both radians per second and Hertz, store them in an array in the order of the eigenvalues with names `ws` and `fs`. ``` ws = np.sqrt(evals) ws fs = ws / 2 / np.pi fs ``` # Exercise Transform the eigenvectors back into the coordinate system associated with $\mathbf{x}$. $$ \mathbf{S} = \left[ \mathbf{u}_1, \ldots, \mathbf{u}_4 \right] $$ ``` S = np.linalg.inv(L.T) @ P S sys.coordinates ``` # Exercise: visualize the modeshapes The eigenmodes (mode shapes) are contained in each column of $\mathbf{S}$. Create a plot for each mode shape with these specifications: - The title of each plot should be the frequency of the corresponding modeshape in Hz. - The y axis should be made up of the values [0, 3, 6, 9, 12] meters. - The x axis should plot the five values. The first should be zero and the remaining values should be the components of the mode shape in order of the component associated with the lowest floor to the highest. - Plot lines with small circles at each data point. 
``` S[:, 0] np.hstack((0, S[:, 0])) u1 = S[:, 0] u1 u1[::-1] S[:, 2] fig, axes = plt.subplots(1, 4) for i in range(4): axes[i].plot(np.hstack((0, S[:, i])), [0, 3, 6, 9, 12], marker='o') axes[i].set_title('{:1.2f} Hz'.format(fs[i])) plt.tight_layout() fs[0] S[:, 0] sys.coordinates['x1'] = S[0, 2] sys.coordinates['x2'] = S[1, 2] sys.coordinates['x3'] = S[2, 2] sys.coordinates['x4'] = S[3, 2] traj = sys.free_response(30, sample_rate=10) traj[list(sys.coordinates.keys())].plot(subplots=True) sys.animate_configuration(fps=10) ``` # Simulating the trajectory The trajectory of building's coordinates can be found with: $$ \mathbf{x}(t) = \sum_{i=1}^n c_i \sin(\omega_i t + \phi_i) \mathbf{u}_i $$ where $$ \phi_i = \arctan \frac{\omega_i \mathbf{v}_i^T \mathbf{q}_0}{\mathbf{v}_i^T \dot{\mathbf{q}}_0} $$ and $$ c_i = \frac{\mathbf{v}^T_i \mathbf{q}_0}{\sin\phi_i} $$ $c_i$ are the modal participation factors and reflect what proportion of each mode is excited given specific initial conditions. If the initial conditions are the eigenmode, $\mathbf{u}_i$, the all but the $i$th $c_i$ will be zero. # Exercise Show that if $\mathbf{q}_0 = \mathbf{v}_i$ then $c_i = 1$ all other modal participation factors are 0. Also, report all of the phase angles, $\phi_i$, in degrees. ``` for i in range(4): x0 = S[:, i] xd0 = np.zeros(4) print(x0) q0 = L.T @ x0 qd0 = L.T @ xd0 phis = np.arctan2(ws * P.T @ q0, P.T @ xd0) print(np.rad2deg(phis)) cs = P.T @ q0 / np.sin(phis) print(cs) print('=' * 40) ``` # Exercise Create a function called `simulate()` that returns the trajectories of the coordinates given an array of monotonically increasing time values and the initial conditions of the system. It should look like: ```python def simulate(t, x0, xd0): """Returns the state trajectory. Parameters ========== t : ndarray, shape(m,) Monotonic values of time. x0 : ndarray, shape(n,) The initial conditions of each coordinate. xd0 : ndarray, shape(n,) The initial conditions of each speed. Returns ======= x : ndarray, shape(m, n) The trajectories of each state. """ # your code here return x ``` ``` def simulate(t, x0, xd0): q0 = L.T @ x0 qd0 = L.T @ xd0 phis = np.arctan2(ws * P.T @ q0, P.T @ xd0) cs = P.T @ q0 / np.sin(phis) x = np.zeros((len(x0), len(t))) for ci, wi, phii, ui in zip(cs, ws, phis, S.T): x += ci * np.sin(wi * t + phii) * np.tile(ui, (len(t), 1)).T return x ``` # Exercise Using the plotting function below, show that the results found here are the same as the simulations from the `FourStoryBuildingSystem` given the same initial conditions. ``` def plot_trajectories(t, x): fig, axes = plt.subplots(4, 1) for i, ax in enumerate(axes.flatten()): ax.plot(t, x[i]) ax.set_ylabel(r'$x_{}$ [m]'.format(i + 1)) ax.set_xlabel('Time [s]') plt.tight_layout() t = np.linspace(0, 50, num=50 * 60) x0 = np.array([0.001, 0.010, 0.020, 0.025]) xd0 = np.zeros(4) x = simulate(t, x0, xd0) plot_trajectories(t, x) ``` This shows the plot of a single mode: ``` x = simulate(t, S[:, 0], np.zeros(4)) plot_trajectories(t, x) ```
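As an extra sanity check on `simulate()`, one can integrate the undamped equations of motion $\mathbf{M}\ddot{\mathbf{x}} + \mathbf{K}\mathbf{x} = 0$ numerically and compare the two trajectories. The sketch below uses placeholder values for the floor mass and story stiffness (assumptions, since the actual constants live in `FourStoryBuildingSystem`); in the notebook you would instead reuse the `M` and `K` returned by `sys.canonical_coefficients()`.

```
# Direct numerical integration of M x'' + K x = 0 as a cross-check of simulate().
# The m, k values below are placeholders; in the notebook, reuse the M and K
# returned by sys.canonical_coefficients().
import numpy as np
from scipy.integrate import solve_ivp

m, k = 4000.0, 5.0e6          # assumed floor mass [kg] and story stiffness [N/m]
M = m * np.eye(4)
K = k * np.array([[ 2, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  1]], dtype=float)
Minv = np.linalg.inv(M)

def rhs(t, state):
    """First-order form: state = [x, xdot], with xddot = -M^{-1} K x."""
    x, xd = state[:4], state[4:]
    return np.hstack((xd, -Minv @ K @ x))

x0 = np.array([0.001, 0.010, 0.020, 0.025])   # same initial coordinates as above
xd0 = np.zeros(4)
t = np.linspace(0, 50, 3000)
sol = solve_ivp(rhs, (t[0], t[-1]), np.hstack((x0, xd0)), t_eval=t, rtol=1e-8)

# sol.y[:4] holds x1..x4(t); with the notebook's M and K it should overlay
# the trajectories returned by simulate(t, x0, xd0).
print(sol.y[:4, -1])
```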
<a href="https://colab.research.google.com/github/LyaSolis/exBERT/blob/master/1_data_prep.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` from google.colab import drive drive.mount('/content/drive') !ls /content/drive/MyDrive/GitHub/bluebert ``` # Preprocess Data ### We will make 2 types of dataset: for BlueBERT pretraining and finetuning and for exBERT. ### Input file format for BlueBERT: 1. One sentence per line. These should ideally be actual sentences, not entire paragraphs or arbitrary spans of text. (Because we use the sentence boundaries for the "next sentence prediction" task). ``` import pandas as pd df = pd.read_csv("/content/drive/MyDrive/GitHub/exBERT/data/paragrafs.csv") df.head(1) df = df.drop(['Unnamed: 0'], axis = 1) df.head(1) import re # Testing patterns text = "Patients with. chronic lymphocytic 's leukemia (. CL Patients with chronic. lymphocytic. leukemia (CL" re.findall("\.(?= [a-z])", text) re.sub(r"\.(?= [a-z])", ".\\n", text) df[df['txts'].isna()] print(df['txts'][384]) df = df[df['txts'].notna()] df[df['txts'].isna()] sent_list = [] for pargr in df['txts']: pargr = pargr.strip() pargr = re.sub(r"\.(?= [A-Z])", ".\\n", pargr) # Adding new lines to ends of sentences only pargr1 = pargr.strip() sent_list.append(pargr1) df['sents']=sent_list df.head(1) ``` 2. Blank lines between documents. Document boundaries are needed so that the "next sentence prediction" task doesn't span between documents. ``` # Adding blank lines between docs mask = df['articleids'].ne(df['articleids'].shift(-1)) df1 = pd.DataFrame('',index=mask.index[mask] + .5, columns=df.columns) df = pd.concat([df, df1]).sort_index().reset_index(drop=True).iloc[:-1] df.tail(3) ``` Now we will put updated text into text file ``` text_file = [] for row in df['sents']: row = row.split('\n') for i in row: i = i.lstrip() text_file.append(i) text_file[:1] save_file = "drive/MyDrive/GitHub/exBERT/data/bluebert_train_data.txt" with open(save_file, 'w') as f: for item in text_file: f.write("%s\n" % item) ``` Preprocessed PubMed texts corpus used to pre-train the BlueBERT models contains ~4000M words extracted from the PubMed ASCII code version. Other operations include: - lowercasing the text - removing speical chars \x00-\x7F - tokenizing the text using the NLTK Treebank tokenizer ``` preprocessed_text = [] for line in text_file: line = line.lower() line = re.sub(r'[\r\n]+', ' ', line) line = re.sub(r'[^\x00-\x7F]+', ' ', line) preprocessed_text.append(line) preprocessed_text[:1] len(preprocessed_text) from nltk import TreebankWordTokenizer pubmed_sent_nltk = [] for line in preprocessed_text: tokenized = TreebankWordTokenizer().tokenize(line) sentence = ' '.join(tokenized) sentence = re.sub(r"\s's\b", "'s", sentence) pubmed_sent_nltk.append(sentence) pubmed_sent_nltk[:1] len(pubmed_sent_nltk) save_file = "drive/MyDrive/GitHub/exBERT/data/bluebert_clean_train_data.txt" with open(save_file, 'w') as f: for item in pubmed_sent_nltk: f.write("%s\n" % item) ``` ## For exBERT text file needs to have paragraphs separated by new lines (no blank lines though). ``` df = pd.read_csv("/content/drive/MyDrive/GitHub/exBERT/data/paragrafs.csv") text = df["txts"] text.to_csv("/content/drive/MyDrive/GitHub/exBERT/data/exbert_train_data.txt", sep='\n', index=False, header=False) ``` Next we will create our new dictionary and tokenizer (notebook 2_get_vocab_and_tokenizer.ipynb)
<a href="https://colab.research.google.com/github/nakanoelio/i2a2-challenge-petr4-trad-sys/blob/main/I2A2_PETR4_Multinomial_Naive_Bayes_%2B_ARIMA_Trading_System.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` !pip install yfinance !pip install --upgrade mplfinance #Instalação da Biblioteca TA-lib url = 'https://launchpad.net/~mario-mariomedina/+archive/ubuntu/talib/+files' !wget $url/libta-lib0_0.4.0-oneiric1_amd64.deb -qO libta.deb !wget $url/ta-lib0-dev_0.4.0-oneiric1_amd64.deb -qO ta.deb !dpkg -i libta.deb ta.deb !pip install ta-lib !pip install pandas_ta import numpy as np import pandas as pd import scipy as sp import seaborn as sn import matplotlib.pyplot as plt import yfinance as yf import talib as ta import pandas_ta as pd_ta from sklearn import metrics from sklearn.linear_model import LinearRegression from sklearn.naive_bayes import GaussianNB, MultinomialNB, ComplementNB, BernoulliNB, CategoricalNB from statsmodels.tsa.arima_model import ARIMA from statsmodels.graphics.tsaplots import plot_acf, plot_pacf from statsmodels.stats.diagnostic import acorr_ljungbox from statsmodels.tsa.stattools import adfuller, kpss from statsmodels.tsa.arima_model import ARIMAResults from tqdm import tqdm %matplotlib inline stock_ticker = 'PETR4.SA' start_date = '2016-01-26' end_date = '2021-5-27' yf_petr4 = yf.Ticker(stock_ticker) df_petr4 = yf_petr4.history(start=start_date, end=end_date) stock_ticker = '^BVSP' yf_ibov = yf.Ticker(stock_ticker) df_ibov = yf_ibov.history(start=start_date, end=end_date) df_petr4.head(20) def arima_for(df): list_arima = [] for i in tqdm(range(df["Close"].shape[0]-1)): try: arima_model = ARIMA(df["Close"].iloc[:i].to_list(), order=(1, 1, 1)) arima_model_fit = arima_model.fit() a = arima_model_fit.forecast()[0].item() b = df["Close"].iloc[i+1] list_arima.append(a) except: a = df["Close"].iloc[i+1] b = df["Close"].iloc[i+1] list_arima.append(a) list_arima.append(df["Close"].iloc[i]) return list_arima arima_forecast = np.array(arima_for(df_petr4)) arima_forecast df_petr4 #Calculo dos Indicadores def indicadores(stock_data): data = stock_data.copy() data['W%R'] = ta.WILLR(data['High'], data['Low'], data['Close'], timeperiod=14) #Retorna valor do indicador Williams %R data['MACD'], data['Signal-line'], data['Histograma_MACD'] = ta.MACD(data['Close'], fastperiod=12, slowperiod=26, signalperiod=9) #Valores do indicador MACD data.loc[:, 'Momento_MACD']=np.where(data['Histograma_MACD']>0, 1, 0) #Retorna 1 para compra pelo MACD (momento positivo), 0 para venda data['Tendencia_MACD']=np.where(data['Histograma_MACD'].diff()>0, 1, 0) #Derivada, sinaliza reversao de tendencia no histograma MACD, 1 para compra, 0 para venda data.loc[:, 'W%R_Compra']= np.where(data['W%R']<-80, 1, 0) # Retorna 1 para sinal de compra, caso Williams %R < -80 data.loc[:, 'W%R_Venda']= np.where(data['W%R']>-20, 1, 0) # Retorna 1 para sinal de venda, caso Williams %R > -20 data['Hammer']=ta.CDLHAMMER(data['Open'],data['High'], data['Low'], data['Close'])/100 #Sinal de compra pra martelo data['Shooting_star'] = ta.CDLSHOOTINGSTAR(data['Open'],data['High'], data['Low'], data['Close'])/-100 #Sinal de venda 'estrela cadente' data["EMA12"] = ta.EMA(data["Close"], timeperiod=12) data["EMA26"] = ta.EMA(data["Close"], timeperiod=26) #return data.drop(["Open","Close","High","Low","Volume","Dividends","Stock Splits"],axis="columns") return 
data[['Momento_MACD','Tendencia_MACD','W%R_Compra','W%R_Venda']]#,"EMA12","EMA26",'W%R','MACD','Hammer','Shooting_star','Momento_MACD','Tendencia_MACD','W%R_Compra','W%R_Venda']] def isSupport(df,i): #Estamos utilizando dados futuros! #support = df['Low'][i] <= df['Low'][i-1] and df['Low'][i] <= df['Low'][i+1] and df['Low'][i] < df['Low'][i+2] and df['Low'][i] < df['Low'][i-2] #support = df['Low'][i-1] <= df['Low'][i-3] and df['Low'][i-2] <= df['Low'][i-1] and df['Low'][i-2] < df['Low'][i] and df['Low'][i-2] < df['Low'][i-4] support = df['Low'][i] <= df['Low'][i-2] and df['Low'][i-1] <= df['Low'][i-2] and np.abs(df['Low'][i]-df['Low'][i-1]) < np.abs(df['Low'][i-1]-df['Low'][i-2]) return support def isResistance(df,i): #Estamos utilizando dados futuros! #resistance = df['High'][i] > df['High'][i-1] and df['High'][i] > df['High'][i+1] and df['High'][i] > df['High'][i+2] and df['High'][i] > df['High'][i-2] #resistance = df['High'][i-2] > df['High'][i-3] and df['High'][i-2] > df['High'][i-1] and df['High'][i-2] > df['High'][i] and df['High'][i-2] > df['High'][i-4] resistance = df['High'][i] > df['High'][i-2] and df['High'][i-1] > df['High'][i-2] and np.abs(df['High'][i]-df['High'][i-1]) < np.abs(df['High'][i-1]-df['High'][i-2]) return resistance def sup_res(df_data): s = np.mean(df_data['High'] - df_data['Low']) levels = [] support = [0,0] resistance = [0,0] for i in range(2,df_data.shape[0]-2): if isSupport(df_data,i): l = df_data['Low'][i] support.append(1) resistance.append(0) levels.append((i,l)) #if isFarFromLevel(l,levels,s): #support.append(1) #resistance.append(0) #levels.append((i,l)) #else: #support.append(0) #resistance.append(0) elif isResistance(df_data,i): l = df_data['High'][i] support.append(0) resistance.append(1) #if isFarFromLevel(l,levels,s): #resistance.append(1) #support.append(0) #levels.append((i,l)) #else: #resistance.append(0) #support.append(0) else: resistance.append(0) support.append(0) support.extend([0,0]) resistance.extend([0,0]) return support, resistance def feat_gen(data_f, p_window,return_period): data_frame = data_f.copy() #data_frame["Close_Return"] = data_frame["Close"].diff() data_frame["Close_Return_Rel"] = data_frame["Close"].pct_change() #data_frame["Close_Return"].fillna(0,inplace=True) #data_frame["Close_Return_Rel"].fillna(0,inplace=True) tresh = data_frame["Close_Return_Rel"].std()*0.05*return_period #tresh = 0 data_frame["Expected_Close_Return"] = data_frame["Close_Return_Rel"].rolling(return_period).sum().apply(lambda x: 2 if x > tresh else (1 if x <= tresh and x >= -tresh else 0)) #data_frame.loc[data_frame["Close_Return_Rel"].rolling(return_period).sum() > tresh, "Expected_Close_Return"] = 2 #data_frame.loc[data_frame["Close_Return_Rel"].rolling(return_period).sum() <= tresh, "Expected_Close_Return"] = 1 #data_frame.loc[data_frame["Close_Return_Rel"].rolling(return_period).sum() < -tresh, "Expected_Close_Return"] = 0 #data_frame.loc[data_frame["Close_Return_Rel"].rolling(return_period).sum() >= tresh, f"Expected_Close_Return"] = 1 #data_frame.loc[data_frame["Close_Return_Rel"].rolling(return_period).sum() < tresh, f"Expected_Close_Return"] = 0 #new_col_names = [] data_frame['ARIMA_forecast'] = arima_forecast data_frame["ARIMA_forecast_Ret"] = (data_frame["ARIMA_forecast"]-data_frame["Close"])/data_frame["Close"] data_frame["ARIMA_forecast_Ret_Disc"] = data_frame["ARIMA_forecast_Ret"].apply(lambda x: 2 if x > tresh else (1 if x <= tresh and x >= -tresh else 0)) for i in range(0,p_window): data_frame[f'Return_Lag_{i}period'] = 
data_frame["Close_Return_Rel"].shift(periods=i).apply(lambda x: 2 if x > tresh else (1 if x <= tresh and x >= -tresh else 0)) #data_frame[f'Return_Lag_{i}period'] = data_frame["Close_Return_Rel"].rolling(i+1).sum() data_frame["Expected_Close_Return"] = data_frame["Expected_Close_Return"].shift(-return_period) return data_frame.drop(["Open","Close","High","Low","Volume","Dividends","Stock Splits","Close_Return_Rel","ARIMA_forecast_Ret"],axis="columns").fillna(0)#"Close_Return_Rel" #return data_frame[["Expected_Close_Return"].fillna(0) def calc_beta(data_frame_asset,data_frame_bench, beta_window): data_frame_beta = pd.concat([data_frame_bench["Close"].pct_change(), data_frame_asset["Close"].pct_change()],axis=1,ignore_index=True) data_frame_beta.columns=["Close_IBOV","Close_PETR4"] data_frame_beta["Beta"] = data_frame_beta["Close_PETR4"].rolling(beta_window).cov(data_frame_beta["Close_IBOV"].rolling(beta_window))/data_frame_beta["Close_IBOV"].rolling(beta_window).var() data_frame_beta["Beta_expected_PETR4"] = data_frame_beta["Close_IBOV"]*data_frame_beta["Beta"] data_frame_beta["PETR4_Excess_Variat"] = (data_frame_beta["Close_PETR4"] - data_frame_beta["Beta_expected_PETR4"])#/data_frame_beta["Beta_expected_PETR4"] #data_frame_beta["PETR4_Excess_Variat"].describe() var_tolerance = 1#data_frame_beta["Close_IBOV"].std()#/data_frame_beta["Close_IBOV"].mean() data_frame_beta["PETR4_Excess_Variat_Disc"] = data_frame_beta["PETR4_Excess_Variat"].apply(lambda x: 2 if x > var_tolerance else (1 if x <= var_tolerance and x >= -var_tolerance else 0)) #data_frame_beta.loc[data_frame_beta["PETR4_Excess_Variat"] > var_tolerance, "PETR4_Excess_Variat_Disc"] = 1 #data_frame_beta.loc[data_frame_beta["PETR4_Excess_Variat"] <= var_tolerance , "PETR4_Excess_Variat_Disc"] = 0 #data_frame_beta.loc[data_frame_beta["PETR4_Excess_Variat"] < -var_tolerance, "PETR4_Excess_Variat_Disc"] = 1 #return data_frame_beta.drop(["Close_IBOV","Close_PETR4",], axis="columns").fillna(0) #return data_frame_beta["PETR4_Excess_Variat_Disc"].fillna(0) return data_frame_beta["PETR4_Excess_Variat_Disc"].fillna(0) def gen_feat_data(data_frame_orig,p_window,return_period,data_frame_bench,beta_window): #df_feat = data_frame_orig df_feat = pd.concat([data_frame_orig, feat_gen(data_frame_orig, p_window, return_period)],axis=1) df_feat = pd.concat([df_feat,indicadores(data_frame_orig)],axis=1) #sup,res = sup_res(data_frame_orig) #df_feat["Support"] = sup #df_feat["Resistance"] = res df_feat = pd.concat([df_feat,calc_beta(data_frame_orig,data_frame_bench,beta_window)],axis=1) #df_feat = df_feat.reindex(columns=(list([col for col in df_feat.columns if col != "Expected_Close_Return"]+["Expected_Close_Return"]))) df_feat = df_feat.drop(["Stock Splits","Dividends","Volume",'Open','High','Low'],axis=1)#"Close" return df_feat def train_eval(df,train_init_date,train_end_date):#,test_end_date): dia_ini_train_idx = df.index.get_loc(train_init_date) dia_fin_train_idx = df.index.get_loc(train_end_date) #dia_fin_test_idx = df.index.get_loc(test_end_date) y_label_idx = df.columns.get_loc("Expected_Close_Return") X_tr = df.iloc[dia_ini_train_idx:dia_fin_train_idx].drop('Expected_Close_Return',axis='columns') X_ts = df.iloc[dia_fin_train_idx].drop('Expected_Close_Return')#,axis='columns') #print(X_ts) y_tr = df.iloc[dia_ini_train_idx:dia_fin_train_idx,y_label_idx] y_ts = df.iloc[dia_fin_train_idx,y_label_idx] return X_tr, X_ts, y_tr, y_ts def run_model(X,y,model_type): nb_model = model_type nb_model.fit(X, y) 
#np.column_stack((y_test.to_list(),nb_model.predict(X_test))) #print(nb_model.predict_proba(X_test)[:10]) #print(f'test_score = {nb_model.score(X_test,y_test)}') return nb_model def meas_acc(X,y,nb_model): y_pred = nb_model.predict(X) print("Number of mislabeled points out of a total %d points : %d" % (X.shape[0], (y != y_pred).sum())) print("Train Accuracy: %f"% metrics.balanced_accuracy_score(y, y_pred)) cf_train2 = metrics.confusion_matrix(y, y_pred, normalize="all") sn.heatmap(cf_train2,linewidths=.5,annot=True,cmap="YlGnBu",cbar=False,square=True,xticklabels=(1,2,3), yticklabels=(1,2,3)) data_frame_orig = df_petr4 p_window = 120 beta_window = 30 data_frame_bench = df_ibov return_period = 1 df_petr4_1 = gen_feat_data(data_frame_orig,p_window,return_period,data_frame_bench,beta_window).fillna(0) return_period = 2 df_petr4_2 = gen_feat_data(data_frame_orig,p_window,return_period,data_frame_bench,beta_window).fillna(0) return_period = 3 df_petr4_3 = gen_feat_data(data_frame_orig,p_window,return_period,data_frame_bench,beta_window).fillna(0) return_period = 5 df_petr4_5 = gen_feat_data(data_frame_orig,p_window,return_period,data_frame_bench,beta_window).fillna(0) return_period = 10 df_petr4_10 = gen_feat_data(data_frame_orig,p_window,return_period,data_frame_bench,beta_window).fillna(0) df_petr4_1.tail(5) X,_,y,_ = train_eval(df_petr4_1,"2016-01-26 00:00:00","2018-01-26 00:00:00") gnb1 = run_model(X,y, MultinomialNB()) meas_acc(X,y,gnb1) X,_,y,_ = train_eval(df_petr4_2,"2016-01-26 00:00:00","2018-01-26 00:00:00") gnb2 = run_model(X,y, MultinomialNB()) meas_acc(X,y,gnb2) X,_,y,_ = train_eval(df_petr4_3,"2016-01-26 00:00:00","2018-01-26 00:00:00") gnb3 = run_model(X,y, MultinomialNB()) meas_acc(X,y,gnb3) X,_,y,_ = train_eval(df_petr4_5,"2016-01-26 00:00:00","2018-01-26 00:00:00") gnb5 = run_model(X,y, MultinomialNB()) meas_acc(X,y,gnb5) X,_,y,_ = train_eval(df_petr4_10,"2016-01-26 00:00:00","2018-01-26 00:00:00") gnb10 = run_model(X,y, MultinomialNB()) meas_acc(X,y,gnb10) def rolling_results(df,dia_ini_train_idx,dia_ini_test_idx,model): results = [] for i in df.iloc[dia_ini_train_idx:dia_ini_test_idx].index: X,X_test,y,y_test = train_eval(df,"2016-01-26 00:00:00",i) nb_model = run_model(X,y,model) y_predict = nb_model.predict(X_test.to_numpy().reshape(1, -1)).item() y_predictX = nb_model.predict(X) y_prob = nb_model.predict_proba(X_test.to_numpy().reshape(1, -1)) acc = metrics.balanced_accuracy_score(y, y_predictX) results.append([i,y_test,y_predict]+list(y_prob[0])+[acc]) return results, nb_model dia_ini_test_idx = df_petr4_1.index.get_loc("2018-01-26 00:00:00") dia_end_test_idx = df_petr4_1.index.get_loc("2021-05-26 00:00:00") res1,_ = rolling_results(df_petr4_1,dia_ini_test_idx,dia_end_test_idx,BernoulliNB()) res2,_ = rolling_results(df_petr4_2,dia_ini_test_idx,dia_end_test_idx,BernoulliNB()) res3,_ = rolling_results(df_petr4_2,dia_ini_test_idx,dia_end_test_idx,BernoulliNB()) res5,_ = rolling_results(df_petr4_5,dia_ini_test_idx,dia_end_test_idx,BernoulliNB()) res10,_ = rolling_results(df_petr4_10,dia_ini_test_idx,dia_end_test_idx,BernoulliNB()) print(res1) print(res2) print(res5) print(res10) import statistics trade_instruction = [] for i in range(len(res1)): #vend = res1[i][3]#*res2[i][3]#*res3[i][3]*res5[i][3]*res10[i][3] #mant = res1[i][4]#*res2[i][4]#*res3[i][4]*res5[i][4]*res10[i][4] #comp = res1[i][5]#*res2[i][5]#*res3[i][5]*res5[i][5]*res10[i][5] criterion_1 = [1 if res1[i][3]>res1[i][4]+res1[i][5] else (3 if res1[i][3]+res1[i][4]<res1[i][5] else 2)] criterion_2 = [1 if 
res2[i][3]>res2[i][4]+res2[i][5] else (3 if res2[i][3]+res2[i][4]<res2[i][5] else 2)] criterion_3 = [1 if res3[i][3]>res3[i][4]+res3[i][5] else (3 if res3[i][3]+res3[i][4]<res3[i][5] else 2)] criterion_5 = [1 if res5[i][3]>res5[i][4]+res5[i][5] else (3 if res5[i][3]+res5[i][4]<res5[i][5] else 2)] criterion_10 = [1 if res10[i][3]>res10[i][4]+res10[i][5] else (3 if res10[i][3]+res10[i][4]<res10[i][5] else 2)] #print(res1[i][6],res2[i][6],res3[i][6],res5[i][6],res10[i][6]) criteria = [criterion_1, criterion_2, criterion_3, criterion_5, criterion_10] criteria_acc = [res1[i][6],res2[i][6],res3[i][6],res5[i][6],res10[i][6]] max_acc = max(criteria_acc) max_index = criteria_acc.index(max_acc) max_acc_criteria = criteria[max_index] media = max_acc_criteria[0] #media = sp.stats.mode(criteria_10)[0][0] #media = 1 if vend>mant+comp else (3 if vend+mant<comp else 2) #print(media) if media>2: trade_instruction.append([res1[i][0],"C"]) elif media==2: trade_instruction.append([res1[i][0],"_"]) elif media<2: trade_instruction.append([res1[i][0],"V"]) print(trade_instruction) fig, ax = plt.subplots(figsize=(25, 7.5)) sn.lineplot(data=df_petr4["Close"].iloc[dia_ini_test_idx:dia_end_test_idx],ax=ax) style = dict(size=8, color='gray') for i in range(len(trade_instruction)): ax.text(trade_instruction[i][0], df_petr4["Close"].iloc[dia_ini_test_idx:dia_end_test_idx].iloc[i]+.1, trade_instruction[i][1],**style) stock = 100 cash = 0 for i in range(len(trade_instruction)): Total_Value_Init = 100*df_petr4_1["Close"].loc[trade_instruction[0][0]] if cash <=0 and stock <=0: print(i,stock,cash) break elif trade_instruction[i][1] == "C": if cash > 0: stock += (cash/2)/df_petr4_1["Close"].loc[trade_instruction[i][0]] cash = cash/2 print(i,stock,cash,"c",df_petr4_1["Close"].loc[trade_instruction[i][0]],trade_instruction[i][0]) elif cash == 0 and stock > 0: pass elif cash <=0 and stock <=0: break print(i,stock,cash,"c",df_petr4_1["Close"].loc[trade_instruction[i][0]],trade_instruction[i][0]) elif trade_instruction[i][1] == "M": print(i,stock,cash,"m",df_petr4_1["Close"].loc[trade_instruction[i][0]],trade_instruction[i][0]) pass elif trade_instruction[i][1] == "V": if stock > 0: cash += (stock*.75)*df_petr4_1["Close"].loc[trade_instruction[i][0]] stock = 0.25*stock print(i,stock,cash,"v",df_petr4_1["Close"].loc[trade_instruction[i][0]],trade_instruction[i][0]) elif cash > 0 and stock == 0: pass elif cash <=0 and stock <=0: break print(i,stock,cash,"v",df_petr4_1["Close"].loc[trade_instruction[i][0]],trade_instruction[i][0]) a = df_petr4_1["Close"].loc[trade_instruction[i][0]] print(a) print(Total_Value_Init) b = i print(f"dias negociados: {b}, qtd_ações: {stock}, dinheiro: R$ {cash}, Total Value: R$ {cash+stock*a}, Lucro: R$ {(cash+stock*a)-Total_Value_Init}") ```
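To put the simulated profit above in context, a quick comparison against a buy-and-hold position over the same window can be added. The sketch below assumes the variables created in the previous cells (`df_petr4`, `trade_instruction`, `cash`, `stock`, `Total_Value_Init`) are still in scope and introduces no new data.

```
# Quick benchmark: strategy result vs. simple buy-and-hold over the test window.
# Assumes df_petr4, trade_instruction, cash, stock and Total_Value_Init from the
# cells above are still in scope.
first_day = trade_instruction[0][0]
last_day = trade_instruction[-1][0]
first_close = df_petr4["Close"].loc[first_day]
last_close = df_petr4["Close"].loc[last_day]

buy_hold_final = Total_Value_Init * last_close / first_close
strategy_final = cash + stock * last_close

print(f"Buy-and-hold final value: R$ {buy_hold_final:.2f}")
print(f"Strategy final value:     R$ {strategy_final:.2f}")
print(f"Strategy excess return:   {100 * (strategy_final / buy_hold_final - 1):.1f} %")
```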
# Bổ trợ bài giảng về Đại số tuyến tính - Phần 1 ## MaSSP 2018, Computer Science Tài liệu ngắn này đưa ra định nghĩa một số khái niệm cơ bản trong đại số tuyến tính liên quan đến vector và ma trận. # 1. Một số khái niệm ## 1.1. Vô hướng (Scalar) Một `scalar` là một số bất kì thuộc tập số nào đó. Khi định nghĩa một số ta phải chỉ rõ tập số mà nó thuộc vào (gọi là `domain`). Ví dụ, $ n $ là số tự nhiên sẽ được kí hiệu: $ n \in \mathbb{N} $ (Natural numbers), hoặc $ x $ là số thực sẽ được kí hiệu: $ x \in \mathbb{R} $ (Real numbers). Trong Python số tự nhiên có thể là kiểu `int`, số thực có thể là kiểu `float`. <!--- Một số thường có thể định nghĩa được bằng một kiểu dữ liệu nguyên thủy của các ngôn ngữ lập trình. Như số tự nhiên có thể là kiểu `int`, số thực có thể là kiểu `float` trong Python. ---> ``` x = 1 print(type(x)) y = 2.0 print(type(y)) ``` ## 1.2. Véc-tơ (Vector) `Vector` là 1 mảng của các vô hướng scalars tương tự như mảng 1 chiều trong các ngôn ngữ lập trình. Các phần tử trong vector cũng được đánh địa chỉ và có thể truy cập nó qua các địa chỉ tương ứng của nó. Trong toán học, một vector có thể là vector cột (`column vector`) nếu các nó được biểu diễn dạng một cột nhiều hàng, hoặc có thể là vector hàng (`row vector`) nếu nó được biểu diễn dưới dạng một hàng của nhiều cột. Một vector cột có dạng như sau: $$ x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} $$ Một vector hàng có dạng như sau: $$ x = \begin{bmatrix} x_1, & x_2, & \cdots & x_n \end{bmatrix} $$ Trong đó, $ x_1 $, $ x_2 $, ..., $ x_n $ là các phần tử `thứ 1`, `thứ 2`, ... `thứ n` của vector. Lưu ý trong lập trình Python ta đánh số từ `0`: $x[0] = x_1, x[1] = x_2,...$. ## 1.3. Ma trận (Matrix) Ma trận là một mảng 2 chiều của các vô hướng tương tự như mảng 2 chiều trong các ngôn ngữ lập trình. Ví dụ dưới đây là một ma trận có $ m $ hàng và $ n $ cột: $$ A = \begin{bmatrix} A _{1, 1} & A _{1, 2} & \cdots & A _{1, n} \\ A _{2, 1} & A _{2, 2} & \cdots & A _{2, n} \\ \vdots & \vdots & \vdots & \vdots \\ A _{m, 1} & A _{m, 2} & \cdots & A _{m, n} \end{bmatrix} $$ Khi định nghĩa một ma trận ta cần chỉ rõ số hàng và số cột cùng trường số của các phần tử có nó. Lúc này, $ mn $ được gọi là cấp của ma trận. Ví dụ, ma trận số thực $ A $ có m hàng và n cột được kí hiệu là: $ A \in \mathbb{R}^{m \times n} $. Các phần tử trong ma trận được định danh bằng 2 địa chỉ hàng $ i $ và cột $ j $ tương ứng. Ví dụ phần tử hàng thứ 3, cột thứ 2 sẽ được kí hiệu là: $ A_{3,2} $. Ta cũng có thể kí hiệu các phần tử của hàng $ i $ là $ A _{i,:} $ và của cột $ j $ là $ A _{:,j} $. Nếu bạn để ý thì sẽ thấy $ A _{i,:} $ chính là vector hàng, còn $ A _{:,j} $ là vector cột. Như vậy, vector có thể coi là trường hợp đặt biệt của ma trận với số hàng hoặc số cột là 1. Các ma trận sẽ được kí hiệu: $ [A _{ij}] _{mn} $, trong đó $ A $ là tên của ma trận; $ m, n $ là cấp của ma trận; còn $ A _{ij} $ là các phần tử của ma trận tại hàng $ i $ và cột $ j $. <!--- Các vector ta cũng sẽ biểu diễn tương tự. vector hàng: $ [x_i]_n $, trong đó $ x $ là tên của vector; $ n $ là cấp của vector; $ x_i $ là phần tử của vector tại vị trí $ i $. vector cột ta sẽ biểu diễn thông qua phép chuyển vị của vector hàng: $ [x_i]_n ^\intercal $. Ngoài ra, nếu một ma trận được biểu diễn dưới dạng: $ [A _{1j}] _{1n} $ thì ta cũng sẽ hiểu ngầm luôn nó là vector hàng. Tương tự, với $ [A _{i1}] _{m1} $ thì ta có thể hiểu ngầm với nhau rằng nó là vector cột. 
---> Một điểm cần lưu ý nữa là các giá trị $ m, n, i, j $ khi được biểu điễn tường minh dưới dạng số, ta cần phải chèn dấu phẩy `,` vào giữa chúng. Ví dụ: $ [A _{ij}] _{9,4} $ là ma trận có cấp là `9, 4`. $ A _{5,25} $ là phần tử tại hàng `5` và cột `25`. Việc này giúp ta phân biệt được giữa ma trận và vector, nếu không ta sẽ bị nhầm ma trận thành vector. ## 1.4. Ten-xơ (Tensor) Tensor là một mảng nhiều chiều, nó là trường hợp tổng quát của việc biểu diễn số chiều. Như vậy, ma trận có thể coi là một tensor 2 chiều, vector là tensor một nhiều còn scalar là tensor zero chiều. Các phần tử của một tensor cần được định danh bằng số địa chỉ tương ứng với số chiều của tensor đó. Ví dụ mộ tensor 3 chiều $A$ có phần tử tại hàng $ i $, cột $ j $, cao $ k $ được kí hiệu là $ A_{i,j,k} $. <img src="https://github.com/vietthao2000/pre-program-package-2018-part-2/blob/master/images/tensor1.png?raw=true" alt="Tensor" style="height: 50%; width: 50%;"/> Ví dụ ảnh trắng đen hoặc xám (`grayscale`) được biểu diễn bằng ma trận 2 chiều. Giá trị của mỗi phần tử trong ma trận là một số thập phân nằm trong khoảng từ 0 đến 1, ứng với độ đen trắng của từng điểm ảnh (`pixel`) (0 thể hiện màu đen và giá trị càng gần tới 1 thì càng trắng). Do hình ảnh có chiều dài và chiều rộng, ma trận của các điểm ảnh là ma trận 2 chiều. <img src="https://github.com/vietthao2000/pre-program-package-2018-part-2/blob/master/images/MNIST_2.png?raw=true" alt="grayscale" style="height: 25%; width: 25%;"/> Một ảnh màu được biểu diễn bằng một tensor 3 chiều, 2 chiều đầu cũng để đánh số địa chỉ mỗi điểm ảnh dọc theo chiều dài và chiều rộng của ảnh. Chiều cuối cùng để phân biệt 3 màu cơ bản đỏ, xanh lá, xanh dương ($k=1,2,3$). Như vậy mỗi điểm ảnh được xác định bởi vị trí của nó, và thành phần 3 màu cơ bản. <img src="https://github.com/vietthao2000/pre-program-package-2018-part-2/blob/master/images/tensor2.png?raw=true" alt="color" style="height: 50%; width: 50%;"/> Vậy đố các bạn biết, một đoạn phim đen trắng sẽ được biểu diễn bằng tensor mấy chiều? Một đoạn phim màu thì sao? <img src="https://github.com/vietthao2000/pre-program-package-2018-part-2/blob/master/images/tensor4.png?raw=true" alt="video" style="height: 50%; width: 50%;"/> # 2. Một số ma trận đặc biệt ## 2.1. Ma trận không (zero matrix) Ma trận `zero` là ma trận mà tất cả các phần tử của nó đều bằng 0: $ A_{i,j} = 0, \forall{i,j} $. Ví dụ: $$ \varnothing = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} $$ Ta có thể viết $\bf 0_{m\times n}$ để chỉ ma trận zero có size $m\times n$. ## 2.2. Ma trận vuông (square matrix) Ma trận vuông là ma trận có số hàng bằng với số cột: $ A \in R^{n \times n} $. Ví dụ một ma trận vuông cấp 3 (số hàng và số cột là 3) có dạng như sau: $$ A = \begin{bmatrix} 2 & 1 & 9 \\ 4 & 5 & 9 \\ 8 & 0 & 5 \end{bmatrix} $$ Với ma trận vuông, đường chéo bắt đầu từ góc trái trên cùng tới góc phải dưới cùng được gọi là đường chéo chính: $ \{ A _{i,i} \} $. Ký hiệu $\{ \cdots \}$ dùng để chỉ một tập hợp (`set`). Trong ví dụ trên, đường chéo chính đi qua các phần tử `2, 5, 5`. ## 2.3. Ma trận chéo Ma trận chéo là ma trận vuông có các phần từ nằm ngoài đường chéo chính bằng 0: $ A_{i,j} = 0, \forall{i \not = j} $. Ví dụ ma trận chéo cấp 4 (có 4 hàng và 4 cột) có dạng như sau: $$ A = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \end{bmatrix} $$ > Lưu ý rằng ma trận vuông zero (ma trận vuông có các phần tử bằng 0) cũng là một ma trận chéo, ký hiệu $\bf 0_n$. ## 2.4. 
Ma trận đơn vị Là ma trận chéo có các phần tử trên đường chéo bằng 1: $$ \begin{cases} A _{i,j} = 0, \forall{i \not = j} \\ A _{i,j} = 1, \forall{i = j} \end{cases} $$ Ma trận đơn vị được kí hiệu là $ I_n $ với $ n $ là cấp của ma trận. Ví dụ ma trận đơn vị cấp 3 được biểu diễn như sau: $$ I_{3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$ <!--- đã nói ở phần định nghĩa ma trận ## 2.5. Ma trận cột Ma trận cột chính là vector cột, tức là ma trận chỉ có 1 cột. ## 2.6. Ma trận hàng Tương tự như ma trận cột, ma trận hàng chính là vector hàng, tức là ma trận chỉ có 1 hàng. ---> ## 2.5. Ma trận chuyển vị Ma trận chuyển vị là ma trận nhận được sau khi ta đổi hàng thành cột và cột thành hàng. $$ \begin{cases} A \in \mathbb{R}^{m\times n} \\ B \in \mathbb{R}^{n\times m} \\ A _{i,j} = B _{j,i}, \forall{i,j} \end{cases} $$ Ma trận chuyển vị của $ A $ được kí hiệu là $ A^\intercal $. Như vậy: $ (A^\intercal)_{i,j} = A _{j,i} $. $$ \begin{bmatrix} 1 & 2 & 3 \\ 10 & 15 & 20 \end{bmatrix} ^\intercal = \begin{bmatrix} 1 & 10 \\ 2 & 15 \\ 3 & 20 \end{bmatrix} $$ Vector cũng là một ma trận nên mọi phép toán với ma trận đều có thể áp dụng được cho vector, bao gồm cả phép chuyển vị ma trận. Sử dụng phép chuyển vị ta có thể biến một vector hàng thành vector cột và ngược lại. Mặc định (`by default, convention`) trong toán học khi cho một vector $x\in\mathbb{R}^n$ ta hiểu đây là một vector cột. Đôi lúc để viết cho ngắn gọi người ta thường sử dụng phép chuyển vị để định nghĩa vector cột, ví dụ $ x = [x_1, x_2, ..., x_n]^\intercal $. <!---Do đó ở ví dụ về vector hàng, theo chuẩn ta nên viết $x^{\top} = \begin{bmatrix} x_1, & x_2, & \cdots & x_n \end{bmatrix}$. ---> <!--- # 3. Các kí hiệu Để thuận tiện, từ nay về sau tôi sẽ mặc định các vô hướng, phần tử của ma trận (bao gồm cả vector) mà chúng ta làm việc là thuộc trường số thực $ \mathbb{R} $. Tôi cũng sẽ sử dụng một số kí hiệu bổ sung như dưới đây. Các ma trận sẽ được kí hiệu: $ [A _{ij}] _{mn} $, trong đó $ A $ là tên của ma trận; $ m, n $ là cấp của ma trận; còn $ A _{ij} $ là các phần tử của ma trận tại hàng $ i $ và cột $ j $. Các vector ta cũng sẽ biểu diễn tương tự. vector hàng: $ [x_i]_n $, trong đó $ x $ là tên của vector; $ n $ là cấp của vector; $ x_i $ là phần tử của vector tại vị trí $ i $. vector cột ta sẽ biểu diễn thông qua phép chuyển vị của vector hàng: $ [x_i]_n ^\intercal $. Ngoài ra, nếu một ma trận được biểu diễn dưới dạng: $ [A _{1j}] _{1n} $ thì ta cũng sẽ hiểu ngầm luôn nó là vector hàng. Tương tự, với $ [A _{i1}] _{m1} $ thì ta có thể hiểu ngầm với nhau rằng nó là vector cột. Một điểm cần lưu ý nữa là các giá trị $ m, n, i, j $ khi được biểu điễn tường minh dưới dạng số, ta cần phải chèn dấu phẩy `,` vào giữa chúng. Ví dụ: $ [A _{ij}] _{9,4} $ là ma trận có cấp là `9, 4`. $ A _{5,25} $ là phần tử tại hàng `5` và cột `25`. Việc này giúp ta phân biệt được giữa ma trận và vector, nếu không ta sẽ bị nhầm ma trận thành vector. Trên đây là một số khái niệm cơ bản để làm việc với ma trận, trong phần sau tôi sẽ đề cập tới các phép toán của ma trận. Việc biến đổi ma trận và các phép toán trên ma trận là rất cần thiết để làm việc với các bài toán về học máy sau này. --->
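To tie the definitions above to code, here is a short NumPy illustration (added as an example, not part of the original lesson) of the zero, identity and diagonal matrices and of the transpose property $(A^\intercal)_{i,j} = A_{j,i}$.

```
# NumPy illustration of the special matrices defined above and of the transpose.
import numpy as np

zero_3x4 = np.zeros((3, 4))          # zero matrix, all entries equal to 0
identity_3 = np.eye(3)               # identity matrix I_3
diagonal = np.diag([1, 2, 3, 4])     # diagonal matrix with 1..4 on the diagonal

A = np.array([[ 1,  2,  3],
              [10, 15, 20]])
print(A.T)                           # transpose: rows become columns
print(A.T.shape)                     # (3, 2)

# (A^T)_{i,j} == A_{j,i} for every i, j:
print(A.T[2, 1] == A[1, 2])          # True
```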
``` from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) from google.colab import drive drive.mount('/content/drive') !unzip -qq '/content/drive/My Drive/Colab Notebooks/Glaucoma detection/Data/BEH.zip' !pip install git+https://github.com/karolzak/keras-unet import numpy as np import matplotlib.pyplot as plt %matplotlib inline import glob import os import sys import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision import torchvision.models as models from torchvision.utils import make_grid from torch.utils.data import Dataset, random_split, DataLoader import torchvision.transforms as transforms import torchvision.transforms.functional as TF import random from torchvision.utils import make_grid from PIL import Image from keras_unet.utils import plot_imgs from sklearn.model_selection import train_test_split from keras_unet.models import custom_unet from keras.callbacks import ModelCheckpoint from keras.optimizers import Adam, SGD from keras_unet.metrics import iou, iou_thresholded from keras_unet.losses import jaccard_distance from keras_unet.utils import plot_imgs, plot_segm_history # Load FAU dataset orgs = glob.glob("/content/FAU/training/original/*") masks = glob.glob("/content/FAU/training/mask/*") size = 512 imgs_list = [] masks_list = [] for image, mask in zip(orgs, masks): imgs_list.append(np.array(Image.open(image).resize((size,size)))[:,:,1]) im = Image.open(mask).resize((512,512)) masks_list.append(np.array(im)) imgs_np = np.asarray(imgs_list) masks_np = np.asarray(masks_list) print('Original Images:', imgs_np.shape, ' Ground Truth images:', masks_np.shape) # plot_imgs(org_imgs=imgs_np, mask_imgs=masks_np, nm_img_to_plot=10, figsize=6) dataset_glaucoma = glob.glob("/content/BEH/Train/glaucoma/*.jpg") dataset_normal = glob.glob("/content/BEH/Train/normal/*.jpg") dataset = [] for image in dataset_glaucoma: dataset.append(np.array(Image.open(image).resize((size,size)))[:,:,1]) for image in dataset_normal: dataset.append(np.array(Image.open(image).resize((size,size)))[:,:,1]) dataset_np = np.asarray(dataset) dataset_x = np.asarray(dataset_np, dtype=np.float32)/255 dataset_x = dataset_x.reshape(dataset_x.shape[0], dataset_x.shape[1], dataset_x.shape[2], 1) print('Dataset:', dataset_x.shape) plot_imgs(org_imgs=dataset_np, mask_imgs=masks_np, nm_img_to_plot=10, figsize=6) # Get data into correct shape, dtype and range (0.0-1.0) print(imgs_np.max(), masks_np.max()) x = np.asarray(imgs_np, dtype=np.float32)/255 y = np.asarray(masks_np, dtype=np.float32)/255 print(x.max(), y.max()) print(x.shape, y.shape) y = y.reshape(y.shape[0], y.shape[1], y.shape[2], 1) x = x.reshape(x.shape[0], x.shape[1], x.shape[2], 1) print(x.shape, y.shape) x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.1, random_state=0) print("x_train: ", x_train.shape) print("y_train: ", y_train.shape) print("x_val: ", x_val.shape) print("y_val: ", y_val.shape) from keras_unet.utils import get_augmented train_gen = get_augmented( x_train, y_train, batch_size=8, data_gen_args = dict( rotation_range=5., width_shift_range=0.05, height_shift_range=0.05, shear_range=40, zoom_range=0.2, horizontal_flip=True, vertical_flip=False, fill_mode='constant' )) sample_batch = next(train_gen) xx, yy = sample_batch print(xx.shape, yy.shape) from 
keras_unet.utils import plot_imgs # Plot Dataset and Masks plot_imgs(org_imgs=xx, mask_imgs=yy, nm_img_to_plot=3, figsize=6) # Initialize network input_shape = x_train[0].shape model = custom_unet( input_shape, filters=32, use_batch_norm=True, dropout=0.3, dropout_change_per_layer=0.0, num_layers=4 ) model_filename = 'segm_model_v3.h5' callback_checkpoint = ModelCheckpoint( model_filename, verbose=1, monitor='val_loss', save_best_only=True, ) model.compile( optimizer=Adam(), # optimizer=SGD(lr=0.01, momentum=0.99), loss='binary_crossentropy', #loss=jaccard_distance, metrics=[iou, iou_thresholded] ) history = model.fit_generator( train_gen, steps_per_epoch=200, epochs=3, validation_data=(x_val, y_val), callbacks=[callback_checkpoint] ) plot_segm_history(history) # Segment Training data model.load_weights(model_filename) y_pred = model.predict(x_val) y_pred = np.moveaxis(y_pred, -1, 1) plot_imgs(org_imgs=x_val, mask_imgs=y_val, pred_imgs=y_pred, nm_img_to_plot=8) # Segment dataset dataset_y_pred = model.predict(dataset_x) plot_imgs(org_imgs=dataset_x, mask_imgs=dataset_y_pred, pred_imgs=dataset_y_pred, nm_img_to_plot=8) dataset_x = np.moveaxis(dataset_x, -1, 1) dataset_y_pred = np.moveaxis(dataset_y_pred, -1, 1) print(dataset_x.shape, dataset_y_pred.shape) import torch x = torch.Tensor(dataset_y_pred) from torchvision.utils import save_image from pathlib import Path for i in range(len(dataset_glaucoma)): output = x[i][0] out_dir = Path('/content/ORIGA_af/glaucoma') out_filename = str(i) + '_BEH.jpg' output_name = out_dir.joinpath(out_filename) save_image(output, output_name, padding=0) for i in range(len(dataset_glaucoma), len(x)): output = x[i][0] out_dir = Path('/content/ORIGA_af/normal') out_filename = str(i) + '_BEH.jpg' output_name = out_dir.joinpath(out_filename) save_image(output, output_name, padding=0) # Zip segmented dataset !zip -r -j BEH '/content/BEH/' ```
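As an added check on segmentation quality beyond the training metrics, the sketch below computes per-image Dice and IoU on the validation split with plain NumPy thresholding; it assumes `model`, `x_val` and `y_val` from the cells above are still in scope.

```
# Added sketch: per-image Dice and IoU on the validation split, using plain
# NumPy thresholding. Assumes model, x_val and y_val from the cells above.
import numpy as np

preds = model.predict(x_val)                 # shape (N, 512, 512, 1), values in [0, 1]
pred_bin = (preds > 0.5).astype(np.float32)  # binarize predicted masks
true_bin = (y_val > 0.5).astype(np.float32)

eps = 1e-7
inter = (pred_bin * true_bin).sum(axis=(1, 2, 3))
pred_area = pred_bin.sum(axis=(1, 2, 3))
true_area = true_bin.sum(axis=(1, 2, 3))

dice = (2 * inter + eps) / (pred_area + true_area + eps)
iou_scores = (inter + eps) / (pred_area + true_area - inter + eps)  # avoids clashing with the imported `iou`

print("mean Dice:", dice.mean())
print("mean IoU :", iou_scores.mean())
```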
<!-- dom:TITLE: Week 2 January 11-15: Introduction to the course and start Variational Monte Carlo -->
# Week 2 January 11-15: Introduction to the course and start Variational Monte Carlo
<!-- dom:AUTHOR: Morten Hjorth-Jensen Email [email protected] at Department of Physics and Center for Computing in Science Education, University of Oslo, Oslo, Norway & Department of Physics and Astronomy and Facility for Rare Ion Beams, Michigan State University, East Lansing, Michigan, USA -->
<!-- Author: -->
**Morten Hjorth-Jensen Email [email protected]**, Department of Physics and Center for Computing in Science Education, University of Oslo, Oslo, Norway and Department of Physics and Astronomy and Facility for Rare Ion Beams, Michigan State University, East Lansing, Michigan, USA

Date: **Jan 14, 2021**

Copyright 1999-2021, Morten Hjorth-Jensen Email [email protected]. Released under CC Attribution-NonCommercial 4.0 license

## Overview of week 2

**Topics.**

* Introduction to the course and overview of topics to be covered
* Introduction to Variational Monte Carlo methods, the Metropolis algorithm, statistics and Markov chain theory

**Teaching material, videos and written material.**

* Asynchronous videos
* Lecture notes and reading assignments
* Additional (often recommended) background material

## Textbook

There are no unique textbooks which cover the material to be discussed. For each week, however, we will, in addition to our own lecture notes, send links to additional literature. This can be articles or chapters from other textbooks. A useful textbook is

* [Bernd A. Berg, *Markov Chain Monte Carlo Simulations and their Statistical Analysis*, World Scientific, 2004](https://www.worldscientific.com/worldscibooks/10.1142/5602), chapters 1, 2

This book has its main focus on spin models, but many of the concepts are general. Chapters 1 and 2 contain a good discussion of the statistical foundation.

## Aims

* Be able to apply central many-particle methods like the Variational Monte Carlo method to properties of many-fermion systems and many-boson systems.
* Understand how to simulate quantum mechanical systems with many interacting particles. The methods are relevant for atomic, molecular, solid state, materials science, nanotechnology, quantum chemistry and nuclear physics.
* Learn to manage and structure larger projects, with unit tests, object orientation and writing clean code.
* Learn about a proper statistical analysis of large data sets.
* Learn to optimize functions that depend on many variables using convex optimization methods.
* Parallelization and code optimizations.

## Lectures and ComputerLab

* Lectures: Thursday (2.15pm-4pm). First time January 14. Last lecture May 6.
* Computerlab: Thursday (4.15pm-7pm), first time January 14, last lab session May 6.
* Weekly plans and all other information are on the webpage of the course.
* **First project to be handed in March 26**.
* **Second and final project to be handed in May 31.**
* There is no final exam, only project work.

## Course Format

* Two compulsory projects. Electronic reports only. You are free to choose your format. We use Devilry to hand in the projects.
* Evaluation and grading: the two projects count 1/2 each of the final mark. No exam.
* The computer lab (room 397 in the Physics building) has no PCs, so please bring your own laptop. C/C++ is the default programming language, but programming languages like Fortran2008, Rust, Julia, and/or Python can also be used.
All source codes discussed during the lectures can be found at the webpage of the course. ## Topics covered in this course * Parallelization (MPI and OpenMP), high-performance computing topics. Choose between Python, Fortran2008 and/or C++ as programming languages. * Algorithms for Monte Carlo Simulations (multidimensional integrals), Metropolis-Hastings and importance sampling algorithms. Improved Monte Carlo methods. * Statistical analysis of data from Monte Carlo calculations, bootstrapping, jackknife and blocking methods. * Eigenvalue solvers * For project 2 there will be at least three variants: a. Variational Monte Carlo for fermions b. Hartree-Fock theory for fermions c. Coupled cluster theory for fermions (iterative methods) d. Neural networks and Machine Learning to solve the same problems as in project 1 e. Eigenvalue problems with deep learning methods f. Possible project on quantum computing ## Topics covered in this course * Search for minima in multidimensional spaces (conjugate gradient method, steepest descent method, quasi-Newton-Raphson, Broyden-Jacobian). Convex optimization, gradient methods * Iterative methods for solutions of non-linear equations. * Object orientation * Data analysis and resampling techniques * Variational Monte Carlo (VMC) for 'ab initio' studies of quantum mechanical many-body systems. * Simulation of two- and three-dimensional systems like quantum dots or atoms and molecules or systems from solid state physics * **Simulation of trapped bosons using VMC (project 1, default)** * **Machine learning and neural networks (project 2, default, same system as in project 1)** * Extension of project 1 to fermionic systems (project 2) * Coupled cluster theory (project 2, depends on interest) * Other quantum-mechanical methods and systems can be tailored to one's interests (Hartree-Fock Theory, Many-body perturbation theory, time-dependent theories and more). ## Quantum Monte Carlo Motivation Most quantum mechanical problems of interest in for example atomic, molecular, nuclear and solid state physics consist of a large number of interacting electrons and ions or nucleons. The total number of particles $N$ is usually sufficiently large that an exact solution cannot be found. Typically, the expectation value for a chosen hamiltonian for a system of $N$ particles is $$ \langle H \rangle = \frac{\int d\boldsymbol{R}_1d\boldsymbol{R}_2\dots d\boldsymbol{R}_N \Psi^{\ast}(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N) H(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N) \Psi(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N)} {\int d\boldsymbol{R}_1d\boldsymbol{R}_2\dots d\boldsymbol{R}_N \Psi^{\ast}(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N) \Psi(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N)}, $$ an in general intractable problem. This integral is actually the starting point in a Variational Monte Carlo calculation. **Gaussian quadrature: Forget it**! Given 10 particles and 10 mesh points for each degree of freedom and an ideal 1 Tflops machine (all operations take the same time), how long will it take to compute the above integral? The lifetime of the universe is of the order of $10^{17}$ s. 
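A back-of-the-envelope estimate makes the point concrete (assuming three spatial dimensions, i.e. 30 degrees of freedom, and at least one floating-point operation per integration point):

```
# 10 particles in 3D -> 30 degrees of freedom; 10 mesh points each -> 10**30 integration points
mesh_points = 10**30
flops = 1.0e12                    # ideal 1 Tflops machine
seconds = mesh_points / flops
print(f"{seconds:.1e} s")         # ~1e18 s, well beyond the ~1e17 s lifetime of the universe
```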
## Quantum Monte Carlo Motivation As an example from the nuclear many-body problem, we have Schroedinger's equation as a differential equation $$ \hat{H}\Psi(\boldsymbol{r}_1,..,\boldsymbol{r}_A,\alpha_1,..,\alpha_A)=E\Psi(\boldsymbol{r}_1,..,\boldsymbol{r}_A,\alpha_1,..,\alpha_A) $$ where $$ \boldsymbol{r}_1,..,\boldsymbol{r}_A, $$ are the coordinates and $$ \alpha_1,..,\alpha_A, $$ are sets of relevant quantum numbers such as spin and isospin for a system of $A$ nucleons ($A=N+Z$, $N$ being the number of neutrons and $Z$ the number of protons). ## Quantum Monte Carlo Motivation There are $$ 2^A\times \left(\begin{array}{c} A\\ Z\end{array}\right) $$ coupled second-order differential equations in $3A$ dimensions. For a nucleus like beryllium-10 this number is **215040**. This is a truely challenging many-body problem. Methods like partial differential equations can at most be used for 2-3 particles. ## Various many-body methods * Monte-Carlo methods * Renormalization group (RG) methods, in particular density matrix RG * Large-scale diagonalization (Iterative methods, Lanczo's method, dimensionalities $10^{10}$ states) * Coupled cluster theory, favoured method in quantum chemistry, molecular and atomic physics. Applications to ab initio calculations in nuclear physics as well for large nuclei. * Perturbative many-body methods * Green's function methods * Density functional theory/Mean-field theory and Hartree-Fock theory The physics of the system hints at which many-body methods to use. ## Quantum Monte Carlo Motivation **Pros and Cons of Monte Carlo.** * Is physically intuitive. * Allows one to study systems with many degrees of freedom. Diffusion Monte Carlo (DMC) and Green's function Monte Carlo (GFMC) yield in principle the exact solution to Schroedinger's equation. * Variational Monte Carlo (VMC) is easy to implement but needs a reliable trial wave function, can be difficult to obtain. This is where we will use Hartree-Fock theory to construct an optimal basis. * DMC/GFMC for fermions (spin with half-integer values, electrons, baryons, neutrinos, quarks) has a sign problem. Nature prefers an anti-symmetric wave function. PDF in this case given distribution of random walkers. * The solution has a statistical error, which can be large. * There is a limit for how large systems one can study, DMC needs a huge number of random walkers in order to achieve stable results. * Obtain only the lowest-lying states with a given symmetry. Can get excited states with extra labor. ## Quantum Monte Carlo Motivation **Where and why do we use Monte Carlo Methods in Quantum Physics.** * Quantum systems with many particles at finite temperature: Path Integral Monte Carlo with applications to dense matter and quantum liquids (phase transitions from normal fluid to superfluid). Strong correlations. * Bose-Einstein condensation of dilute gases, method transition from non-linear PDE to Diffusion Monte Carlo as density increases. * Light atoms, molecules, solids and nuclei. * Lattice Quantum-Chromo Dynamics. Impossible to solve without MC calculations. * Simulations of systems in solid state physics, from semiconductors to spin systems. Many electrons active and possibly strong correlations. ## Quantum Monte Carlo Motivation We start with the variational principle. 
Given a hamiltonian $H$ and a trial wave function $\Psi_T$, the variational principle states that the expectation value of $\langle H \rangle$, defined through $$ E[H]= \langle H \rangle = \frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})H(\boldsymbol{R})\Psi_T(\boldsymbol{R})} {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}, $$ is an upper bound to the ground state energy $E_0$ of the hamiltonian $H$, that is $$ E_0 \le \langle H \rangle . $$ In general, the integrals involved in the calculation of various expectation values are multi-dimensional ones. Traditional integration methods such as the Gauss-Legendre will not be adequate for say the computation of the energy of a many-body system. ## Quantum Monte Carlo Motivation The trial wave function can be expanded in the eigenstates of the hamiltonian since they form a complete set, viz., $$ \Psi_T(\boldsymbol{R})=\sum_i a_i\Psi_i(\boldsymbol{R}), $$ and assuming the set of eigenfunctions to be normalized one obtains $$ \frac{\sum_{nm}a^*_ma_n \int d\boldsymbol{R}\Psi^{\ast}_m(\boldsymbol{R})H(\boldsymbol{R})\Psi_n(\boldsymbol{R})} {\sum_{nm}a^*_ma_n \int d\boldsymbol{R}\Psi^{\ast}_m(\boldsymbol{R})\Psi_n(\boldsymbol{R})} =\frac{\sum_{n}a^2_n E_n} {\sum_{n}a^2_n} \ge E_0, $$ where we used that $H(\boldsymbol{R})\Psi_n(\boldsymbol{R})=E_n\Psi_n(\boldsymbol{R})$. In general, the integrals involved in the calculation of various expectation values are multi-dimensional ones. The variational principle yields the lowest state of a given symmetry. ## Quantum Monte Carlo Motivation In most cases, a wave function has only small values in large parts of configuration space, and a straightforward procedure which uses homogenously distributed random points in configuration space will most likely lead to poor results. This may suggest that some kind of importance sampling combined with e.g., the Metropolis algorithm may be a more efficient way of obtaining the ground state energy. The hope is then that those regions of configurations space where the wave function assumes appreciable values are sampled more efficiently. ## Quantum Monte Carlo Motivation The tedious part in a VMC calculation is the search for the variational minimum. A good knowledge of the system is required in order to carry out reasonable VMC calculations. This is not always the case, and often VMC calculations serve rather as the starting point for so-called diffusion Monte Carlo calculations (DMC). DMC is a way of solving exactly the many-body Schroedinger equation by means of a stochastic procedure. A good guess on the binding energy and its wave function is however necessary. A carefully performed VMC calculation can aid in this context. ## Quantum Monte Carlo Motivation * Construct first a trial wave function $\psi_T(\boldsymbol{R},\boldsymbol{\alpha})$, for a many-body system consisting of $N$ particles located at positions $\boldsymbol{R}=(\boldsymbol{R}_1,\dots ,\boldsymbol{R}_N)$. The trial wave function depends on $\alpha$ variational parameters $\boldsymbol{\alpha}=(\alpha_1,\dots ,\alpha_M)$. * Then we evaluate the expectation value of the hamiltonian $H$ $$ E[H]=\langle H \rangle = \frac{\int d\boldsymbol{R}\Psi^{\ast}_{T}(\boldsymbol{R},\boldsymbol{\alpha})H(\boldsymbol{R})\Psi_{T}(\boldsymbol{R},\boldsymbol{\alpha})} {\int d\boldsymbol{R}\Psi^{\ast}_{T}(\boldsymbol{R},\boldsymbol{\alpha})\Psi_{T}(\boldsymbol{R},\boldsymbol{\alpha})}. $$ * Thereafter we vary $\alpha$ according to some minimization algorithm and return to the first step. 
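The final minimization step can be handed to a standard optimizer once the energy is available as a function of the variational parameters. A minimal sketch, using as a stand-in the analytic result $\overline{E}[\alpha]=\alpha(\alpha/2-1)$ derived for the hydrogen example later in these notes (in a real calculation the function would run a full Monte Carlo sampling of the local energy):

```
from scipy.optimize import minimize_scalar

def vmc_energy(alpha):
    # Stand-in for a Monte Carlo estimate of <H> at this alpha;
    # here we use the analytic hydrogen result E(alpha) = alpha*(alpha/2 - 1).
    return alpha * (alpha / 2.0 - 1.0)

res = minimize_scalar(vmc_energy, bounds=(0.1, 2.0), method='bounded')
print(res.x, res.fun)   # minimum close to alpha = 1, E = -1/2
```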
## Quantum Monte Carlo Motivation **Basic steps.** Choose a trial wave function $\psi_T(\boldsymbol{R})$. $$ P(\boldsymbol{R})= \frac{\left|\psi_T(\boldsymbol{R})\right|^2}{\int \left|\psi_T(\boldsymbol{R})\right|^2d\boldsymbol{R}}. $$ This is our new probability distribution function (PDF). The approximation to the expectation value of the Hamiltonian is now $$ E[H(\boldsymbol{\alpha})] = \frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R},\boldsymbol{\alpha})H(\boldsymbol{R})\Psi_T(\boldsymbol{R},\boldsymbol{\alpha})} {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R},\boldsymbol{\alpha})\Psi_T(\boldsymbol{R},\boldsymbol{\alpha})}. $$ ## Quantum Monte Carlo Motivation Define a new quantity <!-- Equation labels as ordinary links --> <div id="eq:locale1"></div> $$ E_L(\boldsymbol{R},\boldsymbol{\alpha})=\frac{1}{\psi_T(\boldsymbol{R},\boldsymbol{\alpha})}H\psi_T(\boldsymbol{R},\boldsymbol{\alpha}), \label{eq:locale1} \tag{1} $$ called the local energy, which, together with our trial PDF yields <!-- Equation labels as ordinary links --> <div id="eq:vmc1"></div> $$ E[H(\boldsymbol{\alpha})]=\int P(\boldsymbol{R})E_L(\boldsymbol{R}) d\boldsymbol{R}\approx \frac{1}{N}\sum_{i=1}^N E_L(\boldsymbol{R_i},\boldsymbol{\alpha}) \label{eq:vmc1} \tag{2} $$ with $N$ being the number of Monte Carlo samples. ## Quantum Monte Carlo The Algorithm for performing a variational Monte Carlo calculations runs thus as this * Initialisation: Fix the number of Monte Carlo steps. Choose an initial $\boldsymbol{R}$ and variational parameters $\alpha$ and calculate $\left|\psi_T^{\alpha}(\boldsymbol{R})\right|^2$. * Initialise the energy and the variance and start the Monte Carlo calculation. * Calculate a trial position $\boldsymbol{R}_p=\boldsymbol{R}+r*step$ where $r$ is a random variable $r \in [0,1]$. * Metropolis algorithm to accept or reject this move $w = P(\boldsymbol{R}_p)/P(\boldsymbol{R})$. * If the step is accepted, then we set $\boldsymbol{R}=\boldsymbol{R}_p$. * Update averages * Finish and compute final averages. Observe that the jumping in space is governed by the variable *step*. This is Called brute-force sampling. Need importance sampling to get more relevant sampling, see lectures below. ## Quantum Monte Carlo: hydrogen atom The radial Schroedinger equation for the hydrogen atom can be written as $$ -\frac{\hbar^2}{2m}\frac{\partial^2 u(r)}{\partial r^2}- \left(\frac{ke^2}{r}-\frac{\hbar^2l(l+1)}{2mr^2}\right)u(r)=Eu(r), $$ or with dimensionless variables <!-- Equation labels as ordinary links --> <div id="eq:hydrodimless1"></div> $$ -\frac{1}{2}\frac{\partial^2 u(\rho)}{\partial \rho^2}- \frac{u(\rho)}{\rho}+\frac{l(l+1)}{2\rho^2}u(\rho)-\lambda u(\rho)=0, \label{eq:hydrodimless1} \tag{3} $$ with the hamiltonian $$ H=-\frac{1}{2}\frac{\partial^2 }{\partial \rho^2}- \frac{1}{\rho}+\frac{l(l+1)}{2\rho^2}. $$ Use variational parameter $\alpha$ in the trial wave function <!-- Equation labels as ordinary links --> <div id="eq:trialhydrogen"></div> $$ u_T^{\alpha}(\rho)=\alpha\rho e^{-\alpha\rho}. \label{eq:trialhydrogen} \tag{4} $$ ## Quantum Monte Carlo: hydrogen atom Inserting this wave function into the expression for the local energy $E_L$ gives $$ E_L(\rho)=-\frac{1}{\rho}- \frac{\alpha}{2}\left(\alpha-\frac{2}{\rho}\right). 
$$ A simple variational Monte Carlo calculation results in <table border="1"> <thead> <tr><th align="center"> $\alpha$ </th> <th align="center">$\langle H \rangle $</th> <th align="center"> $\sigma^2$</th> <th align="center">$\sigma/\sqrt{N}$</th> </tr> </thead> <tbody> <tr><td align="center"> 7.00000E-01 </td> <td align="center"> -4.57759E-01 </td> <td align="center"> 4.51201E-02 </td> <td align="center"> 6.71715E-04 </td> </tr> <tr><td align="center"> 8.00000E-01 </td> <td align="center"> -4.81461E-01 </td> <td align="center"> 3.05736E-02 </td> <td align="center"> 5.52934E-04 </td> </tr> <tr><td align="center"> 9.00000E-01 </td> <td align="center"> -4.95899E-01 </td> <td align="center"> 8.20497E-03 </td> <td align="center"> 2.86443E-04 </td> </tr> <tr><td align="center"> 1.00000E-00 </td> <td align="center"> -5.00000E-01 </td> <td align="center"> 0.00000E+00 </td> <td align="center"> 0.00000E+00 </td> </tr> <tr><td align="center"> 1.10000E+00 </td> <td align="center"> -4.93738E-01 </td> <td align="center"> 1.16989E-02 </td> <td align="center"> 3.42036E-04 </td> </tr> <tr><td align="center"> 1.20000E+00 </td> <td align="center"> -4.75563E-01 </td> <td align="center"> 8.85899E-02 </td> <td align="center"> 9.41222E-04 </td> </tr> <tr><td align="center"> 1.30000E+00 </td> <td align="center"> -4.54341E-01 </td> <td align="center"> 1.45171E-01 </td> <td align="center"> 1.20487E-03 </td> </tr> </tbody> </table> ## Quantum Monte Carlo: hydrogen atom We note that at $\alpha=1$ we obtain the exact result, and the variance is zero, as it should. The reason is that we then have the exact wave function, and the action of the hamiltionan on the wave function $$ H\psi = \mathrm{constant}\times \psi, $$ yields just a constant. The integral which defines various expectation values involving moments of the hamiltonian becomes then $$ \langle H^n \rangle = \frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})H^n(\boldsymbol{R})\Psi_T(\boldsymbol{R})} {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}= \mathrm{constant}\times\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})} {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=\mathrm{constant}. $$ **This gives an important information: the exact wave function leads to zero variance!** Variation is then performed by minimizing both the energy and the variance. ## [Quantum Monte Carlo for bosons](https://github.com/mortele/variational-monte-carlo-fys4411) For bosons in a harmonic oscillator-like trap we will use is a spherical (S) or an elliptical (E) harmonic trap in one, two and finally three dimensions, with the latter given by <!-- Equation labels as ordinary links --> <div id="trap_eqn"></div> $$ \begin{equation} V_{ext}(\mathbf{r}) = \Bigg\{ \begin{array}{ll} \frac{1}{2}m\omega_{ho}^2r^2 & (S)\\ \strut \frac{1}{2}m[\omega_{ho}^2(x^2+y^2) + \omega_z^2z^2] & (E) \label{trap_eqn} \tag{5} \end{array} \end{equation} $$ where (S) stands for symmetric and <!-- Equation labels as ordinary links --> <div id="_auto1"></div> $$ \begin{equation} \hat{H} = \sum_i^N \left( \frac{-\hbar^2}{2m} { \bigtriangledown }_{i}^2 + V_{ext}({\bf{r}}_i)\right) + \sum_{i<j}^{N} V_{int}({\bf{r}}_i,{\bf{r}}_j), \label{_auto1} \tag{6} \end{equation} $$ as the two-body Hamiltonian of the system. 
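As a quick numerical illustration of the trap in Eq. (5) (in units where $m=\omega_{ho}=1$ and with a hypothetical elliptical frequency $\omega_z$), the external potential can be coded as:

```
def v_ext(r, omega_ho=1.0, omega_z=None):
    # External trap of Eq. (5): spherical (S) if omega_z is None, elliptical (E) otherwise.
    x, y, z = r
    if omega_z is None:
        return 0.5 * omega_ho**2 * (x**2 + y**2 + z**2)
    return 0.5 * (omega_ho**2 * (x**2 + y**2) + omega_z**2 * z**2)

print(v_ext((1.0, 0.0, 1.0)), v_ext((1.0, 0.0, 1.0), omega_z=2.0))
```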
## [Quantum Monte Carlo for bosons](https://github.com/mortele/variational-monte-carlo-fys4411) We will represent the inter-boson interaction by a pairwise, repulsive potential <!-- Equation labels as ordinary links --> <div id="_auto2"></div> $$ \begin{equation} V_{int}(|\mathbf{r}_i-\mathbf{r}_j|) = \Bigg\{ \begin{array}{ll} \infty & {|\mathbf{r}_i-\mathbf{r}_j|} \leq {a}\\ 0 & {|\mathbf{r}_i-\mathbf{r}_j|} > {a} \end{array} \label{_auto2} \tag{7} \end{equation} $$ where $a$ is the so-called hard-core diameter of the bosons. Clearly, $V_{int}(|\mathbf{r}_i-\mathbf{r}_j|)$ is zero if the bosons are separated by a distance $|\mathbf{r}_i-\mathbf{r}_j|$ greater than $a$ but infinite if they attempt to come within a distance $|\mathbf{r}_i-\mathbf{r}_j| \leq a$. ## [Quantum Monte Carlo for bosons](https://github.com/mortele/variational-monte-carlo-fys4411) Our trial wave function for the ground state with $N$ atoms is given by <!-- Equation labels as ordinary links --> <div id="eq:trialwf"></div> $$ \begin{equation} \Psi_T(\mathbf{R})=\Psi_T(\mathbf{r}_1, \mathbf{r}_2, \dots \mathbf{r}_N,\alpha,\beta)=\prod_i g(\alpha,\beta,\mathbf{r}_i)\prod_{i<j}f(a,|\mathbf{r}_i-\mathbf{r}_j|), \label{eq:trialwf} \tag{8} \end{equation} $$ where $\alpha$ and $\beta$ are variational parameters. The single-particle wave function is proportional to the harmonic oscillator function for the ground state <!-- Equation labels as ordinary links --> <div id="_auto3"></div> $$ \begin{equation} g(\alpha,\beta,\mathbf{r}_i)= \exp{[-\alpha(x_i^2+y_i^2+\beta z_i^2)]}. \label{_auto3} \tag{9} \end{equation} $$ ## [Quantum Monte Carlo for bosons](https://github.com/mortele/variational-monte-carlo-fys4411) For spherical traps we have $\beta = 1$ and for non-interacting bosons ($a=0$) we have $\alpha = 1/2a_{ho}^2$. The correlation wave function is <!-- Equation labels as ordinary links --> <div id="_auto4"></div> $$ \begin{equation} f(a,|\mathbf{r}_i-\mathbf{r}_j|)=\Bigg\{ \begin{array}{ll} 0 & {|\mathbf{r}_i-\mathbf{r}_j|} \leq {a}\\ (1-\frac{a}{|\mathbf{r}_i-\mathbf{r}_j|}) & {|\mathbf{r}_i-\mathbf{r}_j|} > {a}. \end{array} \label{_auto4} \tag{10} \end{equation} $$ ### Simple example, the hydrogen atom The radial Schroedinger equation for the hydrogen atom can be written as (when we have gotten rid of the first derivative term in the kinetic energy and used $rR(r)=u(r)$) $$ -\frac{\hbar^2}{2m}\frac{d^2 u(r)}{d r^2}- \left(\frac{ke^2}{r}-\frac{\hbar^2l(l+1)}{2mr^2}\right)u(r)=Eu(r). $$ We will specialize to the case with $l=0$ and end up with $$ -\frac{\hbar^2}{2m}\frac{d^2 u(r)}{d r^2}- \left(\frac{ke^2}{r}\right)u(r)=Eu(r). $$ Then we introduce a dimensionless variable $\rho=r/a$ where $a$ is a constant with dimension length. Multiplying with $ma^2/\hbar^2$ we can rewrite our equations as $$ -\frac{1}{2}\frac{d^2 u(\rho)}{d \rho^2}- \frac{ke^2ma}{\hbar^2}\frac{u(\rho)}{\rho}-\lambda u(\rho)=0. $$ Since $a$ is just a parameter we choose to set $$ \frac{ke^2ma}{\hbar^2}=1, $$ which leads to $a=\hbar^2/mke^2$, better known as the Bohr radius with value $0.053$ nm. Scaling the equations this way does not only render our numerical treatment simpler since we avoid carrying with us all physical parameters, but we obtain also a **natural** length scale. We will see this again and again. In our discussions below with a harmonic oscillator trap, the **natural** lentgh scale with be determined by the oscillator frequency, the mass of the particle and $\hbar$. We have also defined a dimensionless 'energy' $\lambda = Ema^2/\hbar^2$. 
With the rescaled quantities, the ground state energy of the hydrogen atom is $1/2$. The equation we want to solve is now defined by the Hamiltonian $$ H=-\frac{1}{2}\frac{d^2 }{d \rho^2}-\frac{1}{\rho}. $$ As trial wave function we peep now into the analytical solution for the hydrogen atom and use (with $\alpha$ as a variational parameter) $$ u_T^{\alpha}(\rho)=\alpha\rho \exp{-(\alpha\rho)}. $$ Inserting this wave function into the expression for the local energy $E_L$ gives $$ E_L(\rho)=-\frac{1}{\rho}- \frac{\alpha}{2}\left(\alpha-\frac{2}{\rho}\right). $$ To have analytical local energies saves us from computing numerically the second derivative, a feature which often increases our numerical expenditure with a factor of three or more. Integratng up the local energy (recall to bring back the PDF in the integration) gives $\overline{E}[\boldsymbol{\alpha}]=\alpha(\alpha/2-1)$. ### Second example, the harmonic oscillator in one dimension We present here another well-known example, the harmonic oscillator in one dimension for one particle. This will also serve the aim of introducing our next model, namely that of interacting electrons in a harmonic oscillator trap. Here as well, we do have analytical solutions and the energy of the ground state, with $\hbar=1$, is $1/2\omega$, with $\omega$ being the oscillator frequency. We use the following trial wave function $$ \psi_T(x;\alpha) = \exp{-(\frac{1}{2}\alpha^2x^2)}, $$ which results in a local energy $$ \frac{1}{2}\left(\alpha^2+x^2(1-\alpha^4)\right). $$ We can compare our numerically calculated energies with the exact energy as function of $\alpha$ $$ \overline{E}[\alpha] = \frac{1}{4}\left(\alpha^2+\frac{1}{\alpha^2}\right). $$ Similarly, with the above ansatz, we can also compute the exact variance which reads $$ \sigma^2[\alpha]=\frac{1}{4}\left(1+(1-\alpha^4)^2\frac{3}{4\alpha^4}\right)-\overline{E}. $$ Our code for computing the energy of the ground state of the harmonic oscillator follows here. We start by defining directories where we store various outputs. ``` # Common imports import os # Where to save the figures and data files PROJECT_ROOT_DIR = "Results" FIGURE_ID = "Results/FigureFiles" DATA_ID = "Results/VMCHarmonic" if not os.path.exists(PROJECT_ROOT_DIR): os.mkdir(PROJECT_ROOT_DIR) if not os.path.exists(FIGURE_ID): os.makedirs(FIGURE_ID) if not os.path.exists(DATA_ID): os.makedirs(DATA_ID) def image_path(fig_id): return os.path.join(FIGURE_ID, fig_id) def data_path(dat_id): return os.path.join(DATA_ID, dat_id) def save_fig(fig_id): plt.savefig(image_path(fig_id) + ".png", format='png') outfile = open(data_path("VMCHarmonic.dat"),'w') ``` We proceed with the implementation of the Monte Carlo algorithm but list first the ansatz for the wave function and the expression for the local energy ``` %matplotlib inline # VMC for the one-dimensional harmonic oscillator # Brute force Metropolis, no importance sampling and no energy minimization from math import exp, sqrt from random import random, seed import numpy as np import matplotlib.pyplot as plt from numba import jit from decimal import * # Trial wave function for the Harmonic oscillator in one dimension def WaveFunction(r,alpha): return exp(-0.5*alpha*alpha*r*r) # Local energy for the Harmonic oscillator in one dimension def LocalEnergy(r,alpha): return 0.5*r*r*(1-alpha**4) + 0.5*alpha*alpha ``` Note that in the Metropolis algorithm there is no need to compute the trial wave function, mainly since we are just taking the ratio of two exponentials. 
It is then from a computational point view, more convenient to compute the argument from the ratio and then calculate the exponential. Here we have refrained from this purely of pedagogical reasons. ``` # The Monte Carlo sampling with the Metropolis algo # The jit decorator tells Numba to compile this function. # The argument types will be inferred by Numba when the function is called. def MonteCarloSampling(): NumberMCcycles= 100000 StepSize = 1.0 # positions PositionOld = 0.0 PositionNew = 0.0 # seed for rng generator seed() # start variational parameter alpha = 0.4 for ia in range(MaxVariations): alpha += .05 AlphaValues[ia] = alpha energy = energy2 = 0.0 #Initial position PositionOld = StepSize * (random() - .5) wfold = WaveFunction(PositionOld,alpha) #Loop over MC MCcycles for MCcycle in range(NumberMCcycles): #Trial position PositionNew = PositionOld + StepSize*(random() - .5) wfnew = WaveFunction(PositionNew,alpha) #Metropolis test to see whether we accept the move if random() <= wfnew**2 / wfold**2: PositionOld = PositionNew wfold = wfnew DeltaE = LocalEnergy(PositionOld,alpha) energy += DeltaE energy2 += DeltaE**2 #We calculate mean, variance and error energy /= NumberMCcycles energy2 /= NumberMCcycles variance = energy2 - energy**2 error = sqrt(variance/NumberMCcycles) Energies[ia] = energy Variances[ia] = variance outfile.write('%f %f %f %f \n' %(alpha,energy,variance,error)) return Energies, AlphaValues, Variances ``` Finally, the results are presented here with the exact energies and variances as well. ``` #Here starts the main program with variable declarations MaxVariations = 20 Energies = np.zeros((MaxVariations)) ExactEnergies = np.zeros((MaxVariations)) ExactVariance = np.zeros((MaxVariations)) Variances = np.zeros((MaxVariations)) AlphaValues = np.zeros(MaxVariations) (Energies, AlphaValues, Variances) = MonteCarloSampling() outfile.close() ExactEnergies = 0.25*(AlphaValues*AlphaValues+1.0/(AlphaValues*AlphaValues)) ExactVariance = 0.25*(1.0+((1.0-AlphaValues**4)**2)*3.0/(4*(AlphaValues**4)))-ExactEnergies*ExactEnergies #simple subplot plt.subplot(2, 1, 1) plt.plot(AlphaValues, Energies, 'o-',AlphaValues, ExactEnergies,'r-') plt.title('Energy and variance') plt.ylabel('Dimensionless energy') plt.subplot(2, 1, 2) plt.plot(AlphaValues, Variances, '.-',AlphaValues, ExactVariance,'r-') plt.xlabel(r'$\alpha$', fontsize=15) plt.ylabel('Variance') save_fig("VMCHarmonic") plt.show() #nice printout with Pandas import pandas as pd from pandas import DataFrame data ={'Alpha':AlphaValues, 'Energy':Energies,'Exact Energy':ExactEnergies,'Variance':Variances,'Exact Variance':ExactVariance,} frame = pd.DataFrame(data) print(frame) ``` For $\alpha=1$ we have the exact eigenpairs, as can be deduced from the table here. With $\omega=1$, the exact energy is $1/2$ a.u. with zero variance, as it should. We see also that our computed variance follows rather well the exact variance. Increasing the number of Monte Carlo cycles will improve our statistics (try to increase the number of Monte Carlo cycles). The fact that the variance is exactly equal to zero when $\alpha=1$ is that we then have the exact wave function, and the action of the hamiltionan on the wave function $$ H\psi = \mathrm{constant}\times \psi, $$ yields just a constant. 
The integral which defines various expectation values involving moments of the hamiltonian becomes then $$ \langle H^n \rangle = \frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})H^n(\boldsymbol{R})\Psi_T(\boldsymbol{R})} {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}= \mathrm{constant}\times\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})} {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=\mathrm{constant}. $$ **This gives an important information: the exact wave function leads to zero variance!** As we will see below, many practitioners perform a minimization on both the energy and the variance.
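A minimal sketch of such a combined objective, reusing the analytic energy and variance expressions for the one-dimensional harmonic oscillator given above (the 50/50 weighting is an arbitrary choice for illustration):

```
from scipy.optimize import minimize_scalar

def exact_energy(alpha):
    return 0.25 * (alpha**2 + 1.0 / alpha**2)

def exact_variance(alpha):
    e = exact_energy(alpha)
    return 0.25 * (1.0 + (1.0 - alpha**4)**2 * 3.0 / (4.0 * alpha**4)) - e * e

def combined_cost(alpha, weight=0.5):
    # Weighted mix of energy and variance; both are minimized at alpha = 1 for this ansatz.
    return (1.0 - weight) * exact_energy(alpha) + weight * exact_variance(alpha)

res = minimize_scalar(combined_cost, bounds=(0.5, 2.0), method='bounded')
print(res.x, res.fun)   # optimum close to alpha = 1, where the variance vanishes
```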
# Homework: "Confidence intervals. Statistical hypothesis testing for independent samples"

```
import scipy.stats as stats
import pandas as pd
import numpy as np
import scipy as sp
```

1. Find the minimum sample size needed to build an interval estimate of the mean with precision ∆ = 3, variance σ^2 = 225 (i.e. standard deviation σ = 15) and confidence level β = 0.95.

```
# β = 1 − α   # confidence level, i.e. the probability of not rejecting a true null hypothesis
# α = 1 − β   # significance level (usually 5%), i.e. the probability of rejecting a true null hypothesis (type I error)
# p-value: the smallest significance level at which the null hypothesis would be rejected
# if p-value < α, the result is statistically significant and the null hypothesis can be rejected

Z = 1.96     # 95% confidence
sigma = 15   # standard deviation
e = 3        # required precision
N = ((sigma*Z)/e)**2
N
```

2. You are given two samples of men's and women's heights. Using Student's t-test, show that the difference between the samples is insignificant at a significance level of 0.001.

```
np.random.seed(12)
population_men = stats.norm.rvs(loc=19,scale=171,size=11000000)  # Sample of men with mean height 171
population_women = stats.norm.rvs(loc=16,scale=165,size=12000)   # Sample of women with mean height 165

t, p = stats.ttest_ind(population_men,population_women)
print("t = " + str(t))
print("p = " + str(p))

pd.DataFrame(population_men).hist()
pd.DataFrame(population_women).hist()
```

3. Determine the sample size needed to study the average check for a cup of coffee in a randomly chosen city, given that the standard deviation in that city is 150, the confidence level is 95%, and the margin of error is 50 rubles.

```
sigma = 150
Z = 1.96   # 95% confidence
e = 50     # margin of error of 50 rubles
N = ((sigma*Z)/e)**2
N
```

4. Imagine you want to debunk a "wizard" who believes he can predict tomorrow's weather, answering simply: rain or sun. You observed the wizard's answers over some period of time and obtained the results below. Can we say that the magician really can predict the weather, if the significance level is taken to be 0.05?

- Null hypothesis: the wizard's answers are independent of the actual weather (he cannot predict it).
- Alternative hypothesis: his answers are associated with the actual weather (he can predict it).

```
observations = pd.DataFrame([[25,36],[15,44]],
                            index=['Rain','Sun'],
                            columns=["Wizard's answer","Reality"])
observations

oddsratio, pvalue = sp.stats.fisher_exact(observations)
pvalue
```

Conclusion: the p-value is above the significance level, so we cannot reject the null hypothesis; the result is not statistically significant and we cannot claim that the wizard can actually predict the weather.

5. Using the function mean_confidence_interval(data, confidence), build a 90% confidence interval for the sample:
data = [1,5,8,9,6,7,5,6,7,8,5,6,7,0,9,8,4,6,7,9,8,6,5,7,8,9,6,7,5,8,6,7,9,5]

```
def mean_confidence_interval(data, confidence=0.95):
    n = len(data)
    m, se = np.mean(data), stats.sem(data)
    h = se * stats.t.ppf((1 + confidence)/2, n - 1)
    return m-h, m, m+h

data = [4,5,8,9,6,7,5,6,7,8,5,6,7,0,9,8,4,6,7,9,8,6,5,7,8,9,6,7,5,8,6,7,9,5,10]
mean_confidence_interval(data, 0.90)
```

6. Do the samples data_1 and data_2 belong to the same population? Assess this using the hypothesis tests you know.

```
data_1 = [4,5,8,9,6,7,5,6,7,8,5,6,7,0,9,8,4,6,7,9,8,6,5,7,8,9,6,7,5,8,6,7,9,5,10]
data_2 = [8,5,6,7,0,1,8,4,6,7,0,2,6,5,7,5,3,5,3,5,3,5,5,8,7,6,4,5,3,5,4,6,4,5,3,2,6,4,2,6,1,0,4,3,5,4,3,4,5,4,3,4,5,4,3,4,5,3,4,4,1,2,4,3,1,2,4,3,2,1,5,3,4,6,4,5,3,2,4,5,6,4,3,1,3,5,3,4,4,4,2,5,3]

stats.ttest_ind(data_1,data_2)
```

Student's t-test returns a very small p-value (far below 0.05), so we reject the null hypothesis of equal means: the two samples do not appear to belong to the same population.

7. Using the New York City housing dataset, we came across a case where a variable does not have a quite normal distribution. Suppose you formulated two hypotheses: the null hypothesis is that the distribution is normal, and the alternative hypothesis is that it is not. Suppose you applied some test (which one does not matter here) that reported a significance level (p-value) of 0.03. What are your conclusions? Do we consider the distribution normal or not? (No trick in this question.)

Conclusion: with p-value = 0.03, which is below the significance level, the result is statistically significant, we reject the null hypothesis, and we conclude that the variable is not normally distributed.

8. The first sample consists of patients treated with drug A, the second of patients treated with drug B. The values in the samples are some measure of treatment effectiveness (metabolite level in the blood, temperature three days after the start of treatment, recovery time, number of hospital days, etc.)

a) Is there a significant difference in the effectiveness of drugs A and B, or are the differences purely random and explained by the "natural" variance of the chosen measure? (take the significance level to be 5%, i.e. 0.05)

b) At what minimum p-value would the differences already be significant?

```
np.random.seed(11)
A = stats.norm.rvs(scale=50,loc=10,size=300)
B = A + stats.norm.rvs(scale=10,loc=-1.25,size=300)

pd.DataFrame(A).hist(bins=120)
pd.DataFrame(B).hist(bins=120)

stats.ttest_ind(a=A,b=B,equal_var=False)
```

a) The p-value is above the significance level, so we cannot reject the null hypothesis: the differences are not statistically significant (they may well be random).

b) Any p-value below 0.05 would make the differences statistically significant.
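As an aside to problem 7, the kind of normality check described there can be run explicitly. A minimal sketch with the Shapiro-Wilk test on deliberately non-normal synthetic data, reusing the imports from the top of this notebook (the lognormal parameters are arbitrary):

```
sample = np.random.lognormal(mean=0.0, sigma=1.0, size=500)   # deliberately skewed data
statistic, p_value = stats.shapiro(sample)
print(p_value)   # a p-value below 0.05 leads us to reject the hypothesis of normality
```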
``` from nltk.corpus import stopwords from nltk.stem.wordnet import WordNetLemmatizer import string import gensim from gensim import corpora import nltk nltk.download('stopwords') nltk.download('wordnet') from nltk.corpus import stopwords from nltk.stem.wordnet import WordNetLemmatizer import string import gensim from gensim import corpora # Step 2: Getting data docs1="Sugar causes blood glucose to spike and plummet. Unstable blood sugar often leads to mood swings, fatigue, headaches and cravings for more sugar. Cravings set the stage for a cycle of addiction in which every new hit of sugar makes you feel better temporarily but, a few hours later, results in more cravings and hunger. On the flip side, those who avoid sugar often report having little or no cravings for sugary things and feeling emotionally balanced and energized." docs2="Sugar increases the risk of obesity, diabetes and heart disease. Large-scale studies have shown that the more high-glycemic foods (those that quickly affect blood sugar), including foods containing sugar, a person consumes, the higher his risk for becoming obese and for developing diabetes and heart disease1. Emerging research is also suggesting connections between high-glycemic diets and many different forms of cancer." docs3="Sugar interferes with immune function. Research on human subjects is scant, but animal studies have shown that sugar suppresses immune response5. More research is needed to understand the exact mechanisms; however, we do know that bacteria and yeast feed on sugar and that, when these organisms get out of balance in the body, infections and illness are more likely." docs4="A high-sugar diet often results in chromium deficiency. Its sort of a catch-22. If you consume a lot of sugar and other refined carbohydrates, you probably dont get enough of the trace mineral chromium, and one of chromiums main functions is to help regulate blood sugar. Scientists estimate that 90 percent of Americans dont get enough chromium. Chromium is found in a variety of animal foods, seafood and plant foods. Refining starches and other carbohydrates rob these foods of their chromium supplies." docs5="Sugar accelerates aging. It even contributes to that telltale sign of aging: sagging skin. Some of the sugar you consume, after hitting your bloodstream, ends up attaching itself to proteins, in a process called glycation. These new molecular structures contribute to the loss of elasticity found in aging body tissues, from your skin to your organs and arteries7. The more sugar circulating in your blood, the faster this damage takes hold." docs6="Sugar causes tooth decay. With all the other life-threatening effects of sugar, we sometimes forget the most basic damage it does. When it sits on your teeth, it creates decay more efficiently than any other food substance8. For a strong visual reminder, next time the Tooth Fairy visits, try the old tooth-in-a-glass-of-Coke experiment—the results will surely convince you that sugar isnt good for your pearly whites." docs7="Sugar can cause gum disease, which can lead to heart disease. Increasing evidence shows that chronic infections, such as those that result from periodontal problems, play a role in the development of coronary artery disease9. The most popular theory is that the connection is related to widespread effects from the bodys inflammatory response to infection." docs7="Sugar affects behavior and cognition in children. 
Though it has been confirmed by millions of parents, most researchers have not been able to show the effect of sugar on childrens behavior. A possible problem with the research is that most of it compared the effects of a sugar-sweetened drink to one containing an artificial sweetener10. It may be that kids react to both real sugar and sugar substitutes, therefore showing no differences in behavior. What about kids ability to learn? Between 1979 and 1983, 803 New York City public schools reduced the amount of sucrose (table sugar) and eliminated artificial colors, flavors and two preservatives from school lunches and breakfasts. The diet policy changes were followed by a 15.7 percent increase in a national academic ranking (previously, the greatest improvement ever seen had been 1.7 percent)." docs8="Sugar increases stress. When were under stress, our stress hormone levels rise; these chemicals are the bodys fight-or-flight emergency crew, sent out to prepare the body for an attack or an escape. These chemicals are also called into action when blood sugar is low. For example, after a blood-sugar spike (say, from eating a piece of birthday cake), theres a compensatory dive, which causes the body to release stress hormones such as adrenaline, epinephrine and cortisol. One of the main things these hormones do is raise blood sugar, providing the body with a quick energy boost. The problem is, these helpful hormones can make us feel anxious, irritable and shaky." docs9="Sugar takes the place of important nutrients. According to USDA data, people who consume the most sugar have the lowest intakes of essential nutrients––especially vitamin A, vitamin C, folate, vitamin B-12, calcium, phosphorous, magnesium and iron. Ironically, those who consume the most sugar are children and teenagers, the individuals who need these nutrients most12." docs10="Slashing Sugar. Now that you know the negative impacts refined sugar can have on your body and mind, youll want to be more careful about the foods you choose. And the first step is getting educated about where sugar lurks—believe it or not, a food neednt even taste all that sweet for it to be loaded with sugar. When it comes to convenience and packaged foods, let the ingredients label be your guide, and be aware that just because something boasts that it is low in carbs or a diet food, doesnt mean its free of sugar. Atkins products never contain added sugar." # compile documents doc_complete=[docs1,docs2,docs3, docs4,docs5,docs6,docs7,docs8,docs9,docs10] # Step - some necessary preprocessing stop_set = set(stopwords.words('english')) exclude_set = set(string.punctuation) lemmatize = WordNetLemmatizer() def clean_doc(doc): stop_free = " ".join([i for i in doc.lower().split() if i not in stop_set]) punc_free = ''.join(i for i in stop_free if i not in exclude_set) normalized = " ".join(lemmatize.lemmatize(w) for w in punc_free.split()) return normalized cleaned = [clean_doc(doc).split() for doc in doc_complete] # Step 4: Create LDA model using gensim # Every unique term is assigned an index in our term document matrix. dictionary = corpora.Dictionary(cleaned) # Converting list of documents (corpus) into Document Term Matrix using dictionary prepared above. doc_term_matrix = [dictionary.doc2bow(doc) for doc in cleaned] # Creating an LDA object Lda = gensim.models.ldamodel.LdaModel # Running and Training LDA model on the document term matrix. 
ldamodel = Lda(doc_term_matrix, num_topics=5, id2word=dictionary, passes=300)

# Result: print the learned topics
topics = ldamodel.print_topics(num_topics=5, num_words=5)
for topic in topics:
    print(topic)
```
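To see how the fitted model scores an unseen piece of text, the document-topic distribution can be queried directly. A minimal sketch reusing the `clean_doc` helper and `dictionary` defined above (the sentence itself is made up for illustration):

```
new_doc = "Cutting back on added sugar may improve energy levels and mood."
bow = dictionary.doc2bow(clean_doc(new_doc).split())
print(ldamodel.get_document_topics(bow))   # list of (topic_id, probability) pairs
```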
``` %matplotlib inline from pyvista import set_plot_theme set_plot_theme('document') ``` Customize Scalar Bars {#scalar_bar_example} ===================== Walk through of all the different capabilities of scalar bars and how a user can customize scalar bars. ``` # sphinx_gallery_thumbnail_number = 2 import pyvista as pv from pyvista import examples ``` By default, when plotting a dataset with a scalar array, a scalar bar for that array is added. To turn off this behavior, a user could specify `show_scalar_bar=False` when calling `.add_mesh()`. Let\'s start with a sample dataset provide via PyVista to demonstrate the default behavior of scalar bar plotting: ``` # Load St Helens DEM and warp the topography mesh = examples.download_st_helens().warp_by_scalar() # First a default plot with jet colormap p = pv.Plotter() # Add the data, use active scalar for coloring, and show the scalar bar p.add_mesh(mesh) # Display the scene p.show() ``` We could also plot the scene with an interactive scalar bar to move around and place where we like by specifying passing keyword arguments to control the scalar bar via the `scalar_bar_args` parameter in `pyvista.BasePlotter.add_mesh`{.interpreted-text role="func"}. The keyword arguments to control the scalar bar are defined in `pyvista.BasePlotter.add_scalar_bar`{.interpreted-text role="func"}. ``` # create dictionary of parameters to control scalar bar sargs = dict(interactive=True) # Simply make the bar interactive p = pv.Plotter(notebook=False) # If in IPython, be sure to show the scene p.add_mesh(mesh, scalar_bar_args=sargs) p.show() # Remove from plotters so output is not produced in docs pv.plotting._ALL_PLOTTERS.clear() ``` ![](../../images/gifs/scalar-bar-interactive.gif) Or manually define the scalar bar\'s location: ``` # Set a custom position and size sargs = dict(height=0.25, vertical=True, position_x=0.05, position_y=0.05) p = pv.Plotter() p.add_mesh(mesh, scalar_bar_args=sargs) p.show() ``` The text properties of the scalar bar can also be controlled: ``` # Controlling the text properties sargs = dict( title_font_size=20, label_font_size=16, shadow=True, n_labels=3, italic=True, fmt="%.1f", font_family="arial", ) p = pv.Plotter() p.add_mesh(mesh, scalar_bar_args=sargs) p.show() ``` Labelling values outside of the scalar range ``` p = pv.Plotter() p.add_mesh(mesh, clim=[1000, 2000], below_color='blue', above_color='red', scalar_bar_args=sargs) p.show() ``` Annotate values of interest using a dictionary. The key of the dictionary must be the value to annotate, and the value must be the string label. ``` # Make a dictionary for the annotations annotations = { 2300: "High", 805.3: "Cutoff value", } p = pv.Plotter() p.add_mesh(mesh, scalars='Elevation', annotations=annotations) p.show() ```
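The scalar bar title can be set through the same `scalar_bar_args` dictionary; a short sketch with a hypothetical label for the elevation data:

```
sargs = dict(title="Elevation (m)", title_font_size=22)
p = pv.Plotter()
p.add_mesh(mesh, scalar_bar_args=sargs)
p.show()
```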
## Step 1: Import Libraries ``` # All imports import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import missingno import seaborn as sns from sklearn.feature_selection import SelectKBest, f_regression from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LinearRegression, Ridge, Lasso from sklearn.svm import SVR from sklearn.neural_network import MLPRegressor from sklearn.ensemble import RandomForestRegressor import warnings warnings.filterwarnings('ignore') # List all the files for dir_name, _, file_names in os.walk('data'): for file_name in file_names: print(os.path.join(dir_name, file_name)) ``` ## Step 2: Reading the Data ``` data_vw = pd.read_csv("data/vw.csv") data_vw.shape data_vw.head() data_vw.describe() missingno.matrix(data_vw) data_vw.isnull().sum() ``` ## Step 3: EDA ``` categorical_features = [feature for feature in data_vw.columns if data_vw[feature].dtype == 'O'] # Getting the count plot for feature in categorical_features: sns.countplot(y=data_vw[feature]) plt.show() # Getting the barplot plt.figure(figsize=(10,5), facecolor='w') sns.barplot(x=data_vw['year'], y=data_vw['price']) sns.barplot(x=data_vw['transmission'], y=data_vw['price']) # Getting the relation b/w milleage and price plt.figure(figsize=(10, 6)) sns.scatterplot(x=data_vw['mileage'], y=data_vw['price'], hue=data_vw['year']) plt.figure(figsize=(5,5)) sns.scatterplot(x=data_vw['mileage'], y=data_vw['price'], hue=data_vw['transmission']) plt.figure(figsize=(10,10)) sns.pairplot(data_vw) ``` ## Step 4: Feature Engineering ``` data_vw.head() ``` Dropping the year column, but instead will create data on how old the car is ``` data_vw['age_of_car'] = 2020 - data_vw['year'] data_vw.drop(['year'], axis=1, inplace=True) # Look at the frequency of the ages sns.countplot(y=data_vw['age_of_car']) # OHE the categorical variables data_vw_extended = pd.get_dummies(data_vw) data_vw_extended.shape sc = StandardScaler() data_vw_extended = pd.DataFrame(sc.fit_transform(data_vw_extended), columns=data_vw_extended.columns) data_vw_extended.head() X_train, X_test, y_train, y_test = train_test_split(data_vw_extended.drop(['price'], axis=1), data_vw_extended[['price']]) X_train.shape, X_test.shape, y_train.shape, y_test.shape ``` ## Step 5: Feature Selection ``` # Select the k best features no_of_features = [] r_2_train = [] r_2_test = [] for k in range(3, 40, 2): selector = SelectKBest(f_regression, k=k) X_train_selector = selector.fit_transform(X_train, y_train) X_test_selector = selector.transform(X_test) lin_reg = LinearRegression() lin_reg.fit(X_train_selector, y_train) no_of_features.append(k) r_2_train.append(lin_reg.score(X_train_selector, y_train)) r_2_test.append(lin_reg.score(X_test_selector, y_test)) sns.lineplot(x=no_of_features, y=r_2_train) sns.lineplot(x=no_of_features, y=r_2_test) ``` k=23 is providing us the best optimal result. 
Hence we train the final model with k=23 selected features.

```
selector = SelectKBest(f_regression, k=23)
X_train_selector = selector.fit_transform(X_train, y_train)
X_test_selector = selector.transform(X_test)

column_name = data_vw_extended.drop(['price'], axis=1).columns
column_name[selector.get_support()]
```

## Step 6: Model

```
def regressor_builder(model):
    regressor = model
    regressor.fit(X_train_selector, y_train)
    score = regressor.score(X_test_selector, y_test)
    return regressor, score

list_models = [LinearRegression(), Lasso(), Ridge(), SVR(), RandomForestRegressor(), MLPRegressor()]

model_performance = pd.DataFrame(columns=['Features', 'Model', 'Performance'])
for model in list_models:
    regressor, score = regressor_builder(model)
    model_performance = model_performance.append(
        {"Features": "Linear", "Model": regressor, "Performance": score},
        ignore_index=True)

model_performance
```

The random forest provides the best R² score.
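Since the random forest comes out on top, a natural follow-up is a small hyperparameter search over it. A minimal sketch reusing the selected feature matrices from above (the grid values are arbitrary choices):

```
from sklearn.model_selection import GridSearchCV

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid, cv=3, scoring="r2")
search.fit(X_train_selector, y_train.values.ravel())
print(search.best_params_, search.best_score_)
```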
``` # Copyright 2022 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # E2E ML on GCP: MLOps stage 2 : experimentation: get started with Vertex Training for Scikit-Learn <table align="left"> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage2/get_started_vertex_training_sklearn.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage2/get_started_vertex_training_sklearn.ipynb"> Open in Google Cloud Notebooks </a> </td> </table> <br/><br/><br/> ## Overview This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2 : experimentation: get started with Vertex Training for Scikit-Learn. ### Dataset The dataset used for this tutorial is the [News Aggregation](https://archive.ics.uci.edu/ml/datasets/News+Aggregator) from [ICS Machine Learning Datasets](https://archive.ics.uci.edu/ml/datasets.php). The trained model predicts the news category of the news article. ### Objective In this tutorial, you learn how to use `Vertex AI Training` for training a Scikit-Learn custom model. This tutorial uses the following Google Cloud ML services: - `Vertex AI Training` - `Vertex AI Model` resource The steps performed include: - Training using a Python package. - Report accuracy when hyperparameter tuning. - Save the model artifacts to Cloud Storage using GCSFuse. - Create a `Vertex AI Model` resource. ## Installations Install *one time* the packages for executing the MLOps notebooks. ``` ONCE_ONLY = False if ONCE_ONLY: ! pip3 install -U tensorflow==2.5 $USER_FLAG ! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG ! pip3 install -U tensorflow-transform==1.2 $USER_FLAG ! pip3 install -U tensorflow-io==0.18 $USER_FLAG ! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG ! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG ! pip3 install --upgrade google-cloud-bigquery $USER_FLAG ! pip3 install --upgrade google-cloud-logging $USER_FLAG ! pip3 install --upgrade apache-beam[gcp] $USER_FLAG ! pip3 install --upgrade pyarrow $USER_FLAG ! pip3 install --upgrade cloudml-hypertune $USER_FLAG ! pip3 install --upgrade kfp $USER_FLAG ! pip3 install --upgrade torchvision $USER_FLAG ! pip3 install --upgrade rpy2 $USER_FLAG ``` ### Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. ``` import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) ``` #### Set your project ID **If you don't know your project ID**, you may be able to get your project ID using `gcloud`. 
``` PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID ``` #### Region You can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. - Americas: `us-central1` - Europe: `europe-west4` - Asia Pacific: `asia-east1` You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations). ``` REGION = "us-central1" # @param {type: "string"} ``` #### Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. ``` from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") ``` ### Create a Cloud Storage bucket **The following steps are required, regardless of your notebook environment.** When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. ``` BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP ``` **Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket. ``` ! gsutil mb -l $REGION $BUCKET_NAME ``` Finally, validate access to your Cloud Storage bucket by examining its contents: ``` ! gsutil ls -al $BUCKET_NAME ``` ### Set up variables Next, set up some variables used throughout the tutorial. ### Import libraries and define constants ``` import google.cloud.aiplatform as aip ``` ### Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. ``` aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME) ``` #### Set hardware accelerators You can set hardware accelerators for training and prediction. Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) Otherwise specify `(None, None)` to use a container image to run on a CPU. Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators). *Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. 
If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support. ``` if os.getenv("IS_TESTING_TRAIN_GPU"): TRAIN_GPU, TRAIN_NGPU = ( aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_TRAIN_GPU")), ) else: TRAIN_GPU, TRAIN_NGPU = (None, None) if os.getenv("IS_TESTING_DEPLOY_GPU"): DEPLOY_GPU, DEPLOY_NGPU = ( aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_DEPLOY_GPU")), ) else: DEPLOY_GPU, DEPLOY_NGPU = (None, None) ``` #### Set pre-built containers Set the pre-built Docker container image for training and prediction. For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers). For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers). ``` TRAIN_VERSION = "scikit-learn-cpu.0-23" DEPLOY_VERSION = "sklearn-cpu.0-23" TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format( REGION.split("-")[0], TRAIN_VERSION ) DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format( REGION.split("-")[0], DEPLOY_VERSION ) ``` #### Set machine type Next, set the machine type to use for training. - Set the variable `TRAIN_COMPUTE` to configure the compute resources for the VMs you will use for for training. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \] *Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs *Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*. ``` if os.getenv("IS_TESTING_TRAIN_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", TRAIN_COMPUTE) ``` ## Introduction to Scikit-learn training Once you have trained a Scikit-learn model, you will want to save it at a Cloud Storage location, so it can subsequently be uploaded to a `Vertex AI Model` resource. The Scikit-learn package does not have support to save the model to a Cloud Storage location. Instead, you will do the following steps to save to a Cloud Storage location. 1. Save the in-memory model to the local filesystem in pickle format (e.g., model.pkl). 2. Create a Cloud Storage storage client. 3. Upload the pickle file as a blob to the specified Cloud Storage location using the Cloud Storage storage client. *Note*: You can do hyperparameter tuning with a Scikit-learn model. ### Examine the training package #### Package layout Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout. - PKG-INFO - README.md - setup.cfg - setup.py - trainer - \_\_init\_\_.py - task.py The files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image. The file `trainer/task.py` is the Python script for executing the custom training job. *Note*, when we referred to it in the worker pool specification, we replace the directory slash with a dot (`trainer.task`) and dropped the file suffix (`.py`). #### Package Assembly In the following cells, you will assemble the training package. 
``` # Make folder for Python training script ! rm -rf custom ! mkdir custom # Add package information ! touch custom/README.md setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0" ! echo "$setup_cfg" > custom/setup.cfg setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'wget',\n\n 'cloudml-hypertune',\n\n ],\n\n packages=setuptools.find_packages())" ! echo "$setup_py" > custom/setup.py pkg_info = "Metadata-Version: 1.0\n\nName: News Aggregation text classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex" ! echo "$pkg_info" > custom/PKG-INFO # Make the training subfolder ! mkdir custom/trainer ! touch custom/trainer/__init__.py ``` ### Create the task script for the Python training package Next, you create the `task.py` script for driving the training package. Some noteable steps include: - Command-line arguments: - `model-dir`: The location to save the trained model. When using Vertex AI custom training, the location will be specified in the environment variable: `AIP_MODEL_DIR`, - `dataset_url`: The location of the dataset to download. - `alpha`: Hyperparameter - Data preprocessing (`get_data()`): - Download the dataset and split into training and test. - Model architecture (`get_model()`): - Builds the corresponding model architecture. - Training (`train_model()`): - Trains the model - Evaluation (`evaluate_model()`): - Evaluates the model. - If hyperparameter tuning, reports the metric for accuracy. - Model artifact saving - Saves the model artifacts and evaluation metrics where the Cloud Storage location specified by `model-dir`. - *Note*: GCSFuse (`/gcs`) is used to do filesystem operations on Cloud Storage buckets. 
``` %%writefile custom/trainer/task.py import argparse import logging import os import pickle import zipfile from typing import List, Tuple import pandas as pd import wget from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer from sklearn.model_selection import train_test_split from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline import hypertune parser = argparse.ArgumentParser() parser.add_argument('--model-dir', dest='model_dir', default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.') parser.add_argument("--dataset-url", dest="dataset_url", type=str, help="Download url for the training data.") parser.add_argument('--alpha', dest='alpha', default=1.0, type=float, help='Alpha parameters for MultinomialNB') args = parser.parse_args() logging.getLogger().setLevel(logging.INFO) def get_data(url: str, test_size: float = 0.2) -> Tuple[List, List, List, List]: logging.info("Downloading training data from: {}".format(args.dataset_url)) zip_filepath = wget.download(url, out=".") with zipfile.ZipFile(zip_filepath, "r") as zf: zf.extract(path=".", member="newsCorpora.csv") COLUMN_NAMES = ["id", "title", "url", "publisher", "category", "story", "hostname", "timestamp"] dataframe = pd.read_csv( "newsCorpora.csv", delimiter=" ", names=COLUMN_NAMES, index_col=0 ) train, test = train_test_split(dataframe, test_size=test_size) x_train, y_train = train["title"].values, train["category"].values x_test, y_test = test["title"].values, test["category"].values return x_train, y_train, x_test, y_test def get_model(): logging.info("Build model ...") model = Pipeline([ ("vectorizer", CountVectorizer()), ("tfidf", TfidfTransformer()), ("naivebayes", MultinomialNB(alpha=args.alpha)), ]) return model def train_model(model: Pipeline, X_train: List, y_train: List, X_test: List, y_test: List ) -> Pipeline: logging.info("Training started ...") model.fit(X_train, y_train) logging.info("Training completed") return model def evaluate_model(model: Pipeline, X_train: List, y_train: List, X_test: List, y_test: List ) -> float: score = model.score(X_test, y_test) logging.info(f"Evaluation completed with model score: {score}") # report metric for hyperparameter tuning hpt = hypertune.HyperTune() hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='accuracy', metric_value=score ) return score def export_model_to_gcs(fitted_pipeline: Pipeline, gcs_uri: str) -> str: """Exports trained pipeline to GCS Parameters: fitted_pipeline (sklearn.pipelines.Pipeline): the Pipeline object with data already fitted (trained pipeline object). gcs_uri (str): GCS path to store the trained pipeline i.e gs://example_bucket/training-job. 
Returns: export_path (str): Model GCS location """ # Upload model artifact to Cloud Storage artifact_filename = 'model.pkl' storage_path = os.path.join(gcs_uri, artifact_filename) # Save model artifact to local filesystem (doesn't persist) with open(storage_path, 'wb') as model_file: pickle.dump(fitted_pipeline, model_file) def export_evaluation_report_to_gcs(report: str, gcs_uri: str) -> None: """ Exports training job report to GCS Parameters: report (str): Full report in text to sent to GCS gcs_uri (str): GCS path to store the report i.e gs://example_bucket/training-job """ # Upload model artifact to Cloud Storage artifact_filename = 'report.txt' storage_path = os.path.join(gcs_uri, artifact_filename) # Save model artifact to local filesystem (doesn't persist) with open(storage_path, 'w') as report_file: report_file.write(report) logging.info("Starting custom training job.") data = get_data(args.dataset_url) model = get_model() model = train_model(model, *data) score = evaluate_model(model, *data) # export model to gcs using GCSFuse logging.info("Exporting model artifacts ...") gs_prefix = 'gs://' gcsfuse_prefix = '/gcs/' if args.model_dir.startswith(gs_prefix): args.model_dir = args.model_dir.replace(gs_prefix, gcsfuse_prefix) dirpath = os.path.split(args.model_dir)[0] if not os.path.isdir(dirpath): os.makedirs(dirpath) export_model_to_gcs(model, args.model_dir) export_evaluation_report_to_gcs(str(score), args.model_dir) logging.info(f"Exported model artifacts to GCS bucket: {args.model_dir}") ``` #### Store training script on your Cloud Storage bucket Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket. ``` ! rm -f custom.tar custom.tar.gz ! tar cvf custom.tar custom ! gzip custom.tar ! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_newsaggr.tar.gz ``` ### Create and run custom training job To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. #### Create custom training job A custom training job is created with the `CustomTrainingJob` class, with the following parameters: - `display_name`: The human readable name for the custom training job. - `container_uri`: The training container image. - `python_package_gcs_uri`: The location of the Python training package as a tarball. - `python_module_name`: The relative path to the training script in the Python package. - `model_serving_container_uri`: The container image for deploying the model. *Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package. ``` DISPLAY_NAME = "newsaggr_" + TIMESTAMP job = aip.CustomPythonPackageTrainingJob( display_name=DISPLAY_NAME, python_package_gcs_uri=f"{BUCKET_NAME}/trainer_newsaggr.tar.gz", python_module_name="trainer.task", container_uri=TRAIN_IMAGE, model_serving_container_image_uri=DEPLOY_IMAGE, project=PROJECT_ID, ) ``` ### Prepare your command-line arguments Now define the command-line arguments for your custom training container: - `args`: The command-line arguments to pass to the executable that is set as the entry point into the container. - `--model-dir` : For our demonstrations, we use this command-line argument to specify where to store the model artifacts. 
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or - indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification. - `--dataset-url`: The location of the dataset to download. - `--alpha`: Tunable hyperparameter ``` MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP) DATASET_URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/00359/NewsAggregatorDataset.zip" DIRECT = False if DIRECT: CMDARGS = [ "--alpha=" + str(0.9), "--dataset-url=" + DATASET_URL, "--model_dir=" + MODEL_DIR, ] else: CMDARGS = ["--alpha=" + str(0.9), "--dataset-url=" + DATASET_URL] ``` #### Run the custom training job Next, you run the custom job to start the training job by invoking the method `run`, with the following parameters: - `model_display_name`: The human readable name for the `Model` resource. - `args`: The command-line arguments to pass to the training script. - `replica_count`: The number of compute instances for training (replica_count = 1 is single node training). - `machine_type`: The machine type for the compute instances. - `accelerator_type`: The hardware accelerator type. - `accelerator_count`: The number of accelerators to attach to a worker replica. - `base_output_dir`: The Cloud Storage location to write the model artifacts to. - `sync`: Whether to block until completion of the job. ``` if TRAIN_GPU: model = job.run( model_display_name="newsaggr_" + TIMESTAMP, args=CMDARGS, replica_count=1, machine_type=TRAIN_COMPUTE, accelerator_type=TRAIN_GPU.name, accelerator_count=TRAIN_NGPU, base_output_dir=MODEL_DIR, sync=False, ) else: model = job.run( model_display_name="newsaggr_" + TIMESTAMP, args=CMDARGS, replica_count=1, machine_type=TRAIN_COMPUTE, base_output_dir=MODEL_DIR, sync=False, ) model_path_to_deploy = MODEL_DIR ``` ### List a custom training job ``` _job = job.list(filter=f"display_name={DISPLAY_NAME}") print(_job) ``` ### Wait for completion of custom training job Next, wait for the custom training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the custom training job is completed. ``` model.wait() ``` ### Delete a custom training job After a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`. ``` job.delete() ``` # Cleaning up To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial. 
Otherwise, you can delete the individual resources you created in this tutorial: - Dataset - Pipeline - Model - Endpoint - AutoML Training Job - Batch Job - Custom Job - Hyperparameter Tuning Job - Cloud Storage Bucket ``` delete_all = True if delete_all: # Delete the dataset using the Vertex dataset object try: if "dataset" in globals(): dataset.delete() except Exception as e: print(e) # Delete the model using the Vertex model object try: if "model" in globals(): model.delete() except Exception as e: print(e) # Delete the endpoint using the Vertex endpoint object try: if "endpoint" in globals(): endpoint.undeploy_all() endpoint.delete() except Exception as e: print(e) # Delete the AutoML or Pipeline training job try: if "dag" in globals(): dag.delete() except Exception as e: print(e) # Delete the custom training job try: if "job" in globals(): job.delete() except Exception as e: print(e) # Delete the batch prediction job using the Vertex batch prediction object try: if "batch_predict_job" in globals(): batch_predict_job.delete() except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object try: if "hpt_job" in globals(): hpt_job.delete() except Exception as e: print(e) if "BUCKET_NAME" in globals(): ! gsutil rm -r $BUCKET_NAME ```
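As a closing reference, the scikit-learn training introduction above described saving a model to Cloud Storage in three steps (save a pickle locally, create a storage client, upload the blob), while the training script in this tutorial relies on GCSFuse instead. A minimal sketch of the client-based alternative might look like the following; the bucket name and object path are placeholders, and `model` stands in for your fitted pipeline.

```
import pickle

from google.cloud import storage

model = {"note": "replace with your fitted scikit-learn Pipeline"}  # placeholder object

# 1. Save the in-memory model to the local filesystem in pickle format.
with open("model.pkl", "wb") as model_file:
    pickle.dump(model, model_file)

# 2. Create a Cloud Storage client.
client = storage.Client()

# 3. Upload the pickle file as a blob to the target Cloud Storage location.
bucket = client.bucket("your-bucket-name")        # placeholder bucket
blob = bucket.blob("newsaggr-model/model.pkl")    # placeholder object path
blob.upload_from_filename("model.pkl")
```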
``` #hide #skip ! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab #default_exp data.transforms #export from fastai.torch_basics import * from fastai.data.core import * from fastai.data.load import * from fastai.data.external import * from sklearn.model_selection import train_test_split #hide from nbdev.showdoc import * ``` # Helper functions for processing data and basic transforms > Functions for getting, splitting, and labeling data, as well as generic transforms ## Get, split, and label For most data source creation we need functions to get a list of items, split them in to train/valid sets, and label them. fastai provides functions to make each of these steps easy (especially when combined with `fastai.data.blocks`). ### Get First we'll look at functions that *get* a list of items (generally file names). We'll use *tiny MNIST* (a subset of MNIST with just two classes, `7`s and `3`s) for our examples/tests throughout this page. ``` path = untar_data(URLs.MNIST_TINY) (path/'train').ls() # export def _get_files(p, fs, extensions=None): p = Path(p) res = [p/f for f in fs if not f.startswith('.') and ((not extensions) or f'.{f.split(".")[-1].lower()}' in extensions)] return res # export def get_files(path, extensions=None, recurse=True, folders=None, followlinks=True): "Get all the files in `path` with optional `extensions`, optionally with `recurse`, only in `folders`, if specified." path = Path(path) folders=L(folders) extensions = setify(extensions) extensions = {e.lower() for e in extensions} if recurse: res = [] for i,(p,d,f) in enumerate(os.walk(path, followlinks=followlinks)): # returns (dirpath, dirnames, filenames) if len(folders) !=0 and i==0: d[:] = [o for o in d if o in folders] else: d[:] = [o for o in d if not o.startswith('.')] if len(folders) !=0 and i==0 and '.' not in folders: continue res += _get_files(p, f, extensions) else: f = [o.name for o in os.scandir(path) if o.is_file()] res = _get_files(path, f, extensions) return L(res) ``` This is the most general way to grab a bunch of file names from disk. If you pass `extensions` (including the `.`) then returned file names are filtered by that list. Only those files directly in `path` are included, unless you pass `recurse`, in which case all child folders are also searched recursively. `folders` is an optional list of directories to limit the search to. ``` t3 = get_files(path/'train'/'3', extensions='.png', recurse=False) t7 = get_files(path/'train'/'7', extensions='.png', recurse=False) t = get_files(path/'train', extensions='.png', recurse=True) test_eq(len(t), len(t3)+len(t7)) test_eq(len(get_files(path/'train'/'3', extensions='.jpg', recurse=False)),0) test_eq(len(t), len(get_files(path, extensions='.png', recurse=True, folders='train'))) t #hide test_eq(len(get_files(path/'train'/'3', recurse=False)),346) test_eq(len(get_files(path, extensions='.png', recurse=True, folders=['train', 'test'])),729) test_eq(len(get_files(path, extensions='.png', recurse=True, folders='train')),709) test_eq(len(get_files(path, extensions='.png', recurse=True, folders='training')),0) ``` It's often useful to be able to create functions with customized behavior. `fastai.data` generally uses functions named as CamelCase verbs ending in `er` to create these functions. `FileGetter` is a simple example of such a function creator. 
``` #export def FileGetter(suf='', extensions=None, recurse=True, folders=None): "Create `get_files` partial function that searches path suffix `suf`, only in `folders`, if specified, and passes along args" def _inner(o, extensions=extensions, recurse=recurse, folders=folders): return get_files(o/suf, extensions, recurse, folders) return _inner fpng = FileGetter(extensions='.png', recurse=False) test_eq(len(t7), len(fpng(path/'train'/'7'))) test_eq(len(t), len(fpng(path/'train', recurse=True))) fpng_r = FileGetter(extensions='.png', recurse=True) test_eq(len(t), len(fpng_r(path/'train'))) #export image_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/')) #export def get_image_files(path, recurse=True, folders=None): "Get image files in `path` recursively, only in `folders`, if specified." return get_files(path, extensions=image_extensions, recurse=recurse, folders=folders) ``` This is simply `get_files` called with a list of standard image extensions. ``` test_eq(len(t), len(get_image_files(path, recurse=True, folders='train'))) #export def ImageGetter(suf='', recurse=True, folders=None): "Create `get_image_files` partial that searches suffix `suf` and passes along `kwargs`, only in `folders`, if specified" def _inner(o, recurse=recurse, folders=folders): return get_image_files(o/suf, recurse, folders) return _inner ``` Same as `FileGetter`, but for image extensions. ``` test_eq(len(get_files(path/'train', extensions='.png', recurse=True, folders='3')), len(ImageGetter( 'train', recurse=True, folders='3')(path))) #export def get_text_files(path, recurse=True, folders=None): "Get text files in `path` recursively, only in `folders`, if specified." return get_files(path, extensions=['.txt'], recurse=recurse, folders=folders) #export class ItemGetter(ItemTransform): "Creates a proper transform that applies `itemgetter(i)` (even on a tuple)" _retain = False def __init__(self, i): self.i = i def encodes(self, x): return x[self.i] test_eq(ItemGetter(1)((1,2,3)), 2) test_eq(ItemGetter(1)(L(1,2,3)), 2) test_eq(ItemGetter(1)([1,2,3]), 2) test_eq(ItemGetter(1)(np.array([1,2,3])), 2) #export class AttrGetter(ItemTransform): "Creates a proper transform that applies `attrgetter(nm)` (even on a tuple)" _retain = False def __init__(self, nm, default=None): store_attr() def encodes(self, x): return getattr(x, self.nm, self.default) test_eq(AttrGetter('shape')(torch.randn([4,5])), [4,5]) test_eq(AttrGetter('shape', [0])([4,5]), [0]) ``` ### Split The next set of functions are used to *split* data into training and validation sets. The functions return two lists - a list of indices or masks for each of training and validation sets. ``` # export def RandomSplitter(valid_pct=0.2, seed=None): "Create function that splits `items` between train/val with `valid_pct` randomly." def _inner(o): if seed is not None: torch.manual_seed(seed) rand_idx = L(list(torch.randperm(len(o)).numpy())) cut = int(valid_pct * len(o)) return rand_idx[cut:],rand_idx[:cut] return _inner src = list(range(30)) f = RandomSplitter(seed=42) trn,val = f(src) assert 0<len(trn)<len(src) assert all(o not in val for o in trn) test_eq(len(trn), len(src)-len(val)) # test random seed consistency test_eq(f(src)[0], trn) ``` Use scikit-learn train_test_split. 
This allow to *split* items in a stratified fashion (uniformely according to the ‘labels‘ distribution) ``` # export def TrainTestSplitter(test_size=0.2, random_state=None, stratify=None, train_size=None, shuffle=True): "Split `items` into random train and test subsets using sklearn train_test_split utility." def _inner(o, **kwargs): train,valid = train_test_split(range_of(o), test_size=test_size, random_state=random_state, stratify=stratify, train_size=train_size, shuffle=shuffle) return L(train), L(valid) return _inner src = list(range(30)) labels = [0] * 20 + [1] * 10 test_size = 0.2 f = TrainTestSplitter(test_size=test_size, random_state=42, stratify=labels) trn,val = f(src) assert 0<len(trn)<len(src) assert all(o not in val for o in trn) test_eq(len(trn), len(src)-len(val)) # test random seed consistency test_eq(f(src)[0], trn) # test labels distribution consistency # there should be test_size % of zeroes and ones respectively in the validation set test_eq(len([t for t in val if t < 20]) / 20, test_size) test_eq(len([t for t in val if t > 20]) / 10, test_size) #export def IndexSplitter(valid_idx): "Split `items` so that `val_idx` are in the validation set and the others in the training set" def _inner(o): train_idx = np.setdiff1d(np.array(range_of(o)), np.array(valid_idx)) return L(train_idx, use_list=True), L(valid_idx, use_list=True) return _inner items = list(range(10)) splitter = IndexSplitter([3,7,9]) test_eq(splitter(items),[[0,1,2,4,5,6,8],[3,7,9]]) # export def _grandparent_idxs(items, name): def _inner(items, name): return mask2idxs(Path(o).parent.parent.name == name for o in items) return [i for n in L(name) for i in _inner(items,n)] # export def GrandparentSplitter(train_name='train', valid_name='valid'): "Split `items` from the grand parent folder names (`train_name` and `valid_name`)." def _inner(o): return _grandparent_idxs(o, train_name),_grandparent_idxs(o, valid_name) return _inner fnames = [path/'train/3/9932.png', path/'valid/7/7189.png', path/'valid/7/7320.png', path/'train/7/9833.png', path/'train/3/7666.png', path/'valid/3/925.png', path/'train/7/724.png', path/'valid/3/93055.png'] splitter = GrandparentSplitter() test_eq(splitter(fnames),[[0,3,4,6],[1,2,5,7]]) fnames2 = fnames + [path/'test/3/4256.png', path/'test/7/2345.png', path/'valid/7/6467.png'] splitter = GrandparentSplitter(train_name=('train', 'valid'), valid_name='test') test_eq(splitter(fnames2),[[0,3,4,6,1,2,5,7,10],[8,9]]) # export def FuncSplitter(func): "Split `items` by result of `func` (`True` for validation, `False` for training set)." def _inner(o): val_idx = mask2idxs(func(o_) for o_ in o) return IndexSplitter(val_idx)(o) return _inner splitter = FuncSplitter(lambda o: Path(o).parent.parent.name == 'valid') test_eq(splitter(fnames),[[0,3,4,6],[1,2,5,7]]) # export def MaskSplitter(mask): "Split `items` depending on the value of `mask`." def _inner(o): return IndexSplitter(mask2idxs(mask))(o) return _inner items = list(range(6)) splitter = MaskSplitter([True,False,False,True,False,True]) test_eq(splitter(items),[[1,2,4],[0,3,5]]) # export def FileSplitter(fname): "Split `items` by providing file `fname` (contains names of valid items separated by newline)." 
valid = Path(fname).read_text().split('\n') def _func(x): return x.name in valid def _inner(o): return FuncSplitter(_func)(o) return _inner with tempfile.TemporaryDirectory() as d: fname = Path(d)/'valid.txt' fname.write_text('\n'.join([Path(fnames[i]).name for i in [1,3,4]])) splitter = FileSplitter(fname) test_eq(splitter(fnames),[[0,2,5,6,7],[1,3,4]]) # export def ColSplitter(col='is_valid'): "Split `items` (supposed to be a dataframe) by value in `col`" def _inner(o): assert isinstance(o, pd.DataFrame), "ColSplitter only works when your items are a pandas DataFrame" valid_idx = (o.iloc[:,col] if isinstance(col, int) else o[col]).values.astype('bool') return IndexSplitter(mask2idxs(valid_idx))(o) return _inner df = pd.DataFrame({'a': [0,1,2,3,4], 'b': [True,False,True,True,False]}) splits = ColSplitter('b')(df) test_eq(splits, [[1,4], [0,2,3]]) #Works with strings or index splits = ColSplitter(1)(df) test_eq(splits, [[1,4], [0,2,3]]) # does not get confused if the type of 'is_valid' is integer, but it meant to be a yes/no df = pd.DataFrame({'a': [0,1,2,3,4], 'is_valid': [1,0,1,1,0]}) splits_by_int = ColSplitter('is_valid')(df) test_eq(splits_by_int, [[1,4], [0,2,3]]) # export def RandomSubsetSplitter(train_sz, valid_sz, seed=None): "Take randoms subsets of `splits` with `train_sz` and `valid_sz`" assert 0 < train_sz < 1 assert 0 < valid_sz < 1 assert train_sz + valid_sz <= 1. def _inner(o): if seed is not None: torch.manual_seed(seed) train_len,valid_len = int(len(o)*train_sz),int(len(o)*valid_sz) idxs = L(list(torch.randperm(len(o)).numpy())) return idxs[:train_len],idxs[train_len:train_len+valid_len] return _inner items = list(range(100)) valid_idx = list(np.arange(70,100)) splits = RandomSubsetSplitter(0.3, 0.1)(items) test_eq(len(splits[0]), 30) test_eq(len(splits[1]), 10) ``` ### Label The final set of functions is used to *label* a single item of data. ``` # export def parent_label(o): "Label `item` with the parent folder name." return Path(o).parent.name ``` Note that `parent_label` doesn't have anything customize, so it doesn't return a function - you can just use it directly. ``` test_eq(parent_label(fnames[0]), '3') test_eq(parent_label("fastai_dev/dev/data/mnist_tiny/train/3/9932.png"), '3') [parent_label(o) for o in fnames] #hide #test for MS Windows when os.path.sep is '\\' instead of '/' test_eq(parent_label(os.path.join("fastai_dev","dev","data","mnist_tiny","train", "3", "9932.png") ), '3') # export class RegexLabeller(): "Label `item` with regex `pat`." def __init__(self, pat, match=False): self.pat = re.compile(pat) self.matcher = self.pat.match if match else self.pat.search def __call__(self, o): res = self.matcher(str(o)) assert res,f'Failed to find "{self.pat}" in "{o}"' return res.group(1) ``` `RegexLabeller` is a very flexible function since it handles any regex search of the stringified item. Pass `match=True` to use `re.match` (i.e. check only start of string), or `re.search` otherwise (default). For instance, here's an example the replicates the previous `parent_label` results. 
``` f = RegexLabeller(fr'{os.path.sep}(\d){os.path.sep}') test_eq(f(fnames[0]), '3') [f(o) for o in fnames] f = RegexLabeller(r'(\d*)', match=True) test_eq(f(fnames[0].name), '9932') #export class ColReader(DisplayedTransform): "Read `cols` in `row` with potential `pref` and `suff`" def __init__(self, cols, pref='', suff='', label_delim=None): store_attr() self.pref = str(pref) + os.path.sep if isinstance(pref, Path) else pref self.cols = L(cols) def _do_one(self, r, c): o = r[c] if isinstance(c, int) else r[c] if c=='name' else getattr(r, c) if len(self.pref)==0 and len(self.suff)==0 and self.label_delim is None: return o if self.label_delim is None: return f'{self.pref}{o}{self.suff}' else: return o.split(self.label_delim) if len(o)>0 else [] def __call__(self, o, **kwargs): if len(self.cols) == 1: return self._do_one(o, self.cols[0]) return L(self._do_one(o, c) for c in self.cols) ``` `cols` can be a list of column names or a list of indices (or a mix of both). If `label_delim` is passed, the result is split using it. ``` df = pd.DataFrame({'a': 'a b c d'.split(), 'b': ['1 2', '0', '', '1 2 3']}) f = ColReader('a', pref='0', suff='1') test_eq([f(o) for o in df.itertuples()], '0a1 0b1 0c1 0d1'.split()) f = ColReader('b', label_delim=' ') test_eq([f(o) for o in df.itertuples()], [['1', '2'], ['0'], [], ['1', '2', '3']]) df['a1'] = df['a'] f = ColReader(['a', 'a1'], pref='0', suff='1') test_eq([f(o) for o in df.itertuples()], [L('0a1', '0a1'), L('0b1', '0b1'), L('0c1', '0c1'), L('0d1', '0d1')]) df = pd.DataFrame({'a': [L(0,1), L(2,3,4), L(5,6,7)]}) f = ColReader('a') test_eq([f(o) for o in df.itertuples()], [L(0,1), L(2,3,4), L(5,6,7)]) df['name'] = df['a'] f = ColReader('name') test_eq([f(df.iloc[0,:])], [L(0,1)]) ``` ## Categorize - ``` #export class CategoryMap(CollBase): "Collection of categories with the reverse mapping in `o2i`" def __init__(self, col, sort=True, add_na=False, strict=False): if is_categorical_dtype(col): items = L(col.cat.categories, use_list=True) #Remove non-used categories while keeping order if strict: items = L(o for o in items if o in col.unique()) else: if not hasattr(col,'unique'): col = L(col, use_list=True) # `o==o` is the generalized definition of non-NaN used by Pandas items = L(o for o in col.unique() if o==o) if sort: items = items.sorted() self.items = '#na#' + items if add_na else items self.o2i = defaultdict(int, self.items.val2idx()) if add_na else dict(self.items.val2idx()) def map_objs(self,objs): "Map `objs` to IDs" return L(self.o2i[o] for o in objs) def map_ids(self,ids): "Map `ids` to objects in vocab" return L(self.items[o] for o in ids) def __eq__(self,b): return all_equal(b,self) t = CategoryMap([4,2,3,4]) test_eq(t, [2,3,4]) test_eq(t.o2i, {2:0,3:1,4:2}) test_eq(t.map_objs([2,3]), [0,1]) test_eq(t.map_ids([0,1]), [2,3]) test_fail(lambda: t.o2i['unseen label']) t = CategoryMap([4,2,3,4], add_na=True) test_eq(t, ['#na#',2,3,4]) test_eq(t.o2i, {'#na#':0,2:1,3:2,4:3}) t = CategoryMap(pd.Series([4,2,3,4]), sort=False) test_eq(t, [4,2,3]) test_eq(t.o2i, {4:0,2:1,3:2}) col = pd.Series(pd.Categorical(['M','H','L','M'], categories=['H','M','L'], ordered=True)) t = CategoryMap(col) test_eq(t, ['H','M','L']) test_eq(t.o2i, {'H':0,'M':1,'L':2}) col = pd.Series(pd.Categorical(['M','H','M'], categories=['H','M','L'], ordered=True)) t = CategoryMap(col, strict=True) test_eq(t, ['H','M']) test_eq(t.o2i, {'H':0,'M':1}) # export class Categorize(DisplayedTransform): "Reversible transform of category string to `vocab` id" 
loss_func,order=CrossEntropyLossFlat(),1 def __init__(self, vocab=None, sort=True, add_na=False): if vocab is not None: vocab = CategoryMap(vocab, sort=sort, add_na=add_na) store_attr() def setups(self, dsets): if self.vocab is None and dsets is not None: self.vocab = CategoryMap(dsets, sort=self.sort, add_na=self.add_na) self.c = len(self.vocab) def encodes(self, o): try: return TensorCategory(self.vocab.o2i[o]) except KeyError as e: raise KeyError(f"Label '{o}' was not included in the training dataset") from e def decodes(self, o): return Category (self.vocab [o]) #export class Category(str, ShowTitle): _show_args = {'label': 'category'} cat = Categorize() tds = Datasets(['cat', 'dog', 'cat'], tfms=[cat]) test_eq(cat.vocab, ['cat', 'dog']) test_eq(cat('cat'), 0) test_eq(cat.decode(1), 'dog') test_stdout(lambda: show_at(tds,2), 'cat') test_fail(lambda: cat('bird')) cat = Categorize(add_na=True) tds = Datasets(['cat', 'dog', 'cat'], tfms=[cat]) test_eq(cat.vocab, ['#na#', 'cat', 'dog']) test_eq(cat('cat'), 1) test_eq(cat.decode(2), 'dog') test_stdout(lambda: show_at(tds,2), 'cat') cat = Categorize(vocab=['dog', 'cat'], sort=False, add_na=True) tds = Datasets(['cat', 'dog', 'cat'], tfms=[cat]) test_eq(cat.vocab, ['#na#', 'dog', 'cat']) test_eq(cat('dog'), 1) test_eq(cat.decode(2), 'cat') test_stdout(lambda: show_at(tds,2), 'cat') ``` ## Multicategorize - ``` # export class MultiCategorize(Categorize): "Reversible transform of multi-category strings to `vocab` id" loss_func,order=BCEWithLogitsLossFlat(),1 def __init__(self, vocab=None, add_na=False): super().__init__(vocab=vocab,add_na=add_na,sort=vocab==None) def setups(self, dsets): if not dsets: return if self.vocab is None: vals = set() for b in dsets: vals = vals.union(set(b)) self.vocab = CategoryMap(list(vals), add_na=self.add_na) def encodes(self, o): if not all(elem in self.vocab.o2i.keys() for elem in o): diff = [elem for elem in o if elem not in self.vocab.o2i.keys()] diff_str = "', '".join(diff) raise KeyError(f"Labels '{diff_str}' were not included in the training dataset") return TensorMultiCategory([self.vocab.o2i[o_] for o_ in o]) def decodes(self, o): return MultiCategory ([self.vocab [o_] for o_ in o]) #export class MultiCategory(L): def show(self, ctx=None, sep=';', color='black', **kwargs): return show_title(sep.join(self.map(str)), ctx=ctx, color=color, **kwargs) cat = MultiCategorize() tds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], tfms=[cat]) test_eq(tds[3][0], TensorMultiCategory([])) test_eq(cat.vocab, ['a', 'b', 'c']) test_eq(cat(['a', 'c']), tensor([0,2])) test_eq(cat([]), tensor([])) test_eq(cat.decode([1]), ['b']) test_eq(cat.decode([0,2]), ['a', 'c']) test_stdout(lambda: show_at(tds,2), 'a;c') # if vocab supplied, ensure it maintains its order (i.e., it doesn't sort) cat = MultiCategorize(vocab=['z', 'y', 'x']) test_eq(cat.vocab, ['z','y','x']) test_fail(lambda: cat('bird')) # export class OneHotEncode(DisplayedTransform): "One-hot encodes targets" order=2 def __init__(self, c=None): store_attr() def setups(self, dsets): if self.c is None: self.c = len(L(getattr(dsets, 'vocab', None))) if not self.c: warn("Couldn't infer the number of classes, please pass a value for `c` at init") def encodes(self, o): return TensorMultiCategory(one_hot(o, self.c).float()) def decodes(self, o): return one_hot_decode(o, None) ``` Works in conjunction with ` MultiCategorize` or on its own if you have one-hot encoded targets (pass a `vocab` for decoding and `do_encode=False` in this case) ``` _tfm = OneHotEncode(c=3) 
test_eq(_tfm([0,2]), tensor([1.,0,1])) test_eq(_tfm.decode(tensor([0,1,1])), [1,2]) tds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], [[MultiCategorize(), OneHotEncode()]]) test_eq(tds[1], [tensor([1.,0,0])]) test_eq(tds[3], [tensor([0.,0,0])]) test_eq(tds.decode([tensor([False, True, True])]), [['b','c']]) test_eq(type(tds[1][0]), TensorMultiCategory) test_stdout(lambda: show_at(tds,2), 'a;c') #hide #test with passing the vocab tds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], [[MultiCategorize(vocab=['a', 'b', 'c']), OneHotEncode()]]) test_eq(tds[1], [tensor([1.,0,0])]) test_eq(tds[3], [tensor([0.,0,0])]) test_eq(tds.decode([tensor([False, True, True])]), [['b','c']]) test_eq(type(tds[1][0]), TensorMultiCategory) test_stdout(lambda: show_at(tds,2), 'a;c') # export class EncodedMultiCategorize(Categorize): "Transform of one-hot encoded multi-category that decodes with `vocab`" loss_func,order=BCEWithLogitsLossFlat(),1 def __init__(self, vocab): super().__init__(vocab, sort=vocab==None) self.c = len(vocab) def encodes(self, o): return TensorMultiCategory(tensor(o).float()) def decodes(self, o): return MultiCategory (one_hot_decode(o, self.vocab)) _tfm = EncodedMultiCategorize(vocab=['a', 'b', 'c']) test_eq(_tfm([1,0,1]), tensor([1., 0., 1.])) test_eq(type(_tfm([1,0,1])), TensorMultiCategory) test_eq(_tfm.decode(tensor([False, True, True])), ['b','c']) _tfm2 = EncodedMultiCategorize(vocab=['c', 'b', 'a']) test_eq(_tfm2.vocab, ['c', 'b', 'a']) #export class RegressionSetup(DisplayedTransform): "Transform that floatifies targets" loss_func=MSELossFlat() def __init__(self, c=None): store_attr() def encodes(self, o): return tensor(o).float() def decodes(self, o): return TitledFloat(o) if o.ndim==0 else TitledTuple(o_.item() for o_ in o) def setups(self, dsets): if self.c is not None: return try: self.c = len(dsets[0]) if hasattr(dsets[0], '__len__') else 1 except: self.c = 0 _tfm = RegressionSetup() dsets = Datasets([0, 1, 2], RegressionSetup) test_eq(dsets.c, 1) test_eq_type(dsets[0], (tensor(0.),)) dsets = Datasets([[0, 1, 2], [3,4,5]], RegressionSetup) test_eq(dsets.c, 3) test_eq_type(dsets[0], (tensor([0.,1.,2.]),)) #export def get_c(dls): if getattr(dls, 'c', False): return dls.c if getattr(getattr(dls.train, 'after_item', None), 'c', False): return dls.train.after_item.c if getattr(getattr(dls.train, 'after_batch', None), 'c', False): return dls.train.after_batch.c vocab = getattr(dls, 'vocab', []) if len(vocab) > 0 and is_listy(vocab[-1]): vocab = vocab[-1] return len(vocab) ``` ## End-to-end dataset example with MNIST Let's show how to use those functions to grab the mnist dataset in a `Datasets`. First we grab all the images. ``` path = untar_data(URLs.MNIST_TINY) items = get_image_files(path) ``` Then we split between train and validation depending on the folder. ``` splitter = GrandparentSplitter() splits = splitter(items) train,valid = (items[i] for i in splits) train[:3],valid[:3] ``` Our inputs are images that we open and convert to tensors, our targets are labeled depending on the parent directory and are categories. 
``` from PIL import Image def open_img(fn:Path): return Image.open(fn).copy() def img2tensor(im:Image.Image): return TensorImage(array(im)[None]) tfms = [[open_img, img2tensor], [parent_label, Categorize()]] train_ds = Datasets(train, tfms) x,y = train_ds[3] xd,yd = decode_at(train_ds,3) test_eq(parent_label(train[3]),yd) test_eq(array(Image.open(train[3])),xd[0].numpy()) ax = show_at(train_ds, 3, cmap="Greys", figsize=(1,1)) assert ax.title.get_text() in ('3','7') test_fig_exists(ax) ``` ## ToTensor - ``` #export class ToTensor(Transform): "Convert item to appropriate tensor class" order = 5 ``` ## IntToFloatTensor - ``` # export class IntToFloatTensor(DisplayedTransform): "Transform image to float tensor, optionally dividing by 255 (e.g. for images)." order = 10 #Need to run after PIL transforms on the GPU def __init__(self, div=255., div_mask=1): store_attr() def encodes(self, o:TensorImage): return o.float().div_(self.div) def encodes(self, o:TensorMask ): return o.long() // self.div_mask def decodes(self, o:TensorImage): return ((o.clamp(0., 1.) * self.div).long()) if self.div else o t = (TensorImage(tensor(1)),tensor(2).long(),TensorMask(tensor(3))) tfm = IntToFloatTensor() ft = tfm(t) test_eq(ft, [1./255, 2, 3]) test_eq(type(ft[0]), TensorImage) test_eq(type(ft[2]), TensorMask) test_eq(ft[0].type(),'torch.FloatTensor') test_eq(ft[1].type(),'torch.LongTensor') test_eq(ft[2].type(),'torch.LongTensor') ``` ## Normalization - ``` # export def broadcast_vec(dim, ndim, *t, cuda=True): "Make a vector broadcastable over `dim` (out of `ndim` total) by prepending and appending unit axes" v = [1]*ndim v[dim] = -1 f = to_device if cuda else noop return [f(tensor(o).view(*v)) for o in t] # export @docs class Normalize(DisplayedTransform): "Normalize/denorm batch of `TensorImage`" parameters,order = L('mean', 'std'),99 def __init__(self, mean=None, std=None, axes=(0,2,3)): store_attr() @classmethod def from_stats(cls, mean, std, dim=1, ndim=4, cuda=True): return cls(*broadcast_vec(dim, ndim, mean, std, cuda=cuda)) def setups(self, dl:DataLoader): if self.mean is None or self.std is None: x,*_ = dl.one_batch() self.mean,self.std = x.mean(self.axes, keepdim=True),x.std(self.axes, keepdim=True)+1e-7 def encodes(self, x:TensorImage): return (x-self.mean) / self.std def decodes(self, x:TensorImage): f = to_cpu if x.device.type=='cpu' else noop return (x*f(self.std) + f(self.mean)) _docs=dict(encodes="Normalize batch", decodes="Denormalize batch") mean,std = [0.5]*3,[0.5]*3 mean,std = broadcast_vec(1, 4, mean, std) batch_tfms = [IntToFloatTensor(), Normalize.from_stats(mean,std)] tdl = TfmdDL(train_ds, after_batch=batch_tfms, bs=4, device=default_device()) x,y = tdl.one_batch() xd,yd = tdl.decode((x,y)) test_eq(x.type(), 'torch.cuda.FloatTensor' if default_device().type=='cuda' else 'torch.FloatTensor') test_eq(xd.type(), 'torch.LongTensor') test_eq(type(x), TensorImage) test_eq(type(y), TensorCategory) assert x.mean()<0.0 assert x.std()>0.5 assert 0<xd.float().mean()/255.<1 assert 0<xd.float().std()/255.<0.5 #hide nrm = Normalize() batch_tfms = [IntToFloatTensor(), nrm] tdl = TfmdDL(train_ds, after_batch=batch_tfms, bs=4) x,y = tdl.one_batch() test_close(x.mean(), 0.0, 1e-4) assert x.std()>0.9, x.std() #Just for visuals from fastai.vision.core import * tdl.show_batch((x,y)) #hide x,y = cast(x,Tensor),cast(y,Tensor) #Lose type of tensors (to emulate predictions) test_ne(type(x), TensorImage) tdl.show_batch((x,y), figsize=(1,1)) #Check that types are put back by dl. 
``` ## Export - ``` #hide from nbdev.export import notebook2script notebook2script() ```
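These getters, splitters, and labellers are usually combined through the higher-level `DataBlock` API rather than wired together by hand. As a rough sketch (reusing the tiny MNIST `path` from the end-to-end example above), the pieces defined in this notebook plug in like this:

```
from fastai.vision.all import *

# Combine a getter, a splitter and a labeller into one declarative spec
dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),   # inputs are images, targets are categories (Categorize)
    get_items=get_image_files,            # the getter defined above
    splitter=GrandparentSplitter(),       # split by the train/valid grandparent folder names
    get_y=parent_label)                   # label each item with its parent folder name

dls = dblock.dataloaders(path, bs=64)
dls.show_batch(max_n=9)
```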
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/webinars_conferences_etc/multi_lingual_webinar/4_Unsupervise_Chinese_Keyword_Extraction_NER_and_Translation_from_Chinese_News.ipynb)

![Flags](http://ckl-it.de/wp-content/uploads/2021/02/flags.jpeg)

```
import os

! apt-get update -qq > /dev/null
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]

! pip install nlu pyspark==2.4.4 > /dev/null

import nlu
import pandas as pd

! wget http://ckl-it.de/wp-content/uploads/2021/02/chinese_news.csv
```

# Analyzing Chinese News Articles With NLU

## This notebook showcases how to extract Chinese keywords unsupervised with YAKE, recognize named entities, and translate them to English

### In addition, we will leverage the Chinese WordSegmenter and Lemmatizer to preprocess our data further and get a better view of our data distribution

# [Chinese official daily news](https://www.kaggle.com/noxmoon/chinese-official-daily-news-since-2016)

![Chinese News](https://upload.wikimedia.org/wikipedia/zh/6/69/XINWEN_LIANBO.png)

### Xinwen Lianbo is a daily news programme produced by China Central Television. It is shown simultaneously by all local TV stations in mainland China, making it one of the world's most-watched programmes. It has been broadcast since 1 January 1978. (Wikipedia)

```
df = pd.read_csv('./chinese_news.csv')
df
```

# Depending on how we pre-process our text, we will get different keywords extracted with YAKE. In this tutorial we will see the effect of **Lemmatization** and **Word Segmentation** and see how the distribution of keywords changes
- Lemmatization
- Word Segmentation

# Apply YAKE Keyword Extractor to the raw text
First we do no pre-processing at all and just calculate keywords from the raw titles with YAKE

```
yake_df = nlu.load('yake').predict(df.headline)
yake_df
```

## The predicted Chinese keywords don't show up in the Pandas plot labels, and you probably do not speak Chinese!
### This is why we will translate each extracted keyword into English and then take a look at the distribution again

```
yake_df.explode('keywords_classes').keywords_classes.value_counts()[0:100].plot.bar(title='Top 100 in Chinese News Articles. No Chinese Keywords :( So lets translate!', figsize=(20,8))
```

### We get the top 100 keywords and store the counts together with the keywords in a new DF

```
top_100_zh = yake_df.explode('keywords_classes').keywords_classes.value_counts()[0:100]
top_100_zh = pd.DataFrame(top_100_zh)  # Create new DF from the counts
top_100_zh['zh'] = top_100_zh.index
top_100_zh.reset_index(inplace=True)
top_100_zh
```

### Now we can just translate each predicted keyword with `zh.translate_to.en` in 1 line of code and see what is actually going on in the dataset

```
top_100_en = nlu.load('zh.translate_to.en').predict(top_100_zh.zh)
top_100_en
```

#### Write the translations into the df with the keyword counts so we can plot them together in the next step

```
# Write translation back to the keyword df with the counts
top_100_zh['en'] = top_100_en.translation
top_100_zh
```

## Now we can simply look at every keyword as a bar chart with its actual translation and understand what keywords appeared in Chinese news!

```
top_100_zh.index = top_100_zh.en
top_100_zh.keywords_classes.plot.barh(figsize=(20,20), title='Distribution of top 100 translated Chinese news keywords generated by the YAKE algorithm applied to RAW data')
```

# Apply YAKE to Segmented/Tokenized data
We gave the YAKE algorithm full headlines which were not segmented. To better understand the Chinese text, we can segment it into tokens and analyze their occurrence instead.

## YAKE + Word Segmentation

```
# Segment words into tokens with the word segmenter
# This will output 1 row per token
seg_df = nlu.load('zh.segment_words').predict(df.headline)
seg_df
```

### Join the tokens back as whitespace-separated strings for the YAKE keyword extraction in the next step

```
# Join the tokens back as whitespace-separated strings
joined_segs = seg_df.token.groupby(seg_df.index).transform(lambda x : ' '.join(x)).drop_duplicates()
joined_segs
```

### Now we can extract keywords with YAKE on the whitespace-separated tokens

```
seg_yake_df = nlu.load('yake').predict(joined_segs)
seg_yake_df

# Get top 100 occurring keywords from the joined segmented tokens
top_100_seg_zh = seg_yake_df.explode('keywords_classes').keywords_classes.value_counts()[0:100]#.plot.bar(title='Top 100 in Chinese News Articles Segmented', figsize=(20,8))
top_100_seg_zh = pd.DataFrame(top_100_seg_zh)
top_100_seg_zh
```

## Get the top 100 keywords and translate them like we did for the raw data, as preparation for visualizing the keyword distribution

```
# Create new DF from the counts
top_100_seg_zh['zh'] = top_100_seg_zh.index
top_100_seg_zh.reset_index(inplace=True)

# Write translations back to df with keyword counts
top_100_seg_zh['en'] = nlu.load('zh.translate_to.en').predict(top_100_seg_zh.zh).translation
```

### Visualize the distribution of the keywords extracted from the segmented tokens
We can observe that we now have a very different distribution than originally

```
top_100_seg_zh.index = top_100_seg_zh.en
top_100_seg_zh.keywords_classes.plot.barh(figsize=(20,20), title = 'Segmented Keywords YAKE Distribution')
```

# Apply YAKE to Segmented and Lemmatized data

```
# Automated Word Segmentation Included!
zh_lem_df = nlu.load('zh.lemma').predict(df.headline)
zh_lem_df
```

## Join tokens into whitespace-separated strings like we did previously for Word Segmentation

```
zh_lem_df['lem_str'] = zh_lem_df.lemma.str.join(' ')
zh_lem_df
```

## Extract keywords on Lemmatized + Word Segmented Chinese text

```
yake_lem_df = nlu.load('yake').predict(zh_lem_df.lem_str)
yake_lem_df

top_100_stem = yake_lem_df.explode('keywords_classes').keywords_classes.value_counts()[:100]
top_100_stem = pd.DataFrame(top_100_stem)

# Create new DF from the counts
top_100_stem['zh'] = top_100_stem.index
top_100_stem.reset_index(inplace=True)

# Write translations back to df with keyword counts
top_100_stem['en'] = nlu.load('zh.translate_to.en').predict(top_100_stem.zh).translation
top_100_stem
```

# Plot the Segmented and Lemmatized distribution of extracted keywords

```
top_100_stem.index = top_100_stem.en
top_100_stem.keywords_classes.plot.barh(figsize=(20,20), title='Distribution of top 100 translated Chinese news keywords generated by the YAKE algorithm applied to Lemmatized and Segmented Chinese Text')
```

# Extract Chinese Named Entities

```
zh_ner_df = nlu.load('zh.ner').predict(df.iloc[:1000].headline, output_level='document')
zh_ner_df

# Translate detected Chinese entities to English
en_entities = nlu.load('zh.translate_to.en').predict(zh_ner_df.explode('entities').entities)
en_entities

en_entities.translation.value_counts()[0:100].plot.barh(figsize=(20,20), title = "Top 100 Translated detected Named entities")
```

# There are many more models!
## Check out [the Modelshub](https://nlp.johnsnowlabs.com/models) and the [NLU Namespace](https://nlu.johnsnowlabs.com/docs/en/namespace) for more models
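Since the count–translate–plot pattern above is repeated for the raw, segmented, and lemmatized keywords, it can be handy to wrap it in a small helper. Here is one possible sketch, using only calls already shown in this notebook:

```
def plot_translated_keywords(yake_predictions, title, top_n=100):
    """Count the top YAKE keywords, translate them to English and plot them as a bar chart."""
    counts = yake_predictions.explode('keywords_classes').keywords_classes.value_counts()[:top_n]
    top_df = pd.DataFrame(counts)
    top_df['zh'] = top_df.index
    top_df.reset_index(inplace=True)
    # Translate every keyword to English and use the translations as plot labels
    top_df['en'] = nlu.load('zh.translate_to.en').predict(top_df.zh).translation
    top_df.index = top_df.en
    return top_df.keywords_classes.plot.barh(figsize=(20,20), title=title)

# For example, reproduce the raw-headline plot from earlier
plot_translated_keywords(yake_df, 'Top 100 translated keywords (raw headlines)')
```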
# Simple Scatter Plots Another commonly used plot type is the simple scatter plot, a close cousin of the line plot. Instead of points being joined by line segments, here the points are represented individually with a dot, circle, or other shape. We’ll start by setting up the notebook for plotting and importing the functions we will use: ``` %matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') import numpy as np ``` ## Scatter Plots with ``plt.plot`` In the previous section we looked at ``plt.plot``/``ax.plot`` to produce line plots. It turns out that this same function can produce scatter plots as well: ``` x = np.linspace(0, 10, 30) y = np.sin(x) plt.plot(x, y, 'o', color='black'); ``` The third argument in the function call is a character that represents the type of symbol used for the plotting. Just as you can specify options such as ``'-'``, ``'--'`` to control the line style, the marker style has its own set of short string codes. The full list of available symbols can be seen in the documentation of ``plt.plot``, or in Matplotlib's online documentation. Most of the possibilities are fairly intuitive, and we'll show a number of the more common ones here: ``` rng = np.random.RandomState(0) for marker in ['o', '.', ',', 'x', '+', 'v', '^', '<', '>', 's', 'd']: plt.plot(rng.rand(5), rng.rand(5), marker, label="marker='{0}'".format(marker)) plt.legend(numpoints=1) plt.xlim(0, 1.8); ``` For even more possibilities, these character codes can be used together with line and color codes to plot points along with a line connecting them: ``` plt.plot(x, y, '-ok'); ``` Additional keyword arguments to ``plt.plot`` specify a wide range of properties of the lines and markers: ``` plt.plot(x, y, '-p', color='gray', markersize=15, linewidth=4, markerfacecolor='white', markeredgecolor='gray', markeredgewidth=2) plt.ylim(-1.2, 1.2); ``` This type of flexibility in the ``plt.plot`` function allows for a wide variety of possible visualization options. For a full description of the options available, refer to the ``plt.plot`` documentation. ## Scatter Plots with ``plt.scatter`` A second, more powerful method of creating scatter plots is the ``plt.scatter`` function, which can be used very similarly to the ``plt.plot`` function: ``` plt.scatter(x, y, marker='o'); ``` The primary difference of ``plt.scatter`` from ``plt.plot`` is that it can be used to create scatter plots where the properties of each individual point (size, face color, edge color, etc.) can be individually controlled or mapped to data. Let's show this by creating a random scatter plot with points of many colors and sizes. In order to better see the overlapping results, we'll also use the ``alpha`` keyword to adjust the transparency level: ``` rng = np.random.RandomState(0) x = rng.randn(100) y = rng.randn(100) colors = rng.rand(100) sizes = 1000 * rng.rand(100) plt.scatter(x, y, c=colors, s=sizes, alpha=0.3, cmap='viridis') plt.colorbar(); # show color scale ``` Notice that the color argument is automatically mapped to a color scale (shown here by the ``colorbar()`` command), and that the size argument is given in pixels. In this way, the color and size of points can be used to convey information in the visualization, in order to visualize multidimensional data. 
For example, we might use the Iris data from Scikit-Learn, where each sample is one of three types of flowers that has had the size of its petals and sepals carefully measured: ``` from sklearn.datasets import load_iris iris = load_iris() features = iris.data.T plt.scatter(features[0], features[1], alpha=0.2, s=100*features[3], c=iris.target, cmap='viridis') plt.xlabel(iris.feature_names[0]) plt.ylabel(iris.feature_names[1]); ``` We can see that this scatter plot has given us the ability to simultaneously explore four different dimensions of the data: the (x, y) location of each point corresponds to the sepal length and width, the size of the point is related to the petal width, and the color is related to the particular species of flower. Multicolor and multifeature scatter plots like this can be useful for both exploration and presentation of data. ## ``plot`` Versus ``scatter``: A Note on Efficiency Aside from the different features available in ``plt.plot`` and ``plt.scatter``, why might you choose to use one over the other? While it doesn't matter as much for small amounts of data, as datasets get larger than a few thousand points, ``plt.plot`` can be noticeably more efficient than ``plt.scatter``. The reason is that ``plt.scatter`` has the capability to render a different size and/or color for each point, so the renderer must do the extra work of constructing each point individually. In ``plt.plot``, on the other hand, the points are always essentially clones of each other, so the work of determining the appearance of the points is done only once for the entire set of data. For large datasets, the difference between these two can lead to vastly different performance, and for this reason, ``plt.plot`` should be preferred over ``plt.scatter`` for large datasets. *This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).* *The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
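To make the efficiency difference described above concrete, a quick, illustrative timing comparison on a larger dataset might look like the following; exact numbers depend on your backend and machine.

```
import time

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.RandomState(0)
x, y = rng.randn(2, 100000)

def timed_draw(plot_func):
    """Time building the artists plus one full canvas draw."""
    fig, ax = plt.subplots()
    start = time.perf_counter()
    plot_func(ax)
    fig.canvas.draw()   # force the actual rendering work
    elapsed = time.perf_counter() - start
    plt.close(fig)
    return elapsed

print("plt.plot:   ", timed_draw(lambda ax: ax.plot(x, y, 'o', markersize=1)))
print("plt.scatter:", timed_draw(lambda ax: ax.scatter(x, y, s=1)))
```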
``` import numpy as np import pandas as pd df=pd.read_csv('houseprice.csv',usecols=["SalePrice","MSSubClass","MSZoning","LotFrontage","LotArea", "Street","YearBuilt","LotShape","1stFlrSF","2ndFlrSF"]).dropna() df.shape df.head() df.info() for i in df.columns: print("Column name{} and unique values are {}".format(i,len(df[i].unique()))) import datetime datetime.datetime.now().year df['Total Years']=datetime.datetime.now().year-df['YearBuilt'] df.head() df.drop('YearBuilt',axis=1,inplace=True) df.columns ##Creating Categorical Features cat_features=['MSSubClass','MSZoning','Street','LotShape'] out_features='SalePrice' df['MSSubClass'].unique() from sklearn.preprocessing import LabelEncoder lbl_encoders={} lbl_encoders['MSSubClass']=LabelEncoder() lbl_encoders["MSSubClass"].fit_transform(df['MSSubClass']) lbl_encoders label_encoder={} for feature in cat_features: label_encoder[feature]=LabelEncoder() df[feature]=label_encoder[feature].fit_transform(df[feature]) df ##convertto numpy cat_features=df[["MSSubClass","MSZoning","Street","LotShape"]].to_numpy() cat_features ## convert numpy to Tensors import torch cat_features=torch.tensor(cat_features,dtype=torch.int64) cat_features ### create continuous variable cont_features=[] for i in df.columns: if i in ["MSSubClass","MSZoning","Street","LotShape","SalePrice"]: pass else: cont_features.append(i) cont_features cont_values=np.stack([df[i].values for i in cont_features],axis=1) cont_values=torch.tensor(cont_values,dtype=torch.float) cont_values cont_values.dtype ### Dependent Feature y=torch.tensor(df['SalePrice'].values,dtype=torch.float).reshape(-1,1) y df.info() cat_features.shape,cont_values.shape,y.shape len(df['MSSubClass'].unique()) ##Embedding size for categorical columns cat_dims=[len(df[col].unique()) for col in ["MSSubClass","MSZoning","Street","LotShape"]] cat_dims ## output dimension should be setbased on the input dimension(min(50,features dimension /2)) embedding_dim=[(x,min(50,(x+1)//2)) for x in cat_dims] embedding_dim import torch import torch.nn as nn import torch.nn.functional as F embed_representation=nn.ModuleList([nn.Embedding(inp,out) for inp,out in embedding_dim]) embed_representation cat_features cat_featuresz=cat_features[:4] cat_featuresz pd.set_option('display.max_rows',500) embedding_val=[] for i, e in enumerate(embed_representation): embedding_val.append(e(cat_features[:,i])) embedding_val z=torch.cat(embedding_val,1) z ##implement Dropout dropout=nn.Dropout(.4) final_embedded=dropout(z) final_embedded ##create a Feed Forward Neural Network class FeedForwardNN(nn.Module): def __init__(self,embedding_dim,n_cont,out_sz,layers,p=0.5): super().__init__() self.embeds=nn.ModuleList([nn.Embedding(inp,out) for inp,out in embedding_dim]) self.emb_drop=nn.Dropout(p) self.bn_cont=nn.BatchNorm1d(n_cont) layerlist=[] n_emb=sum((out for inp,out in embedding_dim)) n_in=n_emb+n_cont for i in layers: layerlist.append(nn.Linear(n_in,i)) layerlist.append(nn.ReLU(inplace=True)) layerlist.append(nn.BatchNorm1d(i)) layerlist.append(nn.Dropout(p)) n_in=i layerlist.append(nn.Linear(layers[-1],out_sz)) self.layers=nn.Sequential(*layerlist) def forward(self,x_cat,x_cont): embeddings=[] for i,e in enumerate(self.embeds): embeddings.append(e(x_cat[:,i])) x=torch.cat(embeddings,1) x=self.emb_drop(x) x_cont=self.bn_cont(x_cont) x=torch.cat([x,x_cont],1) x=self.layers(x) return x len(cont_features) torch.manual_seed(100) model=FeedForwardNN(embedding_dim,len(cont_features),1,[100,50],p=0.4) model ``` ## Define Loss and Optimizer ``` 
model.parameters loss_function=nn.MSELoss() optimizer=torch.optim.Adam(model.parameters(),lr=0.01) df.shape cont_values.shape batch_size=1200 test_size=int(batch_size*0.15) train_categorical=cat_features[:batch_size-test_size] test_categorical=cat_features[batch_size-test_size:batch_size] train_cont=cont_values[:batch_size-test_size] test_cont=cont_values[batch_size-test_size:batch_size] y_train=y[:batch_size-test_size] y_test=y[batch_size-test_size:batch_size] len(train_categorical),len(test_categorical),len(train_cont),len(test_cont),len(y_train),len(y_test) epochs=5000 final_losses=[] for i in range(epochs): i=i+1 y_pred=model(train_categorical,train_cont) loss=torch.sqrt(loss_function(y_pred,y_train)) final_losses.append(loss) if i%10==1: print("Epoch number: {} and the loss : {}".format(i,loss.item())) optimizer.zero_grad() loss.backward() optimizer.step() import matplotlib.pyplot as plt %matplotlib inline plt.plot(range(epochs),final_losses) plt.ylabel("RMSE loss") plt.xlabel('epochs'); ## validate the test Data y_pred="" with torch.no_grad(): y_pred=model(test_categorical,test_cont) loss=torch.sqrt(loss_function(y_pred,y_test)) print('RMSE: {}'.format(loss)) data_verify=pd.DataFrame(y_test.tolist(),columns=["Test"]) data_predicted=pd.DataFrame(y_pred.tolist(),columns=["Prediction"]) data_predicted final_output=pd.concat([data_verify,data_predicted],axis=1) final_output["Difference"]=final_output["Test"]-final_output['Prediction'] final_output.head() ```
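As a final, optional step, the trained network could be saved and reloaded for later inference. The sketch below is only a suggestion built from the objects defined above (the file name is arbitrary); note that calling `.eval()` matters here because the model contains batch-norm and dropout layers.

```
# Save only the learned parameters
torch.save(model.state_dict(), 'houseprice_model.pt')

# Rebuild the same architecture and load the weights back
loaded_model = FeedForwardNN(embedding_dim, len(cont_features), 1, [100, 50], p=0.4)
loaded_model.load_state_dict(torch.load('houseprice_model.pt'))
loaded_model.eval()  # switch batch-norm/dropout to inference mode

# Predict on the held-out tensors without tracking gradients
with torch.no_grad():
    preds = loaded_model(test_categorical, test_cont)
print(preds[:5])
```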
# 📃 Solution for Exercise M1.04 The goal of this exercise is to evaluate the impact of using an arbitrary integer encoding for categorical variables along with a linear classification model such as Logistic Regression. To do so, let's try to use `OrdinalEncoder` to preprocess the categorical variables. This preprocessor is assembled in a pipeline with `LogisticRegression`. The generalization performance of the pipeline can be evaluated by cross-validation and then compared to the score obtained when using `OneHotEncoder` or to some other baseline score. First, we load the dataset. ``` import pandas as pd adult_census = pd.read_csv("../datasets/adult-census.csv") target_name = "class" target = adult_census[target_name] data = adult_census.drop(columns=[target_name, "education-num"]) ``` In the previous notebook, we used `sklearn.compose.make_column_selector` to automatically select columns with a specific data type (also called `dtype`). Here, we will use this selector to get only the columns containing strings (column with `object` dtype) that correspond to categorical features in our dataset. ``` from sklearn.compose import make_column_selector as selector categorical_columns_selector = selector(dtype_include=object) categorical_columns = categorical_columns_selector(data) data_categorical = data[categorical_columns] ``` We filter our dataset that it contains only categorical features. Define a scikit-learn pipeline com Because `OrdinalEncoder` can raise errors if it sees an unknown category at prediction time, you can set the `handle_unknown="use_encoded_value"` and `unknown_value` parameters. You can refer to the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html) for more details regarding these parameters. ``` from sklearn.pipeline import make_pipeline from sklearn.preprocessing import OrdinalEncoder from sklearn.linear_model import LogisticRegression model = make_pipeline( OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1), LogisticRegression(max_iter=500)) ``` Your model is now defined. Evaluate it using a cross-validation using `sklearn.model_selection.cross_validate`. ``` from sklearn.model_selection import cross_validate cv_results = cross_validate(model, data_categorical, target) scores = cv_results["test_score"] print("The mean cross-validation accuracy is: " f"{scores.mean():.3f} +/- {scores.std():.3f}") ``` Using an arbitrary mapping from string labels to integers as done here causes the linear model to make bad assumptions on the relative ordering of categories. This prevents the model from learning anything predictive enough and the cross-validated score is even lower than the baseline we obtained by ignoring the input data and just constantly predicting the most frequent class: ``` from sklearn.dummy import DummyClassifier cv_results = cross_validate(DummyClassifier(strategy="most_frequent"), data_categorical, target) scores = cv_results["test_score"] print("The mean cross-validation accuracy is: " f"{scores.mean():.3f} +/- {scores.std():.3f}") ``` Now, we would like to compare the generalization performance of our previous model with a new model where instead of using an `OrdinalEncoder`, we will use a `OneHotEncoder`. Repeat the model evaluation using cross-validation. Compare the score of both models and conclude on the impact of choosing a specific encoding strategy when using a linear model. 
```
from sklearn.preprocessing import OneHotEncoder

model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    LogisticRegression(max_iter=500))

cv_results = cross_validate(model, data_categorical, target)
scores = cv_results["test_score"]
print("The mean cross-validation accuracy is: "
      f"{scores.mean():.3f} +/- {scores.std():.3f}")
```

With the linear classifier chosen here, an encoding that does not assume any ordering of the categories leads to a much better result. The important message is: use a linear model with `OrdinalEncoder` only for ordinal categorical features, i.e. features whose categories have a meaningful ordering. Otherwise, your model will perform poorly.
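For contrast, tree-based models split on thresholds rather than relying on distances between encoded values, so an arbitrary integer encoding is usually harmless for them. The sketch below is illustrative only; it reuses `data_categorical` and `target` from above and assumes scikit-learn 1.0+ for `HistGradientBoostingClassifier`.

```
# Ordinal encoding paired with a tree-based model: the arbitrary integer codes
# do not hurt, because trees only compare values against thresholds.
from sklearn.ensemble import HistGradientBoostingClassifier

tree_model = make_pipeline(
    OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1),
    HistGradientBoostingClassifier())

cv_results = cross_validate(tree_model, data_categorical, target)
scores = cv_results["test_score"]
print("The mean cross-validation accuracy is: "
      f"{scores.mean():.3f} +/- {scores.std():.3f}")
```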
##### Copyright 2018 The TensorFlow Authors.

```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Train Your Own Model and Convert It to TFLite

<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite/Week%201/Exercises/TFLite_Week1_Exercise.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite/Week%201/Exercises/TFLite_Week1_Exercise.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> </table>

This notebook uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:

<table> <tr><td> <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> </td></tr> <tr><td align="center"> <b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>&nbsp; </td></tr> </table>

Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing we'll use here. This notebook uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.

We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access Fashion MNIST directly from TensorFlow.
Import and load the Fashion MNIST data directly from TensorFlow: # Setup ``` try: %tensorflow_version 2.x except: pass import pathlib import numpy as np import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_datasets as tfds tfds.disable_progress_bar() print('\u2022 Using TensorFlow Version:', tf.__version__) print('\u2022 GPU Device Found.' if tf.test.is_gpu_available() else '\u2022 GPU Device Not Found. Running on CPU') ``` # Download Fashion MNIST Dataset We will use TensorFlow Datasets to load the Fashion MNIST dataset. ``` whole_ds,info_ds = tfds.load("fashion_mnist", with_info = True, split='train+test', as_supervised=True) #60,000+10,000 n = tf.data.experimental.cardinality(whole_ds).numpy() # 70,000 train_num = int(n*0.8) #56,000 val_num = int(n*0.1) #7000 train_examples = whole_ds.take(train_num) validation_examples = whole_ds.skip(train_num).take(val_num) test_examples = whole_ds.skip(train_num+val_num) #7000 num_examples = train_num num_classes = info_ds.features['label'].num_classes ``` The class names are not included with the dataset, so we will specify them here. ``` class_names = ['T-shirt_top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] # Create a labels.txt file with the class names with open('labels.txt', 'w') as f: f.write('\n'.join(class_names)) # The images in the dataset are 28 by 28 pixels. IMG_SIZE = 28 ``` # Preprocessing data ## Preprocess ``` def format_example(image, label): # Cast image to float32 image = tf.image.convert_image_dtype(image, dtype=tf.float32) # Normalize the image in the range [0, 1] image = image/255.0 return image, tf.one_hot(label, num_classes) # Specify the batch size BATCH_SIZE = 256 ``` ## Create Datasets From Images and Labels ``` # Create Datasets train_batches = train_examples.cache().shuffle(num_examples//4).batch(BATCH_SIZE).map(format_example).prefetch(1) validation_batches = validation_examples.cache().batch(BATCH_SIZE).map(format_example) test_batches = test_examples.batch(1).map(format_example) ``` # Building the Model ``` Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 13, 13, 16) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 11, 11, 32) 4640 _________________________________________________________________ flatten (Flatten) (None, 3872) 0 _________________________________________________________________ dense (Dense) (None, 64) 247872 _________________________________________________________________ dense_1 (Dense) (None, 10) 650 ================================================================= Total params: 253,322 Trainable params: 253,322 Non-trainable params: 0 ``` ``` model = tf.keras.Sequential([ # Set the input shape to (28, 28, 1), kernel size=3, filters=16 and use ReLU activation, tf.keras.layers.Conv2D(input_shape=(28,28,1), kernel_size=3, filters=16, activation='relu'), tf.keras.layers.MaxPooling2D(), # Set the number of filters to 32, kernel size to 3 and use ReLU activation tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu'), # Flatten the output layer to 1 dimension tf.keras.layers.Flatten(), # Add a fully connected layer with 64 hidden units and ReLU activation tf.keras.layers.Dense(units=64, 
activation='relu'), # Attach a final softmax classification head tf.keras.layers.Dense(activation='softmax', units=num_classes)]) # Set the appropriate loss function and use accuracy as your metric model.compile(optimizer='adam', loss='categorical_crossentropy', metrics='accuracy') model.summary() ``` ## Train ``` model.fit(train_batches, epochs=10, validation_data=validation_batches) ``` # Exporting to TFLite You will now save the model to TFLite. We should note, that you will probably see some warning messages when running the code below. These warnings have to do with software updates and should not cause any errors or prevent your code from running. ``` # EXERCISE: Use the tf.saved_model API to save your model in the SavedModel format. export_dir = 'saved_model/1' tf.saved_model.save(model, export_dir) #@title Select mode of optimization mode = "Speed" #@param ["Default", "Storage", "Speed"] if mode == 'Storage': optimization = tf.lite.Optimize.OPTIMIZE_FOR_SIZE elif mode == 'Speed': optimization = tf.lite.Optimize.OPTIMIZE_FOR_LATENCY else: optimization = tf.lite.Optimize.DEFAULT # EXERCISE: Use the TFLiteConverter SavedModel API to initialize the converter converter = tf.lite.TFLiteConverter.from_saved_model(export_dir) # Set the optimzations converter.optimizations = [optimization] # Invoke the converter to finally generate the TFLite model tflite_model = converter.convert() tflite_model_file = pathlib.Path('./model.tflite') tflite_model_file.write_bytes(tflite_model) ``` # Test the Model with TFLite Interpreter ``` # Load TFLite model and allocate tensors. interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] # Gather results for the randomly sampled test images predictions = [] test_labels = [] test_images = [] for img, label in test_batches.take(50): interpreter.set_tensor(input_index, img) interpreter.invoke() predictions.append(interpreter.get_tensor(output_index)) test_labels.append(label[0]) test_images.append(np.array(img)) #@title Utility functions for plotting # Utilities for plotting def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) img = np.squeeze(img) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) print(predicted_label, np.argmax(true_label.numpy())) if predicted_label == np.argmax(true_label.numpy()): color = 'green' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks(list(range(10)), class_names, rotation='vertical') plt.yticks([]) thisplot = plt.bar(range(10), predictions_array[0], color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array[0]) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('green') #@title Visualize the outputs { run: "auto" } index = 33 #@param {type:"slider", min:1, max:50, step:1} plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(index, predictions, tf.argmax(test_labels, axis=1), test_images) plt.show() plot_value_array(index, predictions, tf.argmax(test_labels, axis=1)) plt.show() ``` # Download the TFLite Model and 
Assets If you are running this notebook in a Colab, you can run the cell below to download the tflite model and labels to your local disk. **Note**: If the files do not download when you run the cell, try running the cell a second time. Your browser might prompt you to allow multiple files to be downloaded. ``` try: from google.colab import files files.download(tflite_model_file) files.download('labels.txt') except: pass ``` # Prepare the Test Images for Download (Optional) ``` !mkdir -p test_images tf.argmax(label, 1).numpy()[0] def format_example(image, label): return image, tf.one_hot(label, num_classes) test_batches = test_examples.batch(1).map(format_example) from PIL import Image for index, (image, label) in enumerate(test_batches.take(50)): #print(image) #image = tf.cast(image * 255.0, tf.uint8) image = tf.squeeze(image).numpy() pil_image = Image.fromarray(image) #print(image) pil_image.save('test_images/{}_{}.jpg'.format(class_names[tf.argmax(label, 1).numpy()[0]].lower(), index)) !ls test_images !zip -qq fmnist_test_images.zip -r test_images/ ``` If you are running this notebook in a Colab, you can run the cell below to download the Zip file with the images to your local disk. **Note**: If the Zip file does not download when you run the cell, try running the cell a second time. ``` try: files.download('fmnist_test_images.zip') except: pass ```
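As a quick sanity check (not part of the original exercise), you may want to verify that the converted TFLite model produces roughly the same probabilities as the Keras model it came from. The sketch below reuses `model`, `interpreter`, `input_index`, and `output_index` defined earlier, and assumes the float-preprocessed `test_batches` from the Preprocessing section (re-run that cell if the export step above has overwritten `test_batches` with the non-normalized version).

```
# Compare Keras and TFLite predictions on a handful of test images
for img, label in test_batches.take(5):
    keras_probs = model(img, training=False).numpy()   # Keras inference
    interpreter.set_tensor(input_index, img)           # TFLite inference
    interpreter.invoke()
    tflite_probs = interpreter.get_tensor(output_index)
    # The two should agree closely unless aggressive quantization was used
    print('max abs diff:', np.abs(keras_probs - tflite_probs).max())
```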
# Clean and Analyze Employee Exit Surveys In this project, we'll clean and analyze exit surveys from employees of the Department of Education, Training and Employment (DETE)}) and the Technical and Further Education (TAFE) body of the Queensland government in Australia. The TAFE exit survey can be found here and the survey for the DETE can be found here. We'll pretend our stakeholders want us to combine the results for both surveys to answer the following question: - Are employees who only worked for the institutes for a short period of time resigning due to some kind of dissatisfaction? What about employees who have been there longer? # Introduction First, we'll read in the datasets and do some initial exploration. ``` #Read in the data import pandas as pd import numpy as np dete_survey = pd.read_csv('dete_survey.csv') #Quick exploration of the data pd.options.display.max_columns = 150 # to avoid truncated output dete_survey.head() dete_survey.info() #Read in the data tafe_survey = pd.read_csv("tafe_survey.csv") #Quick exploration of the data tafe_survey.head() tafe_survey.info() ``` We can make the following observations based on the work above: * The dete_survey dataframe contains 'Not Stated' values that indicate values are missing, but they aren't represented as NaN. * Both the dete_survey and tafe_survey contain many columns that we don't need to complete our analysis. * Each dataframe contains many of the same columns, but the column names are different. * There are multiple columns/answers that indicate an employee resigned because they were dissatisfied. # Identify Missing Values and Drop Unneccessary Columns First, we'll correct the Not Stated values and drop some of the columns we don't need for our analysis. ``` # Read in the data again, but this time read 'Not Stated' values as 'NaN' dete_survey = pd.read_csv('dete_survey.csv', na_values='Not Stated') #Quick exploration of the data dete_survey.head() # Remove columns we don't need for our analysis dete_survey_updated = dete_survey.drop(dete_survey.columns[28:49], axis=1) tafe_survey_updated = tafe_survey.drop(tafe_survey.columns[17:66], axis=1) #Check that the columns were dropped print(dete_survey_updated.columns) print(tafe_survey_updated.columns) ``` # Rename Columns Next, we'll standardize the names of the columns we want to work with, because we eventually want to combine the dataframes. ``` # Clean the column names dete_survey_updated.columns = dete_survey_updated.columns.str.lower().str.strip().str.replace(' ', '_') # Check that the column names were updated correctly dete_survey_updated.columns # Update column names to match the names in dete_survey_updated mapping = {'Record ID': 'id', 'CESSATION YEAR': 'cease_date', 'Reason for ceasing employment': 'separationtype', 'Gender. What is your Gender?': 'gender', 'CurrentAge. Current Age': 'age', 'Employment Type. Employment Type': 'employment_status', 'Classification. Classification': 'position', 'LengthofServiceOverall. Overall Length of Service at Institute (in years)': 'institute_service', 'LengthofServiceCurrent. Length of Service at current workplace (in years)': 'role_service'} tafe_survey_updated = tafe_survey_updated.rename(mapping, axis = 1) # Check that the specified column names were updated correctly tafe_survey_updated.columns ``` # Filter the Data For this project, we'll only analyze survey respondents who resigned, so we'll only select separation types containing the string 'Resignation'. 
``` # Check the unique values for the separationtype column tafe_survey_updated['separationtype'].value_counts() # Check the unique values for the separationtype column dete_survey_updated['separationtype'].value_counts() # Check the unique values for the separationtype column dete_survey_updated['separationtype'].value_counts() # Update all separation types containing the word "resignation" to 'Resignation' dete_survey_updated['separationtype'] = dete_survey_updated['separationtype'].str.split('-').str[0] # Check the values in the separationtype column were updated correctly dete_survey_updated['separationtype'].value_counts() # Select only the resignation separation types from each dataframe dete_resignations = dete_survey_updated[dete_survey_updated['separationtype'] == 'Resignation'].copy() tafe_resignations = tafe_survey_updated[tafe_survey_updated['separationtype'] == 'Resignation'].copy() ``` # Verify the Data Below, we clean and explore the cease_date and dete_start_date columns to make sure all of the years make sense. We'll use the following criteria: * Since the cease_date is the last year of the person's employment and the dete_start_date is the person's first year of employment, it wouldn't make sense to have years after the current date. * Given that most people in this field start working in their 20s, it's also unlikely that the dete_start_date was before the year 1940. ``` # Check the unique values dete_resignations['cease_date'].value_counts() # Extract the years and convert them to a float type dete_resignations['cease_date'] = dete_resignations['cease_date'].str.split('/').str[-1] dete_resignations['cease_date'] = dete_resignations['cease_date'].astype("float") # Check the values again and look for outliers dete_resignations['cease_date'].value_counts() # Check the unique values and look for outliers dete_resignations['dete_start_date'].value_counts().sort_values() # Check the unique values tafe_resignations['cease_date'].value_counts().sort_values() ``` Below are our findings: * The years in both dataframes don't completely align. The tafe_survey_updated dataframe contains some cease dates in 2009, but the dete_survey_updated dataframe does not. The tafe_survey_updated dataframe also contains many more cease dates in 2010 than the dete_survey_updaed dataframe. Since we aren't concerned with analyzing the results by year, we'll leave them as is. # Create a New Column¶ Since our end goal is to answer the question below, we need a column containing the length of time an employee spent in their workplace, or years of service, in both dataframes. * End goal: Are employees who have only worked for the institutes for a short period of time resigning due to some kind of dissatisfaction? What about employees who have been at the job longer? The tafe_resignations dataframe already contains a "service" column, which we renamed to institute_service. Below, we calculate the years of service in the dete_survey_updated dataframe by subtracting the dete_start_date from the cease_date and create a new column named institute_service. ``` # Calculate the length of time an employee spent in their respective workplace and create a new column dete_resignations['institute_service'] = dete_resignations['cease_date'] - dete_resignations['dete_start_date'] # Quick check of the result dete_resignations['institute_service'].head() ``` # Identify Dissatisfied Employees¶ Next, we'll identify any employees who resigned because they were dissatisfied. 
Below are the columns we'll use to categorize employees as "dissatisfied" from each dataframe: 1. tafe_survey_updated: * Contributing Factors. Dissatisfaction * Contributing Factors. Job Dissatisfaction 2. dafe_survey_updated: * job_dissatisfaction * dissatisfaction_with_the_department * physical_work_environment * lack_of_recognition * lack_of_job_security * work_location * employment_conditions * work_life_balance * workload If the employee indicated any of the factors above caused them to resign, we'll mark them as dissatisfied in a new column. After our changes, the new dissatisfied column will contain just the following values: * True: indicates a person resigned because they were dissatisfied in some way * False: indicates a person resigned because of a reason other than dissatisfaction with the job * NaN: indicates the value is missing ``` # Check the unique values tafe_resignations['Contributing Factors. Dissatisfaction'].value_counts() # Check the unique values tafe_resignations['Contributing Factors. Job Dissatisfaction'].value_counts() # Update the values in the contributing factors columns to be either True, False, or NaN def update_vals(x): if x == '-': return False elif pd.isnull(x): return np.nan else: return True tafe_resignations['dissatisfied'] = tafe_resignations[['Contributing Factors. Dissatisfaction', 'Contributing Factors. Job Dissatisfaction']].applymap(update_vals).any(1, skipna=False) tafe_resignations_up = tafe_resignations.copy() # Check the unique values after the updates tafe_resignations_up['dissatisfied'].value_counts(dropna=False) # Update the values in columns related to dissatisfaction to be either True, False, or NaN dete_resignations['dissatisfied'] = dete_resignations[['job_dissatisfaction', 'dissatisfaction_with_the_department', 'physical_work_environment', 'lack_of_recognition', 'lack_of_job_security', 'work_location', 'employment_conditions', 'work_life_balance', 'workload']].any(1, skipna=False) dete_resignations_up = dete_resignations.copy() dete_resignations_up['dissatisfied'].value_counts(dropna=False) ``` # Combining the Data¶ Below, we'll add an institute column so that we can differentiate the data from each survey after we combine them. Then, we'll combine the dataframes and drop any remaining columns we don't need. ``` # Add an institute column dete_resignations_up['institute'] = 'DETE' tafe_resignations_up['institute'] = 'TAFE' # Combine the dataframes combined = pd.concat([dete_resignations_up, tafe_resignations_up], ignore_index=True) # Verify the number of non null values in each column combined.notnull().sum().sort_values() # Drop columns with less than 500 non null values combined_updated = combined.dropna(thresh = 500, axis =1).copy() ``` # Clean the Service Column¶ Next, we'll clean the institute_service column and categorize employees according to the following definitions: * New: Less than 3 years in the workplace * Experienced: 3-6 years in the workplace * Established: 7-10 years in the workplace * Veteran: 11 or more years in the workplace Our analysis is based on this article, which makes the argument that understanding employee's needs according to career stage instead of age is more effective. 
``` # Check the unique values combined_updated['institute_service'].value_counts(dropna=False) # Extract the years of service and convert the type to float combined_updated['institute_service_up'] = combined_updated['institute_service'].astype('str').str.extract(r'(\d+)') combined_updated['institute_service_up'] = combined_updated['institute_service_up'].astype('float') # Check the years extracted are correct combined_updated['institute_service_up'].value_counts() # Convert years of service to categories def transform_service(val): if val >= 11: return "Veteran" elif 7 <= val < 11: return "Established" elif 3 <= val < 7: return "Experienced" elif pd.isnull(val): return np.nan else: return "New" combined_updated['service_cat'] = combined_updated['institute_service_up'].apply(transform_service) # Quick check of the update combined_updated['service_cat'].value_counts() ``` # Perform Some Initial Analysis¶ Finally, we'll replace the missing values in the dissatisfied column with the most frequent value, False. Then, we'll calculate the percentage of employees who resigned due to dissatisfaction in each service_cat group and plot the results. Note that since we still have additional missing values left to deal with, this is meant to be an initial introduction to the analysis, not the final analysis. ``` # Verify the unique values combined_updated['dissatisfied'].value_counts(dropna=False) # Replace missing values with the most frequent value, False combined_updated['dissatisfied'] = combined_updated['dissatisfied'].fillna(False) # Calculate the percentage of employees who resigned due to dissatisfaction in each category dis_pct = combined_updated.pivot_table(index='service_cat', values='dissatisfied') # Plot the results %matplotlib inline dis_pct.plot(kind='bar', rot=30) ``` From the initial analysis above, we can tentatively conclude that employees with 7 or more years of service are more likely to resign due to some kind of dissatisfaction with the job than employees with less than 7 years of service. However, we need to handle the rest of the missing data to finalize our analysis. ## Conclusions * Explored the data and figured out how to prepare it for analysis * Corrected some of the missing values * Dropped any data not needed for our analysis * Renamed our columns * Verified the quality of our data * Created a new institute_service column * Cleaned the Contributing Factors columns * Created a new column indicating if an employee resigned because they were dissatisfied in some way * Combined the data * Cleaned the institute_service column * Handled the missing values in the dissatisfied column * Aggregated the data
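As a possible next step for the missing-data handling mentioned above, a quick audit like the sketch below (reusing the `combined_updated` dataframe from this notebook) shows how much of the remaining data that cleanup would need to touch.

```
# How many values are still missing in each remaining column?
print(combined_updated.isnull().sum().sort_values(ascending=False))

# How many respondents fall into each career-stage category,
# and how many lack a usable years-of-service value?
print(combined_updated['service_cat'].value_counts(dropna=False))
print(combined_updated['institute_service_up'].isnull().sum(),
      'respondents have no usable years-of-service value')
```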
# Lec 11. Simple CNN : 한글 자모 ``` import torch import torch.nn as nn import torch.optim as optim import torch.nn.init as init import torch.utils.data as Data import torchvision.utils as utils import torchvision.datasets as dsets import torchvision.transforms as transforms import numpy as np import os from PIL import Image import matplotlib.pyplot as plt %matplotlib inline ``` ## Load Custom Data * transforms에 대해서는 다음 참조 https://pytorch.org/docs/stable/torchvision/transforms.html ``` img_dir = "data/hangul/" img_data = dsets.ImageFolder(img_dir, transforms.Compose([ transforms.Grayscale(), # # Data Augmentation # transforms.RandomRotation(15) # transforms.CenterCrop(28), # transforms.Lambda(lambda x: x.rotate(15)), # # Data Nomalization # transforms.Normalize(mean=(0.5,), std=(0.5,)) transforms.ToTensor(), ])) print(img_data.classes) print(img_data.class_to_idx) # class 39 - 각 class별 720개 이미지 존재. len(img_data) # = 39 * 720 img_data.imgs[0] img = Image.open("data/hangul/ㅇ/111.png").convert("L") # 36 * 36 image imgarr = np.array(img) print(imgarr.shape) plt.imshow(imgarr, cmap='gray') batch_size = 100 font_num = 720 from torch.utils.data import Sampler def train_test_split(data, train_ratio, stratify, stratify_num, batch_size) : length = len(data) # 층화 추출 if stratify : label_num = int(len(data)/stratify_num) cut = int(stratify_num*train_ratio) train_indices = np.random.permutation(np.arange(stratify_num))[:cut] test_indices = np.random.permutation(np.arange(stratify_num))[cut:] for i in range(1, label_num) : train_indices = np.concatenate((train_indices, np.random.permutation(np.arange(stratify_num))[:cut] + stratify_num*i)) test_indices = np.concatenate((test_indices, np.random.permutation(np.arange(stratify_num))[cut:] + stratify_num*i)) else : cut = int(len(data)*train_ratio) train_indices = np.random.permutation(np.arange(length))[:cut] test_indices = np.random.permutation(np.arange(length))[cut:] sampler = Data.SubsetRandomSampler(train_indices) train_loader = Data.DataLoader(data, batch_size=batch_size, shuffle=False, sampler=sampler, num_workers=0, drop_last=True) test_loader = Data.DataLoader(data, batch_size=batch_size, shuffle=False, sampler=sampler, num_workers=0, drop_last=True) return train_loader, test_loader, len(train_indices), len(test_indices) train_loader, test_loader, train_num, test_num = train_test_split(img_data, 0.8, True, font_num, batch_size) train_num, test_num ``` ## Define Model ``` def c_conv(N, K, P=0, S=1): return int((N + 2*P - K) / S + 1) def c_pool(N, K): return int(N/K) c0 = 36 c1 = c_conv(c0, 3) c2 = c_conv(c1, 3) c3 = c_pool(c2, 2) c4 = c_conv(c3, 3) c5 = c_conv(c4, 3) c6 = c_pool(c5, 2) print(c1, c2, c3, c4, c5, c6) class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() # Test 1 - 84.78 % self.layer = nn.Sequential( nn.Conv2d(1,16,3), # 36 --> 34 nn.BatchNorm2d(16), nn.ReLU(), nn.Conv2d(16,32,3), # 32 nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2,2), # 16 nn.Conv2d(32,64,3), # 14 nn.BatchNorm2d(64), nn.ReLU(), nn.Conv2d(64,128,3), # 12 nn.BatchNorm2d(128), nn.ReLU(), nn.MaxPool2d(2,2) # 6 ) self.fc_layer = nn.Sequential( nn.Linear(128*6*6,300), nn.ReLU(), nn.Linear(300,39) ) # Weight Initialization for m in self.modules(): if isinstance(m, nn.Conv2d): # init.xavier_normal(m.weight.data) init.kaiming_normal_(m.weight.data) m.bias.data.fill_(0) elif isinstance(m, nn.Linear): init.kaiming_normal_(m.weight.data) m.bias.data.fill_(0) def forward(self,x): out = self.layer(x) out = out.view(batch_size, -1) out = self.fc_layer(out) return out model 
= CNN().cuda() loss = nn.CrossEntropyLoss() # SGD optimizer = optim.SGD(model.parameters(), lr=0.1) # Adam # optimizer = optim.Adam(model.parameters(), lr=0.001) # Momentum & Weight Regularization(L2) # optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-5) num_epochs = 10 # Learning Rate Scheduler # scheduler = lr_scheduler.StepLR(optimizer, step_size=1, gamma= 0.99) # scheduler = lr_scheduler.MultiStepLR(optimizer, milestones=[10,30,80], gamma= 0.1) # scheduler = lr_scheduler.ExponentialLR(optimizer, gamma= 0.99) # scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min') total_batch = train_num//batch_size for epoch in range(num_epochs): # scheduler.step() for i, (batch_images, batch_labels) in enumerate(train_loader): X = batch_images.cuda() Y = batch_labels.cuda() pred = model(X) cost = loss(pred, Y) optimizer.zero_grad() cost.backward() optimizer.step() if (i+1) == total_batch: print('Epoch [%d/%d], lter [%d/%d] Loss: %.5f'%(epoch+1, num_epochs, i+1, total_batch, cost.item())) # torch.save(model.state_dict(), 'cnn_hangul_Adam.pkl') # print("Model Saved!") ``` ## Test Model ``` model.eval() correct = 0 total = 0 for images, labels in test_loader: images = images.cuda() outputs = model(images) # print(outputs.data) # 39 class에 대한 확률 _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels.cuda()).sum() correct = correct.cpu().numpy() print('correct :', correct) print('total :', total) print('Accuracy of test images: %f' % (100 * correct / total)) ```
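To see which jamo classes the network confuses most, a per-class breakdown can complement the overall accuracy. This is a minimal sketch that reuses `model`, `test_loader`, and `img_data.classes` from above (and keeps the notebook's CUDA usage).

```
# Per-class accuracy on the test loader
import numpy as np

n_classes = len(img_data.classes)
class_correct = np.zeros(n_classes)
class_total = np.zeros(n_classes)

model.eval()
with torch.no_grad():
    for images, labels in test_loader:
        outputs = model(images.cuda())
        _, predicted = torch.max(outputs, 1)
        predicted = predicted.cpu()
        for label, pred in zip(labels, predicted):
            class_total[label.item()] += 1
            class_correct[label.item()] += int(label.item() == pred.item())

for idx, name in enumerate(img_data.classes):
    if class_total[idx] > 0:
        print('{}: {:.1f} %'.format(name, 100 * class_correct[idx] / class_total[idx]))
```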
## Face and Facial Keypoint detection After you've trained a neural network to detect facial keypoints, you can then apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input and, so, to detect any face, you'll first have to do some pre-processing. 1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook). 2. Pre-process those face images so that they are grayscale, and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was tp rescale, normalize, and turn any iimage into a Tensor to be accepted as input to your CNN. 3. Use your trained model to detect facial keypoints on the image. --- In the next python cell we load in required libraries for this section of the project. ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg %matplotlib inline ``` #### Select an image Select an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory. ``` import cv2 # load in color image for face detection image = cv2.imread('images/obamas.jpg') # switch red and blue color channels # --> by default OpenCV assumes BLUE comes first, not RED as in many images image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # plot the image fig = plt.figure(figsize=(9,9)) plt.imshow(image) ``` ## Detect all faces in an image Next, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image. In the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original). You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors. An example of face detection on a variety of images is shown below. <img src='images/haar_cascade_ex.png' width=80% height=80%/> ``` # load in a haar cascade classifier for detecting frontal faces face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml') # run the detector # the output here is an array of detections; the corners of each detection box # if necessary, modify these parameters until you successfully identify every face in a given image faces = face_cascade.detectMultiScale(image, 1.2, 2) # make a copy of the original image to plot detections on image_with_detections = image.copy() # loop over the detected faces, mark the image where each face is found for (x,y,w,h) in faces: # draw a rectangle around each detected face # you may also need to change the width of the rectangle drawn depending on image resolution cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3) fig = plt.figure(figsize=(9,9)) plt.imshow(image_with_detections) ``` ## Loading in a trained model Once you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector. First, load your best model by its filename. 
``` import torch from models import Net net = Net() ## TODO: load the best saved model parameters (by your path name) ## You'll need to un-comment the line below and add the correct name for *your* saved model # net.load_state_dict(torch.load('saved_models/keypoints_model_1.pt')) ## print out your net and prepare it for testing (uncomment the line below) # net.eval() ``` ## Keypoint detection Now, we'll loop over each detected face in an image (again!) only this time, you'll transform those faces in Tensors that your CNN can accept as input images. ### TODO: Transform each detected face into an input Tensor You'll need to perform the following steps for each detected face: 1. Convert the face from RGB to grayscale 2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255] 3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested) 4. Reshape the numpy image into a torch image. **Hint**: The sizes of faces detected by a Haar detector and the faces your network has been trained on are of different sizes. If you find that your model is generating keypoints that are too small for a given face, try adding some padding to the detected `roi` before giving it as input to your model. You may find it useful to consult to transformation code in `data_load.py` to help you perform these processing steps. ### TODO: Detect and display the predicted keypoints After each face has been appropriately converted into an input Tensor for your network to see as input, you can apply your `net` to each face. The ouput should be the predicted the facial keypoints. These keypoints will need to be "un-normalized" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following with facial keypoints that closely match the facial features on each individual face: <img src='images/michelle_detected.png' width=30% height=30%/> ``` image_copy = np.copy(image) # loop over the detected faces from your haar cascade for (x,y,w,h) in faces: # Select the region of interest that is the face in the image roi = image_copy[y:y+h, x:x+w] ## TODO: Convert the face region from RGB to grayscale ## TODO: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255] ## TODO: Rescale the detected face to be the expected square size for your CNN (224x224, suggested) ## TODO: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W) ## TODO: Make facial keypoint predictions using your loaded, trained network ## TODO: Display each detected face and the corresponding keypoints ```
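One possible way to fill in the TODOs above is sketched below. It assumes the network was trained on 224x224 grayscale crops normalized to [0, 1]; `pad` is an ad-hoc margin added around the Haar detection and is not part of the original notebook.

```
import torch
import cv2

pad = 50  # extra margin around the Haar box; tune per image resolution

for (x, y, w, h) in faces:
    # grab a padded region of interest around the detected face
    roi = image_copy[max(y - pad, 0):y + h + pad, max(x - pad, 0):x + w + pad]

    # 1. RGB -> grayscale
    gray = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)

    # 2. scale the color range to [0, 1]
    gray = gray / 255.0

    # 3. resize to the square input size the network expects (224x224 here)
    gray = cv2.resize(gray, (224, 224))

    # 4. numpy image (H x W) -> torch tensor (batch x channel x H x W)
    face_tensor = torch.from_numpy(gray).float().unsqueeze(0).unsqueeze(0)

    # predict keypoints; they are still in the network's normalized output
    # space and need to be "un-normalized" before plotting
    keypoints = net(face_tensor)
```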
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import gzip #loading the data from the given file image_size = 28 num_images = 55000 f = gzip.open('train-images-idx3-ubyte.gz','r') f.read(16) buf = f.read(image_size * image_size * num_images) data = np.frombuffer(buf, dtype=np.uint8).astype(np.float32) data = data.reshape(num_images, image_size, image_size, 1) #pritning the images image = np.asarray(data[550]).squeeze() plt.imshow(image) plt.show() #storing the data in the form of matrix X=np.asarray(data[:]) X=X.squeeze() X=X.reshape(X.shape[0],X.shape[2]*X.shape[1]) X=X.T/255 X.shape #knowing the no of features and the no of data points in the given array m=X.shape[1] n=X.shape[0] print(m) print(n) #loading the labels f = gzip.open('train-labels-idx1-ubyte.gz','r') f.read(8) Y = np.zeros((1,m)) for i in range(0,54999): buf = f.read(1) labels = np.frombuffer(buf, dtype=np.uint8).astype(np.int64) Y[0,i]=labels print(Y[0,550]) print(Y.shape) Y1= np.zeros((10,m)) for i in range (0,m): for j in range(0,10): if(j==int(Y[0,i])): Y1[j,i]=1 else: Y1[j,i]=0 Y=Y1 ``` df = pd.read_csv('Downloads/mnist_train.csv',header = None) data = np.array(df) X = (data[:,1:].transpose())/255 m = X.shape[1] n = X.shape[0] Y_orig = data[:,0:1].transpose() Y = np.zeros((10,m)) for i in range(m): Y[int(Y_orig[0,i]),i] = 1 ``` def relu(Z): result = (Z + np.abs(Z))/2 return result def relu_backward(Z): result = (Z + np.abs(Z))/(2*np.abs(Z)) return result def softmax(Z): temp = np.exp(Z) result = temp/np.sum(temp,axis = 0,keepdims = True) return result def initialize_parameters(layer_dims): parameters = {} L = len(layer_dims) - 1 for l in range(1,L + 1): parameters["W" + str(l)] = np.random.randn(layer_dims[l],layer_dims[l-1])*0.01 parameters["b" + str(l)] = np.zeros((layer_dims[l],1)) #print(parameters) return parameters def forward_prop(X,parameters): cache = {} L = len(layer_dims) - 1 A_prev = X for l in range(1,L): Z = parameters["W" + str(l)].dot(A_prev) + parameters["b" + str(l)] A = relu(Z) cache["Z" + str(l)] = Z A_prev = A Z = parameters["W" + str(L)].dot(A_prev) + parameters["b" + str(L)] AL = softmax(Z) cache["Z" + str(L)] = Z return AL,cache def compute_cost(AL,Y): m = AL.shape[1] cost = (np.sum(-(Y * np.log(AL))))/(m) return cost def backward_prop(X,Y,cache,parameters,AL,layer_dims): m = X.shape[1] dparameters = {} L = len(layer_dims) - 1 dZ = AL - Y dparameters["dW" + str(L)] = dZ.dot(relu(cache["Z" + str(L-1)]).transpose())/m #dparameters["dW" + str(L)] = dZ.dot(X.transpose())/m dparameters["db" + str(L)] = np.sum(dZ,axis = 1,keepdims = True)/m for l in range(1,L): dZ = ((parameters["W" + str(L-l+1)].transpose()).dot(dZ)) * (relu_backward(cache["Z" + str(L-l)])) if L-l-1 != 0: dparameters["dW" + str(L-l)] = dZ.dot(relu(cache["Z" + str(L-1-l)]).transpose())/m else: dparameters["dW" + str(L-l)] = dZ.dot(X.transpose())/m dparameters["db" + str(L-l)] = np.sum(dZ,axis = 1,keepdims = True)/m return dparameters def update_parameters(parameters,dparameters,layer_dims,learning_rate): L = len(layer_dims) - 1 for l in range(1,L+1): parameters["W" + str(l)] = parameters["W" + str(l)] - learning_rate*dparameters["dW" + str(l)] parameters["b" + str(l)] = parameters["b" + str(l)] - learning_rate*dparameters["db" + str(l)] return parameters def model(X,Y,layer_dims,learning_rate,num_iters): costs = [] parameters = initialize_parameters(layer_dims) for i in range(num_iters): AL,cache = forward_prop(X,parameters) cost = compute_cost(AL,Y) costs.append(cost) dparameters = 
backward_prop(X,Y,cache,parameters,AL,layer_dims)
        parameters = update_parameters(parameters,dparameters,layer_dims,learning_rate)
        print(i,"\t",cost)
    return parameters,costs

# training
layer_dims = [784,120,10]
parameters,costs = model(X,Y,layer_dims,0.5,2000)
plt.plot(costs)

# testing on the held-out MNIST test set
df = pd.read_csv('mnist_test.csv',header = None)
data = np.array(df)
X_test = (data[:,1:].transpose())/255
Y_test = data[:,0:1].transpose()

accuracy = 0
m_test = X_test.shape[1]
predict = np.zeros((1,m_test))
A_test,cache = forward_prop(X_test,parameters)
for i in range(m_test):
    # pick the class with the highest softmax probability
    max_prob = 0
    for j in range(10):
        if A_test[j,i] > max_prob:
            max_prob = A_test[j,i]
            max_index = j
    predict[0,i] = max_index
    if predict[0,i] == Y_test[0,i]:
        accuracy = accuracy + 1
accuracy = (accuracy/m_test)*100
print(accuracy,"%")

index = 897  # change index to view different examples
print("It's a",int(predict[0,index]))
plt.imshow(X_test[:,index].reshape(28,28))
```
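The per-example loop above can also be written with vectorized NumPy operations, which is both shorter and faster. A minimal sketch, assuming `A_test` and `Y_test` from the cell above:

```
# Vectorized version of the prediction / accuracy computation above
predict_vec = np.argmax(A_test, axis=0).reshape(1, -1)   # most probable class per column
accuracy_vec = np.mean(predict_vec == Y_test) * 100      # fraction of correct predictions
print(accuracy_vec, "%")
```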
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Partial Differential Equations <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/community/en/pdes.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/examples/blob/master/community/en/pdes.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation). You'll simulate the surface of square pond as a few raindrops land on it. ## Basic setup A few imports you'll need. ``` #Import libraries for simulation import tensorflow as tf assert tf.__version__.startswith('2') import numpy as np #Imports for visualization import PIL.Image from io import BytesIO from IPython.display import clear_output, Image, display ``` A function for displaying the state of the pond's surface as an image. ``` def DisplayArray(a, fmt='jpeg', rng=[0,1]): """Display an array as a picture.""" a = (a - rng[0])/float(rng[1] - rng[0])*255 a = np.uint8(np.clip(a, 0, 255)) f = BytesIO() PIL.Image.fromarray(a).save(f, fmt) clear_output(wait = True) display(Image(data=f.getvalue())) ``` ## Computational convenience functions ``` @tf.function def make_kernel(a): """Transform a 2D array into a convolution kernel""" a = np.asarray(a) a = a.reshape(list(a.shape) + [1,1]) return tf.constant(a, dtype=1) @tf.function def simple_conv(x, k): """A simplified 2D convolution operation""" x = tf.expand_dims(tf.expand_dims(x, 0), -1) y = tf.nn.depthwise_conv2d(input=x, filter=k, strides=[1, 1, 1, 1], padding='SAME') return y[0, :, :, 0] @tf.function def laplace(x): """Compute the 2D laplacian of an array""" laplace_k = make_kernel([[0.5, 1.0, 0.5], [1.0, -6., 1.0], [0.5, 1.0, 0.5]]) return simple_conv(x, laplace_k) ``` ## Define the PDE Your pond is a perfect 500 x 500 square, as is the case for most ponds found in nature. ``` N = 500 ``` Here you create your pond and hit it with some rain drops. ``` # Initial Conditions -- some rain drops hit a pond # Set everything to zero u_init = np.zeros([N, N], dtype=np.float32) ut_init = np.zeros([N, N], dtype=np.float32) # Some rain drops hit a pond at random points for n in range(40): a,b = np.random.randint(0, N, 2) u_init[a,b] = np.random.uniform() DisplayArray(u_init, rng=[-0.1, 0.1]) ``` Now let's specify the details of the differential equation. ``` # Parameters: # eps -- time resolution # damping -- wave damping eps = 0.03 damping = 0.04 # Create variables for simulation state U = tf.Variable(u_init) Ut = tf.Variable(ut_init) ``` ## Run the simulation This is where it gets fun -- running time forward with a simple for loop. 
``` # Run 1000 steps of PDE for i in range(1000): # Step simulation # Discretized PDE update rules U = U + eps * Ut Ut = Ut + eps * (laplace(U) - damping * Ut) # Show final image DisplayArray(U.numpy(), rng=[-0.1, 0.1]) ``` Look! Ripples!
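For reference, the two update lines inside the loop above are an explicit time-stepping of a damped wave equation; writing the scheme out makes the roles of `eps` and `damping` explicit (note that the Laplacian is evaluated on the freshly updated `U`).

$$
\frac{\partial^2 u}{\partial t^2} = \nabla^2 u - \gamma\,\frac{\partial u}{\partial t}
$$

$$
u^{(n+1)} = u^{(n)} + \epsilon\, v^{(n)}, \qquad
v^{(n+1)} = v^{(n)} + \epsilon\left(\nabla^2 u^{(n+1)} - \gamma\, v^{(n)}\right)
$$

where $u$ is the surface height (`U`), $v = \partial u/\partial t$ is its rate of change (`Ut`), $\epsilon$ is the time resolution (`eps`), and $\gamma$ is the damping coefficient (`damping`).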
# NumPy Numpy is the core library for scientific computing in Python. <br/> It provides a high-performance multidimensional array object, and tools for working with these arrays. <br/> Official NumPy Documentation: https://numpy.org/doc/stable/reference/ ``` # Install NumPy # ! pip install numpy ``` Since NumPy is not a default thing in Python. We import this library. When we import a library we allow all the functions and types with the initial of that library. ``` # Import NumPy import numpy as np ``` # NumPy Arrays A grid of values, all of the same type. <br/> **Rank:** number of dimensions of the array <br/> **Shape:** an array of tuple of integers giving the size of the array along each dimension. ``` # Rank 1 array a = np.array([1, 2, 3]) print(type(a)) # Prints data type print(a.shape) print(a[0], a[1], a[2]) # Indexing a[0] = 5 # Assigning print(a) # Rank 2 array b = np.array([ [1,2,3], [4,5,6] ]) ''' # of elements in first 3rd bracket => 2 # of elements in second 3rd bracket => 3 ''' print(b.shape) print(b[0, 0], b[0, 1], b[1, 0], b[1,2]) ``` ## Special Arrays ``` a = np.zeros((6,4)) # Create an array of all zeros a np.zeros_like(b,dtype=float) b = np.ones((3,2)) # Create an array of all ones b c = np.full((6,4), 7) # Create a constant array c d = np.eye(5) # Create a 2x2 identity matrix d e = np.random.random((4,3)) # Create an array filled with random values e ``` ## Indexing ``` a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]]) a a[:2,:3] b = a[:2, 1:3] b print(a[0, 1]) # Prints "2" b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1] print(a[0, 1]) # Prints "77" a[1, :] a[1:2, :] a[:, 1] a[:, 1:2] np.arange(2,10,2) ``` ## Boolean array indexing ``` a bool_idx = (a>10) bool_idx a[bool_idx] a [ a>10 ] ``` # Data Types ``` x = np.array([1, 2]) print(x.dtype) x = np.array([1.0, 2.0]) print(x.dtype) x = np.array([1, 2], dtype=np.float64) # Foring a particular datatype print(x,x.dtype) x.dtype ``` # Operations ``` x = np.array([[1,2],[3,4]], dtype=np.float64) y = np.array([[5,6],[7,8]], dtype=np.float64) x,y # Adding two arrays element-wise print(x + y) print(np.add(x, y)) # Substracting two arrays element-wise print(x - y) print(np.subtract(x, y)) # Mutiplication Element-wise print(x * y) print(np.multiply(x, y)) # Elementwise division print(x / y) print(np.divide(x, y)) # Elementwise square root print(np.sqrt(x)) # Matrix Multiplication print(x.dot(y)) print(np.dot(x, y)) x # Sum of all elements in the array np.sum(x) print(np.sum(x, axis=0)) # Compute sum of each column print(np.sum(x, axis=1)) # Compute sum of each row a # Transpose a.T ``` # Broadcasting ``` x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]]) v = np.array([1, 0, 1]) y = x + v # Add v to each row of x using broadcasting print(y) x = np.array([[1,2,3], [4,5,6]]) y = np.array([4,5]) (x.T+y).T x, x.shape x.T, x.T.shape y, y.shape x.T+y (x.T+y).T x*2 x+2 ```
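The double transpose used above to add `y` to each column of `x` works, but giving `y` an explicit column shape expresses the same broadcast more directly. A small sketch using the same arrays as above:

```
import numpy as np

# Add y to each column of x by giving y an explicit column shape
x = np.array([[1, 2, 3],
              [4, 5, 6]])
y = np.array([4, 5])

print((x.T + y).T)           # transpose trick from above
print(x + y[:, np.newaxis])  # same result: y becomes shape (2, 1) and broadcasts
print(x + y.reshape(-1, 1))  # equivalent, using reshape
```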
# Backtest Orbit Model In this section, we will cover: - How to create a TimeSeriesSplitter - How to create a BackTester and retrieve the backtesting results - How to leverage the backtesting to tune the hyper-paramters for orbit models ``` %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt import orbit from orbit.models import LGT, DLT from orbit.diagnostics.backtest import BackTester, TimeSeriesSplitter from orbit.diagnostics.plot import plot_bt_predictions from orbit.diagnostics.metrics import smape, wmape from orbit.utils.dataset import load_iclaims import warnings warnings.filterwarnings('ignore') print(orbit.__version__) # load log-transformed data data = load_iclaims() data.shape ``` The way to gauge the performance of a time-series model is through re-training models with different historic periods and check their forecast within certain steps. This is similar to a time-based style cross-validation. More often, we called it `backtest` in time-series modeling. The purpose of this notebook is to illustrate how to `backtest` a single model using `BackTester` `BackTester` will compose a `TimeSeriesSplitter` within it, but `TimeSeriesSplitter` is useful as a standalone, in case there are other tasks to perform that requires splitting but not backtesting. `TimeSeriesSplitter` implemented each 'slices' as genertor, i.e it can be used in a for loop. You can also retrieve the composed `TimeSeriesSplitter` object from `BackTester` to utilize the additional methods in `TimeSeriesSplitter` Currently, there are two schemes supported for the back-testing engine: expanding window and rolling window. * **expanding window**: for each back-testing model training, the train start date is fixed, while the train end date is extended forward. * **rolling window**: for each back-testing model training, the training window length is fixed but the window is moving forward. ## Create a TimeSeriesSplitter There two main way to splitting a timeseries: expanding and rolling. Expanding window has a fixed starting point, and the window length grows as we move forward in timeseries. It is useful when we want to incoporate all historical information. On the other hand, rolling window has a fixed window length, and the starting point of the window moves forward as we move forward in timeseries. Now, we will illustrate how to use `TimeSeriesSplitter` to split the claims timeseries. ### Expanding window ``` # configs min_train_len = 380 # minimal length of window length forecast_len = 20 # length forecast window incremental_len = 20 # step length for moving forward ex_splitter = TimeSeriesSplitter(df=data, min_train_len=min_train_len, incremental_len=incremental_len, forecast_len=forecast_len, window_type='expanding', date_col='week') print(ex_splitter) ``` We can visualize the splits, green is training window and yellow it the forecasting windown. The starting point is always 0 for three splits but window length increases from 380 to 420. ``` _ = ex_splitter.plot() ``` ### Rolling window ``` # configs min_train_len = 380 # in case of rolling window, this specify the length of window length forecast_len = 20 # length forecast window incremental_len = 20 # step length for moving forward roll_splitter = TimeSeriesSplitter(data, min_train_len=min_train_len, incremental_len=incremental_len, forecast_len=forecast_len, window_type='rolling', date_col='week') ``` We can visualize the splits, green is training window and yellow it the forecasting windown. 
The window length is always 380, while the starting point moves forward 20 weeks each steps. ``` _ = roll_splitter.plot() ``` ### Specifying number of splits User can also define number of splits using `n_splits` instead of specifying minimum training length. That way, minimum training length will be automatically calculated. ``` ex_splitter2 = TimeSeriesSplitter(data, min_train_len=min_train_len, incremental_len=incremental_len, forecast_len=forecast_len, n_splits=5, window_type='expanding', date_col='week') _ = ex_splitter2.plot() ``` ### TimeSeriesSplitter as generator `TimeSeriesSplitter` is implemented as a genetor, therefore we can call `split()` to loop through it. It comes handy even for tasks other than backtest. ``` for train_df, test_df, scheme, key in roll_splitter.split(): print('Initial Claim slice {} rolling mean:{:.3f}'.format(key, train_df['claims'].mean())) ``` ## Create a BackTester Now, we are ready to do backtest, first let's initialize a `DLT` model and a `BackTester`. You pass in `TimeSeriesSplitter` parameters to `BackTester`. ``` # instantiate a model dlt = DLT( date_col='week', response_col='claims', regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'], seasonality=52, estimator='stan-map', ) # configs min_train_len = 100 forecast_len = 20 incremental_len = 100 window_type = 'expanding' bt = BackTester( model=dlt, df=data, min_train_len=min_train_len, incremental_len=incremental_len, forecast_len=forecast_len, window_type=window_type, ) ``` ## Backtest fit and predict The most expensive portion of backtesting is fitting the model iteratively. Thus, we separate the api calls for `fit_predict` and `score` to avoid redundant computation for multiple metrics or scoring methods ``` bt.fit_predict() ``` Once `fit_predict()` is called, the fitted models and predictions can be easily retrieved from `BackTester`. Here the data is grouped by the date, split_key, and whether or not that observation is part of the training or test data ``` predicted_df = bt.get_predicted_df() predicted_df.head() ``` We also provide a plotting utility to visualize the predictions against the actuals for each split. ``` plot_bt_predictions(predicted_df, metrics=smape, ncol=2, include_vline=True); ``` Users might find this useful for any custom computations that may need to be performed on the set of predicted data. Note that the columns are renamed to generic and consistent names. Sometimes, it might be useful to match the data back to the original dataset for ad-hoc diagnostics. This can easily be done by merging back to the orignal dataset ``` predicted_df.merge(data, left_on='date', right_on='week') ``` ## Backtest Scoring The main purpose of `BackTester` are the evaluation metrics. Some of the most widely used metrics are implemented and built into the `BackTester` API. The default metric list is **smape, wmape, mape, mse, mae, rmsse**. ``` bt.score() ``` It is possible to filter for only specific metrics of interest, or even implement your own callable and pass into the `score()` method. For example, see this function that uses last observed value as a predictor and computes the `mse`. Or `naive_error` which computes the error as the delta between predicted values and the training period mean. 
Note these are not really useful error metrics, just showing some examples of callables you can use ;) ``` def mse_naive(test_actual): actual = test_actual[1:] predicted = test_actual[:-1] return np.mean(np.square(actual - predicted)) def naive_error(train_actual, test_predicted): train_mean = np.mean(train_actual) return np.mean(np.abs(test_predicted - train_mean)) bt.score(metrics=[mse_naive, naive_error]) ``` It doesn't take additional time to refit and predict the model, since the results are stored when `fit_predict()` is called. Check docstrings for function criteria that is required for it to be supported with this api. In some cases, we may want to evaluate our metrics on both train and test data. To do this you can call score again with the following indicator ``` bt.score(include_training_metrics=True) ``` ## Backtest Get Models In cases where `BackTester` doesn't cut it or for more custom use-cases, there's an interface to export the `TimeSeriesSplitter` and predicted data, as shown earlier. It's also possible to get each of the fitted models for deeper diving ``` fitted_models = bt.get_fitted_models() model_1 = fitted_models[0] model_1.get_regression_coefs() ``` BackTester composes a TimeSeriesSplitter within it, but TimeSeriesSplitter can also be created on its own as a standalone object. See section below on TimeSeriesSplitter for more details on how to use the splitter. All of the additional TimeSeriesSplitter args can also be passed into BackTester on instantiation ``` ts_splitter = bt.get_splitter() _ = ts_splitter.plot() ``` ## Hyperparameter Tunning After seeing the results fromt the backtest, users may wish to fine tune the hyperparmeters. Orbit also provide a `grid_search_orbit` utilities for parameter searching. It uses `Backtester` under the hood so users can compare backtest metrics for different paramters combination. ``` from orbit.utils.params_tuning import grid_search_orbit # defining the search space for level smoothing paramter and seasonality smooth paramter param_grid = { 'level_sm_input': [0.3, 0.5, 0.8], 'seasonality_sm_input': [0.3, 0.5, 0.8], } # configs min_train_len = 380 # in case of rolling window, this specify the length of window length forecast_len = 20 # length forecast window incremental_len = 20 # step length for moving forward best_params, tuned_df = grid_search_orbit(param_grid, model=dlt, df=data, min_train_len=min_train_len, incremental_len=incremental_len, forecast_len=forecast_len, metrics=None, criteria=None, verbose=True) tuned_df.head() # backtest output for each parameter searched best_params # output best parameters ```
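After the search, a natural last step is to refit a single model on the full history with the tuned smoothing parameters. Exactly what `grid_search_orbit` returns can vary by Orbit version, so the sketch below defensively unwraps `best_params` before passing it on; treat it as an illustration rather than the official API.

```
# Refit a DLT model with the tuned smoothing parameters from the grid search
tuned = best_params[0] if isinstance(best_params, list) else best_params

dlt_tuned = DLT(
    date_col='week',
    response_col='claims',
    regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
    seasonality=52,
    estimator='stan-map',
    **tuned,   # e.g. level_sm_input and seasonality_sm_input
)
dlt_tuned.fit(df=data)
tuned_pred_df = dlt_tuned.predict(df=data)
tuned_pred_df.head()
```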
# Gemetric Test of Reciprocity Moment Tensors ### Step 0 Load packages ``` #load all packages import datetime import pickle import copy import os from sys import argv from pathlib import Path import numpy as np import pandas as pd import pyvista as pv import matplotlib.pyplot as plt from matplotlib.colors import Normalize from pyaspect.project import * from pyaspect.model.gridmod3d import gridmod3d as gm from pyaspect.model.bbox import bbox as bb from pyaspect.model.gm3d_utils import * from pyaspect.moment_tensor import MomentTensor from pyaspect.specfemio.headers import * from pyaspect.specfemio.write import * from pyaspect.specfemio.write import _write_header from pyaspect.specfemio.read import * from pyaspect.specfemio.utils import * import pyaspect.events.gevents as gevents import pyaspect.events.gstations as gstations from pyaspect.events.munge.knmi import correct_station_depths as csd_f import pyaspect.events.mtensors as mtensors from obspy.imaging.beachball import beach from obspy import UTCDateTime import shapefile as sf from pyrocko.moment_tensor import MomentTensor as RockoMT ``` ### Step 1 Extract the ndarray of the subsampled, smoothed NAM model and instantiate a new GriddedModel3D object for QC'ing ``` data_in_dir = 'data/output/' data_out_dir = data_in_dir !ls {data_in_dir} !ls data/groningen ``` ### Step 6 Decompress the ndarray of the sliced, subsampled, smoothed NAM model and instantiate a new GriddedModel3D object for QC'ing ``` # set filename then used it to decompress model ifqn = f'{data_out_dir}/vsliced_subsmp_smth_nam_2017_vp_vs_rho_Q_model_dx100_dy100_dz100_maxdepth5850_sig250.npz' vslice_gm3d, other_pars = decompress_gm3d_from_file(ifqn) print() print('decompressed gridded model\n:',vslice_gm3d) print() print('other parameters:\n',other_pars) print() # WARNING: this will unpack all other_pars, if you overwrite a variable of the samename as val(key), then you # may not notice, and this may cause large headaches. I use it because I am aware of it. ''' for key in other_pars: locals()[key] = other_pars[key] #this is more advanced python than I think is reasonable for most sig_meters = sig '''; # another way to get these varibles is just use the accessor functions for the gridmod3d. We need them later. xmin = other_pars['xmin'] dx = other_pars['dx'] nx = other_pars['nx'] ymin = other_pars['ymin'] dy = other_pars['dy'] ny = other_pars['ny'] zmin = other_pars['zmin'] dz = other_pars['dz'] nz = other_pars['nz'] sig_meters = other_pars['sig'] # this variable is used later print('sig_meters:',sig_meters) # Create the spatial reference grid = pv.UniformGrid() # Set the grid dimensions: shape + 1 because we want to inject our values on # the CELL data nam_dims = list(vslice_gm3d.get_npoints()) nam_origin = [0,0,-vslice_gm3d.get_gorigin()[2]] #nam_origin = list(vslice_gm3d.get_gorigin()) #nam_origin[2] *= -1 nam_origin = tuple(nam_origin) nam_spacing = list(vslice_gm3d.get_deltas()) nam_spacing[2] *=-1 nam_spacing = tuple(nam_spacing) print('nam_dims:',nam_dims) print('nam_origin:',nam_origin) print('nam_spacing:',nam_spacing) # Edit the spatial reference grid.dimensions = np.array(nam_dims) + 1 grid.origin = nam_origin # The bottom left corner of the data set grid.spacing = nam_spacing # These are the cell sizes along each axis nam_pvalues = vslice_gm3d.getNPArray()[0] print('pvalues.shape:',nam_pvalues.shape) # Add the data values to the cell data grid.cell_arrays["values"] = nam_pvalues.flatten(order="F") # Flatten the array! # Now plot the grid! 
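# (added aside, kept as comments since we are still inside this plotting cell)
# because grid.dimensions was set to the model shape + 1, the model values live on the
# CELL data, so the number of grid cells should equal the number of model samples
assert grid.n_cells == nam_pvalues.size, (grid.n_cells, nam_pvalues.size)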
cmap = plt.cm.jet #grid.plot(show_edges=True,cmap=cmap) grid.plot(cmap=cmap,opacity=1.0) slices = grid.slice_orthogonal() #slices.plot(show_edges=True,cmap=cmap) slices.plot(cmap=cmap) ``` ## create virtual recievers (CMT solutions in forwards sense) ``` #coords = vslice_gm3d.getGlobalCoordsPointsXYZ() coords = vslice_gm3d.getLocalCoordsPointsXYZ() coords[:,2] = -coords[:,2] xc = np.unique(coords.T[0,:]) yc = np.unique(coords.T[1,:]) zc = np.unique(coords.T[2,:]) #n_rand_p = 1000 n_rand_p = 3 np.random.seed(n_rand_p) #nothing special about using n_rand_p just want reproducible random #stay away from the edges of the model for derivatives # and to avoid boundary effects xy_pad = 500 lrx = np.min(xc) + xy_pad lry = np.min(yc) + xy_pad lrz = -3400.0 hrx = np.max(xc) - xy_pad hry = np.max(yc) - xy_pad hrz = -2600.0 srx = hrx - lrx sry = hry - lry srz = hrz - lrz vrec_cmt_xyz = np.array([lrx + 0.33*srx,lry + 0.33*sry,-3000],dtype=np.float32).reshape((1,3)) print('cmt_xyz:\n',vrec_cmt_xyz) pv_rpoints = pv.wrap(vrec_cmt_xyz) p = pv.Plotter() slices = grid.slice_orthogonal() #p.add_mesh(slices,cmap=cmap,opacity=0.50) #p.add_mesh(slices,cmap=cmap,opacity=1) p.add_mesh(grid,cmap=cmap,opacity=0.50) p.add_mesh(pv_rpoints, render_points_as_spheres=True, point_size=5,opacity=1.0) p.show() ``` ## Make Moment Tensors and CMTSolutionHeaders for each tensor ``` def CMTtoM0(CMTsol): A = np.array(([CMTsol[0],CMTsol[3],CMTsol[4]], [CMTsol[3],CMTsol[1],CMTsol[5]], [CMTsol[4],CMTsol[5],CMTsol[2]])) M0 = ((1/np.sqrt(2))*np.sqrt(np.sum(A*A))) return(M0) def aki_from_sdr(strike,dip,rake,M0): from math import sin,cos print('input M0:',M0) """ converts given strike/dip/rake to moment tensor """ S = strike D = dip R = rake # PI / 180 to convert degrees to radians d2r = 0.017453293 print("Strike = %9.5f degrees" % S) print("Dip = %9.5f degrees" % D) print("Rake/Slip = %9.5f degrees" % R) print("") # convert to radians S *= d2r D *= d2r R *= d2r ''' # Aki & Richards Mxx = -1.0 * ( sin(D) * cos(R) * sin (2*S) + sin(2*D) * sin(R) * sin(S)*sin(S) ) Myy = ( sin(D) * cos(R) * sin (2*S) - sin(2*D) * sin(R) * cos(S)*cos(S) ) Mzz = -1.0 * ( Mxx + Myy) Mxy = ( sin(D) * cos(R) * cos (2*S) + 0.5 * sin(2*D) * sin(R) * sin(2*S) ) Mxz = -1.0 * ( cos(D) * cos(R) * cos (S) + cos(2*D) * sin(R) * sin(S) ) Myz = -1.0 * ( cos(D) * cos(R) * sin (S) - cos(2*D) * sin(R) * cos(S) ) '''; #Aki and Richards Mxx = -( np.sin(D)*np.cos(R)*np.sin(2*S) + np.sin(2*D)*np.sin(R)*(np.sin(S)**2) ) Myy = ( np.sin(D)*np.cos(R)*np.sin(2*S) - np.sin(2*D)*np.sin(R)*(np.cos(S)**2) ) Mzz = -( Mxx + Myy ) Mxy = ( np.sin(D)*np.cos(R)*np.cos(2*S) + 0.5*np.sin(2*D)*np.sin(R)*np.sin(2*S) ) Mxz = -( np.cos(D)*np.cos(R)*np.cos(S) + np.cos(2*D)*np.sin(R)*np.sin(S) ) Myz = -( np.cos(D)*np.cos(R)*np.sin(S) - np.cos(2*D)*np.sin(R)*np.cos(S) ) a_mt = np.array([Mxx,Myy,Mzz,Mxy,Mxz,Myz]) a_mt *= M0 # Harvard CMT Mtt = a_mt[0] #Mxx Mpp = a_mt[1] #Myy Mrr = a_mt[2] #Mzz Mtp = -1.0*a_mt[3] #Mxy Mrt = a_mt[4] #Mxz Mrp = -1.0*a_mt[5] #Myz h_mt = np.array([Mrr,Mtt,Mpp,Mrt,Mrp,Mtp]) print("Aki&Richards1980: Mxx Myy Mzz Mxy Mxz Myz") print("%9.5f %9.5f %9.5f %9.5f %9.5f %9.5f\n" %(tuple(a_mt))) print("M0:",CMTtoM0(a_mt)) print() print("Harvard: Mrr Mtt Mpp Mrt Mrp Mtp") print("%9.5f %9.5f %9.5f %9.5f %9.5f %9.5f\n" %(tuple(h_mt))) print("M0:",CMTtoM0(h_mt)) print() return a_mt ``` ``` # this is the path to the project dir on the cluster my_proj_dir = '/scratch/seismology/tcullison/test_mesh/FWD_Batch_Src_Test' m0 = 1 mW = (np.log10(m0)-9.1)/1.5 #(mnn, mee, mdd, mne, mnd, med, 
magnitude) #((mnn, mne, mnd), (mne, mee, med), (mnd, med, mdd)) h_mat_xy = np.array([[0, 0,0],[ 0,0,-1],[0,-1,0]]) h_mat_xz = np.array([[0, 0,1],[ 0,0, 0],[1, 0,0]]) h_mat_yz = np.array([[0,-1,0],[-1,0, 0],[0, 0,0]]) #h_mat_111 = np.array([[1, 0,0],[ 0,1, 0],[0, 0,1]]) h_mat_111 = np.array([[0, 0,0],[ 0,1, 0],[0, 0,0]]) h_mat_123 = np.array([[1, 0,0],[ 0,2, 0],[0, 0,3]]) h_mat_231 = np.array([[2, 0,0],[ 0,3, 0],[0, 0,1]]) h_mat_312 = np.array([[3, 0,0],[ 0,1, 0],[0, 0,2]]) Mxy = MomentTensor(m_up_south_east=h_mat_xy) #[[0,1,0],[1,0,0],[0,0,0]] SPEC coord system Mxz = MomentTensor(m_up_south_east=h_mat_xz) #[[0,0,0],[0,0,1],[0,1,0]] SPEC coord system Myz = MomentTensor(m_up_south_east=h_mat_yz) #[[0,0,1],[0,0,0],[1,0,0]] SPEC coord system M111 = MomentTensor(m_up_south_east=h_mat_111) #[[1,0,0],[0,1,0],[0,0,1]] SPEC coord system M123 = MomentTensor(m_up_south_east=h_mat_123) #[[1,0,0],[0,2,0],[0,0,3]] SPEC coord system M231 = MomentTensor(m_up_south_east=h_mat_231) #[[2,0,0],[0,3,0],[0,0,1]] SPEC coord system M312 = MomentTensor(m_up_south_east=h_mat_312) #[[3,0,0],[0,1,0],[0,0,2]] SPEC coord system print(f'Mxy: {Mxy}') print(f'Mxy PyR: {Mxy.m6()}') print(f'Mxy Har: {Mxy.m6_up_south_east()}\n\n') print(f'Mxz: {Mxz}') print(f'Mxz PyR: {Mxz.m6()}') print(f'Mxz Har: {Mxz.m6_up_south_east()}\n\n') print(f'Myz: {Myz}') print(f'Myz PyR: {Myz.m6()}') print(f'Myz Har: {Myz.m6_up_south_east()}\n\n') print(f'M111: {M111}') print(f'M111 PyR: {M111.m6()}') print(f'M111 Har: {M111.m6_up_south_east()}\n\n') print(f'M123: {M123}') print(f'M123 PyR: {M123.m6()}') print(f'M123 Har: {M123.m6_up_south_east()}\n\n') print(f'M231: {M231}') print(f'M231 PyR: {M231.m6()}') print(f'M231 Har: {M231.m6_up_south_east()}\n\n') print(f'M312: {M312}') print(f'M312 PyR: {M312.m6()}') print(f'M312 Har: {M312.m6_up_south_east()}\n\n') l_mt = [('Harvard-XY',Mxy),('Harvard-XZ',Mxz),('Harvard-YZ',Myz), ('Harvard-111',M111),('Harvard-123',M123),('Harvard-231',M231), ('Harvard-312',M312)] for mt in l_mt: print(f'mt: {mt}') l_cmt_srcs = [] for i in range(len(l_mt)): cmt_h = CMTSolutionHeader(date=datetime.datetime.now(), ename=l_mt[i][0], #ename=f'Event-{str(i).zfill(4)}', tshift=0.0, hdur=0.0, lat_yc=vrec_cmt_xyz[0,1], lon_xc=vrec_cmt_xyz[0,0], depth=-vrec_cmt_xyz[0,2], mt=l_mt[i][1], #mt=l_mt[i], eid=i, sid=0) l_cmt_srcs.append(cmt_h) print() for cmt in l_cmt_srcs: print(f'cmt:\n{cmt}') #assert False ``` ## Make Corresponding "Virtual" Recievers (including cross membors for derivatives) for the CMT's ``` m_delta = 25.0 # distance between cross stations for derivatives assert m_delta < xy_pad #see cells above this is padding #l_grp_vrecs = make_grouped_half_cross_reciprocal_station_headers_from_cmt_list(l_cmt_srcs,m_delta) l_grp_vrecs = make_grouped_cross_reciprocal_station_headers_from_cmt_list(l_cmt_srcs,m_delta) ig = 0 for grp in l_grp_vrecs: print(f'***** Group: {ig} *****\n') ir = 0 for gvrec in grp: print(f'*** vrec: {ir} ***\n{gvrec}') ir += 1 ig += 1 print(len(flatten_grouped_headers(l_grp_vrecs))) ``` ## Plot Virtual Receiver Groups ``` all_g_xyz = get_xyz_coords_from_station_list(flatten_grouped_headers(l_grp_vrecs)) all_g_xyz[:,2] *= -1 #pyview z-up positive and oposize sign of standard geophysics pv_all_points = pv.wrap(all_g_xyz) p = pv.Plotter() p.add_mesh(grid,cmap=cmap,opacity=0.5) #p.add_mesh(slices,cmap=cmap,opacity=1.0) p.add_mesh(pv_all_points, render_points_as_spheres=True, point_size=5,opacity=1.0) p.show() ``` ## Make real-receivers/virtual-sources ``` h = 3000 rec_z = -200 vsrc_rec_xyz = np.zeros((9,3)) 
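# (added comments) the nine receivers built below form a 3x3 horizontal grid centred on
# the CMT epicentre: x in {xc - h, xc, xc + h} crossed with y in {yc - h, yc, yc + h},
# all at the shallow depth rec_z; the loop first copies the centre coordinates into every
# row, and the indexed assignments that follow shift the outer points by +/- h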
for i in range(vsrc_rec_xyz.shape[0]): vsrc_rec_xyz[i,:] = vrec_cmt_xyz[0,:] vsrc_rec_xyz[i,2] = rec_z # x-h, y-y vsrc_rec_xyz[0,0] = vrec_cmt_xyz[0,0] - h vsrc_rec_xyz[0,1] = vrec_cmt_xyz[0,1] - h # x, y-y vsrc_rec_xyz[1,1] = vrec_cmt_xyz[0,1] - h # x+h, y-y vsrc_rec_xyz[2,0] = vrec_cmt_xyz[0,0] + h vsrc_rec_xyz[2,1] = vrec_cmt_xyz[0,1] - h # x-h, y vsrc_rec_xyz[3,0] = vrec_cmt_xyz[0,0] - h # x, y #do nothing but skip to next index below # x+h, y vsrc_rec_xyz[5,0] = vrec_cmt_xyz[0,0] + h # x-h, y+y vsrc_rec_xyz[6,0] = vrec_cmt_xyz[0,0] - h vsrc_rec_xyz[6,1] = vrec_cmt_xyz[0,1] + h # x, y+y vsrc_rec_xyz[7,1] = vrec_cmt_xyz[0,1] + h # x+h, y+y vsrc_rec_xyz[8,0] = vrec_cmt_xyz[0,0] + h vsrc_rec_xyz[8,1] = vrec_cmt_xyz[0,1] + h ``` ## Plot virtual sources (red) with virtual receivers (white) ``` pv_spoints = pv.wrap(vsrc_rec_xyz) p = pv.Plotter() #p.add_mesh(slices,cmap=cmap,opacity=0.50) p.add_mesh(grid,cmap=cmap,opacity=0.3) p.add_mesh(pv_spoints, render_points_as_spheres=True, point_size=8,opacity=1,color='red') #p.add_mesh(pv_rpoints, render_points_as_spheres=True, point_size=5,opacity=0.5) p.add_mesh(all_g_xyz, render_points_as_spheres=True, point_size=5,opacity=0.5) p.show() ``` ## Make StationHeaders (real recievers/virtual sources) ``` l_real_recs = [] for i in range(len(vsrc_rec_xyz)): tr_bname = 'tr' new_r = StationHeader(name=tr_bname, network='NL', #FIXME lon_xc=vsrc_rec_xyz[i,0], lat_yc=vsrc_rec_xyz[i,1], depth=-vsrc_rec_xyz[i,2], #specfem z-down is positive elevation=0.0, trid=i) l_real_recs.append(new_r) for rec in l_real_recs: print(rec) ``` ## Make ForceSolutionHeaders for the above virtual sources (including force-triplets for calculation derivatives) ``` l_grp_vsrcs = make_grouped_reciprocal_force_solution_triplet_headers_from_rec_list(l_real_recs) ``` ## Make replicates of each virtual receiver list: one for each force-triplet ``` l_grp_vrecs_by_vsrcs = make_replicated_reciprocal_station_headers_from_src_triplet_list(l_grp_vsrcs, l_grp_vrecs) ``` ## Plot virtual sources (red) and virtual receivers (white) FROM headers ``` grp_s_xyz = get_unique_xyz_coords_from_solution_list(flatten_grouped_headers(l_grp_vsrcs)) grp_s_xyz[:,2] *= -1 #pyvista z-up is positive flat_recs = flatten_grouped_headers(flatten_grouped_headers(l_grp_vrecs_by_vsrcs)) grp_r_xyz = get_unique_xyz_coords_from_station_list(flat_recs) grp_r_xyz[:,2] *= -1 #pyvista z-up is positive print(len(grp_s_xyz)) print(len(grp_r_xyz)) pv_spoints = pv.wrap(grp_s_xyz) pv_rpoints = pv.wrap(grp_r_xyz) p = pv.Plotter() p.add_mesh(slices,cmap=cmap,opacity=0.50) p.add_mesh(grid,cmap=cmap,opacity=0.3) p.add_mesh(pv_spoints, render_points_as_spheres=True, point_size=8,opacity=1,color='red') p.add_mesh(pv_rpoints, render_points_as_spheres=True, point_size=5,opacity=0.5) p.show() ``` ## Make replicates of each "real" receiver list: for each CMT source ``` l_grp_recs_by_srcs = make_replicated_station_headers_from_src_list(l_cmt_srcs,l_real_recs) for i in range(len(l_cmt_srcs)): print(f'***** SRC Records for Source: {i} *****\n') for j in range(len(l_real_recs)): print(f'*** REC Header for Receiver: {j} ***\n{l_grp_recs_by_srcs[i][j]}') ``` ## Plot "real" sources (red) and virtual receivers (white) FROM headers ``` grp_s_xyz = get_unique_xyz_coords_from_solution_list(l_cmt_srcs) grp_s_xyz[:,2] *= -1 #pyvista z-up is positive flat_recs = flatten_grouped_headers(l_grp_recs_by_srcs) #real! 
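# (added comment) unlike the reciprocal case above, l_grp_recs_by_srcs is grouped only
# per CMT source (there is no force-triplet level), so a single flatten is sufficient here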
grp_r_xyz = get_unique_xyz_coords_from_station_list(flat_recs) grp_r_xyz[:,2] *= -1 #pyvista z-up is positive print(len(grp_s_xyz)) print(len(grp_r_xyz)) pv_spoints = pv.wrap(grp_s_xyz) pv_rpoints = pv.wrap(grp_r_xyz) p = pv.Plotter() p.add_mesh(slices,cmap=cmap,opacity=0.50) p.add_mesh(grid,cmap=cmap,opacity=0.3) p.add_mesh(pv_spoints, render_points_as_spheres=True, point_size=12,opacity=1,color='red') p.add_mesh(pv_rpoints, render_points_as_spheres=True, point_size=8,opacity=0.5) p.show() #assert False ``` ## Make reciprical RecordHeader ``` l_flat_vsrcs = flatten_grouped_headers(l_grp_vsrcs) l_flat_vrecs = flatten_grouped_headers(flatten_grouped_headers(l_grp_vrecs_by_vsrcs)) vrecord_h = RecordHeader(name='Reciprocal-Record',solutions_h=l_flat_vsrcs,stations_h=l_flat_vrecs) print(vrecord_h) # save the header to disc vrec_fqp = os.path.join(data_out_dir,'simple_record_h') _write_header(vrec_fqp,vrecord_h) #verify file is there !ls -l {vrec_fqp} ``` ## Make reciprocal project ``` test_proj_name = 'ReciprocalGeometricTestProject' test_proj_root_fqp = os.path.join(data_out_dir, 'tmp/TestProjects/NewMKProj') test_parfile_fqp = os.path.join(data_out_dir, 'Par_file') test_mesh_fqp = '/scratch/seismology/tcullison/test_mesh/MESH-default_batch_force_src' test_spec_fqp = '/quanta1/home/tcullison/DevGPU_specfem3d' test_pyutils_fqp = '/quanta1/home/tcullison/myscripts/python/specfem/pyutils' test_script_fqp = '/quanta1/home/tcullison/myscripts/specfem' #copy the reciprocal record test_proj_record_h = vrecord_h.copy() make_fwd_project_dir(test_proj_name, test_proj_root_fqp, test_parfile_fqp, test_mesh_fqp, test_spec_fqp, test_pyutils_fqp, test_script_fqp, test_proj_record_h, copy_mesh=False, batch_srcs=False, verbose=True, max_event_rdirs=MAX_SPEC_SRC) #max_event_rdirs=) print() print('ls:') !ls {test_proj_root_fqp} print('ls:') !ls {test_proj_root_fqp}/*/* ``` ## Make Forward/Real RecordHeader ``` l_flat_srcs = l_cmt_srcs #NOTE: we don't need to flatten CMT list because they are not grouped l_flat_recs = flatten_grouped_headers(l_grp_recs_by_srcs) #Note: only one level of flattening record_h = RecordHeader(name='Forward-Record',solutions_h=l_flat_srcs,stations_h=l_flat_recs) print(f'Forward Record:\n{record_h}') # save the header to disc rec_fqp = os.path.join(data_out_dir,'real_simple_record_h') _write_header(rec_fqp,record_h) #verify file is there !ls -l {rec_fqp} print('l_flat_srcs:',type(l_flat_srcs[0])) ``` ## Make "real" project ``` test_real_proj_name = 'ForwardGeometricTestProject' test_proj_root_fqp = os.path.join(data_out_dir, 'tmp/TestProjects/NewMKProj') test_parfile_fqp = os.path.join(data_out_dir, 'Par_file') test_mesh_fqp = '/scratch/seismology/tcullison/test_mesh/MESH-default_batch_force_src' test_spec_fqp = '/quanta1/home/tcullison/DevGPU_specfem3d' test_pyutils_fqp = '/quanta1/home/tcullison/myscripts/python/specfem/pyutils' test_script_fqp = '/quanta1/home/tcullison/myscripts/specfem' #copy the forward/real record test_real_proj_record_h = record_h.copy() make_fwd_project_dir(test_real_proj_name, test_proj_root_fqp, test_parfile_fqp, test_mesh_fqp, test_spec_fqp, test_pyutils_fqp, test_script_fqp, test_real_proj_record_h, copy_mesh=False, batch_srcs=False, verbose=True, max_event_rdirs=MAX_SPEC_SRC) #max_event_rdirs=2) print() print('ls:') !ls {test_proj_root_fqp} print('ls:') !ls {test_proj_root_fqp}/*/* ```
# Numerical representation of words and texts

In this notebook we will present ways to represent textual values by means of a numerical representation. We will use pandas; if you want to understand a bit about pandas, [see this notebook](pandas.ipynb). So do not forget to install the pandas module: ``pip3 install pandas``

In machine learning we often need a numerical representation of a given value. For example:

```
import pandas as pd

df_jogos = pd.DataFrame([ ["boa","nublado","não"],
                          ["boa","chuvoso","não"],
                          ["média","nublado","sim"],
                          ["fraca","chuvoso","não"]],
                        columns=["disposição","tempo","jogar volei?"])
df_jogos
```

If we want to map each column (now called an attribute) to a value, the simplest way to do the transformation is to map that attribute directly to a numerical value. See the example below.

In this example we have two attributes, the player's disposition (`disposição`) and the weather (`tempo`), and we want to predict whether the player will play volleyball or not. Both the attributes and the class can be mapped to numbers. In addition, the `disposição` attribute represents a scale, which makes this kind of transformation well suited for it.

```
from typing import Dict

def mapeia_atributo_para_int(df_data:pd.DataFrame, coluna:str, dic_nom_to_int: Dict[str,int]):
    for i,valor in enumerate(df_data[coluna]):
        valor_int = dic_nom_to_int[valor]
        df_data[coluna].iat[i] = valor_int

df_jogos = pd.DataFrame([ ["boa","nublado","sim"],
                          ["boa","chuvoso","não"],
                          ["média","ensolarado","sim"],
                          ["fraca","chuvoso","não"]],
                        columns=["disposição","tempo","jogar volei?"])

dic_disposicao = {"boa":3,"média":2,"fraca":1}
mapeia_atributo_para_int(df_jogos, "disposição", dic_disposicao)

dic_tempo = {"ensolarado":3,"nublado":2,"chuvoso":1}
mapeia_atributo_para_int(df_jogos, "tempo", dic_tempo)

dic_volei = {"sim":1, "não":0}
mapeia_atributo_para_int(df_jogos, "jogar volei?", dic_volei)

df_jogos
```

## Binarization of categorical attributes

We can binarize the categorical attributes so that each attribute value becomes a column that receives `0` if that value is absent and `1` otherwise. In our example:

```
from preprocessamento_atributos import BagOfItems

df_jogos = pd.DataFrame([ [4, "boa","nublado","sim"],
                          [3,"boa","chuvoso","não"],
                          [2,"média","ensolarado","sim"],
                          [1,"fraca","chuvoso","não"]],
                        columns=["id","disposição","tempo","jogar volei?"])
dic_disposicao = {"boa":3,"média":2,"fraca":1}

bag_of_tempo = BagOfItems(0)  # see the implementation of this method in preprocessamento_atributos.py
df_jogos_bot = bag_of_tempo.cria_bag_of_items(df_jogos,["tempo"])
df_jogos_bot
```

Since the test set contains values you do not know in advance, doing it this way means attributes that appear only in the test set could be entirely zero in the training set, and therefore useless. For example:

```
df_jogos_treino = df_jogos[:2]
df_jogos_treino

df_jogos_teste = df_jogos[2:]
df_jogos_teste
```

## Real example

Consider this real example of movies and their actors ([obtained from Kaggle](https://www.kaggle.com/rounakbanik/the-movies-dataset)):

```
import pandas as pd

df_amostra = pd.read_csv("movies_amostra.csv")
df_amostra
```

In this example, the columns that represent the main actors can be binarized. In our case, we can put all the actors into a "Bag of Items". The actors are represented by the columns `ator_1`, `ator_2`, ..., `ator_5`.
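Before applying the course's `BagOfItems` helper to the movie data, a brief hedged aside: for a small table like the volleyball example above, plain `pandas.get_dummies` performs the same kind of binarization in a single call (this is standard pandas, not one of the course helpers).

```
import pandas as pd

# one-hot encode the "tempo" column of the volleyball example with plain pandas
df_jogos = pd.DataFrame([ [4, "boa","nublado","sim"],
                          [3,"boa","chuvoso","não"],
                          [2,"média","ensolarado","sim"],
                          [1,"fraca","chuvoso","não"]],
                        columns=["id","disposição","tempo","jogar volei?"])

pd.get_dummies(df_jogos, columns=["tempo"])
```

Note, however, that calling `get_dummies` separately on training and test data can produce mismatched columns, which is precisely the train/test issue discussed above and revisited below.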
Returning to the `BagOfItems` helper, below is a suggested way to apply it to the movie dataset:

```
import pandas as pd
from preprocessamento_atributos import BagOfItems

obj_bag_of_actors = BagOfItems(min_occur=3)  # boa = bag of actors ;)
df_amostra_boa = obj_bag_of_actors.cria_bag_of_items(df_amostra,["ator_1","ator_2","ator_3","ator_4","ator_5"])
df_amostra_boa
```

Note that we end up with quite a lot of attributes, one per actor. Even though it would be better to have fewer, more informative attributes, a machine learning method may still be able to use this quantity effectively. In particular, [linear SVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html) and [RandomForest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) are methods that tend to do well on this kind of data.

This is the most practical way to do it. However, in machine learning we usually split our data into at least training and test sets, where the training set is the data you have full access to and the test set should reproduce a sample of the real world. Suppose there are rare actors in the training set that do not occur in the test set; in that case, such attributes would be useless for the test set. This can make the result reproduce the real world less faithfully, although here the difference is very likely to be almost insignificant. Still, if we want to do it the "more correct" way, we have to consider only the training data for this:

```
# assuming 80% of the sample is training data
df_treino_amostra = df_amostra.sample(frac=0.8, random_state = 2)
df_teste_amostra = df_amostra.drop(df_treino_amostra.index)

# min_occur=3 defines the minimum number of occurrences for an actor to be considered,
# since an actor who appeared in only a few movies may be less relevant for predicting the genre
obj_bag_of_actors = BagOfItems(min_occur=3)
df_treino_amostra_boa = obj_bag_of_actors.cria_bag_of_items(df_treino_amostra,["ator_1","ator_2","ator_3","ator_4","ator_5"])
df_teste_amostra_boa = obj_bag_of_actors.aplica_bag_of_items(df_teste_amostra,["ator_1","ator_2","ator_3","ator_4","ator_5"])
```

## Bag of Words representation

We often have texts that may be relevant to a particular machine learning task, so we have to represent them for our machine learning method. The most common way to do this is the `Bag of Words`, in which each word is an attribute and its value is its frequency in the text (or some other value that indicates the importance of that word in the text). For example, take the sentences `A casa é grande` ("the house is big") and `A casa é verde verde` ("the house is green green"), where each sentence is a different instance. The representation would look like this:

```
dic_bow = {"a":[1,1],
           "casa":[1,1],
           "é":[1,1],
           "grande":[1,0],
           "verde":[0,2]
           }
df_bow = pd.DataFrame.from_dict(dic_bow)
df_bow
```

In the way we did it above, we used the frequency of a term to define its importance in the text. However, there are terms with a very high frequency and low importance, such as articles and prepositions, because they do not discriminate between texts. One way to measure the discriminative power of words is the `TF-IDF` metric. To compute it, we first compute the frequency of a term in the document (TF) and then multiply it by the IDF. The formula for the TF-IDF of term $i$ in document (or instance) $j$ is:

\begin{equation}
TFIDF_{ij} = TF_{ij} \times IDF_i
\end{equation}

\begin{equation}
TF_{ij} = log(f_{ij})
\end{equation}

where $f_{ij}$ is the frequency of term $i$ in document $j$.
The `log` is used to soften very large values, and the $IDF$ (_Inverse Document Frequency_) of term $i$ is computed as:

\begin{equation}
IDF_i = log(\frac{N}{n_i})
\end{equation}

where $N$ is the number of documents in the collection and $n_i$ is the number of documents in which term $i$ occurs. The expectation is that the more discriminative a term is, the fewer documents it will occur in and, consequently, the higher its $IDF$ will be.

For example, consider the words `de` ("of"), `bebida` ("drink") and `cerveja` ("beer"). `cerveja` is a more discriminative word than `bebida`, and `bebida` is more discriminative than the preposition `de`. We will most likely see the less discriminative terms more frequently. For example, in a collection of 1000 documents, `de` might occur in 900 documents, `bebida` in 500 and `cerveja` in 100. Doing the calculation, we see that the more discriminative a term is, the higher its IDF:

```
import math

N = 1000
n_de = 900
n_bebida = 500
n_cerveja = 100

IDF_de = math.log(N/n_de)
IDF_bebida = math.log(N/n_bebida)
IDF_cerveja = math.log(N/n_cerveja)

print(f"IDF_de: {IDF_de}\tIDF_bebida:{IDF_bebida}\tIDF_cerveja:{IDF_cerveja}")
```

The `scikit-learn` library also provides a [TfidfVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) class that transforms a text into an attribute vector, using TF-IDF as the value for the relevance of each term. See an example on the `resumo` column of our movie dataset:

```
import pandas as pd
from preprocessamento_atributos import BagOfWords

df_amostra = pd.read_csv("datasets/movies_amostra.csv")

bow_amostra = BagOfWords()
df_bow_amostra = bow_amostra.cria_bow(df_amostra,"resumo")
df_bow_amostra
```

Since there are many attributes, it may look as if the result was not generated correctly. But if you filter for the words of a particular summary, you will see that it is fine:

```
df_bow_amostra[["in","lake", "high"]]
```

Do not feel restricted to these representations. You can try more succinct representations. For example, to preprocess the data about a movie's crew (actors, director and writer), compute the number of comedy movies the crew members have participated in and then the number of action movies. In this case, since you will be using the class label, you must use **only** the training data. For the summary, you can use keywords: for example, make a list of keywords that suggest "action" and count how many of those keywords appear in the summary.
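As a hedged sketch of that last keyword idea, the snippet below counts action-related keywords in each summary. The keyword list and the new column name are illustrative assumptions, not part of the course material.

```
# count action-related keywords in each summary of the sample dataset
palavras_chave_acao = {"war", "fight", "police", "gun", "chase", "battle"}

def conta_palavras_acao(resumo):
    if not isinstance(resumo, str):
        return 0
    tokens = resumo.lower().split()
    return sum(token.strip('.,!?;:"()') in palavras_chave_acao for token in tokens)

df_amostra["qtd_palavras_acao"] = df_amostra["resumo"].apply(conta_palavras_acao)
df_amostra[["resumo", "qtd_palavras_acao"]].head()
```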
""" author: Dominik Stec, index: s12623, email: [email protected] To run module: import module into Google Colaboratory notebook and run. This module recognize type of clothes according to given image of clothe. Keras model is build as classification type and contains two types of classification neural network architecture. """ **First model** ``` %tensorflow_version 2.x import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.keras.datasets.fashion_mnist import load_data from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Flatten, Dense, Dropout from keras.utils import to_categorical import random tf.__version__ (X_train, y_train), (X_test, y_test) = load_data() print(X_train.shape) print(y_train.shape) print(X_test.shape) print(y_test.shape) print(y_test[:]) print(np.min(X_test[0]), np.max(X_test[0])) y_train_cat = to_categorical(y_train) y_test_cat = to_categorical(y_test) print(y_train_cat.shape) print(y_test_cat.shape) class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(X_train[i], cmap=plt.cm.binary) plt.xlabel(class_names[y_train[i]]) plt.show() model = Sequential() model.add(Flatten(input_shape=(28, 28))) model.add(Dense(units=256, activation='relu')) model.add(Dense(units=128, activation='relu')) model.add(Dense(units=64, activation='relu')) model.add(Dense(units=10, activation='softmax')) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy']) model.summary() model.fit(X_train, y_train_cat, epochs=20, validation_data=(X_test, y_test_cat)) i = random.randrange(0, len(y_test)) print('real value: ', class_names[y_test[i]]) X_test_rs = X_test[i].reshape(1, 28, 28) cat = model.predict(X_test_rs) cat_idx = np.argmax(cat) plt.figure(figsize=(10,10)) plt.subplot(5,5,1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(X_test[i], cmap=plt.cm.binary) plt.xlabel(class_names[y_test[i]]) plt.show() print('predict value: ', class_names[cat_idx]) ``` **Second model** ``` model = Sequential() model.add(Flatten(input_shape=(28, 28))) model.add(Dense(units=1024, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(units=512, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(units=512, activation='relu')) model.add(Dense(units=10, activation='softmax')) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy']) model.summary() model.fit(X_train, y_train_cat, epochs=20, validation_data=(X_test, y_test_cat)) i = random.randrange(0, len(y_test)) print('real value: ', class_names[y_test[i]]) X_test_rs = X_test[i].reshape(1, 28, 28) cat = model.predict(X_test_rs) cat_idx = np.argmax(cat) plt.figure(figsize=(10,10)) plt.subplot(5,5,1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(X_test[i], cmap=plt.cm.binary) plt.xlabel(class_names[y_test[i]]) plt.show() print('predict value: ', class_names[cat_idx]) ```
# Machine Learning Engineer Nanodegree ## Reinforcement Learning ## Project: Train a Smartcab to Drive Welcome to the fourth project of the Machine Learning Engineer Nanodegree! In this notebook, template code has already been provided for you to aid in your analysis of the *Smartcab* and your implemented learning algorithm. You will not need to modify the included code beyond what is requested. There will be questions that you must answer which relate to the project and the visualizations provided in the notebook. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide in `agent.py`. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. ----- ## Getting Started In this project, you will work towards constructing an optimized Q-Learning driving agent that will navigate a *Smartcab* through its environment towards a goal. Since the *Smartcab* is expected to drive passengers from one location to another, the driving agent will be evaluated on two very important metrics: **Safety** and **Reliability**. A driving agent that gets the *Smartcab* to its destination while running red lights or narrowly avoiding accidents would be considered **unsafe**. Similarly, a driving agent that frequently fails to reach the destination in time would be considered **unreliable**. Maximizing the driving agent's **safety** and **reliability** would ensure that *Smartcabs* have a permanent place in the transportation industry. **Safety** and **Reliability** are measured using a letter-grade system as follows: | Grade | Safety | Reliability | |:-----: |:------: |:-----------: | | A+ | Agent commits no traffic violations,<br/>and always chooses the correct action. | Agent reaches the destination in time<br />for 100% of trips. | | A | Agent commits few minor traffic violations,<br/>such as failing to move on a green light. | Agent reaches the destination on time<br />for at least 90% of trips. | | B | Agent commits frequent minor traffic violations,<br/>such as failing to move on a green light. | Agent reaches the destination on time<br />for at least 80% of trips. | | C | Agent commits at least one major traffic violation,<br/> such as driving through a red light. | Agent reaches the destination on time<br />for at least 70% of trips. | | D | Agent causes at least one minor accident,<br/> such as turning left on green with oncoming traffic. | Agent reaches the destination on time<br />for at least 60% of trips. | | F | Agent causes at least one major accident,<br />such as driving through a red light with cross-traffic. | Agent fails to reach the destination on time<br />for at least 60% of trips. | To assist evaluating these important metrics, you will need to load visualization code that will be used later on in the project. Run the code cell below to import this code which is required for your analysis. ``` # Import the visualization code import visuals as vs # Pretty display for notebooks %matplotlib inline ``` ### Understand the World Before starting to work on implementing your driving agent, it's necessary to first understand the world (environment) which the *Smartcab* and driving agent work in. 
One of the major components to building a self-learning agent is understanding the characteristics about the agent, which includes how the agent operates. To begin, simply run the `agent.py` agent code exactly how it is -- no need to make any additions whatsoever. Let the resulting simulation run for some time to see the various working components. Note that in the visual simulation (if enabled), the **white vehicle** is the *Smartcab*. ### Question 1 In a few sentences, describe what you observe during the simulation when running the default `agent.py` agent code. Some things you could consider: - *Does the Smartcab move at all during the simulation?* - *What kind of rewards is the driving agent receiving?* - *How does the light changing color affect the rewards?* **Hint:** From the `/smartcab/` top-level directory (where this notebook is located), run the command ```bash 'python smartcab/agent.py' ``` **Answer:** The smart cab does not move at all during the simulation. Whilst runnning the simulation, we see both the movement of other vehicles around the grid system, and the changing colour of the traffic, either red or green. The smartcab receives either a positive or negative reward depending on whether is took an appropriate action: negative if the wrong action, positive if correct. The magnitude of the reward increases for consecutive incorrect/correct actions i.e. the reward received will be greater if the smartcab continues to do the correct action, if the previous action was also correct. The light colour determines the reward the smartcab receives for the current action taken. The smartcab receives a positive reward if it idles in front of red lights, and conversely receives a negative reward if it idles in front of green lights. It is receiving reward when waiting on red light, or green light with oncoming traffic. There is also a penalty when idling on green light without traffic. Light color determines whether going or staying will give reward or penalty. ### Understand the Code In addition to understanding the world, it is also necessary to understand the code itself that governs how the world, simulation, and so on operate. Attempting to create a driving agent would be difficult without having at least explored the *"hidden"* devices that make everything work. In the `/smartcab/` top-level directory, there are two folders: `/logs/` (which will be used later) and `/smartcab/`. Open the `/smartcab/` folder and explore each Python file included, then answer the following question. 
### Question 2 - *In the *`agent.py`* Python file, choose three flags that can be set and explain how they change the simulation.* - *In the *`environment.py`* Python file, what Environment class function is called when an agent performs an action?* - *In the *`simulator.py`* Python file, what is the difference between the *`'render_text()'`* function and the *`'render()'`* function?* - *In the *`planner.py`* Python file, will the *`'next_waypoint()`* function consider the North-South or East-West direction first?* **Answer:** agent.py - update_delay determines time delay between actions with a default of 2 seconds - log_metrics Boolean toggle to determine whether to log trial and simulation results to /logs - optimized set default log file name environment.py - The class function act is called when the agent performs an action simulator.py - render_text producing the logging viewed in the terminal, whereas render produces the logging viewed in the GUI simulation planner.py - next_waypoint checks the East-West direction before checking the North-South direction ----- ## Implement a Basic Driving Agent The first step to creating an optimized Q-Learning driving agent is getting the agent to actually take valid actions. In this case, a valid action is one of `None`, (do nothing) `'Left'` (turn left), `'Right'` (turn right), or `'Forward'` (go forward). For your first implementation, navigate to the `'choose_action()'` agent function and make the driving agent randomly choose one of these actions. Note that you have access to several class variables that will help you write this functionality, such as `'self.learning'` and `'self.valid_actions'`. Once implemented, run the agent file and simulation briefly to confirm that your driving agent is taking a random action each time step. ### Basic Agent Simulation Results To obtain results from the initial simulation, you will need to adjust following flags: - `'enforce_deadline'` - Set this to `True` to force the driving agent to capture whether it reaches the destination in time. - `'update_delay'` - Set this to a small value (such as `0.01`) to reduce the time between steps in each trial. - `'log_metrics'` - Set this to `True` to log the simluation results as a `.csv` file in `/logs/`. - `'n_test'` - Set this to `'10'` to perform 10 testing trials. Optionally, you may disable to the visual simulation (which can make the trials go faster) by setting the `'display'` flag to `False`. Flags that have been set here should be returned to their default setting when debugging. It is important that you understand what each flag does and how it affects the simulation! Once you have successfully completed the initial simulation (there should have been 20 training trials and 10 testing trials), run the code cell below to visualize the results. Note that log files are overwritten when identical simulations are run, so be careful with what log file is being loaded! Run the agent.py file after setting the flags from projects/smartcab folder instead of projects/smartcab/smartcab. ``` # Load the 'sim_no-learning' log file from the initial simulation results vs.plot_trials('sim_no-learning.csv') ``` ### Question 3 Using the visualization above that was produced from your initial simulation, provide an analysis and make several observations about the driving agent. Be sure that you are making at least one observation about each panel present in the visualization. Some things you could consider: - *How frequently is the driving agent making bad decisions? 
How many of those bad decisions cause accidents?* - *Given that the agent is driving randomly, does the rate of reliabilty make sense?* - *What kind of rewards is the agent receiving for its actions? Do the rewards suggest it has been penalized heavily?* - *As the number of trials increases, does the outcome of results change significantly?* - *Would this Smartcab be considered safe and/or reliable for its passengers? Why or why not?* **Answer:** - From the "10-trial rolling relative frequency of bad actions" visualisation, the agent is making bad decisions approximately 44% percent of the time. Of those bad decisions, approximately 24% result in accidents - This reliability result makes some sense in light of the agent choosing actions randomly between four actions - The "10-trial rolling average reward per action" displays that on average the agent receives a reward between -7 and -5.5. This would indicate that the agent is being heavily penalised as it is performing a slight majority of good decisions, from the top left visualisation. - From the visualisation "10-trial rolling rate of reliability", the results are consistently 0% as no trail completed successfully - This smartcab should not be considered safe for its passengers, as both the safety rating and reliability rating are "F". This would indicate that the agent caused at least one major accident per run, and failed to reach the destination on time for at least 60% of trips ----- ## Inform the Driving Agent The second step to creating an optimized Q-learning driving agent is defining a set of states that the agent can occupy in the environment. Depending on the input, sensory data, and additional variables available to the driving agent, a set of states can be defined for the agent so that it can eventually *learn* what action it should take when occupying a state. The condition of `'if state then action'` for each state is called a **policy**, and is ultimately what the driving agent is expected to learn. Without defining states, the driving agent would never understand which action is most optimal -- or even what environmental variables and conditions it cares about! ### Identify States Inspecting the `'build_state()'` agent function shows that the driving agent is given the following data from the environment: - `'waypoint'`, which is the direction the *Smartcab* should drive leading to the destination, relative to the *Smartcab*'s heading. - `'inputs'`, which is the sensor data from the *Smartcab*. It includes - `'light'`, the color of the light. - `'left'`, the intended direction of travel for a vehicle to the *Smartcab*'s left. Returns `None` if no vehicle is present. - `'right'`, the intended direction of travel for a vehicle to the *Smartcab*'s right. Returns `None` if no vehicle is present. - `'oncoming'`, the intended direction of travel for a vehicle across the intersection from the *Smartcab*. Returns `None` if no vehicle is present. - `'deadline'`, which is the number of actions remaining for the *Smartcab* to reach the destination before running out of time. ### Question 4 *Which features available to the agent are most relevant for learning both **safety** and **efficiency**? Why are these features appropriate for modeling the *Smartcab* in the environment? If you did not choose some features, why are those features* not *appropriate?* **Answer:** Should only need waypoint and inputs features, including lightm left, right, and oncoming attributes. 
Though debatable, it can be said that the deadline feature is not as relevant as either of the other features as it does not contain any information that cannot also be derived from the other features. The inputs feature is relevant for safety, because it determines the constraints under which the smartcab can operate. For example, if there there is a car from the smartcab's left that will come across the intersection, the smartcab should respond appropriately. It does not capture any sense of efficiency as we do not know how this feature relates to the direction the smartcab should go under some constraint. On important qualification comes from the domain knowledge knowing that this agent relates to a road system where traffic travels on the right hand side of the road. It would be more efficient, but less generalisable, to drop knowledge of traffic to the right of the agent in our state description. The waypoint feature captures efficiency. This indicates the ideal actions to follow to reach our destination. Furthermore, waypoint would produce the optimal path under the constraint of no lights or other vehicles. Unlike the deadline feature, this feature is a necessary requirement to undertand the smartcab's position in the environment. Although the deadline feature is a measure of efficiency, we can deduce a measure of efficiency from the waypoint feature, assuming that it will always indicate the optimal direction the smartcab should follow. ### Define a State Space When defining a set of states that the agent can occupy, it is necessary to consider the *size* of the state space. That is to say, if you expect the driving agent to learn a **policy** for each state, you would need to have an optimal action for *every* state the agent can occupy. If the number of all possible states is very large, it might be the case that the driving agent never learns what to do in some states, which can lead to uninformed decisions. For example, consider a case where the following features are used to define the state of the *Smartcab*: `('is_raining', 'is_foggy', 'is_red_light', 'turn_left', 'no_traffic', 'previous_turn_left', 'time_of_day')`. How frequently would the agent occupy a state like `(False, True, True, True, False, False, '3AM')`? Without a near-infinite amount of time for training, it's doubtful the agent would ever learn the proper action! ### Question 5 *If a state is defined using the features you've selected from **Question 4**, what would be the size of the state space? Given what you know about the evironment and how it is simulated, do you think the driving agent could learn a policy for each possible state within a reasonable number of training trials?* **Hint:** Consider the *combinations* of features to calculate the total number of states! **Answer:** From question 4, I said that the required features were waypoint and inputs. Waypoint can be one of four values: right, forward, left - we can discount None from our state space as this would indicate that the smartcab has reached its destination Inputs breaks down as: - light can be red or green - left, and oncoming can be either right, forward, left, or None. Can drop inputs['right'] feature as traffic to right of smartcab is travelling away The total state space would be 3 \* 4 \* 4 \* * 2 = 96 I do not think it is reasonable to expect the agent to learn a policy for each possible state. 
This is because if we are to assume each journey is going to take a maximum of the order of 10 steps, we would realistically need an order of 10^3 trials to obtain some meaningful results for this state space. Or in other words, for anything less than 10^3 trials, we would get a total number or data points of a similar order of magnitude as the state space. ### Update the Driving Agent State For your second implementation, navigate to the `'build_state()'` agent function. With the justification you've provided in **Question 4**, you will now set the `'state'` variable to a tuple of all the features necessary for Q-Learning. Confirm your driving agent is updating its state by running the agent file and simulation briefly and note whether the state is displaying. If the visual simulation is used, confirm that the updated state corresponds with what is seen in the simulation. **Note:** Remember to reset simulation flags to their default setting when making this observation! ----- ## Implement a Q-Learning Driving Agent The third step to creating an optimized Q-Learning agent is to begin implementing the functionality of Q-Learning itself. The concept of Q-Learning is fairly straightforward: For every state the agent visits, create an entry in the Q-table for all state-action pairs available. Then, when the agent encounters a state and performs an action, update the Q-value associated with that state-action pair based on the reward received and the interative update rule implemented. Of course, additional benefits come from Q-Learning, such that we can have the agent choose the *best* action for each state based on the Q-values of each state-action pair possible. For this project, you will be implementing a *decaying,* $\epsilon$*-greedy* Q-learning algorithm with *no* discount factor. Follow the implementation instructions under each **TODO** in the agent functions. Note that the agent attribute `self.Q` is a dictionary: This is how the Q-table will be formed. Each state will be a key of the `self.Q` dictionary, and each value will then be another dictionary that holds the *action* and *Q-value*. Here is an example: ``` { 'state-1': { 'action-1' : Qvalue-1, 'action-2' : Qvalue-2, ... }, 'state-2': { 'action-1' : Qvalue-1, ... }, ... } ``` Furthermore, note that you are expected to use a *decaying* $\epsilon$ *(exploration) factor*. Hence, as the number of trials increases, $\epsilon$ should decrease towards 0. This is because the agent is expected to learn from its behavior and begin acting on its learned behavior. Additionally, The agent will be tested on what it has learned after $\epsilon$ has passed a certain threshold (the default threshold is 0.01). For the initial Q-Learning implementation, you will be implementing a linear decaying function for $\epsilon$. ### Q-Learning Simulation Results To obtain results from the initial Q-Learning implementation, you will need to adjust the following flags and setup: - `'enforce_deadline'` - Set this to `True` to force the driving agent to capture whether it reaches the destination in time. - `'update_delay'` - Set this to a small value (such as `0.01`) to reduce the time between steps in each trial. - `'log_metrics'` - Set this to `True` to log the simluation results as a `.csv` file and the Q-table as a `.txt` file in `/logs/`. - `'n_test'` - Set this to `'10'` to perform 10 testing trials. - `'learning'` - Set this to `'True'` to tell the driving agent to use your Q-Learning implementation. 
In addition, use the following decay function for $\epsilon$: $$ \epsilon_{t+1} = \epsilon_{t} - 0.05, \hspace{10px}\textrm{for trial number } t$$ If you have difficulty getting your implementation to work, try setting the `'verbose'` flag to `True` to help debug. Flags that have been set here should be returned to their default setting when debugging. It is important that you understand what each flag does and how it affects the simulation! Once you have successfully completed the initial Q-Learning simulation, run the code cell below to visualize the results. Note that log files are overwritten when identical simulations are run, so be careful with what log file is being loaded! ``` # Load the 'sim_default-learning' file from the default Q-Learning simulation vs.plot_trials('sim_default-learning.csv') ``` ### Question 6 Using the visualization above that was produced from your default Q-Learning simulation, provide an analysis and make observations about the driving agent like in **Question 3**. Note that the simulation should have also produced the Q-table in a text file which can help you make observations about the agent's learning. Some additional things you could consider: - *Are there any observations that are similar between the basic driving agent and the default Q-Learning agent?* - *Approximately how many training trials did the driving agent require before testing? Does that number make sense given the epsilon-tolerance?* - *Is the decaying function you implemented for $\epsilon$ (the exploration factor) accurately represented in the parameters panel?* - *As the number of training trials increased, did the number of bad actions decrease? Did the average reward increase?* - *How does the safety and reliability rating compare to the initial driving agent?* **Answer:** - Between this simulation and the previous with no learning enabled the rolling average reward is still consistently negative although of a much smaller size and getting better with trials. The safety rating is similar with both having a safety rating of "F" - By default epsilon tolerance is 0.05, and we reduced the exploration factor by 0.05 each training trial. This corresponds to the 20 training trials performed by the agent, as 1 / 0.05 = 20 - From the second diagram on the right hand side, we see a plot of paramtere values with the trials. Exploration factor decreases at a constant rate, which is expected as this was reduced by a constant amount following each trial - As the number of training trials increased, the number of bad actions decreased significantly to around 11% as seen in the top left plot and the average reward improved significantly shown in the top right plot - The reliability has substaintially improved, with a grade of "D", this would indicate the agent is effectively learning how to navigate the grid. Perhaps with more trials this could improve much more. The safety rating is still "F", but this discounts the fact that the frequency of bad actions has fallen. ----- ## Improve the Q-Learning Driving Agent The third step to creating an optimized Q-Learning agent is to perform the optimization! Now that the Q-Learning algorithm is implemented and the driving agent is successfully learning, it's necessary to tune settings and adjust learning paramaters so the driving agent learns both **safety** and **efficiency**. Typically this step will require a lot of trial and error, as some settings will invariably make the learning worse. 
One thing to keep in mind is the act of learning itself and the time that this takes: In theory, we could allow the agent to learn for an incredibly long amount of time; however, another goal of Q-Learning is to *transition from experimenting with unlearned behavior to acting on learned behavior*. For example, always allowing the agent to perform a random action during training (if $\epsilon = 1$ and never decays) will certainly make it *learn*, but never let it *act*. When improving on your Q-Learning implementation, consider the impliciations it creates and whether it is logistically sensible to make a particular adjustment. ### Improved Q-Learning Simulation Results To obtain results from the initial Q-Learning implementation, you will need to adjust the following flags and setup: - `'enforce_deadline'` - Set this to `True` to force the driving agent to capture whether it reaches the destination in time. - `'update_delay'` - Set this to a small value (such as `0.01`) to reduce the time between steps in each trial. - `'log_metrics'` - Set this to `True` to log the simluation results as a `.csv` file and the Q-table as a `.txt` file in `/logs/`. - `'learning'` - Set this to `'True'` to tell the driving agent to use your Q-Learning implementation. - `'optimized'` - Set this to `'True'` to tell the driving agent you are performing an optimized version of the Q-Learning implementation. Additional flags that can be adjusted as part of optimizing the Q-Learning agent: - `'n_test'` - Set this to some positive number (previously 10) to perform that many testing trials. - `'alpha'` - Set this to a real number between 0 - 1 to adjust the learning rate of the Q-Learning algorithm. - `'epsilon'` - Set this to a real number between 0 - 1 to adjust the starting exploration factor of the Q-Learning algorithm. - `'tolerance'` - set this to some small value larger than 0 (default was 0.05) to set the epsilon threshold for testing. Furthermore, use a decaying function of your choice for $\epsilon$ (the exploration factor). Note that whichever function you use, it **must decay to **`'tolerance'`** at a reasonable rate**. The Q-Learning agent will not begin testing until this occurs. Some example decaying functions (for $t$, the number of trials): $$ \epsilon = a^t, \textrm{for } 0 < a < 1 \hspace{50px}\epsilon = \frac{1}{t^2}\hspace{50px}\epsilon = e^{-at}, \textrm{for } 0 < a < 1 \hspace{50px} \epsilon = \cos(at), \textrm{for } 0 < a < 1$$ You may also use a decaying function for $\alpha$ (the learning rate) if you so choose, however this is typically less common. If you do so, be sure that it adheres to the inequality $0 \leq \alpha \leq 1$. If you have difficulty getting your implementation to work, try setting the `'verbose'` flag to `True` to help debug. Flags that have been set here should be returned to their default setting when debugging. It is important that you understand what each flag does and how it affects the simulation! Once you have successfully completed the improved Q-Learning simulation, run the code cell below to visualize the results. Note that log files are overwritten when identical simulations are run, so be careful with what log file is being loaded! 
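As an optional aside before the results cell below, here is a hedged sketch (separate from the project's `agent.py`) of the example decay schedules listed above, compared by how many training trials each needs before $\epsilon$ falls below the default tolerance of 0.05:

```
import numpy as np

# the example decay schedules above, written as plain functions of the trial number t
def eps_exponential(t, a=0.9):    # epsilon = a^t, 0 < a < 1
    return a ** t

def eps_inverse_square(t):        # epsilon = 1 / t^2
    return 1.0 / (t ** 2) if t > 0 else 1.0

def eps_exp_decay(t, a=0.05):     # epsilon = e^(-a*t), 0 < a < 1
    return np.exp(-a * t)

def eps_cosine(t, a=0.01):        # epsilon = cos(a*t), 0 < a < 1
    return np.cos(a * t)

# number of training trials before each schedule first drops below the 0.05 tolerance
for name, f in [('a^t', eps_exponential), ('1/t^2', eps_inverse_square),
                ('e^-at', eps_exp_decay), ('cos(at)', eps_cosine)]:
    t = 1
    while f(t) > 0.05 and t < 10000:
        t += 1
    print('{:8s} drops below 0.05 after ~{} trials'.format(name, t))
```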
```
# Load the 'sim_improved-learning' file from the improved Q-Learning simulation
vs.plot_trials('sim_improved-learning.csv')
```

### Question 7

Using the visualization above that was produced from your improved Q-Learning simulation, provide a final analysis and make observations about the improved driving agent like in **Question 6**. Questions you should answer:
- *What decaying function was used for epsilon (the exploration factor)?*
- *Approximately how many training trials were needed for your agent before beginning testing?*
- *What epsilon-tolerance and alpha (learning rate) did you use? Why did you use them?*
- *How much improvement was made with this Q-Learner when compared to the default Q-Learner from the previous section?*
- *Would you say that the Q-Learner results show that your driving agent successfully learned an appropriate policy?*
- *Are you satisfied with the safety and reliability ratings of the *Smartcab*?*

**Answer:**

- I used the cosine decay function for epsilon, cos(alpha * trial).
- The agent completed around 150 training trials before testing.
- I set alpha to 0.01 and the epsilon-tolerance to 0.05. This was to make sure that I got a larger number of training trials than there are states in the state space (found to be 96 above), so that my agent could adequately learn the environment without redundancy.
- The safety rating has significantly improved from "F" to "A+", which would indicate that we are adequately capturing information about the environment. The reliability rating has also improved to "A+", up from "D", which would indicate that it is now less influenced by the exploration factor.
- I think this demonstrates that the agent learned an appropriate policy.
- I am satisfied with the ratings of the Smartcab. More trials could be run to improve the average reward per action. Perhaps a different epsilon function could be used to achieve better results within fewer trials, making the learner more scalable.

### Define an Optimal Policy

Sometimes, the answer to the important question *"what am I trying to get my agent to learn?"* only has a theoretical answer and cannot be concretely described. Here, however, you can concretely define what it is the agent is trying to learn, and that is the U.S. right-of-way traffic laws. Since these laws are known information, you can further define, for each state the *Smartcab* is occupying, the optimal action for the driving agent based on these laws. In that case, we call the set of optimal state-action pairs an **optimal policy**. Hence, unlike some theoretical answers, it is clear whether the agent is acting "incorrectly" not only by the reward (penalty) it receives, but also by pure observation. If the agent drives through a red light, we not only see it receive a negative reward but also know that it is not the correct behavior. This can be used to your advantage for verifying whether the **policy** your driving agent has learned is the correct one, or if it is a **suboptimal policy**.

### Question 8

Provide a few examples (using the states you've defined) of what an optimal policy for this problem would look like. Afterwards, investigate the `'sim_improved-learning.txt'` text file to see the results of your improved Q-Learning algorithm. _For each state that has been recorded from the simulation, is the **policy** (the action with the highest value) correct for the given state?
Are there any states where the policy is different than what would be expected from an optimal policy?_ Provide an example of a state and all state-action rewards recorded, and explain why it is the correct policy.

**Answer:**

In general, we would expect the optimal policy to specify that:
- The Smartcab should take the action 'right' on a red light if no oncoming traffic is approaching from the left through the intersection.
- The Smartcab should choose the action 'None' if any movement would lead to a bad action involving other traffic.
- The Smartcab should move in the direction of the waypoint if the light is green and the way is not obstructed by traffic.

An example of a policy from the Q-Learning algorithm in line with the ideal is the following:

```
('forward', 'red', None, None)
 -- forward : -3.97
 -- right : 0.25
 -- None : 1.39
 -- left : -5.08
```

That is, the waypoint is forward, the light is red, and there are no cars near the Smartcab. In this case the ideal action, None, has the highest positive weighting, and the two most disruptive actions (forward and left) are severely penalised, since moving forward or left here would be a violation.

The Q-Learning algorithm does not produce ideal policies when there is a lot of noise, i.e. traffic in all directions. It is difficult to optimise for these situations given the range of possibilities encountered in a small number of training trials. For example,

```
('right', 'red', 'left', 'left')
 -- forward : -0.21
 -- right : 0.06
 -- None : 0.00
 -- left : -0.10
```

Here the waypoint is to the right, the agent is at a red light, and there is traffic in all directions. In this case we would expect the ideal policy to be strongly in favour of the None action. Also, on a red light, a right turn is permitted if no oncoming traffic is approaching from the left through the intersection.

-----
### Optional: Future Rewards - Discount Factor, `'gamma'`

Curiously, as part of the Q-Learning algorithm, you were asked to **not** use the discount factor, `'gamma'` in the implementation. Including future rewards in the algorithm is used to aid in propagating positive rewards backwards from a future state to the current state. Essentially, if the driving agent is given the option to make several actions to arrive at different states, including future rewards will bias the agent towards states that could provide even more rewards. An example of this would be the driving agent moving towards a goal: With all actions and rewards equal, moving towards the goal would theoretically yield better rewards if there is an additional reward for reaching the goal. However, even though in this project the driving agent is trying to reach a destination in the allotted time, including future rewards will not benefit the agent. In fact, if the agent were given many trials to learn, it could negatively affect Q-values!

### Optional Question 9

*There are two characteristics about the project that invalidate the use of future rewards in the Q-Learning algorithm. One characteristic has to do with the *Smartcab* itself, and the other has to do with the environment. Can you figure out what they are and why future rewards won't work for this project?*

**Answer:**

> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
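As an appendix to the optional discussion of `'gamma'` above: the generic (textbook) Q-value update makes the role of the discount factor explicit. This is only an illustrative sketch, not this project's exact implementation; `Q` is assumed here to be a dict mapping states to dicts of action values:

```
# Textbook Q-learning update, shown only to illustrate what the discount
# factor gamma would add; with gamma = 0 the future-reward term vanishes
# and the update depends on the immediate reward alone.
def q_update(Q, state, action, reward, next_state, alpha, gamma=0.0):
    future = max(Q[next_state].values()) if gamma > 0 else 0.0
    Q[state][action] = (1 - alpha) * Q[state][action] + alpha * (reward + gamma * future)
```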
# Hash Codes

Consider the challenges associated with the 16-bit hash code for a character string `s` that sums the Unicode values of the characters in `s`. For example, let `s = "stop"`. Its Unicode character representation is:

```
for char in "stop":
    print(char + ': ' + str(ord(char)))

sum([ord(x) for x in "stop"])
```

If we then sum these Unicode values, we arrive at the following hash code:

```
stop -----------> 454
```

The problem is, the following strings will all map to the same value!

```
stop -----------> 454
pots -----------> 454
tops -----------> 454
spot -----------> 454
```

A better hash code would take into account the _position_ of our characters.

## Polynomial Hash code

If we refer to the characters of our string as $x_0, x_1, \dots, x_{n-1}$, we can then choose a non-zero constant, $a \neq 1$, and use a hash code:

$$a^{n-1} x_0 + a^{n-2} x_1 + \dots + a^1 x_{n-2} + a^0 x_{n-1}$$

This is simply a polynomial in $a$ that has our $x_i$ values as its coefficients. This is known as a **polynomial** hash code.

```
1 << 32

2**32

2 << 2
```

## Investigate hash map uniformity

```
import random
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_format='retina'

n = 0
prime = 109345121
scale = 1 + random.randrange(prime - 1)
shift = random.randrange(prime)

def my_hash_func(k, upper):
    table = upper * [None]
    hash_code = hash(k)
    compressed_code = (hash_code * scale + shift) % prime % len(table)
    return compressed_code

upper = 1000
inputs = list(range(0, upper))
hash_results = []
for i in inputs:
    hash_results.append(my_hash_func(i, upper))

plt.figure(figsize=(15,10))
plt.plot(inputs, hash_results)

plt.figure(figsize=(15,10))
plt.scatter(inputs, hash_results)

def moving_average(x, w):
    return np.convolve(x, np.ones(w), 'valid') / w

averages_over_window_size_5 = moving_average(hash_results, 5)

plt.hist(averages_over_window_size_5)

l = [4, 7, 9, 13, 1, 3, 7]
l1 = [1, 4, 7]; l2 = [3, 9, 13]

def merge_sort(l):
    size = len(l)
    midway = size // 2
    first_half = l[:midway]
    second_half = l[midway:]
    if len(first_half) > 1 or len(second_half) > 1:
        sorted_first_half = merge_sort(first_half)
        sorted_second_half = merge_sort(second_half)
    else:
        sorted_first_half = first_half
        sorted_second_half = second_half
    sorted_l = merge(sorted_first_half, sorted_second_half)
    return sorted_l

def merge(l1, l2):
    """Merge two sorted lists."""
    i = 0
    j = 0
    lmerged = []
    while (i <= len(l1) - 1) or (j <= len(l2) - 1):
        if i == len(l1):
            lmerged.extend(l2[j:])
            break
        if j == len(l2):
            lmerged.extend(l1[i:])
            break
        if (i < len(l1)) and (l1[i] < l2[j]):
            lmerged.append(l1[i])
            i += 1
        else:
            lmerged.append(l2[j])
            j += 1
    return lmerged

merge_sort(l)

l = [random.choice(list(range(1000))) for x in range(1000)]

%%time
res = sorted(l)

%%time
res = merge_sort(l)
```
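Returning to the polynomial hash code described above, here is a small concrete sketch; the constant `a = 33` is an arbitrary illustrative choice:

```
def polynomial_hash(s, a=33):
    """Polynomial hash a^(n-1)*x_0 + ... + a*x_(n-2) + x_(n-1), with
    x_i = ord(s[i]), evaluated via Horner's rule."""
    h = 0
    for char in s:
        h = h * a + ord(char)
    return h

# Unlike the plain sum, the anagrams from above no longer collide:
for word in ["stop", "pots", "tops", "spot"]:
    print(word, '----------->', polynomial_hash(word))
```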
<a href="http://landlab.github.io"><img style="float: left" src="../../media/landlab_header.png"></a> # The deAlmeida Overland Flow Component <hr> <small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small> <hr> This notebook illustrates running the deAlmeida overland flow component in an extremely simple-minded way on a real topography, then shows it creating a flood sequence along an inclined surface with an oscillating water surface at one end. First, import what we'll need: ``` from landlab.components.overland_flow import OverlandFlow from landlab.plot.imshow import imshow_grid from landlab.plot.colors import water_colormap from landlab import RasterModelGrid from landlab.io.esri_ascii import read_esri_ascii from matplotlib.pyplot import figure import numpy as np from time import time %matplotlib inline ``` Pick the initial and run conditions ``` run_time = 100 # duration of run, (s) h_init = 0.1 # initial thin layer of water (m) n = 0.01 # roughness coefficient, (s/m^(1/3)) g = 9.8 # gravity (m/s^2) alpha = 0.7 # time-step factor (nondimensional; from Bates et al., 2010) u = 0.4 # constant velocity (m/s, de Almeida et al., 2012) run_time_slices = (10, 50, 100) ``` Elapsed time starts at 1 second. This prevents errors when setting our boundary conditions. ``` elapsed_time = 1.0 ``` Use Landlab methods to import an ARC ascii grid, and load the data into the field that the component needs to look at to get the data. This loads the elevation data, z, into a "field" in the grid itself, defined on the nodes. ``` rmg, z = read_esri_ascii('Square_TestBasin.asc', name='topographic__elevation') rmg.set_closed_boundaries_at_grid_edges(True, True, True, True) # un-comment these two lines for a "real" DEM #rmg, z = read_esri_ascii('hugo_site.asc', name='topographic__elevation') #rmg.status_at_node[z<0.0] = rmg.BC_NODE_IS_CLOSED ``` We can get at this data with this syntax: ``` np.all(rmg.at_node['topographic__elevation'] == z) ``` Note that the boundary conditions for this grid mainly got handled with the final line of those three, but for the sake of completeness, we should probably manually "open" the outlet. We can find and set the outlet like this: ``` my_outlet_node = 100 # This DEM was generated using Landlab and the outlet node ID was known rmg.status_at_node[my_outlet_node] = rmg.BC_NODE_IS_FIXED_VALUE ``` Now initialize a couple more grid fields that the component is going to need: ``` rmg.add_zeros('surface_water__depth', at='node') # water depth (m) rmg.at_node['surface_water__depth'] += h_init ``` Let's look at our watershed topography ``` imshow_grid(rmg, 'topographic__elevation') #, vmin=1650.0) ``` Now instantiate the component itself ``` of = OverlandFlow( rmg, steep_slopes=True ) #for stability in steeper environments, we set the steep_slopes flag to True ``` Now we're going to run the loop that drives the component: ``` while elapsed_time < run_time: # First, we calculate our time step. dt = of.calc_time_step() # Now, we can generate overland flow. of.overland_flow() # Increased elapsed time print('Elapsed time: ', elapsed_time) elapsed_time += dt imshow_grid(rmg, 'surface_water__depth', cmap='Blues') ``` Now let's get clever, and run a set of time slices: ``` elapsed_time = 1. for t in run_time_slices: while elapsed_time < t: # First, we calculate our time step. dt = of.calc_time_step() # Now, we can generate overland flow. 
of.overland_flow() # Increased elapsed time elapsed_time += dt figure(t) imshow_grid(rmg, 'surface_water__depth', cmap='Blues') ``` ### Click here for more <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">Landlab tutorials</a>
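As an aside on the `of.calc_time_step()` calls in the loops above: conceptually, this family of flow models picks a stable time step from a CFL-style condition (Bates et al., 2010). A rough sketch of that condition, not the component's internal code, is:

```
import numpy as np

def cfl_time_step(h, dx, alpha=0.7, g=9.8):
    """Stable time step dt = alpha * dx / sqrt(g * h_max); alpha and g
    mirror the run parameters set at the top of this notebook."""
    h_max = max(np.max(h), 1e-8)  # guard against an all-dry grid
    return alpha * dx / np.sqrt(g * h_max)

# e.g. a 30 m grid spacing with at most 0.1 m of water:
print(cfl_time_step(np.full(100, 0.1), dx=30.0))
```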
``` import tensorflow as tf import matplotlib.pyplot as plt from sklearn.pipeline import Pipeline from sklearn import datasets, linear_model from sklearn import cross_validation import numpy as np import pandas as pd from sklearn import preprocessing df = pd.read_excel("data0505.xlsx",header=0) # clean up data df = df.dropna(how = 'all') df = df.fillna(0) df = df.round(4) df=df[df['Power']>=0] df.head() min_max_scaler = preprocessing.MinMaxScaler() np_scaled = min_max_scaler.fit_transform(df) df_normalized = pd.DataFrame(np_scaled) df_normalized.head() x = np.array(df_normalized.ix[:,0:2])#first three column are SoC, SoH, power y = np.array(df_normalized.ix[:,5])#delta SEI X_train, X_test, Y_train, Y_test = cross_validation.train_test_split( x, y, test_size=0.2, random_state=42) total_len = X_train.shape[0] total_len # Parameters learning_rate = 0.001 training_epochs = 50 batch_size = 100 display_step = 1 dropout_rate = 0.1 # Network Parameters n_hidden_1 = 10 # 1st layer number of features n_hidden_2 = 5 # 2nd layer number of features n_input = X_train.shape[1] n_classes = 1 # tf Graph input x = tf.placeholder("float", [None, 3]) y = tf.placeholder("float", [None]) # Create model def multilayer_perceptron(x, weights, biases): # Hidden layer with RELU activation layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1']) layer_1 = tf.nn.relu(layer_1) # Hidden layer with RELU activation layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']) layer_2 = tf.nn.relu(layer_2) # Output layer with linear activation out_layer = tf.matmul(layer_2, weights['out']) + biases['out'] return out_layer # Store layers weight & bias weights = { 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], 0, 0.1)), 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], 0, 0.1)), 'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes], 0, 0.1)) } biases = { 'b1': tf.Variable(tf.random_normal([n_hidden_1], 0, 0.1)), 'b2': tf.Variable(tf.random_normal([n_hidden_2], 0, 0.1)), 'out': tf.Variable(tf.random_normal([n_classes], 0, 0.1)) } # Construct model pred = multilayer_perceptron(x, weights, biases) # Define loss and optimizer cost = tf.reduce_mean((tf.transpose(pred)-y)*(tf.transpose(pred)-y)) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Launch the graph with tf.Session() as sess: sess.run(tf.initialize_all_variables()) tf.initialize_all_variables() # Training cycle for epoch in range(training_epochs): avg_cost = 0. 
total_batch = int(total_len/batch_size) # Loop over all batches for i in range(total_batch-1): batch_x = X_train[i*batch_size:(i+1)*batch_size] batch_y = Y_train[i*batch_size:(i+1)*batch_size] # Run optimization op (backprop) and cost op (to get loss value) _, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x, y: batch_y}) # Compute average loss avg_cost += c / total_batch # sample prediction label_value = batch_y estimate = p err = label_value-estimate print ("num batch:", total_batch) # Display logs per epoch step if epoch % display_step == 0: print ("Epoch:", '%04d' % (epoch+1), "cost=", \ "{:.9f}".format(avg_cost)) print ("[*]----------------------------") for i in range(3): print ("label value:", label_value[i], \ "estimated value:", estimate[i]) print ("[*]============================") print ("Optimization Finished!") # Test model # correct_prediction = tf.equal(tf.argmax(pred,0), tf.argmax(y,0)) # Calculate accuracy accuracy = tf.reduce_mean((tf.transpose(pred)-y)*(tf.transpose(pred)-y)) print ("MSE:", accuracy.eval({x: X_test, y: Y_test})) ```
``` import numpy as np import scipy from scipy import sparse import scipy.sparse.linalg import matplotlib.pyplot as plt %matplotlib inline # part a) Id = sparse.csr_matrix(np.eye(2)) Sx = sparse.csr_matrix([[0., 1.], [1., 0.]]) Sz = sparse.csr_matrix([[1., 0.], [0., -1.]]) print(Sz.shape) # part b) def singesite_to_full(op, i, L): op_list = [Id]*L # = [Id, Id, Id ...] with L entries op_list[i] = op full = op_list[0] for op_i in op_list[1:]: full = sparse.kron(full, op_i, format="csr") return full def gen_sx_list(L): return [singesite_to_full(Sx, i, L) for i in range(L)] # part c) def gen_sz_list(L): return [singesite_to_full(Sz, i, L) for i in range(L)] # part d) def gen_hamiltonian(sx_list, sz_list, g, J=1.): L = len(sx_list) H = sparse.csr_matrix((2**L, 2**L)) for j in range(L): H = H - J *( sx_list[j] * sx_list[(j+1)%L]) H = H - g * sz_list[j] return H # check in part d) L = 2 sx_list = gen_sx_list(L) sz_list = gen_sz_list(L) H = gen_hamiltonian(sx_list, sz_list, 0.1) print("H for L=2, g=0.1") print(H.toarray()) # part e) L = 12 sx_list = gen_sx_list(L) sz_list = gen_sz_list(L) H = gen_hamiltonian(sx_list, sz_list, 1.) Hdense = H.toarray() print("L =12: H =", repr(H)) %%timeit sparse.linalg.eigsh(H, which='SA') %%timeit np.linalg.eigh(Hdense) # part f) Ls = [6, 8, 10, 12] gs = np.linspace(0., 2., 21) plt.figure() for L in Ls: sx_list = gen_sx_list(L) sz_list = gen_sz_list(L) sxsx = sx_list[0]*sx_list[L//2] corrs = [] for g in gs: H = gen_hamiltonian(sx_list, sz_list, g, J=1.) E, v = sparse.linalg.eigsh(H, k=3, which='SA') v0 = v[:, 0] # first column of v is the ground state corr = np.inner(v0, sxsx*v0) corrs.append(corr) corrs = np.array(corrs) plt.plot(gs, corrs, label="L={L:d}".format(L=L)) plt.xlabel("g") plt.ylabel("C") plt.legend() # part g) plt.figure(figsize=(10, 8)) for L in [6, 8, 10, 12]: sx_list = gen_sx_list(L) sz_list = gen_sz_list(L) gaps = [] for g in gs: H = gen_hamiltonian(sx_list, sz_list, g, J=1.) E, v = sparse.linalg.eigsh(H, k=3, which='SA') gaps.append((E[1] - E[0], E[2] - E[0])) gaps = np.array(gaps) lines = plt.plot(gs, gaps[:, 0], linestyle='-', label="first excited state, L={L:d}".format(L=L)) plt.plot(gs, gaps[:, 1], color = lines[0].get_color(), linestyle='--', label="second excited state, L={L:d}".format(L=L)) plt.legend() # just for fun: regenerate the correlation plot with open boundary conditions def gen_hamiltonian_open_bc(sx_list, sz_list, g, J=1.): L = len(sx_list) H = sparse.csr_matrix((2**L, 2**L)) for j in range(L): if j < L-1: H = H - J *( sx_list[j] * sx_list[j+1]) H = H - g * sz_list[j] return H plt.figure() for L in Ls: sx_list = gen_sx_list(L) sz_list = gen_sz_list(L) sxsx = sx_list[0]*sx_list[L//2] corrs = [] for g in gs: H = gen_hamiltonian_open_bc(sx_list, sz_list, g, J=1.) E, v = sparse.linalg.eigsh(H, k=3, which='SA') v0 = v[:, 0] # first column of v is the ground state corr = np.inner(v0, sxsx*v0) corrs.append(corr) corrs = np.array(corrs) plt.plot(gs, corrs, label="L={L:d}".format(L=L)) plt.xlabel("g") plt.ylabel("C") plt.legend() # and the plot for the excitation energies for open b.c. plt.figure(figsize=(10, 8)) for L in [6, 8, 10, 12]: sx_list = gen_sx_list(L) sz_list = gen_sz_list(L) gaps = [] for g in gs: H = gen_hamiltonian_open_bc(sx_list, sz_list, g, J=1.) 
E, v = sparse.linalg.eigsh(H, k=3, which='SA') gaps.append((E[1] - E[0], E[2] - E[0])) gaps = np.array(gaps) lines = plt.plot(gs, gaps[:, 0], linestyle='-', label="first excited state, L={L:d}".format(L=L)) plt.plot(gs, gaps[:, 1], color = lines[0].get_color(), linestyle='--', label="second excited state, L={L:d}".format(L=L)) plt.legend() # For comparison on the next sheet: L = 10 sx_list = gen_sx_list(L) sz_list = gen_sz_list(L) H = gen_hamiltonian(sx_list, sz_list, g=0.1, J=1.) E, v = sparse.linalg.eigsh(H, k=3, which='SA') print(E[0]) ```
# Predicting Boston Housing Prices ## Using XGBoost in SageMaker (Deploy) _Deep Learning Nanodegree Program | Deployment_ --- As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass. The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) ## General Outline Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons. 1. Download or otherwise retrieve the data. 2. Process / Prepare the data. 3. Upload the processed data to S3. 4. Train a chosen model. 5. Test the trained model (typically using a batch transform job). 6. Deploy the trained model. 7. Use the deployed model. In this notebook we will be skipping step 5, testing the model. We will still test the model but we will do so by first deploying it and then sending the test data to the deployed model. ## Step 0: Setting up the notebook We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need. ``` %matplotlib inline import os import time from time import gmtime, strftime import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import load_boston import sklearn.model_selection ``` In addition to the modules above, we need to import the various bits of SageMaker that we will be using. ``` import sagemaker from sagemaker import get_execution_role from sagemaker.amazon.amazon_estimator import get_image_uri # This is an object that represents the SageMaker session that we are currently operating in. This # object contains some useful information that we will need to access later such as our region. session = sagemaker.Session() # This is an object that represents the IAM role that we are currently assigned. When we construct # and launch the training job later we will need to tell it what IAM role it should have. Since our # use case is relatively simple we will simply assign the training job the role we currently have. role = get_execution_role() ``` ## Step 1: Downloading the data Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward. ``` boston = load_boston() ``` ## Step 2: Preparing and splitting the data Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets. ``` # First we package up the input data and the target variable (the median value) as pandas dataframes. This # will make saving the data to a file a little easier later on. X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names) Y_bos_pd = pd.DataFrame(boston.target) # We split the dataset into 2/3 training and 1/3 testing sets. X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33) # Then we split the training set further into 2/3 training and 1/3 validation sets. 
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33) ``` ## Step 3: Uploading the training and validation files to S3 When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. We can use the SageMaker API to do this and hide some of the details. ### Save the data locally First we need to create the train and validation csv files which we will then upload to S3. ``` # This is our local data directory. We need to make sure that it exists. data_dir = '../data/boston' if not os.path.exists(data_dir): os.makedirs(data_dir) # We use pandas to save our train and validation data to csv files. Note that we make sure not to include header # information or an index as this is required by the built in algorithms provided by Amazon. Also, it is assumed # that the first entry in each row is the target variable. pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) ``` ### Upload to S3 Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project. ``` prefix = 'boston-xgboost-deploy-ll' val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ``` ## Step 4: Train and construct the XGBoost model Now that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. ### Set up the training job First, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference. ``` # We will need to know the name of the container that we want to use for training. SageMaker provides # a nice utility method to construct this for us. container = get_image_uri(session.boto_region_name, 'xgboost', '0.90-1') # We now specify the parameters we wish to use for our training job training_params = {} # We need to specify the permissions that this training job will have. For our purposes we can use # the same permissions that our current SageMaker session has. training_params['RoleArn'] = role # Here we describe the algorithm we wish to use. The most important part is the container which # contains the training code. training_params['AlgorithmSpecification'] = { "TrainingImage": container, "TrainingInputMode": "File" } # We also need to say where we would like the resulting model artifacst stored. training_params['OutputDataConfig'] = { "S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output" } # We also need to set some parameters for the training job itself. 
Namely we need to describe what sort of # compute instance we wish to use along with a stopping condition to handle the case that there is # some sort of error and the training script doesn't terminate. training_params['ResourceConfig'] = { "InstanceCount": 1, "InstanceType": "ml.m4.xlarge", "VolumeSizeInGB": 5 } training_params['StoppingCondition'] = { "MaxRuntimeInSeconds": 86400 } # Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect # there is on the resulting model. training_params['HyperParameters'] = { "max_depth": "5", "eta": "0.2", "gamma": "4", "min_child_weight": "6", "subsample": "0.8", "objective": "reg:squarederror", "early_stopping_rounds": "10", "num_round": "200" } # Now we need to tell SageMaker where the data should be retrieved from. training_params['InputDataConfig'] = [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": train_location, "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "csv", "CompressionType": "None" }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": val_location, "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "csv", "CompressionType": "None" } ] ``` ### Execute the training job Now that we've built the dict containing the training job parameters, we can ask SageMaker to execute the job. ``` # First we need to choose a training job name. This is useful for if we want to recall information about our # training job at a later date. Note that SageMaker requires a training job name and that the name needs to # be unique, which we accomplish by appending the current timestamp. training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) training_params['TrainingJobName'] = training_job_name # And now we ask SageMaker to create (and execute) the training job training_job = session.sagemaker_client.create_training_job(**training_params) ``` The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates. ``` session.logs_for_job(training_job_name, wait=True) ``` ### Build the model Now that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job. ``` # We begin by asking SageMaker to describe for us the results of the training job. The data structure # returned contains a lot more information than we currently need, try checking it out yourself in # more detail. training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name) model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts'] # Just like when we created a training job, the model name must be unique model_name = training_job_name + "-model" # We also need to tell SageMaker which container should be used for inference and where it should # retrieve the model artifacts from. In our case, the xgboost container that we used for training # can also be used for inference. 
primary_container = { "Image": container, "ModelDataUrl": model_artifacts } # And lastly we construct the SageMaker model model_info = session.sagemaker_client.create_model( ModelName = model_name, ExecutionRoleArn = role, PrimaryContainer = primary_container) ``` ## Step 5: Test the trained model We will be skipping this step for now. We will still test our trained model but we are going to do it by using the deployed model, rather than setting up a batch transform job. ## Step 6: Create and deploy the endpoint Now that we have trained and constructed a model it is time to build the associated endpoint and deploy it. As in the earlier steps, we first need to construct the appropriate configuration. ``` # As before, we need to give our endpoint configuration a name which should be unique endpoint_config_name = "boston-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # And then we ask SageMaker to construct the endpoint configuration endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": model_name, "VariantName": "AllTraffic" }]) ``` And now that the endpoint configuration has been created we can deploy the endpoint itself. **NOTE:** When deploying a model you are asking SageMaker to launch an compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for. In other words **If you are no longer using a deployed endpoint, shut it down!** ``` # Again, we need a unique name for our endpoint endpoint_name = "boston-xgboost-endpoint-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # And then we can deploy our endpoint endpoint_info = session.sagemaker_client.create_endpoint( EndpointName = endpoint_name, EndpointConfigName = endpoint_config_name) ``` Just like when we created a training job, SageMaker is now requisitioning and launching our endpoint. Since we can't do much until the endpoint has been completely deployed we can wait for it to finish. ``` endpoint_dec = session.wait_for_endpoint(endpoint_name) ``` ## Step 7: Use the model Now that our model is trained and deployed we can send test data to it and evaluate the results. Here, because our test data is so small, we can send it all using a single call to our endpoint. If our test dataset was larger we would need to split it up and send the data in chunks, making sure to accumulate the results. ``` # First we need to serialize the input data. In this case we want to send the test data as a csv and # so we manually do this. Of course, there are many other ways to do this. payload = [[str(entry) for entry in row] for row in X_test.values] payload = '\n'.join([','.join(row) for row in payload]) # This time we use the sagemaker runtime client rather than the sagemaker client so that we can invoke # the endpoint that we created. response = session.sagemaker_runtime_client.invoke_endpoint( EndpointName = endpoint_name, ContentType = 'text/csv', Body = payload) # We need to make sure that we deserialize the result of our endpoint call. result = response['Body'].read().decode("utf-8") Y_pred = np.fromstring(result, sep=',') ``` To see how well our model works we can create a simple scatter plot between the predicted and actual values. 
If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement. ``` plt.scatter(Y_test, Y_pred) plt.xlabel("Median Price") plt.ylabel("Predicted Price") plt.title("Median Price vs Predicted Price") ``` ## Delete the endpoint Since we are no longer using the deployed model we need to make sure to shut it down. Remember that you have to pay for the length of time that your endpoint is deployed so the longer it is left running, the more it costs. ``` session.sagemaker_client.delete_endpoint(EndpointName = endpoint_name) ``` ## Optional: Clean up The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ``` # First we will remove all of the files contained in the data_dir directory #!rm $data_dir/* # And then we delete the directory itself #!rmdir $data_dir ```
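As noted in Step 7 above, a larger test set would have to be sent to the endpoint in smaller pieces and the results accumulated. A minimal sketch of that pattern is below (it would need to run before the endpoint is deleted, and the number of chunks is an arbitrary assumption):

```
# Send the test data to the endpoint in chunks and accumulate the predictions.
chunks = np.array_split(X_test.values, 10)  # 10 chunks is an arbitrary choice

predictions = []
for chunk in chunks:
    body = '\n'.join(','.join(str(entry) for entry in row) for row in chunk)
    response = session.sagemaker_runtime_client.invoke_endpoint(
                                        EndpointName = endpoint_name,
                                        ContentType = 'text/csv',
                                        Body = body)
    predictions.append(np.fromstring(response['Body'].read().decode('utf-8'), sep=','))

Y_pred_chunked = np.concatenate(predictions)
```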
``` import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from mnist import MNIST mnist= MNIST('mnist/') X_train,y_train=mnist.load_training() X_test,y_test=mnist.load_testing() X_train=np.asarray(X_train).astype(np.float32) y_train=np.asarray(y_train).astype(np.float32) X_test=np.asarray(X_test).astype(np.float32) y_test=np.asarray(y_test).astype(np.float32) some_digit = X_train[6] some_digit_image = some_digit.reshape(28, 28) plt.imshow(some_digit_image, cmap = 'Greys', interpolation="nearest") plt.axis("off") plt.show() y_train_5 = (y_train == 5) # True for all 5s, False for all other digits. y_test_5 = (y_test == 5) from sklearn.linear_model import SGDClassifier sgd_clf = SGDClassifier(random_state=42) sgd_clf.fit(X_train, y_train_5) sgd_clf.predict([X_train[6]]) from sklearn.model_selection import cross_val_score cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy") from sklearn.model_selection import cross_val_predict y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3) from sklearn.metrics import confusion_matrix confusion_matrix(y_train_5, y_train_pred) from sklearn.metrics import precision_score, recall_score precision_score(y_train_5, y_train_pred) # 3530/(687+3530) recall_score(y_train_5, y_train_pred) #3530/(3530+1891) from sklearn.metrics import f1_score f1_score(y_train_5, y_train_pred) y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,method="decision_function") from sklearn.metrics import precision_recall_curve precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores) def plot_precision_recall_vs_threshold(precisions, recalls, thresholds): plt.plot(thresholds, precisions[:-1], "b--", label="Precision") plt.plot(thresholds, recalls[:-1], "g-", label="Recall") [...] # highlight the threshold, add the legend, axis label and grid plot_precision_recall_vs_threshold(precisions, recalls, thresholds) plt.show() threshold_90_precision = thresholds[np.argmax(precisions >= 0.91)] print(threshold_90_precision) y_train_pred_90 = (y_scores >= 4150) precision_score(y_train_5, y_train_pred_90) recall_score(y_train_5, y_train_pred_90) from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_train_5, y_scores) def plot_roc_curve(fpr, tpr, label=None): plt.plot(fpr, tpr, linewidth=2, label=label) plt.plot([0, 1], [0, 1], 'k--') # dashed diagonal [...] 
# Add axis labels and grid plot_roc_curve(fpr, tpr) plt.show() from sklearn.metrics import roc_auc_score roc_auc_score(y_train_5, y_scores) from sklearn.ensemble import RandomForestClassifier forest_clf = RandomForestClassifier(random_state=42) y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,method="predict_proba") y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_scores_forest) plt.plot(fpr, tpr, "b:", label="SGD") plot_roc_curve(fpr_forest, tpr_forest, "Random Forest") plt.legend(loc="lower right") plt.show() roc_auc_score(y_train_5, y_scores_forest) sgd_clf.fit(X_train, y_train) # y_train, not y_train_5 sgd_clf.predict([some_digit]) some_digit_scores = sgd_clf.decision_function([some_digit]) some_digit_scores from sklearn.multiclass import OneVsOneClassifier ovo_clf = OneVsOneClassifier(SGDClassifier(random_state=42)) ovo_clf.fit(X_train, y_train) ovo_clf.predict([some_digit]) len(ovo_clf.estimators_) forest_clf.fit(X_train, y_train) forest_clf.predict([some_digit]) forest_clf.predict_proba([some_digit]) cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy") from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train.astype(np.float64)) cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3,scoring="accuracy") y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3) conf_mx = confusion_matrix(y_train, y_train_pred) conf_mx plt.matshow(conf_mx, cmap=plt.cm.gray) plt.show() row_sums = conf_mx.sum(axis=1, keepdims=True) norm_conf_mx = conf_mx / row_sums np.fill_diagonal(norm_conf_mx, 0) plt.matshow(norm_conf_mx, cmap=plt.cm.gray) plt.show() cl_a, cl_b = 3, 5 X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)] X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)] X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)] X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)] plt.figure(figsize=(8,8)) plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5) plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5) plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5) plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5) plt.show() from sklearn.neighbors import KNeighborsClassifier y_train_large = (y_train >= 7) y_train_odd = (y_train % 2 == 1) y_multilabel = np.c_[y_train_large, y_train_odd] knn_clf = KNeighborsClassifier() knn_clf.fit(X_train, y_multilabel) knn_clf.predict([some_digit]) y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3) f1_score(y_multilabel, y_train_knn_pred, average="macro") noise = np.random.randint(0, 100, (len(X_train), 784)) X_train_mod = X_train + noise noise = np.random.randint(0, 100, (len(X_test), 784)) X_test_mod = X_test + noise y_train_mod = X_train y_test_mod = X_test knn_clf.fit(X_train_mod, y_train_mod) clean_digit = knn_clf.predict([X_test_mod[some_index]]) plot_digit(clean_digit) ```
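The cells above call `plot_digits`, `plot_digit`, and `some_index`, which are not defined in this notebook (they presumably come from accompanying utility code). A minimal stand-in, written here as an assumption so those cells can run, could be:

```
import numpy as np
import matplotlib.pyplot as plt

some_index = 0  # arbitrary test-set index used by the denoising cell above

def plot_digit(data):
    """Display a single flattened 28x28 digit."""
    plt.imshow(np.asarray(data).reshape(28, 28), cmap='Greys_r')
    plt.axis('off')

def plot_digits(instances, images_per_row=10, **options):
    """Display several flattened 28x28 digits on one grid."""
    size = 28
    images_per_row = min(len(instances), images_per_row)
    n_rows = (len(instances) - 1) // images_per_row + 1
    images = [np.asarray(img).reshape(size, size) for img in instances]
    # pad with blank images so the final row is full
    images += [np.zeros((size, size))] * (n_rows * images_per_row - len(images))
    row_images = [np.concatenate(images[r * images_per_row:(r + 1) * images_per_row], axis=1)
                  for r in range(n_rows)]
    plt.imshow(np.concatenate(row_images, axis=0), cmap='Greys_r', **options)
    plt.axis('off')
```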
<center><img src="./images/logo_fmkn.png" width=300 style="display: inline-block;"></center> ## Машинное обучение ### Семинар 13. ЕМ-алгоритм <br /> <br /> 9 декабря 2021 Будем решать задачу восставновления картинки лица по набору зашумленных картинок (взято с курса deep bayes 2018 https://github.com/bayesgroup/deepbayes-2018). У вас есть $K$ фотографий, поврежденных электромагнитным шумом. Известно, что на каждом фото есть лицо в неизвестно где начинающейся прямоугольной области ширины $w$ и фон, одинаковый для всех фотографий. <center><img src="./images/example_and_structure.jpg" width=800 style="display: inline-block;"></center> ``` from matplotlib import pyplot as plt import numpy as np import zipfile with zipfile.ZipFile('data_em.zip', 'r') as zip_ref: zip_ref.extractall('.') DATA_FILE = "data_em" w = 73 # face_width X = np.load(DATA_FILE) X.shape # H, W, K plt.imshow(X[:, :, 7], cmap="Greys_r") plt.axis("off") tH, tW, tw, tK = 2, 3, 1, 2 tX = np.arange(tH*tW*tK).reshape(tH, tW, tK) tF = np.arange(tH*tw).reshape(tH, tw) tB = np.arange(tH*tW).reshape(tH, tW) ts = 0.1 ta = np.arange(1, (tW-tw+1)+1) ta = ta / ta.sum() tq = np.arange(1, (tW-tw+1)*tK+1).reshape(tW-tw+1, tK) tq = tq / tq.sum(axis=0)[np.newaxis, :] ``` 1. **Реализуйте calculate_log_probability** Для $k$-й картини $X_k$ и некоторой позиции $d_k$: $$ p(X_k \mid d_k,\,F,\,B,\, std) = \prod\limits_{ij}\begin{cases} \mathcal{N}(X_k[i,j]\mid F[i,\,j-d_k],\, std^2), & \text{if}\, (i,j)\in faceArea(d_k)\\ \mathcal{N}(X_k[i,j]\mid B[i,j],\, std^2), & \text{else} \end{cases} $$ Замечания: * $faceArea(d_k) = \{[i, j]| d_k \leq j \leq d_k + w - 1 \}$ * Априорное распределение задаётся обучаемым вектором $a \in \mathbb{R}^{W-w+1}$: $$p(d_k \mid a) = a[d_k],\ \sum\limits_j a[j] = 1$$ * Итоговая вероятностная модель: $$ p(X, d \mid F,\,B,\,std,\,a) = \prod\limits_k p(X_k \mid d_k,\,F,\,B,\,std) p(d_k \mid a)$$ * Не забудьте про логарифм! * `scipy.stats.norm` может вам пригодиться ``` import scipy.stats def calculate_log_probability(X, F, B, s): """ Calculates log p(X_k|d_k, F, B, s) for all images X_k in X and all possible face position d_k. Parameters ---------- X : array, shape (H, W, K) K images of size H x W. F : array, shape (H, w) Estimate of prankster's face. B : array, shape (H, W) Estimate of background. s : float Estimate of standard deviation of Gaussian noise. Returns ------- ll : array, shape(W-w+1, K) ll[dw, k] - log-likelihood of observing image X_k given that the prankster's face F is located at position dw """ H, W, K = X.shape _, w = F.shape # your code here ll = np.zeros((W-w+1, K)) for dw in range(W-w+1): combined = np.copy(B) combined[:, dw:dw+w] = F d_combined = X - np.expand_dims(combined, 2) ll[dw] = scipy.stats.norm(0, s).logpdf(d_combined).sum(axis=(0,1)) return ll # run this cell to test your implementation expected = np.array([[-3541.69812064, -5541.69812064], [-4541.69812064, -6741.69812064], [-6141.69812064, -8541.69812064]]) actual = calculate_log_probability(tX, tF, tB, ts) assert np.allclose(actual, expected) print("OK") ``` 2. **Реализуйте calculate_lower_bound** \begin{equation}\mathscr{L}(q, \,F, \,B,\, s,\, a) = \sum_k \biggl (\mathbb{E} _ {q( d_k)}\bigl ( \log p( X_{k} \mid {d}_{k} , \,F,\,B,\,s) + \log p( d_k \mid a)\bigr) - \mathbb{E} _ {q( d_k)} \log q( d_k)\biggr) \end{equation} Замечания * Используйте calculate_log_probability! * Обратите внимание, что $q( d_k)$ и $p( d_k \mid a)$ дискретны. Например, $P(d_k=i \mid a) = a[i]$. 
```
def calculate_lower_bound(X, F, B, s, a, q):
    """
    Calculates the lower bound L(q, F, B, s, a) for
    the marginal log likelihood.

    Parameters
    ----------
    X : array, shape (H, W, K)
        K images of size H x W.
    F : array, shape (H, w)
        Estimate of prankster's face.
    B : array, shape (H, W)
        Estimate of background.
    s : float
        Estimate of standard deviation of Gaussian noise.
    a : array, shape (W-w+1)
        Estimate of prior on position of face in any image.
    q : array
        q[dw, k] - estimate of posterior of position dw of prankster's
        face given image Xk

    Returns
    -------
    L : float
        The lower bound L(q, F, B, s, a) for the marginal log likelihood.
    """
    # your code here
    return (q * (calculate_log_probability(X,F,B,s) + np.expand_dims(np.log(a), 1)
                 - np.log(q))).sum()

calculate_lower_bound(tX, tF, tB, ts, ta, tq)

# run this cell to test your implementation
expected = -12761.1875
actual = calculate_lower_bound(tX, tF, tB, ts, ta, tq)
assert np.allclose(actual, expected)
print("OK")
```

3. **Implement the E-step**

$$q(d_k) = p(d_k \mid X_k, \,F, \,B, \,s,\, a) = \frac {p( X_{k} \mid {d}_{k} , \,F,\,B,\,s)\, p(d_k \mid a)} {\sum_{d'_k} p( X_{k} \mid d'_k , \,F,\,B,\,s) \,p(d'_k \mid a)}$$

Notes:
* Use calculate_log_probability!
* Work in logarithms, and exponentiate only at the end.
* For numerical stability it is recommended to use the following identity:
$$\beta_i = \log{p_i(\dots)} \quad\rightarrow \quad \frac{e^{\beta_i}}{\sum\limits_k e^{\beta_k}} = \frac{e^{(\beta_i - \max_j \beta_j)}}{\sum\limits_k e^{(\beta_k- \max_j \beta_j)}}$$

```
def run_e_step(X, F, B, s, a):
    """
    Given the current estimate of the parameters, for each image Xk
    estimates the probability p(d_k|X_k, F, B, s, a).

    Parameters
    ----------
    X : array, shape(H, W, K)
        K images of size H x W.
    F : array_like, shape(H, w)
        Estimate of prankster's face.
    B : array shape(H, W)
        Estimate of background.
    s : float
        Estimate of standard deviation of Gaussian noise.
    a : array, shape(W-w+1)
        Estimate of prior on face position in any image.

    Returns
    -------
    q : array
        shape (W-w+1, K)
        q[dw, k] - estimate of posterior of position dw of prankster's
        face given image Xk
    """
    # your code here
    log_nom = calculate_log_probability(X,F,B,s) + np.expand_dims(np.log(a), 1)
    mx = log_nom.max(axis=0)
    nom = np.exp(log_nom - mx)
    return nom / nom.sum(axis=0)

run_e_step(tX, tF, tB, ts, ta)

# run this cell to test your implementation
expected = np.array([[ 1.,  1.],
                     [ 0.,  0.],
                     [ 0.,  0.]])
actual = run_e_step(tX, tF, tB, ts, ta)
assert np.allclose(actual, expected)
print("OK")
```

4. **Implement the M-step**

We need

\begin{equation}\mathscr{L}(q, \,F, \,B,\, s,\, a) = \sum_k \biggl (\mathbb{E} _ {q( d_k)}\bigl ( \log p( X_{k} \mid {d}_{k} , \,F,\,B,\,s) + \log p( d_k \mid a)\bigr) - \mathbb{E} _ {q( d_k)} \log q( d_k)\biggr)\rightarrow \max\limits_{\theta, a} \end{equation}

After some lengthy calculations we obtain (a short derivation of the update for $a$ is given after the notes below):
$$a[j] = \frac{\sum_k q( d_k = j )}{\sum_{j'} \sum_{k'} q( d_{k'} = j')}$$
$$F[i, m] = \frac 1 K \sum_k \sum_{d_k} q(d_k)\, X^k[i,\, m+d_k]$$
\begin{equation}B[i, j] = \frac {\sum_k \sum_{ d_k:\, (i, \,j) \,\not\in faceArea(d_k)} q(d_k)\, X^k[i, j]} {\sum_k \sum_{d_k: \,(i, \,j)\, \not\in faceArea(d_k)} q(d_k)}\end{equation}
\begin{equation}s^2 = \frac 1 {HWK} \sum_k \sum_{d_k} q(d_k) \sum_{i,\, j} (X^k[i, \,j] - Model^{d_k}[i, \,j])^2\end{equation}
where $Model^{d_k}[i, j]$ is the image composed of the background and the face shifted by $d_k$.

Notes:
* Update the parameters in the order: $a$, $F$, $B$, $s$.
* Use each updated parameter when estimating the next one.
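For reference, here is a sketch of where the update for $a$ comes from: keeping only the $a$-dependent part of $\mathscr{L}$ and adding a Lagrange multiplier for the constraint $\sum_j a[j] = 1$ gives

$$\frac{\partial}{\partial a[j]}\left(\sum_k \sum_{j'} q(d_k = j')\log a[j'] + \lambda\Bigl(1 - \sum_{j'} a[j']\Bigr)\right) = \frac{\sum_k q(d_k = j)}{a[j]} - \lambda = 0,$$

so $a[j] \propto \sum_k q(d_k = j)$, and normalising over $j$ gives the formula above. The updates for $F$, $B$ and $s$ follow similarly by setting the corresponding derivatives to zero.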
```
def run_m_step(X, q, w):
    """
    Estimates F, B, s, a given the estimate of the posteriors defined by q.

    Parameters
    ----------
    X : array, shape (H, W, K)
        K images of size H x W.
    q :
        q[dw, k] - estimate of posterior of position dw of prankster's
        face given image Xk
    w : int
        Face mask width.

    Returns
    -------
    F : array, shape (H, w)
        Estimate of prankster's face.
    B : array, shape (H, W)
        Estimate of background.
    s : float
        Estimate of standard deviation of Gaussian noise.
    a : array, shape (W-w+1)
        Estimate of prior on position of face in any image.
    """
    # your code here
    H, W, K = X.shape
    dw, _ = q.shape
    w = W - dw + 1

    a = q.sum(axis=1)/q.sum()

    F = np.zeros((H, w))
    for dk in range(dw):
        F += (q[dk] * X[:, dk:dk+w]).sum(axis=2) / K

    B = np.zeros((H, W))
    denom = np.zeros((H, W))
    for dk in range(dw):
        if dk > 0:
            denom[:, :dk] += q[dk].sum()
            B[:, :dk] += (q[dk] * X[:, :dk]).sum(axis=2)
        if dk + w < W:
            B[:, dk+w:] += (q[dk] * X[:, dk+w:]).sum(axis=2)
            denom[:, dk + w:] += q[dk].sum()
    B /= denom

    s2 = 0
    for dk in range(dw):
        model = np.copy(B)
        model[:, dk:dk+w] = F
        s2 += (q[dk] * ((X - np.expand_dims(model,2)) ** 2)).sum()
    s2 /= H * W * K

    return F, B, np.sqrt(s2), a

run_m_step(tX, tq, tw)

# run this cell to test your implementation
expected = [np.array([[ 3.27777778],
                      [ 9.27777778]]),
            np.array([[  0.48387097,   2.5       ,   4.52941176],
                      [  6.48387097,   8.5       ,  10.52941176]]),
            0.94868,
            np.array([ 0.13888889,  0.33333333,  0.52777778])]
actual = run_m_step(tX, tq, tw)
for a, e in zip(actual, expected):
    assert np.allclose(a, e)
print("OK")
```

5. **Implement the EM algorithm**

```
def run_EM(X, w, F=None, B=None, s=None, a=None, tolerance=0.001, max_iter=50):
    """
    Runs the EM loop until the likelihood of observing X given the current
    estimate of the parameters stops changing, as defined by a fixed
    tolerance.

    Parameters
    ----------
    X : array, shape (H, W, K)
        K images of size H x W.
    w : int
        Face mask width.
    F : array, shape (H, w), optional
        Initial estimate of prankster's face.
    B : array, shape (H, W), optional
        Initial estimate of background.
    s : float, optional
        Initial estimate of standard deviation of Gaussian noise.
    a : array, shape (W-w+1), optional
        Initial estimate of prior on position of face in any image.
    tolerance : float, optional
        Parameter for stopping criterion.
    max_iter : int, optional
        Maximum number of iterations.

    Returns
    -------
    F, B, s, a : trained parameters.
    """
    # your code here
    H, W, N = X.shape
    if F is None:
        F = np.random.randint(0, 255, (H, w))
    if B is None:
        B = np.random.randint(0, 255, (H, W))
    if a is None:
        a = np.ones(W - w + 1)
        a /= np.sum(a)
    if s is None:
        s = np.random.rand()*64*64
    l_prev = -np.inf
    for it in range(max_iter):
        print(f"iteration = {it}")
        q = run_e_step(X, F, B, s, a)
        print("e")
        F, B, s, a = run_m_step(X, q, w)
        print("m")
        print(s)
        if it == max_iter - 1:
            print("no convergence")
            break
        l_cur = calculate_lower_bound(X, F, B, s, a, q)
        if l_cur - l_prev < tolerance:
            print(f"converged in {it} iterations {l_cur - l_prev}")
            break
        else:
            l_prev = l_cur
    return F, B, s, a
```

Decoding the picture:

```
def show(F, i=1, n=1):
    """ shows face F at subplot i out of n """
    plt.subplot(1, n, i)
    plt.imshow(F, cmap="Greys_r")
    plt.axis("off")

%%time
F, B, s, a = [None] * 4
lens = [50, 100, 300, 500, 1000]
iters = [5, 1, 1, 1, 1]
plt.figure(figsize=(20, 5))
for i, (l, it) in enumerate(zip(lens, iters)):
    F, B, s, a = run_EM(X[:, :, :l], w, F, B, s, a, max_iter=it)
    print(s)
    show(F, i+1, 5)
```

And the background:

```
show(B)
```
# Near real-time HF-Radar currents in the proximity of the Deepwater Horizon site The explosion on the Deepwater Horizon (DWH) tragically killed 11 people, and resulted in one of the largest marine oil spills in history. One of the first questions when there is such a tragedy is: where will the oil go? In order the help answer that question one can use Near real time currents from the HF-Radar sites near the incident. First let's start with the [HF-Radar DAC](http://cordc.ucsd.edu/projects/mapping/maps/), where one can browser the all available data interactively. Below we show an IFrame with the area near DWH for the 27 of July of 2017. In this notebook we will demonstrate how to obtain such data programmatically. (For more information on the DWH see [http://response.restoration.noaa.gov/oil-and-chemical-spills/significant-incidents/deepwater-horizon-oil-spill](http://response.restoration.noaa.gov/oil-and-chemical-spills/significant-incidents/deepwater-horizon-oil-spill).) ``` from IPython.display import HTML url = ( 'https://cordc.ucsd.edu/projects/mapping/maps/fullpage.php?' 'll=29.061888,-87.373643&' 'zm=7&' 'mt=&' 'rng=0.00,50.00&' 'us=1&' 'cs=4&' 'res=6km_h&' 'ol=3&' 'cp=1' ) iframe = '<iframe src="{src}" width="750" height="450" style="border:none;"></iframe>'.format HTML(iframe(src=url)) ``` The interactive interface is handy for exploration but we usually need to download "mechanically" in order to use them in our analysis, plots, or for downloading time-series. One way to achieve that is to use an OPeNDAP client, here Python's `xarray`, and explore the endpoint directly. (We'll use the same 6 km resolution from the IFrame above.) ``` import xarray as xr url = ( 'http://hfrnet-tds.ucsd.edu/thredds/dodsC/HFR/USEGC/6km/hourly/RTV/' 'HFRADAR_US_East_and_Gulf_Coast_6km_Resolution_Hourly_RTV_best.ncd' ) ds = xr.open_dataset(url) ds ``` How about extracting a week time-series from the dataset averaged around the area of interest? ``` dx = dy = 2.25 # Area around the point of interest. center = -87.373643, 29.061888 # Point of interest. dsw = ds.sel(time=slice('2017-07-20', '2017-07-27')) dsw = dsw.sel( lon=(dsw.lon < center[0]+dx) & (dsw.lon > center[0]-dx), lat=(dsw.lat < center[1]+dy) & (dsw.lat > center[1]-dy), ) ``` With `xarray` we can average hourly (`resample`) the whole dataset with one method call. ``` dsw = dsw.resample(freq='1H', dim='time', how='mean') ``` Now all we have to do is mask the missing data with `NaN`s and average over the area. ``` import numpy.ma as ma v = dsw['v'].data u = dsw['u'].data time = dsw['time'].to_index().to_pydatetime() u = ma.masked_invalid(u) v = ma.masked_invalid(v) i, j, k = u.shape u = u.reshape(i, j*k).mean(axis=1) v = v.reshape(i, j*k).mean(axis=1) %matplotlib inline import matplotlib.pyplot as plt from oceans import stick_plot fig, ax = plt.subplots(figsize=(11, 2.75)) q = stick_plot(time, u, v, ax=ax) ref = 0.5 qk = plt.quiverkey(q, 0.1, 0.85, ref, '{} {}'.format(ref, ds['u'].units), labelpos='N', coordinates='axes') _ = plt.xticks(rotation=70) ``` To close this post let's us reproduce the HF radar DAC image from above but using yesterday's data. ``` from datetime import date, timedelta yesterday = date.today() - timedelta(days=1) dsy = ds.sel(time=yesterday) ``` Now that we singled out the date and and time we want the data, we trigger the download by accessing the data with `xarray`'s `.data` property. 
``` u = dsy['u'].data v = dsy['v'].data lon = dsy.coords['lon'].data lat = dsy.coords['lat'].data time = dsy.coords['time'].data ``` The cell below computes the speed from the velocity. We can use the speed computation to color code the vectors. Note that we re-create the vector velocity preserving the direction but using intensity of `1`. (The same visualization technique used in the HF radar DAC.) ``` import numpy as np from oceans import uv2spdir, spdir2uv angle, speed = uv2spdir(u, v) us, vs = spdir2uv(np.ones_like(speed), angle, deg=True) ``` Now we can create a `matplotlib` figure displaying the data. ``` import cartopy.crs as ccrs from cartopy import feature from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER LAND = feature.NaturalEarthFeature( 'physical', 'land', '10m', edgecolor='face', facecolor='lightgray' ) sub = 2 bbox = lon.min(), lon.max(), lat.min(), lat.max() fig, ax = plt.subplots( figsize=(9, 9), subplot_kw=dict(projection=ccrs.PlateCarree()) ) ax.set_extent([center[0]-dx-dx, center[0]+dx, center[1]-dy, center[1]+dy]) vmin, vmax = np.nanmin(speed[::sub, ::sub]), np.nanmax(speed[::sub, ::sub]) speed_clipped = np.clip(speed[::sub, ::sub], 0, 0.65) ax.quiver( lon[::sub], lat[::sub], us[::sub, ::sub], vs[::sub, ::sub], speed_clipped, scale=30, ) # Deepwater Horizon site. ax.plot(-88.365997, 28.736628, marker='o', color='crimson') gl = ax.gridlines(draw_labels=True) gl.xlabels_top = gl.ylabels_right = False gl.xformatter = LONGITUDE_FORMATTER gl.yformatter = LATITUDE_FORMATTER feature = ax.add_feature(LAND, zorder=0, edgecolor='black') ```
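The `uv2spdir`/`spdir2uv` round trip above just normalises the velocity vectors to unit length while keeping their direction, so the quiver colors, rather than the arrow lengths, carry the speed. A dependency-free sketch of the same idea (reusing the `u`, `v` arrays loaded above; this is not the `oceans` implementation) would be:

```
import numpy as np

speed = np.hypot(u, v)   # current speed from the velocity components
with np.errstate(invalid='ignore', divide='ignore'):
    u_unit = u / speed   # unit vectors that preserve direction only
    v_unit = v / speed
```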
## define what to cluster

This document contains part of the code in -lfc. Settings used here:

- dataset = 0.6 (July 2015)
- minimum number of studies = 80
- mask = bilateral ATL

```
%matplotlib inline
from neurosynth.base.dataset import Dataset

dataset = Dataset.load("data/neurosynth_0.6_400_4.pkl")

from neurosynth.analysis.cluster import Clusterable

mask = 'masks/Xu_ATLp2.nii'
roi = Clusterable(dataset, mask=mask, min_studies=80, feature_threshold=0.05)
reference = Clusterable(dataset, min_studies=80, feature_threshold=0.05)

from copy import deepcopy
import numpy as np
from six import string_types
from sklearn import decomposition as sk_decomp
from sklearn import cluster as sk_cluster
from sklearn.metrics import pairwise_distances
from os.path import exists, join
from os import makedirs
from nibabel import nifti1
from neurosynth.analysis import meta

reduce_reference = 'pca'
n_components = 100
reduce_reference = {
    'pca': sk_decomp.RandomizedPCA,
    'ica': sk_decomp.FastICA
}[reduce_reference](n_components)

method = 'coactivation'
transpose = (method == 'coactivation')
reference = reference.transform(reduce_reference, transpose=transpose)

distance_metric = 'correlation'
distances = pairwise_distances(roi.data, reference.data, metric=distance_metric)

from __future__ import print_function

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score

import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np

range_n_clusters = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]

for n_clusters in range_n_clusters:
    # clustering_algorithm = 'kmeans'
    # clustering_kwargs = {}
    # clustering_algorithm = {
    #     'kmeans': sk_cluster.KMeans,
    #     'minik': sk_cluster.MiniBatchKMeans
    #     }[clustering_algorithm](n_clusters, **clustering_kwargs)

    # Initialize the clusterer with n_clusters value and a random generator
    # seed of 10 for reproducibility.
    # labels = clustering_algorithm.fit_predict(distances)
    clusterer = KMeans(n_clusters=n_clusters, random_state=10)
    labels = clusterer.fit_predict(distances)

    # The silhouette_score gives the average value for all the samples.
    # This gives a perspective into the density and separation of the formed
    # clusters.
    silhouette_avg = silhouette_score(distances, labels)
    print("For n_clusters =", n_clusters,
          "The average silhouette_score is :", silhouette_avg)
```
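The loop above prints one average silhouette score per candidate number of clusters but does not retain them. As a small, hedged extension (plain scikit-learn only, reusing the `distances` matrix and `range_n_clusters` defined above), one could keep the scores, pick the best-scoring k, and refit to obtain the final labels for the ROI voxels:

```
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

scores = {}
for n_clusters in range_n_clusters:
    labels = KMeans(n_clusters=n_clusters, random_state=10).fit_predict(distances)
    scores[n_clusters] = silhouette_score(distances, labels)

# k with the highest average silhouette score.
best_k = max(scores, key=scores.get)
print("Best n_clusters by silhouette:", best_k)

# Refit with the chosen k to get the final cluster labels for the ROI voxels.
final_labels = KMeans(n_clusters=best_k, random_state=10).fit_predict(distances)
```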
### DeRT analysis using Iris dataset ``` from sklearn.datasets import load_iris from sklearn.preprocessing import StandardScaler from kitchen.dert.dert_models import FeatureDrivenModel, CombinedModel import numpy as np from nltk.translate.bleu_score import sentence_bleu iris = load_iris() X = iris['data'] y = iris['target'] scaler = StandardScaler() scaler.fit(X) X = scaler.transform(X) trial_2 = CombinedModel() trial_2.transform_data(X,y) trial_2.create_model() trial_2.fit_model() trial_2.combined_model.summary() x = np.array(trial_2.df.iloc[10, 0:4]) ' '.join(trial_2.predict(x)[0]) trial_2.score() ``` --- ### Trials ``` actual_path = trial_2.df.iloc[1, 8] actual_path_tok = [trial_2.char_indices[char] for char in actual_path] actual_path_tok trial_2.char_indices trial_2.get_j_coeff(actual_path_tok, [0,9,6,1]) import distance distance.levenshtein(['S', 'L', 'E'],['S', 'L', 'E']) ## Extracting failure paths a = ['S', '4R', '4L', '3L', 'E'] # Actual b = ['S', '4R', '4L', '3L', 'E'] # Predicted # Target - 1, versicolor list(set(a) - set(b)) == [] # order doesn't matter. Lost in BLEU score ## Failure scenarios # Case 1 - Same path different order. # Sticking to the order would fail. Should we re-order and then use? a = ['S', '4R', '4L', '3L', 'E'] b = ['S', '4L', '3L','4R', 'E'] # Case 2 - Different path, right prediction # Prediction is right but path is entirely different a = ['S', '4R', '4L', '3L', 'E'] b = ['S', '4R', 'E'] # Not a leaf node ! # Ex actual path - ['S', '4R', '4L', '3L', 'E'] # Perfect match # Order mismatch - ['S', '4L', '3L', '4R', 'E'] -- Check at pred level # Subset of the tree - ['S', '4L','4L','4L' 'E'] a == b # order matters a[-1] test_path = list(''.join(a))[1:-1] for i in range(len(test_path)): if i%2 == 0: test_path[i] = int(test_path[i]) test_path trial_2.clf from IPython.display import Image from sklearn import tree import pydotplus dot_data = tree.export_graphviz(trial_2.clf, out_file=None, feature_names=iris.feature_names, class_names=iris.target_names) graph = pydotplus.graph_from_dot_data(dot_data) Image(graph.create_png()) iris.feature_names iris.target_names trial_2.clf.tree_.feature trial_2.clf.tree_.threshold stack = [(0, -1)] stack.pop() trial_2.clf.tree_.children_left # left nodes trial_2.clf.tree_.children_right # right nodes pred_path = ['L', 'L', 'R'] pred_features = [4, 3, 4] n_nodes = trial_2.clf.tree_.node_count children_left = trial_2.clf.tree_.children_left children_right = trial_2.clf.tree_.children_right feature = trial_2.clf.tree_.feature # threshold = trial_2.clf.tree_.threshold # The tree structure can be traversed to compute various properties such # as the depth of each node and whether or not it is a leaf. # node_depth = np.zeros(shape=n_nodes, dtype=np.int64) is_leaves = np.zeros(shape=n_nodes, dtype=bool) stack = [(0, -1)] # seed is the root node id and its parent depth while len(stack) > 0: node_id, parent_depth = stack.pop() node_depth[node_id] = parent_depth + 1 # If we have a test node if (children_left[node_id] != children_right[node_id]): stack.append((children_left[node_id], parent_depth + 1)) stack.append((children_right[node_id], parent_depth + 1)) else: is_leaves[node_id] = True # print("The binary tree structure has %s nodes and has " # "the following tree structure:" # % n_nodes) # for i in range(n_nodes): # if is_leaves[i]: # print("%snode=%s leaf node." % (node_depth[i] * "\t", i)) # else: # print("%snode=%s test node: go to node %s if X[:, %s] <= %s else to " # "node %s." 
# % (node_depth[i] * "\t", # i, # children_left[i], # feature[i], # threshold[i], # children_right[i], # )) node = 0 pred_target = -1 for i in range(len(pred_path)): if pred_path[i] == 'L': if feature[node]+1 == pred_features[i]: node = children_left[node] else: pred_target = -1 # Remove for "subset" checks break elif pred_path[i] == 'R': print(node) if feature[node]+1 == pred_features[i]: node = children_right[node] else: pred_target = -1 # Remove for "subset" checks break if is_leaves[node]: for i, x in enumerate(trial_2.clf.tree_.value[node][0]): if x > 0: pred_target = i pred_target feature trial_2.clf.tree_.value[1][0] children_left feature ```
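The stack-based traversal above reconstructs the tree layout from `children_left`, `children_right`, and `feature` by hand. scikit-learn can also report a sample's root-to-leaf path directly via `decision_path`. The sketch below is self-contained (it fits a fresh tree on Iris rather than reusing `trial_2.clf`, whose wrapper is specific to the DeRT code) and encodes the path in the same `['S', '<feature>L/R', ..., 'E']` style used above; that encoding is mimicked here for illustration, not taken from the DeRT library itself.

```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

sample = iris.data[10:11]                   # one sample, kept two-dimensional
node_indicator = clf.decision_path(sample)  # sparse (n_samples x n_nodes) indicator
leaf_id = clf.apply(sample)[0]              # leaf node the sample ends up in

path_nodes = node_indicator.indices[
    node_indicator.indptr[0]:node_indicator.indptr[1]
]

tokens = ['S']
for node_id in path_nodes:
    if node_id == leaf_id:
        tokens.append('E')
    elif sample[0, clf.tree_.feature[node_id]] <= clf.tree_.threshold[node_id]:
        tokens.append('{}L'.format(clf.tree_.feature[node_id] + 1))  # went left
    else:
        tokens.append('{}R'.format(clf.tree_.feature[node_id] + 1))  # went right

print(tokens)  # e.g. ['S', '4L', 'E'], depending on the fitted tree
```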
<a href="https://colab.research.google.com/github/hwangsog/magenta/blob/master/MusicVAE.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Copyright 2017 Google LLC. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. # MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music. ### ___Adam Roberts, Jesse Engel, Colin Raffel, Curtis Hawthorne, and Douglas Eck___ [MusicVAE](https://g.co/magenta/music-vae) learns a latent space of musical scores, providing different modes of interactive musical creation, including: * Random sampling from the prior distribution. * Interpolation between existing sequences. * Manipulation of existing sequences via attribute vectors. Examples of these interactions can be generated below, and selections can be heard in our [YouTube playlist](https://www.youtube.com/playlist?list=PLBUMAYA6kvGU8Cgqh709o5SUvo-zHGTxr). For short sequences (e.g., 2-bar "loops"), we use a bidirectional LSTM encoder and LSTM decoder. For longer sequences, we use a novel hierarchical LSTM decoder, which helps the model learn longer-term structures. We also model the interdependencies between instruments by training multiple decoders on the lowest-level embeddings of the hierarchical decoder. For additional details, check out our [blog post](https://g.co/magenta/music-vae) and [paper](https://goo.gl/magenta/musicvae-paper). ___ This colab notebook is self-contained and should run natively on google cloud. The [code](https://github.com/tensorflow/magenta/tree/master/magenta/models/music_vae) and [checkpoints](http://download.magenta.tensorflow.org/models/music_vae/checkpoints.tar.gz) can be downloaded separately and run locally, which is required if you want to train your own model. # Basic Instructions 1. Double click on the hidden cells to make them visible, or select "View > Expand Sections" in the menu at the top. 2. Hover over the "`[ ]`" in the top-left corner of each cell and click on the "Play" button to run it, in order. 3. Listen to the generated samples. 4. Make it your own: copy the notebook, modify the code, train your own models, upload your own MIDI, etc.! # Environment Setup Includes package installation for sequence synthesis. Will take a few minutes. ``` #@title Setup Environment #@test {"output": "ignore"} import glob print 'Copying checkpoints and example MIDI from GCS. This will take a few minutes...' !gsutil -q -m cp -R gs://download.magenta.tensorflow.org/models/music_vae/colab2/* /content/ print 'Installing dependencies...' !apt-get update -qq && apt-get install -qq libfluidsynth1 fluid-soundfont-gm build-essential libasound2-dev libjack-dev !pip install -q pyfluidsynth !pip install -qU magenta # Hack to allow python to pick up the newly-installed fluidsynth lib. # This is only needed for the hosted Colab environment. 
import ctypes.util orig_ctypes_util_find_library = ctypes.util.find_library def proxy_find_library(lib): if lib == 'fluidsynth': return 'libfluidsynth.so.1' else: return orig_ctypes_util_find_library(lib) ctypes.util.find_library = proxy_find_library print 'Importing libraries and defining some helper functions...' from google.colab import files import magenta.music as mm from magenta.models.music_vae import configs from magenta.models.music_vae.trained_model import TrainedModel import numpy as np import os import tensorflow as tf # Necessary until pyfluidsynth is updated (>1.2.5). import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) def play(note_sequence): mm.play_sequence(note_sequence, synth=mm.fluidsynth) def interpolate(model, start_seq, end_seq, num_steps, max_length=32, assert_same_length=True, temperature=0.5, individual_duration=4.0): """Interpolates between a start and end sequence.""" note_sequences = model.interpolate( start_seq, end_seq,num_steps=num_steps, length=max_length, temperature=temperature, assert_same_length=assert_same_length) print 'Start Seq Reconstruction' play(note_sequences[0]) print 'End Seq Reconstruction' play(note_sequences[-1]) print 'Mean Sequence' play(note_sequences[num_steps // 2]) print 'Start -> End Interpolation' interp_seq = mm.sequences_lib.concatenate_sequences( note_sequences, [individual_duration] * len(note_sequences)) play(interp_seq) mm.plot_sequence(interp_seq) return interp_seq if num_steps > 3 else note_sequences[num_steps // 2] def download(note_sequence, filename): mm.sequence_proto_to_midi_file(note_sequence, filename) files.download(filename) print 'Done' ``` # 2-Bar Drums Model Below are 4 pre-trained models to experiment with. The first 3 map the 61 MIDI drum "pitches" to a reduced set of 9 classes (bass, snare, closed hi-hat, open hi-hat, low tom, mid tom, high tom, crash cymbal, ride cymbal) for a simplified but less expressive output space. The last model uses a [NADE](http://homepages.inf.ed.ac.uk/imurray2/pub/11nade/) to represent all possible MIDI drum "pitches". * **drums_2bar_oh_lokl**: This *low* KL model was trained for more *realistic* sampling. The output is a one-hot encoding of 2^9 combinations of hits. It has a single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM decoder with 256 nodes in each layer, and a Z with 256 dimensions. During training it was given 0 free bits, and had a fixed beta value of 0.8. After 300k steps, the final accuracy is 0.73 and KL divergence is 11 bits. * **drums_2bar_oh_hikl**: This *high* KL model was trained for *better reconstruction and interpolation*. The output is a one-hot encoding of 2^9 combinations of hits. It has a single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM decoder with 256 nodes in each layer, and a Z with 256 dimensions. During training it was given 96 free bits and had a fixed beta value of 0.2. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 300k, steps the final accuracy is 0.97 and KL divergence is 107 bits. * **drums_2bar_nade_reduced**: This model outputs a multi-label "pianoroll" with 9 classes. It has a single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM-NADE decoder with 512 nodes in each layer and 9-dimensional NADE with 128 hidden units, and a Z with 256 dimensions. During training it was given 96 free bits and has a fixed beta value of 0.2. 
It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 300k steps, the final accuracy is 0.98 and KL divergence is 110 bits. * **drums_2bar_nade_full**: The output is a multi-label "pianoroll" with 61 classes. A single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM-NADE decoder with 512 nodes in each layer and 61-dimensional NADE with 128 hidden units, and a Z with 256 dimensions. During training it was given 0 free bits and has a fixed beta value of 0.2. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 300k steps, the final accuracy is 0.90 and KL divergence is 116 bits. ``` #@title Load Pretrained Models drums_models = {} # One-hot encoded. drums_config = configs.CONFIG_MAP['cat-drums_2bar_small'] drums_models['drums_2bar_oh_lokl'] = TrainedModel(drums_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/drums_2bar_small.lokl.ckpt') drums_models['drums_2bar_oh_hikl'] = TrainedModel(drums_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/drums_2bar_small.hikl.ckpt') # Multi-label NADE. drums_nade_reduced_config = configs.CONFIG_MAP['nade-drums_2bar_reduced'] drums_models['drums_2bar_nade_reduced'] = TrainedModel(drums_nade_reduced_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/drums_2bar_nade.reduced.ckpt') drums_nade_full_config = configs.CONFIG_MAP['nade-drums_2bar_full'] drums_models['drums_2bar_nade_full'] = TrainedModel(drums_nade_full_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/drums_2bar_nade.full.ckpt') ``` ## Generate Samples ``` #@title Generate 4 samples from the prior of one of the models listed above. drums_sample_model = "drums_2bar_oh_lokl" #@param ["drums_2bar_oh_lokl", "drums_2bar_oh_hikl", "drums_2bar_nade_reduced", "drums_2bar_nade_full"] temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1} drums_samples = drums_models[drums_sample_model].sample(n=4, length=32, temperature=temperature) for ns in drums_samples: play(ns) #@title Optionally download generated MIDI samples. for i, ns in enumerate(drums_samples): download(ns, '%s_sample_%d.mid' % (drums_sample_model, i)) ``` ## Generate Interpolations ``` #@title Option 1: Use example MIDI files for interpolation endpoints. input_drums_midi_data = [ tf.gfile.Open(fn).read() for fn in sorted(tf.gfile.Glob('/content/midi/drums_2bar*.mid'))] #@title Option 2: upload your own MIDI files to use for interpolation endpoints instead of those provided. input_drums_midi_data = files.upload().values() or input_drums_midi_data #@title Extract drums from MIDI files. This will extract all unique 2-bar drum beats using a sliding window with a stride of 1 bar. drums_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_drums_midi_data] extracted_beats = [] for ns in drums_input_seqs: extracted_beats.extend(drums_nade_full_config.data_converter.to_notesequences( drums_nade_full_config.data_converter.to_tensors(ns)[1])) for i, ns in enumerate(extracted_beats): print "Beat", i play(ns) #@title Interpolate between 2 beats, selected from those in the previous cell. 
drums_interp_model = "drums_2bar_oh_hikl" #@param ["drums_2bar_oh_lokl", "drums_2bar_oh_hikl", "drums_2bar_nade_reduced", "drums_2bar_nade_full"] start_beat = 0 #@param {type:"integer"} end_beat = 1 #@param {type:"integer"} start_beat = extracted_beats[start_beat] end_beat = extracted_beats[end_beat] temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1} num_steps = 13 #@param {type:"integer"} drums_interp = interpolate(drums_models[drums_interp_model], start_beat, end_beat, num_steps=num_steps, temperature=temperature) #@title Optionally download interpolation MIDI file. download(drums_interp, '%s_interp.mid' % drums_interp_model) ``` # 2-Bar Melody Model The pre-trained model consists of a single-layer bidirectional LSTM encoder with 2048 nodes in each direction, a 3-layer LSTM decoder with 2048 nodes in each layer, and Z with 512 dimensions. The model was given 0 free bits, and had its beta valued annealed at an exponential rate of 0.99999 from 0 to 0.43 over 200k steps. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. The final accuracy is 0.95 and KL divergence is 58 bits. ``` #@title Load the pre-trained model. mel_2bar_config = configs.CONFIG_MAP['cat-mel_2bar_big'] mel_2bar = TrainedModel(mel_2bar_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/mel_2bar_big.ckpt') ``` ## Generate Samples ``` #@title Generate 4 samples from the prior. temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1} mel_2_samples = mel_2bar.sample(n=4, length=32, temperature=temperature) for ns in mel_2_samples: play(ns) #@title Optionally download samples. for i, ns in enumerate(mel_2_samples): download(ns, 'mel_2bar_sample_%d.mid' % i) ``` ## Generate Interpolations ``` #@title Option 1: Use example MIDI files for interpolation endpoints. input_mel_midi_data = [ tf.gfile.Open(fn).read() for fn in sorted(tf.gfile.Glob('/content/midi/mel_2bar*.mid'))] #@title Option 2: Upload your own MIDI files to use for interpolation endpoints instead of those provided. input_mel_midi_data = files.upload().values() or input_mel_midi_data #@title Extract melodies from MIDI files. This will extract all unique 2-bar melodies using a sliding window with a stride of 1 bar. mel_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_mel_midi_data] extracted_mels = [] for ns in mel_input_seqs: extracted_mels.extend( mel_2bar_config.data_converter.to_notesequences( mel_2bar_config.data_converter.to_tensors(ns)[1])) for i, ns in enumerate(extracted_mels): print "Melody", i play(ns) #@title Interpolate between 2 melodies, selected from those in the previous cell. start_melody = 0 #@param {type:"integer"} end_melody = 1 #@param {type:"integer"} start_mel = extracted_mels[start_melody] end_mel = extracted_mels[end_melody] temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1} num_steps = 13 #@param {type:"integer"} mel_2bar_interp = interpolate(mel_2bar, start_mel, end_mel, num_steps=num_steps, temperature=temperature) #@title Optionally download interpolation MIDI file. download(mel_2bar_interp, 'mel_2bar_interp.mid') ``` # 16-bar Melody Models The pre-trained hierarchical model consists of a 2-layer stacked bidirectional LSTM encoder with 2048 nodes in each direction for each layer, a 16-step 2-layer LSTM "conductor" decoder with 1024 nodes in each layer, a 2-layer LSTM core decoder with 1024 nodes in each layer, and a Z with 512 dimensions. It was given 256 free bits, and had a fixed beta value of 0.2. 
After 25k steps, the final accuracy is 0.90 and KL divergence is 277 bits. ``` #@title Load the pre-trained models. mel_16bar_models = {} hierdec_mel_16bar_config = configs.CONFIG_MAP['hierdec-mel_16bar'] mel_16bar_models['hierdec_mel_16bar'] = TrainedModel(hierdec_mel_16bar_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/mel_16bar_hierdec.ckpt') flat_mel_16bar_config = configs.CONFIG_MAP['flat-mel_16bar'] mel_16bar_models['baseline_flat_mel_16bar'] = TrainedModel(flat_mel_16bar_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/mel_16bar_flat.ckpt') ``` ## Generate Samples ``` #@title Generate 4 samples from the selected model prior. mel_sample_model = "hierdec_mel_16bar" #@param ["hierdec_mel_16bar", "baseline_flat_mel_16bar"] temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1} mel_16_samples = mel_16bar_models[mel_sample_model].sample(n=4, length=256, temperature=temperature) for ns in mel_16_samples: play(ns) #@title Optionally download MIDI samples. for i, ns in enumerate(mel_16_samples): download(ns, '%s_sample_%d.mid' % (mel_sample_model, i)) ``` ## Generate Means ``` #@title Option 1: Use example MIDI files for interpolation endpoints. input_mel_16_midi_data = [ tf.gfile.Open(fn).read() for fn in sorted(tf.gfile.Glob('/content/midi/mel_16bar*.mid'))] #@title Option 2: upload your own MIDI files to use for interpolation endpoints instead of those provided. input_mel_16_midi_data = files.upload().values() or input_mel_16_midi_data #@title Extract melodies from MIDI files. This will extract all unique 16-bar melodies using a sliding window with a stride of 1 bar. mel_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_mel_16_midi_data] extracted_16_mels = [] for ns in mel_input_seqs: extracted_16_mels.extend( hierdec_mel_16bar_config.data_converter.to_notesequences( hierdec_mel_16bar_config.data_converter.to_tensors(ns)[1])) for i, ns in enumerate(extracted_16_mels): print "Melody", i play(ns) #@title Compute the reconstructions and mean of the two melodies, selected from the previous cell. mel_interp_model = "hierdec_mel_16bar" #@param ["hierdec_mel_16bar", "baseline_flat_mel_16bar"] start_melody = 0 #@param {type:"integer"} end_melody = 1 #@param {type:"integer"} start_mel = extracted_16_mels[start_melody] end_mel = extracted_16_mels[end_melody] temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1} mel_16bar_mean = interpolate(mel_16bar_models[mel_interp_model], start_mel, end_mel, num_steps=3, max_length=256, individual_duration=32, temperature=temperature) #@title Optionally download mean MIDI file. download(mel_16bar_mean, '%s_mean.mid' % mel_interp_model) ``` #16-bar "Trio" Models (lead, bass, drums) We present two pre-trained models for 16-bar trios: a hierarchical model and a flat (baseline) model. The pre-trained hierarchical model consists of a 2-layer stacked bidirectional LSTM encoder with 2048 nodes in each direction for each layer, a 16-step 2-layer LSTM "conductor" decoder with 1024 nodes in each layer, 3 (lead, bass, drums) 2-layer LSTM core decoders with 1024 nodes in each layer, and a Z with 512 dimensions. It was given 1024 free bits, and had a fixed beta value of 0.1. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 50k steps, the final accuracy is 0.82 for lead, 0.87 for bass, and 0.90 for drums, and the KL divergence is 1027 bits. 
The pre-trained flat model consists of a 2-layer stacked bidirectional LSTM encoder with 2048 nodes in each direction for each layer, a 3-layer LSTM decoder with 2048 nodes in each layer, and a Z with 512 dimensions. It was given 1024 free bits, and had a fixed beta value of 0.1. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 50k steps, the final accuracy is 0.67 for lead, 0.66 for bass, and 0.79 for drums, and the KL divergence is 1016 bits. ``` #@title Load the pre-trained models. trio_models = {} hierdec_trio_16bar_config = configs.CONFIG_MAP['hierdec-trio_16bar'] trio_models['hierdec_trio_16bar'] = TrainedModel(hierdec_trio_16bar_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/trio_16bar_hierdec.ckpt') flat_trio_16bar_config = configs.CONFIG_MAP['flat-trio_16bar'] trio_models['baseline_flat_trio_16bar'] = TrainedModel(flat_trio_16bar_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/trio_16bar_flat.ckpt') ``` ## Generate Samples ``` #@title Generate 4 samples from the selected model prior. trio_sample_model = "hierdec_trio_16bar" #@param ["hierdec_trio_16bar", "baseline_flat_trio_16bar"] temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1} trio_16_samples = trio_models[trio_sample_model].sample(n=4, length=256, temperature=temperature) for ns in trio_16_samples: play(ns) #@title Optionally download MIDI samples. for i, ns in enumerate(trio_16_samples): download(ns, '%s_sample_%d.mid' % (trio_sample_model, i)) ``` ## Generate Means ``` #@title Option 1: Use example MIDI files for interpolation endpoints. input_trio_midi_data = [ tf.gfile.Open(fn).read() for fn in sorted(tf.gfile.Glob('/content/midi/trio_16bar*.mid'))] #@title Option 2: Upload your own MIDI files to use for interpolation endpoints instead of those provided. input_trio_midi_data = files.upload().values() or input_trio_midi_data #@title Extract trios from MIDI files. This will extract all unique 16-bar trios using a sliding window with a stride of 1 bar. trio_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_trio_midi_data] extracted_trios = [] for ns in trio_input_seqs: extracted_trios.extend( hierdec_trio_16bar_config.data_converter.to_notesequences( hierdec_trio_16bar_config.data_converter.to_tensors(ns)[1])) for i, ns in enumerate(extracted_trios): print "Trio", i play(ns) #@title Compute the reconstructions and mean of the two trios, selected from the previous cell. trio_interp_model = "hierdec_trio_16bar" #@param ["hierdec_trio_16bar", "baseline_flat_trio_16bar"] start_trio = 0 #@param {type:"integer"} end_trio = 1 #@param {type:"integer"} start_trio = extracted_trios[start_trio] end_trio = extracted_trios[end_trio] temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1} trio_16bar_mean = interpolate(trio_models[trio_interp_model], start_trio, end_trio, num_steps=3, max_length=256, individual_duration=32, temperature=temperature) #@title Optionally download mean MIDI file. download(trio_16bar_mean, '%s_mean.mid' % trio_interp_model) ```
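Before moving on, a note on what `interpolate` is doing internally: the endpoint sequences are encoded to latent vectors and the model decodes points along a path between those vectors. The claim that MusicVAE uses exactly spherical interpolation for that path is an assumption here, but slerp is the usual choice for Gaussian latent spaces, and the NumPy sketch below shows the latent-space step in isolation:

```
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # (nearly) parallel vectors
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# 11 evenly spaced latent codes between two random 512-dimensional vectors,
# matching the z size of the models described above.
rng = np.random.RandomState(0)
z_start, z_end = rng.randn(512), rng.randn(512)
path = np.stack([slerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, 11)])
print(path.shape)  # (11, 512)
```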
# Project 3: Smart Beta Portfolio and Portfolio Optimization ## Overview Smart beta has a broad meaning, but we can say in practice that when we use the universe of stocks from an index, and then apply some weighting scheme other than market cap weighting, it can be considered a type of smart beta fund. A Smart Beta portfolio generally gives investors exposure or "beta" to one or more types of market characteristics (or factors) that are believed to predict prices while giving investors a diversified broad exposure to a particular market. Smart Beta portfolios generally target momentum, earnings quality, low volatility, and dividends or some combination. Smart Beta Portfolios are generally rebalanced infrequently and follow relatively simple rules or algorithms that are passively managed. Model changes to these types of funds are also rare requiring prospectus filings with US Security and Exchange Commission in the case of US focused mutual funds or ETFs.. Smart Beta portfolios are generally long-only, they do not short stocks. In contrast, a purely alpha-focused quantitative fund may use multiple models or algorithms to create a portfolio. The portfolio manager retains discretion in upgrading or changing the types of models and how often to rebalance the portfolio in attempt to maximize performance in comparison to a stock benchmark. Managers may have discretion to short stocks in portfolios. Imagine you're a portfolio manager, and wish to try out some different portfolio weighting methods. One way to design portfolio is to look at certain accounting measures (fundamentals) that, based on past trends, indicate stocks that produce better results. For instance, you may start with a hypothesis that dividend-issuing stocks tend to perform better than stocks that do not. This may not always be true of all companies; for instance, Apple does not issue dividends, but has had good historical performance. The hypothesis about dividend-paying stocks may go something like this: Companies that regularly issue dividends may also be more prudent in allocating their available cash, and may indicate that they are more conscious of prioritizing shareholder interests. For example, a CEO may decide to reinvest cash into pet projects that produce low returns. Or, the CEO may do some analysis, identify that reinvesting within the company produces lower returns compared to a diversified portfolio, and so decide that shareholders would be better served if they were given the cash (in the form of dividends). So according to this hypothesis, dividends may be both a proxy for how the company is doing (in terms of earnings and cash flow), but also a signal that the company acts in the best interest of its shareholders. Of course, it's important to test whether this works in practice. You may also have another hypothesis, with which you wish to design a portfolio that can then be made into an ETF. You may find that investors may wish to invest in passive beta funds, but wish to have less risk exposure (less volatility) in their investments. The goal of having a low volatility fund that still produces returns similar to an index may be appealing to investors who have a shorter investment time horizon, and so are more risk averse. So the objective of your proposed portfolio is to design a portfolio that closely tracks an index, while also minimizing the portfolio variance. 
Also, if this portfolio can match the returns of the index with less volatility, then it has a higher risk-adjusted return (same return, lower volatility). Smart Beta ETFs can be designed with both of these two general methods (among others): alternative weighting and minimum volatility ETF. ## Instructions Each problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity. ## Packages When you implement the functions, you'll only need to you use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code. The other packages that we're importing are `helper`, `project_helper`, and `project_tests`. These are custom packages built to help you solve the problems. The `helper` and `project_helper` module contains utility functions and graph functions. The `project_tests` contains the unit tests for all the problems. ### Install Packages ``` import sys !{sys.executable} -m pip install -r requirements.txt ``` ### Load Packages ``` import pandas as pd import numpy as np import helper import project_helper import project_tests ``` ## Market Data ### Load Data For this universe of stocks, we'll be selecting large dollar volume stocks. We're using this universe, since it is highly liquid. ``` df = pd.read_csv('../../data/project_3/eod-quotemedia.csv') percent_top_dollar = 0.2 high_volume_symbols = project_helper.large_dollar_volume_stocks(df, 'adj_close', 'adj_volume', percent_top_dollar) df = df[df['ticker'].isin(high_volume_symbols)] close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close') volume = df.reset_index().pivot(index='date', columns='ticker', values='adj_volume') dividends = df.reset_index().pivot(index='date', columns='ticker', values='dividends') ``` ### View Data To see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix. ``` project_helper.print_dataframe(close) ``` # Part 1: Smart Beta Portfolio In Part 1 of this project, you'll build a portfolio using dividend yield to choose the portfolio weights. A portfolio such as this could be incorporated into a smart beta ETF. You'll compare this portfolio to a market cap weighted index to see how well it performs. Note that in practice, you'll probably get the index weights from a data vendor (such as companies that create indices, like MSCI, FTSE, Standard and Poor's), but for this exercise we will simulate a market cap weighted index. ## Index Weights The index we'll be using is based on large dollar volume stocks. Implement `generate_dollar_volume_weights` to generate the weights for this index. For each date, generate the weights based on dollar volume traded for that date. For example, assume the following is close prices and volume data: ``` Prices A B ... 2013-07-08 2 2 ... 2013-07-09 5 6 ... 2013-07-10 1 2 ... 2013-07-11 6 5 ... ... ... ... ... Volume A B ... 2013-07-08 100 340 ... 
2013-07-09 240 220 ... 2013-07-10 120 500 ... 2013-07-11 10 100 ... ... ... ... ... ``` The weights created from the function `generate_dollar_volume_weights` should be the following: ``` A B ... 2013-07-08 0.126.. 0.194.. ... 2013-07-09 0.759.. 0.377.. ... 2013-07-10 0.075.. 0.285.. ... 2013-07-11 0.037.. 0.142.. ... ... ... ... ... ``` ``` def generate_dollar_volume_weights(close, volume): """ Generate dollar volume weights. Parameters ---------- close : DataFrame Close price for each ticker and date volume : str Volume for each ticker and date Returns ------- dollar_volume_weights : DataFrame The dollar volume weights for each ticker and date """ assert close.index.equals(volume.index) assert close.columns.equals(volume.columns) #TODO: Implement function dollar_volume = close * volume for index,_ in close.iterrows(): # weights = close * volume / (sum of close * volume for all assets in the line) dollar_volume.loc[index] = dollar_volume.loc[index]/sum(dollar_volume.loc[index]) return dollar_volume project_tests.test_generate_dollar_volume_weights(generate_dollar_volume_weights) ``` ### View Data Let's generate the index weights using `generate_dollar_volume_weights` and view them using a heatmap. ``` index_weights = generate_dollar_volume_weights(close, volume) project_helper.plot_weights(index_weights, 'Index Weights') ``` ## Portfolio Weights Now that we have the index weights, let's choose the portfolio weights based on dividend. You would normally calculate the weights based on trailing dividend yield, but we'll simplify this by just calculating the total dividend yield over time. Implement `calculate_dividend_weights` to return the weights for each stock based on its total dividend yield over time. This is similar to generating the weight for the index, but it's using dividend data instead. For example, assume the following is `dividends` data: ``` Prices A B 2013-07-08 0 0 2013-07-09 0 1 2013-07-10 0.5 0 2013-07-11 0 0 2013-07-12 2 0 ... ... ... ``` The weights created from the function `calculate_dividend_weights` should be the following: ``` A B 2013-07-08 NaN NaN 2013-07-09 0 1 2013-07-10 0.333.. 0.666.. 2013-07-11 0.333.. 0.666.. 2013-07-12 0.714.. 0.285.. ... ... ... ``` ``` def calculate_dividend_weights(dividends): """ Calculate dividend weights. Parameters ---------- dividends : DataFrame Dividend for each stock and date Returns ------- dividend_weights : DataFrame Weights for each stock and date """ #TODO: Implement function cumulated_dividend = dividends.cumsum() for index,_ in dividends.iterrows(): # weights = dividends / (sum of dividends for all assets in the line) cumulated_dividend.loc[index] = cumulated_dividend.loc[index]/sum(cumulated_dividend.loc[index]) return cumulated_dividend project_tests.test_calculate_dividend_weights(calculate_dividend_weights) ``` ### View Data Just like the index weights, let's generate the ETF weights and view them using a heatmap. ``` etf_weights = calculate_dividend_weights(dividends) project_helper.plot_weights(etf_weights, 'ETF Weights') ``` ## Returns Implement `generate_returns` to generate returns data for all the stocks and dates from price data. You might notice we're implementing returns and not log returns. Since we're not dealing with volatility, we don't have to use log returns. ``` def generate_returns(prices): """ Generate returns for ticker and date. 
Parameters ---------- prices : DataFrame Price for each ticker and date Returns ------- returns : Dataframe The returns for each ticker and date """ #TODO: Implement function return ((prices - prices.shift(1))/prices.shift(1)) project_tests.test_generate_returns(generate_returns) ``` ### View Data Let's generate the closing returns using `generate_returns` and view them using a heatmap. ``` returns = generate_returns(close) project_helper.plot_returns(returns, 'Close Returns') ``` ## Weighted Returns With the returns of each stock computed, we can use it to compute the returns for an index or ETF. Implement `generate_weighted_returns` to create weighted returns using the returns and weights. ``` def generate_weighted_returns(returns, weights): """ Generate weighted returns. Parameters ---------- returns : DataFrame Returns for each ticker and date weights : DataFrame Weights for each ticker and date Returns ------- weighted_returns : DataFrame Weighted returns for each ticker and date """ assert returns.index.equals(weights.index) assert returns.columns.equals(weights.columns) #TODO: Implement function return (returns * weights) project_tests.test_generate_weighted_returns(generate_weighted_returns) ``` ### View Data Let's generate the ETF and index returns using `generate_weighted_returns` and view them using a heatmap. ``` index_weighted_returns = generate_weighted_returns(returns, index_weights) etf_weighted_returns = generate_weighted_returns(returns, etf_weights) project_helper.plot_returns(index_weighted_returns, 'Index Returns') project_helper.plot_returns(etf_weighted_returns, 'ETF Returns') ``` ## Cumulative Returns To compare performance between the ETF and Index, we're going to calculate the tracking error. Before we do that, we first need to calculate the index and ETF comulative returns. Implement `calculate_cumulative_returns` to calculate the cumulative returns over time given the returns. ``` def calculate_cumulative_returns(returns): """ Calculate cumulative returns. Parameters ---------- returns : DataFrame Returns for each ticker and date Returns ------- cumulative_returns : Pandas Series Cumulative returns for each date """ #TODO: Implement function cumulative_returns = (returns.sum(axis=1) + 1).cumprod() return cumulative_returns project_tests.test_calculate_cumulative_returns(calculate_cumulative_returns) ``` ### View Data Let's generate the ETF and index cumulative returns using `calculate_cumulative_returns` and compare the two. ``` index_weighted_cumulative_returns = calculate_cumulative_returns(index_weighted_returns) etf_weighted_cumulative_returns = calculate_cumulative_returns(etf_weighted_returns) project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, etf_weighted_cumulative_returns, 'Smart Beta ETF vs Index') ``` ## Tracking Error In order to check the performance of the smart beta portfolio, we can calculate the annualized tracking error against the index. Implement `tracking_error` to return the tracking error between the ETF and benchmark. For reference, we'll be using the following annualized tracking error function: $$ TE = \sqrt{252} * SampleStdev(r_p - r_b) $$ Where $ r_p $ is the portfolio/ETF returns and $ r_b $ is the benchmark returns. _Note: When calculating the sample standard deviation, the delta degrees of freedom is 1, which is the also the default value._ ``` def tracking_error(benchmark_returns_by_date, etf_returns_by_date): """ Calculate the tracking error. 
Parameters ---------- benchmark_returns_by_date : Pandas Series The benchmark returns for each date etf_returns_by_date : Pandas Series The ETF returns for each date Returns ------- tracking_error : float The tracking error """ assert benchmark_returns_by_date.index.equals(etf_returns_by_date.index) #TODO: Implement function return (np.sqrt(252)*np.std(etf_returns_by_date - benchmark_returns_by_date, ddof=1)) project_tests.test_tracking_error(tracking_error) ``` ### View Data Let's generate the tracking error using `tracking_error`. ``` smart_beta_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(etf_weighted_returns, 1)) print('Smart Beta Tracking Error: {}'.format(smart_beta_tracking_error)) ``` # Part 2: Portfolio Optimization Now, let's create a second portfolio. We'll still reuse the market cap weighted index, but this will be independent of the dividend-weighted portfolio that we created in part 1. We want to both minimize the portfolio variance and also want to closely track a market cap weighted index. In other words, we're trying to minimize the distance between the weights of our portfolio and the weights of the index. $Minimize \left [ \sigma^2_p + \lambda \sqrt{\sum_{1}^{m}(weight_i - indexWeight_i)^2} \right ]$ where $m$ is the number of stocks in the portfolio, and $\lambda$ is a scaling factor that you can choose. Why are we doing this? One way that investors evaluate a fund is by how well it tracks its index. The fund is still expected to deviate from the index within a certain range in order to improve fund performance. A way for a fund to track the performance of its benchmark is by keeping its asset weights similar to the weights of the index. We’d expect that if the fund has the same stocks as the benchmark, and also the same weights for each stock as the benchmark, the fund would yield about the same returns as the benchmark. By minimizing a linear combination of both the portfolio risk and distance between portfolio and benchmark weights, we attempt to balance the desire to minimize portfolio variance with the goal of tracking the index. ## Covariance Implement `get_covariance_returns` to calculate the covariance of the `returns`. We'll use this to calculate the portfolio variance. If we have $m$ stock series, the covariance matrix is an $m \times m$ matrix containing the covariance between each pair of stocks. We can use [`Numpy.cov`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html) to get the covariance. We give it a 2D array in which each row is a stock series, and each column is an observation at the same period of time. For any `NaN` values, you can replace them with zeros using the [`DataFrame.fillna`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html) function. The covariance matrix $\mathbf{P} = \begin{bmatrix} \sigma^2_{1,1} & ... & \sigma^2_{1,m} \\ ... & ... & ...\\ \sigma_{m,1} & ... & \sigma^2_{m,m} \\ \end{bmatrix}$ ``` def get_covariance_returns(returns): """ Calculate covariance matrices. Parameters ---------- returns : DataFrame Returns for each ticker and date Returns ------- returns_covariance : 2 dimensional Ndarray The covariance of the returns """ #TODO: Implement function return (np.cov(returns.T.fillna(0))) project_tests.test_get_covariance_returns(get_covariance_returns) ``` ### View Data Let's look at the covariance generated from `get_covariance_returns`. 
``` covariance_returns = get_covariance_returns(returns) covariance_returns = pd.DataFrame(covariance_returns, returns.columns, returns.columns) covariance_returns_correlation = np.linalg.inv(np.diag(np.sqrt(np.diag(covariance_returns)))) covariance_returns_correlation = pd.DataFrame( covariance_returns_correlation.dot(covariance_returns).dot(covariance_returns_correlation), covariance_returns.index, covariance_returns.columns) project_helper.plot_covariance_returns_correlation( covariance_returns_correlation, 'Covariance Returns Correlation Matrix') ``` ### portfolio variance We can write the portfolio variance $\sigma^2_p = \mathbf{x^T} \mathbf{P} \mathbf{x}$ Recall that the $\mathbf{x^T} \mathbf{P} \mathbf{x}$ is called the quadratic form. We can use the cvxpy function `quad_form(x,P)` to get the quadratic form. ### Distance from index weights We want portfolio weights that track the index closely. So we want to minimize the distance between them. Recall from the Pythagorean theorem that you can get the distance between two points in an x,y plane by adding the square of the x and y distances and taking the square root. Extending this to any number of dimensions is called the L2 norm. So: $\sqrt{\sum_{1}^{n}(weight_i - indexWeight_i)^2}$ Can also be written as $\left \| \mathbf{x} - \mathbf{index} \right \|_2$. There's a cvxpy function called [norm()](https://www.cvxpy.org/api_reference/cvxpy.atoms.other_atoms.html#norm) `norm(x, p=2, axis=None)`. The default is already set to find an L2 norm, so you would pass in one argument, which is the difference between your portfolio weights and the index weights. ### objective function We want to minimize both the portfolio variance and the distance of the portfolio weights from the index weights. We also want to choose a `scale` constant, which is $\lambda$ in the expression. $\mathbf{x^T} \mathbf{P} \mathbf{x} + \lambda \left \| \mathbf{x} - \mathbf{index} \right \|_2$ This lets us choose how much priority we give to minimizing the difference from the index, relative to minimizing the variance of the portfolio. If you choose a higher value for `scale` ($\lambda$). We can find the objective function using cvxpy `objective = cvx.Minimize()`. Can you guess what to pass into this function? ### constraints We can also define our constraints in a list. For example, you'd want the weights to sum to one. So $\sum_{1}^{n}x = 1$. You may also need to go long only, which means no shorting, so no negative weights. So $x_i >0 $ for all $i$. you could save a variable as `[x >= 0, sum(x) == 1]`, where x was created using `cvx.Variable()`. ### optimization So now that we have our objective function and constraints, we can solve for the values of $\mathbf{x}$. cvxpy has the constructor `Problem(objective, constraints)`, which returns a `Problem` object. The `Problem` object has a function solve(), which returns the minimum of the solution. In this case, this is the minimum variance of the portfolio. It also updates the vector $\mathbf{x}$. We can check out the values of $x_A$ and $x_B$ that gave the minimum portfolio variance by using `x.value` ``` import cvxpy as cvx def get_optimal_weights(covariance_returns, index_weights, scale=2.0): """ Find the optimal weights. 
Parameters ---------- covariance_returns : 2 dimensional Ndarray The covariance of the returns index_weights : Pandas Series Index weights for all tickers at a period in time scale : int The penalty factor for weights the deviate from the index Returns ------- x : 1 dimensional Ndarray The solution for x """ assert len(covariance_returns.shape) == 2 assert len(index_weights.shape) == 1 assert covariance_returns.shape[0] == covariance_returns.shape[1] == index_weights.shape[0] #TODO: Implement function # number of stocks m is number of rows of returns, and also number of index weights m = covariance_returns.shape[0] # x variables (to be found with optimization) x = cvx.Variable(m) # portfolio variance, in quadratic form portfolio_variance = cvx.quad_form(x, covariance_returns) # euclidean distance (L2 norm) between portfolio and index weights distance_to_index = cvx.norm(x - index_weights) # objective function objective = cvx.Minimize(portfolio_variance + scale * distance_to_index) # constraints constraints = [x >= 0, sum(x) == 1] # use cvxpy to solve the objective problem = cvx.Problem(objective, constraints).solve() # retrieve the weights of the optimized portfolio x_values = x.value return x_values project_tests.test_get_optimal_weights(get_optimal_weights) ``` ## Optimized Portfolio Using the `get_optimal_weights` function, let's generate the optimal ETF weights without rebalanceing. We can do this by feeding in the covariance of the entire history of data. We also need to feed in a set of index weights. We'll go with the average weights of the index over time. ``` raw_optimal_single_rebalance_etf_weights = get_optimal_weights(covariance_returns.values, index_weights.iloc[-1]) optimal_single_rebalance_etf_weights = pd.DataFrame( np.tile(raw_optimal_single_rebalance_etf_weights, (len(returns.index), 1)), returns.index, returns.columns) ``` With our ETF weights built, let's compare it to the index. Run the next cell to calculate the ETF returns and compare it to the index returns. ``` optim_etf_returns = generate_weighted_returns(returns, optimal_single_rebalance_etf_weights) optim_etf_cumulative_returns = calculate_cumulative_returns(optim_etf_returns) project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, optim_etf_cumulative_returns, 'Optimized ETF vs Index') optim_etf_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(optim_etf_returns, 1)) print('Optimized ETF Tracking Error: {}'.format(optim_etf_tracking_error)) ``` ## Rebalance Portfolio Over Time The single optimized ETF portfolio used the same weights for the entire history. This might not be the optimal weights for the entire period. Let's rebalance the portfolio over the same period instead of using the same weights. Implement `rebalance_portfolio` to rebalance a portfolio. Reblance the portfolio every n number of days, which is given as `shift_size`. When rebalancing, you should look back a certain number of days of data in the past, denoted as `chunk_size`. Using this data, compute the optoimal weights using `get_optimal_weights` and `get_covariance_returns`. ``` def rebalance_portfolio(returns, index_weights, shift_size, chunk_size): """ Get weights for each rebalancing of the portfolio. 
Parameters ---------- returns : DataFrame Returns for each ticker and date index_weights : DataFrame Index weight for each ticker and date shift_size : int The number of days between each rebalance chunk_size : int The number of days to look in the past for rebalancing Returns ------- all_rebalance_weights : list of Ndarrays The ETF weights for each point they are rebalanced """ assert returns.index.equals(index_weights.index) assert returns.columns.equals(index_weights.columns) assert shift_size > 0 assert chunk_size >= 0 #TODO: Implement function # List of all rebalanced weights rebalance_portfolio_weights = [] for index in range(chunk_size, returns.shape[0], shift_size): # calculates the chunk of returns chunk = returns.iloc[index - chunk_size : index] # calculates covariance returns covariance_returns = get_covariance_returns(chunk) # calculates optimal weights raw_optimal_single_rebalance_etf_weights = get_optimal_weights(covariance_returns, index_weights.iloc[index - 1]) # append the results rebalance_portfolio_weights.append(raw_optimal_single_rebalance_etf_weights) return rebalance_portfolio_weights project_tests.test_rebalance_portfolio(rebalance_portfolio) ``` Run the following cell to rebalance the portfolio using `rebalance_portfolio`. ``` chunk_size = 250 shift_size = 5 all_rebalance_weights = rebalance_portfolio(returns, index_weights, shift_size, chunk_size) ``` ## Portfolio Turnover With the portfolio rebalanced, we need to use a metric to measure the cost of rebalancing the portfolio. Implement `get_portfolio_turnover` to calculate the annual portfolio turnover. We'll be using the formulas used in the classroom: $ AnnualizedTurnover =\frac{SumTotalTurnover}{NumberOfRebalanceEvents} * NumberofRebalanceEventsPerYear $ $ SumTotalTurnover =\sum_{t,n}{\left | x_{t,n} - x_{t+1,n} \right |} $ Where $ x_{t,n} $ are the weights at time $ t $ for equity $ n $. $ SumTotalTurnover $ is just a different way of writing $ \sum \left | x_{t_1,n} - x_{t_2,n} \right | $ ``` def get_portfolio_turnover(all_rebalance_weights, shift_size, rebalance_count, n_trading_days_in_year=252): """ Calculage portfolio turnover. Parameters ---------- all_rebalance_weights : list of Ndarrays The ETF weights for each point they are rebalanced shift_size : int The number of days between each rebalance rebalance_count : int Number of times the portfolio was rebalanced n_trading_days_in_year: int Number of trading days in a year Returns ------- portfolio_turnover : float The portfolio turnover """ assert shift_size > 0 assert rebalance_count > 0 #TODO: Implement function portfolio_turnover = 0 for index in range(1, len(all_rebalance_weights)): portfolio_turnover += sum(np.abs(all_rebalance_weights[index] - all_rebalance_weights[index-1])) # annualized turnover calculation annualized_portfolio_turnover = portfolio_turnover*(n_trading_days_in_year/shift_size)/rebalance_count return annualized_portfolio_turnover project_tests.test_get_portfolio_turnover(get_portfolio_turnover) ``` Run the following cell to get the portfolio turnover from `get_portfolio turnover`. ``` print(get_portfolio_turnover(all_rebalance_weights, shift_size, len(all_rebalance_weights) - 1)) ``` That's it! You've built a smart beta portfolio in part 1 and did portfolio optimization in part 2. You can now submit your project. ## Submission Now that you're done with the project, it's time to submit it. Click the submit button in the bottom right. One of our reviewers will give you feedback on your project with a pass or not passed grade. 
You can continue to the next section while you wait for feedback.
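As an appendix to Part 2, the objective solved in `get_optimal_weights` can be exercised on a toy three-asset problem to see how the variance term and the index-tracking penalty interact. The covariance matrix and index weights below are made up purely for illustration; only the `cvxpy` calls mirror the ones used above.

```
import cvxpy as cvx
import numpy as np

# Made-up positive semi-definite covariance and index weights for 3 assets.
A = np.array([[0.02, 0.00, 0.01],
              [0.01, 0.03, 0.00],
              [0.00, 0.01, 0.02]])
P = A @ A.T
index_weights = np.array([0.5, 0.3, 0.2])
scale = 2.0

x = cvx.Variable(3)
objective = cvx.Minimize(cvx.quad_form(x, P) + scale * cvx.norm(x - index_weights))
constraints = [x >= 0, cvx.sum(x) == 1]
cvx.Problem(objective, constraints).solve()

print(np.round(x.value, 4))  # long-only weights pulled toward the index
```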
``` import pandas as pd import numpy as np # from sklearn.model_selection import train_test_split import torch from torch import nn from torch import optim # import json # from torch.utils.data import Dataset, DataLoader # import transformers from transformers import RobertaModel, RobertaTokenizer, RobertaForMultipleChoice from torch import cuda from datetime import datetime import matplotlib.pyplot as plt device = 'cuda' if cuda.is_available() else 'cpu' print(device) # load data test_raw_data = pd.read_xml('data/COPA-resources/datasets/copa-test.xml') dev_raw_data = pd.read_xml('data/COPA-resources/datasets/copa-dev.xml') # train-test-split 400-100 dev_raw_data.head(10) # test tokenizer = RobertaTokenizer.from_pretrained("roberta-base") # test_sequence = "{" + "effect" + "}" + "I ran the ice cube under warm water." test_sequence = "{"+ test_raw_data.iloc[28]['asks-for'] + "}" + test_raw_data.iloc[28]['p'] print("test_sequence is: ", test_sequence) print(tokenizer(test_sequence)) print(tokenizer.tokenize(test_sequence)) # test 2 test_sequence = "{"+ test_raw_data.iloc[1]['asks-for'] + "}" + test_raw_data.iloc[1]['p'] print("test_sequence is: ", test_sequence) print(tokenizer(test_sequence)) print(tokenizer.tokenize(test_sequence)) print(test_raw_data.shape[0]) def load_data(rawdata): tokenizer = RobertaTokenizer.from_pretrained('roberta-base') # for i in range(0, rawdata.shape[0]): for i in range(2, 5): prompt = rawdata.iloc[i]['asks-for'] + "." + rawdata.iloc[i]['p'] choice0 = rawdata.iloc[i]['a1'] choice1 = rawdata.iloc[i]['a2'] label = torch.tensor(rawdata.iloc[i]['most-plausible-alternative'] - 1) # label = torch.tensor(rawdata.iloc[i]['label']).unsqueeze(0).to(device) encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True) print("encoding['input_ids']: ", encoding['input_ids']) print("encoding['input_ids'] with size of : ", encoding['input_ids'].size()) print("encoding['attention_mask']: ", encoding['attention_mask']) print("label: ", label) return encoding print(dev_raw_data.shape[0]) print(test_raw_data.shape[0]) # tokenize data tests dev_data = load_data(dev_raw_data) # print(f'Training data loaded (length {len(train_data)})') # dev_data = load_data('data/dev.jsonl') # print(f'Dev data loaded (length {len(dev_data)})') # test_data = load_data('data/test.jsonl') # print(f'Test data loaded (length {len(test_data)})') ``` ## Model Construction ``` # Model_3, use only the very last hidden layer from Roberta. from torch import nn from transformers import RobertaConfig, RobertaModel class OurRobertaCOPA(torch.nn.Module): def __init__(self): super(OurRobertaCOPA, self).__init__() # self.configuration = RobertaConfig() # self.tokenizer = RobertaTokenizer.from_pretrained("roberta-base") # self.l1 = RobertaModel(self.configuration) self.l1 = RobertaModel.from_pretrained("roberta-base") self.l1.requires_grad = True self.softmax = nn.Softmax(dim=0) self.pre_classifier = torch.nn.Linear(768, 512) self.dropout = torch.nn.Dropout(0.3) # self.classifier = torch.nn.Linear(768, 5) # hidden_dim=32 for later trials. 
# self.lstm = nn.LSTM(768, 32, 1, bias=False) self.output_layer = nn.Linear(512, 2) def forward(self, sequence_1, sequence_2): # Two input here token_1 = tokenizer(sequence_1) token_2 = tokenizer(sequence_2) output_1 = self.l1(input_ids=torch.tensor(token_1["input_ids"]).unsqueeze(0), attention_mask=torch.tensor(token_1["attention_mask"]).unsqueeze(0))[0] output_2 = self.l1(input_ids=torch.tensor(token_2["input_ids"]).unsqueeze(0), attention_mask=torch.tensor(token_2["attention_mask"]).unsqueeze(0))[0] # RobertaModel(RobertaConfig()) # _, (hidden_rep_1, _) = self.lstm(output_1.unsqueeze(0)) # _, (hidden_rep_2, _) = self.lstm(output_2.unsqueeze(0)) # _, (hidden_rep_1, _) = self.lstm(output_1) # _, (hidden_rep_2, _) = self.lstm(output_2) hidden_rep_1 = torch.nn.ReLU()(self.pre_classifier(output_1[0])).squeeze(0) hidden_rep_2 = torch.nn.ReLU()(self.pre_classifier(output_2[0])).squeeze(0) pooler_1 = hidden_rep_1[:, 0] pooler_2 = hidden_rep_2[:, 0] # hidden_rep_1 = self.pre_classifier(output_1[0]).squeeze(0) # hidden_rep_2 = self.pre_classifier(output_2[0]).squeeze(0) # print("-------hidden_rep_1:") # print(hidden_rep_1) # print(hidden_rep_1.size()) # print("-------hidden_rep_2:") # print(hidden_rep_2) # print(hidden_rep_2.size()) # hidden_rep = torch.cat((hidden_rep_1.unsqueeze(1), hidden_rep_2.unsqueeze(1)), 1) # hidden_rep = self.dropout(torch.cat((hidden_rep_1, hidden_rep_2), 0)) hidden_rep = self.dropout(torch.cat((pooler_1, pooler_2), 0)) print("-------hidden_rep:") # print(hidden_rep) print(hidden_rep.size()) output = self.output_layer(hidden_rep.unsqueeze(0)) print("-------output:") # print(output) print(output.size()) print("--------------") output_squezzed = output.squeeze(0).squeeze(0) print("-------output_squezzed:") print(output_squezzed) print(output_squezzed.size()) print("--------------") # y_hat = softmax(output_squezzed) # y_sum = torch.sum(y_hat, 0) # col1= torch.sum(y_hat, 0)[0] # col2 = torch.sum(y_hat, 0)[1] # y_result = torch.tensor(torch.argmax(y_sum)).type(torch.FloatTensor) # y_result = torch.tensor(y_sum) return output_squezzed ``` ## Training ``` # Initialization tokenizer = RobertaTokenizer.from_pretrained('roberta-base') # model = OurRobertaCOPA() model = RobertaForMultipleChoice.from_pretrained('roberta-base') model.to(device) ce = nn.CrossEntropyLoss() softmax = nn.Softmax(dim=0) optimizer = torch.optim.Adam(model.parameters(), lr=1e-2) scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9) epochs = 52 per_num_epoch = 1 # train_acc = np.zeros(epochs) train_loss_by_epoch = np.zeros(epochs) dev_acc = np.zeros(epochs) dev_loss_by_epoch = np.zeros(epochs) start_time = datetime.now() for j in range(epochs): if j % per_num_epoch == 0: print('--------------Epoch: ' + str(j+1) + '--------------') if j % per_num_epoch == 0: print(f'Training for epoch {j + 1}.......') av_train_loss = 0 # print("av_train_loss_original: ", av_train_loss) model.train() for i in range(0, dev_raw_data.shape[0] - 100): # print("av_train_loss_track: ", av_train_loss) prompt = dev_raw_data.iloc[i]['asks-for'] + ". 
" + dev_raw_data.iloc[i]['p'] choice0 = dev_raw_data.iloc[i]['a1'] choice1 = dev_raw_data.iloc[i]['a2'] label = torch.tensor(dev_raw_data.iloc[i]['most-plausible-alternative'] - 1).unsqueeze(0).to(device) # print("label is: ", label) encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True).to(device) # encoding = {(prompt+choice0), (prompt+choice1)} # outputs = model(input_ids=encoding['input_ids'].unsqueeze(0), attention_mask=encoding['attention_mask'].unsqueeze(0), labels=label) outputs = model(**{k: v.unsqueeze(0) for k,v in encoding.items()}, labels=label) # print("outputs: ", outputs) train_loss = outputs.loss train_logits = outputs.logits av_train_loss += train_loss if i == 0: print("train_loss: ", train_loss) print("train_logits: ", train_logits) print("label: ", label) if i == 1: print("train_loss: ", train_loss) print("train_logits: ", train_logits) print("label: ", label) train_loss.backward() optimizer.step() optimizer.zero_grad() train_loss_by_epoch[j] = av_train_loss / (dev_raw_data.shape[0] - 100) print("av_train_loss: ", train_loss_by_epoch[j]) # validation # if (j + 1) % per_num_epoch == 0: # print(f'.......Validating for epoch {j + 1}') if (j) % per_num_epoch == 0: print(f'.......Validating for epoch {j + 1}') av_dev_loss = 0 # model.eval() with torch.no_grad(): for i in range(dev_raw_data.shape[0] - 99, dev_raw_data.shape[0]): # print("av_dev_loss_track: ", av_dev_loss) prompt_val = dev_raw_data.iloc[i]['asks-for'] + ". " + dev_raw_data.iloc[i]['p'] choice0_val = dev_raw_data.iloc[i]['a1'] choice1_val = dev_raw_data.iloc[i]['a2'] label_val = torch.tensor(dev_raw_data.iloc[i]['most-plausible-alternative'] - 1).unsqueeze(0).to(device) encoding_val = tokenizer([prompt_val, prompt_val], [choice0_val, choice1_val], return_tensors='pt', padding=True).to(device) # outputs = model(input_ids=encoding['input_ids'].unsqueeze(0), attention_mask=encoding['attention_mask'].unsqueeze(0), labels=label) outputs_val = model(**{k: v.unsqueeze(0) for k,v in encoding_val.items()}, labels=label_val) dev_loss = outputs_val.loss dev_logits = outputs_val.logits av_dev_loss += dev_loss if i == dev_raw_data.shape[0] - 99: print("dev_loss: ", dev_loss) print("dev_logits: ", dev_logits) print("label: ", label_val) if i == dev_raw_data.shape[0] - 1: print("dev_loss: ", dev_loss) print("dev_logits: ", dev_logits) print("label: ", label_val) #calculate accuracy y_pred = 1 if outputs_val.logits[0][1] > outputs_val.logits[0][0] else 0 y_pred = torch.tensor(y_pred).unsqueeze(0).to(device) # print("y_pred: ", y_pred) # print("label: ", label) # print("y_pred =? 
label: ", y_pred == label) if y_pred == label_val: dev_acc[j] += 1 dev_acc[j] /= 100 print("dev_acc[j]: ", dev_acc[j]) dev_loss_by_epoch[j] = av_dev_loss / 100 # learning rate decay # if j == 5: # optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) # elif j == 15: # optimizer = torch.optim.Adam(model.parameters(), lr=1e-4) # elif j == 20: # optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) # elif j == 40: # optimizer = torch.optim.Adam(model.parameters(), lr=1e-6) # elif j == 50: # optimizer = torch.optim.Adam(model.parameters(), lr=1e-7) scheduler.step() end_time = datetime.now() print(f'Training completed in {str(end_time - start_time)}') # plot plt.figure(figsize=(14, 7)) plt.title("Loss VS Epoch") plt.plot(train_loss_by_epoch, label="train_loss") plt.plot(dev_loss_by_epoch, label="dev_loss") plt.xlabel("epoch") plt.ylabel("loss") plt.legend() plt.show() # plot plt.figure(figsize=(14, 7)) plt.title("acc VS Epoch") plt.plot(dev_acc, label="dev_acc") plt.xlabel("epoch") plt.ylabel("acc") plt.legend() plt.show() # save the model torch.save(model, 'RoBERTa.pth')# epochs = 100, lr=1e-2 -> scheular ``` ## Testing ``` test_model = torch.load('RoBERTa.pth') test_model.eval() num_correct_pred = 0 with torch.no_grad(): for i in range(0, test_raw_data.shape[0]): prompt = test_raw_data.iloc[i]['asks-for'] + ". " + test_raw_data.iloc[i]['p'] choice0 = test_raw_data.iloc[i]['a1'] choice1 = test_raw_data.iloc[i]['a2'] label_test = torch.tensor(test_raw_data.iloc[i]['most-plausible-alternative']).unsqueeze(0).to(device) encoding_test = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True).to(device) # outputs = test_model(input_ids=encoding['input_ids'].unsqueeze(0), attention_mask=encoding['attention_mask'].unsqueeze(0), labels=label) outputs_test = test_model(**{k: v.unsqueeze(0) for k,v in encoding_test.items()}, labels=label_test) print("outputs_test.logits: ", outputs_test.logits) # test_logits = outputs_test.logits #calculate accuracy y_pred_test = 1 if outputs_test.logits[0][1] > outputs_test.logits[0][0] else 0 y_pred_test = torch.tensor(y_pred_test).unsqueeze(0).to(device) if y_pred_test == label_test: print("test_logits: ", outputs_test.logits) print("y_pred: ", y_pred_test) print("label: ", label_test) num_correct_pred += 1 acc = num_correct_pred / test_raw_data.shape[0] print("test_accuracy = ", acc) # test_model.eval() num_correct_pred = 0 with torch.no_grad(): for i in range(0, test_raw_data.shape[0]): prompt = test_raw_data.iloc[i]['question'] + ". 
" + test_raw_data.iloc[i]['premise'] choice0 = test_raw_data.iloc[i]['choice1'] choice1 = test_raw_data.iloc[i]['choice2'] label = torch.tensor(test_raw_data.iloc[i]['label']).unsqueeze(0).to(device) encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True).to(device) outputs = test_model(input_ids=encoding['input_ids'].unsqueeze(0), attention_mask=encoding['attention_mask'].unsqueeze(0), labels=label) test_logits = outputs.logits #calculate accuracy y_pred = 1 if outputs.logits[0][1] > outputs.logits[0][0] else 0 y_pred = torch.tensor(y_pred).unsqueeze(0).to(device) if y_pred == label: print("test_logits: ", test_logits) print("y_pred: ", y_pred) print("label: ", label) num_correct_pred += 1 acc = num_correct_pred / test_raw_data.shape[0] print("test_accuracy = ", acc) # Revise # Training ce = nn.CrossEntropyLoss() softmax = nn.Softmax(dim=0) optimizer = torch.optim.Adam(model.parameters(), lr=1e-2) epochs = 100 per_num_epoch = 1 # train_acc = np.zeros(epochs) train_loss_by_epoch = np.zeros(epochs) dev_acc = np.zeros(epochs) dev_loss_by_epoch = np.zeros(epochs) start_time = datetime.now() for j in range(epochs): if j % per_num_epoch == 0: print('--------------Epoch: ' + str(j+1) + '--------------') if j % per_num_epoch == 0: print(f'Training for epoch {j + 1}.......') av_train_loss = 0 # print("av_train_loss_original: ", av_train_loss) model.train() for i in range(0, train_raw_data.shape[0]): # print("av_train_loss_track: ", av_train_loss) prompt = train_raw_data.iloc[i]['question'] + ". " + train_raw_data.iloc[i]['premise'] choice0 = train_raw_data.iloc[i]['choice1'] choice1 = train_raw_data.iloc[i]['choice2'] label = torch.tensor(train_raw_data.iloc[i]['label']).unsqueeze(0).to(device) # print("label is: ", label) # label = torch.tensor(rawdata.iloc[i]['label']).unsqueeze(0).to(device) encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True).to(device) # outputs = model(input_ids=encoding['input_ids'].unsqueeze(0), attention_mask=encoding['attention_mask'].unsqueeze(0), labels=label) outputs = model(**{k: v.unsqueeze(0) for k,v in encoding.items()}, labels=label) train_loss = outputs.loss train_logits = outputs.logits av_train_loss += train_loss if i == 0: print("train_loss: ", train_loss) print("train_logits: ", train_logits) print("label: ", label) if i == 1: print("train_loss: ", train_loss) print("train_logits: ", train_logits) print("label: ", label) train_loss.backward() optimizer.step() # learning rate decay if j == 25: optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) elif j == 50: optimizer = torch.optim.Adam(model.parameters(), lr=1e-7) optimizer.zero_grad() train_loss_by_epoch[j] = av_train_loss / train_raw_data.shape[0] print("av_train_loss: ", train_loss_by_epoch[j]) end_time = datetime.now() print(f'Training completed in {str(end_time - start_time)}') ``` ## NOTES 1. Ask about whether the very last output of RoBERTaMultipleChoice is the possibility score for one input embedding. (pooler): RobertaPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (dropout): Dropout(p=0.1, inplace=False) (classifier): Linear(in_features=768, out_features=1, bias=True) 2. What's wrong with the model.eval()?
# Python Stock Trading Quick Start with Alpaca #### by Billy Hau The purpose of this tutorial is to provide a quick start guide to trade stock with python. We will use the Alpaca trading platform since it is free and support paper trading. We will go over the fundamental operations, such as connecting to an account, check account asset, quote stock price and placing an order. We will also program a simple trading bot as an exercise. ## Setup Python Data Science Library We are going to import the commonly used Data Science libraries here: Pandas, Numpy and Matplotlib. They should come pre-installed with Anaconda, but if not, here's how to install them from pip. ``` ! pip install pandas ! pip install numpy ! pip install matplotlib import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates import time ! pip install alpaca-trade-api ``` #### 3) Enter API Credentials from https://app.alpaca.markets/paper/dashboard/overview ``` ALPACA_ENDPOINT = 'https://paper-api.alpaca.markets' ALPACA_API = 'API Key ID' # Replace with API Key ID from Alpaca Paper Account ALPACA_API_SECRET = 'Secret Key' # Replace with Secret Key from Alpaca Paper Account ``` #### 4) Import Alpaca API and Connect to Account ``` import alpaca_trade_api alpaca = alpaca_trade_api.REST(ALPACA_API, ALPACA_API_SECRET, ALPACA_ENDPOINT) ``` ## 1) Access Account Information ``` account = alpaca.get_account() print(account) ``` ##### Account Number ``` account.account_number ``` ##### Account Cash ``` account.cash ``` ##### Account Portfolio Value ``` account.portfolio_value ``` ##### Account Stock Holdings / Positions ``` position = alpaca.list_positions() position ``` ## 2) Get Stock Exchange Status ``` clock = alpaca.get_clock() clock ``` ##### Current Time ``` clock.timestamp ``` ##### Is the Market Currently Open? ``` clock.is_open ``` ## 3) Get Stock Information ``` trade = alpaca.get_latest_trade('NIO') trade ``` ##### Last Trade Price ``` trade.p ``` ##### Last Trade Size ``` trade.s ``` ##### Last Trade Time ``` trade.t ``` ### Historic Data Historic Data is stored in Barsets and can be reported in timeframe of <b>1Min / 5Min / 15Min / 1D</b>. 
- c: close - h: high - l: low - o: open - t: timestamp - v: volume ``` historic_data = alpaca.get_barset('NIO', timeframe='1D', limit=20) historic_data NIO_DATA = historic_data.df NIO_DATA NIO_DATA['NIO','close'].plot(figsize=(16,5), title='NIO Closing Price') fig, ax = plt.subplots(figsize=(16,5)) l1 = ax.plot(NIO_DATA['NIO', 'close'], 'g-', label='Close') ax.set_xlabel('Date') ax.set_ylabel('Closing Price $ USD') ax.legend(loc = 'upper left') l2 = ax2 = ax.twinx() ax2.plot(NIO_DATA['NIO', 'volume'], 'y-', label='Volume') ax2.set_ylabel('Trading Volume') ax2.legend(loc = 'upper right') plt.title('NIO') ``` ### Historic Data for Multiple Stock ``` portfolio_list = ['SPY', 'VNQ', 'BND', 'GLD'] portfolio_data = alpaca.get_barset(portfolio_list, '1D', limit=100).df portfolio_data fig = plt.figure(figsize=(16,5)) for i in range(len(portfolio_list)): percentage_return = (portfolio_data[portfolio_list[i],'close'] - portfolio_data[portfolio_list[i],'close'][0]) / portfolio_data[portfolio_list[i],'close'][0] plt.plot(percentage_return, label=portfolio_list[i]) plt.legend() plt.title('Portfolio List Return') ``` ## 4) Placing Stock Order ##### Placing a Buy Order at Market Price ``` order0 = alpaca.submit_order('FB', qty=25, side='buy', type='market') order0 ``` ##### Placing a Buy Order at Limit Price ``` order1 = alpaca.submit_order('NIO', qty=10, side='buy', type='limit', limit_price=28) order1 ``` ##### Placing a Sell Order at Market Price ``` order2 = alpaca.submit_order('FB', qty=10, side='sell', type='market') order2 ``` ##### Shorting a Stock Configure Account to Allow / Disallow Shorting Stock https://app.alpaca.markets/paper/dashboard/config ``` alpaca.submit_order('GE', qty=100, side='sell', type='market') ``` ##### List All Current Orders ``` order = alpaca.list_orders() order ``` ##### Cancel an Order ``` alpaca.cancel_order(order[0].id) alpaca.list_orders() ``` ##### Cancel All Orders ``` alpaca.cancel_all_orders() alpaca.list_orders() ``` ##### Check Specific Order Status ``` alpaca.get_order(order0.id) print(order0.symbol + '\t' + order0.side + '\t' + order0.qty + '\t' + order0.type + '\t' + alpaca.get_order(order0.id).status) print(order1.symbol + '\t' + order1.side + '\t' + order1.qty + '\t' + order1.type + '\t' +alpaca.get_order(order1.id).status) print(order2.symbol + '\t' + order2.side + '\t' + order2.qty + '\t' + order1.type + '\t' +alpaca.get_order(order2.id).status) ``` ## [PROJECT] Simple Paper Stock Trading Bot This is a simple trading bot project to practice everything we learned here. THIS IS ONLY MEANT FOR A PRACTICE, DO NOT USE FOR LIVE TRADING! AS ALWAYS, USE AT YOUR OWN RISK! In future tutorials, we will build a back-testing program to test our strategies. But right now, we just want an automatic high frequency trading bot. The strategy is simple, make a trade decision when the short term moving average crosses over the long term moving average. Buy when there is positive momentum and sell when there is negative momentum. MAKE SURE TO ENABLE SHORT SELLING FOR THIS EXERCISE! 
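The bot below decides when to trade by watching the gap between the 15-minute and 30-minute moving averages and acting when that gap changes sign between polls. Here is a minimal, self-contained sketch of that sign-change test on synthetic prices (the price series, seed, and window lengths are made up purely to illustrate the logic; no Alpaca calls are involved):

```
import numpy as np
import pandas as pd

# Synthetic price series (illustrative only)
np.random.seed(0)
prices = pd.Series(100 + np.cumsum(np.random.randn(200)))

ma_short = prices.rolling(15).mean()
ma_long = prices.rolling(30).mean()

prev_gap = 0.0
for t in range(30, len(prices)):
    gap = ma_short[t] - ma_long[t]   # > 0: short MA above long MA
    if gap * prev_gap < 0:           # sign change => the averages crossed
        side = 'BUY (positive momentum)' if gap > 0 else 'SELL (negative momentum)'
        print('t={}  price={:.2f}  {}'.format(t, prices[t], side))
    prev_gap = gap
```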
``` stock = 'GME' data = alpaca.get_barset(stock, timeframe='1Min', limit=100).df avg = data[stock,'close'].rolling(30).mean() avg2 = data[stock,'close'].rolling(15).mean() diff2 = avg2.diff() fig, ax = plt.subplots(figsize=(16,10)) ax.plot(avg, 'b-', label = 'Moving Average 30Min') ax.plot(avg2, 'm-', label = 'Moving Average 15Min') ax.plot(data[stock,'close'], label='Price', alpha=0.3) ax.legend(loc = 'upper left') ax2 = ax.twinx() ax2.plot(diff2, 'y--', label = 'Moving Average 15Min Diff', alpha = 0.8) ax2.legend(loc='upper right') formatter = mdates.DateFormatter('%m/%d %T %Z', tz=data.index.tz) plt.gca().xaxis.set_major_formatter(formatter) # Percentage Change with initial reset when crossed moving average? # Momentum... buy when cross MA line moving upward, sell when cross MA line moving downward stock = 'GME' trade_share = 100 #trigger_threshold = 0.001 MA_Diff0 = 0 while(True): print('\n\n') # Get Market Clock clock = alpaca.get_clock() timestr = clock.timestamp.strftime('[%m/%d/%y %H:%M:%S]') # Check if Market is Open if clock.is_open: print(timestr + ' Market is Open... Executing Trading Bot Sequence!') # Get Stock Data print(' - [' + stock + '] Retrieving Market Data ...' ) data = alpaca.get_barset(stock, timeframe='1Min', limit=120).df # Analysis Stock Price print(' - [' + stock + '] Performing Stock Price Analysis ...' ) avg30 = data[stock,'close'].rolling(30).mean() avg15 = data[stock,'close'].rolling(15).mean() diff15 = avg15.diff() MA_Diff = (avg15[-1] - avg30[-1]) / avg30[-1]*100 price = alpaca.get_last_trade(stock).price print(' --- [{}] Price: {:,.2f} MA15: {:,.2f} DIFF15: {:,.2f} MA30: {:,.2f} MA_Diff: {:,.2f}'.format(stock, price, avg15[-1], diff15[-1], avg30[-1], MA_Diff) ) # Technical Analysis - Buy or Sell Decision # Strategy: Trigger when MA15 cross over to MA30... positive momentum - buy ... negative momentum - sell print(' - [' + stock + '] Performing Technical Analysis ...' ) if MA_Diff * MA_Diff0 < 0: print('--- Trigger Decision Analysis...') # Positive Momentum if MA_Diff > 0: target_share = trade_share print('---- LONG: targeted {} shares'.format( target_share )) # Negative Momentum if MA_Diff < 0: target_share = -trade_share print('---- SHORT: targeted {} shares'.format(target_share) ) # Retrieve Account Holdings position = alpaca.list_positions() stock_holding = 0 for i in range(len(position)): if position[i].symbol == stock: stock_holding = float(position[i].qty) print(' ----- [{}] current position: {}'.format(stock, stock_holding)) # Submit Trade Order if Needed if stock_holding != 0 and stock_holding != target_share: order = alpaca.close_position(stock) print(' ----- [{}] closing current position ... '.format(stock)) for i in range(100): status = alpaca.get_order(order.id).status if status == 'filled': print(' ----- [{}] position closed!'.format(stock)) break if target_share != stock_holding and target_share > 0: order = alpaca.submit_order(stock, qty=abs(target_share), side='buy', type='market') print(' ----- [{}] BUYING {} shares... '.format(stock, abs(target_share))) for i in range(100): status = alpaca.get_order(order.id).status if status == 'filled': print(' ----- [{}] order excuted!'.format(stock)) break if target_share != stock_holding and target_share < 0: order = alpaca.submit_order(stock, qty=abs(target_share), side='sell', type='market') print(' ----- [{}] SHORT SELLING {} shares... 
'.format(stock, abs(target_share))) for i in range(100): status = alpaca.get_order(order.id).status if status == 'filled': print(' ----- [{}] order executed!'.format(stock)) break else: print('--- Pass Through') # Retrieve Account Holdings position = alpaca.list_positions() stock_holding = 0 for i in range(len(position)): if position[i].symbol == stock: stock_holding = float(position[i].qty) print(' - [{}] current position: {}'.format(stock, stock_holding)) MA_Diff0 = MA_Diff # Close Stock Position Near Ends of Market Hour and EXIT BOT if clock.timestamp.hour >= 15 and clock.timestamp.minute >= 58: print(' - [{}] MARKET CLOSING! CLOSING STOCK POSITION...'.format(stock, stock_holding)) if stock_holding > 0: order = alpaca.close_position(stock) for i in range(100): status = alpaca.get_order(order.id).status if status == 'filled': break print(' [{}] Position Closed! Terminating Trade Bot!'.format(stock)) break else: print(timestr + ' Market is currently closed... please come back later!') # Loop Control time.sleep(30) ```
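One rough edge in the loop above: each `for i in range(100)` polls `alpaca.get_order(...)` as fast as it can and silently moves on if the order never reports `'filled'`. A hedged sketch of pulling that polling into a small helper with a pause between checks is shown below; `wait_for_fill` and its `poll_seconds`/`max_polls` parameters are invented for illustration and are not part of the Alpaca API.

```
import time

def wait_for_fill(alpaca, order_id, poll_seconds=1, max_polls=100):
    """Poll an order until it fills or we give up; return the last seen status."""
    status = alpaca.get_order(order_id).status
    for _ in range(max_polls):
        if status == 'filled':
            break
        time.sleep(poll_seconds)  # avoid hammering the API between checks
        status = alpaca.get_order(order_id).status
    return status

# Example usage inside the bot, after submitting a market order:
# order = alpaca.submit_order(stock, qty=abs(target_share), side='buy', type='market')
# if wait_for_fill(alpaca, order.id) == 'filled':
#     print(' ----- [{}] order executed!'.format(stock))
```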
# Homework: Basic Artificial Neural Networks ``` %matplotlib inline from time import time, sleep import numpy as np import matplotlib.pyplot as plt from IPython import display ``` # Framework Implement everything in `Modules.ipynb`. Read all the comments thoughtfully to ease the pain. Please try not to change the prototypes. Do not forget, that each module should return **AND** store `output` and `gradInput`. The typical assumption is that `module.backward` is always executed after `module.forward`, so `output` is stored, this would be useful for `SoftMax`. ``` # (re-)load layers %run homework_modules.ipynb ``` Optimizer is implemented for you. ``` def sgd_momentum(x, dx, config, state): """ This is a very ugly implementation of sgd with momentum just to show an example how to store old grad in state. config: - momentum - learning_rate state: - old_grad """ # x and dx have complex structure, old dx will be stored in a simpler one state.setdefault('old_grad', {}) i = 0 for cur_layer_x, cur_layer_dx in zip(x,dx): for cur_x, cur_dx in zip(cur_layer_x,cur_layer_dx): cur_old_grad = state['old_grad'].setdefault(i, np.zeros_like(cur_dx)) np.add(config['momentum'] * cur_old_grad, config['learning_rate'] * cur_dx, out = cur_old_grad) cur_x -= cur_old_grad i += 1 def normalization(column): column_min = column.min() column_max = column.max() column_range = column_max - column_min if(column_range == 0): return (column - column_min) return (column - column_min) / column_range def create_onehot(column): class_count = column.max() + 1 size = column.shape[0] onehot = np.zeros((size, class_count), dtype=float) for i in range(size): onehot[i][column[i]] = 1.0 return onehot # Open MNIST dataset and prepare for train from mlxtend.data import loadlocal_mnist x_train, y_train = loadlocal_mnist(images_path='Dataset/train-images-idx3-ubyte', labels_path='Dataset/train-labels-idx1-ubyte') x_test, y_test = loadlocal_mnist(images_path='Dataset/t10k-images-idx3-ubyte', labels_path='Dataset/t10k-labels-idx1-ubyte') # normalize x_train = normalization(x_train) x_test = normalization(x_test) # create onehot for y y_train_onehot = create_onehot(y_train) y_test_onehot = create_onehot(y_test) # batch generator def get_batches(dataset, batch_size): X, Y = dataset n_samples = X.shape[0] # Shuffle at the start of epoch indices = np.arange(n_samples) np.random.shuffle(indices) for start in range(0, n_samples, batch_size): end = min(start + batch_size, n_samples) batch_idx = indices[start:end] yield X[batch_idx], Y[batch_idx] features = x_train.shape[1] # Iptimizer params optimizer_config = {'learning_rate' : 1e-1, 'momentum': 0.9} optimizer_state = {} # Looping params n_epoch = 6 batch_size = 180 ``` ### Build NN ``` net = Sequential() net.add(Linear(features, 300)) net.add(ReLU()) net.add(Linear(300, 10)) net.add(SoftMax()) criterion = MSECriterion() print(net) ``` ### Train Basic training loop. Examine it. 
``` loss_history = [] for i in range(n_epoch): for x_batch, y_batch in get_batches((x_train, y_train_onehot), batch_size): net.zeroGradParameters() # Forward predictions = net.forward(x_batch) loss = criterion.forward(predictions, y_batch) # Backward dp = criterion.backward(predictions, y_batch) net.backward(x_batch, dp) # Update weights sgd_momentum(net.getParameters(), net.getGradParameters(), optimizer_config, optimizer_state) loss_history.append(loss) # Visualize display.clear_output(wait=True) plt.figure(figsize=(8, 6)) plt.title("Training loss") plt.xlabel("#iteration") plt.ylabel("loss") plt.plot(loss_history, 'b') plt.show() print('Current loss: %f' % loss) ``` ### Build NN with dropout ``` net = Sequential() net.add(Linear(features, 300)) net.add(ReLU()) net.add(Dropout(0.7)) net.add(Linear(300, 10)) net.add(SoftMax()) criterion = MSECriterion() print(net) loss_history = [] for i in range(n_epoch): for x_batch, y_batch in get_batches((x_train, y_train_onehot), batch_size): net.zeroGradParameters() # Forward predictions = net.forward(x_batch) loss = criterion.forward(predictions, y_batch) # Backward dp = criterion.backward(predictions, y_batch) net.backward(x_batch, dp) # Update weights sgd_momentum(net.getParameters(), net.getGradParameters(), optimizer_config, optimizer_state) loss_history.append(loss) # Visualize display.clear_output(wait=True) plt.figure(figsize=(8, 6)) plt.title("Training loss") plt.xlabel("#iteration") plt.ylabel("loss") plt.plot(loss_history, 'b') plt.show() print('Current loss: %f' % loss) # Your answer goes here. ################################################ net = Sequential() net.add(Linear(features, 600)) net.add(ReLU()) net.add(Dropout(0.7)) net.add(Linear(600, 300)) net.add(ReLU()) net.add(Linear(300, 100)) net.add(ReLU()) net.add(Linear(100, 10)) net.add(SoftMax()) criterion = MSECriterion() print(net) loss_history = [] for i in range(n_epoch): for x_batch, y_batch in get_batches((x_train, y_train_onehot), batch_size): net.zeroGradParameters() # Forward predictions = net.forward(x_batch) loss = criterion.forward(predictions, y_batch) # Backward dp = criterion.backward(predictions, y_batch) net.backward(x_batch, dp) # Update weights sgd_momentum(net.getParameters(), net.getGradParameters(), optimizer_config, optimizer_state) loss_history.append(loss) # Visualize display.clear_output(wait=True) plt.figure(figsize=(8, 6)) plt.title("Training loss") plt.xlabel("#iteration") plt.ylabel("loss") plt.plot(loss_history, 'b') plt.show() print('Current loss: %f' % loss) # Your code goes here. ################################################ # np.clip(prediction,0,1) # # Your code goes here. ################################################ ```
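The loops above only track training loss, so it is natural to also measure accuracy on the MNIST test split that was loaded earlier. The sketch below assumes the `Sequential` framework from `homework_modules.ipynb` exposes `forward` as used above and that the final `SoftMax` returns one row of class probabilities per sample; if your `Dropout` module has a train/evaluate switch, set it to evaluation mode before running this.

```
def compute_accuracy(net, X, y_true, batch_size=500):
    # Run the network batch by batch and compare argmax predictions to integer labels
    correct = 0
    for start in range(0, X.shape[0], batch_size):
        end = min(start + batch_size, X.shape[0])
        probs = net.forward(X[start:end])   # (batch, 10) class probabilities
        preds = np.argmax(probs, axis=1)
        correct += np.sum(preds == y_true[start:end])
    return correct / X.shape[0]

print('Test accuracy: %.4f' % compute_accuracy(net, x_test, y_test))
```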
``` !date import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sns %matplotlib inline sns.set_context('paper') sns.set_style('darkgrid') ``` # Mixture Model in PyMC3 Original NB by Abe Flaxman, modified by Thomas Wiecki ``` import pymc3 as pm, theano.tensor as tt # simulate data from a known mixture distribution np.random.seed(12345) # set random seed for reproducibility k = 3 ndata = 500 spread = 5 centers = np.array([-spread, 0, spread]) # simulate data from mixture distribution v = np.random.randint(0, k, ndata) data = centers[v] + np.random.randn(ndata) plt.hist(data); # setup model model = pm.Model() with model: # cluster sizes a = pm.constant(np.array([1., 1., 1.])) p = pm.Dirichlet('p', a=a, shape=k) # ensure all clusters have some points p_min_potential = pm.Potential('p_min_potential', tt.switch(tt.min(p) < .1, -np.inf, 0)) # cluster centers means = pm.Normal('means', mu=[0, 0, 0], sd=15, shape=k) # break symmetry order_means_potential = pm.Potential('order_means_potential', tt.switch(means[1]-means[0] < 0, -np.inf, 0) + tt.switch(means[2]-means[1] < 0, -np.inf, 0)) # measurement error sd = pm.Uniform('sd', lower=0, upper=20) # latent cluster of each observation category = pm.Categorical('category', p=p, shape=ndata) # likelihood for each observed value points = pm.Normal('obs', mu=means[category], sd=sd, observed=data) # fit model with model: step1 = pm.Metropolis(vars=[p, sd, means]) step2 = pm.ElemwiseCategoricalStep(var=category, values=[0, 1, 2]) tr = pm.sample(10000, step=[step1, step2]) ``` ## Full trace ``` pm.plots.traceplot(tr, ['p', 'sd', 'means']); ``` ## After convergence ``` # take a look at traceplot for some model parameters # (with some burn-in and thinning) pm.plots.traceplot(tr[5000::5], ['p', 'sd', 'means']); # I prefer autocorrelation plots for serious confirmation of MCMC convergence pm.autocorrplot(tr[5000::5], ['sd']) ``` ## Sampling of cluster for individual data point ``` i=0 plt.plot(tr['category'][5000::5, i], drawstyle='steps-mid') plt.axis(ymin=-.1, ymax=2.1) def cluster_posterior(i=0): print('true cluster:', v[i]) print(' data value:', np.round(data[i],2)) plt.hist(tr['category'][5000::5,i], bins=[-.5,.5,1.5,2.5,], rwidth=.9) plt.axis(xmin=-.5, xmax=2.5) plt.xticks([0,1,2]) cluster_posterior(i) ```
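Because the data were simulated, we can also check how well the sampled `category` assignments recover the true cluster labels `v`. The sketch below takes the posterior mode of each point's assignment (with the same burn-in and thinning as above) and compares it to `v`; the ordering potential on the means keeps the inferred cluster indices aligned with the true ones, so no extra relabelling should be needed.

```
# Posterior mode of the cluster assignment for every data point
cat_samples = tr['category'][5000::5]                    # shape: (n_kept_samples, ndata)
modal_cluster = np.array([np.bincount(cat_samples[:, i], minlength=k).argmax()
                          for i in range(ndata)])

recovery = np.mean(modal_cluster == v)
print('fraction of points assigned to their true cluster: %.3f' % recovery)
```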
``` def bubbleSort(arr): n=len(arr) for i in range(n): for j in range(0, n-i-1): if arr[j] > arr[j+1]: arr[j], arr[j+1] = arr[j+1], arr[j] arr = [3,2,6,4,5,8] bubbleSort(arr) print ("Sorted array is:") for i in range(len(arr)): print ("%d" %arr[i]), def countPaire(arr, sum): count=0 for i in range(0,len(arr)): for j in range(i+1,len(arr)): if (arr[i] + arr[j]) <= sum: print(arr[i], arr[j]) count +=1 return count arr=[4, 5, 7, 3] sum=10 print(countPaire(arr,sum)) def countPair1(arr, sum): i=0 j=len(arr)-1 while(j>i): if(arr[i]+ arr[j] == sum): return True elif(arr[i]+ arr[j] > sum): j -=1 else: i +=1 return False arr=[1, 5, 7, -1] sum=6 print(countPair1(arr,sum)) #given an array of numbers, find a pair of number that add up to 10 def IsPairOf10 (given_array): seen_numbers = {} for item in given_array: if (10 - item) in seen_numbers: print('The following pair of number in array adds up to 10: ' + str(item) + ' and ' + str(10 - item)) return else: seen_numbers[item] = 'number in the list' print('there is no a pair of numbers that adds up to 10') list1 = [4, 5, 7, 3] list2 = [5, 7, 0, 6,5] list3 = [9, 2, 8, 1, 3] list4 = [-12, 4, -67, 2] print(IsPairOf10(list2)) # should return 5 # Python program to check for the sum condition to be satisified def hasArrayTwoCandidates(A, arr_size, sum): # sort the array quickSort(A, 0, arr_size-1) l = 0 r = arr_size-1 # traverse the array for the two elements while l<r: if (A[l] + A[r] == sum): return 1 elif (A[l] + A[r] < sum): l += 1 else: r -= 1 return 0 # Implementation of Quick Sort # A[] --> Array to be sorted # si --> Starting index # ei --> Ending index def quickSort(A, si, ei): if si < ei: pi = partition(A, si, ei) quickSort(A, si, pi-1) quickSort(A, pi + 1, ei) def partition(A, si, ei): x = A[ei] i = (si-1) for j in range(si, ei): if A[j] <= x: i += 1 # This operation is used to swap two variables is python A[i], A[j] = A[j], A[i] A[i + 1], A[ei] = A[ei], A[i + 1] return i + 1 # Driver program to test the functions A = [1, 4, 45, 6, 10, -8] n = 16 if (hasArrayTwoCandidates(A, len(A), n)): print("Array has two elements with the given sum") else: print("Array doesn't have two elements with the given sum") def printPairs(arr, arr_size, sum): # Create an empty hash set s = set() for i in range(0, arr_size): temp = sum-arr[i] if (temp in s): print ("Pair with given sum "+ str(sum) + " is (" + str(arr[i]) + ", " + str(temp) + ")") s.add(arr[i]) # driver program to check the above function A = [1, 4, 45, 6, 10, 8] n = 16 printPairs(A, len(A), n) # Given two unsorted arrays, find all pairs whose sum is x # Naiv approach def naivSumOfPair(list1,list2,sum_x): count=0 for i in arr1: for j in arr2: if(i+j==sum_x): count +=1 print(i,' ',j) # print(count) return count # using Hash table def sumOfPair(list1,list2,sum_x): list3=[] for i in list1: list3.append(i) for j in list2: if sum_x>j: if (sum_x-j) in list3: print(j,sum_x-j) return 1 # arr1=[-1,-2,4,-6,5,7] # arr2=[6,3,4,0] arr1 = [1, 0, -4, 7, 6, 4] arr2 = [0, 2, 4, -3, 2, 1] sum_x=8 print(naivSumOfPair(arr1,arr2,sum_x)) # python program to count subarrays # having sum less than k. # Function to find number of subarrays # having sum less than k. 
def countSubarray(arr, n, k): count = 0 for i in range(0, n): sum = 0; for j in range(i, n): # If sum is less than k # then update sum and # increment count if (sum + arr[j] < k): sum = arr[j] + sum count+= 1 else: break return count; # Driver Code array = [1, 11, 2, 3, 15] k = 10 size = len(array) count = countSubarray(array, size, k); print(count) # This code is contributed by Sam007 # Python 3 program to count subarrays # having sum less than k. # Function to find number of subarrays # having sum less than k. def countSubarrays(arr, n, k): start = 0 end = 0 count = 0 sum = arr[0] while (start < n and end < n) : # If sum is less than k, move end # by one position. Update count and # sum accordingly. if (sum < k) : end += 1 if (end >= start): count += end - start # For last element, end may become n if (end < n): sum += arr[end] # If sum is greater than or equal to k, # subtract arr[start] from sum and # decrease sliding window by moving # start by one position else : sum -= arr[start] start += 1 return count # Driver Code if __name__ == "__main__": array = [ 1, 11, 2, 3, 15 ] k = 10 size = len(array) print(countSubarrays(array, size, k)) # This code is contributed by ita_c ``` ``` # Number of subarrays having product less than K # Python3 program to count subarrays # having product less than k. def countsubarray(array, n, k): count = 0 for i in range(0, n): # Counter for single element if array[i] <= k: count += 1 mul = array[i] for j in range(i + 1, n): # Multiple subarray mul = mul * array[j] # If this multiple is less # than k, then increment if mul <= k: count += 1 else: break return count # Driver Code array = [ 1, 2, 3, 4 ] k = 10 size = len(array) count = countsubarray(array, size, k); print (count, end = " ") # This code is contributed by Shreyanshi Arun. ``` ``` # Python3 program to count # subarrays having product # less than k. def countSubArrayProductLessThanK(a,k): n = len(a) p = 1 res = 0 start = 0 end = 0 while(end < n): # Move right bound by 1 # step. Update the product. p *= a[end] # Move left bound so guarantee # that p is again less than k. while (start < end and p >= k): p =int(p//a[start]) start+=1 # If p is less than k, update # the counter. Note that this # is working even for (start == end): # it means that the previous # window cannot grow anymore # and a single array element # is the only addendum. if (p < k): l = end - start + 1 res += l end+=1 return res # Driver Code if __name__=='__main__': print(countSubArrayProductLessThanK([1, 2, 3, 4], 10)) print(countSubArrayProductLessThanK([1, 9, 2, 8, 6, 4, 3], 100)) print(countSubArrayProductLessThanK([5, 3, 2], 16)) print(countSubArrayProductLessThanK([100, 200], 100)) print(countSubArrayProductLessThanK([100, 200], 101)) # This code is contributed by mits # Sliding Window technique # O(n) solution for finding # maximum sum of a subarray of size k import sys INT_MIN = -sys.maxsize -1 def maxSum(arr, n, k): # n must be greater than k if not n > k: print("Invalid") return -1 # Compute sum of first window of size k max_sum = INT_MIN window_sum = sum([arr[i] for i in range(k)]) # Compute sums of remaining windows by # removing first element of previous # window and adding last element of # current window. for i in range(n-k): window_sum = window_sum - arr[i] + arr[i + k] max_sum = max(window_sum, max_sum) return max_sum # Driver code arr = [1, 4, 2, 10, 2, 3, 1, 0, 20] k = 4 n = len(arr) print(maxSum(arr, n, k)) # This code is contributed by Kyle McClay ```
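One subtlety in `maxSum` above: `max_sum` starts at `INT_MIN` and is only updated after the window has already slid, so the very first window (indices `0` to `k-1`) never becomes a candidate. A corrected sketch of the same sliding-window idea, together with a brute-force cross-check on random inputs (sizes and values chosen arbitrarily), is below:

```
import random

def max_sum_sliding(arr, k):
    # Seed the maximum with the first window, then slide one element at a time
    window_sum = sum(arr[:k])
    max_sum = window_sum
    for i in range(len(arr) - k):
        window_sum = window_sum - arr[i] + arr[i + k]
        max_sum = max(max_sum, window_sum)
    return max_sum

def max_sum_brute_force(arr, k):
    # Directly evaluate every window of size k
    return max(sum(arr[i:i + k]) for i in range(len(arr) - k + 1))

for _ in range(100):
    n = random.randint(2, 30)
    k = random.randint(1, n - 1)
    arr = [random.randint(-50, 50) for _ in range(n)]
    assert max_sum_sliding(arr, k) == max_sum_brute_force(arr, k)

print('sliding-window result agrees with brute force on 100 random cases')
```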
``` # default_exp timestep #hide import sys [sys.path.append(i) for i in ['.', '..']] #hide from nbdev.showdoc import * %load_ext autoreload %autoreload 2 #export from nbdev_qq_test.solution import * from nbdev_qq_test.initialize import calculate_HI_linear, calculate_HIGC from nbdev_qq_test.classes import * import numpy as np import pandas as pd ``` # timestep > run one timestep of model ``` #export def solution(InitCond,ParamStruct,ClockStruct,weather_step,Outputs): """ Function to perform AquaCrop-OS solution for a single time step *Arguments:*\n `InitCond` : `InitCondClass` : containing current model paramaters `ClockStruct` : `ClockStructClass` : model time paramaters `weather_step`: `np.array` : containing P,ET,Tmax,Tmin for current day `Outputs` : `OutputClass` : object to store outputs *Returns:* `NewCond` : `InitCondClass` : containing updated model paramaters `Outputs` : `OutputClass` : object to store outputs """ # Unpack structures Soil = ParamStruct.Soil CO2 = ParamStruct.CO2 if ParamStruct.WaterTable == 1: Groundwater = ParamStruct.zGW[ClockStruct.TimeStepCounter] else: Groundwater = 0 P = weather_step[2] Tmax = weather_step[1] Tmin = weather_step[0] Et0 = weather_step[3] # Store initial conditions in structure for updating %% NewCond = InitCond # Check if growing season is active on current time step %% if ClockStruct.SeasonCounter >= 0: # Check if in growing season CurrentDate = ClockStruct.StepStartTime PlantingDate = ClockStruct.PlantingDates[ClockStruct.SeasonCounter] HarvestDate = ClockStruct.HarvestDates[ClockStruct.SeasonCounter] if (PlantingDate <= CurrentDate) and \ (HarvestDate >= CurrentDate) and \ (NewCond.CropMature == False) and \ (NewCond.CropDead == False): GrowingSeason = True else: GrowingSeason = False # Assign crop, irrigation management, and field management structures Crop = ParamStruct.Seasonal_Crop_List[ClockStruct.SeasonCounter] Crop_Name = ParamStruct.CropChoices[ClockStruct.SeasonCounter] IrrMngt = ParamStruct.IrrMngt if GrowingSeason == True: FieldMngt = ParamStruct.FieldMngt else: FieldMngt = ParamStruct.FallowFieldMngt else: # Not yet reached start of first growing season GrowingSeason = False # Assign crop, irrigation management, and field management structures # Assign first crop as filler crop Crop = ParamStruct.Fallow_Crop Crop_Name = "fallow" Crop.Aer = 5; Crop.Zmin = 0.3 IrrMngt = ParamStruct.FallowIrrMngt FieldMngt = ParamStruct.FallowFieldMngt # Increment time counters %% if GrowingSeason == True: # Calendar days after planting NewCond.DAP = NewCond.DAP+1 # Growing degree days after planting GDD = growing_degree_day(Crop.GDDmethod,Crop.Tupp,Crop.Tbase,Tmax,Tmin) ## Update cumulative GDD counter ## NewCond.GDD = GDD NewCond.GDDcum = NewCond.GDDcum+GDD NewCond.GrowingSeason = True else: NewCond.GrowingSeason = False # Calendar days after planting NewCond.DAP = 0 # Growing degree days after planting GDD = 0.3 NewCond.GDDcum = 0 # save current timestep counter NewCond.TimeStepCounter = ClockStruct.TimeStepCounter NewCond.P = weather_step[2] NewCond.Tmax = weather_step[1] NewCond.Tmin = weather_step[0] NewCond.Et0 = weather_step[3] # Run simulations %% # 1. Check for groundwater table NewCond,Soil.Profile = check_groundwater_table(ClockStruct.TimeStepCounter,Soil.Profile, NewCond,ParamStruct.WaterTable,Groundwater) # 2. Root development NewCond = root_development(Crop,Soil.Profile,NewCond,GDD,GrowingSeason,ParamStruct.WaterTable) # 3. Pre-irrigation NewCond, PreIrr = pre_irrigation(Soil.Profile,Crop,NewCond,GrowingSeason,IrrMngt) # 4. 
Drainage NewCond.th,DeepPerc,FluxOut = drainage(Soil.Profile,NewCond.th,NewCond.th_fc_Adj) # 5. Surface runoff Runoff,Infl,NewCond = rainfall_partition(P,NewCond,FieldMngt, Soil.CN, Soil.AdjCN, Soil.zCN, Soil.nComp,Soil.Profile) # 6. Irrigation NewCond, Irr = irrigation(NewCond,IrrMngt,Crop,Soil.Profile,Soil.zTop,GrowingSeason,P,Runoff) # 7. Infiltration NewCond,DeepPerc,RunoffTot,Infl,FluxOut = infiltration(Soil.Profile,NewCond,Infl,Irr,IrrMngt.AppEff,FieldMngt, FluxOut,DeepPerc,Runoff,GrowingSeason) # 8. Capillary Rise NewCond,CR = capillary_rise(Soil.Profile,Soil.nLayer,Soil.fshape_cr,NewCond,FluxOut,ParamStruct.WaterTable) # 9. Check germination NewCond = germination(NewCond,Soil.zGerm,Soil.Profile,Crop.GermThr,Crop.PlantMethod,GDD,GrowingSeason) # 10. Update growth stage NewCond = growth_stage(Crop,NewCond,GrowingSeason) # 11. Canopy cover development NewCond = canopy_cover(Crop,Soil.Profile,Soil.zTop,NewCond,GDD,Et0,GrowingSeason) # 12. Soil evaporation NewCond,Es,EsPot = soil_evaporation(ClockStruct.EvapTimeSteps,ClockStruct.SimOffSeason,ClockStruct.TimeStepCounter, Soil.EvapZmin,Soil.EvapZmax,Soil.Profile,Soil.REW,Soil.Kex,Soil.fwcc,Soil.fWrelExp,Soil.fevap, Crop.CalendarType,Crop.Senescence, IrrMngt.IrrMethod,IrrMngt.WetSurf, FieldMngt, NewCond,Et0,Infl,P,Irr,GrowingSeason) # 13. Crop transpiration Tr,TrPot_NS,TrPot,NewCond,IrrNet = transpiration(Soil.Profile,Soil.nComp,Soil.zTop, Crop, IrrMngt.IrrMethod,IrrMngt.NetIrrSMT, NewCond,Et0,CO2,GrowingSeason,GDD) # 14. Groundwater inflow NewCond,GwIn = groundwater_inflow(Soil.Profile,NewCond) # 15. Reference harvest index NewCond = HIref_current_day(NewCond,Crop,GrowingSeason) # 16. Biomass accumulation NewCond = biomass_accumulation(Crop,NewCond,Tr,TrPot_NS,Et0,GrowingSeason) # 17. Harvest index NewCond = harvest_index(Soil.Profile,Soil.zTop, Crop, NewCond,Et0,Tmax,Tmin,GrowingSeason) # 18. Crop yield if GrowingSeason == True: # Calculate crop yield (tonne/ha) NewCond.Y = (NewCond.B/100)*NewCond.HIadj #print( ClockStruct.TimeStepCounter,(NewCond.B/100),NewCond.HIadj) # Check if crop has reached maturity if ((Crop.CalendarType == 1) and (NewCond.DAP >= Crop.Maturity)) \ or ((Crop.CalendarType == 2) and (NewCond.GDDcum >= Crop.Maturity)): # Crop has reached maturity NewCond.CropMature = True elif GrowingSeason == False: # Crop yield is zero outside of growing season NewCond.Y = 0 # 19. Root zone water Wr,_Dr,_TAW,_thRZ = root_zone_water(Soil.Profile,NewCond.Zroot,NewCond.th,Soil.zTop,float(Crop.Zmin),Crop.Aer) # 20. 
Update net irrigation to add any pre irrigation IrrNet = IrrNet+PreIrr NewCond.IrrNetCum = NewCond.IrrNetCum+PreIrr # Update model outputs %% row_day = ClockStruct.TimeStepCounter row_gs = ClockStruct.SeasonCounter # Irrigation if GrowingSeason == True: if IrrMngt.IrrMethod == 4: # Net irrigation IrrDay = IrrNet IrrTot = NewCond.IrrNetCum else: # Irrigation IrrDay = Irr IrrTot = NewCond.IrrCum else: IrrDay = 0 IrrTot = 0 NewCond.Depletion = _Dr.Rz NewCond.TAW = _TAW.Rz # Water contents Outputs.Water[row_day,:3] = np.array([ClockStruct.TimeStepCounter,GrowingSeason,NewCond.DAP]) Outputs.Water[row_day,3:] = NewCond.th # Water fluxes Outputs.Flux[row_day,:] = [ClockStruct.TimeStepCounter,\ ClockStruct.SeasonCounter,NewCond.DAP,Wr,NewCond.zGW,\ NewCond.SurfaceStorage,IrrDay,\ Infl,Runoff,DeepPerc,CR,GwIn,Es,EsPot,Tr,P] # Crop growth Outputs.Growth[row_day,:] = [ClockStruct.TimeStepCounter,ClockStruct.SeasonCounter,NewCond.DAP,GDD,\ NewCond.GDDcum,NewCond.Zroot,\ NewCond.CC,NewCond.CC_NS,NewCond.B,\ NewCond.B_NS,NewCond.HI,NewCond.HIadj,\ NewCond.Y] # Final output (if at end of growing season) if ClockStruct.SeasonCounter > -1: if ((NewCond.CropMature == True) \ or (NewCond.CropDead == True) \ or (ClockStruct.HarvestDates[ClockStruct.SeasonCounter] == ClockStruct.StepEndTime )) \ and (NewCond.HarvestFlag == False): # Store final outputs Outputs.Final.loc[ClockStruct.SeasonCounter] = [ClockStruct.SeasonCounter,Crop_Name,\ ClockStruct.StepEndTime,ClockStruct.TimeStepCounter,\ NewCond.Y,IrrTot] # Set harvest flag NewCond.HarvestFlag = True return NewCond,ParamStruct,Outputs #hide show_doc(solution) #export def check_model_termination(ClockStruct,InitCond): """ Function to check and declare model termination *Arguments:*\n `ClockStruct` : `ClockStructClass` : model time paramaters `InitCond` : `InitCondClass` : containing current model paramaters *Returns:* `ClockStruct` : `ClockStructClass` : updated clock paramaters """ ## Check if current time-step is the last CurrentTime = ClockStruct.StepEndTime if CurrentTime < ClockStruct.SimulationEndDate: ClockStruct.ModelTermination = False elif CurrentTime >= ClockStruct.SimulationEndDate: ClockStruct.ModelTermination = True ## Check if at the end of last growing season ## # Allow model to exit early if crop has reached maturity or died, and in # the last simulated growing season if (InitCond.HarvestFlag == True) \ and (ClockStruct.SeasonCounter == ClockStruct.nSeasons-1): ClockStruct.ModelTermination = True return ClockStruct #hide show_doc(check_model_termination) #export def reset_initial_conditions(ClockStruct,InitCond,ParamStruct,weather): """ Function to reset initial model conditions for start of growing season (when running model over multiple seasons) *Arguments:*\n `ClockStruct` : `ClockStructClass` : model time paramaters `InitCond` : `InitCondClass` : containing current model paramaters `weather`: `np.array` : weather data for simulation period *Returns:* `InitCond` : `InitCondClass` : containing reset model paramaters """ ## Extract crop type ## CropType = ParamStruct.CropChoices[ClockStruct.SeasonCounter] ## Extract structures for updating ## Soil = ParamStruct.Soil Crop = ParamStruct.Seasonal_Crop_List[ClockStruct.SeasonCounter] FieldMngt = ParamStruct.FieldMngt CO2 = ParamStruct.CO2 CO2_data = ParamStruct.CO2data ## Reset counters ## InitCond.AgeDays = 0 InitCond.AgeDays_NS = 0 InitCond.AerDays = 0 InitCond.IrrCum = 0 InitCond.DelayedGDDs = 0 InitCond.DelayedCDs = 0 InitCond.PctLagPhase = 0 InitCond.tEarlySen = 0 InitCond.GDDcum = 0 
InitCond.DaySubmerged = 0 InitCond.IrrNetCum = 0 InitCond.DAP = 0 InitCond.AerDaysComp = np.zeros(int(Soil.nComp)) ## Reset states ## # States InitCond.PreAdj = False InitCond.CropMature = False InitCond.CropDead = False InitCond.Germination = False InitCond.PrematSenes = False InitCond.HarvestFlag = False # Harvest index # HI InitCond.Stage = 1 InitCond.Fpre = 1 InitCond.Fpost = 1 InitCond.fpost_dwn = 1 InitCond.fpost_upp = 1 InitCond.HIcor_Asum = 0 InitCond.HIcor_Bsum = 0 InitCond.Fpol = 0 InitCond.sCor1 = 0 InitCond.sCor2 = 0 # Growth stage InitCond.GrowthStage = 0 # Transpiration InitCond.TrRatio = 1 # crop growth InitCond.rCor = 1 InitCond.CC = 0 InitCond.CCadj = 0 InitCond.CC_NS = 0 InitCond.CCadj_NS = 0 InitCond.B = 0 InitCond.B_NS = 0 InitCond.HI = 0 InitCond.HIadj = 0 InitCond.CCxAct = 0 InitCond.CCxAct_NS = 0 InitCond.CCxW = 0 InitCond.CCxW_NS = 0 InitCond.CCxEarlySen = 0 InitCond.CCprev = 0 InitCond.ProtectedSeed = 0 ## Update CO2 concentration ## # Get CO2 concentration if ParamStruct.CO2concAdj != None: CO2.CurrentConc = ParamStruct.CO2concAdj else: Yri = pd.DatetimeIndex([ClockStruct.StepStartTime]).year[0] CO2.CurrentConc = CO2_data.loc[Yri] # Get CO2 weighting factor for first year CO2conc = CO2.CurrentConc CO2ref = CO2.RefConc if CO2conc <= CO2ref: fw = 0 else: if CO2conc >= 550: fw = 1 else: fw = 1-((550-CO2conc)/(550-CO2ref)) # Determine initial adjustment fCO2 = (CO2conc/CO2ref)/(1+(CO2conc-CO2ref)\ *((1-fw)*Crop.bsted+fw*((Crop.bsted*Crop.fsink)\ +(Crop.bface*(1-Crop.fsink))))) # Consider crop type if Crop.WP >= 40: # No correction for C4 crops ftype = 0 elif Crop.WP <= 20: # Full correction for C3 crops ftype = 1 else: ftype = (40-Crop.WP)/(40-20) # Total adjustment Crop.fCO2 = 1+ftype*(fCO2-1) ## Reset soil water conditions (if not running off-season) ## if ClockStruct.SimOffSeason==False: # Reset water content to starting conditions InitCond.th = InitCond.thini # Reset surface storage if (FieldMngt.Bunds) and (FieldMngt.zBund > 0.001): # Get initial storage between surface bunds InitCond.SurfaceStorage = min(FieldMngt.BundWater,FieldMngt.zBund) else: # No surface bunds InitCond.SurfaceStorage = 0 ## Update crop parameters (if in GDD mode) ## if Crop.CalendarType == 2: # Extract weather data for upcoming growing season wdf = weather[weather[:,4]>=ClockStruct.PlantingDates[ClockStruct.SeasonCounter]] #wdf = wdf[wdf[:,4]<=ClockStruct.HarvestDates[ClockStruct.SeasonCounter]] Tmin = wdf[:,0] Tmax = wdf[:,1] # Calculate GDD's if Crop.GDDmethod == 1: Tmean = (Tmax+Tmin)/2 Tmean[Tmean>Crop.Tupp] = Crop.Tupp Tmean[Tmean<Crop.Tbase] = Crop.Tbase GDD = Tmean-Crop.Tbase elif Crop.GDDmethod == 2: Tmax[Tmax>Crop.Tupp] = Crop.Tupp Tmax[Tmax<Crop.Tbase] = Crop.Tbase Tmin[Tmin>Crop.Tupp] = Crop.Tupp Tmin[Tmin<Crop.Tbase] = Crop.Tbase Tmean = (Tmax+Tmin)/2 GDD = Tmean-Crop.Tbase elif Crop.GDDmethod == 3: Tmax[Tmax>Crop.Tupp] = Crop.Tupp Tmax[Tmax<Crop.Tbase] = Crop.Tbase Tmin[Tmin>Crop.Tupp] = Crop.Tupp Tmean = (Tmax+Tmin)/2 Tmean[Tmean<Crop.Tbase] = Crop.Tbase GDD = Tmean-Crop.Tbase GDDcum = np.cumsum(GDD) assert GDDcum[-1] > Crop.Maturity, f"not enough growing degree days in simulation ({GDDcum[-1]}) to reach maturity ({Crop.Maturity})" Crop.MaturityCD = np.argmax((GDDcum>Crop.Maturity))+1 assert Crop.MaturityCD < 365, "crop will take longer than 1 year to mature" # 1. GDD's from sowing to maximum canopy cover Crop.MaxCanopyCD = (GDDcum>Crop.MaxCanopy).argmax()+1 # 2. GDD's from sowing to end of vegetative growth Crop.CanopyDevEndCD = (GDDcum>Crop.CanopyDevEnd).argmax()+1 # 3. 
Calendar days from sowing to start of yield formation Crop.HIstartCD = (GDDcum>Crop.HIstart).argmax()+1 # 4. Calendar days from sowing to end of yield formation Crop.HIendCD = (GDDcum>Crop.HIend).argmax()+1 # 5. Duration of yield formation in calendar days Crop.YldFormCD = Crop.HIendCD-Crop.HIstartCD if Crop.CropType == 3: # 1. Calendar days from sowing to end of flowering FloweringEnd = (GDDcum>Crop.FloweringEnd).argmax()+1 # 2. Duration of flowering in calendar days Crop.FloweringCD = FloweringEnd-Crop.HIstartCD else: Crop.FloweringCD = -999 # Update harvest index growth coefficient Crop = calculate_HIGC(Crop) # Update day to switch to linear HI build-up if Crop.CropType == 3: # Determine linear switch point and HIGC rate for fruit/grain crops Crop = calculate_HI_linear(Crop) else: # No linear switch for leafy vegetable or root/tiber crops Crop.tLinSwitch = 0 Crop.dHILinear = 0. ## Update global variables ## ParamStruct.Seasonal_Crop_List[ClockStruct.SeasonCounter] = Crop ParamStruct.CO2 = CO2 return InitCond,ParamStruct #hide show_doc(reset_initial_conditions) #export def update_time(ClockStruct,InitCond,ParamStruct,Outputs,weather): """ Function to update current time in model *Arguments:*\n `ClockStruct` : `ClockStructClass` : model time paramaters `InitCond` : `InitCondClass` : containing current model paramaters `weather`: `np.array` : weather data for simulation period *Returns:* `ClockStruct` : `ClockStructClass` : model time paramaters `InitCond` : `InitCondClass` : containing reset model paramaters """ ## Update time ## if ClockStruct.ModelTermination == False: if (InitCond.HarvestFlag == True) \ and ((ClockStruct.SimOffSeason==False)): # End of growing season has been reached and not simulating # off-season soil water balance. Advance time to the start of the # next growing season. 
# Check if in last growing season if ClockStruct.SeasonCounter < ClockStruct.nSeasons-1: # Update growing season counter ClockStruct.SeasonCounter = ClockStruct.SeasonCounter+1 # Update time-step counter #ClockStruct.TimeSpan = pd.Series(ClockStruct.TimeSpan) ClockStruct.TimeStepCounter = ClockStruct.TimeSpan.get_loc(ClockStruct.PlantingDates[ClockStruct.SeasonCounter]) # Update start time of time-step ClockStruct.StepStartTime = ClockStruct.TimeSpan[ClockStruct.TimeStepCounter] # Update end time of time-step ClockStruct.StepEndTime = ClockStruct.TimeSpan[ClockStruct.TimeStepCounter + 1] # Reset initial conditions for start of growing season InitCond,ParamStruct = reset_initial_conditions(ClockStruct,InitCond,ParamStruct,weather) else: # Simulation considers off-season, so progress by one time-step # (one day) # Time-step counter ClockStruct.TimeStepCounter = ClockStruct.TimeStepCounter+1 # Start of time step (beginning of current day) #ClockStruct.TimeSpan = pd.Series(ClockStruct.TimeSpan) ClockStruct.StepStartTime = ClockStruct.TimeSpan[ClockStruct.TimeStepCounter] # End of time step (beginning of next day) ClockStruct.StepEndTime = ClockStruct.TimeSpan[ClockStruct.TimeStepCounter + 1] # Check if in last growing season if ClockStruct.SeasonCounter < ClockStruct.nSeasons-1: # Check if upcoming day is the start of a new growing season if ClockStruct.StepStartTime == ClockStruct.PlantingDates[ClockStruct.SeasonCounter+1]: # Update growing season counter ClockStruct.SeasonCounter = ClockStruct.SeasonCounter+1 # Reset initial conditions for start of growing season InitCond,ParamStruct = reset_initial_conditions(ClockStruct,InitCond,ParamStruct,weather) elif ClockStruct.ModelTermination == True: ClockStruct.StepStartTime = ClockStruct.StepEndTime ClockStruct.StepEndTime = ClockStruct.StepEndTime + np.timedelta64(1, 'D') Outputs.Flux = pd.DataFrame(Outputs.Flux, columns=["TimeStepCounter",\ "SeasonCounter","DAP","Wr","zGW",\ "SurfaceStorage","IrrDay",\ "Infl","Runoff","DeepPerc","CR",\ "GwIn","Es","EsPot","Tr","P"]) Outputs.Water =pd.DataFrame(Outputs.Water, columns=["TimeStepCounter","GrowingSeason","DAP"]\ +['th'+str(i) for i in range(1,Outputs.Water.shape[1]-2)]) Outputs.Growth = pd.DataFrame(Outputs.Growth, columns = ["TimeStepCounter",'SeasonCounter',"DAP",'GDD',\ 'GDDcum','Zroot',\ 'CC','CC_NS','B',\ 'B_NS','HI','HIadj',\ 'Y']) return ClockStruct,InitCond,ParamStruct,Outputs #hide show_doc(update_time) #hide from nbdev.export import notebook2script notebook2script() ```
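For orientation, here is a hedged sketch of how the pieces above could be wired into a daily simulation driver: `solution` advances the water balance and crop growth by one day, `check_model_termination` decides whether to stop, and `update_time` moves the clock (resetting initial conditions at the start of each new season). The name `run_model` and the assumed row layout of `weather` (Tmin, Tmax, precipitation, ET0, date, matching how `solution` and `reset_initial_conditions` index it) are illustrative; the packaged AquaCrop driver may be organised differently.

```
def run_model(InitCond, ParamStruct, ClockStruct, weather, Outputs):
    """
    Sketch of a driver loop built from the per-timestep functions above
    (illustrative only).
    """
    while ClockStruct.ModelTermination == False:
        # weather row for the current day: [Tmin, Tmax, P, Et0, date]
        weather_step = weather[ClockStruct.TimeStepCounter]

        # run one day of the soil-water / crop-growth solution
        InitCond, ParamStruct, Outputs = solution(InitCond, ParamStruct,
                                                  ClockStruct, weather_step, Outputs)

        # decide whether the simulation (or the final season) has finished
        ClockStruct = check_model_termination(ClockStruct, InitCond)

        # advance the clock; resets initial conditions at the start of a new season
        ClockStruct, InitCond, ParamStruct, Outputs = update_time(ClockStruct, InitCond,
                                                                  ParamStruct, Outputs, weather)

    return ClockStruct, InitCond, ParamStruct, Outputs
```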
# Self-Driving Car Engineer Nanodegree ## Project: **Finding Lane Lines on the Road** *** In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project. --- Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image. **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".** --- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.** --- <figure> <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> </figcaption> </figure> <p></p> <figure> <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p> </figcaption> </figure> **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. 
Also, consult the forums for more troubleshooting tips.** ## Import Packages ``` #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import math import numpy as np import cv2 %matplotlib inline ``` ## Read in an Image ``` #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.figure(figsize=(10,6)) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') ``` ## Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:** `cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** ## Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! ``` import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. `vertices` should be a numpy array of integer points. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 
3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def remove_outliers(data, m=1): if len(data) <= 2: return data return data[abs(data - np.mean(data)) < m * np.std(data)] def draw_label(img, text, pos, bg_color): font_face = cv2.FONT_HERSHEY_SIMPLEX scale = 1 color = (0, 0, 0) thickness = cv2.FILLED margin = 2 txt_size = cv2.getTextSize(text, font_face, scale, thickness) end_x = pos[0] + txt_size[0][0] + margin end_y = pos[1] - txt_size[0][1] - margin cv2.rectangle(img, pos, (end_x, end_y), bg_color, thickness) cv2.putText(img, text, pos, font_face, scale, color, 4, cv2.LINE_AA) def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ right_x = [] right_y = [] left_x = [] left_y = [] for line in lines: for x1,y1,x2,y2 in line: slope = float((y2 - y1) / (x2 - x1)) if slope > 0.45: left_x.extend([x1, x2]) left_y.extend([y1, y2]) if slope < -0.45: right_x.extend([x1, x2]) right_y.extend([y1, y2]) # cv2.line(img, (x1, y1), (x2, y2), [0,0,255], thickness) if len(right_x) > 0 and len(left_x) > 0 and len(right_y) > 0 and len(left_y) > 0: right_line_coeffs = np.polyfit(right_x, right_y, 1) left_line_coeffs = np.polyfit(left_x, left_y, 1) # draw_label(img, "R({:.2f}, {:.2f})".format(left_line_coeffs[0], left_line_coeffs[1]), (600,320), (255,255,255)) # draw_label(img, "L({:.2f}, {:.2f})".format(right_line_coeffs[0], right_line_coeffs[1]), (150,320), (255,255,255)) # print("right slope, int: ({:.2f}, {:.2f}) left slope, int: ({:.2f}, {:.2f})".format(mean_right_slope, mean_right_yint, mean_left_slope, mean_left_yint)) cv2.line(img, (int((320 - right_line_coeffs[1]) / right_line_coeffs[0]), 320), (int((540 - right_line_coeffs[1]) / right_line_coeffs[0]), 540), color, thickness) cv2.line(img, (int((320 - left_line_coeffs[1]) / left_line_coeffs[0]), 320), (int((540 - left_line_coeffs[1]) / left_line_coeffs[0]), 540), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines, thickness=10) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. 
Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, γ) ``` ## Test Images Build your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** ``` import os os.listdir("test_images/") ``` ## Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report. Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. ``` # Read in and grayscale the image image = mpimg.imread('test_images/solidWhiteRight.jpg') gray = grayscale(image) # Define a kernel size and apply Gaussian smoothing kernel_size = 5 blur_gray = gaussian_blur(gray, kernel_size) # Define our parameters for Canny and apply low_threshold = 55 high_threshold = 175 edges = canny(blur_gray, low_threshold, high_threshold) # This time we are defining a four sided polygon to mask imshape = image.shape vertices = np.array([[(0,imshape[0]),(imshape[1]*0.47, imshape[0]*0.6), (imshape[1]*0.51, imshape[0]*0.6), (imshape[1],imshape[0])]], dtype=np.int32) masked_edges = region_of_interest(edges, vertices) print(imshape[0], imshape[1]) # Define the Hough transform parameters # Make a blank the same size as our image to draw on rho = 1 # distance resolution in pixels of the Hough grid theta = np.pi/180 # angular resolution in radians of the Hough grid threshold = 10 # minimum number of votes (intersections in Hough grid cell) min_line_length = 20 #minimum number of pixels making up a line max_line_gap = 20 # maximum gap in pixels between connectable line segments line_image = np.copy(image)*0 # creating a blank to draw lines on # Run Hough on edge detected image line_image = hough_lines(masked_edges, rho, theta, threshold, min_line_length, max_line_gap) # Draw the lines on the edge image lines_edges = weighted_img(line_image, image, α=0.8, β=1., γ=0.) plt.figure(figsize=(10,6)) plt.imshow(lines_edges) ``` ## Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: `solidWhiteRight.mp4` `solidYellowLeft.mp4` **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** **If you get an error that looks like this:** ``` NeedDownloadError: Need ffmpeg exe. 
You can download it by calling: imageio.plugins.ffmpeg.download() ``` **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** ``` # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image where lines are drawn on lanes) gray = grayscale(image) # Define a kernel size and apply Gaussian smoothing kernel_size = 5 blur_gray = gaussian_blur(gray, kernel_size) # Define our parameters for Canny and apply low_threshold = 55 high_threshold = 175 edges = canny(blur_gray, low_threshold, high_threshold) # This time we are defining a four sided polygon to mask imshape = image.shape vertices = np.array([[(0,imshape[0]),(imshape[1]*0.47, imshape[0]*0.6), (imshape[1]*0.51, imshape[0]*0.6), (imshape[1],imshape[0])]], dtype=np.int32) masked_edges = region_of_interest(edges, vertices) # Define the Hough transform parameters # Make a blank the same size as our image to draw on rho = 1 # distance resolution in pixels of the Hough grid theta = np.pi/180 # angular resolution in radians of the Hough grid threshold = 10 # minimum number of votes (intersections in Hough grid cell) min_line_length = 20 #minimum number of pixels making up a line max_line_gap = 20 # maximum gap in pixels between connectable line segments # Run Hough on edge detected image line_image = hough_lines(masked_edges, rho, theta, threshold, min_line_length, max_line_gap) # Draw the lines on the edge image result = weighted_img(line_image, image, α=0.8, β=1., γ=0.) return result ``` Let's try the one with the solid white lane on the right first ... ``` white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False) ``` Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. ``` HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) ``` ## Improve the draw_lines() function **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. 
You can see an example of the result you're going for in the video "P1_example.mp4".** **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! ``` yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) ``` ## Writeup and Submission If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. ## Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! ``` challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) ```
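One way to make the pipeline more robust on the challenge video is to smooth the fitted line coefficients across consecutive frames, so that a single noisy frame cannot make the drawn lanes jump. The cell below is only an illustrative sketch added here (it is not part of the original project code, and `LaneSmoother`, `alpha`, and the example coefficients are made-up names/values).

```
import numpy as np

class LaneSmoother:
    """Keeps an exponentially weighted running average of (slope, intercept)
    for one lane line across video frames."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha      # weight given to the newest frame
        self.coeffs = None      # last smoothed (slope, intercept)

    def update(self, new_coeffs):
        new_coeffs = np.asarray(new_coeffs, dtype=float)
        if self.coeffs is None:
            self.coeffs = new_coeffs
        else:
            self.coeffs = self.alpha * new_coeffs + (1 - self.alpha) * self.coeffs
        return self.coeffs

# Minimal usage: feed the np.polyfit() result for each side on every frame,
# and draw the returned smoothed coefficients instead of the raw ones.
left_smoother, right_smoother = LaneSmoother(), LaneSmoother()
smoothed_left = left_smoother.update([-0.7, 660.0])    # example (slope, intercept)
smoothed_right = right_smoother.update([0.6, -40.0])
print(smoothed_left, smoothed_right)
```

Holding the smoother objects outside `process_image()` (e.g. as module-level state) keeps their history across frames of the clip.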
``` # Set in root_directory %cd /home/ronald/PycharmProjects/x-ray-deep-learning/X-ray_Object_Detection/ #%ls # libs import numpy as np from pathlib import Path import json np.random.seed(1) # Directories ROOT = Path('data/raw') images_path = ROOT / 'images' ann_path = ROOT/ 'annotation' print('Images Path:', images_path) print('Annotation Path:', ann_path) # Labels/n_classes labels = ['gun'] #, 'knife', 'shuriken', 'razor_blade'] n_classes = len(labels) + 1 # count background # Image Dimenssions dim = (256, 256, 1) # Collect all files absolute Path imgs_paths = sorted([i.absolute() for i in images_path.glob("*.png") if i.is_file()]) indexes = np.arange(len(imgs_paths)) batch_size = 4 index = 15 # Set batch indexes # if index 0 and batch 4 in range(0, 17) retrieve values [0 1 2 3] # if index 1 and batch 4 in range(0, 17) retrieve values [4 5 6 7] #indexes = indexes[index * batch_size:(index + 1) * batch_size] #imgs_paths = [imgs_paths[index] for index in indexes] imgs_name = [img.name for img in imgs_paths] # Create empty data-set. X = np.empty((batch_size, *dim), dtype=np.float32) y = np.empty((batch_size, dim[0], dim[1], n_classes), dtype=np.float32) # Open imgs annotations with open("data/raw/annotation/coco_annotation.json", "r") as read_it: ann_data = json.load(read_it) import cv2 import numpy as np import imgaug as ia import imgaug.augmenters as iaa from imgaug.augmentables import Keypoint, KeypointsOnImage %matplotlib inline from matplotlib import pyplot as plt dict_imgs = ann_data.get('images') dict_ann = ann_data.get('annotations') dict_cat = ann_data.get('categories') seq = iaa.Sequential([ iaa.Fliplr(0.5),# horizontal flips # Small gaussian blur with random sigma between 0 and 0.5. iaa.GaussianBlur(sigma=(0, 0.5)), # Crop image with random from 0 to 10% # But we only crop about 50% of all images. iaa.Sometimes( 0.5, iaa.Crop(percent=(0, 0.1), keep_size=True)), # Strengthen or weaken the contrast in each images. iaa.LinearContrast((0.75, 1)), # Add gaussian noise. # For 30% of all images, we sample the noise once per pixel. # For the other 30% of all images, we sample the noise per pixel AND # channel. This can change the color (not only brightness) of the # pixels. iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.05), per_channel=0.3), # Apply affine transformations to each images. # Scale/zoom them. 
iaa.Affine( scale={"x": (1.0, 1.1), "y": (1.0, 1.1)}) ], random_order=True) # apply augmenters in random order def search_array(array, key, value): return next((obj for obj in array if obj[key] == value), None) # return object def get_img_seg_kps(img_seg): points = list() for i in range(0, len(img_seg), 2): # iterate every two steps chunk = img_seg[i:i+2] points.append(Keypoint(x=chunk[0], y=chunk[1])) return points def get_img_info(img_name): """ return img_label and segmentation points of the image """ img_seg, label = None, None img_obj = search_array(dict_imgs, 'file_name', img_name) if img_obj is not None: ann_obj = search_array(dict_ann, 'image_id', str(img_obj['id'])) if ann_obj is not None: kps = get_img_seg_kps(ann_obj['segmentation']) label = search_array(dict_cat, 'id', ann_obj['category_id']) return label['name'], kps else: # Create annotation for image kps = create_img_seg(img_obj) return 'background', kps return None def create_img_seg(img_obj): height = img_obj['height'] width = img_obj['width'] points = [ Keypoint(x=0, y=0), Keypoint(x=width-1, y=0), Keypoint(x=width-1, y=height-1), Keypoint(x=0, y=height-1) ] # print(points) return points def get_augimg(img, img_info): label, points = img_info kps = KeypointsOnImage(points, shape=img.shape) if img.shape != dim: img = ia.imresize_single_image(img, dim[0:2]) kps = kps.on(img) # Augment keypoints and images. seq_det = seq.to_deterministic() img_aug = seq_det.augment_images([img])[0] kps_aug = seq_det.augment_keypoints([kps])[0] # print(kps) # print("--------\n", kps_aug) # img_aug, kps_aug = seq(image=img, keypoints=kps) aug_points = [[kp.x, kp.y] for kp in kps_aug.keypoints] aug_points_dic = {'label': label, 'points': aug_points} # ia.imshow(np.hstack([ # kps.draw_on_image(img, size=10), # kps_aug.draw_on_image(img_aug, size=10)])) return img_aug, aug_points_dic def show(img): print(img.shape) plt.imshow(img) plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis plt.show() def get_mask(img, imgaug_shape): blank = np.zeros(shape=(img.shape[0], img.shape[1]), dtype=np.float32) points = np.array(imgaug_shape['points'], dtype=np.int32) label = imgaug_shape['label'] cv2.fillPoly(blank, [points], 255) blank = blank / 255.0 # ia.imshow(img) # ia.imshow(blank) return np.expand_dims(blank, axis=2) def data_generation(img_path): X = np.empty((batch_size, *dim), dtype=np.float32) y = np.empty((batch_size, dim[0], dim[1], n_classes), dtype=np.float32) # retrieve img in gray_scale as numpy img = cv2.imread(str(img_path), 0) # our images are gray_scale img = np.expand_dims(img, axis=2) # img = (img / 255.0).astype(np.float32) images = [np.copy(img) for _ in range(batch_size)] img_info = get_img_info(img_path.name) for i, image in enumerate(images): imgaug, imgaug_shape = get_augimg(img, img_info) imgaug_mask = get_mask(imgaug, imgaug_shape) print(imgaug.shape) print(imgaug_mask.shape) X[i,] = imgaug y[i,] = imgaug_mask return X, y img_pol = data_generation(imgs_paths[index]) #print(img_pol) ```
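To sanity-check the generator output above, a small visualization cell can unpack the `(X, y)` pair returned by `data_generation()` and show each augmented image next to its mask. This is only an illustrative sketch added here; it reuses `img_pol`, `batch_size`, and `plt` from the cells above.

```
X_batch, y_batch = img_pol   # produced above from data_generation(imgs_paths[index])

fig, axes = plt.subplots(batch_size, 2, figsize=(6, 3 * batch_size))
for i in range(batch_size):
    axes[i, 0].imshow(X_batch[i, :, :, 0], cmap='gray')
    axes[i, 0].set_title('augmented image')
    axes[i, 0].axis('off')
    axes[i, 1].imshow(y_batch[i, :, :, 0], cmap='gray')
    axes[i, 1].set_title('mask (channel 0)')
    axes[i, 1].axis('off')
plt.tight_layout()
plt.show()
```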
``` import numpy as np import pandas as pd string_data=pd.Series(['aardvark','artichoke',np.nan,'avocado']) string_data string_data.isnull() string_data[0]=None string_data.isnull() from numpy import nan as NA data=pd.Series([1,NA,3.5,NA,7]) data.dropna() data[data.notnull()] data=pd.DataFrame([[1.,6.5,3.],[1.,NA,NA],[NA,NA,NA],[NA,6.5,3.]]) cleaned=data.dropna() data cleaned data.dropna(how='all') data[4]=NA data data.dropna(axis=1,how='all') df=pd.DataFrame(np.random.randn(7,3)) df.iloc[:4,1]=NA df.iloc[:2,2]=NA df df.dropna() df.dropna(thresh=2) df.fillna(0) df.fillna({1:0.5,2:0}) _=df.fillna(0,inplace=True) df df=pd.DataFrame(np.random.randn(6,3)) df.iloc[2:,1] df.iloc[4:,2] df df.fillna(method='ffill') df.fillna(method='ffill',limit=2) data=pd.Series([1.,NA,3.5,NA,7]) df.fillna(data.mean()) data=pd.DataFrame({'k1':['one','two']*3+['two'],'k2':[1,1,2,3,3,4,4]}) data.duplicated() data.drop_duplicates() data['v1']=range(7) data.drop_duplicates(['k1']) data.drop_duplicates(['k1','k2'],keep='last') data = pd.DataFrame({'food': ['bacon', 'pulled pork', 'bacon', ....: 'Pastrami', 'corned beef', 'Bacon', ....: 'pastrami', 'honey ham', 'nova lox'], ....: 'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]}) data meat_to_animal = { 'bacon': 'pig', 'pulled pork': 'pig', 'pastrami': 'cow', 'corned beef': 'cow', 'honey ham': 'pig', 'nova lox': 'salmon' } lowercased = data['food'].str.lower()= data['food'].str.lower() lowercased data['animal'] = lowercased.map(meat_to_animal) data data = pd.Series([1., -999., 2., -999., -1000., 3.]) data data.replace([-999, -1000], np.nan) data.replace([-999, -1000], [np.nan, 0]) data.replace({-999: np.nan, -1000: 0}) data = pd.DataFrame(np.arange(12).reshape((3, 4)), ....: index=['Ohio', 'Colorado', 'New York'], ....: columns=['one', 'two', 'three', 'four']) transform = lambda x: x[:4].upper() data.index.map(transform) data.index = data.index.map(transform) data data.rename(index=str.title, columns=str.upper) data.rename(index={'OHIO': 'INDIANA'}, ....: columns={'three': 'peekaboo'}) data.rename(index={'OHIO':'INDIANA'},inplace=True) data ages=[20,22,25,27,21,23,37,31,61,45,41,32] bins=[18,25,35,60,100] cats=pd.cut(ages,bins) cats cats.codes cats.categories pd.value_counts(cats) pd.value_counts(cats) pd.cut(ages,[18,25,35,60,100],right=False) group_names = ['Youth', 'YoungAdult', 'MiddleAged', 'Senior'] pd.cut(ages,bins,labels=group_names) data=np.random.rand(20) pd.cut(data,4,precision=2) data=np.random.randn(1000) cats=pd.qcut(data,4) cats pd.value_counts(cats) pd.qcut(data,[0,0.1,0.5,0.9,1.]) data=pd.DataFrame(np.random.randn(1000,4)) data.describe() col=data[2] col[np.abs(col)>3] data[(np.abs(data)>3).any(1)] data[(np.abs(data)>3)]=np.sign(data)*3 data.describe() np.sign(data).head() df=pd.DataFrame(np.arange(5*4).reshape((5,4))) sampler=np.random.permutation(5) sampler df df.take(sampler) df.sample(n=3) choices=pd.Series([5,7,-1,6,4]) draws=choices.sample(n=10,replace=True) draws df = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'], .....: 'data1': range(6)}) pd.get_dummies(df['key']) dummies=pd.get_dummies(df['key'],prefix='key') df_with_dummy=df[['data1']].join(dummies) df_with_dummy mnames=['movie_id','title','genres'] movies = pd.read_table('datasets/movielens/movies.dat', sep='::', .....: header=None, names=mnames) movies[:10] all_genres=[] for x in movies.genres:all_genres.extend(x.split('|')) genres=pd.unique(all_genres) genres zero_matrix=np.zeros((len(movies),len(genres))) dummies=pd.DataFrame(zero_matrix,columns=genres) gen=movies.genres[0] gen.split('|') 
dummies.columns.get_indexer(gen.split('|')) for i, gen in enumerate(movies.genres): indices = dummies.columns.get_indexer(gen.split('|')) dummies.iloc[i, indices] = 1 movies_windic = movies.join(dummies.add_prefix('Genre_')) movies_windic.iloc[0] np.random.seed(12345) values = np.random.rand(10) values bins = [0, 0.2, 0.4, 0.6, 0.8, 1] pd.get_dummies(pd.cut(values, bins)) val = 'a,b, guido' val.split(',') pieces = [x.strip() for x in val.split(',')] pieces first, second, third = pieces first + '::' + second + '::' + third '::'.join(pieces) 'guido' in val val.index(',') val.find(':') val.index(':') val.count(',') val.replace(',', '::') val.replace(',', '') import re text = "foo bar\t baz \tqux" re.split('\s+', text) regex = re.compile('\s+') regex.split(text) regex.findall(text) text = """Dave [email protected] Steve [email protected] Rob [email protected] Ryan [email protected] """ pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}' # re.IGNORECASE makes the regex case-insensitive regex = re.compile(pattern, flags=re.IGNORECASE) regex.findall(text) m = regex.search(text) m text[m.start():m.end()] print(regex.match(text)) print(regex.sub('REDACTED', text)) pattern = r'([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\.([A-Z]{2,4})' regex = re.compile(pattern, flags=re.IGNORECASE) m = regex.match('[email protected]') m.groups() regex.findall(text) print(regex.sub(r'Username: \1, Domain: \2, Suffix: \3',text)) data = {'Dave': '[email protected]', 'Steve': '[email protected]', .....: 'Rob': '[email protected]', 'Wes': np.nan} data = pd.Series(data) data data.isnull() data.str.contains('gmail') pattern data.str.findall(pattern, flags=re.IGNORECASE) matches = data.str.match(pattern, flags=re.IGNORECASE) matches matches.str.get(1) matches.str[0] data.str[:5] ```
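One closely related vectorized operation not shown above is `Series.str.extract`, which applies a regex with capture groups and returns one column per group. The sketch below uses the grouped version of `pattern` defined above on a small illustrative Series (the addresses here are placeholders invented for the example, not data from the notebook).

```
emails = pd.Series({'Dave': 'dave@example.com',
                    'Steve': 'steve@example.org',
                    'Wes': np.nan})
extracted = emails.str.extract(pattern, flags=re.IGNORECASE)
extracted.columns = ['username', 'domain', 'suffix']   # illustrative labels
extracted
```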
# ML2CPP ## Preparing the dataset ``` from sklearn import datasets import numpy as np import pandas as pd boston = datasets.load_boston() def populate_table(tablename, feature_names): X = boston.data y = boston.target N = X.shape[0] y = y.reshape(N,1) k = np.arange(N).reshape(N, 1) k_X_y = np.concatenate((k, X, y) , axis=1) lTable=pd.DataFrame(k_X_y) # print(lTable.head()) lTable.columns = ['idx'] + feature_names + ['TGT']; lTable['TGT'] = lTable['TGT'].apply(int) lTable['idx'] = lTable['idx'].apply(int) lTable.to_csv(tablename , float_format='%.14g') metadata = {"primary_key" : "KEY", "features" : list(boston.feature_names), "targets" : ["TGT"], "table" : "iris"} populate_table("/tmp/boston.csv" , metadata["features"]) df = pd.read_csv("/tmp/boston.csv") df.sample(12, random_state=1960) ``` ## Training a Model ``` # train any scikit model on the iris dataset from sklearn.feature_selection import SelectKBest, chi2 clf = SelectKBest(chi2, k=5) clf.fit(df[metadata['features']].values, df[metadata['targets']].values) ``` ## Deploying the Model ``` def generate_cpp_for_model(model): import pickle, json, requests, base64 b64_data = base64.b64encode(pickle.dumps(model)).decode('utf-8') # send the model th the web service json_data={"Name":"model_cpp_sample", "PickleData":b64_data , "SQLDialect":"CPP", "FeatureNames" : metadata['features']} r = requests.post("https://sklearn2sql.herokuapp.com/model", json=json_data) content = r.json() lCPP = content["model"]["SQLGenrationResult"][0]["SQL"] # print(lCPP); return lCPP lCPPCode = generate_cpp_for_model(clf); print(lCPPCode) def write_text_to_file(iCPPCode, oCPPFile): with open(oCPPFile, "w") as text_file: text_file.write(iCPPCode) def add_cpp_main_function(iCPPCode, iCSVFile): lCPPCode = "#include \"Generic.i\"\n\n" lCPPCode = lCPPCode + iCPPCode lCPPCode = lCPPCode + "\tint main() {\n" lCPPCode = lCPPCode + "\t\tscore_csv_file(\"" + iCSVFile +"\");\n" lCPPCode = lCPPCode + "\treturn 0;\n}\n" return lCPPCode def compile_cpp_code_as_executable(iName): import subprocess lCommand = ["g++", "-Wall", "-Wno-unused-function", "-std=c++17" , "-g" , "-o", iName + ".exe", iName + ".cpp"] print("EXECUTING" , "'" + " ".join(lCommand) + "'") result = subprocess.check_output(lCommand) # print(result) def execute_cpp_model(iName, iCSVFile): import subprocess result2 = subprocess.check_output([iName + ".exe", iCSVFile]) result2 = result2.decode() print(result2[:100]) print(result2[-100:]) return result2 def execute_cpp_code(iCPPCode, iCSVFile): lName = "/tmp/sklearn2sql_cpp_" + str(id(clf)); lCPPCode = add_cpp_main_function(iCPPCode, iCSVFile) write_text_to_file(lCPPCode, lName + ".cpp") compile_cpp_code_as_executable(lName) result = execute_cpp_model(lName, iCSVFile) write_text_to_file(str(result), lName + ".out") return lName + ".out" populate_table("/tmp/boston2.csv" , ["Feature_" + str(i) for i,x in enumerate(metadata["features"])]) lCPPOutput = execute_cpp_code(lCPPCode , "/tmp/boston2.csv") cpp_output = pd.read_csv(lCPPOutput) cpp_output.sample(12, random_state=1960) skl_outputs = pd.DataFrame() X = df[metadata['features']].values skl_output_key = pd.DataFrame(list(range(X.shape[0])), columns=['idx']); skl_output_transform = pd.DataFrame(clf.transform(X), columns=cpp_output.columns[1:]); skl_output = pd.concat([skl_output_key, skl_output_transform] , axis=1) skl_output.sample(12, random_state=1960) cpp_skl_join = skl_output.join(cpp_output , how='left', on='idx', lsuffix='_skl', rsuffix='_cpp') cpp_skl_join.sample(12, random_state=1960) for col in 
cpp_output.columns: lDiff = cpp_skl_join[col + "_skl"] - cpp_skl_join[col + "_cpp"] print(lDiff.describe()) ```
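As a compact follow-up to the per-column summaries above, the same join can be reduced to a single pass/fail check. This is a sketch added for illustration; the `tolerance` value is an arbitrary choice (not something prescribed by sklearn2sql), and it assumes the joined columns are numeric, as in the loop above.

```
tolerance = 1e-8   # illustrative numerical tolerance
for col in cpp_output.columns:
    max_abs_diff = (cpp_skl_join[col + "_skl"] - cpp_skl_join[col + "_cpp"]).abs().max()
    print("{}: max |skl - cpp| = {:.3g}".format(col, max_abs_diff))
    assert max_abs_diff < tolerance, "column {} differs by more than {}".format(col, tolerance)
print("C++ and scikit-learn outputs agree within tolerance.")
```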
### Neural style transfer in PyTorch This tutorial implements the "slow" neural style transfer based on the VGG19 model. It closely follows the official neural style tutorial you can find [here](http://pytorch.org/tutorials/advanced/neural_style_tutorial.html). __Note:__ if you didn't sit through the explanation of neural style transfer in the on-campus lecture, you're _strongly recommended_ to follow the link above instead of this notebook. ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline from matplotlib.pyplot import imread from skimage.transform import resize, rotate import torch, torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable # desired size of the output image imsize = 512 # REDUCE THIS TO 128 IF THE OPTIMIZATION IS TOO SLOW FOR YOU def image_loader(image_name): image = resize(imread(image_name), [imsize, imsize]) image = image.transpose([2,0,1]) / image.max() image = Variable(dtype(image)) # fake batch dimension required to fit network's input dimensions image = image.unsqueeze(0) return image use_cuda = torch.cuda.is_available() print("torch", torch.__version__) if use_cuda: print("Using GPU.") else: print("Not using GPU.") dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor ``` ### Draw input images ``` !mkdir -p images !wget https://github.com/yandexdataschool/Practical_DL/raw/fall21/week10_interpretability/bonus_style_transfer/images/wave.jpg -O images/wave.jpg style_img = image_loader("images/wave.jpg").type(dtype) !wget http://cdn.cnn.com/cnnnext/dam/assets/170809210024-trump-nk.jpg -O images/my_img.jpg content_img = image_loader("images/my_img.jpg").type(dtype) assert style_img.size() == content_img.size(), \ "we need to import style and content images of the same size" def imshow(tensor, title=None): image = tensor.clone().cpu() # we clone the tensor to not do changes on it image = image.view(3, imsize, imsize) # remove the fake batch dimension image = image.numpy().transpose([1,2,0]) plt.imshow(image / np.max(image)) if title is not None: plt.title(title) plt.figure(figsize=[12,6]) plt.subplot(1,2,1) imshow(style_img.data, title='Style Image') plt.subplot(1,2,2) imshow(content_img.data, title='Content Image') ``` ### Define Style Transfer Losses As shown in the lecture, we define two loss functions: content and style losses. Content loss is simply a pointwise mean squared error of high-level features while style loss is the error between gram matrices of intermediate feature layers. To obtain the feature representations we use a pre-trained VGG19 network. ``` import torchvision.models as models cnn = models.vgg19(pretrained=True).features # move it to the GPU if possible: if use_cuda: cnn = cnn.cuda() class ContentLoss(nn.Module): def __init__(self, target, weight): super(ContentLoss, self).__init__() # we 'detach' the target content from the tree used self.target = target.detach() * weight self.weight = weight def forward(self, input): self.loss = F.mse_loss(input * self.weight, self.target) return input.clone() def backward(self, retain_graph=True): self.loss.backward(retain_graph=retain_graph) return self.loss def gram_matrix(input): a, b, c, d = input.size() # a=batch size(=1) # b=number of feature maps # (c,d)=dimensions of a f. map (N=c*d) features = input.view(a * b, c * d) # resise F_XL into \hat F_XL G = torch.mm(features, features.t()) # compute the gram product # we 'normalize' the values of the gram matrix # by dividing by the number of element in each feature maps. 
return G.div(a * b * c * d) class StyleLoss(nn.Module): def __init__(self, target, weight): super(StyleLoss, self).__init__() self.target = target.detach() * weight self.weight = weight def forward(self, input): self.G = gram_matrix(input) self.G.mul_(self.weight) self.loss = F.mse_loss(self.G, self.target) return input.clone() def backward(self, retain_graph=True): self.loss.backward(retain_graph=retain_graph) return self.loss ``` ### Style transfer pipeline We can now define a unified "model" that computes all the losses on the image triplet (content image, style image, optimized image) so that we could optimize them with backprop (over image pixels). ``` content_weight=1 # coefficient for content loss style_weight=1000 # coefficient for style loss content_layers=('conv_4',) # use these layers for content loss style_layers=('conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5') # use these layers for style loss content_losses = [] style_losses = [] model = nn.Sequential() # the new Sequential module network # move these modules to the GPU if possible: if use_cuda: model = model.cuda() i = 1 for layer in list(cnn): if isinstance(layer, nn.Conv2d): name = "conv_" + str(i) model.add_module(name, layer) if name in content_layers: # add content loss: target = model(content_img).clone() content_loss = ContentLoss(target, content_weight) model.add_module("content_loss_" + str(i), content_loss) content_losses.append(content_loss) if name in style_layers: # add style loss: target_feature = model(style_img).clone() target_feature_gram = gram_matrix(target_feature) style_loss = StyleLoss(target_feature_gram, style_weight) model.add_module("style_loss_" + str(i), style_loss) style_losses.append(style_loss) if isinstance(layer, nn.ReLU): name = "relu_" + str(i) model.add_module(name, layer) if name in content_layers: # add content loss: target = model(content_img).clone() content_loss = ContentLoss(target, content_weight) model.add_module("content_loss_" + str(i), content_loss) content_losses.append(content_loss) if name in style_layers: # add style loss: target_feature = model(style_img).clone() target_feature_gram = gram_matrix(target_feature) style_loss = StyleLoss(target_feature_gram, style_weight) model.add_module("style_loss_" + str(i), style_loss) style_losses.append(style_loss) i += 1 if isinstance(layer, nn.MaxPool2d): name = "pool_" + str(i) model.add_module(name, layer) # *** ``` ### Optimization We can now optimize both style and content loss over input image. ``` input_image = Variable(content_img.clone().data, requires_grad=True) optimizer = torch.optim.Adam([input_image], lr=0.1) num_steps = 300 for i in range(num_steps): # correct the values of updated input image input_image.data.clamp_(0, 1) model(input_image) style_score = 0 content_score = 0 for sl in style_losses: style_score += sl.backward() for cl in content_losses: content_score += cl.backward() if i % 10 == 0: # <--- adjust the value to see updates more frequently print('Step # {} Style Loss : {:4f} Content Loss: {:4f}'.format( i, style_score.data.item(), content_score.item())) plt.figure(figsize=[10,10]) imshow(input_image.data) plt.show() loss = style_score + content_score optimizer.step(lambda: loss) optimizer.zero_grad() # a last correction... input_image.data.clamp_(0, 1) ``` ### Final image ``` plt.figure(figsize=[10,10]) imshow(input_image.data) plt.show() ```
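To keep the stylized output, the optimized tensor can be converted back to a regular image array and written to disk. This is a small illustrative addition (the output path is arbitrary); it mirrors the tensor-to-array conversion already done inside `imshow()` above.

```
result = input_image.data.clone().cpu().view(3, imsize, imsize)
result = result.numpy().transpose([1, 2, 0])
result = np.clip(result, 0, 1)
plt.imsave('images/stylized.jpg', result)   # illustrative output path
```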
## Implementing a 1D convnet In Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It **takes as input 3D tensors with shape (samples, time, features) and also returns similarly-shaped 3D tensors**. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor. Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task altready seen previously. ``` from tensorflow.keras.datasets import imdb from tensorflow.keras.preprocessing import sequence max_features = 10000 # number of words to consider as features max_len = 500 # cut texts after this number of words (among top max_features most common words) print('Loading data...') (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features) print(len(x_train), 'train sequences') print(len(x_test), 'test sequences') print('Pad sequences (samples x time)') x_train = sequence.pad_sequences(x_train, maxlen=max_len) x_test = sequence.pad_sequences(x_test, maxlen=max_len) print('x_train shape:', x_train.shape) print('x_test shape:', x_test.shape) ``` **1D convnets are structured in the same way as their 2D counter-parts**: they consist of a stack of `Conv1D` and `MaxPooling1D layers`, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs, allowing to add one or more Dense layers to the model, for classification or regression. One difference, though, is the fact that **we can afford to use larger convolution windows with 1D convnets**. Indeed, with a 2D convolution layer, a 3x3 convolution window contains `3*3 = 9` feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9. This is our example 1D convnet for the IMDB dataset: ``` from tensorflow.keras.models import Sequential from tensorflow.keras import layers from tensorflow.keras.optimizers import RMSprop model = Sequential() model.add(layers.Embedding(max_features, 128, input_length=max_len)) model.add(layers.Conv1D(32, 7, activation='relu')) model.add(layers.MaxPooling1D(5)) model.add(layers.Conv1D(32, 7, activation='relu')) model.add(layers.GlobalMaxPooling1D()) model.add(layers.Dense(1)) model.summary() model.compile( optimizer=RMSprop(lr=1e-4), loss='binary_crossentropy', metrics=['acc'] ) history = model.fit( x_train, y_train, epochs=10, batch_size=128, validation_split=0.2 ) ``` Here are our training and validation results: validation accuracy is slightly lower than that of the LSTM example we used two sections ago, but runtime is faster, both on CPU and GPU (albeit the exact speedup will vary greatly depending on your exact configuration). At that point, we could re-train this model for the right number of epochs (8), and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task. 
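A quick way to do that final test-set check (an illustrative sketch, not part of the original notebook) is to score the padded test split that was loaded above; the training/validation curves themselves are plotted in the next cell.

```
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy: {:.3f}'.format(test_acc))
```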
``` import matplotlib.pyplot as plt acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.legend() plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` ## Combining CNNs and RNNs to process long sequences Because 1D convnets process input patches independently, **they are not sensitive to the order of the timesteps** (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to be able to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that's still a fairly weak way to induce order-sensitivity. One way to evidence this weakness is to try 1D convnets on the temperature forecasting problem from the previous notebook, where **order-sensitivity was key to produce good predictions**: ``` import numpy as np import os # Import data data_dir = './datasets/jena' fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv') f = open(fname) data = f.read() f.close() lines = data.split('\n') header = lines[0].split(',') lines = lines[1:] print(header) print() print(len(lines)) # Preprocessing float_data = np.zeros((len(lines), len(header) - 1)) for i, line in enumerate(lines): values = [float(x) for x in line.split(',')[1:]] float_data[i, :] = values mean = float_data[:200000].mean(axis=0) float_data -= mean std = float_data[:200000].std(axis=0) float_data /= std # Create datasets def generator(data, lookback, delay, min_index, max_index, shuffle=False, batch_size=128, step=6): if max_index is None: max_index = len(data) - delay - 1 i = min_index + lookback while 1: if shuffle: rows = np.random.randint(min_index + lookback, max_index, size=batch_size) else: if i + batch_size >= max_index: i = min_index + lookback rows = np.arange(i, min(i + batch_size, max_index)) i += len(rows) samples = np.zeros((len(rows), lookback // step, data.shape[-1])) targets = np.zeros((len(rows),)) for j, row in enumerate(rows): indices = range(rows[j] - lookback, rows[j], step) samples[j] = data[indices] targets[j] = data[rows[j] + delay][1] yield samples, targets lookback = 1440 step = 6 delay = 144 batch_size = 128 train_gen = generator( float_data, lookback=lookback, delay=delay, min_index=0, max_index=200000, shuffle=True, step=step, batch_size=batch_size ) val_gen = generator( float_data, lookback=lookback, delay=delay, min_index=200001, max_index=300000, step=step, batch_size=batch_size ) test_gen = generator( float_data, lookback=lookback, delay=delay, min_index=300001, max_index=None, step=step, batch_size=batch_size ) # This is how many steps to draw from `val_gen` in order to see the whole validation set: val_steps = (300000 - 200001 - lookback) // batch_size # This is how many steps to draw from `test_gen` in order to see the whole test set: test_steps = (len(float_data) - 300001 - lookback) // batch_size from tensorflow.keras.models import Sequential from tensorflow.keras import layers from tensorflow.keras.optimizers import RMSprop model = Sequential() model.add(layers.Conv1D(32, 5, activation='relu', input_shape=(None, float_data.shape[-1]))) 
model.add(layers.MaxPooling1D(3)) model.add(layers.Conv1D(32, 5, activation='relu')) model.add(layers.MaxPooling1D(3)) model.add(layers.Conv1D(32, 5, activation='relu')) model.add(layers.GlobalMaxPooling1D()) model.add(layers.Dense(1)) model.compile(optimizer=RMSprop(), loss='mae') history = model.fit( train_gen, steps_per_epoch=500, epochs=20, validation_data=val_gen, validation_steps=val_steps ) import matplotlib.pyplot as plt loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` The validation MAE stays in the low 0.40s: **we cannot even beat our common-sense baseline using the small convnet**. Again, this is because **our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees** (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. **This limitation of convnets was not an issue on IMDB**, because **patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences**. One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before a RNN. **This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs**, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the step parameter of the generator). Here, we will chose (somewhat arbitrarily) to use a `step` twice smaller, resulting in twice longer timeseries, where the weather data is being sampled at a rate of one point per 30 minutes. ``` # This was previously set to 6 (one point per hour). Now 3 (one point per 30 min). 
step = 3 lookback = 720 # Unchanged delay = 144 # Unchanged train_gen = generator( float_data, lookback=lookback, delay=delay, min_index=0, max_index=200000, shuffle=True, step=step ) val_gen = generator( float_data, lookback=lookback, delay=delay, min_index=200001, max_index=300000, step=step ) test_gen = generator( float_data, lookback=lookback, delay=delay, min_index=300001, max_index=None, step=step ) val_steps = (300000 - 200001 - lookback) // 128 test_steps = (len(float_data) - 300001 - lookback) // 128 ``` This is our new model, **starting with two `Conv1D` layers and following-up with a `GRU` layer**: ``` model = Sequential() model.add(layers.Conv1D(32, 5, activation='relu',input_shape=(None, float_data.shape[-1]))) model.add(layers.MaxPooling1D(3)) model.add(layers.Conv1D(32, 5, activation='relu')) model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5)) model.add(layers.Dense(1)) model.summary() model.compile(optimizer=RMSprop(), loss='mae') history = model.fit( train_gen, steps_per_epoch=500, epochs=20, validation_data=val_gen, validation_steps=val_steps ) loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` Judging from the validation loss, **this setup is not quite as good as the regularized GRU alone, but it's significantly faster**. It is looking at twice more data, which in this case doesn't appear to be hugely helpful, but may be important for other datasets.
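As a final check (an optional sketch, not in the original notebook), the combined CNN + GRU model can be evaluated on the test generator defined above, assuming a TF2 version in which `evaluate` accepts Python generators the same way `fit` is used here. The normalized MAE can then be converted back to degrees Celsius using the temperature column's standard deviation.

```
test_mae = model.evaluate(test_gen, steps=test_steps)
celsius_mae = test_mae * std[1]   # column 1 is the (standardized) temperature
print('Test MAE: {:.3f} (normalized), about {:.2f} degrees C'.format(test_mae, celsius_mae))
```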
# Figures 2 and 5 for gather paper ``` %matplotlib inline import pylab import pandas as pd ``` ## Preparation: load genome-grist summary CSVs ``` class SampleDFs: def __init__(self, name, all_df, left_df, gather_df, names_df): self.name = name self.all_df = all_df self.left_df = left_df self.gather_df = gather_df self.names_df = names_df def load_sample_dfs(name, sample_id, subsample_to=None, debug=False): print(f'loading sample {sample_id}') # load mapping CSVs all_df = pd.read_csv(f'inputs/mapping/{sample_id}.summary.csv') left_df = pd.read_csv(f'inputs/leftover/{sample_id}.summary.csv') # load gather CSV gather_df = pd.read_csv(f'inputs/gather/{sample_id}.gather.csv') # names! names_df = pd.read_csv(f'inputs/gather/{sample_id}.genomes.info.csv') # connect gather_df to all_df and left_df using 'genome_id' def fix_name(x): return "_".join(x.split('_')[:2]).split('.')[0] gather_df['genome_id'] = gather_df['name'].apply(fix_name) names_df['genome_id'] = names_df['ident'].apply(fix_name) # this ensures that only rows that share genome_id are in all the dataframes in_gather = set(gather_df.genome_id) if debug: print(f'{len(in_gather)} in gather results') in_left = set(left_df.genome_id) if debug: print(f'{len(in_left)} in leftover results') in_both = in_left.intersection(in_gather) if debug: print(f'{len(in_both)} in both') print('diff gather example:', list(in_gather - in_both)[:5]) print('diff left example:', list(in_left - in_both)[:5]) assert not in_gather - in_both assert not in_left - in_both all_df = all_df[all_df.genome_id.isin(in_both)] left_df = left_df[left_df.genome_id.isin(in_both)] gather_df = gather_df[gather_df.genome_id.isin(in_both)] names_df = names_df[names_df.genome_id.isin(in_both)] # reassign index now that we've maybe dropped rows all_df.index = range(len(all_df)) left_df.index = range(len(left_df)) gather_df.index = range(len(gather_df)) names_df.index = range(len(names_df)) assert len(all_df) == len(gather_df) assert len(left_df) == len(gather_df) assert len(names_df) == len(gather_df) assert len(names_df) == len(in_both) #in_left # re-sort left_df and all_df to match gather_df order, using matching genome_id column all_df = all_df.set_index("genome_id") all_df = all_df.reindex(index=gather_df["genome_id"]) all_df = all_df.reset_index() left_df = left_df.set_index("genome_id") left_df = left_df.reindex(index=gather_df["genome_id"]) left_df = left_df.reset_index() #left_df["mapped_bp"] = (1 - left_df["percent missed"]/100) * left_df["genome bp"] #left_df["unique_mapped_coverage"] = left_df.coverage / (1 - left_df["percent missed"] / 100.0) names_df = names_df.set_index("genome_id") names_df = names_df.reindex(index=gather_df["genome_id"]) names_df = names_df.reset_index() # subsample? take top N... if subsample_to: left_df = left_df[:subsample_to] all_df = all_df[:subsample_to] gather_df = gather_df[:subsample_to] names_df = names_df[:subsample_to] sample_df = SampleDFs(name, all_df, left_df, gather_df, names_df) return sample_df SUBSAMPLE_TO = 36 podar_mock = load_sample_dfs('(A) podar mock', 'SRR606249', subsample_to=SUBSAMPLE_TO,) oil_well = load_sample_dfs('(D) oil well', 'SRR1976948', subsample_to=SUBSAMPLE_TO) gut = load_sample_dfs('(C) gut', 'p8808mo11', subsample_to=SUBSAMPLE_TO) zymo_mock = load_sample_dfs('(B) zymo mock', 'SRR12324253', subsample_to=SUBSAMPLE_TO) ``` ## Figure 2: K-mer decomposition of a metagenome into constituent genomes. 
``` fig, (ax1, ax2) = pylab.subplots(1, 2, figsize=(10, 8), constrained_layout=True) #pylab.plot(left_df.covered_bp / 1e6, left_df.iloc[::-1].index, 'b.', label='mapped bp to this genome') ax1.plot(podar_mock.gather_df.intersect_bp / 1e6, podar_mock.gather_df.iloc[::-1].index, 'g<', label='total k-mers matched') ax1.plot(podar_mock.gather_df.unique_intersect_bp / 1e6, podar_mock.gather_df.iloc[::-1].index, 'ro', label='remaining k-mers matched') positions = list(podar_mock.gather_df.index) labels = list(reversed(podar_mock.names_df.display_name)) ax1.set_yticks(positions) ax1.set_yticklabels(labels, fontsize='small') ax1.set_xlabel('millions of k-mers') ax1.axis(ymin=-1, ymax=SUBSAMPLE_TO) ax1.legend(loc='lower right') ax1.grid(True, axis='both') ax2.plot(podar_mock.gather_df.f_match_orig * 100, podar_mock.gather_df.iloc[::-1].index, 'g<', label='total k-mer cover') ax2.plot(podar_mock.gather_df.f_match * 100, podar_mock.gather_df.iloc[::-1].index, 'ro', label='remaining k-mer cover') ax2.set_yticks(positions) ax2.set_yticklabels([]) ax2.set_xlabel('% of genome covered') ax2.legend(loc='lower left') ax2.axis(xmin=40, xmax=102) ax2.axis(ymin=-1, ymax=SUBSAMPLE_TO) ax2.grid(True) #fig.tight_layout() None fig.savefig('fig2.svg') ``` ## Figure 5: Hash-based k-mer decomposition of a metagenome into constituent genomes compares well to bases covered by read mapping. ``` import matplotlib.pyplot as plt fig, axes = plt.subplots(figsize=(20, 12), nrows=2, ncols=2) samples = (podar_mock, zymo_mock, gut, oil_well) for n, (ax, sample) in enumerate(zip(axes.flat, samples)): ax.plot(sample.left_df.index, sample.left_df.covered_bp / 1e6, 'b*', label='genome bases covered by mapped reads') ax.plot(sample.gather_df.index, sample.gather_df.unique_intersect_bp / 1e6, 'ro', label='remaining genome hashes in metagenome') ax.plot(sample.gather_df.index, (sample.gather_df.unique_intersect_bp - sample.left_df.covered_bp) / 1e6, '-', label='difference b/t covered bp and hashes') ax.plot(sample.gather_df.index, [0]*len(sample.gather_df), '--') ax.axis(xmin=-0.5, xmax=len(sample.gather_df.index) - 0.5) positions = list(sample.gather_df.index) labels = [ i + 1 for i in positions ] ax.set_xticks(positions) ax.set_xticklabels(labels) #print(sample.name, positions) ax.set_xlabel('genome rank (ordered by gather results)') ax.set_ylabel('number per genome (million)') if n == 0: ax.legend(loc='upper right') ax.set_title(sample.name) #ax.label_outer() fig.tight_layout() pylab.savefig('fig5.svg') ```
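For inspection beyond the plots, the per-genome numbers behind Figure 5 can be collected into one tidy table. This is an optional bookkeeping sketch added here; the output filename and column names are arbitrary choices.

```
rows = []
for sample in samples:
    for idx in sample.gather_df.index:
        rows.append({
            'sample': sample.name,
            'rank': idx + 1,
            'genome_id': sample.gather_df.loc[idx, 'genome_id'],
            'covered_bp': sample.left_df.loc[idx, 'covered_bp'],
            'unique_intersect_bp': sample.gather_df.loc[idx, 'unique_intersect_bp'],
        })

summary_df = pd.DataFrame(rows)
summary_df['hashes_minus_covered'] = summary_df.unique_intersect_bp - summary_df.covered_bp
summary_df.to_csv('fig5_summary.csv', index=False)
summary_df.head()
```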
<a href="https://colab.research.google.com/github/samarth0174/Face-Recognition-pca-svm/blob/master/Facial_Recognition(Exercise).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # **In this project we implement the Identification system using Machine Learning concepts such as Principal Component Analysis (PCA) and Support Vector Machine (SVM).** ## Steps Involved: - Importing Libraries - Loading the Dataset - Data Exploration - Splitting the dataset - Compute PCA(eigen faces) - Train a SVM classification model - * Using GridSearch to find best Parameters - Model Evaluation - Conclusion ## **Importing Libraries** * We need to first import the scikit-learn library for using the PCA function API that is provided into this library. * The scikit-learn library also provided an API to fetch **LFW_peoples dataset**. * We also required matplotlib to plot faces. ``` #downnlading datasets sklearn from sklearn.datasets import fetch_lfw_people #todo import other libraries such sklearn for pca,svc,classification report,plotting ``` ## **Loading the dataset** ``` # Download the data, if not already on disk and load it as numpy arrays lfw_people = fetch_lfw_people('data', min_faces_per_person=70, resize=0.4) # introspect the images arrays to find the shapes (for plotting) #todo:check shape # for machine learning we use the data directly (as relative pixel # position info is ignored by this model) '''todo:assign X for model''' ## the label to predict is the id of the person '''Todo:assign y for model ie. the no. of classes'''' ``` ## **Data Exploration** ``` # plot and explore images and their respective classes # hint: use matplotlib ``` ## **Splitting the dataset** ``` #use sklearn test-train split ``` ## **Compute PCA** We can now compute a PCA (eigenfaces) on the face dataset (treated as unlabeled dataset): unsupervised feature extraction / dimensionality reduction. ``` #Apply the PCA algorithm on the training dataset which computes EigenFaces. #Here, take n_components = 150 or 300 means we extract the top 150 (or 300) Eigenfaces from the algorithm. #Also print the time taken to apply this algorithm. # TODO: Create an instance of PCA, initializing with n_components=n_components and whiten=True #TODO: pass the training dataset (X_train) to pca's 'fit()' method ``` ## **Train a SVM classification model** Fit a SVM classifier to the training set.Use GridSearchCV to find a good set of parameters for the classifier. ``` #todo : SVM with Gridsearch algo ``` ## **Evaluation of the model quality on the test set** ``` #TODO: Test the model and Generate a classification report ``` # **plot the eigen faces for your visualisation** ``` #TODO:plot most significant eigen faces ``` ## **Conclusion** ``` ```
``` !pip install torch # framework !pip install --upgrade reedsolo !pip install --upgrade librosa !pip install torchvision #!pip install torchaudio #!pip install tensorboard #!pip install soundfile !pip install librosa==0.7.1 from google.colab import drive drive.mount('/content/drive',force_remount=True) %cd /content/drive/My\ Drive/ import numpy as np import librosa import librosa.display import datetime import matplotlib.pyplot as plt from torch.nn.functional import binary_cross_entropy_with_logits, mse_loss from torchvision import datasets, transforms from IPython.display import clear_output import torchvision from torchvision.datasets.vision import VisionDataset from torch.optim import Adam from tqdm import notebook import torch import os.path import os import gc import sys from PIL import ImageFile, Image #from torchaudio import transforms as audiotransforms #import torchaudio #import soundfile #from IPython.display import Audio import random ImageFile.LOAD_TRUNCATED_IMAGES = True epochs = 64 data_depth = 4 hidden_size = 32 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') LOAD_MODEL=True #PATH='/content/drive/My Drive/myresults/model/DenseEncoder_DenseDecoder_0.041_2020-07-25_15:31:19.dat' #PATH='/content/drive/My Drive/myresults/model/DenseEncoder_DenseDecoder_-0.003_2020-07-24_20:01:33.dat' #PATH='/content/drive/My Drive/myresults/model/DenseEncoder_DenseDecoder_-0.022_2020-07-24_05:11:17.dat' #PATH='/content/drive/My Drive/myresults/model/DenseEncoder_DenseDecoder_-0.041_2020-07-23_23:01:25.dat' PATH='/content/drive/My Drive/myresults/model/DenseEncoder_DenseDecoder_0.042_2020-07-23_02:08:27.dat' ##Depth4Epoch64 #PATH='/content/drive/My Drive/myresults/model/DenseEncoder_DenseDecoder_0.005_2020-07-22_20:05:49.dat' #PATH='/content/drive/My Drive/myresults/model/DenseEncoder_DenseDecoder_-0.019_2020-07-22_15:02:29.dat' #PATH='/content/drive/My Drive/myresults/model/DenseEncoder_DenseDecoder_-0.020_2020-07-22_13:43:02.dat' #PATH='/content/drive/My Drive/myresults/model/DenseEncoder_DenseDecoder_+0.048_2020-07-22_12:21:23.dat' #PATH='/content/drive/My Drive/myresults/model/DenseEncoder_DenseDecoder_+0.017_2020-07-22_08:18:00.dat' import torch import torch.nn.functional as F from torch.autograd import Variable import numpy as np from math import exp # -*- coding: utf-8 -*- import zlib from math import exp import torch from reedsolo import RSCodec from torch.nn.functional import conv2d rs = RSCodec(250) def text_to_bits(text): """Convert text to a list of ints in {0, 1}""" return bytearray_to_bits(text_to_bytearray(text)) def bits_to_text(bits): """Convert a list of ints in {0, 1} to text""" return bytearray_to_text(bits_to_bytearray(bits)) def bytearray_to_bits(x): """Convert bytearray to a list of bits""" result = [] for i in x: bits = bin(i)[2:] bits = '00000000'[len(bits):] + bits result.extend([int(b) for b in bits]) return result def bits_to_bytearray(bits): """Convert a list of bits to a bytearray""" ints = [] for b in range(len(bits) // 8): byte = bits[b * 8:(b + 1) * 8] ints.append(int(''.join([str(bit) for bit in byte]), 2)) return bytearray(ints) def text_to_bytearray(text): """Compress and add error correction""" assert isinstance(text, str), "expected a string" x = zlib.compress(text.encode("utf-8")) x = rs.encode(bytearray(x)) return x def bytearray_to_text(x): """Apply error correction and decompress""" try: #print('1: ',x) text = rs.decode(x)[0] #print('2: ',x) text = zlib.decompress(text) #print('3: ',x) return text.decode("utf-8") except 
BaseException as e: print(e) return False def gaussian(window_size, sigma): gauss = torch.Tensor([exp(-(x - window_size//2)**2/float(2*sigma**2)) for x in range(window_size)]) return gauss/gauss.sum() def create_window(window_size, channel): _1D_window = gaussian(window_size, 1.5).unsqueeze(1) _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0) window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous()) return window def _ssim(img1, img2, window, window_size, channel, size_average = True): mu1 = F.conv2d(img1, window, padding = window_size//2, groups = channel) mu2 = F.conv2d(img2, window, padding = window_size//2, groups = channel) mu1_sq = mu1.pow(2) mu2_sq = mu2.pow(2) mu1_mu2 = mu1*mu2 sigma1_sq = F.conv2d(img1*img1, window, padding = window_size//2, groups = channel) - mu1_sq sigma2_sq = F.conv2d(img2*img2, window, padding = window_size//2, groups = channel) - mu2_sq sigma12 = F.conv2d(img1*img2, window, padding = window_size//2, groups = channel) - mu1_mu2 C1 = 0.01**2 C2 = 0.03**2 ssim_map = ((2*mu1_mu2 + C1)*(2*sigma12 + C2))/((mu1_sq + mu2_sq + C1)*(sigma1_sq + sigma2_sq + C2)) if size_average: return ssim_map.mean() else: return ssim_map.mean(1).mean(1).mean(1) class SSIM(torch.nn.Module): def __init__(self, window_size = 11, size_average = True): super(SSIM, self).__init__() self.window_size = window_size self.size_average = size_average self.channel = 1 self.window = create_window(window_size, self.channel) def forward(self, img1, img2): (_, channel, _, _) = img1.size() if channel == self.channel and self.window.data.type() == img1.data.type(): window = self.window else: window = create_window(self.window_size, channel) if img1.is_cuda: window = window.cuda(img1.get_device()) window = window.type_as(img1) self.window = window self.channel = channel return _ssim(img1, img2, window, self.window_size, channel, self.size_average) def ssim(img1, img2, window_size = 11, size_average = True): (_, channel, _, _) = img1.size() window = create_window(window_size, channel) if img1.is_cuda: window = window.cuda(img1.get_device()) window = window.type_as(img1) return _ssim(img1, img2, window, window_size, channel, size_average) import torch from torch import nn import numpy class BasicEncoder(nn.Module): """ The BasicEncoder module takes an cover image and a data tensor and combines them into a steganographic image. 
""" def _name(self): return "BasicEncoder" def _conv2d(self, in_channels, out_channels): return nn.Conv2d( in_channels=in_channels, out_channels=out_channels, kernel_size=3, padding=1 ) def _build_models(self): self.conv1 = nn.Sequential( self._conv2d(3, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size), ) self.conv2 = nn.Sequential( self._conv2d(self.hidden_size + self.data_depth, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size), ) self.conv3 = nn.Sequential( self._conv2d(self.hidden_size, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size), ) self.conv4 = nn.Sequential( self._conv2d(self.hidden_size, 3), ) return self.conv1, self.conv2, self.conv3, self.conv4 def __init__(self, data_depth, hidden_size): super().__init__() self.data_depth = data_depth self.hidden_size = hidden_size self._models = self._build_models() self.name = self._name() def forward(self, image, data): x = self._models[0](image) x_1 = self._models[1](torch.cat([x] + [data], dim=1)) x_2 = self._models[2](x_1) x_3 = self._models[3](x_2) return x_3 class ResidualEncoder(BasicEncoder): def _name(self): return "ResidualEncoder" def forward(self, image, data): return image + super().forward(self, image, data) class DenseEncoder(BasicEncoder): def _name(self): return "DenseEncoder" def _build_models(self): self.conv1 = super()._build_models()[0] self.conv2 = super()._build_models()[1] self.conv3 = nn.Sequential( self._conv2d(self.hidden_size * 2 + self.data_depth, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size), ) self.conv4 = nn.Sequential( self._conv2d(self.hidden_size * 3 + self.data_depth, 3) ) return self.conv1, self.conv2, self.conv3, self.conv4 def forward(self, image, data): x = self._models[0](image) x_list = [x] x_1 = self._models[1](torch.cat(x_list+[data], dim=1)) x_list.append(x_1) x_2 = self._models[2](torch.cat(x_list+[data], dim=1)) x_list.append(x_2) x_3 = self._models[3](torch.cat(x_list+[data], dim=1)) x_list.append(x_3) return image + x_3 import torch from torch import nn #from torch.nn import Sigmoid #from torch.distributions import Bernoulli class BasicDecoder(nn.Module): """ The BasicDecoder module takes an steganographic image and attempts to decode the embedded data tensor. 
Input: (N, 3, H, W) Output: (N, D, H, W) """ def _name(self): return "BasicDecoder" def _conv2d(self, in_channels, out_channels): return nn.Conv2d( in_channels=in_channels, out_channels=out_channels, kernel_size=3, padding=1 ) def _build_models(self): self.conv1 = nn.Sequential( self._conv2d(3, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size), ) self.conv2 = nn.Sequential( self._conv2d(self.hidden_size, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size), ) self.conv3 = nn.Sequential( self._conv2d(self.hidden_size, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size), ) self.conv4 = nn.Sequential( self._conv2d(self.hidden_size, self.data_depth), #nn.Sigmoid(), ) return self.conv1, self.conv2, self.conv3, self.conv4 def forward(self, image): x = self._models[0](image) x_1 = self._models[1](x) x_2 = self._models[2](x_1) x_3 = self._models[3](x_2) #x_4 = Bernoulli(x_3).sample() return x_3 def __init__(self, data_depth, hidden_size): super().__init__() self.data_depth = data_depth self.hidden_size = hidden_size self._models = self._build_models() self.name = self._name() class DenseDecoder(BasicDecoder): def _name(self): return "DenseDecoder" def _build_models(self): self.conv1 = super()._build_models()[0] self.conv2 = super()._build_models()[1] self.conv3 = nn.Sequential( self._conv2d(self.hidden_size * 2, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size) ) self.conv4 = nn.Sequential( self._conv2d(self.hidden_size * 3, self.data_depth), #nn.Sigmoid(), ) return self.conv1, self.conv2, self.conv3, self.conv4 def forward(self, image): x = self._models[0](image) x_list = [x] x_1 = self._models[1](torch.cat(x_list, dim=1)) x_list.append(x_1) x_2 = self._models[2](torch.cat(x_list, dim=1)) x_list.append(x_2) x_3 = self._models[3](torch.cat(x_list, dim=1)) x_list.append(x_3) return x_3 import torch from torch import nn class BasicCritic(nn.Module): """ The BasicCritic module takes an image and predicts whether it is a cover image or a steganographic image (N, 1). 
Input: (N, 3, H, W) Output: (N, 1) """ def _name(self): return "BasicCritic" def _conv2d(self, in_channels, out_channels): return nn.Conv2d( in_channels=in_channels, out_channels=out_channels, kernel_size=3 ) def _build_models(self): self.conv1 = nn.Sequential( self._conv2d(3, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size), ) self.conv2 = nn.Sequential( self._conv2d(self.hidden_size, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size), ) self.conv3 = nn.Sequential( self._conv2d(self.hidden_size, self.hidden_size), nn.LeakyReLU(inplace=True), nn.BatchNorm2d(self.hidden_size), ) self.conv4 = nn.Sequential( self._conv2d(self.hidden_size, 1) ) return self.conv1,self.conv2,self.conv3,self.conv4 def __init__(self, hidden_size): super().__init__() self.hidden_size = hidden_size self._models = self._build_models() self.name = self._name() def forward(self, image): x = self._models[0](image) x_1 = self._models[1](x) x_2 = self._models[2](x_1) x_3 = self._models[3](x_2) return torch.mean(x_3.view(x_3.size(0), -1), dim=1) def plot(name, train_epoch, values, path, save): clear_output(wait=True) plt.close('all') fig = plt.figure() fig = plt.ion() fig = plt.subplot(1, 1, 1) fig = plt.title('epoch: %s -> %s: %s' % (train_epoch, name, values[-1])) fig = plt.ylabel(name) fig = plt.xlabel('validation_set') fig = plt.plot(values) fig = plt.grid() get_fig = plt.gcf() fig = plt.draw() # draw the plot fig = plt.pause(1) # show it for 1 second if save: now = datetime.datetime.now() get_fig.savefig('%s/%s_%.3f_%d_%s.png' % (path, name, train_epoch, values[-1], now.strftime("%Y-%m-%d_%H:%M:%S"))) def test(encoder,decoder,data_depth,train_epoch,cover,payload): %matplotlib inline generated = encoder.forward(cover, payload) decoded = decoder.forward(generated) decoder_loss = binary_cross_entropy_with_logits(decoded, payload) decoder_acc = (decoded >= 0.0).eq( payload >= 0.5).sum().float() / payload.numel() # .numel() calculate the number of element in a tensor print("Decoder loss: %.3f"% decoder_loss.item()) print("Decoder acc: %.3f"% decoder_acc.item()) f, ax = plt.subplots(1, 2) plt.title("%s_%s"%(encoder.name,decoder.name)) cover=np.transpose(np.squeeze(cover.cpu()), (1, 2, 0)) ax[0].imshow(cover) ax[0].axis('off') print(generated.shape) generated_=np.transpose(np.squeeze((generated.cpu()).detach().numpy()), (1, 2, 0)) ax[1].imshow(generated_) ax[1].axis('off') #now = datetime.datetime.now() #print("payload :") #print(payload) #print("decoded :") #decoded[decoded<0]=0 #decoded[decoded>0]=1 #print(decoded) # plt.savefig('results/samples/%s_%s_%d_%.3f_%d_%s.png' % # (encoder.name,decoder.name, data_depth,decoder_acc, train_epoch, now.strftime("%Y-%m-%d_%H:%M:%S"))) return generated def save_model(encoder,decoder,critic,en_de_optimizer,cr_optimizer,metrics,ep): now = datetime.datetime.now() cover_score = metrics['val.cover_score'][-1] name = "%s_%s_%+.3f_%s.dat" % (encoder.name,decoder.name,cover_score, now.strftime("%Y-%m-%d_%H:%M:%S")) fname = os.path.join('.', 'myresults/model', name) states = { 'state_dict_critic': critic.state_dict(), 'state_dict_encoder': encoder.state_dict(), 'state_dict_decoder': decoder.state_dict(), 'en_de_optimizer': en_de_optimizer.state_dict(), 'cr_optimizer': cr_optimizer.state_dict(), 'metrics': metrics, 'train_epoch': ep, 'date': now.strftime("%Y-%m-%d_%H:%M:%S"), } torch.save(states, fname) path='myresults/plots/train_%s_%s_%s'% (encoder.name,decoder.name,now.strftime("%Y-%m-%d_%H:%M:%S")) try: os.mkdir(os.path.join('.', 
path)) except Exception as error: print(error) plot('encoder_mse', ep, metrics['val.encoder_mse'], path, True) plot('decoder_loss', ep, metrics['val.decoder_loss'], path, True) plot('decoder_acc', ep, metrics['val.decoder_acc'], path, True) plot('cover_score', ep, metrics['val.cover_score'], path, True) plot('generated_score', ep, metrics['val.generated_score'], path, True) plot('ssim', ep, metrics['val.ssim'], path, True) plot('psnr', ep, metrics['val.psnr'], path, True) plot('bpp', ep, metrics['val.bpp'], path, True) def fit_gan(encoder,decoder,critic,en_de_optimizer,cr_optimizer,metrics,train_loader,valid_loader): for ep in range(epochs): print("Epoch %d" %(ep+1)) for cover, _ in notebook.tqdm(train_loader): gc.collect() cover = cover.to(device) N, _, H, W = cover.size() # sampled from the discrete uniform distribution over 0 to 2 payload = torch.zeros((N, data_depth, H, W), device=device).random_(0, 2) generated = encoder.forward(cover, payload) cover_score = torch.mean(critic.forward(cover)) generated_score = torch.mean(critic.forward(generated)) cr_optimizer.zero_grad() (cover_score - generated_score).backward(retain_graph=False) cr_optimizer.step() for p in critic.parameters(): p.data.clamp_(-0.1, 0.1) metrics['train.cover_score'].append(cover_score.item()) metrics['train.generated_score'].append(generated_score.item()) for cover, _ in notebook.tqdm(train_loader): gc.collect() cover = cover.to(device) N, _, H, W = cover.size() # sampled from the discrete uniform distribution over 0 to 2 payload = torch.zeros((N, data_depth, H, W), device=device).random_(0, 2) generated = encoder.forward(cover, payload) decoded = decoder.forward(generated) encoder_mse = mse_loss(generated, cover) decoder_loss = binary_cross_entropy_with_logits(decoded, payload) decoder_acc = (decoded >= 0.0).eq( payload >= 0.5).sum().float() / payload.numel() generated_score = torch.mean(critic.forward(generated)) en_de_optimizer.zero_grad() (100 * encoder_mse + decoder_loss + generated_score).backward() # Why 100? 
en_de_optimizer.step() metrics['train.encoder_mse'].append(encoder_mse.item()) metrics['train.decoder_loss'].append(decoder_loss.item()) metrics['train.decoder_acc'].append(decoder_acc.item()) for cover, _ in notebook.tqdm(valid_loader): gc.collect() cover = cover.to(device) N, _, H, W = cover.size() # sampled from the discrete uniform distribution over 0 to 2 payload = torch.zeros((N, data_depth, H, W), device=device).random_(0, 2) generated = encoder.forward(cover, payload) decoded = decoder.forward(generated) encoder_mse = mse_loss(generated, cover) decoder_loss = binary_cross_entropy_with_logits(decoded, payload) decoder_acc = (decoded >= 0.0).eq( payload >= 0.5).sum().float() / payload.numel() generated_score = torch.mean(critic.forward(generated)) cover_score = torch.mean(critic.forward(cover)) metrics['val.encoder_mse'].append(encoder_mse.item()) metrics['val.decoder_loss'].append(decoder_loss.item()) metrics['val.decoder_acc'].append(decoder_acc.item()) metrics['val.cover_score'].append(cover_score.item()) metrics['val.generated_score'].append(generated_score.item()) metrics['val.ssim'].append( ssim(cover, generated).item()) metrics['val.psnr'].append( 10 * torch.log10(4 / encoder_mse).item()) metrics['val.bpp'].append( data_depth * (2 * decoder_acc.item() - 1)) print('encoder_mse: %.3f - decoder_loss: %.3f - decoder_acc: %.3f - cover_score: %.3f - generated_score: %.3f - ssim: %.3f - psnr: %.3f - bpp: %.3f' %(encoder_mse.item(),decoder_loss.item(),decoder_acc.item(),cover_score.item(),generated_score.item(), ssim(cover, generated).item(),10 * torch.log10(4 / encoder_mse).item(),data_depth * (2 * decoder_acc.item() - 1))) save_model(encoder,decoder,critic,en_de_optimizer,cr_optimizer,metrics,ep) if __name__ == '__main__': for func in [ lambda: os.mkdir(os.path.join('.', 'results')), lambda: os.mkdir(os.path.join('.', 'results/model')), lambda: os.mkdir(os.path.join('.', 'results/plots'))]: # create directories try: func() except Exception as error: print(error) continue METRIC_FIELDS = [ 'val.encoder_mse', 'val.decoder_loss', 'val.decoder_acc', 'val.cover_score', 'val.generated_score', 'val.ssim', 'val.psnr', 'val.bpp', 'train.encoder_mse', 'train.decoder_loss', 'train.decoder_acc', 'train.cover_score', 'train.generated_score', ] print('image') data_dir = 'div2k' mu = [.5, .5, .5] sigma = [.5, .5, .5] transform = transforms.Compose([transforms.RandomHorizontalFlip(), transforms.RandomCrop( 360, pad_if_needed=True), transforms.ToTensor(), transforms.Normalize(mu, sigma)]) train_set = datasets.ImageFolder(os.path.join( data_dir, "train/"), transform=transform) train_loader = torch.utils.data.DataLoader( train_set, batch_size=4, shuffle=True) valid_set = datasets.ImageFolder(os.path.join( data_dir, "val/"), transform=transform) valid_loader = torch.utils.data.DataLoader( valid_set, batch_size=4, shuffle=False) encoder = DenseEncoder(data_depth, hidden_size).to(device) decoder = DenseDecoder(data_depth, hidden_size).to(device) critic = BasicCritic(hidden_size).to(device) cr_optimizer = Adam(critic.parameters(), lr=1e-4) en_de_optimizer = Adam(list(decoder.parameters()) + list(encoder.parameters()), lr=1e-4) metrics = {field: list() for field in METRIC_FIELDS} if LOAD_MODEL: if torch.cuda.is_available(): checkpoint = torch.load(PATH) else: checkpoint = torch.load(PATH, map_location=lambda storage, loc: storage) critic.load_state_dict(checkpoint['state_dict_critic']) encoder.load_state_dict(checkpoint['state_dict_encoder']) decoder.load_state_dict(checkpoint['state_dict_decoder']) 
en_de_optimizer.load_state_dict(checkpoint['en_de_optimizer']) cr_optimizer.load_state_dict(checkpoint['cr_optimizer']) metrics=checkpoint['metrics'] ep=checkpoint['train_epoch'] date=checkpoint['date'] critic.train(mode=False) encoder.train(mode=False) decoder.train(mode=False) print('GAN loaded: ', ep) print(critic) print(encoder) print(decoder) print(en_de_optimizer) print(cr_optimizer) print(date) else: fit_gan(encoder,decoder,critic,en_de_optimizer,cr_optimizer,metrics,train_loader,valid_loader) from collections import Counter def make_payload(width, height, depth, text): """ This takes a piece of text and encodes it into a bit vector. It then fills a matrix of size (width, height) with copies of the bit vector. """ message = text_to_bits(text) + [0] * 32 payload = message while len(payload) < width * height * depth: payload += message payload = payload[:width * height * depth] return torch.FloatTensor(payload).view(1, depth, height, width) def make_message(image): #image = torch.FloatTensor(image).permute(2, 1, 0).unsqueeze(0) image = image.to(device) image = decoder(image).view(-1) > 0 image=torch.tensor(image, dtype=torch.uint8) # split and decode messages candidates = Counter() bits = image.data.cpu().numpy().tolist() for candidate in bits_to_bytearray(bits).split(b'\x00\x00\x00\x00'): candidate = bytearray_to_text(bytearray(candidate)) if candidate: #print(candidate) candidates[candidate] += 1 # choose most common message if len(candidates) == 0: raise ValueError('Failed to find message.') candidate, count = candidates.most_common(1)[0] return candidate ``` ###Check a sample from validation dataset ``` # to see one image cover,*rest = next(iter(valid_set)) _, H, W = cover.size() cover = cover[None].to(device) text = "We are busy in Neural Networks project. Anyhow, how is your day going?" payload = make_payload(W, H, data_depth, text) payload = payload.to(device) #generated = encoder.forward(cover, payload) generated = test(encoder,decoder,data_depth,epochs,cover,payload) text_return = make_message(generated) print(text_return) ``` ###Testing begins (from a loaded model) ####Test1 - Save steganographic images ``` ##Take all images from test folder (one by one) and message requested by user to encode from imageio import imread, imwrite epochs = 64 data_depth = 4 test_folder = "div2k/myval/_" save_dir = os.mkdir(os.path.join("div2k/myval",str(data_depth)+"_"+str(epochs))) for filename in os.listdir(test_folder): print(os.path.join(test_folder,filename)) cover_im = imread(os.path.join(test_folder,filename), pilmode='RGB') / 127.5 - 1.0 cover = torch.FloatTensor(cover_im).permute(2, 1, 0).unsqueeze(0) cover_size = cover.size() # _, _, height, width = cover.size() text = "We are busy in Neural Networks project. The deadline is near. Anyhow, how is your day going?" 
payload = make_payload(cover_size[3], cover_size[2], data_depth, text) cover = cover.to(device) payload = payload.to(device) generated = encoder.forward(cover, payload)[0].clamp(-1.0, 1.0) #print(generated.size()) generated = (generated.permute(2, 1, 0).detach().cpu().numpy() + 1.0) * 127.5 imwrite(os.path.join("div2k/myval/",str(data_depth)+"_"+str(epochs),(str(data_depth)+"_"+str(epochs)+"_"+filename)), generated.astype('uint8')) ``` ####Test2 - Take a steganographic image from a folder and decode ``` ##[Individual]Take an image requested by user to decode from imageio import imread, imwrite steg_folder = "div2k/myval/4_64" filename = "4_64_0855.png" image = imread(os.path.join(steg_folder,filename), pilmode='RGB') / 127.5 - 1.0 plt.imshow(image) image = torch.FloatTensor(image).permute(2, 1, 0).unsqueeze(0) text_return = make_message(image) print(text_return) #f = open(steg_folder+".csv", "a") #f.write("\n" + filename + "\t" + str(text_return)) ``` ####Test3 - Encode to decode in one cell ``` ##Input to outut (both encode decode in one cell) from imageio import imread, imwrite cover_im = imread("div2k/myval/_/0805.png", pilmode='RGB') / 127.5 - 1.0 plt.imshow(cover_im) cover = torch.FloatTensor(cover_im).permute(2, 1, 0).unsqueeze(0) cover_size = cover.size() # _, _, height, width = cover.size() text = "We are busy in Neural Networks project. Anyhow, how is your day going?" payload = make_payload(cover_size[3], cover_size[2], data_depth, text) cover = cover.to(device) payload = payload.to(device) generated = encoder.forward(cover, payload) text_return = make_message(generated) print(text_return) ``` ####Generate Difference Image ``` from skimage.metrics import structural_similarity as ssim from imageio import imread, imwrite diff_epochs = 64 diff_data_depth = 4 cover_folder = "div2k/myval/_" steg_folder = "div2k/myval/"+str(diff_data_depth)+"_"+str(diff_epochs) for filename in os.listdir(cover_folder): print(os.path.join(cover_folder,filename)) cover = imread(os.path.join(cover_folder,filename), as_gray=True) gen = imread(os.path.join(steg_folder,str(diff_data_depth)+"_"+str(diff_epochs)+"_"+filename), as_gray=True) (score, diff) = ssim(cover, gen, full=True) imwrite("div2k/myval/"+str(diff_data_depth)+"_"+str(diff_epochs)+"/"+"%d_%d_diff_%s"%(diff_data_depth,diff_epochs,filename),diff) print("Score: ",score) ```
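####[Optional] PSNR between cover and steganographic images

Since the training loop above already tracks PSNR alongside SSIM, a similar per-image check can be run on the saved steganographic images. The cell below is only an illustrative sketch: it assumes the same `div2k/myval` folder layout used above, and the `psnr` helper defined here is not part of the original pipeline.
```
##[Optional] PSNR between each cover image and its steganographic counterpart
from imageio import imread

def psnr(cover, stego):
    # peak signal-to-noise ratio on 8-bit images (assumes the two images differ somewhere)
    mse = np.mean((cover.astype('float64') - stego.astype('float64')) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

psnr_epochs = 64
psnr_data_depth = 4
cover_folder = "div2k/myval/_"
steg_folder = "div2k/myval/" + str(psnr_data_depth) + "_" + str(psnr_epochs)

for filename in os.listdir(cover_folder):
    cover = imread(os.path.join(cover_folder, filename), pilmode='RGB')
    gen = imread(os.path.join(steg_folder, str(psnr_data_depth) + "_" + str(psnr_epochs) + "_" + filename), pilmode='RGB')
    print(filename, "PSNR: %.2f dB" % psnr(cover, gen))
```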
# **[Adversarial Disturbances for Controller Verification](http://proceedings.mlr.press/v144/ghai21a/ghai21a.pdf)** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/nsc-tutorial/blob/main/controller-verification.ipynb) ## Housekeeping Imports [jax](https://github.com/google/jax), numpy, scipy, plotting utils... ``` #@title import jax import itertools import numpy as onp import jax.numpy as np import matplotlib.pyplot as plt import ipywidgets as widgets from jax.numpy.linalg import inv, pinv from scipy.linalg import solve_discrete_are as dare from jax import jit, grad, hessian from IPython import display from toolz.dicttoolz import valmap, itemmap from itertools import chain def liveplot(costss, xss, wss, cmax=30, cumcmax=15, wmax=2, xmax=20, logcmax=100, logcumcmax=1000): cummean = lambda x: np.cumsum(np.array(x))/np.arange(1, len(x)+1) cumcostss = valmap(cummean, costss) disturbances = valmap(lambda x: list(map(lambda w: w[0], x)), wss) plt.style.use('seaborn') colors = { "Zero Control": "gray", "LQR / H2": "green", "Finite-horizon LQR / H2": "teal", "Optimal LQG for GRW": "aqua", "Robust / Hinf Control": "orange", "GPC": "red" } fig, ax = plt.subplots(3, 2, figsize=(21, 12)) costssline = {} for Cstr, costs in costss.items(): costssline[Cstr], = ax[0, 0].plot([], label=Cstr, color=colors[Cstr]) ax[0, 0].set_xlabel("Time") ax[0, 0].set_ylabel("Instantaneous Cost") ax[0, 0].set_ylim([-1, cmax]) ax[0, 0].set_xlim([0, 100]) ax[0, 0].legend() cumcostssline = {} for Cstr, costs in cumcostss.items(): cumcostssline[Cstr], = ax[0, 1].plot([], label=Cstr, color=colors[Cstr]) ax[0, 1].set_xlabel("Time") ax[0, 1].set_ylabel("Average Cost") ax[0, 1].set_ylim([-1, cumcmax]) ax[0, 1].set_xlim([0, 100]) ax[0, 1].legend() perturblines = {} for Cstr, W in disturbances.items(): perturblines[Cstr], = ax[1, 0].plot([], label=Cstr, color=colors[Cstr]) ax[1, 0].set_xlabel("Time") ax[1, 0].set_ylabel("Generated Disturbances") ax[1, 0].set_ylim([-wmax, wmax]) ax[1, 0].set_xlim([0, 100]) ax[1, 0].legend() pointssline, trailssline = {}, {} for Cstr, C in xss.items(): pointssline[Cstr], = ax[1,1].plot([], [], label=Cstr, color=colors[Cstr], ms=20, marker='s') trailssline[Cstr], = ax[1,1].plot([], [], label=Cstr, color=colors[Cstr], lw=2) ax[1, 1].set_xlabel("Position") ax[1, 1].set_ylabel("") ax[1, 1].set_ylim([-1, 6]) ax[1, 1].set_xlim([-xmax, xmax]) ax[1, 1].legend() logcostssline = {} for Cstr, costs in costss.items(): logcostssline[Cstr], = ax[2, 0].plot([1], label=Cstr, color=colors[Cstr]) ax[2, 0].set_xlabel("Time") ax[2, 0].set_ylabel("Instantaneous Cost (Log Scale)") ax[2, 0].set_xlim([0, 100]) ax[2, 0].set_ylim([0.1, logcmax]) ax[2, 0].set_yscale('log') ax[2, 0].legend() logcumcostssline = {} for Cstr, costs in cumcostss.items(): logcumcostssline[Cstr], = ax[2, 1].plot([1], label=Cstr, color=colors[Cstr]) ax[2, 1].set_xlabel("Time") ax[2, 1].set_ylabel("Average Cost (Log Scale)") ax[2, 1].set_xlim([0, 100]) ax[2, 1].set_ylim([0.1, logcumcmax]) ax[2, 1].set_yscale('log') ax[2, 1].legend() def livedraw(t): for Cstr, costsline in costssline.items(): costsline.set_data(np.arange(t), costss[Cstr][:t]) for Cstr, cumcostsline in cumcostssline.items(): cumcostsline.set_data(np.arange(t), cumcostss[Cstr][:t]) for i, (Cstr, pointsline) in enumerate(pointssline.items()): pointsline.set_data(xss[Cstr][t][0], i) for Cstr, perturbline in perturblines.items(): perturbline.set_data(np.arange(t), disturbances[Cstr][:t]) for i, (Cstr, trailsline) in 
enumerate(trailssline.items()): trailsline.set_data(list(map(lambda x: x[0], xss[Cstr][max(t-10, 0):t])), i) for Cstr, logcostsline in logcostssline.items(): logcostsline.set_data(np.arange(t), costss[Cstr][:t]) for Cstr, logcumcostsline in logcumcostssline.items(): logcumcostsline.set_data(np.arange(t), cumcostss[Cstr][:t]) return chain(costssline.values(), cumcostssline.values(), perturblines.values(), pointssline.values(), trailssline.values(), logcostssline.values(), logcumcostssline.values()) print("🧛 reanimating :) meanwhile...") livedraw(99) plt.show() from matplotlib import animation anim = animation.FuncAnimation(fig, livedraw, frames=100, interval=50, blit=True) from IPython.display import HTML display.clear_output(wait=True) return HTML(anim.to_html5_video()) ``` ## A simple dynamical system Defines a discrete-time [double-integrator](https://en.wikipedia.org/wiki/Double_integrator) -- a simple linear dynamical system that mirrors 1d kinematics -- along with a quadratic cost. Below $\mathbf{x}_t$ is the state, $\mathbf{u}_t$ is the control input (or action), $\mathbf{w}_t$ is the disturbance. $$ \mathbf{x}_{t+1} = A\mathbf{x}_t + B\mathbf{u}_t + \mathbf{w}_t, \qquad c(\mathbf{x},\mathbf{u}) = \mathbf{x}^\top Q \mathbf{x} + \mathbf{u}^\top R \mathbf{u}$$ $$ A = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix},\quad B = \begin{bmatrix} 0\\ 1 \end{bmatrix}, \quad Q = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}, \quad R = \begin{bmatrix} 1 \end{bmatrix}$$ In the task of controller verification, the **verifier** selects $\mathbf{w}_t$ adaptively as a function of past state-action pairs $(\mathbf{x}_s,\mathbf{u}_s:s\leq t)$. ``` dx, du, T = 2, 1, 100 A, B = np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([[0.0], [1.0]]) Q, R = np.eye(dx), np.eye(du) dyn = lambda x, u, w, t: A @ x + B @ u + w cost = lambda x, u, t: x.T @ A @ x + u.T @ R @ u # A basic control loop. # (x, z) is the environ-controller state. # w is disturbance and z_w disturbance generator state def eval(control, disturbance): x, z, z_w = np.zeros(dx), None, None for t in range(T): u, z = control(x, z, t) w, z_w = disturbance(x, u, z_w, t) c = cost(x, u, t) yield (x, u, w, c) x = dyn(x, u, w, t) ``` ## Control Algorithms The segment below puts forth a few basic control strategies, whose performance characteristics we would like to verify. + **Zero Control**: Executes $\mathbf{u}=\mathbf{0}$. + **LQR / H2**: A discrete-time [linear-quadratic regulator](https://en.wikipedia.org/wiki/Linear%E2%80%93quadratic_regulator). + **Finite-horizon LQR / H2**: A finite-horizon variant of the above. + **Robust / $H_\infty$ Control**: A worst-case [robust](https://en.wikipedia.org/wiki/H-infinity_methods_in_control_theory) controller. + **GPC**: [Gradient-perturbation](https://arxiv.org/abs/1902.08721) controller. 
``` #@title def zero(): return lambda x, z, t: (np.zeros(du), z) def h2(A=A, B=B, Q=Q, R=R): P = dare(A, B, Q, R) K = - inv(R + B.T @ P @ B) @ (B.T @ P @ A) return lambda x, z, t: (K @ x, z) def h2nonstat(A=A, B=B, Q=Q, R=R, T=T): dx, du = B.shape P, K = [np.zeros((dx, dx)) for _ in range(T + 1)], [np.zeros((du, dx)) for _ in range(T)] P[T] = Q for t in range(T - 1, -1, -1): P[t] = Q + A.T @ P[t + 1] @ A - (A.T @ P[t + 1] @ B) @ inv(R + B.T @ P[t + 1] @ B) @ (B.T @ P[t + 1] @ A) K[t] = - inv(R + B.T @ P[t + 1] @ B) @ (B.T @ P[t + 1] @ A) return lambda x, z, t: (K[t] @ x, z) def hinf(A=A, B=B, Q=Q, R=R, T=T, gamma=1.0): dx, du = B.shape P, K = [np.zeros((dx, dx)) for _ in range(T + 1)], [np.zeros((du, dx)) for _ in range(T)], P[T] = Q for t in range(T - 1, -1, -1): Lambda = np.eye(dx) + (B @ inv(R) @ B.T - gamma ** -2 * np.eye(dx)) @ P[t + 1] P[t] = Q + A.T @ P[t + 1] @ pinv(Lambda) @ A K[t] = - np.linalg.inv(R) @ B.T @ P[t + 1] @ pinv(Lambda) @ A return lambda x, z, t: (K[t] @ x, z) def gpc(A=A, B=B, Q=Q, R=R, T=T, H=3, M=3, lr=0.01, dyn=dyn, cost=cost): dx, du = B.shape P = dare(A, B, Q, R) K = - np.array(inv(R + B.T @ P @ B) @ (B.T @ P @ A)) def proxy(E, off, W): y = np.zeros(dx) for h in range(H): v = K @ y + np.tensordot(E, W[h: h + M], axes=([0, 2], [0, 1])) y = dyn(y, v, W[h + M], h + M) v = K @ y + np.tensordot(E, W[h: h + M], axes=([0, 2], [0, 1])) c = cost(y, v, None) return c proxygrad = jit(grad(proxy, argnums=(0, 1))) def gpc_u(x, z, t): if z is None or t == 0: z = np.zeros(dx), np.zeros(du), np.zeros((H + M, dx)), np.zeros((M, du, dx)), np.zeros(du) xprev, uprev, W, E, off = z W = jax.ops.index_update(W, 0, x - A @ xprev - B @ uprev) W = np.roll(W, -1, axis=0) if t >= H + M: Edelta, offdelta = proxygrad(E, off, W) E -= lr * Edelta off -= lr * offdelta u = K @ x + np.tensordot(E, W[-M:], axes=([0, 2], [0, 1])) + off return u, (x, u, W, E, off) return gpc_u def controllers(gamma, H, M, lr): return { "Zero Control": zero(), "LQR / H2": h2(), "Finite-horizon LQR / H2": h2nonstat(), "Robust / Hinf Control": hinf(gamma=gamma), "GPC": gpc(H=H, M=M, lr=lr), } ``` ## [Memory Online Trust Region](https://arxiv.org/abs/2012.06695) (**MOTR**) disturbances This is an online learning approach to disturbance generation, akin to nonstochastic control but with the role of control and disturbance swapped. 
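In the implementation below, the generated disturbance is an affine function of the last $M$ control inputs,

$$ \mathbf{w}_t = \mathbf{w}^{\mathrm{off}} + \sum_{i=1}^{M} E^{[i]} \mathbf{u}_{t-i}, $$

and the parameters $(E, \mathbf{w}^{\mathrm{off}})$ are updated by projected gradient ascent on a proxy of the quadratic state cost over a lookback window of length $H$, with the projection keeping them inside norm balls of radius `r_E` and `r_off` respectively.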
``` # Author: Udaya Ghai ([email protected]) def motr(A=A, B=B, Q=Q, R=R, r_off=0.5, r_E= 1.0, T=T, H=3, M=3, lr=0.001, dyn=dyn, cost=cost): dx, du = B.shape def proxy(E, off, U, X): x = X[0] for h in range(H): w = np.tensordot(E, U[h: h + M], axes=([0, 2], [0, 1])) + off x = dyn(x, U[h + H], w, h+M) return np.sum(x.T @ Q @ x) proxygrad = jit(grad(proxy, argnums=(0, 1))) proxyhess = jit(hessian(proxy)) def project(x, r): norm_x = np.linalg.norm(x) return x if norm_x < r else (r / norm_x) * x def motr_w(x, u, z_w, t): if z_w is None or t == 0: z_w = np.zeros((H+M, du, 1)),np.zeros((H, dx, 1)), np.zeros((M, dx, du)), np.ones((dx, 1)) U, X, E, off = z_w U = jax.ops.index_update(U, 0, u) U = np.roll(U, -1, axis=0) X = jax.ops.index_update(X, 0, np.reshape(x, (dx,1))) X = np.roll(X, -1, axis=0) if t >= H + M: Edelta, offdelta = proxygrad(E, off, U, X) E = project(E + lr*Edelta, r_E) off = project(off + lr * offdelta, r_off) w = np.tensordot(E, U[-M:], axes=([0, 2], [0, 1])) + off return np.squeeze(w), (U, X, E, off) return motr_w #@title MOTR Pertrubation #@markdown Environment Parameters motr_offset_radius = 1 #@param {type:"slider", min:0, max:2, step:0.01} motr_radius = 0.4 #@param {type:"slider", min:0, max:2, step:0.01} motr_lookback = 5 #@param {type:"slider", min:1, max:20, step:1} motr_memory = 5 #@param {type:"slider", min:1, max:20, step:1} motr_gen = motr(r_off=motr_offset_radius, r_E=motr_radius, M=motr_memory, H=motr_lookback) #@markdown Constant Pertrubation: Control parameters hinf_log_gamma = 2 #@param {type:"slider", min:-2, max:5, step:0.01} hinf_gamma = 10**(hinf_log_gamma) gpc_lookback = 5 #@param {type:"slider", min:1, max:20, step:1} gpc_memory = 5 #@param {type:"slider", min:1, max:20, step:1} gpc_log_lr = -3 #@param {type:"slider", min:-5, max:0, step:0.01} gpc_lr = 10**(gpc_log_lr) Cs = controllers(hinf_gamma, gpc_lookback, gpc_memory, gpc_lr) print("🧛 evaluating controllers") traces = {Cstr: list(zip(*eval(C, motr_gen))) for Cstr, C in Cs.items()} xss = valmap(lambda x: x[0], traces) uss = valmap(lambda x: x[1], traces) wss = valmap(lambda x: x[2], traces) costss = valmap(lambda x: x[3], traces) liveplot(costss, xss, wss, 250, 200, 4, 20, 10**5, 10**5) ```
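## A fixed disturbance baseline (illustrative)

As a point of comparison, the same evaluation loop can be driven by a hand-crafted disturbance instead of the learned MOTR one. The cell below is an optional sketch (the sinusoidal generator `sin_gen` is not part of the original notebook); it reuses `eval`, `controllers`, and `liveplot` defined above, with the slider values already chosen.
```
#@title (Optional) Sinusoidal disturbance baseline
# a stateless generator: w_t = 0.5*sin(0.2 t) applied to every state coordinate
sin_gen = lambda x, u, z_w, t: (0.5 * np.sin(0.2 * t) * np.ones(dx), z_w)

Cs_base = controllers(hinf_gamma, gpc_lookback, gpc_memory, gpc_lr)
traces_sin = {Cstr: list(zip(*eval(C, sin_gen))) for Cstr, C in Cs_base.items()}
xss_sin = valmap(lambda x: x[0], traces_sin)
wss_sin = valmap(lambda x: x[2], traces_sin)
costss_sin = valmap(lambda x: x[3], traces_sin)
liveplot(costss_sin, xss_sin, wss_sin)
```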
# Aim: * Extract features for logistic regression given some text * Implement logistic regression from scratch * Apply logistic regression on a natural language processing task * Test logistic regression We will be using a data set of tweets. ## Import functions and data ``` import nltk from nltk.corpus import twitter_samples import pandas as pd nltk.download('twitter_samples') nltk.download('stopwords') import re import string import numpy as np from nltk.corpus import stopwords from nltk.stem import PorterStemmer from nltk.tokenize import TweetTokenizer #process_tweet(): cleans the text, tokenizes it into separate words, removes stopwords, and converts words to stems. def process_tweet(tweet): """Process tweet function. Input: tweet: a string containing a tweet Output: tweets_clean: a list of words containing the processed tweet """ stemmer = PorterStemmer() stopwords_english = stopwords.words('english') # remove stock market tickers like $GE tweet = re.sub(r'\$\w*', '', tweet) # remove old style retweet text "RT" tweet = re.sub(r'^RT[\s]+', '', tweet) # remove hyperlinks tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet) # remove hashtags # only removing the hash # sign from the word tweet = re.sub(r'#', '', tweet) # tokenize tweets tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True, reduce_len=True) tweet_tokens = tokenizer.tokenize(tweet) tweets_clean = [] for word in tweet_tokens: if(word not in stopwords_english and word not in string.punctuation): stem_word = stemmer.stem(word) tweets_clean.append(stem_word) ############################################################# # 1 remove stopwords # 2 remove punctuation # 3 stemming word # 4 Add it to tweets_clean return tweets_clean #build_freqs counts how often a word in the 'corpus' (the entire set of tweets) was associated with # a positive label '1' or # a negative label '0', #then builds the freqs dictionary, where each key is a (word,label) tuple, #and the value is the count of its frequency within the corpus of tweets. def build_freqs(tweets, ys): """Build frequencies. Input: tweets: a list of tweets ys: an m x 1 array with the sentiment label of each tweet (either 0 or 1) Output: freqs: a dictionary mapping each (word, sentiment) pair to its frequency """ # Convert np array to list since zip needs an iterable. # The squeeze is necessary or the list ends up with one element. # Also note that this is just a NOP if ys is already a list. yslist = np.squeeze(ys).tolist() # Start with an empty dictionary and populate it by looping over all tweets # and over all processed words in each tweet. freqs = {} for y, tweet in zip(yslist, tweets): for word in process_tweet(tweet): pair = (word, y) ############################################################# #Update the count of pair if present, set it to 1 otherwise if pair in freqs: freqs[pair] += 1 else: freqs[pair] = 1 return freqs ``` ### Prepare the data * The `twitter_samples` contains subsets of 5,000 positive tweets, 5,000 negative tweets, and the full set of 10,000 tweets. ``` # select the set of positive and negative tweets all_positive_tweets = twitter_samples.strings('positive_tweets.json') all_negative_tweets = twitter_samples.strings('negative_tweets.json') ``` * Train test split: 20% will be in the test set, and 80% in the training set. 
``` # split the data into two pieces, one for training and one for testing ############################################################# test_pos = all_positive_tweets[4000:] train_pos = all_positive_tweets[:4000] test_neg = all_negative_tweets[4000:] train_neg = all_negative_tweets[:4000] train_x = train_pos + train_neg test_x = test_pos + test_neg ``` * Create the numpy array of positive labels and negative labels. ``` # combine positive and negative labels train_y = np.append(np.ones((len(train_pos), 1)), np.zeros((len(train_neg), 1)), axis=0) test_y = np.append(np.ones((len(test_pos), 1)), np.zeros((len(test_neg), 1)), axis=0) ``` * Create the frequency dictionary using the `build_freqs()` function. ``` # create frequency dictionary ############################################################# freqs = build_freqs(train_x,train_y) # check the output print("type(freqs) = " + str(type(freqs))) print("len(freqs) = " + str(len(freqs.keys()))) ``` * HERE, The `freqs` dictionary is the frequency dictionary that's being built. * The key is the tuple (word, label), such as ("happy",1) or ("happy",0). The value stored for each key is the count of how many times the word "happy" was associated with a positive label, or how many times "happy" was associated with a negative label. Process tweet ``` # Example print('This is an example of a positive tweet: \n', train_x[0]) print('\nThis is an example of the processed version of the tweet: \n', process_tweet(train_x[0])) ``` #Logistic regression : ### Sigmoid $$ h(z) = \frac{1}{1+\exp^{-z}} $$ It maps the input 'x' to a value that ranges between 0 and 1, and so it can be treated as a probability. ``` def sigmoid(z): # calculate the sigmoid of z ############################################################# h = 1/(1+np.exp(-z)) return h ``` ### Logistic regression: regression and a sigmoid Logistic regression takes a regular linear regression, and applies a sigmoid to the output of the linear regression. Logistic regression $$ h(z) = \frac{1}{1+\exp^{-z}}$$ $$z = \theta_0 x_0 + \theta_1 x_1 + \theta_2 x_2 + ... \theta_N x_N$$ #### Update the weights:Gradient Descent $$\nabla_{\theta_j}J(\theta) = \frac{1}{m} \sum_{i=1}^m(h^{(i)}-y^{(i)})x_j $$ * To update the weight $\theta_j$, we adjust it by subtracting a fraction of the gradient determined by $\alpha$: $$\theta_j = \theta_j - \alpha \times \nabla_{\theta_j}J(\theta) $$ * The learning rate $\alpha$ is a value that we choose to control how big a single update will be. ``` def gradientDescent(x, y, theta, alpha, num_iters): # get 'm', the number of rows in matrix x m = len(x) for i in range(0, num_iters): # get z, the dot product of x and theta ############################################################# z = np.dot(x,theta) # get the sigmoid of z ############################################################# h = sigmoid(z) # calculate the cost function J = (-1/m)*(y.T @ np.log(h) + (1-y).T @ np.log(1-h)) # update the weights theta ############################################################# grad = (1/m) * np.dot(x.T, h-y) theta -= (alpha * grad) J = float(J) return J, theta ``` ## Extracting the features * Given a list of tweets, extract the features and store them in a matrix. You will extract two features. * The first feature is the number of positive words in a tweet. * The second feature is the number of negative words in a tweet. * Then train your logistic regression classifier on these features. * Test the classifier on a validation set. 
``` def extract_features(tweet, freqs): ''' Input: tweet: a list of words for one tweet freqs: a dictionary corresponding to the frequencies of each tuple (word, label) Output: x: a feature vector of dimension (1,3) ''' # tokenizes, stems, and removes stopwords ############################################################# word_l = process_tweet(tweet) # 3 elements in the form of a 1 x 3 vector x = np.zeros((1, 3)) #bias term is set to 1 x[0,0] = 1 # loop through each word in the list of words for word in word_l: # increment the word count for the positive label 1 ############################################################# x[0,1] += freqs.get((word,1.0),0) # increment the word count for the negative label 0 ############################################################# x[0,2] += freqs.get((word,0.0),0) assert(x.shape == (1, 3)) return x # Check the function # test 1 # test on training data tmp1 = extract_features(train_x[0], freqs) print(tmp1) # test 2: # check for when the words are not in the freqs dictionary tmp2 = extract_features('Hariom pandya', freqs) print(tmp2) ``` ## Training Your Model To train the model: * Stack the features for all training examples into a matrix `X`. * Call `gradientDescent` ``` # collect the features 'x' and stack them into a matrix 'X' X = np.zeros((len(train_x), 3)) for i in range(len(train_x)): X[i, :]= extract_features(train_x[i], freqs) # training labels corresponding to X Y = train_y # Apply gradient descent J, theta = gradientDescent(X, Y, np.zeros((3, 1)), 1e-9, 1500) print(f"The cost after training is {J:.8f}.") ``` # Test logistic regression Predict whether a tweet is positive or negative. * Given a tweet, process it, then extract the features. * Apply the model's learned weights on the features to get the logits. * Apply the sigmoid to the logits to get the prediction (a value between 0 and 1). 
$$y_{pred} = sigmoid(\mathbf{x} \cdot \theta)$$ ``` def predict_tweet(tweet, freqs, theta): ''' Input: tweet: a string freqs: a dictionary corresponding to the frequencies of each tuple (word, label) theta: (3,1) vector of weights Output: y_pred: the probability of a tweet being positive or negative ''' # extract the features of the tweet and store it into x ############################################################# x = extract_features(tweet,freqs) # make the prediction using x and theta ############################################################# z = np.dot(x,theta) y_pred = sigmoid(z) return y_pred # Run this cell to test your function for tweet in ['I am happy', 'I am bad', 'this movie should have been great.', 'great', 'great great', 'great great great', 'great great great great']: print( '%s -> %f' % (tweet, predict_tweet(tweet, freqs, theta))) ``` ## Check performance using the test set ``` def test_logistic_regression(test_x, test_y, freqs, theta): """ Input: test_x: a list of tweets test_y: (m, 1) vector with the corresponding labels for the list of tweets freqs: a dictionary with the frequency of each pair (or tuple) theta: weight vector of dimension (3, 1) Output: accuracy: (# of tweets classified correctly) / (total # of tweets) """ # the list for storing predictions y_hat = [] for tweet in test_x: # get the label prediction for the tweet y_pred = predict_tweet(tweet, freqs, theta) if y_pred > 0.5: # append 1.0 to the list y_hat.append(1) else: # append 0 to the list y_hat.append(0) # With the above implementation, y_hat is a list, but test_y is (m,1) array # convert both to one-dimensional arrays in order to compare them using the '==' operator count=0 y_hat=np.array(y_hat) m=len(test_y) print(m) test_y=np.reshape(test_y,m) print(y_hat.shape) print(test_y.shape) accuracy = ((test_y == y_hat).sum())/m return accuracy tmp_accuracy = test_logistic_regression(test_x, test_y, freqs, theta) print(f"Logistic regression model's accuracy = {tmp_accuracy:.4f}") ``` #Lab Assignment: ##Replace Manual version of Logistic Regression with TF based version. ####[Reference : Lab-6]
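One possible starting point is sketched below. It assumes the feature matrix `X`, labels `Y`, and the `extract_features`, `freqs`, `test_x`, and `test_y` objects from the cells above are available; the optimizer, epoch count, and batch size are placeholder choices rather than a reference solution (the bias column already in `X` simply duplicates the layer's built-in bias, which is harmless).
```
import tensorflow as tf

# logistic regression = a single dense unit with a sigmoid activation
tf_model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(3,))
])
tf_model.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy'])
# scaling the count features (e.g. dividing by their maximum) may help optimization
tf_model.fit(X, Y, epochs=100, verbose=0)

# build the test feature matrix with the same extractor and evaluate
X_test = np.zeros((len(test_x), 3))
for i in range(len(test_x)):
    X_test[i, :] = extract_features(test_x[i], freqs)
loss_tf, acc_tf = tf_model.evaluate(X_test, test_y, verbose=0)
print(f"TF logistic regression accuracy = {acc_tf:.4f}")
```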
# import data ``` import os import glob from PIL import Image import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split def read_feature(folder, num): filename = glob.glob(os.path.join(folder, '*')) img_arr = np.zeros([len(filename), 100, 100, 3]) label = num * np.ones(len(filename), dtype="float32") for i, name in enumerate(filename): img = Image.open(name) img_arr[i, :, :, :] = np.asarray(img, dtype="uint8") return img_arr, label tb_img_arr, tb_label = read_feature('./TB_Image', 1) non_tb_img_arr, non_tb_label = read_feature('./Non-TB_Image', 0) images = np.concatenate((tb_img_arr, non_tb_img_arr)) labels = np.concatenate((tb_label, non_tb_label)) print(np.shape(images)) print(np.shape(labels)) X_train, X_val, y_train, y_val = train_test_split(images, labels, test_size=0.1) X_train = X_train.astype(np.int) X_val = X_val.astype(np.int) y_train = y_train.astype(np.int) y_val = y_val.astype(np.int) # change into one-hot vector y_train = tf.keras.utils.to_categorical(y_train, 2) y_val = tf.keras.utils.to_categorical(y_val, 2) # reshape dataset X_train = X_train.reshape(X_train.shape[0], 100, 100, 3) X_val = X_val.reshape(X_val.shape[0], 100, 100, 3) from matplotlib import pyplot as plt %matplotlib inline print('Training data shape', X_train.shape) _, (ax1, ax2) = plt.subplots(1, 2) ax1.imshow(X_train[0].reshape(100, 100, 3), cmap=plt.cm.Greys); ax2.imshow(X_train[1].reshape(100, 100, 3), cmap=plt.cm.Greys); ``` ## Define trainning function ``` def train_data(model): loss = [] acc = [] val_loss = [] val_acc = [] early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=3) tensorboard = tf.keras.callbacks.TensorBoard(log_dir='logs/{}'.format('model_name')) hist = model.fit(X_train, y_train, batch_size=64, epochs=50, # Run thru all the data point in each epoch verbose=1, validation_data=(X_val, y_val), #callbacks=[tensorboard]) callbacks=[early_stop, tensorboard]) #val_err.append(hist.history['val_mean_absolute_error'][-1]) # a dict loss.append(hist.history['loss'][-1]) val_loss.append(hist.history['val_loss'][-1]) acc.append(hist.history['acc'][-1]) val_acc.append(hist.history['val_acc'][-1]) return loss, val_loss, hist ``` ## Define a VGG network ``` def VGG(activ): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(64, (3,3), padding='same', activation=activ, input_shape=(100, 100, 3)), tf.keras.layers.MaxPool2D(padding='same'), tf.keras.layers.Conv2D(128, (3,3), padding='same', activation=activ), tf.keras.layers.MaxPool2D(padding='same'), tf.keras.layers.Conv2D(256, (3,3), padding='same', activation=activ), tf.keras.layers.Conv2D(256, (3,3), padding='same', activation=activ), tf.keras.layers.MaxPool2D(padding='same'), tf.keras.layers.Conv2D(512, (3,3), padding='same', activation=activ), tf.keras.layers.Conv2D(512, (3,3), padding='same', activation=activ), tf.keras.layers.MaxPool2D(padding='same'), tf.keras.layers.Conv2D(512, (3,3), padding='same', activation=activ), tf.keras.layers.Conv2D(512, (3,3), padding='same', activation=activ), tf.keras.layers.MaxPool2D(padding='same'), tf.keras.layers.Flatten(), tf.keras.layers.Dense(4096, activation=activ), tf.keras.layers.Dense(4096, activation=activ), tf.keras.layers.Dense(1000, activation=activ), tf.keras.layers.Dense(2, activation='softmax') ]) param = model.count_params() model.compile(optimizer=tf.train.AdamOptimizer(0.000001), loss='categorical_crossentropy', metrics=['accuracy']) model.summary() return model, param ``` ## Define a DNN 
model ``` def dnnmodel(n, activ): param = [] model = tf.keras.Sequential([]) model.add(tf.keras.layers.Flatten(input_shape=(100, 100, 3))) for i in range(n): model.add(tf.keras.layers.Dense(100, activation=activ)) model.add(tf.keras.layers.Dense(2, activation='softmax')) # model.summary() # model.count_params() param.append(model.count_params()) model.compile(optimizer=tf.train.AdamOptimizer(0.000001), loss='categorical_crossentropy', metrics=['accuracy', 'mae']) return model, param ``` ## Trainning with VGG ### VGG with activation "relu" ``` activ = 'relu' model_VGG1, param_VGG1 = VGG(activ) loss_VGG1, val_loss_VGG1, hist_VGG1= train_data(model_VGG1) ``` ### Define the function for plots ``` def plot_acc_and_loss(hist): acc = hist.history['acc'] loss = hist.history['loss'] val_acc = hist.history['val_acc'] val_loss = hist.history['val_loss'] plt.plot(acc, 'r-o') plt.title("Trainning accuracy") plt.show() plt.plot(loss, 'g-o') plt.title("Trainning loss") plt.show() plt.plot(val_acc, 'b-o') plt.title("Validation accuracy") plt.show() plt.plot(val_loss, 'm-o') plt.title("Validation loss") plt.show() plot_acc_and_loss(hist_VGG1) ``` ### Calculate sensitivity and specificity ``` from sklearn.metrics import confusion_matrix predictions = model_VGG1.predict(X_val) y_val = np.argmax(y_val, axis=-1) predictions = np.argmax(predictions, axis=-1) c = confusion_matrix(y_val, predictions) print('Confusion matrix:\n', c) print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0])) print('specificity', c[1, 1] / (c[1, 1] + c[1, 0])) ``` ### VGG with activation "relu" ``` activ = 'tanh' model_VGG2, param_VGG2 = VGG(activ) loss_VGG2, val_loss_VGG2, hist_VGG2= train_data(model_VGG2) plot_acc_and_loss(hist_VGG2) predictions = model_VGG2.predict(X_val) y_val1 = np.argmax(y_val, axis=-1) predictions = np.argmax(predictions, axis=-1) c = confusion_matrix(y_val1, predictions) print('Confusion matrix:\n', c) print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0])) print('specificity', c[1, 1] / (c[1, 1] + c[1, 0])) ``` ## DNN ``` activ = 'relu' model_DNN, param1_DNN = dnnmodel(15, activ) loss_DNN, val_loss_DNN, hist_DNN= train_data(model_DNN) plot_acc_and_loss(hist_DNN) predictions = model_DNN.predict(X_val) y_val1 = np.argmax(y_val, axis=-1) predictions = np.argmax(predictions, axis=-1) c = confusion_matrix(y_val1, predictions) print('Confusion matrix:\n', c) print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0])) print('specificity', c[1, 1] / (c[1, 1] + c[1, 0])) ``` ## ResNet ``` from tensorflow.keras.applications import ResNet50 def resnet(): input_tensor = tf.keras.layers.Input(shape=(100, 100, 3)) model = ResNet50(include_top=True, weights=None, input_tensor=input_tensor, input_shape=None, pooling=None, classes=2) param = model.count_params() model.compile(optimizer=tf.train.AdamOptimizer(0.00001), loss='categorical_crossentropy', metrics=['accuracy']) model.summary() return model, param model_resnet, param_resnet = resnet() loss_resnet, val_loss_resnet, hist_resnet= train_data(model_resnet) plot_acc_and_loss(hist_resnet) predictions = model_resnet.predict(X_val) y_val1 = np.argmax(y_val, axis=-1) predictions = np.argmax(predictions, axis=-1) c = confusion_matrix(y_val1, predictions) print('Confusion matrix:\n', c) print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0])) print('specificity', c[1, 1] / (c[1, 1] + c[1, 0])) ```
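With TB encoded as label 1 (see `read_feature` above), the recall of the TB class is the sensitivity and the recall of the non-TB class is the specificity. As an optional cross-check, `classification_report` prints both per-class recalls with explicit class names; this sketch assumes `y_val` and `predictions` are the label vectors computed in the cell above.
```
from sklearn.metrics import classification_report

# per-class precision/recall: recall of 'Non-TB' (label 0) and recall of 'TB' (label 1)
print(classification_report(y_val, predictions, target_names=['Non-TB', 'TB']))
```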
github_jupyter
## Import the needed libraries ``` import PicSureHpdsLib import pandas import matplotlib ``` ## Create an instance of the datasource adapter and get a reference to the data resource ``` adapter = PicSureHpdsLib.BypassAdapter("http://pic-sure-hpds-nhanes:8080/PIC-SURE") resource = adapter.useResource() ``` ## Get a listing of all "demographics" entries in the data dictionary. Show what actions can be done with the "demographic_results" object ``` demographic_entries = resource.dictionary().find("\\demographics\\") demographic_entries.help() ``` ## Examine the demographic_entries results by converting it into a pandas DataFrame ``` demographic_entries.DataFrame() resource.query().help() resource.query().filter().help() query_male = resource.query() query_male.filter().add("\\demographics\\SEX\\", ["male"]) query_female = resource.query() query_female.filter().add("\\demographics\\SEX\\", ["female"]) field_age = resource.dictionary().find("\\AGE\\") field_BMI = resource.dictionary().find("\\Body Mass Index") query_male.require().add(field_age.keys()) query_male.require().add(field_BMI.keys()) query_female.require().add(field_age.keys()) query_female.require().add(field_BMI.keys()) query_female.show() ``` ## Convert the query results for females into a DataFrame and plot it by BMI and Age ``` df_f = query_female.getResultsDataFrame() plot_f = df_f.plot.scatter(x="\\demographics\\AGE\\", y="\\examination\\body measures\\Body Mass Index (kg per m**2)\\", c="#ffbabb40") # ____ Uncomment if graphs are not displaying ____ #plot_f.plot() #matplotlib.pyplot.show() ``` ## Convert the query results for males into a DataFrame and plot it by BMI and Age ``` df_m = query_male.getResultsDataFrame() plot_m = df_m.plot.scatter(x="\\demographics\\AGE\\", y="\\examination\\body measures\\Body Mass Index (kg per m**2)\\", c="#5a7dd040") # ____ Uncomment if graphs are not displaying ____ #plot_m.plot() #matplotlib.pyplot.show() ``` ## Replot the results using a single DataFrame containing both male and female ``` d = resource.dictionary() criteria = [] criteria.extend(d.find("\\SEX\\").keys()) criteria.extend(d.find("\\Body Mass Index").keys()) criteria.extend(d.find("\\AGE\\").keys()) query_unified = resource.query() query_unified.require().add(criteria) df_mf = query_unified.getResultsDataFrame() # map a color field for the plot to use sex_colors = {'male':'#5a7dd040', 'female':'#ffbabb40'} df_mf['\\sex_color\\'] = df_mf['\\demographics\\SEX\\'].map(sex_colors) # plot data plot_mf = df_mf.plot.scatter(x="\\demographics\\AGE\\", y="\\examination\\body measures\\Body Mass Index (kg per m**2)\\", c=df_mf['\\sex_color\\']) # ____ Uncomment if graphs are not displaying ____ #plot_mf.plot() #matplotlib.pyplot.show() ``` ## Replot data but trim outliers ``` q = df_mf["\\examination\\body measures\\Body Mass Index (kg per m**2)\\"].quantile(0.9999) # create a masked array to remove outliers test = df_mf.mask(df_mf["\\examination\\body measures\\Body Mass Index (kg per m**2)\\"] > q) # plot data plot_mf = test.plot.scatter(x="\\demographics\\AGE\\", y="\\examination\\body measures\\Body Mass Index (kg per m**2)\\", c=df_mf['\\sex_color\\']) # ____ Uncomment if graphs are not displaying ____ #plot_mf.plot() #matplotlib.pyplot.show() ```
github_jupyter
## Appendix (Application of the mutual fund theorem) ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import FinanceDataReader as fdr import pandas as pd ticker_list = ['069500'] df_list = [fdr.DataReader(ticker, '2015-01-01', '2016-12-31')['Change'] for ticker in ticker_list] df = pd.concat(df_list, axis=1) #df.columns = ['005930', '000660', '005935', '035420', '005380', '207940', '012330', '068270', '051910', '055550', '069500'] df.columns = ['KODEX200'] r = df.dropna() rf = 0.0125 #df = df.resample('Y').agg(lambda x:x.mean()*252) # Calculate basic summary statistics for individual stocks stock_volatility = r.std() * np.sqrt(252) stock_return = r.mean() * 252 alpha = stock_return.values sigma = stock_volatility.values # cov_inv = np.linalg.inv(cov) # temp = np.dot(cov_inv, (stock_return- rf)) # theta_opt = temp / temp.sum() # optimal weight in Risky Mutual fund # alpha = np.dot(theta_opt, stock_return) # 0.5941 # sigma = np.sqrt(cov.dot(theta_opt).dot(theta_opt)) ``` ## (5B), (7B) ``` # g_B = 0 # in case of age over retirement (Second scenario in Problem(B)) X0 = 150. # Saving account at the beginning l = 3 t = 45 # age in case of age over retirement (Second scenario in Problem(B)) gamma = -3. # risk averse measure phi = rf + (alpha -rf)**2 / (2 * sigma**2 * (1-gamma)) # temporal function for f_B rho = 0.04 # impatience factor for utility function beta = 4.59364 # parameter for mu delta = 0.05032 # parameter for mu rf=0.02 def f_B(t): if t < 65: ds = 0.01 T = 65 T_tilde = 110 value = 0 for s in np.arange(T, T_tilde, ds): w_s = np.exp(-rho*s/(1-gamma)) tmp = (10**(beta + delta*s - 10)- 10**(beta + delta*t - 10))/(delta * np.log(10)) value += np.exp(-1/(1-gamma)*(tmp - gamma*tmp - gamma*phi *(s-t))) * w_s * ds f = np.exp(-1/(1-gamma) *(tmp - gamma*tmp + gamma*phi*(T-t))) * value return f else: # 65~ ds = 0.01 T_tilde = 110 value = 0 for s in np.arange(t, T_tilde, ds): w_s = np.exp(-rho*s/(1-gamma)) tmp = (10**(beta + delta*s - 10)- 10**(beta + delta*t - 10))/(delta * np.log(10)) value += np.exp(-1/(1-gamma)*(tmp - gamma*tmp - gamma*phi *(s-t))) * w_s * ds return value # def f_B(t): # ds = 0.01 # T_tilde = 110 # value = 0 # for s in np.arange(t, T_tilde, ds): # w_s = np.exp(-rho*s/(1-gamma)) # tmp = (10**(beta + delta*s - 10)- 10**(beta + delta*t - 10))/(delta * np.log(10)) # value += np.exp(- tmp + gamma/(1-gamma) * phi *(s-t)) * w_s * ds # return value # def V_B(t, x): # f_b = f_B(t) # value_fcn = 1/gamma * f_b **(1-gamma) * x **gamma # return value_fcn def C_star(t,X): w_t = np.exp(-rho*t/(1-gamma)) f_b = f_B(t) c_t = w_t/f_b * X return c_t def g_B(t, l): ds=0.01 value = 0. T=65 # retirement if t < T: for s in np.arange(t, T, ds): tmp = (10**(beta + delta*s - 10)- 10**(beta + delta*t - 10))/(delta * np.log(10)) value += np.exp(-tmp)*l * ds return value else: return 0. 
pi_opt = (alpha-rf)/(sigma**2 *(1-gamma)) * (X0 + g_B(t, l))/X0 # Optimal weight for Risky Asset (7B) print(pi_opt) # 0.25 # print(C_star(t, X)) ``` ## Simulation ``` import time start = time.time() dt = 1 def mu(t): # Mortality rate in next year value = (10**(beta + delta*(t+dt) - 10)- 10**(beta + delta*t - 10))/(delta * np.log(10)) return value n_simulation = 10000 Asset = np.empty(37) Asset_stack = [] C_stack = [] for i in range(n_simulation): Asset[0] = 150 # initial wealth C_list = [] for t in range(45, 81): if t < 65: # before retirement l_t = 3 # payment to pension fund pi_opt = (alpha-rf)/(sigma**2 *(1-gamma)) * (Asset[t-45] + g_B(t, l_t))/Asset[t-45] C_t = 0 # Z = np.random.randn() Asset[t-45+1] = Asset[t-45]*np.exp(((1-pi_opt)*rf + pi_opt*alpha + mu(t)+ l_t/Asset[t-45] \ -pi_opt**2 * sigma**2/2)*dt + pi_opt * sigma * np.sqrt(dt) * Z) else : # after retirement l_t = 0 # payment duty is 0 after retirement pi_opt = (alpha-rf)/(sigma**2 *(1-gamma)) * (Asset[t-45] + g_B(t, l_t))/Asset[t-45] C_t = C_star(t=t, X = Asset[t-45]) Z = np.random.randn() Asset[t-45+1] = Asset[t-45]*np.exp(((1-pi_opt)*rf + pi_opt*alpha + mu(t)- C_t/Asset[t-45] \ -pi_opt**2 * sigma**2/2)*dt + pi_opt * sigma * np.sqrt(dt) * Z) C_list.append(C_t) Asset_stack.append(list(Asset)) C_stack.append(C_list) end = time.time() print(end - start) ``` ## Check the Simulation Result ``` Asset_mean = np.mean(Asset_stack, axis=0) #(37,) C_mean = np.mean(C_stack, axis=0) # (16,1) plt.rcParams['figure.figsize'] = [30, 15] plt.rcParams.update({'font.size': 30}) plt.title('Retirement planning') plt.xlabel('Age') plt.ylabel('Won(1000000)') plt.plot(range(45,81),Asset_mean[:-1], label='Wealth') plt.plot(range(65,81),C_mean, '--', color = 'r', label="Pension") plt.legend() plt.grid() pi_opt_list=[] for t in range(45, 81): if t < 65: l_t = 3 else : l_t = 0 pi_opt = (alpha-rf)/(sigma**2 *(1-gamma)) * (Asset_mean[:-1][t-45] + g_B(t, l_t))/Asset_mean[:-1][t-45] pi_opt_list.append(pi_opt) plt.title('Optimal weight of risky-asset changing following ages') plt.xlabel('Age') plt.ylabel('Weight') plt.bar(range(45,81),np.array(pi_opt_list).squeeze()) ```
## Introduction

This article discusses how to port pandas code to Spark. Their DataFrames share a number of traits, such as common operations and usage patterns. pandas is more flexible than Spark, but with a few changes Spark can do essentially the same work while adding the advantage of scalability; of course, their syntax and usage differ slightly.

## Main differences

### Distributed processing

pandas can only work on a single machine, computing on a DataFrame held in memory. Spark is distributed across a cluster, and the data it can process can greatly exceed the total memory of the cluster.

### Lazy execution

Spark does not execute any `transformation` until an `action` needs to run; actions are generally operations that store or display data. Deferring transformations this way lets the Spark scheduler see the whole execution plan, which it uses to optimize the execution order and to read only the data it needs. Lazy evaluation is also one of Scala's features. In short, in pandas we are always working with the data itself, while in Spark we are always modifying the execution plan that produces the data.

### Immutable data

Scala's functional programming style generally favors immutable objects; every Spark transformation returns a new DataFrame (apart from some metadata that may change).

### No index

Spark has no concept of an index.

### Single-row lookups are inconvenient

pandas can use the index to find a row quickly; Spark has no such feature, because in Spark you mainly manipulate the execution plan that produces the data, not the data itself.

### Spark SQL

With its SQL support, Spark is closer to a relational database.

## Some examples of pandas and PySpark usage

```
import pandas as pd
import pyspark.sql
import pyspark.sql.functions as sf
from pyspark.sql import SparkSession
```

### Projections

In pandas a projection can be taken directly with the `[]` operator.

```
person_pd = pd.read_csv('data/persons.csv')
person_pd[["name", "sex", "age"]]
```

PySpark can also take a projection directly with `[]`, but this is syntactic sugar; under the hood it uses the `select` method.

```
spark = SparkSession.builder \
    .master("local[*]") \
    .config("spark.driver.memory","6G") \
    .getOrCreate()

#person_pd[['age','name']]
person_sp = spark.read.option("inferSchema", True) \
    .option("header", True) \
    .csv('data/persons.csv')
person_sp.show()

person_sp[['age', 'name']].show()
```

### Simple transformations

Spark's `dataframe.select` actually accepts any column object; conceptually, a column object is one column of a DataFrame. A column can be one of the DataFrame's input columns, a computed result, or a transformation of several columns.

As an example, convert a column to upper case:

```
ret = pd.DataFrame(person_pd['name'].apply(lambda x: x.upper()))
ret

result = person_sp.select(
    sf.upper(person_sp.name)
)
result.show()
```

### Adding a column to a DataFrame

Adding a column in pandas is easy: just assign to the DataFrame. Spark requires the `withColumn` function.

```
def create_salutation(row):
    sex = row[0]
    name = row[1]
    if sex == 'male':
        return 'Mr '+name
    else:
        return "Mrs "+name

result = person_pd.copy()
result['salutation'] = result[['sex','name']].apply(create_salutation, axis=1, result_type='expand')
result

result = person_sp.withColumn(
    "salutation",
    sf.concat(sf.when(person_sp.sex == 'male', "Mr ").otherwise("Mrs "), person_sp.name)
)
result.show()
```

### Filtering

```
result = person_pd[person_pd['age'] > 20]
result
```

Spark supports three ways of writing a filter:

```
person_sp.filter(person_sp['age'] > 20).show()

person_sp[person_sp['age'] > 20].show()

person_sp.filter('age > 20').show()
```

### Grouping and aggregation

Analogous to SQL's `SELECT <aggregation> ... GROUP BY <grouping>` statements, both pandas and Spark define a number of aggregation functions, such as:

- count
- sum
- avg
- corr
- first
- last

See the [PySpark Function Documentation](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.functions) for details.

```
result = person_pd.groupby('sex').agg({'age': 'mean', 'height':['min', 'max']})
result

from pyspark.sql.functions import avg, min, max

result = person_sp.groupBy(person_sp.sex).agg(
    min(person_sp.height).alias('min height'),
    max(person_sp.height).alias('max height'),
    avg(person_sp.age))
result.show()

person_sp.show()
```

### Joins

Spark also supports joins across DataFrames; let's add some more data as an example.

```
addresses = spark.read.json('data/addresses.json')
addresses_pd = addresses.toPandas()
addresses_pd

pd_join = person_pd.merge(addresses_pd, left_on=['name'], right_on=['name'])
pd_join

sp_join = person_sp.join(addresses, person_sp.name==addresses.name)
sp_join.show()

sp_join_1 = person_sp.join(addresses, on=['name'])
sp_join_1.show()
```

### Reassembling a DataFrame

pandas makes it very easy to assign an existing column to a new column of a DataFrame; this is less convenient in Spark, where it requires a join.

```
df = person_pd[['name', 'age']]
col = person_pd['height']

result = df.copy()
result['h2'] = col
result

df = person_sp[['name', 'age']]
col = person_sp[['name', 'height']]

result = df.join(col, on=['name'])
result.show()
```
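To round off the lazy-execution point made at the top of this section, the small sketch below reuses the `person_sp` DataFrame from the examples. `explain()` only prints the plan Spark would run; no job is executed until the `show()` action at the end.

```
# Nothing is computed here: filter/select only build up an execution plan.
adults = person_sp.filter(person_sp['age'] > 20).select('name', 'age')

# explain() prints the physical plan -- still no data is read or processed.
adults.explain()

# show() is an action: only now does Spark actually execute the plan.
adults.show()
```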
# Distant Viewing with Deep Learning: Part 2 In Part 2 of this tutorial, we introduce the concepts of deep learning and show it yields interesting similarity metrics and is able to extract feature useful features such as the presence and location of faces in the image. ## Step 9: Python modules for deep learning We need to reload all of the Python modules we used in the Part 1. ``` %pylab inline # !pip3 install keras # this wasn't working so had to do change from here # ------------------ import getpass import os password = getpass.getpass() command = "sudo -S sudo pip3 install https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.8.0-py3-none-any.whl" #can be any command but don't forget -S as it enables input from stdin os.system('echo %s | %s' % (password, command)) !git clone https://github.com/keras-team/keras.git os.chdir('keras') command = "sudo -S python3 setup.py install" #can be any command but don't forget -S as it enables input from stdin os.system('echo %s | %s' % (password, command)) command = "sudo -S sudo pip3 install pandas" #can be any command but don't forget -S as it enables input from stdin os.system('echo %s | %s' % (password, command)) #!sudo pip3 install tensorflow # To here------------------ import collections #import tensorflow import numpy as np import scipy as sp import pandas as pd import importlib from os.path import join from matplotlib.colors import rgb_to_hsv command = "sudo -S sudo pip3 install keras" #can be any command but don't forget -S as it enables input from stdin os.system('echo %s | %s' % (password, command)) import keras import tensorflow import matplotlib.pyplot as plt import matplotlib.patches as patches plt.rcParams["figure.figsize"] = (8,8) os.chdir("/Users/jgo384/Documents/GitHub/NLTK/dl-tutorial/") os.listdir("/Users/jgo384/Documents/GitHub/NLTK/dl-tutorial/") ``` We also need to reload the wikiart metadata. ``` wikiart = pd.read_csv("meta/wikiart.csv") ``` To run the code in this notebook from scratch, you will also need the **keras** module for working with neural networks. This are not included in the default Anaconda Python installation and need to be installed seperately. The code below checks if you have keras installed. If you do, it will be loaded. Otherwise, a flag will be set so that the code below that requires keras will load the pre-loaded data. ``` import keras if importlib.util.find_spec("keras") is not None: from keras.applications.vgg19 import VGG19 from keras.preprocessing import image from keras.applications.vgg19 import preprocess_input, decode_predictions from keras.models import Model keras_flag = True else: keras_flag = False ``` If you are struggling with installing these, we are happy to assist. You'll be able to follow along without keras, but will not be able to apply the techniques you learned today to new datasets without it. ## Step 10: Applying deep learning with neural networks We start by loading a particular neural network model called VGG19. It contains 25 layers and over 143 million parameters. The code below reads in the entire model and prints out it structure (unless keras is unavailable, in which case a saved version of the model is printed just for reference). ``` if keras_flag: vgg19_full = VGG19(weights='imagenet') vgg19_full.summary() else: with open('data/vgg19.txt','r') as f: for line in f: print(line, end='') ``` The VGG19 model was trained to identify 1000 classes of objects within an image. 
It was built as part of the ImageNet challenge, one of the most influential computer vision competitions that has been running since 2010. We will load a test photo of my dog and see what classes the model predicts for the image. We will use a slightly different function to read in the image that scales it to have 224-by-224 pixels as required by the algorithm. ``` img_path = join("images", "test", "dog.jpg") if keras_flag: img = image.load_img(img_path, target_size=(224, 224)) x = image.img_to_array(img) else: img = imread(img_path) x = img.copy().astype(np.float32) x = np.expand_dims(x, axis=0) x = preprocess_input(x) x.shape ``` Notice that it is now a four dimensional array, a point that we will come back to in a moment. We can look at the image here using the `imshow` function. ``` plt.imshow(img) ``` Assuming you have keras installed, the code here takes the image `x` and predicts values from the model. Notice that the output of the model is a sequence of 1000 numbers. These indicate the predicted probability that the image contains each of one of the 1000 pre-selected categories. The function `decode_predictions` converts these to give the names of the five most likely categories. ``` if keras_flag: y = vgg19_full.predict(x) print(y.shape) for pred in decode_predictions(y)[0]: print(pred) else: print((1, 1000)) y = np.load(join('data', 'dog_pred.npy')) for pred in decode_predictions(y)[0]: print(pred) ``` The largest predicted class is a "Shih-Tzu", incidently an exact match for his breed! The other dogs are all similarly sized dogs, and obvious choices for making a mistake. Now, let's compute the category predictions for each image in the corpus. This involves reading in each image in the wikiart corpus and then running them through the VGG19 model. This can take some time, particularly on an older machine, so we have created a flag called `process_new`. Keep it to `False` to load pre-computed categories; you can switch it to `True` if you want to compute them directly ``` process_new = False if process_new: wikiart_img = np.zeros((wikiart.shape[0], 224, 224, 3)) for index, row in wikiart.iterrows(): img_path = join('images', 'wikiart', row['filename']) img = image.load_img(img_path, target_size=(224, 224)) x = image.img_to_array(img) wikiart_img[index, :, :, :] = x if (index % 50) == 0: print("Done with {0:03d}".format(index)) wikiart_img = preprocess_input(wikiart_img) wikiart_raw = vgg19_full.predict(wikiart_img, verbose=True) wikiart_vgg19 = decode_predictions(wikiart_raw, top=20) else: wikiart_vgg19 = np.load("data/wikiart_vgg19_categories.npy") print(wikiart_vgg19.shape) ``` What's the most common top category type for this collection? When can use the Python module `collections` to look at the top-10 most common: ``` collections.Counter(wikiart_vgg19[:, 1, 1]).most_common(10) ``` Cliffs and fountains both seem reasonable, but I doubt there are many jigsaw puzzels in the wikiart corpus. **Any idea by this might be so common?** ## Step 11: Neural network embedding The VGG19 model was constructed in order to predict the objects present in an image, but there is a lot more that we can do with the model. The amazing property of deep learning is that the intermediate results in the neural network operate by detecting lower-level features of the image. For example, the first few detect edges and textures, the next few by understanding shapes, and the latter ones put these together to detect objects. 
This is incredibly useful because it means that looking at the intermediate outputs can tell us something interesting about the images beyond just the 1000 predicted categories. Assuming the keras module is installed, we will create a new model that outputs the second-to-last output of the model. The prediction of this contains 4096 dimensions. These do not correspond directly to categories, but (in theory) images containing similar objects should have similar 4096-dimensional values. ``` if keras_flag: vgg_fc2 = Model(inputs=vgg19_full.input, outputs=vgg19_full.get_layer('fc2').output) y = vgg_fc2.predict(x) print(y.shape) else: print((1, 4096)) ``` We can use this new model to predict values on the set of images `wikiart_img`. As above, this can take a few minutes, so you may want to load the pre-saved data again by keeping `process_new` equal to `False`. ``` process_new = False if process_new: wikiart_fc2 = vgg_fc2.predict(wikiart_img, verbose=True) wikiart_fc2.shape else: wikiart_fc2 = np.load("data/wikiart_vgg19_fc2.npy") print(wikiart_fc2.shape) ``` Now, we can use these values to figure out which images are similar to another image. This is similar to the closest saturation values, but using a more complex numeric metric for comparison. Compare the results here with those from saturation alone: ``` plt.figure(figsize=(14, 14)) dists = np.sum(np.abs(wikiart_fc2 - wikiart_fc2[1, :]), 1) idx = np.argsort(dists.flatten())[:12] for ind, i in enumerate(idx): try: plt.subplots_adjust(left=0, right=1, bottom=0, top=1) plt.subplot(3, 4, ind + 1) img_path = join('images', 'wikiart', wikiart.iloc[i]['filename']) img = imread(img_path) plt.imshow(img) plt.axis("off") except: pass ``` The images are all impressionist paintings of trees, showing how the model matches both the content and style of the original. **In the code below, look at the recommendations for the image you used back in part 7.**
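A small variation worth trying: the ranking above uses the L1 distance between fc2 activations, and cosine similarity, which ignores the overall magnitude of the activations, is another common choice. Here is a minimal sketch, assuming `wikiart_fc2` and `wikiart` are loaded as above and reusing image 1 as the query.

```
# Rank images by cosine similarity of their fc2 embeddings to the query image.
query = wikiart_fc2[1, :]
norms = np.linalg.norm(wikiart_fc2, axis=1) * np.linalg.norm(query)
cos_sim = wikiart_fc2 @ query / norms

idx = np.argsort(-cos_sim)[:12]   # most similar first; rank 0 is the query itself
for rank, i in enumerate(idx):
    print(rank, wikiart.iloc[i]['filename'], round(float(cos_sim[i]), 3))
```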
# Putting the "Re" in Reformer: Ungraded Lab This ungraded lab will explore Reversible Residual Networks. You will use these networks in this week's assignment that utilizes the Reformer model. It is based on on the Transformer model you already know, but with two unique features. * Locality Sensitive Hashing (LSH) Attention to reduce the compute cost of the dot product attention and * Reversible Residual Networks (RevNets) organization to reduce the storage requirements when doing backpropagation in training. In this ungraded lab we'll start with a quick review of Residual Networks and their implementation in Trax. Then we will discuss the Revnet architecture and its use in Reformer. ## Outline - [Part 1: Residual Networks](#1) - [1.1 Branch](#1.1) - [1.2 Residual Model](#1.2) - [Part 2: Reversible Residual Networks](#2) - [2.1 Trax Reversible Layers](#2.1) - [2.2 Residual Model](#2.2) ``` import trax from trax import layers as tl # core building block import numpy as np # regular ol' numpy from trax.models.reformer.reformer import ( ReversibleHalfResidualV2 as ReversibleHalfResidual, ) # unique spot from trax import fastmath # uses jax, offers numpy on steroids from trax import shapes # data signatures: dimensionality and type from trax.fastmath import numpy as jnp # For use in defining new layer types. from trax.shapes import ShapeDtype from trax.shapes import signature ``` ## Part 1.0 Residual Networks [Deep Residual Networks ](https://arxiv.org/abs/1512.03385) (Resnets) were introduced to improve convergence in deep networks. Residual Networks introduce a shortcut connection around one or more layers in a deep network as shown in the diagram below from the original paper. <center><img src = "Revnet7.PNG" height="250" width="250"></center> <center><b>Figure 1: Residual Network diagram from original paper</b></center> The [Trax documentation](https://trax-ml.readthedocs.io/en/latest/notebooks/layers_intro.html#2.-Inputs-and-Outputs) describes an implementation of Resnets using `branch`. We'll explore that here by implementing a simple resnet built from simple function based layers. Specifically, we'll build a 4 layer network based on two functions, 'F' and 'G'. <img src = "Revnet8.PNG" height="200" width="1400"> <center><b>Figure 2: 4 stage Residual network</b></center> Don't worry about the lengthy equations. Those are simply there to be referenced later in the notebook. <a name="1.1"></a> ### Part 1.1 Branch Trax `branch` figures prominently in the residual network layer so we will first examine it. You can see from the figure above that we will need a function that will copy an input and send it down multiple paths. This is accomplished with a [branch layer](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#module-trax.layers.combinators), one of the Trax 'combinators'. Branch is a combinator that applies a list of layers in parallel to copies of inputs. Lets try it out! First we will need some layers to play with. Let's build some from functions. 
``` # simple function taking one input and one output bl_add1 = tl.Fn("add1", lambda x0: (x0 + 1), n_out=1) bl_add2 = tl.Fn("add2", lambda x0: (x0 + 2), n_out=1) bl_add3 = tl.Fn("add3", lambda x0: (x0 + 3), n_out=1) # try them out x = np.array([1]) print(bl_add1(x), bl_add2(x), bl_add3(x)) # some information about our new layers print( "name:", bl_add1.name, "number of inputs:", bl_add1.n_in, "number of outputs:", bl_add1.n_out, ) bl_3add1s = tl.Branch(bl_add1, bl_add2, bl_add3) bl_3add1s ``` Trax uses the concept of a 'stack' to transfer data between layers. For Branch, for each of its layer arguments, it copies the `n_in` inputs from the stack and provides them to the layer, tracking the max_n_in, or the largest n_in required. It then pops the max_n_in elements from the stack. <img src = "branch1.PNG" height="260" width="600"> <center><b>Figure 3: One in, one out Branch</b></center> On output, each layer, in succession pushes its results onto the stack. Note that the push/pull operations impact the top of the stack. Elements that are not part of the operation (n, and m in the diagram) remain intact. ``` # n_in = 1, Each bl_addx pushes n_out = 1 elements onto the stack bl_3add1s(x) # n = np.array([10]); m = np.array([20]) # n, m will remain on the stack n = "n" m = "m" # n, m will remain on the stack bl_3add1s([x, n, m]) ``` Each layer in the input list copies as many inputs from the stack as it needs, and their outputs are successively combined on stack. Put another way, each element of the branch can have differing numbers of inputs and outputs. Let's try a more complex example. ``` bl_addab = tl.Fn( "addab", lambda x0, x1: (x0 + x1), n_out=1 ) # Trax figures out how many inputs there are bl_rep3x = tl.Fn( "add2x", lambda x0: (x0, x0, x0), n_out=3 ) # but you have to tell it how many outputs there are bl_3ops = tl.Branch(bl_add1, bl_addab, bl_rep3x) ``` In this case, the number if inputs being copied from the stack varies with the layer <img src = "branch2.PNG" height="260" width="600"> <center><b>Figure 4: variable in, variable out Branch</b></center> The stack when the operation is finished is 5 entries reflecting the total from each layer. ``` # Before Running this cell, what is the output you are expecting? y = np.array([3]) bl_3ops([x, y, n, m]) ``` Branch has a special feature to support Residual Network. If an argument is 'None', it will pull the top of stack and push it (at its location in the sequence) onto the output stack <img src = "branch3.PNG" height="260" width="600"> <center><b>Figure 5: Branch for Residual</b></center> ``` bl_2ops = tl.Branch(bl_add1, None) bl_2ops([x, n, m]) ``` <a name="1.2"></a> ### Part 1.2 Residual Model OK, your turn. Write a function 'MyResidual', that uses `tl.Branch` and `tl.Add` to build a residual layer. If you are curious about the Trax implementation, you can see the code [here](https://github.com/google/trax/blob/190ec6c3d941d8a9f30422f27ef0c95dc16d2ab1/trax/layers/combinators.py). ``` def MyResidual(layer): return tl.Serial( ### START CODE HERE ### # tl.----, # tl.----, ### END CODE HERE ### ) # Lets Try it mr = MyResidual(bl_add1) x = np.array([1]) mr([x, n, m]) ``` **Expected Result** (array([3]), 'n', 'm') Great! Now, let's build the 4 layer residual Network in Figure 2. You can use `MyResidual`, or if you prefer, the tl.Residual in Trax, or a combination! 
``` Fl = tl.Fn("F", lambda x0: (2 * x0), n_out=1) Gl = tl.Fn("G", lambda x0: (10 * x0), n_out=1) x1 = np.array([1]) resfg = tl.Serial( ### START CODE HERE ### # None, #Fl # x + F(x) # None, #Gl # x + F(x) + G(x + F(x)) etc # None, #Fl # None, #Gl ### END CODE HERE ### ) # Lets try it resfg([x1, n, m]) ``` **Expected Results** (array([1089]), 'n', 'm') <a name="2"></a> ## Part 2.0 Reversible Residual Networks The Reformer utilized RevNets to reduce the storage requirements for performing backpropagation. <img src = "Reversible2.PNG" height="260" width="600"> <center><b>Figure 6: Reversible Residual Networks </b></center> The standard approach on the left above requires one to store the outputs of each stage for use during backprop. By using the organization to the right, one need only store the outputs of the last stage, y1, y2 in the diagram. Using those values and running the algorithm in reverse, one can reproduce the values required for backprop. This trades additional computation for memory space which is at a premium with the current generation of GPU's/TPU's. One thing to note is that the forward functions produced by two networks are similar, but they are not equivalent. Note for example the asymmetry in the output equations after two stages of operation. <img src = "Revnet1.PNG" height="340" width="1100"> <center><b>Figure 7: 'Normal' Residual network (Top) vs REversible Residual Network </b></center> ### Part 2.1 Trax Reversible Layers Let's take a look at how this is used in the Reformer. ``` refm = trax.models.reformer.ReformerLM( vocab_size=33000, n_layers=2, mode="train" # Add more options. ) refm ``` Eliminating some of the detail, we can see the structure of the network. <img src = "Revnet2.PNG" height="300" width="350"> <center><b>Figure 8: Key Structure of Reformer Reversible Network Layers in Trax </b></center> We'll review the Trax layers used to implement the Reversible section of the Reformer. First we can note that not all of the reformer is reversible. Only the section in the ReversibleSerial layer is reversible. In a large Reformer model, that section is repeated many times making up the majority of the model. <img src = "Revnet3.PNG" height="650" width="1600"> <center><b>Figure 9: Functional Diagram of Trax elements in Reformer </b></center> The implementation starts by duplicating the input to allow the two paths that are part of the reversible residual organization with [Dup](https://github.com/google/trax/blob/190ec6c3d941d8a9f30422f27ef0c95dc16d2ab1/trax/layers/combinators.py#L666). Note that this is accomplished by copying the top of stack and pushing two copies of it onto the stack. This then feeds into the ReversibleHalfResidual layer which we'll review in more detail below. This is followed by [ReversibleSwap](https://github.com/google/trax/blob/190ec6c3d941d8a9f30422f27ef0c95dc16d2ab1/trax/layers/reversible.py#L83). As the name implies, this performs a swap, in this case, the two topmost entries in the stack. This pattern is repeated until we reach the end of the ReversibleSerial section. At that point, the topmost 2 entries of the stack represent the two paths through the network. These are concatenated and pushed onto the stack. The result is an entry that is twice the size of the non-reversible version. Let's look more closely at the [ReversibleHalfResidual](https://github.com/google/trax/blob/190ec6c3d941d8a9f30422f27ef0c95dc16d2ab1/trax/layers/reversible.py#L154). 
This layer is responsible for executing the layer or layers provided as arguments and adding the output of those layers, the 'residual', to the top of the stack. Below is the 'forward' routine which implements this. <img src = "Revnet4.PNG" height="650" width="1600"> <center><b>Figure 10: ReversibleHalfResidual code and diagram </b></center> Unlike the previous residual function, the value that is added is from the second path rather than the input to the set of sublayers in this layer. Note that the Layers called by the ReversibleHalfResidual forward function are not modified to support reverse functionality. This layer provides them a 'normal' view of the stack and takes care of reverse operation. Let's try out some of these layers! We'll start with the ones that just operate on the stack, Dup() and Swap(). ``` x1 = np.array([1]) x2 = np.array([5]) # Dup() duplicates the Top of Stack and returns the stack dl = tl.Dup() dl(x1) # ReversibleSwap() duplicates the Top of Stack and returns the stack sl = tl.ReversibleSwap() sl([x1, x2]) ``` You are no doubt wondering "How is ReversibleSwap different from Swap?". Good question! Lets look: <img src = "Revnet5.PNG" height="389" width="1000"> <center><b>Figure 11: Two versions of Swap() </b></center> The ReverseXYZ functions include a "reverse" compliment to their "forward" function that provides the functionality to run in reverse when doing backpropagation. It can also be run in reverse by simply calling 'reverse'. ``` # Demonstrate reverse swap print(x1, x2, sl.reverse([x1, x2])) ``` Let's try ReversibleHalfResidual, First we'll need some layers.. ``` Fl = tl.Fn("F", lambda x0: (2 * x0), n_out=1) Gl = tl.Fn("G", lambda x0: (10 * x0), n_out=1) ``` Just a note about ReversibleHalfResidual. As this is written, it resides in the Reformer model and is a layer. It is invoked a bit differently that other layers. Rather than tl.XYZ, it is just ReversibleHalfResidual(layers..) as shown below. This may change in the future. ``` half_res_F = ReversibleHalfResidual(Fl) print(type(half_res_F), "\n", half_res_F) half_res_F([x1, x1]) # this is going to produce an error - why? # we have to initialize the ReversibleHalfResidual layer to let it know what the input is going to look like half_res_F.init(shapes.signature([x1, x1])) half_res_F([x1, x1]) ``` Notice the output: (DeviceArray([3], dtype=int32), array([1])). The first value, (DeviceArray([3], dtype=int32) is the output of the "Fl" layer and has been converted to a 'Jax' DeviceArray. The second array([1]) is just passed through (recall the diagram of ReversibleHalfResidual above). The final layer we need is the ReversibleSerial Layer. This is the reversible equivalent of the Serial layer and is used in the same manner to build a sequence of layers. <a name="2.2"></a> ### Part 2.2 Build a reversible model We now have all the layers we need to build the model shown below. Let's build it in two parts. First we'll build 'blk' and then a list of blk's. And then 'mod'. 
<center><img src = "Revnet6.PNG" height="800" width="1600"> </center> <center><b>Figure 12: Reversible Model we will build using Trax components </b></center> ``` blk = [ # a list of the 4 layers shown above ### START CODE HERE ### None, None, None, None, ] blks = [None, None] ### END CODE HERE ### mod = tl.Serial( ### START CODE HERE ### None, None, None, ### END CODE HERE ### ) mod ``` **Expected Output** ``` Serial[ Dup_out2 ReversibleSerial_in2_out2[ ReversibleHalfResidualV2_in2_out2[ Serial[ F ] ] ReversibleSwap_in2_out2 ReversibleHalfResidualV2_in2_out2[ Serial[ G ] ] ReversibleSwap_in2_out2 ReversibleHalfResidualV2_in2_out2[ Serial[ F ] ] ReversibleSwap_in2_out2 ReversibleHalfResidualV2_in2_out2[ Serial[ G ] ] ReversibleSwap_in2_out2 ] Concatenate_in2 ] ``` ``` mod.init(shapes.signature(x1)) out = mod(x1) out ``` **Expected Result** DeviceArray([ 65, 681], dtype=int32) OK, now you have had a chance to try all the 'Reversible' functions in Trax. On to the Assignment!
# MICROSOFT STOCK PRICE TIME SERIES ANALYSIS Within this Jupyter Notebook, we will attempt to forecast and model the future price of Microsoft stock through Time Series Analysis and SARIMAX models, while also modelling out potential trades and profits that could be made based on our forecast through training and test sets ## Importing Necessary Libraries ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt from datetime import timedelta import statsmodels.api as sm from statsmodels.tsa.seasonal import seasonal_decompose from statsmodels.tsa.stattools import adfuller from statsmodels.tsa.stattools import acf from statsmodels.tsa.stattools import pacf from statsmodels.tsa.arima_model import ARMA from pmdarima import auto_arima from sklearn import metrics from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, roc_auc_score ``` ## Reading in the Stock Price Data and Preprocessing the Data ``` MSFT = pd.read_csv('MSFT.csv') MSFT.head() MSFT.Date = pd.to_datetime(MSFT.Date) MSFT = MSFT.sort_values('Date') MSFT.Date = MSFT.Date + timedelta(hours = 16) stock = MSFT[['Date', 'Close']].copy() stock.columns = ['Date', 'Price'] MSFT.Date = MSFT.Date - timedelta(hours = 6.5) MSFT.head() stock.head() MSFT.columns = ['Date', 'Price', 'High', 'Low', 'Close', 'Adj Close', 'Volume'] stock = pd.concat([stock, MSFT[['Date', 'Price']]], axis = 0) stock = stock.sort_values('Date') stock.set_index(stock.Date, inplace = True) stock = stock.drop('Date', 1) stock.head() ``` ## Creating a Function to Test for Stationarity ``` def test_stationarity(timeseries): rolmean = pd.Series(timeseries).rolling(12).mean() rolstd = pd.Series(timeseries).rolling(12).mean() fig = plt.figure(figsize = (12, 8)) orig = plt.plot(timeseries, color = 'blue', label = 'Original') mean = plt.plot(rolmean, color = 'red', label = 'Rolling Average') std = plt.plot(rolstd, color = 'black', label = 'Rolling Standard Deviation') plt.legend(loc = 'best') plt.title('Comparison of Stock Price, Rolling Average, and Rolling Standard Deviation') plt.show() print('Results of Augmented Dickey-Fuller Test: ') dftest = adfuller(timeseries, autolag = 'AIC') dfoutput = pd.Series(dftest[0: 4], index = ['Test Statistic', 'P-Value', 'No. Lags Used', 'No. Observations Used']) for key, value in list(dftest[4].items()): dfoutput['Critical Value (%s)' %key] = value print(dfoutput) ``` Now let us examine the stock price's Autocorrelation and Partial Autocorrelation plots ``` fig = plt.figure(figsize = (12, 8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(stock.Price, lags = 200, ax = ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(stock.Price, lags = 200, ax = ax2) plt.show() ``` We can see that the Autocorrelation of the Microsoft stock prices is weak, perhaps suggesting that the autocorrelation is not statistically significant within the 95% confidence interval. Observing the Partial Autocorrelation graph, however, while we also do not see clear significant indication of partial autocorrelation within the 95% confidence interval, the results seem more promising in comparison to the Autocorrelation graph and the existence of significant partial autocorrelation may exist. We can also observe the graphs for different lag values. 
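To look at different lag values numerically rather than only graphically, the `acf` and `pacf` helpers imported earlier return the raw correlation values, which can be compared against the approximate 95% significance bound of ±1.96/√N. A minimal sketch:

```
# Numeric ACF/PACF values for the first 30 lags, with a rough 95% significance bound.
n_lags = 30
acf_vals = acf(stock.Price, nlags=n_lags)
pacf_vals = pacf(stock.Price, nlags=n_lags)
conf_bound = 1.96 / np.sqrt(len(stock.Price))

for lag in range(1, n_lags + 1):
    flag = '*' if abs(pacf_vals[lag]) > conf_bound else ' '
    print(f'lag {lag:2d}  acf {acf_vals[lag]: .3f}  pacf {pacf_vals[lag]: .3f} {flag}')
```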
## Building the SARIMAX Model ``` optimal = auto_arima(stock.Price, start_p = 0, start_q = 0, test = 'adf', seasonal = True) optimal.summary() model = sm.tsa.statespace.SARIMAX(stock.Price, trend = 'n', order = (3, 1, 0)) results = model.fit() results.summary() stock['Forecast'] = results.predict(dynamic = False) stock[['Price', 'Forecast']].plot(figsize = (12, 8)) plt.show() stock[['Price', 'Forecast']].iloc[300:].plot(figsize = (12, 8)) plt.show() stock['Forecast'] = [np.NaN for i in range(300)] + list(results.predict(start = 300, end = 507, dynamic = False)) stock[['Price', 'Forecast']].iloc[400:].plot(figsize = (12, 8)) plt.show() def calculate_profit(timeseries, predictions, stop): capital = 0 transactions = 0 own = False last_buy = 0 current_price = 0 for num, i in enumerate(timeseries.iloc[:stop].iterrows()): if i[1][predictions] == 1 and own == False: capital -= i[1]['Price'] own = True transactions += 1 last_buy = i[1]['Price'] elif i[1][predictions] == 0 and own == True: capital += i[1]['Price'] own = False transactions += 1 else: pass current_price = i[1]['Price'] print('Currently Owning?: ', own) print('Last Buying Price: $', last_buy) print('Current Price: $', current_price) print('Current Cash: $', capital) if own == True: print('Profit: $', current_price + capital) else: print('Profit: $', capital) print('Number of Transactins:', transactions) print('Cost of transactions: $', transactions * 5) ``` The above function simulates trading using our SARIMAX predictions, and we assume a $5 transaction fee. ``` stock['Target'] = stock.Price.shift(-1) stock.tail() stock['Forecast'] = results.predict(dynamic = False) stock['Predicted Growth'] = stock[['Forecast', 'Price']].apply(lambda x: 1 if x[0] - x[1] >= 0 else 0, axis = 1) stock['Actual Growth'] = stock[['Target', 'Price']].apply(lambda x: 1 if x[0] - x[1] >= 0 else 0, axis = 1) calculate_profit(stock.iloc[300:], 'Predicted Growth', -1) calculate_profit(stock.iloc[300:], 'Actual Growth', -1) stock.head() ``` ## Building the ARMA Model ``` model = ARMA(stock.Price, (3, 1)).fit() model.summary() stock['Forecast'] = model.predict() stock['Target'] = stock.Price.shift(-1) stock.head() plt.scatter(stock.iloc[1: -1].Target, stock.iloc[1: -1].Forecast) plt.xlabel('True Values') plt.ylabel('Predictions') print('Score: ', metrics.r2_score(stock.iloc[1: -1].Target, stock.iloc[1: -1].Forecast)) print('MSE: ', metrics.mean_squared_error(stock.iloc[1: -1].Target, stock.iloc[1: -1].Forecast)) plt.show() stock['Predicted Growth'] = stock[['Forecast', 'Price']].apply(lambda x: 1 if x[0] - x[1] >= 5 else 0 if x[0] - x[1] <= -5 else 2, axis = 1) stock['Actual Growth'] = stock[['Target', 'Price']].apply(lambda x: 1 if x[0] - x[1] >= 5 else 0 if x[0] - x[1] <= -5 else 2, axis = 1) print('Model Results: ') calculate_profit(stock.iloc[300:], 'Predicted Growth', -1) print('\nBest Case Results: ') calculate_profit(stock.iloc[300:], 'Actual Growth', -1) rolmean = pd.Series(stock.Price).rolling(20).mean() stock['Rolling Mean'] = rolmean stock.head() stock['First Difference'] = stock['Rolling Mean'] - stock['Rolling Mean'].shift(1) test_stationarity(stock['First Difference'].dropna(inplace = False)) fig = plt.figure(figsize = (12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(stock['Rolling Mean'].iloc[5:], lags=10, ax = ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(stock['Rolling Mean'].iloc[5:], lags=5, ax=ax2) plt.show() optimal = auto_arima(stock['Rolling Mean'].dropna(inplace = False), start_p = 0, start_q = 0, test = 
'adf', seasonal = True) optimal.summary() model = sm.tsa.statespace.SARIMAX(stock['Rolling Mean'], trend = 'n', order = (4, 0, 0)) results = model.fit() results.summary() stock['Forecast'] = results.predict() stock['Target'] = stock['Rolling Mean'].shift(-1) plt.scatter(stock.iloc[21:-1].Target, stock.iloc[21:-1].Forecast) plt.xlabel("True Values") plt.ylabel("Predictions") print("Score: ", metrics.r2_score(stock.iloc[21:-1].Target, stock.iloc[21:-1].Forecast)) print("MSE:", metrics.mean_squared_error(stock.iloc[21:-1].Target, stock.iloc[21:-1].Forecast)) plt.show() stock['Predicted Growth'] = stock[['Forecast', 'Price']].apply(lambda x: 1 if x[0] - x[1] >= 0 else 0, axis = 1) stock['Actual Growth'] = stock[['Target', 'Price']].apply(lambda x: 1 if x[0] - x[1] >= 0 else 0, axis = 1) print("Model Resuls: ") calculate_profit(stock.iloc[300:], 'Predicted Growth', -1) print("\nBest Case Results:") calculate_profit(stock.iloc[300:], 'Actual Growth', -1) def evaluate_model(truth, predictions, model = None, X = None): cm = confusion_matrix(truth, predictions) print('True Negative: ', cm[0, 0], '| False Positive: ', cm[0, 1]) print('False Negative: ', cm[1, 0], '| True Positive: ', cm[1, 1], '\n') sensitivity = cm[1, 1]/ (cm[1, 0] + cm[1, 1]) specificity = cm[0, 0]/ (cm[0, 1] + cm[0, 0]) print('Sensitivity (TP/ TP + FN): ', sensitivity) print('Specificity (TN/ TN + FP): ', specificity, '\n') print('Accuracy: ', accuracy_score(truth, predictions, normalize = True)) print('Precision: ', precision_score(truth, predictions)) if model != None: print('Roc-Auc: ', roc_auc_score(truth, [x[1] for x in model.predict_proba(X)])) else: pass print('\n') evaluate_model(stock['Actual Growth'], stock['Predicted Growth']) stock['Predicted Growth'] = stock[['Forecast', 'Price']].apply(lambda x: 1 if x[0] - x[1] >= 5 else 0 if x[0] - x[1] <= -5 else 2, axis = 1) stock['Actual Growth'] = stock[['Target', 'Price']].apply(lambda x: 1 if x[0] - x[1] >= 5 else 0 if x[0] - x[1] <= -5 else 2, axis = 1) print("Model Resuls: ") calculate_profit(stock.iloc[300:], 'Predicted Growth', -1) print("\nBest Case Results:") calculate_profit(stock.iloc[300:], 'Actual Growth', -1) profits = [] for i in range(1, 30): rolmean = pd.Series(stock.Price).rolling(i).mean() stock['Rolling Mean'] = rolmean model = sm.tsa.statespace.SARIMAX(stock['Rolling Mean'], trend='n', order=(4, 0, 0)) results = model.fit() results.summary() stock['Forecast'] = results.predict() stock['Target'] = stock['Rolling Mean'].shift(-1) stock['Predicted Growth'] = stock[['Forecast', 'Price']].apply(lambda x: 1 if x[0] - x[1] >= 5 else 0 if x[0] - x[1] <= -5 else 2, axis = 1) calculate_profit(stock.iloc[300:], 'Predicted Growth', -1) ```
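The models above are fit on the full series and evaluated in-sample. For the train/test framing mentioned in the introduction, one possible sketch is to hold out the final observations, fit SARIMAX on the rest, and compare the out-of-sample forecast against the held-out prices. The 400-observation split point is an arbitrary choice, and the (3, 1, 0) order is carried over from the earlier fit rather than re-tuned.

```
# Hold out the tail of the series for out-of-sample evaluation.
train, test = stock.Price.iloc[:400], stock.Price.iloc[400:]

model_oos = sm.tsa.statespace.SARIMAX(train, trend='n', order=(3, 1, 0))
results_oos = model_oos.fit()

# Forecast exactly the held-out horizon and compare against the actual prices.
forecast = results_oos.get_forecast(steps=len(test))
pred_mean = forecast.predicted_mean
conf_int = forecast.conf_int()

print('Out-of-sample MSE:', metrics.mean_squared_error(test.values, pred_mean.values))

plt.figure(figsize=(12, 8))
plt.plot(train.index, train.values, label='Train')
plt.plot(test.index, test.values, label='Test')
plt.plot(test.index, pred_mean.values, '--', label='Forecast')
plt.fill_between(test.index, conf_int.iloc[:, 0].values, conf_int.iloc[:, 1].values, alpha=0.2)
plt.legend()
plt.show()
```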
The visualization used for this homework is based on Alexandr Verinov's code.

# Generative models

In this homework we will try several criteria for learning an implicit model. Almost everything is written for you, and you only need to implement the objective for the game and play around with the model.

**0)** Read the code

**1)** Implement the objective for a vanilla [Generative Adversarial Network](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf) (GAN). The hyperparameters are already set in the code. The model will converge if you implement the objective (1) right.

**2)** Note the discussion in the paper that the objective for $G$ can be of two kinds: $\min_G \log(1 - D)$ and $\min_G -\log(D)$. Implement the second objective and ensure the model converges. Most likely you will not notice the difference in this example, but people usually use the second objective; it really matters in more complicated scenarios.

**3 & 4)** Implement [Wasserstein GAN](https://arxiv.org/abs/1701.07875) (WGAN) and [WGAN-GP](https://arxiv.org/abs/1704.00028). To make the discriminator have the Lipschitz property you need to clip the discriminator's weights to the $[-0.01, 0.01]$ range (WGAN) or use a gradient penalty (WGAN-GP). You will need to make a few modifications to the code: 1) remove the sigmoid from the discriminator, 2) add weight clipping / the gradient penalty, 3) change the objective. See [implementation 1](https://github.com/martinarjovsky/WassersteinGAN/) / [implementation 2](https://github.com/caogang/wgan-gp). They also use a different optimizer. The default hyperparameters may not work, so spend time tuning them.

**5) Bonus: same thing without GANs** Implement a maximum mean discrepancy (MMD) estimator. MMD is a discrepancy measure between distributions; in our case we use it to measure the discrepancy between real and fake data. You need to implement the RBF kernel $k(x,x')=\exp \left(-{\frac {1}{2\sigma ^{2}}}||x-x'||^{2}\right)$ and an MMD estimator (see eq. 8 from https://arxiv.org/pdf/1505.03906.pdf). MMD is then used instead of the discriminator.
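For the bonus task, it can help to have something to check your own implementation against. The sketch below is one possible (biased, V-statistic) estimate of MMD² with an RBF kernel; it is meant as a checking aid, not as the required solution, and the bandwidth `sigma` is a free parameter you still have to choose (e.g. with the median heuristic).

```
import torch

def rbf_kernel(x, y, sigma=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)), computed for all pairs at once.
    xx = (x ** 2).sum(dim=1, keepdim=True)           # (n, 1)
    yy = (y ** 2).sum(dim=1, keepdim=True)           # (m, 1)
    sq_dists = xx + yy.t() - 2.0 * torch.mm(x, y.t())  # (n, m) pairwise squared distances
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of MMD^2: mean k(x,x') + mean k(y,y') - 2 mean k(x,y).
    k_xx = rbf_kernel(x, x, sigma).mean()
    k_yy = rbf_kernel(y, y, sigma).mean()
    k_xy = rbf_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy
```

In task 5, minimizing `mmd2(generated_batch, real_batch)` with respect to the generator's parameters plays the role that the discriminator loss plays in the GAN setups.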
``` """ Please, implement everything in one notebook, using if statements to switch between the tasks """ TASK = 1 # 2, 3, 4, 5 ``` # Imports ``` import numpy as np import time from torch.autograd import Variable import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torch import matplotlib.pyplot as plt %matplotlib inline np.random.seed(12345) lims=(-5, 5) ``` # Define sampler from real data and Z ``` from scipy.stats import rv_discrete MEANS = np.array( [[-1,-3], [1,3], [-2,0], ]) COVS = np.array( [[[1,0.8],[0.8,1]], [[1,-0.5],[-0.5,1]], [[1,0],[0,1]], ]) PROBS = np.array([ 0.2, 0.5, 0.3 ]) assert len(MEANS) == len(COVS) == len(PROBS), "number of components mismatch" COMPONENTS = len(MEANS) comps_dist = rv_discrete(values=(range(COMPONENTS), PROBS)) def sample_true(N): comps = comps_dist.rvs(size=N) conds = np.arange(COMPONENTS)[:,None] == comps[None,:] arr = np.array([np.random.multivariate_normal(MEANS[c], COVS[c], size=N) for c in range(COMPONENTS)]) return np.select(conds[:,:,None], arr).astype(np.float32) NOISE_DIM = 20 def sample_noise(N): return np.random.normal(size=(N,NOISE_DIM)).astype(np.float32) ``` # Visualization functions ``` def vis_data(data): """ Visualizes data as histogram """ hist = np.histogram2d(data[:, 1], data[:, 0], bins=100, range=[lims, lims]) plt.pcolormesh(hist[1], hist[2], hist[0], alpha=0.5) fixed_noise = sample_noise(1000) def vis_g(): """ Visualizes generator's samples as circles """ data = generator(Variable(torch.Tensor(fixed_noise))).data.numpy() if np.isnan(data).any(): return plt.scatter(data[:,0], data[:,1], alpha=0.2, c='b') plt.xlim(lims) plt.ylim(lims) def vis_d(): """ Visualizes discriminator's gradient on grid """ X, Y = np.meshgrid(np.linspace(lims[0], lims[1], 30), np.linspace(lims[0], lims[1], 30)) X = X.flatten() Y = Y.flatten() grid = Variable(torch.Tensor(np.vstack([X, Y]).T), requires_grad=True) data_gen = generator(Variable(torch.Tensor(fixed_noise))) loss = d_loss(discriminator(data_gen), discriminator(grid)) loss.backward() grads = - grid.grad.data.numpy() plt.quiver(X, Y, grads[:, 0], grads[:, 1], color='black',alpha=0.9) ``` # Define architectures After you've passed task 1 you can play with architectures. 
#### Generator ``` class Generator(nn.Module): def __init__(self, noise_dim, out_dim, hidden_dim=100): super(Generator, self).__init__() self.fc1 = nn.Linear(noise_dim, hidden_dim) nn.init.xavier_normal(self.fc1.weight) nn.init.constant(self.fc1.bias, 0.0) self.fc2 = nn.Linear(hidden_dim, hidden_dim) nn.init.xavier_normal(self.fc2.weight) nn.init.constant(self.fc2.bias, 0.0) self.fc3 = nn.Linear(hidden_dim, out_dim) nn.init.xavier_normal(self.fc3.weight) nn.init.constant(self.fc3.bias, 0.0) def forward(self, z): """ Generator takes a vector of noise and produces sample """ h1 = F.tanh(self.fc1(z)) h2 = F.leaky_relu(self.fc2(h1)) y_gen = self.fc3(h2) return y_gen ``` #### Discriminator ``` class Discriminator(nn.Module): def __init__(self, in_dim, hidden_dim=100): super(Discriminator, self).__init__() self.fc1 = nn.Linear(in_dim, hidden_dim) nn.init.xavier_normal(self.fc1.weight) nn.init.constant(self.fc1.bias, 0.0) self.fc2 = nn.Linear(hidden_dim, hidden_dim) nn.init.xavier_normal(self.fc2.weight) nn.init.constant(self.fc2.bias, 0.0) self.fc3 = nn.Linear(hidden_dim, hidden_dim) nn.init.xavier_normal(self.fc3.weight) nn.init.constant(self.fc3.bias, 0.0) self.fc4 = nn.Linear(hidden_dim, 1) nn.init.xavier_normal(self.fc4.weight) nn.init.constant(self.fc4.bias, 0.0) def forward(self, x): h1 = F.tanh(self.fc1(x)) h2 = F.leaky_relu(self.fc2(h1)) h3 = F.leaky_relu(self.fc3(h2)) score = F.sigmoid(self.fc4(h3)) return score ``` # Define updates and losses ``` generator = Generator(NOISE_DIM, out_dim = 2) discriminator = Discriminator(in_dim = 2) lr = 0.001 g_optimizer = optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999)) d_optimizer = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999)) ``` Notice we are using ADAM optimizer with `beta1=0.5` for both discriminator and discriminator. This is a common practice and works well. Motivation: models should be flexible and adapt itself rapidly to the distributions. You can try different optimizers and parameters. ``` ################################ # IMPLEMENT HERE # Define the g_loss and d_loss here # these are the only lines of code you need to change to implement GAN game def g_loss(): # if TASK == 1: # do something return # TODO def d_loss(): # if TASK == 1: # do something return # TODO ################################ ``` # Get real data ``` data = sample_true(100000) def iterate_minibatches(X, batchsize, y=None): perm = np.random.permutation(X.shape[0]) for start in range(0, X.shape[0], batchsize): end = min(start + batchsize, X.shape[0]) if y is None: yield X[perm[start:end]] else: yield X[perm[start:end]], y[perm[start:end]] plt.rcParams['figure.figsize'] = (12, 12) vis_data(data) vis_g() vis_d() ``` **Legend**: - Blue dots are generated samples. - Colored histogram at the back shows density of real data. - And with arrows we show gradients of the discriminator -- they are the directions that discriminator pushes generator's samples. 
# Train the model ``` from IPython import display plt.xlim(lims) plt.ylim(lims) num_epochs = 100 batch_size = 64 # =========================== # IMPORTANT PARAMETER: # Number of D updates per G update # =========================== k_d, k_g = 4, 1 accs = [] try: for epoch in range(num_epochs): for input_data in iterate_minibatches(data, batch_size): # Optimize D for _ in range(k_d): # Sample noise noise = Variable(torch.Tensor(sample_noise(len(input_data)))) # Do an update inp_data = Variable(torch.Tensor(input_data)) data_gen = generator(noise) loss = d_loss(discriminator(data_gen), discriminator(inp_data)) d_optimizer.zero_grad() loss.backward() d_optimizer.step() # Optimize G for _ in range(k_g): # Sample noise noise = Variable(torch.Tensor(sample_noise(len(input_data)))) # Do an update data_gen = generator(noise) loss = g_loss(discriminator(data_gen)) g_optimizer.zero_grad() loss.backward() g_optimizer.step() # Visualize plt.clf() vis_data(data); vis_g(); vis_d() display.clear_output(wait=True) display.display(plt.gcf()) except KeyboardInterrupt: pass ``` # Describe your findings here A ya tomat.
BYÖYO 2018 Introduction to Machine Learning I

Ali Taylan Cemgil

2 July 2018

# Parametric Regression, Parametric Function Fitting

The problem of fitting a parametric function $f$ to given input-output pairs $x, y$. We want to choose the parameter values $w$ such that
$$ y \approx f(x; w) $$

$x$: Input
$y$: Output
$w$: Parameter (weight)
$e$: Error

Example 1:
$$ e = y - f(x) $$

Example 2:
$$ e = \frac{y}{f(x)}-1 $$

$E$, $D$: Error function, Divergence

# Linear Regression

The case where the function $f$ to be fitted is linear in the **model parameters** $w$ (it does not have to be linear in the inputs $x$).

## Definition: Linearity

A function $g$ is linear if, for any scalars $a$ and $b$,
$$ g(aw_1 + b w_2) = a g(w_1) + b g(w_2) $$

## Example: Line Fitting

* Input-output pairs $$ (x_i, y_i) $$ $i=1\dots N$
* Model $$ y_i \approx f(x; w_1, w_0) = w_0 + w_1 x $$

> $x$ : Input
> $w_1$: Slope
> $w_0$: Intercept

$f_i \equiv f(x_i; w_1, w_0)$

## Example 2: Fitting a Parabola

* Input-output pairs $$ (x_i, y_i) $$ $i=1\dots N$
* Model $$ y_i \approx f(x_i; w_2, w_1, w_0) = w_0 + w_1 x_i + w_2 x_i^2 $$

> $x$ : Input
> $w_2$: Coefficient of the quadratic term
> $w_1$: Coefficient of the linear term
> $w_0$: Constant term

$f_i \equiv f(x_i; w_2, w_1, w_0)$

A parabola is not a linear function of $x$, but it is a linear function of the parameters $w_2, w_1, w_0$.

```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

from __future__ import print_function
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets

import matplotlib.pylab as plt
from IPython.display import clear_output, display, HTML

x = np.array([8.0 , 6.1 , 11., 7., 9., 12. , 4., 2., 10, 5, 3])
y = np.array([6.04, 4.95, 5.58, 6.81, 6.33, 7.96, 5.24, 2.26, 8.84, 2.82, 3.68])

def plot_fit(w1, w0):
    f = w0 + w1*x

    plt.figure(figsize=(4,3))
    plt.plot(x,y,'sk')
    plt.plot(x,f,'o-r')
    #plt.axis('equal')
    plt.xlim((0,15))
    plt.ylim((0,10))
    for i in range(len(x)):
        plt.plot((x[i],x[i]),(f[i],y[i]),'b')
#    plt.show()

#    plt.figure(figsize=(4,1))
    plt.bar(x,(f-y)**2/2)
    plt.title('Total squared error = '+str(np.sum((f-y)**2/2)))
    plt.ylim((0,10))
    plt.xlim((0,15))
    plt.show()

plot_fit(0.0,3.79)

interact(plot_fit, w1=(-2, 2, 0.01), w0=(-5, 5, 0.01));
```

Real data: the number of vehicles in Turkey

```
%matplotlib inline
import scipy as sc
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pylab as plt

df_arac = pd.read_csv(u'data/arac.csv',sep=';')
df_arac[['Year','Car']]
#df_arac

BaseYear = 1995
x = np.matrix(df_arac.Year[0:]).T-BaseYear
y = np.matrix(df_arac.Car[0:]).T/1000000.

plt.plot(x+BaseYear, y, 'o-')
plt.xlabel('Year')
plt.ylabel('Cars (Millions)')
plt.show()

%matplotlib inline
from __future__ import print_function
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets

import matplotlib.pylab as plt
from IPython.display import clear_output, display, HTML

w_0 = 0.27150786
w_1 = 0.37332256

BaseYear = 1995
x = np.matrix(df_arac.Year[0:]).T-BaseYear
y = np.matrix(df_arac.Car[0:]).T/1000000.

fig, ax = plt.subplots()
f = w_1*x + w_0
plt.plot(x+BaseYear, y, 'o-')
ln, = plt.plot(x+BaseYear, f, 'r')
plt.xlabel('Years')
plt.ylabel('Number of Cars (Millions)')
ax.set_ylim((-2,13))
plt.close(fig)

def set_line(w_1, w_0):
    f = w_1*x + w_0
    e = y - f
    ln.set_ydata(f)
    ax.set_title('Total Error = {} '.format(np.asscalar(e.T*e/2)))
    display(fig)

set_line(0.32,3)

interact(set_line, w_1=(-2, 2, 0.01), w_0=(-5, 5, 0.01));

w_0 = 0.27150786
w_1 = 0.37332256
w_2 = 0.1

BaseYear = 1995
x = np.array(df_arac.Year[0:]).T-BaseYear
y = np.array(df_arac.Car[0:]).T/1000000.

fig, ax = plt.subplots()

f = w_2*x**2 + w_1*x + w_0

plt.plot(x+BaseYear, y, 'o-')
ln, = plt.plot(x+BaseYear, f, 'r')
plt.xlabel('Year')
plt.ylabel('Number of Cars (Millions)')
ax.set_ylim((-2,13))
plt.close(fig)

def set_line(w_2, w_1, w_0):
    f = w_2*x**2 + w_1*x + w_0
    e = y - f
    ln.set_ydata(f)
    ax.set_title('Mean Squared Error = {} '.format(np.sum(e*e/len(e))))
    display(fig)

set_line(w_2, w_1, w_0)

interact(set_line, w_2=(-0.1,0.1,0.001), w_1=(-2, 2, 0.01), w_0=(-5, 5, 0.01))
```

## Example 1, continued: Learning the Model

* Learning: parameter estimation $w = [w_0, w_1]$
* Since the model generally cannot explain the data perfectly, we define an error for each data point: $$e_i = y_i - f(x_i; w)$$
* Total squared error $$ E(w) = \frac{1}{2} \sum_i (y_i - f(x_i; w))^2 = \frac{1}{2} \sum_i e_i^2 $$
* We can try to reduce the total squared error by varying the parameters $w_0$ and $w_1$.
* The error surface

```
from itertools import product

BaseYear = 1995
x = np.matrix(df_arac.Year[0:]).T-BaseYear
y = np.matrix(df_arac.Car[0:]).T/1000000.

# Setup the vandermonde matrix
N = len(x)
A = np.hstack((np.ones((N,1)), x))

left = -5
right = 15
bottom = -4
top = 6
step = 0.05
W0 = np.arange(left,right, step)
W1 = np.arange(bottom,top, step)

ErrSurf = np.zeros((len(W1),len(W0)))

for i,j in product(range(len(W1)), range(len(W0))):
    e = y - A*np.matrix([W0[j], W1[i]]).T
    ErrSurf[i,j] = e.T*e/2

plt.figure(figsize=(7,7))
plt.imshow(ErrSurf, interpolation='nearest',
           vmin=0, vmax=1000, origin='lower',
           extent=(left,right,bottom,top), cmap='Blues_r')
plt.xlabel('w0')
plt.ylabel('w1')
plt.title('Error Surface')
plt.colorbar(orientation='horizontal')
plt.show()
```

# How Can We Estimate the Model?

## Idea: Least squares (Gauss 1795, Legendre 1805)

* Compute the derivative of the total error with respect to $w_0$ and $w_1$, set it to zero, and solve the resulting equations

\begin{eqnarray} \left( \begin{array}{c} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{array} \right) \approx \left( \begin{array}{cc} 1 & x_0 \\ 1 & x_1 \\ \vdots \\ 1 & x_{N-1} \end{array} \right) \left( \begin{array}{c} w_0 \\ w_1 \end{array} \right) \end{eqnarray}

\begin{eqnarray} y \approx A w \end{eqnarray}

> $A = A(x)$: Model matrix
> $w$: Model parameters
> $y$: Observations

* Error vector: $$e = y - Aw$$

\begin{eqnarray} E(w) & = & \frac{1}{2}e^\top e = \frac{1}{2}(y - Aw)^\top (y - Aw)\\ & = & \frac{1}{2}y^\top y - \frac{1}{2} y^\top Aw - \frac{1}{2} w^\top A^\top y + \frac{1}{2} w^\top A^\top Aw \\ & = & \frac{1}{2} y^\top y - y^\top Aw + \frac{1}{2} w^\top A^\top Aw \\ \end{eqnarray}

### Gradient

https://tr.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/the-gradient

\begin{eqnarray} \frac{d E}{d w } & = & \left(\begin{array}{c} \partial E/\partial w_0 \\ \partial E/\partial w_1 \\ \vdots \\ \partial E/\partial w_{K-1} \end{array}\right) \end{eqnarray}

The gradient of the total error:

\begin{eqnarray} \frac{d}{d w }E(w) & = & \frac{d}{d w }(\frac{1}{2} y^\top y) &+ \frac{d}{d w }(- y^\top Aw) &+ \frac{d}{d w }(\frac{1}{2} w^\top A^\top Aw) \\ & = & 0 &- A^\top y &+ A^\top A w \\ & = & - A^\top (y - Aw) \\ & = & - A^\top e \\ & \equiv & \nabla E(w) \end{eqnarray}

### Identities everyone working in AI should know

* Gradient of a vector inner product
\begin{eqnarray} \frac{d}{d w }(h^\top w) & = & h \end{eqnarray}

* Gradient of a quadratic form
\begin{eqnarray} \frac{d}{d w }(w^\top K w) & = & (K+K^\top) w \end{eqnarray}

### For linear models, the least squares solution is found by solving a system of linear equations

\begin{eqnarray} w^* & = & \arg\min_{w} E(w) \end{eqnarray}

* Optimality condition (the gradient must be zero)
\begin{eqnarray} \nabla E(w^*) & = & 0 \end{eqnarray}

\begin{eqnarray} 0 & = & - A^\top y + A^\top A w^* \\ A^\top y & = & A^\top A w^* \\ w^* & = & (A^\top A)^{-1} A^\top y \end{eqnarray}

* Geometric (projection) interpretation:
\begin{eqnarray} f & = A w^* = A (A^\top A)^{-1} A^\top y \end{eqnarray}

```
# Solving the Normal Equations

# Setup the Design matrix
N = len(x)
A = np.hstack((np.ones((N,1)), x))
#plt.imshow(A, interpolation='nearest')

# Solve the least squares problem
w_ls,E,rank,sigma = np.linalg.lstsq(A, y)

print('Parameters: \nw0 = ', w_ls[0],'\nw1 = ', w_ls[1] )
print('Total squared error:', E/2)

f = np.asscalar(w_ls[1])*x + np.asscalar(w_ls[0])

plt.plot(x+BaseYear, y, 'o-')
plt.plot(x+BaseYear, f, 'r')
plt.xlabel('Year')
plt.ylabel('Number of cars (millions)')
plt.show()
```

## Polynomials

### Parabola

\begin{eqnarray} \left( \begin{array}{c} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{array} \right) \approx \left( \begin{array}{ccc} 1 & x_0 & x_0^2 \\ 1 & x_1 & x_1^2 \\ \vdots \\ 1 & x_{N-1} & x_{N-1}^2 \end{array} \right) \left( \begin{array}{c} w_0 \\ w_1 \\ w_2 \end{array} \right) \end{eqnarray}

### Polynomial of degree $K$

\begin{eqnarray} \left( \begin{array}{c} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{array} \right) \approx \left( \begin{array}{ccccc} 1 & x_0 & x_0^2 & \dots & x_0^K \\ 1 & x_1 & x_1^2 & \dots & x_1^K\\ \vdots \\ 1 & x_{N-1} & x_{N-1}^2 & \dots & x_{N-1}^K \end{array} \right) \left( \begin{array}{c} w_0 \\ w_1 \\ w_2 \\ \vdots \\ w_K \end{array} \right) \end{eqnarray}

\begin{eqnarray} y \approx A w \end{eqnarray}

> $A = A(x)$: Model matrix
> $w$: Model parameters
> $y$: Observations

The specially structured matrices that arise in polynomial fitting are also called __Vandermonde__ matrices.

```
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5])
N = len(x)
x = x.reshape((N,1))
y = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]).reshape((N,1))
#y = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]).reshape((N,1))
#y = np.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]).reshape((N,1))

def fit_and_plot_poly(degree):

    #A = np.hstack((np.power(x,0), np.power(x,1), np.power(x,2)))
    A = np.hstack((np.power(x,i) for i in range(degree+1)))

    # Setup the vandermonde matrix
    xx = np.matrix(np.linspace(np.asscalar(min(x))-1,np.asscalar(max(x))+1,300)).T
    A2 = np.hstack((np.power(xx,i) for i in range(degree+1)))

    #plt.imshow(A, interpolation='nearest')
    # Solve the least squares problem
    w_ls,E,rank,sigma = np.linalg.lstsq(A, y)
    f = A2*w_ls
    plt.plot(x, y, 'o')
    plt.plot(xx, f, 'r')
    plt.xlabel('x')
    plt.ylabel('y')

    plt.gca().set_ylim((0,20))
    #plt.gca().set_xlim((1950,2025))

    if E:
        plt.title('Degree = '+str(degree)+' Error='+str(E[0]))
    else:
        plt.title('Degree = '+str(degree)+' Error= 0')

    plt.show()

fit_and_plot_poly(0)

interact(fit_and_plot_poly, degree=(0,10))
```

Overfitting.
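Returning to the derivation above, which obtains the gradient $\nabla E(w) = -A^\top e$ in closed form and then solves the normal equations directly: as a complement, here is a minimal sketch of iterative gradient descent on the same objective, reusing the 11-point data set and the straight-line design matrix from the last cell. The step size and iteration count are arbitrary illustrative choices.

```
# Gradient descent on E(w) = 0.5 * ||y - A w||^2, using the gradient -A^T (y - A w) derived above.
A = np.hstack((np.ones((N,1)), x))   # design matrix for a straight line, reusing x, y, N from above
w = np.zeros((2,1))                  # start from w0 = w1 = 0
eta = 0.001                          # learning rate (illustrative choice)

for it in range(20000):
    e = y - A.dot(w)                 # residuals
    grad = -A.T.dot(e)               # gradient of the total squared error
    w = w - eta * grad

print('gradient descent :', w.ravel())
print('normal equations :', np.linalg.lstsq(A, y)[0].ravel())
```

Both prints should agree to within the tolerance set by the learning rate and the number of iterations, which illustrates that gradient descent converges to the same least-squares solution.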
## Introduction to the Pipelines SDK

The [Kubeflow Pipelines SDK](https://github.com/kubeflow/pipelines/tree/master/sdk) provides a set of Python packages that you can use to specify and run your machine learning (ML) workflows. A pipeline is a description of an ML workflow, including all of the components that make up the steps in the workflow and how the components interact with each other.

The Kubeflow website has a very detailed explanation of Kubeflow components; please go to [Introduction to the Pipelines SDK](https://www.kubeflow.org/docs/pipelines/sdk/sdk-overview/) for details.

## Install the Kubeflow Pipelines SDK

This guide tells you how to install the [Kubeflow Pipelines SDK](https://github.com/kubeflow/pipelines/tree/master/sdk), which you can use to build machine learning pipelines. You can use the SDK to execute your pipeline, or alternatively you can upload the pipeline to the Kubeflow Pipelines UI for execution.

All of the SDK's classes and methods are described in the auto-generated [SDK reference docs](https://kubeflow-pipelines.readthedocs.io/en/latest/).

Run the following command to install the Kubeflow Pipelines SDK:

```
!pip install kfp --upgrade --user
```

After successful installation, the command `dsl-compile` should be available. You can use this command to verify it:

```
!which dsl-compile
```

> Note: Please check the official documentation to understand Pipelines concepts before you move forward. [Introduction to Pipelines SDK](https://www.kubeflow.org/docs/pipelines/sdk/sdk-overview/)

## Build simple components and pipelines

In this example, we want to calculate the sum of four numbers.

1. Let's assume we have a Python image to use. It accepts two arguments and returns their sum.
2. The sum of a and b is then added to the sum of c and d to produce the final result. In total, we will have three arithmetic operators. Then we use an echo operator to print the result.

### 1. Create a container image for each component

This assumes that you have already created a program to perform the task required in a particular step of your ML workflow. For example, if the task is to train an ML model, then you must have a program that does the training. Your component can create `outputs` that the downstream components can use as `inputs`. These will be used to build the job's Directed Acyclic Graph (DAG).

> In this case, we will use a Python base image to do the calculation. We skip building our own image.

### 2. Create a Python function to wrap your component

Define a Python function to describe the interactions with the Docker container image that contains your pipeline component. Here, in order to simplify the process, we use a simple way to calculate the sum. Ideally, you would build a new container image for each code change.

```
import kfp
from kfp import dsl

def add_two_numbers(a, b):
    return dsl.ContainerOp(
        name='calculate_sum',
        image='python:3.6.8',
        command=['python', '-c'],
        arguments=['with open("/tmp/results.txt", "a") as file: file.write(str({} + {}))'.format(a, b)],
        file_outputs={
            'data': '/tmp/results.txt',
        }
    )

def echo_op(text):
    return dsl.ContainerOp(
        name='echo',
        image='library/bash:4.4.23',
        command=['sh', '-c'],
        arguments=['echo "Result: {}"'.format(text)]
    )
```

### 3. Define your pipeline as a Python function

Describe each pipeline as a Python function.

```
@dsl.pipeline(
    name='Calculate sum pipeline',
    description='Calculate sum of numbers and prints the result.'
)
def calculate_sum(
    a=7,
    b=10,
    c=4,
    d=7):
    """A four-step pipeline with the first two steps running in parallel."""
    sum1 = add_two_numbers(a, b)
    sum2 = add_two_numbers(c, d)
    sum = add_two_numbers(sum1.output, sum2.output)

    echo_task = echo_op(sum.output)
```

### 4. Compile the pipeline

Compile the pipeline to generate a compressed YAML definition of the pipeline. The Kubeflow Pipelines service converts the static configuration into a set of Kubernetes resources for execution.

There are two ways to compile the pipeline: either use the Python library call `kfp.compiler.Compiler.compile` or the `dsl-compile` command-line binary.

```
kfp.compiler.Compiler().compile(calculate_sum, 'calculate-sum-pipeline.zip')

# If you have a Python file, you can also build the pipeline using the `dsl-compile` command.
# dsl-compile --py [path/to/python/file] --output my-pipeline.zip
```

### 5. Deploy pipeline

There are two ways to deploy the pipeline: either upload the generated `.zip` file through the Kubeflow Pipelines UI, or use the Kubeflow Pipelines SDK to deploy it. We will only show SDK usage here.

```
client = kfp.Client()
aws_experiment = client.create_experiment(name='aws')
my_run = client.run_pipeline(aws_experiment.id, 'calculate-sum-pipeline', 'calculate-sum-pipeline.zip')
```
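The run submitted above executes asynchronously. If you want the notebook to block until it finishes, a hedged sketch using the SDK's run-completion helper could look like the following; the 20-minute timeout is an arbitrary choice for illustration, not something required by this example.

```
# Wait for the submitted run to finish and print its final status.
# `my_run` is the run object returned by client.run_pipeline() above.
run_detail = client.wait_for_run_completion(my_run.id, timeout=1200)
print(run_detail.run.status)
```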
github_jupyter
```
import numpy as np

class MultiLayerPerceptron():

    def __init__(self):
        self.inputLayer = 1
        self.hiddenLayer = np.array([3])
        self.outputLayer = [1]
        self.learningRate = 0.5
        self.weights = self.starting_weights()
        self.bias = self.starting_bias()
        self.transferFunc = np.tanh
        self.outputFunc = lambda x: 1*x

    def starting_weights(self):
        # init weights
        weightsTmp = []
        # input layer
        x = np.random.uniform(-0.5, 0.5, self.hiddenLayer[0]*self.inputLayer)
        weightsTmp.append(x.reshape((self.inputLayer, self.hiddenLayer[0])))
        # further hidden layers
        for i in range(1, len(self.hiddenLayer)):
            x = np.random.uniform(-0.5, 0.5, self.hiddenLayer[i]*self.hiddenLayer[i-1])
            weightsTmp.append(x.reshape((self.hiddenLayer[i-1], self.hiddenLayer[i])))
        # output layer
        x = np.random.uniform(-0.5, 0.5, self.hiddenLayer[-1]*self.outputLayer[0])
        weightsTmp.append(x.reshape((self.hiddenLayer[-1], self.outputLayer[0])))
        return weightsTmp

    def starting_bias(self):
        # init bias
        biasTmp = []
        # input layer
        x = np.random.uniform(-0.5, 0.5, self.hiddenLayer[0])
        biasTmp.append(x.reshape((1, self.hiddenLayer[0])))
        # further hidden layers
        for i in range(1, len(self.hiddenLayer)):
            x = np.random.uniform(-0.5, 0.5, self.hiddenLayer[i])
            biasTmp.append(x.reshape((1, self.hiddenLayer[i])))
        # TODO: output bias?
        return biasTmp

    def forward_propagation(self, x):
        # perform forward propagation
        totalInput = []
        totalAct = []
        # add single dimension
        X = np.expand_dims(x, axis=0)
        # input layer
        totalInput.append(np.dot(X, self.weights[0]) + self.bias[0])
        totalAct.append(self.transferFunc(totalInput[-1]))
        # hidden layer
        for i in range(1, len(self.hiddenLayer)):
            totalInput.append(np.dot(totalAct[-1], self.weights[i]) + self.bias[i])
            totalAct.append(self.transferFunc(totalInput[-1]))
        # output layer
        totalInput.append(np.dot(totalAct[-1], self.weights[-1]))
        totalAct.append(self.outputFunc(totalInput[-1]))
        return totalInput, totalAct

    def back_propagation(self, x):
        # TODO: backpropagation is not implemented yet (see the sketch below this cell)
        pass

    def fit(self, x, y):
        pred = []
        for elem in x:
            totalInput, totalAct = self.forward_propagation(elem)
            pred.append(totalAct[-1])
        # get rid of dimensions
        pred = np.array(pred)[:,0,0]
        # squared error per sample
        outputError = 0.5 * (y - pred)**2
        return outputError

data = np.genfromtxt("RegressionData.txt")
x = data[:, 0]
y = data[:, 1]

mlp = MultiLayerPerceptron()
#totalInput, totalAct = mlp.forward_propagation(x)
error = mlp.fit(x, y)
print(error)

data = np.genfromtxt("RegressionData.txt")
x = data[:, 0]
y = data[:, 1]
x = np.expand_dims(x, axis=1)
X = np.zeros((10,3), dtype=x.dtype) + x
print(X.shape)

z = [3]
print(z[-1])
```
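The `back_propagation` method above is still a stub. As a self-contained, hedged sketch (an illustration under its own assumptions, not the author's implementation), a full gradient step for a 1-hidden-layer network with `tanh` hidden units, identity output and squared-error loss could look like this:

```
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 50).reshape(-1, 1)   # inputs, shape (N, 1)
Y = np.sin(3 * X)                           # regression targets, shape (N, 1)

W1 = rng.uniform(-0.5, 0.5, (1, 3)); b1 = np.zeros((1, 3))
W2 = rng.uniform(-0.5, 0.5, (3, 1)); b2 = np.zeros((1, 1))
lr = 0.05

for epoch in range(2000):
    # forward pass
    H = np.tanh(X @ W1 + b1)        # hidden activations, shape (N, 3)
    P = H @ W2 + b2                 # identity output, shape (N, 1)
    err = P - Y
    # backward pass (mean squared-error loss)
    dP = 2 * err / len(X)
    dW2 = H.T @ dP
    db2 = dP.sum(axis=0, keepdims=True)
    dH = (dP @ W2.T) * (1 - H**2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0, keepdims=True)
    # gradient step
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", float(np.mean(err ** 2)))
```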
github_jupyter
``` import sys import os import h5py import json import numpy as np %matplotlib inline import matplotlib.pyplot as plt import IPython.display as ipd from stimuli_f0_labels import get_f0_bins, f0_to_label fn = '/om4/group/mcdermott/user/msaddler/pitchnet_dataset/pitchnetDataset/assets/data/processed/dataset_2019-11-22-2300/PND_sr32000_v08.hdf5' # fn = '/om4/group/mcdermott/user/msaddler/pitchnet_dataset/pitchnetDataset/assets/data/processed/dataset_2019-11-16-2300/PND_sr32000_v07.hdf5' # fn = '/om4/group/mcdermott/user/msaddler/pitchnet_dataset/pitchnetDataset/assets/data/processed/dataset_2019-08-16-1200/PND_sr32000_v04.hdf5' f = h5py.File(fn, 'r') for v in f.values(): print(v) for v in f['config'].values(): print(v) file_indexes = f['source_file_index'][:] segment_indexes = f['source_file_row'][:] f0_values = f['nopad_f0_mean'][:] source_file_encoding_dict = f['config/source_file_encoding_dict'][0] source_file_encoding_dict = source_file_encoding_dict.replace('"', '"""') source_file_encoding_dict = source_file_encoding_dict.replace('\'', '"') source_file_encoding_dict = json.loads(source_file_encoding_dict) f.close() file_index_to_filename_map = {} for key in source_file_encoding_dict.keys(): file_index_to_filename_map[source_file_encoding_dict[key]] = os.path.basename(key) f0_bins = get_f0_bins() dataset_separated_histograms = {} dataset_separated_unique_segments = {} dataset_separated_total_segments = {} for file_index in np.unique(file_indexes): f0_values_from_file_idx = f0_values[file_indexes == file_index] segment_indexes_from_file_idx = segment_indexes[file_indexes == file_index] counts, bins = np.histogram(f0_values_from_file_idx, bins=f0_bins) dset_key = file_index_to_filename_map[file_index] dataset_separated_histograms[dset_key] = counts dataset_separated_unique_segments[dset_key] = np.unique(segment_indexes_from_file_idx).shape[0] dataset_separated_total_segments[dset_key] = segment_indexes_from_file_idx.shape[0] dataset_details = { 'RWC': { 'key': 'f0_segments_2019AUG16_rwc.hdf5', 'plot_kwargs': {'color': [0, 0.8, 0]}, }, 'NSYNTH': { 'key': 'f0_segments_2019AUG16_nsynth.hdf5', 'plot_kwargs': {'color': [0, 0.6, 0]} }, 'WSJ': { 'key': 'f0_segments_2019AUG16_wsj.hdf5', 'plot_kwargs': {'color': [0.6, 0.6, 0.6]} }, 'SWC': { 'key': 'f0_segments_2019AUG16_swc.hdf5', 'plot_kwargs': {'color': [0.4, 0.4, 0.4]} }, 'CSLUKIDS': { 'key': 'f0_segments_2019NOV16_cslu_kids.hdf5', 'plot_kwargs': {'color': [0.2, 0.2, 0.2]} }, 'CMUKIDS': { 'key': 'f0_segments_2019NOV22_cmu_kids.hdf5', 'plot_kwargs': {'color': [0.8, 0.8, 0.8]} }, } dataset_list = ['RWC', 'NSYNTH', 'WSJ', 'SWC', 'CSLUKIDS', 'CMUKIDS'] dataset_list.reverse() fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 6)) x = np.arange(0, len(counts)) bottom = np.zeros_like(x) for dataset in dataset_list: key = dataset_details[dataset]['key'] if key in dataset_separated_histograms.keys(): y = dataset_separated_histograms[key] plot_kwargs = dataset_details[dataset]['plot_kwargs'] label = '{:s} ({} total; {} unique; {:.1f} mean repeats)'.format( dataset, dataset_separated_total_segments[key], dataset_separated_unique_segments[key], dataset_separated_total_segments[key] / dataset_separated_unique_segments[key]) ax.fill_between(x, y1=bottom, y2=bottom+y, **plot_kwargs, lw=0, label=label) bottom = bottom + y else: print(key) ax.legend(loc='upper right', framealpha=1, facecolor='w', edgecolor='w', fontsize=10) ax.set_xlim([x[0], x[-1]]) ax.set_ylim([0, np.max(bottom)]) ax.set_ylabel('Number of stimuli') ax.set_xlabel('F0 bin (Hz)') class_indexes 
= np.linspace(x[0], x[-1], 15, dtype=int) f0_class_labels = ['{:.0f}'.format(_) for _ in f0_bins[class_indexes]] ax.set_xticks(class_indexes) ax.set_xticklabels(f0_class_labels) plt.tight_layout() plt.show() # save_dir = '/om2/user/msaddler/pitchnet/assets_psychophysics/figures/archive_2019_12_05_PNDv08_archSearch01/' # save_fn = '2019NOV27_PND_v08_dataset_composition.pdf' # print(os.path.join(save_dir, save_fn)) # fig.savefig(os.path.join(save_dir, save_fn), bbox_inches='tight') # Check how many bins are spanned by speech-only and music-only datasets f0_bin_min=80 f0_bin_max=1e3 f0_min=80 f0_max=450.91752190019395 binwidth_in_octaves=1/192 f0_values = np.arange(f0_min, f0_max+1e-2, 1e-2) f0_bins = get_f0_bins(f0_min=f0_bin_min, f0_max=f0_bin_max, binwidth_in_octaves=binwidth_in_octaves) f0_labels = f0_to_label(f0_values, f0_bins, right=False) # Slightly hacky way to determine the correct value of f0_max to ensure all bins are equally wide f0_min_label = np.squeeze(np.argwhere(f0_bins >= f0_min))[0] f0_max_label = np.squeeze(np.argwhere(f0_bins < f0_max))[-1] + 1 f0_min_label, f0_max_label import sys import os import h5py import glob import numpy as np import scipy.signal %matplotlib inline import matplotlib.pyplot as plt import IPython.display as ipd sys.path.append('/packages/msutil') import util_stimuli import util_misc import util_figures regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/PND_sr32000_v08_*.hdf5' list_fn = sorted(glob.glob(regex_fn)) fn = list_fn[-1] with h5py.File(fn, 'r') as f: sr = f['sr'][0] IDX = np.random.randint(0, f['nopad_f0_mean'].shape[0]) f0 = f['nopad_f0_mean'][IDX] y_fg = util_stimuli.set_dBSPL(f['stimuli/signal'][IDX], 60.0) y_bg = util_stimuli.set_dBSPL(f['stimuli/noise'][IDX], 60.0) fxx, pxx = util_stimuli.power_spectrum(y_fg, sr) fenv, penv = util_stimuli.get_spectral_envelope_lp(y_fg, sr, M=6) penv = penv - penv.max() + pxx.max() print(util_stimuli.get_dBSPL(y_fg)) print(util_stimuli.get_dBSPL(y_bg)) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 2.5)) ax.plot(fxx, pxx, color='k', lw=1.0) ax.plot(fenv, penv, color='r', lw=1.0) ax = util_figures.format_axes(ax, xscale='linear', str_xlabel='Frequency (Hz)', str_ylabel='Power (dB SPL)', xlimits=[40, sr/2], ylimits=None, spines_to_hide=['right', 'top']) plt.show() ipd.display(ipd.Audio(y_fg, rate=sr)) y = y_fg t = np.arange(0, len(y)) / sr x = np.zeros_like(y) for f in np.arange(f0, sr/2, f0): x = x + np.sin(2*np.pi*f*t) # x = np.random.randn(*t.shape) b_lp, a_lp = util_stimuli.get_spectral_envelope_lp_coefficients(y, M=6) x = scipy.signal.lfilter(b_lp, a_lp, x) x = util_stimuli.set_dBSPL(x, 60.0) fyy, pyy = util_stimuli.power_spectrum(y, sr) fxx, pxx = util_stimuli.power_spectrum(x, sr) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 2.5)) ax.plot(fyy, pyy, color='k', lw=1.0) ax.plot(fxx, pxx, color='r', lw=1.0) ax = util_figures.format_axes(ax, xscale='linear', str_xlabel='Frequency (Hz)', str_ylabel='Power (dB SPL)', xlimits=[40, sr/2], ylimits=None, spines_to_hide=['right', 'top']) plt.show() plt.figure(figsize=(8, 1.5)) plt.plot(t, y, color='k') plt.plot(t, x, color='r') plt.show() ipd.display(ipd.Audio(y, rate=sr)) ipd.display(ipd.Audio(x, rate=sr)) import sys import os import h5py import json import glob import numpy as np import scipy.signal %matplotlib inline import matplotlib.pyplot as plt import IPython.display as ipd sys.path.append('/packages/msutil') import util_stimuli import util_misc import util_figures # regex_fn = 
'/om/scratch/*/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv01/SPECTRAL_STATISTICS_v00/*.hdf5' # regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/*.hdf5' # regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_mfcc/matchedPNDv08_snr_neg10pos10_phase0/SPECTRAL_STATISTICS_v00/*.hdf5' regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_mfcc/PNDv08negated12_TLASmatched12_snr_neg10pos10_phase3/SPECTRAL_STATISTICS_v00/*.hdf5' list_fn = sorted(glob.glob(regex_fn)) list_key = ['stimuli/signal', 'stimuli/noise'] dict_mfcc = {key: [] for key in list_key} dict_mean_spectra = {} for itr_fn, fn in enumerate(list_fn): with h5py.File(fn, 'r') as f: sr = f['sr'][0] freqs = f['freqs'][0] nopad_start = f['nopad_start'][0] nopad_end = f['nopad_end'][0] for key in list_key: if itr_fn == 0: dict_mean_spectra[key] = { 'freqs': freqs, 'summed_power_spectrum': np.zeros_like(freqs), 'count': 0, 'nfft': nopad_end - nopad_start, } nrows = f[key + '_power_spectrum'].shape[0] nrows_steps = np.linspace(0, nrows, 2, dtype=int) for nrow_start, nrow_end in zip(nrows_steps[:-1], nrows_steps[1:]): all_spectra = f[key + '_power_spectrum'][nrow_start:nrow_end] # TRUNCATE = -20 # all_spectra[all_spectra < TRUNCATE] = TRUNCATE IDX = np.isfinite(np.sum(all_spectra, axis=1)) dict_mean_spectra[key]['summed_power_spectrum'] += np.sum(all_spectra[IDX], axis=0) dict_mean_spectra[key]['count'] += np.sum(IDX, axis=0) dict_mfcc[key].append(f[key + '_mfcc'][:]) if itr_fn % 5 == 0: print(itr_fn, os.path.basename(fn), dict_mean_spectra[key]['count']) for key in list_key: print('concatenating {} mfcc arrays'.format(key)) dict_mfcc[key] = np.concatenate(dict_mfcc[key], axis=0) results_dict = {} for key in sorted(dict_mfcc.keys()): mfcc_cov = np.cov(dict_mfcc[key], rowvar=False) mfcc_mean = np.mean(dict_mfcc[key], axis=0) results_dict[key] = { 'mfcc_mean': mfcc_mean, 'mfcc_cov': mfcc_cov, 'sr': sr, 'mean_power_spectrum': dict_mean_spectra[key]['summed_power_spectrum'] / dict_mean_spectra[key]['count'], 'mean_power_spectrum_freqs': dict_mean_spectra[key]['freqs'], 'mean_power_spectrum_count': dict_mean_spectra[key]['count'], 'mean_power_spectrum_n_fft': dict_mean_spectra[key]['nfft'], } # results_dict[key]['mean_power_spectrum'] = 10*np.log10(results_dict[key]['mean_power_spectrum']) print(results_dict[key]['mean_power_spectrum'].max()) print(results_dict[key]['mean_power_spectrum'].min()) fn_results_dict = os.path.join(os.path.dirname(fn), 'results_dict.json') with open(fn_results_dict, 'w') as f: json.dump(results_dict, f, sort_keys=True, cls=util_misc.NumpyEncoder) print(fn_results_dict) for k0 in sorted(results_dict.keys()): for k1 in sorted(results_dict[k0].keys()): val = np.array(results_dict[k0][k1]) if len(val.reshape([-1])) > 10: print(k0, k1, val.shape) else: print(k0, k1, val) # nvars = dict_mfcc[key].shape[1] # ncols = 4 # nrows = int(np.ceil(nvars/ncols)) # fig, ax = plt.subplots(ncols=ncols, # nrows=nrows, # figsize=(3*ncols, 2*nrows)) # ax = ax.reshape([-1]) # for ax_idx in range(nvars): # bins = 100#np.linspace(-2.5, 2.5, 100) # for key in sorted(results_dict.keys()): # vals = dict_mfcc[key][:, ax_idx] # ax[ax_idx].hist(vals, bins=bins, alpha=0.5) # ax[ax_idx] = util_figures.format_axes(ax[ax_idx], # str_xlabel='mfcc {}'.format(ax_idx + 1), # str_ylabel='Count', # xlimits=None, # ylimits=None) # for ax_idx in range(nvars, ax.shape[0]): # ax[ax_idx].axis('off') # plt.tight_layout() # plt.show() import sys import os import h5py import json 
import glob import numpy as np import librosa %matplotlib inline import matplotlib.pyplot as plt import IPython.display as ipd sys.path.append('/packages/msutil') import util_stimuli import util_misc import util_figures data_dir = '/om/scratch/Fri/msaddler/data_pitchnet/' basename = 'SPECTRAL_STATISTICS_v00/results_dict_v00.json' list_dataset_tag = [ ('PND_v08/noise_TLAS_snr_neg10pos10', 'Natural sounds'), # 'PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv01', 'Natural sounds (lowpass)'), # 'PND_v08/noise_TLAS_snr_neg10pos10_filter_signalHPv00', 'Natural sounds (highpass)'), # 'PND_v08inst/noise_TLAS_snr_neg10pos10', # 'PND_v08spch/noise_TLAS_snr_neg10pos10', ('PND_mfcc/PNDv08matched12_TLASmatched12_snr_neg10pos10_phase0', 'Synthetic (12-MFCC-matched to natural)'), # 'PND_mfcc/negatedPNDv08_snr_neg10pos10_phase0', # ('PND_mfcc/debug', 'Synthetic (flat spectrum)'), # ('PND_mfcc/PNDv08matched12_TLASmatched12_snr_neg10pos10_phase3', 'Synthetic (12-MFCC-matched)'), # ('PND_mfcc/PNDv08negated12_TLASmatched12_snr_neg10pos10_phase3', 'Synthetic (12-MFCC-negated)'), ] fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 7.5)) clist = 'krbmgyc' for cidx, (dataset_tag, label_tag) in enumerate(list_dataset_tag): fn_results_dict = os.path.join(data_dir, dataset_tag, basename) with open(fn_results_dict, 'r') as f: results_dict = json.load(f) for key in sorted(results_dict.keys()): MEAN_FXX = np.array(results_dict[key]['mean_power_spectrum_freqs']) MEAN_PXX = np.array(results_dict[key]['mean_power_spectrum_envelope']) # MEAN_PXX -= MEAN_PXX.max() # sr = np.array(results_dict[key]['sr']) # nfft = results_dict[key]['mean_power_spectrum_n_fft'] # mfcc_mean = np.array(results_dict[key]['mfcc_mean']) # mfcc_mean[12:] = 0 # M = librosa.filters.mel(sr, nfft, n_mels=len(mfcc_mean)) # Minv = np.linalg.pinv(M) # power_spectrum = util_stimuli.get_power_spectrum_from_mfcc(mfcc_mean, Minv) # MEAN_FXX = np.fft.rfftfreq(nfft, d=1/sr) # MEAN_PXX = 10*np.log10(power_spectrum) color = clist[cidx] ls = '-' if 'noise' in key: ls = '--' ax.plot(MEAN_FXX, MEAN_PXX, label='{} : {}'.format(label_tag, key), lw=2.5, color=color, ls=ls) MEAN_MFCC = np.array(results_dict[key]['mean_power_spectrum_freqs']) ax.legend(loc='lower left') ax = util_figures.format_axes(ax, xscale='log', str_xlabel='Frequency (Hz)', str_ylabel='Power (dB)', xlimits=[40, None], ylimits=[-40, None], spines_to_hide=['right', 'top']) plt.show() import sys import os import h5py import json import glob import copy import pdb import numpy as np import scipy.signal import librosa %matplotlib inline import matplotlib.pyplot as plt import IPython.display as ipd sys.path.append('/packages/msutil') import util_stimuli import util_misc import util_figures fn_results_dict = '/om/scratch/Mon/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/results_dict.json' with open(fn_results_dict, 'r') as f: results_dict = json.load(f) N = 1 fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(7.5, 5.0)) for itrN in range(N): for key in sorted(results_dict.keys())[1:]: mfcc_mean = np.array(results_dict[key]['mfcc_mean']) mfcc_cov = np.array(results_dict[key]['mfcc_cov']) sr = results_dict[key]['sr'] dur = 0.150 nfft = int(dur*sr) M = librosa.filters.mel(sr, nfft, n_mels=len(mfcc_mean)) Minv = np.linalg.pinv(M) mfcc = np.random.multivariate_normal(mfcc_mean, mfcc_cov) mfcc[0:] = 0 power_spectrum = util_stimuli.get_power_spectrum_from_mfcc(mfcc, Minv) power_spectrum_freqs = np.fft.rfftfreq(nfft, d=1/sr) f0 = 250.0 frequencies = np.arange(f0, sr/2, f0) 
amplitudes = np.interp(frequencies, power_spectrum_freqs, np.sqrt(power_spectrum)) signal = util_stimuli.complex_tone(f0, sr, dur, harmonic_numbers=None, frequencies=frequencies, amplitudes=amplitudes, phase_mode='sine', offset_start=True, strict_nyquist=True) # signal = util_stimuli.impose_power_spectrum(signal, power_spectrum) if 'noise' in key: signal = np.random.randn(nfft) signal = util_stimuli.impose_power_spectrum(signal, power_spectrum) kwargs_plot = { 'ls': '-', 'color': 'b', } if 'noise' in key: kwargs_plot['ls'] = '-' kwargs_plot['color'] = 'k' fxx, pxx = util_stimuli.power_spectrum(signal, sr) ax.plot(fxx, pxx-pxx.max(), lw=0.25, color='m') power_spectrum = 10*np.log10(power_spectrum) ax.plot(fxx, power_spectrum-power_spectrum.max(), lw=0.5, **kwargs_plot) MEAN_FXX = np.array(results_dict[key]['mean_power_spectrum_freqs']) MEAN_PXX = np.array(results_dict[key]['mean_power_spectrum']) # MEAN_FXX = np.fft.rfftfreq(nfft, d=1/sr) # MEAN_PXX = util_stimuli.get_power_spectrum_from_mfcc(mfcc_mean, Minv) # MEAN_PXX = 10*np.log10(MEAN_PXX) ax.plot(MEAN_FXX, MEAN_PXX-MEAN_PXX.max(), lw=2.5, **kwargs_plot) # mfcc2 = util_stimuli.get_mfcc(signal, M) # mfcc2[12:] = 0 # pxx2 = util_stimuli.get_power_spectrum_from_mfcc(mfcc2, Minv) # pxx2 = 10*np.log10(pxx2) # fxx2 = np.fft.rfftfreq(len(signal), d=1/sr) # ax.plot(fxx2, pxx2-pxx2.max(), color='g') # Use this function to specify axis scaling, limits, labels, etc. ax = util_figures.format_axes(ax, xscale='linear', str_xlabel='Frequency (Hz)', str_ylabel='Power (dB)', xlimits=[40, None], ylimits=[-80, None], spines_to_hide=['right', 'top']) plt.show() ipd.display(ipd.Audio(signal, rate=sr)) import stimuli_generate_random_synthetic_tones import importlib importlib.reload(stimuli_generate_random_synthetic_tones) spectral_statistics_filename = '/om/scratch/Mon/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/results_dict.json' stimuli_generate_random_synthetic_tones.spectrally_shaped_synthetic_dataset( 'tmp.hdf5', 500, spectral_statistics_filename, fs=32e3, dur=0.150, phase_modes=['cos'], range_f0=[80.0, 1001.3713909809752], range_snr=[-10., 10.], range_dbspl=[30., 90.], n_mfcc=12, invert_signal_filter=0, invert_noise_filter=False, generate_signal_in_fft_domain=False, out_combined_key='stimuli/signal_in_noise', out_signal_key='stimuli/signal', out_noise_key='stimuli/noise', out_snr_key='snr', out_augmentation_prefix='augmentation/', random_seed=0, disp_step=50) import sys import os import h5py import json import glob import copy import pdb import librosa import numpy as np import scipy.signal %matplotlib inline import matplotlib.pyplot as plt import IPython.display as ipd sys.path.append('/packages/msutil') import util_stimuli import util_misc import util_figures key_list = ['stimuli/signal_in_noise']#, 'stimuli/signal', 'stimuli/noise'] with h5py.File('tmp.hdf5', 'r') as f: sr = f['sr'][0] y = {} for k in key_list: y[k] = f[k][np.random.randint(500)] for k in util_misc.get_hdf5_dataset_key_list(f): print(k, f[k]) ipd.display(ipd.Audio(y['stimuli/signal_in_noise'], rate=sr)) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(7.5, 5.0)) for k in key_list: fxx, pxx = util_stimuli.power_spectrum(y[k], sr) ax.plot(fxx, pxx-pxx.max(), lw=0.5, label=k) ax.legend() ax = util_figures.format_axes(ax, xscale='linear', str_xlabel='Frequency (Hz)', str_ylabel='Power (dB SPL)', xlimits=[40, None], ylimits=[-60, None], spines_to_hide=['right', 'top']) plt.show() import sys import os import h5py import glob import numpy as np import 
scipy.signal import scipy.fftpack import librosa %matplotlib inline import matplotlib.pyplot as plt import IPython.display as ipd sys.path.append('/packages/msutil') import util_stimuli import util_misc import util_figures regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/PND_sr32000_v08_*.hdf5' regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_mfcc/PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/stim_0000000-0002100.hdf5' list_fn = sorted(glob.glob(regex_fn)) fn = list_fn[-1] key_y = 'stimuli/signal' key_f0 = 'nopad_f0_mean' key_f0 = 'f0' with h5py.File(fn, 'r') as f: sr = f['sr'][0] IDX = np.random.randint(0, f[key_f0].shape[0]) f0 = f[key_f0][IDX] y = util_stimuli.set_dBSPL(f[key_y][IDX], 60.0) fxx, pxx = util_stimuli.power_spectrum(y, sr) print(f0) harmonic_frequencies = np.arange(f0, sr/2, f0) IDX = np.digitize(harmonic_frequencies, fxx) harmonic_freq_bins = fxx[IDX] spectrum_freq_bins = pxx[IDX] envelope_spectrum = np.interp(fxx, harmonic_freq_bins, spectrum_freq_bins) # philbert = np.abs(scipy.signal.hilbert(pxx+50)) fenv, penv = util_stimuli.get_spectral_envelope_lp(y, sr, M=12) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 5)) ax.plot(fxx, pxx, color='k', lw=1.0) ax.plot(fxx, envelope_spectrum, color='r', lw=2.0) # ax.plot(fenv, penv, color='r', lw=1.0) ax = util_figures.format_axes(ax, xscale='linear', str_xlabel='Frequency (Hz)', str_ylabel='Power (dB SPL)', xlimits=[40, sr/2], ylimits=[-20, None], spines_to_hide=['right', 'top']) plt.show() ipd.display(ipd.Audio(y, rate=sr)) # import sys # import os # import h5py # import json # import glob # import numpy as np # import scipy.signal # %matplotlib inline # import matplotlib.pyplot as plt # import IPython.display as ipd # sys.path.append('/packages/msutil') # import util_stimuli # import util_misc # import util_figures import importlib import stimuli_compute_statistics importlib.reload(stimuli_compute_statistics) import stimuli_analyze_pystraight importlib.reload(stimuli_analyze_pystraight) # regex_fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/PND*.hdf5' regex_fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_mfcc/PNDv08PYSnegated12_TLASmatched12_snr_neg10pos10_phase3/PYSTRAIGHT_v01_foreground/*.hdf5' print(regex_fn) stimuli_analyze_pystraight.summarize_pystraight_statistics( regex_fn, fn_results='results_dict.json', key_sr='sr', key_signal_list=['stimuli/signal']) # regex_fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/PND*.hdf5' # regex_fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_mfcc/PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/SPECTRAL_STATISTICS_v00/*.hdf5' # print(regex_fn) # stimuli_compute_statistics.summarize_spectral_statistics(regex_fn, # fn_results='results_dict.json', # key_sr='sr', # key_f0=None, # key_signal_list=['stimuli/signal', 'stimuli/noise']) import sys import os import h5py import glob import numpy as np import scipy.signal import scipy.fftpack import librosa %matplotlib inline import matplotlib.pyplot as plt import IPython.display as ipd sys.path.append('/packages/msutil') import util_stimuli import util_misc import util_figures regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/*.hdf5' # regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_mfcc/debug_PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/PYSTRAIGHT_v01_foreground/*.hdf5' list_fn = 
sorted(glob.glob(regex_fn)) fn = list_fn[0] key_f0 = 'f0' key_y = 'stimuli/signal_INTERP_interp_signal' with h5py.File(fn, 'r') as f: for k in util_misc.get_hdf5_dataset_key_list(f): print(k, f[k].shape) sr = f['sr'][0] IDX = np.random.randint(0, f[key_y].shape[0]) # IDX = 5 y = util_stimuli.set_dBSPL(f[key_y][IDX], 60.0) if True:#key_f0 in f: f0 = f[key_f0][IDX] print('------------>', f0, f['pystraight_success'][IDX]) NTMP = f[key_y].shape[1] power = 0 while NTMP > 2: NTMP /= 2 power += 1 n_fft = int(2 ** power) M = librosa.filters.mel(sr, n_fft, n_mels=40) Minv = np.linalg.pinv(M) fxx = np.fft.rfftfreq(n_fft, d=1/sr) pxx = f['stimuli/signal_FILTER_spectrumSTRAIGHT'][IDX] mfcc = f['stimuli/signal_FILTER_spectrumSTRAIGHT_mfcc'][IDX] # mfcc = scipy.fftpack.dct(np.log(np.matmul(M, pxx)), norm='ortho') mfcc[12:] = 0 pxx_mfcc = np.matmul(Minv, np.exp(scipy.fftpack.idct(mfcc, norm='ortho'))) pxx_mfcc[pxx_mfcc < 0] = 0 pxx_mfcc = 10*np.log10(pxx_mfcc) pxx_mfcc -= pxx_mfcc.max() pxx_straight = pxx pxx_straight = 10*np.log10(pxx_straight) pxx_straight -= pxx_straight.max() fyy, pyy = util_stimuli.power_spectrum(y, sr) pyy -= pyy.max() fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 5)) ax.plot(fyy, pyy, color='k', lw=1.0) ax.plot(fxx, pxx_straight, color='r', lw=2.0) ax.plot(fxx, pxx_mfcc, color='g', lw=2.0) ax = util_figures.format_axes(ax, xscale='linear', str_xlabel='Frequency (Hz)', str_ylabel='Power (dB SPL)', xlimits=[40, sr/2], # ylimits=[-100, 10], spines_to_hide=['right', 'top']) plt.show() ipd.display(ipd.Audio(y, rate=sr)) import sys import os import h5py import json import glob import numpy as np import librosa %matplotlib inline import matplotlib.pyplot as plt import IPython.display as ipd sys.path.append('/packages/msutil') import util_stimuli import util_misc import util_figures data_dir = '/om/scratch/Fri/msaddler/data_pitchnet/' list_dataset_tag = [ # ('PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/results_dict.json', 'Natural power spectrum'), ('PND_mfcc/debug_PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/SPECTRAL_STATISTICS_v00/results_dict.json', 'Synthetic power spectrum (12-MFCC-matched to natural)'), # ('PND_v08/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural filter spectrum'), # ('PND_mfcc/debug_PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/SPECTRAL_STATISTICS_v00/results_dict.json', 'Synthetic filter spectrum (12-MFCC-matched to natural)'), # ('PND_mfcc/debug_PNDv08PYSnegated12_TLASmatched12_snr_neg10pos10_phase3/SPECTRAL_STATISTICS_v00/results_dict.json', 'Synthetic filter spectrum (12-MFCC-matched to natural)'), # ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv01/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural foreground (lowpass-filtered)'), # ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalHPv00/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural foreground (highpass-filtered)'), # ('PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/results_dict.json', 'Natural'), # ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv00/SPECTRAL_STATISTICS_v00/results_dict.json', 'Natural (lowpass_v00)'), # ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv01/SPECTRAL_STATISTICS_v00/results_dict.json', 'Natural (lowpass_v01)'), # ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalHPv00/SPECTRAL_STATISTICS_v00/results_dict.json', 'Natural (highpass_v00)'), # ('PND_v08/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural'), # 
('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv00/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural (lowpass_v00)'), # ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv01/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural (lowpass_v01)'), # ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalHPv00/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural (highpass_v00)'), # ('PND_mfcc/PNDv08matched12_TLASmatched12_snr_neg10pos10_phase0/SPECTRAL_STATISTICS_v00/results_dict.json', 'Synthetic foreground (12-MFCC-matched to natural)'), # ('PND_mfcc/debugPNDv08negated12_TLASmatched12_snr_neg10pos10_phase0/SPECTRAL_STATISTICS_v00/results_dict.json', 'Synthetic foreground (12-MFCC-negated to natural)'), # ('PND_v08spch/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural speech'), # ('PND_v08inst/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural instruments'), ] fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 4)) clist = 'krbgmyc' for cidx, (dataset_tag, label_tag) in enumerate(list_dataset_tag): fn_results_dict = os.path.join(data_dir, dataset_tag) print(fn_results_dict) with open(fn_results_dict, 'r') as f: results_dict = json.load(f) if 'PYSTRAIGHT_v01_foreground' in fn_results_dict: key_fxx = 'mean_filter_spectrum_freqs' key_pxx = 'mean_filter_spectrum' key_n_fft = 'mean_filter_spectrum_n_fft' else: key_fxx = 'mean_power_spectrum_freqs' key_pxx = 'mean_power_spectrum' key_n_fft = 'mean_power_spectrum_n_fft' for key in sorted(results_dict.keys()): MEAN_FXX = np.array(results_dict[key][key_fxx]) MEAN_PXX = np.array(results_dict[key][key_pxx]) if 'PYSTRAIGHT_v01_foreground' in fn_results_dict: MEAN_PXX -= 10*np.log10(20e-6) sr = results_dict[key]['sr'] mfcc_mean = np.array(results_dict[key]['mfcc_mean']) mfcc_mean[12:0] mfcc_cov = np.array(results_dict[key]['mfcc_cov']) n_fft = np.array(results_dict[key][key_n_fft]) M = librosa.filters.mel(sr, n_fft, n_mels=mfcc_mean.shape[0]) Minv = np.linalg.pinv(M) kwargs_plot = { 'ls': '-', 'color': clist[cidx], 'label': '{} : {}'.format(label_tag, key), } if 'noise' in key: kwargs_plot['ls'] = '--' kwargs_plot['color'] = [0.5] * 3 kwargs_plot['label'] = None ax.plot(MEAN_FXX, MEAN_PXX,#-MEAN_PXX.max(), **kwargs_plot) # PXX_MFCC = 10*np.log10(util_stimuli.get_power_spectrum_from_mfcc(mfcc_mean, Minv)) # ax.plot(MEAN_FXX, # PXX_MFCC,#-PXX_MFCC.max(), # **kwargs_plot) # sample_PXX_MFCC = np.zeros_like(PXX_MFCC) # nsamples=50 # for _ in range(nsamples): # mfcc = np.random.multivariate_normal(mfcc_mean, mfcc_cov) # mfcc[6:] = 0 # sample_PXX_MFCC += 10*np.log10(util_stimuli.get_power_spectrum_from_mfcc(mfcc, Minv)) # sample_PXX_MFCC /= nsamples # ax.plot(MEAN_FXX, # sample_PXX_MFCC-sample_PXX_MFCC.max(), # **kwargs_plot) ax.legend(loc='upper right', ncol=1) ax = util_figures.format_axes(ax, xscale='log', str_xlabel='Frequency (Hz)', str_ylabel='Power (dB)', xlimits=[40, None], ylimits=[-20, None], spines_to_hide=['right', 'top']) plt.show() import sys import h5py import numpy as np import IPython.display as ipd sys.path.append('/packages/msutil') import util_misc import util_stimuli # fn = '/om/user/msaddler/data_pitchnet/neurophysiology/bernox2005_SlidingFixedFilter_lharm01to30_phase0_f0min080_f0max320/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/bez2018meanrates_009216-012288.hdf5' # fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/bez2018meanrates_098_007000-014000.hdf5' # fn = 
'/om/scratch/Fri/msaddler/data_pitchnet/PND_v08inst/noise_TLAS_snr_neg10pos10/PND_sr32000_v08inst_1422630-1437000.hdf5' fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_v08spch/noise_TLAS_snr_neg10pos10/PND_sr32000_v08spch_1422630-1437000.hdf5' # fn_new = 'PND_v08inst_examples_for_metamers.hdf5' fn_new = 'PND_v08spch_examples_for_metamers.hdf5' np.random.seed(998) data_dict = {} with h5py.File(fn, 'r') as f: for k in util_misc.get_hdf5_dataset_key_list(f): # print(k, f[k]) if f[k].shape[0] == 1: data_dict[k] = f[k][:] sr = f['sr'][0] key_signal = 'stimuli/signal' N = 15 for itrN in range(N): IDX = np.random.randint(low=0, high=f['nopad_f0_mean'].shape[0]) for k in util_misc.get_hdf5_dataset_key_list(f): if f[k].shape[0] > 1: if k not in data_dict: data_dict[k] = [] data_dict[k].append(f[k][IDX]) idx_start = f['nopad_start_index'][IDX] - f['segment_start_index'][IDX] idx_end = f['nopad_end_index'][IDX] - f['segment_start_index'][IDX] y = f['stimuli/signal'][IDX, idx_start:idx_end] y_preprocessed = y[0:int(0.05*sr)] y_preprocessed = util_stimuli.set_dBSPL(y_preprocessed, 60.0) if itrN == 0: data_dict['y'] = [] data_dict['y_preprocessed'] = [] data_dict['f0'] = [] data_dict['y'].append(y) data_dict['y_preprocessed'].append(y_preprocessed) data_dict['f0'].append(f['nopad_f0_mean'][IDX]) # ipd.display(ipd.Audio(y_preprocessed, rate=sr)) # f_new = h5py.File(fn_new, 'w') # for k in sorted(data_dict.keys()): # data_dict[k] = np.array(data_dict[k]) # # print(k, data_dict[k].shape, data_dict[k].dtype) # f_new.create_dataset(k, data=data_dict[k]) # f_new.close() print('F0:', data_dict['f0']) import sys import h5py import numpy as np import IPython.display as ipd sys.path.append('/packages/msutil') import util_misc import util_stimuli fn = '/om4/group/mcdermott/user/msaddler/pitchnet_dataset/pitchnetDataset/assets/data/interim/swcDataframe_interim_processed2019-07-15-1830_processedFile__noNaN_sr32000.pdh5' fn = '/om4/group/mcdermott/user/msaddler/pitchnet_dataset/pitchnetDataset/assets/data/interim/sr32000_pystraight/swc_183928-184492.hdf5' with h5py.File(fn) as f: for k in util_misc.get_hdf5_dataset_key_list(f): print(k, f[k]) for _ in range(10): IDX = np.random.randint(f['interp_signal'].shape[0]) y = f['interp_signal'][IDX] sr = f['sr'][0] ipd.display(ipd.Audio(y, rate=sr)) import sys import h5py import numpy as np %matplotlib inline import matplotlib.pyplot as plt import IPython.display as ipd sys.path.append('/packages/msutil') import util_misc import util_stimuli # fn = '/om/user/msaddler/data_pitchnet/bernox2005/lowharm_v01/stim.hdf5' fn = '/om/user/msaddler/data_pitchnet/bernox2005/neurophysiology_v01_EqualAmpTEN_lharm01to15_phase0_f0min080_f0max640/stim.hdf5' fn = '/om/user/msaddler/data_pitchnet/bernox2005/neurophysiology_v01_EqualAmpTEN_lharm01to30_phase0_f0min080_f0max320/stim.hdf5' with h5py.File(fn, 'r') as f: for k in util_misc.get_hdf5_dataset_key_list(f): print(k, f[k]) print(np.unique(f['max_audible_harm'][:])) base_f0 = f['base_f0'][:] print(np.unique(base_f0).shape, base_f0.min(), base_f0.max()) IDX = -10000 sr = f['config_tone/fs'][0] y = f['tone_in_noise'][IDX] print(util_stimuli.get_dBSPL(y)) fxx, pxx = util_stimuli.power_spectrum(y, sr) fig, ax = plt.subplots(figsize=(12, 2)) ax.plot(fxx, pxx) ax.set_xlim([0, sr/2]) ax.set_ylim([-30, None]) plt.show() ipd.display(ipd.Audio(y, rate=sr)) import importlib import stimuli_generate_BernsteinOxenhamPureTone importlib.reload(stimuli_generate_BernsteinOxenhamPureTone) # hdf5_filename = 
'/om/user/msaddler/data_pitchnet/bernox2005/puretone_v01/stim.hdf5' # stimuli_generate_BernsteinOxenhamPureTone.main( # hdf5_filename, # fs=32e3, # dur=0.150, # f0_min=80.0, # f0_max=10240.0, # f0_n=50, # dbspl_min=20.0, # dbspl_max=60.0, # dbspl_step=0.25, # noise_dBHzSPL=10.0, # noise_attenuation_start=600.0, # noise_attenuation_slope=2, # disp_step=100) # hdf5_filename = '/om/user/msaddler/data_pitchnet/bernox2005/puretone_v02/stim.hdf5' # stimuli_generate_BernsteinOxenhamPureTone.main( # hdf5_filename, # fs=32e3, # dur=0.150, # f0_min=80.0, # f0_max=10240.0, # f0_n=50, # dbspl_min=20.0, # dbspl_max=60.0, # dbspl_step=0.25, # noise_dBHzSPL=12.0, # noise_attenuation_start=600.0, # noise_attenuation_slope=2, # disp_step=100) # hdf5_filename = '/om/user/msaddler/data_pitchnet/bernox2005/puretone_v03/stim.hdf5' # stimuli_generate_BernsteinOxenhamPureTone.main( # hdf5_filename, # fs=32e3, # dur=0.150, # f0_min=80.0, # f0_max=10240.0, # f0_n=50, # dbspl_min=20.0, # dbspl_max=60.0, # dbspl_step=0.25, # noise_dBHzSPL=8.0, # noise_attenuation_start=600.0, # noise_attenuation_slope=2, # disp_step=100) ```
github_jupyter
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Load-Data" data-toc-modified-id="Load-Data-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Load Data</a></span></li><li><span><a href="#Compare-dimensionalities" data-toc-modified-id="Compare-dimensionalities-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Compare dimensionalities</a></span></li><li><span><a href="#Find-&quot;single-gene&quot;-iModulons" data-toc-modified-id="Find-&quot;single-gene&quot;-iModulons-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Find "single-gene" iModulons</a></span></li><li><span><a href="#Plot-Components" data-toc-modified-id="Plot-Components-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Plot Components</a></span></li></ul></div> ``` from pymodulon.core import IcaData import os import pandas as pd import matplotlib.pyplot as plt from scipy import stats import numpy as np from tqdm.notebook import tqdm # Directory containing ICA outputs DATA_DIR = '../data/interim/ica_runs' ``` # Load Data ``` def load_M(dim): return pd.read_csv(os.path.join(DATA_DIR,str(dim),'S.csv'),index_col=0) def load_A(dim): return pd.read_csv(os.path.join(DATA_DIR,str(dim),'A.csv'),index_col=0) dims = sorted([int(x) for x in os.listdir(DATA_DIR)]) M_data = [load_M(dim) for dim in dims] A_data = [load_A(dim) for dim in dims] n_components = [m.shape[1] for m in M_data] ``` # Compare dimensionalities ``` final_m = M_data[-1] thresh = 0.7 n_final_mods = [] for m in tqdm(M_data): corrs = pd.DataFrame(index=final_m.columns,columns=m.columns) for col1 in final_m.columns: for col2 in m.columns: corrs.loc[col1,col2] = abs(stats.pearsonr(final_m[col1],m[col2])[0]) n_final_mods.append(len(np.where(corrs > thresh)[0])) ``` # Find "single-gene" iModulons At a high enough dimensionality, some iModulons track the expression trajectory of a single iModulon ``` n_single_genes = [] for m in tqdm(M_data): counter = 0 for col in m.columns: sorted_genes = abs(m[col]).sort_values(ascending=False) if sorted_genes.iloc[0] > 2 * sorted_genes.iloc[1]: counter += 1 n_single_genes.append(counter) ``` # Plot Components ``` non_single_components = np.array(n_components) - np.array(n_single_genes) DF_stats = pd.DataFrame([n_components,n_final_mods,non_single_components,n_single_genes], index=['Robust Components','Final Components','Multi-gene Components', 'Single Gene Components'], columns=dims).T DF_stats.sort_index(inplace=True) dimensionality = DF_stats[DF_stats['Final Components'] >= DF_stats['Multi-gene Components']].iloc[0].name print('Optimal Dimensionality:',dimensionality) plt.plot(dims,n_components,label='Robust Components') plt.plot(dims,n_final_mods,label='Final Components') plt.plot(dims,non_single_components,label='Non-single-gene Components') plt.plot(dims,n_single_genes,label='Single Gene Components') plt.vlines(dimensionality,0,max(n_components),linestyle='dashed') plt.xlabel('Dimensionality') plt.ylabel('# Components') plt.legend(bbox_to_anchor=(1,1)) DF_stats ```
github_jupyter
# Image Classification

The *Computer Vision* cognitive service provides useful pre-built models for working with images, but you'll often need to train your own model for computer vision. For example, suppose the Northwind Traders retail company wants to create an automated checkout system that identifies the grocery items customers want to buy based on an image taken by a camera at the checkout. To do this, you'll need to train a classification model that can classify the images to identify the item being purchased.

<p style='text-align:center'><img src='./images/image-classification.jpg' alt='A robot holding a clipboard, classifying pictures of an apple, a banana, and an orange'/></p>

In Azure, you can use the ***Custom Vision*** cognitive service to train an image classification model based on existing images. There are two elements to creating an image classification solution. First, you must train a model to recognize different classes using existing images. Then, when the model is trained you must publish it as a service that can be consumed by applications.

## Create a Custom Vision resource

To use the Custom Vision service, you need an Azure resource that you can use to *train* a model, and a resource with which you can *publish* it for applications to use. The resource for either (or both) tasks can be a general **Cognitive Services** resource, or a specific **Custom Vision** resource. You can use the same Cognitive Services resource for each of these tasks, or you can use different resources (in the same region) for each task to manage costs separately.

Use the following instructions to create a new **Custom Vision** resource.

1. In a new browser tab, open the Azure portal at [https://portal.azure.com](https://portal.azure.com), and sign in using the Microsoft account associated with your Azure subscription.
2. Select the **&#65291;Create a resource** button, search for *custom vision*, and create a **Custom Vision** resource with the following settings:
    - **Create options**: Both
    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Create a new resource group with a unique name*
    - **Name**: *Enter a unique name*
    - **Training location**: *Choose any available region*
    - **Training pricing tier**: F0
    - **Prediction location**: *The same region as the training resource*
    - **Prediction pricing tier**: F0

    > **Note**: If you already have an F0 custom vision service in your subscription, select **S0** for this one.

3. Wait for the resources to be created, and note that two Custom Vision resources are provisioned; one for training, and another for prediction. You can view these by navigating to the resource group where you created them.

## Create a Custom Vision project

To train an image classification model, you need to create a Custom Vision project based on your training resource. To do this, you'll use the Custom Vision portal.

1. Download and extract the training images from https://aka.ms/fruit-images.
2. In another browser tab, open the Custom Vision portal at [https://customvision.ai](https://customvision.ai). If prompted, sign in using the Microsoft account associated with your Azure subscription and agree to the terms of service.
3.
In the Custom Vision portal, create a new project with the following settings: - **Name**: Grocery Checkout - **Description**: Image classification for groceries - **Resource**: *The Custom Vision resource you created previously* - **Project Types**: Classification - **Classification Types**: Multiclass (single tag per image) - **Domains**: Food 4. Click **\[+\] Add images**, and select all of the files in the **apple** folder you extracted previously. Then upload the image files, specifying the tag *apple*, like this: <p style='text-align:center'><img src='./images/upload_apples.jpg' alt='Upload apple with apple tag'/></p> 5. Repeat the previous step to upload the images in the **banana** folder with the tag *banana*, and the images in the **orange** folder with the tag *orange*. 6. Explore the images you have uploaded in the Custom Vision project - there should be 15 images of each class, like this: <p style='text-align:center'><img src='./images/fruit.jpg' alt='Tagged images of fruit - 15 apples, 15 bananas, and 15 oranges'/></p> 7. In the Custom Vision project, above the images, click **Train** to train a classification model using the tagged images. Select the **Quick Training** option, and then wait for the training iteration to complete (this may take a minute or so). 8. When the model iteration has been trained, review the *Precision*, *Recall*, and *AP* performance metrics - these measure the prediction accuracy of the classification model, and should all be high. ## Test the model Before publishing this iteration of the model for applications to use, you should test it. 1. Above the performance metrics, click **Quick Test**. 2. In the **Image URL** box, type `https://aka.ms/apple-image` and click &#10132; 3. View the predictions returned by your model - the probability score for *apple* should be the highest, like this: <p style='text-align:center'><img src='./images/test-apple.jpg' alt='An image with a class prediction of apple'/></p> 4. Close the **Quick Test** window. ## Publish and consume the image classification model Now you're ready to publish your trained model and use it from a client application. 9. Click **&#128504; Publish** to publish the trained model with the following settings: - **Model name**: groceries - **Prediction Resource**: *The prediction resource you created previously*. 10. After publishing, click the *settings* (&#9881;) icon at the top right of the **Performance** page to view the project settings. Then, under **General** (on the left), copy the **Project Id** and paste it into the code cell below (replacing **YOUR_PROJECT_ID**). <p style='text-align:center'><img src='./images/cv_project_settings.jpg' alt='Project ID in project settings'/></p> > _**Note**: If you used a **Cognitive Services** resource instead of creating a **Custom Vision** resource at the beginning of this exercise, you can copy its key and endpoint from the right side of the project settings, paste it into the code cell below, and run it to see the results. Otherwise, continue completing the steps below to get the key and endpoint for your Custom Vision prediction resource._ 11. At the top left of the **Project Settings** page, click the *Projects Gallery* (&#128065;) icon to return to the Custom Vision portal home page, where your project is now listed. 12. On the Custom Vision portal home page, at the top right, click the *settings* (&#9881;) icon to view the settings for your Custom Vision service. 
Then, under **Resources**, expand your *prediction* resource (<u>not</u> the training resource) and copy its **Key** and **Endpoint** values to the code cell below, replacing **YOUR_KEY** and **YOUR_ENDPOINT**.

<p style='text-align:center'><img src='./images/cv_settings.jpg' alt='Prediction resource key and endpoint in custom vision settings'/></p>

13. Run the code cell below to set the variables to your project ID, key, and endpoint values.

```
project_id = 'YOUR_PROJECT_ID'
cv_key = 'YOUR_KEY'
cv_endpoint = 'YOUR_ENDPOINT'

model_name = 'groceries' # this must match the model name you set when publishing your model iteration (it's case-sensitive)!
print('Ready to predict using model {} in project {}'.format(model_name, project_id))
```

Client applications can use the details above to connect to and use your custom vision classification model.

Run the following code cell to classify a selection of test images using your published model.

> **Note**: Don't worry too much about the details of the code. It uses the Custom Vision SDK for Python to get a class prediction for each image in the /data/image-classification/test-fruit folder.

```
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
import matplotlib.pyplot as plt
from PIL import Image
import os
%matplotlib inline

# Get the test images from the data/image-classification/test-fruit folder
test_folder = os.path.join('data', 'image-classification', 'test-fruit')
test_images = os.listdir(test_folder)

# Create an instance of the prediction service
custom_vision_client = CustomVisionPredictionClient(cv_key, endpoint=cv_endpoint)

# Create a figure to display the results
fig = plt.figure(figsize=(16, 8))

# Get the images and show the predicted classes for each one
print('Classifying images in {} ...'.format(test_folder))
for i in range(len(test_images)):
    # Open the image, and use the custom vision model to classify it
    image_contents = open(os.path.join(test_folder, test_images[i]), "rb")
    classification = custom_vision_client.classify_image(project_id, model_name, image_contents.read())
    # The results include a prediction for each tag, in descending order of probability - get the first one
    prediction = classification.predictions[0].tag_name
    # Display the image with its predicted class
    img = Image.open(os.path.join(test_folder, test_images[i]))
    a = fig.add_subplot(len(test_images)//3, 3, i+1)  # integer division keeps the subplot grid size an int
    a.axis('off')
    imgplot = plt.imshow(img)
    a.set_title(prediction)
plt.show()
```

Hopefully, your image classification model has correctly identified the groceries in the images.

## Learn more

The Custom Vision service offers more capabilities than we've explored in this exercise. For example, you can also use the Custom Vision service to create *object detection* models, which not only classify objects in images, but also identify *bounding boxes* that show the location of the object in the image.

To learn more about the Custom Vision cognitive service, view the [Custom Vision documentation](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/home).
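As a hedged aside related to the earlier Quick Test step (this is not part of the original lab): the published model can also be asked to classify an image directly from a URL. The sketch below assumes the `custom_vision_client`, `project_id` and `model_name` defined above, and that your installed SDK version exposes `classify_image_url` with this signature.

```
# Classify the sample apple image by URL instead of uploading bytes (sketch; signature may vary by SDK version)
url_result = custom_vision_client.classify_image_url(project_id, model_name, url='https://aka.ms/apple-image')
for prediction in url_result.predictions[:3]:
    print('{}: {:.1%}'.format(prediction.tag_name, prediction.probability))
```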
github_jupyter
``` import numpy as np import pandas as pd from sklearn import linear_model from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D %matplotlib inline #%matplotlib notebook data=pd.read_csv('test_point.dat', header = None, sep = ' ') data1=data.dropna(axis=1) data1.shape[1] #data1.iloc[:,3] ols=linear_model.LinearRegression() ols.fit(data1.iloc[:,3:5], data1.iloc[:,5:data1.shape[1]]) ols.coef_ print "maximum error =", np.max(ols.predict(data1.iloc[:,3:5])-data1.iloc[:,5:data1.shape[1]].values) def generate_ractangle(Nx, Ny, Lx, Ly, center_x, center_y, theta_grad): theta=theta_grad*np.pi/180.0 M=[[np.cos(theta),np.sin(theta)], [-np.sin(theta), np.cos(theta)]] rectangle_x=np.zeros([Nx,Ny]) rectangle_y=np.zeros([Nx,Ny]) dx=Lx/float(Nx); dy=Ly/float(Ny); for j in xrange(Nx): for k in xrange(Ny): [x,y]=np.dot(M,[j*dx-Lx/2.0, k*dy-Ly/2.0]); rectangle_x[j,k]=x + center_x; rectangle_y[j,k]=y + center_y; return rectangle_x, rectangle_y, M rectangle_x=np.zeros([20,5]) center_x=0.5*(data1.iloc[:,3].max()+data1.iloc[:,3].min()) center_y=0.5*(data1.iloc[:,4].max()+data1.iloc[:,4].min()) rectangle_x, rectangle_y, M = generate_ractangle(20, 5, 0.013, 0.00007, center_x*1.0002, center_y,59.0) plt.axis([np.min(rectangle_x)-0.0001,np.max(rectangle_x)+0.0001,np.min(rectangle_y)-0.0001,np.max(rectangle_y)+0.0001]) plt.scatter(rectangle_x,rectangle_y) plt.scatter(data1.iloc[:,3],data1.iloc[:,4]) #ols.predict(data1.iloc[:,3:5]) rectangle=np.zeros([100,2]) count=0 for j in xrange(20): for k in xrange(5): rectangle[count,0]=rectangle_x[j,k] rectangle[count,1]=rectangle_y[j,k] count=count+1 projections0=ols.predict(rectangle) projections=np.zeros([data1.shape[0],data1.shape[1]]) projections[:,5:data1.shape[1]]=projections0[:,0:projections0.shape[1]] projections[:,3:5]=rectangle[:,:] print data1.shape, projections.shape plt.scatter(data1.iloc[:,222],data1.iloc[:,2800]) plt.scatter(projections[:,222],projections[:,2800]) #write_out(projections) np.savetxt('test_point_6.dat', projections, fmt='%.16e') data_res=pd.read_csv('test_point_out_6_2.dat', header = None, sep = ' ') data_res1=data_res.dropna(axis=1) def print_out_pos(filename, data_x, data_y): S=data_x.size outF = open(filename, "w") outF.write("View \"%s\"{\n" % filename) for x in xrange(S): outF.write("SP(%.16le,%.16le,0.0){%i};\n" % (data_x[x], data_y[x],x)) outF.write("};") outF.close(); px=21020#234 py=10011#4111 plt.axis([np.min(data_res1.iloc[:,px])-0.0001,np.max(data_res1.iloc[:,px])+0.0001,np.min(data_res1.iloc[:,py])-0.0001,np.max(data_res1.iloc[:,py])+0.0001]) plt.scatter(data_res1.iloc[:,px],data_res1.iloc[:,py]) plt.scatter(projections[:,px],projections[:,py]) print_out_pos("test6_2_2.pos",projections[:,px],projections[:,py]) print_out_pos("test6_2_3.pos",data_res1.iloc[:,px],data_res1.iloc[:,py]) px=312 py=3424 pz=54 fig = plt.figure() ax = Axes3D(fig) ax.scatter(data1.iloc[:,px],data1.iloc[:,py],data1.iloc[:,pz]) #ax.scatter(ols.predict(data1.iloc[:,3:5])[:,1130],ols.predict(data1.iloc[:,3:5])[:,12183],ols.predict(data1.iloc[:,3:5])[:,66]) ax.scatter(data_res1.iloc[:,px],data_res1.iloc[:,py],data_res1.iloc[:,pz]) ```
github_jupyter
## Deploy a simple S3 dispersed storage archive solution #### Requirements In order to be able to deploy this example deployment you will have to have the following components activated - the 3Bot SDK, in the form of a local container with the SDK, or a grid based SDK container. Getting started instuctions are [here](https://github.com/Threefoldfoundation/info_projectX/tree/development/doc/jumpscale_SDK) - if you use a locally installed container with the 3Bot SDK you need to have the wireguard software installed. Instructions to how to get his installed on your platform could be found [here](https://www.wireguard.com/install/) - capacity reservation are not free so you will need to have some ThreeFold_Tokens (TFT) to play around with. Instructions to get tokens could be found [here](https://github.com/Threefoldfoundation/info_projectX/blob/development/doc/jumpscale_SDK_information/payment/FreeTFT_testtoken.md) After following these install instructions you should end up having a local, working TF Grid SDK installed. You could work / connect to the installed SDK as described [here](https://github.com/Threefoldfoundation/info_projectX/blob/development/doc/jumpscale_SDK/SDK_getting_started.md) ### Overview The design a simple S3 archive solution we need to follow a few simple steps: - create (or identify and use) an overlay network that spans all of the nodes needed in the solution - identify which nodes are involved in the archive for storage and which nodes are running the storage software - create reservations on the storage nodes for low level storage. Create and deploy zero-DB's - collect information of how to access and use the low level storage devices to be passed on to the S3 storage software - design the architecture, data and parity disk design - deploy the S3 software in a container #### Create overlay network of identity an previously deployed overlay network Each overlay network is private and contains private IP addresses. Each overlay network is deployed in such a way that is has no connection to the public (IPv4 or IPv6) network directly. In order to work with such a network a tunnel needs to be created between the overlay network on the grid and your local network. You could find instructions how to do that [here](https://github.com/Threefoldfoundation/info_projectX/blob/development/doc/jumpscale_SDK_examples/network/overlay_network.md) #### Set up the capacity environment to find, reserve and configure Make sure that your SDK points to the mainnet explorer for deploying this capacity example. Also make sure you have an identity loaded. The example code uses the default identity. Multiple identities could be stored in the TF Grid SDK. To check your available identities you could request the number of identities available for you by typing `j.tools.threebot.me` in the kosmos shell. ``` from Jumpscale import j import time j.clients.explorer.default_addr_set('explorer.grid.tf') # Which identities are available in you SDK j.tools.threebot.me # Make sure I have an identity (set default one for mainnet of testnet) me = j.tools.threebot.me.default # Load the zero-os sal and reate empty reservation method zos = j.sal.zosv2 r = zos.reservation_create() ``` #### Setup your overlay network (skip this step if you have a network setup and available) An overlay network creates a private peer2peer network over selected nodes. 
In this notebook it is assumend you have created one by following this [notebook](https://github.com/Threefoldfoundation/info_projectX/blob/development/code/jupyter/SDK_examples/network/overlay_network.ipynb) #### Design the S3 simple storage solution You have created a network in the network creation [notebook](https://github.com/Threefoldfoundation/info_projectX/blob/development/code/jupyter/SDK_examples/network/overlay_network.ipynb) with the following details: ``` demo_ip_range="172.20.0.0/16" demo_port=8030 demo_network_name="demo_network_name_01" ``` When you executed the reservation it also provided you with a data on order number, node ID and private network range on the node. All the nodes in the network are connected peer2peer with a wireguard tunnel. On these nodes we could now create a storage solution. For this solution we will using some of these nodes as raw storage provider nodes and others as the storage application nodes. Using the ouput of the network reservation notebook to describe the high level design of the storage solution: | Nr. | Location | Node ID. | IPV4 network | Function. | |--------|---|---|---|---| | 1 | Salzburg | 9kcLeTuseybGHGWw2YXvdu4kk2jZzyZCaCHV9t6Axqqx | 172.20.15.0/24 | Storage sofware container, 10GB raw | | 2 | Salzburg | 3h4TKp11bNWjb2UemgrVwayuPnYcs2M1bccXvi3jPR2Y | 172.20.16.0/24 | 10GB raw | | 3 | Salzburg | FUq4Sz7CdafZYV2qJmTe3Rs4U4fxtJFcnV6mPNgGbmRg | 172.20.17.0/24 | 10GB raw | | 4 | Vienna | 9LmpYPBhnrL9VrboNmycJoGfGDjuaMNGsGQKeqrUMSii | 172.20.28.0/24 | 10GB raw | | 5 | Vienna | 3FPB4fPoxw8WMHsqdLHamfXAdUrcRwdZY7hxsFQt3odL | 172.20.29.0/24 | 10GB raw | | 6 | Vienna | CrgLXq3w2Pavr7XrVA7HweH6LJvLWnKPwUbttcNNgJX7 | 172.20.30.0/24 | 10GB raw | #### Reserve and deploy the low level ZeroDB storage nodes First let's deploy low level storage capacity manager (Zero BD, more info [here](https://github.com/Threefoldtech/0-DB)). In the next piece of code we do the following: - create some empty reservation and result structures - select and set the node to container the S3 software - select and load the nodes in a list to push them in the zero-DB reservation structure ``` # load the zero-os sal zos = j.sal.zosv2 day=24*60*60 hour=60*60 # Node: 5 ID: 9kcLeTuseybGHGWw2YXvdu4kk2jZzyZCaCHV9t6Axqqx IPv4 address: 172.20.15.0/24 minio_node_id = '9kcLeTuseybGHGWw2YXvdu4kk2jZzyZCaCHV9t6Axqqx' minio_node_ip = '172.20.15.16' # ---------------------------------------------------------------------------------- reservation_network = zos.reservation_create() reservation_zdbs = zos.reservation_create() reservation_storage = zos.reservation_create() rid_network=0 rid_zdbs=0 rid_storage=0 password = "supersecret" # ---------------------------------------------------------------------------------- # Select and create a reservation for nodes to deploy a ZDB # first find the node where to reserve 0-DB namespaces. 
# select the Salzburg and Vienna nodes.
# ----------------------------------------------------------------------------------
nodes_salzburg = zos.nodes_finder.nodes_search(farm_id=12775)   # (IPv6 nodes)
nodes_vienna_1 = zos.nodes_finder.nodes_search(farm_id=82872)   # (IPv6 nodes)

# ----------------------------------------------------------------------------------
# Definition of functional nodes
# ----------------------------------------------------------------------------------
nodes_all = nodes_salzburg[5:8] + nodes_vienna_1[5:8]

# ----------------------------------------------------------------------------------
# Create ZDB reservation for the selected nodes
# ----------------------------------------------------------------------------------
for node in nodes_all:
    zos.zdb.create(
        reservation=reservation_zdbs,
        node_id=node.node_id,
        size=10,
        mode='seq',
        password='supersecret',
        disk_type="SSD",
        public=False)
```

#### Prepare and deploy the S3 software container

The node that will run the storage solution needs some persistent storage. This creates a reservation for a volume on the same node as the software runs and attaches it as a volume to the container that will run the storage software. For the reservation duration please set a period of time that allows for experimenting; in this case it is set to one day.

```
# Storage solution reservation time
nr_of_hours=24

# ----------------------------------------------------------------------------------
# Attach persistent storage to container - for storing metadata
# ----------------------------------------------------------------------------------
volume = zos.volume.create(reservation_storage,minio_node_id,size=10,type='SSD')
volume_rid = zos.reservation_register(reservation_storage, j.data.time.epoch+(nr_of_hours*hour), identity=me)
results = zos.reservation_result(volume_rid)

# ----------------------------------------------------------------------------------
# Actuate the reservation for the ZDBs. The IP addresses are going to be self-assigned.
# ----------------------------------------------------------------------------------
expiration = j.data.time.epoch + (nr_of_hours*hour)

# register the reservation
rid_zdb = zos.reservation_register(reservation_zdbs, expiration, identity=me)
time.sleep(5)

results = zos.reservation_result(rid_zdb)
```

With the low level zero-DB reservations done and stored in the `results` variable (these storage managers will get an IP address assigned from the local `/24` node network), we need to store those addresses in `namespace_config` to pass them to the container running the storage software.

```
# ----------------------------------------------------------------------------------
# Read the IP address of the 0-DB namespaces after they are deployed;
# we will need these IPs when creating the minio container
# ----------------------------------------------------------------------------------
namespace_config = []
for result in results:
    data = result.data_json
    cfg = f"{data['Namespace']}:{password}@[{data['IPs']}]:{data['Port']}"
    namespace_config.append(cfg)

# All IPs for the zdb namespaces are now known and stored in the namespace_config structure.
print(namespace_config)
```

```
['9012-4:supersecret@[2a04:7700:1003:1:54f0:edff:fe87:2c48]:9900', '9012-1:supersecret@[2a02:16a8:1000:0:5c2f:ddff:fe5a:1a70]:9900', '9012-2:supersecret@[2a02:16a8:1000:0:1083:59ff:fe38:ce71]:9900', '9012-7:supersecret@[2003:d6:2f32:8500:dc78:d6ff:fe04:7368]:9900', '9012-3:supersecret@[2a02:16a8:1000:0:fc7c:4aff:fec8:baf]:9900', '9012-5:supersecret@[2a04:7700:1003:1:acc0:2ff:fed3:1692]:9900', '9012-6:supersecret@[2a04:7700:1003:1:ac9d:f3ff:fe6a:47a9]:9900']
```

The last step is to design the redundancy policy for the storage solution. We have 6 low level devices available (over 6 nodes, in 2 different data centers and cities), so we could build any of the following configurations:

| Option | data storage devices | parity storage devices | total devices | overhead |
|--------|---|---|---|---|
| 1 | 3 | 3 | 6 | 50% |
| 2 | 4 | 2 | 6 | 33% |
| 3 | 5 | 1 | 6 | 16% |

In this small example the real efficiency of such a solution is not achieved; in a real life deployment we would do something like this:

| Option | data storage devices | parity storage devices | total devices | overhead |
|--------|---|---|---|---|
| 4 | 16 | 4 | 20 | 20% |

In that case it is highly unlikely that 4 distributed devices will fail at the same time, therefore this is a very robust storage solution. Here we choose to deploy scenario 2, with 4 data disks and 2 parity disks.

```
# ----------------------------------------------------------------------------------
# With the low level disk managers done and the IP addresses discovered we can now build
# the reservation for the min.io S3 interface.
# ----------------------------------------------------------------------------------
reservation_minio = zos.reservation_create()

# Make sure to adjust the node_id and network name to the appropriate values when copy / pasting :-)
minio_container=zos.container.create(reservation=reservation_minio,
    node_id=minio_node_id,
    network_name=demo_network_name,
    ip_address=minio_node_ip,
    flist='https://hub.grid.tf/azmy.3bot/minio.flist',
    interactive=False,
    entrypoint='/bin/entrypoint',
    cpu=2,
    memory=2048,
    env={
        "SHARDS":','.join(namespace_config),
        "DATA":"4",
        "PARITY":"2",
        "ACCESS_KEY":"minio",
        "SECRET_KEY":"passwordpassword",
        })
```

With the definition of the S3 container done we now need to attach persistent storage on a volume to store metadata.

```
# ----------------------------------------------------------------------------------
# Attach persistent storage to container - for storing metadata
# ----------------------------------------------------------------------------------
zos.volume.attach_existing(
    container=minio_container,
    volume_id=f'{volume_rid}-{volume.workload_id}',
    mount_point='/data')
```

Last but not least, execute the reservation for the storage manager.

```
# ----------------------------------------------------------------------------------
# Write reservation for min.io container in BCDB - end user interface
# ----------------------------------------------------------------------------------
expiration = j.data.time.epoch + (nr_of_hours*hour)

# register the reservation
rid = zos.reservation_register(reservation_minio, expiration, identity=me)
time.sleep(5)

results = zos.reservation_result(rid)
```
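With the reservation for the MinIO container registered, the S3 endpoint should become reachable on the overlay network at the container's IP address. As a quick usage check, the sketch below creates a bucket and uploads a test object. This is a minimal sketch, not part of the original manual: it assumes the `minio` Python client is installed locally, that the container exposes MinIO on its default port 9000, and that you are connected to the wireguard overlay network; it reuses the `ACCESS_KEY`/`SECRET_KEY` values set in the container environment above, and the bucket name `archive-test` is made up.

```
import io
from minio import Minio

# Minimal sketch (assumption: MinIO listens on its default port 9000 and you are
# connected to the wireguard overlay network).
client = Minio(
    "172.20.15.16:9000",           # minio_node_ip assigned to the container above
    access_key="minio",            # ACCESS_KEY from the container env
    secret_key="passwordpassword", # SECRET_KEY from the container env
    secure=False,                  # plain HTTP inside the private overlay network
)

bucket = "archive-test"            # made-up bucket name for this test
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a small test object; the deployed MinIO instance spreads it over the
# configured 0-DB shards (4 data + 2 parity, as set via DATA/PARITY above).
payload = b"hello dispersed storage"
client.put_object(bucket, "hello.txt", io.BytesIO(payload), length=len(payload))

print(client.list_buckets())
```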
<a href="https://colab.research.google.com/github/anders447/sample-ds-blog-anders/blob/master/_notebooks/2021-05-09-Pedestrian_detector.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # pedestrian detector https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html ``` pip install git+https://github.com/gautamchitnis/cocoapi.git@cocodataset-master#subdirectory=PythonAPI from google.colab import drive drive.mount('/content/drive') import zipfile with zipfile.ZipFile("/content/drive/MyDrive/PennFudanPed.zip", 'r') as zip_ref: zip_ref.extractall() import sys sys.path.insert(1, '/content/drive/MyDrive/detection') import utils import transforms import engine import os import numpy as np import torch from PIL import Image class PennFudanDataset(object): def __init__(self, root, transforms): self.root = root self.transforms = transforms # load all image files, sorting them to # ensure that they are aligned self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages")))) self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks")))) def __getitem__(self, idx): # load images and masks img_path = os.path.join(self.root, "PNGImages", self.imgs[idx]) mask_path = os.path.join(self.root, "PedMasks", self.masks[idx]) img = Image.open(img_path).convert("RGB") # note that we haven't converted the mask to RGB, # because each color corresponds to a different instance # with 0 being background mask = Image.open(mask_path) # convert the PIL Image into a numpy array mask = np.array(mask) # instances are encoded as different colors obj_ids = np.unique(mask) # first id is the background, so remove it obj_ids = obj_ids[1:] # split the color-encoded mask into a set # of binary masks masks = mask == obj_ids[:, None, None] # get bounding box coordinates for each mask num_objs = len(obj_ids) boxes = [] for i in range(num_objs): pos = np.where(masks[i]) xmin = np.min(pos[1]) xmax = np.max(pos[1]) ymin = np.min(pos[0]) ymax = np.max(pos[0]) boxes.append([xmin, ymin, xmax, ymax]) # convert everything into a torch.Tensor boxes = torch.as_tensor(boxes, dtype=torch.float32) # there is only one class labels = torch.ones((num_objs,), dtype=torch.int64) masks = torch.as_tensor(masks, dtype=torch.uint8) image_id = torch.tensor([idx]) area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) # suppose all instances are not crowd iscrowd = torch.zeros((num_objs,), dtype=torch.int64) target = {} target["boxes"] = boxes target["labels"] = labels target["masks"] = masks target["image_id"] = image_id target["area"] = area target["iscrowd"] = iscrowd if self.transforms is not None: img, target = self.transforms(img, target) return img, target def __len__(self): return len(self.imgs) import torchvision from torchvision.models.detection.faster_rcnn import FastRCNNPredictor # load a model pre-trained pre-trained on COCO model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) # replace the classifier with a new one, that has # num_classes which is user-defined num_classes = 2 # 1 class (person) + background # get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) import torchvision from torchvision.models.detection import FasterRCNN from torchvision.models.detection.rpn import AnchorGenerator # load a pre-trained model for classification and 
return # only the features backbone = torchvision.models.mobilenet_v2(pretrained=True).features # FasterRCNN needs to know the number of # output channels in a backbone. For mobilenet_v2, it's 1280 # so we need to add it here backbone.out_channels = 1280 # let's make the RPN generate 5 x 3 anchors per spatial # location, with 5 different sizes and 3 different aspect # ratios. We have a Tuple[Tuple[int]] because each feature # map could potentially have different sizes and # aspect ratios anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),), aspect_ratios=((0.5, 1.0, 2.0),)) # let's define what are the feature maps that we will # use to perform the region of interest cropping, as well as # the size of the crop after rescaling. # if your backbone returns a Tensor, featmap_names is expected to # be [0]. More generally, the backbone should return an # OrderedDict[Tensor], and in featmap_names you can choose which # feature maps to use. roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'], output_size=7, sampling_ratio=2) # put the pieces together inside a FasterRCNN model model = FasterRCNN(backbone, num_classes=2, rpn_anchor_generator=anchor_generator, box_roi_pool=roi_pooler) import torchvision from torchvision.models.detection.faster_rcnn import FastRCNNPredictor from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor def get_model_instance_segmentation(num_classes): # load an instance segmentation model pre-trained pre-trained on COCO model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True) # get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) # now get the number of input features for the mask classifier in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels hidden_layer = 256 # and replace the mask predictor with a new one model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, hidden_layer, num_classes) return model import transforms as T def get_transform(train): transforms = [] transforms.append(T.ToTensor()) if train: transforms.append(T.RandomHorizontalFlip(0.5)) return T.Compose(transforms) model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) dataset = PennFudanDataset('PennFudanPed', get_transform(train=True)) data_loader = torch.utils.data.DataLoader( dataset, batch_size=2, shuffle=True, num_workers=4, collate_fn=utils.collate_fn) # For Training images,targets = next(iter(data_loader)) images = list(image for image in images) targets = [{k: v for k, v in t.items()} for t in targets] output = model(images,targets) # Returns losses and detections # For inference model.eval() x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)] predictions = model(x) # Returns predictions from engine import train_one_epoch, evaluate import utils def main(): # train on the GPU or on the CPU, if a GPU is not available device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') # our dataset has two classes only - background and person num_classes = 2 # use our dataset and defined transformations dataset = PennFudanDataset('PennFudanPed', get_transform(train=True)) dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False)) # split the dataset in train and test set indices = torch.randperm(len(dataset)).tolist() dataset = torch.utils.data.Subset(dataset, 
indices[:-50]) dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:]) # define training and validation data loaders data_loader = torch.utils.data.DataLoader( dataset, batch_size=2, shuffle=True, num_workers=4, collate_fn=utils.collate_fn) data_loader_test = torch.utils.data.DataLoader( dataset_test, batch_size=1, shuffle=False, num_workers=4, collate_fn=utils.collate_fn) # get the model using our helper function model = get_model_instance_segmentation(num_classes) # move model to the right device model.to(device) # construct an optimizer params = [p for p in model.parameters() if p.requires_grad] optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005) # and a learning rate scheduler lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1) # let's train it for 10 epochs num_epochs = 10 for epoch in range(num_epochs): # train for one epoch, printing every 10 iterations train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) # update the learning rate lr_scheduler.step() # evaluate on the test dataset evaluate(model, data_loader_test, device=device) print("That's it!") main() ```
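After training finishes, a quick qualitative check is to run the fine-tuned model on one test image and look at the predicted boxes, scores and masks, much as the closing cells of the torchvision tutorial do. The sketch below is illustrative only: it assumes a trained Mask R-CNN is available in a variable `model` (for example by having `main()` return it, or by rebuilding it with `get_model_instance_segmentation(2)` and loading saved weights), since the `main()` above does not return the model.

```
# Illustrative inference sketch -- assumes a trained Mask R-CNN is available as
# `model` (e.g. returned by a modified main(), or rebuilt and loaded from a checkpoint).
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False))
img, _ = dataset_test[0]

model.to(device)
model.eval()
with torch.no_grad():
    prediction = model([img.to(device)])[0]

# Keep only confident detections
keep = prediction['scores'] > 0.5
print("boxes:", prediction['boxes'][keep])
print("scores:", prediction['scores'][keep])

# Look at the input image and the first predicted mask
input_image = Image.fromarray(img.mul(255).permute(1, 2, 0).byte().numpy())
first_mask = Image.fromarray(prediction['masks'][0, 0].mul(255).byte().cpu().numpy())
first_mask
```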
``` %pylab inline import numpy as np import pandas as pd import scipy.stats from matplotlib.backends.backend_pdf import PdfPages import sys sys.path.append("../errortools/") import errortools ``` # Fitting and predicting ``` ndim = 3 fit_intercept = True ndata = 100 p_true = [2, 0, -2, 0] np.random.seed(42) X = np.random.uniform(low=-1, high=1, size=ndim*ndata).reshape(ndata, ndim) p = scipy.stats.logistic.cdf(np.dot(np.concatenate((X, np.ones((X.shape[0],1), dtype=float)), axis=1), p_true)) y = (p > np.random.uniform(size=ndata)).astype(int) fig, ax = plt.subplots(1, 3, figsize=(15,4)) ax[0].plot(X[y==0,0], X[y==0,1], 'o', color='orange', alpha=0.2, markersize=5) ax[0].plot(X[y==1,0], X[y==1,1], 'o', color='green', alpha=0.2, markersize=5) ax[0].set_xlabel("x0") ax[0].set_ylabel("x1") ax[1].plot(X[y==0,0], X[y==0,2], 'o', color='orange', alpha=0.2, markersize=5) ax[1].plot(X[y==1,0], X[y==1,2], 'o', color='green', alpha=0.2, markersize=5) ax[1].set_xlabel("x0") ax[1].set_ylabel("x2") ax[2].plot(X[y==0,1], X[y==0,2], 'o', color='orange', alpha=0.2, markersize=5) ax[2].plot(X[y==1,1], X[y==1,2], 'o', color='green', alpha=0.2, markersize=5) ax[2].set_xlabel("x1") ax[2].set_ylabel("x2"); model = errortools.LogisticRegression(fit_intercept=True) model.fit(X,y) fig, ax = plt.subplots(1, 3, figsize=(20,5)) nstddvs = 1 p = model.parameters cvr_mtx = model.cvr_mtx prc_mtx = np.linalg.inv(cvr_mtx) u = np.linspace(-2, 2, 100).reshape(-1,1) a = np.zeros((100,1), dtype=float) x = np.concatenate((u, a, a), axis=1) f = model.predict(x) el1, eu1 = model.estimate_errors(x, nstddvs) es = model.estimate_errors_sampling(x, 100) el = model.estimate_errors_linear(x, 1) g = scipy.stats.logistic.cdf(np.dot(np.concatenate((x,np.ones((x.shape[0],1))),axis=1), p_true)) ax[0].plot(u, g, '-', color='black', alpha=1, label="true curve") ax[0].plot(u, f, '-', color='red', label="fitted curve") ax[0].fill_between(x=u.ravel(), y1=f-el1, y2=f+eu1, alpha=0.3, color='green', label="error") ax[0].fill_between(x=u.ravel(), y1=f-nstddvs*es, y2=f+nstddvs*es, alpha=0.3, color='orange', label="sampled error") ax[0].fill_between(x=u.ravel(), y1=f-nstddvs*el, y2=f+nstddvs*el, alpha=0.3, color='blue', label="linear error") ax[0].set_xlabel("x0") ax[0].set_ylabel("logistic prob") ax[0].legend() x = np.concatenate((a, u, a), axis=1) f = model.predict(x) el1, eu1 = model.estimate_errors(x, nstddvs) es = model.estimate_errors_sampling(x, 100) el = model.estimate_errors_linear(x, 1) g = scipy.stats.logistic.cdf(np.dot(np.concatenate((x,np.ones((x.shape[0],1))),axis=1), p_true)) ax[1].plot(u, g, '-', color='black', alpha=1, label="true curve") ax[1].plot(u, f, '-', color='red', label="fitted curve") ax[1].fill_between(x=u.ravel(), y1=f-el1, y2=f+eu1, alpha=0.3, color='green', label="error") ax[1].fill_between(x=u.ravel(), y1=f-nstddvs*es, y2=f+nstddvs*es, alpha=0.3, color='orange', label="sampled error") ax[1].fill_between(x=u.ravel(), y1=f-nstddvs*el, y2=f+nstddvs*el, alpha=0.3, color='blue', label="linear error") ax[1].set_xlabel("x1") ax[1].set_ylabel("logistic prob") ax[1].legend() x = np.concatenate((a, a, u), axis=1) f = model.predict(x) el1, eu1 = model.estimate_errors(x, nstddvs) es = model.estimate_errors_sampling(x, 100) el = model.estimate_errors_linear(x, 1) g = scipy.stats.logistic.cdf(np.dot(np.concatenate((x,np.ones((x.shape[0],1))),axis=1), p_true)) ax[2].plot(u, g, '-', color='black', alpha=1, label="true curve") ax[2].plot(u, f, '-', color='red', label="fitted curve") ax[2].fill_between(x=u.ravel(), y1=f-el1, y2=f+eu1, 
alpha=0.3, color='green', label="error") ax[2].fill_between(x=u.ravel(), y1=f-nstddvs*es, y2=f+nstddvs*es, alpha=0.3, color='orange', label="sampled error") ax[2].fill_between(x=u.ravel(), y1=f-nstddvs*el, y2=f+nstddvs*el, alpha=0.3, color='blue', label="linear error") ax[2].set_xlabel("x2") ax[2].set_ylabel("logistic prob") ax[2].legend(); ``` # Create report (2 ways) ``` features = ['x1', 'x2', 'x3', 'bias'] with PdfPages('Report.pdf') as pdf: errortools.errortools.report_correlation_matrix(model, features, pdf) errortools.errortools.report_parameter_error(model, features, pdf) errortools.errortools.report_loss_versus_approximation(model, X, y, 0, 0, features, pdf) errortools.report_error_indivial_pred(model, X[0], 'x1', features, 0, 20, 100, pdf) errortools.report_error_indivial_pred(model, X[0], 'x2', features, 0, 20, 100, pdf) errortools.report_model_positive_ratio(model, X, y, 1000, 10, pdf) errortools.report_error_test_samples(model, X, pdf) pdf = errortools.errortools.report_correlation_matrix(model, features=features) pdf = errortools.errortools.report_parameter_error(model, features, pdf) pdf = errortools.errortools.report_loss_versus_approximation(model, X, y, 0, 0, features, pdf) pdf = errortools.report_error_indivial_pred(model, X[0], 'x1', features, 0, 20, 100, pdf) pdf = errortools.report_error_indivial_pred(model, X[0], 'x2', features, 0, 20, 100, pdf) pdf = errortools.report_model_positive_ratio(model, X, y, 1000, 10, pdf) pdf = errortools.report_error_test_samples(model, X, pdf) pdf.close() ```
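Besides the PDF report, it can be useful to compare the fitted coefficients directly against the generating parameters `p_true`. The sketch below is an add-on, assuming (as the plotting cells above suggest) that `model.parameters` holds the fitted coefficients in the same order as `p_true` and that `model.cvr_mtx` is their covariance matrix; it prints each coefficient with a one-standard-deviation error taken from the diagonal of the covariance.

```
import numpy as np

# Add-on check: compare fitted coefficients to the generating parameters.
# Assumption: model.parameters is ordered like p_true and model.cvr_mtx is its covariance.
p_fit = np.asarray(model.parameters)
p_err = np.sqrt(np.diag(model.cvr_mtx))

for name, fit, err, true in zip(features, p_fit, p_err, p_true):
    pull = (fit - true) / err
    print(f"{name:>4}: fit = {fit:+.3f} +/- {err:.3f}  (true = {true:+.3f}, pull = {pull:+.2f})")
```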
``` import this print("this is my first program. ") len("fazlullah") a = 10 a type(a) b = 45.5 type(b) c = "fazlullah" type(c) d = 5+6j type(d) g = True type(g) *a = 67 _a = 88 type(a) a = 34 type(_a) a, b, c, d, e = 124,"fazlullah",6+8j,False,88.2 a b c a = "sudh" a+str(4) True + True True - False 1 + True a = input() a a = input() int(a)+8 a pwd c c.conjugate() c.imag c.real s = "sudh" s[1] s[2] s[3] s[4] s[100] s[-1] a = "my name is sudh" a[:10] b = "ineuron" b[:3] b[:300] b[300] b[-1] b[-100] b[-1:-4] b[0:4] a = 'kumar' a[0:300] a[0:300:1] a[0:300:2] a[0:300:3] a[0:100:-1] a[-1:-4] a[-1:-4:-1] a[-1:-10:-1] a[0:-10:-1] a[::] a[-2:] a[-2:-1] a[::-1] a[-1::-1] a = "I am working with ineuron" a a[::-1] a[-5:5] a[-5:5:-1] a[-2:-10:-1] "sudh"*3 "sudh" + " kumar" a len(a) a.find('a') a.find('i') a.find('ia') a.find('in') a.count('i') a a.count('x') l = a.split() l[0] l[1] l[2] l[0:3] a.split('w') a.split('wo') a.upper() s = "sUdh" s.swapcase() s.title() s.capitalize() b = "sudh" c = "ineuron" b.join(c) " ".join("sudh") for i in reversed("sudh"): print(i) s = " sudh " s[::-1] s.rstrip() s.lstrip() s.strip() s = "sudh" s.replace("u", "xyz") s.replace("t","xyz") "sudh\tkumar".expandtabs() s.center(40,'t') s.isupper() s = "Sudh" s.isupper() s = "SUDH" s.isupper() s.islower() s.isspace() s = " sudh" s.isspace() s = " " s.isspace() s = "sudh" s.isdigit() s = "456321" s.isdigit() s = "sudh" s.endswith('h') s.endswith('x') s.startswith('s') s.istitle() s.encode() l = ["sudh", "kumar",3456,4+9j, True, 354.25] type(l) l[0] l[-1] l[-5] l[0:4] l[::-1] l[-1:6] l[0] l[0][1] l[3].real l1 = ["sudh", "kumar",4587] l2 = ["xyz","pqr",456.25] l1+l2 l1 + ["sudh"] l1*4 l1*2 l1 l3 = l1[0].replace("sudh","Faiz") l3 l3 l1[0] = "Faiz" l1 l4 = l[1].replace('k','s') l4 l1 len(l1) 32547 in l1 l2 l2.append("sudh") l2 l2.pop() l2.pop(2) l2 l2.append(345687) l2.insert(1,"faiz") l2 l2.insert(3,[325,'bukhari',"kumar"]) l2 l2[::-1] l2.reverse() l2 l1 l2 l2[1][2] l2 l2.count('xyz') l2.append("munger") l2.append([3,4,54,6]) l2 l1 l1.extend(['faiz',3548,2.25,True]) l1 #3:03/4 #Tuples t = (1,2,3,4,5) type(t) t1 = ("sudh",345,45+6j, 45.50, True) l = ["sudh",345,45+6j, 45.50, True] type(t1) type(l) t2 = () type(t2) t1 l l[0:2] t1[0:2] t1[::-1] t1[-1] t1 t1[0::2] l l1 = [4,5,6,7] l1 l[0] = "kumar" l t1 t1[0] = "xyz" t1 t2 = (34,56,56.5,654) t1+t2 l+l1 t1 t1*2 t1.count("sudh") t1.index("sudh") t = (45,456,23.5,("sudh",4,5,6),("sudh")) t t1 = ([1,2,30],("sudh",456,23.5,45),"sudh") t1 t1[0][1] = "faiz" t1[0][1] t1 t1[0] = "faiz" list(t1) l tuple(l) # set l = [1,2,3,4,5,6,5,5,55,6,4,7,8,6,4,5,55,5,5,5] set(l) s = {} type(s) s1 = {1,2,3,4} type(s1) s2 = {1,11,2,3,1,1,2,3,4,4,4,5,6,6,6,2,0,1,1,4,2,2,1,1} s2 s2[0] list(s2) s2 s2.add(1234) s2 s2.add("faiz") s2 s2.add([1,2,3,4]) {[3,4,5],3,45,56,4} {(3,4,5),3,45,56,4} s = {(3, 4, 5),(3, 4, 5), 3, 4, 45, 56, 3, 4, 45, 56} s s.remove(4) s s.discard(45) s s.discard(45) s s.remove(4) # set is neithe a mutable nor imutable {"Faiz","faiz"} {"sudh","Sudh"} s = {1,2,3,4,4,5,1,2,1,2,3,2,4,"faiz","faiz"} s #dictionary d = {} type(d) d = {1,5} type(d) d = {4:"sudh"} d1 = {"key1":4554,"key2":"sudh",45:[3,4,5,6,8]} d1 d1["key1"] d1[45] d = {3:["sudh",'faiz',4,5,6,4]} d[3] d = {_4:["sudh",'faiz',4,5,6,4]} d = {.4:["sudh",'faiz',4,5,6,4]} d = {"key":("sudh",'faiz',4,5,6,4)} d = {"key":{"sudh",'faiz',4,5,6,4}} d1 = {"key1":[2,3,4,5],"key2":"sudh","key1":45} d1["key1"] d1 d1 = {"key1":[2,3,4,5],"key2":"sudh","kumar":45} d d = {"name":"sudhanshu","mo_no":9873300865,"mail_id":"[email 
protected]","key1":[4,5,6,7],"key2":(3,4,5,6), "key3":{4,7,8,5,78,5,4,5},"key4":{1:5,5:6}} d d["key3"] type(d["key3"]) d["key4"] d["key4"][5] d.keys() d.values() d.items() type(d.items()) d = {"key1":"sudh","key2":[1,2,3,4]} d d["key3"] = "kumar" d d[4] = [2,5,4,8,6] d d["key1"] = "Fazlullah" d del d["key1"] d del d d d1 = {"key1":"sudh","key2":[4,5,67,8]} d1[[1,2,3]] = "ineuron" d1 d1[(1,2,3)] = "ineuron" d1 d1.get("key1") d1 = {"key1":"ineuron","key":"FSDS"} d2 = {"key2":456,"key3":[1,2,3,4,5]} d1.update(d2) d1 d2 d1+d2 t1 = ("faiz",1,1+5j,True) t1.index(True) set(t1) d1 key = ("name","mobile_no","email_id") value = "sudh" d = d1.fromkeys(key,value) d #3:20/5 ```
# Predicting Boston Housing Prices ## Updating a model using SageMaker _Deep Learning Nanodegree Program | Deployment_ --- In this notebook, we will continue working with the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html). Our goal in this notebook will be to train two different models and to use SageMaker to switch a deployed endpoint from using one model to the other. One of the benefits of using SageMaker to do this is that we can make the change without interrupting service. What this means is that we can continue sending data to the endpoint and at no point will that endpoint disappear. ## General Outline Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons. 1. Download or otherwise retrieve the data. 2. Process / Prepare the data. 3. Upload the processed data to S3. 4. Train a chosen model. 5. Test the trained model (typically using a batch transform job). 6. Deploy the trained model. 7. Use the deployed model. In this notebook we will be skipping step 5, testing the model. In addition, we will perform steps 4, 6 and 7 multiple times with different models. ## Step 0: Setting up the notebook We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need. ``` %matplotlib inline import os import numpy as np import pandas as pd from pprint import pprint import matplotlib.pyplot as plt from time import gmtime, strftime from sklearn.datasets import load_boston import sklearn.model_selection ``` In addition to the modules above, we need to import the various bits of SageMaker that we will be using. ``` import sagemaker from sagemaker import get_execution_role from sagemaker.amazon.amazon_estimator import get_image_uri from sagemaker.predictor import csv_serializer # This is an object that represents the SageMaker session that we are currently operating in. This # object contains some useful information that we will need to access later such as our region. session = sagemaker.Session() # This is an object that represents the IAM role that we are currently assigned. When we construct # and launch the training job later we will need to tell it what IAM role it should have. Since our # use case is relatively simple we will simply assign the training job the role we currently have. role = get_execution_role() ``` ## Step 1: Downloading the data Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward. ``` boston = load_boston() ``` ## Step 2: Preparing and splitting the data Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets. ``` # First we package up the input data and the target variable (the median value) as pandas dataframes. This # will make saving the data to a file a little easier later on. X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names) Y_bos_pd = pd.DataFrame(boston.target) # We split the dataset into 2/3 training and 1/3 testing sets. X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33) # Then we split the training set further into 2/3 training and 1/3 validation sets. 
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33) ``` ## Step 3: Uploading the training and validation files to S3 When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. We can use the SageMaker API to do this and hide some of the details. ### Save the data locally First we need to create the train and validation csv files which we will then upload to S3. ``` # This is our local data directory. We need to make sure that it exists. data_dir = '../data/boston' if not os.path.exists(data_dir): os.makedirs(data_dir) # We use pandas to save our train and validation data to csv files. Note that we make sure not to include header # information or an index as this is required by the built in algorithms provided by Amazon. Also, it is assumed # that the first entry in each row is the target variable. pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) ``` ### Upload to S3 Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project. ``` prefix = 'boston-update-endpoints' val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ``` ## Step 4 (A): Train the XGBoost model Now that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. We will be making use of the high level SageMaker API to do this which will make the resulting code a little easier to read at the cost of some flexibility. To construct an estimator, the object which we wish to train, we need to provide the location of a container which contains the training code. Since we are using a built in algorithm this container is provided by Amazon. However, the full name of the container is a bit lengthy and depends on the region that we are operating in. Fortunately, SageMaker provides a useful utility method called `get_image_uri` that constructs the image name for us. To use the `get_image_uri` method we need to provide it with our current region, which can be obtained from the session object, and the name of the algorithm we wish to use. In this notebook we will be using XGBoost however you could try another algorithm if you wish. The list of built in algorithms can be found in the list of [Common Parameters](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html). ``` # As stated above, we use this utility method to construct the image name for the training container. xgb_container = get_image_uri(session.boto_region_name, 'xgboost') # Now that we know which container to use, we can construct the estimator object. 
xgb = sagemaker.estimator.Estimator(xgb_container, # The name of the training container role, # The IAM role to use (our current role in this case) train_instance_count=1, # The number of instances to use for training train_instance_type='ml.m4.xlarge', # The type of instance ot use for training output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), # Where to save the output (the model artifacts) sagemaker_session=session) # The current SageMaker session ``` Before asking SageMaker to begin the training job, we should probably set any model specific hyperparameters. There are quite a few that can be set when using the XGBoost algorithm, below are just a few of them. If you would like to change the hyperparameters below or modify additional ones you can find additional information on the [XGBoost hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html) ``` xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, objective='reg:linear', early_stopping_rounds=10, num_round=200) ``` Now that we have our estimator object completely set up, it is time to train it. To do this we make sure that SageMaker knows our input data is in csv format and then execute the `fit` method. ``` # This is a wrapper around the location of our train and validation data, to make sure that SageMaker # knows our data is in csv format. s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='text/csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='text/csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ``` ## Step 5: Test the trained model We will be skipping this step for now. ## Step 6 (A): Deploy the trained model Even though we used the high level approach to construct and train the XGBoost model, we will be using the lower level approach to deploy it. One of the reasons for this is so that we have additional control over how the endpoint is constructed. This will be a little more clear later on when we construct more advanced endpoints. ### Build the model Of course, before we can deploy the model, we need to first create it. The `fit` method that we used earlier created some model artifacts and we can use these to construct a model object. ``` # Remember that a model needs to have a unique name xgb_model_name = "boston-update-xgboost-model" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # We also need to tell SageMaker which container should be used for inference and where it should # retrieve the model artifacts from. In our case, the xgboost container that we used for training # can also be used for inference and the model artifacts come from the previous call to fit. xgb_primary_container = { "Image": xgb_container, "ModelDataUrl": xgb.model_data # model artifacts created earlier in xgb.fit(...) } # And lastly we construct the SageMaker model xgb_model_info = session.sagemaker_client.create_model( ModelName = xgb_model_name, ExecutionRoleArn = role, PrimaryContainer = xgb_primary_container) ``` ### Create the endpoint configuration Once we have a model we can start putting together the endpoint. Recall that to do this we need to first create an endpoint configuration, essentially the blueprint that SageMaker will use to build the endpoint itself. 
``` # As before, we need to give our endpoint configuration a name which should be unique xgb_endpoint_config_name = "boston-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # And then we ask SageMaker to construct the endpoint configuration xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": xgb_model_name, "VariantName": "XGB-Model" }]) ``` ### Deploy the endpoint Now that the endpoint configuration has been created, we can ask SageMaker to build our endpoint. **Note:** This is a friendly (repeated) reminder that you are about to deploy an endpoint. Make sure that you shut it down once you've finished with it! ``` # Again, we need a unique name for our endpoint endpoint_name = "boston-update-endpoint-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # And then we can deploy our endpoint endpoint_info = session.sagemaker_client.create_endpoint( EndpointName = endpoint_name, EndpointConfigName = xgb_endpoint_config_name) endpoint_dec = session.wait_for_endpoint(endpoint_name) ``` ## Step 7 (A): Use the model Now that our model is trained and deployed we can send some test data to it and evaluate the results. ``` response = session.sagemaker_runtime_client.invoke_endpoint( EndpointName = endpoint_name, ContentType = 'text/csv', Body = ','.join(map(str, X_test.values[0]))) pprint(response) result = response['Body'].read().decode("utf-8") pprint(result) Y_test.values[0] ``` ## Shut down the endpoint Now that we know that the XGBoost endpoint works, we can shut it down. We will make use of it again later. ``` session.sagemaker_client.delete_endpoint(EndpointName = endpoint_name) ``` ## Step 4 (B): Train the Linear model Suppose we are working in an environment where the XGBoost model that we trained earlier is becoming too costly. Perhaps the number of calls to our endpoint has increased and the length of time it takes to perform inference with the XGBoost model is becoming problematic. A possible solution might be to train a simpler model to see if it performs nearly as well. In our case, we will construct a linear model. The process of doing this is the same as for creating the XGBoost model that we created earlier, although there are different hyperparameters that we need to set. ``` # Similar to the XGBoost model, we will use the utility method to construct the image name for the training container. linear_container = get_image_uri(session.boto_region_name, 'linear-learner') # Now that we know which container to use, we can construct the estimator object. linear = sagemaker.estimator.Estimator(linear_container, # The name of the training container role, # The IAM role to use (our current role in this case) train_instance_count=1, # The number of instances to use for training train_instance_type='ml.m4.xlarge', # The type of instance ot use for training output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), # Where to save the output (the model artifacts) sagemaker_session=session) # The current SageMaker session ``` Before asking SageMaker to train our model, we need to set some hyperparameters. In this case we will be using a linear model so the number of hyperparameters we need to set is much fewer. 
For more details see the [Linear model hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/ll_hyperparameters.html) ``` linear.set_hyperparameters(feature_dim=13, # Our data has 13 feature columns predictor_type='regressor', # We wish to create a regression model mini_batch_size=200) # Here we set how many samples to look at in each iteration ``` Now that the hyperparameters have been set, we can ask SageMaker to fit the linear model to our data. ``` linear.fit({'train': s3_input_train, 'validation': s3_input_validation}) ``` ## Step 6 (B): Deploy the trained model Similar to the XGBoost model, now that we've fit the model we need to deploy it. Also like the XGBoost model, we will use the lower level approach so that we have more control over the endpoint that gets created. ### Build the model Of course, before we can deploy the model, we need to first create it. The `fit` method that we used earlier created some model artifacts and we can use these to construct a model object. ``` # First, we create a unique model name linear_model_name = "boston-update-linear-model" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # We also need to tell SageMaker which container should be used for inference and where it should # retrieve the model artifacts from. In our case, the linear-learner container that we used for training # can also be used for inference. linear_primary_container = { "Image": linear_container, "ModelDataUrl": linear.model_data } # And lastly we construct the SageMaker model linear_model_info = session.sagemaker_client.create_model( ModelName = linear_model_name, ExecutionRoleArn = role, PrimaryContainer = linear_primary_container) ``` ### Create the endpoint configuration Once we have the model we can start putting together the endpoint by creating an endpoint configuration. ``` # As before, we need to give our endpoint configuration a name which should be unique linear_endpoint_config_name = "boston-linear-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # And then we ask SageMaker to construct the endpoint configuration linear_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = linear_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": linear_model_name, "VariantName": "Linear-Model" }]) ``` ### Deploy the endpoint Now that the endpoint configuration has been created, we can ask SageMaker to build our endpoint. **Note:** This is a friendly (repeated) reminder that you are about to deploy an endpoint. Make sure that you shut it down once you've finished with it! ``` # Again, we need a unique name for our endpoint endpoint_name = "boston-update-endpoint-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # And then we can deploy our endpoint endpoint_info = session.sagemaker_client.create_endpoint( EndpointName = endpoint_name, EndpointConfigName = linear_endpoint_config_name) endpoint_dec = session.wait_for_endpoint(endpoint_name) ``` ## Step 7 (B): Use the model Just like with the XGBoost model, we will send some data to our endpoint to make sure that it is working properly. An important note is that the output format for the linear model is different from the XGBoost model. 
``` response = session.sagemaker_runtime_client.invoke_endpoint( EndpointName = endpoint_name, ContentType = 'text/csv', Body = ','.join(map(str, X_test.values[0]))) pprint(response) result = response['Body'].read().decode("utf-8") pprint(result) Y_test.values[0] ``` ## Shut down the endpoint Now that we know that the Linear model's endpoint works, we can shut it down. ``` session.sagemaker_client.delete_endpoint(EndpointName = endpoint_name) ``` ## Step 6 (C): Deploy a combined model So far we've constructed two separate models which we could deploy and use. Before we talk about how we can change a deployed endpoint from one configuration to another, let's consider a slightly different situation. Suppose that before we switch from using only the XGBoost model to only the Linear model, we first want to do something like an A-B test, where we send some of the incoming data to the XGBoost model and some of the data to the Linear model. Fortunately, SageMaker provides this functionality. And to actually get SageMaker to do this for us is not too different from deploying a model in the way that we've already done. The only difference is that we need to list more than one model in the production variants parameter of the endpoint configuration. A reasonable question to ask is, how much data is sent to each of the models that I list in the production variants parameter? The answer is that it depends on the weight set for each model. Suppose that we have $k$ models listed in the production variants and that each model $i$ is assigned the weight $w_i$. Then each model $i$ will receive $w_i / W$ of the traffic where $W = \sum_{i} w_i$. In our case, since we have two models, the linear model and the XGBoost model, and each model has weight 1, we see that each model will get 1 / (1 + 1) = 1/2 of the data sent to the endpoint. ``` # As before, we need to give our endpoint configuration a name which should be unique combined_endpoint_config_name = "boston-combined-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # And then we ask SageMaker to construct the endpoint configuration combined_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = combined_endpoint_config_name, ProductionVariants = [ { # First we include the linear model "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": linear_model_name, "VariantName": "Linear-Model" }, { # And next we include the xgb model "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": xgb_model_name, "VariantName": "XGB-Model" }]) ``` Now that we've created the endpoint configuration, we can ask SageMaker to construct the endpoint. **Note:** This is a friendly (repeated) reminder that you are about to deploy an endpoint. Make sure that you shut it down once you've finished with it! ``` # Again, we need a unique name for our endpoint endpoint_name = "boston-update-endpoint-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # And then we can deploy our endpoint endpoint_info = session.sagemaker_client.create_endpoint( EndpointName = endpoint_name, EndpointConfigName = combined_endpoint_config_name) endpoint_dec = session.wait_for_endpoint(endpoint_name) ``` ## Step 7 (C): Use the model Now that we've constructed an endpoint which sends data to both the XGBoost model and the linear model we can send some data to the endpoint and see what sort of results we get back. 
``` response = session.sagemaker_runtime_client.invoke_endpoint( EndpointName = endpoint_name, ContentType = 'text/csv', Body = ','.join(map(str, X_test.values[0]))) pprint(response) ``` Since looking at a single response doesn't give us a clear look at what is happening, we can instead take a look at a few different responses to our endpoint ``` for rec in range(10): response = session.sagemaker_runtime_client.invoke_endpoint( EndpointName = endpoint_name, ContentType = 'text/csv', Body = ','.join(map(str, X_test.values[rec]))) pprint(response) result = response['Body'].read().decode("utf-8") print(result) print(Y_test.values[rec]) ``` If at some point we aren't sure about the properties of a deployed endpoint, we can use the `describe_endpoint` function to get SageMaker to return a description of the deployed endpoint. ``` pprint(session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name)) ``` ## Updating an Endpoint Now suppose that we've done our A-B test and the new linear model is working well enough. What we'd like to do now is to switch our endpoint from sending data to both the XGBoost model and the linear model to sending data only to the linear model. Of course, we don't really want to shut down the endpoint to do this as doing so would interrupt service to whoever depends on our endpoint. Instead, we can ask SageMaker to **update** an endpoint to a new endpoint configuration. What is actually happening is that SageMaker will set up a new endpoint with the new characteristics. Once this new endpoint is running, SageMaker will switch the old endpoint so that it now points at the newly deployed model, making sure that this happens seamlessly in the background. ``` session.sagemaker_client.update_endpoint(EndpointName=endpoint_name, EndpointConfigName=linear_endpoint_config_name) ``` To get a glimpse at what is going on, we can ask SageMaker to describe our in-use endpoint now, before the update process has completed. When we do so, we can see that the in-use endpoint still has the same characteristics it had before. ``` pprint(session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name)) ``` If we now wait for the update process to complete, and then ask SageMaker to describe the endpoint, it will return the characteristics of the new endpoint configuration. ``` endpoint_dec = session.wait_for_endpoint(endpoint_name) pprint(session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name)) ``` ## Shut down the endpoint Now that we've finished, we need to make sure to shut down the endpoint. ``` session.sagemaker_client.delete_endpoint(EndpointName = endpoint_name) ``` ## Optional: Clean up The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ``` # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir ```
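As a final check that nothing was left running (and incurring charges), you can ask SageMaker which endpoints still exist in your account. The short sketch below uses the same low-level `sagemaker_client` as the rest of the notebook; the `MaxResults` value is an arbitrary cap.

```
# Sanity check: list any endpoints that still exist in this account / region.
# If the deletes above succeeded, none of the "boston-update-endpoint-..." names
# created in this notebook should show up.
endpoints = session.sagemaker_client.list_endpoints(MaxResults=100)
for ep in endpoints['Endpoints']:
    print(ep['EndpointName'], '-', ep['EndpointStatus'])
```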
``` %load_ext autoreload %autoreload 2 import sys import importlib import re import pandas as pd import sqlalchemy as sa import pudl import logging logger = logging.getLogger() logger.setLevel(logging.INFO) handler = logging.StreamHandler(stream=sys.stdout) formatter = logging.Formatter('%(message)s') handler.setFormatter(formatter) logger.handlers = [handler] import matplotlib.pyplot as plt import matplotlib as mpl import seaborn as sns sns.set() %matplotlib inline mpl.rcParams['figure.figsize'] = (8,8) mpl.rcParams['figure.dpi'] = 150 pd.options.display.max_columns = 100 pd.options.display.max_rows = 100 pudl_settings = pudl.workspace.setup.get_defaults() ferc1_engine = sa.create_engine(pudl_settings['ferc1_db']) pudl_engine = sa.create_engine(pudl_settings['pudl_db']) pudl_settings ``` # Sales by Rate Schedule ### Request 2: * **p304 L8(b)** TOTAL RESIDENTIAL - MWh Sold (difficult b/c rate schedules change) * **p304 L8(c)** TOTAL RESIDENTIAL - Revenue (difficult b/c rate schedules change) * **p304 L8(d)** TOTAL RESIDENTIAL - Avg Number of Customers (difficult b/c rate schedules change) * **p304 L24(b)** TOTAL SM/LG C&I - MWh Sold (difficult b/c rate schedules change) * **p304 L24(c)** TOTAL SM/LG C&I - Revenue (difficult b/c rate schedules change) * **p304 L24(d)** TOTAL SM/LG C&I - Avg. Number of Customers (difficult b/c rate schedules change) * **p304 L43(b)** TOTAL (MWh Sold?) (doable b/c it's a specific designated row) * **p304 L43(c)** TOTAL (Revenue?) (doable b/c it's a specific designated row) * **p304 L43(d)** TOTAL (Avg. Number of Customers?) (doable b/c it's a specific designated row) Note that due to lack of any kind of standardization or categorization in the rate schedule nomenclature, and the variety that exists between all the different states and across years, it is extremely difficult to extract anything useful from this table. Even the `total` line is kind of a mess. However, the same information can be pulled from the `f1_elctrc_oper_rev` table (p300) below. ``` sales_by_sched = ( pd.read_sql("f1_sales_by_sched", ferc1_engine). 
pipe(pudl.transform.ferc1.unpack_table, table_name="f1_sales_by_sched", data_cols=[ "mwh_sold", "revenue", "avg_num_cstmr" ], data_rows=[ "total" ]) .droplevel(1, axis="columns") .dropna(how="all") .query("report_prd==12") .reset_index() .set_index(["respondent_id", "report_year", "spplmnt_num"]) .drop("report_prd", axis="columns") .rename(columns={ "avg_num_cstmr": "avg_customers", }) .astype({"avg_customers": "Int64"}) ) ``` # O&M Expenses ### Request 2: * **p320 L5(b)** (Steam) Fuel (501) * **p320 L25(b)** (Nuclear) Fuel (518) * **p320 L63(b)** (Other) Fuel (547) * **p321 L76(b)** Purchased Power (555) ``` elec_oandm = ( pd.read_sql("f1_elc_op_mnt_expn", ferc1_engine) .pipe( pudl.transform.ferc1.unpack_table, table_name="f1_elc_op_mnt_expn", data_cols=[ "crnt_yr_amt" ], data_rows=[ "production_steam_ops_acct501_fuel", "production_nuclear_ops_acct518_fuel", "production_other_ops_acct547_fuel", "production_supply_acct555_purchased_power", ]) .droplevel(0, axis="columns") .dropna(how="all") .query("report_prd==12") .reset_index() .set_index(["respondent_id", "report_year"]) .drop(["report_prd", "spplmnt_num"], axis="columns") .rename_axis(columns=None) ) ``` # Operating Revenues ### Request 2: * **p300 L10(b,d,f)** TOTAL Sales to Ultimate Consumers (revenues, MWh, avg customers) * **p300 L11(b,d,f)** Sales for Resale (447) (revenues, MWh, avg customers) * **p300 L12(b,d,f)** TOTAL Sales of Electricity (revenues, MWh, avg customers) ### Additional Data in lieu of `f1_sales_by_sched` * **p300 L2(b,d,f)** Residential Sales (440) (revenues, MWh, avg customers) * **p300 L4(b,d,f)** Commercial and Industrial Sales, Small (442) (revenues, MWh, avg customers) * **p300 L5(b,d,f)** Commercial and Industrial Sales, Large (442) (revenues, MWh, avg customers) ``` sales_hierarchical = ( pd.read_sql("f1_elctrc_oper_rev", ferc1_engine) .pipe( pudl.transform.ferc1.unpack_table, table_name="f1_elctrc_oper_rev", data_cols=[ "rev_amt_crnt_yr", "mwh_sold_crnt_yr", "avg_cstmr_crntyr" ], data_rows=[ "sales_acct440_residential", "sales_acct442_commercial_industrial_small", "sales_acct442_commercial_industrial_large", "sales_acct447_for_resale", "sales_ultimate_consumers_total", "sales_of_electricity_total", "sales_revenues_net_total", ]) .dropna(how="all") .query("report_prd==12") .reset_index() .set_index(["respondent_id", "report_year"]) .drop([("report_prd", ""), ("spplmnt_num", "")], axis="columns") .rename_axis(columns=[None, None]) .assign(avg_cstmr_crntyr=lambda x: x.loc[:, "avg_cstmr_crntyr"].astype("Int64")) ) sales_revenue = ( sales_hierarchical.loc[:, "rev_amt_crnt_yr"] .add_suffix("_revenue") ) sales_mwh = ( sales_hierarchical.loc[:, "mwh_sold_crnt_yr"] .add_suffix("_mwh") ) sales_customers = ( sales_hierarchical.loc[:, "avg_cstmr_crntyr"] .add_suffix("_customers") ) elec_sales = pd.concat([sales_revenue, sales_mwh, sales_customers], axis="columns") elec_sales = elec_sales.loc[:,elec_sales.columns.sort_values()] ``` # Income Statements ## Request 1 * **p115 L2(g+h)** Electric Operating Revenues Current+Prior Year * **p115 L?(g+h)** Electric Operating Expenses Current+Prior Year * **p115 L26(g+h)** Electric Net Util Oper Inc Current Year Current+Prior Year ## Request 2 * **p114 L6(g)** Depreciation Expense (403) * **p114 L7(g)** Depreciation Expense for Asset Retirement Costs (403.1) * **p114 L8(g)** Amort. & Depl. of Utility Plant (404-405) * **p114 L9(g)** Amort. of Uitlity Plant Acq. Adj. (406) * **p114 L10(g)** Amort. of Property Losses, Urecov Plant and Reg. 
Study Costs (407) * **p114 L11(g)** Amort. of Conversion Expenses (407) * **p114 L14(g)** Taxes other than Income taxes (408.1) * **p114 L15(g)** Income Taxes - Federal (409.1) * **p114 L16(g)** Other (409.1) * **p114 L17(g)** Provision for Deferred Income Taxes (410.1) * **p114 L18(g)** (Less) Provision for Deferred Tax Credits (411.1) * **p114 L19(g)** Investment Tax Credit Adj. - Net (411.4) ``` elec_income = ( pd.read_sql("f1_income_stmnt", ferc1_engine) .pipe( pudl.transform.ferc1.unpack_table, table_name="f1_income_stmnt", data_cols = [ "cy_elctrc_total", ], data_rows = [ "operating_revenues_acct400", "depreciation_expenses_acct403", "depreciation_expenses_asset_retirement_acct403_1", "amortization_depletion_utility_plant_acct404_405", "amortization_utility_plant_acquired_acct406", "amortized_conversion_expenses_acct407", "amortized_losses_acct407", "non_income_tax_acct408_1", "federal_income_tax_acct409_1", "other_acct409_1", "deferred_income_tax_acct410_1", "deferred_income_tax_credit_acct411_1", "investment_tax_credit_acct411_4", "utility_operating_expenses_total", "net_utility_operating_income", ]) .dropna(how="all") .query("report_prd==12") .droplevel(0, axis="columns") .reset_index() .set_index(["respondent_id", "report_year"]) .drop(["report_prd", "spplmnt_num"], axis="columns") .rename_axis(columns=None) ) ``` # Depreciation ### Request 1: * **P336 L12(f)** Electric Depreciation Expense ``` elec_depreciation = ( pd.read_sql("f1_dacs_epda", ferc1_engine) .pipe( pudl.transform.ferc1.unpack_table, table_name="f1_dacs_epda", data_cols=['total'], data_rows=["total_electric_plant"] ) .dropna(how="all") .query("report_prd==12") .droplevel(0, axis="columns") .reset_index() .set_index(["respondent_id", "report_year"]) .drop(["report_prd", "spplmnt_num"], axis="columns") .rename_axis(columns=None) ) ``` # Plant in Service ### Request 1: * **P206 L104(b)** TOTAL Electric Plant in Service Bal Beginning of Year * **P206 L104(g)** TOTAL Electric Plant in Service Bal End of Year ``` elec_plant_in_service = ( pd.read_sql("f1_plant_in_srvce", ferc1_engine) .pipe( pudl.transform.ferc1.unpack_table, table_name="f1_plant_in_srvce", data_cols=["begin_yr_bal", "yr_end_bal"], data_rows=["electric_plant_in_service_total"] ) .dropna(how="all") .query("report_prd==12") .droplevel(1, axis="columns") .reset_index() .set_index(["respondent_id", "report_year"]) .drop(["report_prd", "spplmnt_num"], axis="columns") .rename(columns={ "begin_yr_bal": "starting_balance", "yr_end_bal": "ending_balance", }) ) ``` # EIA 861 Sales by Customer Class ## Request 2: **Pull:** * Annual revenues (USD) * Annual sales (MWh) * Annual customers counts **For each of:** * Residential customers * Commercial customers * Industrial customers * All Customers (even though are others like Transportation sales in there?) 
``` import pudl.extract.eia861 eia861_years = range(1999,2019) eia861_extractor = pudl.extract.eia861.ExtractorExcel( dataset_name="eia861", years=eia861_years, pudl_settings=pudl_settings ) eia861_dfs = eia861_extractor.create_dfs(years=eia861_years) sales_eia861 = eia861_dfs["sales_eia861_states"] cols = [ "report_year", "utility_id_eia", "state", "residential_revenues", "residential_sales_mwh", "residential_customers", "commercial_revenues", "commercial_sales_mwh", "commercial_customers", "industrial_revenues", "industrial_sales_mwh", "industrial_customers", "transportation_revenues", "transportation_sales_mwh", "transportation_customers", "other_revenues", "other_sales_mwh", "other_customers", "total_revenues", "total_sales_mwh", "total_customers", ] sales_eia861 = ( sales_eia861.loc[:,cols].reset_index(drop=True) .query("utility_id_eia not in (88888, 99999)") .assign(report_year=lambda x: pd.to_numeric(x.report_year, errors="coerce")) .dropna(subset=["report_year", "utility_id_eia"]) .astype({"report_year": int, "utility_id_eia": int}, errors="ignore") ) rev_cols = sales_eia861.filter(regex=".*_revenues$").columns for col in rev_cols: sales_eia861.loc[:,col] = 1000.0 * pd.to_numeric(sales_eia861[col], errors="coerce") cust_cols = sales_eia861.filter(regex=".*_customers$").columns for col in cust_cols: sales_eia861.loc[:,col] = pd.to_numeric(sales_eia861[col], errors="coerce").astype("Int64") mwh_cols = sales_eia861.filter(regex=".*_sales_mwh$").columns for col in mwh_cols: sales_eia861.loc[:,col] = pd.to_numeric(sales_eia861[col], errors="coerce") #new_df = pd.DataFrame() #for customer_class in ["residential", "commercial", "industrial", "transportation", "other", "total"]: # tmp_df = ( # sales_eia861.set_index(["utility_id_eia", "report_year", "state"]) # .filter(regex=f"^{customer_class}_.*") # .assign(customer_class=customer_class) # .rename(columns=lambda x: re.sub(f"^{customer_class}_", "", x)) # ) # new_df = new_df.append(tmp_df) #sales_eia861 = ( # new_df.reset_index() # .assign( # revenues=lambda x: 1000.0 * pd.to_numeric(x.revenues, errors="coerce"), # customers=lambda x: pd.to_numeric(x.customers, errors="coerce"), # sales_mwh=lambda x: pd.to_numeric(x.sales_mwh, errors="coerce") # ) # .astype({"customers": "Int64"}) # .set_index(["utility_id_eia", "state", "report_year", "customer_class"]) # .reset_index() #) ferc1_dfs = { "elec_oandm_ferc1": elec_oandm, "elec_sales_ferc1": elec_sales, "elec_income_ferc1": elec_income, "elec_depreciation_ferc1": elec_depreciation, "elec_plant_in_service_ferc1": elec_plant_in_service, } ``` # Check Data Quality ## Spot check dataframes ``` elec_plant_in_service.columns elec_depreciation.columns print(elec_plant_in_service.count()) df = pd.merge(elec_plant_in_service, elec_depreciation, left_index=True, right_index=True) sns.scatterplot(x="total_electric_plant", y="ending_balance", data=df) elec_income.columns elec_income.sample(10) df = elec_income.reset_index() df["net_income_calculated"] = df.operating_revenues_acct400 - df.utility_operating_expenses_total income_totals = [ "operating_revenues_acct400", "utility_operating_expenses_total", "net_utility_operating_income", "net_income_calculated", ] for col in income_totals: sns.lineplot(x="report_year", y=col, data=df, estimator="sum", label=col) plt.ylabel("USD") plt.legend() plt.show(); sns.scatterplot(x="net_utility_operating_income", y="net_income_calculated", data=df) df = elec_sales.reset_index() mwh_cols = df.filter(regex=".*acct.*mwh$").columns df = 
pudl.transform.ferc1.oob_to_nan(df, mwh_cols, ub=1e9) cust_cols = df.filter(regex=".*acct.*customers$").columns df.loc[:,cust_cols] = df.loc[:,cust_cols].astype(float) rev_cols = df.filter(regex=".*acct.*revenue$").columns mwh_cols for var in mwh_cols: sns.lineplot(x="report_year", y=var, data=df, estimator="mean", label=var) plt.ylabel("Electricity Sold [MWh]") plt.xlabel(None) plt.legend() plt.show() for var in cust_cols: sns.lineplot(x="report_year", y=var, data=df, estimator="mean", label=var) plt.ylabel("Number of Customers") plt.xlabel(None) plt.legend() plt.show() df.query("respondent_id==151") sns.lineplot(x="report_year", y="sales_acct447_for_resale_customers", data=df, units="respondent_id", estimator=None) for var in rev_cols: sns.lineplot(x="report_year", y=var, data=df, estimator="mean", label=var) plt.ylabel("Electricity Revenues [USD]") plt.xlabel(None) plt.legend() plt.show() mwh_cols sns.scatterplot(x="sales_acct440_residential_mwh", y="sales_acct440_residential_revenue", data=df, alpha=0.1, label="Residential") plt.show() sns.scatterplot(x="sales_acct442_commercial_industrial_small_mwh", y="sales_acct442_commercial_industrial_small_revenue", data=df, alpha=0.1, label="Small C&I") plt.show() sns.scatterplot(x="sales_acct442_commercial_industrial_large_mwh", y="sales_acct442_commercial_industrial_large_revenue", data=df, alpha=0.1, label="Large C&I") plt.show() df = elec_sales.reset_index() for var in df.filter(regex=".*acct.*customers$").columns: sns.lineplot(x="report_year", y=var, data=df, estimator="sum", label=var) plt.ylabel("Customers") plt.xlabel(None) plt.legend() df = elec_oandm.reset_index() for var in df.filter(regex="^production_.*").columns: sns.lineplot(x="report_year", y=var, data=df, estimator="sum", label=var.split('_')[1]) plt.ylabel("[USD]") plt.xlabel(None) plt.title("Total Fuel / Purchased Power Costs") plt.legend() plt.show() sns.lineplot(x="report_year", y="production_steam_ops_acct501_fuel", data=df, units="respondent_id", estimator=None, alpha=0.1) plt.show() sns.lineplot(x="report_year", y="production_nuclear_ops_acct518_fuel", data=df, units="respondent_id", estimator=None, alpha=0.1) plt.show() sns.lineplot(x="report_year", y="production_other_ops_acct547_fuel", data=df, units="respondent_id", estimator=None, alpha=0.1) plt.show() sns.lineplot(x="report_year", y="production_supply_acct555_purchased_power", data=df, units="respondent_id", estimator=None, alpha=0.1) plt.show() ``` # Prepare Data for Output ## Select requested utilities * Use Binz' target list to select a subset of the tables/columns. * Add missing & unmapped `utility_id_ferc1` values to the (FERC) target list: - **`eia 22500`** : `ferc1 191, 276` (Westar Energy) - **`eia 13780`** : `ferc1 121` (Northern States Power Company - WI) - **`eia 13809, 13902`** : `ferc1 122` (Northwestern Public Service Co) * Merge in additional utility name/ID fields for readability. 
``` # Grab the EIA/FERC Utility IDs & Names: utilities_eia = pd.read_sql("utilities_eia", pudl_engine) utilities_ferc1 = pd.read_sql("utilities_ferc1", pudl_engine) # Get Binz' list of utilities based on EIA IDs: rbinz_eia_utils = pd.read_csv("rbinz_instructions.csv", index_col="utility_id_eia") # Infer FERC 1 Utility IDs for Binz' targets: utility_id_eia_targets = ( rbinz_eia_utils .merge(utilities_eia, how="left", on="utility_id_eia", suffixes=("_rbinz", "_pudl")) .astype({"utility_id_pudl": 'Int64'}) .dropna(subset=["utility_id_pudl"]) .merge(utilities_ferc1, how="left", on="utility_id_pudl") .astype({"utility_id_ferc1": 'Int64'}) .set_index("utility_id_eia") .dropna(subset=["utility_id_ferc1"]) ) # Add in a few FERC1 IDs that were missing: utility_id_ferc1_targets = set(utility_id_eia_targets.utility_id_ferc1).union({121, 122, 191, 276}) binz_out = {} for k in ferc1_dfs.keys(): binz_out[k]= ( ferc1_dfs[k].reset_index() .rename(columns={"respondent_id": "utility_id_ferc1"}) .query("utility_id_ferc1 in @utility_id_ferc1_targets") .merge(utilities_ferc1, on="utility_id_ferc1") .merge(utilities_eia[["utility_id_pudl", "utility_id_eia"]], on="utility_id_pudl") .set_index(["utility_id_ferc1", "report_year", "utility_name_ferc1", "utility_id_eia", "utility_id_pudl"]) .reset_index() ) print(f"{len(binz_out[k])} records in {k}") all_target_eia_ids = set(utility_id_eia_targets.reset_index().utility_id_eia).union({22500, 13780, 13809, 13902}) binz_out["elec_sales_eia861"] = ( sales_eia861 .merge(utilities_eia, on="utility_id_eia", how="left") .query("utility_id_eia in @all_target_eia_ids") .merge(utilities_ferc1, on="utility_id_pudl", how="left") .set_index(["utility_id_eia", "report_year", "utility_name_eia", "utility_id_pudl", "utility_id_ferc1", "utility_name_ferc1", "state"]) .reset_index() .astype({"utility_id_pudl": "Int64", "utility_id_ferc1": "Int64"}) ) for df in binz_out: print(f"Writing {df}.csv") binz_out[df].to_csv(f"{df}.csv", index=False) ```
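A side note on the EIA 861 cell above: the commented-out block there sketches a reshape of the wide sales table into a long format with one row per customer class. The snippet below is only a sketch of that idea; it assumes the wide `<class>_revenues` / `<class>_sales_mwh` / `<class>_customers` column names produced above and uses `pd.concat` rather than the deprecated `DataFrame.append`.

```
import re

import pandas as pd


def tidy_sales_eia861(wide):
    """Reshape the wide EIA 861 sales table into one row per customer class."""
    id_cols = ["utility_id_eia", "state", "report_year"]
    classes = ["residential", "commercial", "industrial",
               "transportation", "other", "total"]
    frames = []
    for customer_class in classes:
        frames.append(
            wide.set_index(id_cols)
            # keep only this class's columns, strip the class prefix, and tag the rows
            .filter(regex=f"^{customer_class}_")
            .rename(columns=lambda c, cc=customer_class: re.sub(f"^{cc}_", "", c))
            .assign(customer_class=customer_class)
        )
    return pd.concat(frames).reset_index()

# e.g. sales_eia861_tidy = tidy_sales_eia861(sales_eia861)
```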
# Deploy a Trained MXNet Model

In this notebook, we walk through the process of deploying a trained model to a SageMaker endpoint. If you recently ran [the notebook for training](get_started_mnist_train.ipynb) with the `%store` magic, the `model_data` can be restored. Otherwise, we retrieve the model artifact from a public S3 bucket.

```
# setups
import os
import json

import boto3
import sagemaker
from sagemaker.mxnet import MXNetModel
from sagemaker import get_execution_role, Session

sess = Session()
role = get_execution_role()

%store -r mx_mnist_model_data

try:
    mx_mnist_model_data
except NameError:
    import json

    # copy a pretrained model from a public bucket to your default bucket
    with open("code/config.json", "r") as f:
        CONFIG = json.load(f)

    bucket = CONFIG["public_bucket"]
    s3 = boto3.client("s3")
    key = "datasets/image/MNIST/model/mxnet-training-2020-11-21-01-38-01-009/model.tar.gz"
    target = os.path.join("/tmp", "model.tar.gz")
    s3.download_file(bucket, key, target)

    # upload to default bucket
    mx_mnist_model_data = sess.upload_data(
        path=os.path.join("/tmp", "model.tar.gz"),
        bucket=sess.default_bucket(),
        key_prefix="model/mxnet",
    )

print(mx_mnist_model_data)
```

## MXNet Model Object

The `MXNetModel` class allows you to define an environment for making inference using your model artifact. Like the `MXNet` class we discussed [in this notebook for training an MXNet model](get_started_mnist_train.ipynb), it is a high-level API used to set up a docker image for your model hosting service. Once it is properly configured, it can be used to create a SageMaker Endpoint on an EC2 instance. The SageMaker endpoint is a containerized environment that uses your trained model to run inference on incoming data via RESTful API calls.

Some common parameters used to initiate the `MXNetModel` class are:

- entry_point: A user-defined Python file used by the inference container as the handler of incoming requests
- source_dir: The directory of the `entry_point`
- role: An IAM role to make AWS service requests
- model_data: the S3 bucket URI of the compressed model artifact. It can be a path to a local file if the endpoint is to be deployed on the SageMaker instance you are using to run this notebook (local mode)
- framework_version: version of the MXNet package to be used
- py_version: python version to be used

We elaborate on the `entry_point` below.

```
model = MXNetModel(
    entry_point="inference.py",
    source_dir="code",
    role=role,
    model_data=mx_mnist_model_data,
    framework_version="1.7.0",
    py_version="py3",
)
```

### Entry Point for the Inference Image

The model artifact pointed to by `model_data` is pulled by the `MXNetModel`, decompressed, and saved in the docker image it defines. It becomes the set of regular model checkpoint files that you would produce outside SageMaker. This means that in order to use your trained model for serving, you need to tell the `MXNetModel` class how to recover an MXNet model from the static checkpoint. Also, since the deployed endpoint interacts with clients via RESTful API calls, you need to tell it how to parse an incoming request to your model. These two instructions need to be defined as two functions in the Python file pointed to by `entry_point`.

By convention, we name this entry point file `inference.py` and we put it in the `code` directory.

To tell the inference image how to load the model checkpoint, you need to implement a function called `model_fn`. This function takes one positional argument - `model_dir`: the directory of the static model checkpoints in the inference image.
The return of `model_fn` is an MXNet model. In this example, the `model_fn` looks like:

```python
def model_fn(model_dir):
    """Load the gluon model. Called once when hosting service starts.

    :param: model_dir The directory where model files are stored.
    :return: a model (in this case a Gluon network)
    """
    net = gluon.SymbolBlock.imports(
        symbol_file=os.path.join(model_dir, 'compiled-symbol.json'),
        input_names=['data'],
        param_file=os.path.join(model_dir, 'compiled-0000.params'))
    return net
```

Next, you need to tell the hosting service how to handle the incoming data. This includes:

* How to parse the incoming request
* How to use the trained model to make an inference
* How to return the prediction to the caller of the service

You do it by implementing a function called `transform_fn`. This function takes 4 positional arguments:

- `net`: the return from `model_fn`
- `data`: the payload of the incoming request
- `content_type`: the content type of the incoming request
- `accept_type`: the content type of the response

In this example, the `transform_fn` looks like:

```python
def transform_fn(net, data, input_content_type, output_content_type):
    assert input_content_type=='application/json'
    assert output_content_type=='application/json'

    # parsed should be a 1d array of length 784
    parsed = json.loads(data)
    parsed = parsed['inputs']

    # convert to numpy array
    arr = np.array(parsed).reshape(-1, 1, 28, 28)

    # convert to mxnet ndarray
    nda = mx.nd.array(arr)

    output = net(nda)

    prediction = mx.nd.argmax(output, axis=1)
    response_body = json.dumps(prediction.asnumpy().tolist())

    return response_body, output_content_type
```

The `content_type` is used by the function to parse the `data`. In this example, the function requires the content type of the payload to be a json string, and it parses the json string into a Python dictionary with `json.loads`. Moreover, it assumes the parsed dictionary contains a key `inputs` that maps to the input data to be consumed by the model. It also assumes the input data is a flattened 1D array representation that can be reshaped into a numpy array of shape (-1, 1, 28, 28), since the input images of an MXNet model follow the NCHW convention. It also assumes the input data is already normalized and can be readily consumed by the neural network.

After the inference, the function uses `accept_type` to encode the prediction into the content type of the response. In this example, the function requires the caller of the service to accept a json string.

The return of `transform_fn` is always a tuple of the encoded response body and the content type to be accepted by the caller.

## Execute the inference container

Once the `MXNetModel` class is initiated, we can call its `deploy` method to run the container for the hosting service. Some common parameters needed to call the `deploy` method are:

- initial_instance_count: the number of SageMaker instances to be used to run the hosting service.
- instance_type: the type of SageMaker instance to run the hosting service. Set it to `local` if you want to run the hosting service on the local SageMaker instance. Local mode is typically used for debugging.
- serializer: A python callable used to serialize (encode) the request data.
- deserializer: A python callable used to deserialize (decode) the response data.

Commonly used serializers and deserializers are implemented in the `sagemaker.serializers` and `sagemaker.deserializers` submodules of the SageMaker Python SDK.
Since in the `transform_fn` we declared that the incoming requests are json-encoded, we need to use a json serializer to encode the incoming data into a json string. Also, since we declared the return content type to be a json string, we need to use a json deserializer to parse the response (in this case, into an integer representing the predicted hand-written digit).

<span style="color:red"> Note: local mode is not supported in SageMaker Studio </span>

```
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

# set local_mode to False if you want to deploy on a remote
# SageMaker instance
local_mode = False

if local_mode:
    instance_type = "local"
else:
    instance_type = "ml.c4.xlarge"

predictor = model.deploy(
    initial_instance_count=1,
    instance_type=instance_type,
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
)
```

The `predictor` we get above can be used to make prediction requests against a SageMaker endpoint. For more information, check [the api reference for SageMaker Predictor](https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html#sagemaker.predictor.Predictor).

Now, let's test the endpoint with some dummy data.

```
import random

dummy_data = {"inputs": [random.random() for _ in range(784)]}
```

In `transform_fn`, we declared that the parsed data is a Python dictionary with a key `inputs` and its value should be a 1D array of length 784. Hence, the definition of `dummy_data`.

```
res = predictor.predict(dummy_data)
print("Predicted digit:", *map(int, res))
```

If the input data does not look exactly like `dummy_data`, the endpoint will raise an exception. This is because of the stringent way we defined the `transform_fn`. Let's test the following example.

```
dummy_data = [random.random() for _ in range(784)]
```

When the `dummy_data` is parsed in `transform_fn`, it does not have an `inputs` field, so `transform_fn` will crash.

```
# uncomment the following line to make inference on incorrectly formatted input data
# res = predictor.predict(dummy_data)
```

Now, let's use the real MNIST test set to test the endpoint. We use helper functions defined in `code.utils` to download the MNIST data set and normalize the input data.

```
import random
import boto3
import matplotlib.pyplot as plt
import os
import numpy as np
import gzip
import json

%matplotlib inline

# Download MNIST test set from a public bucket
with open("code/config.json", "rb") as f:
    CONFIG = json.load(f)

fname = "t10k-images-idx3-ubyte.gz"
bucket = CONFIG["public_bucket"]
key = "datasets/image/MNIST/" + fname
target = os.path.join("/tmp", fname)

s3 = boto3.client("s3")
if not os.path.exists(target):
    s3.download_file(bucket, key, target)

# parse to numpy
with gzip.open(target, "rb") as f:
    images = np.frombuffer(f.read(), np.uint8, offset=16).reshape(-1, 28, 28)

# randomly sample 16 images to inspect
mask = random.sample(range(images.shape[0]), 16)
samples = images[mask]

# plot the images
fig, axs = plt.subplots(nrows=1, ncols=16, figsize=(16, 1))

for i, splt in enumerate(axs):
    splt.imshow(samples[i])
```

First, let us use the model to infer the samples one-by-one. This is the typical use case for an online application.
```
# convert to float and normalize the input
def normalize(x, axis):
    eps = np.finfo(float).eps

    mean = np.mean(x, axis=axis, keepdims=True)
    # avoid division by zero
    std = np.std(x, axis=axis, keepdims=True) + eps
    return (x - mean) / std

samples = normalize(samples.astype(np.float32), axis=(1, 2))  # mean 0; std 1

res = []
for img in samples:
    data = {"inputs": img.flatten().tolist()}
    res.append(predictor.predict(data)[0])

print("Predictions: ", *map(int, res))
```

Since in `transform_fn` the parsed numpy array can take on any value for its batch dimension, we can send the entire `samples` at once and let the model do a batch inference.

```
data = {"inputs": samples.tolist()}
res = predictor.predict(data)
print("Predictions: ", *map(int, res))
```

## Test and debug the entry point before deployment

When deploying a model to a SageMaker endpoint, it is a good practice to test the entry point. The following snippet shows you how you can test and debug the `model_fn` and `transform_fn` you implemented in the entry point for the inference image.

```
!pygmentize code/test_inference.py
```

The `test` function simulates how the inference container works. It pulls the model artifact and loads the model into memory by calling `model_fn` with `model_dir`. When it receives a request, it calls `transform_fn` with the loaded model, the payload of the request, the request content type, and the response content type. Implementing such a test function helps you debug the entry point before putting it into production. If `test` runs correctly, then you can be certain that if the incoming data and its content type are what they are supposed to be, the endpoint is going to work as expected. (A rough sketch of such a test harness is included at the end of this notebook.)

## (Optional) Clean up

If you do not plan to use the endpoint, you should delete it to free up some compute resources. If you use local mode, you will need to manually delete the docker container bound to port 8080 (the port that listens to the incoming request).

```
import os

if not local_mode:
    predictor.delete_endpoint()
else:
    # detach the inference container from port 8080 (in local mode)
    os.system("docker container ls | grep 8080 | awk '{print $1}' | xargs docker container rm -f")
```
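As referenced in the testing section above, here is a rough sketch of such a local test harness. This is not the actual contents of `code/test_inference.py`; it is only an illustration of the idea, and it assumes that the entry point module is importable (e.g. the `code` directory is on the Python path) and that the model artifact has already been extracted into a local directory.

```python
import json

import numpy as np

# assumes the code/ directory is on the Python path so the entry point can be imported
from inference import model_fn, transform_fn


def test(model_dir="/tmp/model"):
    # model_dir is assumed to contain the decompressed contents of model.tar.gz
    net = model_fn(model_dir)

    # build a request payload the same way the endpoint would receive it
    payload = json.dumps({"inputs": np.zeros((1, 784)).tolist()})
    body, content_type = transform_fn(
        net, payload, "application/json", "application/json"
    )

    assert content_type == "application/json"
    print("predictions:", json.loads(body))
```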
# New Task - Experiment

Fill in the details about the task here.<br>

### **If you have any questions, see the [PlatIAgro tutorials](https://platiagro.github.io/tutorials/).**

## Declaring parameters and hyperparameters

Declare parameters with the button <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAABhWlDQ1BJQ0MgcHJvZmlsZQAAKJF9kT1Iw0AcxV9TtaIVBzuIOASpThb8QhylikWwUNoKrTqYXPohNGlIUlwcBdeCgx+LVQcXZ10dXAVB8APEydFJ0UVK/F9SaBHjwXE/3t173L0DhFqJqWbbGKBqlpGMRcVMdkUMvKID3QhiCOMSM/V4aiENz/F1Dx9f7yI8y/vcn6NHyZkM8InEs0w3LOJ14ulNS+e8TxxiRUkhPiceNeiCxI9cl11+41xwWOCZISOdnCMOEYuFFpZbmBUNlXiKOKyoGuULGZcVzluc1VKFNe7JXxjMacsprtMcRAyLiCMBETIq2EAJFiK0aqSYSNJ+1MM/4PgT5JLJtQFGjnmUoUJy/OB/8LtbMz854SYFo0D7i21/DAOBXaBete3vY9uunwD+Z+BKa/rLNWDmk/RqUwsfAb3bwMV1U5P3gMsdoP9JlwzJkfw0hXweeD+jb8oCfbdA16rbW2Mfpw9AmrpaugEODoGRAmWveby7s7W3f880+vsBocZyukMJsmwAAAAGYktHRAD/AP8A/6C9p5MAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfkBgsMIwnXL7c0AAACDUlEQVQ4y92UP4gTQRTGf29zJxhJZ2NxbMBKziYWlmJ/ile44Nlkd+dIYWFzItiNgoIEtFaTzF5Ac/inE/urtLWxsMqmUOwCEpt1Zmw2xxKi53XitPO9H9978+aDf/3IUQvSNG0450Yi0jXG7C/eB0cFeu9viciGiDyNoqh2KFBrHSilWstgnU7nFLBTgl+ur6/7PwK11kGe5z3n3Hul1MaiuCgKDZwALHA7z/Oe1jpYCtRaB+PxuA8kQM1aW68Kt7e3zwBp6a5b1ibj8bhfhQYVZwMRiQHrvW9nWfaqCrTWPgRWvPdvsiy7IyLXgEJE4slk8nw+T5nDgDbwE9gyxryuwpRSF5xz+0BhrT07HA4/AyRJchUYASvAbhiGaRVWLIMBYq3tAojIszkMoNRulbXtPM8HwV/sXSQi54HvQRDcO0wfhGGYArvAKjAq2wAgiqJj3vsHpbtur9f7Vi2utLx60LLW2hljEuBJOYu9OI6vAzQajRvAaeBLURSPlsBelA+VhWGYaq3dwaZvbm6+m06noYicE5ErrVbrK3AXqHvvd4bD4Ye5No7jSERGwKr3Pms2m0pr7Rb30DWbTQWYcnFvAieBT7PZbFB1V6vVfpQaU4UtDQetdTCZTC557/eA48BlY8zbRZ1SqrW2tvaxCvtt2iRJ0i9/xb4x5uJRwmNlaaaJ3AfqIvKY/+78Av++6uiSZhYMAAAAAElFTkSuQmCC" /> in the toolbar.<br>
The `dataset` parameter identifies the datasets. You can import dataset files with the button <img
src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAABhWlDQ1BJQ0MgcHJvZmlsZQAAKJF9kT1Iw0AcxV9TtaIVBzuIOASpThb8QhylikWwUNoKrTqYXPohNGlIUlwcBdeCgx+LVQcXZ10dXAVB8APEydFJ0UVK/F9SaBHjwXE/3t173L0DhFqJqWbbGKBqlpGMRcVMdkUMvKID3QhiCOMSM/V4aiENz/F1Dx9f7yI8y/vcn6NHyZkM8InEs0w3LOJ14ulNS+e8TxxiRUkhPiceNeiCxI9cl11+41xwWOCZISOdnCMOEYuFFpZbmBUNlXiKOKyoGuULGZcVzluc1VKFNe7JXxjMacsprtMcRAyLiCMBETIq2EAJFiK0aqSYSNJ+1MM/4PgT5JLJtQFGjnmUoUJy/OB/8LtbMz854SYFo0D7i21/DAOBXaBete3vY9uunwD+Z+BKa/rLNWDmk/RqUwsfAb3bwMV1U5P3gMsdoP9JlwzJkfw0hXweeD+jb8oCfbdA16rbW2Mfpw9AmrpaugEODoGRAmWveby7s7W3f880+vsBocZyukMJsmwAAAAGYktHRAD/AP8A/6C9p5MAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfkBgsOBy6ASTeXAAAC/0lEQVQ4y5WUT2gcdRTHP29m99B23Uiq6dZisgoWCxVJW0oL9dqLfyhCvGWY2YUBI95MsXgwFISirQcLhS5hfgk5CF3wJIhFI7aHNsL2VFZFik1jS1qkiZKdTTKZ3/MyDWuz0fQLc/m99/vMvDfv+4RMlUrlkKqeAAaBAWAP8DSgwJ/AXRG5rao/WWsvTU5O3qKLBMD3fSMiPluXFZEPoyj67PGAMzw83PeEMABHVT/oGpiamnoAmCcEWhH5tFsgF4bh9oWFhfeKxeJ5a+0JVT0oImWgBPQCKfAQuAvcBq67rltX1b+6ApMkKRcKhe9V9QLwbavV+qRer692Sx4ZGSnEcXw0TdP3gSrQswGYz+d/S5IkVtXTwOlCoZAGQXAfmAdagAvsAErtdnuXiDy6+023l7qNRsMODg5+CawBzwB9wFPA7mx8ns/KL2Tl3xCRz5eWlkabzebahrHxPG+v4zgnc7ncufHx8Z+Hhoa29fT0lNM03Q30ikiqqg+ttX/EcTy3WTvWgdVqtddaOw/kgXvADHBHROZVNRaRvKruUNU+EdkPfGWM+WJTYOaSt1T1LPDS/4zLWWPMaLVaPWytrYvIaBRFl/4F9H2/JCKvGmMu+76/X0QOqGoZKDmOs1NV28AicMsYc97zvFdc1/0hG6kEeNsY83UnsCwivwM3VfU7YEZE7lhr74tIK8tbnJiYWPY8b6/ruleAXR0ftQy8boyZXi85CIIICDYpc2ZgYODY3NzcHmvt1eyvP64lETkeRdE1yZyixWLx5U2c8q4x5mIQBE1g33/0d3FlZeXFR06ZttZesNZejuO4q1NE5CPgWVV9E3ij47wB1IDlJEn+ljAM86urq7+KyAtZTgqsO0VV247jnOnv7/9xbGzMViqVMVX9uANYj6LonfVtU6vVkjRNj6jqGeCXzGrPAQeA10TkuKpOz87ONrayhnIA2Qo7BZwKw3B7kiRloKSqO13Xja21C47jPNgysFO1Wi0GmtmzQap6DWgD24A1Vb3SGf8Hfstmz1CuXEIAAAAASUVORK5CYII=" /> in the toolbar.

```
dataset = "" #@param {type:"string"}
```

## Accessing the dataset

The dataset used in this step will be the same one loaded through the platform.<br>
The type of the variable returned depends on the source file:
- [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) for CSV and compressed CSV: .csv .csv.zip .csv.gz .csv.bz2 .csv.xz
- [Binary IO stream](https://docs.python.org/3/library/io.html#binary-i-o) for other file types: .jpg .wav .zip .h5 .parquet etc

```
import pandas as pd

data = pd.read_csv(f'/tmp/data/{dataset}')
data
```

## Accessing the dataset metadata

Uses the `stat_dataset` function from the [PlatIAgro SDK](https://platiagro.github.io/sdk/) to load metadata.<br>
For example, CSV files have `metadata['featuretypes']` for each column in the dataset (e.g. categorical, numerical, or datetime).

```
from platiagro import stat_dataset

metadata = stat_dataset(name=dataset)
metadata
```

## Task content

```
# add your code here...
```

## Saving changes to the dataset

The dataset will be saved (and overwritten with the corresponding changes) locally, in the experimentation container, using the `pandas.DataFrame.to_csv` function.<br>

```
df.to_csv(f'/tmp/data/{dataset}', index=False)
```

## Saving metrics

Uses the `save_metrics` function from the [PlatIAgro SDK](https://platiagro.github.io/sdk/) to save metrics.
For example: `accuracy`, `precision`, `r2_score`, `custom_score`, etc.<br>

```
from platiagro import save_metrics

save_metrics(accuracy=0.5, custom_score=1000)
```

## Saving figures

Uses the `save_figures` function from the [PlatIAgro SDK](https://platiagro.github.io/sdk/) to save [matplotlib](https://matplotlib.org/3.2.1/gallery/index.html) figures.

```
from platiagro import save_figures

save_figures(figure=matplotfig)
```

## Saving the model and other artifacts

Uses the `save_model` function from the [PlatIAgro SDK](https://platiagro.github.io/sdk/) to save models and other artifacts.<br>
This function makes these artifacts available to the deployment notebook.

```
from platiagro import save_model

save_model(model=model, other_artifact={"key": "value"})
```
# 12-Web-Scraping-and-Document-Databases Eric Nordstrom ### Setup ``` # dependencies from selenium import webdriver from bs4 import BeautifulSoup as BS import requests import pandas as pd # set up selenium driver driver = webdriver.Firefox() ``` ### NASA Mars News ``` # get html to parse url = "https://mars.nasa.gov/news" driver.get(url) soup = BS(driver.page_source, "html.parser") # parse html item = soup.find('li', class_="slide") date = item.find('div', class_="list_date").text title_a = item.find('div', class_="content_title").a title = title_a.text href = title_a['href'] para = item.find('div', class_="article_teaser_body").text # display results print(date) print(title) print() print(para) print("\nMore:", "https://mars.nasa.gov" + href) ``` ### JPL Mars Space Images - Featured Image ``` # get html to parse url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars" r = requests.get(url) assert(r.status_code == 200) soup = BS(r.text, "html.parser") # parse html button_a = soup.find('a', id="full_image") featured_image_url = "https://www.jpl.nasa.gov" + button_a['data-fancybox-href'] title = button_a['data-title'] desc = button_a['data-description'] # display results print(title, desc, "Image:", sep="\n\n", end=" ") print(featured_image_url) ``` ### Mars Weather ``` # get html to parse url = "https://twitter.com/marswxreport?lang=en" r = requests.get(url) assert(r.status_code == 200) soup = BS(r.text, "html.parser") # parse html # for some reason this shows up as a `span` via the inspector, but something # goes wrong via requests and even selenium. the 'p' tag below was found via # <str.find> on the request html but does not appear via the inspector. p = soup.find('p', class_="tweet-text") mars_weather = p.text.split("pic.twitter.com/")[0] # display results print(mars_weather) ``` ### Mars Facts ``` #get html to parse url = "http://space-facts.com/mars/" r = requests.get(url) assert(r.status_code == 200) # parse html # "HTML table string"? i think just a data frame makes sense? tables = pd.read_html(r.text) # display results tables[0] tables[1] # assign variables mars_facts = tables[0].rename(columns={0: "Property", 1: "Value"}).set_index("Property").to_html() earth_comparison = tables[1].rename(columns={"Mars - Earth Comparison": "Property"}).set_index("Property").to_html() # display results print(mars_facts) print() print(earth_comparison) ``` ### Mars Hemispheres ``` # get html to parse url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars" driver.get(url) # parse html on main page imgs = {} # initially the urls of the images pages, then replaced with actual image urls for a in driver.find_elements_by_tag_name('a'): if a.get_attribute('class') == "itemLink product-item" and a.find_elements_by_tag_name('h3'): imgs[a.text] = a.get_attribute('href') # parse html on each image page for key, value in imgs.items(): driver.get(value) for img in driver.find_elements_by_tag_name('img'): if img.get_attribute('class') == "wide-image": imgs[key] = img.get_attribute('src') break # display results imgs ```
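Since the assignment title mentions document databases, the sketch below shows one way the scraped results could be collected into a single document and stored in a local MongoDB instance. This is only a sketch: it assumes `pymongo` is installed and a MongoDB server is running on the default port, and the `mars_db` database and `mars_data` collection names are placeholders rather than anything defined in the cells above.

```
# store the scraped results in MongoDB (hypothetical database/collection names)
import pymongo

mars_doc = {
    "featured_image_url": featured_image_url,
    "weather": mars_weather,
    "facts_html": mars_facts,
    "earth_comparison_html": earth_comparison,
    "hemisphere_image_urls": [
        {"title": name, "img_url": url} for name, url in imgs.items()
    ],
}

client = pymongo.MongoClient("mongodb://localhost:27017")
db = client.mars_db
db.mars_data.update_one({}, {"$set": mars_doc}, upsert=True)

# close the selenium driver now that scraping is finished
driver.quit()
```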
``` %matplotlib inline ``` PyTorch 1.0 Distributed Trainer with Amazon AWS =============================================== **Author**: `Nathan Inkawhich <https://github.com/inkawhich>`_ **Edited by**: `Teng Li <https://github.com/teng-li>`_ In this tutorial we will show how to setup, code, and run a PyTorch 1.0 distributed trainer across two multi-gpu Amazon AWS nodes. We will start with describing the AWS setup, then the PyTorch environment configuration, and finally the code for the distributed trainer. Hopefully you will find that there is actually very little code change required to extend your current training code to a distributed application, and most of the work is in the one-time environment setup. Amazon AWS Setup ---------------- In this tutorial we will run distributed training across two multi-gpu nodes. In this section we will first cover how to create the nodes, then how to setup the security group so the nodes can communicate with eachother. Creating the Nodes ~~~~~~~~~~~~~~~~~~ In Amazon AWS, there are seven steps to creating an instance. To get started, login and select **Launch Instance**. **Step 1: Choose an Amazon Machine Image (AMI)** - Here we will select the ``Deep Learning AMI (Ubuntu) Version 14.0``. As described, this instance comes with many of the most popular deep learning frameworks installed and is preconfigured with CUDA, cuDNN, and NCCL. It is a very good starting point for this tutorial. **Step 2: Choose an Instance Type** - Now, select the GPU compute unit called ``p2.8xlarge``. Notice, each of these instances has a different cost but this instance provides 8 NVIDIA Tesla K80 GPUs per node, and provides a good architecture for multi-gpu distributed training. **Step 3: Configure Instance Details** - The only setting to change here is increasing the *Number of instances* to 2. All other configurations may be left at default. **Step 4: Add Storage** - Notice, by default these nodes do not come with a lot of storage (only 75 GB). For this tutorial, since we are only using the STL-10 dataset, this is plenty of storage. But, if you want to train on a larger dataset such as ImageNet, you will have to add much more storage just to fit the dataset and any trained models you wish to save. **Step 5: Add Tags** - Nothing to be done here, just move on. **Step 6: Configure Security Group** - This is a critical step in the configuration process. By default two nodes in the same security group would not be able to communicate in the distributed training setting. Here, we want to create a **new** security group for the two nodes to be in. However, we cannot finish configuring in this step. For now, just remember your new security group name (e.g. launch-wizard-12) then move on to Step 7. **Step 7: Review Instance Launch** - Here, review the instance then launch it. By default, this will automatically start initializing the two instances. You can monitor the initialization progress from the dashboard. Configure Security Group ~~~~~~~~~~~~~~~~~~~~~~~~ Recall that we were not able to properly configure the security group when creating the instances. Once you have launched the instance, select the *Network & Security > Security Groups* tab in the EC2 dashboard. This will bring up a list of security groups you have access to. Select the new security group you created in Step 6 (i.e. launch-wizard-12), which will bring up tabs called *Description, Inbound, Outbound, and Tags*. 
First, select the *Inbound* tab and *Edit* to add a rule to allow "All Traffic" from "Sources" in the launch-wizard-12 security group. Then select the *Outbound* tab and do the exact same thing. Now, we have effectively allowed all Inbound and Outbound traffic of all types between nodes in the launch-wizard-12 security group. Necessary Information ~~~~~~~~~~~~~~~~~~~~~ Before continuing, we must find and remember the IP addresses of both nodes. In the EC2 dashboard find your running instances. For both instances, write down the *IPv4 Public IP* and the *Private IPs*. For the remainder of the document, we will refer to these as the **node0-publicIP**, **node0-privateIP**, **node1-publicIP**, and **node1-privateIP**. The public IPs are the addresses we will use to SSH in, and the private IPs will be used for inter-node communication. Environment Setup ----------------- The next critical step is the setup of each node. Unfortunately, we cannot configure both nodes at the same time, so this process must be done on each node separately. However, this is a one time setup, so once you have the nodes configured properly you will not have to reconfigure for future distributed training projects. The first step, once logged onto the node, is to create a new conda environment with python 3.6 and numpy. Once created activate the environment. :: $ conda create -n nightly_pt python=3.6 numpy $ source activate nightly_pt Next, we will install a nightly build of Cuda 9.0 enabled PyTorch with pip in the conda environment. :: $ pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu90/torch_nightly.html We must also install torchvision so we can use the torchvision model and dataset. At this time, we must build torchvision from source as the pip installation will by default install an old version of PyTorch on top of the nightly build we just installed. :: $ cd $ git clone https://github.com/pytorch/vision.git $ cd vision $ python setup.py install And finally, **VERY IMPORTANT** step is to set the network interface name for the NCCL socket. This is set with the environment variable ``NCCL_SOCKET_IFNAME``. To get the correct name, run the ``ifconfig`` command on the node and look at the interface name that corresponds to the node's *privateIP* (e.g. ens3). Then set the environment variable as :: $ export NCCL_SOCKET_IFNAME=ens3 Remember, do this on both nodes. You may also consider adding the NCCL\_SOCKET\_IFNAME setting to your *.bashrc*. An important observation is that we did not setup a shared filesystem between the nodes. Therefore, each node will have to have a copy of the code and a copy of the datasets. For more information about setting up a shared network filesystem between nodes, see `here <https://aws.amazon.com/blogs/aws/amazon-elastic-file-system-shared-file-storage-for-amazon-ec2/>`__. Distributed Training Code ------------------------- With the instances running and the environments setup we can now get into the training code. Most of the code here has been taken from the `PyTorch ImageNet Example <https://github.com/pytorch/examples/tree/master/imagenet>`__ which also supports distributed training. This code provides a good starting point for a custom trainer as it has much of the boilerplate training loop, validation loop, and accuracy tracking functionality. However, you will notice that the argument parsing and other non-essential functions have been stripped out for simplicity. 
In this example we will use `torchvision.models.resnet18 <https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.resnet18>`__ model and will train it on the `torchvision.datasets.STL10 <https://pytorch.org/docs/stable/torchvision/datasets.html#torchvision.datasets.STL10>`__ dataset. To accomodate for the dimensionality mismatch of STL-10 with Resnet18, we will resize each image to 224x224 with a transform. Notice, the choice of model and dataset are orthogonal to the distributed training code, you may use any dataset and model you wish and the process is the same. Lets get started by first handling the imports and talking about some helper functions. Then we will define the train and test functions, which have been largely taken from the ImageNet Example. At the end, we will build the main part of the code which handles the distributed training setup. And finally, we will discuss how to actually run the code. Imports ~~~~~~~ The important distributed training specific imports here are `torch.nn.parallel <https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel>`__, `torch.distributed <https://pytorch.org/docs/stable/distributed.html>`__, `torch.utils.data.distributed <https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler>`__, and `torch.multiprocessing <https://pytorch.org/docs/stable/multiprocessing.html>`__. It is also important to set the multiprocessing start method to *spawn* or *forkserver* (only supported in Python 3), as the default is *fork* which may cause deadlocks when using multiple worker processes for dataloading. ``` import time import sys import torch if __name__ == '__main__': torch.multiprocessing.set_start_method('spawn') import torch.nn as nn import torch.nn.parallel import torch.distributed as dist import torch.optim import torch.utils.data import torch.utils.data.distributed import torchvision.transforms as transforms import torchvision.datasets as datasets import torchvision.models as models from torch.multiprocessing import Pool, Process ``` Helper Functions ~~~~~~~~~~~~~~~~ We must also define some helper functions and classes that will make training easier. The ``AverageMeter`` class tracks training statistics like accuracy and iteration count. The ``accuracy`` function computes and returns the top-k accuracy of the model so we can track learning progress. Both are provided for training convenience but neither are distributed training specific. ``` class AverageMeter(object): """Computes and stores the average and current value""" def __init__(self): self.reset() def reset(self): self.val = 0 self.avg = 0 self.sum = 0 self.count = 0 def update(self, val, n=1): self.val = val self.sum += val * n self.count += n self.avg = self.sum / self.count def accuracy(output, target, topk=(1,)): """Computes the precision@k for the specified values of k""" with torch.no_grad(): maxk = max(topk) batch_size = target.size(0) _, pred = output.topk(maxk, 1, True, True) pred = pred.t() correct = pred.eq(target.view(1, -1).expand_as(pred)) res = [] for k in topk: correct_k = correct[:k].view(-1).float().sum(0, keepdim=True) res.append(correct_k.mul_(100.0 / batch_size)) return res ``` Train Functions ~~~~~~~~~~~~~~~ To simplify the main loop, it is best to separate a training epoch step into a function called ``train``. This function trains the input model for one epoch of the *train\_loader*. 
The only distributed training artifact in this function is setting the `non\_blocking <https://pytorch.org/docs/stable/notes/cuda.html#use-pinned-memory-buffers>`__ attributes of the data and label tensors to ``True`` before the forward pass. This allows asynchronous GPU copies of the data meaning transfers can be overlapped with computation. This function also outputs training statistics along the way so we can track progress throughout the epoch. The other function to define here is ``adjust_learning_rate``, which decays the initial learning rate at a fixed schedule. This is another boilerplate trainer function that is useful to train accurate models. ``` def train(train_loader, model, criterion, optimizer, epoch): batch_time = AverageMeter() data_time = AverageMeter() losses = AverageMeter() top1 = AverageMeter() top5 = AverageMeter() # switch to train mode model.train() end = time.time() for i, (input, target) in enumerate(train_loader): # measure data loading time data_time.update(time.time() - end) # Create non_blocking tensors for distributed training input = input.cuda(non_blocking=True) target = target.cuda(non_blocking=True) # compute output output = model(input) loss = criterion(output, target) # measure accuracy and record loss prec1, prec5 = accuracy(output, target, topk=(1, 5)) losses.update(loss.item(), input.size(0)) top1.update(prec1[0], input.size(0)) top5.update(prec5[0], input.size(0)) # compute gradients in a backward pass optimizer.zero_grad() loss.backward() # Call step of optimizer to update model params optimizer.step() # measure elapsed time batch_time.update(time.time() - end) end = time.time() if i % 10 == 0: print('Epoch: [{0}][{1}/{2}]\t' 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' 'Data {data_time.val:.3f} ({data_time.avg:.3f})\t' 'Loss {loss.val:.4f} ({loss.avg:.4f})\t' 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t' 'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format( epoch, i, len(train_loader), batch_time=batch_time, data_time=data_time, loss=losses, top1=top1, top5=top5)) def adjust_learning_rate(initial_lr, optimizer, epoch): """Sets the learning rate to the initial LR decayed by 10 every 30 epochs""" lr = initial_lr * (0.1 ** (epoch // 30)) for param_group in optimizer.param_groups: param_group['lr'] = lr ``` Validation Function ~~~~~~~~~~~~~~~~~~~ To track generalization performance and simplify the main loop further we can also extract the validation step into a function called ``validate``. This function runs a full validation step of the input model on the input validation dataloader and returns the top-1 accuracy of the model on the validation set. Again, you will notice the only distributed training feature here is setting ``non_blocking=True`` for the training data and labels before they are passed to the model. 
``` def validate(val_loader, model, criterion): batch_time = AverageMeter() losses = AverageMeter() top1 = AverageMeter() top5 = AverageMeter() # switch to evaluate mode model.eval() with torch.no_grad(): end = time.time() for i, (input, target) in enumerate(val_loader): input = input.cuda(non_blocking=True) target = target.cuda(non_blocking=True) # compute output output = model(input) loss = criterion(output, target) # measure accuracy and record loss prec1, prec5 = accuracy(output, target, topk=(1, 5)) losses.update(loss.item(), input.size(0)) top1.update(prec1[0], input.size(0)) top5.update(prec5[0], input.size(0)) # measure elapsed time batch_time.update(time.time() - end) end = time.time() if i % 100 == 0: print('Test: [{0}/{1}]\t' 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' 'Loss {loss.val:.4f} ({loss.avg:.4f})\t' 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t' 'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format( i, len(val_loader), batch_time=batch_time, loss=losses, top1=top1, top5=top5)) print(' * Prec@1 {top1.avg:.3f} Prec@5 {top5.avg:.3f}' .format(top1=top1, top5=top5)) return top1.avg ``` Inputs ~~~~~~ With the helper functions out of the way, now we have reached the interesting part. Here is where we will define the inputs for the run. Some of the inputs are standard model training inputs such as batch size and number of training epochs, and some are specific to our distributed training task. The required inputs are: - **batch\_size** - batch size for *each* process in the distributed training group. Total batch size across distributed model is batch\_size\*world\_size - **workers** - number of worker processes used with the dataloaders in each process - **num\_epochs** - total number of epochs to train for - **starting\_lr** - starting learning rate for training - **world\_size** - number of processes in the distributed training environment - **dist\_backend** - backend to use for distributed training communication (i.e. NCCL, Gloo, MPI, etc.). In this tutorial, since we are using several multi-gpu nodes, NCCL is suggested. - **dist\_url** - URL to specify the initialization method of the process group. This may contain the IP address and port of the rank0 process or be a non-existant file on a shared file system. Here, since we do not have a shared file system this will incorporate the **node0-privateIP** and the port on node0 to use. ``` print("Collect Inputs...") # Batch Size for training and testing batch_size = 32 # Number of additional worker processes for dataloading workers = 2 # Number of epochs to train for num_epochs = 2 # Starting Learning Rate starting_lr = 0.1 # Number of distributed processes world_size = 4 # Distributed backend type dist_backend = 'nccl' # Url used to setup distributed training dist_url = "tcp://172.31.22.234:23456" ``` Initialize process group ~~~~~~~~~~~~~~~~~~~~~~~~ One of the most important parts of distributed training in PyTorch is to properly setup the process group, which is the **first** step in initializing the ``torch.distributed`` package. To do this, we will use the ``torch.distributed.init_process_group`` function which takes several inputs. First, a *backend* input which specifies the backend to use (i.e. NCCL, Gloo, MPI, etc.). An *init\_method* input which is either a url containing the address and port of the rank0 machine or a path to a non-existant file on the shared file system. 
Note, to use the file init\_method, all machines must have access to the file, similarly for the url method, all machines must be able to communicate on the network so make sure to configure any firewalls and network settings to accomodate. The *init\_process\_group* function also takes *rank* and *world\_size* arguments which specify the rank of this process when run and the number of processes in the collective, respectively. The *init\_method* input can also be "env://". In this case, the address and port of the rank0 machine will be read from the following two environment variables respectively: MASTER_ADDR, MASTER_PORT. If *rank* and *world\_size* arguments are not specified in the *init\_process\_group* function, they both can be read from the following two environment variables respectively as well: RANK, WORLD_SIZE. Another important step, especially when each node has multiple gpus is to set the *local\_rank* of this process. For example, if you have two nodes, each with 8 GPUs and you wish to train with all of them then $world\_size=16$ and each node will have a process with local rank 0-7. This local\_rank is used to set the device (i.e. which GPU to use) for the process and later used to set the device when creating a distributed data parallel model. It is also recommended to use NCCL backend in this hypothetical environment as NCCL is preferred for multi-gpu nodes. ``` print("Initialize Process Group...") # Initialize Process Group # v1 - init with url dist.init_process_group(backend=dist_backend, init_method=dist_url, rank=int(sys.argv[1]), world_size=world_size) # v2 - init with file # dist.init_process_group(backend="nccl", init_method="file:///home/ubuntu/pt-distributed-tutorial/trainfile", rank=int(sys.argv[1]), world_size=world_size) # v3 - init with environment variables # dist.init_process_group(backend="nccl", init_method="env://", rank=int(sys.argv[1]), world_size=world_size) # Establish Local Rank and set device on this node local_rank = int(sys.argv[2]) dp_device_ids = [local_rank] torch.cuda.set_device(local_rank) ``` Initialize Model ~~~~~~~~~~~~~~~~ The next major step is to initialize the model to be trained. Here, we will use a resnet18 model from ``torchvision.models`` but any model may be used. First, we initialize the model and place it in GPU memory. Next, we make the model ``DistributedDataParallel``, which handles the distribution of the data to and from the model and is critical for distributed training. The ``DistributedDataParallel`` module also handles the averaging of gradients across the world, so we do not have to explicitly average the gradients in the training step. It is important to note that this is a blocking function, meaning program execution will wait at this function until *world\_size* processes have joined the process group. Also, notice we pass our device ids list as a parameter which contains the local rank (i.e. GPU) we are using. Finally, we specify the loss function and optimizer to train the model with. 
``` print("Initialize Model...") # Construct Model model = models.resnet18(pretrained=False).cuda() # Make model DistributedDataParallel model = torch.nn.parallel.DistributedDataParallel(model, device_ids=dp_device_ids, output_device=local_rank) # define loss function (criterion) and optimizer criterion = nn.CrossEntropyLoss().cuda() optimizer = torch.optim.SGD(model.parameters(), starting_lr, momentum=0.9, weight_decay=1e-4) ``` Initialize Dataloaders ~~~~~~~~~~~~~~~~~~~~~~ The last step in preparation for the training is to specify which dataset to use. Here we use the `STL-10 dataset <https://cs.stanford.edu/~acoates/stl10/>`__ from `torchvision.datasets.STL10 <https://pytorch.org/docs/stable/torchvision/datasets.html#torchvision.datasets.STL10>`__. The STL10 dataset is a 10 class dataset of 96x96px color images. For use with our model, we resize the images to 224x224px in the transform. One distributed training specific item in this section is the use of the ``DistributedSampler`` for the training set, which is designed to be used in conjunction with ``DistributedDataParallel`` models. This object handles the partitioning of the dataset across the distributed environment so that not all models are training on the same subset of data, which would be counterproductive. Finally, we create the ``DataLoader``'s which are responsible for feeding the data to the processes. The STL-10 dataset will automatically download on the nodes if they are not present. If you wish to use your own dataset you should download the data, write your own dataset handler, and construct a dataloader for your dataset here. ``` print("Initialize Dataloaders...") # Define the transform for the data. Notice, we must resize to 224x224 with this dataset and model. transform = transforms.Compose( [transforms.Resize(224), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) # Initialize Datasets. STL10 will automatically download if not present trainset = datasets.STL10(root='./data', split='train', download=True, transform=transform) valset = datasets.STL10(root='./data', split='test', download=True, transform=transform) # Create DistributedSampler to handle distributing the dataset across nodes when training # This can only be called after torch.distributed.init_process_group is called train_sampler = torch.utils.data.distributed.DistributedSampler(trainset) # Create the Dataloaders to feed data to the training and validation steps train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=(train_sampler is None), num_workers=workers, pin_memory=False, sampler=train_sampler) val_loader = torch.utils.data.DataLoader(valset, batch_size=batch_size, shuffle=False, num_workers=workers, pin_memory=False) ``` Training Loop ~~~~~~~~~~~~~ The last step is to define the training loop. We have already done most of the work for setting up the distributed training so this is not distributed training specific. The only detail is setting the current epoch count in the ``DistributedSampler``, as the sampler shuffles the data going to each process deterministically based on epoch. After updating the sampler, the loop runs a full training epoch, runs a full validation step then prints the performance of the current model against the best performing model so far. After training for num\_epochs, the loop exits and the tutorial is complete. 
Notice, since this is an exercise we are not saving models, but one may wish to keep track of the best performing model and then save it at the end of training (see `here <https://github.com/pytorch/examples/blob/master/imagenet/main.py#L184>`__).

```
best_prec1 = 0

for epoch in range(num_epochs):
    # Set epoch count for DistributedSampler
    train_sampler.set_epoch(epoch)

    # Adjust learning rate according to schedule
    adjust_learning_rate(starting_lr, optimizer, epoch)

    # train for one epoch
    print("\nBegin Training Epoch {}".format(epoch+1))
    train(train_loader, model, criterion, optimizer, epoch)

    # evaluate on validation set
    print("Begin Validation @ Epoch {}".format(epoch+1))
    prec1 = validate(val_loader, model, criterion)

    # remember best prec@1 and save checkpoint if desired
    # is_best = prec1 > best_prec1
    best_prec1 = max(prec1, best_prec1)

    print("Epoch Summary: ")
    print("\tEpoch Accuracy: {}".format(prec1))
    print("\tBest Accuracy: {}".format(best_prec1))
```

Running the Code
----------------

Unlike most of the other PyTorch tutorials, this code may not be run directly out of this notebook. To run, download the .py version of this file (or convert it using `this <https://gist.github.com/chsasank/7218ca16f8d022e02a9c0deb94a310fe>`__) and upload a copy to both nodes. The astute reader would have noticed that we hardcoded the **node0-privateIP** and $world\_size=4$ but take the *rank* and *local\_rank* inputs as arg[1] and arg[2] command line arguments, respectively. Once uploaded, open two ssh terminals into each node.

- On the first terminal for node0, run ``$ python main.py 0 0``

- On the second terminal for node0 run ``$ python main.py 1 1``

- On the first terminal for node1, run ``$ python main.py 2 0``

- On the second terminal for node1 run ``$ python main.py 3 1``

The programs will start and wait after printing "Initialize Model..." for all four processes to join the process group. Notice the first argument is not repeated as this is the unique global rank of the process. The second argument is repeated as that is the local rank of the process running on the node. If you run ``nvidia-smi`` on each node, you will see two processes on each node, one running on GPU0 and one on GPU1.

We have now completed the distributed training example! Hopefully you can see how you would use this tutorial to help train your own models on your own datasets, even if you are not using the exact same distributed environment. If you are using AWS, don't forget to **SHUT DOWN YOUR NODES** if you are not using them or you may find an uncomfortably large bill at the end of the month.

**Where to go next**

- Check out the `launcher utility <https://pytorch.org/docs/stable/distributed.html#launch-utility>`__ for a different way of kicking off the run

- Check out the `torch.multiprocessing.spawn utility <https://pytorch.org/docs/master/multiprocessing.html#spawning-subprocesses>`__ for another easy way of kicking off multiple distributed processes. The `PyTorch ImageNet Example <https://github.com/pytorch/examples/tree/master/imagenet>`__ has it implemented and can demonstrate how to use it; a brief sketch of this approach is also shown below.

- If possible, set up an NFS so you only need one copy of the dataset
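As a minimal sketch of the ``torch.multiprocessing.spawn`` option mentioned above (this is not part of the tutorial's original code), one process per GPU can be launched on each node from a single script, instead of starting each rank from a separate terminal. The ``main_worker`` function below is assumed to wrap the model, dataloader, and training-loop code from this tutorial, and ``NODE_RANK`` is an environment variable you would set to 0 on node0 and 1 on node1; the same two-node, ``world_size = 4`` setup as above is assumed.

```
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp

PROCS_PER_NODE = 2   # two processes (GPUs) per node, so world_size = 4 across two nodes
WORLD_SIZE = 4


def main_worker(local_rank, node_rank):
    global_rank = node_rank * PROCS_PER_NODE + local_rank
    dist.init_process_group(
        backend="nccl",
        init_method="env://",  # reads MASTER_ADDR / MASTER_PORT from the environment
        rank=global_rank,
        world_size=WORLD_SIZE,
    )
    torch.cuda.set_device(local_rank)
    # ... build the model, wrap it in DistributedDataParallel with
    # device_ids=[local_rank], create the samplers/dataloaders, and run the
    # same training loop as above ...


if __name__ == "__main__":
    os.environ.setdefault("MASTER_ADDR", "172.31.22.234")  # node0-privateIP
    os.environ.setdefault("MASTER_PORT", "23456")
    node_rank = int(os.environ.get("NODE_RANK", "0"))  # 0 on node0, 1 on node1
    # spawn calls main_worker(local_rank, node_rank) once per local process
    mp.spawn(main_worker, args=(node_rank,), nprocs=PROCS_PER_NODE)
```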
``` # Copyright 2021 Google LLC # Use of this source code is governed by an MIT-style # license that can be found in the LICENSE file or at # https://opensource.org/licenses/MIT. # Notebook authors: Kevin P. Murphy ([email protected]) # and Mahmoud Soliman ([email protected]) # This notebook reproduces figures for chapter 1 from the book # "Probabilistic Machine Learning: An Introduction" # by Kevin Murphy (MIT Press, 2021). # Book pdf is available from http://probml.ai ``` <a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a> <a href="https://colab.research.google.com/github/probml/pml-book/blob/main/pml1/figure_notebooks/chapter1_introduction_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Figure 1.1:<a name='1.1'></a> <a name='iris'></a> Three types of Iris flowers: Setosa, Versicolor and Virginica. Used with kind permission of Dennis Kramb and SIGNA ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.1_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.1_B.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.1_C.png" width="256"/> ## Figure 1.2:<a name='1.2'></a> <a name='cat'></a> Illustration of the image classification problem. From https://cs231n.github.io/ . Used with kind permission of Andrej Karpathy ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.2.png" width="256"/> ## Figure 1.3:<a name='1.3'></a> <a name='irisPairs'></a> Visualization of the Iris data as a pairwise scatter plot. On the diagonal we plot the marginal distribution of each feature for each class. The off-diagonals contain scatterplots of all possible pairs of features. 
Figure(s) generated by [iris_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n iris_plot.py ``` ## Figure 1.4:<a name='1.4'></a> <a name='dtreeIrisDepth2'></a> Example of a decision tree of depth 2 applied to the Iris data, using just the petal length and petal width features. Leaf nodes are color coded according to the predicted class. The number of training samples that pass from the root to a node is shown inside each box; we show how many values of each class fall into this node. This vector of counts can be normalized to get a distribution over class labels for each node. We can then pick the majority class. Adapted from Figures 6.1 and 6.2 of <a href='#Geron2019'>[Aur19]</a> . To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/master/notebooks/iris_dtree.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.4_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.4_B.png" width="256"/> ## Figure 1.5:<a name='1.5'></a> <a name='linreg'></a> (a) Linear regression on some 1d data. (b) The vertical lines denote the residuals between the observed output value for each input (blue circle) and its predicted value (red cross). The goal of least squares regression is to pick a line that minimizes the sum of squared residuals. Figure(s) generated by [linreg_residuals_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_residuals_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n linreg_residuals_plot.py ``` ## Figure 1.6:<a name='1.6'></a> <a name='polyfit2d'></a> Linear and polynomial regression applied to 2d data. Vertical axis is temperature, horizontal axes are location within a room. 
Data was collected by some remote sensing motes at Intel's lab in Berkeley, CA (data courtesy of Romain Thibaux). (a) The fitted plane has the form $ f ( \bm x ) = w_0 + w_1 x_1 + w_2 x_2$. (b) Temperature data is fitted with a quadratic of the form $ f ( \bm x ) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_1^2 + w_4 x_2^2$. Figure(s) generated by [linreg_2d_surface_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_2d_surface_demo.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n linreg_2d_surface_demo.py ``` ## Figure 1.7:<a name='1.7'></a> <a name='linregPoly'></a> (a-c) Polynomials of degrees 2, 14 and 20 fit to 21 datapoints (the same data as in \cref fig:linreg ). (d) MSE vs degree. Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n linreg_poly_vs_degree.py ``` ## Figure 1.8:<a name='1.8'></a> <a name='eqn:irisClustering'></a> (a) A scatterplot of the petal features from the iris dataset. (b) The result of unsupervised clustering using $K=3$. Figure(s) generated by [iris_kmeans.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_kmeans.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n iris_kmeans.py ``` ## Figure 1.9:<a name='1.9'></a> <a name='pcaDemo'></a> (a) Scatterplot of iris data (first 3 features). Points are color coded by class. (b) We fit a 2d linear subspace to the 3d data using PCA. The class labels are ignored. Red dots are the original data, black dots are points generated from the model using $ \bm x = \mathbf W \bm z + \bm \mu $, where $ \bm z $ are latent points on the underlying inferred 2d linear manifold. 
Figure(s) generated by [iris_pca.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_pca.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n iris_pca.py ``` ## Figure 1.10:<a name='1.10'></a> <a name='humanoid'></a> Examples of some control problems. (a) Space Invaders Atari game. From https://gym.openai.com/envs/SpaceInvaders-v0/ . (b) Controlling a humanoid robot in the MuJuCo simulator so it walks as fast as possible without falling over. From https://gym.openai.com/envs/Humanoid-v2/ ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.10_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.10_B.png" width="256"/> ## Figure 1.11:<a name='1.11'></a> <a name='cake'></a> The three types of machine learning visualized as layers of a chocolate cake. This figure (originally from https://bit.ly/2m65Vs1 ) was used in a talk by Yann LeCun at NIPS'16, and is used with his kind permission ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.11.png" width="256"/> ## Figure 1.12:<a name='1.12'></a> <a name='emnist'></a> (a) Visualization of the MNIST dataset. Each image is $28 \times 28$. There are 60k training examples and 10k test examples. We show the first 25 images from the training set. 
Figure(s) generated by [mnist_viz_tf.py](https://github.com/probml/pyprobml/blob/master/scripts/mnist_viz_tf.py) [emnist_viz_pytorch.py](https://github.com/probml/pyprobml/blob/master/scripts/emnist_viz_pytorch.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n mnist_viz_tf.py try_deimport() %run -n emnist_viz_pytorch.py ``` ## Figure 1.13:<a name='1.13'></a> <a name='CIFAR'></a> (a) Visualization of the Fashion-MNIST dataset <a href='#fashion'>[XRV17]</a> . The dataset has the same size as MNIST, but is harder to classify. There are 10 classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle-boot. We show the first 25 images from the training set. Figure(s) generated by [fashion_viz_tf.py](https://github.com/probml/pyprobml/blob/master/scripts/fashion_viz_tf.py) [cifar_viz_tf.py](https://github.com/probml/pyprobml/blob/master/scripts/cifar_viz_tf.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n fashion_viz_tf.py try_deimport() %run -n cifar_viz_tf.py ``` ## Figure 1.14:<a name='1.14'></a> <a name='imagenetError'></a> (a) Sample images from the \bf ImageNet dataset <a href='#ILSVRC15'>[Rus+15]</a> . This subset consists of 1.3M color training images, each of which is $256 \times 256$ pixels in size. There are 1000 possible labels, one per image, and the task is to minimize the top-5 error rate, i.e., to ensure the correct label is within the 5 most probable predictions. Below each image we show the true label, and a distribution over the top 5 predicted labels. If the true label is in the top 5, its probability bar is colored red. Predictions are generated by a convolutional neural network (CNN) called ``AlexNet'' (\cref sec:alexNet ). From Figure 4 of <a href='#Krizhevsky12'>[KSH12]</a> . Used with kind permission of Alex Krizhevsky. (b) Misclassification rate (top 5) on the ImageNet competition over time. 
Used with kind permission of Andrej Karpathy ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.14_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.14_B.png" width="256"/> ## Figure 1.15:<a name='1.15'></a> <a name='termDoc'></a> Example of a term-document matrix, where raw counts have been replaced by their TF-IDF values (see \cref sec:tfidf ). Darker cells are larger values. From https://bit.ly/2kByLQI . Used with kind permission of Christoph Carl Kling ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.15.png" width="256"/> ## References: <a name='Geron2019'>[Aur19]</a> G. Aur'elien "Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for BuildingIntelligent Systems (2nd edition)". (2019). <a name='Krizhevsky12'>[KSH12]</a> A. Krizhevsky, I. Sutskever and G. Hinton. "Imagenet classification with deep convolutional neural networks". (2012). <a name='ILSVRC15'>[Rus+15]</a> O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg and L. Fei-Fei. "ImageNet Large Scale Visual Recognition Challenge". In: ijcv (2015). <a name='fashion'>[XRV17]</a> H. Xiao, K. Rasul and R. Vollgraf. "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms". abs/1708.07747 (2017). arXiv: 1708.07747
# Common Workflow Language with BioExcel Building Blocks ### Based on the Protein MD Setup tutorial using BioExcel Building Blocks (biobb) *** This tutorial aims to illustrate the process of **building up a CWL workflow** using the **BioExcel Building Blocks library (biobb)**. The tutorial is based on the **Protein Gromacs MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup). *** **Biobb modules** used: - [biobb_io](https://github.com/bioexcel/biobb_io): Tools to fetch biomolecular data from public databases. - [biobb_model](https://github.com/bioexcel/biobb_model): Tools to model macromolecular structures. - [biobb_md](https://github.com/bioexcel/biobb_md): Tools to setup and run Molecular Dynamics simulations. - [biobb_analysis](https://github.com/bioexcel/biobb_analysis): Tools to analyse Molecular Dynamics trajectories. **Software requirements**: - [cwltool](https://github.com/common-workflow-language/cwltool): Common Workflow Language tool description reference implementation. - [docker](https://www.docker.com/): Docker container platform. *** ### Tutorial Sections: 1. [CWL workflows: Brief Introduction](#intro) 2. [BioExcel building blocks TOOLS CWL Descriptions](#tools) * [Tool Building Block CWL Sections](#toolcwl) * [Complete Pdb Building Block CWL description](#pdbcwl) 3. [BioExcel building blocks WORKFLOWS CWL Descriptions](#workflows) * [Header](#cwlheader) * [Inputs](#inputs) * [Outputs](#outputs) * [Steps](#steps) * [Input of a Run](#run) * [Complete Workflow](#wf) * [Running the CWL workflow](#runwf) * [Cwltool workflow output](#wfoutput) 4. [Protein MD-Setup CWL workflow with BioExcel building blocks](#mdsetup) * [Steps](#mdsteps) * [Inputs](#mdinputs) * [Outputs](#mdoutputs) * [Complete Workflow](#mdworkflow) * [Input of a Run](#mdrun) * [Running the CWL workflow](#mdcwlrun) 5. [Questions & Comments](#questions) *** <img src="logo.png" /> *** <a id="intro"></a> ## CWL workflows: Brief Introduction The **Common Workflow Language (CWL)** is an open standard for describing analysis **workflows and tools** in a way that makes them **portable and scalable** across a variety of software and hardware environments, from workstations to cluster, cloud, and high performance computing (HPC) environments. **CWL** is a community-led specification to express **portable workflow and tool descriptions**, which can be executed by **multiple leading workflow engine implementations**. Unlike previous standardisation attempts, CWL has taken a pragmatic approach and focused on what most workflow systems are able to do: Execute command line tools and pass files around in a top-to-bottom pipeline. At the heart of CWL workflows are the **tool descriptions**. A command line is described, with parameters, input and output files, in a **YAML format** so they can be shared across workflows and linked to from registries like **ELIXIR’s bio.tools**. These are then combined and wired together in a **second YAML file** to form a workflow template, which can be **executed on any of the supported implementations**, repeatedly and **on different platforms** by specifying input files and workflow parameters. The [CWL User Guide](https://www.commonwl.org/user_guide/index.html) gives a gentle introduction to the language, while the more detailed [CWL specifications](https://www.commonwl.org/v1.1/) formalize CWL concepts so they can be implemented by the different workflow systems. 
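Before looking at the **BioExcel building blocks** themselves, the next cell shows what a minimal, generic **tool description** looks like. This is only an illustrative sketch (it is not part of the **biobb** library; the `echo` command, the parameter name and the output file name are arbitrary choices): it wraps the Unix `echo` command, exposes a single **string input** that is placed on the command line, and captures the **standard output** as the tool's only output file.

```
# Minimal, generic CWL tool description (illustrative sketch only, not a biobb adapter)
#!/usr/bin/env cwl-runner
cwlVersion: v1.0
class: CommandLineTool
baseCommand: echo
inputs:
  message:
    type: string
    inputBinding:
      position: 1    # first (and only) positional argument passed to echo
outputs:
  echoed_message:
    type: stdout     # capture the command's standard output as a File
stdout: echo_output.txt
```

Running this description with `cwltool` and a job input such as `message: Hello CWL` would simply produce the file `echo_output.txt` containing the echoed text.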
A couple of **BioExcel webinars** focused on **CWL**: an [introduction to CWL](https://www.youtube.com/watch?v=jfQb1HJWRac) and a [new open source tool to run CWL workflows on LSF (CWLEXEC)](https://www.youtube.com/watch?v=_jSTZMWtPAY). **BioExcel building blocks** are all **described in CWL**. A specific **CWL** section in the **workflow manager adapters** [github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) gathers all the descriptions, divided into the different categories: io, md, analysis, chemistry, model and pmx (see updated table [here](http://mmb.irbbarcelona.org/webdev/slim/biobb/public/availability/source)). In this tutorial, we are going to use these **BioExcel building blocks CWL descriptions** to build a **CWL** biomolecular workflow. In particular, the assembled workflow will perform a complete **Molecular Dynamics setup** (MD Setup) using the **GROMACS MD package**, taking as a base the **Protein Gromacs MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup). No additional installation is required apart from the **Docker platform** and the **CWL tool reference executor**, as the **building blocks** will be launched using their associated **Docker containers**. *** <a id="tools"></a> ## BioExcel building blocks TOOLS CWL Descriptions Writing a workflow in CWL using the **BioExcel building blocks** is possible thanks to the already generated **CWL descriptions** for all the **building blocks** (wrappers). A specific **CWL** section in the **workflow manager adapters** [github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) gathers all the descriptions, divided into the different categories: io, md, analysis, chemistry, model and pmx (see updated table [here](http://mmb.irbbarcelona.org/webdev/slim/biobb/public/availability/source)). *** <a id="toolcwl"></a> ### Tool Building Block CWL sections: **Example**: Step 1 of the workflow, download a **protein structure** from the **PDB database**. The building block used for this is the [Pdb](https://github.com/bioexcel/biobb_io/blob/master/biobb_io/api/pdb.py) building block, from the [biobb_io](https://github.com/bioexcel/biobb_io) package, which includes tools to **fetch biomolecular data from public databases**. The **CWL description** for this building block can be found in the [adapters github repo](https://github.com/bioexcel/biobb_adapters/blob/master/biobb_adapters/cwl/biobb_io/mmb_api/pdb.cwl), and is shown in the following notebook cell. Description files like this one are needed for all the steps of the workflow in order to build and run a **CWL workflow**. To build a **CWL workflow** with **BioExcel building blocks**, one just needs to download all the needed description files from the [biobb_adapters github](https://github.com/bioexcel/biobb_adapters/blob/master/biobb_adapters/cwl). This particular example of a **Pdb building block** is useful to illustrate the most important points of the **CWL description**: * **hints**: The **CWL hints** section describes the **process requirements** that should (but do not have to) be satisfied to run the wrapped command. The implementation may report a **warning** if a hint cannot be satisfied. In the **BioExcel building blocks**, a **DockerRequirement** subsection is always present in the **hints** section, pointing to the associated **Docker container**. The **dockerPull: parameter** takes the same value that you would pass to a **docker pull** command.
That is, the name of the **container image**. In this case we have used the container called **biobb_io:latest** that can be found in the **quay.io repository**, which contains the **Pdb** building block. ``` hints: DockerRequirement: dockerPull: quay.io/biocontainers/biobb_io:latest ``` * **namespaces and schemas**: Input and output **metadata** may be represented within a tool or workflow. Such **metadata** must use a **namespace prefix** listed in the **$namespaces and $schemas sections** of the document. All **BioExcel building blocks CWL specifications** use the **EDAM ontology** (http://edamontology.org/) as **namespace**, with all terms included in its **Web Ontology Language** (owl) of knowledge representation (http://edamontology.org/EDAM_1.22.owl). **BioExcel** is contributing to the expansion of the **EDAM ontology** with the addition of new structural terms such as [GROMACS XTC format](http://edamontology.org/format_3875) or the [trajectory visualization operation](http://edamontology.org/operation_3890). ``` $namespaces: edam: http://edamontology.org/ $schemas: - http://edamontology.org/EDAM_1.22.owl ``` * **inputs**: The **inputs section** of a **tool** contains a list of input parameters that **control how to run the tool**. Each parameter has an **id** for the name of parameter, and **type** describing what types of values are valid for that parameter. Available primitive types are *string, int, long, float, double, and null*; complex types are *array and record*; in addition there are special types *File, Directory and Any*. The field **inputBinding** is optional and indicates whether and how the input parameter should appear on the tool’s command line, in which **position** (position), and with which **name** (prefix). The **default field** stores the **default value** for the particular **input parameter**. <br>In this particular example, the **Pdb building block** has two different **input parameters**: *output_pdb_path* and *config*. The *output_pdb_path* input parameter defines the name of the **output file** that will contain the downloaded **PDB structure**. The *config* parameter is common to all **BioExcel building blocks**, and gathers all the **properties** of the building block in a **json format**. The **question mark** after the string type (*string?*) denotes that this input is **optional**. ``` inputs: output_pdb_path: type: string inputBinding: position: 1 prefix: --output_pdb_path default: 'downloaded_structure.pdb' config: type: string? inputBinding: position: 2 prefix: --config default: '{"pdb_code" : "1aki"}' ``` * **outputs**: The **outputs section** of a **tool** contains a list of output parameters that should be returned after running the **tool**. Similarly to the inputs section, each parameter has an **id** for the name of parameter, and **type** describing what types of values are valid for that parameter. The **outputBinding** field describes how to set the value of each output parameter. The **glob field** consists of the name of a file in the **output directory**. In the **BioExcel building blocks**, every **output** has an associated **input parameter** defined in the previous input section, defining the name of the file to be generated. <br>In the particular **Pdb building block** example, the *output_pdb_file* parameter of type *File* is coupled to the *output_pdb_path* input parameter, using the **outputBinding** and the **glob** fields. 
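To make the effect of these **inputBinding** fields more concrete, the next cell sketches the kind of **command line** a CWL runner would assemble from them for the wrapped `pdb` command, using the **default values** shown above. This line is illustrative only: the argument order follows the **position** fields, and the real invocation is executed by the runner inside the associated **Docker container**.

```
# Illustrative command line assembled from the inputBinding defaults shown above
pdb --output_pdb_path downloaded_structure.pdb --config '{"pdb_code" : "1aki"}'
```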
The standard **PDB** format of the output file is also specified using the **EDAM ontology** format id 1476 ([edam:format_1476](http://edamontology.org/format_1476)). ``` outputs: output_pdb_file: type: File format: edam:format_1476 outputBinding: glob: $(inputs.output_pdb_path) ``` For more information on CWL tools description, please refer to the [CWL User Guide](https://www.commonwl.org/user_guide/index.html) or the [CWL specifications](https://www.commonwl.org/v1.1/). *** <a id="pdbcwl"></a> ### Complete Pdb Building Block CWL description: Example of a **BioExcel building block CWL description** (pdb from biobb_io package) ``` # Example of a BioExcel building block CWL description (pdb from biobb_io package) #!/usr/bin/env cwl-runner cwlVersion: v1.0 class: CommandLineTool baseCommand: pdb hints: DockerRequirement: dockerPull: quay.io/biocontainers/biobb_io:latest inputs: output_pdb_path: type: string inputBinding: position: 1 prefix: --output_pdb_path default: 'downloaded_structure.pdb' config: type: string? inputBinding: position: 2 prefix: --config default: '{"pdb_code" : "1aki"}' outputs: output_pdb_file: type: File format: edam:format_1476 outputBinding: glob: $(inputs.output_pdb_path) $namespaces: edam: http://edamontology.org/ $schemas: - http://edamontology.org/EDAM_1.22.owl ``` *** <a id="workflows"></a> ## BioExcel building blocks WORKFLOWS CWL Descriptions Now that we have seen the **BioExcel building blocks CWL descriptions**, we can use them to build our first **biomolecular workflow** as a demonstrator. All **CWL workflows** are divided in **two files**: the **CWL description** and the **YAML** or **JSON** files containing **all workflow inputs**. Starting with the **CWL workflow description**, let's explore our first example **section by section**. <a id="cwlheader"></a> ### Header: * **cwlVersion** field indicates the version of the **CWL spec** used by the document. * **class** field indicates this document describes a **workflow**. ``` # !/usr/bin/env cwl-runner cwlVersion: v1.0 class: Workflow label: Example CWL Header doc: | An example of how to create a CWl header. We have specified the version of CWL that we are using; the class, which is a 'workflow'. The label field should provide a short title or description of the workflow and the description should provide a longer description of what the workflow doe. ``` <a id="inputs"></a> ### Inputs: The **inputs section** describes the inputs for **each of the steps** of the workflow. The **BioExcel building blocks (biobb)** have three types of **input parameters**: **input**, **output**, and **properties**. The **properties** parameter, which contains all the input parameters that are neither **input** nor **output files**, is defined in **JSON format** (see examples in the **Protein MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup)). **Example**: Step 1 of the workflow, download a **protein structure** from the **PDB database**. Two different **inputs** are needed for this step: the **name of the file** that will contain the downloaded PDB structure (*step1_output_name*), and the **properties** of the building block (*step1_properties*), that in this case will indicate the PDB code to look for (see **Input of a run** section). Both input parameters have type *string* in this **building block**. 
``` # CWL workflow inputs section example inputs: step1_output_name: string step1_properties: string ``` <a id="outputs"></a> ### Outputs: The **outputs:** section describes the set of **final outputs** from the **workflow**. These outputs can be a collection of outputs from **different steps of the workflow**. Each output is a `key: value` pair. The `key` should be a unique identifier, and the value should be a dictionary (consisting of `key: value` pairs). These `keys` consists of `label`, which is a title or name for the output; `doc`, which is a longer description of what this output is; `type`, which is the data type expected; and `outputSource`, which connects the output parameter of a **particular step** to the **workflow final output parameter**. ``` outputs: pdb: #unique identifier label: Protein structure doc: | Step 1 of the workflow, download a 'protein structure' from the 'PDB database'. The *pdb* 'output' is a 'file' containing the 'protein structure' in 'PDB format', which is connected to the output parameter *output_pdb_file* of the 'step1 of the workflow' (*step1_pdb*). type: File #data type outputSource: step1_pdb/output_pdb_file ``` <a id="steps"></a> ### Steps: The **steps section** describes the actual steps of the workflow. Steps are **connected** one to the other through the **input parameters**. **Workflow steps** are not necessarily run in the order they are listed, instead **the order is determined by the dependencies between steps**. In addition, workflow steps which do not depend on one another may run **in parallel**. **Example**: Step 1 and 2 of the workflow, download a **protein structure** from the **PDB database**, and **fix the side chains**, adding any side chain atoms missing in the original structure. Note how **step1 and step2** are **connected** through the **output** of one and the **input** of the other: **Step2** (*step2_fixsidechain*) receives as **input** (*input_pdb_path*) the **output of the step1** (*step1_pdb*), identified as *step1_pdb/output_pdb_file*. ``` # CWL workflow steps section example step1_pdb: label: Fetch PDB Structure doc: | Download a protein structure from the PDB database run: biobb/biobb_adapters/cwl/biobb_io/mmb_api/pdb.cwl in: output_pdb_path: step1_pdb_name config: step1_pdb_config out: [output_pdb_file] step2_fixsidechain: label: Fix Protein structure doc: | Fix the side chains, adding any side chain atoms missing in the original structure. run: biobb/biobb_adapters/cwl/biobb_model/model/fix_side_chain.cwl in: input_pdb_path: step1_pdb/output_pdb_file out: [output_pdb_file] ``` <a id="run"></a> ### Input of a run: As previously stated, all **CWL workflows** are divided in **two files**: the **CWL description** and the **YAML** or **JSON** files containing **all workflow inputs**. In this example, we are going to produce a **YAML** formatted object in a separate file describing the **inputs of our run**. **Example**: Step 1 of the workflow, download a **protein structure** from the **PDB database**. 
The **step1_output_name** contains the name of the file that is going to be produced by the **building block**, whereas the **JSON-formatted properties** (**step1_properties**) contain the **pdb code** of the structure to be downloaded: * step1_output_name: **"tutorial_1aki.pdb"** * step1_properties: **{"pdb_code" : "1aki"}** ``` step1_output_name: 'tutorial_1aki.pdb' step1_properties: '{"pdb_code" : "1aki"}' ``` <a id="wf"></a> ### Complete workflow: Example of a short **CWL workflow** with **BioExcel building blocks**, which retrieves a **PDB file** for the **Lysozyme protein structure** from the RCSB PDB database (**step1: pdb.cwl**), and fixes the possible problems in the structure, adding **missing side chain atoms** if needed (**step2: fix_side_chain.cwl**). ``` # !/usr/bin/env cwl-runner cwlVersion: v1.0 class: Workflow label: Example of a short CWL workflow with BioExcel building blocks doc: | Example of a short 'CWL workflow' with 'BioExcel building blocks', which retrieves a 'PDB file' for the 'Lysozyme protein structure' from the RCSB PDB database ('step1: pdb.cwl'), and fixes the possible problems in the structure, adding 'missing side chain atoms' if needed ('step2: fix_side_chain.cwl'). inputs: step1_properties: '{"pdb_code" : "1aki"}' step1_output_name: 'tutorial_1aki.pdb' outputs: pdb: type: File outputSource: step2_fixsidechain/output_pdb_file steps: step1_pdb: label: Fetch PDB Structure doc: | Download a protein structure from the PDB database run: biobb_adapters/pdb.cwl in: output_pdb_path: step1_output_name config: step1_properties out: [output_pdb_file] step2_fixsidechain: label: Fix Protein structure doc: | Fix the side chains, adding any side chain atoms missing in the original structure. run: biobb_adapters/fix_side_chain.cwl in: input_pdb_path: step1_pdb/output_pdb_file out: [output_pdb_file] ``` <a id="runwf"></a> ### Running the CWL workflow: The final step of the process is **running the workflow described in CWL**. For that, the description presented in the previous cell should be written to a file (e.g. BioExcel-CWL-firstWorkflow.cwl), the **YAML** input should be written to a separate file (e.g. BioExcel-CWL-firstWorkflow-job.yml) and finally both files should be used with the **CWL tool description reference implementation executer** (cwltool). It is important to note that in order to properly run the **CWL workflow**, the **CWL descriptions** for all the **building blocks** used in the **workflow** should be accessible from the file system. In this example, all the **CWL descriptions** needed where downloaded from the [BioExcel building blocks adapters github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) to a folder named **biobb_adapters**. The **command line** is shown in the cell below: ``` # Run CWL workflow with CWL tool description reference implementation (cwltool). cwltool BioExcel-CWL-firstWorkflow.cwl BioExcel-CWL-firstWorkflow-job.yml ``` <a id="wfoutput"></a> ### Cwltool workflow output The **execution of the workflow** will write information to the standard output such as the **step being performed**, the **way it is run** (command line, docker container, etc.), **inputs and outputs** used, and **state of each step** (success, failed). 
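If a step fails, a quick first check is to make sure the **CWL description** itself is well formed. The **cwltool** reference implementation includes a validation mode for this purpose (an optional, illustrative step: the document is only parsed and checked, nothing is executed).

```
# Optional: validate the CWL description without executing any step
cwltool --validate BioExcel-CWL-firstWorkflow.cwl
```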
The next cell contains a **real output** for the **execution of our first example**: ``` Resolved 'BioExcel-CWL-firstWorkflow.cwl' to 'file:///PATH/biobb_wf_md_setup/cwl/BioExcel-CWL-firstWorkflow.cwl' [workflow BioExcel-CWL-firstWorkflow.cwl] start [step step1_pdb] start [job step1_pdb] /private/tmp/docker_tmp1g8y0wu0$ docker \ run \ -i \ --volume=/private/tmp/docker_tmp1g8y0wu0:/private/var/spool/cwl:rw \ --volume=/private/var/folders/7f/0hxgf3d971b98lk_fps26jx40000gn/T/tmps4_pw5tj:/tmp:rw \ --workdir=/private/var/spool/cwl \ --read-only=true \ --user=501:20 \ --rm \ --env=TMPDIR=/tmp \ --env=HOME=/private/var/spool/cwl \ quay.io/biocontainers/biobb_io:0.1.3--py_0 \ pdb \ --config \ '{"pdb_code" : "1aki"}' \ --output_pdb_path \ tutorial.pdb 2019-10-24 08:42:06,235 [MainThread ] [INFO ] Downloading: 1aki from: https://files.rcsb.org/download/1aki.pdb 2019-10-24 08:42:07,594 [MainThread ] [INFO ] Writting pdb to: /private/var/spool/cwl/tutorial.pdb 2019-10-24 08:42:07,607 [MainThread ] [INFO ] Filtering lines NOT starting with one of these words: ['ATOM', 'MODEL', 'ENDMDL'] [job step1_pdb] completed success [step step1_pdb] completed success [step step2_fixsidechain] start [job step2_fixsidechain] /private/tmp/docker_tmpuaecttdd$ docker \ run \ -i \ --volume=/private/tmp/docker_tmpuaecttdd:/private/var/spool/cwl:rw \ --volume=/private/var/folders/7f/0hxgf3d971b98lk_fps26jx40000gn/T/tmp9t_nks8r:/tmp:rw \ --volume=/private/tmp/docker_tmp1g8y0wu0/tutorial.pdb:/private/var/lib/cwl/stg5b2950e7-ef54-4df6-be70-677050c4c258/tutorial.pdb:ro \ --workdir=/private/var/spool/cwl \ --read-only=true \ --user=501:20 \ --rm \ --env=TMPDIR=/tmp \ --env=HOME=/private/var/spool/cwl \ quay.io/biocontainers/biobb_model:0.1.3--py_0 \ fix_side_chain \ --input_pdb_path \ /private/var/lib/cwl/stg5b2950e7-ef54-4df6-be70-677050c4c258/tutorial.pdb \ --output_pdb_path \ fixed.pdb [job step2_fixsidechain] completed success [step step2_fixsidechain] completed success [workflow BioExcel-CWL-firstWorkflow.cwl] completed success { "pdb": { "location": "file:///PATH/biobb_wf_md_setup/cwl/fixed.pdb", "basename": "fixed.pdb", "class": "File", "checksum": "sha1$3ef7a955f93f25af5e59b85bcf4cb1d0bbf69a40", "size": 81167, "format": "http://edamontology.org/format_1476", "path": "/PATH/biobb_wf_md_setup/cwl/fixed.pdb" } } Final process status is success ``` *** <a id="mdsetup"></a> ## Protein MD-Setup CWL workflow with BioExcel building blocks The last step of this **tutorial** illustrates the building of a **complex CWL workflow**. The example used is the **Protein Gromacs MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup). It is strongly recommended to take a look at this **notebook** before moving on to the next sections of this **tutorial**, as it contains information for all the **building blocks** used. The aim of this **tutorial** is to illustrate how to build **CWL workflows** using the **BioExcel building blocks**. For information about the science behind every step of the workflow, please refer to the **Protein Gromacs MD Setup** Jupyter Notebook tutorial. The **workflow** presented in the next cells is a translation of the very same workflow to **CWL language**, including the same **number of steps** (23) and **building blocks**. <a id="mdsteps"></a> ### Steps: First of all, let's define the **steps of the workflow**. 
* **Fetching PDB Structure**: step 1 * **Fix Protein Structure**: step 2 * **Create Protein System Topology**: step 3 * **Create Solvent Box**: step 4 * **Fill the Box with Water Molecules**: step 5 * **Adding Ions**: steps 6 and 7 * **Energetically Minimize the System**: steps 8, 9 and 10 * **Equilibrate the System (NVT)**: steps 11, 12 and 13 * **Equilibrate the System (NPT)**: steps 14, 15 and 16 * **Free Molecular Dynamics Simulation**: steps 17 and 18 * **Post-processing Resulting 3D Trajectory**: steps 19 to 23 Mandatory and optional **inputs** and **outputs** of every **building block** can be consulted in the appropriate **documentation** pages from the corresponding **BioExcel building block** category (see updated table [here](http://mmb.irbbarcelona.org/webdev/slim/biobb/public/availability/source)). ``` step1_pdb: label: Fetch PDB Structure doc: | Download a protein structure from the PDB database run: biobb/biobb_adapters/cwl/biobb_io/mmb_api/pdb.cwl in: output_pdb_path: step1_pdb_name config: step1_pdb_config out: [output_pdb_file] step2_fixsidechain: label: Fix Protein structure doc: | Fix the side chains, adding any side chain atoms missing in the original structure. run: biobb/biobb_adapters/cwl/biobb_model/model/fix_side_chain.cwl in: input_pdb_path: step1_pdb/output_pdb_file out: [output_pdb_file] step3_pdb2gmx: label: Create Protein System Topology run: biobb/biobb_adapters/cwl/biobb_md/gromacs/pdb2gmx.cwl in: input_pdb_path: step2_fixsidechain/output_pdb_file out: [output_gro_file, output_top_zip_file] step4_editconf: label: Create Solvent Box run: biobb/biobb_adapters/cwl/biobb_md/gromacs/editconf.cwl in: input_gro_path: step3_pdb2gmx/output_gro_file out: [output_gro_file] step5_solvate: label: Fill the Box with Water Molecules run: biobb/biobb_adapters/cwl/biobb_md/gromacs/solvate.cwl in: input_solute_gro_path: step4_editconf/output_gro_file input_top_zip_path: step3_pdb2gmx/output_top_zip_file out: [output_gro_file, output_top_zip_file] step6_grompp_genion: label: Add Ions - part 1 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl in: config: step6_gppion_config input_gro_path: step5_solvate/output_gro_file input_top_zip_path: step5_solvate/output_top_zip_file out: [output_tpr_file] step7_genion: label: Add Ions - part 2 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/genion.cwl in: config: step7_genion_config input_tpr_path: step6_grompp_genion/output_tpr_file input_top_zip_path: step5_solvate/output_top_zip_file out: [output_gro_file, output_top_zip_file] step8_grompp_min: label: Energetically Minimize the System - part 1 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl in: config: step8_gppmin_config input_gro_path: step7_genion/output_gro_file input_top_zip_path: step7_genion/output_top_zip_file out: [output_tpr_file] step9_mdrun_min: label: Energetically Minimize the System - part 2 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl in: input_tpr_path: step8_grompp_min/output_tpr_file out: [output_trr_file, output_gro_file, output_edr_file, output_log_file] step10_energy_min: label: Energetically Minimize the System - part 3 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl in: config: step10_energy_min_config output_xvg_path: step10_energy_min_name input_energy_path: step9_mdrun_min/output_edr_file out: [output_xvg_file] step11_grompp_nvt: label: Equilibrate the System (NVT) - part 1 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl in: config: step11_gppnvt_config input_gro_path: step9_mdrun_min/output_gro_file 
input_top_zip_path: step7_genion/output_top_zip_file out: [output_tpr_file] step12_mdrun_nvt: label: Equilibrate the System (NVT) - part 2 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl in: input_tpr_path: step11_grompp_nvt/output_tpr_file out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file] step13_energy_nvt: label: Equilibrate the System (NVT) - part 3 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl in: config: step13_energy_nvt_config output_xvg_path: step13_energy_nvt_name input_energy_path: step12_mdrun_nvt/output_edr_file out: [output_xvg_file] step14_grompp_npt: label: Equilibrate the System (NPT) - part 1 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl in: config: step14_gppnpt_config input_gro_path: step12_mdrun_nvt/output_gro_file input_top_zip_path: step7_genion/output_top_zip_file input_cpt_path: step12_mdrun_nvt/output_cpt_file out: [output_tpr_file] step15_mdrun_npt: label: Equilibrate the System (NPT) - part 2 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl in: input_tpr_path: step14_grompp_npt/output_tpr_file out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file] step16_energy_npt: label: Equilibrate the System (NPT) - part 3 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl in: config: step16_energy_npt_config output_xvg_path: step16_energy_npt_name input_energy_path: step15_mdrun_npt/output_edr_file out: [output_xvg_file] step17_grompp_md: label: Free Molecular Dynamics Simulation - part 1 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl in: config: step17_gppmd_config input_gro_path: step15_mdrun_npt/output_gro_file input_top_zip_path: step7_genion/output_top_zip_file input_cpt_path: step15_mdrun_npt/output_cpt_file out: [output_tpr_file] step18_mdrun_md: label: Free Molecular Dynamics Simulation - part 2 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl in: input_tpr_path: step17_grompp_md/output_tpr_file out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file] step19_rmsfirst: label: Post-processing Resulting 3D Trajectory - part 1 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rms.cwl in: config: step19_rmsfirst_config output_xvg_path: step19_rmsfirst_name input_structure_path: step17_grompp_md/output_tpr_file input_traj_path: step18_mdrun_md/output_trr_file out: [output_xvg_file] step20_rmsexp: label: Post-processing Resulting 3D Trajectory - part 2 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rms.cwl in: config: step20_rmsexp_config output_xvg_path: step20_rmsexp_name input_structure_path: step8_grompp_min/output_tpr_file input_traj_path: step18_mdrun_md/output_trr_file out: [output_xvg_file] step21_rgyr: label: Post-processing Resulting 3D Trajectory - part 3 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rgyr.cwl in: config: step21_rgyr_config input_structure_path: step8_grompp_min/output_tpr_file input_traj_path: step18_mdrun_md/output_trr_file out: [output_xvg_file] step22_image: label: Post-processing Resulting 3D Trajectory - part 4 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_image.cwl in: config: step22_image_config input_top_path: step17_grompp_md/output_tpr_file input_traj_path: step18_mdrun_md/output_trr_file out: [output_traj_file] step23_dry: label: Post-processing Resulting 3D Trajectory - part 5 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_trjconv_str.cwl in: config: step23_dry_config input_structure_path: 
step18_mdrun_md/output_gro_file input_top_path: step17_grompp_md/output_tpr_file out: [output_str_file] ``` <a id="mdinputs"></a> ### Inputs: All inputs for the **BioExcel building blocks** are defined as *strings*. Not all the steps in this particular example need **external inputs**, some of them just works using as input/s an output (or outputs) from **previous steps** (e.g. step2_fixsidechain). For the steps that need input, all of them will receive a **JSON** formatted input (of type string), with the **properties parameters** of the **building blocks** (config). Apart from that, some of the **building blocks** in this example are receiving two different input parameters: the **properties** (e.g. *step1_pdb_config*) and the **name of the output file** to be written (e.g. *step1_pdb_name*). This is particularly useful to identify the files generated by different steps of the **workflow**. Besides, in cases where the same **building block** is used more than once, using the **default value** for the **output files** will cause the **overwritting** of the results generated by previous steps (e.g. energy calculation steps). All these inputs will be filled up with values from the **separated YAML input file**. ``` inputs: step1_pdb_name: string step1_pdb_config: string step4_editconf_config: string step6_gppion_config: string step7_genion_config: string step8_gppmin_config: string step10_energy_min_config: string step10_energy_min_name: string step11_gppnvt_config: string step13_energy_nvt_config: string step13_energy_nvt_name: string step14_gppnpt_config: string step16_energy_npt_config: string step16_energy_npt_name: string step17_gppmd_config: string step19_rmsfirst_config: string step19_rmsfirst_name: string step20_rmsexp_config: string step20_rmsexp_name: string step21_rgyr_config: string step22_image_config: string step23_dry_config: string ``` <a id="mdoutputs"></a> ### Outputs: The **outputs section** contains the set of **final outputs** from the **workflow**. In this case, **outputs** from **different steps** of the **workflow** are considered **final outputs**: * **Trajectories**: * **trr**: Raw trajectory from the *free* simulation step. * **trr_imaged_dry**: Post-processed trajectory, dehydrated, imaged (rotations and translations removed) and centered. * **Structures**: * **gro**: Raw structure from the *free* simulation step. * **gro_dry**: Resulting protein structure taken from the post-processed trajectory, to be used as a topology, usually for visualization purposes. * **Topologies**: * **tpr**: GROMACS portable binary run input file, containing the starting structure of the simulation, the molecular topology and all the simulation parameters. * **top**: GROMACS topology file, containing the molecular topology in an ASCII readable format. * **System Setup Observables**: * **xvg_min**: Potential energy of the system during the minimization step. * **xvg_nvt**: Temperature of the system during the NVT equilibration step. * **xvg_npt**: Pressure and density of the system (box) during the NPT equilibration step. * **Simulation Analysis**: * **xvg_rmsfirst**: Root Mean Square deviation (RMSd) throughout the whole *free* simulation step against the first snapshot of the trajectory (equilibrated system). * **xvg_rmsexp**: Root Mean Square deviation (RMSd) throughout the whole *free* simulation step against the experimental structure (minimized system). * **xvg_rgyr**: Radius of Gyration (RGyr) of the molecule throughout the whole *free* simulation step. 
* **Checkpoint file**: * **cpt**: GROMACS portable checkpoint file, allowing to restore (continue) the simulation from the last step of the setup process. Please note that the name of the **output files** is sometimes fixed by a **specific input** (e.g. step10_energy_min_name), whereas when no specific name is given as input, the **default value** is used (e.g. system.tpr). **Default values** can be found in the **CWL description** files for each **building block** (biobb_adapters). ``` outputs: trr: label: Trajectories - Raw trajectory doc: | Raw trajectory from the free simulation step type: File outputSource: step18_mdrun_md/output_trr_file trr_imaged_dry: label: Trajectories - Post-processed trajectory doc: | Post-processed trajectory, dehydrated, imaged (rotations and translations removed) and centered. type: File outputSource: step22_image/output_traj_file gro_dry: label: Resulting protein structure doc: | Resulting protein structure taken from the post-processed trajectory, to be used as a topology, usually for visualization purposes. type: File outputSource: step23_dry/output_str_file gro: label: Structures - Raw structure doc: | Raw structure from the free simulation step. type: File outputSource: step18_mdrun_md/output_gro_file cpt: label: Checkpoint file doc: | GROMACS portable checkpoint file, allowing to restore (continue) the simulation from the last step of the setup process. type: File outputSource: step18_mdrun_md/output_cpt_file tpr: label: Topologies GROMACS portable binary run doc: | GROMACS portable binary run input file, containing the starting structure of the simulation, the molecular topology and all the simulation parameters. type: File outputSource: step17_grompp_md/output_tpr_file top: label: GROMACS topology file doc: | GROMACS topology file, containing the molecular topology in an ASCII readable format. type: File outputSource: step7_genion/output_top_zip_file xvg_min: label: System Setup Observables - Potential Energy doc: | Potential energy of the system during the minimization step. type: File outputSource: step10_energy_min/output_xvg_file xvg_nvt: label: System Setup Observables - Temperature doc: | Temperature of the system during the NVT equilibration step. type: File outputSource: step13_energy_nvt/output_xvg_file xvg_npt: label: System Setup Observables - Pressure and density type: File outputSource: step16_energy_npt/output_xvg_file xvg_rmsfirst: label: Simulation Analysis doc: | Root Mean Square deviation (RMSd) throughout the whole free simulation step against the first snapshot of the trajectory (equilibrated system). type: File outputSource: step19_rmsfirst/output_xvg_file xvg_rmsexp: label: Simulation Analysis doc: | Root Mean Square deviation (RMSd) throughout the whole free simulation step against the experimental structure (minimized system). type: File outputSource: step20_rmsexp/output_xvg_file xvg_rgyr: label: Simulation Analysis doc: | Radius of Gyration (RGyr) of the molecule throughout the whole free simulation step type: File outputSource: step21_rgyr/output_xvg_file ``` <a id="mdworkflow"></a> ### Complete workflow: The complete **CWL described workflow** to run a **Molecular Dynamics Setup** on a protein structure can be found in the next cell. The **representation of the workflow** using the **CWL Viewer** web service can be found here: XXXXXX. The **full workflow** is a combination of the **inputs**, **outputs** and **steps** revised in the previous cells. 
``` # Protein MD-Setup CWL workflow with BioExcel building blocks # https://github.com/bioexcel/biobb_wf_md_setup #!/usr/bin/env cwl-runner cwlVersion: v1.0 class: Workflow inputs: step1_pdb_name: string step1_pdb_config: string step4_editconf_config: string step6_gppion_config: string step7_genion_config: string step8_gppmin_config: string step10_energy_min_config: string step10_energy_min_name: string step11_gppnvt_config: string step13_energy_nvt_config: string step13_energy_nvt_name: string step14_gppnpt_config: string step16_energy_npt_config: string step16_energy_npt_name: string step17_gppmd_config: string step19_rmsfirst_config: string step19_rmsfirst_name: string step20_rmsexp_config: string step20_rmsexp_name: string step21_rgyr_config: string step22_image_config: string step23_dry_config: string outputs: trr: label: Trajectories - Raw trajectory doc: | Raw trajectory from the free simulation step type: File outputSource: step18_mdrun_md/output_trr_file trr_imaged_dry: label: Trajectories - Post-processed trajectory doc: | Post-processed trajectory, dehydrated, imaged (rotations and translations removed) and centered. type: File outputSource: step22_image/output_traj_file gro_dry: label: Resulting protein structure doc: | Resulting protein structure taken from the post-processed trajectory, to be used as a topology, usually for visualization purposes. type: File outputSource: step23_dry/output_str_file gro: label: Structures - Raw structure doc: | Raw structure from the free simulation step. type: File outputSource: step18_mdrun_md/output_gro_file cpt: label: Checkpoint file doc: | GROMACS portable checkpoint file, allowing to restore (continue) the simulation from the last step of the setup process. type: File outputSource: step18_mdrun_md/output_cpt_file tpr: label: Topologies GROMACS portable binary run doc: | GROMACS portable binary run input file, containing the starting structure of the simulation, the molecular topology and all the simulation parameters. type: File outputSource: step17_grompp_md/output_tpr_file top: label: GROMACS topology file doc: | GROMACS topology file, containing the molecular topology in an ASCII readable format. type: File outputSource: step7_genion/output_top_zip_file xvg_min: label: System Setup Observables - Potential Energy doc: | Potential energy of the system during the minimization step. type: File outputSource: step10_energy_min/output_xvg_file xvg_nvt: label: System Setup Observables - Temperature doc: | Temperature of the system during the NVT equilibration step. type: File outputSource: step13_energy_nvt/output_xvg_file xvg_npt: label: System Setup Observables - Pressure and density type: File outputSource: step16_energy_npt/output_xvg_file xvg_rmsfirst: label: Simulation Analysis doc: | Root Mean Square deviation (RMSd) throughout the whole free simulation step against the first snapshot of the trajectory (equilibrated system). type: File outputSource: step19_rmsfirst/output_xvg_file xvg_rmsexp: label: Simulation Analysis doc: | Root Mean Square deviation (RMSd) throughout the whole free simulation step against the experimental structure (minimized system). 
type: File outputSource: step20_rmsexp/output_xvg_file xvg_rgyr: label: Simulation Analysis doc: | Radius of Gyration (RGyr) of the molecule throughout the whole free simulation step type: File outputSource: step21_rgyr/output_xvg_file steps: step1_pdb: label: Fetch PDB Structure doc: | Download a protein structure from the PDB database run: biobb/biobb_adapters/cwl/biobb_io/mmb_api/pdb.cwl in: output_pdb_path: step1_pdb_name config: step1_pdb_config out: [output_pdb_file] step2_fixsidechain: label: Fix Protein structure doc: | Fix the side chains, adding any side chain atoms missing in the original structure. run: biobb/biobb_adapters/cwl/biobb_model/model/fix_side_chain.cwl in: input_pdb_path: step1_pdb/output_pdb_file out: [output_pdb_file] step3_pdb2gmx: label: Create Protein System Topology run: biobb/biobb_adapters/cwl/biobb_md/gromacs/pdb2gmx.cwl in: input_pdb_path: step2_fixsidechain/output_pdb_file out: [output_gro_file, output_top_zip_file] step4_editconf: label: Create Solvent Box run: biobb/biobb_adapters/cwl/biobb_md/gromacs/editconf.cwl in: input_gro_path: step3_pdb2gmx/output_gro_file out: [output_gro_file] step5_solvate: label: Fill the Box with Water Molecules run: biobb/biobb_adapters/cwl/biobb_md/gromacs/solvate.cwl in: input_solute_gro_path: step4_editconf/output_gro_file input_top_zip_path: step3_pdb2gmx/output_top_zip_file out: [output_gro_file, output_top_zip_file] step6_grompp_genion: label: Add Ions - part 1 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl in: config: step6_gppion_config input_gro_path: step5_solvate/output_gro_file input_top_zip_path: step5_solvate/output_top_zip_file out: [output_tpr_file] step7_genion: label: Add Ions - part 2 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/genion.cwl in: config: step7_genion_config input_tpr_path: step6_grompp_genion/output_tpr_file input_top_zip_path: step5_solvate/output_top_zip_file out: [output_gro_file, output_top_zip_file] step8_grompp_min: label: Energetically Minimize the System - part 1 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl in: config: step8_gppmin_config input_gro_path: step7_genion/output_gro_file input_top_zip_path: step7_genion/output_top_zip_file out: [output_tpr_file] step9_mdrun_min: label: Energetically Minimize the System - part 2 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl in: input_tpr_path: step8_grompp_min/output_tpr_file out: [output_trr_file, output_gro_file, output_edr_file, output_log_file] step10_energy_min: label: Energetically Minimize the System - part 3 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl in: config: step10_energy_min_config output_xvg_path: step10_energy_min_name input_energy_path: step9_mdrun_min/output_edr_file out: [output_xvg_file] step11_grompp_nvt: label: Equilibrate the System (NVT) - part 1 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl in: config: step11_gppnvt_config input_gro_path: step9_mdrun_min/output_gro_file input_top_zip_path: step7_genion/output_top_zip_file out: [output_tpr_file] step12_mdrun_nvt: label: Equilibrate the System (NVT) - part 2 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl in: input_tpr_path: step11_grompp_nvt/output_tpr_file out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file] step13_energy_nvt: label: Equilibrate the System (NVT) - part 3 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl in: config: step13_energy_nvt_config output_xvg_path: step13_energy_nvt_name input_energy_path: 
step12_mdrun_nvt/output_edr_file out: [output_xvg_file] step14_grompp_npt: label: Equilibrate the System (NPT) - part 1 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl in: config: step14_gppnpt_config input_gro_path: step12_mdrun_nvt/output_gro_file input_top_zip_path: step7_genion/output_top_zip_file input_cpt_path: step12_mdrun_nvt/output_cpt_file out: [output_tpr_file] step15_mdrun_npt: label: Equilibrate the System (NPT) - part 2 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl in: input_tpr_path: step14_grompp_npt/output_tpr_file out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file] step16_energy_npt: label: Equilibrate the System (NPT) - part 3 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl in: config: step16_energy_npt_config output_xvg_path: step16_energy_npt_name input_energy_path: step15_mdrun_npt/output_edr_file out: [output_xvg_file] step17_grompp_md: label: Free Molecular Dynamics Simulation - part 1 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl in: config: step17_gppmd_config input_gro_path: step15_mdrun_npt/output_gro_file input_top_zip_path: step7_genion/output_top_zip_file input_cpt_path: step15_mdrun_npt/output_cpt_file out: [output_tpr_file] step18_mdrun_md: label: Free Molecular Dynamics Simulation - part 2 run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl in: input_tpr_path: step17_grompp_md/output_tpr_file out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file] step19_rmsfirst: label: Post-processing Resulting 3D Trajectory - part 1 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rms.cwl in: config: step19_rmsfirst_config output_xvg_path: step19_rmsfirst_name input_structure_path: step17_grompp_md/output_tpr_file input_traj_path: step18_mdrun_md/output_trr_file out: [output_xvg_file] step20_rmsexp: label: Post-processing Resulting 3D Trajectory - part 2 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rms.cwl in: config: step20_rmsexp_config output_xvg_path: step20_rmsexp_name input_structure_path: step8_grompp_min/output_tpr_file input_traj_path: step18_mdrun_md/output_trr_file out: [output_xvg_file] step21_rgyr: label: Post-processing Resulting 3D Trajectory - part 3 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rgyr.cwl in: config: step21_rgyr_config input_structure_path: step8_grompp_min/output_tpr_file input_traj_path: step18_mdrun_md/output_trr_file out: [output_xvg_file] step22_image: label: Post-processing Resulting 3D Trajectory - part 4 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_image.cwl in: config: step22_image_config input_top_path: step17_grompp_md/output_tpr_file input_traj_path: step18_mdrun_md/output_trr_file out: [output_traj_file] step23_dry: label: Post-processing Resulting 3D Trajectory - part 5 run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_trjconv_str.cwl in: config: step23_dry_config input_structure_path: step18_mdrun_md/output_gro_file input_top_path: step17_grompp_md/output_tpr_file out: [output_str_file] ``` <a id="mdrun"></a> ### Input of the run: As previously stated, all **CWL workflows** are divided in **two files**: the **CWL description** and the **YAML** or **JSON** files containing **all workflow inputs**. The following cell presents the **YAML** file describing the **inputs of the run** for the **Protein Gromacs MD Setup** workflow. 
All the steps were defined as *strings* in the **CWL workflow**; **building block** inputs ending with "*_name*" contain a simple *string* with the desired file name, while inputs ending with "*_config*" contain the **properties parameters** as a *string* in **JSON format**. Please note that all double quotes in the **JSON format** must be escaped (a small optional sanity check for these strings is sketched at the end of this section). The **properties parameters** were taken from the original **Protein Gromacs MD Setup** workflow [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup). Please refer to it for information about the values used.

```
# Protein MD-Setup CWL workflow with BioExcel building blocks - Input YAML configuration file
# https://github.com/bioexcel/biobb_wf_md_setup
step1_pdb_name: 'tutorial.pdb'
step1_pdb_config: '{"pdb_code" : "1aki"}'
step4_editconf_config: '{"box_type": "cubic","distance_to_molecule": 1.0}'
step6_gppion_config: '{"mdp": {"type":"minimization"}}'
step7_genion_config: '{"neutral": "True"}'
step8_gppmin_config: '{"mdp": {"type":"minimization", "nsteps":"5000", "emtol":"500"}}'
step10_energy_min_config: '{"terms": ["Potential"]}'
step10_energy_min_name: 'energy_min.xvg'
step11_gppnvt_config: '{"mdp": {"type":"nvt", "nsteps":"5000", "dt":0.002, "define":"-DPOSRES"}}'
step13_energy_nvt_config: '{"terms": ["Temperature"]}'
step13_energy_nvt_name: 'energy_nvt.xvg'
step14_gppnpt_config: '{"mdp": {"type":"npt", "nsteps":"5000"}}'
step16_energy_npt_config: '{"terms": ["Pressure","Density"]}'
step16_energy_npt_name: 'energy_npt.xvg'
step17_gppmd_config: '{"mdp": {"type":"free", "nsteps":"50000"}}'
step19_rmsfirst_config: '{"selection": "Backbone"}'
step19_rmsfirst_name: 'rmsd_first.xvg'
step20_rmsexp_config: '{"selection": "Backbone"}'
step20_rmsexp_name: 'rmsd_exp.xvg'
step21_rgyr_config: '{"selection": "Backbone"}'
step22_image_config: '{"center_selection":"Protein","output_selection":"Protein","pbc":"mol"}'
step23_dry_config: '{"selection": "Protein"}'
```

<a id="mdcwlrun"></a>
### Running the CWL workflow:

The final step of the process is **running the workflow described in CWL**. For that, the complete **workflow description** should be written to a file (e.g. BioExcel-CWL-MDSetup.cwl), the **YAML** input should be written to a separate file (e.g. BioExcel-CWL-MDSetup-job.yml) and finally both files should be used with the **CWL tool description reference implementation executor** (cwltool).

As in the previous example, it is important to note that, in order to properly run the **CWL workflow**, the **CWL descriptions** for all the **building blocks** used in the **workflow** should be accessible from the file system. In this example, all the **CWL descriptions** needed were downloaded from the [BioExcel building blocks adapters github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) to a folder named **biobb_adapters**.

It is worth noting that, since this workflow uses different **BioExcel building block modules** (biobb_io, biobb_model, biobb_md and biobb_analysis), the **Docker container** for each of the modules will be downloaded the first time it is launched. This process **could take some time** (and **disk space**). Once all the **Docker containers** are correctly downloaded and integrated in the system, the **workflow** should take around 1h to run (depending on the machine used). The **command line** is shown in the cell below:

```
# Run CWL workflow with CWL tool description reference implementation (cwltool).
cwltool BioExcel-CWL-MDSetup.cwl BioExcel-CWL-MDSetup-job.yml
```

***

<a id="questions"></a>
## Questions & Comments

Questions, issues, suggestions and comments are really welcome!

* GitHub issues:
    * [https://github.com/bioexcel/biobb](https://github.com/bioexcel/biobb)

* BioExcel forum:
    * [https://ask.bioexcel.eu/c/BioExcel-Building-Blocks-library](https://ask.bioexcel.eu/c/BioExcel-Building-Blocks-library)
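As an optional final check before launching the run above (not part of the original tutorial), the escaped JSON strings in the job file can be validated from Python. The sketch below assumes PyYAML is installed and that the inputs were saved as `BioExcel-CWL-MDSetup-job.yml`, as described earlier.

```
# Optional sanity check: verify that every "*_config" input in the job file parses as
# valid JSON before committing to the (roughly one-hour) cwltool run.
import json
import yaml  # PyYAML, assumed to be installed

with open("BioExcel-CWL-MDSetup-job.yml") as f:
    job = yaml.safe_load(f)

for key, value in job.items():
    if key.endswith("_config"):
        try:
            json.loads(value)          # raises ValueError if the JSON string is malformed
        except ValueError as err:
            print(f"{key}: invalid JSON -> {err}")
        else:
            print(f"{key}: OK")
```

Any quoting or escaping mistake in a `*_config` string shows up here in seconds instead of at workflow runtime.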
github_jupyter
``` import xarray as xr import matplotlib.pyplot as plt import cartopy.crs as ccrs from scipy.io import loadmat #where to find the data adir= 'F:/data/fluxsat/WS_SST_Correlation/' #read in the data ds1=xr.open_dataset(adir+'Corr_High_redone.nc') ds1.close() ds2=xr.open_dataset(adir+'Corr_Full.nc') #Full: corelation using unfiltered daily data: ds2.close() tem = loadmat(adir+'fluxDifferences.mat') ds_err = xr.Dataset({'err': (['lat', 'lon'], tem['combinedSD'].transpose())}, coords={'lon': (['lon'], tem['longitude'][:,0]), 'lat': (['lat'], tem['latitude'][:,0])}) #scientific colormaps import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LinearSegmentedColormap cm_data = np.loadtxt("C:/Users/gentemann/Google Drive/d_drive/ScientificColourMaps6/vik/vik.txt") vik_map = LinearSegmentedColormap.from_list("vik", cm_data) cm_data = np.loadtxt("C:/Users/gentemann/Google Drive/d_drive/ScientificColourMaps6/roma/roma.txt") roma_map = LinearSegmentedColormap.from_list("roma", cm_data) roma_map2 = LinearSegmentedColormap.from_list("roma", cm_data[-1::-1]) tem=xr.concat([ds2.sel(lon=slice(20,360)),ds2.sel(lon=slice(0,20))],dim='lon') fig = plt.figure(figsize=(12, 4)) ax = plt.axes(projection=ccrs.Mollweide(central_longitude=-160)) ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-1,vmax=1,cmap=vik_map,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6) cax.set_label('Correlation Coefficient') axt = plt.axes((.3, .8, .01, .01)) axt.axis('off') axt.text(0,1.2,'a)',fontsize=16) fig.savefig(adir+'no_filter_wh.png') tem=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon') fig = plt.figure(figsize=(12, 4)) ax = plt.axes(projection=ccrs.Mollweide(central_longitude=-160)) ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-1,vmax=1,cmap=vik_map,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6) cax.set_label('Correlation Coefficient High Pass') axt = plt.axes((.3, .8, .01, .01)) axt.axis('off') axt.text(0,1.2,'b)',fontsize=16) fig.savefig(adir+'high_pass_wh.png') tem1=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon') tem=xr.concat([ds_err.sel(lon=slice(20,360)),ds_err.sel(lon=slice(0,20))],dim='lon') fig = plt.figure(figsize=(12, 4)) ax = plt.axes(projection=ccrs.Mollweide(central_longitude=-160)) ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem1.lon,tem1.lat,tem1.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.err,vmin=0,vmax=30,cmap=roma_map2,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6) cax.set_label('Standard deviation (W m$^{-2}$)') axt = plt.axes((.3, .8, .01, .01)) axt.axis('off') axt.text(0,1.2,'b)',fontsize=16) fig.savefig(adir+'err.png') vv=.75 tem=xr.concat([ds2.sel(lon=slice(20,360)),ds2.sel(lon=slice(0,20))],dim='lon') fig = plt.figure(figsize=(15, 8)) ax = plt.subplot(211,projection=ccrs.Mollweide(central_longitude=-160)) ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) 
ax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Correlation Coefficient') axt = plt.axes((.4, .8, .01, .01)) axt.axis('off') axt.text(0,1.2,'a)',fontsize=16) tem=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon') ax = plt.subplot(212,projection=ccrs.Mollweide(central_longitude=-160)) ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Correlation Coefficient High Pass') axt = plt.axes((.4, .4, .01, .01)) axt.axis('off') axt.text(0,1.2,'b)',fontsize=16) fig.savefig(adir+'both.png') vv=.75 tem=xr.concat([ds2.sel(lon=slice(20,360)),ds2.sel(lon=slice(0,20))],dim='lon') fig = plt.figure(figsize=(15, 12)) ax = plt.subplot(311,projection=ccrs.Mollweide(central_longitude=-160)) ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Correlation Coefficient') axt = plt.axes((.4, .8, .01, .01)) axt.axis('off') axt.text(0,1.2,'a)',fontsize=16) tem=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon') ax = plt.subplot(312,projection=ccrs.Mollweide(central_longitude=-160)) ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Correlation Coefficient \n High Pass') axt = plt.axes((.4, .53, .01, .01)) axt.axis('off') axt.text(0,1.2,'b)',fontsize=16) tem1=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon') tem=xr.concat([ds_err.sel(lon=slice(20,360)),ds_err.sel(lon=slice(0,20))],dim='lon') ax = plt.subplot(313,projection=ccrs.Mollweide(central_longitude=-160)) ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem1.lon,tem1.lat,tem1.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.err,vmin=0,vmax=30,cmap=roma_map2,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Standard deviation (W m$^{-2}$)') axt = plt.axes((.4, .26, .01, .01)) axt.axis('off') axt.text(0,1.2,'c)',fontsize=16) fig.savefig(adir+'ALL.png') vv=.75 tem=xr.concat([ds2.sel(lon=slice(20,360)),ds2.sel(lon=slice(0,20))],dim='lon') fig = plt.figure(figsize=(15, 12)) ax = plt.subplot(311,projection=ccrs.Mollweide(central_longitude=-160)) #ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Correlation Coefficient') axt = plt.axes((.4, .8, .01, .01)) axt.axis('off') 
axt.text(0,1.2,'a)',fontsize=16) tem=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon') ax = plt.subplot(312,projection=ccrs.Mollweide(central_longitude=-160)) #ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Correlation Coefficient \n High Pass') axt = plt.axes((.4, .53, .01, .01)) axt.axis('off') axt.text(0,1.2,'b)',fontsize=16) tem1=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon') tem=xr.concat([ds_err.sel(lon=slice(20,360)),ds_err.sel(lon=slice(0,20))],dim='lon') ax = plt.subplot(313,projection=ccrs.Mollweide(central_longitude=-160)) #ax.stock_img() ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem1.lon,tem1.lat,tem1.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.err,vmin=0,vmax=30,cmap=roma_map2,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Standard deviation (W m$^{-2}$)') axt = plt.axes((.4, .26, .01, .01)) axt.axis('off') axt.text(0,1.2,'c)',fontsize=16) fig.savefig(adir+'ALL_whiteland.png') import cartopy.feature as cfeature vv=.75 tem=xr.concat([ds2.sel(lon=slice(20,360)),ds2.sel(lon=slice(0,20))],dim='lon') fig = plt.figure(figsize=(15, 12)) ax = plt.subplot(311,projection=ccrs.Mollweide(central_longitude=-160)) #ax.stock_img() ax.add_feature(cfeature.LAND,facecolor='grey') ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Correlation Coefficient') axt = plt.axes((.4, .8, .01, .01)) axt.axis('off') axt.text(0,1.2,'a)',fontsize=16) tem=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon') ax = plt.subplot(312,projection=ccrs.Mollweide(central_longitude=-160)) #ax.stock_img() ax.add_feature(cfeature.LAND,facecolor='grey') ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Correlation Coefficient \n High Pass') axt = plt.axes((.4, .53, .01, .01)) axt.axis('off') axt.text(0,1.2,'b)',fontsize=16) tem1=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon') tem=xr.concat([ds_err.sel(lon=slice(20,360)),ds_err.sel(lon=slice(0,20))],dim='lon') ax = plt.subplot(313,projection=ccrs.Mollweide(central_longitude=-160)) #ax.stock_img() ax.add_feature(cfeature.LAND,facecolor='grey') ax.coastlines(resolution='50m', color='black', linewidth=1) ax0=ax.pcolormesh(tem1.lon,tem1.lat,tem1.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree()) ax1=ax.pcolormesh(tem.lon,tem.lat,tem.err,vmin=0,vmax=30,cmap=roma_map2,transform=ccrs.PlateCarree()) cax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01) cax.set_label('Standard deviation (W m$^{-2}$)') axt = plt.axes((.4, .26, .01, .01)) axt.axis('off') axt.text(0,1.2,'c)',fontsize=16) 
fig.savefig(adir+'ALL_greyland.png') ```
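The maps above repeatedly build `tem = xr.concat([ds.sel(lon=slice(20,360)), ds.sel(lon=slice(0,20))], dim='lon')` before plotting, presumably so the longitude seam falls at 20°E rather than at 0°/360° in the Pacific-centred Mollweide view. The short sketch below uses a purely synthetic dataset (the names `demo` and `reordered` are illustrative, not from the notebook) to isolate that re-ordering step.

```
# Minimal sketch (synthetic data) of the longitude re-ordering used repeatedly above:
# split the grid at 20E and stitch the two pieces back together along "lon".
import numpy as np
import xarray as xr

lon = np.arange(0.5, 360.5, 1.0)
lat = np.arange(-89.5, 90.5, 1.0)
demo = xr.Dataset(
    {"corrH": (["lat", "lon"], np.random.rand(lat.size, lon.size))},
    coords={"lat": lat, "lon": lon},
)

reordered = xr.concat([demo.sel(lon=slice(20, 360)), demo.sel(lon=slice(0, 20))], dim="lon")
print(reordered.lon.values[:3], "...", reordered.lon.values[-3:])  # now starts near 20E and wraps around to ~20E
```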
github_jupyter
# Copy Task Plots ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd from glob import glob import json import os import sys sys.path.append(os.path.abspath(os.getcwd() + "./../")) %matplotlib inline ``` ## Load training history To generate the models and training history used in this notebook, run the following commands: ``` mkdir ./notebooks/copy ./train.py --seed 1 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy ./train.py --seed 10 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy ./train.py --seed 100 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy ./train.py --seed 1000 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy ``` ``` batch_num = 40000 files = glob("./copy/*-{}.json".format(batch_num)) files # Read the metrics from the .json files history = [json.loads(open(fname, "rt").read()) for fname in files] training = np.array([(x['cost'], x['loss'], x['seq_lengths']) for x in history]) print("Training history (seed x metric x sequence) =", training.shape) # Average every dv values across each (seed, metric) dv = 1000 training = training.reshape(len(files), 3, -1, dv).mean(axis=3) print(training.shape) # Average the seeds training_mean = training.mean(axis=0) training_std = training.std(axis=0) print(training_mean.shape) print(training_std.shape) fig = plt.figure(figsize=(12, 5)) # X axis is normalized to thousands x = np.arange(dv / 1000, (batch_num / 1000) + (dv / 1000), dv / 1000) # Plot the cost # plt.plot(x, training_mean[0], 'o-', linewidth=2, label='Cost') plt.errorbar(x, training_mean[0], yerr=training_std[0], fmt='o-', elinewidth=2, linewidth=2, label='Cost') plt.grid() plt.yticks(np.arange(0, training_mean[0][0]+5, 5)) plt.ylabel('Cost per sequence (bits)') plt.xlabel('Sequence (thousands)') plt.title('Training Convergence', fontsize=16) ax = plt.axes([.57, .55, .25, .25], facecolor=(0.97, 0.97, 0.97)) plt.title("BCELoss") plt.plot(x, training_mean[1], 'r-', label='BCE Loss') plt.yticks(np.arange(0, training_mean[1][0]+0.2, 0.2)) plt.grid() plt.show() loss = history[3]['loss'] cost = history[3]['cost'] seq_lengths = history[3]['seq_lengths'] unique_sls = set(seq_lengths) all_metric = list(zip(range(1, batch_num+1), seq_lengths, loss, cost)) fig = plt.figure(figsize=(12, 5)) plt.ylabel('Cost per sequence (bits)') plt.xlabel('Iteration (thousands)') plt.title('Training Convergence (Per Sequence Length)', fontsize=16) for sl in unique_sls: sl_metrics = [i for i in all_metric if i[1] == sl] x = [i[0] for i in sl_metrics] y = [i[3] for i in sl_metrics] num_pts = len(x) // 50 total_pts = num_pts * 50 x_mean = [i.mean()/1000 for i in np.split(np.array(x)[:total_pts], num_pts)] y_mean = [i.mean() for i in np.split(np.array(y)[:total_pts], num_pts)] plt.plot(x_mean, y_mean, label='Seq-{}'.format(sl)) plt.yticks(np.arange(0, 80, 5)) plt.legend(loc=0) plt.show() ``` # Evaluate ``` import torch from IPython.display import Image as IPythonImage from PIL import Image, ImageDraw, ImageFont import io from tasks.copytask import dataloader from train import evaluate from tasks.copytask import CopyTaskModelTraining model = CopyTaskModelTraining() model.net.load_state_dict(torch.load("./copy/copy-task-10-batch-40000.model")) seq_len = 60 _, x, y = next(iter(dataloader(1, 1, 8, seq_len, seq_len))) result = evaluate(model.net, model.criterion, x, y) y_out = result['y_out'] def cmap(value): pixval = value * 255 low = 64 high = 240 factor = (255 - low - (255-high)) / 255 return 
int(low + pixval * factor) def draw_sequence(y, u=12): seq_len = y.size(0) seq_width = y.size(2) inset = u // 8 pad = u // 2 width = seq_len * u + 2 * pad height = seq_width * u + 2 * pad im = Image.new('L', (width, height)) draw = ImageDraw.ImageDraw(im) draw.rectangle([0, 0, width, height], fill=250) for i in range(seq_len): for j in range(seq_width): val = 1 - y[i, 0, j].data[0] draw.rectangle([pad + i*u + inset, pad + j*u + inset, pad + (i+1)*u - inset, pad + (j+1)*u - inset], fill=cmap(val)) return im def im_to_png_bytes(im): png = io.BytesIO() im.save(png, 'PNG') return bytes(png.getbuffer()) def im_vconcat(im1, im2, pad=8): assert im1.size == im2.size w, h = im1.size width = w height = h * 2 + pad im = Image.new('L', (width, height), color=255) im.paste(im1, (0, 0)) im.paste(im2, (0, h+pad)) return im def make_eval_plot(y, y_out, u=12): im_y = draw_sequence(y, u) im_y_out = draw_sequence(y_out, u) im = im_vconcat(im_y, im_y_out, u//2) w, h = im.size pad_w = u * 7 im2 = Image.new('L', (w+pad_w, h), color=255) im2.paste(im, (pad_w, 0)) # Add text font = ImageFont.truetype("./fonts/PT_Sans-Web-Regular.ttf", 13) draw = ImageDraw.ImageDraw(im2) draw.text((u,4*u), "Targets", font=font) draw.text((u,13*u), "Outputs", font=font) return im2 im = make_eval_plot(y, y_out, u=8) IPythonImage(im_to_png_bytes(im)) ``` ## Create an animated GIF Lets see how the prediction looks like in each checkpoint that we saved. ``` seq_len = 80 _, x, y = next(iter(dataloader(1, 1, 8, seq_len, seq_len))) frames = [] font = ImageFont.truetype("./fonts/PT_Sans-Web-Regular.ttf", 13) for batch_num in range(500, 10500, 500): model = CopyTaskModelTraining() model.net.load_state_dict(torch.load("./copy/copy-task-10-batch-{}.model".format(batch_num))) result = evaluate(model.net, model.criterion, x, y) y_out = result['y_out'] frame = make_eval_plot(y, y_out, u=10) w, h = frame.size frame_seq = Image.new('L', (w, h+40), color=255) frame_seq.paste(frame, (0, 40)) draw = ImageDraw.ImageDraw(frame_seq) draw.text((10, 10), "Sequence Num: {} (Cost: {})".format(batch_num, result['cost']), font=font) frames += [frame_seq] im = frames[0] im.save("./copy-train-80.gif", save_all=True, append_images=frames[1:], loop=0, duration=1000) im = frames[0] im.save("./copy-train-80-fast.gif", save_all=True, append_images=frames[1:], loop=0, duration=100) ```
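The training-convergence cell above smooths the per-batch history with `training.reshape(len(files), 3, -1, dv).mean(axis=3)`. The minimal sketch below (synthetic numbers, not the actual training history) isolates that block-averaging trick in one dimension.

```
# Block-averaging sketch: reshape a long 1-D series into chunks of length dv and average
# each chunk, producing one smoothed point per dv batches.
import numpy as np

dv = 1000
series = np.random.rand(40000)             # e.g. a per-batch loss curve
smoothed = series.reshape(-1, dv).mean(axis=1)
print(series.shape, "->", smoothed.shape)  # (40000,) -> (40,)
```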
github_jupyter
# Imports & Installations ``` !pip install pyforest !pip install plotnine !pip install transformers !pip install psycopg2-binary !pip uninstall -y tensorflow-datasets !pip install lit_nlp tfds-nightly transformers==4.1.1 # Automatic library importer (doesn't quite import everything yet) from pyforest import * # Expands Dataframe to view entire pandas dataframe pd.options.display.max_colwidth = 750 # For tracking the duration of executed code cells from time import time # To connect to Blue Witness Labeler's DB import psycopg2 # For visualizations from plotnine import * from plotnine.data import mpg import plotly.graph_objects as go from plotly.subplots import make_subplots # For BERT model import torch from torch.utils.data import TensorDataset, DataLoader, RandomSampler from transformers import BertTokenizer, BertForSequenceClassification, AdamW from transformers import get_linear_schedule_with_warmup from tensorflow.keras.preprocessing.sequence import pad_sequences ``` # Reading in our Tweets ``` def get_df(db_url) -> pd.DataFrame: ''' Connects to our Blue Witness Data Labeler and retrieves manually labelled text before converting them all into a pandas dataframe. Parameters ---------- db_url: psycopg2 database Returns ------- df: pandas datafarme Contains thousands of text with appropriate police (non-)violence labels ''' conn = psycopg2.connect(db_url) curs = conn.cursor() curs.execute("SELECT * FROM training;") cols = [k[0] for k in curs.description] rows = curs.fetchall() df = pd.DataFrame(rows, columns=cols) curs.close() conn.close() return df # ALWAYS REMEMBER TO REMOVE THE PostgreSQL URL ASSIGNED TO THIS VARIABLE WHEN COMITTING TO OUR REPO db_url = "" data_labeler_df = get_df(db_url) data_labeler_df def rank_wrangle(): ''' Loads in both synthetic tweets generated from GPT-2 and authentic tweets scraped and manually labelled from Twitter. Combines both sets of tweets together into a single dataframe. Drops any null values and duplicates. 
rank2_syn.txt, rank3_syn.txt, and rank4_syn.txt can be found in notebooks/labs37_notebooks/synthetic_tweets Parameters ---------- None Returns ------- df: pandas dataframe Contains fully concatenated dataframe ''' # Supplying our dataframes with proper labels column_headers = ['tweets', 'labels'] # Reading in our three police force rank datasets synthetic_tweets_cop_shot = pd.read_csv("/content/cop_shot_syn.txt", sep = '/', names=column_headers) synthetic_tweets_run_over = pd.read_csv("/content/run_over_syn.txt", sep = '/', names=column_headers) synthetic_tweets_rank2 = pd.read_csv("/content/rank2_syn.txt", sep = '/', names=column_headers) synthetic_tweets_rank3 = pd.read_csv("/content/rank3_syn.txt", sep = '/', names=column_headers) synthetic_tweets_rank4 = pd.read_csv("/content/rank4_syn.txt", sep = '/', names=column_headers) # Concatenating all of our datasets into one compiled = pd.concat([data_labeler_df, synthetic_tweets_cop_shot, synthetic_tweets_run_over, synthetic_tweets_rank2, synthetic_tweets_rank3, synthetic_tweets_rank4]) # Dropping unnecessary column compiled.drop('id', axis=1, inplace=True) # Discarding generated duplicates from GPT-2 while keeping the original Tweets compiled.drop_duplicates(subset='tweets', keep='first', inplace=True) # Dropping any possible NaNs if compiled.isnull().values.any(): compiled.dropna(how='any', inplace=True) return compiled # Applying our function above to view the contents of our dataframe force_ranks = rank_wrangle() force_ranks ``` # Visualizations ``` %matplotlib inline (ggplot(force_ranks) # defining what dataframe to use + aes(x='labels') # defining what variable/column to use + geom_bar(size=20) # defining the type of plot to use and its size + labs(title='Number of Tweets Reporting Police Violence per Force Rank', x='Force Rank', y='Number of Tweets') ) # Creating custom donut chart with Plotly labels = ['0 - No Police Presence', '5 - Lethal Force (Guns & Explosives)', '1 - Non-violent Police Presence', '3 - Blunt Force Trauma (Batons & Shields)', '4 - Chemical & Electric Weapons (Tasers & Pepper Spray)', '2 - Open Handed (Arm Holds & Pushing)'] values = force_ranks.labels.value_counts() bw_colors = ['rgb(138, 138, 144)', 'rgb(34, 53, 101)', 'rgb(37, 212, 247)', 'rgb(59, 88, 181)', 'rgb(56, 75, 126)', 'rgb(99, 133, 242)'] # Using 'pull' on Rank 5 to accentuate the frequency of the most excessive use of force by police # 'hole' determines the size of the donut chart fig = go.Figure(data=[go.Pie(labels=labels, values=values, pull=[0, 0.2, 0, 0, 0, 0], hole=.3, name='Blue Witness', marker_colors=bw_colors)]) # Displaying our donut chart fig.update(layout_title_text='Percentage of Tweets Reporting Police Violence per Force Rank') fig = go.Figure(fig) fig.show() ``` # Preparing Data for BERT Splitting dataframe into training and testing sets before converting to parquet for later reference/resource. ``` def parquet_and_split(): ''' Splits our data into a format amicable to NLP modeling. Saves our original dataframe as well as the two split dataframes into parquet files for later reference/use. 
----- Parameters ------ None Returns ------- df: pandas dataframes Contains two split dataframes ready to be fit to and tested against a model ''' # Splitting dataframe into training and testing sets for modeling # 20% of our data will be reserved for testing training, testing = train_test_split(force_ranks, test_size=0.2) # Sanity Check if force_ranks.shape[0] == training.shape[0] + testing.shape[0]: print("Sanity Check - Succesful!") else: print("Sanity Check - Unsuccessful!") # Converting dataframes to parquet format for later reference # Using parquet as our new dataset storage format as they cannot be edited like CSVs can. They are immutable. # For viewing in vscode, install the parquet-viewer extension: https://marketplace.visualstudio.com/items?itemName=dvirtz.parquet-viewer training.to_parquet('synthetic_training.parquet') testing.to_parquet('synthetic_testing.parquet') force_ranks.to_parquet('synthetic_complete.parquet') return training, testing training, testing = parquet_and_split() ``` # BERT ## Training our NLP Multi-Class Classification Model ``` def bert_trainer(df, output_dir: str, epochs: int): start = time() max_len = 280 if torch.cuda.is_available(): print("CUDA Active") device = torch.device("cuda") else: print("CPU Active") device = torch.device("cpu") sentences = df["tweets"].values labels = df["labels"].values tokenizer = BertTokenizer.from_pretrained( 'bert-base-uncased', do_lower_case=True, ) inputs = [ tokenizer.encode(sent, add_special_tokens=True) for sent in sentences ] inputs_ids = pad_sequences( inputs, maxlen=max_len, dtype="long", value=0, truncating="post", padding="post", ) attention_masks = [ [int(token_id != 0) for token_id in sent] for sent in inputs_ids ] train_inputs = torch.tensor(inputs_ids) train_labels = torch.tensor(labels) train_masks = torch.tensor(attention_masks) batch_size = 32 train_data = TensorDataset(train_inputs, train_masks, train_labels) train_sampler = RandomSampler(train_data) train_dataloader = DataLoader( train_data, sampler=train_sampler, batch_size=batch_size, ) model = BertForSequenceClassification.from_pretrained( 'bert-base-uncased', num_labels=6, output_attentions=False, output_hidden_states=False, ) if torch.cuda.is_available(): model.cuda() optimizer = AdamW(model.parameters(), lr=2e-5, eps=1e-8) total_steps = len(train_dataloader) * epochs scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=0, num_training_steps=total_steps, ) loss_values = [] print('\nTraining...') for epoch_i in range(1, epochs + 1): print(f"\nEpoch: {epoch_i}") total_loss = 0 model.train() for step, batch in enumerate(train_dataloader): b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) model.zero_grad() outputs = model( b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels, ) loss = outputs[0] total_loss += loss.item() loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() scheduler.step() avg_train_loss = total_loss / len(train_dataloader) loss_values.append(avg_train_loss) print(f"Average Loss: {avg_train_loss}") if not os.path.exists(output_dir): os.makedirs(output_dir) print(f"\nSaving model to {output_dir}") model_to_save = model.module if hasattr(model, 'module') else model model_to_save.save_pretrained(output_dir) tokenizer.save_pretrained(output_dir) end = time() total_run_time_in_hours = (((end - start)/60)/60) rounded_total_run_time_in_hours = np.round(total_run_time_in_hours, decimals=2) print(f"Finished training 
in {rounded_total_run_time_in_hours} hours!") !nvidia-smi # If running on Colab, the best GPU to have in use is the NVIDIA Tesla P100 from google.colab import drive drive.mount('/content/drive') # Colab notebook may crash the first time this code cell is run. # Running this cell again after runtime restart shouldn't produce any more issues. bert_trainer(training, 'saved_model', epochs=50) ``` ## Making Predictions ``` class FrankenBert: """ Implements BertForSequenceClassification and BertTokenizer for binary classification from a saved model """ def __init__(self, path: str): """ If there's a GPU available, tell PyTorch to use the GPU. Loads model and tokenizer from saved model directory (path) """ if torch.cuda.is_available(): self.device = torch.device('cuda') else: self.device = torch.device('cpu') self.model = BertForSequenceClassification.from_pretrained(path) self.tokenizer = BertTokenizer.from_pretrained(path) self.model.to(self.device) def predict(self, text: str): """ Makes a binary classification prediction based on saved model """ inputs = self.tokenizer( text, padding=True, truncation=True, max_length=280, return_tensors='pt', ).to(self.device) output = self.model(**inputs) prediction = output[0].softmax(1) tensors = prediction.detach().cpu().numpy() result = np.argmax(tensors) confidence = tensors[0][result] return f"Rank: {result}, {100 * confidence:.2f}%" model = FrankenBert('saved_model') model.predict("Mickey Mouse is in the house") model.predict("Cops gave me a speeding ticket for walking too fast") model.predict("Officer Kelly was shot and killed") model.predict("A Texas Department of Public Safety (DPS) trooper ran over and killed a man who was in the road near the State Capitol early Thursday morning, according to the Austin Police Department (APD). The crash happened at around 3:45 a.m. Thursday just west of the Texas State Capitol building. The trooper was heading northbound on Colorado Street and as he was turning left on 13th Street, the trooper hit the pedestrian. DPS said the crash happened while the trooper was patrolling the area.") model.predict("Cop ran me over with his SUV") model.predict("Cops hit her with a baton") model.predict("Cops sprayed my mom with pepper spray") model.predict("Cops shot rubber bullets at the crowd") model.predict("Police used tear gas on a pedestrian for no reason") model.predict("Cops killed that woman") model.predict("Yesterday I saw a policeman hit a poor person behind my house. I wonder whats going on") model.predict("Man ran up to me and pepper sprayed me. I've called the cops, but they have not gotten themselves involved yet.") ``` ## Saving Trained Model ``` from google.colab import drive drive.mount('/content/gdrive') #path that contains folder you want to copy %cd /content/gdrive/MyDrive/ColabNotebooks/Labs/saved_model # copy local folder to folder on Google Drive %cp -av /content/saved_model saved_model ```
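`FrankenBert.predict` above scores one string at a time. A small hypothetical helper (not in the original notebook) can map it over a list of tweets, for example to spot-check the held-out `testing` split; it assumes `model = FrankenBert('saved_model')` and the `testing` dataframe defined earlier are in scope.

```
# Hypothetical convenience helper: run FrankenBert.predict over a list of tweets and
# collect the formatted "Rank: X, YY.YY%" strings in a dataframe for quick inspection.
import pandas as pd

def predict_many(model, tweets):
    return pd.DataFrame({"tweets": tweets,
                         "prediction": [model.predict(t) for t in tweets]})

sample_preds = predict_many(model, testing["tweets"].head(10).tolist())
print(sample_preds)
```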
github_jupyter
``` from programming_for_biology.data_analysis.cell_polygons import read_disc, Coordinates import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Polygon from matplotlib.collections import PatchCollection import scipy from scipy import stats #function to draw the wing disc def draw_disc(cpx, cpy, area, size): #input arguments: ## cpx, cpy: x,y/positions of the vertices of all cells # format: list (1 element per cell) of sublists (1 number per vertex, eg 3 numbers for a triangle). ## area: cell area # format: 1-dimentsional numpy array (1 number per cell) ## size: 'large' for the large disc and 'small' for the small disc polygs = [] for i in range(len(cpx)): polyg = [] for j in range(len(cpx[i])): polyg.append([cpx[i][j], cpy[i][j]]) polygs.append(Polygon(polyg)) patches = PatchCollection(polygs) patches.set_cmap('jet') colors = 1 * area colors[colors>14] = 14 # color value for all the mitotic cells (area>14) is set to 14 patches.set_array(np.array(colors)) #for colors fig = plt.figure() panel = fig.add_subplot(1,1,1) panel.add_collection(patches) color_bar = fig.colorbar(patches) color_bar.set_label('Cell area (um2)', rotation = 270, labelpad = 15) panel.set_xlim(-120, 110) panel.set_ylim(-85, 85) panel.set_aspect('equal') plt.title(size+' wing disc') plt.show() disc = read_disc("wd-large") areas = np.array([polygon.area() for polygon in disc.polygons]) distances = np.array([ polygon.centroid().distance_to(Coordinates.center()) for polygon in disc.polygons ]) half_distance = np.max(distances) / 2 p_fit = np.polyfit(distances, areas, 1) print("slope:", p_fit[0]) print("intercept:", p_fit[1]) t_statistic, p_value = scipy.stats.ttest_ind(areas[distances <= half_distance], areas[distances > half_distance]) print("t-statistic:", t_statistic) print("p-value:", p_value) plt.title("Area vs. distance to center") plt.plot(distances, areas, ".") plt.xlabel("Distance to center [$\mu$m]") plt.ylabel("Area [$\mu$m$^2$]") plt.plot(np.linspace(0, max(distances), 100), np.polyval(p_fit, np.linspace(0, max(distances), 100))) plt.grid() plt.show() cpx = [[c.x for c in p.coordinates()] for p in disc.polygons] cpy = [[c.y for c in p.coordinates()] for p in disc.polygons] area = np.array([p.area() for p in disc.polygons]) draw_disc(cpx, cpy, area, 'large') disc = read_disc("wd-small") areas = np.array([polygon.area() for polygon in disc.polygons]) distances = np.array([ polygon.centroid().distance_to(Coordinates.center()) for polygon in disc.polygons ]) half_distance = np.max(distances) / 2 p_fit = np.polyfit(distances, areas, 1) print("slope:", p_fit[0]) print("intercept:", p_fit[1]) t_statistic, p_value = scipy.stats.ttest_ind(areas[distances <= half_distance], areas[distances > half_distance]) print("t-statistic:", t_statistic) print("p-value:", p_value) plt.title("Area vs. distance to center") plt.plot(distances, areas, ".") plt.xlabel("Distance to center [$\mu$m]") plt.ylabel("Area [$\mu$m$^2$]") plt.plot(np.linspace(0, max(distances), 100), np.polyval(p_fit, np.linspace(0, max(distances), 100))) plt.grid() plt.show() cpx = [[c.x for c in p.coordinates()] for p in disc.polygons] cpy = [[c.y for c in p.coordinates()] for p in disc.polygons] area = np.array([p.area() for p in disc.polygons]) draw_disc(cpx, cpy, area, 'small') ```
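As an optional complement to the linear fit and the two-sample t-test above (not part of the original analysis), the strength of the area-distance relationship can also be summarised with a Pearson correlation. The sketch assumes the `areas` and `distances` arrays from the cell above, i.e. for the last disc loaded with `read_disc`.

```
# Optional complementary check: Pearson correlation between cell area and distance to the
# disc centre, using the `areas` and `distances` arrays computed above.
from scipy import stats

r, p = stats.pearsonr(distances, areas)
print(f"Pearson r = {r:.3f}, p-value = {p:.3g}")
```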
github_jupyter
# HW8: Topic Modeling ``` ## ======================================================= ## IMPORTING ## ======================================================= import os def get_data_from_files(path): directory = os.listdir(path) results = [] for file in directory: f=open(path+file, encoding = "ISO-8859-1") results.append(f.read()) f.close() return results ## ======================================================= ## MODELING ## ======================================================= import pandas as pd from sklearn.decomposition import LatentDirichletAllocation from sklearn.feature_extraction.text import CountVectorizer import gensim from gensim.utils import simple_preprocess from gensim.parsing.preprocessing import STOPWORDS def run_lda(data, num_topics, stop_words): cv = CountVectorizer(stop_words = stop_words) lda_vec = cv.fit_transform(data) lda_columns = cv.get_feature_names() corpus = pd.DataFrame(lda_vec.toarray(), columns = lda_columns) lda = LatentDirichletAllocation(n_components=num_topics, max_iter=10, learning_method='online') lda_model = lda.fit_transform(lda_vec) print_topics(lda, cv) return lda_model, lda, lda_vec, cv, corpus ## ======================================================= ## HELPERS ## ======================================================= import numpy as np np.random.seed(210) def print_topics(model, vectorizer, top_n=10): for idx, topic in enumerate(model.components_): print("Topic %d:" % (idx)) print([(vectorizer.get_feature_names()[i], topic[i]) for i in topic.argsort()[:-top_n - 1:-1]]) ## ======================================================= ## VISUALIZING ## ======================================================= import pyLDAvis.sklearn as LDAvis import pyLDAvis def start_vis(lda, lda_vec, cv): panel = LDAvis.prepare(lda, lda_vec, cv, mds='tsne') # pyLDAvis.show(panel) pyLDAvis.save_html(panel, 'FinalProject_lda_2.html') df = pd.read_csv('../death_row_discritized.csv') def to_string(tokens): try: return " ".join(eval(tokens)) except: return "error" df['statement_string'] = df.apply(lambda x: to_string(x['last_statement']), axis=1) # y=df['vic_kid'].values y=df['prior_record'].values y_labels = list(set(y)) X=df['statement_string'].values all_df = pd.DataFrame(X) all_df['labels'] = y all_df # data = get_data_from_files('Dog_Hike/') # lda_model, lda, lda_vec, cv = run_lda(data,) from sklearn.feature_extraction import text stop_words = text.ENGLISH_STOP_WORDS # data_fd = get_data_from_files('110/110-f-d/') # data_fr = get_data_from_files('110/110-f-r/') # data = data_fd + data_fr # data lda_model, lda, lda_vec, cv, corpus = run_lda(all_df[0].values, 4, stop_words) start_vis(lda, lda_vec, cv) # corpus # c2 = corpus.append(df.sum().rename('Total')) ct = corpus.T ct['total'] = ct.sum(axis=1) big_total = ct[ct['total'] > 68] len(big_total) len(ct) btt = big_total.T additional_stopwords = btt.columns stop_words = text.ENGLISH_STOP_WORDS.union(additional_stopwords) stop_words lda_model, lda, lda_vec, cv, corpus = run_lda(data, 40, stop_words) start_vis(lda, lda_vec, cv) import plotly.plotly as py from plotly.grid_objs import Grid, Column from plotly.tools import FigureFactory as FF import pandas as pd import time ```
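`run_lda` returns `lda_model`, the document-topic matrix produced by `LatentDirichletAllocation.fit_transform`, so the dominant topic of each statement is simply the argmax over its row. The hedged sketch below assumes the earlier `run_lda(all_df[0].values, 4, stop_words)` call and attaches that label to `all_df` (the `dominant_topic` column name is ours).

```
# Assign each statement its dominant LDA topic from the document-topic matrix returned by
# run_lda (rows sum to 1 across topics), then look at how the documents spread over topics.
import numpy as np

dominant_topic = np.argmax(lda_model, axis=1)
all_df["dominant_topic"] = dominant_topic
print(all_df["dominant_topic"].value_counts())
```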
github_jupyter
### 1. Gradient Descent Tips

*Recall*: the update rule for $\theta$ at iteration $t$:

<center>$\theta_{t+1} := \theta_t - \alpha \nabla_{\theta} f(\theta_t)$</center>

where:
- $\alpha$: the learning rate.
- $\nabla_{\theta} f(\theta_t)$: the gradient (derivative) of the function at the point $\theta_t$.

Choosing the value of $\alpha$ (the learning rate) is very important: it determines whether or not the optimization can converge to the global minimum of $f(\theta)$. Gradient Descent works much more effectively when a suitable learning rate is chosen. Some learning rate choices are illustrated below (you can experiment with them [here](https://developers.google.com/machine-learning/crash-course/fitter/graph)):

**Learning rate too large** - Gradient Descent cannot converge to the minimum.

<img src="images/image-3.gif" style="width:50%;height:50%;">

**Learning rate too small:**
- Gradient Descent can still converge to the minimum in this problem, but it takes 81 iterations to do so. In problems with many local minima, a learning rate that is too small can leave the optimization stuck at a local minimum, never converging to the optimal value.

<img src="images/image-5.png" style="width:50%;height:50%;">

**Moderate learning rate:** if a learning rate that is too small makes the problem converge slowly, try increasing it. In this problem, a learning rate of 1.0 converges in 6 iterations.

<img src="images/image-4.png" style="width:50%;height:50%;">

**Optimal learning rate:** in practice it is very hard to find the truly optimal learning rate. Finding a learning rate close to the optimal one helps the problem converge faster.

<img src="images/image-6.png" style="width:50%;height:50%;">

**Summary:**
- If the learning rate is too small: convergence takes too long, and the optimization can get stuck at a local minimum.
- If the learning rate is too large: the optimization cannot converge.

### A few tips

- Before starting, normalize the data to the range [-1;1] or [0;1]; this helps the problem converge faster.
- Start with a small learning rate and gradually increase it if it does not seem suitable.
- For problems with a lot of data, use Mini-batch Gradient Descent (this method will be covered in an upcoming lesson).
- Use Momentum with Gradient Descent (this method will be covered in an upcoming lesson).

### 2. Normal Equation

The Normal Equation is a method for solving the Linear Regression problem without any iterations and without choosing a learning rate. It also does not require scaling the data. The math behind this solution can be found at: https://eli.thegreenplace.net/2014/derivation-of-the-normal-equation-for-linear-regression

And our most important formula:

<center> $\theta = (X^T X)^{-1} X^T y$ </center>

Comparison between Gradient Descent and the Normal Equation:

<table>
 <tr>
 <td> Gradient Descent </td>
 <td> Normal Equation </td>
 </tr>
 <tr>
 <td> Needs to choose a learning rate </td>
 <td> No learning rate needed </td>
 </tr>
 <tr>
 <td> Needs many iterations </td>
 <td> No iterations needed </td>
 </tr>
 <tr>
 <td> Computation time: $O(kn^2)$ </td>
 <td> Computation time: $O(n^3)$, requires computing a matrix inverse </td>
 </tr>
 <tr>
 <td> Works well with large datasets </td>
 <td> Very slow with large datasets </td>
 </tr>
</table>

With the Normal Equation the computation takes $O(n^3)$ time, so for large datasets (n > 10,000 samples) Gradient Descent should be used instead (a short NumPy sketch of both approaches follows the references below).

### References

[1] [CS229 - Machine Learning Course](http://cs229.stanford.edu)
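Below is a minimal NumPy sketch of the two approaches compared above, on synthetic data (the data, learning rate and iteration count are illustrative only); with these settings both recover roughly the same $\theta$.

```
# A minimal NumPy sketch (synthetic data, illustrative hyper-parameters) comparing the
# Normal Equation theta = (X^T X)^{-1} X^T y with a plain batch Gradient Descent loop.
import numpy as np

np.random.seed(0)
m = 200
x = np.random.rand(m, 1)
y = 4 + 3 * x + 0.1 * np.random.randn(m, 1)   # true intercept 4, true slope 3
X = np.hstack([np.ones((m, 1)), x])           # add a bias column

# Normal Equation: closed form, no learning rate, no iterations
theta_ne = np.linalg.inv(X.T @ X) @ X.T @ y
print("Normal Equation: ", theta_ne.ravel())

# Batch Gradient Descent on the mean squared error, learning rate alpha
alpha, n_iters = 0.5, 2000
theta_gd = np.zeros((2, 1))
for _ in range(n_iters):
    grad = (2 / m) * X.T @ (X @ theta_gd - y)
    theta_gd -= alpha * grad
print("Gradient Descent:", theta_gd.ravel())
```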
github_jupyter
```
import math
import numpy as np
import matplotlib.pyplot as plt

F = np.array([[-1,0],[0,1]])

ACB = [150, -60, 60, -90]
ABC = [150, 60, -60, 30]
BAC = [-90, -60, 60, 30]
BCA = [-90, 60, -60, 150]
CBA = [30, -60, 60, 150]
CAB = [30, 60, -60, -90]

del_s=np.exp(1.31j)
del_p=np.exp(2.05j)
P=np.array([[del_s, 0],[0, del_p]])

def Rotation(ang):
    return np.array([[np.cos(ang),np.sin(ang)], [-np.sin(ang),np.cos(ang)]])

def total_matrix(direction):
    r0 = np.dot(P, Rotation(direction[0]))
    r1 = np.dot(Rotation(direction[1]),r0)
    r1 = np.dot(P,r1)
    r2 = np.dot(Rotation(direction[2]),r1)
    r2 = np.dot(P,r2)
    r3 = np.dot(Rotation(direction[3]),r2)
    return np.dot(F,r3)

def OutSignal(side,ang_in,ang_pol):
    Pin = np.array([[np.cos(math.radians(ang_in))],[np.sin(math.radians(ang_in))]])
    Pout = abs(np.dot(total_matrix(side),Pin))
    Ex = Pout[[0],[0]] * np.cos(math.radians(ang_pol))
    Ey = Pout[[1],[0]] * np.sin(math.radians(ang_pol))
    return np.sqrt(Ex**2 + Ey**2)

def graph(inPol,polcam):
    pixel = np.zeros((400,400))
    #inPol = 90
    #polcam = 90
    for x in range(400):
        for y in range(400):
            if y < 200 and x < (y-200)/np.sqrt(3) + 200: #CAB
                color = 256 * OutSignal(CAB,inPol,polcam)
                pixel[[x],[y]]=color
            elif x > (y-200)/np.sqrt(3) + 200 and x < -(y-200)/np.sqrt(3) + 200: #CBA
                color = 256 * OutSignal(CBA,inPol,polcam)
                pixel[[x],[y]]=color
            elif x > -(y-200)/np.sqrt(3) + 200 and y < 200 : #BCA
                color = 256 * OutSignal(BCA,inPol,polcam)
                pixel[[x],[y]]=color
            elif y > 200 and x > (y-200)/np.sqrt(3) + 200 : #BAC
                color = 256 * OutSignal(BAC,inPol,polcam)
                pixel[[x],[y]]=color
            elif x < (y-200)/np.sqrt(3) + 200 and x > -(y-200)/np.sqrt(3) + 200: #ABC
                color = 256 * OutSignal(ABC,inPol,polcam)
                pixel[[x],[y]]=color
            elif y > 200 and x < -(y-200)/np.sqrt(3) + 200: #CAB
                color = 256 * OutSignal(CAB,inPol,polcam)
                pixel[[x],[y]]=color
    return pixel

InPol = 90

plt.subplot(221)
plt.imshow(graph(InPol,0))
plt.show()

plt.subplot(222)
plt.imshow(graph(InPol,45))
plt.show()

plt.subplot(223)
plt.imshow(graph(InPol,90))
plt.show()

plt.subplot(224)
plt.imshow(graph(InPol,135))
plt.colorbar()
plt.show()

x= np.arange(0,10,0.1)
y= np.sin(x)
fig = plt.figure(figsize=(8,8))
ax = [plt.subplot(2,2,i+1) for i in range(4)]

# Loop over the four Axes objects, plot the curve and format each panel
for i, a in enumerate(ax):
    a.plot(y)
    a.set_title("%d" % (i+1))
    a.set_xticklabels([])
    a.set_yticklabels([])
    a.set_aspect('equal')

plt.subplots_adjust(wspace=0, hspace=0)

'''
plt.subplot(221)
plt.plot(y)
plt.title("1")
plt.set_aspect('equal')

plt.subplot(222)
plt.plot(y)
plt.title("2")
plt.set_aspect('equal')

plt.subplot(223)
plt.plot(y)
plt.title("3")
plt.set_aspect('equal')

plt.subplot(224)
plt.plot(y)
plt.title("4")
plt.set_aspect('equal')
'''

plt.savefig("fig2.png")
```
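An optional sanity check on the matrices defined above (not part of the original notebook): `Rotation(ang)` should be orthogonal and the retardance matrix `P` unitary, so the products built in `total_matrix` preserve the norm of the polarisation vector.

```
# Sanity check, assuming Rotation and P from the cell above are in scope:
# a rotation matrix satisfies R @ R.T = I, and P (pure phase retardance) is unitary.
import numpy as np

R = Rotation(np.deg2rad(30))
print(np.allclose(R @ R.T, np.eye(2)))          # expected True: R is orthogonal
print(np.allclose(P @ P.conj().T, np.eye(2)))   # expected True: P is unitary
```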
github_jupyter
``` import warnings import time from data import * #from data_transforms import * from apex import amp from torch import nn from torch.utils.data import Dataset, DataLoader, Subset from sklearn.preprocessing import LabelEncoder from sklearn.metrics import accuracy_score, roc_auc_score from sklearn.model_selection import StratifiedKFold, GroupKFold, KFold from efficientnet_pytorch import EfficientNet from catalyst.data.sampler import BalanceClassSampler #CV2 import cv2 #Importing Tabnet from pytorch_tabnet.tab_network import TabNet import datetime from fastprogress import master_bar, progress_bar %load_ext autoreload %autoreload 2 warnings.simplefilter('ignore') batch_size=32 device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Defining Categorical variables and their Indexes, embedding dimensions , number of classes each have df = pd.read_csv('/data/full/folds_13062020.csv') df_test =pd.read_csv('/data/full/test.csv') df_test['anatom_site_general_challenge'].fillna('unknown',inplace=True) df_test['target'] = 0 features = ['sex', 'age_approx', 'anatom_site_general_challenge'] cat = ['sex', 'anatom_site_general_challenge'] target = 'target' categorical_columns = [] for col in cat: print('train', col, df[col].nunique()) print('test', col, df_test[col].nunique()) l_enc = LabelEncoder() df[col] = l_enc.fit_transform(df[col].values) df_test[col] = l_enc.transform(df_test[col].values) class MelanomaDataset(Dataset): def __init__(self, df: pd.DataFrame, imfolder: (str, Path), train: bool = True, transforms = None, meta_features = None): """ Class initialization Args: df (pd.DataFrame): DataFrame with data description imfolder (str): folder with images train (bool): flag of whether a training dataset is being initialized or testing one transforms: image transformation method to be applied meta_features (list): list of features with meta information, such as sex and age """ self.df = df self.imfolder = imfolder self.transforms = transforms self.train = train self.meta_features = meta_features def __getitem__(self, index): im_path = Path(f"{self.imfolder}/{self.df.iloc[index]['image_name']}.jpg") x = cv2.imread(str(im_path)) meta = torch.tensor(self.df.iloc[index][self.meta_features].values, dtype=torch.float) if self.transforms: x = self.transforms(x) if self.train: y = self.df.iloc[index]['target'] y_meta = self.one_hot(2, y) return {'image': x, 'label': y, 'features': meta, 'target': y_meta} else: return {'image': x, 'label': None, 'features': meta, 'target': None} def __len__(self): return len(self.df) @staticmethod def one_hot(size, target): tensor = torch.zeros(size, dtype=torch.float32) tensor[target] = 1. 
return tensor class CustomTabnet(nn.Module): def __init__(self, input_dim, output_dim,n_d=8, n_a=8,n_steps=3, gamma=1.3, cat_idxs=[], cat_dims=[3,8], cat_emb_dim=[3,5],n_independent=2, n_shared=2, momentum=0.02,mask_type="sparsemax"): super(CustomTabnet, self).__init__() self.tabnet = TabNet(input_dim=input_dim,output_dim=output_dim, n_d=n_d, n_a=n_a,n_steps=n_steps, gamma=gamma, cat_idxs=cat_idxs, cat_dims=cat_dims, cat_emb_dim=cat_emb_dim,n_independent=n_independent, n_shared=n_shared, momentum=momentum,mask_type="sparsemax") def forward(self, x): return self.tabnet(x) tabnet = CustomTabnet(2, 2) list(tabnet.tabnet.modules())[-1].in_features net = EfficientNet.from_pretrained('efficientnet-b0') class Effnet(nn.Module): def __init__(self, arch, input_dim, output_dim): super().__init__() self.arch = arch self.arch._fc = nn.Linear(in_features=1280, out_features=64, bias=True) self.tab = CustomTabnet(input_dim, output_dim) self.tab = nn.Sequential(*list(self.tab.modules())[:-1]) self.ouput = nn.Linear(64 + 8, 1) def forward(self, inputs): """ No sigmoid in forward because we are going to use BCEWithLogitsLoss Which applies sigmoid for us when calculating a loss """ x, meta = inputs['image'], inputs['features'] cnn_features = self.arch(x) meta_features = self.tab(meta) features = torch.cat((cnn_features, meta_features), dim=1) output = self.ouput(features) return output test = MelanomaDataset(df=test_df, imfolder=TEST, train=False, transforms=train_transform, # For TTA meta_features=meta_features) import gc epochs = 15 # Number of epochs to run es_patience = 3 # Early Stopping patience - for how many epochs with no improvements to wait TTA = 3 # Test Time Augmentation rounds oof = np.zeros((len(train_df), 1)) # Out Of Fold predictions preds = torch.zeros((len(test), 1), dtype=torch.float32, device=device) # Predictions for test test skf = KFold(n_splits=5, shuffle=True, random_state=47) for fold,(idxT, idxV) in enumerate(list(skf.split(np.arange(15)))[3:], 4): print('=' * 20, 'Fold', fold, '=' * 20) train_idx = train_df.loc[train_df['fold'].isin(idxT)].index val_idx = train_df.loc[train_df['fold'].isin(idxV)].index model_path = f'/out/model_{fold}.pth' # Path and filename to save model to best_val = 0 # Best validation score within this fold patience = es_patience # Current patience counter arch = EfficientNet.from_pretrained('efficientnet-b1') model = Effnet(arch=arch, n_meta_features=len(meta_features)) # New model for each fold if Path(model_path).exists(): inference = True model = model.to(device) optim = torch.optim.AdamW(model.parameters(), lr=0.001) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer=optim, mode='max', patience=1, verbose=True, factor=0.2) criterion = nn.BCEWithLogitsLoss() train = MelanomaDataset(df=train_df.iloc[train_idx].reset_index(drop=True), imfolder=TRAIN, train=True, transforms=train_transform, meta_features=meta_features) val = MelanomaDataset(df=train_df.iloc[val_idx].reset_index(drop=True), imfolder=TRAIN, train=True, transforms=test_transform, meta_features=meta_features) train_loader = DataLoader(dataset=train, batch_size=batch_size, shuffle=True, num_workers=4) val_loader = DataLoader(dataset=val, batch_size=batch_size, shuffle=False, num_workers=4) test_loader = DataLoader(dataset=test, batch_size=batch_size, shuffle=False, num_workers=4) mb = master_bar(range(epochs)) if not inference: for epoch in mb: start_time = time.time() correct = 0 epoch_loss = 0 model.train() for x, y in progress_bar(train_loader, parent=mb, total=int(len(train)/ 
64)): x[0] = torch.tensor(x[0], device=device, dtype=torch.float32) x[1] = torch.tensor(x[1], device=device, dtype=torch.float32) y = torch.tensor(y, device=device, dtype=torch.float32) optim.zero_grad() z = model(x) loss = criterion(z, y.unsqueeze(1)) loss.backward() optim.step() pred = torch.round(torch.sigmoid(z)) # round off sigmoid to obtain predictions correct += (pred.cpu() == y.cpu().unsqueeze(1)).sum().item() # tracking number of correctly predicted samples epoch_loss += loss.item() mb.child.comment = f'{epoch_loss:.4f}' train_acc = correct / len(train_idx) model.eval() # switch model to the evaluation mode val_preds = torch.zeros((len(val_idx), 1), dtype=torch.float32, device=device) with torch.no_grad(): # Do not calculate gradient since we are only predicting # Predicting on validation set for j, (x_val, y_val) in progress_bar(enumerate(val_loader), parent=mb, total=int(len(val)/32)): x_val[0] = torch.tensor(x_val[0], device=device, dtype=torch.float32) x_val[1] = torch.tensor(x_val[1], device=device, dtype=torch.float32) y_val = torch.tensor(y_val, device=device, dtype=torch.float32) z_val = model(x_val) val_pred = torch.sigmoid(z_val) val_preds[j*val_loader.batch_size:j*val_loader.batch_size + x_val[0].shape[0]] = val_pred val_acc = accuracy_score(train_df.iloc[val_idx]['target'].values, torch.round(val_preds.cpu())) val_roc = roc_auc_score(train_df.iloc[val_idx]['target'].values, val_preds.cpu()) mb.write('Epoch {:03}: | Loss: {:.3f} | Train acc: {:.3f} | Val acc: {:.3f} | Val roc_auc: {:.3f} | Training time: {}'.format( epoch + 1, epoch_loss, train_acc, val_acc, val_roc, str(datetime.timedelta(seconds=time.time() - start_time))[:7])) scheduler.step(val_roc) if val_roc >= best_val: best_val = val_roc patience = es_patience # Resetting patience since we have new best validation accuracy torch.save(model, model_path) # Saving current best model else: patience -= 1 if patience == 0: print('Early stopping. 
Best Val roc_auc: {:.3f}'.format(best_val)) break model = torch.load(model_path) # Loading best model of this fold model.eval() # switch model to the evaluation mode val_preds = torch.zeros((len(val_idx), 1), dtype=torch.float32, device=device) with torch.no_grad(): # Predicting on validation set once again to obtain data for OOF for j, (x_val, y_val) in progress_bar(enumerate(val_loader), total=int(len(val)/32)): x_val[0] = torch.tensor(x_val[0], device=device, dtype=torch.float32) x_val[1] = torch.tensor(x_val[1], device=device, dtype=torch.float32) y_val = torch.tensor(y_val, device=device, dtype=torch.float32) z_val = model(x_val) val_pred = torch.sigmoid(z_val) val_preds[j*val_loader.batch_size:j*val_loader.batch_size + x_val[0].shape[0]] = val_pred oof[val_idx] = val_preds.cpu().numpy() # Predicting on test set for _ in range(TTA): for i, x_test in progress_bar(enumerate(test_loader), parent=mb, total=len(test)//32): x_test[0] = torch.tensor(x_test[0], device=device, dtype=torch.float32) x_test[1] = torch.tensor(x_test[1], device=device, dtype=torch.float32) z_test = model(x_test) z_test = torch.sigmoid(z_test) preds[i*test_loader.batch_size:i*test_loader.batch_size + x_test[0].shape[0]] += z_test preds /= TTA del train, val, train_loader, val_loader, x, y, x_val, y_val gc.collect() preds /= skf.n_splits # Saving OOF predictions so stacking would be easier pd.Series(oof.reshape(-1,)).to_csv('oof.csv', index=False) sub = pd.read_csv(DATA / 'sample_submission.csv') sub['target'] = preds.cpu().numpy().reshape(-1,) sub.to_csv('/out/img_meta_submission.csv', index=False) !kaggle competitions submit -c siim-isic-melanoma-classification -f submission.csv -m "Melanoma Starter Image Size 384" ```
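Once out-of-fold predictions have been produced for every row of `train_df` (the fold loop above is sliced to start at fold 4 as written, so this assumes all folds were eventually run), the overall cross-validated score can be read back from `oof.csv`. This is a hedged sketch; `train_df` is assumed to come from the `data` module imported at the top of the notebook.

```
# Hedged sketch: overall out-of-fold ROC AUC, assuming oof.csv covers all folds and that
# train_df (with a 'target' column) is the dataframe used to build the folds above.
import pandas as pd
from sklearn.metrics import roc_auc_score

oof_preds = pd.read_csv('oof.csv').values.reshape(-1,)
print('OOF ROC AUC:', roc_auc_score(train_df['target'].values, oof_preds))
```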
github_jupyter