# Tutorial 7: Graph Neural Networks

**Filled notebook:**
[View on GitHub](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/GNN_overview.ipynb)
[Open in Google Colab](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/GNN_overview.ipynb)
**Pre-trained models:**
[GitHub repository](https://github.com/phlippe/saved_models/tree/main/tutorial7)
[Google Drive folder](https://drive.google.com/drive/folders/1DOTV_oYt5boa-MElbc2izat4VMSc1gob?usp=sharing)
**Recordings:**
[Part 1 (YouTube)](https://youtu.be/fK7d56Ly9q8)
[Part 2 (YouTube)](https://youtu.be/ZCNSUWe4a_Q)
In this tutorial, we will discuss the application of neural networks to graphs. Graph Neural Networks (GNNs) have recently gained increasing popularity in both applications and research, including domains such as social networks, knowledge graphs, recommender systems, and bioinformatics. While the theory and math behind GNNs might seem complicated at first, the implementation of these models is quite simple and helps in understanding the methodology. Therefore, we will discuss the implementation of basic network layers of a GNN, namely graph convolutions and attention layers. Finally, we will apply a GNN to node-level, edge-level, and graph-level tasks.
Below, we will start by importing our standard libraries. We will use PyTorch Lightning as already done in Tutorials 5 and 6.
```
## Standard libraries
import os
import json
import math
import numpy as np
import time
## Imports for plotting
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf') # For export
from matplotlib.colors import to_rgb
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 2.0
import seaborn as sns
sns.reset_orig()
sns.set()
## Progress bar
from tqdm.notebook import tqdm
## PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import torch.optim as optim
# Torchvision
import torchvision
from torchvision.datasets import CIFAR10
from torchvision import transforms
# PyTorch Lightning
try:
import pytorch_lightning as pl
except ModuleNotFoundError: # Google Colab does not have PyTorch Lightning installed by default. Hence, we do it here if necessary
!pip install --quiet "pytorch-lightning>=1.4"
import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
# Path to the folder where the datasets are/should be downloaded (e.g. CIFAR10)
DATASET_PATH = "../data"
# Path to the folder where the pretrained models are saved
CHECKPOINT_PATH = "../saved_models/tutorial7"
# Setting the seed
pl.seed_everything(42)
# Ensure that all operations are deterministic on GPU (if used) for reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
print(device)
```
We also download a few pre-trained models below.
```
import urllib.request
from urllib.error import HTTPError
# Github URL where saved models are stored for this tutorial
base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/tutorial7/"
# Files to download
pretrained_files = ["NodeLevelMLP.ckpt", "NodeLevelGNN.ckpt", "GraphLevelGraphConv.ckpt"]
# Create checkpoint path if it doesn't exist yet
os.makedirs(CHECKPOINT_PATH, exist_ok=True)
# For each file, check whether it already exists. If not, try downloading it.
for file_name in pretrained_files:
file_path = os.path.join(CHECKPOINT_PATH, file_name)
if "/" in file_name:
os.makedirs(file_path.rsplit("/",1)[0], exist_ok=True)
if not os.path.isfile(file_path):
file_url = base_url + file_name
print(f"Downloading {file_url}...")
try:
urllib.request.urlretrieve(file_url, file_path)
except HTTPError as e:
print("Something went wrong. Please try to download the file from the GDrive folder, or contact the author with the full output including the following error:\n", e)
```
## Graph Neural Networks
### Graph representation
Before starting the discussion of specific neural network operations on graphs, we should consider how to represent a graph. Mathematically, a graph $\mathcal{G}$ is defined as a tuple of a set of nodes/vertices $V$, and a set of edges/links $E$: $\mathcal{G}=(V,E)$. Each edge is a pair of two vertices, and represents a connection between them. For instance, let's look at the following graph:
<center width="100%" style="padding:10px"><img src="example_graph.svg" width="250px"></center>
The vertices are $V=\{1,2,3,4\}$, and the edges $E=\{(1,2), (2,3), (2,4), (3,4)\}$. Note that for simplicity, we assume the graph to be undirected and hence don't add mirrored pairs like $(2,1)$. In applications, vertices and edges can often have specific attributes, and edges can even be directed. The question is how we can represent this diversity in an efficient way for matrix operations. Usually, for the edges, we decide between two variants: an adjacency matrix, or a list of paired vertex indices.
The **adjacency matrix** $A$ is a square matrix whose elements indicate whether pairs of vertices are adjacent, i.e. connected, or not. In the simplest case, $A_{ij}$ is 1 if there is a connection from node $i$ to $j$, and otherwise 0. If we have edge attributes or different categories of edges in a graph, this information can be added to the matrix as well. For an undirected graph, keep in mind that $A$ is a symmetric matrix ($A_{ij}=A_{ji}$). For the example graph above, we have the following adjacency matrix:
$$
A = \begin{bmatrix}
0 & 1 & 0 & 0\\
1 & 0 & 1 & 1\\
0 & 1 & 0 & 1\\
0 & 1 & 1 & 0
\end{bmatrix}
$$
While expressing a graph as a list of edges is more efficient in terms of memory and (possibly) computation, using an adjacency matrix is more intuitive and simpler to implement. In our implementations below, we will rely on the adjacency matrix to keep the code simple. However, common libraries use edge lists, which we will discuss in more detail later.
Alternatively, we could also use the list of edges to define a sparse adjacency matrix with which we can work as if it was a dense matrix, but allows more memory-efficient operations. PyTorch supports this with the sub-package `torch.sparse` ([documentation](https://pytorch.org/docs/stable/sparse.html)) which is however still in a beta-stage (API might change in future).
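To make the two representations concrete, here is a minimal sketch that builds both a dense adjacency matrix and a sparse COO tensor from the edge list of the example graph above (using 0-indexed vertices; the variable names are our own):
```
import torch

# Edge list of the example graph (each undirected edge stored once, 0-indexed)
edges = torch.tensor([[0, 1], [1, 2], [1, 3], [2, 3]])
num_nodes = 4

# Dense adjacency matrix built from the edge list
adj = torch.zeros(num_nodes, num_nodes)
adj[edges[:, 0], edges[:, 1]] = 1.
adj = adj + adj.t()  # mirror the edges for an undirected graph

# The same connectivity as a sparse COO tensor (memory-efficient for large graphs)
both_dirs = torch.cat([edges, edges.flip(dims=(1,))], dim=0)
adj_sparse = torch.sparse_coo_tensor(both_dirs.t(),
                                     torch.ones(both_dirs.size(0)),
                                     (num_nodes, num_nodes))
print(adj)
print(adj_sparse.to_dense())  # identical to the dense matrix above
```
For a graph this small the difference is irrelevant, but for large, sparse graphs the COO representation avoids storing the many zero entries.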
### Graph Convolutions
Graph Convolutional Networks were introduced by [Kipf et al.](https://openreview.net/pdf?id=SJU4ayYgl) in 2016 at the University of Amsterdam. Thomas Kipf also wrote a great [blog post](https://tkipf.github.io/graph-convolutional-networks/) about this topic, which is recommended if you want to read about GCNs from a different perspective. GCNs are similar to convolutions on images in the sense that the "filter" parameters are typically shared over all locations in the graph. At the same time, GCNs rely on message passing methods, which means that vertices exchange information with their neighbors, and send "messages" to each other. Before looking at the math, we can try to visually understand how GCNs work. The first step is that each node creates a feature vector that represents the message it wants to send to all its neighbors. In the second step, the messages are sent to the neighbors, so that a node receives one message per adjacent node. Below we have visualized the two steps for our example graph.
<center width="100%" style="padding:10px"><img src="graph_message_passing.svg" width="700px"></center>
If we want to formulate this in more mathematical terms, we first need to decide how to combine all the messages a node receives. As the number of messages varies across nodes, we need an operation that works for any number. Hence, the usual way to go is to sum or take the mean. Given the previous features of nodes $H^{(l)}$, the GCN layer is defined as follows:
$$H^{(l+1)} = \sigma\left(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}H^{(l)}W^{(l)}\right)$$
$W^{(l)}$ is the weight matrix with which we transform the input features into messages ($H^{(l)}W^{(l)}$). To the adjacency matrix $A$ we add the identity matrix so that each node also sends its own message to itself: $\hat{A}=A+I$. Finally, to take the average instead of summing, we calculate the diagonal matrix $\hat{D}$ with $\hat{D}_{ii}$ denoting the number of neighbors node $i$ has. $\sigma$ represents an arbitrary activation function, and not necessarily the sigmoid (usually a ReLU-based activation function is used in GNNs).
When implementing the GCN layer in PyTorch, we can take advantage of the flexible operations on tensors. Instead of defining a matrix $\hat{D}$, we can simply divide the summed messages by the number of neighbors afterward. Additionally, we replace the weight matrix with a linear layer, which additionally allows us to add a bias. Written as a PyTorch module, the GCN layer is defined as follows:
```
class GCNLayer(nn.Module):
def __init__(self, c_in, c_out):
super().__init__()
self.projection = nn.Linear(c_in, c_out)
def forward(self, node_feats, adj_matrix):
"""
Inputs:
node_feats - Tensor with node features of shape [batch_size, num_nodes, c_in]
adj_matrix - Batch of adjacency matrices of the graph. If there is an edge from i to j, adj_matrix[b,i,j]=1 else 0.
Supports directed edges by non-symmetric matrices. Assumes to already have added the identity connections.
Shape: [batch_size, num_nodes, num_nodes]
"""
# Num neighbours = number of incoming edges
num_neighbours = adj_matrix.sum(dim=-1, keepdims=True)
node_feats = self.projection(node_feats)
node_feats = torch.bmm(adj_matrix, node_feats)
node_feats = node_feats / num_neighbours
return node_feats
```
To further understand the GCN layer, we can apply it to our example graph above. First, let's specify some node features and the adjacency matrix with added self-connections:
```
node_feats = torch.arange(8, dtype=torch.float32).view(1, 4, 2)
adj_matrix = torch.Tensor([[[1, 1, 0, 0],
[1, 1, 1, 1],
[0, 1, 1, 1],
[0, 1, 1, 1]]])
print("Node features:\n", node_feats)
print("\nAdjacency matrix:\n", adj_matrix)
```
Next, let's apply a GCN layer to it. For simplicity, we initialize the linear weight matrix as an identity matrix so that the input features are equal to the messages. This makes it easier for us to verify the message passing operation.
```
layer = GCNLayer(c_in=2, c_out=2)
layer.projection.weight.data = torch.Tensor([[1., 0.], [0., 1.]])
layer.projection.bias.data = torch.Tensor([0., 0.])
with torch.no_grad():
out_feats = layer(node_feats, adj_matrix)
print("Adjacency matrix", adj_matrix)
print("Input features", node_feats)
print("Output features", out_feats)
```
As we can see, the first node's output values are the average of itself and the second node. Similarly, we can verify all other nodes. However, in a GNN, we would also want to allow feature exchange between nodes beyond their direct neighbors. This can be achieved by applying multiple GCN layers, which gives us the final layout of a GNN. A GNN can be built up as a sequence of GCN layers and non-linearities such as ReLU. For a visualization, see below (figure credit - [Thomas Kipf, 2016](https://tkipf.github.io/graph-convolutional-networks/)).
<center width="100%" style="padding: 10px"><img src="gcn_network.png" width="600px"></center>
However, one issue we can see from the example above is that the output features for nodes 3 and 4 are the same because they have the same adjacent nodes (including themselves). Therefore, GCN layers can make the network forget node-specific information if we just take a mean over all messages. Multiple possible improvements have been proposed. While the simplest option might be using residual connections, the more common approach is to either weigh the self-connections higher or define a separate weight matrix for the self-connections. Alternatively, we can revisit a concept from the last tutorial: attention.
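As an illustration of the last option, here is a minimal sketch of a GCN-style layer with a separate weight matrix for the self-connections. This is not the layer used in the rest of the tutorial; the class and attribute names (`GCNLayerOwnWeights`, `proj_neighbors`, `proj_self`) are our own:
```
import torch
import torch.nn as nn

class GCNLayerOwnWeights(nn.Module):
    """Sketch of a GCN-style layer with a separate weight matrix for self-connections."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.proj_neighbors = nn.Linear(c_in, c_out)  # transforms the neighbors' messages
        self.proj_self = nn.Linear(c_in, c_out)       # transforms the node's own features

    def forward(self, node_feats, adj_matrix):
        # adj_matrix is assumed here to be WITHOUT self-connections
        num_neighbours = adj_matrix.sum(dim=-1, keepdims=True).clamp(min=1)
        neigh_feats = torch.bmm(adj_matrix, self.proj_neighbors(node_feats)) / num_neighbours
        return neigh_feats + self.proj_self(node_feats)
```
Because the self-features pass through their own weight matrix, nodes with identical neighborhoods but different features now obtain different outputs.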
### Graph Attention
If you remember from the last tutorial, attention describes a weighted average of multiple elements with the weights dynamically computed based on an input query and elements' keys (if you haven't read Tutorial 6 yet, it is recommended to at least go through the very first section called [What is Attention?](https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/tutorial6/Transformers_and_MHAttention.html#What-is-Attention?)). This concept can be similarly applied to graphs; one such architecture is the Graph Attention Network (called GAT, proposed by [Velickovic et al., 2017](https://arxiv.org/abs/1710.10903)). Similar to the GCN, the graph attention layer creates a message for each node using a linear layer/weight matrix. For the attention part, it uses the message from the node itself as a query, and the messages to average as both keys and values (note that this also includes the message to itself). The score function $f_{attn}$ is implemented as a one-layer MLP which maps the query and key to a single value. The MLP looks as follows (figure credit - [Velickovic et al.](https://arxiv.org/abs/1710.10903)):
<center width="100%" style="padding:10px"><img src="graph_attention_MLP.svg" width="250px"></center>
$h_i$ and $h_j$ are the original features of nodes $i$ and $j$ respectively, and $\mathbf{W}h_i$ and $\mathbf{W}h_j$ are the corresponding messages of the layer with $\mathbf{W}$ as weight matrix. $\mathbf{a}$ is the weight matrix of the MLP, which has the shape $[1,2\times d_{\text{message}}]$, and $\alpha_{ij}$ is the final attention weight from node $i$ to $j$. The calculation can be described as follows:
$$\alpha_{ij} = \frac{\exp\left(\text{LeakyReLU}\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_j\right]\right)\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\text{LeakyReLU}\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_k\right]\right)\right)}$$
The operator $||$ represents the concatenation, and $\mathcal{N}_i$ the indices of the neighbors of node $i$. Note that in contrast to usual practice, we apply a non-linearity (here LeakyReLU) before the softmax over elements. Although it seems like a minor change at first, it is crucial for the attention to depend on the original input. Specifically, let's remove the non-linearity for a second, and try to simplify the expression:
$$
\begin{split}
\alpha_{ij} & = \frac{\exp\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_j\right]\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_k\right]\right)}\\[5pt]
& = \frac{\exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i+\mathbf{a}_{:,d/2:}\mathbf{W}h_j\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i+\mathbf{a}_{:,d/2:}\mathbf{W}h_k\right)}\\[5pt]
& = \frac{\exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i\right)\cdot\exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_j\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i\right)\cdot\exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_k\right)}\\[5pt]
& = \frac{\exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_j\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_k\right)}\\
\end{split}
$$
We can see that without the non-linearity, the attention term with $h_i$ actually cancels itself out, resulting in the attention being independent of the node itself. Hence, we would have the same issue as the GCN of creating the same output features for nodes with the same neighbors. This is why the LeakyReLU is crucial and adds some dependency on $h_i$ to the attention.
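We can verify this cancellation numerically. The following sketch uses random toy vectors (our own placeholders for $\mathbf{a}$ and the messages): without the LeakyReLU, the softmax produces the same attention weights no matter which $h_i$ we plug in.
```
import torch
import torch.nn.functional as F

torch.manual_seed(0)
a_left, a_right = torch.randn(4), torch.randn(4)   # the two halves of the attention vector a
h_i1, h_i2 = torch.randn(4), torch.randn(4)        # two different "own" messages W*h_i
h_neighbors = torch.randn(3, 4)                    # messages W*h_k of three neighbors

# Without the LeakyReLU, the h_i term is a constant additive shift inside the softmax
logits1 = a_left @ h_i1 + h_neighbors @ a_right
logits2 = a_left @ h_i2 + h_neighbors @ a_right
print(F.softmax(logits1, dim=0))
print(F.softmax(logits2, dim=0))   # identical, despite the different h_i
```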
Once we obtain all attention factors, we can calculate the output features for each node by performing the weighted average:
$$h_i'=\sigma\left(\sum_{j\in\mathcal{N}_i}\alpha_{ij}\mathbf{W}h_j\right)$$
$\sigma$ is yet another non-linearity, as in the GCN layer. Visually, we can represent the full message passing in an attention layer as follows (figure credit - [Velickovic et al.](https://arxiv.org/abs/1710.10903)):
<center width="100%"><img src="graph_attention.jpeg" width="400px"></center>
To increase the expressiveness of the graph attention network, [Velickovic et al.](https://arxiv.org/abs/1710.10903) proposed to extend it to multiple heads similar to the Multi-Head Attention block in Transformers. This results in $N$ attention layers being applied in parallel. In the image above, it is visualized as three different colors of arrows (green, blue, and purple) that are afterward concatenated. The average is only applied for the very final prediction layer in a network.
After having discussed the graph attention layer in detail, we can implement it below:
```
class GATLayer(nn.Module):
def __init__(self, c_in, c_out, num_heads=1, concat_heads=True, alpha=0.2):
"""
Inputs:
c_in - Dimensionality of input features
c_out - Dimensionality of output features
num_heads - Number of heads, i.e. attention mechanisms to apply in parallel. The
output features are equally split up over the heads if concat_heads=True.
concat_heads - If True, the output of the different heads is concatenated instead of averaged.
alpha - Negative slope of the LeakyReLU activation.
"""
super().__init__()
self.num_heads = num_heads
self.concat_heads = concat_heads
if self.concat_heads:
assert c_out % num_heads == 0, "Number of output features must be a multiple of the count of heads."
c_out = c_out // num_heads
# Sub-modules and parameters needed in the layer
self.projection = nn.Linear(c_in, c_out * num_heads)
self.a = nn.Parameter(torch.Tensor(num_heads, 2 * c_out)) # One per head
self.leakyrelu = nn.LeakyReLU(alpha)
# Initialization from the original implementation
nn.init.xavier_uniform_(self.projection.weight.data, gain=1.414)
nn.init.xavier_uniform_(self.a.data, gain=1.414)
def forward(self, node_feats, adj_matrix, print_attn_probs=False):
"""
Inputs:
node_feats - Input features of the node. Shape: [batch_size, c_in]
adj_matrix - Adjacency matrix including self-connections. Shape: [batch_size, num_nodes, num_nodes]
print_attn_probs - If True, the attention weights are printed during the forward pass (for debugging purposes)
"""
batch_size, num_nodes = node_feats.size(0), node_feats.size(1)
# Apply linear layer and sort nodes by head
node_feats = self.projection(node_feats)
node_feats = node_feats.view(batch_size, num_nodes, self.num_heads, -1)
# We need to calculate the attention logits for every edge in the adjacency matrix
# Doing this on all possible combinations of nodes is very expensive
# => Create a tensor of [W*h_i||W*h_j] with i and j being the indices of all edges
edges = adj_matrix.nonzero(as_tuple=False) # Returns indices where the adjacency matrix is not 0 => edges
node_feats_flat = node_feats.view(batch_size * num_nodes, self.num_heads, -1)
edge_indices_row = edges[:,0] * num_nodes + edges[:,1]
edge_indices_col = edges[:,0] * num_nodes + edges[:,2]
a_input = torch.cat([
torch.index_select(input=node_feats_flat, index=edge_indices_row, dim=0),
torch.index_select(input=node_feats_flat, index=edge_indices_col, dim=0)
], dim=-1) # Index select returns a tensor with node_feats_flat being indexed at the desired positions along dim=0
# Calculate attention MLP output (independent for each head)
attn_logits = torch.einsum('bhc,hc->bh', a_input, self.a)
attn_logits = self.leakyrelu(attn_logits)
# Map list of attention values back into a matrix
attn_matrix = attn_logits.new_zeros(adj_matrix.shape+(self.num_heads,)).fill_(-9e15)
attn_matrix[adj_matrix[...,None].repeat(1,1,1,self.num_heads) == 1] = attn_logits.reshape(-1)
# Weighted average of attention
attn_probs = F.softmax(attn_matrix, dim=2)
if print_attn_probs:
print("Attention probs\n", attn_probs.permute(0, 3, 1, 2))
node_feats = torch.einsum('bijh,bjhc->bihc', attn_probs, node_feats)
# If heads should be concatenated, we can do this by reshaping. Otherwise, take mean
if self.concat_heads:
node_feats = node_feats.reshape(batch_size, num_nodes, -1)
else:
node_feats = node_feats.mean(dim=2)
return node_feats
```
Again, we can apply the graph attention layer on our example graph above to understand the dynamics better. As before, the input layer is initialized as an identity matrix, but we set $\mathbf{a}$ to be a vector of arbitrary numbers to obtain different attention values. We use two heads to show the parallel, independent attention mechanisms working in the layer.
```
layer = GATLayer(2, 2, num_heads=2)
layer.projection.weight.data = torch.Tensor([[1., 0.], [0., 1.]])
layer.projection.bias.data = torch.Tensor([0., 0.])
layer.a.data = torch.Tensor([[-0.2, 0.3], [0.1, -0.1]])
with torch.no_grad():
out_feats = layer(node_feats, adj_matrix, print_attn_probs=True)
print("Adjacency matrix", adj_matrix)
print("Input features", node_feats)
print("Output features", out_feats)
```
We recommend that you try to calculate the attention matrix for at least one head and one node yourself. The entries are 0 where there is no edge between $i$ and $j$. For the others, we see a diverse set of attention probabilities. Moreover, the output features of nodes 3 and 4 are now different, even though they have the same neighbors.
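For example, the following sketch recomputes by hand the attention weights of node 1 in the figure (index 0 in the tensors) for the first head, using the identity projection and the values of $\mathbf{a}$ we set above:
```
import torch
import torch.nn.functional as F

# Head 0 features after the identity projection: the first input feature of each node
h = torch.tensor([0., 2., 4., 6.])
a = torch.tensor([-0.2, 0.3])      # attention vector of head 0, split into (query, key) halves
neighbors = [0, 1]                 # node 1 is connected to itself and node 2 (indices 0 and 1)

logits = torch.stack([F.leaky_relu(a[0] * h[0] + a[1] * h[j], negative_slope=0.2)
                      for j in neighbors])
print(F.softmax(logits, dim=0))    # ~[0.354, 0.646], the first row of head 0's attention matrix
```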
## PyTorch Geometric
We mentioned before that implementing graph networks with an adjacency matrix is simple and straightforward but can be computationally expensive for large graphs. Many real-world graphs can reach over 200k nodes, for which adjacency-matrix-based implementations fail. There are a lot of optimizations possible when implementing GNNs, and luckily, there exist packages that provide such layers. The most popular packages for PyTorch are [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/) and the [Deep Graph Library](https://www.dgl.ai/) (the latter being actually framework agnostic). Which one to use depends on the project you are planning to do and personal taste. In this tutorial, we will look at PyTorch Geometric as part of the PyTorch family. Similar to PyTorch Lightning, PyTorch Geometric is not installed by default on Google Colab (and actually also not in our `dl2021` environment due to many dependencies that would be unnecessary for the practicals). Hence, let's import and/or install it below:
```
# torch geometric
try:
import torch_geometric
except ModuleNotFoundError:
# Installing torch geometric packages with specific CUDA+PyTorch version.
# See https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html for details
TORCH = torch.__version__.split('+')[0]
CUDA = 'cu' + torch.version.cuda.replace('.','')
!pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-geometric
import torch_geometric
import torch_geometric.nn as geom_nn
import torch_geometric.data as geom_data
```
PyTorch Geometric provides us with a set of common graph layers, including the GCN and GAT layers we implemented above. Additionally, similar to PyTorch's torchvision, it provides common graph datasets and transformations on those to simplify training. Compared to our implementation above, PyTorch Geometric uses a list of index pairs to represent the edges. The details of this library will be explored further in our experiments.
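To see how this edge-list format relates to our adjacency matrix, here is a small sketch that converts the example adjacency matrix from above into PyTorch Geometric's `edge_index` representation with `torch_geometric.utils.dense_to_sparse` (and back with `to_dense_adj`):
```
import torch
from torch_geometric.utils import dense_to_sparse, to_dense_adj

# Our example adjacency matrix with self-connections
adj = torch.Tensor([[1, 1, 0, 0],
                    [1, 1, 1, 1],
                    [0, 1, 1, 1],
                    [0, 1, 1, 1]])
edge_index, edge_weight = dense_to_sparse(adj)
print(edge_index)                # shape [2, num_edges]: first row source nodes, second row target nodes
print(to_dense_adj(edge_index))  # back to the dense form (batched as [1, 4, 4]) for comparison
```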
In our tasks below, we want to be able to pick from a multitude of graph layers. Thus, we define below a dictionary to access those using a string:
```
gnn_layer_by_name = {
"GCN": geom_nn.GCNConv,
"GAT": geom_nn.GATConv,
"GraphConv": geom_nn.GraphConv
}
```
In addition to GCN and GAT, we added the layer `geom_nn.GraphConv` ([documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.GraphConv)). GraphConv is a GCN with a separate weight matrix for the self-connections. Mathematically, this would be:
$$
\mathbf{x}_i^{(l+1)} = \mathbf{W}^{(l+1)}_1 \mathbf{x}_i^{(l)} + \mathbf{W}^{(l+1)}_2 \sum_{j \in \mathcal{N}_i} \mathbf{x}_j^{(l)}
$$
In this formula, the neighbors' messages are added instead of averaged. However, PyTorch Geometric provides the argument `aggr` to switch between summing, averaging, and max pooling.
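As a small usage sketch (with hypothetical toy inputs of our own), GraphConv can be instantiated with a different aggregation like this:
```
import torch
import torch_geometric.nn as geom_nn

# GraphConv with mean aggregation instead of the default sum
conv = geom_nn.GraphConv(in_channels=2, out_channels=4, aggr='mean')

x = torch.randn(4, 2)                                  # 4 nodes with 2 features each
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 1, 3],   # source nodes
                           [1, 0, 2, 1, 3, 2, 3, 1]])  # target nodes
print(conv(x, edge_index).shape)                       # -> torch.Size([4, 4])
```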
## Experiments on graph structures
Tasks on graph-structured data can be grouped into three categories: node-level, edge-level, and graph-level. The different levels describe on which level we want to perform classification/regression. We will discuss all three types in more detail below.
### Node-level tasks: Semi-supervised node classification
Node-level tasks have the goal of classifying nodes in a graph. Usually, we are given a single, large graph with >1000 nodes, of which a certain fraction of the nodes is labeled. We learn to classify those labeled examples during training and try to generalize to the unlabeled nodes.
A popular example that we will use in this tutorial is the Cora dataset, a citation network among papers. The Cora dataset consists of 2708 scientific publications with links between them representing the citation of one paper by another. The task is to classify each publication into one of seven classes. Each publication is represented by a bag-of-words vector. This means that we have a vector of 1433 elements for each publication, where a 1 at feature $i$ indicates that the $i$-th word of a pre-defined dictionary occurs in the article. Binary bag-of-words representations are commonly used when we need very simple encodings and already have an intuition of what words to expect in a network. There exist much better approaches, but we will leave this to the NLP courses to discuss.
We will load the dataset below:
```
cora_dataset = torch_geometric.datasets.Planetoid(root=DATASET_PATH, name="Cora")
```
Let's look at how PyTorch Geometric represents the graph data. Note that although we have a single graph, PyTorch Geometric returns a dataset for compatibility with other datasets.
```
cora_dataset[0]
```
The graph is represented by a `Data` object ([documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html#torch_geometric.data.Data)) which we can access like a standard Python namespace. The `edge_index` tensor is the list of edges in the graph and contains the mirrored version of each edge for undirected graphs. The `train_mask`, `val_mask`, and `test_mask` are boolean masks that indicate which nodes we should use for training, validation, and testing. The `x` tensor is the feature tensor of our 2708 publications, and `y` the labels for all nodes.
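As a quick sanity check, we can inspect these attributes directly; the lines below are just illustrative print statements over the loaded `Data` object:
```
cora_data = cora_dataset[0]
print("Number of nodes:", cora_data.num_nodes)               # 2708 publications
print("Number of edges:", cora_data.edge_index.shape[1])     # each undirected edge is stored twice
print("Feature shape:", cora_data.x.shape)                   # [2708, 1433] bag-of-words vectors
print("Training nodes:", cora_data.train_mask.sum().item())
print("Validation/test nodes:", cora_data.val_mask.sum().item(), cora_data.test_mask.sum().item())
```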
After having seen the data, we can implement a simple graph neural network. The GNN applies a sequence of graph layers (GCN, GAT, or GraphConv), ReLU as activation function, and dropout for regularization. See below for the specific implementation.
```
class GNNModel(nn.Module):
def __init__(self, c_in, c_hidden, c_out, num_layers=2, layer_name="GCN", dp_rate=0.1, **kwargs):
"""
Inputs:
c_in - Dimension of input features
c_hidden - Dimension of hidden features
c_out - Dimension of the output features. Usually number of classes in classification
num_layers - Number of "hidden" graph layers
layer_name - String of the graph layer to use
dp_rate - Dropout rate to apply throughout the network
kwargs - Additional arguments for the graph layer (e.g. number of heads for GAT)
"""
super().__init__()
gnn_layer = gnn_layer_by_name[layer_name]
layers = []
in_channels, out_channels = c_in, c_hidden
for l_idx in range(num_layers-1):
layers += [
gnn_layer(in_channels=in_channels,
out_channels=out_channels,
**kwargs),
nn.ReLU(inplace=True),
nn.Dropout(dp_rate)
]
in_channels = c_hidden
layers += [gnn_layer(in_channels=in_channels,
out_channels=c_out,
**kwargs)]
self.layers = nn.ModuleList(layers)
def forward(self, x, edge_index):
"""
Inputs:
x - Input features per node
edge_index - List of vertex index pairs representing the edges in the graph (PyTorch geometric notation)
"""
for l in self.layers:
# For graph layers, we need to add the "edge_index" tensor as additional input
# All PyTorch Geometric graph layer inherit the class "MessagePassing", hence
# we can simply check the class type.
if isinstance(l, geom_nn.MessagePassing):
x = l(x, edge_index)
else:
x = l(x)
return x
```
A good practice in node-level tasks is to create an MLP baseline that is applied to each node independently. This way we can verify whether adding the graph information to the model indeed improves the prediction, or not. It might also be that the features per node are already expressive enough to clearly point towards a specific class. To check this, we implement a simple MLP below.
```
class MLPModel(nn.Module):
def __init__(self, c_in, c_hidden, c_out, num_layers=2, dp_rate=0.1):
"""
Inputs:
c_in - Dimension of input features
c_hidden - Dimension of hidden features
c_out - Dimension of the output features. Usually number of classes in classification
num_layers - Number of hidden layers
dp_rate - Dropout rate to apply throughout the network
"""
super().__init__()
layers = []
in_channels, out_channels = c_in, c_hidden
for l_idx in range(num_layers-1):
layers += [
nn.Linear(in_channels, out_channels),
nn.ReLU(inplace=True),
nn.Dropout(dp_rate)
]
in_channels = c_hidden
layers += [nn.Linear(in_channels, c_out)]
self.layers = nn.Sequential(*layers)
def forward(self, x, *args, **kwargs):
"""
Inputs:
x - Input features per node
"""
return self.layers(x)
```
Finally, we can merge the models into a PyTorch Lightning module which handles the training, validation, and testing for us.
```
class NodeLevelGNN(pl.LightningModule):
def __init__(self, model_name, **model_kwargs):
super().__init__()
# Saving hyperparameters
self.save_hyperparameters()
if model_name == "MLP":
self.model = MLPModel(**model_kwargs)
else:
self.model = GNNModel(**model_kwargs)
self.loss_module = nn.CrossEntropyLoss()
def forward(self, data, mode="train"):
x, edge_index = data.x, data.edge_index
x = self.model(x, edge_index)
# Only calculate the loss on the nodes corresponding to the mask
if mode == "train":
mask = data.train_mask
elif mode == "val":
mask = data.val_mask
elif mode == "test":
mask = data.test_mask
else:
assert False, f"Unknown forward mode: {mode}"
loss = self.loss_module(x[mask], data.y[mask])
acc = (x[mask].argmax(dim=-1) == data.y[mask]).sum().float() / mask.sum()
return loss, acc
def configure_optimizers(self):
# We use SGD here, but Adam works as well
optimizer = optim.SGD(self.parameters(), lr=0.1, momentum=0.9, weight_decay=2e-3)
return optimizer
def training_step(self, batch, batch_idx):
loss, acc = self.forward(batch, mode="train")
self.log('train_loss', loss)
self.log('train_acc', acc)
return loss
def validation_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="val")
self.log('val_acc', acc)
def test_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="test")
self.log('test_acc', acc)
```
In addition to the Lightning module, we define a training function below. As we have a single graph, we use a batch size of 1 for the data loader and share the same data loader for the training, validation, and test set (the mask is picked inside the Lightning module). Besides, we set the argument `progress_bar_refresh_rate` to zero as it usually shows the progress per epoch, but an epoch only consists of a single step. The rest of the code is very similar to what we have seen in Tutorials 5 and 6 already.
```
def train_node_classifier(model_name, dataset, **model_kwargs):
pl.seed_everything(42)
node_data_loader = geom_data.DataLoader(dataset, batch_size=1)
# Create a PyTorch Lightning trainer with the generation callback
root_dir = os.path.join(CHECKPOINT_PATH, "NodeLevel" + model_name)
os.makedirs(root_dir, exist_ok=True)
trainer = pl.Trainer(default_root_dir=root_dir,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
gpus=1 if str(device).startswith("cuda") else 0,
max_epochs=200,
progress_bar_refresh_rate=0) # 0 because epoch size is 1
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, f"NodeLevel{model_name}.ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model, loading...")
model = NodeLevelGNN.load_from_checkpoint(pretrained_filename)
else:
pl.seed_everything()
model = NodeLevelGNN(model_name=model_name, c_in=dataset.num_node_features, c_out=dataset.num_classes, **model_kwargs)
trainer.fit(model, node_data_loader, node_data_loader)
model = NodeLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
# Test best model on the test set
test_result = trainer.test(model, node_data_loader, verbose=False)
batch = next(iter(node_data_loader))
batch = batch.to(model.device)
_, train_acc = model.forward(batch, mode="train")
_, val_acc = model.forward(batch, mode="val")
result = {"train": train_acc,
"val": val_acc,
"test": test_result[0]['test_acc']}
return model, result
```
Finally, we can train our models. First, let's train the simple MLP:
```
# Small function for printing the test scores
def print_results(result_dict):
if "train" in result_dict:
print(f"Train accuracy: {(100.0*result_dict['train']):4.2f}%")
if "val" in result_dict:
print(f"Val accuracy: {(100.0*result_dict['val']):4.2f}%")
print(f"Test accuracy: {(100.0*result_dict['test']):4.2f}%")
node_mlp_model, node_mlp_result = train_node_classifier(model_name="MLP",
dataset=cora_dataset,
c_hidden=16,
num_layers=2,
dp_rate=0.1)
print_results(node_mlp_result)
```
Although the MLP can overfit on the training dataset because of the high-dimensional input features, it does not perform too well on the test set. Let's see if we can beat this score with our graph networks:
```
node_gnn_model, node_gnn_result = train_node_classifier(model_name="GNN",
layer_name="GCN",
dataset=cora_dataset,
c_hidden=16,
num_layers=2,
dp_rate=0.1)
print_results(node_gnn_result)
```
As we would have hoped for, the GNN model outperforms the MLP by quite a margin. This shows that using the graph information indeed improves our predictions and lets us generalize better.
The hyperparameters in the model have been chosen to create a relatively small network. This is because the first layer with an input dimension of 1433 can be relatively expensive to compute for large graphs. In general, GNNs can become relatively expensive for very big graphs. This is why such GNNs either have a small hidden size or use a special batching strategy where we sample a connected subgraph of the big, original graph.
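As a rough sketch of such a sampling-based batching strategy, assuming a PyTorch Geometric version that ships `NeighborLoader` (2.0 or newer; older versions provide `NeighborSampler` instead), one could sample two-hop neighborhoods around batches of training nodes:
```
# Sketch only: samples subgraphs of Cora instead of processing the full graph at once
from torch_geometric.loader import NeighborLoader

loader = NeighborLoader(cora_dataset[0],
                        num_neighbors=[10, 10],             # up to 10 neighbors per node for 2 hops
                        batch_size=128,
                        input_nodes=cora_dataset[0].train_mask)
sub_batch = next(iter(loader))
print(sub_batch)  # a subgraph containing the sampled nodes and their edges
```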
### Edge-level tasks: Link prediction
In some applications, we might have to predict on an edge level instead of a node level. The most common edge-level task in GNNs is link prediction. Link prediction means that given a graph, we want to predict whether there will be/should be an edge between two nodes or not. For example, in a social network, this is used by Facebook and co. to propose new friends to you. Again, graph-level information can be crucial to perform this task. The output prediction is usually done by applying a similarity metric to the pair of node features, which should be close to 1 if there should be a link, and close to 0 otherwise (a minimal sketch of such a scoring function follows the reference list below). To keep the tutorial short, we will not implement this task ourselves. Nevertheless, there are many good resources out there if you are interested in looking closer at this task.
Tutorials and papers for this topic include:
* [PyTorch Geometric example](https://github.com/rusty1s/pytorch_geometric/blob/master/examples/link_pred.py)
* [Graph Neural Networks: A Review of Methods and Applications](https://arxiv.org/pdf/1812.08434.pdf), Zhou et al. 2019
* [Link Prediction Based on Graph Neural Networks](https://papers.nips.cc/paper/2018/file/53f0d7c537d99b3824f0f99d62ea2428-Paper.pdf), Zhang and Chen, 2018.
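As mentioned above, here is a minimal sketch of a similarity-based link scorer; the embeddings `z` and the helper `link_score` are our own placeholders (random values standing in for GNN outputs), not part of any of the referenced implementations:
```
import torch

def link_score(z, node_i, node_j):
    # Dot-product similarity between two node embeddings, squashed to (0, 1)
    return torch.sigmoid((z[node_i] * z[node_j]).sum(dim=-1))

z = torch.randn(4, 16)        # hypothetical embeddings of 4 nodes produced by a GNN encoder
print(link_score(z, 0, 1))    # probability-like score for the candidate edge (0, 1)
```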
### Graph-level tasks: Graph classification
Finally, in this part of the tutorial, we will have a closer look at how to apply GNNs to the task of graph classification. The goal is to classify an entire graph instead of single nodes or edges. Therefore, we are also given a dataset of multiple graphs that we need to classify based on some structural graph properties. The most common task for graph classification is molecular property prediction, in which molecules are represented as graphs. Each atom corresponds to a node, and edges in the graph are the bonds between atoms. For example, look at the figure below.
<center width="100%"><img src="molecule_graph.svg" width="600px"></center>
On the left, we have an arbitrary, small molecule with different atoms, whereas the right part of the image shows the graph representation. The atom types are abstracted as node features (e.g. a one-hot vector), and the different bond types are used as edge features. For simplicity, we will neglect the edge attributes in this tutorial, but you can include them by using methods like the [Relational Graph Convolution](https://arxiv.org/abs/1703.06103) that use a different weight matrix for each edge type.
The dataset we will use below is called the MUTAG dataset. It is a common small benchmark for graph classification algorithms, and contains 188 graphs with 18 nodes and 20 edges on average per graph. The graph nodes have 7 different labels/atom types, and the binary graph labels represent "their mutagenic effect on a specific gram negative bacterium" (the specific meaning of the labels is not too important here). The dataset is part of a large collection of different graph classification datasets, known as the [TUDatasets](https://chrsmrrs.github.io/datasets/), which is directly accessible via `torch_geometric.datasets.TUDataset` ([documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.html#torch_geometric.datasets.TUDataset)) in PyTorch Geometric. We can load the dataset below.
```
tu_dataset = torch_geometric.datasets.TUDataset(root=DATASET_PATH, name="MUTAG")
```
Let's look at some statistics for the dataset:
```
print("Data object:", tu_dataset.data)
print("Length:", len(tu_dataset))
print(f"Average label: {tu_dataset.data.y.float().mean().item():4.2f}")
```
The first line shows how the dataset stores different graphs. The nodes, edges, and labels of each graph are concatenated into one tensor, and the dataset stores the indices where to split the tensors correspondingly. The length of the dataset is the number of graphs we have, and the "average label" denotes the fraction of graphs with label 1. As long as this fraction is around 0.5, we have a relatively balanced dataset. It happens quite often that graph datasets are very imbalanced, hence checking the class balance is always a good thing to do.
Next, we will split our dataset into a training and a test part. Note that we do not use a separate validation set this time because of the small size of the dataset. Therefore, our model might overfit slightly to the test set, which we also use for validation, due to the noise of the evaluation, but we still get an estimate of the performance on untrained data.
```
torch.manual_seed(42)
tu_dataset.shuffle()
train_dataset = tu_dataset[:150]
test_dataset = tu_dataset[150:]
```
When using a data loader, we encounter a problem with batching $N$ graphs. Each graph in the batch can have a different number of nodes and edges, and hence we would require a lot of padding to obtain a single tensor. PyTorch Geometric uses a different, more efficient approach: we can view the $N$ graphs in a batch as a single large graph with concatenated node and edge lists. As there are no edges between the $N$ graphs, running GNN layers on the large graph gives us the same output as running the GNN on each graph separately. This batching strategy is visualized below (figure credit - PyTorch Geometric team, [tutorial here](https://colab.research.google.com/drive/1I8a0DfQ3fI7Njc62__mVXUlcAleUclnb?usp=sharing#scrollTo=2owRWKcuoALo)).
<center width="100%"><img src="torch_geometric_stacking_graphs.png" width="600px"></center>
The adjacency matrix is zero for any pair of nodes that come from two different graphs, and otherwise follows the adjacency matrix of the individual graph. Luckily, this strategy is already implemented in PyTorch Geometric, and hence we can use the corresponding data loader:
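To make the batching mechanics explicit, here is a small sketch (with toy graphs of our own) that stacks two graphs via `Batch.from_data_list` and shows how the node indices of the second graph are shifted:
```
import torch
from torch_geometric.data import Data, Batch

# Two toy graphs: 3 nodes and 2 nodes, each with 2 node features
g1 = Data(x=torch.randn(3, 2), edge_index=torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]]))
g2 = Data(x=torch.randn(2, 2), edge_index=torch.tensor([[0, 1], [1, 0]]))

batch = Batch.from_data_list([g1, g2])
print(batch.edge_index)  # the edges of g2 now refer to nodes 3 and 4 of the merged graph
print(batch.batch)       # maps each node to its original graph: tensor([0, 0, 0, 1, 1])
```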
```
graph_train_loader = geom_data.DataLoader(train_dataset, batch_size=64, shuffle=True)
graph_val_loader = geom_data.DataLoader(test_dataset, batch_size=64) # Additional loader if you want to change to a larger dataset
graph_test_loader = geom_data.DataLoader(test_dataset, batch_size=64)
```
Let's load a batch below to see the batching in action:
```
batch = next(iter(graph_test_loader))
print("Batch:", batch)
print("Labels:", batch.y[:10])
print("Batch indices:", batch.batch[:40])
```
We have 38 graphs stacked together for the test dataset. The batch indices, stored in `batch`, show that the first 12 nodes belong to the first graph, the next 22 to the second graph, and so on. These indices are important for performing the final prediction. To perform a prediction over a whole graph, we usually apply a pooling operation over all nodes after running the GNN model. In this case, we will use average pooling. Hence, we need to know which nodes should be included in which average pool. Using this pooling, we can already create our graph network below. Specifically, we re-use our class `GNNModel` from before, and simply add an average pooling layer and a single linear layer for the graph prediction task.
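Before defining the model, here is a tiny sketch of what `geom_nn.global_mean_pool` does with such batch indices (the toy features below are our own):
```
import torch
import torch_geometric.nn as geom_nn

# global_mean_pool averages all node features that share the same batch index
node_feats = torch.arange(10, dtype=torch.float32).view(5, 2)  # 5 nodes with 2 features each
batch_idx = torch.tensor([0, 0, 0, 1, 1])                      # first 3 nodes -> graph 0, last 2 -> graph 1
pooled = geom_nn.global_mean_pool(node_feats, batch_idx)
print(pooled)  # row 0: mean of nodes 0-2, row 1: mean of nodes 3-4
```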
```
class GraphGNNModel(nn.Module):
def __init__(self, c_in, c_hidden, c_out, dp_rate_linear=0.5, **kwargs):
"""
Inputs:
c_in - Dimension of input features
c_hidden - Dimension of hidden features
c_out - Dimension of output features (usually number of classes)
dp_rate_linear - Dropout rate before the linear layer (usually much higher than inside the GNN)
kwargs - Additional arguments for the GNNModel object
"""
super().__init__()
self.GNN = GNNModel(c_in=c_in,
c_hidden=c_hidden,
c_out=c_hidden, # Not our prediction output yet!
**kwargs)
self.head = nn.Sequential(
nn.Dropout(dp_rate_linear),
nn.Linear(c_hidden, c_out)
)
def forward(self, x, edge_index, batch_idx):
"""
Inputs:
x - Input features per node
edge_index - List of vertex index pairs representing the edges in the graph (PyTorch geometric notation)
batch_idx - Index of batch element for each node
"""
x = self.GNN(x, edge_index)
x = geom_nn.global_mean_pool(x, batch_idx) # Average pooling
x = self.head(x)
return x
```
Finally, we can create a PyTorch Lightning module to handle the training. It is similar to the modules we have seen before and does nothing surprising in terms of training. As we have a binary classification task, we use the Binary Cross Entropy loss.
```
class GraphLevelGNN(pl.LightningModule):
def __init__(self, **model_kwargs):
super().__init__()
# Saving hyperparameters
self.save_hyperparameters()
self.model = GraphGNNModel(**model_kwargs)
self.loss_module = nn.BCEWithLogitsLoss() if self.hparams.c_out == 1 else nn.CrossEntropyLoss()
def forward(self, data, mode="train"):
x, edge_index, batch_idx = data.x, data.edge_index, data.batch
x = self.model(x, edge_index, batch_idx)
x = x.squeeze(dim=-1)
if self.hparams.c_out == 1:
preds = (x > 0).float()
data.y = data.y.float()
else:
preds = x.argmax(dim=-1)
loss = self.loss_module(x, data.y)
acc = (preds == data.y).sum().float() / preds.shape[0]
return loss, acc
def configure_optimizers(self):
optimizer = optim.AdamW(self.parameters(), lr=1e-2, weight_decay=0.0) # High lr because of small dataset and small model
return optimizer
def training_step(self, batch, batch_idx):
loss, acc = self.forward(batch, mode="train")
self.log('train_loss', loss)
self.log('train_acc', acc)
return loss
def validation_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="val")
self.log('val_acc', acc)
def test_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="test")
self.log('test_acc', acc)
```
Below we train the model on our dataset. It resembles the typical training functions we have seen so far.
```
def train_graph_classifier(model_name, **model_kwargs):
pl.seed_everything(42)
# Create a PyTorch Lightning trainer with the generation callback
root_dir = os.path.join(CHECKPOINT_PATH, "GraphLevel" + model_name)
os.makedirs(root_dir, exist_ok=True)
trainer = pl.Trainer(default_root_dir=root_dir,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
gpus=1 if str(device).startswith("cuda") else 0,
max_epochs=500,
progress_bar_refresh_rate=0)
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, f"GraphLevel{model_name}.ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model, loading...")
model = GraphLevelGNN.load_from_checkpoint(pretrained_filename)
else:
pl.seed_everything(42)
model = GraphLevelGNN(c_in=tu_dataset.num_node_features,
c_out=1 if tu_dataset.num_classes==2 else tu_dataset.num_classes,
**model_kwargs)
trainer.fit(model, graph_train_loader, graph_val_loader)
model = GraphLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
# Test best model on validation and test set
train_result = trainer.test(model, graph_train_loader, verbose=False)
test_result = trainer.test(model, graph_test_loader, verbose=False)
result = {"test": test_result[0]['test_acc'], "train": train_result[0]['test_acc']}
return model, result
```
Finally, let's perform the training and testing. Feel free to experiment with different GNN layers, hyperparameters, etc.
```
model, result = train_graph_classifier(model_name="GraphConv",
c_hidden=256,
layer_name="GraphConv",
num_layers=3,
dp_rate_linear=0.5,
dp_rate=0.0)
print(f"Train performance: {100.0*result['train']:4.2f}%")
print(f"Test performance: {100.0*result['test']:4.2f}%")
```
The test performance shows that we obtain quite good scores on an unseen part of the dataset. It should be noted that as we have been using the test set for validation as well, we might have overfitted slightly to this set. Nevertheless, the experiment shows us that GNNs can indeed be powerful for predicting the properties of graphs and/or molecules.
## Conclusion
In this tutorial, we have seen the application of neural networks to graph structures. We looked at how a graph can be represented (adjacency matrix or edge list), and discussed the implementation of common graph layers: GCN and GAT. The implementations showed the practical side of the layers, which is often easier than the theory. Finally, we experimented with different tasks, on node-, edge-, and graph-level. Overall, we have seen that including graph information in the predictions can be crucial for achieving high performance. There are a lot of applications that benefit from GNNs, and the importance of these networks will likely increase over the coming years.
|
github_jupyter
|
## Standard libraries
import os
import json
import math
import numpy as np
import time
## Imports for plotting
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf') # For export
from matplotlib.colors import to_rgb
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 2.0
import seaborn as sns
sns.reset_orig()
sns.set()
## Progress bar
from tqdm.notebook import tqdm
## PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import torch.optim as optim
# Torchvision
import torchvision
from torchvision.datasets import CIFAR10
from torchvision import transforms
# PyTorch Lightning
try:
import pytorch_lightning as pl
except ModuleNotFoundError: # Google Colab does not have PyTorch Lightning installed by default. Hence, we do it here if necessary
!pip install --quiet pytorch-lightning>=1.4
import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
# Path to the folder where the datasets are/should be downloaded (e.g. CIFAR10)
DATASET_PATH = "../data"
# Path to the folder where the pretrained models are saved
CHECKPOINT_PATH = "../saved_models/tutorial7"
# Setting the seed
pl.seed_everything(42)
# Ensure that all operations are deterministic on GPU (if used) for reproducibility
torch.backends.cudnn.determinstic = True
torch.backends.cudnn.benchmark = False
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
print(device)
import urllib.request
from urllib.error import HTTPError
# Github URL where saved models are stored for this tutorial
base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/tutorial7/"
# Files to download
pretrained_files = ["NodeLevelMLP.ckpt", "NodeLevelGNN.ckpt", "GraphLevelGraphConv.ckpt"]
# Create checkpoint path if it doesn't exist yet
os.makedirs(CHECKPOINT_PATH, exist_ok=True)
# For each file, check whether it already exists. If not, try downloading it.
for file_name in pretrained_files:
file_path = os.path.join(CHECKPOINT_PATH, file_name)
if "/" in file_name:
os.makedirs(file_path.rsplit("/",1)[0], exist_ok=True)
if not os.path.isfile(file_path):
file_url = base_url + file_name
print(f"Downloading {file_url}...")
try:
urllib.request.urlretrieve(file_url, file_path)
except HTTPError as e:
print("Something went wrong. Please try to download the file from the GDrive folder, or contact the author with the full output including the following error:\n", e)
class GCNLayer(nn.Module):
def __init__(self, c_in, c_out):
super().__init__()
self.projection = nn.Linear(c_in, c_out)
def forward(self, node_feats, adj_matrix):
"""
Inputs:
node_feats - Tensor with node features of shape [batch_size, num_nodes, c_in]
adj_matrix - Batch of adjacency matrices of the graph. If there is an edge from i to j, adj_matrix[b,i,j]=1 else 0.
Supports directed edges by non-symmetric matrices. Assumes to already have added the identity connections.
Shape: [batch_size, num_nodes, num_nodes]
"""
# Num neighbours = number of incoming edges
num_neighbours = adj_matrix.sum(dim=-1, keepdims=True)
node_feats = self.projection(node_feats)
node_feats = torch.bmm(adj_matrix, node_feats)
node_feats = node_feats / num_neighbours
return node_feats
node_feats = torch.arange(8, dtype=torch.float32).view(1, 4, 2)
adj_matrix = torch.Tensor([[[1, 1, 0, 0],
[1, 1, 1, 1],
[0, 1, 1, 1],
[0, 1, 1, 1]]])
print("Node features:\n", node_feats)
print("\nAdjacency matrix:\n", adj_matrix)
layer = GCNLayer(c_in=2, c_out=2)
layer.projection.weight.data = torch.Tensor([[1., 0.], [0., 1.]])
layer.projection.bias.data = torch.Tensor([0., 0.])
with torch.no_grad():
out_feats = layer(node_feats, adj_matrix)
print("Adjacency matrix", adj_matrix)
print("Input features", node_feats)
print("Output features", out_feats)
class GATLayer(nn.Module):
def __init__(self, c_in, c_out, num_heads=1, concat_heads=True, alpha=0.2):
"""
Inputs:
c_in - Dimensionality of input features
c_out - Dimensionality of output features
num_heads - Number of heads, i.e. attention mechanisms to apply in parallel. The
output features are equally split up over the heads if concat_heads=True.
concat_heads - If True, the output of the different heads is concatenated instead of averaged.
alpha - Negative slope of the LeakyReLU activation.
"""
super().__init__()
self.num_heads = num_heads
self.concat_heads = concat_heads
if self.concat_heads:
assert c_out % num_heads == 0, "Number of output features must be a multiple of the count of heads."
c_out = c_out // num_heads
# Sub-modules and parameters needed in the layer
self.projection = nn.Linear(c_in, c_out * num_heads)
self.a = nn.Parameter(torch.Tensor(num_heads, 2 * c_out)) # One per head
self.leakyrelu = nn.LeakyReLU(alpha)
# Initialization from the original implementation
nn.init.xavier_uniform_(self.projection.weight.data, gain=1.414)
nn.init.xavier_uniform_(self.a.data, gain=1.414)
def forward(self, node_feats, adj_matrix, print_attn_probs=False):
"""
Inputs:
node_feats - Input features of the node. Shape: [batch_size, c_in]
adj_matrix - Adjacency matrix including self-connections. Shape: [batch_size, num_nodes, num_nodes]
print_attn_probs - If True, the attention weights are printed during the forward pass (for debugging purposes)
"""
batch_size, num_nodes = node_feats.size(0), node_feats.size(1)
# Apply linear layer and sort nodes by head
node_feats = self.projection(node_feats)
node_feats = node_feats.view(batch_size, num_nodes, self.num_heads, -1)
# We need to calculate the attention logits for every edge in the adjacency matrix
# Doing this on all possible combinations of nodes is very expensive
# => Create a tensor of [W*h_i||W*h_j] with i and j being the indices of all edges
edges = adj_matrix.nonzero(as_tuple=False) # Returns indices where the adjacency matrix is not 0 => edges
node_feats_flat = node_feats.view(batch_size * num_nodes, self.num_heads, -1)
edge_indices_row = edges[:,0] * num_nodes + edges[:,1]
edge_indices_col = edges[:,0] * num_nodes + edges[:,2]
a_input = torch.cat([
torch.index_select(input=node_feats_flat, index=edge_indices_row, dim=0),
torch.index_select(input=node_feats_flat, index=edge_indices_col, dim=0)
], dim=-1) # Index select returns a tensor with node_feats_flat being indexed at the desired positions along dim=0
# Calculate attention MLP output (independent for each head)
attn_logits = torch.einsum('bhc,hc->bh', a_input, self.a)
attn_logits = self.leakyrelu(attn_logits)
# Map list of attention values back into a matrix
attn_matrix = attn_logits.new_zeros(adj_matrix.shape+(self.num_heads,)).fill_(-9e15)
attn_matrix[adj_matrix[...,None].repeat(1,1,1,self.num_heads) == 1] = attn_logits.reshape(-1)
# Weighted average of attention
attn_probs = F.softmax(attn_matrix, dim=2)
if print_attn_probs:
print("Attention probs\n", attn_probs.permute(0, 3, 1, 2))
node_feats = torch.einsum('bijh,bjhc->bihc', attn_probs, node_feats)
# If heads should be concatenated, we can do this by reshaping. Otherwise, take mean
if self.concat_heads:
node_feats = node_feats.reshape(batch_size, num_nodes, -1)
else:
node_feats = node_feats.mean(dim=2)
return node_feats
layer = GATLayer(2, 2, num_heads=2)
layer.projection.weight.data = torch.Tensor([[1., 0.], [0., 1.]])
layer.projection.bias.data = torch.Tensor([0., 0.])
layer.a.data = torch.Tensor([[-0.2, 0.3], [0.1, -0.1]])
with torch.no_grad():
out_feats = layer(node_feats, adj_matrix, print_attn_probs=True)
print("Adjacency matrix", adj_matrix)
print("Input features", node_feats)
print("Output features", out_feats)
# torch geometric
try:
import torch_geometric
except ModuleNotFoundError:
# Installing torch geometric packages with specific CUDA+PyTorch version.
# See https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html for details
TORCH = torch.__version__.split('+')[0]
CUDA = 'cu' + torch.version.cuda.replace('.','')
!pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-geometric
import torch_geometric
import torch_geometric.nn as geom_nn
import torch_geometric.data as geom_data
gnn_layer_by_name = {
"GCN": geom_nn.GCNConv,
"GAT": geom_nn.GATConv,
"GraphConv": geom_nn.GraphConv
}
cora_dataset = torch_geometric.datasets.Planetoid(root=DATASET_PATH, name="Cora")
cora_dataset[0]
class GNNModel(nn.Module):
def __init__(self, c_in, c_hidden, c_out, num_layers=2, layer_name="GCN", dp_rate=0.1, **kwargs):
"""
Inputs:
c_in - Dimension of input features
c_hidden - Dimension of hidden features
c_out - Dimension of the output features. Usually number of classes in classification
num_layers - Number of "hidden" graph layers
layer_name - String of the graph layer to use
dp_rate - Dropout rate to apply throughout the network
kwargs - Additional arguments for the graph layer (e.g. number of heads for GAT)
"""
super().__init__()
gnn_layer = gnn_layer_by_name[layer_name]
layers = []
in_channels, out_channels = c_in, c_hidden
for l_idx in range(num_layers-1):
layers += [
gnn_layer(in_channels=in_channels,
out_channels=out_channels,
**kwargs),
nn.ReLU(inplace=True),
nn.Dropout(dp_rate)
]
in_channels = c_hidden
layers += [gnn_layer(in_channels=in_channels,
out_channels=c_out,
**kwargs)]
self.layers = nn.ModuleList(layers)
def forward(self, x, edge_index):
"""
Inputs:
x - Input features per node
edge_index - List of vertex index pairs representing the edges in the graph (PyTorch geometric notation)
"""
for l in self.layers:
# For graph layers, we need to add the "edge_index" tensor as additional input
            # All PyTorch Geometric graph layers inherit from the class "MessagePassing", hence
# we can simply check the class type.
if isinstance(l, geom_nn.MessagePassing):
x = l(x, edge_index)
else:
x = l(x)
return x
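# Usage sketch (added, illustrative only; training is handled by the Lightning module below):
#   model = GNNModel(c_in=cora_dataset.num_node_features, c_hidden=16, c_out=cora_dataset.num_classes)
#   out = model(cora_dataset[0].x, cora_dataset[0].edge_index)  # -> [num_nodes, num_classes] logits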
class MLPModel(nn.Module):
def __init__(self, c_in, c_hidden, c_out, num_layers=2, dp_rate=0.1):
"""
Inputs:
c_in - Dimension of input features
c_hidden - Dimension of hidden features
c_out - Dimension of the output features. Usually number of classes in classification
num_layers - Number of hidden layers
dp_rate - Dropout rate to apply throughout the network
"""
super().__init__()
layers = []
in_channels, out_channels = c_in, c_hidden
for l_idx in range(num_layers-1):
layers += [
nn.Linear(in_channels, out_channels),
nn.ReLU(inplace=True),
nn.Dropout(dp_rate)
]
in_channels = c_hidden
layers += [nn.Linear(in_channels, c_out)]
self.layers = nn.Sequential(*layers)
def forward(self, x, *args, **kwargs):
"""
Inputs:
x - Input features per node
"""
return self.layers(x)
class NodeLevelGNN(pl.LightningModule):
def __init__(self, model_name, **model_kwargs):
super().__init__()
# Saving hyperparameters
self.save_hyperparameters()
if model_name == "MLP":
self.model = MLPModel(**model_kwargs)
else:
self.model = GNNModel(**model_kwargs)
self.loss_module = nn.CrossEntropyLoss()
def forward(self, data, mode="train"):
x, edge_index = data.x, data.edge_index
x = self.model(x, edge_index)
# Only calculate the loss on the nodes corresponding to the mask
if mode == "train":
mask = data.train_mask
elif mode == "val":
mask = data.val_mask
elif mode == "test":
mask = data.test_mask
else:
assert False, f"Unknown forward mode: {mode}"
loss = self.loss_module(x[mask], data.y[mask])
acc = (x[mask].argmax(dim=-1) == data.y[mask]).sum().float() / mask.sum()
return loss, acc
def configure_optimizers(self):
# We use SGD here, but Adam works as well
optimizer = optim.SGD(self.parameters(), lr=0.1, momentum=0.9, weight_decay=2e-3)
return optimizer
def training_step(self, batch, batch_idx):
loss, acc = self.forward(batch, mode="train")
self.log('train_loss', loss)
self.log('train_acc', acc)
return loss
def validation_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="val")
self.log('val_acc', acc)
def test_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="test")
self.log('test_acc', acc)
def train_node_classifier(model_name, dataset, **model_kwargs):
pl.seed_everything(42)
node_data_loader = geom_data.DataLoader(dataset, batch_size=1)
# Create a PyTorch Lightning trainer with the generation callback
root_dir = os.path.join(CHECKPOINT_PATH, "NodeLevel" + model_name)
os.makedirs(root_dir, exist_ok=True)
trainer = pl.Trainer(default_root_dir=root_dir,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
gpus=1 if str(device).startswith("cuda") else 0,
max_epochs=200,
progress_bar_refresh_rate=0) # 0 because epoch size is 1
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, f"NodeLevel{model_name}.ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model, loading...")
model = NodeLevelGNN.load_from_checkpoint(pretrained_filename)
else:
pl.seed_everything()
model = NodeLevelGNN(model_name=model_name, c_in=dataset.num_node_features, c_out=dataset.num_classes, **model_kwargs)
trainer.fit(model, node_data_loader, node_data_loader)
model = NodeLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
# Test best model on the test set
test_result = trainer.test(model, node_data_loader, verbose=False)
batch = next(iter(node_data_loader))
batch = batch.to(model.device)
_, train_acc = model.forward(batch, mode="train")
_, val_acc = model.forward(batch, mode="val")
result = {"train": train_acc,
"val": val_acc,
"test": test_result[0]['test_acc']}
return model, result
# Small function for printing the test scores
def print_results(result_dict):
if "train" in result_dict:
print(f"Train accuracy: {(100.0*result_dict['train']):4.2f}%")
if "val" in result_dict:
print(f"Val accuracy: {(100.0*result_dict['val']):4.2f}%")
print(f"Test accuracy: {(100.0*result_dict['test']):4.2f}%")
node_mlp_model, node_mlp_result = train_node_classifier(model_name="MLP",
dataset=cora_dataset,
c_hidden=16,
num_layers=2,
dp_rate=0.1)
print_results(node_mlp_result)
node_gnn_model, node_gnn_result = train_node_classifier(model_name="GNN",
layer_name="GCN",
dataset=cora_dataset,
c_hidden=16,
num_layers=2,
dp_rate=0.1)
print_results(node_gnn_result)
tu_dataset = torch_geometric.datasets.TUDataset(root=DATASET_PATH, name="MUTAG")
print("Data object:", tu_dataset.data)
print("Length:", len(tu_dataset))
print(f"Average label: {tu_dataset.data.y.float().mean().item():4.2f}")
torch.manual_seed(42)
tu_dataset = tu_dataset.shuffle()  # shuffle() returns a shuffled copy, so we need to reassign it
train_dataset = tu_dataset[:150]
test_dataset = tu_dataset[150:]
graph_train_loader = geom_data.DataLoader(train_dataset, batch_size=64, shuffle=True)
graph_val_loader = geom_data.DataLoader(test_dataset, batch_size=64) # Additional loader if you want to change to a larger dataset
graph_test_loader = geom_data.DataLoader(test_dataset, batch_size=64)
batch = next(iter(graph_test_loader))
print("Batch:", batch)
print("Labels:", batch.y[:10])
print("Batch indices:", batch.batch[:40])
class GraphGNNModel(nn.Module):
def __init__(self, c_in, c_hidden, c_out, dp_rate_linear=0.5, **kwargs):
"""
Inputs:
c_in - Dimension of input features
c_hidden - Dimension of hidden features
c_out - Dimension of output features (usually number of classes)
dp_rate_linear - Dropout rate before the linear layer (usually much higher than inside the GNN)
kwargs - Additional arguments for the GNNModel object
"""
super().__init__()
self.GNN = GNNModel(c_in=c_in,
c_hidden=c_hidden,
c_out=c_hidden, # Not our prediction output yet!
**kwargs)
self.head = nn.Sequential(
nn.Dropout(dp_rate_linear),
nn.Linear(c_hidden, c_out)
)
def forward(self, x, edge_index, batch_idx):
"""
Inputs:
x - Input features per node
edge_index - List of vertex index pairs representing the edges in the graph (PyTorch geometric notation)
batch_idx - Index of batch element for each node
"""
x = self.GNN(x, edge_index)
x = geom_nn.global_mean_pool(x, batch_idx) # Average pooling
x = self.head(x)
return x
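# Note (added): geom_nn.global_mean_pool averages the node embeddings per graph (grouped by
# batch_idx), so the classification head receives exactly one embedding vector per graph.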
class GraphLevelGNN(pl.LightningModule):
def __init__(self, **model_kwargs):
super().__init__()
# Saving hyperparameters
self.save_hyperparameters()
self.model = GraphGNNModel(**model_kwargs)
self.loss_module = nn.BCEWithLogitsLoss() if self.hparams.c_out == 1 else nn.CrossEntropyLoss()
def forward(self, data, mode="train"):
x, edge_index, batch_idx = data.x, data.edge_index, data.batch
x = self.model(x, edge_index, batch_idx)
x = x.squeeze(dim=-1)
if self.hparams.c_out == 1:
preds = (x > 0).float()
data.y = data.y.float()
else:
preds = x.argmax(dim=-1)
loss = self.loss_module(x, data.y)
acc = (preds == data.y).sum().float() / preds.shape[0]
return loss, acc
def configure_optimizers(self):
optimizer = optim.AdamW(self.parameters(), lr=1e-2, weight_decay=0.0) # High lr because of small dataset and small model
return optimizer
def training_step(self, batch, batch_idx):
loss, acc = self.forward(batch, mode="train")
self.log('train_loss', loss)
self.log('train_acc', acc)
return loss
def validation_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="val")
self.log('val_acc', acc)
def test_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="test")
self.log('test_acc', acc)
def train_graph_classifier(model_name, **model_kwargs):
pl.seed_everything(42)
# Create a PyTorch Lightning trainer with the generation callback
root_dir = os.path.join(CHECKPOINT_PATH, "GraphLevel" + model_name)
os.makedirs(root_dir, exist_ok=True)
trainer = pl.Trainer(default_root_dir=root_dir,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
gpus=1 if str(device).startswith("cuda") else 0,
max_epochs=500,
progress_bar_refresh_rate=0)
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, f"GraphLevel{model_name}.ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model, loading...")
model = GraphLevelGNN.load_from_checkpoint(pretrained_filename)
else:
pl.seed_everything(42)
model = GraphLevelGNN(c_in=tu_dataset.num_node_features,
c_out=1 if tu_dataset.num_classes==2 else tu_dataset.num_classes,
**model_kwargs)
trainer.fit(model, graph_train_loader, graph_val_loader)
model = GraphLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
# Test best model on validation and test set
train_result = trainer.test(model, graph_train_loader, verbose=False)
test_result = trainer.test(model, graph_test_loader, verbose=False)
result = {"test": test_result[0]['test_acc'], "train": train_result[0]['test_acc']}
return model, result
model, result = train_graph_classifier(model_name="GraphConv",
c_hidden=256,
layer_name="GraphConv",
num_layers=3,
dp_rate_linear=0.5,
dp_rate=0.0)
print(f"Train performance: {100.0*result['train']:4.2f}%")
print(f"Test performance: {100.0*result['test']:4.2f}%")
# Analyzing the relationship between literary topics and literary prestige across time
We will try to answer three questions:
- Does the distribution of topics differ between prestigious and non-prestigious writers?
- How does a prestigious author's engagement with certain topics change before and after their literary consecration?
- Does the relationship between topics and prestige change over time?
## 1. Data preparation
### 1.1. Load raw data for author selection
```
import pandas as pd
import pickle
with open('data/preprocessed_dump.pkl', 'rb') as fp:
processed_content = pickle.load(fp)
preproc = {}
for folder in processed_content.keys():
for i, entry in enumerate(processed_content[folder]):
preproc[folder + '-' + str(i)] = '. '.join(entry['filtered_content'])
original = {}
for folder in processed_content.keys():
for i, entry in enumerate(processed_content[folder]):
original[folder + '-' + str(i)] = entry['content']
ori_df = pd.DataFrame.from_dict(original, orient="index")
ori_df['year']=list(ori_df.index.str[:4])
ori_df['year']=ori_df['year'].astype(int)
ori_df['no_space'] = ori_df[0].apply(lambda x: x.replace(' ', ''))
```
We create a simple heuristic to determine whether a text was written by a given author: we check whether the author's name appears within the first 1,000 characters of the text (none of the names are common enough to be used for a character).
```
def is_author_present(author, text):
if author in text and text.find(author) < 1000:
return True
else:
return False
ori_df[ori_df['no_space'].apply(lambda x: is_author_present('리신현', x))]
```
We now create a list of prestigious authors, along with the year of their 'canonization' (their admission into the 4.15 LPU).
We'll print the number of short stories we have for each writer.
```
author_by_year = {'백남룡': 1985,
'리신현': 1998,
'김삼복': 1998,
'윤경찬': 2013,
'안동춘': 1990,
'탁숙본': 2013,
'백보흠': 1989,
'정기종': 1992,
'김수경': 1989,
'림봉철': 2015,
'리동구': 2002,
'박태수': 2000,
'전흥식': 2012,
'최영조': 2011,
'윤정길': 2014,
'진재환': 1984,
'조권일': 2018,
'허춘식': 1997,
'권정웅': 1969,
'김병훈': 1969,
'남대현': 1992,
'리종렬': 1981,
'박룡운': 2001,
'석윤기': 1969,
'최학수': 1976,
'현승걸': 1976,
}
for author, year in author_by_year.items():
print(f'{author}\t{year}\t{sum(ori_df["no_space"].apply(lambda x: is_author_present(author, x)))}')
```
### 1.2. Rerun topic modelling
```
import pickle
with open('data/preprocessed_dump.pkl', 'rb') as fp:
processed_content = pickle.load(fp)
import re
df = pd.DataFrame.from_dict(preproc, orient="index")
def display_topics(model, feature_names, no_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic %d:" % (topic_idx))
print(", ".join([feature_names[i] for i in topic.argsort()[:-no_top_words - 1:-1]]))
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
vectorizer = TfidfVectorizer(max_df=0.75, min_df=10, max_features=3000, stop_words=['로이', '하', '는', '은', '를', '가', '이', '을', '하시', '머니', '리당', ''])
X = vectorizer.fit_transform([re.sub(r'\.*께', '', _) for _ in df[0].to_list()])
nmf = NMF(n_components=7, init = 'nndsvd').fit(X)
display_topics(nmf, vectorizer.get_feature_names(), 50)
nmf_output = nmf.transform(X)
```
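Each row of `nmf_output` contains the (non-negative) topic weights of one short story. As a quick sanity check (a small illustrative snippet added here, assuming the cell above has been run):
```
print(nmf_output.shape)        # (number of stories, 7 topics)
print(nmf_output[0].round(2))  # topic weights of the first story
```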
## 2. Topic distribution for canonized authors (before and after canonization)
```
indices_stars = []
for author in author_by_year.keys():
indices_stars += [i for i, found in enumerate(list(ori_df['no_space'].apply(lambda x: is_author_present(author, x)))) if found]
import numpy as np
topics = ['Education', 'Leaders (1)', 'Family', 'Leaders (2)', 'Industry', 'Agriculture', 'Military']
topic_res = pd.DataFrame(np.round(nmf_output[indices_stars], 2), columns=topics)
topic_res['Leaders'] = topic_res['Leaders (1)'] + topic_res['Leaders (2)']
topic_res.drop([ 'Leaders (1)', 'Leaders (2)'], inplace=True, axis=1)
topic_res['dominant_topic'] = [topic_res.columns[max] for max in np.argmax(topic_res.values, axis=1)]
topic_res
topic_res['dominant_topic'].value_counts()
```
### 2.1. Distribution by dominant topic
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.font_manager as fm
matplotlib.rc('font', family=fm.FontProperties(fname='C:\\Windows\\Fonts\\arial.ttf').get_name(), size=22)
topic_columns = list(topic_res.columns)
topic_columns.pop(topic_columns.index('dominant_topic'))
colors_gender = ['#fc8d59','#d73027','#fee090','#e0f3f8','#4575b4','#91bfdb']
ax=topic_res['dominant_topic'].value_counts().plot.pie(colors=colors_gender, autopct='%1.1f%%', title='', figsize=(8,8), startangle=90, pctdistance=0.82, explode=len(topic_columns)*[0.05])
ax.set_ylabel('')
centre_circle = plt.Circle((0,0),0.67,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
```
### 2.2. Distribution by topic weight
```
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.font_manager as fm
matplotlib.rc('font', family=fm.FontProperties(fname='C:\\Windows\\Fonts\\arial.ttf').get_name(), size=22)
colors_gender = ['#fc8d59','#d73027','#e0f3f8', '#fee090','#4575b4','#91bfdb']
ax=topic_res[topic_columns].sum().sort_values(ascending=False).plot.pie(colors=colors_gender, autopct='%1.1f%%', title='', figsize=(8,8), startangle=90, pctdistance=0.82, explode=len(topic_columns)*[0.05])
ax.set_ylabel('')
centre_circle = plt.Circle((0,0),0.67,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
```
## 3. Topic distribution for canonized writers before canonization
We will consider the dominant topic for each short story
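As a minimal illustration of what 'dominant topic' means here (the weights below are made up, not taken from the corpus):
```
import numpy as np
example_weights = np.array([0.05, 0.40, 0.10, 0.20, 0.05, 0.15, 0.05])  # hypothetical NMF weights for one story
example_topics = ['Education', 'Leaders (1)', 'Family', 'Leaders (2)', 'Industry', 'Agriculture', 'Military']
print(example_topics[int(np.argmax(example_weights))])  # -> Leaders (1)
```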
```
indices_before_stars = []
for author, year in author_by_year.items():
author_present = ori_df['no_space'].apply(lambda x: is_author_present(author, x))
before_famous = ori_df['year'] < year
indices_before_stars += [i for i, found in enumerate(list(author_present & before_famous)) if found]
import numpy as np
topics = ['Education', 'Leaders (1)', 'Family', 'Leaders (2)', 'Industry *', 'Agriculture', 'Military *']
topic_res = pd.DataFrame(np.round(nmf_output[indices_before_stars], 2), columns=topics)
topic_res['Leaders'] = topic_res['Leaders (1)'] + topic_res['Leaders (2)']
topic_res.drop([ 'Leaders (1)', 'Leaders (2)'], inplace=True, axis=1)
topic_res['dominant_topic'] = [topic_res.columns[max] for max in np.argmax(topic_res.values, axis=1)]
topic_res
```
### 3.1. Charting the distribution
```
topic_res['dominant_topic'].value_counts()
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.font_manager as fm
matplotlib.rc('font', family=fm.FontProperties(fname='C:\\Windows\\Fonts\\arial.ttf').get_name(), size=22)
topic_columns = list(topic_res.columns)
topic_columns.pop(topic_columns.index('dominant_topic'))
colors_gender = ['#fc8d59','#fee090','#4575b4','#d73027','#e0f3f8','#91bfdb']
ax=topic_res['dominant_topic'].value_counts().plot.pie(colors=colors_gender, autopct='%1.1f%%', title='', figsize=(8,8), startangle=90, pctdistance=0.82, explode=len(topic_columns)*[0.05])
ax.set_ylabel('')
centre_circle = plt.Circle((0,0),0.67,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
```
### 3.2. Assessing significance of differences with permutation test
We separate the corpus into the subgroup we are interested in (canonized writers before canonization) and the rest. For each topic, we then compare the share of stories whose dominant topic is that topic across the two groups and estimate a one-sided achieved significance level (`hat_asl_perm`) by repeatedly shuffling the pooled group labels.
```
indices_not_stars = [i for i in range(len(ori_df)) if i not in indices_before_stars]
```
We merge the two leader topics into a single 'Leaders' column.
```
topics_before_stars = np.column_stack([np.round(nmf_output[indices_before_stars], 2)[:,0], np.round(nmf_output[indices_before_stars], 2)[:,1] + np.round(nmf_output[indices_before_stars], 2)[:,3], np.round(nmf_output[indices_before_stars], 2)[:,2], np.round(nmf_output[indices_before_stars], 2)[:,4:]])
topics_not_stars = np.column_stack([np.round(nmf_output[indices_not_stars], 2)[:,0], np.round(nmf_output[indices_not_stars], 2)[:,1] + np.round(nmf_output[indices_not_stars], 2)[:,3], np.round(nmf_output[indices_not_stars], 2)[:,2], np.round(nmf_output[indices_not_stars], 2)[:,4:]])
# http://www2.stat.duke.edu/~ar182/rr/examples-gallery/PermutationTest.html
def run_permutation_test(pooled,sizeZ,sizeY,delta):
np.random.shuffle(pooled)
starZ = pooled[:sizeZ]
starY = pooled[-sizeY:]
return starZ.mean() - starY.mean()
# one hot encoding of dominant topic
def dominant_as_binary_array(a):
b = np.zeros_like(a)
b[np.arange(len(a)), a.argmax(1)] = 1
return b
#dominant_as_binary_array(topics_before_stars)
np.random.seed(7)
reframed_topics = ['Education', 'Leaders', 'Family', 'Industry', 'Agriculture', 'Military']
for i, topic in enumerate(reframed_topics):
print(f"TOPIC: {topic}")
z = dominant_as_binary_array(topics_before_stars)[:,i]
y = dominant_as_binary_array(topics_not_stars)[:,i]
#z = (topics_before_stars)[:,i]
#y = (topics_not_stars)[:,i]
theta_hat = z.mean() - y.mean()
print(f"theta_hat: {theta_hat}")
pooled = np.hstack([z,y])
delta = z.mean() - y.mean()
numSamples = 1000
estimates = np.array(list(map(lambda x: run_permutation_test(pooled,z.size,y.size,delta),range(numSamples))))
diffCount = len(np.where(estimates <=delta)[0])
hat_asl_perm = 1.0 - (float(diffCount)/float(numSamples))
print(f"hat_asl_perm: {hat_asl_perm}")
print('-'*20)
```
## 4. Topic distribution for canonized writers after canonization
### 4.1. Counting the number of stories concerned
```
for author, year in author_by_year.items():
print(f"{author} : {sum(ori_df[ori_df['year'] >= year]['no_space'].apply(lambda x: is_author_present(author, x)))}")
indices_after_stars = []
for author, year in author_by_year.items():
author_present = ori_df['no_space'].apply(lambda x: is_author_present(author, x))
after_famous = ori_df['year'] >= year
indices_after_stars += [i for i, found in enumerate(list(author_present & after_famous)) if found]
len(indices_after_stars)
import numpy as np
topics = ['Education', 'Leaders (1)', 'Family', 'Leaders (2)', 'Industry', 'Agriculture', 'Military']
topic_res = pd.DataFrame(np.round(nmf_output[indices_after_stars], 2), columns=topics)
topic_res['Leaders *'] = topic_res['Leaders (1)'] + topic_res['Leaders (2)']
topic_res.drop([ 'Leaders (1)', 'Leaders (2)'], inplace=True, axis=1)
topic_res['dominant_topic'] = [topic_res.columns[max] for max in np.argmax(topic_res.values, axis=1)]
```
### 4.2. Charting the topic distribution
```
topic_res['dominant_topic'].value_counts()
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.font_manager as fm
matplotlib.rc('font', family=fm.FontProperties(fname='C:\\Windows\\Fonts\\arial.ttf').get_name(), size=22)
colors_gender = ['#d73027','#fc8d59','#fee090','#4575b4','#e0f3f8','#91bfdb']
ax=topic_res['dominant_topic'].value_counts().plot.pie(colors=colors_gender, autopct='%1.1f%%', title='', figsize=(8,8), startangle=90, pctdistance=0.82, explode=len(topic_columns)*[0.05])
ax.set_ylabel('')
centre_circle = plt.Circle((0,0),0.67,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
topics_after_stars = np.column_stack([np.round(nmf_output[indices_after_stars], 2)[:,0], np.round(nmf_output[indices_after_stars], 2)[:,1] + np.round(nmf_output[indices_after_stars], 2)[:,3], np.round(nmf_output[indices_after_stars], 2)[:,2], np.round(nmf_output[indices_after_stars], 2)[:,4:]])
indices_not_after_stars = [i for i in range(len(ori_df)) if i not in indices_after_stars]
topics_not_after_stars = np.column_stack([np.round(nmf_output[indices_not_after_stars], 2)[:,0], np.round(nmf_output[indices_not_after_stars], 2)[:,1] + np.round(nmf_output[indices_not_after_stars], 2)[:,3], np.round(nmf_output[indices_not_after_stars], 2)[:,2], np.round(nmf_output[indices_not_after_stars], 2)[:,4:]])
```
### 4.3. Assessing significance of differences with permutation test
```
np.random.seed(7)
reframed_topics = ['Education', 'Leaders', 'Family', 'Industry', 'Agriculture', 'Military']
for i, topic in enumerate(reframed_topics):
print(f"TOPIC: {topic}")
z = dominant_as_binary_array(topics_after_stars)[:,i]
y = dominant_as_binary_array(topics_not_after_stars)[:,i]
#z = (topics_after_stars)[:,i]
#y = (topics_not_after_stars)[:,i]
theta_hat = z.mean() - y.mean()
print(f"theta_hat: {theta_hat}")
pooled = np.hstack([z,y])
delta = z.mean() - y.mean()
numSamples = 1000
estimates = np.array(list(map(lambda x: run_permutation_test(pooled,z.size,y.size,delta),range(numSamples))))
diffCount = len(np.where(estimates <=delta)[0])
hat_asl_perm = 1.0 - (float(diffCount)/float(numSamples))
print(f"hat_asl_perm: {hat_asl_perm}")
print('-'*20)
```
## 5. Topic distribution for the new guard (authors canonized in 1991 or later) before canonization
```
indices_young_before_stars = []
for author, year in author_by_year.items():
if year < 1991:
continue
author_present = ori_df['no_space'].apply(lambda x: is_author_present(author, x))
before_famous = ori_df['year'] < year
indices_young_before_stars += [i for i, found in enumerate(list(author_present & before_famous)) if found]
len(indices_young_before_stars)
import numpy as np
topics = ['Education', 'Leaders (1)', 'Family', 'Leaders (2)', 'Industry', 'Agriculture *', 'Military *']
topic_res = pd.DataFrame(np.round(nmf_output[indices_young_before_stars], 2), columns=topics)
topic_res['Leaders'] = topic_res['Leaders (1)'] + topic_res['Leaders (2)']
topic_res.drop([ 'Leaders (1)', 'Leaders (2)'], inplace=True, axis=1)
topic_res['dominant_topic'] = [topic_res.columns[max] for max in np.argmax(topic_res.values, axis=1)]
```
### 5.1. Charting topic distribution for the new guard
```
topic_res['dominant_topic'].value_counts().reindex(topic_res.dominant_topic.unique(), fill_value=0)
topic_columns
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.font_manager as fm
matplotlib.rc('font', family=fm.FontProperties(fname='C:\\Windows\\Fonts\\arial.ttf').get_name(), size=22)
topic_columns = list(topic_res.columns)
topic_columns.pop(topic_columns.index('dominant_topic'))
colors_gender = ['#fee090','#fc8d59','#4575b4','#d73027','#e0f3f8','#91bfdb']
ax=topic_res['dominant_topic'].value_counts().plot.pie(colors=colors_gender, autopct='%1.1f%%', title='', figsize=(8,8), startangle=90, pctdistance=0.82, explode=len(topic_columns)*[0.05])
ax.set_ylabel('')
centre_circle = plt.Circle((0,0),0.67,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
```
### 5.2. Data for statistical significance study (used later)
```
topics_young_before_stars = np.column_stack([np.round(nmf_output[indices_young_before_stars], 2)[:,0], np.round(nmf_output[indices_young_before_stars], 2)[:,1] + np.round(nmf_output[indices_young_before_stars], 2)[:,3], np.round(nmf_output[indices_young_before_stars], 2)[:,2], np.round(nmf_output[indices_young_before_stars], 2)[:,4:]])
indices_not_young_before_stars = [i for i in range(len(ori_df)) if i not in indices_young_before_stars]
topics_not_young_before_stars = np.column_stack([np.round(nmf_output[indices_not_young_before_stars], 2)[:,0], np.round(nmf_output[indices_not_young_before_stars], 2)[:,1] + np.round(nmf_output[indices_not_young_before_stars], 2)[:,3], np.round(nmf_output[indices_not_young_before_stars], 2)[:,2], np.round(nmf_output[indices_not_young_before_stars], 2)[:,4:]])
```
## 6. Topic distribution for the old guard (authors canonized before 1991) before canonization
```
indices_old_before_stars = []
for author, year in author_by_year.items():
if year >= 1991:
continue
author_present = ori_df['no_space'].apply(lambda x: is_author_present(author, x))
before_famous = ori_df['year'] < year
indices_old_before_stars += [i for i, found in enumerate(list(author_present & before_famous)) if found]
len(indices_old_before_stars)
import numpy as np
topics = ['Education', 'Leaders (1)', 'Family', 'Leaders (2)', 'Industry', 'Agriculture *', 'Military *']
topic_res = pd.DataFrame(np.round(nmf_output[indices_old_before_stars], 2), columns=topics)
topic_res['Leaders'] = topic_res['Leaders (1)'] + topic_res['Leaders (2)']
topic_res.drop([ 'Leaders (1)', 'Leaders (2)'], inplace=True, axis=1)
topic_res['dominant_topic'] = [topic_res.columns[max] for max in np.argmax(topic_res.values, axis=1)]
```
### 6.1. Plotting topic distribution for the old guard
```
topic_columns = list(topic_res.columns)
topic_columns.pop(topic_columns.index('dominant_topic'))
topic_res['dominant_topic'].value_counts(sort=False).reindex(topic_columns, fill_value=0.0000001)
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.font_manager as fm
matplotlib.rc('font', family=fm.FontProperties(fname='C:\\Windows\\Fonts\\arial.ttf').get_name(), size=22)
colors_gender = ['#d73027','#fc8d59','#fee090','#4575b4','#e0f3f8','#91bfdb']
ax=topic_res['dominant_topic'].value_counts().reindex(topic_columns, fill_value=0.0000001).plot.pie(colors=colors_gender, autopct='%1.1f%%', title='', figsize=(8,8), startangle=90, pctdistance=0.82, explode=len(topic_columns)*[0.05])
ax.set_ylabel('')
centre_circle = plt.Circle((0,0),0.67,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
topics_old_before_stars = np.column_stack([np.round(nmf_output[indices_old_before_stars], 2)[:,0], np.round(nmf_output[indices_old_before_stars], 2)[:,1] + np.round(nmf_output[indices_old_before_stars], 2)[:,3], np.round(nmf_output[indices_old_before_stars], 2)[:,2], np.round(nmf_output[indices_old_before_stars], 2)[:,4:]])
indices_not_old_before_stars = [i for i in range(len(ori_df)) if i not in indices_old_before_stars]
topics_not_old_before_stars = np.column_stack([np.round(nmf_output[indices_not_old_before_stars], 2)[:,0], np.round(nmf_output[indices_not_old_before_stars], 2)[:,1] + np.round(nmf_output[indices_not_old_before_stars], 2)[:,3], np.round(nmf_output[indices_not_old_before_stars], 2)[:,2], np.round(nmf_output[indices_not_old_before_stars], 2)[:,4:]])
```
### 6.2. Assessing significance of differences between the two groups
```
np.random.seed(7)
reframed_topics = ['Education', 'Leaders', 'Family', 'Industry', 'Agriculture', 'Military']
for i, topic in enumerate(reframed_topics):
print(f"TOPIC: {topic}")
z = dominant_as_binary_array(topics_young_before_stars)[:,i]
y = dominant_as_binary_array(topics_old_before_stars)[:,i]
#z = (topics_after_stars)[:,i]
#y = (topics_not_after_stars)[:,i]
theta_hat = z.mean() - y.mean()
print(f"theta_hat: {theta_hat}")
pooled = np.hstack([z,y])
delta = z.mean() - y.mean()
numSamples = 1000
estimates = np.array(list(map(lambda x: run_permutation_test(pooled,z.size,y.size,delta),range(numSamples))))
diffCount = len(np.where(estimates <=delta)[0])
hat_asl_perm = 1.0 - (float(diffCount)/float(numSamples))
print(f"hat_asl_perm: {hat_asl_perm}")
print('-'*20)
```
```
#Github link https://github.com/chinthojuprajwal/IE517/blob/main/IE517_FY21_HW4/IE517_FY21_HW4_prajwal.ipynb
import pandas as pd
import matplotlib.pyplot as plt
plt.clf()
hsg=pd.read_csv('D:/UIUC_courses/IE517/IE517_FY21_HW4/housing.csv')
print("Raw Dataset: first 5 rows")
print(hsg.head())
print()
print("Raw Dataset:info")
print(hsg.describe())
import seaborn as sns
labels=hsg.columns
sns.heatmap(hsg.corr(),annot=False)
plt.show()
labels_reduced=labels[13:]
hsg_red=hsg[labels_reduced]
sns.heatmap(hsg_red.corr(),annot=True,annot_kws = {'size':5})
plt.show()
print('Summary of the most correlated features')
print(hsg_red.describe())
#sns.pairplot(hsg_red)
#commented out because my laptop was running out of memory when this was run again
plt.show()
from sklearn.preprocessing import StandardScaler
hsg_scaled=StandardScaler().fit_transform(hsg)
from sklearn import linear_model
from sklearn.model_selection import KFold,cross_val_score,GridSearchCV
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
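# Note (added for clarity): Lasso uses an L1 penalty, which drives some coefficients exactly to
# zero as alpha grows and is therefore used below to prune features; Ridge uses an L2 penalty,
# which only shrinks coefficients towards zero; ElasticNet mixes both penalties via l1_ratio.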
score_y=[]
score_x=[]
X=hsg[labels[:-1]].values
y=hsg[labels[-1]].values
for i in range(1,11):
lasso=linear_model.Lasso(alpha=i/30)
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2,random_state=42)
pipe = Pipeline([('scaler',StandardScaler()), ('estimator', lasso)])
#gs_cv=GridSearchCV(pipe,{},cv=3)
#gs_cv.fit(X,y)
print("cross validation scores=",str(cross_val_score(pipe,X,y,cv=KFold(n_splits=4, shuffle=True))),"for alpha=",str(i/10))
lasso.fit(X_train,y_train)
score_y.append(lasso.coef_)
score_x.append(i/30)
plt.plot(np.transpose(score_x),np.array(score_y))
plt.legend(labels.values,loc='best',fontsize=6)
plt.title("Lasso regression coefficients vs aplha")
plt.xlabel("aplha")
plt.ylabel("coefficient")
plt.show()
score_y1=np.round(np.array(score_y),decimals=1)
print('coefficients for different alpha')
[print(*line) for line in score_y1]
print()
print("As we see from above plot the first 13 random features are already zeros by the time alpha=0.1. NOX drops out next, followed by CHAs, followed by INDUS ")
from sklearn.metrics import mean_squared_error
print('choosing alpha=0.1 gives')
lasso=linear_model.Lasso(alpha=0.1)
lasso.fit(X_train,y_train)
y_predict=lasso.predict(X_test)
y_train_pre=lasso.predict(X_train)
plt.scatter(X_test[:,18],y_test-y_predict,label='Test dataset residuals')
plt.scatter(X_train[:,18],y_train-y_train_pre,label='Train data residuals')
plt.title("Lasso residual errors plot")
plt.xlabel("RM")
plt.ylabel("MEDV")
plt.legend()
plt.show()
print("R squared value is :", str(lasso.score(X_test,y_test)))
print("MSE value is :", str(mean_squared_error(y_predict,y_test)))
print("intercept is",str(lasso.intercept_))
print("Coefficients are",str(lasso.coef_))
score_y=[]
score_x=[]
X=hsg[labels[:-1]].values
y=hsg[labels[-1]].values
for i in range(1,11):
ridge=linear_model.Ridge(alpha=10**(i/2))
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2,random_state=42)
pipe = Pipeline([ ('estimator', ridge)])
print("cross validation scores=",str(cross_val_score(pipe,X,y,cv=KFold(n_splits=4, shuffle=True))),"for alpha=",str(10**(i/2)))
ridge.fit(X_train,y_train)
score_y.append(ridge.coef_)
score_x.append(10**(i/2))
plt.plot(np.transpose(score_x),np.array(score_y))
plt.legend(labels.values,loc='best',fontsize=6)
plt.title("Ridge regression coefficients vs aplha")
plt.xlabel("aplha")
plt.ylabel("coefficient")
plt.show()
score_y1=np.round(np.array(score_y),decimals=1)
print('coefficients for different alpha')
[print(*line) for line in score_y1]
print()
print(" Pruning features with Ridge seems to be a little tricky, nevertheless, the first 13 features' coefficients approach zero before other features ")
from sklearn.metrics import mean_squared_error
print('choosing alpha=20000 gives')
ridge=linear_model.Ridge(alpha=20000)
ridge.fit(X_train,y_train)
y_predict=ridge.predict(X_test)
y_train_pre=ridge.predict(X_train)
plt.scatter(X_test[:,18],y_test-y_predict,label='Test dataset residuals')
plt.scatter(X_train[:,18],y_train-y_train_pre,label='Train data residuals')
plt.xlabel("RM")
plt.ylabel("MEDV")
plt.legend()
plt.show()
print("R squared value is :", str(ridge.score(X_test,y_test)))
print("MSE value is :", str(mean_squared_error(y_predict,y_test)))
print("intercept is",str(lasso.intercept_))
print("Coefficients are",str(lasso.coef_))
score_y=[]
score_x=[]
X=hsg[labels[:-1]].values
y=hsg[labels[-1]].values
from sklearn.metrics import mean_squared_error
print('choosing alpha=0 (linear model) gives')
ridge=linear_model.Ridge(alpha=0)
ridge.fit(X_train,y_train)
y_predict=ridge.predict(X_test)
y_train_pre=ridge.predict(X_train)
plt.scatter(X_test[:,18],y_test-y_predict,label='Test dataset residuals')
plt.scatter(X_train[:,18],y_train-y_train_pre,label='Train data residuals')
plt.title("Linear model residual errors plot ")
plt.xlabel("RM")
plt.ylabel("MEDV")
plt.legend()
plt.show()
print("R squared value is :", str(ridge.score(X_test,y_test)))
print("MSE value is :", str(mean_squared_error(y_predict,y_test)))
print("intercept is",str(lasso.intercept_))
print("Coefficients are",str(lasso.coef_))
score_y=[]
score_x=[]
X=hsg[labels[:-1]].values
y=hsg[labels[-1]].values
for i in range(1,11):
elastic=linear_model.ElasticNet(alpha=i/5)
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2,random_state=42)
pipe = Pipeline([ ('estimator', elastic)])
gs_cv=GridSearchCV(pipe,{'estimator__l1_ratio':np.linspace(0.1,1,num=10)})
gs_cv.fit(X,y)
print("cross validation scores=",str(cross_val_score(gs_cv.best_estimator_,X,y,cv=KFold(n_splits=4, shuffle=True))),"for best fit of l1 ratio and aplha=",str(i/5))
gs_cv.best_estimator_.named_steps['estimator'].fit(X_train,y_train)
score_y.append(gs_cv.best_estimator_.named_steps['estimator'].coef_)
    score_x.append(i/5)
plt.plot(np.transpose(score_x),np.array(score_y))
plt.legend(labels.values,loc='best',fontsize=6)
plt.title("ElasticNet regression coefficients vs aplha")
plt.xlabel("aplha")
plt.ylabel("coefficient")
plt.show()
score_y1=np.round(np.array(score_y),decimals=1)
print('coefficients for different alpha')
[print(*line) for line in score_y1]
print()
print(" As we see from above plot the first 13 random features are already zeros by the time alpha=0.1. NOX drops out next, followed by CHAs, followed by INDUS ")
from sklearn.metrics import mean_squared_error
print('choosing alpha=1.6 gives')
elastic=linear_model.ElasticNet(alpha=1.6)
elastic.fit(X_train,y_train)
y_predict=elastic.predict(X_test)
y_train_pre=elastic.predict(X_train)
plt.scatter(X_test[:,18],y_test-y_predict,label='Test dataset residuals')
plt.scatter(X_train[:,18],y_train-y_train_pre,label='Train data residuals')
plt.title("elasticnet residual errors plot")
plt.xlabel("RM")
plt.ylabel("MEDV")
plt.legend()
plt.show()
print("R squared value is :", str(elastic.score(X_test,y_test)))
print("MSE value is :", str(mean_squared_error(y_predict,y_test)))
print("My name is Prajwal Chinthoju")
print("My NetID is: pkc3")
print("I hereby certify that I have read the University policy on Academic Integrity and that I am not in violation.")
```
|
github_jupyter
|
#Github link https://github.com/chinthojuprajwal/IE517/blob/main/IE517_FY21_HW4/IE517_FY21_HW4_prajwal.ipynb
import pandas as pd
import matplotlib.pyplot as plt
plt.clf()
hsg=pd.read_csv('D:/UIUC_courses/IE517/IE517_FY21_HW4/housing.csv')
print("Raw Dataset: first 5 rows")
print(hsg.head())
print()
print("Raw Dataset:info")
print(hsg.describe())
import seaborn as sns
labels=hsg.columns
sns.heatmap(hsg.corr(),annot=False)
plt.show()
labels_reduced=labels[13:]
hsg_red=hsg[labels_reduced]
sns.heatmap(hsg_red.corr(),annot=True,annot_kws = {'size':5})
plt.show()
print('Summary of most correleated features')
print(hsg_red.describe())
#sns.pairplot(hsg_red)
#commenting as my laptop was running out of memory if this was ran again
plt.show()
from sklearn.preprocessing import StandardScaler
hsg_scaled=StandardScaler().fit_transform(hsg)
from sklearn import linear_model
from sklearn.model_selection import KFold,cross_val_score,GridSearchCV
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
score_y=[]
score_x=[]
X=hsg[labels[:-1]].values
y=hsg[labels[-1]].values
for i in range(1,11):
lasso=linear_model.Lasso(alpha=i/30)
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2,random_state=42)
pipe = Pipeline([('scaler',StandardScaler()), ('estimator', lasso)])
#gs_cv=GridSearchCV(pipe,{},cv=3)
#gs_cv.fit(X,y)
print("cross validation scores=",str(cross_val_score(pipe,X,y,cv=KFold(n_splits=4, shuffle=True))),"for alpha=",str(i/10))
lasso.fit(X_train,y_train)
score_y.append(lasso.coef_)
score_x.append(i/30)
plt.plot(np.transpose(score_x),np.array(score_y))
plt.legend(labels.values,loc='best',fontsize=6)
plt.title("Lasso regression coefficients vs aplha")
plt.xlabel("aplha")
plt.ylabel("coefficient")
plt.show()
score_y1=np.round(np.array(score_y),decimals=1)
print('coefficients for different aplha')
[print(*line) for line in score_y1]
print()
print("As we see from above plot the first 13 random features are already zeros by the time alpha=0.1. NOX drops out next, followed by CHAs, followed by INDUS ")
from sklearn.metrics import mean_squared_error
print('chossing aplha=0.1 gives')
lasso=linear_model.Lasso(alpha=0.1)
lasso.fit(X_train,y_train)
y_predict=lasso.predict(X_test)
y_train_pre=lasso.predict(X_train)
plt.scatter(X_test[:,18],y_test-y_predict,label='Test dataset residuals')
plt.scatter(X_train[:,18],y_train-y_train_pre,label='Train data residuals')
plt.title("Lasso residual errors plot")
plt.xlabel("RM")
plt.ylabel("MEDV")
plt.legend()
plt.show()
print("R squared value is :", str(lasso.score(X_test,y_test)))
print("MSE value is :", str(mean_squared_error(y_predict,y_test)))
print("intercept is",str(lasso.intercept_))
print("Coefficients are",str(lasso.coef_))
score_y=[]
score_x=[]
X=hsg[labels[:-1]].values
y=hsg[labels[-1]].values
for i in range(1,11):
ridge=linear_model.Ridge(alpha=10**(i/2))
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2,random_state=42)
pipe = Pipeline([ ('estimator', ridge)])
print("cross validation scores=",str(cross_val_score(pipe,X,y,cv=KFold(n_splits=4, shuffle=True))),"for alpha=",str(10**(i/2)))
ridge.fit(X_train,y_train)
score_y.append(ridge.coef_)
score_x.append(10**(i/2))
plt.plot(np.transpose(score_x),np.array(score_y))
plt.legend(labels.values,loc='best',fontsize=6)
plt.title("Ridge regression coefficients vs aplha")
plt.xlabel("aplha")
plt.ylabel("coefficient")
plt.show()
score_y1=np.round(np.array(score_y),decimals=1)
print('coefficients for different aplha')
[print(*line) for line in score_y1]
print()
print(" Pruning features with Ridge seems to be a little tricky, nevertheless, the first 13 features' coefficients approach zero before other features ")
from sklearn.metrics import mean_squared_error
print('chossing aplha=20000 gives')
ridge=linear_model.Ridge(alpha=20000)
ridge.fit(X_train,y_train)
y_predict=ridge.predict(X_test)
y_train_pre=ridge.predict(X_train)
plt.scatter(X_test[:,18],y_test-y_predict,label='Test dataset residuals')
plt.scatter(X_train[:,18],y_train-y_train_pre,label='Train data residuals')
plt.xlabel("RM")
plt.ylabel("MEDV")
plt.legend()
plt.show()
print("R squared value is :", str(ridge.score(X_test,y_test)))
print("MSE value is :", str(mean_squared_error(y_predict,y_test)))
print("intercept is",str(lasso.intercept_))
print("Coefficients are",str(lasso.coef_))
score_y=[]
score_x=[]
X=hsg[labels[:-1]].values
y=hsg[labels[-1]].values
from sklearn.metrics import mean_squared_error
print('chossing aplha=0 (linear model) gives')
ridge=linear_model.Ridge(alpha=0)
ridge.fit(X_train,y_train)
y_predict=ridge.predict(X_test)
y_train_pre=ridge.predict(X_train)
plt.scatter(X_test[:,18],y_test-y_predict,label='Test dataset residuals')
plt.scatter(X_train[:,18],y_train-y_train_pre,label='Train data residuals')
plt.title("Linear model residual errors plot ")
plt.xlabel("RM")
plt.ylabel("MEDV")
plt.legend()
plt.show()
print("R squared value is :", str(ridge.score(X_test,y_test)))
print("MSE value is :", str(mean_squared_error(y_predict,y_test)))
print("intercept is",str(lasso.intercept_))
print("Coefficients are",str(lasso.coef_))
score_y=[]
score_x=[]
X=hsg[labels[:-1]].values
y=hsg[labels[-1]].values
for i in range(1,11):
elastic=linear_model.ElasticNet(alpha=i/5)
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2,random_state=42)
pipe = Pipeline([ ('estimator', elastic)])
gs_cv=GridSearchCV(pipe,{'estimator__l1_ratio':np.linspace(0.1,1,num=10)})
gs_cv.fit(X,y)
print("cross validation scores=",str(cross_val_score(gs_cv.best_estimator_,X,y,cv=KFold(n_splits=4, shuffle=True))),"for best fit of l1 ratio and aplha=",str(i/5))
gs_cv.best_estimator_.named_steps['estimator'].fit(X_train,y_train)
score_y.append(gs_cv.best_estimator_.named_steps['estimator'].coef_)
score_x.append(i)
plt.plot(np.transpose(score_x),np.array(score_y))
plt.legend(labels.values,loc='best',fontsize=6)
plt.title("ElasticNet regression coefficients vs aplha")
plt.xlabel("aplha")
plt.ylabel("coefficient")
plt.show()
score_y1=np.round(np.array(score_y),decimals=1)
print('coefficients for different alpha')
[print(*line) for line in score_y1]
print()
print(" As we see from above plot the first 13 random features are already zeros by the time alpha=0.1. NOX drops out next, followed by CHAs, followed by INDUS ")
from sklearn.metrics import mean_squared_error
print('choosing alpha=1.6 gives')
elastic=linear_model.ElasticNet(alpha=1.6)
elastic.fit(X_train,y_train)
y_predict=elastic.predict(X_test)
y_train_pre=elastic.predict(X_train)
plt.scatter(X_test[:,18],y_test-y_predict,label='Test dataset residuals')
plt.scatter(X_train[:,18],y_train-y_train_pre,label='Train data residuals')
plt.title("elasticnet residual errors plot")
plt.xlabel("RM")
plt.ylabel("MEDV")
plt.legend()
plt.show()
print("R squared value is :", str(elastic.score(X_test,y_test)))
print("MSE value is :", str(mean_squared_error(y_predict,y_test)))
print("My name is Prajwal Chinthoju")
print("My NetID is: pkc3")
print("I hereby certify that I have read the University policy on Academic Integrity and that I am not in violation.")
<a href="http://www.cosmostat.org/" target="_blank"><img align="left" width="300" src="http://www.cosmostat.org/wp-content/uploads/2017/07/CosmoStat-Logo_WhiteBK-e1499155861666.png" alt="CosmoStat Logo"></a>
<br>
<br>
<br>
<br>
# The Anatomy of a Python Class (Part II)
---
> Author: <a href="http://www.cosmostat.org/people/sfarrens" target="_blank" style="text-decoration:none; color: #F08080">Samuel Farrens</a>
> Email: <a href="mailto:[email protected]" style="text-decoration:none; color: #F08080">[email protected]</a>
> Year: 2019
> Version: 1.0
---
<br>
This notebook introduces some more advanced concepts in the manipulation of Python classes. Before starting you should make sure you have completed the [first notebook](./Classes_I.ipynb) or are at least familiar with the topics covered therein. As with the first part, this tutorial is in no way exhaustive and you are encouraged to supplement your understanding with further reading.
If you are new to Jupyter notebooks note that cells are executed by pressing <kbd>SHIFT</kbd>+<kbd>ENTER</kbd> (⇧+ ⏎). See the <a href="https://jupyter-notebook.readthedocs.io/en/stable/" target_="blanck">Jupyter documentation</a> for more details.
## Contents
---
1. [Set-Up](#1-Set-Up)
1. [Inheritance](#2-Inheritance)
    1. [Inheriting Attributes](#Inheriting-Attributes)
1. [Overriding](#Overriding)
1. [Handling Instantiation](#Handling-Instantiation)
1. [Multiple Parents](#Multiple-Parents)
1. [Method Resolution Order](#Method-Resolution-Order)
1. [Composition](#3-Composition)
1. [Abstract Classes](#4-Abstract-Classes)
1. [Abstract Methods](#Abstract-Methods)
1. [Abstract Properties](#Abstract-Properties)
1. [Exercises](#5-Exercises)
## 1 Set-Up
The following cell contains some set-up commands. Be sure to execute this cell before continuing.
```
# Notebook Set-Up Commands
def print_error(error):
""" Print Error
Function to print exceptions in red.
Parameters
----------
error : string
Error message
"""
print('\033[1;31m{}\033[1;m'.format(error))
```
## 2 Inheritance
---
A powerful property of classes is the ability to inherit attributes and methods from other classes. This helps avoid repetition of code, which makes it easier to develop and maintain.
<br>
### Inheriting Attributes
We will begin by defining a simple class with a single static method that will act as a *parent* class.
```
# Define a parent class
class Parent:
@staticmethod
def add(a, b):
return a + b
# Print the parent dictionary
print('Parent.__dict__ =', Parent.__dict__)
```
Note that this class has no special properties; it is the same as the classes we saw in the previous notebook.
Now we will define a second class, also with a single static function. This class will serve as a *child* class that will inherit from the parent class. This is done by simply passing the parent class name in `()` when defining the child class.
```
# Define a child class
class Child(Parent):
@staticmethod
def subtract(a, b):
return a - b
print('Child.__dict__ =', Child.__dict__)
```
We can see that the contents of the child class dictionary are only what is expected from a normal class definition. Using the special `__bases__` attribute, however, we can see a tuple of classes that the child inherits from.
```
print('Child.__bases__ =', Child.__bases__)
```
Finally, we can demonstrate that the child has indeed inherited new attributes from the parent as follows.
```
print('1 + 2 =', Child.add(1, 2))
print('3 - 2 =', Child.subtract(3, 2))
```
We can clearly see that `Child` has inherited the static `add` method from `Parent`.
In the following example, we can see that this works for any type of class attribute.
```
class Parent:
x = 1
class Child(Parent):
@classmethod
def show(cls):
return print('x =', cls.x)
Child.show()
```
### Overriding
We saw in the previous notebook that class instance attributes will override class attributes of the same name. The same happens with child attributes with respect to parent attributes.
```
class Parent:
x = 1
class Child(Parent):
x = 2
@classmethod
def show(cls):
return print('x =', cls.x)
Child.show()
```
### Hierarchy
It is possible to define a hierarchy of parent classes, the attributes of which will be inherited by a given child class.
```
class GrandParent:
x = 1
name = 'Grandparent'
class Parent(GrandParent):
y = 2
name = 'Parent'
class Child(Parent):
z = 3
name = 'Child'
print(Child.name, Child.x, Child.y, Child.z)
```
Note that overriding will also act hierarchically, meaning that the last class in the chain will be given precedence.
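For instance, in a small sketch building on the hierarchy above, if `Child` did not define `name` itself, the lookup would simply fall back to the closest ancestor that does:
```
class GrandParent:
    name = 'Grandparent'

class Parent(GrandParent):
    name = 'Parent'

class Child(Parent):
    pass

# Child defines no name of its own, so the closest ancestor (Parent) wins
print(Child.name)
```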
<br>
### Handling Instantiation
If the parent class contains an `__init__` method, this too can be inherited by the child.
```
class Parent:
def __init__(self, value):
self.myattr = value
class Child(Parent):
def show(self):
return print('myattr =', self.myattr)
Child('A string').show()
```
In fact it also works the other way around.
```
class Parent:
def show(self):
return print('myattr =', self.myattr)
class Child(Parent):
def __init__(self, value):
self.myattr = value
Child('A string').show()
```
This is because the child will look up any attributes not listed in its own dictionary in all the parent (or base) classes.
If both classes have an `__init__` method only the child class will be initialised (see [Overriding](#Overriding)).
```
class Parent:
def __init__(self):
self.pval = 'parent value'
class Child(Parent):
def __init__(self):
self.cval = 'child value'
inst = Child()
print('cval =', inst.cval)
try:
print('pval =', inst.pval)
except Exception as error:
print_error(error)
```
We can fix this by manually instantiating the parent class.
```
class Parent:
def __init__(self):
self.pval = 'parent value'
class Child(Parent):
def __init__(self):
self.cval = 'child value'
Parent.__init__(self)
inst = Child()
print('cval =', inst.cval)
print('pval =', inst.pval)
```
Python has a useful shortcut to make this process easier using the [`super`](https://docs.python.org/3/library/functions.html?highlight=super#super) function. This allows us to instantiate the parent without naming it, which is particularly useful if the name of the parent class were to change.
```
class Parent:
def __init__(self):
self.pval = 'parent value'
class Child(Parent):
def __init__(self):
self.cval = 'child value'
super().__init__()
inst = Child()
print('cval =', inst.cval)
print('pval =', inst.pval)
```
### Multiple Parents
It is possible for a child to inherit attributes from multiple parents.
```
class Mother:
def __init__(self, value):
self.mval = value
def show_mother(self):
print('mother value =', self.mval)
class Father:
def __init__(self, value):
self.fval = value
def show_father(self):
print('father value =', self.fval)
class Child(Mother, Father):
def __init__(self, value1, value2, value3):
self.cval = value3
Mother.__init__(self, value1)
Father.__init__(self, value2)
def show_child(self):
print('child value =', self.cval)
inst = Child(1, 2, 3)
inst.show_mother()
inst.show_father()
inst.show_child()
```
Note that the parent classes were explicitly instantiated in the previous example. Attempting the same thing using the `super` function will raise an error.
```
class Mother:
def __init__(self, value):
self.mval = value
def show_mother(self):
print('mother value =', self.mval)
class Father:
def __init__(self, value):
self.fval = value
def show_father(self):
print('father value =', self.fval)
class Child(Mother, Father):
def __init__(self, value1, value2):
self.cval = value2
super().__init__(value1)
def show_child(self):
print('child value =', self.cval)
inst = Child(1, 2)
try:
inst.show_mother()
inst.show_father()
inst.show_child()
except Exception as error:
print_error(error)
```
As we can see the `Mother` parent class was properly instantiated, but the `Father` parent class was not. The reasons for this are explained in the following subsection.
> In general, caution should be used when building a class architecture that requires inheritance from multiple parent classes.
<br>
### Method Resolution Order
The Method Resolution Order (MRO) dictates the order in which the child class will search the base classes for a given attribute. We can see the MRO of the child class from the previous example using the `__mro__` attribute as follows.
```
print('Child.__mro__ =', Child.__mro__)
```
We can see that the `Mother` class appears before the `Father` class in the MRO, hence the `super` method in `Child` instantiates `Mother`.
In order to make the previous example work, we would need to add another `super` in the `__init__` of the `Mother` class as follows.
> Note: Don't do this in your code!
```
class Mother:
def __init__(self, value):
self.mval = value
super().__init__(value)
def show_mother(self):
print('mother value =', self.mval)
class Father:
def __init__(self, value):
self.fval = value
def show_father(self):
print('father value =', self.fval)
class Child(Mother, Father):
def __init__(self, value1, value2):
self.cval = value2
super().__init__(value1)
def show_child(self):
print('child value =', self.cval)
inst = Child(1, 2)
inst.show_mother()
inst.show_father()
inst.show_child()
```
This, however, is not recommended as it adds unnecessary ambiguity to the code and makes debugging extremely difficult.
<br>
## 3 Composition
---
Inheritance is not the only way in which a class can access attributes from other classes. This comes back to one of the main take-away messages of this tutorial, namely that everything in Python is an object and any object can be assigned as an attribute.
This means that a class attribute can actually be an instance of another class.
```
class Composer:
def __init__(self):
self.myattr = 'composer value'
class myClass:
def __init__(self):
self.comp = Composer()
inst = myClass()
print("myattr =", inst.comp.myattr)
```
This can be particularly useful as we can pass classes or class instances to our new class without knowing anything about the other classes.
```
class Star:
def __init__(self):
self.whoami = 'I am a star!'
class Galaxy:
def __init__(self):
self.whoami = 'I am a Galaxy!'
class myClass:
def __init__(self, composer):
self.comp = composer()
inst1 = myClass(Star)
inst2 = myClass(Galaxy)
print("whoami =", inst1.comp.whoami)
print("whoami =", inst2.comp.whoami)
```
We can even achieve inheritance-like properties if we know the name of a given attribute that the composer class should have.
```
class Star:
def __init__(self):
self.whoami = 'I am a star!'
class Galaxy:
def __init__(self):
self.whoami = 'I am a Galaxy!'
class myClass:
def __init__(self, composer):
self.whoami = composer().whoami
inst1 = myClass(Star)
inst2 = myClass(Galaxy)
print("whoami =", inst1.whoami)
print("whoami =", inst2.whoami)
```
> Note: This will break if the composer does not have an attribute called `whoami`.
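For example, using a hypothetical `Planet` class with no `whoami` attribute (and the `print_error` helper from the set-up) shows the resulting error:
```
class Planet:
    def __init__(self):
        self.mass = 5.97e24

try:
    inst3 = myClass(Planet)
except Exception as error:
    print_error(error)
```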
Composition also allows preinitialised class instances to be passed.
```
class Star:
def __init__(self, mag):
self.mag = mag
class Galaxy:
def __init__(self, mag):
self.mag = mag
class myClass:
def __init__(self, composer):
self.comp = composer
self.mag = self.comp.mag
star = Star(11.05)
inst1 = myClass(star)
inst2 = myClass(Galaxy(8.2))
print("mag =", inst1.mag)
print("mag =", inst2.mag)
```
This can be useful, especially if we want to be able to change the original instance attributes.
```
inst1.mag = 12.3
inst1.comp.mag = 13.1
print("inst1 mag =", inst1.mag)
print("star mag =", star.mag)
```
Notice how the `mag` attribute of the instance `star` changed!
## 4 Abstract Classes
---
Abstract classes are classes that contain abstract methods. So it makes sense to move right along and explain what an abstract method is.
<br>
### Abstract Methods
An abstract method is a method that is defined but not implemented. This is useful for defining parent classes that impose some conditions on the child.
```
from abc import ABC, abstractmethod
class Parent(ABC):
@abstractmethod
def get_id(self):
pass
print ('Parent.__dict__', Parent.__dict__)
```
We can see that the method `get_id` has been defined but does nothing.
We now define a child class that inherits this parent.
```
class Child(Parent):
def __init__(self):
self.whoami = 'I am a child!'
try:
inst = Child()
except Exception as error:
print_error(error)
```
This raises an error when trying to create an instance. This is because the child class needs to override the abstract method `get_id`.
```
class Child(Parent):
def __init__(self):
self.get_id()
def get_id(self):
self.whoami = 'I am a child!'
inst = Child()
print(inst.whoami)
```
This allows the parent class, which may be written with no knowledge of the properties of the child class, to impose some conditions on what a given child class should do.
<br>
### Abstract Properties
In the previous notebook we learned how class properties can be defined to impose conditions on the values of certain class attributes. Using abstract methods, we can combine these tools to ensure that a given class has the properties we want.
We start by defining an abstract class with the abstract property `whoami` and its corresponding setter.
```
class AstroObject(ABC):
@property
@abstractmethod
def whoami(self):
pass
@whoami.setter
@abstractmethod
def whoami(self, value):
pass
```
We can then define a child class that inherits this abstract class.
```
class Star(AstroObject):
def __init__(self):
self.whoami = 'I am a star!'
try:
star = Star()
except Exception as error:
print_error(error)
```
We can see that, while a value for the attribute is provided, the property is not explicitly defined. This way we can impose that the `Star` class must define a getter and setter for the attribute `whoami`.
```
class Star(AstroObject):
def __init__(self):
self.whoami = 'I am a star!'
@property
def whoami(self):
return self._whoami
@whoami.setter
def whoami(self, value):
if not isinstance(value, str):
raise ValueError('whoami must be a string')
self._whoami = value
star = Star()
print("whoami =", star.whoami)
```
This in turn could be used in conjunction with what we learned about class composition to ensure that composers have the properties needed to work with the child class.
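As a final sketch of that idea (the class names here are illustrative and not part of the tutorial), an abstract base class can act as a contract for composers, and the composing class can check for it explicitly:
```
class ComposerBase(ABC):

    @abstractmethod
    def whoami(self):
        pass

class Planet(ComposerBase):

    def whoami(self):
        return 'I am a planet!'

class Holder:

    def __init__(self, composer):
        if not isinstance(composer, ComposerBase):
            raise TypeError('composer must inherit from ComposerBase')
        self.whoami = composer.whoami()

print(Holder(Planet()).whoami)
```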
## 5 Exercises
---
1. Create a parent class and a child class that can identify its progenitor.
1. Your parent class should have the class attribute `parent_name` with the value of your choice.
1. Your child class should have the attribute `name` with the value of your choice.
1. Printing an instance of your child class should contain its name and its parent's name. *e.g.*
```python
print(Child('Thor'))
Thor Odinson
```
```
# Add your solution here
```
2. Define a class that can be initialised with composer classes that have been constrained by an abstract class.
    1. Define an abstract class called `EarthAttr` that has the abstract method `whatami`, which should return a string of your choice.
1. Define at least two composer classes (*e.g.* `Moon` and `Core`) that satisfy the requirements of `EarthAttr`.
1. Define a class called `Earth` that composes these classes to get the `whatami` attribute.
1. Printing an instance of your `Earth` class should include the value of `whatami`. *e.g.*
```python
print(Earth(Moon))
The Earth has a moon!
```
5. Finally, define a final composer class (*e.g.* `Lake`) and demonstrate that it will not instantiate if not correctly constrained by `EarthAttr`. You should get the following error:
```bash
    'Can't instantiate abstract class Lake with abstract methods whatami'
```
```
# Add your solution here
```
### Counting with pandas
Nucleotide and protein sequences are a great setting to learn probability and statistics.
We'll start by _counting_ how many times each amino acid shows up in a short protein sequence. The pandas package in Python provides useful data structures and methods for this data analysis task.
We'll start by creating a Python string (a `str`) containing the sequence of the short yeast protein Mfa1:
```
MQPSTATAAPKEKTSSEKKDNYIIKGVFWDPACVIA
```
and storing this in a variable named `mfa1`.
Next we'll `import` the NumPy and pandas packages so we can use all the great data structures and methods that they provide.
In order to count how many times each amino acid shows up in this protein, we need to convert the `str` into a pandas `Series` of individual amino acid letters.
We can do this by converting the `str` into a `list` of individual letters, and then constructing a `Series` from that list.
Once we have a `Series` of amino acids, we can use the `value_counts()` method to count how many times each amino acid letter (a "value" in the `Series`) occurs.
We'll store these counts in a variable named `mfa1_counts`.
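A minimal sketch of those steps, assuming the variable names `mfa1` and `mfa1_counts` used above:
```
import numpy as np
import pandas as pd

mfa1 = "MQPSTATAAPKEKTSSEKKDNYIIKGVFWDPACVIA"

# Convert the string into a list of single letters, then into a Series
mfa1_series = pd.Series(list(mfa1))

# Count how many times each amino acid letter occurs
mfa1_counts = mfa1_series.value_counts()
print(mfa1_counts)
```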
We can then _look up_ how many times each amino acid occurs in `mfa1_counts`. We can treat this `Series` just like a Python dictionary (`dict`) and use square brackets (`[]`) or the `get()` method.
Notice that these two ways of looking up counts give the same result, _except_ when a letter is missing from the `Series` the square brackets produce an error while `get()` returns `None`.
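As a small sketch of the two lookup styles (`K` appears in Mfa1 while `H` does not):
```
# Look up a letter that is present -- both styles give the same count
print(mfa1_counts['K'])
print(mfa1_counts.get('K'))

# For a letter that never appears, get() returns None,
# whereas mfa1_counts['H'] would raise a KeyError
print(mfa1_counts.get('H'))
```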
### BioPython
Next, we'll move on to counting amino acids in the whole yeast proteome. I don't want to include all ~6,000 protein sequences in this notebook, and so we'll use existing Python tools to read it from a file. The biopython module `Bio` has a sub-module specialized for reading and writing files of sequence data, called `SeqIO`. We'll import just the `SeqIO` sub-module from `Bio`.
The SeqIO module has a function called `parse()` that reads sequence entries from a Fasta-format file. The Fasta format is pretty simple: each sequence has a name on a line starting with a >, followed by the sequence itself. So, a Fasta file might look like:
```
>one
AGCTACGT...
GCGATCGT...
>two
TGACTGCA...
...
```
The `parse()` function returns, in essence, an iterator that can loop over all the entries in the file. We just want to look at the first one, though, so we'll use `next` to take just one entry. The general approach is:
```
sequences = SeqIO.parse("my_file.fasta", "fasta")
sequence0 = next(sequences)
```
I have downloaded all of the protein sequences into a file named
```
../S288C_R64-3-1/orf_trans_R64-3-1_20210421.fasta
```
The `parse()` function will turn each of these into a `SeqRecord`, a custom data type that bundles together the name and the sequence. You can get the sequence name from record using `record.id` and the sequence itself using `record.seq`. This sequence isn't an ordinary Python string -- it's another custom data type, called a `Seq`, but you can convert it into a string using `str(record.seq)` and into a list of amino acid letters using `list(record.seq)`.
Now, let's count amino acids in this protein using `value_counts()`.
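A sketch of these steps, using the file path given above:
```
from Bio import SeqIO

sequences = SeqIO.parse("../S288C_R64-3-1/orf_trans_R64-3-1_20210421.fasta", "fasta")

# Take just the first entry from the iterator
protein0 = next(sequences)
print(protein0.id)

# Convert the Seq into a list of letters and count them
protein0_counts = pd.Series(list(protein0.seq)).value_counts()
print(protein0_counts)
```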
In order to count all the amino acids in the proteome, we'll need to keep a _running_ total of amino acids counted. It's easy to do this, though, because we can just add together two counts in order to get a sum.
First, let's get the first two proteins from the data file and store them in variables `protein0` and `protein1`.
Then, count amino acids in each of these proteins.
Then, use the `Series.add()` method to add together the counts from `protein0` and `protein1`.
Because some proteins will entirely lack certain amino acids, we need to use the `fill_value` parameter for the `Series.add()` method to fill in a `0` value when an entry is missing. The default is to use a `None` value, leading to a "not-a-number" `NaN` value in the sum.
If we didn't need to use the `fill_value` in order to handle missing amino acids, we could just add together the counts using `+`.
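A sketch of that running total for the first two proteins (variable names follow the text; `fill_value=0` keeps amino acids that are absent from one of the two proteins):
```
sequences = SeqIO.parse("../S288C_R64-3-1/orf_trans_R64-3-1_20210421.fasta", "fasta")
protein0 = next(sequences)
protein1 = next(sequences)

protein0_counts = pd.Series(list(protein0.seq)).value_counts()
protein1_counts = pd.Series(list(protein1.seq)).value_counts()

# fill_value=0 avoids NaN entries when an amino acid is missing from one protein
total = protein0_counts.add(protein1_counts, fill_value=0)
print(total)
```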
Now, we can iterate over every sequence in the proteome and keep a running sum of amino acid frequencies.
We need to start with an empty set of amino acid counts. Because this is empty, we need to specify the data type for the series:
```
pd.Series(dtype='int64')
```
The counts are now listed in amino acid alphabetical order -- the order of the "index".
In order to figure out the most and least common amino acids, we can sort the `total_counts` data frame according to the "values" with the `Series.sort_values()` method.
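Putting this together, a sketch of the full proteome count and the sorted result (using the `total_counts` name from the text):
```
# Start from an empty, integer-typed Series of counts
total_counts = pd.Series(dtype='int64')

for record in SeqIO.parse("../S288C_R64-3-1/orf_trans_R64-3-1_20210421.fasta", "fasta"):
    counts = pd.Series(list(record.seq)).value_counts()
    total_counts = total_counts.add(counts, fill_value=0)

# Sort by value to see the least and most common amino acids
print(total_counts.sort_values())
```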
### matplotlib
It's also pretty easy to make plots of data in a `Series`. To do this, we need to import another module
```
import matplotlib.pyplot as plt
```
Now, we can use a `plot()` method on our data series. The default plot is a line plot, but a bar plot makes more sense for this kind of data and so we use the `kind='bar'` argument to the `Series.plot()` method.
```
total_counts.plot(kind='bar')
```
You may find it makes more sense to plot the sorted versions of these `Series`
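For example, a sketch using `total_counts` from above:
```
import matplotlib.pyplot as plt

total_counts.sort_values().plot(kind='bar')
plt.xlabel("amino acid")
plt.ylabel("count")
plt.show()
```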
### _Exercise_
The file `../S288C_R64-3-1/S288C_reference_sequence_R64-3-1_20210421.fsa` has the nucleotide sequence of the yeast genome. Each chromosome has its own sequence entry.
Count the nucleotide frequencies in the genome.
Plot a bar graph of nucleotide counts.
```
# This is where we store the entire system read from the input in a structured way
class CelestialSystem:
def __init__(self):
self.objects = {}
self.direct_orbits = 0
self.indirect_orbits = 0
    # Direct orbits are simply the number of connections in the input file,
    # minus the entry we added for the centre of mass
def getDirectOrbits(self):
self.direct_orbits = len(self.objects) - 1
# This is a bit more difficult as we need to loop over the objects
# and find out how many steps they are from the center, only counting
# after we get past the direct orbit.
# NOTE: this seems to catch BOTH direct and indirect orbits at the moment...
def getIndirectOrbits(self):
currentIndex = None
# For each item, we need to work our way to the centre
for item in self.objects:
currentIndex = item # Where we are now
thispath_count = 0 # Start counting fresh with each new object
# Continue forever until we reach the centre of mass (COM)
while self.objects[currentIndex]["COM"] is not None:
                # Change the pointer to show where we are now
currentIndex = self.objects[currentIndex]["COM"]
# We've stepped forward
thispath_count += 1
# At the end of each successful path journey, add the total orbits to the overall total
self.indirect_orbits += thispath_count
def getIndirectOrbitsFor(self, currentItem):
path = list()
currentIndex = currentItem # Where we are now
path.append(currentItem)
# Continue forever until we reach the centre of mass (COM)
currentItem = self.objects[currentIndex]['COM']
while currentItem is not None:
            # Change the pointer to show where we are now
currentIndex = self.objects[currentIndex]['COM']
currentItem = self.objects[currentIndex]['COM']
path.append(currentIndex)
return path
# Find the center of mass for the whole system...just in case it isn't named "COM"
def findCentre(self):
# Loop over all the objects
for item in self.objects:
parent_object = self.objects[item]["COM"]
# If the parent object is in the list, it isn't the COM
if parent_object in self.objects:
continue
else:
# This must be the center of mass as it isn't in the list.
# Add it to the list so we know when to stop.
self.objects[parent_object] = {"COM": None}
break
# Originally was supposed to return the sum of direct and indirect orbits, but not needed as currently written
def getOrbitTotals(self):
return self.indirect_orbits
# The shortest path, thankfully, is the unique items of both paths to the COM
def shortestPath(self, start, finish):
# Get our path to the COM and Santa's
startpath = self.getIndirectOrbitsFor(start)
finishpath = self.getIndirectOrbitsFor(finish)
        # Keep only the items that are unique to each path (the symmetric difference);
        # together they make up the transfer route between the two objects
path = list(set(startpath) - set(finishpath)) + list(set(finishpath) - set(startpath))
return path
# Just a function to group all of the steps needed
def analyse(self):
self.findCentre()
self.getDirectOrbits()
self.getIndirectOrbits()
'''
We want to test our system, and what better way than to use the sample input from the website
where we already know the answer: 42
'''
# Setup our system
testSystem = CelestialSystem()
# Test data:
testinput = ["COM)B", "B)C", "C)D", "D)E", "E)F", "B)G", "G)H", "D)I", "E)J", "J)K", "K)L", "K)YOU", "I)SAN"]
# Let's loop over the data and put it into a more testable format
for line in testinput:
objects = line.replace('\n','').split(')')
# Add each item in the list we're inputting into the overall list as a dictionary of its own.
testSystem.objects[objects[1]] = {}
testSystem.objects[objects[1]]["COM"] = objects[0]
testSystem.findCentre()
solution = testSystem.shortestPath('SAN','YOU')
print(solution)
# Subtract 2 because we don't want to count us or santa
print(len(solution) - 2)
'''
This is working out the orbits with the real data
'''
# Setup our system
thisSystem = CelestialSystem()
# Open the file that has inputs
with open('Day06-input.txt') as f:
# Let's loop over the data and put it into a more testable format
for line in f:
objects = line.replace('\n','').split(')')
# Add each item in the list we're inputting into the overall list as a dictionary of its own.
thisSystem.objects[objects[1]] = {}
thisSystem.objects[objects[1]]["COM"] = objects[0]
thisSystem.findCentre()
solution = thisSystem.shortestPath('SAN','YOU')
print(solution)
# Subtract 2 because we don't want to count us or santa
print(len(solution) - 2)
```
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
#export
from fastai.torch_basics import *
from fastai.data.all import *
#hide
from nbdev.showdoc import *
#default_exp text.core
#default_cls_lvl 3
if sys.platform == "win32":
import _locale
_locale._getdefaultlocale = (lambda *args: ['en_US', 'UTF-8'])
```
# Text core
> Basic function to preprocess text before assembling it in a `DataLoaders`.
```
#export
import spacy,html
from spacy.symbols import ORTH
```
## Preprocessing rules
The following are rules applied to texts before or after they are tokenized.
```
#export
#special tokens
UNK, PAD, BOS, EOS, FLD, TK_REP, TK_WREP, TK_UP, TK_MAJ = "xxunk xxpad xxbos xxeos xxfld xxrep xxwrep xxup xxmaj".split()
#export
_all_ = ["UNK", "PAD", "BOS", "EOS", "FLD", "TK_REP", "TK_WREP", "TK_UP", "TK_MAJ"]
#export
_re_spec = re.compile(r'([/#\\])')
def spec_add_spaces(t):
"Add spaces around / and #"
return _re_spec.sub(r' \1 ', t)
test_eq(spec_add_spaces('#fastai'), ' # fastai')
test_eq(spec_add_spaces('/fastai'), ' / fastai')
test_eq(spec_add_spaces('\\fastai'), ' \\ fastai')
#export
_re_space = re.compile(' {2,}')
def rm_useless_spaces(t):
"Remove multiple spaces"
return _re_space.sub(' ', t)
test_eq(rm_useless_spaces('a b c'), 'a b c')
#export
_re_rep = re.compile(r'(\S)(\1{2,})')
def replace_rep(t):
"Replace repetitions at the character level: cccc -- TK_REP 4 c"
def _replace_rep(m):
c,cc = m.groups()
return f' {TK_REP} {len(cc)+1} {c} '
return _re_rep.sub(_replace_rep, t)
```
It starts replacing at 3 repetitions of the same character or more.
```
test_eq(replace_rep('aa'), 'aa')
test_eq(replace_rep('aaaa'), f' {TK_REP} 4 a ')
#export
_re_wrep = re.compile(r'(?:\s|^)(\w+)\s+((?:\1\s+)+)\1(\s|\W|$)')
#hide
"""
Matches any word repeated at least four times with spaces between them
(?:\s|^) Non-Capture either a whitespace character or the beginning of text
(\w+) Capture any alphanumeric character
\s+ One or more whitespace
((?:\1\s+)+) Capture a repetition of one or more times \1 followed by one or more whitespace
\1 Occurrence of \1
(\s|\W|$) Capture last whitespace, non alphanumeric character or end of text
""";
#export
def replace_wrep(t):
"Replace word repetitions: word word word word -- TK_WREP 4 word"
def _replace_wrep(m):
c,cc,e = m.groups()
return f' {TK_WREP} {len(cc.split())+2} {c} {e}'
return _re_wrep.sub(_replace_wrep, t)
```
It starts replacing at 3 repetitions of the same word or more.
```
test_eq(replace_wrep('ah ah'), 'ah ah')
test_eq(replace_wrep('ah ah ah'), f' {TK_WREP} 3 ah ')
test_eq(replace_wrep('ah ah ah ah'), f' {TK_WREP} 4 ah ')
test_eq(replace_wrep('ah ah ah ah '), f' {TK_WREP} 4 ah ')
test_eq(replace_wrep('ah ah ah ah.'), f' {TK_WREP} 4 ah .')
test_eq(replace_wrep('ah ah ahi'), f'ah ah ahi')
#export
def fix_html(x):
"Various messy things we've seen in documents"
x = x.replace('#39;', "'").replace('amp;', '&').replace('#146;', "'").replace('nbsp;', ' ').replace(
'#36;', '$').replace('\\n', "\n").replace('quot;', "'").replace('<br />', "\n").replace(
'\\"', '"').replace('<unk>',UNK).replace(' @.@ ','.').replace(' @-@ ','-').replace('...',' …')
return html.unescape(x)
test_eq(fix_html('#39;bli#146;'), "'bli'")
test_eq(fix_html('Sarah amp; Duck...'), 'Sarah & Duck …')
test_eq(fix_html('a nbsp; #36;'), 'a $')
test_eq(fix_html('\\" <unk>'), f'" {UNK}')
test_eq(fix_html('quot; @.@ @-@ '), "' .-")
test_eq(fix_html('<br />text\\n'), '\ntext\n')
#export
_re_all_caps = re.compile(r'(\s|^)([A-Z]+[^a-z\s]*)(?=(\s|$))')
#hide
"""
Catches any word in all caps, even with ' or - inside
(\s|^) Capture either a whitespace or the beginning of text
([A-Z]+ Capture one capitalized letter or more...
[^a-z\s]*) ...followed by anything that's non lowercase or whitespace
(?=(\s|$)) Look ahead for a space or end of text
""";
#export
def replace_all_caps(t):
"Replace tokens in ALL CAPS by their lower version and add `TK_UP` before."
def _replace_all_caps(m):
tok = f'{TK_UP} ' if len(m.groups()[1]) > 1 else ''
return f"{m.groups()[0]}{tok}{m.groups()[1].lower()}"
return _re_all_caps.sub(_replace_all_caps, t)
test_eq(replace_all_caps("I'M SHOUTING"), f"{TK_UP} i'm {TK_UP} shouting")
test_eq(replace_all_caps("I'm speaking normally"), "I'm speaking normally")
test_eq(replace_all_caps("I am speaking normally"), "i am speaking normally")
#export
_re_maj = re.compile(r'(\s|^)([A-Z][^A-Z\s]*)(?=(\s|$))')
#hide
"""
Catches any capitalized word
(\s|^) Capture either a whitespace or the beginning of text
([A-Z] Capture exactly one capitalized letter...
[^A-Z\s]*) ...followed by anything that's not uppercase or whitespace
(?=(\s|$)) Look ahead for a space of end of text
""";
#export
def replace_maj(t):
"Replace tokens in Sentence Case by their lower version and add `TK_MAJ` before."
def _replace_maj(m):
tok = f'{TK_MAJ} ' if len(m.groups()[1]) > 1 else ''
return f"{m.groups()[0]}{tok}{m.groups()[1].lower()}"
return _re_maj.sub(_replace_maj, t)
test_eq(replace_maj("Jeremy Howard"), f'{TK_MAJ} jeremy {TK_MAJ} howard')
test_eq(replace_maj("I don't think there is any maj here"), ("i don't think there is any maj here"),)
#export
def lowercase(t, add_bos=True, add_eos=False):
"Converts `t` to lowercase"
return (f'{BOS} ' if add_bos else '') + t.lower().strip() + (f' {EOS}' if add_eos else '')
#export
def replace_space(t):
"Replace embedded spaces in a token with unicode line char to allow for split/join"
return t.replace(' ', '▁')
#export
defaults.text_spec_tok = [UNK, PAD, BOS, EOS, FLD, TK_REP, TK_WREP, TK_UP, TK_MAJ]
defaults.text_proc_rules = [fix_html, replace_rep, replace_wrep, spec_add_spaces, rm_useless_spaces,
replace_all_caps, replace_maj, lowercase]
defaults.text_postproc_rules = [replace_space]
```
## Tokenizing
A tokenizer is a class that must implement `__call__`. This method receives an iterator of texts and must return a generator with their tokenized versions. Here is the most basic example:
```
#export
class BaseTokenizer():
"Basic tokenizer that just splits on spaces"
def __init__(self, split_char=' ', **kwargs): self.split_char=split_char
def __call__(self, items): return (t.split(self.split_char) for t in items)
tok = BaseTokenizer()
test_eq(tok(["This is a text"]), [["This", "is", "a", "text"]])
tok = BaseTokenizer('x')
test_eq(tok(["This is a text"]), [["This is a te", "t"]])
#export
class SpacyTokenizer():
"Spacy tokenizer for `lang`"
def __init__(self, lang='en', special_toks=None, buf_sz=5000):
self.special_toks = ifnone(special_toks, defaults.text_spec_tok)
nlp = spacy.blank(lang, disable=["parser", "tagger", "ner"])
for w in self.special_toks: nlp.tokenizer.add_special_case(w, [{ORTH: w}])
self.pipe,self.buf_sz = nlp.pipe,buf_sz
def __call__(self, items):
return (L(doc).attrgot('text') for doc in self.pipe(map(str,items), batch_size=self.buf_sz))
#export
WordTokenizer = SpacyTokenizer
tok = SpacyTokenizer()
inp,exp = "This isn't the easiest text.",["This", "is", "n't", "the", "easiest", "text", "."]
test_eq(L(tok([inp,inp])), [exp,exp])
#export
class TokenizeWithRules:
"A wrapper around `tok` which applies `rules`, then tokenizes, then applies `post_rules`"
def __init__(self, tok, rules=None, post_rules=None):
self.rules = L(ifnone(rules, defaults.text_proc_rules))
self.post_f = compose(*L(ifnone(post_rules, defaults.text_postproc_rules)))
self.tok = tok
def __call__(self, batch):
return (L(o).map(self.post_f) for o in self.tok(maps(*self.rules, batch)))
f = TokenizeWithRules(BaseTokenizer(),rules=[replace_all_caps])
test_eq(f(["THIS isn't a problem"]), [[TK_UP, 'this', "isn't", 'a', 'problem']])
f = TokenizeWithRules(SpacyTokenizer())
test_eq(f(["This isn't a problem"]), [[BOS, TK_MAJ, 'this', 'is', "n't", 'a', 'problem']])
f = TokenizeWithRules(BaseTokenizer(split_char="'"), rules=[])
test_eq(f(["This isn't a problem"]), [['This▁isn', 't▁a▁problem']])
```
The main function that will be called during one of the processes handling tokenization. It will iterate through the `batch` of texts, apply the `rules` to them and then tokenize them.
```
texts = ["this is a text", "this is another text"]
tok = TokenizeWithRules(BaseTokenizer(), texts.__getitem__)
test_eq(tok([0,1]), [['this', 'is', 'a', 'text'],['this', 'is', 'another', 'text']])
#export
@delegates(TokenizeWithRules)
def tokenize1(text, tok, **kwargs):
"Call `TokenizeWithRules` with a single text"
return first(TokenizeWithRules(tok=tok, **kwargs)([text]))
test_eq(tokenize1("This isn't a problem", SpacyTokenizer()),
[BOS, TK_MAJ, 'this', 'is', "n't", 'a', 'problem'])
test_eq(tokenize1("This isn't a problem", tok=BaseTokenizer(), rules=[]),
['This',"isn't",'a','problem'])
#export
def parallel_tokenize(items, tok=None, rules=None, n_workers=defaults.cpus, **kwargs):
"Calls optional `setup` on `tok` before launching `TokenizeWithRules` using `parallel_gen"
if tok is None: tok = WordTokenizer()
if hasattr(tok, 'setup'): tok.setup(items, rules)
return parallel_gen(TokenizeWithRules, items, tok=tok, rules=rules, n_workers=n_workers, **kwargs)
```
Note that since this uses `parallel_gen` behind the scenes, the generator returned contains tuples of indices and results. There is no guarantee that the results are returned in order, so you should sort by the first item of the tuples (the indices) if you need them ordered.
```
res = parallel_tokenize(['0 1', '1 2'], rules=[], n_workers=2)
idxs,toks = zip(*L(res).sorted(itemgetter(0)))
test_eq(toks, [['0','1'],['1','2']])
#hide
res1 = parallel_tokenize(['0 1', '1 2'], tok=BaseTokenizer(), rules=[], n_workers=0)
idxs1,toks1 = zip(*L(res1).sorted(itemgetter(0)))
test_eq(toks, toks1)
```
### Tokenize texts in files
Preprocessing function for texts in filenames. Tokenized texts will be saved in a similar fashion in a directory suffixed with `_tok` in the parent folder of `path` (override with `output_dir`). This directory is the return value.
```
#export
fn_counter_pkl = 'counter.pkl'
fn_lengths_pkl = 'lengths.pkl'
#export
def _tokenize_files(func, files, path, output_dir=None, output_names=None, n_workers=defaults.cpus, rules=None, tok=None,
encoding='utf8', skip_if_exists=False):
"Tokenize text `files` in parallel using `n_workers`"
if tok is None: tok = WordTokenizer()
output_dir = Path(ifnone(output_dir, path.parent/f'{path.name}_tok'))
if skip_if_exists and output_dir.exists(): return output_dir
output_dir.mkdir(exist_ok=True)
if output_names is None: output_names = L(output_dir/f.relative_to(path) for f in files)
rules = partial(Path.read_text, encoding=encoding) + L(ifnone(rules, defaults.text_proc_rules.copy()))
lengths,counter = {},Counter()
for i,tok in parallel_tokenize(files, tok, rules, n_workers=n_workers):
out = func(i,output_dir)
out.mk_write(' '.join(tok), encoding=encoding)
lengths[str(files[i].relative_to(path))] = len(tok)
counter.update(tok)
save_pickle(output_dir/fn_lengths_pkl, lengths)
save_pickle(output_dir/fn_counter_pkl, counter)
return output_dir
#export
@delegates(_tokenize_files)
def tokenize_folder(path, extensions=None, folders=None, output_dir=None, skip_if_exists=True, **kwargs):
"Tokenize text files in `path` in parallel using `n_workers`"
path,extensions = Path(path),ifnone(extensions, ['.txt'])
files = get_files(path, extensions=extensions, recurse=True, folders=folders)
def _f(i,output_dir): return output_dir/files[i].relative_to(path)
return _tokenize_files(_f, files, path, skip_if_exists=skip_if_exists, **kwargs)
```
The result will be in `output_dir` (defaults to a folder in the same parent directory as `path`, with `_tok` added to `path.name`) with the same structure as in `path`. Tokenized texts for a given file will be in the file having the same name in `output_dir`. Additionally, the length (in tokens) of each text is stored in `output_dir/lengths.pkl` and the count of all words in `output_dir/counter.pkl`.
`extensions` will default to `['.txt']` and all text files in `path` are treated unless you specify a list of folders in `folders`. `rules` (that defaults to `defaults.text_proc_rules`) are applied to each text before going in the tokenizer.
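As a quick (hypothetical) usage sketch, where the folder path and sub-folder names below are made up and not part of this notebook:
```
# Hypothetical call: 'path/to/texts' and its 'train'/'test' sub-folders are assumptions
out_dir = tokenize_folder(Path('path/to/texts'), folders=['train', 'test'], n_workers=2)
lengths = load_pickle(out_dir/fn_lengths_pkl)
counter = load_pickle(out_dir/fn_counter_pkl)
```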
```
#export
@delegates(_tokenize_files)
def tokenize_files(files, path, output_dir, output_names=None, **kwargs):
"Tokenize text `files` in parallel using `n_workers`"
if output_names is None: output_names = L(output_dir/f.relative_to(path) for f in files)
def _f(i,output_dir): return output_dir/output_names[i]
return _tokenize_files(_f, files, path, output_dir=output_dir, **kwargs)
```
### Tokenize texts in a dataframe
```
#export
def _join_texts(df, mark_fields=False):
"Join texts in row `idx` of `df`, marking each field with `FLD` if `mark_fields=True`"
text_col = (f'{FLD} {1} ' if mark_fields else '' ) + df.iloc[:,0].astype(str)
for i in range(1,len(df.columns)):
text_col += (f' {FLD} {i+1} ' if mark_fields else ' ') + df.iloc[:,i].astype(str)
return text_col.values
#hide
texts = [f"This is an example of text {i}" for i in range(10)]
df = pd.DataFrame({'text': texts, 'text1': texts}, columns=['text', 'text1'])
col = _join_texts(df, mark_fields=True)
for i in range(len(df)):
test_eq(col[i], f'{FLD} 1 This is an example of text {i} {FLD} 2 This is an example of text {i}')
#export
def tokenize_texts(texts, n_workers=defaults.cpus, rules=None, tok=None):
"Tokenize `texts` in parallel using `n_workers`"
rules = L(ifnone(rules, defaults.text_proc_rules.copy()))
outputs = L(parallel_tokenize(texts, tok=tok, rules=rules, n_workers=n_workers)
).sorted().itemgot(1)
return outputs
#export
def tokenize_df(df, text_cols, n_workers=defaults.cpus, rules=None, mark_fields=None,
tok=None, tok_text_col="text"):
"Tokenize texts in `df[text_cols]` in parallel using `n_workers` and stores them in `df[tok_text_col]`"
text_cols = [df.columns[c] if isinstance(c, int) else c for c in L(text_cols)]
#mark_fields defaults to False if there is one column of texts, True if there are multiple
if mark_fields is None: mark_fields = len(text_cols)>1
rules = L(ifnone(rules, defaults.text_proc_rules.copy()))
texts = _join_texts(df[text_cols], mark_fields=mark_fields)
outputs = L(parallel_tokenize(texts, tok, rules, n_workers=n_workers)
).sorted().itemgot(1)
other_cols = df.columns[~df.columns.isin(text_cols)]
res = df[other_cols].copy()
res[tok_text_col] = outputs
res[f'{tok_text_col}_length'] = [len(o) for o in outputs]
return res,Counter(outputs.concat())
```
This function returns a new dataframe with the same non-text columns, a column named text that contains the tokenized texts and a column named text_length that contains their respective lengths. It also returns a counter of all seen words to quickly build a vocabulary afterward.
`rules` (that defaults to `defaults.text_proc_rules`) are applied to each text before going in the tokenizer. If `mark_fields` isn't specified, it defaults to `False` when there is a single text column, `True` when there are several. In that case, the texts in each of those columns are joined with `FLD` markers followed by the number of the field.
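A small usage sketch (the dataframe below is made up for illustration):
```
# A made-up dataframe with one text column and one label column
df_demo = pd.DataFrame({'text': ['This is a test', 'Another test'], 'label': [0, 1]})
out, count = tokenize_df(df_demo, text_cols='text', n_workers=0)
out.head()
```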
```
#export
def tokenize_csv(fname, text_cols, outname=None, n_workers=4, rules=None, mark_fields=None,
tok=None, header='infer', chunksize=50000):
"Tokenize texts in the `text_cols` of the csv `fname` in parallel using `n_workers`"
df = pd.read_csv(fname, header=header, chunksize=chunksize)
outname = Path(ifnone(outname, fname.parent/f'{fname.stem}_tok.csv'))
cnt = Counter()
for i,dfp in enumerate(df):
out,c = tokenize_df(dfp, text_cols, n_workers=n_workers, rules=rules,
mark_fields=mark_fields, tok=tok)
out.text = out.text.str.join(' ')
out.to_csv(outname, header=(None,header)[i==0], index=False, mode=('a','w')[i==0])
cnt.update(c)
save_pickle(outname.with_suffix('.pkl'), cnt)
#export
def load_tokenized_csv(fname):
"Utility function to quickly load a tokenized csv ans the corresponding counter"
fname = Path(fname)
out = pd.read_csv(fname)
for txt_col in out.columns[1:-1]:
out[txt_col] = tuple(out[txt_col].str.split(' '))
return out,load_pickle(fname.with_suffix('.pkl'))
```
The result will be written in a new csv file in `outname` (defaults to the same as `fname` with the suffix `_tok.csv`) and will have the same header as the original file, the same non-text columns, a text and a text_length column as described in `tokenize_df`.
`rules` (that defaults to `defaults.text_proc_rules`) are applied to each text before going in the tokenizer. If `mark_fields` isn't specified, it defaults to `False` when there is a single text column, `True` when there are several. In that case, the texts in each of those columns are joined with `FLD` markers followed by the number of the field.
The csv file is opened with `header` and optionally with blocks of `chunksize` at a time. If this argument is passed, each chunk is processed independently and saved in the output file to save memory usage.
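A minimal sketch of the round trip (the csv file name here is an assumption):
```
# 'reviews.csv' is a made-up file assumed to have a 'text' column
tokenize_csv(Path('reviews.csv'), text_cols='text', chunksize=10000)
out, count = load_tokenized_csv(Path('reviews_tok.csv'))
```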
```
def _prepare_texts(tmp_d):
"Prepare texts in a folder struct in tmp_d, a csv file and returns a dataframe"
path = Path(tmp_d)/'tmp'
path.mkdir()
for d in ['a', 'b', 'c']:
(path/d).mkdir()
for i in range(5):
with open(path/d/f'text{i}.txt', 'w') as f: f.write(f"This is an example of text {d} {i}")
texts = [f"This is an example of text {d} {i}" for i in range(5) for d in ['a', 'b', 'c']]
df = pd.DataFrame({'text': texts, 'label': list(range(15))}, columns=['text', 'label'])
csv_fname = tmp_d/'input.csv'
df.to_csv(csv_fname, index=False)
return path,df,csv_fname
#hide
# integration test
with tempfile.TemporaryDirectory() as tmp_d:
path,df,csv_fname = _prepare_texts(Path(tmp_d))
#Tokenize as folders
tokenize_folder(path)
outp = Path(tmp_d)/'tmp_tok'
for d in ['a', 'b', 'c']:
p = outp/d
for i in range(5):
test_eq((p/f'text{i}.txt').read_text(), ' '.join([
BOS, TK_MAJ, 'this', 'is', 'an', 'example', 'of', 'text', d, str(i) ]))
cnt_a = load_pickle(outp/fn_counter_pkl)
test_eq(cnt_a['this'], 15)
test_eq(cnt_a['a'], 5)
test_eq(cnt_a['0'], 3)
#Tokenize as files
files = get_text_files(path)
tokenize_files(files, path, output_dir=path/'d')
for f in files:
test_eq((path/'d'/f.relative_to(path)).read_text(), ' '.join([
BOS, TK_MAJ, 'this', 'is', 'an', 'example', 'of', 'text', f.parent.name, f.name[4]]))
#Tokenize as individual texts
out = tokenize_texts(df['text'].values)
test_eq(out, [(outp/d/f'text{i}.txt').read_text().split(' ') for i in range(5) for d in ['a', 'b', 'c']])
#Tokenize as a dataframe
out,cnt_b = tokenize_df(df, text_cols='text')
test_eq(list(out.columns), ['label', 'text', 'text_length'])
test_eq(out['label'].values, df['label'].values)
test_eq(list(out['text']), [(outp/d/f'text{i}.txt').read_text().split(' ') for i in range(5) for d in ['a', 'b', 'c']])
test_eq(cnt_a, cnt_b)
#Tokenize as a csv
out_fname = Path(tmp_d)/'output.csv'
tokenize_csv(csv_fname, text_cols='text', outname=out_fname)
a,b = load_tokenized_csv(out_fname)
test_eq((out,cnt_b), load_tokenized_csv(out_fname))
```
## `Tokenizer` -
```
#export
class Tokenizer(Transform):
"Provides a consistent `Transform` interface to tokenizers operating on `DataFrame`s and folders"
input_types = (str, list, L, tuple, Path)
def __init__(self, tok, rules=None, counter=None, lengths=None, mode=None, sep=' '):
if isinstance(tok,type): tok=tok()
store_attr('tok,counter,lengths,mode,sep')
self.rules = defaults.text_proc_rules if rules is None else rules
@classmethod
@delegates(tokenize_df, keep=True)
def from_df(cls, text_cols, tok=None, rules=None, sep=' ', **kwargs):
if tok is None: tok = WordTokenizer()
res = cls(tok, rules=rules, mode='df')
res.kwargs,res.train_setup = merge({'tok': tok}, kwargs),False
res.text_cols,res.sep = text_cols,sep
return res
@classmethod
@delegates(tokenize_folder, keep=True)
def from_folder(cls, path, tok=None, rules=None, **kwargs):
path = Path(path)
if tok is None: tok = WordTokenizer()
output_dir = tokenize_folder(path, tok=tok, rules=rules, **kwargs)
res = cls(tok, counter=load_pickle(output_dir/fn_counter_pkl),
lengths=load_pickle(output_dir/fn_lengths_pkl), rules=rules, mode='folder')
res.path,res.output_dir = path,output_dir
return res
def setups(self, dsets):
if not self.mode == 'df' or not isinstance(dsets.items, pd.DataFrame): return
dsets.items,count = tokenize_df(dsets.items, self.text_cols, rules=self.rules, **self.kwargs)
if self.counter is None: self.counter = count
return dsets
def encodes(self, o:Path):
if self.mode=='folder' and str(o).startswith(str(self.path)):
tok = self.output_dir/o.relative_to(self.path)
return L(tok.read_text().split(' '))
else: return self._tokenize1(o.read_text())
def encodes(self, o:str): return self._tokenize1(o)
def _tokenize1(self, o): return first(self.tok([compose(*self.rules)(o)]))
def get_lengths(self, items):
if self.lengths is None: return None
if self.mode == 'df':
            if isinstance(items, pd.DataFrame) and 'text_length' in items.columns: return items['text_length'].values
if self.mode == 'folder':
try:
res = [self.lengths[str(Path(i).relative_to(self.path))] for i in items]
if len(res) == len(items): return res
except: return None
def decodes(self, o): return TitledStr(self.sep.join(o))
with tempfile.TemporaryDirectory() as tmp_d:
path,df,csv_fname = _prepare_texts(Path(tmp_d))
items = get_text_files(path)
splits = RandomSplitter()(items)
dsets = Datasets(items, [Tokenizer.from_folder(path)], splits=splits)
print(dsets.train[0])
dsets = Datasets(df, [Tokenizer.from_df('text')], splits=splits)
print(dsets.train[0][0].text)
tst = test_set(dsets, ['This is a test', 'this is another test'])
test_eq(tst, [(['xxbos', 'xxmaj', 'this','is','a','test'],),
(['xxbos','this','is','another','test'],)])
```
## Sentencepiece
```
#export
eu_langs = ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu",
"it","lt","lv","mt","nl","pl","pt","ro","sk","sl","sv"] # all European langs
#export
class SentencePieceTokenizer():#TODO: pass the special tokens symbol to sp
"SentencePiece tokenizer for `lang`"
def __init__(self, lang='en', special_toks=None, sp_model=None, vocab_sz=None, max_vocab_sz=30000,
model_type='unigram', char_coverage=None, cache_dir='tmp'):
try: from sentencepiece import SentencePieceTrainer,SentencePieceProcessor
except ImportError:
raise Exception('sentencepiece module is missing: run `pip install sentencepiece!=0.1.90,!=0.1.91`')
self.sp_model,self.cache_dir = sp_model,Path(cache_dir)
self.vocab_sz,self.max_vocab_sz,self.model_type = vocab_sz,max_vocab_sz,model_type
self.char_coverage = ifnone(char_coverage, 0.99999 if lang in eu_langs else 0.9998)
self.special_toks = ifnone(special_toks, defaults.text_spec_tok)
if sp_model is None: self.tok = None
else:
self.tok = SentencePieceProcessor()
self.tok.Load(str(sp_model))
os.makedirs(self.cache_dir, exist_ok=True)
def _get_vocab_sz(self, raw_text_path):
cnt = Counter()
with open(raw_text_path, 'r') as f:
for line in f.readlines():
cnt.update(line.split())
if len(cnt)//4 > self.max_vocab_sz: return self.max_vocab_sz
res = len(cnt)//4
while res%8 != 0: res+=1
return max(res,29)
def train(self, raw_text_path):
"Train a sentencepiece tokenizer on `texts` and save it in `path/tmp_dir`"
from sentencepiece import SentencePieceTrainer
vocab_sz = self._get_vocab_sz(raw_text_path) if self.vocab_sz is None else self.vocab_sz
spec_tokens = ['\u2581'+s for s in self.special_toks]
SentencePieceTrainer.Train(" ".join([
f"--input={raw_text_path} --vocab_size={vocab_sz} --model_prefix={self.cache_dir/'spm'}",
f"--character_coverage={self.char_coverage} --model_type={self.model_type}",
f"--unk_id={len(spec_tokens)} --pad_id=-1 --bos_id=-1 --eos_id=-1 --minloglevel=2",
f"--user_defined_symbols={','.join(spec_tokens)} --hard_vocab_limit=false"]))
raw_text_path.unlink()
return self.cache_dir/'spm.model'
def setup(self, items, rules=None):
from sentencepiece import SentencePieceProcessor
if rules is None: rules = []
if self.tok is not None: return {'sp_model': self.sp_model}
raw_text_path = self.cache_dir/'texts.out'
with open(raw_text_path, 'w') as f:
for t in progress_bar(maps(*rules, items), total=len(items), leave=False):
f.write(f'{t}\n')
sp_model = self.train(raw_text_path)
self.tok = SentencePieceProcessor()
self.tok.Load(str(sp_model))
return {'sp_model': sp_model}
def __call__(self, items):
if self.tok is None: self.setup(items)
for t in items: yield self.tok.EncodeAsPieces(t)
#export
SubwordTokenizer = SentencePieceTokenizer
texts = [f"This is an example of text {i}" for i in range(10)]
df = pd.DataFrame({'text': texts, 'label': list(range(10))}, columns=['text', 'label'])
out,cnt = tokenize_df(df, text_cols='text', tok=SentencePieceTokenizer(vocab_sz=34), n_workers=1)
with tempfile.TemporaryDirectory() as tmp_d:
path,df,csv_fname = _prepare_texts(Path(tmp_d))
items = get_text_files(path)
splits = RandomSplitter()(items)
tok = SentencePieceTokenizer(special_toks=[])
dsets = Datasets(items, [Tokenizer.from_folder(path, tok=tok)], splits=splits)
print(dsets.train[0][0])
with warnings.catch_warnings():
dsets = Datasets(df, [Tokenizer.from_df('text', tok=tok)], splits=splits)
print(dsets.train[0][0].text)
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
# Automatic Speech Recognition with Speaker Diarization
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
BRANCH = 'main'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]
## Install TorchAudio
!pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html
```
# Introduction
Speaker diarization lets us figure out "who spoke when" in a transcription. Without speaker diarization, we cannot distinguish the speakers in the transcript generated by automatic speech recognition (ASR). Nowadays, ASR combined with speaker diarization is widely used in many tasks, ranging from analyzing meeting transcripts to media indexing.
In this tutorial, we demonstrate how to get ASR transcriptions combined with speaker labels. Since we don't cover the details of producing the ASR or diarization results, please refer to the following links for a more in-depth description.
If you need a detailed understanding of transcribing words with ASR, refer to the [ASR Tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb).
For detailed parameter settings and execution of speaker diarization, refer to the [Diarization Inference](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/speaker_tasks/Speaker_Diarization_Inference.ipynb) tutorial.
An example script that runs ASR and speaker diarization together can be found at [ASR with Diarization](https://github.com/NVIDIA/NeMo/blob/main/examples/speaker_tasks/diarization/asr_with_diarization.py).
### Speaker diarization in ASR pipeline
The speaker diarization result in an ASR pipeline should align well with the ASR output. Thus, we use the ASR output to create Voice Activity Detection (VAD) timestamps and obtain the segments we want to diarize. The segments obtained from the VAD timestamps are further split into sub-segments in the speaker diarization step. Finally, after obtaining the speaker labels from speaker diarization, we match the decoded words with the speaker labels to generate a transcript with speaker labels.
ASR → VAD timestamps and decoded words → speaker diarization → speaker label matching
### Import libraries
Let's first import nemo asr and other libraries for visualization purposes.
```
import nemo.collections.asr as nemo_asr
import numpy as np
from IPython.display import Audio, display
import librosa
import os
import wget
import matplotlib.pyplot as plt
import nemo
from nemo.collections.asr.parts.utils.diarization_utils import ASR_DIAR_OFFLINE
import glob
import pprint
pp = pprint.PrettyPrinter(indent=4)
```
We demonstrate this tutorial using a merged AN4 audio clip. The merged clip contains the speech of two speakers (male and female) reading dates in different formats. Run the following script to download the audio clip and play it.
```
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
if not os.path.exists(os.path.join(data_dir,'an4_diarize_test.wav')):
AUDIO_FILENAME = wget.download(an4_audio_url, data_dir)
else:
AUDIO_FILENAME = os.path.join(data_dir,'an4_diarize_test.wav')
audio_file_list = glob.glob(f"{data_dir}/*.wav")
print("Input audio file list: \n", audio_file_list)
signal, sample_rate = librosa.load(AUDIO_FILENAME, sr=None)
display(Audio(signal,rate=sample_rate))
```
`display_waveform()` and `get_color()` functions are defined for displaying the waveform with diarization results.
```
def display_waveform(signal,text='Audio',overlay_color=[]):
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.scatter(np.arange(len(signal)),signal,s=1,marker='o',c='k')
if len(overlay_color):
plt.scatter(np.arange(len(signal)),signal,s=1,marker='o',c=overlay_color)
fig.suptitle(text, fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
plt.ylabel('signal strength', fontsize=14);
plt.axis([0,len(signal),-0.5,+0.5])
time_axis,_ = plt.xticks();
plt.xticks(time_axis[:-1],time_axis[:-1]/sample_rate);
COLORS="b g c m y".split()
def get_color(signal,speech_labels,sample_rate=16000):
c=np.array(['k']*len(signal))
for time_stamp in speech_labels:
start,end,label=time_stamp.split()
start,end = int(float(start)*16000),int(float(end)*16000),
if label == "speech":
code = 'red'
else:
code = COLORS[int(label.split('_')[-1])]
c[start:end]=code
return c
```
Using the above function, we can display the waveform of the example audio clip.
```
display_waveform(signal)
```
### Parameter setting for ASR and diarization
First, we need to setup the following parameters for ASR and diarization. We start our demonstration by first transcribing the audio recording using our pretrained ASR model `QuartzNet15x5Base-En` and use the CTC output probabilities to get timestamps for the spoken words. We then use these timestamps to get speaker label information using speaker diarizer model.
```
from omegaconf import OmegaConf
import shutil
CONFIG_URL = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_tasks/diarization/conf/offline_diarization_with_asr.yaml"
if not os.path.exists(os.path.join(data_dir,'offline_diarization_with_asr.yaml')):
CONFIG = wget.download(CONFIG_URL, data_dir)
else:
CONFIG = os.path.join(data_dir,'offline_diarization_with_asr.yaml')
cfg = OmegaConf.load(CONFIG)
print(OmegaConf.to_yaml(cfg))
```
Speaker diarization scripts commonly expect the following arguments:
1. manifest_filepath : Path to manifest file containing json lines of format: {"audio_filepath": "/path/to/audio_file", "offset": 0, "duration": null, "label": "infer", "text": "-", "num_speakers": null, "rttm_filepath": "/path/to/rttm/file", "uem_filepath"="/path/to/uem/filepath"}
2. out_dir : directory where outputs and intermediate files are stored.
3. oracle_vad: if True, speech activity labels are extracted from the RTTM files; if False, either `vad.model_path` or an external manifest path containing speech activity labels has to be passed.
Mandatory fields are `audio_filepath`, `offset`, `duration`, `label` and `text`. For the rest: pass the number of speakers if you would like to evaluate with a known speaker count, else `null`. If you would like to score the system against known RTTMs, pass their paths as well, else `null`. The UEM file is used to score only part of your audio for evaluation purposes, so pass it if you would like to evaluate on it, else `null`.
**Note:** we expect the audio and the corresponding RTTM to have the **same base name**, and the name should be **unique**.
For example: if the audio file is named **test_an4**.wav, we expect the corresponding RTTM file (if provided) to be named **test_an4**.rttm (note the matching **test_an4** base name).
Let's create a manifest with the AN4 audio and the RTTM we have available. If you have more than one file, you may also use the script `NeMo/scripts/speaker_tasks/rttm_to_manifest.py` to generate a manifest file from a list of audio files and, optionally, RTTM files.
```
# Create a manifest for input with below format.
# {"audio_filepath": "/path/to/audio_file", "offset": 0, "duration": null, "label": "infer", "text": "-",
# "num_speakers": null, "rttm_filepath": "/path/to/rttm/file", "uem_filepath"="/path/to/uem/filepath"}
import json
meta = {
'audio_filepath': AUDIO_FILENAME,
'offset': 0,
'duration':None,
'label': 'infer',
'text': '-',
'num_speakers': 2,
'rttm_filepath': None,
'uem_filepath' : None
}
with open(os.path.join(data_dir,'input_manifest.json'),'w') as fp:
json.dump(meta,fp)
fp.write('\n')
cfg.diarizer.manifest_filepath = os.path.join(data_dir,'input_manifest.json')
!cat {cfg.diarizer.manifest_filepath}
```
Set the parameters required for diarization. Here we get voice activity labels from ASR, which is enabled through the parameter `cfg.diarizer.asr.parameters.asr_based_vad`.
```
pretrained_speaker_model='ecapa_tdnn'
cfg.diarizer.manifest_filepath = cfg.diarizer.manifest_filepath
cfg.diarizer.out_dir = data_dir #Directory to store intermediate files and prediction outputs
cfg.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
cfg.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
cfg.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
cfg.diarizer.clustering.parameters.oracle_num_speakers=True
# USE VAD generated from ASR timestamps
cfg.diarizer.asr.model_path = 'QuartzNet15x5Base-En'
cfg.diarizer.oracle_vad = False # ----> ORACLE VAD
cfg.diarizer.asr.parameters.asr_based_vad = True
cfg.diarizer.asr.parameters.threshold=300
```
Let's create an instance of the `ASR_DIAR_OFFLINE` class. We pass `cfg.diarizer.asr.parameters` to set up the parameters for both ASR and diarization.
```
from nemo.collections.asr.parts.utils.speaker_utils import audio_rttm_map
asr_diar_offline = ASR_DIAR_OFFLINE(**cfg.diarizer.asr.parameters)
asr_diar_offline.root_path = cfg.diarizer.out_dir
AUDIO_RTTM_MAP = audio_rttm_map(cfg.diarizer.manifest_filepath)
asr_diar_offline.AUDIO_RTTM_MAP = AUDIO_RTTM_MAP
asr_model = asr_diar_offline.set_asr_model(cfg.diarizer.asr.model_path)
```
### Run ASR and get word timestamps
Before we run speaker diarization, we should run ASR and use its output to generate the decoded words and timestamps for those words. The following two variables are obtained from the `run_ASR()` function:
- `words` (List[str]): contains the sequence of decoded words.
- `word_ts` (List[int]): contains the frame-level indices of the start and the end of each word.
```
word_list, word_ts_list = asr_diar_offline.run_ASR(asr_model)
print("Decoded word output: \n", word_list[0])
print("Word-level timestamps \n", word_ts_list[0])
```
### Run diarization with extracted word timestamps
The ASR-based VAD output (`*.rttm` format) is converted into a VAD manifest (`*.json` format) file as part of this step.
Now that all the components for diarization are ready, let's run diarization by calling the `run_diarization()` function.
```
score = asr_diar_offline.run_diarization(cfg, word_ts_list)
```
The `run_diarization()` function creates an `an4_diarize_test.rttm` file. Let's see what is written in this RTTM file.
```
def read_file(path_to_file):
with open(path_to_file) as f:
contents = f.read().splitlines()
return contents
predicted_speaker_label_rttm_path = f"{data_dir}/pred_rttms/an4_diarize_test.rttm"
pred_rttm = read_file(predicted_speaker_label_rttm_path)
pp.pprint(pred_rttm)
from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels
pred_labels = rttm_to_labels(predicted_speaker_label_rttm_path)
color = get_color(signal, pred_labels)
display_waveform(signal,'Audio with Speaker Labels', color)
display(Audio(signal,rate=16000))
```
### Check the transcription output
Now that we have run both ASR and diarization, let's match the diarization result with the ASR result to get the final output. The `write_json_and_transcript()` function matches the diarization output `diar_labels` with `word_list` using the timestamp information in `word_ts_list`.
```
asr_output_dict = asr_diar_offline.write_json_and_transcript(word_list, word_ts_list)
```
After running the `write_json_and_transcript()` function, the transcription output will be located in the `./pred_rttms` folder; it shows the **start time and end time of each utterance, the speaker ID, and the words spoken** during that time.
```
transcription_path_to_file = f"{data_dir}/pred_rttms/an4_diarize_test.txt"
transcript = read_file(transcription_path_to_file)
pp.pprint(transcript)
```
Another output is the transcription in JSON format, which is saved in `./pred_rttms/an4_diarize_test.json`.
The JSON output includes information such as the **transcription, the estimated number of speakers (variable named `speaker_count`), the start and end time of each word and, most importantly, the speaker label for each word.**
```
transcription_path_to_file = f"{data_dir}/pred_rttms/an4_diarize_test.json"
json_contents = read_file(transcription_path_to_file)
pp.pprint(json_contents)
```
<a href="https://colab.research.google.com/github/nvisagan/DS-Unit-2-Kaggle-Challenge/blob/master/module3/assignment_kaggle_challenge_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science, Unit 2: Predictive Modeling
# Kaggle Challenge, Module 3
## Assignment
- [ ] [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.
- [ ] Continue to participate in our Kaggle challenge.
- [ ] Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.
- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [ ] Commit your notebook to your fork of the GitHub repo.
## Stretch Goals
### Reading
- Jake VanderPlas, [Python Data Science Handbook, Chapter 5.3](https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html), Hyperparameters and Model Validation
- Jake VanderPlas, [Statistics for Hackers](https://speakerdeck.com/jakevdp/statistics-for-hackers?slide=107)
- Ron Zacharski, [A Programmer's Guide to Data Mining, Chapter 5](http://guidetodatamining.com/chapter5/), 10-fold cross validation
- Sebastian Raschka, [A Basic Pipeline and Grid Search Setup](https://github.com/rasbt/python-machine-learning-book/blob/master/code/bonus/svm_iris_pipeline_and_gridsearch.ipynb)
- Peter Worcester, [A Comparison of Grid Search and Randomized Search Using Scikit Learn](https://blog.usejournal.com/a-comparison-of-grid-search-and-randomized-search-using-scikit-learn-29823179bc85)
### Doing
- In addition to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). Experiment with these alternatives (a minimal `GridSearchCV` sketch appears at the end of this section).
- _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for "Grid-Searching Which Model To Use" in Chapter 6:
> You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...
The example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines?
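As a rough sketch of the `GridSearchCV` alternative mentioned above — the pipeline steps and the parameter grid here are assumptions chosen to mirror the `RandomizedSearchCV` example at the end of this notebook, not part of the assignment:
```python
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
    ce.OrdinalEncoder(),
    SimpleImputer(),
    RandomForestClassifier(random_state=42)
)

# GridSearchCV tries every combination in the grid, so keep the grid small
param_grid = {
    'simpleimputer__strategy': ['mean', 'median'],
    'randomforestclassifier__n_estimators': [100, 300],
    'randomforestclassifier__max_depth': [10, 20, None],
}

grid = GridSearchCV(pipeline, param_grid=param_grid, cv=3,
                    scoring='accuracy', n_jobs=-1, verbose=1)
# grid.fit(X_train, y_train)   # X_train / y_train are defined in the code cell below
# print(grid.best_params_, grid.best_score_)
```
Randomized search scales better when you want to explore wide ranges; grid search is exhaustive, so it is best reserved for a handful of values per hyperparameter.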
### More Categorical Encodings
**1.** The article **[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)** mentions 4 encodings (a toy sketch of the last three follows this list):
- **"Categorical Encoding":** This means using the raw categorical values as-is, not encoded. Scikit-learn doesn't support this, but some tree algorithm implementations do. For example, [Catboost](https://catboost.ai/), or R's [rpart](https://cran.r-project.org/web/packages/rpart/index.html) package.
- **Numeric Encoding:** Synonymous with Label Encoding, or "Ordinal" Encoding with random order. We can use [category_encoders.OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html).
- **One-Hot Encoding:** We can use [category_encoders.OneHotEncoder](http://contrib.scikit-learn.org/categorical-encoding/onehot.html).
- **Binary Encoding:** We can use [category_encoders.BinaryEncoder](http://contrib.scikit-learn.org/categorical-encoding/binary.html).
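As a toy illustration of the last three encoders (the dataframe and column names below are made up):
```python
import pandas as pd
import category_encoders as ce

X = pd.DataFrame({'quality': ['good', 'bad', 'good', 'ok'],
                  'region':  ['north', 'south', 'south', 'east']})

# Numeric / ordinal encoding: each category is mapped to an integer
print(ce.OrdinalEncoder().fit_transform(X))

# One-hot encoding: one binary column per category value
print(ce.OneHotEncoder().fit_transform(X))

# Binary encoding: categories are indexed, then the index is written out in binary digits
print(ce.BinaryEncoder().fit_transform(X))
```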
**2.** The short video
**[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)** introduces an interesting idea: use both X _and_ y to encode categoricals.
Category Encoders has multiple implementations of this general concept:
- [CatBoost Encoder](http://contrib.scikit-learn.org/categorical-encoding/catboost.html)
- [James-Stein Encoder](http://contrib.scikit-learn.org/categorical-encoding/jamesstein.html)
- [Leave One Out](http://contrib.scikit-learn.org/categorical-encoding/leaveoneout.html)
- [M-estimate](http://contrib.scikit-learn.org/categorical-encoding/mestimate.html)
- [Target Encoder](http://contrib.scikit-learn.org/categorical-encoding/targetencoder.html)
- [Weight of Evidence](http://contrib.scikit-learn.org/categorical-encoding/woe.html)
Category Encoder's mean encoding implementations work for regression problems or binary classification problems.
For multi-class classification problems, you will need to temporarily reformulate it as binary classification. For example:
```python
encoder = ce.TargetEncoder(min_samples_leaf=..., smoothing=...) # Both parameters > 1 to avoid overfitting
X_train_encoded = encoder.fit_transform(X_train, y_train=='functional')
X_val_encoded = encoder.transform(X_train, y_val=='functional')
```
**3.** The **[dirty_cat](https://dirty-cat.github.io/stable/)** library has a Target Encoder implementation that works with multi-class classification.
```python
dirty_cat.TargetEncoder(clf_type='multiclass-clf')
```
It also implements an interesting idea called ["Similarity Encoder" for dirty categories](https://www.slideshare.net/GaelVaroquaux/machine-learning-on-non-curated-data-154905090).
However, it seems like dirty_cat doesn't handle missing values or unknown categories as well as category_encoders does. And you may need to use it with one column at a time, instead of with your whole dataframe.
**4. [Embeddings](https://www.kaggle.com/learn/embeddings)** can work well with sparse / high cardinality categoricals.
_**I hope it’s not too frustrating or confusing that there’s not one “canonical” way to encode categoricals. It’s an active area of research and experimentation! Maybe you can make your own contributions!**_
### BONUS: Stacking!
Here's some code you can use to "stack" multiple submissions, which is another form of ensembling:
```python
import pandas as pd
# Filenames of your submissions you want to ensemble
files = ['submission-01.csv', 'submission-02.csv', 'submission-03.csv']
target = 'status_group'
submissions = (pd.read_csv(file)[[target]] for file in files)
ensemble = pd.concat(submissions, axis='columns')
majority_vote = ensemble.mode(axis='columns')[0]
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission[target] = majority_vote
submission.to_csv('my-ultimate-ensemble-submission.csv', index=False)
```
```
import os, sys
in_colab = 'google.colab' in sys.modules
# If you're in Colab...
if in_colab:
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git
!git pull origin master
# Install required python packages
!pip install -r requirements.txt
# Change into directory for module
os.chdir('module3')
import pandas as pd
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv('../data/tanzania/train_features.csv'),
pd.read_csv('../data/tanzania/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
import category_encoders as ce
import numpy as np
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from scipy.stats import randint, uniform
from sklearn.ensemble import RandomForestClassifier
target = 'status_group'
high_cardinality = ['scheme_name', 'subvillage', 'wpt_name']
features = train.columns.drop([target] + high_cardinality)
X_train = train[features]
y_train = train[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier()
)
param_distributions = {
    'simpleimputer__strategy': ['mean', 'median'],
    'randomforestclassifier__n_estimators': randint(50, 500),
    'randomforestclassifier__max_depth': [5, 10, 15, 20, None],
    'randomforestclassifier__max_features': uniform(0, 1),
}
search = RandomizedSearchCV(
pipeline,
param_distributions=param_distributions,
n_iter=100,
cv=5,
scoring='accuracy',
verbose=10,
return_train_score=True,
n_jobs=-1
)
search.fit(X_train, y_train);
print('Best hyperparameters', search.best_params_)
print('Cross-validation accuracy', search.best_score_)
```
# Recap, Tips, and Tricks
This short lesson recaps what we've learned. It also expands on a few techniques covered in the previous lessons.
## For More Information on Ray and Anyscale
* [ray.io](https://ray.io): The Ray website. In particular:
* [Documentation](https://ray.readthedocs.io/en/latest/): The full Ray documentation
* [Blog](https://medium.com/distributed-computing-with-ray): The Ray blog
* [GitHub](https://github.com/ray-project/ray): The source code for Ray
* [anyscale.com](https://anyscale.com/): The company developing Ray and these tutorials. In particular:
* [Blog](https://anyscale.com/blog/): The Anyscale blog
* [Events](https://anyscale.com/events/): Online events, [Ray Summit](http://raysummit.org), and meetups
* [Academy](https://anyscale.com/academy/): Training for Ray and Anyscale products
* [Jobs](https://jobs.lever.co/anyscale): Yes, we're hiring!
* Community:
* [Ray Slack](https://ray-distributed.slack.com): The best forum for help on Ray. Use the `#tutorials` channel to ask for help on these tutorials!
* [ray-dev mailing list](https://groups.google.com/forum/?nomobile=true#!forum/ray-dev)
* [@raydistributed](https://twitter.com/raydistributed)
* [@anyscalecompute](https://twitter.com/anyscalecompute)
## General Task and Actor Tips
* To create a task from a function or an actor from a class, annotate it with `@ray.remote`.
* Invoke tasks and actor methods (including the constructor) with `foo.remote(...)`.
* Invocations return an `ObjectID` for a _future_. Use `ray.get(id)` to return the value.
* However, `ray.get()` blocks, so when you are waiting on a collection of futures, consider using `ray.wait()`: as results become available you can retrieve them with `ray.get()` (which won't block on already-finished results) and process them while the remaining tasks keep running. A minimal sketch of this pattern is shown after this list.
* Pick functions to parallelize that do enough work so that the Ray "remoting" overhead is not significant. Very short functions will actually yield lower performance if converted to tasks.
* Similarly, avoid too many actors, as each one is pinned to memory until no longer needed.
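Here is a minimal sketch of that `ray.wait()` pattern (the task body and the result processing are placeholders):
```python
import ray

@ray.remote
def work(i):
    return i * i  # placeholder for real work

futures = [work.remote(i) for i in range(10)]
while futures:
    # Block only until at least one result is ready
    ready, futures = ray.wait(futures, num_returns=1)
    for result in ray.get(ready):  # doesn't block: these results are already available
        print(result)              # placeholder for real processing
```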
### Using Existing Functions and Classes
An existing function can be used as a task by defining a new task function that calls it. For example:
```python
def work(args):
    do_work(args)

@ray.remote
def remote_work(args):
    return work(args)
```
This allows you to use both versions as appropriate.
Similarly, existing classes can be subclassed to create actors:
```python
class Counter():
    def __init__(self, init_count):
        self.count = init_count
    def increment(self):
        self.count += 1
        return self.count

@ray.remote
class RemoteCounter(Counter):
    def __init__(self, init_count):
        super().__init__(init_count)
    def get_count(self):
        return self.count
```
Note that we added a `get_count()` method, because member attributes can't be accessed directly, in contrast with normal classes.
However, Ray currently doesn't support an actor subclassing another actor. Only regular Python classes can be used.
### Profiling Code
There are two ways to profile performance of a running Ray application:
* `ray.timeline(file)` ([documentation](https://ray.readthedocs.io/en/latest/package-ref.html#ray.timeline)). Requires a Chrome web browser to view the data.
* Ray Dashboard profiling ([see below](#Profiling-Actors))
The Ray Dashboard method is more convenient, but it has a limitation discussed below.
To use the `ray.timeline(file)` approach, run it as follows:
```
my_long_task.remote(...)      # run the work you want to profile
# ... wait for the tasks to finish (e.g., with ray.get) ...
ray.timeline('timeline.txt')  # then dump the events collected so far to a file
```
Then open chrome://tracing in the Chrome web browser (only Chrome is supported) and click the _load_ button to load the file. To zoom in or out, click the asymmetric up-down arrow button. To move around, click the crossed-arrows button and drag the view. Click on a box in the timeline to see details about it.
Look for blocks corresponding to long-running tasks, and look for idle periods, which reflect processing done outside the context of Ray.
### Using Libraries
If tasks or actors call (or subclass) library code in your project and that code isn't in a subdirectory of the driver, make sure that the process starting Ray has the correct `PYTHONPATH` set to the library location. For example,
```python
PYTHONPATH=$PYTHONPATH:/path/to/library python MyRayAppDriver.py
```
### Cleaning Up Actors
Working in a constrained environment, you may find it useful to kill actors that are no longer needed but to which you still hold references (for example, in notebooks). More information is in the docs for [ray.kill()](https://ray.readthedocs.io/en/latest/package-ref.html#ray.kill).
* `ray.kill(actor_handle)`: Terminate abruptly
* `actor_handle.__ray_terminate__.remote()`: Clean up with a nicer termination
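A minimal sketch (assuming an actor class like the `RemoteCounter` defined earlier):
```python
counter = RemoteCounter.remote(0)  # create the actor
# ... use the actor ...
ray.kill(counter)                  # terminate it abruptly once it's no longer needed
```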
## Using the Ray Dashboard
### Opening the Dashboard
As it executes, `ray.init` prints the dashboard URL.
You can get the URL later if needed using `ray.get_webui_url()`.
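A minimal sketch:
```python
import ray

ray.init()                  # prints the dashboard URL as it starts
print(ray.get_webui_url())  # or retrieve the URL later
```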
### Profiling Actors
The _Logical View_ offers a powerful and convenient way to profile actor performance using [flame graphs](http://www.brendangregg.com/flamegraphs.html). Details are in the [Dashboard docs](https://ray.readthedocs.io/en/latest/ray-dashboard.html#ray-dashboard).
Unfortunately, you may be asked to enter the `sudo` password to use this feature, because of the way it instruments processes. Currently, the only way to get this to work with the Dashboard is to use _passwordless sudo_. On MacOS and Linux systems, it should be sufficient to add a line like the following to `/etc/sudoers` (edited using `sudo visudo`!):
```
yourusername ALL = (ALL) NOPASSWD: ALL
```
Carefully consider the security implications of this change!
## Debugging and Troubleshooting
### ray.init() Fails
If you get an error like `... INFO services.py:... -- Failed to connect to the redis server, retrying.`, it probably means you are running a VPN on your machine. [At this time](https://github.com/ray-project/ray/issues/6573), you can't use `ray.init()` with a VPN running. You'll have to stop your VPN to run `ray.init()`, then once it finishes, you can restart your VPN.
### Annoyances
If `ray.init()` worked (for example, you see a message like _View the Ray dashboard at localhost:8265_) and you're using a Mac, you may get several annoying dialogs asking whether you want to allow incoming connections for `python` and/or `redis-server`. Click "Accept" for each one and they shouldn't appear again during this lesson. MacOS is trying to verify whether these executables have been properly signed. Ray uses Redis, and if you installed Python using Anaconda or another mechanism, the executables probably aren't signed from MacOS's point of view. To permanently fix this problem, [see this StackExchange post](https://apple.stackexchange.com/questions/3271/how-to-get-rid-of-firewall-accept-incoming-connections-dialog).
## Thank You!
Thanks for going through this tutorial.
```
from bokeh.plotting import figure, output_notebook, show
from bokeh.palettes import brewer
from bokeh.io import export_svgs
import numpy as np
import json
import matplotlib.pyplot as plt
import scipy
import scipy.stats
import pathlib
import os
output_notebook()
from analyzer import analyzer
data = analyzer.load_trajectory(pathlib.Path('/dev/shm/response.traj'))
data
signal = analyzer.load_trajectory(pathlib.Path('/dev/shm/signal.traj'))
plt.plot(signal['timestamps'], signal['components']['S'] * 0.2);
plt.plot(data['timestamps'], data['components']['X'])
analyzer.save_signal('S', '/dev/shm/signal.traj', mean=500, correlation_time=10)
def load_signal(i):
return analyzer.load_trajectory(pathlib.Path('/data/signal/sig{}.traj'.format(i)))
def load_response(sig, res):
return analyzer.load_trajectory(pathlib.Path('/data/response/res{}-{}.traj'.format(sig, res)))
def show_hist(data):
hist, edges = np.histogram(data, bins='auto')
p = figure()
p.vbar(x=edges[:-1], width=np.diff(edges), top=hist)
show(p)
sig = load_signal(1)
sig2 = load_signal(2)
res = load_response(1,0)
show_hist(analyzer.likelihoods_given_signal(res, sig))
show_hist(analyzer.likelihoods_given_signal(res, sig2))
res
x = np.stack([s['timestamps'] for s in signals])
y = np.stack([s['components']['S'] for s in signals])
y
palette = brewer['Dark2'][8]
# create a new plot with a title and axis labels
p = figure(title="Trajectories", x_axis_label='t / s', y_axis_label='copies')
# add a line renderer with legend and line thickness
for (x_ax, y_ax), col in zip(zip(x, y), palette):
res_x = np.linspace(np.min(x_ax), np.max(x_ax), 1000)
p.line(res_x, np.interp(res_x, x_ax, y_ax), line_width=2, color=col)
show(p)
np.savetxt("tr2.txt", [x, y[1]])
np.concatenate(([int_x], int_y), axis=0)
p = figure()
p.line(int_x, int_y[0])
p.line(int_x, int_y[1])
show(p)
hist = np.histogram(np.log(np.diff(txt_data[0])), bins='auto', density=True)
p=figure()
p.vbar(hist[1][:-1], width=np.diff(hist[1]), top=hist[0])
show(p)
hist[0]
np.loadtxt("tr2.txt").shape
int_x
def ornstein_uhlenbeck_path(x0, t, mean_rev_speed, mean_rev_level, vola):
""" Simulates a sample path for an Ornstein-Uhlenbeck process."""
assert len(t) > 1
x = scipy.stats.norm.rvs(size=len(t))
x[0] = x0
dt = np.diff(t)
scale = std(dt, mean_rev_speed, vola)
x[1:] = x[1:] * scale
for i in range(1, len(x)):
x[i] += mean(x[i - 1], dt[i - 1], mean_rev_speed, mean_rev_level)
return x
def std(t, mean_rev_speed, vola):
return np.sqrt(variance(t, mean_rev_speed, vola))
def variance(t, mean_rev_speed, vola):
assert mean_rev_speed >= 0
assert vola >= 0
return vola * vola * (1.0 - np.exp(- 2.0 * mean_rev_speed * t)) / (2 * mean_rev_speed)
def mean(x0, t, mean_rev_speed, mean_rev_level):
assert mean_rev_speed >= 0
return x0 * np.exp(-mean_rev_speed * t) + (1.0 - np.exp(- mean_rev_speed * t)) * mean_rev_level
times = np.linspace(0, 100, 100000)
x = ornstein_uhlenbeck_path(10000, times, 0.001, 10000, 900)
p = figure()
p.line(times, x)
show(p)
json_obj = { 'timestamps': times.tolist(), 'components': [x.tolist()] }
with open("response/ouproc.txt", "w") as outfile:
json.dump(json_obj, outfile)
```
# Complex Arithmetic
This is a tutorial designed to introduce you to complex arithmetic.
This topic isn't particularly expansive, but it's important to understand it to be able to work with quantum computing.
This tutorial covers the following topics:
* Imaginary and complex numbers
* Basic complex arithmetic
* Complex plane
* Modulus operator
* Imaginary exponents
* Polar representation
If you need to look up some formulas quickly, you can find them in [this cheatsheet](https://github.com/microsoft/QuantumKatas/blob/main/quickref/qsharp-quick-reference.pdf).
If you are curious to learn more, you can find more information at [Wikipedia](https://en.wikipedia.org/wiki/Complex_number).
This notebook has several tasks that require you to write Python code to test your understanding of the concepts. If you are not familiar with Python, [here](https://docs.python.org/3/tutorial/index.html) is a good introductory tutorial for it.
Let's start by importing some useful mathematical functions and constants, and setting up a few things necessary for testing the exercises. **Do not skip this step**.
Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac).
```
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise
from typing import Tuple
import math
Complex = Tuple[float, float]
Polar = Tuple[float, float]
```
# Algebraic Perspective
## Imaginary numbers
For some purposes, real numbers aren't enough. Probably the most famous example is this equation:
$$x^{2} = -1$$
which has no solution for $x$ among real numbers. If, however, we abandon that constraint, we can do something interesting - we can define our own number. Let's say there exists some number that solves that equation. Let's call that number $i$.
$$i^{2} = -1$$
As we said before, $i$ can't be a real number. In that case, we'll call it an **imaginary unit**. However, there is no reason for us to define it as acting any different from any other number, other than the fact that $i^2 = -1$:
$$i + i = 2i \\
i - i = 0 \\
-1 \cdot i = -i \\
(-i)^{2} = -1$$
We'll call the number $i$ and its real multiples **imaginary numbers**.
> A good video introduction on imaginary numbers can be found [here](https://youtu.be/SP-YJe7Vldo).
### <span style="color:blue">Exercise 1</span>: Powers of $i$.
**Input:** An even integer $n$.
**Goal:** Return the $n$th power of $i$, or $i^n$.
> Fill in the missing code (denoted by `...`) and run the cell below to test your work.
```
@exercise
def imaginary_power(n : int) -> int:
# If n is divisible by 4
if n % 4 == 0:
return 1
else:
return -1
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-1:-Powers-of-$i$.).*
## Complex Numbers
Adding imaginary numbers to each other is quite simple, but what happens when we add a real number to an imaginary number? The result of that addition will be partly real and partly imaginary, otherwise known as a **complex number**. A complex number is simply the real part and the imaginary part being treated as a single number. Complex numbers are generally written as the sum of their two parts: $a + bi$, where both $a$ and $b$ are real numbers. For example, $3 + 4i$, or $-5 - 7i$ are valid complex numbers. Note that purely real or purely imaginary numbers can also be written as complex numbers: $2$ is $2 + 0i$, and $-3i$ is $0 - 3i$.
When performing operations on complex numbers, it is often helpful to treat them as polynomials in terms of $i$.
### <span style="color:blue">Exercise 2</span>: Complex addition.
**Inputs:**
1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.
2. A complex number $y = c + di$, represented as a tuple `(c, d)`.
**Goal:** Return the sum of these two numbers $x + y = z = g + hi$, represented as a tuple `(g, h)`.
> A tuple is a pair of numbers.
> You can make a tuple by putting two numbers in parentheses like this: `(3, 4)`.
> * You can access the $n$th element of tuple `x` like so: `x[n]`
> * For this tutorial, complex numbers are represented as tuples where the first element is the real part, and the second element is the real coefficient of the imaginary part
> * For example, $1 + 2i$ would be represented by a tuple `(1, 2)`, and $7 - 5i$ would be represented by `(7, -5)`.
>
> You can find more details about Python's tuple data type in the [official documentation](https://docs.python.org/3/library/stdtypes.html#tuples).
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Remember, adding complex numbers is just like adding polynomials. Add components of the same type - add the real part to the real part, and the imaginary part to the imaginary part. <br>
A video explanation can be found <a href="https://www.youtube.com/watch?v=SfbjqVyQljk">here</a>.
</details>
```
@exercise
def complex_add(x : Complex, y : Complex) -> Complex:
# You can extract elements from a tuple like this
a = x[0]
b = x[1]
c = y[0]
d = y[1]
# This creates a new variable and stores the real component into it
real = a + c
# Replace the ... with code to calculate the imaginary component
imaginary = b + d
# You can create a tuple like this
ans = (real, imaginary)
return ans
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-2:-Complex-addition.).*
### <span style="color:blue">Exercise 3</span>: Complex multiplication.
**Inputs:**
1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.
2. A complex number $y = c + di$, represented as a tuple `(c, d)`.
**Goal:** Return the product of these two numbers $x \cdot y = z = g + hi$, represented as a tuple `(g, h)`.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Remember, multiplying complex numbers is just like multiplying polynomials. Distribute one of the complex numbers:
$$(a + bi)(c + di) = a(c + di) + bi(c + di)$$
Then multiply through, and group the real and imaginary terms together.
<br/>
A video explanation can be found <a href="https://www.youtube.com/watch?v=cWn6g8Qqvs4">here</a>.
</details>
```
@exercise
def complex_mult(x : Complex, y : Complex) -> Complex:
    # Extract the components of each complex number
    a = x[0]
    b = x[1]
    c = y[0]
    d = y[1]
    # (a + bi)(c + di) = (ac - bd) + (bc + ad)i
    real = (a * c) - (b * d)
    imaginary = (b * c) + (a * d)
    # Return the result as a tuple
    ans = (real, imaginary)
    return ans
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-3:-Complex-multiplication.).*
## Complex Conjugate
Before we discuss any other complex operations, we have to cover the **complex conjugate**. The conjugate is a simple operation: given a complex number $x = a + bi$, its complex conjugate is $\overline{x} = a - bi$.
The conjugate allows us to do some interesting things. The first and probably most important is multiplying a complex number by its conjugate:
$$x \cdot \overline{x} = (a + bi)(a - bi)$$
Notice that the second expression is a difference of squares:
$$(a + bi)(a - bi) = a^2 - (bi)^2 = a^2 - b^2i^2 = a^2 + b^2$$
This means that a complex number multiplied by its conjugate always produces a non-negative real number.
Another property of the conjugate is that it distributes over both complex addition and complex multiplication:
$$\overline{x + y} = \overline{x} + \overline{y} \\
\overline{x \cdot y} = \overline{x} \cdot \overline{y}$$
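For example, for $x = 3 + 4i$:
$$x \cdot \overline{x} = (3 + 4i)(3 - 4i) = 3^2 + 4^2 = 25$$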
### <span style="color:blue">Exercise 4</span>: Complex conjugate.
**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.
**Goal:** Return $\overline{x} = g + hi$, the complex conjugate of $x$, represented as a tuple `(g, h)`.
<br/>
<details>
<summary><b>Need a hint? Click here</b></summary>
A video explanation can be found <a href="https://www.youtube.com/watch?v=BZxZ_eEuJBM">here</a>.
</details>
```
@exercise
def conjugate(x : Complex) -> Complex:
real = x[0]
imaginary = -x[1]
return (real, imaginary)
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-4:-Complex-conjugate.).*
## Complex Division
The next use for the conjugate is complex division. Let's take two complex numbers: $x = a + bi$ and $y = c + di \neq 0$ (not even complex numbers let you divide by $0$). What does $\frac{x}{y}$ mean?
Let's expand $x$ and $y$ into their component forms:
$$\frac{x}{y} = \frac{a + bi}{c + di}$$
Unfortunately, it isn't very clear what it means to divide by a complex number. We need some way to move either all real parts or all imaginary parts into the numerator. And thanks to the conjugate, we can do just that. Using the fact that any number (except $0$) divided by itself equals $1$, and any number multiplied by $1$ equals itself, we get:
$$\frac{x}{y} = \frac{x}{y} \cdot 1 = \frac{x}{y} \cdot \frac{\overline{y}}{\overline{y}} = \frac{x\overline{y}}{y\overline{y}} = \frac{(a + bi)(c - di)}{(c + di)(c - di)} = \frac{(a + bi)(c - di)}{c^2 + d^2}$$
By doing this, we re-wrote our division problem to have a complex multiplication expression in the numerator, and a real number in the denominator. We already know how to multiply complex numbers, and dividing a complex number by a real number is as simple as dividing both parts of the complex number separately:
$$\frac{a + bi}{r} = \frac{a}{r} + \frac{b}{r}i$$
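For example:
$$\frac{1 + 2i}{3 + 4i} = \frac{(1 + 2i)(3 - 4i)}{3^2 + 4^2} = \frac{11 + 2i}{25} = 0.44 + 0.08i$$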
### <span style="color:blue">Exercise 5</span>: Complex division.
**Inputs:**
1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.
2. A complex number $y = c + di \neq 0$, represented as a tuple `(c, d)`.
**Goal:** Return the result of the division $\frac{x}{y} = \frac{a + bi}{c + di} = g + hi$, represented as a tuple `(g, h)`.
<br/>
<details>
<summary><b>Need a hint? Click here</b></summary>
A video explanation can be found <a href="https://www.youtube.com/watch?v=Z8j5RDOibV4">here</a>.
</details>
```
@exercise
def complex_div(x : Complex, y : Complex) -> Complex:
(a,b) = x
(c,d) = y
den = (c*c)+(d*d)
real = ((a*c)+(b*d))/den
imaginary = ((a*(-d))+(c*b))/den
return (real, imaginary)
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-5:-Complex-division.).*
# Geometric Perspective
## The Complex Plane
You may recall that real numbers can be represented geometrically using the [number line](https://en.wikipedia.org/wiki/Number_line) - a line on which each point represents a real number. We can extend this representation to include imaginary and complex numbers, which gives rise to an entirely different number line: the imaginary number line, which only intersects with the real number line at $0$.
A complex number has two components - a real component and an imaginary component. As you no doubt noticed from the exercises, these can be represented by two real numbers - the real component, and the real coefficient of the imaginary component. This allows us to map complex numbers onto a two-dimensional plane - the **complex plane**. The most common mapping is the obvious one: $a + bi$ can be represented by the point $(a, b)$ in the **Cartesian coordinate system**.

This mapping allows us to apply complex arithmetic to geometry, and, more importantly, apply geometric concepts to complex numbers. Many properties of complex numbers become easier to understand when viewed through a geometric lens.
## Modulus
One such property is the **modulus** operator. This operator generalizes the **absolute value** operator on real numbers to the complex plane. Just like the absolute value of a number is its distance from $0$, the modulus of a complex number is its distance from $0 + 0i$. Using the distance formula, if $x = a + bi$, then:
$$|x| = \sqrt{a^2 + b^2}$$
There is also a slightly different, but algebraically equivalent definition:
$$|x| = \sqrt{x \cdot \overline{x}}$$
Like the conjugate, the modulus distributes over multiplication.
$$|x \cdot y| = |x| \cdot |y|$$
Unlike the conjugate, however, the modulus doesn't distribute over addition. Instead, the interaction of the two comes from the triangle inequality:
$$|x + y| \leq |x| + |y|$$
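For example, $|3 + 4i| = \sqrt{3^2 + 4^2} = 5$.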
### <span style="color:blue">Exercise 6</span>: Modulus.
**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.
**Goal:** Return the modulus of this number, $|x|$.
> Python's exponentiation operator is `**`, so $2^3$ is `2 ** 3` in Python.
>
> You will probably need some mathematical functions to solve the next few tasks. They are available in Python's math library. You can find the full list and detailed information in the [official documentation](https://docs.python.org/3/library/math.html).
<details>
<summary><strong>Need a hint? Click here</strong></summary>
In particular, you might be interested in <a href=https://docs.python.org/3/library/math.html#math.sqrt>Python's square root function.</a><br>
A video explanation can be found <a href="https://www.youtube.com/watch?v=FwuPXchH2rA">here</a>.
</details>
```
@exercise
def modulus(x : Complex) -> float:
return math.sqrt(x[0] ** 2 + x[1] ** 2)
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-6:-Modulus.).*
## Imaginary Exponents
The next complex operation we're going to need is exponentiation. Raising an imaginary number to an integer power is a fairly simple task, but raising a number to an imaginary power, or raising an imaginary (or complex) number to a real power isn't quite as simple.
Let's start with raising real numbers to imaginary powers. Specifically, let's start with a rather special real number - Euler's constant, $e$:
$$e^{i\theta} = \cos \theta + i\sin \theta$$
(Here and later in this tutorial $\theta$ is measured in radians.)
Explaining why that happens is somewhat beyond the scope of this tutorial, as it requires some calculus, so we won't do that here. If you are curious, you can see [this video](https://youtu.be/v0YEaeIClKY) for a beautiful intuitive explanation, or [the Wikipedia article](https://en.wikipedia.org/wiki/Euler%27s_formula#Proofs) for a more mathematically rigorous proof.
Here are some examples of this formula in action:
$$e^{i\pi/4} = \frac{1}{\sqrt{2}} + \frac{i}{\sqrt{2}} \\
e^{i\pi/2} = i \\
e^{i\pi} = -1 \\
e^{2i\pi} = 1$$
> One interesting consequence of this is Euler's Identity:
>
> $$e^{i\pi} + 1 = 0$$
>
> While this doesn't have any notable uses, it is still an interesting identity to consider, as it combines 5 fundamental constants of algebra into one expression.
We can also calculate complex powers of $e$ as follows:
$$e^{a + bi} = e^a \cdot e^{bi}$$
Finally, using logarithms to express the base of the exponent as $r = e^{\ln r}$, we can use this to find complex powers of any positive real number.
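Putting these formulas together gives an explicit expression for a positive real base raised to a complex power:
$$r^{a + bi} = e^{(a + bi)\ln r} = e^{a \ln r} \cdot e^{ib \ln r} = r^a \big(\cos(b \ln r) + i \sin(b \ln r)\big)$$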
### <span style="color:blue">Exercise 7</span>: Complex exponents.
**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.
**Goal:** Return the complex number $e^x = e^{a + bi} = g + hi$, represented as a tuple `(g, h)`.
> Euler's constant $e$ is available in the [math library](https://docs.python.org/3/library/math.html#math.e),
> as are [Python's trigonometric functions](https://docs.python.org/3/library/math.html#trigonometric-functions).
```
@exercise
def complex_exp(x : Complex) -> Complex:
(a, b) = x
ex = math.e ** a
real = ex * math.cos(b)
imaginary = ex * math.sin(b)
return (real, imaginary)
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-7:-Complex-exponents.).*
### <span style="color:blue">Exercise 8</span>*: Complex powers of real numbers.
**Inputs:**
1. A non-negative real number $r$.
2. A complex number $x = a + bi$, represented as a tuple `(a, b)`.
**Goal:** Return the complex number $r^x = r^{a + bi} = g + hi$, represented as a tuple `(g, h)`.
> Remember, you can use functions you have defined previously
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
You can use the fact that $r = e^{\ln r}$ to convert exponent bases. Remember though, $\ln r$ is only defined for positive numbers - make sure to check for $r = 0$ separately!
</details>
```
@exercise
def complex_exp_real(r : float, x : Complex) -> Complex:
if (r == 0):
return (0,0)
(a, b) = x
ra = r ** a
l = math.log(r)
real = ra * math.cos(b * l)
imaginary = ra * math.sin(b * l)
return (real, imaginary)
```
<i>Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-8*:-Complex-powers-of-real-numbers.).</i>
## Polar coordinates
Consider the expression $e^{i\theta} = \cos\theta + i\sin\theta$. Notice that if we map this number onto the complex plane, it will land on a **unit circle** around $0 + 0i$. This means that its modulus is always $1$. You can also verify this algebraically: $\cos^2\theta + \sin^2\theta = 1$.
Using this fact we can represent complex numbers using **polar coordinates**. In a polar coordinate system, a point is represented by two numbers: its direction from origin, represented by an angle from the $x$ axis, and how far away it is in that direction.
Another way to think about this is that we're taking a point that is $1$ unit away (which is on the unit circle) in the specified direction, and multiplying it by the desired distance. And to get the point on the unit circle, we can use $e^{i\theta}$.
A complex number of the format $r \cdot e^{i\theta}$ will be represented by a point which is $r$ units away from the origin, in the direction specified by the angle $\theta$.

Sometimes $\theta$ will be referred to as the number's **phase**.
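Comparing $re^{i\theta} = r\cos\theta + i\,r\sin\theta$ with $a + bi$ gives the conversion formulas between the two representations:
$$a = r\cos\theta, \quad b = r\sin\theta, \qquad r = \sqrt{a^2 + b^2}, \quad \theta = \operatorname{atan2}(b, a)$$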
### <span style="color:blue">Exercise 9</span>: Cartesian to polar conversion.
**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.
**Goal:** Return the polar representation of $x = re^{i\theta}$, i.e., the distance from origin $r$ and phase $\theta$ as a tuple `(r, θ)`.
* $r$ should be non-negative: $r \geq 0$
* $\theta$ should be between $-\pi$ and $\pi$: $-\pi < \theta \leq \pi$
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
<a href=https://docs.python.org/3/library/math.html#math.atan2>Python has a separate function</a> for calculating $\theta$ for this purpose.<br>
A video explanation can be found <a href="https://www.youtube.com/watch?v=8RasCV_Lggg">here</a>.
</details>
```
@exercise
def polar_convert(x : Complex) -> Polar:
(a, b) = x
r = math.sqrt(a**2 + b **2)
theta = math.atan2(b, a)
return (r, theta)
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-9:-Cartesian-to-polar-conversion.).*
### <span style="color:blue">Exercise 10</span>: Polar to Cartesian conversion.
**Input:** A complex number $x = re^{i\theta}$, represented in polar form as a tuple `(r, θ)`.
**Goal:** Return the Cartesian representation of $x = a + bi$, represented as a tuple `(a, b)`.
```
@exercise
def cartesian_convert(x : Polar) -> Complex:
(r, theta) = x
real = r * math.cos(theta)
imaginary = r * math.sin(theta)
return (real, imaginary)
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-10:-Polar-to-Cartesian-conversion.).*
### <span style="color:blue">Exercise 11</span>: Polar multiplication.
**Inputs:**
1. A complex number $x = r_{1}e^{i\theta_1}$ represented in polar form as a tuple `(r1, θ1)`.
2. A complex number $y = r_{2}e^{i\theta_2}$ represented in polar form as a tuple `(r2, θ2)`.
**Goal:** Return the result of the multiplication $x \cdot y = z = r_3e^{i\theta_3}$, represented in polar form as a tuple `(r3, θ3)`.
* $r_3$ should be non-negative: $r_3 \geq 0$
* $\theta_3$ should be between $-\pi$ and $\pi$: $-\pi < \theta_3 \leq \pi$
* Try to avoid converting the numbers into Cartesian form.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Remember, a number written in polar form already involves multiplication. What is $r_1e^{i\theta_1} \cdot r_2e^{i\theta_2}$?
</details><details>
<summary><strong>Need another hint? Click here</strong></summary>
Is your θ not coming out correctly? Remember you might have to check your boundaries and adjust it to be in the range requested.
</details>
```
@exercise
def polar_mult(x : Polar, y : Polar) -> Polar:
(r1, theta1) = x
(r2, theta2) = y
r = r1 * r2
a = theta1 + theta2
if (a > math.pi):
a = a - 2.0 * math.pi
elif (a <= -math.pi):
a = a + 2.0 * math.pi
return (r, a)
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-11:-Polar-multiplication.).*
### <span style="color:blue">Exercise 12</span>**: Arbitrary complex exponents.
You now know enough about complex numbers to figure out how to raise a complex number to a complex power.
**Inputs:**
1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.
2. A complex number $y = c + di$, represented as a tuple `(c, d)`.
**Goal:** Return the result of raising $x$ to the power of $y$: $x^y = (a + bi)^{c + di} = z = g + hi$, represented as a tuple `(g, h)`.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Convert $x$ to polar form, and raise the result to the power of $y$.
</details>
```
@exercise
def complex_exp_arbitrary(x : Complex, y : Complex) -> Complex:
(a, b) = x
(c, d) = y
r = math.sqrt(a ** 2 + b ** 2)
theta = math.atan2(b, a)
if (r == 0):
return (0, 0)
l = math.log(r)
exponent = math.exp(l * c - d * theta)
real = exponent * (math.cos(l * d + theta * c ))
imaginary = exponent * (math.sin(l * d + theta * c))
return (real, imaginary)
```
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynb#Exercise-12**:-Arbitrary-complex-exponents.).*
## Conclusion
Congratulations! You should now know enough complex arithmetic to get started with quantum computing. When you are ready, you can move on to the next tutorial in this series, covering [linear algebra](../LinearAlgebra/LinearAlgebra.ipynb).
```
import numpy as np
import tensorflow as tf
import matplotlib
matplotlib.use('Agg', warn=False)
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
```
# Tensorflow Basics
### Basic Definitions
- **tensor**: array of any number of dimensions of primitive values
$$ 3 $$
$$ [1, 2, 3] $$
$$ [[1, 2], [3, 4]] $$
$$ [[[1, 2, 3]], [[7, 8, 9]]] $$
- **computational graph**: series of operations arranged into a graph of nodes
- TensorFlow core programs consist of two discrete sections
- **Building** the computational graph
- **Running** the computational graph
### Types of Nodes
- **constant**: no input, outputs a constant tensor
- **placeholder**: promises to provide a value, parameterizes computational graphs, used to feed in data
- **variable**: trainable variables such as weights and biases for the model, must be provided with an initial value
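For example, the three node types look like this in the TF 1.x API used throughout this notebook (the values are arbitrary):
```
c = tf.constant(3.0, dtype=tf.float32)    # constant: fixed value
p = tf.placeholder(tf.float32)            # placeholder: value fed in at run time
v = tf.Variable([0.5], dtype=tf.float32)  # variable: trainable, needs initialization
```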
### Sessions
- Preface
- Nodes do not directly evaluate to values
- Nodes are evaluated at runtime of the computational graph
- Computational graphs are run in sessions
- Running Computational Graphs in Sessions
- Encapsulates the control and state of the TensorFlow runtime
```
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = tf.constant(1000.0, dtype=tf.float32)
adder_node = a + b + c
sess = tf.Session()
print(sess.run(adder_node, {a: [331, 7000], b: [6, 337]}))
```
# Linear Model Example
- **Placeholder** for input
- **Variable** for weights and bias
- To initialize variables, you must call:
```
init = tf.global_variables_initializer()
sess.run(init)
```
```
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
init = tf.global_variables_initializer()
sess.run(init)
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
```
### Training with Loss Function
- **y** contains optimal output values
- **square_deltas** measures squared error
- **loss** is sum of squared deltas tensor
```
y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
```
### Optimization
- **optimizers**: slowly change each variable to minimize loss function
- **gradient descent optimizer**: modifies each variable according to magnitude of the derivative of the loss with respect to that variable
```
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init) # reset variables to incorrect defaults
feed_dict = {
x: [1, 2, 3, 4],
y: [0, -1, -2, -3]
}
for i in range(1000):
sess.run(train, feed_dict)
print(sess.run([W, b]))
```
### Full Example
```
import tensorflow as tf
# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
sess.run(train, {x: x_train, y: y_train})
# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
```
# Coffea-Casa Benchmark Example 1
```
!pip install pytest ipytest pytest-csv pytest-benchmark
import numpy as np
import pytest
%matplotlib inline
from coffea import hist
import coffea.processor as processor
import awkward as ak
from dask.distributed import Client, LocalCluster
import time
import os
import ipytest
ipytest.config(rewrite_asserts=True, magics=True)
fileset = {'SingleMu' : ["root://eospublic.cern.ch//eos/root-eos/benchmark/Run2012B_SingleMu.root"]}
from dask.distributed import Client, Worker, WorkerPlugin
from typing import List
import os
class DependencyInstaller(WorkerPlugin):
    def __init__(self, dependencies: List[str]):
        self._dependencies = " ".join(f"'{dep}'" for dep in dependencies)
    def setup(self, worker: Worker):
        os.system(f"pip install {self._dependencies}")
dependency_installer = DependencyInstaller([
"pytest-benchmark",
])
client = Client("tls://localhost:8786")
#Uncomment only if we would like to compare the same number of workers
#cluster = CoffeaCasaCluster()
#cluster.scale(10)
#client = Client(cluster)
client.register_worker_plugin(dependency_installer)
# This program plots an event-level variable (in this case, MET, but switching it is as easy as a dict-key change). It also demonstrates an easy use of the book-keeping cutflow tool, to keep track of the number of events processed.
# The processor class bundles our data analysis together while giving us some helpful tools. It also leaves looping and chunks to the framework instead of us.
class Processor(processor.ProcessorABC):
def __init__(self):
# Bins and categories for the histogram are defined here. For format, see https://coffeateam.github.io/coffea/stubs/coffea.hist.hist_tools.Hist.html && https://coffeateam.github.io/coffea/stubs/coffea.hist.hist_tools.Bin.html
dataset_axis = hist.Cat("dataset", "")
MET_axis = hist.Bin("MET", "MET [GeV]", 50, 0, 100)
# The accumulator keeps our data chunks together for histogramming. It also gives us cutflow, which can be used to keep track of data.
self._accumulator = processor.dict_accumulator({
'MET': hist.Hist("Counts", dataset_axis, MET_axis),
'cutflow': processor.defaultdict_accumulator(int)
})
@property
def accumulator(self):
return self._accumulator
def process(self, events):
output = self.accumulator.identity()
# This is where we do our actual analysis. The dataset has columns similar to the TTree's; events.columns can tell you them, or events.[object].columns for deeper depth.
dataset = events.metadata["dataset"]
MET = events.MET.pt
# We can define a new key for cutflow (in this case 'all events'). Then we can put values into it. We need += because it's per-chunk (demonstrated below)
output['cutflow']['all events'] += ak.size(MET)
output['cutflow']['number of chunks'] += 1
# This fills our histogram once our data is collected. The hist key ('MET=') will be defined in the bin in __init__.
output['MET'].fill(dataset=dataset, MET=MET)
return output
def postprocess(self, accumulator):
return accumulator
# Function we want to benchmark; chunk_size changes depending on the benchmark iteration.
def coffea_processor_adlexample1(chunk_size):
output = processor.run_uproot_job(fileset,
treename = 'Events',
processor_instance = Processor(),
executor = processor.dask_executor,
chunksize = chunk_size,
executor_args = {'schema': processor.NanoAODSchema,
'client': client,
'savemetrics': True}
)
return output
@pytest.mark.parametrize("chunk_size", range(100000, 200000, 100000))
def test_coffea_processor_adlexample1(benchmark, chunk_size):
output = benchmark(coffea_processor_adlexample1, chunk_size)
# Custom metrics available with `savemetrics` option
benchmark.extra_info['events_s_thread'] = output[1]['entries'] / output[1]['processtime']
ipytest.run("-qq")
```
## Tuning Model Parameters
In this exercise, you will optimise the parameters for a classification model.
### Prepare the Data
First, import the libraries you will need and prepare the training and test data:
```
// Import Spark SQL and Spark ML libraries
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}
// Load the source data
val csv = spark.read.option("inferSchema","true").option("header", "true").csv("wasb:///data/flights.csv")
// Select features and label
val data = csv.select($"DayofMonth", $"DayOfWeek", $"OriginAirportID", $"DestAirportID", $"DepDelay", ($"ArrDelay" > 15).cast("Int").alias("label"))
// Split the data
val splits = data.randomSplit(Array(0.7, 0.3))
val train = splits(0)
val test = splits(1).withColumnRenamed("label", "trueLabel")
```
### Define the Pipeline
Now define a pipeline that creates a feature vector and trains a classification model.
```
// Define the pipeline
val assembler = new VectorAssembler().setInputCols(Array("DayofMonth", "DayOfWeek", "OriginAirportID", "DestAirportID", "DepDelay")).setOutputCol("features")
val lr = new LogisticRegression().setLabelCol("label").setFeaturesCol("features")
val pipeline = new Pipeline().setStages(Array(assembler, lr))
```
### Tune Parameters
You can tune parameters to find the best model for your data. A simple way to do this is to use **TrainValidationSplit** to evaluate each combination of parameters defined in a **ParameterGrid** against a subset of the training data in order to find the best performing parameters.
```
val paramGrid = new ParamGridBuilder().addGrid(lr.regParam, Array(0.3, 0.1, 0.01)).addGrid(lr.maxIter, Array(10, 5)).addGrid(lr.threshold, Array(0.35, 0.3)).build()
val tvs = new TrainValidationSplit().setEstimator(pipeline).setEvaluator(new BinaryClassificationEvaluator).setEstimatorParamMaps(paramGrid).setTrainRatio(0.8)
val model = tvs.fit(train)
```
### Test the Model
Now you're ready to apply the model to the test data.
```
val prediction = model.transform(test)
val predicted = prediction.select("features", "prediction", "probability", "trueLabel")
predicted.show(100)
```
### Compute Confusion Matrix Metrics
Now you can examine the confusion matrix metrics to judge the performance of the model.
```
val tp = predicted.filter("prediction == 1 AND truelabel == 1").count().toFloat
val fp = predicted.filter("prediction == 1 AND truelabel == 0").count().toFloat
val tn = predicted.filter("prediction == 0 AND truelabel == 0").count().toFloat
val fn = predicted.filter("prediction == 0 AND truelabel == 1").count().toFloat
val metrics = spark.createDataFrame(Seq(
("TP", tp),
("FP", fp),
("TN", tn),
("FN", fn),
("Precision", tp / (tp + fp)),
("Recall", tp / (tp + fn)))).toDF("metric", "value")
metrics.show()
```
### Review the Area Under ROC
You can also assess the accuracy of the model by reviewing the area under ROC metric.
```
val evaluator = new BinaryClassificationEvaluator().setLabelCol("trueLabel").setRawPredictionCol("prediction").setMetricName("areaUnderROC")
val aur = evaluator.evaluate(prediction)
println("AUR = " + (aur))
```
<h1> Time series prediction, end-to-end </h1>
This notebook illustrates several models to find the next value of a time-series:
<ol>
<li> Linear
<li> DNN
<li> CNN
<li> RNN
</ol>
```
# Change these to try this notebook out
BUCKET = "cloud-training-demos-ml"
PROJECT = "qwiklabs-gcp-00-34ffb0f0dc65"
REGION = "us-central1"
SEQ_LEN = 50
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['SEQ_LEN'] = str(SEQ_LEN)
os.environ['TFVERSION'] = "1.13"
```
<h3> Simulate some time-series data </h3>
Essentially a set of sinusoids with random amplitudes and frequencies.
```
import tensorflow as tf
print(tf.__version__)
import numpy as np
import seaborn as sns
def create_time_series():
freq = (np.random.random()*0.5) + 0.1 # 0.1 to 0.6
ampl = np.random.random() + 0.5 # 0.5 to 1.5
noise = [np.random.random()*0.3 for i in range(SEQ_LEN)] # -0.3 to +0.3 uniformly distributed
x = np.sin(np.arange(0,SEQ_LEN) * freq) * ampl + noise
return x
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
for i in range(0, 5):
sns.tsplot( create_time_series(), color=flatui[i%len(flatui)] ); # 5 series
def to_csv(filename, N):
with open(filename, 'w') as ofp:
for lineno in range(0, N):
seq = create_time_series()
line = ",".join(map(str, seq))
ofp.write(line + '\n')
import os
try:
os.makedirs("data/sines/"")
except OSError:
pass
to_csv("data/sines/train-1.csv", 1000) # 1000 sequences
to_csv("data/sines/valid-1.csv", 250)
!head -5 data/sines/*-1.csv
```
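Before training, it helps to see how each simulated sequence becomes a supervised example: the models in this lab predict the final value of each sequence from the values that precede it. The sketch below only illustrates that framing; the real input pipeline (and the TODOs) live in `sinemodel/model.py`.
```
# Illustration only: one simulated sequence as a (features, label) pair.
row = create_time_series()           # SEQ_LEN values
features, label = row[:-1], row[-1]  # inputs: first SEQ_LEN-1 values, target: the last value
print(len(features), label)
```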
<h3> Train model locally </h3>
Make sure the code works as intended.
The `model.py` and `task.py` files containing the model code are in <a href="sinemodel">sinemodel/</a>
**Complete the TODOs in `model.py` before proceeding!**
Once you've completed the TODOs, set `--model` below to the appropriate model (linear, dnn, cnn, rnn, rnn2, or rnnN) and run it locally for a few steps to test the code.
```
%%bash
DATADIR=$(pwd)/data/sines
OUTDIR=$(pwd)/trained/sines
rm -rf $OUTDIR
gcloud ml-engine local train \
--module-name=sinemodel.task \
--package-path=${PWD}/sinemodel \
-- \
--train_data_path="${DATADIR}/train-1.csv" \
--eval_data_path="${DATADIR}/valid-1.csv" \
--output_dir=${OUTDIR} \
--model=linear --train_steps=10 --sequence_length=$SEQ_LEN
```
<h3> Cloud ML Engine </h3>
Now to train on Cloud ML Engine with more data.
```
import shutil
shutil.rmtree(path = "data/sines", ignore_errors = True)
os.makedirs("data/sines/")
for i in range(0,10):
to_csv("data/sines/train-{}.csv".format(i), 1000) # 1000 sequences
to_csv("data/sines/valid-{}.csv".format(i), 250)
%%bash
gsutil -m rm -rf gs://${BUCKET}/sines/*
gsutil -m cp data/sines/*.csv gs://${BUCKET}/sines
%%bash
for MODEL in linear dnn cnn rnn rnn2; do
OUTDIR=gs://${BUCKET}/sinewaves/${MODEL}
JOBNAME=sines_${MODEL}_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=sinemodel.task \
--package-path=${PWD}/sinemodel \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--train_data_path="gs://${BUCKET}/sines/train*.csv" \
--eval_data_path="gs://${BUCKET}/sines/valid*.csv" \
--output_dir=$OUTDIR \
--train_steps=3000 --sequence_length=$SEQ_LEN --model=$MODEL
done
```
## Monitor training with TensorBoard
Use this cell to launch TensorBoard. If TensorBoard appears blank, try refreshing after 5 minutes.
```
from google.datalab.ml import TensorBoard
TensorBoard().start("gs://{}/sinewaves".format(BUCKET))
for pid in TensorBoard.list()["pid"]:
TensorBoard().stop(pid)
print("Stopped TensorBoard with pid {}".format(pid))
```
## Results
Complete the table below with your own results, then compare them to the results in the solution notebook.
| Model | Sequence length | # of steps | Minutes | RMSE |
| --- | ----| --- | --- | --- |
| linear | 50 | 3000 | - | - |
| dnn | 50 | 3000 | - | - |
| cnn | 50 | 3000 | - | - |
| rnn | 50 | 3000 | - | - |
| rnn2 | 50 | 3000 | - | - |
| rnnN | 50 | 3000 | - | - |
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
```
# magics: ensures that any changes to the modules loaded below will be re-loaded automatically
%load_ext autoreload
%autoreload 2
%load_ext line_profiler
# load general packages
import numpy as np
# load modules related to this exercise
from matplotlib.pyplot import spy
from model_zucher import zurcher
import Estimate_MPEC_exante as estimate_MPEC
from Solve_NFXP import solve_NFXP
import estimate_NFXP as estimate_NFXP
```
# Exercise set 4
```
# setup
model = zurcher()
solver = solve_NFXP()
# SIMULATE DATA
N = 500
T = 120
ev, pk = solver.poly(model.bellman, beta=model.beta, output=2)
data = zurcher.sim_data(model,N,T,pk)
samplesize = data.shape[0]
```
#### 1. Run the function mpec.sparsity_pattern.
The function mpec.sparsity_pattern creates sparse matrices of indicators showing where the Jacobian of the constraints and the Hessian of the likelihood function have non-zero elements.
(a) Look at the figures, and discuss what the different elements of the Jacobian of the constraints and the Hessian of the likelihood represent.
```
# Number of parameter to be estimated
Nc = 2
import matplotlib.pyplot as plt
J_pattern, H_pattern = estimate_MPEC.sparsity_pattern(Nc,model.n, len(model.p)+1)
# Figure
fig = plt.figure(figsize=(20,5))# figsize is in inches...
ax = fig.add_subplot(1,2,1)
ax.spy(J_pattern,markersize=5)
ax.set_title(f'Jacobian of constraints')
ax = fig.add_subplot(1,2,2)
ax.spy(H_pattern,markersize=5)
ax.set_title(f'Hessian of likelihood')
plt.show()
```
#### 2. What are the advantages of handling the Jacobian and Hessian as sparse matrices?
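To build intuition for this question, the short sketch below compares the memory footprint of a dense matrix with its sparse (CSR) representation; the banded 500 x 500 matrix is only a stand-in, not the actual Jacobian from the exercise.
```
import numpy as np
from scipy import sparse

n = 500
dense = np.zeros((n, n))
for i in range(n):
    dense[i, max(0, i - 2):min(n, i + 3)] = 1.0   # narrow band of non-zeros

sparse_csr = sparse.csr_matrix(dense)
dense_bytes = dense.nbytes
sparse_bytes = sparse_csr.data.nbytes + sparse_csr.indices.nbytes + sparse_csr.indptr.nbytes
print(f'Dense storage: {dense_bytes} bytes, CSR storage: {sparse_bytes} bytes')
```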
#### 3. Estimate the model using MPEC. In order to estimate the model, you should understand:
<ol type="a">
<li> Estimate_MPEC.estimate </li>
<li> Estimate_MPEC.ll (don't spend too much time on understanding the gradient)</li>
<li> Estimate_MPEC.con_bellman (don't focus too much on computing the Jacobian) </li>
</ol>
Note that the implementation does not use the information that the Hessian is sparse.
#### 4. Fill in the missing pieces in mpec.ll and mpec.con_bellman, and run the code to check that your results are correct
```
import time
theta0 = [11,2]
t0 = time.time()
res_MPEC, pnames, theta_hat_MPEC = estimate_MPEC.estimate(model,data,theta0=theta0, twostep=1)
t1 = time.time()
time_MPEC=t1-t0
# Print the results
print(f'Structural estimation using bus data from Rust (1987)')
print(f'Beta = {model.beta:.4f}')
print(f'n = {model.n}')
print(f'Sample size = {data.shape[0]}\n \n')
print(f'Parameters Estimates s.e. ')
print(f'{pnames[0]} {theta_hat_MPEC[0]:.4f} ')
print(f'{pnames[1]} {theta_hat_MPEC[1]:.4f} \n ')
print(f'Log-likelihood {-res_MPEC.fun*samplesize:.4f}')
print(f'runtime (seconds) {time_MPEC:.4f}')
print(res_MPEC.message)
#%lprun -f estimate_MPEC.ll -f estimate_MPEC.estimate -f estimate_MPEC.con_bellman -f estimate_MPEC.constraint_jac estimate_MPEC.estimate(model,data,theta0=theta0, twostep=1)
```
#### 5. Compare NFXP and MPEC:
```
# Solve by NFXP
t0 = time.time()
nfxp_model, nfxp_results, pnames, theta_hat_NFXP, Avar_NFXP, converged=estimate_NFXP.estimate(model, solver, data, theta0=theta0, twostep=1)
t1 = time.time()
time_NFXP=t1-t0
#compare the results
print(f'Structural estimation using bus data from Rust (1987) \n')
print(f'MPEC')
print(f'Parameters Estimates s.e. ')
print(f'{pnames[0]} {theta_hat_MPEC[0]:.4f} ')
print(f'{pnames[1]} {theta_hat_MPEC[1]:.4f} \n ')
print(f'Log-likelihood {-res_MPEC.fun*samplesize:.2f}')
print(f'runtime (seconds) {time_MPEC:.4f}\n \n')
print(f'NFXP')
print(f'Parameters Estimates s.e. ')
print(f'{pnames[0]} {theta_hat_NFXP[0]:.4f} {np.sqrt(Avar_NFXP[0,0]):.4f} ')
print(f'{pnames[1]} {theta_hat_NFXP[1]:.4f} {np.sqrt(Avar_NFXP[1,1]):.4f} \n ')
print(f'Log-likelihood {-nfxp_results.fun*samplesize:.2f}')
print(f'runtime (seconds) {time_NFXP:.4f}')
```
(a) Compare the runtime of NFXP with that of MPEC, and compare both with the NFXP and MPEC timings from the lecture. According to what you saw in the lectures, the two methods should be comparable in terms of speed.
```
print(f'Beta = {model.beta:.4f}')
print(f'n = {model.n}')
%timeit estimate_NFXP.estimate(model, solver, data, theta0=theta0, twostep=1)
%timeit estimate_MPEC.estimate(model,data,theta0=theta0, twostep=1)
```
(b) Do we use analytical first-order derivatives?
(c) What about second-order derivatives?
(d) What do they do in Su and Judd (2012)?
(e) Why is our implementation inefficient?
#### 6. How did we get our standard errors using NFXP? How would you calculate them using MPEC?
# Machines Manufacturing Capital Budgeting Model (Project 1)
Insert your description of the model here and add any additional sections below:
- [**Setup**](#Setup): Runs any imports and other setup
- [**Inputs**](#Inputs): Defines the inputs for the model
## Setup
Setup for the later calculations are here. The necessary packages are imported.
```
from dataclasses import dataclass
import numpy_financial as npf
```
## Inputs
All of the inputs for the model are defined here. A class is constructed to manage the data, and an instance of the class containing the default inputs is created.
```
@dataclass
class ModelInputs:
n_phones: float = 100000
price_scrap: float = 50000
price_phone: float = 2000
cost_machine_adv: float = 1000000
cogs_phone: float = 250
n_life: int = 10
n_machines: int = 5
d_1: float = 100000
g_d: float = 0.2
max_year: float = 20
interest: float = 0.05
# Inputs for bonus problem
elasticity: float = 100
demand_constant: float = 300000
model_data = ModelInputs()
model_data
```
## Outline
cashflows = revenues - costs
1. Revenues
 - sales of phones
 - number of goods sold * phone price
 - sales of depreciated machines
2. Costs
 - purchases of machines
 - costs of manufacturing the phones
 - costs of advertisement
Notes:
1. no need to replace the machines (no machine purchases after year 5?)
2. the number of goods sold is the lower of demand and production
3. advertising starts after the machine purchases are done (ad costs are incurred every year after year 5?)
4. demand grows by g_d = 20% if there is advertisement
Logic:
for every year:
1. identify whether there are ad costs; if there are, demand increases
2. check whether there are scrapped machines
3. determine how many phones have been sold
```
# determine the number of operating machines and scraped machines for each year
def op_sp_machines(year):
'''
return two lists one contains the number of machines in operation,
the other contains the scraped machines.
'''
op = []
sp = []
if year <= model_data.n_machines: # purchase one machine per year before year 5
op = list(range(year))
elif model_data.n_life >= year > model_data.n_machines: # always have 5 operating machines between year 5 and 10
op = list(range(model_data.n_machines))
elif model_data.n_machines+model_data.n_life >= year > model_data.n_life: # one depreciated machine per year between year 11 and 15
sp = list(range(year-model_data.n_life))
op = list(range(model_data.n_machines-len(sp)))
else: # no operating and scraped machines after year 15
op = []
return op, sp
# determine if there is advertisement
def is_ads(year):
'''
return a Bool value indicating whether advertisement exist
'''
ads = False
if year > model_data.n_machines: # having ads every year after year 5
ads = True
return ads
# determine the number of goods sold each year
def n_goods_sold(production, demand):
'''
return the lower value between each year's production and demand
'''
return production if production<=demand else demand
# calculate the total revenues each year
def total_revenues(n_goods_sold, sp_machines):
'''
return the total revenues, which consists of sales from goods and from scraped machines
'''
sales_from_goods = n_goods_sold * model_data.price_phone
if sp_machines > 0:
sales_from_machines = model_data.cost_machine_adv
else:
sales_from_machines = 0
return sales_from_goods+sales_from_machines
# calculate the total costs per year
def total_costs(n_goods_sold, op_machines, sp_machines, ads_costs):
'''
return the total costs per year.
which consists of ads costs, purchases of machines and phones manufacturing
'''
if op_machines>0 and sp_machines==0: # only need to purchase machines when there are less than 5 operating machines and no scraped machines
machines_costs = model_data.cost_machine_adv
else:
machines_costs = 0
phones_costs = n_goods_sold * model_data.cogs_phone
return machines_costs+phones_costs+ads_costs
# calculate each year's cashflow
def cashflow():
'''
return a list that contains each year's cashflows throughout the project years
'''
cashflows = []
demand = model_data.d_1
for year in range(1, model_data.max_year+1):
op_machines = len(op_sp_machines(year)[0])
sp_machines = len(op_sp_machines(year)[1])
if is_ads(year) == True:
cost_ads = model_data.cost_machine_adv
demand *= (1+model_data.g_d)
else:
cost_ads = 0
production = op_machines * model_data.n_phones
goods_sold = n_goods_sold(production, demand)
revenues = total_revenues(goods_sold, sp_machines)
costs = total_costs(goods_sold, op_machines, sp_machines, cost_ads)
cash_flow = revenues - costs
cashflows.append(cash_flow)
return cashflows
cash_flows = cashflow() # this should ultimately be set to the list containing your cash flow numbers in each year
npv = npf.npv(model_data.interest, cash_flows) # this should ultimately be set to the overally model npv number
cash_flows
npv
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(7, 4))
plt.plot(cash_flows)
plt.show()
```
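One detail worth keeping in mind when interpreting `npv` above: `npf.npv(rate, values)` computes $\text{NPV} = \sum_{t=0}^{T-1} CF_t/(1+r)^t$, i.e. it treats the first element of `cash_flows` as a time-0 flow. If the first cash flow is instead meant to arrive at the end of year 1, one extra period of discounting gives the time-0 value. The cell below only illustrates that convention; it is not a change to the model.
```
# Illustration only: npf.npv treats cash_flows[0] as occurring at time 0.
# If cash_flows[0] is an end-of-year-1 flow, discount one more period:
npv_year0 = npf.npv(model_data.interest, cash_flows) / (1 + model_data.interest)
npv_year0
```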
```
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import sklearn as sk
import pandas as pd
df = pd.read_csv('HolcombRockPeakStageFlow_02025500.csv')
print(df)
peak_flow = df['FLOW CMS']
peak_stage = df['GAUGE M']
action_stage = df['ACTION STAGE']
flood_stage = df['FLOOD STAGE']
moderate_flood = df['MODERATE STAGE']
major_flood = df['MAJOR STAGE']
date = df['DATE']
floodclass = df['CLASS']
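# Count flood events by class and show them as a horizontal bar chart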
floodclass.value_counts().loc[['ACTION', 'FLOOD', 'MODERATE','MAJOR']].plot.barh()
plt.style.use('classic')
plt.xlabel("No. of Flood Events 1935-Present")
plt.ylabel("Flood Class (Action = Lowest, Major = Highest)")
import matplotlib.dates as mdates
plt.style.use('classic')
plt.plot(date,peak_flow)
plt.gcf().autofmt_xdate()
plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=10))
plt.gcf().autofmt_xdate()
plt.xlabel("Date")
plt.ylabel("Flow Rate (cms)")
x = date
y1 = peak_flow
y2 = peak_stage
plt.style.use('classic')
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(x, y1,'r-')
ax2.plot(x, y2,'b-')
plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=10))
plt.gcf().autofmt_xdate()
ax1.set_xlabel('Date')
ax1.set_ylabel('Peak Flow (cms)',color='r')
ax2.set_ylabel('Peak Stage (m)',color='b')
plt.axhline(y=5.4864, color='c', linestyle='--',label='Action Stage 5.4864 m',lw=2)
plt.axhline(y=6.7056, color='k', linestyle='-',label='Flood Stage 6.7056 m',lw=2)
plt.axhline(y=7.3152, color='k', linestyle='--',label='Moderate Flood Stage 7.3152 m',lw=2)
plt.axhline(y=8.5344, color='r', linestyle='-',label='Major Flood Stage 8.5344 m',lw=2)
plt.legend(bbox_to_anchor=(1.1,0.5), loc="center left", borderaxespad=0)
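# Peak flow over time with 25th/50th/75th/95th/99th percentile reference lines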
x = date
y1 = peak_flow
y2 = peak_stage
plt.style.use('classic')
fig, ax1 = plt.subplots()
#ax2 = ax1.twinx()
ax1.plot(x, y1,'r-')
#ax2.plot(x, y2,'b-')
plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=10))
plt.gcf().autofmt_xdate()
ax1.set_xlabel('Date')
ax1.set_ylabel('Peak Flow (cms)',color='r')
#ax2.set_ylabel('Peak Stage (m)',color='b')
plt.axhline(y=np.percentile(peak_flow, 25), color='c', linestyle='--',label='25th Percentile',lw=2)
plt.axhline(y=np.percentile(peak_flow, 50), color='k', linestyle='--',label='50th Percentile',lw=2)
plt.axhline(y=np.percentile(peak_flow, 75), color='b', linestyle='--',label='75th Percentile',lw=2)
plt.axhline(y=np.percentile(peak_flow, 95), color='r', linestyle='--',label='95th Percentile',lw=2)
plt.axhline(y=np.percentile(peak_flow, 99), color='g', linestyle='--',label='99th Percentile',lw=2)
plt.legend(bbox_to_anchor=(1.1,0.5), loc="center left", borderaxespad=0)
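# Peak stage over time with the same percentile reference lines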
x = date
y1 = peak_flow
y2 = peak_stage
plt.style.use('classic')
fig, ax1 = plt.subplots()
#ax2 = ax1.twinx()
ax1.plot(x, y2,'b-')
#ax2.plot(x, y2,'b-')
plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=10))
plt.gcf().autofmt_xdate()
ax1.set_xlabel('Date')
ax1.set_ylabel('Peak Stage (m)',color='b')
#ax2.set_ylabel('Peak Stage (m)',color='b')
plt.axhline(y=np.percentile(peak_stage, 25), color='c', linestyle='--',label='25th Percentile',lw=2)
plt.axhline(y=np.percentile(peak_stage, 50), color='k', linestyle='--',label='50th Percentile',lw=2)
plt.axhline(y=np.percentile(peak_stage, 75), color='b', linestyle='--',label='75th Percentile',lw=2)
plt.axhline(y=np.percentile(peak_stage, 95), color='r', linestyle='--',label='95th Percentile',lw=2)
plt.axhline(y=np.percentile(peak_stage, 99), color='g', linestyle='--',label='99th Percentile',lw=2)
plt.legend(bbox_to_anchor=(1.1,0.5), loc="center left", borderaxespad=0)
```
<a href="https://colab.research.google.com/github/DiploDatos/AnalisisyVisualizacion/blob/master/Entregable_Parte_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Diplomatura en Ciencia de Datos, Aprendizaje Automático y sus Aplicaciones**
**2021 Edition**
---
## Deliverable practical assignment - Part 1
```
import io
import matplotlib
import matplotlib.pyplot as plt
import numpy
import pandas as pd
import seaborn
seaborn.set_context('talk')
```
## Reading the dataset
The details of the following section are explained in notebook 00.
```
url = 'https://cs.famaf.unc.edu.ar/~mteruel/datasets/diplodatos/sysarmy_survey_2020_processed.csv'
df = pd.read_csv(url)
df[:3]
```
# Exercise 1 - Descriptive analysis
Answer the question: **Which programming languages are associated with the best salaries?**
To do so:
1. Select the relevant columns for the analysis.
2. Select the relevant rows for the analysis. This includes removing extreme and erroneous values, but you can also focus the analysis on a sub-population. For example, you may restrict it to people with a salary above 10000 pesos, or to people who work only in "Data Science", but you must justify your choice and reformulate the initial question if necessary.
* Obtain a list of the most popular programming languages. Decide how many, and which ones, you select for the analysis.
* For each of the other columns from the previous point, choose the ranges or values you include in the analysis.
3. Select metrics that help answer the question, and the methods to analyze them. Choose ONE of the following options:
* Compare the salary distributions for each language using visualizations. Since the visualization is the final product, it must be clear and show relevant information.
* Compare descriptive statistics of the salary distribution for each language. Be creative: descriptive statistics let us say things like "the top 10% of salaries are earned, mostly, by programmers who know kotlin!" (where *mostly* is a somewhat misleading term that only means more than 50%). To compare multiple languages, we also recommend using visualizations.
* Compare probabilities. For example: "If you know Python or Java, you have a 30% higher chance of earning above 100K" (a minimal sketch of this kind of computation follows below).
If you consider it necessary, do several iterations. That is, if you find that the distributions of the programming languages you initially selected are not very different, you can redo the analysis using only the programming languages that do differ.
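As an illustration of the probability option, a minimal pandas sketch could compare the chance of earning above some threshold for respondents who mention a given language with that of the overall population; the 100000 threshold and the Python filter below are only examples.
```
# Illustrative sketch: P(NETO > 100K | mentions Python) vs. P(NETO > 100K overall)
knows_python = df.tools_programming_languages.fillna('').str.lower().str.contains('python')
p_high_python = (df[knows_python].salary_monthly_NETO > 100000).mean()
p_high_all = (df.salary_monthly_NETO > 100000).mean()
print(f'P(NETO > 100K | Python) = {p_high_python:.2f}, P(NETO > 100K) = {p_high_all:.2f}')
```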
```
# complete here if you want to include more columns
relevant_columns = ['tools_programming_languages', 'salary_monthly_NETO']
```
### Frequency counts of the programming languages
The column containing information about the programming languages used is `tools_programming_languages`. Its values are strings with the selected languages separated by commas.
```
df.tools_programming_languages[:3]
```
The following code cells split these programming languages and count the frequency with which they appear.
It is not necessary to understand this code in depth, although doing so is a good exercise.
```
# Convert the comma-separated string of languages to a list of string.
# Remove 'ninguno de los anteriores' option, spaces and training commas.
def split_languages(languages_str):
if not isinstance(languages_str, str):
return []
# Remove 'other' option
languages_str = languages_str.lower()\
.replace('ninguno de los anteriores', '')
# Split string into list of items
# Remove spaces and commas for each item
return [lang.strip().replace(',', '')
for lang in languages_str.split()]
# Create a new column with the list of languages
df.loc[:, 'cured_programming_languages'] = df.tools_programming_languages\
.apply(split_languages)
if 'cured_programming_languages' not in relevant_columns:
relevant_columns.append('cured_programming_languages')
# Duplicate each row of df for each programming language
# mentioned in the response.
# We only include in df_lang the columns we are going to analyze later, so we
# don't duplicate innecesary information.
df_lang = df.cured_programming_languages\
.apply(pd.Series).stack()\
.reset_index(level=-1, drop=True).to_frame()\
.join(df[relevant_columns])\
.rename(columns={0: 'programming_language'})
# Horrible programming style! But a lot of data science code can be written with
# as concatenations of functions (pipelines), and there's no elegant way of
# doing that on Python.
df_lang[:5]
```
The column `programming_language` contains each language separately. Note that if a response contained 3 languages, such as `"HTML, Javascript, Python"`, the row has been replicated 3 times. That is why there are three rows with index 1.
```
language_count = df_lang.programming_language.value_counts()\
.reset_index()\
.rename(columns={'index': 'language', 'programming_language': 'frequency'})
language_count[:10]
```
## Filtering the relevant languages
The following code selects only the rows where the value of the `programming_language` column is in the list `interesting_languages`.
```
# Filter out languages that we want to exclude
# Complete here with your selected list.
interesting_languages = ["Python"]
filtered_df_lang = df_lang[df_lang.programming_language.isin(interesting_languages)]
filtered_df_lang[:5]
```
# Exercise 2 - Densities and multiple variables
Answer the general question: **Which tools (practical and theoretical) are useful for exploring the dataset and discovering patterns and associations?**
To do so, consider (as in the previous exercise):
1. Select the relevant columns for the analysis.
2. Select the relevant rows for the analysis. This includes removing extreme and erroneous values, but you can also focus the analysis on sub-populations.
## a) Joint density
Which visual tools and models can you use to study the distribution and behavior of your data?
Choose three numerical variables and 2 categorical variables. Visualize the dataset according to several of the chosen variables. Can you describe the behavior of your data in some way? Which tools would you use? Describe.
## b) Association
* We need to decide whether or not to remove the gross salary column, to make the survey simpler.
Is there a correlation between the gross and the net salary? Which approach and measures would you use? (See the sketch below.)
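One possible starting point, assuming the gross salary column is named `salary_monthly_BRUTO` (check `df.columns` for the exact name):
```
# Assumes the gross salary column is named 'salary_monthly_BRUTO'.
subset = df[['salary_monthly_BRUTO', 'salary_monthly_NETO']].dropna()
print(subset.corr(method='pearson'))
print(subset.corr(method='spearman'))
seaborn.scatterplot(data=subset, x='salary_monthly_BRUTO', y='salary_monthly_NETO')
```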
## c) Conditional density
Study the salary distribution by education level.
Split the population by education level (choose two large subpopulations) and plot, in a comparative way, the histograms of the `'salary_monthly_NETO'` variable for both groups.
Do you consider the two variables to be independent?
What would you analyze in that respect?
Compute measures of central tendency and dispersion for each subpopulation.
## d) Conditional joint density
Choose two numerical variables and one categorical variable.
Study the scatterplot of the two variables, using color to distinguish the categorical variable (hint: `hue` in seaborn); a sketch is shown below.
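A minimal sketch, with placeholder column names (`profile_age`, `profile_gender`) that should be replaced by the variables you actually choose:
```
# Placeholder column names; substitute your own numeric and categorical choices.
seaborn.scatterplot(data=df, x='profile_age', y='salary_monthly_NETO', hue='profile_gender')
```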
```
import os
import torch
import torch.utils.data
import pickle
import numpy as np
import random
import itertools
from tqdm import tqdm
class CustomDataPreprocessorForCNN():
def __init__(self, input_seq_length=5, pred_seq_length=5, datasets=[i for i in range(37)], test_data_sets = [2], dev_ratio_to_test_set = 0.5, forcePreProcess=False, augmentation=False):
'''
Initializer function for the CustomDataSetForCNN class
params:
input_seq_length : input sequence length to be considered
output_seq_length : output sequence length to be predicted
datasets : The indices of the datasets to use
test_data_sets : The indices of the test sets from datasets
dev_ratio_to_test_set : ratio of the validation set size to the test set size
forcePreProcess : Flag to forcefully preprocess the data again from csv files
'''
# List of data directories where raw data resides
self.data_paths = ['./data/train/raw/biwi/biwi_hotel.txt', './data/train/raw/crowds/arxiepiskopi1.txt',
'./data/train/raw/crowds/crowds_zara02.txt', './data/train/raw/crowds/crowds_zara03.txt',
'./data/train/raw/crowds/students001.txt', './data/train/raw/crowds/students003.txt',
'./data/train/raw/stanford/bookstore_0.txt',
'./data/train/raw/stanford/bookstore_1.txt', './data/train/raw/stanford/bookstore_2.txt',
'./data/train/raw/stanford/bookstore_3.txt', './data/train/raw/stanford/coupa_3.txt',
'./data/train/raw/stanford/deathCircle_0.txt', './data/train/raw/stanford/deathCircle_1.txt',
'./data/train/raw/stanford/deathCircle_2.txt', './data/train/raw/stanford/deathCircle_3.txt',
'./data/train/raw/stanford/deathCircle_4.txt', './data/train/raw/stanford/gates_0.txt',
'./data/train/raw/stanford/gates_1.txt', './data/train/raw/stanford/gates_3.txt',
'./data/train/raw/stanford/gates_4.txt', './data/train/raw/stanford/gates_5.txt',
'./data/train/raw/stanford/gates_6.txt', './data/train/raw/stanford/gates_7.txt',
'./data/train/raw/stanford/gates_8.txt', './data/train/raw/stanford/hyang_4.txt',
'./data/train/raw/stanford/hyang_5.txt', './data/train/raw/stanford/hyang_6.txt',
'./data/train/raw/stanford/hyang_7.txt', './data/train/raw/stanford/hyang_9.txt',
'./data/train/raw/stanford/nexus_0.txt', './data/train/raw/stanford/nexus_1.txt',
'./data/train/raw/stanford/nexus_2.txt', './data/train/raw/stanford/nexus_3.txt',
'./data/train/raw/stanford/nexus_4.txt', './data/train/raw/stanford/nexus_7.txt',
'./data/train/raw/stanford/nexus_8.txt', './data/train/raw/stanford/nexus_9.txt']
train_datasets = datasets
for dataset in test_data_sets:
train_datasets.remove(dataset)
self.train_data_paths = [self.data_paths[x] for x in train_datasets]
self.test_data_paths = [self.data_paths[x] for x in test_data_sets]
print("Using the following dataset(s) as test set")
print(self.test_data_paths)
# Number of datasets
self.numDatasets = len(self.data_paths)
# Data directory where the pre-processed pickle file resides
self.data_dir = './data/train/processed'
# Store the arguments
self.input_seq_length = input_seq_length
self.pred_seq_length = pred_seq_length
# Validation arguments
self.dev_ratio = dev_ratio_to_test_set
# Buffer for storing raw data.
self.raw_data_train = []
self.raw_data_test = []
# Buffer for storing processed data.
self.processed_input_output_pairs_train = []
self.processed_input_output_pairs_test = []
# Scale Factor for x and y (computed in self.process())
self.scale_factor_x = None
self.scale_factor_y = None
# Data augmentation flag
self.augmentation = augmentation
# Rotation increment (deg) for data augmentation (only valid if augmentation is True)
self.rot_deg_increment = 120
# How many pedestrian permutations to consider (only valid if augmentation is True)
self.permutations = 4
# Define the path in which the process data would be stored
self.processed_train_data_file = os.path.join(self.data_dir, "trajectories_cnn_train.cpkl")
self.processed_dev_data_file = os.path.join(self.data_dir, "trajectories_cnn_dev.cpkl")
self.processed_test_data_file = os.path.join(self.data_dir, "trajectories_cnn_test.cpkl")
# If the file doesn't exist or forcePreProcess is true
if not(os.path.exists(self.processed_train_data_file)) or not(os.path.exists(self.processed_dev_data_file)) or not(os.path.exists(self.processed_test_data_file)) or forcePreProcess:
print("============ Normalizing raw data (after rotation data augmentation) ============")
print("--> Finding max coordinate values for train data")
x_max_train, x_min_train, y_max_train, y_min_train = self.find_max_coordinates(self.train_data_paths, self.raw_data_train)
print("--> Finding max coordinate values for test data")
x_max_test, x_min_test, y_max_test, y_min_test = self.find_max_coordinates(self.test_data_paths, self.raw_data_test)
x_max_global, y_max_global = max([x_max_train, x_max_test]), max([y_max_train, y_max_test])
x_min_global, y_min_global = min([x_min_train, x_min_test]), min([y_min_train, y_min_test])
self.scale_factor_x = (x_max_global - x_min_global)/(1 + 1)
self.scale_factor_y = (y_max_global - y_min_global)/(1 + 1)
print("--> Normalizing train data")
self.normalize(self.raw_data_train, x_max_global, x_min_global, y_max_global, y_min_global)
print("--> Normalizing test data")
self.normalize(self.raw_data_test, x_max_global, x_min_global, y_max_global, y_min_global)
print("============ Creating pre-processed training data for CNN ============")
self.preprocess(self.raw_data_train, self.processed_input_output_pairs_train, self.processed_train_data_file)
print("============ Creating pre-processed dev & test data for CNN ============")
self.preprocess(self.raw_data_test, self.processed_input_output_pairs_test, self.processed_test_data_file, self.dev_ratio, self.processed_dev_data_file)
def find_max_coordinates(self, data_paths, raw_data_buffer):
if self.augmentation:
print('--> Data Augmentation: Rotation (by ' + str(self.rot_deg_increment) + ' deg incrementally up to 360 deg)')
for path in data_paths:
# Load data from txt file.
txtfile = open(path, 'r')
lines = txtfile.read().splitlines()
data = [line.split() for line in lines]
data = np.transpose(sorted(data, key=lambda line: int(line[0]))).astype(float)
raw_data_buffer.append(data)
if self.augmentation:
# Rotate data by deg_increment deg sequentially for data augmentation (only rotation is considered here)
deg_increment_int = int(self.rot_deg_increment)
for deg in range(deg_increment_int, 360, deg_increment_int):
data_rotated = np.zeros_like(data)
rad = np.radians(deg)
c, s = np.cos(rad), np.sin(rad)
Rot = np.array(((c,-s), (s, c)))
for ii in range(data.shape[1]):
data_rotated[0:2, ii] = data[0:2, ii]
data_rotated[2:, ii] = np.dot(Rot, data[2:, ii])
raw_data_buffer.append(data_rotated)
# Find x_max, x_min, y_max, y_min across all the data in data_paths.
x_max_global, x_min_global, y_max_global, y_min_global = -1000, 1000, -1000, 1000
for data in raw_data_buffer:
x = data[2,:]
x_min, x_max = min(x), max(x)
if x_min < x_min_global:
x_min_global = x_min
if x_max > x_max_global:
x_max_global = x_max
y = data[3,:]
y_min, y_max = min(y), max(y)
if y_min < y_min_global:
y_min_global = y_min
if y_max > y_max_global:
y_max_global = y_max
return x_max_global, x_min_global, y_max_global, y_min_global
def normalize(self, raw_data_buffer, x_max_global, x_min_global, y_max_global, y_min_global):
# Normalize all the data in this buffer to range from -1 to 1.
for data in raw_data_buffer:
x = data[2,:]
x = (1 + 1)*(x - x_min_global)/(x_max_global - x_min_global)
x = x - 1.0
for jj in range(len(x)):
if abs(x[jj]) < 0.0001:
data[2,jj] = 0.0
else:
data[2,jj] = x[jj]
y = data[3,:]
y = (1 + 1)*(y - y_min_global)/(y_max_global - y_min_global)
y = y - 1.0
for jj in range(len(y)):
if abs(y[jj]) < 0.0001:
data[3,jj] = 0.0
else:
data[3,jj] = y[jj]
'''# Sanity check.
# Find x_max, x_min, y_max, y_min in this raw_data_buffer
x_max_buffer, x_min_buffer, y_max_buffer, y_min_buffer = -1000, 1000, -1000, 1000
for data in raw_data_buffer:
x = data[2,:]
x_min, x_max = min(x), max(x)
if x_min < x_min_buffer:
x_min_buffer = x_min
if x_max > x_max_buffer:
x_max_buffer = x_max
y = data[3,:]
y_min, y_max = min(y), max(y)
if y_min < y_min_buffer:
y_min_buffer = y_min
if y_max > y_max_buffer:
y_max_buffer = y_max
print(x_min_buffer, x_max_buffer)
print(y_min_buffer, y_max_buffer)
'''
def preprocess(self, raw_data_buffer, processed_input_output_pairs, processed_data_file, dev_ratio=0., processed_data_file_2=None):
random.seed(1) # Random seed for pedestrian permutation and data shuffling
for data in raw_data_buffer:
# Frame IDs of the frames in the current dataset
frameList = np.unique(data[0, :].astype(int)).tolist()
#print(frameList)
numFrames = len(frameList)
# Frame ID increment for this dataset.
frame_increment = np.min(np.array(frameList[1:-1]) - np.array(frameList[0:-2]))
# For this dataset check which pedestrians exist in each frame.
pedsInFrameList = []
pedsPosInFrameList = []
for ind, frame in enumerate(frameList):
# For this frame check the pedestrian IDs.
pedsInFrame = data[:, data[0, :].astype(int) == frame]
pedsList = pedsInFrame[1, :].astype(int).tolist()
pedsInFrameList.append(pedsList)
# Position information for each pedestrian.
pedsPos = []
for ped in pedsList:
# Extract x and y positions
current_x = pedsInFrame[2, pedsInFrame[1, :].astype(int) == ped][0]
current_y = pedsInFrame[3, pedsInFrame[1, :].astype(int) == ped][0]
pedsPos.extend([current_x, current_y])
if (current_x == 0.0 and current_y == 0.0):
print('[WARNING] There exists a pedestrian at coordinate [0.0, 0.0]')
pedsPosInFrameList.append(pedsPos)
# Go over the frames in this data again to extract data.
ind = 0 # frame index
while ind < len(frameList) - (self.input_seq_length + self.pred_seq_length):
# Check if this sequence contains consecutive frames. Otherwise skip this sequence.
if not frameList[ind + self.input_seq_length + self.pred_seq_length - 1] - frameList[ind] == (self.input_seq_length + self.pred_seq_length - 1)*frame_increment:
ind += 1
continue
# List of pedestrians in this frame.
pedsList = pedsInFrameList[ind]
# Check if same pedestrians exist in the next (input_seq_length + pred_seq_length - 1) frames.
peds_contained = True
for ii in range(self.input_seq_length + self.pred_seq_length):
if pedsInFrameList[ind + ii] != pedsList:
peds_contained = False
if peds_contained:
#print(str(int(self.input_seq_length + self.pred_seq_length)) + ' frames starting from Frame ' + str(int(frameList[ind])) + ' contain pedestrians ' + str(pedsList))
# Initialize numpy arrays for input-output pair
data_input = np.zeros((2*len(pedsList), self.input_seq_length))
data_output = np.zeros((2*len(pedsList), self.pred_seq_length))
for ii in range(self.input_seq_length):
data_input[:, ii] = np.array(pedsPosInFrameList[ind + ii])
for jj in range(self.pred_seq_length):
data_output[:, jj] = np.array(pedsPosInFrameList[ind + self.input_seq_length + jj])
processed_pair = (torch.from_numpy(data_input), torch.from_numpy(data_output))
processed_input_output_pairs.append(processed_pair)
ind += self.input_seq_length + self.pred_seq_length
else:
ind += 1
print('--> Data Size: ' + str(len(processed_input_output_pairs)))
if self.augmentation:
# Perform data augmentation
self.augment_flip(processed_input_output_pairs)
self.augment_permute(processed_input_output_pairs)
else:
print('--> Skipping data augmentation')
# Shuffle data.
print('--> Shuffling all data before saving')
random.shuffle(processed_input_output_pairs)
if dev_ratio != 0.:
# Split data into dev and test sets.
dev_size = int(len(processed_input_output_pairs)*dev_ratio)
processed_dev_set = processed_input_output_pairs[:dev_size]
processed_test_set = processed_input_output_pairs[dev_size:]
print('--> Dumping dev data with size ' + str(len(processed_dev_set)) + ' to pickle file')
f_dev = open(processed_data_file_2, 'wb')
pickle.dump(processed_dev_set, f_dev, protocol=2)
f_dev.close()
print('--> Dumping test data with size ' + str(len(processed_test_set)) + ' to pickle file')
f_test = open(processed_data_file, 'wb')
pickle.dump(processed_test_set, f_test, protocol=2)
f_test.close()
# Clear buffer
raw_data_buffer = []
processed_input_output_pairs = []
else:
assert(processed_data_file_2 == None)
processed_train_set = processed_input_output_pairs
print('--> Dumping train data with size ' + str(len(processed_train_set)) + ' to pickle file')
f_train = open(processed_data_file, 'wb')
pickle.dump(processed_train_set, f_train, protocol=2)
f_train.close()
# Clear buffer
raw_data_buffer = []
processed_input_output_pairs = []
def augment_flip(self, processed_input_output_pairs):
print('--> Data Augmentation: Y Flip')
augmented_input_output_pairs = []
for processed_input_output_pair in tqdm(processed_input_output_pairs):
data_input, data_output = processed_input_output_pair[0].numpy(), processed_input_output_pair[1].numpy()
num_peds = int(data_input.shape[0]/2)
# Flip y
data_input_yflipped = np.zeros_like(data_input)
data_output_yflipped = np.zeros_like(data_output)
for kk in range(num_peds):
data_input_yflipped[2*kk, :] = data_input[2*kk, :]
data_input_yflipped[2*kk+1, :] = -1*data_input[2*kk+1, :]
data_output_yflipped[2*kk, :] = data_output[2*kk, :]
data_output_yflipped[2*kk+1, :] = -1*data_output[2*kk+1, :]
processed_pair_yflipped = (torch.from_numpy(data_input_yflipped), torch.from_numpy(data_output_yflipped))
augmented_input_output_pairs.append(processed_pair_yflipped)
processed_input_output_pairs.extend(augmented_input_output_pairs)
print('--> Augmented Data Size: ' + str(len(processed_input_output_pairs)))
def augment_permute(self, processed_input_output_pairs):
# Specify how many pedestrian permutations to consider per input-output pair
print('--> Data Augmentation: Pedestrian Permutation (' + str(self.permutations) + ' random permutations per input-output pair)')
augmented_input_output_pairs = []
for processed_input_output_pair in tqdm(processed_input_output_pairs):
data_input, data_output = processed_input_output_pair[0].numpy(), processed_input_output_pair[1].numpy()
num_peds = int(data_input.shape[0]/2)
for ii in range(self.permutations):
perm = np.random.permutation(num_peds)
data_input_permuted = np.zeros_like(data_input)
data_output_permuted = np.zeros_like(data_output)
for jj in range(len(perm)):
data_input_permuted[2*jj:2*(jj+1), :] = data_input[2*perm[jj]:2*(perm[jj]+1), :]
data_output_permuted[2*jj:2*(jj+1), :] = data_output[2*perm[jj]:2*(perm[jj]+1), :]
processed_pair_permuted = (torch.from_numpy(data_input_permuted), torch.from_numpy(data_output_permuted))
augmented_input_output_pairs.append(processed_pair_permuted)
processed_input_output_pairs.extend(augmented_input_output_pairs)
print('--> Augmented Data Size: ' + str(len(processed_input_output_pairs)))
processed = CustomDataPreprocessorForCNN(forcePreProcess=True, test_data_sets=[2,3,4], augmentation=True)
processed.scale_factor_x
processed.scale_factor_y
train_file = open(processed.processed_train_data_file, 'rb')
dev_file = open(processed.processed_dev_data_file, 'rb')
test_file = open(processed.processed_test_data_file, 'rb')
processed.processed_train_data_file
train = pickle.load(train_file)
dev = pickle.load(dev_file)
test = pickle.load(test_file)
len(train)
len(dev)
len(test)
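# Thin torch Dataset wrapper that serves the pickled (input, output) pairs created above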
class CustomDatasetForCNN(torch.utils.data.Dataset):
def __init__(self, file_path):
self.file_path = file_path
self.file = open(self.file_path, 'rb')
self.data = pickle.load(self.file)
self.file.close()
def __getitem__(self, index):
item = self.data[index]
return item
def __len__(self):
return len(self.data)
train_set = CustomDatasetForCNN(processed.processed_train_data_file)
train_loader = torch.utils.data.DataLoader(dataset=train_set, batch_size=1, shuffle=True)
x, y = train_set.__getitem__(99)
x
next(iter(train_loader))
x, y = train_set.__getitem__(9)
len(train_loader)
```
# Targeting Direct Marketing with Amazon SageMaker XGBoost
_**Supervised Learning with Gradient Boosted Trees: A Binary Prediction Problem With Unbalanced Classes**_
---
---
## Contents
1. [Background](#Background)
1. [Preparation](#Preparation)
1. [Data](#Data)
1. [Exploration](#Exploration)
1. [Transformation](#Transformation)
1. [Training](#Training)
1. [Hosting](#Hosting)
1. [Evaluation](#Evaluation)
1. [Extensions](#Extensions)
---
## Background
Direct marketing, whether through mail, email, phone, etc., is a common tactic to acquire customers. Because resources and a customer's attention are limited, the goal is to only target the subset of prospects who are likely to engage with a specific offer. Predicting those potential customers based on readily available information like demographics, past interactions, and environmental factors is a common machine learning problem.
This notebook presents an example problem to predict if a customer will enroll for a term deposit at a bank, after one or more phone calls. The steps include:
* Preparing your Amazon SageMaker notebook
* Downloading data from the internet into Amazon SageMaker
* Investigating and transforming the data so that it can be fed to Amazon SageMaker algorithms
* Estimating a model using the Gradient Boosting algorithm
* Evaluating the effectiveness of the model
* Setting the model up to make on-going predictions
---
## Preparation
_This notebook was created and tested on an ml.m4.xlarge notebook instance._
Let's start by specifying:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
```
!conda update pandas -y
import sagemaker
bucket=sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-dm'
# Define IAM role
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
```
Now let's bring in the Python libraries that we'll use throughout the analysis
```
import numpy as np # For matrix operations and numerical processing
import pandas as pd # For munging tabular data
import matplotlib.pyplot as plt # For charts and visualizations
from IPython.display import Image # For displaying images in the notebook
from IPython.display import display # For displaying outputs in the notebook
from time import gmtime, strftime # For labeling SageMaker models, endpoints, etc.
import sys # For writing outputs to notebook
import math # For ceiling function
import json # For parsing hosting outputs
import os # For manipulating filepath names
import sagemaker
import zipfile # Amazon SageMaker's Python SDK provides many helper functions
pd.__version__
```
Make sure the pandas version is 1.2.4 or later. If it is not, restart the kernel before going further.
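If you prefer the notebook to fail fast rather than rely on a manual check, a small guard cell like the following can be added (a sketch, not part of the original notebook; it assumes the `packaging` package is available and that pandas has already been imported as `pd`):

```
# Hypothetical guard cell: stop early if the installed pandas is too old
from packaging import version
assert version.parse(pd.__version__) >= version.parse("1.2.4"), \
    "pandas >= 1.2.4 is required; please upgrade and restart the kernel"
```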
---
## Data
Let's start by downloading the [direct marketing dataset](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip) from the sample data s3 bucket.
\[Moro et al., 2014\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
```
!wget https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip
with zipfile.ZipFile('bank-additional.zip', 'r') as zip_ref:
zip_ref.extractall('.')
```
Now let's read this into a pandas data frame and take a look.
```
data = pd.read_csv('./bank-additional/bank-additional-full.csv')
pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns
pd.set_option('display.max_rows', 20) # Keep the output on one page
data
```
Let's talk about the data. At a high level, we can see:
* We have a little over 40K customer records, and 20 features for each customer
* The features are mixed; some numeric, some categorical
* The data appears to be sorted, at least by `time` and `contact`, maybe more
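These high-level observations are easy to confirm directly (a quick sketch using the `data` frame loaded above):

```
# Roughly 41k rows and 21 columns (20 features plus the target `y`)
print(data.shape)
# A mix of numeric (int64/float64) and categorical (object) columns
print(data.dtypes.value_counts())
```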
_**Specifics on each of the features:**_
*Demographics:*
* `age`: Customer's age (numeric)
* `job`: Type of job (categorical: 'admin.', 'services', ...)
* `marital`: Marital status (categorical: 'married', 'single', ...)
* `education`: Level of education (categorical: 'basic.4y', 'high.school', ...)
*Past customer events:*
* `default`: Has credit in default? (categorical: 'no', 'unknown', ...)
* `housing`: Has housing loan? (categorical: 'no', 'yes', ...)
* `loan`: Has personal loan? (categorical: 'no', 'yes', ...)
*Past direct marketing contacts:*
* `contact`: Contact communication type (categorical: 'cellular', 'telephone', ...)
* `month`: Last contact month of year (categorical: 'may', 'nov', ...)
* `day_of_week`: Last contact day of the week (categorical: 'mon', 'fri', ...)
* `duration`: Last contact duration, in seconds (numeric). Important note: If duration = 0 then `y` = 'no'.
*Campaign information:*
* `campaign`: Number of contacts performed during this campaign and for this client (numeric, includes last contact)
* `pdays`: Number of days that passed by after the client was last contacted from a previous campaign (numeric)
* `previous`: Number of contacts performed before this campaign and for this client (numeric)
* `poutcome`: Outcome of the previous marketing campaign (categorical: 'nonexistent','success', ...)
*External environment factors:*
* `emp.var.rate`: Employment variation rate - quarterly indicator (numeric)
* `cons.price.idx`: Consumer price index - monthly indicator (numeric)
* `cons.conf.idx`: Consumer confidence index - monthly indicator (numeric)
* `euribor3m`: Euribor 3 month rate - daily indicator (numeric)
* `nr.employed`: Number of employees - quarterly indicator (numeric)
*Target variable:*
* `y`: Has the client subscribed a term deposit? (binary: 'yes','no')
### Exploration
Let's start exploring the data. First, let's understand how the features are distributed.
```
# Frequency tables for each categorical feature
for column in data.select_dtypes(include=['object']).columns:
display(pd.crosstab(index=data[column], columns='% observations', normalize='columns'))
# Histograms for each numeric features
display(data.describe())
%matplotlib inline
hist = data.hist(bins=30, sharey=True, figsize=(10, 10))
```
Notice that:
* Almost 90% of the values for our target variable `y` are "no", so most customers did not subscribe to a term deposit.
* Many of the predictive features take on values of "unknown". Some are more common than others. We should think carefully about what causes a value of "unknown" (are these customers non-representative in some way?) and how it should be handled.
  * Even if "unknown" is included as its own distinct category, what does it mean given that, in reality, those observations likely fall within one of the other categories of that feature?
* Many of the predictive features have categories with very few observations in them. If we find a small category to be highly predictive of our target outcome, do we have enough evidence to make a generalization about that?
* Contact timing is particularly skewed. Almost a third in May and less than 1% in December. What does this mean for predicting our target variable next December?
* There are no missing values in our numeric features, or the missing values have already been imputed.
* `pdays` takes a value near 1000 for almost all customers. Likely a placeholder value signifying no previous contact.
* Several numeric features have a very long tail. Do we need to handle these few observations with extremely large values differently?
* Several numeric features (particularly the macroeconomic ones) occur in distinct buckets. Should these be treated as categorical?
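Several of the observations above can be verified with one-liners (again using the same `data` frame):

```
# The target is heavily imbalanced: roughly 9 out of 10 customers did not subscribe
print(data['y'].value_counts(normalize=True))
# `pdays` is 999 (the placeholder) for the vast majority of customers
print((data['pdays'] == 999).mean())
# Contact timing is concentrated in a handful of months
print(data['month'].value_counts(normalize=True))
```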
Next, let's look at how our features relate to the target that we are attempting to predict.
```
for column in data.select_dtypes(include=['object']).columns:
if column != 'y':
display(pd.crosstab(index=data[column], columns=data['y'], normalize='columns'))
for column in data.select_dtypes(exclude=['object']).columns:
print(column)
hist = data[[column, 'y']].hist(by='y', bins=30)
plt.show()
```
Notice that:
* Customers who are "blue-collar" or "married", have "unknown" default status, were contacted by "telephone", and/or were last contacted in "may" make up a substantially lower portion of "yes" than "no" for subscribing.
* Distributions for numeric variables are different across "yes" and "no" subscribing groups, but the relationships may not be straightforward or obvious.
Now let's look at how our features relate to one another.
```
display(data.corr())
pd.plotting.scatter_matrix(data, figsize=(12, 12))
plt.show()
```
Notice that:
* Features vary widely in their relationship with one another. Some with highly negative correlation, others with highly positive correlation.
* Relationships between features are non-linear and discrete in many cases.
### Transformation
Cleaning up data is part of nearly every machine learning project. It arguably presents the biggest risk if done incorrectly and is one of the more subjective aspects in the process. Several common techniques include:
* Handling missing values: Some machine learning algorithms are capable of handling missing values, but most would rather not. Options include:
* Removing observations with missing values: This works well if only a very small fraction of observations have incomplete information.
* Removing features with missing values: This works well if there are a small number of features which have a large number of missing values.
* Imputing missing values: Entire [books](https://www.amazon.com/Flexible-Imputation-Missing-Interdisciplinary-Statistics/dp/1439868247) have been written on this topic, but common choices are replacing the missing value with the mode or mean of that column's non-missing values.
* Converting categorical to numeric: The most common method is one hot encoding, which for each feature maps every distinct value of that column to its own feature which takes a value of 1 when the categorical feature is equal to that value, and 0 otherwise.
* Oddly distributed data: Although for non-linear models like Gradient Boosted Trees, this has very limited implications, parametric models like regression can produce wildly inaccurate estimates when fed highly skewed data. In some cases, simply taking the natural log of the features is sufficient to produce more normally distributed data. In others, bucketing values into discrete ranges is helpful. These buckets can then be treated as categorical variables and included in the model when one hot encoded.
* Handling more complicated data types: Manipulating images, text, or data at varying grains is left for other notebook templates.
Luckily, some of these aspects have already been handled for us, and the algorithm we are showcasing tends to do well at handling sparse or oddly distributed data. Therefore, let's keep pre-processing simple.
```
data['no_previous_contact'] = np.where(data['pdays'] == 999, 1, 0) # Indicator variable to capture when pdays takes a value of 999
data['not_working'] = np.where(np.in1d(data['job'], ['student', 'retired', 'unemployed']), 1, 0) # Indicator for individuals not actively employed
model_data = pd.get_dummies(data) # Convert categorical variables to sets of indicators
```
Another question to ask yourself before building a model is whether certain features will add value in your final use case. For example, if your goal is to deliver the best prediction, then will you have access to that data at the moment of prediction? Knowing it's raining is highly predictive for umbrella sales, but forecasting weather far enough out to plan inventory on umbrellas is probably just as difficult as forecasting umbrella sales without knowledge of the weather. So, including this in your model may give you a false sense of precision.
Following this logic, let's remove the economic features and `duration` from our data as they would need to be forecasted with high precision to use as inputs in future predictions.
Even if we were to use values of the economic indicators from the previous quarter, this value is likely not as relevant for prospects contacted early in the next quarter as those contacted later on.
```
model_data = model_data.drop(['duration', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed'], axis=1)
```
When building a model whose primary goal is to predict a target value on new data, it is important to understand overfitting. Supervised learning models are designed to minimize error between their predictions of the target value and actuals, in the data they are given. This last part is key, as frequently in their quest for greater accuracy, machine learning models bias themselves toward picking up on minor idiosyncrasies within the data they are shown. These idiosyncrasies then don't repeat themselves in subsequent data, meaning those predictions can actually be made less accurate, at the expense of more accurate predictions in the training phase.
The most common way of preventing this is to build models with the concept that a model shouldn't only be judged on its fit to the data it was trained on, but also on "new" data. There are several different ways of operationalizing this, holdout validation, cross-validation, leave-one-out validation, etc. For our purposes, we'll simply randomly split the data into 3 uneven groups. The model will be trained on 70% of data, it will then be evaluated on 20% of data to give us an estimate of the accuracy we hope to have on "new" data, and 10% will be held back as a final testing dataset which will be used later on.
```
train_data, validation_data, test_data = np.split(model_data.sample(frac=1, random_state=1729), [int(0.7 * len(model_data)), int(0.9 * len(model_data))]) # Randomly sort the data then split out first 70%, second 20%, and last 10%
```
Amazon SageMaker's XGBoost container expects data in the libSVM or CSV data format. For this example, we'll stick to CSV. Note that the first column must be the target variable and the CSV should not include headers. Also, notice that although repetitive, it's easiest to do this after the train|validation|test split rather than before. This avoids any misalignment issues due to random reordering.
```
pd.concat([train_data['y_yes'], train_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('train.csv', index=False, header=False)
pd.concat([validation_data['y_yes'], validation_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('validation.csv', index=False, header=False)
```
Now we'll copy the file to S3 for Amazon SageMaker's managed training to pick up.
```
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation/validation.csv')).upload_file('validation.csv')
```
---
## Training
Now we know that most of our features have skewed distributions, some are highly correlated with one another, and some appear to have non-linear relationships with our target variable. Also, for targeting future prospects, good predictive accuracy is preferred to being able to explain why that prospect was targeted. Taken together, these aspects make gradient boosted trees a good candidate algorithm.
There are several intricacies to understanding the algorithm, but at a high level, gradient boosted trees work by combining predictions from many simple models, each of which tries to address the weaknesses of the previous models. By doing this, the collection of simple models can actually outperform large, complex models. Other Amazon SageMaker notebooks elaborate on gradient boosted trees further and how they differ from similar algorithms.
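To make that intuition concrete, here is a minimal, self-contained sketch of the boosting idea on a toy regression problem. It uses scikit-learn's `DecisionTreeRegressor` purely for illustration and is not part of the SageMaker workflow:

```
# Toy gradient boosting: each shallow tree is fit to the residuals of the ensemble so far
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

learning_rate = 0.1
prediction = np.zeros_like(y)
for _ in range(100):
    residuals = y - prediction                      # what the current ensemble still gets wrong
    stump = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * stump.predict(X)  # take a small step toward correcting them

print("training MSE:", np.mean((y - prediction) ** 2))
```

The real `xgboost` implementation layers regularization, second-order gradient information, and sparsity-aware split finding on top of this basic recipe.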
`xgboost` is an extremely popular, open-source package for gradient boosted trees. It is computationally powerful, fully featured, and has been successfully used in many machine learning competitions. Let's start with a simple `xgboost` model, trained using Amazon SageMaker's managed, distributed training framework.
First we'll need to specify the ECR container location for Amazon SageMaker's implementation of XGBoost.
```
container = sagemaker.image_uris.retrieve(region=boto3.Session().region_name, framework='xgboost', version='latest')
```
Then, because we're training with the CSV file format, we'll create `s3_input`s that our training function can use as a pointer to the files in S3, which also specify that the content type is CSV.
```
s3_input_train = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/train'.format(bucket, prefix), content_type='csv')
s3_input_validation = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/validation/'.format(bucket, prefix), content_type='csv')
```
Next, we'll specify the training parameters for the estimator. This includes:
1. The `xgboost` algorithm container
1. The IAM role to use
1. Training instance type and count
1. S3 location for output data
1. Algorithm hyperparameters
And then a `.fit()` function which specifies:
1. S3 location(s) for the input data. In this case we have both a training and a validation set which are passed in.
```
sess = sagemaker.Session()
xgb = sagemaker.estimator.Estimator(container,
role,
instance_count=1,
instance_type='ml.m4.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sess)
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
num_round=100)
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
```
---
## Hosting
Now that we've trained the `xgboost` algorithm on our data, let's deploy a model that's hosted behind a real-time endpoint.
```
xgb_predictor = xgb.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
```
---
## Evaluation
There are many ways to compare the performance of a machine learning model, but let's start by simply comparing actual to predicted values. In this case, we're simply predicting whether the customer subscribed to a term deposit (`1`) or not (`0`), which produces a simple confusion matrix.
First we'll need to determine how we pass data into and receive data from our endpoint. Our data is currently stored as NumPy arrays in the memory of our notebook instance. To send it in an HTTP POST request, we'll serialize it as a CSV string and then decode the resulting CSV.
*Note: For inference with CSV format, SageMaker XGBoost requires that the data does NOT include the target variable.*
```
xgb_predictor.serializer = sagemaker.serializers.CSVSerializer()
```
Now, we'll use a simple function to:
1. Loop over our test dataset
1. Split it into mini-batches of rows
1. Convert those mini-batches to CSV string payloads (notice, we drop the target variable from our dataset first)
1. Retrieve mini-batch predictions by invoking the XGBoost endpoint
1. Collect predictions and convert from the CSV output our model provides into a NumPy array
```
def predict(data, predictor, rows=500 ):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = ''
for array in split_array:
predictions = ','.join([predictions, predictor.predict(array).decode('utf-8')])
return np.fromstring(predictions[1:], sep=',')
predictions = predict(test_data.drop(['y_no', 'y_yes'], axis=1).to_numpy(), xgb_predictor)
```
Now we'll check our confusion matrix to see how well we predicted versus actuals.
```
pd.crosstab(index=test_data['y_yes'], columns=np.round(predictions), rownames=['actuals'], colnames=['predictions'])
```
So, of the ~4000 potential customers, we predicted 136 would subscribe and 94 of them actually did. We also had 389 customers who subscribed that we did not predict would. This is less than desirable, but the model can (and should) be tuned to improve this. Most importantly, note that with minimal effort, our model produced accuracies similar to those published [here](https://core.ac.uk/download/pdf/55631291.pdf).
_Note that because there is some element of randomness in the algorithm's subsample, your results may differ slightly from the text written above._
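If you want summary metrics rather than raw counts, precision and recall can be derived from the same predictions (a small sketch that assumes the `test_data` and `predictions` variables from the cells above):

```
pred_labels = np.round(predictions)
tp = int(((test_data['y_yes'] == 1) & (pred_labels == 1)).sum())
fp = int(((test_data['y_yes'] == 0) & (pred_labels == 1)).sum())
fn = int(((test_data['y_yes'] == 1) & (pred_labels == 0)).sum())
print('precision: {:.3f}, recall: {:.3f}'.format(tp / (tp + fp), tp / (tp + fn)))
```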
---
## Extensions - XGBoost Bring Your Own Model
Amazon SageMaker includes functionality to support a hosted notebook environment, distributed, serverless training, and real-time hosting. We think it works best when all three of these services are used together, but they can also be used independently. Some use cases may only require hosting. Maybe the model was trained prior to Amazon SageMaker existing, in a different service.
This section shows how to use a pre-existing trained XGBoost model with the Amazon SageMaker XGBoost Algorithm container to quickly create a hosted endpoint for that model.
```
%%time
from time import gmtime, strftime
model_file_name = "DEMO-byo-xgboost-model"
key = os.path.join(prefix, model_file_name, "model.tar.gz")
model_name = model_file_name + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
model_data = 's3://{}/{}/output/{}/output/model.tar.gz'.format(bucket, prefix, xgb.latest_training_job.job_name)
sm_client = boto3.client("sagemaker")
print(model_data)
primary_container = {
"Image": container,
"ModelDataUrl": model_data,
}
create_model_response2 = sm_client.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response2["ModelArn"])
from time import gmtime, strftime
endpoint_config_name = "DEMO-XGBoostEndpointConfig-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = sm_client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m4.xlarge",
"InitialInstanceCount": 1,
"InitialVariantWeight": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print("Endpoint Config Arn: " + create_endpoint_config_response["EndpointConfigArn"])
%%time
import time
endpoint_name = "DEMO-XGBoostEndpoint-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = sm_client.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print("Status: " + status)
while status == "Creating":
time.sleep(60)
resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print("Status: " + status)
print("Arn: " + resp["EndpointArn"])
print("Status: " + status)
```
## Validate the model for use
```
import io
import csv
runtime_client = boto3.client("runtime.sagemaker")
data = test_data.drop(['y_no', 'y_yes'], axis=1).iloc[0].to_numpy()
csv_buffer = io.StringIO()
csv_writer = csv.writer(csv_buffer, delimiter=",")
csv_writer.writerow(data)
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType="text/csv", Body=csv_buffer.getvalue().rstrip("\r\n")
)
result = response["Body"].read().decode("ascii")
print("Predicted Class Probabilities: {}.".format(result))
```
### (Optional) Clean-up
If you are done with this notebook, please run the cell below. This will remove the hosted endpoint you created and avoid any charges from a stray instance being left on.
```
xgb_predictor.delete_endpoint(delete_endpoint_config=True)
```
# Sensitivity of sequana_coverage in detecting CNVs
Author: Thomas Cokelaer
Jan 2018
Local execution time: about 10 minutes
# Requirements
- sequana version 0.7.0 was used
- art_illumina
```
%pylab inline
rcParams['figure.figsize'] = (10,7)
```
# Get the reference
There are many ways to download the reference (FN433596). Below we use the sequana_coverage tool but, of course, you can use your own tool, or simply go to http://github.com/sequana/resources/coverage (look for FN433596.fasta.bz2).
```
!sequana_coverage --download-reference FN433596
```
# Simulated FastQ data
Simulation of sequencing data at several depths of coverage (10X to 200X)
```
-l: length of the reads
-f: coverage
-m: mean size of fragments
-s: standard deviation of fragment size
-ss: type of hiseq
```
This takes a few minutes to produce
```
import subprocess
for DP in [200, 100, 80, 60, 40, 20, 10]:
print(DP)
# Creating the simulated data with expected depth of coverage
    cmd = "art_illumina -sam -i FN433596.fa -p -l 100 -ss HS20 -m 500 -s 40 -o paired_dat -f {}"
cmd = cmd.format(DP)
subprocess.call(cmd.split())
# Creating the BAM files (deletes previous ones)
# This command uses bwa and samtools behind the scene.
cmd = "sequana_mapping --reference FN433596.fa --file1 paired_dat1.fq --file2 paired_dat2.fq"
subprocess.call(cmd.split())
# creating the BED file once for all
# Here, we use bioconvert (http://bioconvert.readthedocs.io) that uses bedtools behind the scene.
cmd = "bioconvert FN433596.fa.sorted.bam simulated_{}X.bed -f".format(DP)
subprocess.call(cmd.split())
```
# Impact of the window parameter on the normalised coverage distribution (100X case)
```
from sequana import *
b = GenomeCov("simulated_100X.bed")
c = b.chr_list[0]
```
Let us run the running median / normalisation / zscore computation using several window parameters (e.g. 20001, 80001, ...).
```
c.run(20001, circular=True)
data20000 = c.df['cov'] / c.df['rm']
c.run(10001, circular=True)
data10000 = c.df['cov'] / c.df['rm']
c.run(40001, circular=True)
data40000 = c.df['cov'] / c.df['rm']
c.run(2001, circular=True)
data2000 = c.df['cov'] / c.df['rm']
c.run(80001, circular=True)
data80000 = c.df['cov'] / c.df['rm']
```
The window parameter does not seem to have any impact on the distribution of the normalised coverage, which remains centered around 1 with the same standard deviation.
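As a quick numerical check of that statement (a small sketch reusing the normalised coverage series computed above):

```
for name, series in [("W=2001", data2000), ("W=20001", data20000),
                     ("W=40001", data40000), ("W=80001", data80000)]:
    print(name, "mean=%.3f" % series.mean(), "std=%.3f" % series.std())
```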
```
#_ = hist(data20000, bins=50, alpha=0.5)
_ = hist(data40000, bins=50, alpha=0.5)
_ = hist(data80000, bins=50, alpha=0.5)
_ = hist(data2000, bins=50, alpha=0.5)
xlabel("normalised coverage")
#_ = hist(data20000, bins=50, alpha=0.5)
_ = hist(data40000, bins=50, alpha=0.5)
_ = hist(data80000, bins=50, alpha=0.5)
_ = hist(data2000, bins=50, alpha=0.5)
xlabel("normalised coverage")
semilogy()
```
Note that if we look at the distribution on a log scale (Y axis), the distributions are not Gaussian. This is because the mapped data exhibits a mix of distributions; the central part of the distribution, however, does look Gaussian.
Switching to a log scale on the Y axis and superimposing a normal distribution should convince the reader that this statement is true.
```
_ = hist(data40000, bins=50, alpha=0.5, normed=True, label="based on simulated data (100X)")
xlabel("normalised coverage")
semilogy()
datanorm = [normal()/10+1 for x in range(1000000)]
_ = hist(datanorm, bins=50, alpha=0.5, normed=True, label="theoretical Gaussian distribution")
legend()
```
For lower DOC, the Gaussian assumption does not hold anymore: the distribution is skewed. Events below the mean DOC may be missed, while events above the mean DOC may be over-detected. This means that the thresholds should be adjusted; for instance, instead of the default pair (-4, 4), one could use (-4, 6).
```
from sequana import *
b = GenomeCov("simulated_10X.bed")
c = b.chr_list[0]
c.run(20001, circular=True)
data = c.df["cov"]/c.df['rm']
_ = hist(data, bins=30, alpha=0.5, normed=True, label="based on simulated data (10X)")
xlabel("normalised coverage")
semilogy()
datanorm = [normal()/sqrt(10)+1 for x in range(1000000)]
_ = hist(datanorm, bins=50, alpha=0.5, normed=True, label="theoretical Gaussian distribution")
legend()
ylim([ylim()[0], 10])
```
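The asymmetry can also be quantified directly, for instance via the sample skewness of the normalised coverage computed in the cell above (a sketch that assumes `scipy` is installed):

```
from scipy.stats import skew
# A clearly positive skewness confirms the heavier right-hand tail at 10X
print("skewness of the normalised coverage at 10X:", skew(data.dropna()))
```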
# Impact of DOC on the normalised distribution standard deviation
```
DOC = [4, 6, 8, 10, 20, 40, 60, 80, 100, 200,]
STDs = [2, 2.44, 2.82, 3.17, 4.46, 6.31, 7.76, 8.95, 10.08, 14.27]
CVs = [0.5, 0.41, 0.35, 0.32, 0.22, 0.16, 0.13, 0.11, 0.10, 0.07]
stds = [0.51, 0.41, 0.35, 0.32, 0.225, 0.158, 0.129, 0.111, 0.10, 0.07]
```
To obtain the numbers above, you can use the following function. Note that DOC is the depth of coverage, STDs is the standard deviation of the genome coverage, CVs is the coefficient of variation, and stds is the standard deviation of the normalised genome coverage.
```
def get_metrics(DOC):
b = GenomeCov("simulated_{}X.bed".format(DOC))
c = b.chr_list[0]
c.run(20001, circular=True)
normed = c.df['cov']/c.df['rm']
DOC = c.df['cov'].mean()
STD = c.df['cov'].std()
return DOC, STD, STD/DOC , std(normed)
get_metrics(20)
```
We can see that the standard deviation of the normalised coverage is equal to the coefficient of variation (CV) of the original coverage:
$ \frac{\sigma(coverage)}{DOC(coverage)} = CV(coverage)$
As expected for Poisson-like counts, the CV decreases roughly as $1/\sqrt{DOC}$ (dashed curve in the plot below).
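As a quick numerical sanity check (using the `DOC` and `CVs` lists defined above):

```
import numpy as np
# The measured coefficients of variation closely follow 1/sqrt(DOC)
print(np.round(np.array(CVs) - 1.0/np.sqrt(DOC), 3))
```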
```
plot(DOC, CVs, "o-")
plot(DOC, 1/np.array(DOC)**0.5, "x--")
xlim([0,250])
axvline(10, color="r", ls="--")
```
# Distribution of the running median
The distribution of the running median vector is centered around the mean
of the genome coverage. The standard deviation decreases with increasing W.
```
def get_rm_metrics(DOC, W):
b = GenomeCov("simulated_{}X.bed".format(DOC))
c = b.chr_list[0]
c.run(W, circular=True)
return c.df.copy()
df100 = get_rm_metrics(100, 100)
df1000 = get_rm_metrics(100, 1000)
df10000 = get_rm_metrics(100, 10000)
df100000 = get_rm_metrics(100, 100000)
_ = hist(df100['rm'], normed=True, bins=range(150), alpha=0.5)
_ = hist(df1000['rm'], normed=True, bins=range(150), alpha=0.5)
_ = hist(df10000['rm'], normed=True, bins=range(150), alpha=0.5)
#_ = hist(df100000['rm'], normed=True, bins=range(150), alpha=0.5)
legend(["W=100", "W=1000", "W=10000", "W=100,000"])
xlim([60,140])
```
For very large W, the standard deviation of the running median distribution tends to be small and, more importantly, the distribution becomes discrete.
```
_ = hist(df100000['rm'], bins=range(150))
xlim([80,140])
```
# PyTorch Metric Learning
### Example for the TwoStreamMetricLoss trainer
See the documentation [here](https://kevinmusgrave.github.io/pytorch-metric-learning/)
## Install prereqs
```
!pip install -q pytorch-metric-learning[with-hooks]
!pip install umap-learn
```
## Import the packages
```
%matplotlib inline
from pytorch_metric_learning import losses, miners, samplers, trainers, testers
from pytorch_metric_learning.utils import common_functions
import pytorch_metric_learning.utils.logging_presets as logging_presets
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator
import numpy as np
import torchvision
from torchvision import datasets, transforms
import torch
import torch.nn as nn
from PIL import Image
import logging
import matplotlib.pyplot as plt
import umap
from cycler import cycler
import record_keeper
import pytorch_metric_learning
logging.getLogger().setLevel(logging.INFO)
logging.info("VERSION %s"%pytorch_metric_learning.__version__)
```
## Create two-stream dataset from CIFAR100
```
class CIFAR100TwoStreamDataset(torch.utils.data.Dataset):
def __init__(self, dataset, anchor_transform, posneg_transform):
# split by some thresholds here 80% anchors, 20% for posnegs
lengths = [int(len(dataset)*0.8), int(len(dataset)*0.2)]
self.anchors, self.posnegs = torch.utils.data.random_split(dataset, lengths)
self.anchor_transform = anchor_transform
self.posneg_transform = posneg_transform
def __len__(self):
return len(self.anchors)
def __getitem__(self, index):
anchor, target = self.anchors[index]
if self.anchor_transform is not None:
anchor = self.anchor_transform(anchor)
# now pair this up with an image from the same class in the second stream
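        # A holds the indices (in the full CIFAR100 dataset) of all images with this label;
        # np.in1d keeps only those that fall in the posneg split, one is picked at random,
        # and np.where maps the chosen dataset index back to an index within the posneg subset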
A = np.where( np.array(self.posnegs.dataset.targets)==target )[0]
posneg_idx = np.random.choice(A[np.in1d(A, self.posnegs.indices)])
posneg, target = self.posnegs[np.where(self.posnegs.indices==posneg_idx)[0][0]]
if self.posneg_transform is not None:
posneg = self.posneg_transform(posneg)
return anchor, posneg, target
```
## Simple model definition
```
class MLP(nn.Module):
# layer_sizes[0] is the dimension of the input
# layer_sizes[-1] is the dimension of the output
def __init__(self, layer_sizes, final_relu=False):
super().__init__()
layer_list = []
layer_sizes = [int(x) for x in layer_sizes]
num_layers = len(layer_sizes) - 1
final_relu_layer = num_layers if final_relu else num_layers - 1
for i in range(len(layer_sizes) - 1):
input_size = layer_sizes[i]
curr_size = layer_sizes[i + 1]
if i < final_relu_layer:
layer_list.append(nn.ReLU(inplace=False))
layer_list.append(nn.Linear(input_size, curr_size))
self.net = nn.Sequential(*layer_list)
self.last_linear = self.net[-1]
def forward(self, x):
return self.net(x)
```
## Initialize models, optimizers and image transforms
```
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Set trunk model and replace the softmax layer with an identity function
trunk = torchvision.models.resnet18(pretrained=True)
trunk_output_size = trunk.fc.in_features
trunk.fc = common_functions.Identity()
trunk = torch.nn.DataParallel(trunk.to(device))
# Set embedder model. This takes in the output of the trunk and outputs 128 dimensional embeddings
embedder = torch.nn.DataParallel(MLP([trunk_output_size, 128]).to(device))
# Set optimizers
trunk_optimizer = torch.optim.Adam(trunk.parameters(), lr=0.00004, weight_decay=0.00005)
embedder_optimizer = torch.optim.Adam(embedder.parameters(), lr=0.00004, weight_decay=0.00005)
# Set the image transforms
train_transform = transforms.Compose([transforms.Resize(64),
transforms.RandomResizedCrop(scale=(0.16, 1), ratio=(0.75, 1.33), size=64),
transforms.RandomHorizontalFlip(0.5),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
val_transform = transforms.Compose([transforms.Resize(64),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
```
## Initialize the datasets
```
# Download and create datasets
original_train = datasets.CIFAR100(root="CIFAR100_Dataset", train=True, transform=None, download=True)
original_val = datasets.CIFAR100(root="CIFAR100_Dataset", train=False, transform=None, download=True)
# splits CIFAR100 into two streams
# 20% of the images will be used as a stream for positives and negatives
# the remaining images are used as anchor images
train_dataset = CIFAR100TwoStreamDataset(original_train, anchor_transform=train_transform, posneg_transform=train_transform)
val_dataset = CIFAR100TwoStreamDataset(original_val, anchor_transform=val_transform, posneg_transform=val_transform)
```
## Create the loss, miner, sampler, and package them into dictionaries
```
# Set the loss function
loss = losses.TripletMarginLoss(margin=0.2)
# Set the mining function
miner = miners.TripletMarginMiner(margin=0.2)
# Set the dataloader sampler
sampler = samplers.MPerClassSampler(original_train.classes, m=1, length_before_new_iter=len(train_dataset))
# Set other training parameters
batch_size = 128
num_epochs = 4
# Package the above stuff into dictionaries.
models = {"trunk": trunk, "embedder": embedder}
optimizers = {"trunk_optimizer": trunk_optimizer, "embedder_optimizer": embedder_optimizer}
loss_funcs = {"metric_loss": loss}
mining_funcs = {"tuple_miner": miner}
# Remove logs if you want to train with new parameters
!rm -rf example_logs/ example_saved_models/ example_tensorboard/
```
## Create the training and testing hooks
```
record_keeper, _, _ = logging_presets.get_record_keeper("example_logs", "example_tensorboard")
hooks = logging_presets.get_hook_container(record_keeper)
dataset_dict = {"val": val_dataset}
model_folder = "example_saved_models"
def visualizer_hook(umapper, umap_embeddings, labels, split_name, keyname, *args):
logging.info("UMAP plot for the {} split and label set {}".format(split_name, keyname))
label_set = np.unique(labels)
num_classes = len(label_set)
fig = plt.figure(figsize=(20,15))
plt.gca().set_prop_cycle(cycler("color", [plt.cm.nipy_spectral(i) for i in np.linspace(0, 0.9, num_classes)]))
half = int(umap_embeddings.shape[0] / 2)
anchors = umap_embeddings[:half]
posneg = umap_embeddings[half:]
labels = labels[:half]
for i in range(num_classes):
idx = labels == label_set[i]
plt.plot(posneg[idx, 0], posneg[idx, 1], "s", markersize=1)
plt.plot(anchors[idx, 0], anchors[idx, 1], ".", markersize=1)
plt.show()
# Create the tester
tester = testers.GlobalTwoStreamEmbeddingSpaceTester(end_of_testing_hook = hooks.end_of_testing_hook,
visualizer = umap.UMAP(n_neighbors=50),
visualizer_hook = visualizer_hook,
dataloader_num_workers = 32,
accuracy_calculator=AccuracyCalculator(k="max_bin_count"))
end_of_epoch_hook = hooks.end_of_epoch_hook(tester, dataset_dict, model_folder)
```
## Create the trainer
```
trainer = trainers.TwoStreamMetricLoss(models,
optimizers,
batch_size,
loss_funcs,
mining_funcs,
train_dataset,
sampler=sampler,
dataloader_num_workers=2,
end_of_iteration_hook=hooks.end_of_iteration_hook,
end_of_epoch_hook=end_of_epoch_hook
)
```
## Start Tensorboard
(Turn off adblock and other shields)
```
%load_ext tensorboard
%tensorboard --logdir example_tensorboard
```
## Train the model
```
# In the embeddings plots, the small dots represent the 1st stream, and the larger dots represent the 2nd stream
trainer.train(num_epochs=num_epochs)
```
|
github_jupyter
|
!pip install -q pytorch-metric-learning[with-hooks]
!pip install umap-learn
%matplotlib inline
from pytorch_metric_learning import losses, miners, samplers, trainers, testers
from pytorch_metric_learning.utils import common_functions
import pytorch_metric_learning.utils.logging_presets as logging_presets
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator
import numpy as np
import torchvision
from torchvision import datasets, transforms
import torch
import torch.nn as nn
from PIL import Image
import logging
import matplotlib.pyplot as plt
import umap
from cycler import cycler
import record_keeper
import pytorch_metric_learning
logging.getLogger().setLevel(logging.INFO)
logging.info("VERSION %s"%pytorch_metric_learning.__version__)
class CIFAR100TwoStreamDataset(torch.utils.data.Dataset):
def __init__(self, dataset, anchor_transform, posneg_transform):
# split by some thresholds here 80% anchors, 20% for posnegs
lengths = [int(len(dataset)*0.8), int(len(dataset)*0.2)]
self.anchors, self.posnegs = torch.utils.data.random_split(dataset, lengths)
self.anchor_transform = anchor_transform
self.posneg_transform = posneg_transform
def __len__(self):
return len(self.anchors)
def __getitem__(self, index):
anchor, target = self.anchors[index]
if self.anchor_transform is not None:
anchor = self.anchor_transform(anchor)
# now pair this up with an image from the same class in the second stream
A = np.where( np.array(self.posnegs.dataset.targets)==target )[0]
posneg_idx = np.random.choice(A[np.in1d(A, self.posnegs.indices)])
posneg, target = self.posnegs[np.where(self.posnegs.indices==posneg_idx)[0][0]]
if self.posneg_transform is not None:
posneg = self.posneg_transform(posneg)
return anchor, posneg, target
class MLP(nn.Module):
# layer_sizes[0] is the dimension of the input
# layer_sizes[-1] is the dimension of the output
def __init__(self, layer_sizes, final_relu=False):
super().__init__()
layer_list = []
layer_sizes = [int(x) for x in layer_sizes]
num_layers = len(layer_sizes) - 1
final_relu_layer = num_layers if final_relu else num_layers - 1
for i in range(len(layer_sizes) - 1):
input_size = layer_sizes[i]
curr_size = layer_sizes[i + 1]
if i < final_relu_layer:
layer_list.append(nn.ReLU(inplace=False))
layer_list.append(nn.Linear(input_size, curr_size))
self.net = nn.Sequential(*layer_list)
self.last_linear = self.net[-1]
def forward(self, x):
return self.net(x)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Set trunk model and replace the softmax layer with an identity function
trunk = torchvision.models.resnet18(pretrained=True)
trunk_output_size = trunk.fc.in_features
trunk.fc = common_functions.Identity()
trunk = torch.nn.DataParallel(trunk.to(device))
# Set embedder model. This takes in the output of the trunk and outputs 128 dimensional embeddings
embedder = torch.nn.DataParallel(MLP([trunk_output_size, 128]).to(device))
# Set optimizers
trunk_optimizer = torch.optim.Adam(trunk.parameters(), lr=0.00004, weight_decay=0.00005)
embedder_optimizer = torch.optim.Adam(embedder.parameters(), lr=0.00004, weight_decay=0.00005)
# Set the image transforms
train_transform = transforms.Compose([transforms.Resize(64),
transforms.RandomResizedCrop(scale=(0.16, 1), ratio=(0.75, 1.33), size=64),
transforms.RandomHorizontalFlip(0.5),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
val_transform = transforms.Compose([transforms.Resize(64),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
# Download and create datasets
original_train = datasets.CIFAR100(root="CIFAR100_Dataset", train=True, transform=None, download=True)
original_val = datasets.CIFAR100(root="CIFAR100_Dataset", train=False, transform=None, download=True)
# splits CIFAR100 into two streams
# 20% of the images will be used as a stream for positives and negatives
# the remaining images are used as anchor images
train_dataset = CIFAR100TwoStreamDataset(original_train, anchor_transform=train_transform, posneg_transform=train_transform)
val_dataset = CIFAR100TwoStreamDataset(original_val, anchor_transform=val_transform, posneg_transform=val_transform)
# Set the loss function
loss = losses.TripletMarginLoss(margin=0.2)
# Set the mining function
miner = miners.TripletMarginMiner(margin=0.2)
# Set the dataloader sampler
sampler = samplers.MPerClassSampler(original_train.classes, m=1, length_before_new_iter=len(train_dataset))
# Set other training parameters
batch_size = 128
num_epochs = 4
# Package the above stuff into dictionaries.
models = {"trunk": trunk, "embedder": embedder}
optimizers = {"trunk_optimizer": trunk_optimizer, "embedder_optimizer": embedder_optimizer}
loss_funcs = {"metric_loss": loss}
mining_funcs = {"tuple_miner": miner}
# Remove logs if you want to train with new parameters
!rm -rf example_logs/ example_saved_models/ example_tensorboard/
record_keeper, _, _ = logging_presets.get_record_keeper("example_logs", "example_tensorboard")
hooks = logging_presets.get_hook_container(record_keeper)
dataset_dict = {"val": val_dataset}
model_folder = "example_saved_models"
def visualizer_hook(umapper, umap_embeddings, labels, split_name, keyname, *args):
logging.info("UMAP plot for the {} split and label set {}".format(split_name, keyname))
label_set = np.unique(labels)
num_classes = len(label_set)
fig = plt.figure(figsize=(20,15))
plt.gca().set_prop_cycle(cycler("color", [plt.cm.nipy_spectral(i) for i in np.linspace(0, 0.9, num_classes)]))
half = int(umap_embeddings.shape[0] / 2)
anchors = umap_embeddings[:half]
posneg = umap_embeddings[half:]
labels = labels[:half]
for i in range(num_classes):
idx = labels == label_set[i]
plt.plot(posneg[idx, 0], posneg[idx, 1], "s", markersize=1)
plt.plot(anchors[idx, 0], anchors[idx, 1], ".", markersize=1)
plt.show()
# Create the tester
tester = testers.GlobalTwoStreamEmbeddingSpaceTester(end_of_testing_hook = hooks.end_of_testing_hook,
visualizer = umap.UMAP(n_neighbors=50),
visualizer_hook = visualizer_hook,
dataloader_num_workers = 32,
accuracy_calculator=AccuracyCalculator(k="max_bin_count"))
end_of_epoch_hook = hooks.end_of_epoch_hook(tester, dataset_dict, model_folder)
trainer = trainers.TwoStreamMetricLoss(models,
optimizers,
batch_size,
loss_funcs,
mining_funcs,
train_dataset,
sampler=sampler,
dataloader_num_workers=2,
end_of_iteration_hook=hooks.end_of_iteration_hook,
end_of_epoch_hook=end_of_epoch_hook
)
%load_ext tensorboard
%tensorboard --logdir example_tensorboard
# In the embeddings plots, the small dots represent the 1st stream, and the larger dots represent the 2nd stream
trainer.train(num_epochs=num_epochs)
| 0.879697 | 0.921746 |
# Quick Movie Recommender
See how QuickRecommender can be used for a simple movie recommendation.
In this notebook, I've loaded my custom version of MovieLens, selected a random subset (due to the memory limit), and applied a simple TF-IDF vectorization to the titles, overviews, cast lists and genres of the movies, followed by LSA and normalization. The result is a dense matrix containing all features. This matrix is fed into QuickRecommender, which recommends movies randomly at first but suggests increasingly relevant items as you keep selecting movies you like.
## Import dependencies & movies dataset
```
from quickrecommender import QuickRecommender
import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD as LSA
from sklearn.preprocessing import Normalizer
movie_db = pd.read_csv('output.csv')
movie_db = movie_db.sample(frac=0.6198).reset_index(drop=True)
len(movie_db.index)
```
We'll be continuing with 20000 movies.
```
title_corpus = movie_db['title'].astype(str).values.tolist()
description_corpus = movie_db['desc'].astype(str).values.tolist()
cast_corpus = movie_db['cast'].astype(str).values.tolist()
genres_corpus = movie_db['genres'].astype(str).values.tolist()
keywords_corpus = movie_db['keywords'].astype(str).values.tolist()
```
## Vectorization, dim-reduction and normalization
```
pipe_sm = Pipeline([
('tfidfvectorizer', TfidfVectorizer()),
('lsa', LSA(n_components=16, algorithm='arpack', tol=1e-10, random_state=0)),
('normalizer', Normalizer())])
pipe_lg = Pipeline([
('tfidfvectorizer', TfidfVectorizer()),
('lsa', LSA(n_components=128, algorithm='arpack', tol=1e-10, random_state=0)),
('normalizer', Normalizer())])
X_titles = pipe_lg.fit_transform(title_corpus)
X_desc = pipe_lg.fit_transform(description_corpus)
X_cast = pipe_lg.fit_transform(cast_corpus)
X_keywords = pipe_lg.fit_transform(keywords_corpus)
X_genres = pipe_sm.fit_transform(genres_corpus)
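# Note: the keyword features (X_keywords) are computed above but not included in the concatenation below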
X = Normalizer().fit_transform(np.concatenate((X_titles, X_desc, X_cast, X_genres), axis=1))
```
## Fitting QuickRecommender
Let's start with a 20-nearest-neighbors graph. The more neighbors, the quicker the learning, but possibly the worse the results.
```
qr = QuickRecommender(n_neighbors=20)
qr.fit(X)
```
## Time for some recommendations
The first recommendations are literally random, so you can search the movies first and select your favorites to get more meaningful recommendations on the first try.
```
# MovieDB cast search
query = input("Search cast: ")
["{}: {}".format(i, title_corpus[i]) for i in range(len(title_corpus)) if query in cast_corpus[i]][:20]
# MovieDB search
query = input("Search movies: ")
["{}: {}".format(i, title_corpus[i]) for i in range(len(title_corpus)) if query in title_corpus[i]][:20]
selections = [9964, 12751, 13220, 8715, 8313, 1755, 5147]
for movie_idx in selections:
print("Most similar items to {} are:".format(title_corpus[movie_idx]))
for idx in list(qr.get_nn_graph().neighbors[movie_idx,1:6]):
print(" {}: {}".format(idx, title_corpus[idx]))
my_user = qr.update(selections=selections)
recomms = qr.recommend(my_user, n_recommendations=20)
for movie_idx in list(recomms):
print("{} : {} -- {}".format(movie_idx, title_corpus[movie_idx], genres_corpus[movie_idx]))
my_user = qr.update(my_user, selections=[15247, 17737])
recomms = qr.recommend(my_user, n_recommendations=20)
for movie_idx in list(recomms):
print("{} : {} -- {}".format(movie_idx, title_corpus[movie_idx], genres_corpus[movie_idx]))
```
|
github_jupyter
|
from quickrecommender import QuickRecommender
import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD as LSA
from sklearn.preprocessing import Normalizer
movie_db = pd.read_csv('output.csv')
movie_db = movie_db.sample(frac=0.6198).reset_index(drop=True)
len(movie_db.index)
title_corpus = movie_db['title'].astype(str).values.tolist()
description_corpus = movie_db['desc'].astype(str).values.tolist()
cast_corpus = movie_db['cast'].astype(str).values.tolist()
genres_corpus = movie_db['genres'].astype(str).values.tolist()
keywords_corpus = movie_db['keywords'].astype(str).values.tolist()
pipe_sm = Pipeline([
('tfidfvectorizer', TfidfVectorizer()),
('lsa', LSA(n_components=16, algorithm='arpack', tol=1e-10, random_state=0)),
('normalizer', Normalizer())])
pipe_lg = Pipeline([
('tfidfvectorizer', TfidfVectorizer()),
('lsa', LSA(n_components=128, algorithm='arpack', tol=1e-10, random_state=0)),
('normalizer', Normalizer())])
X_titles = pipe_lg.fit_transform(title_corpus)
X_desc = pipe_lg.fit_transform(description_corpus)
X_cast = pipe_lg.fit_transform(cast_corpus)
X_keywords = pipe_lg.fit_transform(keywords_corpus)
X_genres = pipe_sm.fit_transform(genres_corpus)
X = Normalizer().fit_transform(np.concatenate((X_titles, X_desc, X_cast, X_genres), axis=1))
qr = QuickRecommender(n_neighbors=20)
qr.fit(X)
# MovieDB cast search
query = input("Search cast: ")
["{}: {}".format(i, title_corpus[i]) for i in range(len(title_corpus)) if query in cast_corpus[i]][:20]
# MovieDB search
query = input("Search movies: ")
["{}: {}".format(i, title_corpus[i]) for i in range(len(title_corpus)) if query in title_corpus[i]][:20]
selections = [9964, 12751, 13220, 8715, 8313, 1755, 5147]
for movie_idx in selections:
print("Most similar items to {} are:".format(title_corpus[movie_idx]))
for idx in list(qr.get_nn_graph().neighbors[movie_idx,1:6]):
print(" {}: {}".format(idx, title_corpus[idx]))
my_user = qr.update(selections=selections)
recomms = qr.recommend(my_user, n_recommendations=20)
for movie_idx in list(recomms):
print("{} : {} -- {}".format(movie_idx, title_corpus[movie_idx], genres_corpus[movie_idx]))
my_user = qr.update(my_user, selections=[15247, 17737])
recomms = qr.recommend(my_user, n_recommendations=20)
for movie_idx in list(recomms):
print("{} : {} -- {}".format(movie_idx, title_corpus[movie_idx], genres_corpus[movie_idx]))
| 0.417628 | 0.937038 |
# Functions
Functions are bundled blocks of code. With functions we can avoid repeating code blocks that are used several times. Instead, we define a function once that contains these code blocks, and at other places we only need to (briefly) call the function, without copying the code lines it contains.
### Defining and calling a function
We have already met some functions that Python provides for us. The function we have probably used most often so far is the print function:
```
print("HALLO WELT")
```
P.S. You can find an overview of Python's built-in functions here: https://docs.python.org/3/library/functions.html
If we want to use a function of our own, we first have to define it. Such a function definition has the general syntax:
**def function_name():**
<br>
**code**
```
def multi_print():
print("Hallo Welt!")
print("Hallo Welt!")
```
To execute a function that has been defined, we write: **function_name()**
```
multi_print()
def morgengruss():
print("Guten Morgen!")
print("Danke, gleichfalls!")
morgengruss()
#man muss nur die Funktion aufrufen, und dann wird sie ausgelöst - hier eben print...
```
### Functions with one argument
You can pass an **argument** to a function, i.e. a value that the code inside the function depends on:
**def function_name(argument):**
<br>
**code that works with the specific argument**
```
def multi_print2(name):
print(name)
print(name)
multi_print2("HALLO")
multi_print2("WELT")
```
You can think of such a parameter as a variable that belongs to the function. Avoid naming a function parameter like an already existing variable - risk of confusion!
```
name = "MARS"
def multi_print2(name): #ist die Variable "name" schon vergeben, dann verwendet die Funktion die in der Funktion definierte Variable bzw. Argument.
print(name)
print(name)
multi_print2("HALLO")
multi_print2("WELT")
print(name)
def hallo(name):
print("Hallo " + name)
hallo("edzard")
#mit zwei Argumenten
def hallo(name, gruss): #hier definiere ich die Attribute (Variablen), sie werden mit "," separiert.
print(gruss + " " + name)
hallo("Edzard", "Tschüss") #beim Aufrufen der Funktion gebe ich an, mit welchen Begriffen (Strings!)
def hallo(vorname, name, wochentag):
print("Hallo " + vorname + " " + name + ", einen wunderschönen " + wochentag)
hallo("Edzard", "Schade", "Montag")
```
You can see that the value of the variable _name_ has no influence on the argument _name_ of the function! The variable _name_ outside the function is therefore a different variable than the variable _name_ inside the function.
So be careful: this makes the code hard to follow!
### More functions in Python
You already know the len function for lists, too. :-)
```
print(len(["Hallo", "Welt"]))
```
You can also apply the len function to strings.
```
print(len("Hallo"))
```
## Exercise
Write a function that computes the total price of the products in the shopping cart!
Complete the function list_sum(), which receives a list of prices as its parameter. The function should then print the sum of the numbers in the list.
```
cart_prices = [20, 3.5, 6.49, 8.99, 9.99, 14.98]
def list_sum(l):
# hier kommt dein Code hin
print("Hier kommt dein Code hin")
list_sum(cart_prices)
cart_prices = [20, 3.5, 6.49, 8.99, 9.99, 14.98]
def gesamtpreis_warenkorb(preisliste):
gesamtpreis = 0
for preis in preisliste:
gesamtpreis = gesamtpreis + preis
print("Der Gesamtpreis beträgt " + str(gesamtpreis))
gesamtpreis_warenkorb(cart_prices)
#Als Argument kann ich auch eine Liste verwenden
#=> Beim Aufrufen der Funktion muss ich dann auf eine definierte Liste verweisen
cart_prices = [20, 3.5, 6.49, 8.99, 9.99, 14.98]
def gesamtpreis_warenkorb(preisliste):
gesamtpreis = 0
for preis in preisliste:
gesamtpreis = gesamtpreis + preis
print("Der Gesamtpreis beträgt SFR " + str(gesamtpreis))
gesamtpreis_warenkorb(cart_prices)
sum(cart_prices)
```
## Functions with several arguments
A function may also take several arguments:
**def function_name(argument1, argument2, ...):**
<br>
**code that works with argument1, argument2, ...**
```
def multi_print(name, count):
for i in range(0, count):
print(name)
multi_print("Hallo!", 5)
```
### Functions within functions
Functions can also be nested inside one another:
```
def weitere_funktion():
multi_print("Hallo!", 3)
multi_print("Welt!", 3)
weitere_funktion()
#funktion in funktion definieren
def muti_print():
print("Hallo Mama")
def papi_print():
print("Hallo Papa")
def hallo_familie():
muti_print()
papi_print()
hallo_familie()
begruessung = ["hallo", "ciao", "tschuess", "sali"]
def muti_print(begruessung):
print(begruessung + " " + "Mama")
def papi_print(begruessung):
print(begruessung + " " + "Papa")
def familie_print(begruessung):
muti_print(begruessung)
papi_print(begruessung)
familie_print(begruessung[3])
```
### Returning a value
So far we have used functions to execute a block of code that may depend on arguments. However, functions can also return values using the **return** statement.
```
def return_element(name):
return name
print(return_element("Hi"))
```
Such functions with return can then be treated like variables:
```
def return_with_exclamation(name):
return name + "!"
if return_with_exclamation("Hi") == "Hi!":
print("Right!")
else:
print("Wrong.")
def maximum(a, b):
if a < b:
return b
else:
return a
result = maximum(4, 5)
print(result)
```
## Exercise
In America it is common to give tips. You want to write a function to which you pass the price of the meal and the tip percentage, and which gives you a table that helps you tip enough.
```
def tipp_generator(price, percent):
gesamtpreis = price + price*percent/100
print("Der Preis des Essens beträgt bei" + " " + str(percent) + " " + "Prozent Trinkgeld" + " " + str(gesamtpreis) + " " + "Dollar")
tipp_generator(1000, 10)
# dein code hier
def calcPrice(price, percent):
newPrice = price + (price*percent/100)
return newPrice
def tipp_generator(price):
print("Der Preis des Essens beträgt bei 5% " + str(calcPrice(price, 5)) + " Dollar")
print("Der Preis des Essens beträgt bei 10% " + str(calcPrice(price, 10)) + " Dollar")
print("Der Preis des Essens beträgt bei 15% " + str(calcPrice(price, 15)) + " Dollar")
tipp_generator(1000)
# dein code hier
def print_percentages(price, percent, multiplikator):
    print("Mit " + str(percent*multiplikator*100) + "% ist der Preis: " + str(price*(1+percent*multiplikator)))
# ==> auf Slack ist fertige Lösung
```
# Functions vs. Methods
### Functions
When called, functions stand "on their own", and what they refer to is passed, where needed, as an argument inside the parentheses behind them:
```
liste = [1, 2, 3]
print(liste)
print(len(liste))
```
### Methods
Besides functions, we already know commands that are attached to objects with a dot. A list is such an **object**. Every object has methods that we can make use of. However, we cannot (usually) apply these methods to an object of a different type.
Let's look at some useful methods of the list object :-) (you don't need to memorize them all)
```
# ein Element anhängen
liste.append(4)
print(liste)
# ein Element an einem bestimmten Index entfernen
liste.pop(2)
# wir sehen, dass die Methode nicht die aktualisierte Liste, sondern das entfernte Element liefert
print(liste)
# Ein Element an einer bestimmten Stelle einfügen
# das erste Argument bei insert gibt an, welches Element in die Liste eingefügt wird,
# das zweite Argument bei insert gibt an, an welcher Stelle das Element eingefügt wird;
# beachte, dass der Index des ersten Elements in einer Liste 0 ist!
liste.insert(1, 4)
print(liste)
# ein Element entfernen
liste.remove(4)
print(liste)
# den Index eines Elementes angeben (die erste Stelle, an der es vorkommt)
print(liste.index(3))
print(liste.index(4))
print(liste.count(4))
# mit reverse können wir die Reihenfolge einer Liste umkehren
liste.reverse()
print(liste)
```
# Exercise
When do you need functions and when do you need methods?
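As a minimal illustration of the difference (a sketch, not a complete answer to the exercise): a function such as len() receives the object as an argument, while a method such as append() is called on the object itself with a dot.
```
liste = [1, 2, 3]
print(len(liste))    # len() is a function: the list is passed to it as an argument
liste.append(4)      # append() is a method: it is called on the list object with a dot
print(liste)         # [1, 2, 3, 4]
```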
|
github_jupyter
|
print("HALLO WELT")
def multi_print():
print("Hallo Welt!")
print("Hallo Welt!")
multi_print()
def morgengruss():
print("Guten Morgen!")
print("Danke, gleichfalls!")
morgengruss()
#man muss nur die Funktion aufrufen, und dann wird sie ausgelöst - hier eben print...
def multi_print2(name):
print(name)
print(name)
multi_print2("HALLO")
multi_print2("WELT")
name = "MARS"
def multi_print2(name): #ist die Variable "name" schon vergeben, dann verwendet die Funktion die in der Funktion definierte Variable bzw. Argument.
print(name)
print(name)
multi_print2("HALLO")
multi_print2("WELT")
print(name)
def hallo(name):
print("Hallo " + name)
hallo("edzard")
#mit zwei Argumenten
def hallo(name, gruss): #hier definiere ich die Attribute (Variablen), sie werden mit "," separiert.
print(gruss + " " + name)
hallo("Edzard", "Tschüss") #beim Aufrufen der Funktion gebe ich an, mit welchen Begriffen (Strings!)
def hallo(vorname, name, wochentag):
print("Hallo " + vorname + " " + name + ", einen wunderschönen " + wochentag)
hallo("Edzard", "Schade", "Montag")
print(len(["Hallo", "Welt"]))
print(len("Hallo"))
cart_prices = [20, 3.5, 6.49, 8.99, 9.99, 14.98]
def list_sum(l):
# hier kommt dein Code hin
print("Hier kommt dein Code hin")
list_sum(cart_prices)
cart_prices = [20, 3.5, 6.49, 8.99, 9.99, 14.98]
def gesamtpreis_warenkorb(preisliste):
gesamtpreis = 0
for preis in preisliste:
gesamtpreis = gesamtpreis + preis
print("Der Gesamtpreis beträgt " + str(gesamtpreis))
gesamtpreis_warenkorb(cart_prices)
#Als Argument kann ich auch eine Liste verwenden
#=> Beim Aufrufen der Funktion muss ich dann auf eine definierte Liste verweisen
cart_prices = [20, 3.5, 6.49, 8.99, 9.99, 14.98]
def gesamtpreis_warenkorb(preisliste):
gesamtpreis = 0
for preis in preisliste:
gesamtpreis = gesamtpreis + preis
print("Der Gesamtpreis beträgt SFR " + str(gesamtpreis))
gesamtpreis_warenkorb(cart_prices)
sum(cart_prices)
def multi_print(name, count):
for i in range(0, count):
print(name)
multi_print("Hallo!", 5)
def weitere_funktion():
multi_print("Hallo!", 3)
multi_print("Welt!", 3)
weitere_funktion()
#funktion in funktion definieren
def muti_print():
print("Hallo Mama")
def papi_print():
print("Hallo Papa")
def hallo_familie():
muti_print()
papi_print()
hallo_familie()
begruessung = ["hallo", "ciao", "tschuess", "sali"]
def muti_print(begruessung):
print(begruessung + " " + "Mama")
def papi_print(begruessung):
print(begruessung + " " + "Papa")
def familie_print(begruessung):
muti_print(begruessung)
papi_print(begruessung)
familie_print(begruessung[3])
def return_element(name):
return name
print(return_element("Hi"))
def return_with_exclamation(name):
return name + "!"
if return_with_exclamation("Hi") == "Hi!":
print("Right!")
else:
print("Wrong.")
def maximum(a, b):
if a < b:
return b
else:
return a
result = maximum(4, 5)
print(result)
def tipp_generator(price, percent):
gesamtpreis = price + price*percent/100
print("Der Preis des Essens beträgt bei" + " " + str(percent) + " " + "Prozent Trinkgeld" + " " + str(gesamtpreis) + " " + "Dollar")
tipp_generator(1000, 10)
# dein code hier
def calcPrice(price, percent):
newPrice = price + (price*percent/100)
return newPrice
def tipp_generator(price):
print("Der Preis des Essens beträgt bei 5% " + str(calcPrice(price, 5)) + " Dollar")
print("Der Preis des Essens beträgt bei 10% " + str(calcPrice(price, 10)) + " Dollar")
print("Der Preis des Essens beträgt bei 15% " + str(calcPrice(price, 15)) + " Dollar")
tipp_generator(1000)
# dein code hier
def print_percentages(price, percent, multiplikator):
    print("Mit " + str(percent*multiplikator*100) + "% ist der Preis: " + str(price*(1+percent*multiplikator)))
# ==> auf Slack ist fertige Lösung
liste = [1, 2, 3]
print(liste)
print(len(liste))
# ein Element anhängen
liste.append(4)
print(liste)
# ein Element an einem bestimmten Index entfernen
liste.pop(2)
# wir sehen, dass die Methode nicht die aktualisierte Liste, sondern das entfernte Element liefert
print(liste)
# Ein Element an einer bestimmten Stelle einfügen
# das erste Argument bei insert gibt an, welches Element in die Liste eingefügt wird,
# das zweite Argument bei insert gibt an, an welcher Stelle das Element eingefügt wird;
# beachte, dass der Index des ersten Elements in einer Liste 0 ist!
liste.insert(1, 4)
print(liste)
# ein Element entfernen
liste.remove(4)
print(liste)
# den Index eines Elementes angeben (die erste Stelle, an der es vorkommt)
print(liste.index(3))
print(liste.index(4))
print(liste.count(4))
# mit reverse können wir die Reihenfolge einer Liste umkehren
liste.reverse()
print(liste)
| 0.205615 | 0.908942 |
```
from os import listdir
from numpy import asarray
from numpy import save
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
import matplotlib.pyplot as plt
from matplotlib import image
from PIL import Image
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
newsize=(227,227)
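# Resize every image in the training class folders in place to 227x227 pixels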
folder='image_data/train/0/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/1/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/2/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/3/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/4/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/5/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
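# Load the resized training images into memory and assign each image a class label based on its folder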
folder='image_data/train/0/'
loaded_images=list()
labels=list()
for filename in listdir(folder):
    output=0
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/train/1/'
for filename in listdir(folder):
output=1
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/train/2/'
for filename in listdir(folder):
output=2
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/train/3/'
for filename in listdir(folder):
output=3
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/train/4/'
for filename in listdir(folder):
output=4
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/train/5/'
for filename in listdir(folder):
output=5
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
photos=asarray(loaded_images)
label=asarray(labels)
photos.shape
label.shape
save('train_photos.npy', photos)
save('train_labels.npy', label)
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
newsize=(227,227)
folder='image_data/test/0/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/1/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/2/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/3/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/4/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/5/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/0/'
loaded_images=list()
labels=list()
for filename in listdir(folder):
    output=0
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/test/1/'
for filename in listdir(folder):
output=1
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/test/2/'
for filename in listdir(folder):
output=2
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/test/3/'
for filename in listdir(folder):
output=3
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/test/4/'
for filename in listdir(folder):
output=4
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/test/5/'
for filename in listdir(folder):
output=5
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
photos=asarray(loaded_images)
label=asarray(labels)
save('test_photos.npy', photos)
save('test_labels.npy', label)
len(loaded_images)
```
|
github_jupyter
|
from os import listdir
from numpy import asarray
from numpy import save
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
import matplotlib.pyplot as plt
from matplotlib import image
from PIL import Image
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
newsize=(227,227)
folder='image_data/train/0/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/1/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/2/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/3/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/4/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/5/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/train/0/'
loaded_images=list()
labels=list()
for filename in listdir(folder):
    output=0
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/train/1/'
for filename in listdir(folder):
output=1
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/train/2/'
for filename in listdir(folder):
output=2
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/train/3/'
for filename in listdir(folder):
output=3
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/train/4/'
for filename in listdir(folder):
output=4
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/train/5/'
for filename in listdir(folder):
output=5
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
photos=asarray(loaded_images)
label=asarray(labels)
photos.shape
label.shape
save('train_photos.npy', photos)
save('train_labels.npy', label)
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
newsize=(227,227)
folder='image_data/test/0/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/1/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/2/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/3/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/4/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/5/'
for file in listdir(folder):
im=Image.open(folder+file)
im=im.resize(newsize)
im=im.save(folder+file)
folder='image_data/test/0/'
loaded_images=list()
labels=list()
for filename in listdir(folder):
    output=0
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/test/1/'
for filename in listdir(folder):
output=1
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/test/2/'
for filename in listdir(folder):
output=2
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/test/3/'
for filename in listdir(folder):
output=3
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/test/4/'
for filename in listdir(folder):
output=4
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
folder='image_data/test/5/'
for filename in listdir(folder):
output=5
img_data=image.imread(folder+filename)
loaded_images.append(img_data)
labels.append(output)
photos=asarray(loaded_images)
label=asarray(labels)
save('test_photos.npy', photos)
save('test_labels.npy', label)
len(loaded_images)
| 0.135747 | 0.193757 |
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Azure ML Reinforcement Learning Sample - Cartpole Problem on Compute Instance
Azure ML Reinforcement Learning (Azure ML RL) is a managed service for running reinforcement learning training and simulation. With Azure ML RL, data scientists can start developing RL systems on one machine, and scale to compute clusters with hundreds of nodes if needed.
This example shows how to use Azure ML RL to train a Cartpole playing agent on a compute instance.
### Cartpole problem
Cartpole, also known as [Inverted Pendulum](https://en.wikipedia.org/wiki/Inverted_pendulum), is a pendulum with a center of mass above its pivot point. This configuration is inherently unstable and will easily fall over, but it can be kept balanced by applying appropriate horizontal forces to the pivot point.
<table style="width:50%">
<tr>
<th>
<img src="./images/cartpole.png" alt="Cartpole image" />
</th>
</tr>
<tr>
<th><p>Fig 1. Cartpole problem schematic description (from <a href="https://towardsdatascience.com/cartpole-introduction-to-reinforcement-learning-ed0eb5b58288">towardsdatascience.com</a>).</p></th>
</tr>
</table>
The goal here is to train an agent to keep the cartpole balanced by applying appropriate forces to the pivot point.
See [this video](https://www.youtube.com/watch?v=XiigTGKZfks) for a real-world demonstration of cartpole problem.
### Prerequisite
The user should have completed the Azure Machine Learning Tutorial: [Get started creating your first ML experiment with the Python SDK](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup). You will need to make sure that you have a valid subscription id, a resource group and a workspace. All datastores and datasets you use should be associated with your workspace.
## Set up Development Environment
The following subsections show typical steps to setup your development environment. Setup includes:
* Connecting to a workspace to enable communication between your local machine and remote resources
* Creating an experiment to track all your runs
* Using a Compute Instance as compute target
### Azure ML SDK
Display the Azure ML SDK version.
```
import azureml.core
print("Azure ML SDK Version: ", azureml.core.VERSION)
```
### Get Azure ML workspace
Get a reference to an existing Azure ML workspace.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.location, ws.resource_group, sep = ' | ')
```
### Use Compute Instance as compute target
A compute target is a designated compute resource where you run your training and simulation scripts. This location may be your local machine or a cloud-based compute resource. For more information see [What are compute targets in Azure Machine Learning?](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target)
The code below shows how to use the current compute instance as a compute target. First, some helper functions:
```
import os.path
# Get information about the currently running compute instance (notebook VM), like its name and prefix.
def load_nbvm():
if not os.path.isfile("/mnt/azmnt/.nbvm"):
return None
with open("/mnt/azmnt/.nbvm", 'r') as file:
return {key:value for (key, value) in [line.strip().split('=') for line in file]}
# Get information about the capabilities of an azureml.core.compute.AmlCompute target
# In particular how much RAM + GPU + HDD it has.
def get_compute_size(self, workspace):
for size in self.supported_vmsizes(workspace):
if(size['name'].upper() == self.vm_size):
return size
azureml.core.compute.ComputeTarget.size = get_compute_size
del(get_compute_size)
```
Then we use these helper functions to get a handle to current compute instance.
```
# Load current compute instance info
current_compute_instance = load_nbvm()
print("Current compute instance:", current_compute_instance)
# For this demo, let's use the current compute instance as the compute target, if available
if current_compute_instance:
instance_name = current_compute_instance['instance']
else:
instance_name = next(iter(ws.compute_targets))
compute_target = ws.compute_targets[instance_name]
print("Compute target status:")
print(compute_target.get_status().serialize())
print("Compute target size:")
print(compute_target.size(ws))
```
### Create Azure ML experiment
Create an experiment to track the runs in your workspace.
```
from azureml.core.experiment import Experiment
experiment_name = 'CartPole-v0-CI'
exp = Experiment(workspace=ws, name=experiment_name)
```
## Train Cartpole Agent Using Azure ML RL
To facilitate reinforcement learning, Azure Machine Learning Python SDK provides a high level abstraction, the _ReinforcementLearningEstimator_ class, which allows users to easily construct RL run configurations for the underlying RL framework. Azure ML RL initially supports the [Ray framework](https://ray.io/) and its highly customizable [RLlib](https://ray.readthedocs.io/en/latest/rllib.html#rllib-scalable-reinforcement-learning). In this section we show how to use _ReinforcementLearningEstimator_ and Ray/RLlib framework to train a cartpole playing agent.
### Create reinforcement learning estimator
The code below creates an instance of *ReinforcementLearningEstimator*, `training_estimator`, which then will be used to submit a job to Azure Machine Learning to start the Ray experiment run.
Note that this example is purposely simplified to the minimum. Here is a short description of the parameters we are passing into the constructor:
- `source_directory`, local directory containing your training script(s) and helper modules,
- `entry_script`, path to your entry script relative to the source directory,
- `script_params`, constant parameters to be passed to each run of training script,
- `compute_target`, reference to the compute target in which the trainer and worker(s) jobs will be executed,
- `rl_framework`, the RL framework to be used (currently must be Ray).
We use the `script_params` parameter to pass in general and algorithm-specific parameters to the training script.
```
from azureml.contrib.train.rl import ReinforcementLearningEstimator, Ray
training_algorithm = "PPO"
rl_environment = "CartPole-v0"
script_params = {
# Training algorithm
"--run": training_algorithm,
# Training environment
"--env": rl_environment,
# Algorithm-specific parameters
"--config": '\'{"num_gpus": 0, "num_workers": 1}\'',
# Stop conditions
"--stop": '\'{"episode_reward_mean": 200, "time_total_s": 300}\'',
# Frequency of taking checkpoints
"--checkpoint-freq": 2,
# If a checkpoint should be taken at the end - optional argument with no value
"--checkpoint-at-end": "",
# Log directory
"--local-dir": './logs'
}
training_estimator = ReinforcementLearningEstimator(
# Location of source files
source_directory='files',
# Python script file
entry_script='cartpole_training.py',
# A dictionary of arguments to pass to the training script specified in ``entry_script``
script_params=script_params,
# The Azure ML compute target set up for Ray head nodes
compute_target=compute_target,
# RL framework. Currently must be Ray.
rl_framework=Ray()
)
```
### Training script
As recommended in the RLlib documentation, we use the Ray Tune API to run the training algorithm. All the RLlib built-in trainers are compatible with the Tune API. Here we use `tune.run()` to execute a built-in training algorithm. For convenience, below you can see the part of the entry script where we make this call.
This is the list of parameters we are passing into `tune.run()` via the `script_params` parameter:
- `run_or_experiment`: name of the [built-in algorithm](https://ray.readthedocs.io/en/latest/rllib-algorithms.html#rllib-algorithms), 'PPO' in our example,
- `config`: Algorithm-specific configuration. This includes specifying the environment, `env`, which in our example is the gym **[CartPole-v0](https://gym.openai.com/envs/CartPole-v0/)** environment,
- `stop`: stopping conditions, which could be any of the metrics returned by the trainer. Here we use "mean of episode reward", and "total training time in seconds" as stop conditions, and
- `checkpoint_freq` and `checkpoint_at_end`: Frequency of taking checkpoints (number of training iterations between checkpoints), and if a checkpoint should be taken at the end.
We also specify the `local_dir`, the directory in which the training logs, checkpoints and other training artifacts will be recorded.
See [RLlib Training APIs](https://ray.readthedocs.io/en/latest/rllib-training.html#rllib-training-apis) for more details, and also [Training (tune.run, tune.Experiment)](https://ray.readthedocs.io/en/latest/tune/api_docs/execution.html#training-tune-run-tune-experiment) for the complete list of parameters.
```python
import ray
import ray.tune as tune
if __name__ == "__main__":
# parse arguments ...
    # Initialize Ray
    ray.init(address=args.ray_address)
# Run training task using tune.run
tune.run(
run_or_experiment=args.run,
config=dict(args.config, env=args.env),
stop=args.stop,
checkpoint_freq=args.checkpoint_freq,
checkpoint_at_end=args.checkpoint_at_end,
local_dir=args.local_dir
)
```
### Submit the estimator to start experiment
Now we use the *training_estimator* to submit a run.
```
training_run = exp.submit(training_estimator)
```
### Monitor experiment
Azure ML provides a Jupyter widget to show the real-time status of an experiment run. You can use this widget to monitor the status of the runs.
Note that _ReinforcementLearningEstimator_ creates at least two runs: (a) a parent run, i.e. the run returned above, and (b) a collection of child runs. The number of child runs depends on the configuration of the reinforcement learning estimator. In our simple scenario, configured above, only one child run will be created.
The widget will show a list of the child runs as well. You can click on the link under **Status** to see the details of a child run.
```
from azureml.widgets import RunDetails
RunDetails(training_run).show()
```
### Stop the run
To cancel the run, call `training_run.cancel()`.
```
# Uncomment line below to cancel the run
# training_run.cancel()
```
### Wait for completion
Wait for the run to complete before proceeding.
**Note: The run may take a few minutes to complete.**
```
training_run.wait_for_completion()
```
### Get a handle to the child run
You can obtain a handle to the child run as follows. In our scenario there is only one child run, which we call `child_run_0`.
```
import time
child_run_0 = None
timeout = 30
while timeout > 0 and not child_run_0:
child_runs = list(training_run.get_children())
print('Number of child runs:', len(child_runs))
if len(child_runs) > 0:
child_run_0 = child_runs[0]
break
time.sleep(2) # Wait for 2 seconds
timeout -= 2
print('Child run info:')
print(child_run_0)
```
## Evaluate Trained Agent and See Results
We can evaluate a previously trained policy using the `rollout.py` helper script provided by RLlib (see [Evaluating Trained Policies](https://ray.readthedocs.io/en/latest/rllib-training.html#evaluating-trained-policies) for more details). Here we use an adaptation of this script to reconstruct a policy from a checkpoint taken and saved during training. We took these checkpoints by setting `checkpoint-freq` and `checkpoint-at-end` parameters above.
In this section we show how to get access to these checkpoints data, and then how to use them to evaluate the trained policy.
### Create a dataset of training artifacts
To evaluate a trained policy (a checkpoint) we need to make the checkpoint accessible to the rollout script. All the training artifacts are stored in the workspace's default datastore under the **azureml/<run_id>** directory.
Here we create a file dataset from the stored artifacts, and then use this dataset to feed the data to the rollout estimator.
```
from azureml.core import Dataset
run_id = child_run_0.id # Or set to run id of a completed run (e.g. 'rl-cartpole-v0_1587572312_06e04ace_head')
run_artifacts_path = os.path.join('azureml', run_id)
print("Run artifacts path:", run_artifacts_path)
# Create a file dataset object from the files stored on default datastore
datastore = ws.get_default_datastore()
training_artifacts_ds = Dataset.File.from_files(datastore.path(os.path.join(run_artifacts_path, '**')))
```
To verify, we can print out the number (and paths) of all the files in the dataset, as follows.
```
artifacts_paths = training_artifacts_ds.to_path()
print("Number of files in dataset:", len(artifacts_paths))
# Uncomment line below to print all file paths
#print("Artifacts dataset file paths: ", artifacts_paths)
```
### Evaluate a trained policy
We need to configure another reinforcement learning estimator, `rollout_estimator`, and then use it to submit another run. Note that the entry script for this estimator now points to `cartpole-rollout.py` script.
Also note how we pass the checkpoints dataset to this script using `inputs` parameter of the _ReinforcementLearningEstimator_.
We are using script parameters to pass in the same algorithm and the same environment used during training. We also specify the checkpoint number of the checkpoint we wish to evaluate, `checkpoint-number`, and the number of steps to run the rollout, `steps`.
The checkpoints dataset will be accessible to the rollout script as a mounted folder. The mounted folder and the checkpoint number, passed in via `checkpoint-number`, will be used to create a path to the checkpoint we are going to evaluate. The created checkpoint path then will be passed into RLlib rollout script for evaluation.
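For reference, here is a minimal sketch of how such a checkpoint path could be assembled from the mounted artifacts folder and the checkpoint number. This assumes RLlib's default `checkpoint_<N>/checkpoint-<N>` folder layout; the actual logic lives in `cartpole_rollout.py` and may differ.
```python
import os

def build_checkpoint_path(mounted_artifacts_dir, checkpoint_number):
    # RLlib (by default) stores each checkpoint in a folder named checkpoint_<N>
    # that contains a file named checkpoint-<N>
    checkpoint_folder = 'checkpoint_{}'.format(checkpoint_number)
    checkpoint_file = 'checkpoint-{}'.format(checkpoint_number)
    return os.path.join(mounted_artifacts_dir, checkpoint_folder, checkpoint_file)
```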
Let's find the checkpoints and the last checkpoint number first.
```
# Find checkpoints and last checkpoint number
from os import path
checkpoint_files = [
os.path.basename(file) for file in training_artifacts_ds.to_path() \
if os.path.basename(file).startswith('checkpoint-') and \
not os.path.basename(file).endswith('tune_metadata')
]
checkpoint_numbers = []
for file in checkpoint_files:
checkpoint_numbers.append(int(file.split('-')[1]))
print("Checkpoints:", checkpoint_numbers)
last_checkpoint_number = max(checkpoint_numbers)
print("Last checkpoint number:", last_checkpoint_number)
```
Now let's configure the rollout estimator. Note that we use the last checkpoint for evaluation. The assumption is that the last checkpoint points to our best trained agent. You may change this to any of the checkpoint numbers printed above and observe the effect.
```
script_params = {
# Checkpoint number of the checkpoint from which to roll out
"--checkpoint-number": last_checkpoint_number,
# Training algorithm
"--run": training_algorithm,
# Training environment
"--env": rl_environment,
# Algorithm-specific parameters
"--config": '{}',
# Number of rollout steps
"--steps": 2000,
    # If rendering of the environment should be suppressed
"--no-render": ""
}
rollout_estimator = ReinforcementLearningEstimator(
# Location of source files
source_directory='files',
# Python script file
entry_script='cartpole_rollout.py',
# A dictionary of arguments to pass to the rollout script specified in ``entry_script``
script_params = script_params,
# Data inputs
inputs=[
training_artifacts_ds.as_named_input('artifacts_dataset'),
training_artifacts_ds.as_named_input('artifacts_path').as_mount()],
# The Azure ML compute target
compute_target=compute_target,
# RL framework. Currently must be Ray.
rl_framework=Ray(),
# Additional pip packages to install
pip_packages = ['azureml-dataprep[fuse,pandas]'])
```
Same as before, we use the *rollout_estimator* to submit a run.
```
rollout_run = exp.submit(rollout_estimator)
```
And then, similar to the training section, we can monitor the real-time progress of the rollout run and its child as follows. If you browse the logs of the child run, you can see the evaluation results recorded in the driver_log.txt file. Note that you may need to wait several minutes before these results become available.
```
from azureml.widgets import RunDetails
RunDetails(rollout_run).show()
```
Wait for completion of the rollout run, or you may cancel the run.
```
# Uncomment line below to cancel the run
#rollout_run.cancel()
rollout_run.wait_for_completion()
```
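If you prefer to fetch the evaluation output programmatically instead of browsing the portal, a sketch along the following lines could download the child run's driver log once the run has completed. The `driver_log` file name filter is an assumption; inspect `get_file_names()` to see what your run actually produced.
```
# Sketch: download any driver log files produced by the rollout child run.
import os
child_runs = list(rollout_run.get_children())
if child_runs:
    child = child_runs[0]
    for name in child.get_file_names():
        if 'driver_log' in name:
            child.download_file(name, output_file_path=os.path.basename(name))
            print("Downloaded", name)
```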
## Cleaning up
For your convenience, below you can find code snippets to clean up any resources created as part of this tutorial that you don't wish to retain.
```
# To archive the created experiment:
#exp.archive()
```
## Next
This example showed how to run Azure ML RL (Ray/RLlib framework) on a compute instance. Please see the [Cartpole problem](../cartpole-on-single-compute/cartpole_cc.ipynb)
example, which uses Ray RLlib to train a Cartpole-playing agent on a single-node remote compute.
```
!pygmentize endpoint-one-model.yml
import boto3
sm = boto3.client('sagemaker')
cf = boto3.client('cloudformation')
```
## Create one-model endpoint
```
# Update this with the name of your own training job
training_job = 'tensorflow-training-2020-06-08-07-46-04-367'
job = sm.describe_training_job(TrainingJobName=training_job)
model_data_url = job['ModelArtifacts']['S3ModelArtifacts']
role_arn = job['RoleArn']
# https://github.com/aws/deep-learning-containers/blob/master/available_images.md
container_image = '763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference:2.1.0-cpu-py36-ubuntu18.04'
import time
timestamp = time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
stack_name='endpoint-one-model-'+timestamp
print(stack_name)
with open('endpoint-one-model.yml', 'r') as f:
response = cf.create_stack(StackName=stack_name,
TemplateBody=f.read(),
Parameters=[
{"ParameterKey":"ModelName", "ParameterValue":training_job+'-'+timestamp},
{"ParameterKey":"ContainerImage","ParameterValue":container_image},
{"ParameterKey":"ModelDataUrl", "ParameterValue":model_data_url},
{"ParameterKey":"RoleArn", "ParameterValue":role_arn} ])
print(response)
waiter = cf.get_waiter('stack_create_complete')
waiter.wait(StackName=stack_name)
response = cf.describe_stack_events(StackName=stack_name)
for e in response['StackEvents']:
print('%s %s' % (e['ResourceType'], e['ResourceStatus']))
response = cf.describe_stacks(StackName=stack_name)
print(response['Stacks'][0]['StackStatus'])
for o in response['Stacks'][0]['Outputs']:
if o['OutputKey']=='EndpointName':
endpoint_name = o['OutputValue']
print(endpoint_name)
```
## Apply change set to update instance count
```
response = cf.create_change_set(
StackName=stack_name,
ChangeSetName='add-instance',
UsePreviousTemplate=True,
Parameters=[
{"ParameterKey":"InstanceCount", "ParameterValue": "2"},
{"ParameterKey":"ModelName", "UsePreviousValue": True},
{"ParameterKey":"ContainerImage","UsePreviousValue": True},
{"ParameterKey":"ModelDataUrl", "UsePreviousValue": True},
{"ParameterKey":"RoleArn", "UsePreviousValue": True}
]
)
response
waiter = cf.get_waiter('change_set_create_complete')
waiter.wait(
StackName=stack_name,
ChangeSetName='add-instance'
)
response = cf.describe_change_set(
StackName=stack_name,
ChangeSetName='add-instance'
)
response['Changes']
response = cf.execute_change_set(
StackName=stack_name,
ChangeSetName='add-instance'
)
response
response = cf.describe_stacks(StackName=stack_name)
print(response['Stacks'][0]['StackStatus'])
response = cf.describe_stack_events(StackName=stack_name)
for e in response['StackEvents']:
print('%s %s' % (e['ResourceType'], e['ResourceStatus']))
waiter = cf.get_waiter('stack_update_complete')
waiter.wait(StackName=stack_name)
response = sm.describe_endpoint(EndpointName=endpoint_name)
response['ProductionVariants'][0]['CurrentInstanceCount']
```
## Apply change set to add second production variant to endpoint
```
!pygmentize endpoint-two-models.yml
# Update this with the name of your own second training job
training_job_2 = 'tensorflow-training-2020-06-08-07-32-18-734'
job_2 = sm.describe_training_job(TrainingJobName=training_job_2)
model_data_url_2 = job_2['ModelArtifacts']['S3ModelArtifacts']
with open('endpoint-two-models.yml', 'r') as f:
response = cf.create_change_set(
StackName=stack_name,
ChangeSetName='add-model',
TemplateBody=f.read(),
Parameters=[
{"ParameterKey":"ModelName", "UsePreviousValue": True},
{"ParameterKey":"ModelDataUrl", "UsePreviousValue": True},
{"ParameterKey":"ContainerImage", "UsePreviousValue": True},
{"ParameterKey":"RoleArn", "UsePreviousValue": True},
{"ParameterKey":"ModelName2", "ParameterValue": training_job_2+'-'+timestamp},
{"ParameterKey":"ModelDataUrl2", "ParameterValue": model_data_url_2}
]
)
response
waiter = cf.get_waiter('change_set_create_complete')
waiter.wait(
StackName=stack_name,
ChangeSetName='add-model'
)
response = cf.describe_change_set(
StackName=stack_name,
ChangeSetName='add-model'
)
response['Changes']
response = cf.execute_change_set(
StackName=stack_name,
ChangeSetName='add-model'
)
response
waiter = cf.get_waiter('stack_update_complete')
waiter.wait(StackName=stack_name)
response = sm.describe_endpoint(EndpointName=endpoint_name)
response['ProductionVariants']
```
## Create a CloudWatch alarm for model latency
```
cw = boto3.client('cloudwatch')
alarm_name = 'My_endpoint_latency'
response = cw.put_metric_alarm(
AlarmName=alarm_name,
ComparisonOperator='GreaterThanThreshold',
EvaluationPeriods=1,
MetricName='ModelLatency',
Namespace='AWS/SageMaker',
Period=60,
Statistic='Average',
Threshold=500000.0,
AlarmDescription='Alarm when 1-minute average latency exceeds 500ms',
Dimensions=[
{
'Name': 'EndpointName',
'Value': endpoint_name
},
{
'Name': 'VariantName',
'Value': 'variant-2'
}
],
Unit='Microseconds'
)
response
response = cw.describe_alarms(AlarmNames=[alarm_name])
for a in response['MetricAlarms']:
if a['AlarmName'] == alarm_name:
alarm_arn = a['AlarmArn']
print(alarm_arn)
```
## Canary deployment of second model
```
weights = list(range(10,110,10))
print(weights)
for w in weights:
response = cf.update_stack(
StackName=stack_name,
UsePreviousTemplate=True,
Parameters=[
{"ParameterKey":"ModelName", "UsePreviousValue": True},
{"ParameterKey":"ModelDataUrl", "UsePreviousValue": True},
{"ParameterKey":"ContainerImage", "UsePreviousValue": True},
{"ParameterKey":"RoleArn", "UsePreviousValue": True},
{"ParameterKey":"ModelName2", "UsePreviousValue": True},
{"ParameterKey":"ModelDataUrl2", "UsePreviousValue": True},
{"ParameterKey":"VariantWeight", "ParameterValue": str(100-w)},
{"ParameterKey":"VariantWeight2", "ParameterValue": str(w)}
],
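        # Automatically roll back this stack update if the CloudWatch latency alarm
        # referenced below fires during the update or within the 5-minute
        # monitoring window that follows.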
RollbackConfiguration={
'RollbackTriggers': [
{
'Arn': alarm_arn,
'Type': 'AWS::CloudWatch::Alarm'
}
],
'MonitoringTimeInMinutes': 5
}
)
waiter = cf.get_waiter('stack_update_complete')
waiter.wait(StackName=stack_name)
print("Sending %d percent of traffic to new model" % w)
cf.delete_stack(StackName=stack_name)
```
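To confirm the traffic split while the canary rollout progresses (i.e. before the final `delete_stack` call), a quick check like the following could be run from a separate cell. The weight fields shown are the ones `describe_endpoint` normally reports; verify against your boto3 version.
```
# Sketch: print the desired and current traffic weights of each production variant.
response = sm.describe_endpoint(EndpointName=endpoint_name)
for variant in response['ProductionVariants']:
    print(variant['VariantName'],
          '- desired weight:', variant.get('DesiredWeight'),
          '- current weight:', variant.get('CurrentWeight'))
```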

### Egeria Hands-On Lab
# Welcome to the Performance Test Suite Lab
## Introduction
Egeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information (called metadata) about data and the technology that supports it.
In this hands-on lab you will get a chance to work with the performance test suite that is used to measure the performance of a technology acting as an Egeria metadata repository.
## About the Performance Suite
The Performance Suite can be used to test a Repository Connector to record the performance of its various repository methods.
There are a number of different performance profiles, each intended to measure one fairly granular area of a repository's performance. The suite will attempt to invoke all metadata methods a number of times, with a variety of parameters, to identify any potential performance bottlenecks or troublesome scenarios in the repository.
Of course, many of the methods are optional: the suite will therefore simply record results for those methods it is able to invoke, and mark those it cannot invoke as unsupported.
**It is important to note that since this suite focuses on performance, it will not necessarily exercise every potential edge case or do an intensive verification that every result from a method call is precisely correct: these should still be confirmed through the [Repository Workbench](run-conformance-test-suite.ipynb).**
## Configuring and running the Performance Suite
We'll come back to the profiles later, but for now let's configure and run the Performance Suite.
We're going to need a pair of OMAG Servers - one to run the repository under test, the other to run the workbench. The servers need to join the same cohort.

> **Figure 1:** Cohort for conformance testing
When the one running the workbench sees the cohort registration of the server under test, it runs the workbench tests against that server's repository.
## Starting up the Egeria platforms
We'll start one OMAG Server Platform on which to run both the servers.
We also need Apache Zookeeper and Apache Kafka.
```
%run ../common/globals.ipynb
import requests
import pprint
import json
import os
import time
# Disable warnings about self-signed certificates
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
ctsPlatformURL = os.environ.get('ctsPlatformURL','https://localhost:9445')
def checkServerPlatform(testPlatformName, testPlatformURL):
response = requests.get(testPlatformURL + "/open-metadata/platform-services/users/garygeeke/server-platform/origin/")
if response.status_code == 200:
print(" ...", testPlatformName, "at", testPlatformURL, "is active - ready to begin")
else:
print(" ...", testPlatformName, "at", testPlatformURL, "is down - start it before proceeding")
print ("\nChecking OMAG Server Platform availability...")
checkServerPlatform("CTS OMAG Server Platform", ctsPlatformURL)
print ("Done.")
```
## Configuring the Servers
We're going to configure both the servers in the diagram above.
Let's start by creating some definitions that we'll reuse throughout.
Knowing both server names up front will be handy when we configure the workbench.
To configure the servers we'll need a common cohort name and event bus configuration.
We can let the CTS server default to using a local in-memory repository.
The CTS server does not need to run any Access Services.
```
ctsServerName = "CTS_Server"
sutServerName = "SUT_Server"
devCohort = "devCohort"
```
We'll need to pass a couple of JSON request bodies - so let's set up a reusable header:
```
jsonContentHeader = {'content-type':'application/json'}
```
We'll need a JSON request body for configuration of the event bus.
```
eventBusURLroot = os.environ.get('eventBusURLroot', 'localhost:9092')
eventBusBody = {
"producer": {
"bootstrap.servers": eventBusURLroot
},
"consumer":{
"bootstrap.servers": eventBusURLroot
}
}
```
We'll also need a JSON request body for configuration of the workbench. This can be used to set a number of parameters related to the volumes that should be used in the tests:
- `instancesPerType` defines the number of instances of each Egeria type that should be created: both a homed copy and a reference copy of each will be created, so the total number of instances created will in reality be 2x this number. For example, setting this to `5` for a repository that supports 500 types will generate 5000 instances of metadata (5 x 2 x 500).
- `maxSearchResults` defines the number of results to retrieve in each page for a search request. Some scenarios will only attempt to retrieve the initial page of results, while others will cycle through every page to test the performance of paging and count the total number of instances found in the environment.
- `waitBetweenScenarios` can be used to introduce a delay between write operations and read operations, for example if testing a repository that provides eventual consistency of its search indexes in order to improve its write (ingestion) performance.
```
workbenchConfigBody = {
"class" : "RepositoryPerformanceWorkbenchConfig",
"tutRepositoryServerName": sutServerName,
"instancesPerType" : 5,
"maxSearchResults" : 2,
"waitBetweenScenarios" : 0
}
```
We also need a userId for the configuration commands. You could change this to a name you choose.
```
adminUserId = "garygeeke"
```
We can perform configuration operations through the administrative interface provided by the ctsPlatformURL.
The URLs for the configuration REST APIs have a common structure and begin with the following root:
```
adminPlatformURL = ctsPlatformURL
adminCommandURLRoot = adminPlatformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/'
```
What follows are descriptions and coded requests to configure each server. There are a lot of common steps
involved in configuring a metadata server, so we first define some simple
functions that can be re-used in later steps for configuring each server.
Each function returns True or False to indicate whether it was successful.
```
def postAndPrintResult(url, json=None, headers=None):
print(" ...... (POST", url, ")")
response = requests.post(url, json=json, headers=headers)
if response.status_code == 200:
print(" ...... Success. Response: ", response.json())
return True
else:
print(" ...... Failed. Response: ", response.json())
return False
def getAndPrintResult(url, json=None, headers=None):
print(" ...... (GET", url, ")")
response = requests.get(url, json=json, headers=headers)
if response.status_code == 200:
print(" ...... Success. Response: ", response.json())
return True
else:
print(" ...... Failed. Response: ", response.json())
return False
def getResult(url, json=None, headers=None):
print("\n ...... (GET", url, ")")
try:
response = requests.get(url, json=json, headers=headers)
if response.status_code == 200:
if response.json()['relatedHTTPCode'] == 200:
return response.json()
return None
except requests.exceptions.RequestException as e:
print (" ...... FAILED - http request threw an exception: ", e)
return None
def configurePlatformURL(serverName, serverPlatform):
print("\n ... Configuring the platform the server will run on...")
url = adminCommandURLRoot + serverName + '/server-url-root?url=' + serverPlatform
return postAndPrintResult(url)
def configureServerType(serverName, serverType):
print ("\n ... Configuring the server's type...")
url = adminCommandURLRoot + serverName + '/server-type?typeName=' + serverType
return postAndPrintResult(url)
def configureUserId(serverName, userId):
print ("\n ... Configuring the server's userId...")
url = adminCommandURLRoot + serverName + '/server-user-id?id=' + userId
return postAndPrintResult(url)
def configurePassword(serverName, password):
print ("\n ... Configuring the server's password (optional)...")
url = adminCommandURLRoot + serverName + '/server-user-password?password=' + password
return postAndPrintResult(url)
def configureMetadataRepository(serverName, repositoryType):
print ("\n ... Configuring the metadata repository...")
url = adminCommandURLRoot + serverName + '/local-repository/mode/' + repositoryType
return postAndPrintResult(url)
def configureDescriptiveName(serverName, collectionName):
print ("\n ... Configuring the short descriptive name of the metadata stored in this server...")
url = adminCommandURLRoot + serverName + '/local-repository/metadata-collection-name/' + collectionName
return postAndPrintResult(url)
def configureEventBus(serverName, busBody):
print ("\n ... Configuring the event bus for this server...")
url = adminCommandURLRoot + serverName + '/event-bus'
return postAndPrintResult(url, json=busBody, headers=jsonContentHeader)
def configureCohortMembership(serverName, cohortName):
print ("\n ... Configuring the membership of the cohort...")
url = adminCommandURLRoot + serverName + '/cohorts/' + cohortName
return postAndPrintResult(url)
def configureRepositoryWorkbench(serverName, workbenchBody):
print ("\n ... Configuring the repository workbench for this server...")
url = adminCommandURLRoot + serverName + '/conformance-suite-workbenches/repository-workbench/performance'
return postAndPrintResult(url, json=workbenchBody, headers=jsonContentHeader)
```
## Configuring the CTS Server
We're going to configure the CTS Server from the diagram above. The CTS Server is the one that runs the repository performance workbench.
The server will default to using a local in-memory repository.
The CTS server does not need to run any Access Services.
Notice that when we configure the CTS Server to run the repository performance workbench, we provide the name of the server under test.
First we introduce a 'success' variable which is used to monitor progress in the subsequent cells.
```
success = True
ctsServerType = "Conformance Suite Server"
ctsServerUserId = "CTS1npa"
ctsServerPassword = "CTS1passw0rd"
ctsServerPlatform = ctsPlatformURL
print("Configuring " + ctsServerName + "...")
if (success):
success = configurePlatformURL(ctsServerName, ctsServerPlatform)
if (success):
success = configureServerType(ctsServerName, ctsServerType)
if (success):
success = configureUserId(ctsServerName, ctsServerUserId)
if (success):
success = configurePassword(ctsServerName, ctsServerPassword)
if (success):
success = configureEventBus(ctsServerName, eventBusBody)
if (success):
success = configureCohortMembership(ctsServerName, devCohort)
if (success):
success = configureRepositoryWorkbench(ctsServerName, workbenchConfigBody)
if (success):
print("\nDone.")
else:
print("\nFAILED: please check the messages above and correct before proceeding")
```
## Configuring the SUT Server (Server Under Test)
Next we're going to configure the SUT Server from the diagram above. The SUT Server is the one that hosts the repository that is being tested. The SUT Server will run on the same platform as the CTS Server.
The server will default to using a local in-memory repository.
The SUT server does not need to run any Access Services.
Notice that this is the server name we provided when configuring the CTS Server's repository performance workbench above.
```
sutServerType = "Metadata Repository Server"
sutServerUserId = "SUTnpa"
sutServerPassword = "SUTpassw0rd"
metadataCollectionName = "SUT_MDR"
metadataRepositoryTypeInMemory = "in-memory-repository"
metadataRepositoryTypeGraph = "local-graph-repository"
print("Configuring " + sutServerName + "...")
if (success):
success = configurePlatformURL(sutServerName, ctsServerPlatform)
if (success):
success = configureServerType(sutServerName, sutServerType)
if (success):
success = configureUserId(sutServerName, sutServerUserId)
if (success):
success = configurePassword(sutServerName, sutServerPassword)
if (success):
success = configureMetadataRepository(sutServerName, metadataRepositoryTypeInMemory)
if (success):
success = configureDescriptiveName(sutServerName, metadataCollectionName)
if (success):
success = configureEventBus(sutServerName, eventBusBody)
if (success):
success = configureCohortMembership(sutServerName, devCohort)
if (success):
print("\nDone.")
else:
print("\nFAILED: please check the messages above and correct before proceeding")
```
The commands below deploy the server configuration documents to the server platforms where the
servers will run.
```
def deployServerToPlatform(serverName, platformURL):
print(" ... deploying", serverName, "to the", platformURL, "platform...")
url = adminCommandURLRoot + serverName + '/configuration/deploy'
platformTarget = {
"class": "URLRequestBody",
"urlRoot": platformURL
}
try:
return postAndPrintResult(url, json=platformTarget, headers=jsonContentHeader)
except requests.exceptions.RequestException as e:
print (" ...... FAILED - http request threw an exception: ", e)
return False
print("\nDeploying server configuration documents to appropriate platforms...")
if (success):
success = deployServerToPlatform(ctsServerName, ctsPlatformURL)
if (success):
success = deployServerToPlatform(sutServerName, ctsPlatformURL)
if (success):
print("\nDone.")
else:
print("\nFAILED: please check the messages above and correct before proceeding")
```
## Starting the servers
We'll need to define the URL for the OMRS operational services API.
```
operationalServicesURLcore = "/open-metadata/admin-services/users/" + adminUserId
```
Start the CTS Server, followed by the SUT Server.
When the CTS Server sees the cohort registration for the SUT Server it will start to run the workbench.
```
def startServer(serverName, platformURL):
print(" ... starting server", serverName, "...")
url = platformURL + operationalServicesURLcore + '/servers/' + serverName + '/instance'
return postAndPrintResult(url)
print ("\nStarting the CTS server ...")
if (success):
success = startServer(ctsServerName, ctsPlatformURL)
# Pause to allow server to initialize fully
time.sleep(4)
print ("\nStarting the SUT server ...")
if (success):
success = startServer(sutServerName, ctsPlatformURL)
if (success):
print("\nDone.")
else:
print("\nFAILED: please check the messages above and correct before proceeding")
```
## Workbench Progress
The repository performance workbench runs many tests (minimally thousands, but potentially many more) and can take a while to complete -- possibly several hours. There is no 'completion event' because, once the performance suite has completed the synchronous workbench tests, it continues to run and will perform asynchronous tests in response to events that may be received within the cohort. The consequence of this is that it is not easy to know when the CTS has 'finished'. However, if you scan the console output logged by the performance suite, it is possible to spot the following log output:
Thu Nov 21 09:11:01 GMT 2019 CTS_Server Information CONFORMANCE-SUITE-0011 The Open Metadata Conformance Workbench performance-workbench has completed its synchronous tests, further test cases may be triggered from incoming events.
When this has appeared you will probably see a number of further events being processed by the CTS Server. There can be up to several hundred events, which look like the following:
Thu Nov 21 09:11:03 GMT 2019 CTS_Server Event OMRS-AUDIT-8006 Processing incoming event of type DeletedEntityEvent for instance 2fd6cd97-35dd-41d9-ad2f-4d25af30033e from: OMRSEventOriginator{metadataCollectionId='f076a951-fcd0-483b-a06e-d0c7abb61b84', serverName='SUT_Server', serverType='Metadata Repository Server', organizationName='null'}
Thu Nov 21 09:11:03 GMT 2019 CTS_Server Event OMRS-AUDIT-8006 Processing incoming event of type PurgedEntityEvent for instance 2fd6cd97-35dd-41d9-ad2f-4d25af30033e from: OMRSEventOriginator{metadataCollectionId='f076a951-fcd0-483b-a06e-d0c7abb61b84', serverName='SUT_Server', serverType='Metadata Repository Server', organizationName='null'}
These events are usually DELETE and PURGE events relating to instances that have been cleaned up on the SUT Server.
Once these events have been logged the console should go quiet. When you see this, it is possible to retrieve the workbench results from the CTS Server.
## Polling for Status
The following cell can be used to find out whether the workbench has completed its synchronous tests....
```
conformanceSuiteServicesURLcore = "/open-metadata/conformance-suite/users/" + adminUserId
def retrieveStatus(serverName, platformURL):
print(" ... retrieving completion status from server", serverName, "...")
url = platformURL + '/servers/' + serverName + conformanceSuiteServicesURLcore + '/status/workbenches/performance-workbench'
return getResult(url)
print ("\nRetrieve performance-workbench status ...")
status_json = retrieveStatus(ctsServerName, ctsPlatformURL)
if (status_json != None):
workbenchId = status_json['workbenchStatus']['workbenchId']
workbenchComplete = status_json['workbenchStatus']['workbenchComplete']
if (workbenchComplete == True):
print("\nWorkbench",workbenchId,"is complete.")
else:
print("\nWorkbench",workbenchId,"has not yet completed.")
else:
print("\nFAILED: please check the messages above and correct before proceeding")
```
## Retrieving the Workbench Results
The performance workbench keeps the results of the test cases in memory. When the workbench is complete (see above) you can request a report of the results from the REST API on the CTS Server.
The REST API has several options that supports different styles of report. Here we will request a summary report, followed by requesting the full details of each profile and test case individually. Some of the detailed profile reports can be large (several MB), so if you are running the Jupyter notebook server with its default configuration, the report may exceed the default max data rate for the notebook server. If you are not running the Egeria team's containers (docker/k8s), and you have not done so already, please restart the notebook server with the following configuration option:
jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10
If the following call results in a Java heap error, you may need to increase the memory configured for your container environment, or available locally. A minimum of 2GB, and ideally 4GB, of additional heap space is recommended for the CTS.
Given the amount of detail involved, this may take a minute or two to retrieve all of the details of a completed CTS run: wait until the cell shows a number (rather than an asterisk). This indicates the cell has completed, and you should also see a final line of output that states: "Done -- all details retrieved." (While it runs, you should see the output updating with the iterative REST calls that are made to retrieve each profile's or test case's details.)
(Note that we have provided methods to retrieve the individual test case details; however, as there are thousands of these and the performance metrics are captured in the profile summaries, we will not actually run the retrieval of the detailed test cases.)
```
from requests.utils import quote
import os
report_json = None
cwd = os.getcwd()
profileDir = "profile-details"
testCaseDir = "test-case-details"
conformanceSuiteServicesURLcore = "/open-metadata/conformance-suite/users/" + adminUserId
def retrieveSummary(serverName, platformURL):
print(" ... retrieving test report summary from server", serverName, "...")
url = platformURL + '/servers/' + serverName + conformanceSuiteServicesURLcore + '/report/summary'
return getResult(url)
def retrieveProfileNames(serverName, platformURL):
print(" ... retrieving profile list from server", serverName, "...")
url = platformURL + '/servers/' + serverName + conformanceSuiteServicesURLcore + '/report/profiles'
return getResult(url)
def retrieveTestCaseIds(serverName, platformURL):
print(" ... retrieving test case list from server", serverName, "...")
url = platformURL + '/servers/' + serverName + conformanceSuiteServicesURLcore + '/report/test-cases'
return getResult(url)
def retrieveProfileDetails(serverName, platformURL, profileName):
encodedProfileName = quote(profileName)
url = platformURL + '/servers/' + serverName + conformanceSuiteServicesURLcore + '/report/profiles/' + encodedProfileName
return getResult(url)
def retrieveTestCaseDetails(serverName, platformURL, testCaseId):
url = platformURL + '/servers/' + serverName + conformanceSuiteServicesURLcore + '/report/test-cases/' + testCaseId
return getResult(url)
print ("\nRetrieve Performance Suite summary results ...")
summary_json = retrieveSummary(ctsServerName, ctsPlatformURL)
if (summary_json != None):
with open("openmetadata_cts_summary.json", 'w') as outfile:
json.dump(summary_json, outfile)
profiles = retrieveProfileNames(ctsServerName, ctsPlatformURL)
profileDetailsDir = cwd + os.path.sep + profileDir
os.makedirs(profileDetailsDir, exist_ok=True)
print("Retrieving details for each profile...")
for profile in profiles['profileNames']:
profile_details = retrieveProfileDetails(ctsServerName, ctsPlatformURL, profile)
with open(profileDetailsDir + os.path.sep + profile.replace(" ", "_") + ".json", 'w') as outfile:
json.dump(profile_details, outfile)
print("\nDone -- all details retrieved.")
else:
print("\nFAILED: please check the messages above and correct before proceeding")
```
## Conformance Profile Results
The following is a summary of the status of each performance profile. To ensure that you get a complete summary, make sure you retrieve the results (as above) once the workbench has completed.
(Note that this uses pandas to summarize the results table: if you have not already done so, use pip3 to install pandas and its dependencies.)
```
import pandas
from pandas import json_normalize
if (summary_json != None):
repositoryWorkbenchResults = json_normalize(data = summary_json['testLabSummary'],
record_path =['testSummariesFromWorkbenches','profileSummaries'])
repositoryWorkbenchResultsSummary = repositoryWorkbenchResults[['name','description','profilePriority','conformanceStatus']]
display(repositoryWorkbenchResultsSummary.head(15))
```
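If the retrieval above succeeded, the per-profile JSON files saved under `profile-details` can be reloaded for closer inspection. The exact structure of a profile report is not shown in this lab, so the sketch below only loads a file and lists its top-level keys.
```
# Sketch: reload one of the saved profile detail files and peek at its structure.
import json
import os
saved_files = os.listdir(profileDetailsDir)
if saved_files:
    example_file = os.path.join(profileDetailsDir, saved_files[0])
    with open(example_file) as f:
        profile_details = json.load(f)
    print("Loaded", example_file)
    print("Top-level keys:", list(profile_details.keys()))
```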
# Ensemble check
*Note: This notebook can be run locally by cloning the*
[Github repository](https://github.com/shirtsgroup/physical_validation).
*The notebook is located in* `doc/examples/ensemble_check.ipynb`. *Be aware
that probabilistic quantities such as error estimates based on bootstrapping
will differ when repeating the analysis.*
```
# enable plotting in notebook
%matplotlib notebook
```
The results imported here come from example simulations which are
stored in another Python file. In real-world usage, the results would
either come from the Python interface of the simulation package, from
flat files containing the results, or from package-specific parsers. See
[SimulationData](../simulation_data.rst)
for more details.
```
from simulation_results import example_simulations
import physical_validation
```
## Check NVT simulations
To check the configurational quantities in NVT, two (otherwise identical)
simulations run at different temperatures are required.
We start by loading the first NVT simulation of 900 water molecules, which
was performed at 298.15K using velocity-rescale temperature coupling.
```
simulation_nvt_vrescale_low = example_simulations.get(
"900 water molecules, NVT at 298K with v-rescale thermostat"
)
num_molecules = 900
simulation_data_nvt_low = physical_validation.data.SimulationData(
# Example simulations were performed using GROMACS
units=physical_validation.data.UnitData.units("GROMACS"),
ensemble=physical_validation.data.EnsembleData(
ensemble="NVT",
natoms=num_molecules * 3,
volume=3.01125 ** 3,
temperature=298.15,
),
observables=physical_validation.data.ObservableData(
# This test requires only the potential energy
potential_energy=simulation_nvt_vrescale_low["potential energy"]
),
)
```
It is not trivial to decide at which temperature to perform a second simulation.
The best results are achieved when the two simulations are close enough to have
good overlap between the distributions, while keeping them far enough apart to
be able to distinguish the physical difference between the distributions from the
statistical error present in simulations.
`physical_validation` offers functionality to compute a rule-of-thumb estimate of
the optimal interval in state point between the two simulations.
We will now use our first simulation result to get an estimate of where a second
simulation would optimally be located:
```
physical_validation.ensemble.estimate_interval(
data=simulation_data_nvt_low,
)
```
The second simulation available in our example set was performed at 308.15K, which
is reasonably close to the estimate calculated above. Let's load these results:
```
simulation_nvt_vrescale_high = example_simulations.get(
"900 water molecules, NVT at 308K with v-rescale thermostat"
)
simulation_data_nvt_high = physical_validation.data.SimulationData(
# Example simulations were performed using GROMACS
units=physical_validation.data.UnitData.units("GROMACS"),
ensemble=physical_validation.data.EnsembleData(
ensemble="NVT",
natoms=num_molecules * 3,
volume=3.01125 ** 3,
temperature=308.15,
),
observables=physical_validation.data.ObservableData(
# This test requires only the potential energy
potential_energy=simulation_nvt_vrescale_high["potential energy"]
),
)
```
Using both simulation data objects, we can now check the ensemble sampled
by our simulations.
We are using `screen=True` to display a result plot on screen.
See argument `filename` to print that same plot to file.
```
physical_validation.ensemble.check(
data_sim_one=simulation_data_nvt_low,
data_sim_two=simulation_data_nvt_high,
screen=True,
)
```
By default, the ensemble check estimates the distance in temperature between the two sampled ensembles using a maximum likelihood approach. This distance estimate is expected to be close to the true value. As a rule of thumb, if the true interval is not within about 2-3 standard deviations of the estimated interval, the trajectory is unlikely to have been sampled from the expected ensemble. The quantiles (number of standard deviations) of difference between the true value and the estimate are returned from the test as a machine-readable result.
Note that in order to print the plot, a line is also fitted to the simulation results. This leads to a slightly different estimate, which explains the difference between the quantiles printed in the plot and in the terminal. As the maximum likelihood estimate is considered to be more exact, its value is reported on the terminal and used as the return value.
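As a minimal sketch of using this machine-readable result, assuming (as described above) that the check returns the quantiles as a list, one could turn the rule of thumb into an automated pass/fail criterion:
```
# Minimal sketch: use the returned quantiles as an automated acceptance test.
# The ~3 standard deviation cutoff follows the rule of thumb quoted above.
quantiles = physical_validation.ensemble.check(
    data_sim_one=simulation_data_nvt_low,
    data_sim_two=simulation_data_nvt_high,
)
if all(q <= 3 for q in quantiles):
    print("Estimated interval is within ~3 sigma of the true one - check passed.")
else:
    print("Estimated interval deviates by more than 3 sigma - check failed.")
```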
We will now repeat this analysis on the same system, but using a simulation that was performed with Berendsen temperature coupling. This temperature coupling method was found not to sample the expected ensemble. We will use this example to illustrate that the `physical_validation` checks are able to pick up this discrepancy in sampling.
Since the simulated system was identical to the one first analyzed, we will simply replace the observable trajectory in our simulation data objects:
```
simulation_nvt_berendsen_low = example_simulations.get(
"900 water molecules, NVT at 298K with Berendsen thermostat"
)
simulation_data_nvt_low.observables = physical_validation.data.ObservableData(
potential_energy=simulation_nvt_berendsen_low["potential energy"]
)
simulation_nvt_berendsen_high = example_simulations.get(
"900 water molecules, NVT at 308K with Berendsen thermostat"
)
simulation_data_nvt_high.observables = physical_validation.data.ObservableData(
potential_energy=simulation_nvt_berendsen_high["potential energy"]
)
physical_validation.ensemble.check(
data_sim_one=simulation_data_nvt_low,
data_sim_two=simulation_data_nvt_high,
screen=True,
)
```
The check confirms that the ensemble sampled using the Berendsen thermostat does not behave as expected when changing the temperature. The reported estimated temperature interval is around 15 standard deviations away from the true value, which makes it easy to reject the hypothesis that the potential energy was sampled from the correct ensemble.
## Check NPT simulations
To check the sampled ensemble of the configurational quantities in NPT, we again need two otherwise identical simulations which were performed at slightly different state points (target temperature and / or pressure). The checks can be performed using identical pressure and different temperatures, which will test whether the sampled ensembles exhibit the expected temperature dependence. The checks can also be performed using identical temperature and different pressures, which will in turn test the pressure dependence of the sampled ensemble. Finally, we can use two simulations which differ in both the target temperature and the pressure, combining the two tests into one. Here, we will showcase the last option for a system of 900 water molecules, sampled using velocity-rescale temperature coupling and Parrinello-Rahman pressure coupling. These coupling algorithms were analytically shown to sample the correct distribution, so we will check whether the simulated results fulfill this expectation.
We will start by loading a first simulation, performed at 298.15 K and 1 bar.
```
simulation_npt_low = example_simulations.get(
"900 water molecules, NPT at 298K and 1bar, using v-rescale and Parrinello-Rahman"
)
num_molecules = 900
simulation_data_npt_low = physical_validation.data.SimulationData(
# Example simulations were performed using GROMACS
units=physical_validation.data.UnitData.units("GROMACS"),
ensemble=physical_validation.data.EnsembleData(
ensemble="NPT",
natoms=num_molecules * 3,
pressure=1.0,
temperature=298.15,
),
observables=physical_validation.data.ObservableData(
# This test requires the potential energy and the volume
potential_energy=simulation_npt_low["potential energy"],
volume=simulation_npt_low["volume"],
),
)
```
As in the NVT case, we can use this simulation to have `physical_validation` suggest a state point at which to perform the second simulation.
```
physical_validation.ensemble.estimate_interval(data=simulation_data_npt_low)
```
The rule of thumb suggests that a second state point with a temperature difference of about 7.8 K and a pressure difference of about 315 bar would be optimal. The second simulation available in our example set was performed at 308.15 K and 101 bar, so at a distance of 10 K and 100 bar. According to the `physical_validation` estimate, the pressure distance should be somewhat larger for optimal error recognition. The check remains valid with this choice of state points, however.
```
simulation_npt_high = example_simulations.get(
"900 water molecules, NPT at 308K and 101bar, using v-rescale and Parrinello-Rahman"
)
num_molecules = 900
simulation_data_npt_high = physical_validation.data.SimulationData(
# Example simulations were performed using GROMACS
units=physical_validation.data.UnitData.units("GROMACS"),
ensemble=physical_validation.data.EnsembleData(
ensemble="NPT",
natoms=num_molecules * 3,
pressure=101.0,
temperature=308.15,
),
observables=physical_validation.data.ObservableData(
# This test requires the potential energy and the volume
potential_energy=simulation_npt_high["potential energy"],
volume=simulation_npt_high["volume"],
),
)
```
Using both simulation data objects, we can now check the ensemble sampled by our simulations. Note that plotting is not available for NPT simulations which differ in both temperature and pressure, since the 2-dimensional plot would be very hard to interpret.
```
physical_validation.ensemble.check(
data_sim_one=simulation_data_npt_low,
data_sim_two=simulation_data_npt_high,
)
```
The ensemble check now prints both the estimated temperature and pressure intervals. We note that in both cases, the true value is within less than one standard deviation of the estimate, which means that the null hypothesis of sampling the expected ensemble stands.
It's worth noting that the true pressure difference is given as 98.3 bar rather than 100 bar. When checking simulations which differ in both their pressure and temperature, the pressure interval can only be approximated since the temperature and pressure are not perfectly separable in the NPT partition function. Please refer to [Merz & Shirts 2018](https://doi.org/10.1371/journal.pone.0202764), eq (18), for details.
```
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (12,8)
import pylab as plt
from astrometry.libkd.spherematch import *
from astrometry.util.fits import *
import numpy as np
from astrometry.util.starutil_numpy import *
from astrometry.util.plotutils import *
from glob import glob
from collections import Counter
import os
# Load the per-field .gst photometry catalogs (M33 brick B01).
F1 = fits_table('14610_M33-B01-F01.gst.fits.gz')
F2 = fits_table('14610_M33-B01-F02.gst.fits.gz')
F3 = fits_table('14610_M33-B01-F03.gst.fits.gz')
F7 = fits_table('14610_M33-B01-F07.gst.fits.gz')
F8 = fits_table('14610_M33-B01-F08.gst.fits.gz')
F9 = fits_table('14610_M33-B01-F09.gst.fits.gz')
FF = [F1,F2,F3,F7,F8,F9]
plt.plot(F1.ra, F1.dec, 'b.');
plt.plot(F2.ra, F2.dec, 'g.');
plt.plot(F7.ra, F7.dec, 'm.');
minra = min([F1.ra.min(), F2.ra.min(), F7.ra.min()])
maxra = max([F1.ra.max(), F2.ra.max(), F7.ra.max()])
mindec = min([F1.dec.min(), F2.dec.min(), F7.dec.min()])
maxdec = max([F1.dec.max(), F2.dec.max(), F7.dec.max()])
minra,maxra, mindec, maxdec
F1.about()
ha=dict(range=((minra,maxra),(mindec,maxdec)), doclf=False, docolorbar=False)
plt.subplot(1,3,1)
plothist(F1.ra, F1.dec, **ha);
plt.subplot(1,3,2)
plothist(F2.ra, F2.dec, **ha);
plt.subplot(1,3,3)
plothist(F7.ra, F7.dec, **ha);
plothist(F1.x, F1.y);
# Match F1 and F2 sources within 0.1 arcsec.
I,J,d = match_radec(F1.ra, F1.dec, F2.ra, F2.dec, 0.1/3600.)
plt.hist(d*3600, bins=100);
cosdec = np.cos(np.deg2rad(np.median(F1.dec)))
dra = (F1.ra[I]-F2.ra[J])*cosdec * 3600.*1000.
ddec = (F1.dec[I]-F2.dec[J]) * 3600.*1000.
plothist(dra, ddec)
plt.xlabel('dRA (mas)')
plt.ylabel('dDec (mas)')
plt.title('B01F01 to B01F02 matches')
plt.savefig('f1f2.png');
Ibb = np.flatnonzero((F1.y[I] < 2000) * (F2.y[J] < 2000))
Itb = np.flatnonzero((F1.y[I] > 2200) * (F2.y[J] < 2000))
Itt = np.flatnonzero((F1.y[I] > 2200) * (F2.y[J] > 2200))
ha = dict(doclf=False, docolorbar=False, range=((-100,100),(-100,100)))
plt.subplot(1,3,1)
plothist(dra[Ibb], ddec[Ibb], **ha)
plt.axis('square')
plt.xlabel('dRA (mas)')
plt.ylabel('dDec (mas)')
plt.title('B01F01 to B01F02 matches (bb)')
plt.subplot(1,3,2)
plothist(dra[Itb], ddec[Itb], **ha)
plt.axis('square')
plt.xlabel('dRA (mas)')
plt.ylabel('dDec (mas)')
plt.title('B01F01 to B01F02 matches (tb)')
plt.subplot(1,3,3)
plothist(dra[Itt], ddec[Itt], **ha)
plt.axis('square')
plt.xlabel('dRA (mas)')
plt.ylabel('dDec (mas)')
plt.title('B01F01 to B01F02 matches (tt)')
plt.savefig('f1f2b.png');
plt.plot(F1.y[I], F2.y[J], 'b.')
plt.xlabel('F1.y')
plt.ylabel('F2.y');
plothist(F1.y[I], F2.y[J]-F1.y[I]);
u1 = np.ones(len(F1), bool)
u1[I] = False
u2 = np.ones(len(F2), bool)
u2[J] = False
len(F1), len(I), np.sum(u1)
ha = dict(doclf=False, docolorbar=False, range=((F1.ra.min(), F1.ra.max()), (F1.dec.min(), F1.dec.max())))
plt.subplot(1,3,1)
plothist(F1.ra, F1.dec, **ha)
plt.subplot(1,3,2)
plothist(F1.ra[I], F1.dec[I], **ha)
plt.subplot(1,3,3)
plothist(F1.ra[u1], F1.dec[u1], **ha);
I,J,d = match_radec(F1.ra, F1.dec, F7.ra, F7.dec, 1.0/3600.)
plt.hist(d*3600, bins=100);
# Add merge bookkeeping columns: a match counter and per-object average coordinates.
def addnew(F):
F.nmatched = np.ones(len(F), np.uint8)
F.avgra = F.ra.copy()
F.avgdec = F.dec.copy()
F = FF[0].copy()
addnew(F)
merged = F
ps = PlotSequence('merge')
avgcols = ['avgra', 'avgdec',
'f475w_rate', 'f475w_raterr', 'f475w_vega', 'f475w_std', 'f475w_err',
'f475w_chi', 'f475w_snr', 'f475w_sharp', 'f475w_round', 'f475w_crowd',
'f814w_rate', 'f814w_raterr', 'f814w_vega', 'f814w_std', 'f814w_err',
'f814w_chi', 'f814w_snr', 'f814w_sharp', 'f814w_round', 'f814w_crowd',]
# Merge each remaining field into the running catalog: accumulate matched sources, append unmatched ones.
for F in FF[1:]:
addnew(F)
I,J,d = match_radec(merged.ra, merged.dec, F.ra, F.dec, 0.06/3600., nearest=True)
print('Matched', len(I), 'of', len(merged), 'old and', len(F), 'new')
print('F RA', F.ra.min(), F.ra.max(), 'Dec', F.dec.min(), F.dec.max())
print('matched RA', merged.ra.min(), merged.ra.max(), 'Dec', merged.dec.min(), merged.dec.max())
plt.clf()
plt.hist(d*3600.*1000., bins=50)
plt.xlabel('Match distance (mas)')
plt.xlim(0, 100)
plt.show()
ps.savefig()
# unmatched
#Um = np.ones(len(merged), bool)
#Um[I] = False
Uf = np.ones(len(F), bool)
Uf[J] = False
U = F[Uf]
# matched --
for col in avgcols:
m = merged.get(col)
f = F.get(col)
m[I] += f[J]
merged.nmatched[I] += 1
merged = merge_tables([merged, U])
print('matched RA', merged.ra.min(), merged.ra.max(), 'Dec', merged.dec.min(), merged.dec.max())
for col in avgcols:
m = merged.get(col)
m /= merged.nmatched.astype(float)
plothist(merged.ra, merged.dec);
plt.savefig('merged.png')
len(merged)
merged.writeto('merged.fits')
merged.dec.min(), merged.dec.max()
```
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import pandas as pd
from tqdm import tqdm
%matplotlib inline
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
nclasses = 10
batchSize = 128
keepRate = 0.8
epochs = 25
x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def maxpool2d(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Two conv + max-pool blocks, then a fully connected layer with dropout and a linear output layer.
def cnn(x):
weights = {
'Wconv1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
'Wconv2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
'Wfc1': tf.Variable(tf.random_normal([7*7*64, 1024])),
'out': tf.Variable(tf.random_normal([1024, nclasses]))
}
biases = {
'bconv1': tf.Variable(tf.random_normal([32])),
'bconv2': tf.Variable(tf.random_normal([64])),
'bfc1': tf.Variable(tf.random_normal([1024])),
'out': tf.Variable(tf.random_normal([nclasses]))
}
x = tf.reshape(x, [-1, 28, 28, 1])
out = tf.nn.relu(conv2d(x, weights['Wconv1']) + biases['bconv1'])
out = maxpool2d(out)
out = tf.nn.relu(conv2d(out, weights['Wconv2']) + biases['bconv2'])
out = maxpool2d(out)
out = tf.reshape(out, [-1, 7*7*64])
out = tf.nn.relu(tf.matmul(out, weights['Wfc1']) + biases['bfc1'])
out = tf.nn.dropout(out, keepRate)
out = tf.matmul(out, weights['out']) + biases['out']
return out
# Train with Adam on softmax cross-entropy and report test-set accuracy.
def trainCNN(x):
losses = []
prediction = cnn(x)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=prediction))
optimizer = tf.train.AdamOptimizer().minimize(loss)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(epochs):
epochLoss = 0
for _ in range(int(mnist.train.num_examples/batchSize)):
batchX, batchY = mnist.train.next_batch(batchSize)
_, l = sess.run([optimizer, loss], feed_dict={x: batchX, y: batchY})
epochLoss += l
losses.append(epochLoss)
print('Epoch', epoch+1, '/', epochs, 'loss:', epochLoss)
correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
acc = tf.reduce_mean(tf.cast(correct, tf.float32))
print('Accuracy: ', sess.run([acc], feed_dict={x: mnist.test.images, y: mnist.test.labels}))
return losses
l = trainCNN(x)
plt.plot(range(epochs), l)
plt.xlabel('Epoch')
plt.ylabel('Loss')
```
Discretize PV row sides and indexing
==============================
In this section, we will learn how to:
- create a PV array with discretized PV row sides
- understand the indices of the timeseries surfaces of a PV array
- plot a PV array with indices shown on plot
Imports and settings
```
# Import external libraries
import matplotlib.pyplot as plt
# Settings
%matplotlib inline
```
### Prepare PV array parameters
```
pvarray_parameters = {
'n_pvrows': 3, # number of pv rows
'pvrow_height': 1, # height of pvrows (measured at center / torque tube)
'pvrow_width': 1, # width of pvrows
'axis_azimuth': 0., # azimuth angle of rotation axis
'surface_tilt': 20., # tilt of the pv rows
'surface_azimuth': 270., # azimuth of the pv rows front surface
'solar_zenith': 40., # solar zenith angle
'solar_azimuth': 150., # solar azimuth angle
'gcr': 0.5, # ground coverage ratio
}
```
### Create discretization scheme
```
discretization = {'cut':{
0: {'back': 5}, # discretize the back side of the leftmost PV row into 5 segments
1: {'front': 3} # discretize the front side of the center PV row into 3 segments
}}
pvarray_parameters.update(discretization)
```
### Create a PV array
Import the ``OrderedPVArray`` class and create a PV array object using the parameters above
```
from pvfactors.geometry import OrderedPVArray
# Create pv array
pvarray = OrderedPVArray.fit_from_dict_of_scalars(pvarray_parameters)
```
Plot the PV array at index ``0``
```
# Plot pvarray shapely geometries
f, ax = plt.subplots(figsize=(10, 3))
pvarray.plot_at_idx(0, ax)
plt.show()
```
As we can see, there is some discretization on the leftmost and the center PV rows.
We can check that it was correctly done using the ``pvarray`` object.
```
pvrow_left = pvarray.ts_pvrows[0]
n_segments = len(pvrow_left.back.list_segments)
print("Back side of leftmost PV row has {} segments".format(n_segments))
pvrow_center = pvarray.ts_pvrows[1]
n_segments = len(pvrow_center.front.list_segments)
print("Front side of center PV row has {} segments".format(n_segments))
```
### Indexing the timeseries surfaces in a PV array
In order to perform some calculations on PV array surfaces, it is often important to index them.
``pvfactors`` takes care of this.
We can for instance check the index of the timeseries surfaces on the front side of the center PV row
```
# List some indices
ts_surface_list = pvrow_center.front.all_ts_surfaces
print("Indices of surfaces on front side of center PV row")
for ts_surface in ts_surface_list:
index = ts_surface.index
print("... surface index: {}".format(index))
```
Intuitively, one could have expected only 3 timeseries surfaces because that's what the previous plot at index ``0`` was showing.
But it is important to understand that ALL timeseries surfaces are created at PV array fitting time, even the ones that don't exist for the given timestamps.
So in this example:
- we have 3 illuminated timeseries surfaces, which do exist at timestamp ``0``
- and 3 shaded timeseries surfaces, which do NOT exist at timestamp ``0`` (so they have zero length).
Let's check that.
```
for ts_surface in ts_surface_list:
index = ts_surface.index
shaded = ts_surface.shaded
length = ts_surface.length
print("Surface with index: '{}' has shading status '{}' and length {} m".format(index, shaded, length))
```
As expected, all shaded timeseries surfaces on the front side of the PV row have length zero.
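A natural follow-up (shown here as a minimal sketch, relying only on the `shaded` flag used above) is to keep only the illuminated timeseries surfaces:
```
# Minimal sketch: select only the illuminated timeseries surfaces of this side.
illuminated_surfaces = [s for s in ts_surface_list if not s.shaded]
print("Number of illuminated timeseries surfaces: {}".format(len(illuminated_surfaces)))
```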
### Plot PV array with indices
It is also possible to visualize the PV surface indices of all the non-zero surfaces when plotting a PV array, for a given timestamp (here the first timestamp, i.e. ``0``).
```
# Plot pvarray shapely geometries with surface indices
f, ax = plt.subplots(figsize=(10, 4))
pvarray.plot_at_idx(0, ax, with_surface_index=True)
ax.set_xlim(-3, 5)
plt.show()
```
As shown above, the surfaces on the front side of the center PV row have indices 40, 42, and 44.
# Tutorial: Hand gesture classification with EMG data using Riemannian metrics
In this notebook we are using EMG time series collected by 8 electrodes placed on the arm skin. We are going to show how to:
- process this kind of signal into covariance matrices that we can manipulate with geomstats tools;
- apply ML algorithms to this data to classify the four different hand gestures present in the data (rock, paper, scissors, ok);
- compare how the different methods (using Riemannian metrics, projecting on the tangent space, using the Euclidean metric) perform against each other.
<img src="figures/paper_rock_scissors.png" />
## Context
The data are acquired from somOS-interface: an sEMG armband that allows you to interact via Bluetooth with an Android smartphone (you can contact Marius Guerard ([email protected]) or Renaud Renault ([email protected]) for more info on how to make this kind of armband yourself).
An example application is to record static signs that are linked with different actions (moving a cursor and clicking, sign recognition for command-based personal assistants, ...). In these experiments, we want to evaluate the difference in performance (measured as the accuracy of sign recognition) between three different real-life situations where we change the conditions of training (when the user records signs or "calibrates" the device) and testing (when the app guesses what sign the user is doing):
- 1. What is the accuracy when doing sign recognition right after training?
- 2. What is the accuracy when calibrating, removing and replacing the armband at the same position and then testing?
- 3. What is the accuracy when calibrating, removing the armband and giving it to someone else that is testing it without calibration?
To simulate these situations, we record data from two different users (rr and mg) in two different sessions (s1 and s2). The user puts the armband on before every session and removes it after every session.
Quick description of the data:
- Each row corresponds to one acquisition; there is an acquisition every ~4 ms for the 8 electrodes, which corresponds to a 250 Hz acquisition rate.
- The time column is in ms.
- The columns c0 to c7 correspond to the electrical value recorded at each of the 8 electrodes (arbitrary unit).
- The label corresponds to the sign being recorded by the user at this time point ('rest', 'rock', 'paper', 'scissors', or 'ok'). 'rest' corresponds to a rested arm.
- The exp column identifies the user (rr or mg) and the session (s1 or s2).
Note: Another interesting use case, not explored in this notebook, would be to test the accuracy when calibrating, removing the armband, and giving it to someone else who calibrates it on their own arm before testing it. The idea is that transfer learning might help to get better results (or faster calibration) than calibrating on a single user.
## Setup
Before starting this tutorial, we set the working directory to be the root of the geomstats repository. In order to have the code working on your machine, you need to change this path to the path of your geomstats repository.
```
import os
import subprocess
import matplotlib
matplotlib.interactive(True)
import matplotlib.pyplot as plt
geomstats_gitroot_path = subprocess.check_output(
['git', 'rev-parse', '--show-toplevel'],
universal_newlines=True)
os.chdir(geomstats_gitroot_path[:-1])
print('Working directory: ', os.getcwd())
import geomstats.backend as gs
gs.random.seed(2021)
```
## Parameters
```
N_ELECTRODES = 8
N_SIGNS = 4
```
## The Data
```
import geomstats.datasets.utils as data_utils
data = data_utils.load_emg()
data.head()
fig, ax = plt.subplots(N_SIGNS, figsize=(20, 20))
label_list = ['rock', 'scissors', 'paper', 'ok']
for i, label_i in enumerate(label_list):
sign_df = data[data.label==label_i].iloc[:100]
for electrode in range(N_ELECTRODES):
ax[i].plot(sign_df.iloc[:, 1 + electrode])
ax[i].title.set_text(label_i)
```
We are removing the sign 'rest' for the rest of the analysis.
```
data = data[data.label != 'rest']
```
### Preprocessing into covariance matrices
```
import numpy as np
import pandas as pd
### Parameters.
N_STEPS = 100
LABEL_MAP = {'rock': 0, 'scissors': 1, 'paper': 2, 'ok': 3}
MARGIN = 1000
```
Unpacking data into arrays for batching
```
data_dict = {
'time': gs.array(data.time),
'raw_data': gs.array(data[['c{}'.format(i) for i in range(N_ELECTRODES)]]),
'label': gs.array(data.label),
'exp': gs.array(data.exp)}
from geomstats.datasets.prepare_emg_data import TimeSeriesCovariance
cov_data = TimeSeriesCovariance(data_dict, N_STEPS, N_ELECTRODES, LABEL_MAP, MARGIN)
cov_data.transform()
```
We check that these matrices belong to the space of SPD matrices.
```
import geomstats.geometry.spd_matrices as spd
manifold = spd.SPDMatrices(N_ELECTRODES)
gs.all(manifold.belongs(cov_data.covs))
```
#### Covariance plots of the Euclidean average
```
fig, ax = plt.subplots(2, 2, figsize=(20, 10))
for label_i, i in cov_data.label_map.items():
label_ids = np.where(cov_data.labels==i)[0]
sign_cov_mat = cov_data.covs[label_ids]
mean_cov = np.mean(sign_cov_mat, axis=0)
ax[i // 2, i % 2].matshow(mean_cov)
ax[i // 2, i % 2].title.set_text(label_i)
```
Looking at the Euclidean average of the SPD matrices for each sign does not show a striking difference between 3 of our signs (scissors, paper, and ok). A Minimum Distance to Mean (MDM) algorithm would probably perform poorly if using the Euclidean mean here.
#### Covariance plots of the Frechet mean of the affine-invariant metric
```
from geomstats.learning.frechet_mean import FrechetMean
from geomstats.geometry.spd_matrices import SPDMetricAffine
metric_affine = SPDMetricAffine(N_ELECTRODES)
mean_affine = FrechetMean(metric=metric_affine, point_type='matrix')
fig, ax = plt.subplots(2, 2, figsize=(20, 10))
for label_i, i in cov_data.label_map.items():
label_ids = np.where(cov_data.labels==i)[0]
sign_cov_mat = cov_data.covs[label_ids]
mean_affine.fit(X=sign_cov_mat)
mean_cov = mean_affine.estimate_
ax[i // 2, i % 2].matshow(mean_cov)
ax[i // 2, i % 2].title.set_text(label_i)
```
We see that the average matrices computed using the affine-invariant metric are now more differentiated from each other and can potentially give better results when using MDM to predict the sign linked to a matrix sample.
## Sign Classification
We are now going to train some classifiers on those matrices to see how accurately we can discriminate these 4 hand positions.
The baseline accuracy is defined as the accuracy we get by randomly guessing the signs. In our case, the baseline accuracy is 25%.
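As a quick sanity check (a minimal sketch, not part of the original notebook), we can verify that the four classes are roughly balanced, so that random guessing indeed sits around 25%:
```
# Minimal sketch: empirical class proportions behind the 25% random-guess baseline.
import numpy as np
labels, counts = np.unique(cov_data.labels, return_counts=True)
print(dict(zip(labels, counts / counts.sum())))
```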
```
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.preprocessing import StandardScaler
# Hiding the numerous sklearn warnings
import warnings
warnings.filterwarnings('ignore')
!pip install keras
!pip install tensorflow
from keras.wrappers.scikit_learn import KerasClassifier
import tensorflow as tf
```
N_EPOCHS is the number of epochs for which to train the DNN. A recommended value is ~100.
```
N_EPOCHS = 10
N_FEATURES = int(N_ELECTRODES * (N_ELECTRODES + 1) / 2)
```
### A. Test on the same session and user as Training/Calibration
In this first part we are training our model on the same session that we are testing it on. In real life, this corresponds to a user calibrating their armband right before using it. To do this, we split every session into k folds, training on $(k-1)$ folds and testing on the remaining fold.
```
class ExpResults:
"""Class handling the score collection and plotting among the different experiments.
"""
def __init__(self, exps):
self.exps = exps
self.results = {}
self.exp_ids = {}
# Compute the index corresponding to each session only once at initialization.
for exp in set(self.exps):
self.exp_ids[exp] = np.where(self.exps==exp)[0]
def add_result(self, model_name, model, X, y):
"""Add the results from the cross validated pipeline.
        For the given model, it will add the cross-validated results of every session to the model_name
        entry of self.results.
Parameters
----------
model_name : str
Name of the pipeline/model that we are adding results from.
model : sklearn.pipeline.Pipeline
sklearn pipeline that we are evaluating.
X : array
data that we are ingesting in the pipeline.
y : array
labels corresponding to the data.
"""
self.results[model_name] = {'fit_time': [], 'score_time': [], 'test_score': [], 'train_score': []}
for exp in self.exp_ids.keys():
ids = self.exp_ids[exp]
            exp_result = cross_validate(model, X[ids], y[ids], return_train_score=True)
for key in exp_result.keys():
self.results[model_name][key] += list(exp_result[key])
print('Average training score: {}, Average test score: {}'.format(np.mean(self.results[model_name]['train_score']),
np.mean(self.results[model_name]['test_score'])))
def plot_results(self, title, variables, err_bar=None, save_name=None, xlabel='Model', ylabel='Acc'):
"""Plot bar plot comparing the different pipelines' results.
Compare the results added previously using the 'add_result' method with bar plots.
Parameters
----------
title : str
Title of the plot.
variables : list of array
List of the variables to plot (e.g. train_score, test_score,...)
err_bar : list of float
list of error to use for plotting error bars. If None, std is used by default.
save_name : str
path to save the plot. If None, plot is not saved.
xlabel : str
Label of the x-axis.
ylabel : str
Label of the y-axis.
"""
### Some defaults parameters.
w = 0.5
colors = ['b', 'r', 'gray']
### Reshaping the results for plotting.
x_labels = self.results.keys()
list_vec = []
for variable in variables:
list_vec.append(np.array([self.results[model][variable] for model in x_labels]).transpose())
rand_m1 = lambda size: np.random.random(size) * 2 - 1
### Plots parameters.
label_loc = np.arange(len(x_labels))
center_bar = [w * (i - 0.5) for i in range(len(list_vec))]
### Plots values.
avg_vec = [np.nanmean(vec, axis=0) for vec in list_vec]
if err_bar is None:
err_bar = [np.nanstd(vec, axis=0) for vec in list_vec]
### Plotting the data.
fig, ax = plt.subplots(figsize=(20, 15))
for i, vec in enumerate(list_vec):
            label_i = variables[i] + ' (n = {})'.format(len(vec))
rects = ax.bar(label_loc + center_bar[i], avg_vec[i], w, label=label_i,
yerr=err_bar[i], color=colors[i], alpha=0.6)
for j, x in enumerate(label_loc):
ax.scatter((x + center_bar[i]) + rand_m1(vec[:, j].size) * w/4,
vec[:, j], color=colors[i], edgecolor='k')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_title(title)
ax.set_xticks(label_loc)
ax.set_xticklabels(x_labels)
ax.legend()
plt.legend()
### Saving the figure with a timestamp as a name.
if save_name is not None:
plt.savefig(save_name)
exp_arr = data.exp.iloc[cov_data.batches]
intra_sessions_results = ExpResults(exp_arr)
```
#### A.0. Using Logistic Regression on the vectorized Matrix (Euclidean Method)
```
pipeline = Pipeline(
steps=[('standardize', StandardScaler()),
('logreg', LogisticRegression(solver='lbfgs', multi_class='multinomial'))])
intra_sessions_results.add_result(model_name='logreg_eucl', model=pipeline, X=cov_data.covecs, y=cov_data.labels)
```
#### A.1. Using DNN on the vectorized Matrix (Euclidean Method)
```
def create_model(weights='initial_weights.hd5', n_features=N_FEATURES, n_signs=N_SIGNS):
"""Function to create model, required for using KerasClassifier and wrapp a Keras model inside a
scikitlearn form.
We added a weight saving/loading to remove the randomness of the weight initialization (for better comparison).
"""
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(n_features, activation='relu', input_shape=(n_features,)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(17, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(n_signs, activation='softmax'),
])
model.compile(loss = 'sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
if weights is None:
model.save_weights('initial_weights.hd5')
else:
model.load_weights(weights)
return model
def create_model_covariance(weights='initial_weights.hd5'):
return create_model(weights=weights, n_features=N_FEATURES)
```
Use the line below to generate the 'initial_weights.hd5' file
```
generate_weights = create_model(weights=None)
pipeline = Pipeline(
steps=[('standardize', StandardScaler()),
('dnn', KerasClassifier(build_fn=create_model, epochs=N_EPOCHS, verbose=0))])
intra_sessions_results.add_result(model_name='dnn_eucl', model=pipeline, X=cov_data.covecs, y=cov_data.labels)
```
#### A.2. Using Tangent space projection + Logistic Regression
```
from geomstats.learning.preprocessing import ToTangentSpace
pipeline = Pipeline(
steps=[('feature_ext', ToTangentSpace(geometry=metric_affine)),
('standardize', StandardScaler()),
('logreg', LogisticRegression(solver='lbfgs', multi_class='multinomial'))])
intra_sessions_results.add_result(model_name='logreg_affinvariant_tangent', model=pipeline, X=cov_data.covs, y=cov_data.labels)
```
#### A.3. Using Tangent space projection + DNN
```
pipeline = Pipeline(
steps=[('feature_ext', ToTangentSpace(geometry=metric_affine)),
('standardize', StandardScaler()),
('dnn', KerasClassifier(build_fn=create_model_covariance, epochs=N_EPOCHS, verbose=0))])
intra_sessions_results.add_result(model_name='dnn_affinvariant_tangent', model=pipeline, X=cov_data.covs, y=cov_data.labels)
```
#### A.4. Using Euclidean MDM
#### A.5. Using MDM with a Riemannian metric
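These two sections are left empty in the original notebook. As an illustration only (a minimal sketch, not the geomstats API), a nearest-mean classifier covering both variants can be assembled from the pieces already used above: plain arithmetic means with Frobenius distances for the Euclidean case, and `FrechetMean` together with the affine-invariant metric's `dist` method for the Riemannian case.
```
# Minimal illustrative sketch of a Minimum Distance to Mean (MDM) classifier.
# metric=None gives the Euclidean variant; metric=metric_affine gives the Riemannian one.
class MDMClassifierSketch:
    def __init__(self, metric=None):
        self.metric = metric
        self.means_ = {}

    def fit(self, X, y):
        # Compute one mean matrix per class, with the chosen notion of 'mean'.
        for label in np.unique(y):
            X_label = X[y == label]
            if self.metric is None:
                self.means_[label] = np.mean(X_label, axis=0)
            else:
                frechet = FrechetMean(metric=self.metric, point_type='matrix')
                frechet.fit(X_label)
                self.means_[label] = frechet.estimate_
        return self

    def predict(self, X):
        # Assign each sample to the class whose mean is closest.
        labels = list(self.means_.keys())
        preds = []
        for x in X:
            if self.metric is None:
                dists = [np.linalg.norm(x - self.means_[label]) for label in labels]
            else:
                dists = [self.metric.dist(x, self.means_[label]) for label in labels]
            preds.append(labels[int(np.argmin(dists))])
        return np.array(preds)
```
Such a sketch can be evaluated with a simple train/test split; reusing the cross-validation helper above would additionally require implementing the scikit-learn estimator interface (`get_params`/`set_params`).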
#### Summary plots
```
intra_sessions_results.plot_results('intra_sess', ['test_score'])
```
Time series data is an important form of structured data. Anything observed or measured at many points in time forms a time series. Depending on how they are used, time series fall into the following kinds:
- Timestamps: specific instants in time.
- Fixed periods: e.g. the whole year 2021.
- Intervals of time: indicated by a start and an end timestamp.
- Experiment or elapsed time: each timestamp is measured relative to a particular start time.
```
import pandas as pd
import numpy as np
```
# 1. Date and Time Data Types
The most commonly used data type in the Python standard library is `datetime.datetime`. The main modules are `datetime`, `time`, and `calendar`.
## 1.1 Datetime Format
- %Y: 4-digit year
- %y: 2-digit year
- %m: 2-digit month [01, 12]
- %d: 2-digit day [01, 31]
- %H: hour (24-hour clock) [00, 23]
- %I: hour (12-hour clock) [01, 12]
- %M: 2-digit minute [00, 59]
- %S: second [00, 61] (60 and 61 account for leap seconds)
---
- %w: weekday as an integer [0 (Sunday), 6]
- %U: week number of the year [00, 53]; Sunday is considered the first day of the week, and days before the first Sunday of the year are "week 0"
- %W: week number of the year [00, 53]; Monday is considered the first day of the week, and days before the first Monday of the year are "week 0"
---
- %F: shortcut for %Y-%m-%d, e.g. 2021-05-23
- %D: shortcut for %m/%d/%y, e.g. 05/23/21
---
Locale-specific date formats:
- %a: abbreviated weekday name
- %A: full weekday name
- %b: abbreviated month name
- %B: full month name
- %c: full date and time
- %p: locale equivalent of AM or PM
- %x: locale-appropriate date format
- %X: locale-appropriate time format
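As a quick illustration (a small sketch, not in the original notebook), a few of these codes applied to a single date:
```
from datetime import datetime

stamp = datetime(2021, 5, 23)
stamp.strftime('%Y-%m-%d')   # '2021-05-23'
stamp.strftime('%A')         # full weekday name, e.g. 'Sunday'
stamp.strftime('%W')         # Monday-based week number of the year
```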
## 1.2 datetime.datetime
```
from datetime import datetime
now = datetime.now()
now
# 1. Access its attributes
now.year, now.month, now.day
now.hour, now.minute, now.second
```
## 1.3 datetime.timedelta
```
# 2. Arithmetic with datetime objects
start = datetime(2020, 1, 20)
diff = now - start
diff
diff.days
diff.seconds
now
from datetime import timedelta
now + timedelta(12) # days are added by default
timedelta?
```
timedelta(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0)
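The other units can be passed as keyword arguments, for example (a small sketch):
```
now + timedelta(weeks=2, hours=3, minutes=30)
```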
## 1.4 Converting Between Strings and datetime
```
# Parse a formatted date string into a datetime
sixone = '2021-6-01 20:00:00'
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S')
pd.to_datetime(sixone)
# Get the weekday of the given date (as an integer)
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%w')
# Get the week number of the year for the given date
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%W')
# Same week number, converted to an integer
int(datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%W'))
# Get the weekday name of the given date
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%a')
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%A')
# Get the month name of the given date
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%b')
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%B')
```
## 1.5 `NaT` (Not a Time): the NA value for timestamp data in pandas

```
rootdir = 'D:/Github/BigDataAnalysis/01 Data Analysis and Pre-processing/Dataset/'
filenames = ['Auxiliary_Info.xlsx']
au_info = pd.read_excel(rootdir + filenames[0])
au_info.head()
```
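For a quick illustration independent of the Excel file above (a minimal sketch), `NaT` appears whenever a value cannot be interpreted as a timestamp:
```
# The unparseable entry becomes NaT instead of raising an error.
pd.to_datetime(['2021-06-01', 'not a date'], errors='coerce')
pd.isna(pd.NaT)  # True
```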
## 1.6 How pandas Relates to datetime
The most basic kind of time series in pandas is a Series indexed by timestamps (often represented as Python strings or datetime objects). These datetime objects are held in a DatetimeIndex.
```
ts = [1, 2, 3, 4, 5, 6]
ts[::2]
ts[1::2]
ts[3::2]
# Random values in a given shape.
# rand(d0, d1, ..., dn)
np.random.rand?
np.random.rand(6, 1)
# Return a sample (or samples) from the "standard normal" distribution.
# randn(d0, d1, ..., dn)
np.random.randn?
np.random.randn(6)
dates = [datetime(2021, 6, 1),
datetime(2021, 6, 2),
datetime(2021, 6, 3),
datetime(2021, 6, 10),
datetime(2021, 6, 18),
datetime(2021, 6, 20),
]
mock_value = np.random.randn(len(dates))
# Explicitly construct a pandas.Series object
# When a Series is created with a DatetimeIndex, pandas automatically treats it as a time series.
ts = pd.Series(mock_value, index=dates)
ts
type(ts)
isinstance(ts, pd.core.series.Series)
# Stored with nanosecond resolution
ts.index.dtype
# Indexing and slicing
ts.index[0]
list(ts.index)
```
## 1.7 Indexing, Selection, Subsetting
A time series is a subclass of Series, so it behaves the same way when indexing and selecting data.
### 1) Indexing
```
stamp = ts.index[2]
stamp
# Pass a timestamp
ts[stamp]
# Pass a string that can be interpreted as a date
ts['6/1/2021']
```
### 2) Slicing
<font color=red> Only valid for Series! </font>
```
# Slice with dates
ts[datetime(2021, 6, 3):]
# Range query
ts['6/1/2021':'6/3/2021']
```
### 3) Subsetting
```
periods = 100
longer_ts = pd.Series(np.random.randn(periods),
index=pd.date_range('6/1/2021', periods=periods))
longer_ts
%page longer_ts
# Drop everything before the `before` date
# Drop everything after the `after` date
longer_ts.truncate(before='6/10/2021',
after='6/18/2021')
longer_ts.truncate?
```
```python
longer_ts.truncate(
before=None,
after=None,
axis=None,
copy: 'bool_t' = True,
) -> 'FrameOrSeries'
```
### 4) pd.date_range()
Pay attention to the `freq` parameter!
```
pd.date_range?
```
```python
pd.date_range(
start=None,
end=None,
periods=None,
freq=None,
tz=None,
normalize=False,
name=None,
closed=None,
**kwargs,
) -> pandas.core.indexes.datetimes.DatetimeIndex
```
```
dates = pd.date_range('6/18/2021',
periods=100,
freq='W-WED')
dates
```
### 5) DataFrame.iloc
```
# pd.DataFrame.ix has been removed
pd.DataFrame.ix?
pd.__version__
pd.DataFrame.iloc?
long_df = pd.DataFrame(np.random.randn(100, 4),
index=dates,
columns=['Colorado', 'Texas', 'New York', 'Califonia'])
long_df
long_df.index
```
## 1.8 Time Series with Duplicate Indices
In some applications, multiple observations may fall on the same timestamp.
```
dates = pd.DatetimeIndex(['2021-06-23',
'2021-06-30',
'2021-06-30',
'2021-06-30',
'2021-07-07',
'2021-07-14',
'2021-07-14',
'2021-07-14',
'2021-07-21'])
dates
dup_ts = pd.Series(np.arange(len(dates)), index=dates)
dup_ts
# Check whether the index values are unique
dup_ts.index.is_unique
dup_ts['2021-06-30']
```
### Aggregating data with non-unique indices using groupby
```
grouped = dup_ts.groupby(level=0)
grouped
dup_ts.groupby?
```
```python
dup_ts.groupby(
by=None,
axis=0,
level=None,
as_index: bool = True,
sort: bool = True,
group_keys: bool = True,
squeeze: bool = <object object at 0x0000021A19AE6530>,
observed: bool = False,
dropna: bool = True,
) -> 'SeriesGroupBy'
```
```
grouped.count()
grouped.mean()
```
# 2. Date Ranges, Frequencies, and Shifting
pandas has a set of standard time series frequencies and tools for resampling, inferring frequencies, and generating fixed-frequency date ranges. `resample` can be used to convert a time series to one with a fixed frequency:
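For example (a minimal sketch, not from the original notebook), an hourly series can be downsampled to daily means:
```
# Downsample an hourly series to daily means.
hourly = pd.Series(np.random.randn(48),
                   index=pd.date_range('6/1/2021', periods=48, freq='H'))
hourly.resample('D').mean()
```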
### 2.1 Generating Date Ranges: `pd.date_range()`
```
# Daily frequency by default
index = pd.date_range('6/1/2021', '8/1/2021')
index
pd.date_range?
```
```python
Signature:
pd.date_range(
start=None,
end=None,
periods=None,
freq=None,
tz=None,
normalize=False,
name=None,
closed=None,
**kwargs,
) -> pandas.core.indexes.datetimes.DatetimeIndex
Docstring:
Return a fixed frequency DatetimeIndex.
```
### Using the `freq` Parameter
- BM (business month end): the last business day of each month
```
# Last business day of each month (freq='BM')
index = pd.date_range('1/1/2021', '1/1/2022',
freq='BM')
index
```
### Using the `periods` Parameter
```
index = pd.date_range('1/1/2021', '1/1/2022',
periods=24)
index, len(index)
```
### Using the `normalize` Parameter
Normalize timestamps to midnight (00:00).
```
index = pd.date_range('6/1/2021 11:11:11', periods=11, normalize=True)
index, len(index)
index[0]
```
## 2.2 Frequencies and Date Offsets
- M: month
- H: hour
```
pd.date_range('6/1/2021', '12/11/2021', freq='4h')
pd.date_range('6/1/2021', periods=10, freq='H')
pd.date_range('6/1/2021', periods=10, freq='M')
```
### Passing a Frequency String
```
pd.date_range('6/1/2021', periods=10, freq='4h30min')
```
### Base Time Series Frequencies (values for `freq`)
|Alias|Offset type|Description|
|:--|:--|:--|
|D|Day|Every calendar day|
|B|BusinessDay|Every business day|
|H|Hour|Every hour|
|T/min|Minute|Every minute|
|S|Second|Every second|
|L/ms|Milli|Every millisecond|
|U|Micro|Every microsecond|
|M|MonthEnd|Last calendar day of each month|
|BM|BusinessMonthEnd|Last business day of each month|
|MS|MonthBegin|First calendar day of each month|
|BMS|BusinessMonthBegin|First business day of each month|
|W-MON, W-TUE, ...|Week|Weekly, anchored on the given day of the week (MON, TUE, WED, THU, FRI, SAT, SUN)|
|WOM-1MON, WOM-2MON, ...|WeekOfMonth|The given weekday of the first, second, third, or fourth week of each month. For example, WOM-3FRI is the third Friday of each month|
|Q-JAN, Q-FEB, ...|QuarterEnd|Quarterly, anchored on the last calendar day of the last month of each quarter, for a year ending in the indicated month (JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, DEC)|
|BQ-JAN, BQ-FEB, ...|BusinessQuarterEnd|Quarterly, anchored on the last business day of the last month of each quarter, for a year ending in the indicated month|
|QS-JAN, QS-FEB, ...|QuarterBegin|Quarterly, anchored on the first calendar day of the last month of each quarter, for a year ending in the indicated month|
|BQS-JAN, BQS-FEB, ...|BusinessQuarterBegin|Quarterly, anchored on the first business day of the last month of each quarter, for a year ending in the indicated month|
|A-JAN, A-FEB, ...|YearEnd|Annual, anchored on the last calendar day of the given month (JAN, FEB, ..., DEC)|
|BA-JAN, BA-FEB, ...|BusinessYearEnd|Annual, anchored on the last business day of the given month|
|AS-JAN, AS-FEB, ...|YearBegin|Annual, anchored on the first calendar day of the given month|
|BAS-JAN, BAS-FEB, ...|BusinessYearBegin|Annual, anchored on the first business day of the given month|
```
# Example
# 'WOM-3FRI' means the third Friday of every month
rng = pd.date_range('6/1/2021','12/11/2021', freq='WOM-3FRI')
rng
rng = pd.date_range('6/1/2021','1/1/2022', freq='BQ-DEC')
rng
pd.date_range?
```
## 2.3 Shifting (Leading and Lagging) Data
Shifting means moving data backward or forward along the time axis. Both Series and DataFrame have a `.shift()` method for performing a naive shift that leaves the index unchanged.
```
periods = 10
ts = pd.Series(np.random.randn(periods),
index=pd.date_range('6/1/2021', periods=periods, freq='M'))
ts
ts.shift?
```
```python
ts.shift(periods=1, freq=None, axis=0, fill_value=None) -> 'Series'
```
```
ts.shift(1)
ts.shift(1, freq='M')
ts
```
### Computing percent changes in one or more time series
```
ts / ts.shift(1) - 1
```
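pandas also ships a shortcut for the same computation (my own note, equivalent for adjacent periods):
```
# equivalent to ts / ts.shift(1) - 1
ts.pct_change()
```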
### Shifting dates with offsets
```
from pandas.tseries.offsets import Day, MonthEnd
now = datetime(2021, 6, 1)
now
Day?
now + 3 * Day()
MonthEnd?
offset = MonthEnd()
offset
offset.rollforward(now)
offset.rollback(now)
```
# 3. Periods and Period Arithmetic
```
p = pd.Period(2007, freq='A-DEC')
p
pd.Period(2021, freq='A-DEC') - p
rng = pd.period_range('6/1/2021', '5/31/2022', freq='M')
rng, len(rng)
```
A PeriodIndex holds a sequence of Periods and can be used as an axis index in any pandas data structure:
```
pd.Series(np.random.randn(len(rng)), index=rng)
values = ['2021Q3', '2021Q2', '2021Q1']
index = pd.PeriodIndex(values, freq = 'Q-DEC')
index
```
## 3.1 Period Frequency Conversion
Both `Period` and `PeriodIndex` objects can be converted to another frequency with their `asfreq` method.
```
p = pd.Period(2007, freq='A-DEC')
p.asfreq('M', how='start')
p.asfreq?
```
```python
Docstring:
Convert Period to desired frequency, at the start or end of the interval.
Parameters
----------
freq : str
The desired frequency.
how : {'E', 'S', 'end', 'start'}, default 'end'
Start or end of the timespan.
Returns
-------
resampled : Period
Type: builtin_function_or_method
```
---
## 3.2 Quarterly Period Frequencies
```
# October, November, and December make up the fourth quarter
p = pd.Period('2021Q4', freq='Q-DEC')
p
```
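To make the quarter concrete, a small hedged example (my own addition) of the calendar span it covers:
```
# first and last calendar day of this fiscal quarter (illustrative only)
p.asfreq('D', 'start'), p.asfreq('D', 'end')
```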
# 4. Resampling and Frequency Conversion
Resampling is the process of converting a time series from one frequency to another.
- Upsampling: converting from a lower frequency to a higher one
- Downsampling: converting from a higher frequency to a lower one
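Before the notebook's own examples below, a rough sketch (my own addition) of the difference:
```
# downsampling aggregates many observations into one bin; upsampling must fill new slots
hourly = pd.Series(np.arange(48), index=pd.date_range('6/1/2021', periods=48, freq='H'))
hourly.resample('D').sum()        # downsample: 24 hourly values per daily bin
hourly.resample('30min').ffill()  # upsample: new half-hour slots forward-filled
```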
```
periods = 10
ts = pd.Series(np.random.randn(periods),
index=pd.date_range('6/1/2021', periods=periods, freq='M'))
ts
ts.resample?
```
```python
Signature:
ts.resample(
rule,
axis=0,
closed: 'Optional[str]' = None,
label: 'Optional[str]' = None,
convention: 'str' = 'start',
kind: 'Optional[str]' = None,
loffset=None,
base: 'Optional[int]' = None,
on=None,
level=None,
origin: 'Union[str, TimestampConvertibleTypes]' = 'start_day',
offset: 'Optional[TimedeltaConvertibleTypes]' = None,
) -> 'Resampler'
Docstring:
Resample time-series data.
Convenience method for frequency conversion and resampling of time
series. Object must have a datetime-like index (`DatetimeIndex`,
`PeriodIndex`, or `TimedeltaIndex`), or pass datetime-like values
to the `on` or `level` keyword.
Parameters
----------
rule : DateOffset, Timedelta or str
The offset string or object representing target conversion.
axis : {0 or 'index', 1 or 'columns'}, default 0
Which axis to use for up- or down-sampling. For `Series` this
will default to 0, i.e. along the rows. Must be
`DatetimeIndex`, `TimedeltaIndex` or `PeriodIndex`.
closed : {'right', 'left'}, default None
Which side of bin interval is closed. The default is 'left'
for all frequency offsets except for 'M', 'A', 'Q', 'BM',
'BA', 'BQ', and 'W' which all have a default of 'right'.
label : {'right', 'left'}, default None
Which bin edge label to label bucket with. The default is 'left'
for all frequency offsets except for 'M', 'A', 'Q', 'BM',
'BA', 'BQ', and 'W' which all have a default of 'right'.
convention : {'start', 'end', 's', 'e'}, default 'start'
For `PeriodIndex` only, controls whether to use the start or
end of `rule`.
kind : {'timestamp', 'period'}, optional, default None
Pass 'timestamp' to convert the resulting index to a
`DateTimeIndex` or 'period' to convert it to a `PeriodIndex`.
By default the input representation is retained.
loffset : timedelta, default None
Adjust the resampled time labels.
.. deprecated:: 1.1.0
You should add the loffset to the `df.index` after the resample.
See below.
base : int, default 0
For frequencies that evenly subdivide 1 day, the "origin" of the
aggregated intervals. For example, for '5min' frequency, base could
range from 0 through 4. Defaults to 0.
.. deprecated:: 1.1.0
The new arguments that you should use are 'offset' or 'origin'.
on : str, optional
For a DataFrame, column to use instead of index for resampling.
Column must be datetime-like.
level : str or int, optional
For a MultiIndex, level (name or number) to use for
resampling. `level` must be datetime-like.
origin : {'epoch', 'start', 'start_day'}, Timestamp or str, default 'start_day'
The timestamp on which to adjust the grouping. The timezone of origin
must match the timezone of the index.
If a timestamp is not used, these values are also supported:
- 'epoch': `origin` is 1970-01-01
- 'start': `origin` is the first value of the timeseries
- 'start_day': `origin` is the first day at midnight of the timeseries
.. versionadded:: 1.1.0
offset : Timedelta or str, default is None
An offset timedelta added to the origin.
.. versionadded:: 1.1.0
Returns
-------
Resampler object
See Also
--------
groupby : Group by mapping, function, label, or list of labels.
Series.resample : Resample a Series.
DataFrame.resample: Resample a DataFrame.
Notes
-----
See the `user guide
<https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#resampling>`_
for more.
To learn more about the offset strings, please see `this link
<https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects>`__.
Examples
--------
Start by creating a series with 9 one minute timestamps.
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)
>>> series
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Freq: T, dtype: int64
Downsample the series into 3 minute bins and sum the values
of the timestamps falling into a bin.
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
```
```
rng = pd.date_range('6/1/2021', periods=100, freq='D')
ts = pd.Series(data=np.random.randn(len(rng)), index=rng)
ts
ts.resample('M', kind='period').mean()
```
## 4.1 Downsampling
```
rng = pd.date_range('6/1/2021', periods=12, freq='T')
ts = pd.Series(data=np.arange(len(rng)), index=rng)
ts
ts.resample('5min').sum()
```
### The `closed` parameter
closed='left' makes each bin interval closed on its left edge.
```
ts.resample('5min', closed='left').sum()
ts.resample('5min', closed='right').sum()
```
### The `label` parameter
label='left' labels each bin with its left bin edge.
```
ts.resample('5min', closed='left', label='left').sum()
```
### The `loffset` parameter
```
ts.resample('5min', loffset='-5s').sum()
```
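As the docstring above notes, `loffset` is deprecated since pandas 1.1.0; a hedged sketch of the suggested replacement (my own adaptation):
```
# shift the bin labels manually instead of passing loffset
result = ts.resample('5min').sum()
result.index = result.index + pd.Timedelta('-5s')
result
```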
## 4.2 OHLC Resampling
A resampling style common in finance: the open, high, low, and close value of each bin.
```
ts.resample('5min').ohlc()
```
## 4.3 Resampling with `.groupby()`
```
rng = pd.date_range('6/1/2021', periods=100, freq='D')
ts = pd.Series(data=np.arange(len(rng)), index=rng)
ts
ts.groupby(lambda x: x.dayofweek).mean()  # dayofweek is an attribute; x.weekday without calling it would not group by weekday
ts.groupby(lambda x: x.month).mean()
```
## 4.4 Upsampling and Interpolation
```
dates = pd.date_range('6/18/2021',
periods=2,
freq='W-WED')
long_df = pd.DataFrame(np.random.randn(2, 4),
index=dates,
                       columns=['Colorado', 'Texas', 'New York', 'California'])
long_df
long_df.resample('D').mean()
long_df.resample('D').ffill()
long_df.resample('D').ffill(limit=2)
long_df.ffill?
long_df.resample('D').backfill()
long_df.resample('D').fillna(method='bfill')
```
|
github_jupyter
|
import pandas as pd
import numpy as np
from datetime import datetime
now = datetime.now()
now
# 1. Access its attributes
now.year, now.month, now.day
now.hour, now.minute, now.second
# 2. Arithmetic with datetime objects
start = datetime(2020, 1, 20)
diff = now - start
diff
diff.days
diff.seconds
now
from datetime import timedelta
now + timedelta(12) # 默认加天数
timedelta?
# Formatting and parsing dates
sixone = '2021-6-01 20:00:00'
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S')
pd.to_datetime(sixone)
# Day of the week (as a number) for the given date
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%w')
# Week number of the year for the given date
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%W')
# Week number of the year, as an integer
int(datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%W'))
# Weekday name for the given date
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%a')
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%A')
# Month name for the given date
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%b')
datetime.strptime(sixone, '%Y-%m-%d %H:%M:%S').strftime('%B')
rootdir = 'D:/Github/BigDataAnalysis/01 Data Analysis and Pre-processing/Dataset/'
filenames = ['Auxiliary_Info.xlsx']
au_info = pd.read_excel(rootdir + filenames[0])
au_info.head()
ts = [1, 2, 3, 4, 5, 6]
ts[::2]
ts[1::2]
ts[3::2]
# Random values in a given shape.
# rand(d0, d1, ..., dn)
np.random.rand?
np.random.rand(6, 1)
# Return a sample (or samples) from the "standard normal" distribution.
# randn(d0, d1, ..., dn)
np.random.randn?
np.random.randn(6)
dates = [datetime(2021, 6, 1),
datetime(2021, 6, 2),
datetime(2021, 6, 3),
datetime(2021, 6, 10),
datetime(2021, 6, 18),
datetime(2021, 6, 20),
]
mock_value = np.random.randn(len(dates))
# Explicitly construct a pandas.Series object
# When a Series is created with a DatetimeIndex, pandas automatically treats it as a time series.
ts = pd.Series(mock_value, index=dates)
ts
type(ts)
isinstance(ts, pd.core.series.Series)
# Timestamps are stored with nanosecond resolution
ts.index.dtype
# Index access and slicing
ts.index[0]
list(ts.index)
stamp = ts.index[2]
stamp
# Pass a timestamp
ts[stamp]
# Pass a string that can be interpreted as a date
ts['6/1/2021']
# Slice by date
ts[datetime(2021, 6, 3):]
# Range query
ts['6/1/2021':'6/3/2021']
periods = 100
longer_ts = pd.Series(np.random.randn(periods),
index=pd.date_range('6/1/2021', periods=periods))
longer_ts
%page longer_ts
# truncate drops everything before the 'before' date
# and everything after the 'after' date
longer_ts.truncate(before='6/10/2021',
after='6/18/2021')
longer_ts.truncate?
longer_ts.truncate(
before=None,
after=None,
axis=None,
copy: 'bool_t' = True,
) -> 'FrameOrSeries'
pd.date_range?
pd.date_range(
start=None,
end=None,
periods=None,
freq=None,
tz=None,
normalize=False,
name=None,
closed=None,
**kwargs,
) -> pandas.core.indexes.datetimes.DatetimeIndex
dates = pd.date_range('6/18/2021',
periods=100,
freq='W-WED')
dates
# .ix has been removed
pd.DataFrame.ix?
pd.__version__
pd.DataFrame.iloc?
long_df = pd.DataFrame(np.random.randn(100, 4),
index=dates,
                       columns=['Colorado', 'Texas', 'New York', 'California'])
long_df
long_df.index
dates = pd.DatetimeIndex(['2021-06-23',
'2021-06-30',
'2021-06-30',
'2021-06-30',
'2021-07-07',
'2021-07-14',
'2021-07-14',
'2021-07-14',
'2021-07-21'])
dates
dup_ts = pd.Series(np.arange(len(dates)), index=dates)
dup_ts
# Check whether the index values are unique
dup_ts.index.is_unique
dup_ts['2021-06-30']
grouped = dup_ts.groupby(level=0)
grouped
dup_ts.groupby?
dup_ts.groupby(
by=None,
axis=0,
level=None,
as_index: bool = True,
sort: bool = True,
group_keys: bool = True,
squeeze: bool = <object object at 0x0000021A19AE6530>,
observed: bool = False,
dropna: bool = True,
) -> 'SeriesGroupBy'
grouped.count()
grouped.mean()
# Daily frequency by default
index = pd.date_range('6/1/2021', '8/1/2021')
index
pd.date_range?
Signature:
pd.date_range(
start=None,
end=None,
periods=None,
freq=None,
tz=None,
normalize=False,
name=None,
closed=None,
**kwargs,
) -> pandas.core.indexes.datetimes.DatetimeIndex
Docstring:
Return a fixed frequency DatetimeIndex.
# Month-end business days between the two dates
index = pd.date_range('1/1/2021', '1/1/2022',
freq='BM')
index
index = pd.date_range('1/1/2021', '1/1/2022',
periods=24)
index, len(index)
index = pd.date_range('6/1/2021 11:11:11', periods=11, normalize=True)
index, len(index)
index[0]
pd.date_range('6/1/2021', '12/11/2021', freq='4h')
pd.date_range('6/1/2021', periods=10, freq='H')
pd.date_range('6/1/2021', periods=10, freq='M')
pd.date_range('6/1/2021', periods=10, freq='4h30min')
# Example
# 'WOM-3FRI' means the third Friday of every month
rng = pd.date_range('6/1/2021','12/11/2021', freq='WOM-3FRI')
rng
rng = pd.date_range('6/1/2021','1/1/2022', freq='BQ-DEC')
rng
pd.date_range?
periods = 10
ts = pd.Series(np.random.randn(periods),
index=pd.date_range('6/1/2021', periods=periods, freq='M'))
ts
ts.shift?
ts.shift(periods=1, freq=None, axis=0, fill_value=None) -> 'Series'
ts.shift(1)
ts.shift(1, freq='M')
ts
ts / ts.shift(1) - 1
from pandas.tseries.offsets import Day, MonthEnd
now = datetime(2021, 6, 1)
now
Day?
now + 3 * Day()
MonthEnd?
offset = MonthEnd()
offset
offset.rollforward(now)
offset.rollback(now)
p = pd.Period(2007, freq='A-DEC')
p
pd.Period(2021, freq='A-DEC') - p
rng = pd.period_range('6/1/2021', '5/31/2022', freq='M')
rng, len(rng)
pd.Series(np.random.randn(len(rng)), index=rng)
values = ['2021Q3', '2021Q2', '2021Q1']
index = pd.PeriodIndex(values, freq = 'Q-DEC')
index
p = pd.Period(2007, freq='A-DEC')
p.asfreq('M', how='start')
p.asfreq?
Docstring:
Convert Period to desired frequency, at the start or end of the interval.
Parameters
----------
freq : str
The desired frequency.
how : {'E', 'S', 'end', 'start'}, default 'end'
Start or end of the timespan.
Returns
-------
resampled : Period
Type: builtin_function_or_method
---
## 3.2 Quarterly Period Frequencies
# 4. Resampling and Frequency Conversion
Resampling is the process of converting a time series from one frequency to another.
- Upsampling: converting from a lower frequency to a higher one
- Downsampling: converting from a higher frequency to a lower one
## 4.1 Downsampling
### The `closed` parameter
closed='left' makes each bin interval closed on its left edge.
### The `label` parameter
label='left' labels each bin with its left bin edge.
### The `loffset` parameter
## 4.2 OHLC Resampling
A resampling style common in finance: the open, high, low, and close value of each bin.
## 4.3 Resampling with `.groupby()`
## 4.4 Upsampling and Interpolation
| 0.356223 | 0.899652 |
## Linear Support Vector Machines
using dataset:
ex6data1.mat - Example Dataset 1
ex6data2.mat - Example Dataset 2
ex6data3.mat - Example Dataset 3
spamTrain.mat - Spam training set
spamTest.mat - Spam test set
emailSample1.txt - Sample email 1
emailSample2.txt - Sample email 2
spamSample1.txt - Sample spam 1
spamSample2.txt - Sample spam 2
vocab.txt - Vocabulary list
In this part, you will use support vector machines (SVMs) on several example 2D datasets,
and then use a Gaussian kernel with SVMs to build a spam classifier.
### 1.1 Example Dataset 1
A 2D example dataset (ex6data1.mat) that can be separated by a linear boundary.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
%matplotlib inline
%config InlineBackend.figure_format='svg'
dataSet1=loadmat('ex6data1.mat')
%matplotlib inline
%config InlineBackend.figure_format='svg'
def plotData(dataSet):
data=pd.DataFrame(dataSet.get('X'),columns=['X1','X2'])
data['y']=dataSet.get('y')
positive=data[data['y'].isin([0])]
negative=data[data['y'].isin([1])]
plt.figure(figsize=(9,5))
plt.tick_params(direction='in',labelsize=10)
plt.scatter(positive['X1'],positive['X2'],c='yellow',s=50,marker='o',edgecolors='black')
plt.scatter(negative['X1'],negative['X2'],c='black',s=50,marker='+')
plotData(dataSet1)
def find_decision_boundary(svc,x1min,x1max,x2min,x2max,diff):
x1=np.linspace(x1min,x1max,1000)
x2=np.linspace(x2min,x2max,1000)
cordinates=[(x,y) for x in x1 for y in x2]
x_cord,y_cord=zip(*cordinates)
c_val=pd.DataFrame({'x1':x_cord,'x2':y_cord})
c_val['svc_val']=svc.decision_function(c_val[['x1','x2']])
decision=c_val[np.abs(c_val['svc_val'])<diff]
return decision.x1,decision.x2
```
#### 1.1.1 Try C=1
use scikit-learn to fit the model parameters
```
from sklearn.svm import LinearSVC
def LinearSVM(dataSet,C=1):
data=pd.DataFrame(dataSet.get('X'),columns=['X1','X2'])
data['y']=dataSet.get('y')
svc1=LinearSVC(C=C,loss='hinge')
svc1.fit(data[['X1','X2']],data['y'])
score=svc1.score(data[['X1','X2']],data['y'])
print('LinearSVM Scores:{}'.format(score))
data['SVM Confidence']=svc1.decision_function(data[['X1','X2']])
return data,svc1
dataSvc1,svc1=LinearSVM(dataSet1,1)
dataSvc1
%matplotlib inline
%config InlineBackend.figure_format='svg'
x1,x2=find_decision_boundary(svc1,0,4,1.5,5,2*10**-3)
def plotData(dataSet,x1,x2):
data=pd.DataFrame(dataSet.get('X'),columns=['X1','X2'])
data['y']=dataSet.get('y')
positive=data[data['y'].isin([0])]
negative=data[data['y'].isin([1])]
plt.figure(figsize=(9,5))
plt.tick_params(direction='in',labelsize=10)
plt.scatter(positive['X1'],positive['X2'],c='yellow',s=50,marker='o',edgecolors='black')
plt.scatter(negative['X1'],negative['X2'],c='black',s=50,marker='x')
plt.plot(x1,x2,c='blue')
plotData(dataSet1,x1,x2)
```
#### 1.1.2 Try C=100
```
dataSvc100,svc100=LinearSVM(dataSet1,100)
%matplotlib inline
%config InlineBackend.figure_format='svg'
x1,x2=find_decision_boundary(svc100,0,4,1.5,5,2*10**-3)
def plotData(dataSet,x1,x2):
data=pd.DataFrame(dataSet.get('X'),columns=['X1','X2'])
data['y']=dataSet.get('y')
positive=data[data['y'].isin([0])]
negative=data[data['y'].isin([1])]
plt.figure(figsize=(9,5))
plt.tick_params(direction='in',labelsize=10)
plt.scatter(positive['X1'],positive['X2'],c='yellow',s=50,marker='o',edgecolors='black')
plt.scatter(negative['X1'],negative['X2'],c='black',s=50,marker='x')
plt.plot(x1,x2,c='blue')
plotData(dataSet1,x1,x2)
```
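As a quick, hedged comparison of the two fits (my own addition, reusing `dataSvc1` and `dataSvc100` from above):
```
# side-by-side look at the decision-function confidences for C=1 vs C=100
comparison = pd.DataFrame({'C=1': dataSvc1['SVM Confidence'],
                           'C=100': dataSvc100['SVM Confidence']})
comparison.describe()
```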
|
github_jupyter
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
%matplotlib inline
%config InlineBackend.figure_format='svg'
dataSet1=loadmat('ex6data1.mat')
%matplotlib inline
%config InlineBackend.figure_format='svg'
def plotData(dataSet):
data=pd.DataFrame(dataSet.get('X'),columns=['X1','X2'])
data['y']=dataSet.get('y')
positive=data[data['y'].isin([0])]
negative=data[data['y'].isin([1])]
plt.figure(figsize=(9,5))
plt.tick_params(direction='in',labelsize=10)
plt.scatter(positive['X1'],positive['X2'],c='yellow',s=50,marker='o',edgecolors='black')
plt.scatter(negative['X1'],negative['X2'],c='black',s=50,marker='+')
plotData(dataSet1)
def find_decision_boundary(svc,x1min,x1max,x2min,x2max,diff):
x1=np.linspace(x1min,x1max,1000)
x2=np.linspace(x2min,x2max,1000)
cordinates=[(x,y) for x in x1 for y in x2]
x_cord,y_cord=zip(*cordinates)
c_val=pd.DataFrame({'x1':x_cord,'x2':y_cord})
c_val['svc_val']=svc.decision_function(c_val[['x1','x2']])
decision=c_val[np.abs(c_val['svc_val'])<diff]
return decision.x1,decision.x2
from sklearn.svm import LinearSVC
def LinearSVM(dataSet,C=1):
data=pd.DataFrame(dataSet.get('X'),columns=['X1','X2'])
data['y']=dataSet.get('y')
svc1=LinearSVC(C=C,loss='hinge')
svc1.fit(data[['X1','X2']],data['y'])
score=svc1.score(data[['X1','X2']],data['y'])
print('LinearSVM Scores:{}'.format(score))
data['SVM Confidence']=svc1.decision_function(data[['X1','X2']])
return data,svc1
dataSvc1,svc1=LinearSVM(dataSet1,1)
dataSvc1
%matplotlib inline
%config InlineBackend.figure_format='svg'
x1,x2=find_decision_boundary(svc1,0,4,1.5,5,2*10**-3)
def plotData(dataSet,x1,x2):
data=pd.DataFrame(dataSet.get('X'),columns=['X1','X2'])
data['y']=dataSet.get('y')
positive=data[data['y'].isin([0])]
negative=data[data['y'].isin([1])]
plt.figure(figsize=(9,5))
plt.tick_params(direction='in',labelsize=10)
plt.scatter(positive['X1'],positive['X2'],c='yellow',s=50,marker='o',edgecolors='black')
plt.scatter(negative['X1'],negative['X2'],c='black',s=50,marker='x')
plt.plot(x1,x2,c='blue')
plotData(dataSet1,x1,x2)
dataSvc100,svc100=LinearSVM(dataSet1,100)
%matplotlib inline
%config InlineBackend.figure_format='svg'
x1,x2=find_decision_boundary(svc100,0,4,1.5,5,2*10**-3)
def plotData(dataSet,x1,x2):
data=pd.DataFrame(dataSet.get('X'),columns=['X1','X2'])
data['y']=dataSet.get('y')
positive=data[data['y'].isin([0])]
negative=data[data['y'].isin([1])]
plt.figure(figsize=(9,5))
plt.tick_params(direction='in',labelsize=10)
plt.scatter(positive['X1'],positive['X2'],c='yellow',s=50,marker='o',edgecolors='black')
plt.scatter(negative['X1'],negative['X2'],c='black',s=50,marker='x')
plt.plot(x1,x2,c='blue')
plotData(dataSet1,x1,x2)
| 0.489748 | 0.924824 |
<a href="https://colab.research.google.com/github/stho382/Data_Science_Projects/blob/main/Classification_model_practice.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Creating a Gender Classifier
MIT License
Copyright (c) 2021 Sebastian Thomas
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
### Installing Dependencies
```
!pip install -U scikit-learn
```
### Splitting dataset
```
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn import metrics
# creating a list of data
# [height, weight, shoe size] of a person
X = [[181, 80, 44], [177, 70, 43], [160, 60, 38], [154, 54, 37], [166, 65, 40], [190, 90, 47], [175, 64, 39], [177, 70, 40], [159, 55, 37], [171, 75, 42], [181, 85, 43]]
Y = ['male', 'female', 'female', 'female', 'male', 'male', 'male', 'female', 'male', 'female', 'male']
#Splitting data into training and testing
var_train, var_test, res_train, res_test = train_test_split(X, Y, test_size = 0.3, random_state=42)
```
### Using decision trees
```
from sklearn.tree import DecisionTreeClassifier
#Creating a decision tree model
decision_tree = DecisionTreeClassifier()
#Training the classifier on our dataset
decision_tree = decision_tree.fit(var_train, res_train)
# Testing the accuracy with the test set
res_pred = decision_tree.predict(var_test)
score = accuracy_score(res_test, res_pred)
print(score)
print(metrics.classification_report(res_test, res_pred))
# predicting the outcome of an input
prediction = decision_tree.predict([[190, 70, 43]])
print(prediction)
```
### Using Support Vector Machines
```
from sklearn.svm import SVC
# Creating a SVM model
Support_Vector = SVC()
#Training the classifier
Support_Vector = Support_Vector.fit(var_train, res_train)
# scoring the accuracy of the model
res_pred = Support_Vector.predict(var_test)
score = accuracy_score(res_test, res_pred)
print(score)
print(metrics.classification_report(res_test, res_pred))
# predicting the outcome of an input
prediction = Support_Vector.predict([[190, 70, 43]])
print(prediction)
```
### Using Logistic Regression
```
from sklearn.linear_model import LogisticRegression
reg_log = LogisticRegression()
#Training the classifier
reg_log = reg_log.fit(var_train, res_train)
# scoring the accuracy of the model
res_pred = reg_log.predict(var_test)
score = accuracy_score(res_test, res_pred)
print(score)
print(metrics.classification_report(res_test, res_pred))
# predicting the outcome of an input
prediction = reg_log.predict([[190, 70, 43]])
print(prediction)
```
### Using K-Nearest Neighbours
```
from sklearn.neighbors import KNeighborsClassifier
#Training and testing the model
reg_knn = KNeighborsClassifier()
reg_knn = reg_knn.fit(var_train, res_train)
res_pred = reg_knn.predict(var_test)
# Scoring
res_pred = reg_knn.predict(var_test)
score = accuracy_score(res_test, res_pred)
print(score)
print(metrics.classification_report(res_test, res_pred))
# predicting the outcome of an input
prediction = reg_knn.predict([[190, 70, 43]])
print(prediction)
```
### Testing all the models through one function and outputting the best one
```
def CompareModels():
    packages = [DecisionTreeClassifier(), SVC(), LogisticRegression(), KNeighborsClassifier()]
    accuracies = []
    for i in range(len(packages)):
        classifier = packages[i]
        classifier = classifier.fit(var_train, res_train)
        predicted = classifier.predict(var_test)
        score = accuracy_score(res_test, predicted)
        accuracies.append(score)
    for j in range(len(accuracies)):
        if accuracies[j] == max(accuracies):
            # predict with the best-scoring model, not with the last one fitted above
            Output = packages[j].predict([[190, 70, 43]])
            print("Classification_Model: {}, Accuracy: {}, Test_Output: {}\n".format(packages[j], accuracies[j], Output))
CompareModels()
```
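A slightly more direct way to express the same comparison (my own sketch, reusing the training/test splits defined above; the model names are just labels I made up) is to keep the models in a dictionary:
```
# score each named model and pick the best one (illustrative only)
models = {'tree': DecisionTreeClassifier(), 'svm': SVC(),
          'logreg': LogisticRegression(), 'knn': KNeighborsClassifier()}
scores = {name: model.fit(var_train, res_train).score(var_test, res_test)
          for name, model in models.items()}
best = max(scores, key=scores.get)
print(best, scores[best], models[best].predict([[190, 70, 43]]))
```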
|
github_jupyter
|
!pip install -U scikit-learn
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn import metrics
# creating a list of data
# [height, weight, shoe size] of a person
X = [[181, 80, 44], [177, 70, 43], [160, 60, 38], [154, 54, 37], [166, 65, 40], [190, 90, 47], [175, 64, 39], [177, 70, 40], [159, 55, 37], [171, 75, 42], [181, 85, 43]]
Y = ['male', 'female', 'female', 'female', 'male', 'male', 'male', 'female', 'male', 'female', 'male']
#Splitting data into training and testing
var_train, var_test, res_train, res_test = train_test_split(X, Y, test_size = 0.3, random_state=42)
from sklearn.tree import DecisionTreeClassifier
#Creating a decision tree model
decision_tree = DecisionTreeClassifier()
#Training the classifier on our dataset
decision_tree = decision_tree.fit(var_train, res_train)
# Testing the accuracy with the test set
res_pred = decision_tree.predict(var_test)
score = accuracy_score(res_test, res_pred)
print(score)
print(metrics.classification_report(res_test, res_pred))
# predicting the outcome of an input
prediction = decision_tree.predict([[190, 70, 43]])
print(prediction)
from sklearn.svm import SVC
# Creating a SVM model
Support_Vector = SVC()
#Training the classifier
Support_Vector = Support_Vector.fit(var_train, res_train)
# scoring the accuracy of the model
res_pred = Support_Vector.predict(var_test)
score = accuracy_score(res_test, res_pred)
print(score)
print(metrics.classification_report(res_test, res_pred))
# predicting the outcome of an input
prediction = Support_Vector.predict([[190, 70, 43]])
print(prediction)
from sklearn.linear_model import LogisticRegression
reg_log = LogisticRegression()
#Training the classifier
reg_log = reg_log.fit(var_train, res_train)
# scoring the accuracy of the model
res_pred = reg_log.predict(var_test)
score = accuracy_score(res_test, res_pred)
print(score)
print(metrics.classification_report(res_test, res_pred))
# predicting the outcome of an input
prediction = reg_log.predict([[190, 70, 43]])
print(prediction)
from sklearn.neighbors import KNeighborsClassifier
#Training and testing the model
reg_knn = KNeighborsClassifier()
reg_knn = reg_knn.fit(var_train, res_train)
res_pred = reg_knn.predict(var_test)
# Scoring
res_pred = reg_knn.predict(var_test)
score = accuracy_score(res_test, res_pred)
print(score)
print(metrics.classification_report(res_test, res_pred))
# predicting the outcome of an input
prediction = reg_knn.predict([[190, 70, 43]])
print(prediction)
def CompareModels():
    packages = [DecisionTreeClassifier(), SVC(), LogisticRegression(), KNeighborsClassifier()]
    accuracies = []
    for i in range(len(packages)):
        classifier = packages[i]
        classifier = classifier.fit(var_train, res_train)
        predicted = classifier.predict(var_test)
        score = accuracy_score(res_test, predicted)
        accuracies.append(score)
    for j in range(len(accuracies)):
        if accuracies[j] == max(accuracies):
            # predict with the best-scoring model, not with the last one fitted above
            Output = packages[j].predict([[190, 70, 43]])
            print("Classification_Model: {}, Accuracy: {}, Test_Output: {}\n".format(packages[j], accuracies[j], Output))
CompareModels()
| 0.719581 | 0.886371 |
```
import pandas as pd
import numpy as np
import seaborn as sns
import pandas_profiling as pdp
import os
os.chdir('..')
from scripts.project_functions import load_and_process
from scripts.project_functions import rem_columns
from scripts.project_functions import country_medal
from scripts.project_functions import country_medal_year
sns.set_theme(style="darkgrid",font_scale = 1.5)
```
A question that many people want answered is whether their country is the best at winning Olympic medals. The research question that I have decided to explore and analyze is:
<u>**Research Question**</u>
**Which Country has been awarded the most medals?**
```
data = load_and_process("../data/raw/Summer-Olympic-medals-1976-to-2008 2.csv")
data.head()
```
<u>**Columns Removed**</u>
The three main columns that will be explored are 'Country', 'Year', and 'Medal'. In order to focus on the selected columns, I must remove the others.
```
rem_columns(data)
```
**Profile Report**
After generating a report, we are able to clearly see that the United States is the country with the most medals awarded. We can generate a basic visualization to see the top ten medal winning countries.
```
rem_columns(data).profile_report()
```
**Visualization #1**
```
total_rem = rem_columns(data)
total_final = country_medal(total_rem)
countries = sns.countplot(data = rem_columns(data), palette = "colorblind", order = total_final[0:5]['Country'].value_counts().index, y = "Country")
countries.set_title('Top 5 Medal Winning Countries')
```
**Visualization #2**
Now that we can visually see that the U.S. is the top medal winner, I will graph the type of medals earned by each country. We can in fact see that the U.S. has not only earned the most medals overall, but has also earned the most gold/silver/bronze medals compared to the other top 5 countries.
```
countries_2 = sns.countplot(data = rem_columns(data), palette = "colorblind", hue = "Medal", order = total_final[0:5]['Country'].value_counts().index,
y = "Country")
countries_2.set_title('Top 5 Medal Winning Countries')
```
**Visualization #3**
A simple closer look at the U.S. medals.
```
us_country = sns.countplot(data = rem_columns(data), palette = "colorblind", hue = "Medal",order = total_final[0:1]['Country'].value_counts().index,
x = "Country")
```
**Visualization #4**
Finally, we can look at the number of medals earned by the U.S. each year. The graph below indicates that the majority of medals were won between 1980 and 1990. We will explore this further in task 5.
```
total_final2 = country_medal_year(total_rem)
us_line = total_final2.query("Country == 'United States'")
us_time = sns.lineplot(data = us_line, palette = "colorblind", x = "Year", y = "Count", hue = "Country")
us_time.set_title('U.S Medals Won between 1976-2008')
```
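To read the peak years off the line plot more precisely, one hedged option (my own addition, assuming the `Count` column produced by `country_medal_year`):
```
# years with the highest U.S. medal counts in this dataset (illustrative only)
us_line.sort_values('Count', ascending=False).head()
```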
|
github_jupyter
|
import pandas as pd
import numpy as np
import seaborn as sns
import pandas_profiling as pdp
import os
os.chdir('..')
from scripts.project_functions import load_and_process
from scripts.project_functions import rem_columns
from scripts.project_functions import country_medal
from scripts.project_functions import country_medal_year
sns.set_theme(style="darkgrid",font_scale = 1.5)
data = load_and_process("../data/raw/Summer-Olympic-medals-1976-to-2008 2.csv")
data.head()
rem_columns(data)
rem_columns(data).profile_report()
total_rem = rem_columns(data)
total_final = country_medal(total_rem)
countries = sns.countplot(data = rem_columns(data), palette = "colorblind", order = total_final[0:5]['Country'].value_counts().index, y = "Country")
countries.set_title('Top 5 Medal Winning Countries')
countries_2 = sns.countplot(data = rem_columns(data), palette = "colorblind", hue = "Medal", order = total_final[0:5]['Country'].value_counts().index,
y = "Country")
countries_2.set_title('Top 5 Medal Winning Countries')
us_country = sns.countplot(data = rem_columns(data), palette = "colorblind", hue = "Medal",order = total_final[0:1]['Country'].value_counts().index,
x = "Country")
total_final2 = country_medal_year(total_rem)
us_line = total_final2.query("Country == 'United States'")
us_time = sns.lineplot(data = us_line, palette = "colorblind", x = "Year", y = "Count", hue = "Country")
us_time.set_title('U.S Medals Won between 1976-2008')
| 0.303835 | 0.736495 |
```
%matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
```
# Creating a Structured Surface
Create a StructuredGrid surface from NumPy arrays
```
# sphinx_gallery_thumbnail_number = 2
import pyvista as pv
from pyvista import examples
import numpy as np
```
## From NumPy Meshgrid
Create a simple meshgrid using NumPy
```
# Make data
x = np.arange(-10, 10, 0.25)
y = np.arange(-10, 10, 0.25)
x, y = np.meshgrid(x, y)
r = np.sqrt(x ** 2 + y ** 2)
z = np.sin(r)
```
Now pass the NumPy meshgrid to PyVista
```
# Create and plot structured grid
grid = pv.StructuredGrid(x, y, z)
grid.plot()
# Plot mean curvature as well
grid.plot_curvature(clim=[-1, 1])
```
Generating a structured grid is a one liner in this module, and the points
from the resulting surface can be accessed as a NumPy array:
```
grid.points
```
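A quick sanity check (my own addition, using the `np` import from above): the number of points should equal the product of the grid dimensions.
```
# the 80 x 80 x 1 meshgrid built above gives 6400 points (illustrative only)
grid.points.shape[0] == np.prod(grid.dimensions)
```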
## From XYZ Points
Quite often, you might be given a set of coordinates (XYZ points) in a simple
tabular format where there exists some structure such that a grid could be
built between the nodes you have. A great example is found in
`pyvista-support#16`_ where a structured grid that is rotated from the
cartesian reference frame is given as just XYZ points. In these cases, all
that is needed to recover the grid is the dimensions of the grid
(`nx` by `ny` by `nz`) and that the coordinates are ordered appropriately.
For this example, we will create a small dataset and rotate the
coordinates such that they are not aligned with the cartesian reference
frame.
```
def make_point_set():
"""Ignore the contents of this function. Just know that it returns an
n by 3 numpy array of structured coordinates."""
n, m = 29, 32
x = np.linspace(-200, 200, num=n) + np.random.uniform(-5, 5, size=n)
y = np.linspace(-200, 200, num=m) + np.random.uniform(-5, 5, size=m)
xx, yy = np.meshgrid(x, y)
A, b = 100, 100
zz = A * np.exp(-0.5 * ((xx / b) ** 2.0 + (yy / b) ** 2.0))
points = np.c_[xx.reshape(-1), yy.reshape(-1), zz.reshape(-1)]
foo = pv.PolyData(points)
foo.rotate_z(36.6)
return foo.points
# Get the points as a 2D NumPy array (N by 3)
points = make_point_set()
points[0:5, :]
```
Now pretend that the (n by 3) NumPy array above contains coordinates that you
have, possibly from a file with three columns of XYZ points.
We simply need to recover the dimensions of the grid that these points make
and then we can generate a :class:`pyvista.StructuredGrid` mesh.
Let's preview the points to see what we are dealing with:
```
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
plt.scatter(points[:, 0], points[:, 1], c=points[:, 2])
plt.axis("image")
plt.xlabel("X Coordinate")
plt.ylabel("Y Coordinate")
plt.show()
```
In the figure above, we can see some inherent structure to the points, and thus
we could connect the points as a structured grid. All we need to know are the
dimensions of the grid present. In this case, we know (because we made this
dataset) the dimensions are ``[29, 32, 1]``, but you might not know the
dimensions of your pointset. There are a few ways to figure out the
dimensionality of a structured grid, including:
* manually counting the nodes along the edges of the pointset
* using a technique like principal component analysis to strip the rotation from the dataset and count the unique values along each axis for the newly projected dataset.
```
# Once you've figured out your grid's dimensions, simply create the
# :class:`pyvista.StructuredGrid` as follows:
mesh = pv.StructuredGrid()
# Set the coordinates from the numpy array
mesh.points = points
# set the dimensions
mesh.dimensions = [29, 32, 1]
# and then inspect it!
mesh.plot(show_edges=True, show_grid=True, cpos="xy")
```
## Extending a 2D StructuredGrid to 3D
A 2D :class:`pyvista.StructuredGrid` mesh can be extended into a 3D mesh.
This is highly applicable when wanting to create a terrain following mesh
in earth science research applications.
For example, we could have a :class:`pyvista.StructuredGrid` of a topography
surface and extend that surface to a few different levels and connect each
"level" to create the 3D terrain following mesh.
Let's start with a simple example by extending the wave mesh to 3D
```
struct = examples.load_structured()
struct.plot(show_edges=True)
top = struct.points.copy()
bottom = struct.points.copy()
bottom[:,-1] = -10.0 # Wherever you want the plane
vol = pv.StructuredGrid()
vol.points = np.vstack((top, bottom))
vol.dimensions = [*struct.dimensions[0:2], 2]
vol.plot(show_edges=True)
```
|
github_jupyter
|
%matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
# sphinx_gallery_thumbnail_number = 2
import pyvista as pv
from pyvista import examples
import numpy as np
# Make data
x = np.arange(-10, 10, 0.25)
y = np.arange(-10, 10, 0.25)
x, y = np.meshgrid(x, y)
r = np.sqrt(x ** 2 + y ** 2)
z = np.sin(r)
# Create and plot structured grid
grid = pv.StructuredGrid(x, y, z)
grid.plot()
# Plot mean curvature as well
grid.plot_curvature(clim=[-1, 1])
grid.points
def make_point_set():
"""Ignore the contents of this function. Just know that it returns an
n by 3 numpy array of structured coordinates."""
n, m = 29, 32
x = np.linspace(-200, 200, num=n) + np.random.uniform(-5, 5, size=n)
y = np.linspace(-200, 200, num=m) + np.random.uniform(-5, 5, size=m)
xx, yy = np.meshgrid(x, y)
A, b = 100, 100
zz = A * np.exp(-0.5 * ((xx / b) ** 2.0 + (yy / b) ** 2.0))
points = np.c_[xx.reshape(-1), yy.reshape(-1), zz.reshape(-1)]
foo = pv.PolyData(points)
foo.rotate_z(36.6)
return foo.points
# Get the points as a 2D NumPy array (N by 3)
points = make_point_set()
points[0:5, :]
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
plt.scatter(points[:, 0], points[:, 1], c=points[:, 2])
plt.axis("image")
plt.xlabel("X Coordinate")
plt.ylabel("Y Coordinate")
plt.show()
# Once you've figured out your grid's dimensions, simply create the
# :class:`pyvista.StructuredGrid` as follows:
mesh = pv.StructuredGrid()
# Set the coordinates from the numpy array
mesh.points = points
# set the dimensions
mesh.dimensions = [29, 32, 1]
# and then inspect it!
mesh.plot(show_edges=True, show_grid=True, cpos="xy")
struct = examples.load_structured()
struct.plot(show_edges=True)
top = struct.points.copy()
bottom = struct.points.copy()
bottom[:,-1] = -10.0 # Wherever you want the plane
vol = pv.StructuredGrid()
vol.points = np.vstack((top, bottom))
vol.dimensions = [*struct.dimensions[0:2], 2]
vol.plot(show_edges=True)
| 0.855429 | 0.984516 |
# Python 104 - Writing Files, Inventorying Files
This notebook goes through the basics of writing files. We look through one basic example and one that extracts specific information from one file then writes it to a new file. After that, we look at a few modules that will help us to build an inventory of basic system information including filenames, locations (paths), and sizes. Once we identify this information we can use it to create an inventory manifest.
First, let's look at the basics of writing files.
## Writing Files
The basic function for writing files is the `write()` function. This can be used to write contents from the argument or
to write multi-line content. Unlike in other environments like the GUI or shell, where the open command is often assumed,
you may need to `open()` and then `close()` files when working in python. You cannot write to a file that is not known and opened, and a file that is not closed may be corrupted.
Fortunately, we can usually use the contextual opener:
```python
with open(file, 'w') as f:
```
This will automatically close the file when the block completes. The `w` argument indicates that the file is opened in "write" mode. If the file doesn't exist, it will be created.
```
# Basic use of open() and write()
line = 'Believe that life is worth living, and your belief will help create the fact.'
# Credit William James https://en.wikiquote.org/wiki/William_James
fout = open('quote-output.txt', 'w')
fout.write(line)
fout.close()
# use the with open() syntax to check if the file is there
with open('quote-output.txt', 'r') as f:
print(f.read())
```
We can also extract information from a file then reuse that in another file.
For example, we could extract the email addresses from `mbox-short.txt` and create
an address book file:
```
# create a path to the file
file = '../assets/mbox-short.txt'
# set up a file name for a file to create
fout = 'email-list.txt'
#establish a list to record emails as they are identified
emails = []
# open the source file to extract emails
with open(file, 'r') as f:
for line in f:
if line.startswith('From:'):
email = line[6:]
if email not in emails:
emails.append(email)
print(emails, '\n\n')
# open another file in write mode to write the emails.
with open(fout, 'w') as f:
for email in emails:
f.write(email)
print(open(fout).read())
```
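Since each extracted address still ends with its newline character, the same write-out could also be done with `writelines` (my own note, not part of the original exercise):
```
# equivalent alternative: write the whole list in one call
with open(fout, 'w') as f:
    f.writelines(emails)
```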
## Inventorying Files
For this activity, we are going to use a few modules that allow us to interact with the file system. These should be somewhat familiar after we have already looked into basic shell commands.
* `os` assists in using aspects of the operating system, in this case particularly file information and paths. See https://docs.python.org/3/library/os.html;
* `os.path` is often called by itself and allows us to interact with file path and directory information. See https://docs.python.org/3/library/os.path.html#module-os.path.
* `shutil` allows us to access some shell utilities, like move, copy, rename, delete (a short sketch follows below). See https://docs.python.org/3/library/shutil.html?highlight=shutils.
We will also use the `csv` module since it will help us to write the information that we gather to a structured data file that can later be opened in Excel or other spreadsheet applications. See https://docs.python.org/3/library/csv.html
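The `shutil` module listed above is not actually used later in this notebook; here is a minimal, hedged sketch of the kind of shell-like operation it offers (the file names are just examples):
```
import shutil
# copy a file and its metadata to a new (hypothetical) name
shutil.copy2('quote-output.txt', 'quote-output-backup.txt')
```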
```
import os
from os.path import join, getsize, getctime
import csv
```
Once we know what we want in the csv, how do we get that information? We can use the `os` module to get file information. We will use the `os.walk` function to "walk" over the file tree, identify folder lists, paths, and filenames.
```
walk_this_directory = os.path.join('..','assets','Bundle-web-files-small')
print(walk_this_directory)
```
### Using os.listdir()
We can generate a list of the files in the directory using the `os.listdir()` function. This list will include the file names for all the files in the directory.
```
dir_list = os.listdir(walk_this_directory)
print(dir_list)
```
Let's use the `listdir()` function to create a manifest of the files in the `pdf` directory:
```
# create the list
dir_list = os.listdir(os.path.join(walk_this_directory, 'pdf'))
# set up a file name for a file to create
fout = 'pdf-file-list.txt'
# open another file in write mode to write the emails.
with open(fout, 'w') as f:
for filename in dir_list:
f.write(filename)
f.write('\n')
print(open(fout).read())
```
### Using os.scandir()
This is useful, but `os.listdir()` only returns names: it leaves out the filepath information that gives context about where each file sits in the filesystem. To get entries that we can iterate through and check whether the system recognizes them as files, we can use the `os.scandir()` function. We can then call methods such as `is_file()` on each entry, which will evaluate whether the item is a file. We can use this function to create data that we can iterate through using a `with` ... `as` construction, like we have seen in opening files.
```
with os.scandir(walk_this_directory) as items_list:
for entry in items_list:
print('Looking at:',entry)
if entry.is_dir():
file_list = os.listdir(os.path.join(entry))
print('This is a directory and contains',len(file_list),'files (',file_list,').')
if entry.is_file():
print('This is a file named',entry,'that takes up',os.path.getsize(entry),'bytes')
```
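`DirEntry` objects also carry the name and `stat()` information directly, which avoids a separate `os.path.getsize()` call; a hedged variation on the loop above (my own addition):
```
with os.scandir(walk_this_directory) as items_list:
    for entry in items_list:
        if entry.is_file():
            print(entry.name, entry.stat().st_size, 'bytes')
```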
### Using os.walk()
The `os.walk()` function allows us to do a more complex mapping of the directory. This function can be used to create a "tuple" – a special datatype that holds a small, immutable sequence that we can reuse – and we can store that information to derive foldernames and paths to individual files.
```
for folderName, subfolders, filenames in os.walk(walk_this_directory):
# see what this produces
print('folderName is a type',type(folderName),
'\nsubfolders is a type',type(subfolders),
'\nfilenames is a type',type(filenames))
## so, this is a series of nested loops,
## the top level produces a string for the folder name,
## and the secondary levels create lists of the contained folders and files
# get information about how many files are in each directory and how much space they take up
for FolderPaths, SubfolderNames, filenames in os.walk(walk_this_directory):
print(FolderPaths, "consumes", end=" ")
print(sum(getsize(join(FolderPaths, name)) for name in filenames), end=" ")
print("bytes in", len(filenames), "non-directory files")
for folderName, subfolders, filenames in os.walk(walk_this_directory):
print('Current folder:',folderName)
for subfolder in subfolders:
print('Parent folder:',folderName,'; subfolder:',subfolder)
for filename in filenames:
print('The file', filename,
'\n This is the folder:', folderName,
'\n The filepath is:',os.path.join(folderName, filename))
print('\n')
## Note that this does not record hidden items like . and ..
# get information about each of the files
for folderName, subfolders, filenames in os.walk(walk_this_directory):
for filename in filenames:
filename = filename
folder = folderName
path = os.path.join(folderName, filename)
size = os.path.getsize(path)
print('Found:', filename, folder, path, size)
## Note that this does not record hidden items like . and ..
## get information about each of the files
# first let's set some counters
fileCount = 0
# and a list to hold the information about the file, and another to hold the fileInfo
fileInfo = list()
manifestInfo = list()
for folderName, subfolders, filenames in os.walk(walk_this_directory):
for filename in filenames:
fileCount += 1
index = fileCount
filename = filename
folder = folderName
path = os.path.join(folderName, filename)
size = os.path.getsize(path)
# print('Found:', filename, folder, path, size)
fileInfo = [
index,
filename,
folder,
path,
size
]
manifestInfo.append(fileInfo)
print('Found',len(manifestInfo),'items.\n\n',manifestInfo)
## write to a CSV
# To do: Create a header row, write rows of file information, close the complete file
# set up the csv, create a header row
headers = [
'index',
'filename',
'in_folder_path',
'full_file_path',
'size'
]
# write the information using csvwriter()
with open('file-manifest.csv', 'w') as f:
writer = csv.writer(f)
writer.writerow(headers)
for file in manifestInfo:
print(file)
writer.writerow(file)
print('Wrote the file manifest')
```
## Reflection Activities
1. Write a script that uses `os.listdir()` for each of the directories in the `Bundle-web-files-small` directory. You can put in the path names directly as variables, but you should use the `os.path.join()` function to create filepaths that do not depend on your inputting the exact filepath string, which will vary across operating systems.
1. Write a script that uses `os.scandir()` to check whether or not the entities in the directory are files or directories. The script should output a count of files and a count of directories.
1. Examine the examples above that use `os.walk()`. What is the difference between this and the previous two functions? In some ways it lets you get deeper into the file structure, so please explain your observation in a sentence or two.
1. Create a script that will create an inventory of all the files in the assets folder `Bundle-web-files-small`. The inventory should be a CSV file, and it should include the filename of the file, the directory path for the file, the full path to the file, and the file size. You may include any other information that you think is important. Call this file `inventory_script.py`.
1. Extend the above script, using the techniques demonstrated here, and add in a way to determine the file extension of the file, then add the extension to the CSV output? (Hint: you could split the filename string, right?)
1. Write a script that can walk through a series of directories and identify files based on their file extension. For example, perhaps you want to count the number of .pdf files or .jpg files. Create a file that can look for this information and then tally the files (a small hint appears after this list). Then, have the program output the list of filenames and filepaths in a CSV file. Call this file `extension_detector.py`.
1. Building on the above examples, can you a) write functions that bundle code to ask for a directory? You could call this function `create_manifest_information` and it should be able to accept a path to a directory as an argument and return the manifestInfo list. And b) write a function that would accept the manifestInfo list as an argument and create a CSV?
This activity took two weeks when combined with the [Git exercise](exercise-git-intro.md).
Next week, dictionaries (streamline CSV creation), and additional derived information: mimetype and fixity/hash.
1. Write a script that creates a `master` and `derivative` directory within a subdirectory that has the file's name as its name. For example, if there are two files, one named `001.jpg` and `audition.wav`, there should be a directory named `001` and another named `audition`. Within these, there should be master and derivative folders. The original files should be in the `master` folder. Call this file `master_and_derivatives.py`.
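For the extension-related activities above, one possible starting point (an assumption on my part, not the only approach) is `os.path.splitext`:
```
import os
# splits 'report.pdf' into ('report', '.pdf')
name, extension = os.path.splitext('report.pdf')
```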
|
github_jupyter
|
with open(file, 'w') as f:
```
This will automatically close the file when the block completes. The `w` argument indicates that the file is opened in "write" mode. If the file doesn't exist, it will be created.
We can also extract information from a file then reuse that in another file.
For example, we could extract the email addresses from `mbox-short.txt` and create
an address book file:
## Inventorying Files
For this activity, we are going to use a few modules that allow us to interact with the file system. These should be somewhat familiar after we have already looked into basic shell commands.
* `os` assists in using aspects of the operating system, in this case particularly file information and paths. See https://docs.python.org/3/library/os.html;
* `os.path` is often called by itself and allows us to interact with file path and directory information. See https://docs.python.org/3/library/os.path.html#module-os.path.
* `shutil` allows us to access some shell utilities, like move, copy, rename, delete. See https://docs.python.org/3/library/shutil.html?highlight=shutils.
We will also use the `csv` module since it will help us to write the information that we gather to a structured data file that can later be opened in Excel or other spreadsheet applications. See https://docs.python.org/3/library/csv.html
Once we know what we want in the csv, how do we get that information? We can use the `os` module to get file information. We will use the `os.walk` function to "walk" over the file tree, identify folder lists, paths, and filenames.
### Using os.listdir()
We can generate a list of the files in the directory using the `os.listdir()` function. This list will include the file names for all the files in the directory.
Let's use the `listdir()` function to create a manifest of the files in the `pdf` directory:
### Using os.scandir()
This is useful, but `os.listdir()` only returns names: it leaves out the filepath information that gives context about where each file sits in the filesystem. To get entries that we can iterate through and check whether the system recognizes them as files, we can use the `os.scandir()` function. We can then call methods such as `is_file()` on each entry, which will evaluate whether the item is a file. We can use this function to create data that we can iterate through using a `with` ... `as` construction, like we have seen in opening files.
### Using os.walk()
The `os.walk()` function allows us to do a more complex mapping of the directory. This function can be used to create a "tuple" – a special datatype that holds a small, immutable sequence that we can reuse – and we can store that information to derive foldernames and paths to individual files.
| 0.777088 | 0.94801 |
### Imputation Methods and Resources
One of the most common methods for working with missing values is by imputing the missing values. Imputation means that you input a value for values that were originally missing.
It is very common to impute in the following ways:
1. Impute the **mean** of a column.<br><br>
2. If you are working with categorical data or a variable with outliers, then use the **mode** of the column.<br><br>
3. Impute 0, a very small number, or a very large number to differentiate missing values from other values.<br><br>
4. Use knn to impute values based on features that are most similar.<br><br>
In general, you should try to be more careful with missing data in understanding the real world implications and reasons for why the missing values exist. At the same time, these solutions are very quick, and they enable you to get models off the ground. You can then iterate on your feature engineering to be more careful as time permits.
Let's take a look at how some of them work. Chris' content is again very helpful for many of these items - and you can access it [here](https://chrisalbon.com/). He uses the [sklearn.preprocessing library](http://scikit-learn.org/stable/modules/preprocessing.html). There are also a ton of ways to fill in missing values directly using pandas, which can be found [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html)
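As a quick, hedged illustration of the first two strategies (my own toy example, separate from the exercise data below):
```
import pandas as pd
import numpy as np
quant = pd.Series([1.0, np.nan, 3.0, 5.0])
cat = pd.Series(['Yes', None, 'No', 'Yes'])
quant.fillna(quant.mean())   # mean imputation for a quantitative column
cat.fillna(cat.mode()[0])    # mode imputation for a categorical column
```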
Create the dataset you will be using for this notebook using the code below.
```
import pandas as pd
import numpy as np
import ImputationMethods as t
df = pd.DataFrame({'A':[np.nan, 2, np.nan, 0, 7, 10, 15],
'B':[3, 4, 5, 1, 2, 3, 5],
'C':[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
'D':[np.nan, True, np.nan, False, True, False, np.nan],
'E':['Yes', 'No', 'Maybe', np.nan, np.nan, 'Yes', np.nan]})
df
```
#### Question 1
**1.** Use the dictionary below to label the columns as the appropriate data type.
```
a = 'categorical'
b = 'quantitative'
c = 'we cannot tell'
d = 'boolean - can treat either way'
question1_solution = {'Column A is': #letter here,
'Column B is': #letter here,
'Column C is': #letter here,
'Column D is': #letter here,
'Column E is': #letter here
}
# Check your answer
t.var_test(question1_solution)
```
#### Question 2
**2.** Are there any columns or rows that you feel comfortable dropping in this dataframe?
```
a = "Yes"
b = "No"
should_we_drop =
#Check your answer
t.can_we_drop(should_we_drop)
# Use this cell to drop any columns or rows you feel comfortable dropping based on the above
```
#### Question 3
**3.** Using **new_df**, I wrote a lambda function that you can use to impute the mean for the columns of your dataframe using the **apply** method. Use as many cells as you need to correctly fill in the dictionary **impute_q3** to answer a few questions about your findings.
```
fill_mean = lambda col: col.fillna(col.mean())
try:
new_df.apply(fill_mean, axis=0)
except:
print('That broke...')
# Check what you need to answer the questions below
a = "fills with the mean, but that doesn't actually make sense in this case."
b = "gives an error."
c = "is no problem - it fills the NaN values with the mean as expected."
impute_q3 = {'Filling column A': #letter here,
'Filling column D': #letter here,
'Filling column E': #letter here
}
#Check your answer
t.impute_q3_check(impute_q3)
```
#### Question 4
**4.** Given the results above, it might make more sense to fill some columns with the mode. Write your own function to fill a column with the mode value, and use it on the two columns that might benefit from this type of imputation. Use the dictionary **impute_q4** to answer some questions about your findings.
```
#Similar to the above, write a function and apply it to compute the mode for each column
#If you get stuck, here is a helpful resource https://stackoverflow.com/questions/42789324/pandas-fillna-mode
new_df.head()
a = "Did not impute the mode."
b = "Imputes the mode."
impute_q4 = {'Filling column A': #letter here,
'Filling column D': #letter here,
'Filling column E': #letter here
}
#Check your answer
t.impute_q4_check(impute_q4)
```
You saw two of the most common ways to impute values in this notebook, and hopefully, you realized that even these methods have complications. Again, these methods can be a great first step to get your models off the ground, but there are potentially detrimental aspects to the bias introduced into your models using these methods.
# QuTiP Lecture: Photon Scattering in Quantum Optical Systems
Author: Ben Bartlett, Stanford University: [[email protected]](mailto:[email protected]) | [stanford.edu/people/benbartlett](https://stanford.edu/people/benbartlett) | [github:bencbartlett](https://github.com/bencbartlett/)
This Jupyter notebook demonstrates functionality for numerically computing photon scattering in arbitrary driven systems coupled to some configuration of output waveguides using [QuTiP: The Quantum Toolbox in Python](http://qutip.org/). This notebook closely follows the treatment of the problem given in K.A. Fischer, et al. (2017), "Scattering of Coherent Pulses from Quantum-Optical Systems" (arXiv: [1710.02875](https://arxiv.org/abs/1710.02875)).
```
import numpy as np
import matplotlib.pyplot as plt
from qutip import *
from multiprocessing import Pool, cpu_count
from IPython.display import display, Math, Latex
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
worker_count = max(cpu_count() - 1, 1)
```
## Introduction
$$
\newcommand{\ket}[1]{\left|{#1}\right\rangle}
\newcommand{\bra}[1]{\left\langle{#1}\right|}
$$
In this section, we briefly review the generalized problem of photon scattering in quantum optical systems discussed in Fischer et al. (2017); see the publication for a more complete treatment of the problem.
### Problem definition
Consider an arbitrary system with a Hamiltonian $H_S(t)$ coupled to a bath of waveguide modes:

The problem we address in this notebook is this: if we drive the system with some excitation field, such as a laser pulse, how do photons scatter from the system into the waveguide?
The system Hamiltonians we will consider take the form
\begin{equation}
H_\textrm{S}(t) =
\begin{cases}
H_\textrm{0S}+ H_\textrm{1S}(t) & \text{if } 0<t<T_P\\
H_\textrm{0S} & \text{otherwise},
\end{cases}
\end{equation}
where $T_P$ is the pulse duration (if well-defined). The waveguide Hamiltonians can be described as
\begin{equation}
H_{0B} = \int_{-\infty}^\infty d\omega \omega b_\omega^\dagger b_\omega,
\end{equation}
which can be rewritten in a temporal basis (roughly speaking, indexed by emission time) by Fourier transforming the operators $b_\omega$:
\begin{equation}
b_\tau \equiv \int_{-\infty}^\infty \frac{d\omega}{\sqrt{2\pi}} e^{-i\omega \tau}b_\omega, \quad \ket{\vec{\tau}^{(m)}} \equiv b_{\tau_1}^\dagger \cdots b_{\tau_m}^\dagger \ket{0_B}
\end{equation}
The total Schrodinger-picture Hamiltonian can be written as a sum of system, bath, and coupling terms $H(t)=H_\textrm{S}(t) + V + H_{0\textrm{B}}$, and can be transformed into the interaction picture:
\begin{equation}
H_\textrm{I}(t)=H_\textrm{S}(t) + \textrm{e}^{i H_{0\textrm{B}} t}V\textrm{e}^{-i H_{0\textrm{B}} t}.
\end{equation}
To solve the dynamics of this system, we could integrate the Schrodinger equation:
\begin{equation}
\textrm{i}\frac{\partial}{\partial t}\left|\Psi_\textrm{I}(t)\right>=H_\textrm{I}(t)\left|\Psi_\textrm{I}(t)\right>.
\end{equation}
### Coarse-grained dynamics and the scattering operator
However, practically integrating this equation is not feasible, so we instead "coarse-grain" the temporal dynamics to $\Delta t$ and take a continuous limit as $\Delta t \rightarrow 0$. If we define an "effective Hamiltonian" $H_\text{eff}(t)=H_S(t)-i\frac{\gamma}{2}a^\dagger a$, we can generate an effective propagator mapping the system from the $k^\text{th}$ to the $k+1^\text{th}$ time bin which is correct up to $\mathscr{O}(\Delta t)$:
\begin{equation}
U_\text{eff}[k+1,k] \equiv \bra{0_k} U[k+1,k] \ket{0_k} \approx \exp\left[-i\int_{k\Delta t}^{(k+1)\Delta t} dt H_\text{eff}(t)\right].
\end{equation}
From this, we can derive the scattering operator for the system into the system of waveguides (see the paper for more detail). For scattering of $N$ photons into a single waveguide, this operator $\left < \hat{\Omega}^\dagger_- \right > _{\vec{\tau}^{(m)}} $ takes the form:
$$ \left < \hat{\Omega}^\dagger_- \right > _{\vec{\tau}^{(m)}} = \left<0_S\right| U_\text{eff}(\tau_\text{max},\tau_m) \prod_{q=N}^1 \sqrt\gamma a U_\text{eff}(\tau_q, \tau_{q-1}) \left| \psi_S(0) \right>,$$
with $\tau_0 = 0$, $\tau_\text{max} = \max (T_p, \tau_m )$. The multi-waveguide case will be discussed later in the notebook.
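As a schematic illustration of this coarse-grained propagator, here is a minimal sketch (not part of the scattering module API; the two-level operators and parameters below are placeholders):
```
import numpy as np
from qutip import destroy

# Toy two-level system with H_eff = H_0S - i*(gamma/2) * sigma^dag sigma,
# assumed constant over one time bin of width dt
gamma, w0, dt = 1.0, 2 * np.pi, 0.01
sigma = destroy(2)
H_eff = w0 * sigma.dag() * sigma - 0.5j * gamma * sigma.dag() * sigma

# Effective propagator for a single coarse-grained time bin, correct to O(dt)
U_eff_bin = (-1j * H_eff * dt).expm()
print(U_eff_bin)
```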
### The temporal basis
For a system coupled to $W$ waveguides emitting $N$ photons approximated by coarse-grained dynamics with $T$ time bins, the temporal basis described in Fischer, et al. (Eq. 138 and 153, with slight notation changes) can be thought of as a system of $T$ qubits for each of the $W$ waveguides with a total of $N$ creation operators applied to $\ket{0}$:
\begin{equation}
\ket{\vec{\tau}_{(W)}^{(N)}} = \ket{\vec{\tau}_1^{(w_1)}, \vec{\tau}_2^{(w_2)}, \cdots ,\vec{\tau}_W^{(w_W)}} = \prod_{i=1}^N b_{w_i,\tau_{1}}^\dagger b_{w_i,\tau_{2}}^\dagger \cdots b_{i,\tau_{n_i}}^\dagger \ket{0},
\end{equation}
where $w_k$ denotes scattering into the $k$th waveguide and $n_i$ denotes the maximum number of photons scattered into some waveguide. Although this basis is exact, it has an intractable space complexity of $\mathscr{O}(2^{T\cdot W})$, making it unsuitable for simulation work.
The temporal basis we use in the `qutip.scattering` module is more closely modeled after ladder operators and explicitly restricts the basis to $N$ emissions. To generate the basis, we make $W$ copies of the $T$ time bins. Emission of a photon at the $i$th time bin into the $w$th waveguide is represented by an entry in the $(w T + i)$th index of a $(W T)$-dimensional vector, so the overall temporal basis is given by:
\begin{equation}
\ket{\vec{\tau}_{(W)}^{(N)}} = \ket{\vec{\tau}_1^{(w_1)}, \vec{\tau}_2^{(w_2)}, \cdots ,\vec{\tau}_W^{(w_W)}} = \bigotimes_{n=1}^N \ket{\tau_{n, w_n}} = \bigotimes_{n=1}^N \mathscr{\vec T}[w_n T + \tau_n],
\end{equation}
where $\tau_{n, w_n}$ denotes emission into the $w_n$th waveguide of the $n$th photon and $\mathscr{\vec T}[w_n T + \tau_n]$ denotes the basis vector corresponding to $\tau_{n, w_n}$, namely the $(w_n T+\tau_n)$-th index. The creation operators in the original temporal basis are mapped to $(w_i T + \tau_n)$ applications of the "temporal ladder operator":
\begin{equation}
b_{w_i,\tau_n}^\dagger = \frac{(a^\dagger)^{w_i T + \tau_n}}{\sqrt{(w_i T + \tau_n)!}}
\end{equation}
This gives this basis a space complexity of $\mathscr{O}\left((W T)^N\right)$, which is more manageable given that for most applications $T\gg W,N$.
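To make the index convention concrete, here is a minimal sketch (not from the original notebook) of how a single-photon emission into waveguide $w$ at time bin $\tau$ maps onto a basis vector of the $(WT)$-dimensional space:
```
from qutip import basis

T, W = 5, 2     # toy example: 5 time bins, 2 waveguides
w, tau = 1, 3   # photon emitted into waveguide 1 at time bin 3

# Single-emission temporal basis vector: the (w*T + tau)-th unit vector
single_photon_state = basis(W * T, w * T + tau)
print(single_photon_state)
```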
## Single waveguide: driven quantum two-level system
To demonstrate the `qutip.scattering` module, we'll start with the simplest case of a two-level quantum system coupled to a single output waveguide. The system has initial state $\left |{\psi_0} \right> = \left|e\right>_\text{sys} \otimes \left|vac\right>_\text{wg}$ with a bare Hamiltonian of $H_{0S} = \omega_0 \sigma^\dagger \sigma $. Adding an effective non-Hermitian term to govern the evolution of the system under spontaneous emission gives $H_\text{eff} = H_{0S} - i \frac{\gamma}{2}\sigma^\dagger \sigma$. When the system is driven by a coherent pulse, it undergoes Rabi oscillations. Picking a square pulse to give a simple Hamiltonian, the overall effective Hamiltonian is $H_\text{eff}(t) = H_{0S} + H_{1S}(t) - i \frac{\gamma}{2}\sigma^\dagger \sigma$, where
$$H_{1S}(t) = \begin{cases}
\Omega\left( i e^{-i \omega_0 t} \sigma^\dagger - i e^{i\omega_0 t} \sigma \right) & \text{ if } 0<t<T_P \\
0 & \text{ otherwise.} \\
\end{cases}$$
We define the Hamiltonian and choose pulse parameters below.
```
# Pulse parameters
w0 = 10 * 2 * np.pi # arbitrary laser frequency
gamma = 1.0 # arbitrary coupling constant
# Operators
sm = np.sqrt(gamma) * destroy(2) # TLS coupled collapse operator
psi0 = basis(2,0) # starting state |psi(0)> = |0>
def Htls(gamma, pulseLength, pulseArea):
RabiFreq = pulseArea / (2*pulseLength)
# Bare Hamiltonian for a TLS
H0S = w0 * create(2) * destroy(2)
# Define H_1S(t)
H1S1 = lambda t, args: RabiFreq * 1j*np.exp(-1j*w0*t) * (t < pulseLength)
H1S2 = lambda t, args: RabiFreq * -1j*np.exp(1j*w0*t) * (t < pulseLength)
# Put the Hamiltonian in QuTiP list-callback form
return [H0S - 1j/2 * sm.dag() * sm,
[sm.dag(), H1S1],
[sm, H1S2]]
```
### Computing photon scattering amplitudes
Let's begin by computing the scattering amplitude of a single-photon emission as a function of time. For this, we can use the `temporal_scattered_state()` function in the `scattering` module, which computes:
$$
\begin{align}
\left| \phi_n \right> & = \int_0^\infty d\tau_1 \int_{\tau_1}^\infty d\tau_2 \cdots \int_{\tau_{n-1}}^\infty d\tau_n \left<0_S, \{ \tau_1,\tau_2,\cdots,\tau_n \} \mid \psi(t\rightarrow \infty) \right> \left|\tau_1,\tau_2,\cdots,\tau_n\right> \\
& = \int_{\vec\tau_n} d\vec\tau_n \left<0_S, \{\vec\tau_n \} \mid \psi(t\rightarrow \infty) \right> \left|\vec\tau_n\right> \\
& = \int_{\vec\tau_n} d\vec\tau_n \left < \hat{\Omega}^\dagger_- \right > _{\vec{\tau}_n} \left|\vec\tau_n\right> \\
& = \hat{\Omega}^\dagger_- \ket{\psi_0}.
\end{align}
$$
This function takes as arguments the Hamiltonian or the effective Hamiltonian, the initial system state, the number of emissions, a list of collapse operators (one for each waveguide - see the following section for more detail), and a list of times. The list of times must exceed the duration of the pulse for the function to yield sensible results (or, if the pulse does not have a well-defined end, the times list must contain most of the temporal region of interest).
By passing the keyword argument `construct_effective_hamiltonian`, you can tell the function whether the Hamiltonian you provided is $H$ or $H_\text{eff}$; by default, the value is `True`, so an effective Hamiltonian will be constructed from the provided list of collapse operators as $H_\text{eff} = H - \frac{i}{2} \sum_n \texttt{c\_ops}[n]^\dagger \, \texttt{c\_ops}[n]$. The function iteratively calls `photon_scattering_operator()` and returns the temporal scattered state as a `Qobj`, so to extract the amplitudes $a_t$, we will need to project it onto the temporal basis:
$$a_t = \left< t \mid \phi_n \right> = \bra{t}\hat{\Omega}^\dagger_- \ket{\psi_0} = \bra{t}\int_{\vec\tau_n}d\vec\tau_n \left < \hat{\Omega}^\dagger_- \right > _{\vec{\tau}_n} \left|\vec\tau_n\right>,$$
which we can do using the `temporal_basis_vector()` function. This function takes a nested list of temporal emission indices for each waveguide and the total number of time bins. For the single-waveguide case, the nested list of time indices simply reduces to `[[indexOf(t)]]`.
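For clarity, here is a rough sketch (not from the original notebook) of what the effective-Hamiltonian construction amounts to for the two-level system above:
```
import numpy as np
from qutip import create, destroy

# Roughly what construct_effective_hamiltonian=True does, written out by hand
# for the two-level system above (same parameters as the earlier cell)
gamma, w0 = 1.0, 10 * 2 * np.pi
sm = np.sqrt(gamma) * destroy(2)               # collapse operator c_0
H0S = w0 * create(2) * destroy(2)              # bare Hamiltonian H_0S
H_eff_manual = H0S - 0.5j * sm.dag() * sm      # H - (i/2) * sum_n c_n^dag c_n
```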
Computing the scattering amplitudes using the same parameters as the ones used in the analytical results of Figure 5(b) in Fischer, et al., we obtain visually identical results:
```
T = 200
tlist = np.linspace(0,1.5/gamma,T)
pulse_length = 0.2 / gamma
pulse_areas = [np.pi, 2*np.pi, 3*np.pi]
for pulse_area in pulse_areas:
# Use construct_effective_hamiltonian=False since we are providing H_eff in this case
scattered_state = temporal_scattered_state(Htls(gamma, pulse_length, pulse_area), psi0, 1, [sm], tlist,
construct_effective_hamiltonian = False)
amplitudes = []
for i in range(T):
amplitudes.append((temporal_basis_vector([[i]], T).dag() * scattered_state).full().item())
# Adjust amplitudes for time evolution
amplitudes = np.real(np.array(amplitudes) * np.exp(1j * w0 * tlist))
plt.plot(tlist, amplitudes, label = "$A_R = {}\pi$".format(round(pulse_area / np.pi)))
plt.ylim(-1,1)
plt.xlim(tlist[0],tlist[-1])
plt.xlabel('Time index, $\\tau_1$ [$1/\gamma$]')
plt.ylabel('$\phi_1 ( \\tau_1) e^{-i \omega_0 \\tau_1} [\gamma^{1/2}]$')
plt.legend(loc = 'upper right')
plt.show()
```
### Total photon scattering probability
To calculate the total probability of emitting a certain number of photons, $P_n = \left<\phi_n \mid \phi_n\right>$, we can expand in terms of a complete set of temporal projection operators $\int_{\vec\tau_n} \left| \tau_n \right>\left<\tau_n \right| d \tau_n$:
$$\begin{align}
P_n & = \left<\phi_n \mid \phi_n\right> \\
& = \int_{\vec\tau_n} d\vec\tau_n \left<\phi_n \mid \vec\tau_n \right>\left<\vec\tau_n \mid \phi_n \right> \\
& = \int_0^\infty d\tau_1 \int_{\tau_1}^\infty d\tau_2 \cdots \int_{\tau_{n-1}}^\infty d\tau_n \left<\phi_n \mid \tau_1, \tau_2, \cdots \tau_n \right>\left<\tau_1, \tau_2, \cdots \tau_n \mid \phi_n \right> \\
\end{align}$$
More simply, however, you can use the `scattering_probability()` function, which recursively integrates the results of `temporal_scattered_state()` to return the total probability of $N$ photons being scattered from the system over the specified list of times. Notably, the time list does not need to be linear - the integration routines will account for unevenly spaced time bins. This allows you to do things like provide logarithmically spaced times, which better captures regions closer to $t=0$ where more interesting dynamics occur.
To make things faster, we'll remove the time dependence of $H_\text{eff}$ with a rotating frame transformation. We'll also drop the $-\frac{i}{2} \sigma^\dagger \sigma$ term and the `construct_effective_hamiltonian = False` argument to allow `temporal_scattered_state()` to construct the effective Hamiltonian on its own.
Since `scattering_probability()` returns a pickleable result (a number), it is also very easily multiprocessed, so we'll take this opportunity to show how this can be done. (Note that this does make debugging untested code a more opaque process.) Computing the total scattering probabilities for $N=0,1,2$ photons as a function of pulse area yields a similar result to Figure 5(a) in Fischer, et al:
```
def Htls_rft(gamma, pulseLength, pulseArea):
RabiFreq = pulseArea / (2*pulseLength)
return [[sm.dag() + sm, lambda t, args: RabiFreq * (t < pulseLength)]]
pulse_length = 0.2 / gamma
pulse_areas = np.linspace(0,4*np.pi,100)
tlist = np.geomspace(gamma, 7*gamma, 40) - gamma
emission_nums = [0,1,2]
def scattering_probability_multiprocess(pulse_area, n):
# Helper function to allow pool.map parallelism
return scattering_probability(Htls_rft(gamma, pulse_length, pulse_area), psi0, n, [sm], tlist)
pool = Pool(worker_count)
for n in emission_nums:
args = [(pulse_area, n) for pulse_area in pulse_areas]
scatter_probs = pool.starmap(scattering_probability_multiprocess, args)
plt.plot(pulse_areas / np.pi, scatter_probs, label = "$P_{}$".format(n))
pool.close()
plt.ylim(0,1)
plt.xlim(pulse_areas[0]/np.pi, pulse_areas[-1]/np.pi)
plt.xlabel("Pulse area, $A_R [\\pi]$")
plt.legend()
plt.show()
```
### Computing second-order coherence in the scattered state
In experiments, the two-photon wavefunction is often characterized from the second-order coherence:
\begin{equation}
G^{(2)}(t_1,t_2) \approx \bra{\phi_2} b_0^\dagger(t_1) b_0^\dagger(t_2) b_0(t_2) b_0(t_1) \ket{\phi_2}.
\end{equation}
Since the creation operators $b_0^\dagger$ do not translate exactly into the temporal basis used in `qutip.scattering`, this is not directly computable in this form, but we can still calculate $G^{(2)}$ with creative application of `temporal_basis_vector()`. The second-order coherence measures the correlations for photons to be emitted at times $t_1$ and $t_2$ with corresponding time-bin indices `i` and `j`. To compute the coherence, we first compute the temporal scattered state, then project it onto `temporal_basis_vector([[i,j]], T)`, which gives the basis vector corresponding to photons emitted at time indices `i` and `j` (into the same, i.e. the first, waveguide) out of `T` total time bins. This projection onto an (approximately) complete set of temporal basis vectors gives the second-order coherence, which corresponds to Figure 5(c) in Fischer, et al.:
```
T = 200
tlist = np.linspace(0,1.5/gamma,T)
pulse_area = 6*np.pi
pulse_length = 0.2 / gamma
correlations = np.zeros((T, T))
H = Htls_rft(gamma, pulse_length, pulse_area)
scattered_state = temporal_scattered_state(H, psi0, 2, [sm], tlist)
for i in range(T):
for j in range(T):
# temporal_scattered_state() computes only using ordered emission times, so to
# get the full set of correlations, we need to use ordered temporal basis vector
[a,b] = sorted([i,j])
basis_vec = temporal_basis_vector([[a,b]], T)
correlations[i,j] = np.abs((basis_vec.dag() * scattered_state).full().item())**2
fig, ax1 = plt.subplots(1,1)
cax = ax1.imshow(correlations, interpolation='nearest', origin='lower')
ax1.set_xticks(np.linspace(0,T-1,4))
ax1.set_xticklabels([0.0, 0.5, 1.0, 1.5])
ax1.set_xlabel("Time, $t_1$ [$1/\gamma$]")
ax1.set_yticks(np.linspace(0,T-1,4))
ax1.set_yticklabels([0.0, 0.5, 1.0, 1.5])
ax1.set_ylabel("Time, $t_2$ [$1/\gamma$]")
fig.colorbar(cax)
plt.show()
```
### Pulse-wise second-order coherence
Experimentally accessing the temporal correlations given by $G^{(2)}$ or photocount distributions $P_m$ can be quite challenging, so typically a quantity called the pulse-wise second-order coherence is used, defined as:
\begin{equation}
g^{(2)}[0] = \frac{\sum_m m(m-1) P_m}{\left( \sum_m m P_m \right)^2} \approx \frac{2 P_2}{(P_1+2P_2)^2}.
\end{equation}
We can easily compute this with `scattering_probability`, obtaining similar results to Figure 5(d) in Fischer, et al.:
```
pulse_length = 0.2/gamma
pulse_areas = np.linspace(0.01,4*np.pi,150)
emission_nums = [1,2]
# you can use non-linearly spaced time bins with scattering_probability()
tlist = np.geomspace(gamma, 21*gamma, 40) - gamma
def scatter_prob(pulse_area, n):
# Helper function to allow pool.map parallelism
return scattering_probability(Htls_rft(gamma, pulse_length, pulse_area), psi0, n, [sm], tlist)
pool = Pool(worker_count)
Pm = dict.fromkeys(emission_nums)
for n in emission_nums:
args = [(pulse_area, n) for pulse_area in pulse_areas]
Pm[n] = np.array(pool.starmap(scatter_prob, args))
pool.close()
# Calculate pulse-wise coherence
pulseWiseCoherence = np.sum([m * (m-1) * Pm[m] for m in Pm], axis=0) / \
np.square(np.sum([m * Pm[m] for m in Pm], axis=0))
plt.plot(pulse_areas/np.pi, pulseWiseCoherence)
plt.ylim(0,6)
plt.xlim(pulse_areas[0]/np.pi, pulse_areas[-1]/np.pi)
plt.xlabel("Pulse area, $A_R$ $[\pi]$")
plt.ylabel("$g^{(2)}[0]$")
plt.show()
```
## Multiple waveguides: spontaneous parametric downconversion
We'll now extend the problem to multiple waveguides by simulating the scattering dynamics of spontaneous parametric downconversion. The scattering amplitude discussed above extended to a system with $W$ waveguides is:
\begin{equation}
\left<\hat{\Omega}_{-}^\dagger\right>_{\tilde{\tau}^{(N)}}\equiv\left<\hat{\Omega}_{-}^\dagger\right>_{\vec{\boldsymbol{\tau}}_1^{(m_1)},\vec{\boldsymbol{\tau}}_2^{(m_2)},\dots, \vec{\boldsymbol{\tau}}_W^{(m_W)}} =\bra{\textbf{0}_\text{S}} U_\text{eff}(\tau_\text{max}, \tilde{\tau}_N) \prod_{q=N}^1 \sqrt{\gamma_{Q[q]}}a_{Q[q]} U_\text{eff}(\tilde{\tau}_q,\tilde{\tau}_{q-1}) \ket{\psi_\text{S}(0)}
\end{equation}
as a projection onto $|\vec{\boldsymbol{\tau}}_1^{(m_1)},\vec{\boldsymbol{\tau}}_2^{(m_2)},\dots, \vec{\boldsymbol{\tau}}_W^{(m_W)}\rangle$, where $N = m_1+m_2+ \cdots+ m_W$ is the total number of photons scattered, $\tilde{\tau}^{(N)}$ is a chronologically sorted set of all time indices from the $\vec{\boldsymbol{\tau}}_i^{(m_i)}$'s, and $Q[q]$ is the index of the waveguide corresponding to the photon scattered at $\tilde{\tau}_q$. We present this equation without derivation; see Fischer, et al. for more details.
Consider a SPDC cavity with a Hamiltonian given by a sum of time-independent and -dependent parts $H=H_{0S}+H_{1S}$, with:
$$H_{0S} = \omega_1 a_1^\dagger a_1 + \omega_2 a_2^\dagger a_2,$$
and
$$H_{1S} = g(t) \left(e^{i\omega_p t} a_1 a_2 + e^{-i\omega_p t} a_1^\dagger a_2^\dagger \right),$$
where $a_1$ and $a_2$ annihilate photons at frequencies $\omega_1$ and $\omega_2$, respectively, $\omega_p = \omega_1 + \omega_2$, and $g(t)$ is a function depending on the amplitude of the pump beam and the nonlinear susceptibility of the cavity. As a specific example, let's consider driving the system with a Gaussian pulse, such that $g(t) = g_0 \exp \left( -\frac{(t-t_0)^2}{2\tau^2} \right)$. Truncating the cavity excitation capacity to $n=6$, we define the Hamiltonian for the system, again using a rotating frame transformation as before:
$$H_\text{SPDC} = \left(a_1^\dagger a_2^\dagger + a_1 a_2\right) g(t) + H_\text{eff}\text{ terms},$$
where we allow the scattering functions to construct the effective Hamiltonian by adding the $-\frac{i}{2} \sum_n \texttt{c\_ops}[n]^\dagger \, \texttt{c\_ops}[n]$ term.
```
Ncav = 6 # truncated cavity excitation capacity
a1 = tensor(destroy(Ncav), qeye(Ncav)) # left cavity annihilator
a2 = tensor(qeye(Ncav), destroy(Ncav)) # right cavity annihilator
cavity_vac = tensor(basis(Ncav, 0), basis(Ncav, 0)) # vacuum state, 0 excitations in either cavity
w1 = w2 = 1/gamma # cavity frequencies
wp = w1 + w2 # pump frequency
spdc_c_ops = [np.sqrt(gamma)*a1, np.sqrt(gamma)*a2] # cavity collapse operators
# Gaussian laser pulse
def g(t, t0, g0, tau):
return g0 * np.exp(-1 * (t-t0)**2 / (2 * tau**2))
# SPDC Hamiltonian with rotating frame transformation applied
def Hspdc(t0, g0, tau):
return [[a1.dag() * a2.dag() + a1 * a2, lambda t, args: g(t, t0, g0, tau)]]
```
### Two-photon scattering amplitudes
Here we compute the amplitude for the two-photon part of the output state projected onto the temporal basis. We plot only the case where one photon is scattered into the first waveguide and the other into the second: this is of course symmetric under reversal, and the cases of two photons scattered into only one waveguide are forbidden and have amplitude 0, since the difference in the number of photons in the two cavities is conserved in the presence of the pump beam.
Using similar parameters as Fig 6(a) in Fischer, et al., we obtain a similar result:
```
tau = 0.05 / gamma # width of gaussian pulse
t0 = 3.5 * tau # center of gaussian pulse
g0 = gamma # amplitude of gaussian pulse
T = 100 # number of time bins
W = 2 # number of waveguides
tlist = np.linspace(0, 3/gamma, T)
phi = temporal_scattered_state(Hspdc(t0, g0, tau), cavity_vac, 2, spdc_c_ops, tlist)
amplitudes = np.zeros((W, W, T, T,))
for i, tau1 in enumerate(tlist):
for j, tau2 in enumerate(tlist):
[a,b] = sorted([i,j]) # sort the indices to comply with time-ordering
for wg1 in [0,1]:
for wg2 in [0,1]:
indices = [[] for _ in range(W)]
indices[wg1].append(a)
indices[wg2].append(b)
basisVec = temporal_basis_vector(indices, T)
amplitudes[wg1,wg2,i,j] = np.abs((basisVec.dag() * phi).full().item())**2
# Plot the correlation for emission times emitted into different waveguides; note
# that amplitudes[0][0] = amplitudes[1][1] = 0 and amplitudes[0][1] = amplitudes[1][0].
fig, ax1 = plt.subplots(1,1)
cax = ax1.imshow(amplitudes[0][1], interpolation='nearest', origin='lower')
ax1.set_xticks(np.linspace(0,T-1,4))
ax1.set_xticklabels([0, 1, 2, 3])
ax1.set_xlabel("Time, $t_1$ [$1/\gamma$]")
ax1.set_yticks(np.linspace(0,T-1,4))
ax1.set_yticklabels([0, 1, 2, 3])
ax1.set_ylabel("Time, $t_2$ [$1/\gamma$]")
fig.colorbar(cax)
plt.show()
```
### Multi-waveguide photon emission probability
Finally, we can compute the variation in probability of single-and two-photon emission as a function of the pulse length. This simulation exhibits a slight variation from the expected behavior in Figure 5(c) of Fischer, et al., more apparent at larger times, due to the interaction timescale of interest increasing relative to the total timescale as a function of pulse length. However, the results do closely resemble the expected analytical results:
```
emission_nums = [0, 2]
pulse_lengths = np.linspace(0.05/gamma, 1.1 / gamma, 50)
tlist = np.geomspace(1/gamma, 21/gamma, 50) - 1/gamma
def scattering_probability_multiprocess(pulse_length, n):
tau = pulse_length
t0 = 3.5 * tau
H = Hspdc(t0, gamma, tau)
return scattering_probability(H, cavity_vac, n, spdc_c_ops, tlist)
pool = Pool(worker_count)
probs = {}
for n in emission_nums:
args = [(pulse_length, n) for pulse_length in pulse_lengths]
probs[n] = np.array(pool.starmap(scattering_probability_multiprocess, args))
pool.close()
# Compute the purity of the output state
purity = [probs[2][p] / (1-probs[0][p]) for p in range(len(pulse_lengths))]
# Plot it
for n in probs:
plt.plot(pulse_lengths / gamma, probs[n], label = "$P_{}$".format(n))
plt.plot(pulse_lengths / gamma, purity, '--', label = "Purity")
plt.ylim(0,1)
plt.xlim(pulse_lengths[0]/gamma, pulse_lengths[-1]/gamma)
plt.xlabel("Pulse length, $\\tau$ $[1/\gamma]$")
plt.legend()
plt.show()
```
### Software version:
```
from qutip.ipynbtools import version_table
version_table()
```
```
# compare standalone models for binary classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from matplotlib import pyplot
# get the dataset
def get_dataset():
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
return X, y
# get a list of models to evaluate
def get_models():
models = dict()
models['lr'] = LogisticRegression()
models['knn'] = KNeighborsClassifier()
models['cart'] = DecisionTreeClassifier()
models['svm'] = SVC()
models['bayes'] = GaussianNB()
return models
# evaluate a given model using cross-validation
def evaluate_model(model, X, y):
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')
return scores
# define dataset
X, y = get_dataset()
# get the models to evaluate
models = get_models()
# evaluate the models and store results
results, names = list(), list()
for name, model in models.items():
scores = evaluate_model(model, X, y)
results.append(scores)
names.append(name)
print('>%s %.3f (%.3f)' % (name, mean(scores), std(scores)))
# plot model performance for comparison
pyplot.boxplot(results, labels=names, showmeans=True)
pyplot.show()
X
y
# make a prediction with a stacking ensemble
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# define the base models
level0 = list()
level0.append(('lr', LogisticRegression()))
level0.append(('knn', KNeighborsClassifier()))
level0.append(('cart', DecisionTreeClassifier()))
level0.append(('svm', SVC()))
level0.append(('bayes', GaussianNB()))
# define meta learner model
level1 = LogisticRegression()
# define the stacking ensemble
model = StackingClassifier(estimators=level0, final_estimator=level1, cv=5)
# fit the model on all available data
model.fit(X, y)
# make a prediction for one example
data = [[2.47475454,0.40165523,1.68081787,2.88940715,0.91704519,-3.07950644,4.39961206,0.72464273,-4.86563631,-6.06338084,-1.22209949,-0.4699618,1.01222748,-0.6899355,-0.53000581,6.86966784,-3.27211075,-6.59044146,-2.21290585,-3.139579]]
yhat = model.predict(data)
print('Predicted Class: %d' % (yhat))
```
```
import glob
import jsonlines
import pandas as pd
from collections import defaultdict
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams['font.family'] = 'Times New Roman'
import matplotlib
matplotlib.rcParams['text.usetex'] = True
matplotlib.rcParams['text.latex.preview'] = True
plt.rc('font', family='serif', serif=['Times'])
import warnings
warnings.filterwarnings("ignore")
lang2name = {
'en': 'ENG',
'ar': 'ARB',
'be': 'BEL',
'bg': 'BUL',
'da': 'DAN',
'et': 'EST',
'de': 'DEU',
'el': 'ELL',
'fr': 'FRA',
'id': 'IND',
'ja': 'JPN',
'ko': 'KOR',
'zh': 'CMN',
'pt': 'POR',
'ru': 'RUS',
'es': 'SPA',
'sw': 'SWA',
'ta': 'TAM',
'tr': 'TUR',
'vi': 'VIE',
}
dset_fn = "../dataset_dir/XVNLI/annotations/"
langs = ['en', 'ar', 'es', 'fr', 'ru']
shots = [1, 5, 10, 20, 25, 48]
with jsonlines.open(dset_fn + "en/train.jsonl") as reader:
train = [item for item in reader]
with jsonlines.open(dset_fn + "en/dev.jsonl") as reader:
dev = [item for item in reader]
dev[0]
lang2test = {}
for lang in langs:
with jsonlines.open(dset_fn + f"{lang}/test.jsonl") as reader:
lang2test[lang] = [item for item in reader]
lang2few = defaultdict(dict)
for lang in langs:
for shot in shots:
with jsonlines.open(dset_fn + f"{lang}/train_{shot}.jsonl") as reader:
lang2few[lang][shot] = [item for item in reader]
```
## Label distribution
```
train_labels = [e['gold_label'] for e in train]
dev_labels = [e['gold_label'] for e in dev]
lang2test_labels = {lang: [e['gold_label'] for e in l] for lang, l in lang2test.items()}
lang2few_labels = {lang: {s: [e['gold_label'] for e in l] for s, l in d.items()} for lang, d in lang2few.items()}
xs = ['train', 'dev', 'test'] + ['1 shot'] + [f'{s} shots' for s in shots[1:]]
label2counts = {
'entailment': [],
'neutral': [],
'contradiction': [],
}
for l in [train_labels, dev_labels, lang2test_labels['en']]:
for label in label2counts:
elems = [e for e in l if e == label]
label2counts[label].append(len(elems))
for l in lang2few_labels['en'].values():
for label in label2counts:
elems = [e for e in l if e == label]
label2counts[label].append(len(elems))
label2counts['neutral']
f, ax = plt.subplots(1, 1, figsize=(14,8))
colors = ['#b5ddd8', '#b1c4e7', '#f5f3c1']
width=0.3
ix = 0
label = list(label2counts.keys())[ix]
ax.bar([ix-width for ix in range(len(xs))], label2counts[label], edgecolor='k', width=width, color=colors[ix], label=label.capitalize())
ix = 1
label = list(label2counts.keys())[ix]
ax.bar([ix for ix in range(len(xs))], label2counts[label], edgecolor='k', width=width, color=colors[ix], label=label.capitalize())
ix = 2
label = list(label2counts.keys())[ix]
ax.bar([ix+width for ix in range(len(xs))], label2counts[label], edgecolor='k', width=width, color=colors[ix], label=label.capitalize())
ax.grid(alpha=0.3)
ax.tick_params(axis='both', which='major', labelsize=24)
ax.set_xticks([ix for ix in range(len(xs))])
ax.set_xticklabels(xs, fontsize=20)
ax.set_xlabel('Split', fontsize=32)
ax.set_ylabel('Count', fontsize=32)
ax.set_yscale("log")
ax.legend(title='\\textbf{Label}', loc='upper right', ncol=3, fontsize=22, title_fontsize=24)
f.savefig("xvnli-labels.pdf", bbox_anchor="tight")
```
## Hypotheses length distribution
```
dev[0]
train_lens = [len(e['sentence2']) for e in train]
dev_lens = [len(e['sentence2']) for e in dev]
lang2test_lens = {lang: [len(e['sentence2']) for e in l] for lang, l in lang2test.items()}
lang2few_lens = {lang: {s: [len(e['sentence2']) for e in l] for s, l in d.items()} for lang, d in lang2few.items()}
from collections import Counter
train_cnts = Counter(train_lens)
dev_cnts = Counter(dev_lens)
lang2test_cnts = {lang: Counter(l) for lang, l in lang2test_lens.items()}
import numpy as np
from scipy import stats
f, ax = plt.subplots(1, 1, figsize=(14,8))
colors = ['#000000', '#377eb8', '#ff7f00', '#4daf4a', '#f781bf', '#a65628', '#984ea3', '#999999', '#e41a1c', '#dede00', '#cccccc']
x = np.arange(0, 215, 1)
for ix, (lang, l) in enumerate(lang2test_lens.items()):
    density = stats.gaussian_kde(l)
ax.plot(x, density(x), lw=2, label=lang2name[lang], color=colors[ix])
ax.grid(alpha=0.3)
ax.tick_params(axis='both', which='major', labelsize=24)
ax.set_xlabel('Sentence length [\# characters]', fontsize=32)
ax.set_ylabel('Density', fontsize=32)
ax.legend(title='\\textbf{Language}', loc='upper right', ncol=1, fontsize=22, title_fontsize=24)
f.savefig("xvnli-lens.pdf", bbox_anchor="tight")
```
___
<a href='https://github.com/ai-vithink'> <img src='https://avatars1.githubusercontent.com/u/41588940?s=200&v=4' /></a>
___
# Seaborn Exercises - Solutions
Time to practice your new seaborn skills! Try to recreate the plots below (don't worry about color schemes, just the plot itself).
## The Data
We will be working with a famous titanic data set for these exercises. Later on in the Machine Learning section of the course, we will revisit this data, and use it to predict survival rates of passengers. For now, we'll just focus on the visualization of the data with seaborn:
```
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style('whitegrid')
titanic = sns.load_dataset('titanic')
titanic.head()
```
# Exercises
** Recreate the plots below using the titanic dataframe. There are very few hints since most of the plots can be done with just one or two lines of code and a hint would basically give away the solution. Pay careful attention to the x and y labels for hints.**
** *Note! In order to not lose the plot image, make sure you don't code in the cell that is directly above the plot; there is an extra cell above that one which won't overwrite that plot!* **
```
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.jointplot(x='fare',y='age',data=titanic)
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.distplot(titanic['fare'],bins=30,kde=False,color='red')
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.boxplot(x='class',y='age',data=titanic,palette='rainbow')
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.swarmplot(x='class',y='age',data=titanic,palette='Set2')
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.countplot(x='sex',data=titanic)
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.heatmap(titanic.corr(),cmap='coolwarm')
plt.title('titanic.corr()')
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
g = sns.FacetGrid(data=titanic,col='sex')
g.map(plt.hist,'age')
```
# Great Job!
### That is it for now! We'll see a lot more of seaborn practice problems in the machine learning section!
# Example 1
We will demonstrate how to use `cpnet` in stages. To get started, let's import some packages: `networkx`, `numpy`, and `matplotlib`.
```
%load_ext autoreload
%autoreload 2
import cpnet
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
```
# Basic usage
The network we will analyze is the karate club network, which can be loaded using `networkx`.
```
G = nx.karate_club_graph()
```
`networkx` has many easy-to-use APIs that construct a network from files such as an edge list. For details, please see the `networkx` documentation.
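For example, a network could be loaded from an edge-list file roughly as follows (the file name here is just a placeholder):
```
import networkx as nx

# Hypothetical edge-list file with one "source target" pair per line
G_from_file = nx.read_edgelist("my_edges.txt", nodetype=int)
```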
Among the many algorithms implemented in `cpnet`, we first demonstrate the Borgatti-Everett (BE) algorithm.
In `cpnet`, detecting core-periphery structure requires two steps: loading an algorithm and giving it a network as input:
```
alg = cpnet.BE() # Load the Borgatti-Everett algorithm
alg.detect(G) # Give the network as an input
```
All set. The detected core-periphery structure can be retrieved with the `get_coreness` and `get_pair_id` methods of `cpnet.BE`:
```
x = alg.get_coreness() # Get the coreness of nodes
c = alg.get_pair_id() # Get the group membership of nodes
```
So, what are `x` and `c`?
Both `x` and `c` are python `dict` objects, with keys corresponding to the IDs of nodes (which we can see by `G.nodes()`).
`x[i]` indicates the *coreness* of node `i`. The coreness ranges in [0,1], where a larger value indicates a stronger affiliation to the core. For example, the detected `x` looks like
```
print(x)
```
where `x[i]=1` or `x[i]=0` means that node `i` belongs to the core or the periphery, respectively. In the BE algorithm, a node belongs to either the core or the periphery. Therefore, `x[i]` takes the value 0 or 1.
The other `dict` object, `c`, indicates the group to which node `i` belongs, which we will explain in more detail soon.
`cpnet` offers a simple function to visualize the detected core-periphery structure:
```
fig = plt.figure(figsize=(8, 6))
ax = plt.gca()
ax, pos = cpnet.draw(G, c, x, ax)
```
where the filled and open circles indicate the detected core and periphery, respectively.
# Continuous core-periphery structure
The BE algorithm classifies nodes into a core and a periphery. However, such a binary classification can be too crude if a node has a mixed character of core and periphery. Therefore, some algorithms aim to find core-periphery structure with a fuzzy boundary between the core and the periphery, i.e., a continuous core-periphery structure.
Let us demonstrate an algorithm to this end, called MINRES:
```
alg = cpnet.MINRES()
alg.detect(G)
x = alg.get_coreness()
c = alg.get_pair_id()
```
We note that the coreness value varies between 0 and 1.
```
print(x)
```
We can visualize the continuous spectrum of coreness values by
```
fig = plt.figure(figsize=(8, 6))
ax = plt.gca()
ax, pos = cpnet.draw(G, c, x, ax, pos = pos)
```
where the darkness of the circles indicates the coreness of the node. Unlike the BE algorithm, there is no clear cut between core and periphery.
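One simple way to inspect such a continuous structure is to rank the nodes by their coreness, using the `x` dict returned above (a small sketch):
```
# Rank nodes from most core-like to most peripheral
ranked = sorted(x.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[:5])  # the five most core-like nodes and their coreness values
```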
# Multiple core-periphery pairs
So far, we have considered networks with a single core and a single periphery, with or without some sub-peripheral nodes in between. However, a network may have multiple groups, where each group is a pair of a core and a periphery.
An algorithm to this end is the KM algorithm:
```
kmconfig = cpnet.KM_config()
kmconfig.detect(G)
```
Get the results by
```
c = kmconfig.get_pair_id()
x = kmconfig.get_coreness()
```
Here, `c` carries meaningful information.
`c` stores the group membership of nodes, where `c[i]` indicates the core-periphery pair to which node `i` belongs.
```
print(c)
```
We can visualize the results in the same way as for the BE and MINRES algorithms:
```
fig = plt.figure(figsize=(8, 6))
ax = plt.gca()
ax, _ = cpnet.draw(G, c, x, ax, pos=pos)
```
The color of a node indicates its group membership.
# Summary
In this example, we have demonstrated three algorithms of different types: the BE, MINRES, and KM algorithms. These algorithms find different core-periphery structures because they aim for different types of core-periphery structure.
We developed `cpnet` so that the different algorithms can be used through the same API. There are many other algorithms not demonstrated in this example, such as the LowRank algorithm, Rombach's algorithm, and Surprise. Please see the README or the documentation for the available algorithms.
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109A Introduction to Data Science
## Standard Section 8: Review of Trees and Boosting, including AdaBoost, Gradient Boosting, and XGBoost.
**Harvard University**<br/>
**Fall 2019**<br/>
**Instructors**: Pavlos Protopapas, Kevin Rader, and Chris Tanner<br/>
**Section Leaders**: Marios Mattheakis, Abhimanyu (Abhi) Vasishth, Robbert (Rob) Struyven<br/>
```
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
```
This section will work with a spam email dataset again. Our ultimate goal is to be able to build models so that we can predict whether an email is spam or not spam based on word characteristics within each email. We will review Decision Trees, Bagging, and Random Forest methods, and introduce Boosting: Ada Boost and XGBoost.
Specifically, we will:
1. *Quick review of last week*
2. Rebuild the Decision Tree model, Bagging model, Random Forest Model just for comparison with Boosting.
3. *Theory:* What is Boosting?
4. Use AdaBoost on the Spam Dataset.
5. *Theory:* What is Gradient Boosting and XGBoost?
6. Use XGBoost on the Spam Dataset: Extreme Gradient Boosting
Optional: Example to better understand Bias vs Variance tradeoff.
---------
## 1. *Quick review of last week*
#### The Idea: Decision Trees are just flowcharts and interpretable!
It turns out that simple flow charts can be formulated as mathematical models for classification, and these models have the properties we desire:
- interpretable by humans
- have sufficiently complex decision boundaries
- the decision boundaries are locally linear, each component of the decision boundary is simple to describe mathematically.
----------
#### How to build Decision Trees (the Learning Algorithm in words):
To learn a decision tree model, we take a greedy approach:
1. Start with an empty decision tree (undivided feature space)
2. Choose the ‘optimal’ predictor on which to split and choose the ‘optimal’ threshold value for splitting by applying a **splitting criterion (1)**
3. Recurse on each new node until the **stopping condition (2)** is met
#### So we need a (1) splitting criterion and a (2) stopping condition:
#### (1) Splitting criterion
<img src="data/split2_adj.png" alt="split2" width="70%"/>
#### (2) Stopping condition
**Not stopping while building a deeper and deeper tree = 100% training accuracy; yet we will overfit!**
To prevent this **overfitting** from happening, we need a stopping condition.
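In scikit-learn, for example, such stopping conditions are exposed as hyperparameters of the tree. A minimal sketch (the parameter values below are illustrative, not tuned for this dataset):
```
from sklearn.tree import DecisionTreeClassifier

# Each argument is a stopping condition that limits how far the tree can grow
tree_with_stopping = DecisionTreeClassifier(
    max_depth=5,           # stop splitting once the tree reaches depth 5
    min_samples_split=20,  # do not split a node that has fewer than 20 samples
    min_samples_leaf=10    # every leaf must keep at least 10 samples
)
```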
-------------
#### How do we go from Classification to Regression?
- For classification, we return the majority class in the points of each leaf node.
- For regression we return the average of the outputs for the points in each leaf node.
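In scikit-learn this simply means switching the estimator class; a small sketch (not part of the original section):
```
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

clf = DecisionTreeClassifier(max_depth=5)  # each leaf returns the majority class of its points
reg = DecisionTreeRegressor(max_depth=5)   # each leaf returns the average target of its points
```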
-------------
#### What is bagging?
One way to adjust for the high variance of the output of an experiment is to perform the experiment multiple times and then average the results.
1. **Bootstrap:** we generate multiple samples of training data, via bootstrapping. We train a full decision tree on each sample of data.
2. **AGgregatING:** for a given input, we output the average of the outputs of all the models for that input.
This method is called **Bagging**: **B**ootstrap + **AGG**regat**ING**.
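scikit-learn also packages this procedure directly. A sketch using `BaggingClassifier` (hyperparameter values are illustrative; note that older scikit-learn versions call the argument `base_estimator`, newer ones `estimator`):
```
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

bagging_model = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(max_depth=12),  # a deep tree per bootstrap sample
    n_estimators=100,  # number of bootstrap samples, and thus trees
    bootstrap=True     # sample the training set with replacement
)
```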
-------------
#### What is Random Forest?
- **Many trees** make a **forest**.
- **Many random trees** make a **random forest**.
Random Forest is a modified form of bagging that creates ensembles of independent decision trees.
To *de-correlate the trees*, we:
1. train each tree on a separate bootstrap **random sample** of the full training set (same as in bagging)
2. for each tree, at each split, we **randomly select a set of $J'$ predictors from the full set of predictors** (not done in bagging)
3. from amongst the $J'$ predictors, we select the optimal predictor and the optimal corresponding threshold for the split.
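In scikit-learn, this per-split subsampling of predictors is what `max_features` controls. A small sketch (the other hyperparameter values are illustrative only):
```
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=100,
    max_depth=12,
    max_features='sqrt'  # consider J' = sqrt(J) randomly chosen predictors at each split
)
```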
-------------
#### Interesting Piazza post: why randomness in simple decision tree?
```"Hi there. I notice that there is a parameter called "random_state" in decision tree function and I wonder why we need randomness in simple decision tree. If we add randomness in such case, isn't it the same as random forest?"```
- The problem of learning an optimal decision tree is known to be **NP-complete** under several aspects of optimality and even for simple concepts.
- Consequently, practical decision-tree learning algorithms are based on **heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node**.
- Such algorithms **cannot guarantee to return the globally optimal decision tree**.
- This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement (Bagging).
For example: **What is the default DecisionTreeClassifier behaviour when there are 2 or more best features for a certain split (a tie among "splitters")?** (after a deep dive and internet search [link](https://github.com/scikit-learn/scikit-learn/issues/12259)):
- The current default behaviour when splitter="best" is to shuffle the features at each step and take the best feature to split.
- In case there is a tie, we take a random one.
-------------
## 2. Just re-building the tree models of last week
### Rebuild the Decision Tree model, Bagging model and Random Forest Model for comparison with Boosting methods
We will be working with a spam email dataset. The dataset has 57 predictors with a response variable called `Spam` that indicates whether an email is spam or not spam. **The goal is to be able to create a classifier or method that acts as a spam filter.**
Link to description : https://archive.ics.uci.edu/ml/datasets/spambase
```
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
import sklearn.metrics as metrics
import time
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
%matplotlib inline
pd.set_option('display.width', 1500)
pd.set_option('display.max_columns', 100)
from sklearn.model_selection import learning_curve
#Import Dataframe and Set Column Names
spam_df = pd.read_csv('data/spam.csv', header=None)
columns = ["Column_"+str(i+1) for i in range(spam_df.shape[1]-1)] + ['Spam']
spam_df.columns = columns
display(spam_df.head())
#Let us split the dataset into a 70-30 split by using the following:
#Split data into train and test
np.random.seed(42)
msk = np.random.rand(len(spam_df)) < 0.7
data_train = spam_df[msk]
data_test = spam_df[~msk]
#Split predictor and response columns
x_train, y_train = data_train.drop(['Spam'], axis=1), data_train['Spam']
x_test , y_test = data_test.drop(['Spam'] , axis=1), data_test['Spam']
print("Shape of Training Set :",data_train.shape)
print("Shape of Testing Set :" ,data_test.shape)
#Check Percentage of Spam in Train and Test Set
percentage_spam_training = 100*y_train.sum()/len(y_train)
percentage_spam_testing = 100*y_test.sum()/len(y_test)
print("Percentage of Spam in Training Set \t : {:0.2f}%.".format(percentage_spam_training))
print("Percentage of Spam in Testing Set \t : {:0.2f}%.".format(percentage_spam_testing))
```
-----------
### Fitting an Optimal Single Decision Tree
```
# Best depth for single decision trees of last week
best_depth = 7
print("The best depth was found to be:", best_depth)
#Evaluate the performance at the best depth
model_tree = DecisionTreeClassifier(max_depth=best_depth)
model_tree.fit(x_train, y_train)
#Check Accuracy of Spam Detection in Train and Test Set
acc_trees_training = accuracy_score(y_train, model_tree.predict(x_train))
acc_trees_testing = accuracy_score(y_test, model_tree.predict(x_test))
print("Simple Decision Trees: Accuracy, Training Set \t : {:.2%}".format(acc_trees_training))
print("Simple Decision Trees: Accuracy, Testing Set \t : {:.2%}".format(acc_trees_testing))
```
--------
### Fitting 100 Single Decision Trees while Bagging
```
n_trees = 100 # we tried a variety of numbers here
#Creating model
np.random.seed(0)
model = DecisionTreeClassifier(max_depth=best_depth+5)
#Initializing variables
predictions_train = np.zeros((data_train.shape[0], n_trees))
predictions_test = np.zeros((data_test.shape[0], n_trees))
#Conduct bootstraping iterations
for i in range(n_trees):
temp = data_train.sample(frac=1, replace=True)
response_variable = temp['Spam']
temp = temp.drop(['Spam'], axis=1)
model.fit(temp, response_variable)
predictions_train[:,i] = model.predict(x_train)
predictions_test[:,i] = model.predict(x_test)
#Make Predictions Dataframe
columns = ["Bootstrap-Model_"+str(i+1) for i in range(n_trees)]
predictions_train = pd.DataFrame(predictions_train, columns=columns)
predictions_test = pd.DataFrame(predictions_test, columns=columns)
#Function to ensemble the prediction of each bagged decision tree model
def get_prediction(df, count=-1):
count = df.shape[1] if count==-1 else count
temp = df.iloc[:,0:count]
return np.mean(temp, axis=1)>0.5
#Check Accuracy of Spam Detection in Train and Test Set
acc_bagging_training = 100*accuracy_score(y_train, get_prediction(predictions_train, count=-1))
acc_bagging_testing = 100*accuracy_score(y_test, get_prediction(predictions_test, count=-1))
print("Bagging: \tAccuracy, Training Set \t: {:0.2f}%".format(acc_bagging_training))
print("Bagging: \tAccuracy, Testing Set \t: {:0.2f}%".format( acc_bagging_testing))
```
### Fitting Random Forest
```
#Fit a Random Forest Model
#Training
model = RandomForestClassifier(n_estimators=n_trees, max_depth=best_depth+5)
model.fit(x_train, y_train)
#Predict
y_pred_train = model.predict(x_train)
y_pred_test = model.predict(x_test)
#Performance Evaluation
acc_random_forest_training = accuracy_score(y_train, y_pred_train)*100
acc_random_forest_testing = accuracy_score(y_test, y_pred_test)*100
print("Random Forest: Accuracy, Training Set : {:0.2f}%".format(acc_random_forest_training))
print("Random Forest: Accuracy, Testing Set : {:0.2f}%".format(acc_random_forest_testing))
```
#### Let's compare the performance of our 3 models:
```
print("Decision Trees:\tAccuracy, Training Set \t: {:.2%}".format(acc_trees_training))
print("Decision Trees:\tAccuracy, Testing Set \t: {:.2%}".format(acc_trees_testing))
print("\nBagging: \tAccuracy, Training Set \t: {:0.2f}%".format(acc_bagging_training))
print("Bagging: \tAccuracy, Testing Set \t: {:0.2f}%".format( acc_bagging_testing))
print("\nRandom Forest: \tAccuracy, Training Set \t: {:0.2f}%".format(acc_random_forest_training))
print("Random Forest: \tAccuracy, Testing Set \t: {:0.2f}%".format(acc_random_forest_testing))
```
## 3. *Theory:* What is Boosting?
- **Bagging and Random Forest:**
- complex and deep trees **overfit**
- thus **let's perform variance reduction on complex trees!**
- **Boosting:**
- simple and shallow trees **underfit**
- thus **let's perform bias reduction of simple trees!**
- make the simple trees more expressive!
**Boosting** attempts to improve the predictive flexibility of simple models.
- It trains a **large number of “weak” learners in sequence**.
- A weak learner is a constrained model (limit the max depth of each decision tree).
- Each one in the sequence focuses on **learning from the mistakes** of the one before it.
- By weighting the mistakes more heavily in the next tree, the next tree will learn from them.
- Combining all the weak learners into a single strong learner gives **a boosted tree**.
<img src="data/gradient_boosting1.png?" alt="tree_adj" width="70%"/>
----------
### Illustrative example (from [source](https://towardsdatascience.com/underfitting-and-overfitting-in-machine-learning-and-how-to-deal-with-it-6fe4a8a49dbf))
<img src="data/boosting.png" alt="tree_adj" width="70%"/>
We built multiple trees consecutively: Tree 1 -> Tree 2 -> Tree 3 - > ....
**The size of the plus or minus signs indicates the weight of each data point for every tree.** How do we determine these weights?
For each consecutive tree and iteration we do the following:
- The **wrongly classified data points ("mistakes" = red circles)** are identified and **more heavily weighted in the next tree (green arrow)**.
- Thus the size of the plus or minus changes in the next tree
- This change in weights will influence and change the next simple decision tree
- The **correct predictions are** identified and **less heavily weighted in the next tree**.
We iterate this process for a certain number of times, stop and construct our final model:
- The ensemble (**"Final: Combination"**) is a linear combination of the simple trees, and is more expressive!
- The ensemble (**"Final: Combination"**) has indeed not just one simple decision boundary line, and fits the data better.
<img src="data/boosting_2.png?" alt="tree_adj" width="70%"/>
### What is Ada Boost?
- Ada Boost = Adaptive Boosting.
- AdaBoost is adaptive in the sense that subsequent weak learners are tweaked in favor of those instances misclassified by previous classifiers
<img src="data/AdaBoost1.png" alt="tree_adj" width="70%"/>
<img src="data/AdaBoost2.png" alt="tree_adj" width="70%"/>
<img src="data/AdaBoost3.png" alt="tree_adj" width="70%"/>
**Notice that when $\hat{y}_n = y_n$, the weight $w_n$ is small; when $\hat{y}_n \neq y_n$, the weight $w_n$ is larger.**
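A schematic numpy sketch of this weight update for labels in {-1, +1} (an illustration of the idea, not the exact notation of the slides):
```
import numpy as np

def update_weights(w, y_true, y_pred, alpha):
    # Misclassified points (y_true != y_pred) are multiplied by exp(+alpha) > 1,
    # correctly classified points by exp(-alpha) < 1.
    w = w * np.exp(-alpha * y_true * y_pred)
    return w / w.sum()  # renormalize so the weights form a distribution
```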
### Illustrative Example (from slides)
------
**Step1. Start with an equal distribution initially**
<img src="data/ADA2.png" alt="tree_adj" width="40%">
------
**Step2. Fit a simple classifier**
<img src="data/ADA3.png" alt="tree_adj" width="40%"/>
------
**Step3. Update the weights**
<img src="data/ADA4.png" alt="tree_adj" width="40%"/>
**Step4. Update the classifier:** First time trivial (we have no model yet.)
------
**Step2. Fit a simple classifier**
<img src="data/ADA5.png" alt="tree_adj" width="40%"/>
**Step3. Update the weights:** not shown.
------
**Step4. Update the classifier:**
<img src="data/ADA6.png" alt="tree_adj" width="40%">
## 4. Use the AdaBoost method to visualize the Bias-Variance tradeoff.
Now let's try Boosting!
```
#Fit an Adaboost Model
#Training
model = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=4),
n_estimators=200,
learning_rate=0.05)
model.fit(x_train, y_train)
#Predict
y_pred_train = model.predict(x_train)
y_pred_test = model.predict(x_test)
#Performance Evaluation
acc_boosting_training = accuracy_score(y_train, y_pred_train)*100
acc_boosting_test = accuracy_score(y_test, y_pred_test)*100
print("Ada Boost:\tAccuracy, Training Set \t: {:0.2f}%".format(acc_boosting_training))
print("Ada Boost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_boosting_test))
```
**How does the test and training accuracy evolve with every iteration (tree)?**
```
#Plot Iteration based score
train_scores = list(model.staged_score(x_train,y_train))
test_scores = list(model.staged_score(x_test, y_test))
plt.figure(figsize=(10,7))
plt.plot(train_scores,label='train')
plt.plot(test_scores,label='test')
plt.xlabel('Iteration')
plt.ylabel('Accuracy')
plt.title("Variation of Accuracy with Iterations - ADA Boost")
plt.legend();
```
What about performance?
```
print("Decision Trees:\tAccuracy, Testing Set \t: {:.2%}".format(acc_trees_testing))
print("Bagging: \tAccuracy, Testing Set \t: {:0.2f}%".format( acc_bagging_testing))
print("Random Forest: \tAccuracy, Testing Set \t: {:0.2f}%".format(acc_random_forest_testing))
print("Ada Boost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_boosting_test))
```
AdaBoost seems to be performing better than Simple Decision Trees and has a similar Test Set Accuracy performance compared to Random Forest.
**Random tip:** If a "for"-loop takes some time and you want to know the progress while running the loop, use **tqdm()** ([link](https://github.com/tqdm/tqdm)). No need for 1000's of ```print(i)``` outputs.
Usage: ```for i in tqdm( range(start,finish) ):```
- tqdm means *"progress"* in Arabic (taqadum, تقدّم) and
- tqdm is an abbreviation for *"I love you so much"* in Spanish (te quiero demasiado).
#### What if we change the depth of our AdaBoost trees?
```
# Start Timer
start = time.time()
#Find Optimal Depth of trees for Boosting
score_train, score_test, depth_start, depth_end = {}, {}, 2, 30
for i in tqdm(range(depth_start, depth_end, 2)):
model = AdaBoostClassifier(
base_estimator=DecisionTreeClassifier(max_depth=i),
n_estimators=200, learning_rate=0.05)
model.fit(x_train, y_train)
score_train[i] = accuracy_score(y_train, model.predict(x_train))
score_test[i] = accuracy_score(y_test, model.predict(x_test))
# Stop Timer
end = time.time()
elapsed_adaboost = end - start
#Plot
lists1 = sorted(score_train.items())
lists2 = sorted(score_test.items())
x1, y1 = zip(*lists1)
x2, y2 = zip(*lists2)
plt.figure(figsize=(10,7))
plt.ylabel("Accuracy")
plt.xlabel("Depth")
plt.title('Variation of Accuracy with Depth - ADA Boost Classifier')
plt.plot(x1, y1, 'b-', label='Train')
plt.plot(x2, y2, 'g-', label='Test')
plt.legend()
plt.show()
```
Adaboost complexity depends on both the number of estimators and the base estimator.
- In the beginning as our model complexity increases (depth 2-3), we first observe a small increase in accuracy.
- But as we go further to the right of the graph (**deeper trees**), our model **will overfit the data.**
- **REMINDER and validation: Boosting relies on simple trees!**
**Food for Thought :**
- Are **boosted models independent of one another?** Do they need to wait for the previous model's residuals?
- Are **bagging or random forest models independent of each other**, can they be trained in a parallel fashion?
## 5. *Theory:* What is Gradient Boosting and XGBoost?
### What is Gradient Boosting?
To improve its predictions, **gradient boosting looks at the difference between its current approximation and the known correct target vector, which is called the residual**.
The mathematics:
- It may be assumed that there is some imperfect model $F_{m}$
- The gradient boosting algorithm improves on $F_{m}$ by constructing a new model that adds an estimator $h$ to provide a better model:
$$F_{m+1}(x)=F_{m}(x)+h(x)$$
- To find $h$, the gradient boosting solution starts with the observation that a perfect **h** would imply
$$F_{m+1}(x)=F_{m}(x)+h(x)=y$$
- or, equivalently, solving for $h$,
$$h(x)=y-F_{m}(x)$$
- Therefore, gradient boosting will fit $h$ to the residual $y-F_{m}(x)$.
<img src="data/gradient_boosting2.png" alt="tree_adj" width="80%"/>
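A minimal sketch of this residual-fitting loop for regression with squared loss (assuming scikit-learn trees; an illustration of the idea, not the XGBoost implementation):
```
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_fit(X, y, n_rounds=100, lr=0.1, max_depth=3):
    F = np.full(len(y), y.mean())  # F_0: start from a constant model
    trees = []
    for _ in range(n_rounds):
        residual = y - F                                         # h should approximate y - F_m(x)
        h = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        F = F + lr * h.predict(X)                                # F_{m+1}(x) = F_m(x) + lr * h(x)
        trees.append(h)
    return y.mean(), trees
```
Predicting with the ensemble then means summing the constant base value and `lr` times each tree's prediction.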
-------
### XGBoost: ["Long May She Reign!"](https://towardsdatascience.com/https-medium-com-vishalmorde-xgboost-algorithm-long-she-may-rein-edd9f99be63d)
<img src="data/kaggle.png" alt="tree_adj" width="100%"/>
----------
### What is XGBoost and why is it so good!?
- Based on Gradient Boosting
- XGBoost = **eXtreme Gradient Boosting**; the name refers to the engineering goal of pushing the limits of computational resources for boosted tree algorithms
**Accuracy:**
- XGBoost, however, uses a **more regularized model formalization to control overfitting** (= better performance), via both L1 and L2 regularization.
- Tree pruning methods: shallower trees also help prevent overfitting.
- Improved convergence techniques (like early stopping when no improvement is made for X number of iterations)
- Built-in cross-validation
**Computing Speed:**
- Special Vector and matrix type data structures for faster results.
- Parallelized tree building: using all of your CPU cores during training.
- Distributed Computing: for training very large models using a cluster of machines.
- Cache Optimization of data structures and algorithm: to make best use of hardware.
**XGBoost is building boosted trees in parallel? What? How?**
- No: Xgboost doesn't run multiple trees in parallel, you need predictions after each tree to update gradients.
- Rather, it parallelizes WITHIN a single tree by using OpenMP to create branches independently.
## 6. Use XGBoost: Extreme Gradient Boosting
```
# Let's install XGBoost
! pip install xgboost
import xgboost as xgb
# Create the training and test data
dtrain = xgb.DMatrix(x_train, label=y_train)
dtest = xgb.DMatrix(x_test, label=y_test)
# Parameters
param = {
'max_depth': best_depth, # the maximum depth of each tree
'eta': 0.3, # the training step for each iteration
'silent': 1, # logging mode - quiet
'objective': 'multi:softprob', # error evaluation for multiclass training
'num_class': 2} # the number of classes that exist in this datset
# Number of training iterations
num_round = 200
# Start timer
start = time.time()
# Train XGBoost
bst = xgb.train(param,
dtrain,
num_round,
evals= [(dtrain, 'train')],
early_stopping_rounds=20, # early stopping
verbose_eval=20)
# Make prediction training set
preds_train = bst.predict(dtrain)
best_preds_train = np.asarray([np.argmax(line) for line in preds_train])
# Make prediction test set
preds_test = bst.predict(dtest)
best_preds_test = np.asarray([np.argmax(line) for line in preds_test])
# Performance Evaluation
acc_XGBoost_training = accuracy_score(y_train, best_preds_train)*100
acc_XGBoost_test = accuracy_score(y_test, best_preds_test)*100
# Stop Timer
end = time.time()
elapsed_xgboost = end - start
print("XGBoost:\tAccuracy, Training Set \t: {:0.2f}%".format(acc_XGBoost_training))
print("XGBoost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_XGBoost_test))
```
### What about the accuracy performance: AdaBoost versus XGBoost?
```
print("Ada Boost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_boosting_test))
print("XGBoost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_XGBoost_test))
```
### What about the computing performance: AdaBoost versus XGBoost?
```
print("AdaBoost elapsed time: \t{:0.2f}s".format(elapsed_adaboost))
print("XGBoost elapsed time: \t{:0.2f}s".format(elapsed_xgboost))
```
### What if we change the depth of our XGBoost trees and compare to Ada Boost?
```
def model_xgboost(best_depth):
param = {
'max_depth': best_depth, # the maximum depth of each tree
'eta': 0.3, # the training step for each iteration
'silent': 1, # logging mode - quiet
'objective': 'multi:softprob', # error evaluation for multiclass training
'num_class': 2} # the number of classes that exist in this datset
# the number of training iterations
num_round = 200
bst = xgb.train(param,
dtrain,
num_round,
evals= [(dtrain, 'train')],
early_stopping_rounds=20,
verbose_eval=False)
preds_train = bst.predict(dtrain)
best_preds_train = np.asarray([np.argmax(line) for line in preds_train])
preds_test = bst.predict(dtest)
best_preds_test = np.asarray([np.argmax(line) for line in preds_test])
#Performance Evaluation
XGBoost_training = accuracy_score(y_train, best_preds_train)
XGBoost_test = accuracy_score(y_test, best_preds_test)
return XGBoost_training, XGBoost_test
#Find Optimal Depth of trees for Boosting
score_train_xgb, score_test_xgb = {}, {}
depth_start, depth_end = 2, 30
for i in tqdm(range(depth_start, depth_end, 2)):
XGBoost_training, XGBoost_test = model_xgboost(i)
score_train_xgb[i] = XGBoost_training
score_test_xgb[i] = XGBoost_test
#Plot
lists1 = sorted(score_train_xgb.items())
lists2 = sorted(score_test_xgb.items())
x3, y3 = zip(*lists1)
x4, y4 = zip(*lists2)
plt.figure(figsize=(10,7))
plt.ylabel("Accuracy")
plt.xlabel("Depth")
plt.title('Variation of Accuracy with Depth - Adaboost & XGBoost Classifier')
plt.plot(x1, y1, label='Train Accuracy Ada Boost')
plt.plot(x2, y2, label='Test Accuracy Ada Boost')
plt.plot(x3, y3, label='Train Accuracy XGBoost')
plt.plot(x4, y4, label='Test Accuracy XGBoost')
plt.legend()
plt.show()
```
**Interesting**:
- There is no clear optimal depth of the base tree for XGBoost, probably due to the heavy regularization, pruning, and early stopping even when starting from a deep tree.
- XGBoost does not seem to overfit when the depth of the tree increases, as opposed to Ada Boost.
**All the accuracy performances:**
```
print("Decision Trees:\tAccuracy, Testing Set \t: {:.2%}".format(acc_trees_testing))
print("Bagging: \tAccuracy, Testing Set \t: {:0.2f}%".format( acc_bagging_testing))
print("Random Forest: \tAccuracy, Testing Set \t: {:0.2f}%".format(acc_random_forest_testing))
print("Ada Boost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_boosting_test))
print("XGBoost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_XGBoost_test))
```
----------
**Overview of all the tree algorithms:** [Source](https://towardsdatascience.com/https-medium-com-vishalmorde-xgboost-algorithm-long-she-may-rein-edd9f99be63d)
<img src="data/trees.png" alt="tree_adj" width="100%"/>
## End of Section
----------
## Optional: Example to better understand Bias vs Variance tradeoff.
A central notion underlying what we've been learning in lectures and sections so far is the trade-off between overfitting and underfitting. If you remember back to Homework 3, we had a model that seemed to represent our data accurately. However, we saw that as we made it more and more accurate on the training set, it did not generalize well to unobserved data.
As a different example, in face recognition algorithms, such as that on the iPhone X, a too-accurate model would be unable to identify someone who styled their hair differently that day. The reason is that our model may learn irrelevant features in the training data. On the contrary, an insufficiently trained model would not generalize well either. For example, it was recently reported that a face mask could sufficiently fool the iPhone X.
A widely used solution in statistics to reduce overfitting consists of adding structure to the model, with something like regularization. This method favors simpler models during training.
The bias-variance dilemma is closely related.
- The **bias** of a model quantifies how precise a model is across training sets.
- The **variance** quantifies how sensitive the model is to small changes in the training set.
- A **robust** model is not overly sensitive to small changes.
- **The dilemma involves minimizing both bias and variance**; we want a precise and robust model. Simpler models tend to be less accurate but more robust. Complex models tend to be more accurate but less robust.
**How to reduce bias:**
- **Use more complex models, more features, less regularization,** ...
- **Boosting:** attempts to improve the predictive flexibility of simple models. Boosting uses simple base models and tries to “boost” their aggregate complexity.
**How to reduce variance:**
- **Early Stopping:** Its rules provide us with guidance as to how many iterations can be run before the learner begins to over-fit.
- **Pruning:** Pruning is extensively used while building related models. It simply removes the nodes which add little predictive power for the problem at hand.
- **Regularization:** It introduces a cost term for bringing in more features with the objective function. Hence it tries to push the coefficients of many variables to zero and thereby reduce the cost term.
- **Train with more data:** It won’t work every time, but training with more data can help algorithms detect the signal better.
- **Ensembling:** Ensembles are machine learning methods for combining predictions from multiple separate models. For example:
- **Bagging** attempts to reduce the chance of overfitting complex models: Bagging uses complex base models and tries to “smooth out” their predictions.
```
import scipy.io as sio
import numpy as np
import matplotlib.pyplot as plt
from bresenham import bresenham
from numpy import matmul as mm
from scipy.stats import mode
import math
import tqdm  # tqdm.tqdm is used below for the progress bar
data = sio.loadmat('practice.mat')
M = data['M']; init_pose = data['init_pose'];
pose = data['pose']; ranges = data['ranges']
scanAngles = data['scanAngles']; t = data['t']
param = {}
param['resol'], param['origin'] = 25, np.array([[685],[572]])
param['init_pose'] = -init_pose
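# Convert the first lidar scan from (range, angle) pairs to Cartesian (x, y)
# coordinates in the robot body frame (y is negated to match image coordinates).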
tmp1 = ranges[:,0].reshape(-1,1)*np.cos(scanAngles)
tmp2 = -ranges[:,0].reshape(-1,1)*np.sin(scanAngles)
lidar_local = np.hstack((tmp1,tmp2))
plt.figure(figsize=(20,10))
plt.plot(0,0,'rs')
plt.plot(lidar_local[:,0],lidar_local[:,1],'.-')
plt.axis('equal')
plt.gca().invert_yaxis()
plt.xlabel('x'); plt.ylabel('y')
plt.grid(True)
plt.title('Lidar measurement in the body frame')
plt.imshow(M)
lidar_global = np.zeros((ranges.shape[0],2))
lidar_global[:,0]=np.array([(ranges[:,0]*np.cos(scanAngles+pose[2,0]).flatten()+
pose[0,0])*param['resol']+param['origin'][0]])
lidar_global[:,1]=np.array([(-ranges[:,0]*np.sin(scanAngles+pose[2,0]).flatten()+
pose[1,0])*param['resol']+param['origin'][1]])
plt.figure(figsize=(20,10))
plt.imshow(M,cmap='gray')
plt.plot(lidar_global[:,0],lidar_global[:,1],'g.')
plt.grid(True)
plt.plot(pose[0,:]*param['resol']+param['origin'][0],
pose[1,:]*param['resol']+param['origin'][1],'r.-')
def particleLocalization(ranges,scanAngles,Map,param):
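    # Particle-filter localization over an occupancy-grid map:
    #   - N lidar scans, M candidate particles per time step
    #   - each particle is a pose (x, y, theta) perturbed around the previous estimate
    #   - particles are scored by projecting the scan into map pixels and checking
    #     how many endpoints land on occupied vs. free cells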
N,M = ranges.shape[1],1200
myPose = np.zeros((3,N))
myResolution, myOrigin = param['resol'],param['origin']
myPose[:,0] = param['init_pose'].flatten()
map_threshold_low = mode(Map,None)[0] - .3
map_threshold_high = mode(Map,None)[0] + .3
resample_threshold,radius = .85,.048
sigma_m = .029*np.array([[1],[1],[2]])
direction = myPose[2,0]
P = np.tile(myPose[:,0],(1,M))
W = np.tile(1/M,(1,M))
lidar_global = np.zeros((ranges.shape[0],2))
for j in tqdm.tqdm(range(1,N)):
P = np.tile(myPose[:,j-1].reshape(-1,1),(1,M))
R = radius
P += np.random.normal(0,1,(3,M))*(mm(sigma_m,np.ones((1,M))))
P[0,:M] += R*np.cos(P[2,:M])
P[1,:M] += R*np.sin(P[2,:M])
W = np.tile(1/M,(1,M))
P_corr = np.zeros((1,M))
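        # Correlation score for each particle: reward lidar endpoints that land on
        # occupied map cells, penalize endpoints that land on free cells.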
for i in range(M):
lidar_global[:,0]=np.array([(ranges[:,j]*np.cos(scanAngles+P[2,i]).flatten()+
P[0,i])*myResolution+myOrigin[0]]).astype(int)
lidar_global[:,1]=np.array([(-ranges[:,j]*np.sin(scanAngles+P[2,i]).flatten()+
P[1,i])*myResolution+myOrigin[1]]).astype(int)
lidar_global[lidar_global[:,0]<1,0] = myOrigin[0]
lidar_global[lidar_global[:,0]<1,1] = myOrigin[1]
lidar_global[lidar_global[:,1]<1,0] = myOrigin[0]
lidar_global[lidar_global[:,1]<1,1] = myOrigin[1]
lidar_global[lidar_global[:,0]>Map.shape[1]-1,0] = myOrigin[0]
lidar_global[lidar_global[:,0]>Map.shape[1]-1,1] = myOrigin[1]
lidar_global[lidar_global[:,1]>Map.shape[0]-1,0] = myOrigin[0]
lidar_global[lidar_global[:,1]>Map.shape[0]-1,1] = myOrigin[1]
lidar_global = lidar_global.astype(int)
corr_values = Map[lidar_global[:,1],lidar_global[:,0]]
P_corr[0,i]=-3*np.sum(corr_values<=map_threshold_low)+10*np.sum(corr_values>=map_threshold_high)
P_corr -= np.min(P_corr)
W = W[:M]*P_corr/np.sum(P_corr)
W /= np.sum(W)
ind = np.argmax(W)
myPose[:,j] = P[:,ind]
return myPose
pose1 = particleLocalization(ranges[:,:1000],scanAngles,M,param)
lidar_global = np.zeros((ranges.shape[0],2))
lidar_global[:,0]=np.array([(ranges[:,0]*np.cos(scanAngles+pose1[2,0]).flatten()+
pose1[0,0])*param['resol']+param['origin'][0]])
lidar_global[:,1]=np.array([(-ranges[:,0]*np.sin(scanAngles+pose1[2,0]).flatten()+
pose1[1,0])*param['resol']+param['origin'][1]])
plt.figure(figsize=(20,10))
plt.imshow(M,cmap='gray')
plt.plot(lidar_global[:,0],lidar_global[:,1],'g.')
plt.grid(True)
plt.plot(pose1[0,:]*param['resol']+param['origin'][0],
pose1[1,:]*param['resol']+param['origin'][1],'r.-')
```
# Homework for the lecture "Introduction to Data Types and Loops. Part 2"
## Task 1
You are given a variable that stores a dictionary containing geo-tags for each user (an example of the data structure is shown below). You need to write a program that prints the set of unique geo-tags of all users.
Example of how the program works:
```
ids = {'user1': [213, 213, 213, 15, 213],
'user2': [54, 54, 119, 119, 119],
'user3': [213, 98, 98, 35]}
```
Result:
`{98, 35, 15, 213, 54, 119}`
```
ids = {'user1': [213, 213, 213, 15, 213],
'user2': [54, 54, 119, 119, 119],
'user3': [213, 98, 98, 35]}
set(sum(ids.values(), []))
```
## Task 2
You are given a variable that stores a list of a user's search queries (an example of the data structure is shown below). You need to write a program that prints the distribution of the number of words in the queries in the required format.
Example of how the program works:
```
queries = [
'смотреть сериалы онлайн',
'новости спорта',
'афиша кино',
'курс доллара',
'сериалы этим летом',
'курс по питону',
'сериалы про спорт',
]
```
Result:
```
Поисковых запросов, содержащих 2 слов(а): 42.86%
Поисковых запросов, содержащих 3 слов(а): 57.14%
```
```
queries = [
'смотреть сериалы онлайн',
'новости спорта',
'афиша кино',
'курс доллара',
'сериалы этим летом',
'курс по питону',
'сериалы про спорт',
]
my_dict = {}
for q in queries:
key = len(q.split())
my_dict.setdefault(key, 0)
my_dict[key] += 1
total = sum(my_dict.values())  # avoid shadowing the built-in all()
for k, v in my_dict.items():
    print(f'Поисковых запросов, содержащих {k} слов(а): {(v * 100 / total):.2f}%')
```
## Task 3
You are given a variable that stores the cost and revenue of advertising campaigns for various sources. Extend the original structure with the [ROI](https://ru.wikipedia.org/wiki/%D0%9E%D0%BA%D1%83%D0%BF%D0%B0%D0%B5%D0%BC%D0%BE%D1%81%D1%82%D1%8C_%D0%B8%D0%BD%D0%B2%D0%B5%D1%81%D1%82%D0%B8%D1%86%D0%B8%D0%B9) metric, computed by the formula: **(revenue / cost - 1) * 100**
Example of how the program works:
```
results = {
'vk': {'revenue': 103, 'cost': 98},
'yandex': {'revenue': 179, 'cost': 153},
'facebook': {'revenue': 103, 'cost': 110},
'adwords': {'revenue': 35, 'cost': 34},
'twitter': {'revenue': 11, 'cost': 24},
}
```
Result:
```
{'adwords': {'cost': 34, 'revenue': 35, 'ROI': 2.94},
'facebook': {'cost': 110, 'revenue': 103, 'ROI': -6.36},
'twitter': {'cost': 24, 'revenue': 11, 'ROI': -54.17},
'vk': {'cost': 98, 'revenue': 103, 'ROI': 5.1},
'yandex': {'cost': 153, 'revenue': 179, 'ROI': 16.99}}
```
```
results = {
'vk': {'revenue': 103, 'cost': 98},
'yandex': {'revenue': 179, 'cost': 153},
'facebook': {'revenue': 103, 'cost': 110},
'adwords': {'revenue': 35, 'cost': 34},
'twitter': {'revenue': 11, 'cost': 24},
}
def append_roi(d):
val = (d['revenue'] / d['cost'] - 1) * 100
d['ROI'] = round(val,2)
return d
{k : append_roi(v) for k,v in results.items()}
```
## Task 4
You are given a variable that stores advertising-channel statistics on sales volumes (an example of the data structure is shown below). Write a program that returns the name of the channel with the maximum sales volume.
Example of how the program works:
`stats = {'facebook': 55, 'yandex': 115, 'vk': 120, 'google': 99, 'email': 42, 'ok': 98}`
Result:
`Максимальный объем продаж на рекламном канале: vk`
```
stats = {'facebook': 55, 'yandex': 115, 'vk': 120, 'google': 99, 'email': 42, 'ok': 98}
print(f"Максимальный объем продаж на рекламном канале: {max(stats, key=stats.get)}")
```
## Task 5 (optional)
You are given a list of arbitrary length. You need to write code that, based on the original list, builds a dictionary whose nesting depth equals the length of the original list.
Examples of how the program works:
1. `my_list = ['2018-01-01', 'yandex', 'cpc', 100]`
Result:
`{'2018-01-01': {'yandex': {'cpc': 100}}}`
2. `my_list = ['a', 'b', 'c', 'd', 'e', 'f']`
Result:
`{'a': {'b': {'c': {'d': {'e': 'f'}}}}}`
```
# my_list = ['2018-01-01', 'yandex', 'cpc', 100]
my_list = ['a', 'b', 'c', 'd', 'e', 'f']
my_dict = {}
counter = 0
for el in reversed(my_list):
if counter > 0:
if counter < 2:
my_dict = {el: my_list[-1]}
else:
my_dict = {el: my_dict}
counter += 1
my_dict
# TODO: Recursive solution
```
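For the TODO above, one possible recursive version (a sketch, not part of the original solution):
```
def nest(lst):
    # Base case: two elements left -> {key: value}
    if len(lst) == 2:
        return {lst[0]: lst[1]}
    # Recursive case: the first element wraps the nested rest of the list
    return {lst[0]: nest(lst[1:])}

print(nest(['2018-01-01', 'yandex', 'cpc', 100]))
print(nest(['a', 'b', 'c', 'd', 'e', 'f']))
```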
## Task 6 (optional)
You are given a recipe book with information on how much of each ingredient is needed to cook a dish, per single serving (sample data is shown below).
Write a program that asks the user for the number of servings of these dishes to cook and displays the total amount of required ingredients in the format shown.
**Note!** Identical ingredients with different units must be counted separately!
Example of how the program works:
```
cook_book = {
'салат': [
{'ingridient_name': 'сыр', 'quantity': 50, 'measure': 'гр'},
{'ingridient_name': 'томаты', 'quantity': 2, 'measure': 'шт'},
{'ingridient_name': 'огурцы', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'маслины', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'оливковое масло', 'quantity': 20, 'measure': 'мл'},
{'ingridient_name': 'салат', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'перец', 'quantity': 20, 'measure': 'гр'}
],
'пицца': [
{'ingridient_name': 'сыр', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'колбаса', 'quantity': 30, 'measure': 'гр'},
{'ingridient_name': 'бекон', 'quantity': 30, 'measure': 'гр'},
{'ingridient_name': 'оливки', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'томаты', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'тесто', 'quantity': 100, 'measure': 'гр'},
],
'лимонад': [
{'ingridient_name': 'лимон', 'quantity': 1, 'measure': 'шт'},
{'ingridient_name': 'вода', 'quantity': 200, 'measure': 'мл'},
{'ingridient_name': 'сахар', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'лайм', 'quantity': 20, 'measure': 'гр'},
]
}
Введите количество порций:
3
```
Result:
```
Сыр: 210 гр
Томаты: 6 шт
Огурцы: 60 гр
Маслины: 30 гр
Оливковое масло: 60 мл
Салат: 30 гр
Перец: 60 гр
Колбаса: 90 гр
Бекон: 90 гр
Оливки: 30 гр
Томаты: 60 гр
Тесто: 300 гр
Лимон: 3 шт
Вода: 600 мл
Сахар: 30 гр
Лайм: 60 гр
```
```
cook_book = {
'салат': [
{'ingridient_name': 'сыр', 'quantity': 50, 'measure': 'гр'},
{'ingridient_name': 'томаты', 'quantity': 2, 'measure': 'шт'},
{'ingridient_name': 'огурцы', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'маслины', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'оливковое масло', 'quantity': 20, 'measure': 'мл'},
{'ingridient_name': 'салат', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'перец', 'quantity': 20, 'measure': 'гр'}
],
'пицца': [
{'ingridient_name': 'сыр', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'колбаса', 'quantity': 30, 'measure': 'гр'},
{'ingridient_name': 'бекон', 'quantity': 30, 'measure': 'гр'},
{'ingridient_name': 'оливки', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'томаты', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'тесто', 'quantity': 100, 'measure': 'гр'},
],
'лимонад': [
{'ingridient_name': 'лимон', 'quantity': 1, 'measure': 'шт'},
{'ingridient_name': 'вода', 'quantity': 200, 'measure': 'мл'},
{'ingridient_name': 'сахар', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'лайм', 'quantity': 20, 'measure': 'гр'},
]
}
count = int(input('Введите количество порций: '))
res = {}
for dish, igr in cook_book.items():
for el in igr:
key = (el["ingridient_name"],el["measure"])
res.setdefault(key, 0)
res[key] += el["quantity"] * count
for k, v in res.items():
    print(f'{k[0].capitalize()}: {v} {k[1]}')  # match the expected output format (no trailing dot)
```
|
github_jupyter
|
ids = {'user1': [213, 213, 213, 15, 213],
'user2': [54, 54, 119, 119, 119],
'user3': [213, 98, 98, 35]}
ids = {'user1': [213, 213, 213, 15, 213],
'user2': [54, 54, 119, 119, 119],
'user3': [213, 98, 98, 35]}
set(sum(ids.values(), []))
queries = [
'смотреть сериалы онлайн',
'новости спорта',
'афиша кино',
'курс доллара',
'сериалы этим летом',
'курс по питону',
'сериалы про спорт',
]
Поисковых запросов, содержащих 2 слов(а): 42.86%
Поисковых запросов, содержащих 3 слов(а): 57.14%
queries = [
'смотреть сериалы онлайн',
'новости спорта',
'афиша кино',
'курс доллара',
'сериалы этим летом',
'курс по питону',
'сериалы про спорт',
]
my_dict = {}
for q in queries:
key = len(q.split())
my_dict.setdefault(key, 0)
my_dict[key] += 1
all = sum(my_dict.values())
for k,v in my_dict.items():
print(f'Поисковых запросов, содержащих {k} слов(а): {(v * 100 / all):.2f}%')
results = {
'vk': {'revenue': 103, 'cost': 98},
'yandex': {'revenue': 179, 'cost': 153},
'facebook': {'revenue': 103, 'cost': 110},
'adwords': {'revenue': 35, 'cost': 34},
'twitter': {'revenue': 11, 'cost': 24},
}
{'adwords': {'cost': 34, 'revenue': 35, 'ROI': 2.94},
'facebook': {'cost': 110, 'revenue': 103, 'ROI': -6.36},
'twitter': {'cost': 24, 'revenue': 11, 'ROI': -54.17},
'vk': {'cost': 98, 'revenue': 103, 'ROI': 5.1},
'yandex': {'cost': 153, 'revenue': 179, 'ROI': 16.99}}
results = {
'vk': {'revenue': 103, 'cost': 98},
'yandex': {'revenue': 179, 'cost': 153},
'facebook': {'revenue': 103, 'cost': 110},
'adwords': {'revenue': 35, 'cost': 34},
'twitter': {'revenue': 11, 'cost': 24},
}
def append_roi(d):
val = (d['revenue'] / d['cost'] - 1) * 100
d['ROI'] = round(val,2)
return d
{k : append_roi(v) for k,v in results.items()}
stats = {'facebook': 55, 'yandex': 115, 'vk': 120, 'google': 99, 'email': 42, 'ok': 98}
print(f"Максимальный объем продаж на рекламном канале: {max(stats, key=stats.get)}")
# my_list = ['2018-01-01', 'yandex', 'cpc', 100]
my_list = ['a', 'b', 'c', 'd', 'e', 'f']
my_dict = {}
counter = 0
for el in reversed(my_list):
if counter > 0:
if counter < 2:
my_dict = {el: my_list[-1]}
else:
my_dict = {el: my_dict}
counter += 1
my_dict
# TODO: Recursive solution
cook_book = {
'салат': [
{'ingridient_name': 'сыр', 'quantity': 50, 'measure': 'гр'},
{'ingridient_name': 'томаты', 'quantity': 2, 'measure': 'шт'},
{'ingridient_name': 'огурцы', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'маслины', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'оливковое масло', 'quantity': 20, 'measure': 'мл'},
{'ingridient_name': 'салат', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'перец', 'quantity': 20, 'measure': 'гр'}
],
'пицца': [
{'ingridient_name': 'сыр', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'колбаса', 'quantity': 30, 'measure': 'гр'},
{'ingridient_name': 'бекон', 'quantity': 30, 'measure': 'гр'},
{'ingridient_name': 'оливки', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'томаты', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'тесто', 'quantity': 100, 'measure': 'гр'},
],
'лимонад': [
{'ingridient_name': 'лимон', 'quantity': 1, 'measure': 'шт'},
{'ingridient_name': 'вода', 'quantity': 200, 'measure': 'мл'},
{'ingridient_name': 'сахар', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'лайм', 'quantity': 20, 'measure': 'гр'},
]
}
Введите количество порций:
3
Сыр: 210 гр
Томаты: 6 шт
Огурцы: 60 гр
Маслины: 30 гр
Оливковое масло: 60 мл
Салат: 30 гр
Перец: 60 гр
Колбаса: 90 гр
Бекон: 90 гр
Оливки: 30 гр
Томаты: 60 гр
Тесто: 300 гр
Лимон: 3 шт
Вода: 600 мл
Сахар: 30 гр
Лайм: 60 гр
cook_book = {
'салат': [
{'ingridient_name': 'сыр', 'quantity': 50, 'measure': 'гр'},
{'ingridient_name': 'томаты', 'quantity': 2, 'measure': 'шт'},
{'ingridient_name': 'огурцы', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'маслины', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'оливковое масло', 'quantity': 20, 'measure': 'мл'},
{'ingridient_name': 'салат', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'перец', 'quantity': 20, 'measure': 'гр'}
],
'пицца': [
{'ingridient_name': 'сыр', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'колбаса', 'quantity': 30, 'measure': 'гр'},
{'ingridient_name': 'бекон', 'quantity': 30, 'measure': 'гр'},
{'ingridient_name': 'оливки', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'томаты', 'quantity': 20, 'measure': 'гр'},
{'ingridient_name': 'тесто', 'quantity': 100, 'measure': 'гр'},
],
'лимонад': [
{'ingridient_name': 'лимон', 'quantity': 1, 'measure': 'шт'},
{'ingridient_name': 'вода', 'quantity': 200, 'measure': 'мл'},
{'ingridient_name': 'сахар', 'quantity': 10, 'measure': 'гр'},
{'ingridient_name': 'лайм', 'quantity': 20, 'measure': 'гр'},
]
}
count = int(input('Введите количество порций: '))
res = {}
for dish, igr in cook_book.items():
for el in igr:
key = (el["ingridient_name"],el["measure"])
res.setdefault(key, 0)
res[key] += el["quantity"] * count
for k,v in res.items():
    print(f'{k[0].capitalize()}: {v} {k[1]}')
```
import numpy as np
%run magic.ipynb
```
## Chain Rule
Consider $F = f(\mathbf{a},\mathbf{g}(\mathbf{b},\mathbf{h}(\mathbf{c}, \mathbf{i})))$
where $\mathbf{a},\mathbf{b},\mathbf{c}$ are weights and $\mathbf{i}$ is the input.
From the point of view of $\mathbf{g}$, in order to update its weights we want to compute
### $\frac{\partial F}{\partial b_i}$
What do we need? By the chain rule,
### $\frac{\partial F}{\partial b_i} =
\sum_j \frac{\partial F}{\partial g_j}\frac{\partial g_j}{\partial b_i}$
or, written in Jacobian form,
### $\frac{\partial F}{\partial \mathbf{b}} =
\frac{\partial F}{\partial \mathbf{g}} \frac{\partial \mathbf{g}}{\partial \mathbf{b}}$
So we would like the layer in front of us to pass us $\frac{\partial F}{\partial \mathbf{g}}$.
By the same token, since $\mathbf{h}$ also needs $\frac{\partial F}{\partial \mathbf{c}}$, we are in turn responsible for passing $\frac{\partial F}{\partial \mathbf{h}}$ on to it. And because
### $\frac{\partial F}{\partial \mathbf{h}}=
\frac{\partial F}{\partial \mathbf{g}} \frac{\partial \mathbf{g}}{\partial \mathbf{h}}$
the only things $\mathbf{g}$ itself really has to compute are $\frac{\partial \mathbf{g}}{\partial \mathbf{h}}$ and $\frac{\partial \mathbf{g}}{\partial \mathbf{b}}$.
## Gradient descent
### The loss function
Our loss function is still the cross entropy.
Suppose the true class for an input $x$ is $y$; we then define the loss
## $ loss = -\log(q_y)=- \log(Predict(Y=y|x)) $
or, more generally,
## $ loss = - p \cdot \log q $
where $ p_i = \Pr(Y=i|x) $ is the true probability of class $i$.
For a feedforward neural network with one hidden layer this becomes
## $ L= loss = -p \cdot \log \sigma(C(f(Ax+b))+d) $
Since
### $-\log \sigma (Z) = 1 \log (\sum e^{Z_j})-Z$
### $\frac{\partial -\log \sigma (Z)}{\partial Z} = 1 \sigma(Z)^T - \delta$
let $U = f(Ax+b) $, $Z=CU+d$
### $ \frac{\partial L}{\partial d} = \frac{\partial L}{\partial Z} \frac{\partial CU+d}{\partial d}
= \frac{\partial L}{\partial Z}
= p^T (1 \sigma(Z)^T - \delta)
= \sigma(Z)^T - p^T
= \sigma(CU+d)^T - p^T
$
### $ \frac{\partial L}{\partial C_{i,j} }
= \frac{\partial L}{\partial Z} \frac{\partial CU+d}{\partial C_{i,j}}
= (p^T (1 \sigma(Z)^T - \delta))_i U_j
= (\sigma(Z) - p)_i U_j
$
Therefore
### $ \frac{\partial L}{\partial C }
= (\sigma(Z) - p) U^T
$
Up to this point everything matches the plain softmax result.
Next we compute the partial derivatives with respect to A and b.
### $ \frac{\partial L}{\partial U }
= \frac{\partial L}{\partial Z} \frac{\partial CU+d}{\partial U}
= (p^T (1 \sigma(Z)^T - \delta)) C
= (\sigma(Z) - p)^T C
$
$ \frac{\partial U_k}{\partial b_i}
= \frac{\partial f(A_kx+b_k)}{\partial b_i}
= \delta_{k,i} f'(Ax+b)_i $
$ \frac{\partial L}{\partial b_i }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i$
$ \frac{\partial L}{\partial A_{i,j} }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i x_j$
### Task: brute-force it with the differentiated formulas above
* bring back the earlier softmax, relu, and sigmoid functions
* work out the derivatives of relu and sigmoid
* try the mod 3 problem
* randomly initialise A, b, C, d (feel free to try different hidden-layer sizes)
* check the loss
* pick an x
* compute the gradient
* subtract the gradient
* check whether the loss decreased
```
# reference example: the various functions and their derivatives
%run -i solutions/ff_funcs.py
# reference example: computing the loss
%run -i solutions/ff_compute_loss2.py
```
$ \frac{\partial L}{\partial d} = \sigma(CU+d)^T - p^T$
$ \frac{\partial L}{\partial C } = (\sigma(Z) - p) U^T$
$ \frac{\partial L}{\partial b_i }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i$
$ \frac{\partial L}{\partial A_{i,j} }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i x_j$
```
# compute the gradient
%run -i solutions/ff_compute_gradient.py
# update the weights and compute the new loss
%run -i solutions/ff_update.py
```
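The gradient and update steps above are loaded from solution files that are not shown in this notebook. Below is a minimal sketch of what they might compute, written directly from the formulas derived above; it assumes `A`, `b`, `C`, `d`, `x`, `relu`, `softmax` from the earlier cells, that `p` is the one-hot true distribution, and that everything is a 1-D numpy array (the names `grad_*`, `relu_prime`, and `lr` are made up here).
```
import numpy as np
# forward pass
U = relu(A @ x + b)
q = softmax(C @ U + d)
# gradients from the closed-form expressions above
grad_d = q - p                                 # dL/dd = sigma(Z) - p
grad_C = np.outer(q - p, U)                    # dL/dC = (sigma(Z) - p) U^T
relu_prime = (A @ x + b > 0).astype(float)     # f'(Ax+b) for relu
grad_b = ((q - p) @ C) * relu_prime            # dL/db
grad_A = np.outer(grad_b, x)                   # dL/dA
# one plain gradient-descent step
lr = 0.1
A, b, C, d = A - lr * grad_A, b - lr * grad_b, C - lr * grad_C, d - lr * grad_d
```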
Exercise: run 20000 random training steps
```
%matplotlib inline
import matplotlib.pyplot as plt
# reference example
%run -i solutions/ff_train_mod3.py
plt.plot(L_history);
# test the trained model
for i in range(16):
x = Vector(i%2, (i>>1)%2, (i>>2)%2, (i>>3)%2)
y = i%3
U = relu(A@x+b)
q = softmax(C@U+d)
print(q.argmax(), y)
```
### Exercise: detecting a tic-tac-toe win
```
def truth(x):
x = x.reshape(3,3)
return int(x.all(axis=0).any() or
x.all(axis=1).any() or
x.diagonal().all() or
x[::-1].diagonal().all())
%run -i solutions/ff_train_ttt.py
plt.plot(accuracy_history);
```
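The training script `ff_train_ttt.py` is likewise hidden; one possible way to build labelled training data for it with the `truth` function above (a sketch, with made-up variable names):
```
import numpy as np
rng = np.random.default_rng(0)
boards = rng.integers(0, 2, size=(1000, 9))      # random 3x3 boards, flattened
labels = np.array([truth(b) for b in boards])    # 1 if the board contains a completed line
print(boards[0].reshape(3, 3), labels[0])
```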
## Data overview
```
import pandas as pd
import numpy as np
# load the data
train = pd.read_csv('/Users/frank/Documents/workspace/kaggle/dataset/San_Francisco_Crime_Classification/train.csv', parse_dates = ['Dates'])
test = pd.read_csv('/Users/frank/Documents/workspace/kaggle/dataset/San_Francisco_Crime_Classification/test.csv', parse_dates = ['Dates'])
```
Preview the training set
```
print(train.head(10))
```
Preview the test set
```
print(test.head(10))
```
We can see that the training and test sets share the Dates, DayOfWeek, and PdDistrict features, so we start from those three. Category in the training set is the prediction target; we first encode it with sklearn's LabelEncoder(). A quick example:
```
from sklearn import preprocessing
label = preprocessing.LabelEncoder()
label.fit([1, 2, 2, 6])
print(label.transform([1, 1, 2, 6]))
```
Now encode the crime categories:
```
crime = label.fit_transform(train.Category)
```
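If you need to go back from the encoded integers to the original category names, the fitted encoder keeps that mapping (a small illustrative aside, not part of the original notebook):
```
print(label.classes_[:5])                 # category names in encoded order
print(label.inverse_transform(crime[:5])) # decode the first few training labels
```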
A common way to handle categorical features is to binarise (one-hot encode) them; pandas provides the get_dummies() function for this. A quick example:
```
pd.get_dummies(pd.Series(list('abca')))
```
Next, one-hot encode the Dates (hour of day), DayOfWeek, and PdDistrict features:
```
days = pd.get_dummies(train.DayOfWeek)
district = pd.get_dummies(train.PdDistrict)
hour = pd.get_dummies(train.Dates.dt.hour)
```
Then reassemble the training set and attach the encoded category:
```
train_data = pd.concat([days, district, hour], axis=1)
train_data['crime'] = crime
```
Apply the same processing to the test set:
```
days = pd.get_dummies(test.DayOfWeek)
district = pd.get_dummies(test.PdDistrict)
hour = pd.get_dummies(test.Dates.dt.hour)
test_data = pd.concat([days, district, hour], axis=1)
```
Preview the new training and test sets:
```
print(train_data.head(10))
print(test_data.head(10))
```
Split the data into a training set and a validation set (train_size=0.6, i.e. 60% for training and 40% for validation) before modelling:
```
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer versions
training, validation = train_test_split(train_data, train_size=0.6)
```
## Naive Bayes training
```
from sklearn.metrics import log_loss
from sklearn.naive_bayes import BernoulliNB
model = BernoulliNB()
feature_list = training.columns.tolist()
feature_list = feature_list[:len(feature_list) - 1]
print('Selected feature columns:', feature_list)
model.fit(training[feature_list], training['crime'])
predicted = np.array(model.predict_proba(validation[feature_list]))
print "朴素贝叶斯log损失为 %f" % (log_loss(validation['crime'], predicted))
```
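As a sanity check on what log loss measures, it can be recomputed by hand as the mean negative log-probability assigned to the true class (a rough sketch using the variables above):
```
true_probs = predicted[np.arange(len(predicted)), validation['crime'].values]
print(-np.mean(np.log(np.clip(true_probs, 1e-15, 1))))  # should be close to sklearn's log_loss
```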
## Logistic regression
```
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(C=0.1)
model.fit(training[feature_list], training['crime'])
predicted = np.array(model.predict_proba(validation[feature_list]))
print "逻辑回归log损失为 %f" %(log_loss(validation['crime'], predicted))
```
Run the model on the test set:
```
test_predicted = np.array(model.predict_proba(test_data[feature_list]))
```
Save the results:
```
col_names = np.sort(train['Category'].unique())
print(col_names)
result = pd.DataFrame(data=test_predicted, columns=col_names)
result['Id'] = test['Id'].astype(int)
result.to_csv('output.csv', index=False)
```
# BiSeNet
Install the fastai library (then restart the runtime).
```
!pip install fastai --upgrade
from fastai.basics import *
from fastai.vision import models
from fastai.vision.all import *
from fastai.metrics import *
from fastai.data.all import *
from fastai.callback import *
from fastai.learner import defaults, Learner
from pathlib import Path
import random
```
Download the architectures library.
```
!wget https://www.dropbox.com/s/cmoblvx5icdifwl/architectures.zip?dl=1 -O architectures.zip
!unzip architectures.zip
```
Download the dataset.
```
!wget https://www.dropbox.com/s/p92cw15pleunmqe/dataset.zip?dl=1 -O dataset.zip
!unzip dataset.zip
```
Mount Google Drive to store the models.
```
from google.colab import drive
drive.mount('/content/drive')
```
Paths to the dataset directories.
```
path=Path('dataset/')
path_images = path/"Images"
path_labels = path/"Labels"
test_name = "test"
```
Function that, given an image path, returns the path of its annotation.
```
def get_y_fn (x):
return Path(str(x).replace("Images","Labels"))
```
Classes: Background and Stoma.
```
codes = np.loadtxt(path/'codes.txt', dtype=str)
```
Function used to split the dataset into training and test.
```
def ParentSplitter(x):
return Path(x).parent.name==test_name
```
# Data augmentation
Load the Albumentations library.
```
from albumentations import (
Compose,
OneOf,
ElasticTransform,
GridDistortion,
OpticalDistortion,
HorizontalFlip,
Flip,
Rotate,
Transpose,
CLAHE,
ShiftScaleRotate
)
class SegmentationAlbumentationsTransform(ItemTransform):
split_idx = 0
def __init__(self, aug):
self.aug = aug
def encodes(self, x):
img,mask = x
aug = self.aug(image=np.array(img), mask=np.array(mask))
return PILImage.create(aug["image"]), PILMask.create(aug["mask"])
```
Transform that applies horizontal flips, flips, and rotations to the images.
```
transforms=Compose([HorizontalFlip(p=0.5),
Flip(p=0.5),
Rotate(p=0.40,limit=10),
],p=1)
transformPipeline=SegmentationAlbumentationsTransform(transforms)
```
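To sanity-check the augmentation on a single pair before building the DataBlocks, the underlying Albumentations `Compose` can be called directly (a small sketch; the example file name is the same one used in the export section at the end):
```
img = PILImage.create(path_images/'train/1D2_0.png')
msk = PILMask.create(get_y_fn(path_images/'train/1D2_0.png'))
aug = transforms(image=np.array(img), mask=np.array(msk))
print(aug['image'].shape, aug['mask'].shape)
```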
Transform that applies no changes to the images.
```
transforms2=Compose([],p=1)
transform2Pipeline=SegmentationAlbumentationsTransform(transforms2)
```
Transform that sets all pixels with value 255 to 1 in the masks (and everything else to 0).
```
class TargetMaskConvertTransform(ItemTransform):
def __init__(self):
pass
def encodes(self, x):
img,mask = x
#Convert to array
mask = np.array(mask)
mask[mask!=255]=0
# Change 255 for 1
mask[mask==255]=1
# Back to PILMask
mask = PILMask.create(mask)
return img, mask
```
# Dataloaders
Training DataBlock with data augmentation.
```
trainDB = DataBlock(blocks=(ImageBlock, MaskBlock(codes)),
get_items=partial(get_image_files,folders=['train']),
get_y=get_y_fn,
splitter=RandomSplitter(valid_pct=0.2),
item_tfms=[Resize((50,50)), TargetMaskConvertTransform(), transformPipeline],
batch_tfms=Normalize.from_stats(*imagenet_stats)
)
```
Training DataBlock without data augmentation.
```
train2DB = DataBlock(blocks=(ImageBlock, MaskBlock(codes)),
get_items=partial(get_image_files,folders=['train']),
get_y=get_y_fn,
splitter=RandomSplitter(valid_pct=0.2),
item_tfms=[Resize((50,50)), TargetMaskConvertTransform(), transform2Pipeline],
batch_tfms=Normalize.from_stats(*imagenet_stats)
)
```
Test DataBlock.
```
testDB = DataBlock(blocks=(ImageBlock, MaskBlock(codes)),
get_items=partial(get_image_files,folders=['train','test']),
get_y=get_y_fn,
splitter=FuncSplitter(ParentSplitter),
item_tfms=[Resize((50,50)), TargetMaskConvertTransform(), transformPipeline],
batch_tfms=Normalize.from_stats(*imagenet_stats)
)
```
Create the dataloaders.
```
bs = 2
trainDLS = trainDB.dataloaders(path_images,bs=bs)
train2DLS = train2DB.dataloaders(path_images,bs=bs)  # use the non-augmented DataBlock here
testDLS = testDB.dataloaders(path_images,bs=bs)
```
Quick check that the data loads correctly.
```
trainDLS.show_batch(vmin=0,vmax=1,figsize=(12, 9))
```
# Models with data augmentation
Define the model.
```
from architectures import BiSeNet
model = BiSeNet(backbone_name="resnet18", nclass=2)
```
Create the Learner with wd=1e-2 and set the working directory.
```
learn = Learner(dls=trainDLS, model=model, metrics=[Dice(), JaccardCoeff()], wd=1e-2)
learn.model_dir = "/content/drive/MyDrive/Colab Notebooks/BiSeNet"
```
Freeze and pick the learning rate.
```
learn.freeze()
learn.lr_find()
learn.recorder
```
Train the model with EarlyStoppingCallback on valid_loss (min_delta=0.0001, patience=2).
```
name = "model_BiSeNet_resnet18_da_wd2"
learn.fit_one_cycle(100,slice(1e-5,1e-4),cbs=[
EarlyStoppingCallback(monitor='valid_loss', min_delta=0.0001, patience=2),
ShowGraphCallback(),
SaveModelCallback(monitor='valid_loss', min_delta=0.0001, fname=name, every_epoch=False)])
```
Check the results of the saved model.
```
learn.load("model_BiSeNet_resnet18_da_wd2")
learn.validate()
```
Unfreeze and pick the learning rate.
```
learn.unfreeze()
learn.lr_find()
learn.recorder
```
Train the model with EarlyStoppingCallback on valid_loss (min_delta=0.0001, patience=2).
```
name = "model_BiSeNet_resnet18_da_wd2_unfreeze"
learn.fit_one_cycle(100,slice(1e-5,1e-4),cbs=[
EarlyStoppingCallback(monitor='valid_loss', min_delta=0.0001, patience=2),
ShowGraphCallback(),
SaveModelCallback(monitor='valid_loss', min_delta=0.0001, fname=name, every_epoch=False)])
```
Check the results of the saved model.
```
learn.load("model_BiSeNet_resnet18_da_wd2_unfreeze")
learn.validate()
```
---
Define the model.
```
del model, learn
model = BiSeNet(backbone_name="resnet18", nclass=2)
```
Create the Learner with wd=1e-1 and set the working directory.
```
learn = Learner(dls=trainDLS, model=model, metrics=[Dice(), JaccardCoeff()], wd=1e-1)
learn.model_dir = "/content/drive/MyDrive/Colab Notebooks/BiSeNet"
```
Freeze and pick the learning rate.
```
learn.freeze()
learn.lr_find()
learn.recorder
```
Train the model with EarlyStoppingCallback on valid_loss (min_delta=0.0001, patience=2).
```
name = "model_BiSeNet_resnet18_da_wd1"
learn.fit_one_cycle(100,slice(1e-5,1e-4),cbs=[
EarlyStoppingCallback(monitor='valid_loss', min_delta=0.0001, patience=2),
ShowGraphCallback(),
SaveModelCallback(monitor='valid_loss', min_delta=0.0001, fname=name, every_epoch=False)])
```
Check the results of the saved model.
```
learn.load("model_BiSeNet_resnet18_da_wd1")
learn.validate()
```
Unfreeze and pick the learning rate.
```
learn.unfreeze()
learn.lr_find()
learn.recorder
```
Train the model with EarlyStoppingCallback on valid_loss (min_delta=0.0001, patience=2).
```
name = "model_BiSeNet_resnet18_da_wd1_unfreeze"
learn.fit_one_cycle(100,slice(1e-5,1e-4),cbs=[
EarlyStoppingCallback(monitor='valid_loss', min_delta=0.0001, patience=2),
ShowGraphCallback(),
SaveModelCallback(monitor='valid_loss', min_delta=0.0001, fname=name, every_epoch=False)])
```
Check the results of the saved model.
```
learn.load("model_BiSeNet_resnet18_da_wd1_unfreeze")
learn.validate()
```
# Models without data augmentation
Define the model.
```
del model, learn
model = BiSeNet(backbone_name="resnet18", nclass=2)
```
Create the Learner with wd=1e-2 and set the working directory.
```
learn = Learner(dls=train2DLS, model=model, metrics=[Dice(), JaccardCoeff()], wd=1e-2)
learn.model_dir = "/content/drive/MyDrive/Colab Notebooks/BiSeNet"
```
Freeze and pick the learning rate.
```
learn.freeze()
learn.lr_find()
learn.recorder
```
Train the model with EarlyStoppingCallback on valid_loss (min_delta=0.0001, patience=2).
```
name = "model_BiSeNet_resnet18_wd2"
learn.fit_one_cycle(100,slice(1e-5,1e-4),cbs=[
EarlyStoppingCallback(monitor='valid_loss', min_delta=0.0001, patience=2),
ShowGraphCallback(),
SaveModelCallback(monitor='valid_loss', min_delta=0.0001, fname=name, every_epoch=False)])
```
Check the results of the saved model.
```
learn.load("model_BiSeNet_resnet18_wd2")
learn.validate()
```
Unfreeze and pick the learning rate.
```
learn.unfreeze()
learn.lr_find()
learn.recorder
```
Train the model with EarlyStoppingCallback on valid_loss (min_delta=0.0001, patience=2).
```
name = "model_BiSeNet_resnet18_wd2_unfreeze"
learn.fit_one_cycle(100,slice(1e-5,1e-4),cbs=[
EarlyStoppingCallback(monitor='valid_loss', min_delta=0.0001, patience=2),
ShowGraphCallback(),
SaveModelCallback(monitor='valid_loss', min_delta=0.0001, fname=name, every_epoch=False)])
```
Check the results of the saved model.
```
learn.load("model_BiSeNet_resnet18_wd2_unfreeze")
learn.validate()
```
---
Define the model.
```
del model, learn
model = BiSeNet(backbone_name="resnet18", nclass=2)
```
Create the Learner with wd=1e-1 and set the working directory.
```
learn = Learner(dls=train2DLS, model=model, metrics=[Dice(), JaccardCoeff()], wd=1e-1)
learn.model_dir = "/content/drive/MyDrive/Colab Notebooks/BiSeNet"
```
Freeze and pick the learning rate.
```
learn.freeze()
learn.lr_find()
learn.recorder
```
Train the model with EarlyStoppingCallback on valid_loss (min_delta=0.0001, patience=2).
```
name = "model_BiSeNet_resnet18_wd1"
learn.fit_one_cycle(100,slice(1e-5,1e-4),cbs=[
EarlyStoppingCallback(monitor='valid_loss', min_delta=0.0001, patience=2),
ShowGraphCallback(),
SaveModelCallback(monitor='valid_loss', min_delta=0.0001, fname=name, every_epoch=False)])
```
Check the results of the saved model.
```
learn.load("model_BiSeNet_resnet18_wd1")
learn.validate()
```
Unfreeze and pick the learning rate.
```
learn.unfreeze()
learn.lr_find()
learn.recorder
```
Train the model with EarlyStoppingCallback on valid_loss (min_delta=0.0001, patience=2).
```
name = "model_BiSeNet_resnet18_wd1_unfreeze"
learn.fit_one_cycle(100,slice(1e-5,1e-4),cbs=[
EarlyStoppingCallback(monitor='valid_loss', min_delta=0.0001, patience=2),
ShowGraphCallback(),
SaveModelCallback(monitor='valid_loss', min_delta=0.0001, fname=name, every_epoch=False)])
```
Check the results of the saved model.
```
learn.load("model_BiSeNet_resnet18_wd1_unfreeze")
learn.validate()
```
# Evaluating the results
## Models with data augmentation
Load the first model on the CPU.
```
learn.load("model_BiSeNet_resnet18_da_wd2")
aux=learn.model
aux=aux.cpu()
```
Assign the test dataloader and validate.
```
learn.dls = testDLS
learn.validate()
```
Compare the expected output against the model's output.
```
learn.show_results(vmin=0,vmax=1)
```
Load the second model on the CPU.
```
learn.load("model_BiSeNet_resnet18_da_wd2_unfreeze")
aux=learn.model
aux=aux.cpu()
```
Assign the test dataloader and validate.
```
learn.dls = testDLS
learn.validate()
```
Compare the expected output against the model's output.
```
learn.show_results(vmin=0,vmax=1)
```
---
Load the third model on the CPU.
```
learn.load("model_BiSeNet_resnet18_da_wd1")
aux=learn.model
aux=aux.cpu()
```
Assign the test dataloader and validate.
```
learn.dls = testDLS
learn.validate()
```
Compare the expected output against the model's output.
```
learn.show_results(vmin=0,vmax=1)
```
Load the fourth model on the CPU.
```
learn.load("model_BiSeNet_resnet18_da_wd1_unfreeze")
aux=learn.model
aux=aux.cpu()
```
Assign the test dataloader and validate.
```
learn.dls = testDLS
learn.validate()
```
Compare the expected output against the model's output.
```
learn.show_results(vmin=0,vmax=1)
```
## Models without data augmentation
Load the first model on the CPU.
```
learn.load("model_BiSeNet_resnet18_wd2")
aux=learn.model
aux=aux.cpu()
```
Assign the test dataloader and validate.
```
learn.dls = testDLS
learn.validate()
```
Compare the expected output against the model's output.
```
learn.show_results(vmin=0,vmax=1)
```
Load the second model on the CPU.
```
learn.load("model_BiSeNet_resnet18_wd2_unfreeze")
aux=learn.model
aux=aux.cpu()
```
Assign the test dataloader and validate.
```
learn.dls = testDLS
learn.validate()
```
Compare the expected output against the model's output.
```
learn.show_results(vmin=0,vmax=1)
```
---
Load the third model on the CPU.
```
learn.load("model_BiSeNet_resnet18_wd1")
aux=learn.model
aux=aux.cpu()
```
Assign the test dataloader and validate.
```
learn.dls = testDLS
learn.validate()
```
Compare the expected output against the model's output.
```
learn.show_results(vmin=0,vmax=1)
```
Load the fourth model on the CPU.
```
learn.load("model_BiSeNet_resnet18_wd1_unfreeze")
aux=learn.model
aux=aux.cpu()
```
Assign the test dataloader and validate.
```
learn.dls = testDLS
learn.validate()
```
Compare the expected output against the model's output.
```
learn.show_results(vmin=0,vmax=1)
```
# Exporting the best model
Load the model on the CPU.
```
learn.load("model_BiSeNet_resnet18_da_wd1_unfreeze")
learn.dls = testDLS
learn.validate()
aux=learn.model
aux=aux.cpu()
```
Export the model with torch.jit.trace.
```
import torchvision.transforms as transforms
img = PILImage.create(path_images/'train/1D2_0.png')
transformer=transforms.Compose([transforms.Resize((50,50)),
transforms.ToTensor(),
transforms.Normalize(
[0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
img=transformer(img).unsqueeze(0)
img=img.cpu()
traced_cell=torch.jit.trace(aux, (img))
traced_cell.save("/content/drive/MyDrive/Colab Notebooks/BiSeNet/model_BiSeNet_resnet18.pkl")
```
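Once exported, the traced model can be loaded back without any of the training code (a sketch; whether the raw output needs an argmax or further post-processing depends on the BiSeNet implementation in `architectures`):
```
loaded = torch.jit.load("/content/drive/MyDrive/Colab Notebooks/BiSeNet/model_BiSeNet_resnet18.pkl")
loaded.eval()
with torch.no_grad():
    out = loaded(img)   # `img` is the preprocessed tensor from the cell above
print(type(out))
```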
```
#load our friends
import numpy as np
import math as mt
import lmfit
import csv
import scipy.stats as sta
import matplotlib.pyplot as plt
from lmfit import Minimizer, Parameters
from lmfit.lineshapes import gaussian
from lmfit.printfuncs import report_fit
#define a polynomial with a Gaussian; note polyval is a pre-coded polynomial
def polgaus(x, p0, p1, p2, p3, p4, p5, norm, mu, sigma):
pols=[p0,p1,p2,p3,p4,p5]
y = norm*(np.exp(-np.power((x-mu),2.)/(2.*sigma**2)))+ np.polyval(pols,x)
return y
#define just a polynomial
def polback(x, p0, p1, p2, p3, p4, p5):
pols=[p0,p1,p2,p3,p4,p5]
y = np.polyval(pols,x)
return y
def gaus(x,norm,mu,sigma):
y = norm*(np.exp(-np.power((x-mu),2.)/(2.*sigma**2)))
return y
#run the fit
def fitFile_fancy(label,outname, mu=3800,vary_mu=True):
x = []
y = []
y_err = []
with open(label,'r') as csvfile:
plots = csv.reader(csvfile, delimiter=' ')
for row in plots:
print('debug',row[1],row[2])
if float(row[1]) < 50:
continue
x.append(float(row[1]))
y.append(float(row[2]))
#add poisson uncertainties
y_err.append(mt.sqrt(float(row[2])))
    #Dumb trick to get uncertainties
weights = np.linspace(0.,len(y),num=len(y))
for i0 in range(len(y)):
weights[i0] = float(1./y_err[i0])
#Now setup the fit (for signal+background)
poly_mod = lmfit.Model(polback,prefix='pol_')
#pars = poly_mod.guess(y, x=x)
gauss1 = lmfit.Model(gaus,prefix='g1_')
#gauss2 = lmfit.Model(gaus,prefix='g2_')
pars = poly_mod.make_params(p0=-3.48924610e-06,p1=2.79987292e-03,p2=-9.00945726e-01,p3=1.45645139e+02,p4=-1.18689484e+04,p5=3.92197860e+05)
pars.update(gauss1.make_params(norm=10,mu=mu,sigma=40))
#pars.update(gauss2.make_params(norm=609.0,mu=mu_2,sigma=13.4))
print(pars)
#p = model.make_params(p0=-3.48924610e-06,p1=2.79987292e-03,p2=-9.00945726e-01,p3=1.45645139e+02,p4=-1.18689484e+04,p5=3.92197860e+05,
# norm_1=305.04,mu_1=mu_1,sigma_1=4.5,norm_2=609.0,mu_2=mu_2,sigma_2=13.4)
pars['g1_mu'].set(value = mu, min =3000.,max = 6000.,vary=vary_mu)
#pars['g2_mu'].set(value = mu_2, min =50.,max = 160.,vary=vary_mu)
mod = poly_mod+gauss1
#init = mod.eval(pars, x=x, weights=weights)
result = mod.fit(y, pars, x=x, weights=weights)
#plt.figure()
fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))
axes[0].plot(x, y, 'b')
#axes[0].plot(x, init, 'k--', label='initial fit')
axes[0].plot(x, result.best_fit, 'r-', label='best fit')
axes[0].legend(loc='best')
comps = result.eval_components(x=x)
axes[1].plot(x, y, 'b')
axes[1].plot(x, comps['g1_'], 'g--', label='Gaussian component')
#axes[1].plot(x, comps['g2_'], 'm--', label='Gaussian component 2')
axes[1].plot(x, comps['pol_'], 'k--', label='Polynomial background')
axes[1].legend(loc='best')
labels_x = "mass[GeV]"
labels_y = "Entries/bin"
#result.plot()
plt.xlabel(labels_x,position=(0.92,0.1))
plt.ylabel(labels_y,position=(0.1,0.84))
plt.savefig(outname+'.png')
print(result.fit_report())
return result.chisqr
#run the fit
def fitFile(label,mu=3800,vary_mu=True):
x = []
y = []
y_err = []
with open(label,'r') as csvfile:
plots = csv.reader(csvfile, delimiter=' ')
for row in plots:
print('debug',row[1],row[2])
if float(row[1]) < 50:
continue
x.append(float(row[1]))
y.append(float(row[2]))
#add poisson uncertainties
y_err.append(mt.sqrt(float(row[2])))
    #Dumb trick to get uncertainties
weights = np.linspace(0.,len(y),num=len(y))
for i0 in range(len(y)):
weights[i0] = float(1./y_err[i0])
#Now setup the fit (for signal+background)
model = lmfit.Model(polgaus)
p = model.make_params(p0=-3.48924610e-06,p1=2.79987292e-03,p2=-9.00945726e-01,p3=1.45645139e+02,p4=-1.18689484e+04,p5=3.92197860e+05,norm=3.53117893e+01,mu=mu,sigma=2.5)
p['mu'].set(vary=vary_mu)
result = model.fit(data=y, params=p, x=x, weights=weights)
plt.figure()
labels_x = "mass[GeV]"
labels_y = "Entries/bin"
result.plot()
plt.xlabel(labels_x,position=(0.92,0.1))
plt.ylabel(labels_y,position=(0.1,0.84))
print(result.fit_report())
return result.chisqr
#run the fit
def fitFile_doublegaus_fancy(label,outname,mu_1=80.38,mu_2=91.1876,vary_mu=True):
x = []
y = []
y_err = []
with open(label,'r') as csvfile:
plots = csv.reader(csvfile, delimiter=' ')
for row in plots:
#print('debug',row[1],row[2])
if float(row[1]) < 52:
continue
x.append(float(row[1]))
y.append(float(row[2]))
#add poisson uncertainties
y_err.append(mt.sqrt(float(row[2])))
    #Dumb trick to get uncertainties
weights = np.linspace(0.,len(y),num=len(y))
for i0 in range(len(y)):
weights[i0] = float(1./y_err[i0])
#Now setup the fit (for signal+background)
#model = lmfit.Model(poldoublegaus)
poly_mod = lmfit.Model(polback,prefix='pol_')
#pars = poly_mod.guess(y, x=x)
gauss1 = lmfit.Model(gaus,prefix='g1_')
gauss2 = lmfit.Model(gaus,prefix='g2_')
pars = poly_mod.make_params(p0=-3.48924610e-06,p1=2.79987292e-03,p2=-9.00945726e-01,p3=1.45645139e+02,p4=-1.18689484e+04,p5=3.92197860e+05)
pars.update(gauss1.make_params(norm=305.04,mu=mu_1,sigma=4.5))
pars.update(gauss2.make_params(norm=609.0,mu=mu_2,sigma=13.4))
print(pars)
#p = model.make_params(p0=-3.48924610e-06,p1=2.79987292e-03,p2=-9.00945726e-01,p3=1.45645139e+02,p4=-1.18689484e+04,p5=3.92197860e+05,
# norm_1=305.04,mu_1=mu_1,sigma_1=4.5,norm_2=609.0,mu_2=mu_2,sigma_2=13.4)
pars['g1_mu'].set(value = mu_1, min =50.,max = 160.,vary=vary_mu)
pars['g2_mu'].set(value = mu_2, min =50.,max = 160.,vary=vary_mu)
mod = poly_mod+gauss1+gauss2
#init = mod.eval(pars, x=x, weights=weights)
result = mod.fit(y, pars, x=x, weights=weights)
#plt.figure()
fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))
axes[0].plot(x, y, 'b')
#axes[0].plot(x, init, 'k--', label='initial fit')
axes[0].plot(x, result.best_fit, 'r-', label='best fit')
axes[0].legend(loc='best')
comps = result.eval_components(x=x)
axes[1].plot(x, y, 'b')
axes[1].plot(x, comps['g1_'], 'g--', label='Gaussian component 1')
axes[1].plot(x, comps['g2_'], 'm--', label='Gaussian component 2')
axes[1].plot(x, comps['pol_'], 'k--', label='Polynomial background')
axes[1].legend(loc='best')
labels_x = "mass[GeV]"
labels_y = "Entries/bin"
#result.plot()
plt.xlabel(labels_x,position=(0.92,0.1))
plt.ylabel(labels_y,position=(0.1,0.84))
plt.savefig(outname+'.png')
print(result.fit_report())
return result.chisqr
fitFile_fancy("blackbox2_hist.txt",'bb2',4000)
```
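Since the helpers expose `vary_mu`, one quick follow-up is to compare a fit with the Gaussian mean floating against one with the mean pinned at the same value (a sketch; the output file names here are made up):
```
chisq_float = fitFile_fancy("blackbox2_hist.txt", 'bb2_float', 4000)
chisq_fixed = fitFile_fancy("blackbox2_hist.txt", 'bb2_fixed', 4000, vary_mu=False)
print(chisq_float, chisq_fixed)
```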
```
import torch,torchvision
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
import numpy as np
import pandas as pd
import wandb
import os, json, cv2, random
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor,DefaultTrainer
from detectron2.config import get_cfg
from detectron2.structures import BoxMode
from tqdm import tqdm
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
data = pd.read_csv('./data.csv')
data
pred = {'instances':{'pred_classes':[],'pred_boxes':[]}}
# pred['instances'].pred_classes
xmin,ymin,xmax,ymax = 281,187,327,223
x = xmin
y = ymin
w = xmax - xmin
h = ymax - ymin
im = cv2.imread('./data/vid_4_1000.jpg')
roi=im[y:y+h,x:x+w]
cv2.imwrite(str('crop') + '.jpg', roi)
cv2.rectangle(im,(x,y),(x+w,y+h),(200,0,0),2)
cv2.imwrite(str('box') + '.jpg', im)
data
def load_data():
new_data = []
idx = len(data)
for i in tqdm(range(idx)):
record = {}
info = data.iloc[i]
img_path = f'./data/{info["image"]}'
img = cv2.imread(f'./data/{info["image"]}')
img = img / 255.0
record['file_name'] = img_path
record['image_id'] = i
record['height'],record['width'] = img.shape[:2]
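        # NOTE: every record below reuses the single (xmin, ymin, xmax, ymax) box defined
        # earlier in the notebook; per-image boxes would normally come from the data.csv row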
objs = [{
'bbox':[xmin,ymin,xmax,ymax],
'bbox_mode':BoxMode.XYXY_ABS,
'category_id':0
}]
record['annotations'] = objs
new_data.append(record)
return new_data
labels = ['car']
DatasetCatalog.register('data',lambda : load_data())
MetadataCatalog.get('data').set(thing_classes=labels)
metadata = MetadataCatalog.get('data')
wandb.init(sync_tensorboard=True,name='baseline')
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file('COCO-Detection/faster_rcnn_R_101_C4_3x.yaml'))
cfg.DATASETS.TRAIN = ('data',)
cfg.DATASETS.TEST = ()
cfg.TEST.EVAL_PERIOD = 100
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url('COCO-Detection/faster_rcnn_R_101_C4_3x.yaml')
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 2500
cfg.SOLVER.STEPS = []
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
predictor = DefaultPredictor(cfg)
# Look at training curves in tensorboard:
%load_ext tensorboard
%tensorboard --logdir output
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") # path to the model we just trained
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set a custom testing threshold
predictor = DefaultPredictor(cfg)
import matplotlib.pyplot as plt
from detectron2.utils.visualizer import ColorMode
im = cv2.imread('./data/vid_4_1000.jpg')
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1],
metadata=metadata,
scale=0.5,
instance_mode=ColorMode.IMAGE_BW
)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
plt.figure(figsize=(12,6))
plt.imshow(out.get_image()[:, :, ::-1])
plt.savefig('./pred.png')
plt.close()
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
evaluator = COCOEvaluator("data", output_dir="./output")
val_loader = build_detection_test_loader(cfg, "data")
metrics = inference_on_dataset(predictor.model, val_loader, evaluator)  # evaluate once and reuse the result
print(metrics)
wandb.log({'coco': metrics})
wandb.log(metrics)
```
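Beyond the visualisation, the prediction fields can be read off the `Instances` object directly (a short sketch using standard Detectron2 attributes):
```
inst = outputs["instances"].to("cpu")
boxes = inst.pred_boxes.tensor.numpy()   # (N, 4) boxes in XYXY format
scores = inst.scores.numpy()
classes = [labels[i] for i in inst.pred_classes.numpy()]
for box, score, cls in zip(boxes, scores, classes):
    print(cls, score, box)
```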
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
import torch
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
```
## Calculate the output of this network using the weights and bias tensors
y = activation(torch.sum(features * weights) + bias)
print(y)
```
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error
```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplication, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.
**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
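For example, checking the shapes here makes the mismatch easy to see:
```
print(features.shape)  # torch.Size([1, 5])
print(weights.shape)   # torch.Size([1, 5])
```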
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it returns a view of the original tensor, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.
I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.
> **Exercise**: Calculate the output of our little network using matrix multiplication.
```
## Calculate the output of this network using matrix multiplication
y = activation(torch.mm(features, weights.view(5,1)) + bias)
```
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
```
## Your solution here
h = activation(torch.mm(features, W1) + B1)
print(h)
output = activation(torch.mm(h, W2) + B2)
print(output)
```
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.
## Numpy to Torch and back
Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
np.set_printoptions(precision=8)
a = np.random.rand(4,3)
a
torch.set_printoptions(precision=8)
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
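If you don't want the memory to be shared, a minimal sketch of how to get an independent copy instead is to use `torch.tensor`, which copies the data:
```
import numpy as np
import torch

a = np.random.rand(4, 3)
b = torch.tensor(a)   # copies the data instead of sharing it with `a`
b.mul_(2)             # modifying b in place...
print(a)              # ...leaves the original Numpy array unchanged
```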
# Assignment 4
### Question 1.
part 1. Using open, and write.
a. Create a dictionary with the following:
Sport name and the number of players, and save it to a file.
sports = { 'basketball': 10, ... }
Save it to a text file, and load the file and print it on the screen.
part 2. Using Pickle save the dictionary into a binary file and load and print it out
part 3. Repeat the same steps using Shelve.
part 4. Repeat the same steps using Json.
```
# Part 1: Create a dictionary for a Sport team
f = open('mySportsTeam.txt', 'w')
f.write("Sport: Football [NFL]\n")
f.write("Team Name: New Orleans Saints\n")
f.write("Number of Players: 11\n")
f.write("\n")
f.write("\n")
f.close()
# Adding 11 players
class football_player:
def __init__(self, name, number, position):
self.name = name
self.number = number
self.position = position
def __str__(self):
return f'{self.name} {self.number} {self.position}'
player01 = football_player("Drew Brees", 9, "QB")
player02 = football_player("Kurt Coleman", 39, "S")
player03 = football_player("Austin Carr", 80, "WR")
player04 = football_player("Demario Davis", 56, "LB")
player05 = football_player("Justin Hardee", 34, "DB")
player06 = football_player("Cameron Jordan", 94, "DE")
player07 = football_player("Mark Ingram II", 22, "RB")
player08 = football_player("Thomas Morstead", 6, "P")
player09 = football_player("Michael Thomas", 13, "WR")
player10 = football_player("Alvin Kamara", 41, "RB")
player11 = football_player("Cameron Meredith", 81, "WR")
players = [player01, player02, player03, player04, player05, player06, player07, player08, player09, player10, player11]
with open('mySportsTeam.txt', 'a') as file:  # append so the header lines written above are kept
for p in players:
file.write(str("%s\n" % p))
with open('mySportsTeam.txt', 'r') as file:
    print(file.read())
# Part 2: Using pickle to save the roster into a binary file
import pickle
with open('mySportsTeam.pkl', mode='wb') as f:  # separate binary file so the text file from Part 1 is kept
    pickle.dump(players, f)
with open('mySportsTeam.pkl', mode='rb') as f:
    roster = pickle.load(f)
for player in roster:
    print(str(player))
# Part 3: Using shelve (a persistent, pickle-backed dictionary)
import shelve
with shelve.open('mySportsTeam_shelf') as db:
    db['players'] = players
with shelve.open('mySportsTeam_shelf', writeback=True) as db:
    db['players'].append(football_player('Jimmy Graham', 88, "WR"))
    shelved_players = db['players']
for player in shelved_players:
    print(str(player))
# Part 4: Using json (objects must be converted to plain dicts first)
import json
data = {}
data['Player'] = []
data['Player'].append({
    'name': 'Drew Brees',
    'number': '09',
    'position': 'QB'
})
with open('data.txt', 'w') as outfile:
    json.dump(data, outfile)
with open('players.json', mode='w') as f:
    json.dump([vars(p) for p in players], f)  # vars() turns each player object into a JSON-serializable dict
```
### Question 2.
Why is shelve preferred over pickle for large data sets?
Answer:
Pickle serializes an object as a byte stream in a file, and loading it back requires reading the whole object into memory at once.
Shelve builds on top of pickle and implements a persistent dictionary where objects are pickled but associated with a key (a string), so you can open the shelved file and access individual pickled objects via their keys.
For large data sets this means pickle forces you to load everything in one go, while shelve only unpickles the entries you actually ask for, which saves both memory and time.
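A small sketch of the difference in access pattern (file names and keys here are only illustrative):
```
import pickle
import shelve

# purely illustrative data: 1000 entries, each a list of 100 numbers
records = {f'key{i}': list(range(100)) for i in range(1000)}

# pickle: the whole object has to be written and read back in one go
with open('records.pkl', 'wb') as f:
    pickle.dump(records, f)
with open('records.pkl', 'rb') as f:
    everything = pickle.load(f)      # loads all 1000 entries into memory

# shelve: entries are stored under keys and unpickled on demand
with shelve.open('records_shelf') as db:
    db.update(records)
with shelve.open('records_shelf') as db:
    one_entry = db['key42']          # only this entry is unpickled
```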
### Question 3.
Using SQL Queries, create a database using python's sqlite3 module
We would like to have a database for a role-playing game. In this database there are 3 tables.
Character table.
<pre>
CharacterId INTEGER, Name TEXT, Race TEXT, Gender TEXT, Class TEXT, Health INTEGER, Mana INTEGER, InventoryID INTEGER
</pre>
Inventory table.
<pre>
InventoryId INTEGER, ItemId INTEGER, Quantity INTEGER
</pre>
Item table
<pre>
ItemId INTEGER, ItemName TEXT, ItemDescription TEXT, Consumable TINYINT
</pre>
Insert the following entries once the tables are created
<pre>
CharacterId, Name, Race, Gender, Class, Health, Mana, InventoryId
0, Bruce, Human, Male, Fighter, 100, 0, 0
1, Thrall, Orc, Male, Warrior, 120, 0, 1
2, Legolas, Elf, Male, Archer, 100, 0, 2
3, Edwyrd, Elf, Male, Fighter, 100, 0, 3
4, Lixiss, Elf, Female, Mage, 80, 100, 4
5, Jasmine, Fairy, Female, Mage, 80, 120, 5
</pre>
Insert the following entries into the Items table
<pre>
ItemId, ItemName, ItemDescription, Consumable
0, sword, a sharp blade, 0
1, axe, good for chopping things, 0
2, hammer, heavy but good for blacksmiths, 0
3, staff, long reaching stick, 0
4, bow, good for long ranged attacks, 0
5, quiver of arrows, required by the bow, 0
6, book, full of spells, 0
7, light mana potion, replish mana, 1
8, med mana potion, replish more mana, 1
9, light health potion, cures light wounds, 1
10, med health potion, cures medium wounds, 1
</pre>
Insert the following entries to the Inventory table
<pre>
InventoryId, ItemId, Quantity
0, 0, 1
0, 9, 5
1, 1, 1
1, 9, 2
1, 10, 2
2, 4, 1
2, 5, 30
2, 9, 4
3, 0, 1
3, 9, 4
4, 3, 1
4, 7, 5
5, 6, 1
5, 8, 3
</pre>
```
import sqlite3
db = sqlite3.connect('rpg_information.sqlite')
db.execute("CREATE TABLE IF NOT EXISTS Characters ( CharacterId INTEGER, Name TEXT, Race TEXT, Gender TEXT, Class TEXT, Health INTEGER, Mana INTEGER, InventoryID INTEGER)")
db.execute("INSERT INTO Characters ( CharacterId, Name, Race, Gender, Class, Health, Mana, InventoryID) VALUES (0, 'Bruce', 'Human', 'Male', 'Fighter', 100, 0, 0) ")
db.execute("INSERT INTO Characters ( CharacterId, Name, Race, Gender, Class, Health, Mana, InventoryID) VALUES (1, 'Thrall', 'Orc', 'Male', 'Warrior', 120, 0, 1) ")
db.execute("INSERT INTO Characters ( CharacterId, Name, Race, Gender, Class, Health, Mana, InventoryID) VALUES (2, 'Legolas', 'Elf', 'Male', 'Archer', 100, 0, 2) ")
db.execute("INSERT INTO Characters ( CharacterId, Name, Race, Gender, Class, Health, Mana, InventoryID) VALUES (3, 'Edwyrd', 'Elf', 'Male', 'Fighter', 100, 0, 3) ")
db.execute("INSERT INTO Characters ( CharacterId, Name, Race, Gender, Class, Health, Mana, InventoryID) VALUES (4, 'Lixiss', 'Elf', 'Female', 'Mage', 80, 100, 4) ")
db.execute("INSERT INTO Characters ( CharacterId, Name, Race, Gender, Class, Health, Mana, InventoryID) VALUES (5, 'Jasmine', 'Fairy', 'Female', 'Mage', 80, 120, 5) ")
#----------------------------------------------------------
db.execute("CREATE TABLE IF NOT EXISTS Inventory ( InventoryId INTEGER, ItemId INTEGER, Quantity INTEGER)")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (0, 0, 1) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (0, 9, 5) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (1, 1, 1) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (1, 9, 2) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (1, 10, 2) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (2, 4, 1) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (2, 5, 30) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (2, 9, 4) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (3, 0, 1) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (3, 9, 4) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (4, 3, 1) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (4, 7, 5) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (5, 6, 1) ")
db.execute("INSERT INTO Inventory ( InventoryId, ItemId, Quantity) VALUES (5, 8, 3) ")
#----------------------------------------------------------
db.execute("CREATE TABLE IF NOT EXISTS Itemtable ( ItemId INTEGER, ItemName TEXT, ItemDescription TEXT, Consumable TINYINT)")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (0, 'sword', 'a sharp blade', 0) ")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (1, 'axe', 'good for chopping things', 0) ")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (2, 'hammer', 'heavy but good for blacksmiths', 0) ")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (3, 'staff', 'long reaching stick', 0) ")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (4, 'bow', 'good for long ranged attacks', 0) ")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (5, 'quiver of arrows', 'required by the bow', 0) ")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (6, 'book', 'full of spells', 0) ")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (7, 'light mana potion', 'replish mana', 1) ")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (8, 'med mana potion', 'replish more mana', 1) ")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (9, 'light health potion', 'cures light wounds', 1) ")
db.execute("INSERT INTO Itemtable (ItemId, ItemName, ItemDescription, Consumable) VALUES (10, 'med health potion', 'cures medium wounds', 1) ")
```
Once all the data have been added, do the following:
a. display all the characters and all the items (this will require a SELECT * statement)
b. display only the characters who are elves (this will require the WHERE clause)
c. display the inventory for each character (this will require an INNER JOIN across the 3 tables)
```
db = sqlite3.connect('rpg_information.sqlite')
cursor = db.cursor()
for row in cursor.execute("SELECT * FROM Characters"):
print(row)
#--------------------------------------------------------
#cursor = db.cursor()
for row in cursor.execute("SELECT * FROM Itemtable"):
print(row)
#--------------------------------------------------------
#cursor = db.cursor()
for row in cursor.execute("SELECT * FROM Characters WHERE Class ='Elf'"):
print(row)
#cursor = db.cursor()
# Inventory for each character: join Characters -> Inventory -> Itemtable
for row in cursor.execute(
        "SELECT Characters.Name, Itemtable.ItemName, Inventory.Quantity "
        "FROM Characters "
        "INNER JOIN Inventory ON Characters.InventoryID = Inventory.InventoryId "
        "INNER JOIN Itemtable ON Inventory.ItemId = Itemtable.ItemId"):
    print(row)
```
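As an aside, the repeated `INSERT` statements above could be written more compactly with `executemany` and `?` placeholders; a minimal sketch using the Inventory rows from the assignment:
```
import sqlite3

db = sqlite3.connect('rpg_information.sqlite')
# Each tuple is one row; the ? placeholders are filled positionally.
inventory_rows = [
    (0, 0, 1), (0, 9, 5), (1, 1, 1), (1, 9, 2), (1, 10, 2),
    (2, 4, 1), (2, 5, 30), (2, 9, 4), (3, 0, 1), (3, 9, 4),
    (4, 3, 1), (4, 7, 5), (5, 6, 1), (5, 8, 3),
]
db.executemany(
    "INSERT INTO Inventory (InventoryId, ItemId, Quantity) VALUES (?, ?, ?)",
    inventory_rows,
)
db.commit()
db.close()
```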
<a href="https://colab.research.google.com/github/bminixhofer/nnsplit/blob/master/train/evaluate.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Setup
```
!git clone https://github.com/bminixhofer/nnsplit
!pip install -r nnsplit/train/requirements.txt
!wget -O "raw.de.gz" http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2018/mono/OpenSubtitles.raw.de.gz
!gunzip "raw.de.gz"
!wget -O "raw.en.gz" http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2018/mono/OpenSubtitles.raw.en.gz
!gunzip "raw.en.gz"
!python -m spacy download de_core_news_sm
!python -m spacy download en_core_web_sm
import sys
!pip install nnsplit
```
# Evaluate
```
import sys
sys.path.append("nnsplit/train")
import evaluate
from evaluate import OpenSubtitlesDataset, Evaluator
from torch.utils.data import Subset
from nnsplit import NNSplit
import spacy
import numpy as np
import torch
import pandas as pd
class NNSplitInterface:
def __init__(self, splitter):
self.splitter = splitter
def split(self, texts):
out = []
for split in self.splitter.split(texts):
out.append([str(x) for x in split])
return out
class SpacyInterface:
def __init__(self, name, use_sentencizer):
if use_sentencizer:
nlp = spacy.load(name, disable=["tagger", "parser", "ner"])
nlp.add_pipe(nlp.create_pipe("sentencizer"))
else:
nlp = spacy.load(name, disable=["tagger", "ner"])
self.nlp = nlp
def split(self, texts):
out = []
for doc in self.nlp.pipe(texts):
sentences = []
for sent in doc.sents:
sentences.append("".join([x.text + x.whitespace_ for x in sent]))
out.append(sentences)
return out
data = [
[
"German",
Subset(OpenSubtitlesDataset("../train_data/de/raw.de", 1_000_000), np.arange(100_000)),
{
"NNSplit": NNSplitInterface(NNSplit("models/german/model.onnx")),
"Spacy (Tagger)": SpacyInterface("de_core_news_sm", use_sentencizer=False),
"Spacy (Sentencizer)": SpacyInterface("de_core_news_sm", use_sentencizer=True)
}
],
    [
        "English",
        Subset(OpenSubtitlesDataset("../train_data/en/raw.en", 1_000_000), np.arange(100_000)),
        {
            # paths assumed to mirror the German setup above
            "NNSplit": NNSplitInterface(NNSplit("models/english/model.onnx")),
            "Spacy (Tagger)": SpacyInterface("en_core_web_sm", use_sentencizer=False),
            "Spacy (Sentencizer)": SpacyInterface("en_core_web_sm", use_sentencizer=True)
        }
    ],
]
eval_setups = {
"Clean": (0.0, 0.0),
"Partial punctuation": (0.5, 0.0),
"Partial case": (0.0, 0.5),
"Partial punctuation and case": (0.5, 0.5),
"No punctuation and case": (1.0, 1.0),
}
results = {}
preds = {}
for dataset_name, dataset, targets in data:
results[dataset_name] = {}
for eval_name, (remove_punct_prob, lower_start_prob) in eval_setups.items():
results[dataset_name][eval_name] = {}
evaluator = Evaluator(dataset, remove_punct_prob, lower_start_prob)
for target_name, interface in targets.items():
correct = evaluator.evaluate(interface.split)
preds[f"{dataset_name}_{eval_name}_{target_name}"] = {
"samples": evaluator.texts,
"correct": correct,
}
results[dataset_name][eval_name][target_name] = correct.mean()
pd.DataFrame.from_dict(results["German"]).T
pd.DataFrame.from_dict(results["English"]).T
```
## Boolean (logical) values
Python has a special data type for storing values that can only be true or false. These are called logical or Boolean values, after the mathematician and logician [George Boole](https://es.wikipedia.org/wiki/George_Boole), who created an entire branch of mathematics, [Boolean algebra](https://es.wikipedia.org/wiki/%C3%81lgebra_de_Boole).
Boolean variables can only take two values, true or false, represented in Python by the reserved words `True` and `False`. As in any algebra, there is a set of operations that work on Booleans and whose result is another Boolean value. These operations are the logical and (`and`), the logical or (`or`), and the inverse or logical negation (`not`).
In addition, comparison operations produce the logical results you would expect; for example, the expression `2 > 45`, when evaluated, produces a result that, in this case, is false.
In flow-control statements such as `if` or `while` we have already been using these logical values without defining them very precisely. We will now look at them in a bit more depth.
### Comparisons
All comparison operations produce a Boolean value as their result. The comparison operations are the same ones defined in mathematics, but for technical reasons the symbol is sometimes different; for example, for the comparison "is less than or equal to", whose symbol is ≤, we use the combination of symbols `<=`. Note also that the operator for checking whether two values are equal is `==`, that is, the equals sign twice (this is because the symbol `=` is already used for assigning values to variables).
| operator | The result is true if                                           |
|----------|-----------------------------------------------------------------|
| ==       | Both values are equal                                           |
| !=       | Both values are different (≠)                                   |
| <=       | The left operand is less than or equal to (≤) the right one     |
| <        | The left operand is strictly less than (<) the right one        |
| >=       | The left operand is greater than or equal to (⩾) the right one  |
| >        | The left operand is strictly greater than (>) the right one     |
These operations work on all data types, not only numeric ones. For example, we can compare two text strings just as we would compare numbers:
```
s = 'hola'
print(s == 'hola', s != 'hola')
```
### Conditions on sequences
We can check whether a given element is in a sequence, such as a list or a text string, using the `in` operator:
```
l = [123, 12, 45, 2, 44, -1]
print(45 in l)
vocales = 'aeiou'
print('f' in vocales)
```
### Logical operators
Logical operators let us evaluate more complex conditions. The `and` and `or` operators perform the _logical and_ and _logical or_ operations, whose results are shown in the following table (we abbreviate `True` and `False` as T and F respectively):
| a | b | a and b | a or b |
|---|---|---------|--------|
| F | F |    F    |   F    |
| F | T |    F    |   T    |
| T | F |    F    |   T    |
| T | T |    T    |   T    |
The result of the **xor** (_exclusive or_) operation can be obtained, if both values are Booleans, with the `!=` operator:
| a | b | a != b |
|---|---|--------|
| F | F |   F    |
| F | T |   T    |
| T | F |   T    |
| T | T |   F    |
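A minimal example of these operators in action:
```
a = True
b = False

print(a and b)   # False: `and` is True only when both operands are True
print(a or b)    # True: `or` is True when at least one operand is True
print(a != b)    # True: `!=` behaves like xor for Boolean values
```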
Finally, the logical operator `not` expects a single value, which it inverts: if it is true it becomes false, and if it is false it becomes true:
```
if not 3 == 4:
print('3 no es igual a cuatro')
```
**Exercise:** We have the following list of grades:
notas = [3, 4.5, 7, 6.2, 8.4, 3.2, 0.5, 5, 5.5, 6.5, 9, 10, 8, 7.3]
And we want to print it, but if a grade is lower than 5, we want to print the phrase 'No apto' (not passed) instead of the grade.
Hint: we will need a `for` loop to go through the list, and _inside the loop_ we will use an `if`.
```
notas = [3, 4.5, 7, 6.2, 8.4, 3.2, 0.5, 5, 5.5, 6.5, 9, 10, 8, 7.3]
...
```
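One possible solution, left here only as a sketch:
```
notas = [3, 4.5, 7, 6.2, 8.4, 3.2, 0.5, 5, 5.5, 6.5, 9, 10, 8, 7.3]
for nota in notas:
    if nota < 5:
        print('No apto')
    else:
        print(nota)
```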
# Titanic data set
```
# Starter code:
import pandas as pd
# read in the CSV
df = pd.read_csv('titanic.csv')
df.head()
```
## Plot the Histogram of Age for Titanic Dataset
```
import seaborn as sns
ls_age = df['Age'].dropna()
sns.distplot(ls_age, hist=True, kde=False, bins=16)
```
**kde=True**
```
import seaborn as sns
sns.distplot(df['Age'].dropna(), hist=True, kde=True, bins=16)
```
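Note that `distplot` is deprecated in recent seaborn releases; an equivalent plot with the newer API would look roughly like this:
```
import seaborn as sns

sns.histplot(df['Age'].dropna(), bins=16, kde=True)
```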
# What percent of passengers are younger than 40?
```
younger_than_40 = df[df['Age'] < 40]
pct_younger_than_40 = float(len(younger_than_40)) / float(len(df['Age'].dropna()))
print(len(younger_than_40), len(df['Age'].dropna()))
pct_younger_than_40
```
# Playing with the data
```
df.describe()
df.hist(column="Age")
df.max(axis=0)
df[df['Fare'] == df['Fare'].max()]  # row(s) with the maximum fare ('Value' is not a column in this dataset)
children = df[df['Age'] < 16]
print("All count: ", len(df["Age"]), " children count: ", len(children))
children.shape
children.hist()
children.head()
```
# Challenge 1
Describing Age with mean, mode and median
```
print("\n----------- Calculate Mean -----------\n")
print(df.mean())
print("\n----------- Calculate Median -----------\n")
print(df.median())
print("\n----------- Calculate Mode -----------\n")
print(df.mode())
```
# Challenge 2 | Probability
### What rate of the people that survived were female or younger than 16 yo?
```
women_and_children = df[(df['Sex'] == "female") | (df['Age'] < 16)]
w_a_c_survival_rate = women_and_children['Survived'].value_counts(normalize=True) * 100
print("Female/Child Survival rate on titanic: ",w_a_c_survival_rate)
```
### Chance of surviving Titanic
```
import matplotlib.pyplot as plt
# Child chance of survival
children = df[df['Age'] < 16]
surviving_children = df[(df['Age'] < 16) & (df['Survived'] == 1)]
child_chance_of_survival = surviving_children.shape[0] / children.shape[0]
format(child_chance_of_survival, ".0%")
# Woman chance of survival
women = df[(df['Sex'] == 'female') & (df['Age'] > 16)]
surviving_women = df[(df['Sex'] == 'female') & (df['Age'] > 16) & (df['Survived'] == 1)]
women_chance_of_survival = surviving_women.shape[0] / women.shape[0]
format(women_chance_of_survival, ".0%")
# Man chance of survival
adult_men = df[(df['Sex'].str.match('male')) & (df['Age'] > 16)]
a_m_survival_rate = adult_men['Survived'].value_counts(normalize=True) * 100
men_chance_of_survival = a_m_survival_rate[1] / 100
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
x_axis = ["Children", "Women", "Men"]
data = [child_chance_of_survival, women_chance_of_survival, men_chance_of_survival]
ax.bar(x_axis, data)
plt.show()
```
```
import datasets
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from consts import COUNTING_CMAP, HALF_PAGE_FIGSIZE, HISTOGRAM_CMAP
from matplotlib import colors
plt.style.use("mike.mplstyle")
dataset_root = "../datasets/"
genome_dset = datasets.Dataset.load_from_disk(dataset_root + "FLT_genome")
resist_dset = datasets.Dataset.load_from_disk(dataset_root + "PR_resist")
coreceptor_dset = datasets.Dataset.load_from_disk(dataset_root + "V3_coreceptor")
bodysite_dset = datasets.Dataset.load_from_disk(dataset_root + "V3_bodysite")
def make_genome_figure(genome_dset, ax):
prots = ["GagPol", "Vif", "Vpr", "Tat", "Rev", "Vpu", "Env", "Nef"]
prot_lens = pd.DataFrame(genome_dset)[prots].applymap(len)
expected = prot_lens.mode().iloc[0]
prot_lens["proteome"] = prot_lens.sum(axis=1)
mdf = pd.melt(
prot_lens,
value_vars=prots + ["proteome"],
value_name="length",
var_name="protein",
)
sns.histplot(
data=mdf,
y="protein",
x="length",
ax=ax,
lw=0.1,
cmap="crest",
vmin=1,
vmax=len(prot_lens),
cbar=True,
cbar_kws={
"location": "top",
"label": "Sequences",
"ticks": [1, 500, 1000, 1500, len(prot_lens)],
},
)
ax.set_ylabel("")
ax.set_xlabel("Protein Length")
sns.despine(ax=ax)
min_allowed = expected * 0.9
short = {}
for col in prots:
short[col] = prot_lens[col] < min_allowed[col]
info = {}
info["premature_stop"] = 100 * pd.DataFrame(short).any(axis=1).mean()
info["dset_size"] = prot_lens.sum().sum() / 1e6
info["per_orig"] = 100 * info["dset_size"] / (393)
return info
def make_resist_figure(resist_dset, ax):
drugs = ["FPV", "IDV", "NFV", "SQV"]
drug_data = pd.DataFrame(resist_dset)[drugs]
drug_data["MultiDrug"] = drug_data.sum(axis=1)
drug_data.head()
mdf = pd.melt(
drug_data.replace({True: 1, False: 0}),
value_vars=drugs + ["MultiDrug"],
value_name="value",
var_name="drug",
)
drug_counts = pd.crosstab(mdf["drug"], mdf["value"])
ax = drug_counts.loc[["MultiDrug"] + drugs].plot(kind="barh", stacked=True, ax=ax)
ax.legend([], [], frameon=False)
sns.despine(ax=ax)
ax.set_ylabel("")
ax.set_xlabel("Sequences")
ax.set_xticks([0, 500, 1000, len(drug_data)])
ax.set_xlim(0, len(drug_data))
info = {
"any": drug_data.any(axis=1).mean() * 100,
"all": drug_data.all(axis=1).mean() * 100,
"obs": len(drug_data),
}
return info
def make_coreceptor_figure(coreceptor_dset, ax):
receptors = ["CXCR4", "CCR5"]
coreceptor_data = pd.DataFrame(coreceptor_dset)[receptors]
coreceptor_data["DualTropic"] = coreceptor_data.all(axis=1)
mdf = pd.melt(
coreceptor_data,
value_vars=receptors + ["DualTropic"],
value_name="value",
var_name="receptor",
)
receptor_counts = pd.crosstab(mdf["receptor"], mdf["value"])
ax = receptor_counts.loc[["DualTropic"] + receptors].plot(
kind="barh", stacked=True, ax=ax
)
ax.legend([], [], frameon=False)
sns.despine(ax=ax)
ax.set_ylabel("")
ax.set_xlabel("Sequences")
ax.set_xticks([0, 1000, 2000, len(coreceptor_data)])
ax.set_xlim(0, len(coreceptor_data))
info = {
"dual": coreceptor_data["DualTropic"].mean() * 100,
"r5": coreceptor_data["CCR5"].mean() * 100,
"x4": coreceptor_data["CXCR4"].mean() * 100,
"obs": len(coreceptor_data),
}
return info
def make_bodysite_figure(bodysite_dset, ax):
sites = [
"periphery-tcell",
"periphery-monocyte",
"CNS",
"breast-milk",
"female-genitals",
"male-genitals",
"gastric",
"lung",
"organ",
]
bodysite_data = pd.DataFrame(bodysite_dset)[sites]
bodysite_data["MultiSite"] = bodysite_data.sum(axis=1)
mdf = pd.melt(
bodysite_data.replace({True: 1, False: 0}),
value_vars=sites + ["MultiSite"],
value_name="value",
var_name="site",
)
bodsite_counts = pd.crosstab(mdf["site"], mdf["value"])
ax = bodsite_counts.loc[["MultiSite"] + sites[::-1]].plot(
kind="barh", stacked=True, ax=ax
)
ax.legend([], [], frameon=False)
sns.despine(ax=ax)
ax.set_ylabel("")
ax.set_xlabel("Sequences")
ax.set_xticks([0, 2000, 4000, len(bodysite_data)])
ax.set_xlim(0, len(bodysite_data))
info = {
"multi": (bodysite_data["MultiSite"] > 1).mean() * 100,
"obs": len(bodysite_data),
}
return info
fig, axs = plt.subplots(2, 2, figsize=HALF_PAGE_FIGSIZE)
with sns.color_palette(HISTOGRAM_CMAP):
genome_info = make_genome_figure(genome_dset, axs[0, 0])
axs[0, 0].set_title("A", pad=50)
with sns.color_palette(COUNTING_CMAP):
resist_info = make_resist_figure(resist_dset, axs[0, 1])
axs[0, 1].set_title("B", pad=50)
cmap = plt.get_cmap(COUNTING_CMAP)
norm = colors.BoundaryNorm(np.arange(-0.5, 5.5, 1), 5)
sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
sm.set_array([])
cbar = fig.colorbar(
sm, ax=axs[0, 1], ticks=[0, 1, 2, 3, 4], location="top", label="Positives"
)
cbar.ax.set_xticklabels(["False\n0", "True\n1", 2, 3, 4])
body_info = make_bodysite_figure(bodysite_dset, axs[1, 0])
axs[1, 0].set_title("C", pad=10)
co_info = make_coreceptor_figure(coreceptor_dset, axs[1, 1])
axs[1, 1].set_title("D", pad=10)
fig.tight_layout()
try:
fig.savefig(str(snakemake.output['dataset_description']), dpi=300)
except NameError:
fig.savefig("Fig2-dataset_description-high.png", dpi=300)
statements = [
f'{genome_info["premature_stop"]:0.1f}% of genomes contained at least one gene with a premature stop-codon.',
f'When concatenated, this dataset contains {genome_info["dset_size"]:0.1f} million characters, approximately {genome_info["per_orig"]:0.1f}% of the size of the original training dataset',
f'Out of the {resist_info["obs"]} Protease sequences with known drug resistance, {resist_info["any"]:0.1f}% of sequences have resistance to at least one drug while {resist_info["all"]:0.1f}% have resistance to all four.',
f'Figure 2C describes the profile of body-sites where {body_info["obs"]} unique V3 sequences have been isolated with {body_info["multi"]:0.1f}% isolated from multiple locations.',
f'A partially overlapping set of {co_info["obs"]} V3 sequences contained coreceptor information with the majority being CCR5 binding {co_info["r5"]:0.1f}%, {co_info["x4"]:0.1f}% CXCR4 binding, and {co_info["dual"]:0.1f}% dual tropic.',
]
for s in statements:
print(s)
print("")
```
# Movement
You can move, rotate, and mirror a ComponentReference, as well as `Port`, `Polygon`, `CellArray`, `Label`, and `Group` objects.
```
import gdsfactory as gf
# Start with a blank Component
D = gf.Component()
# Create some more shape Devices
T = gf.components.text("hello", size=10, layer=(0, 0))
E = gf.components.ellipse(radii=(10, 5))
R = gf.components.rectangle(size=(10, 3), layer=(2, 0))
# Add the shapes to D as references
text = D << T
ellipse = D << E
rect1 = D << R
rect2 = D << R
D
c = gf.Component("move_one_ellipse")
e1 = c << gf.components.ellipse(radii=(10, 5))
e2 = c << gf.components.ellipse(radii=(10, 5))
e1.movex(10)
c
c = gf.Component("move_one_ellipse")
e1 = c << gf.components.ellipse(radii=(10, 5))
e2 = c << gf.components.ellipse(radii=(10, 5))
e2.xmin = e1.xmax
c
```
Now let's practice moving and rotating the objects:
```
D = gf.Component("ellipse")
E = gf.components.ellipse(radii=(10, 5))
e1 = D << E
e2 = D << E
D
c = gf.Component("ellipse")
e = gf.components.ellipse(radii=(10, 5))
e1 = c << e
e2 = c << e
e2.move(origin=[5, 5], destination=[10, 10]) # Translate by dx = 5, dy = 5
c
c = gf.Component("ellipse")
e = gf.components.ellipse(radii=(10, 5))
e1 = c << e
e2 = c << e
e2.move([5, 5]) # Translate by dx = 5, dy = 5
c
c = gf.Component("rectangles")
r = gf.components.rectangle(size=(10, 5), layer=(0, 0))
rect1 = c << r
rect2 = c << r
rect1.rotate(45) # Rotate the first straight by 45 degrees around (0,0)
rect2.rotate(
-30, center=[1, 1]
) # Rotate the second straight by -30 degrees around (1,1)
c
c = gf.Component()
text = c << gf.components.text("hello")
text.mirror(p1=[1, 1], p2=[1, 3]) # Reflects across the line formed by p1 and p2
c
c = gf.Component()
text = c << gf.components.text("hello")
c
```
Each Component and ComponentReference object has several properties which can be used to learn information about the object (for instance where its center coordinate is). Several of these properties can actually be used to move the geometry by assigning them new values.
Available properties are:
- `xmin` / `xmax`: minimum and maximum x-values of all points within the object
- `ymin` / `ymax`: minimum and maximum y-values of all points within the object
- `x`: centerpoint between minimum and maximum x-values of all points within the
object
- `y`: centerpoint between minimum and maximum y-values of all points within the
object
- `bbox`: bounding box (see note below) in format ((xmin,ymin),(xmax,ymax))
- `center`: center of bounding box
```
print("bounding box:")
print(
text.bbox
) # Will print the bounding box of text in terms of [(xmin, ymin), (xmax, ymax)]
print("xsize and ysize:")
print(text.xsize) # Will print the width of text in the x dimension
print(text.ysize) # Will print the height of text in the y dimension
print("center:")
print(text.center) # Gives you the center coordinate of its bounding box
print("xmax")
print(text.xmax) # Gives you the rightmost (+x) edge of the text bounding box
```
Let's use these properties to manipulate our shapes to arrange them a little
better
```
import gdsfactory as gf
c = gf.Component()
text = c << gf.components.text("hello")
E = gf.components.ellipse(radii=(10, 5))
R = gf.components.rectangle(size=(10, 5), layer=(0, 0))
rect1 = c << R
rect2 = c << R
ellipse = c << E
c
# First let's center the ellipse
ellipse.center = [
0,
0,
] # Move the ellipse such that the bounding box center is at (0,0)
# Next, let's move the text to the left edge of the ellipse
text.y = (
ellipse.y
) # Move the text so that its y-center is equal to the y-center of the ellipse
text.xmax = ellipse.xmin  # Move the text so its right edge (xmax) touches the ellipse's left edge (xmin)
# Align the right edge of the rectangles with the x=0 axis
rect1.xmax = 0
rect2.xmax = 0
# Move the rectangles above and below the ellipse
rect1.ymin = ellipse.ymax + 5
rect2.ymax = ellipse.ymin - 5
c
```
In addition to working with the properties of the references inside the Component, we can also manipulate the whole Component if we want. Let's try mirroring the whole Component `c`:
```
print(c.xmax) # Prints out '10.0'
c.mirror((0, 1)) # Mirror across line made by (0,0) and (0,1)
c
```
A bounding box is the smallest enclosing box which contains all points of the geometry.
```
# The gf.components.library has a handy bounding-box function
# which takes a bounding box and returns the rectangle points for it
import gdsfactory as gf
c = gf.Component()
text = c << gf.components.text("hi")
bbox = text.bbox
c << gf.components.bbox(bbox=bbox, layer=(2, 0))
c
# gf.get_padding_points can also add a bbox with respect to the bounding box edges
c = gf.Component()
text = c << gf.components.text("bye")
device_bbox = text.bbox
c.add_polygon(gf.get_padding_points(text, default=1), layer=(2, 0))
c
```
When we query the properties of `c`, they will be calculated with respect to this bounding-rectangle. For instance:
```
print("Center of Component c:")
print(c.center)
print("X-max of Component c:")
print(c.xmax)
D = gf.Component()
R = gf.components.rectangle(size=(10, 3), layer=(0, 0))
rect1 = D << R
D.plot()
```
You can chain many of the movement/manipulation functions because they all return the object they manipulate.
For instance you can combine two expressions:
```
rect1.rotate(angle=37)
rect1.move([10, 20])
D
```
...into this single-line expression
```
D = gf.Component()
R = gf.components.rectangle(size=(10, 3), layer=(2, 0))
rect1 = D << R
rect1.rotate(angle=37).move([10, 20])
D
```
|
github_jupyter
|
import gdsfactory as gf
# Start with a blank Component
D = gf.Component()
# Create some more shape Devices
T = gf.components.text("hello", size=10, layer=(0, 0))
E = gf.components.ellipse(radii=(10, 5))
R = gf.components.rectangle(size=(10, 3), layer=(2, 0))
# Add the shapes to D as references
text = D << T
ellipse = D << E
rect1 = D << R
rect2 = D << R
D
c = gf.Component("move_one_ellipse")
e1 = c << gf.components.ellipse(radii=(10, 5))
e2 = c << gf.components.ellipse(radii=(10, 5))
e1.movex(10)
c
c = gf.Component("move_one_ellipse")
e1 = c << gf.components.ellipse(radii=(10, 5))
e2 = c << gf.components.ellipse(radii=(10, 5))
e2.xmin = e1.xmax
c
D = gf.Component("ellipse")
E = gf.components.ellipse(radii=(10, 5))
e1 = D << E
e2 = D << E
D
c = gf.Component("ellipse")
e = gf.components.ellipse(radii=(10, 5))
e1 = c << e
e2 = c << e
e2.move(origin=[5, 5], destination=[10, 10]) # Translate by dx = 5, dy = 5
c
c = gf.Component("ellipse")
e = gf.components.ellipse(radii=(10, 5))
e1 = c << e
e2 = c << e
e2.move([5, 5]) # Translate by dx = 5, dy = 5
c
c = gf.Component("rectangles")
r = gf.components.rectangle(size=(10, 5), layer=(0, 0))
rect1 = c << r
rect2 = c << r
rect1.rotate(45) # Rotate the first straight by 45 degrees around (0,0)
rect2.rotate(
-30, center=[1, 1]
) # Rotate the second straight by -30 degrees around (1,1)
c
c = gf.Component()
text = c << gf.components.text("hello")
text.mirror(p1=[1, 1], p2=[1, 3]) # Reflects across the line formed by p1 and p2
c
c = gf.Component()
text = c << gf.components.text("hello")
c
print("bounding box:")
print(
text.bbox
) # Will print the bounding box of text in terms of [(xmin, ymin), (xmax, ymax)]
print("xsize and ysize:")
print(text.xsize) # Will print the width of text in the x dimension
print(text.ysize) # Will print the height of text in the y dimension
print("center:")
print(text.center) # Gives you the center coordinate of its bounding box
print("xmax")
print(text.xmax) # Gives you the rightmost (+x) edge of the text bounding box
import gdsfactory as gf
c = gf.Component()
text = c << gf.components.text("hello")
E = gf.components.ellipse(radii=(10, 5))
R = gf.components.rectangle(size=(10, 5), layer=(0, 0))
rect1 = c << R
rect2 = c << R
ellipse = c << E
c
# First let's center the ellipse
ellipse.center = [
0,
0,
] # Move the ellipse such that the bounding box center is at (0,0)
# Next, let's move the text to the left edge of the ellipse
text.y = (
ellipse.y
) # Move the text so that its y-center is equal to the y-center of the ellipse
text.xmax = ellipse.xmin  # Move the text so its xmax == the ellipse's xmin
# Align the right edge of the rectangles with the x=0 axis
rect1.xmax = 0
rect2.xmax = 0
# Move the rectangles above and below the ellipse
rect1.ymin = ellipse.ymax + 5
rect2.ymax = ellipse.ymin - 5
c
print(c.xmax) # Prints out '10.0'
c.mirror((0, 1)) # Mirror across line made by (0,0) and (0,1)
c
# The gf.components.library has a handy bounding-box function
# which takes a bounding box and returns the rectangle points for it
import gdsfactory as gf
c = gf.Component()
text = c << gf.components.text("hi")
bbox = text.bbox
c << gf.components.bbox(bbox=bbox, layer=(2, 0))
c
# gf.get_padding_points can also add a bbox with respect to the bounding box edges
c = gf.Component()
text = c << gf.components.text("bye")
device_bbox = text.bbox
c.add_polygon(gf.get_padding_points(text, default=1), layer=(2, 0))
c
print("Center of Component c:")
print(c.center)
print("X-max of Component c:")
print(c.xmax)
D = gf.Component()
R = gf.components.rectangle(size=(10, 3), layer=(0, 0))
rect1 = D << R
D.plot()
rect1.rotate(angle=37)
rect1.move([10, 20])
D
D = gf.Component()
R = gf.components.rectangle(size=(10, 3), layer=(2, 0))
rect1 = D << R
rect1.rotate(angle=37).move([10, 20])
D
| 0.436862 | 0.892563 |
# The $p$-Median Problem
## Summary
The goal of the $p$-median problem is to locate $p$ facilities so as to minimize the demand-weighted average distance between demand nodes and the nearest of the selected facilities. Hakimi (1964, 1965) first considered this problem for the design of network switch centers.
However, this problem has been used to model a wide range of applications, such as warehouse location, depot location, school districting and sensor placement.
## Problem Statement
The $p$-median problem can be formulated mathematically as an integer programming problem using the following model.
### Sets
$M$ = set of candidate locations
$N$ = set of customer demand nodes
### Parameters
$p$ = number of facilities to locate
$d_j$ = demand of customer $j$, $\forall j \in N$
$c_{ij}$ = unit cost of satisfying customer $j$ from facility $i$, $\forall i \in M, \forall j \in N$
### Variables
$x_{ij}$ = fraction of the demand of customer $j$ that is supplied by facility $i$, $\forall i \in M, \forall j \in N$
$y_i$ = a binary value that is $1$ if a facility is located at location $i$, $\forall i \in M$
### Objective
Minimize the demand-weighted total cost
$\min \sum_{i \in M} \sum_{j \in N} d_j c_{ij} x_{ij}$
### Constraints
All of the demand for customer $j$ must be satisfied
$\sum_{i \in M} x_{ij} = 1$, $\forall j \in N$
Exactly $p$ facilities are located
$\sum_{i \in M} y_i = p$
Demand nodes can only be assigned to open facilities
$x_{ij} \leq y_i$, $\forall i \in M, \forall j \in N$
The assignment variables must be non-negative
$x_{ij} \geq 0$, $\forall i \in M, \forall j \in N$
## Pyomo Formulation
The following is an abstract Pyomo model for this problem:
```
!cat p-median.py
```
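Note that `p-median.py` itself is not reproduced in this notebook. As a rough sketch only (names and details may differ from the distributed file), an abstract Pyomo model matching the formulation above, with locations and customers as numeric ranges, demands defaulting to $1$, and randomly assigned costs, could look like this:
```
import random
import pyomo.environ as pyo

model = pyo.AbstractModel()

# Number of candidate locations, customers, and facilities to open
model.m = pyo.Param(within=pyo.PositiveIntegers)
model.n = pyo.Param(within=pyo.PositiveIntegers)
model.p = pyo.Param(within=pyo.RangeSet(1, model.m))

model.M = pyo.RangeSet(1, model.m)  # candidate locations
model.N = pyo.RangeSet(1, model.n)  # customer demand nodes

model.d = pyo.Param(model.N, default=1.0)  # demands default to 1
model.c = pyo.Param(model.M, model.N,
                    initialize=lambda mdl, i, j: random.uniform(1.0, 2.0))  # random costs

model.x = pyo.Var(model.M, model.N, bounds=(0.0, 1.0))  # assignment fractions
model.y = pyo.Var(model.M, within=pyo.Binary)           # facility open/closed

def cost_rule(mdl):
    return sum(mdl.d[j] * mdl.c[i, j] * mdl.x[i, j] for i in mdl.M for j in mdl.N)
model.cost = pyo.Objective(rule=cost_rule)

def demand_rule(mdl, j):
    return sum(mdl.x[i, j] for i in mdl.M) == 1.0
model.demand = pyo.Constraint(model.N, rule=demand_rule)

def facilities_rule(mdl):
    return sum(mdl.y[i] for i in mdl.M) == mdl.p
model.facilities = pyo.Constraint(rule=facilities_rule)

def open_rule(mdl, i, j):
    return mdl.x[i, j] <= mdl.y[i]
model.open_fac = pyo.Constraint(model.M, model.N, rule=open_rule)
```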
****
This model is simplified in several respects. First, the candidate locations and customer locations are treated as numeric ranges. Second, the demand values, $d_j$ are initialized with a default value of $1$. Finally, the cost values, $c_{ij}$ are randomly assigned.
## Model Data
This model is parameterized by three values: the number of facility locations, the number of customers, and the number of facilities. For example:
```
!cat p-median.dat
```
****
## Solution
Pyomo includes a `pyomo` command that automates the construction and optimization of models. The GLPK solver can be used in this simple example:
```
!pyomo solve --solver=glpk p-median.py p-median.dat
```
By default, the optimization results are stored in the file `results.yml`:
```
!cat results.yml
```
****
This solution places facilities at locations 3, 6 and 9. Facility 3 meets the demand of customer 4, facility 6 meets the demand of customers 1, 2, 3 and 5, and facility 9 meets the demand of customer 6.
## References
* S.L. Hakimi (1964) Optimum location of switching centers and the absolute centers and medians of a graph. Oper Res 12:450–459
* S.L. Hakimi (1965) Optimum distribution of switching centers in a communication network and some related graph theoretic problems. Oper Res 13:462–475
|
github_jupyter
|
!cat p-median.py
!cat p-median.dat
!pyomo solve --solver=glpk p-median.py p-median.dat
!cat results.yml
| 0.398758 | 0.992604 |
## merge note lines into full note
#### CONFIGURATIONS
```
INPUT_DATASET="<bigquery-project-id>:i2b2_nlp_data"
RESULT_DATASET="<bigquery-project-id>:i2b2_nlp_data"
TARGET_GCS_BUCKET_SHC="<output-bucket>"
GOOGLE_CREDENTIAL="service-account-sample-key.json"
#DATAFLOW CONFIG
DATAFLOW_RUNNER_GCS_BUCKET="<gcs-bucket-for-staging-and-temp-files>"
DATAFLOW_PROJECT="<dataflow-project-id>"
DATAFLOW_DLP_PROJ="<dlp-project-id>"
DATAFLOW_SERVICEACCT="<service-account-email>"
DATAFLOW_MACHINE="n1-standard-8"
DATAFLOW_WORKER="1"
#END OF CONFIGURATION
import subprocess
import os
import json
import sys
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = str(GOOGLE_CREDENTIAL)
```
#### EXPORT DEID INPUT DATA TO GCS
Optional. Deid can also directly read from BigQuery
```
%%bash -s "$INPUT_DATASET" "$TARGET_GCS_BUCKET_SHC"
echo "#### NOTE ###"
echo "bq --location=US extract --destination_format NEWLINE_DELIMITED_JSON '$1.training_data' $2/input/i2b2/training_data/text-input-*.json"
echo "bq --location=US extract --destination_format NEWLINE_DELIMITED_JSON '$1.testing_data' $2/input/i2b2/testing_data/text-input-*.json"
```
### run deid on DataFlow
```
%%bash -s "$TARGET_GCS_BUCKET_SHC" "$DATAFLOW_RUNNER_GCS_BUCKET" "$DATAFLOW_PROJECT" "$DATAFLOW_DLP_PROJ" "$DATAFLOW_SERVICEACCT" "$DATAFLOW_MACHINE" "$DATAFLOW_WORKER"
cd ..
CMDPRE="mvn -Pdataflow-runner compile exec:java -Dexec.mainClass=com.github.susom.starr.deid.Main -Dexec.args=\"--project=$3 --dlpProject=$4 --serviceAccount=$5 --stagingLocation=$2/staging --gcpTempLocation=$2/temp --tempLocation=$2/temp --region=us-west1 --workerMachineType=$6 --maxNumWorkers=$7 --diskSizeGb=100 --runner=DataflowRunner --deidConfigFile=deid_config_general.yaml --inputType=gcp_gcs "
echo "#### NOTE ####"
echo "$CMDPRE --textIdFields=\"note_id\" --textInputFields=\"note_text\" --inputResource=$1/input/i2b2/training_data/text-input-*.json --outputResource=$1/i2b2/training_data/DEID_result\""
echo "$CMDPRE --textIdFields=\"note_id\" --textInputFields=\"note_text\" --inputResource=$1/input/i2b2/testing_data/text-input-*.json --outputResource=$2/i2b2/testing_data/DEID_result\""
```
### load result to BigQuery
```
%%bash -s "$TARGET_GCS_BUCKET_SHC" "$RESULT_DATASET"
cd ..
CMDPRE="bq --location=US load --autodetect --source_format=NEWLINE_DELIMITED_JSON "
echo "#### NOTE ####"
echo "$CMDPRE $2.training_data_deid \"$1/i2b2/training_data/DEID_result/DeidNote-*\" "
echo "$CMDPRE $2.testing_data_deid \"$1/i2b2/testing_data/DEID_result/DeidNote-*\" "
```
|
github_jupyter
|
INPUT_DATASET="<bigquery-project-id>:i2b2_nlp_data"
RESULT_DATASET="<bigquery-project-id>:i2b2_nlp_data"
TARGET_GCS_BUCKET_SHC="<output-bucket>"
GOOGLE_CREDENTIAL="service-account-sample-key.json"
#DATAFLOW CONFIG
DATAFLOW_RUNNER_GCS_BUCKET="<gcs-bucket-for-staging-and-temp-files>"
DATAFLOW_PROJECT="<dataflow-project-id>"
DATAFLOW_DLP_PROJ="<dlp-project-id>"
DATAFLOW_SERVICEACCT="<service-account-email>"
DATAFLOW_MACHINE="n1-standard-8"
DATAFLOW_WORKER="1"
#END OF CONFIGURATION
import subprocess
import os
import json
import sys
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = str(GOOGLE_CREDENTIAL)
%%bash -s "$INPUT_DATASET" "$TARGET_GCS_BUCKET_SHC"
echo "#### NOTE ###"
echo "bq --location=US extract --destination_format NEWLINE_DELIMITED_JSON '$1.training_data' $2/input/i2b2/training_data/text-input-*.json"
echo "bq --location=US extract --destination_format NEWLINE_DELIMITED_JSON '$1.testing_data' $2/input/i2b2/testing_data/text-input-*.json"
%%bash -s "$TARGET_GCS_BUCKET_SHC" "$DATAFLOW_RUNNER_GCS_BUCKET" "$DATAFLOW_PROJECT" "$DATAFLOW_DLP_PROJ" "$DATAFLOW_SERVICEACCT" "$DATAFLOW_MACHINE" "$DATAFLOW_WORKER"
cd ..
CMDPRE="mvn -Pdataflow-runner compile exec:java -Dexec.mainClass=com.github.susom.starr.deid.Main -Dexec.args=\"--project=$3 --dlpProject=$4 --serviceAccount=$5 --stagingLocation=$2/staging --gcpTempLocation=$2/temp --tempLocation=$2/temp --region=us-west1 --workerMachineType=$6 --maxNumWorkers=$7 --diskSizeGb=100 --runner=DataflowRunner --deidConfigFile=deid_config_general.yaml --inputType=gcp_gcs "
echo "#### NOTE ####"
echo "$CMDPRE --textIdFields=\"note_id\" --textInputFields=\"note_text\" --inputResource=$1/input/i2b2/training_data/text-input-*.json --outputResource=$1/i2b2/training_data/DEID_result\""
echo "$CMDPRE --textIdFields=\"note_id\" --textInputFields=\"note_text\" --inputResource=$1/input/i2b2/testing_data/text-input-*.json --outputResource=$2/i2b2/testing_data/DEID_result\""
%%bash -s "$TARGET_GCS_BUCKET_SHC" "$RESULT_DATASET"
cd ..
CMDPRE="bq --location=US load --autodetect --source_format=NEWLINE_DELIMITED_JSON "
echo "#### NOTE ####"
echo "$CMDPRE $2.training_data_deid \"$1/i2b2/training_data/DEID_result/DeidNote-*\" "
echo "$CMDPRE $2.testing_data_deid \"$1/i2b2/testing_data/DEID_result/DeidNote-*\" "
| 0.079585 | 0.356307 |
```
import sagas.graph.dgraph_helper as helper
import pydgraph
client=helper.reset('''
name: string @index(exact, term) .
rated: uid @reverse @count .
title: string @lang .
''')
import json_utils
feed_json=json_utils.read_json_file('data/graph/alice.json')
_=helper.set_json(client, feed_json)
helper.run_q(client, '''{
data(func: eq(name, "Alice")) {
name
car @facets
title
friend @facets {
name
car @facets
title@ru
}
}
}''')
import sagas.graph.dgraph_helper as helper
import pydgraph
import json_utils
from tqdm import tqdm
client=helper.reset('''
name: string @index(exact, term) .
nsubj: string @index(exact, term) .
dobj: string @index(exact) .
pobj: string @index(exact) .
attr: string @index(exact) .
sents: string @index(fulltext) @lang .
lemmas: string @index(term) .
verbs: string @index(term) .
''')
def list_with_suffix(dir, suffix):
import os
rs=[]
for root, dirs, files in os.walk(dir):
for file in files:
if file.endswith(suffix):
rs.append(os.path.join(root, file))
return rs
files=list_with_suffix('data/graph', '_feed.json')
for file in tqdm(files):
feed_json=json_utils.read_json_file(file)
_=helper.set_json(client, feed_json)
vars = {'$a': 'afraid'}
helper.query_with_vars(client, '''query data($a: string){
data(func: anyofterms(lemmas, $a)) {
sents@en:.
sents@fr
sents@de
sents@zh
sents@ja
sents@es
nsubj @facets
verbs
}
}''', vars)
import numpy
array = numpy.array([[11 ,22, 33], [44, 55, 66], [77, 88, 99]])
print("Printing 2D Array")
print(array)
print("Choose random row from 2D array")
randomRow = numpy.random.randint(3, size=2)
print('pickup', randomRow)
print(array[randomRow,:])
from sagas.nlu.corpus_helper import filter_term, lines, divide_chunks
dataf = "/pi/ai/seq2seq/fra-eng-2019/fra.txt"
pairs = lines(dataf)
total=len(pairs)
print('total', total)
array = numpy.array(pairs)
random_rows = numpy.random.randint(total, size=10)
print('pickup', random_rows)
print(array[random_rows,:])
rows=array[random_rows,:]
for r in rows:
print(r[0])
print('\t', r[1].strip())
from sagas.nlu.corenlp_helper import langs, extract_lemma, extract_pos
sents='Apple is looking at buying U.K. startup for $1 billion'
nlp=langs['en']()
doc = nlp(sents)
extract_lemma(doc)
from sagas.nlu.corenlp_helper import CoreNlpViz, nlp_en, nlp_fr
viz=CoreNlpViz()
viz.analyse(sents, nlp)
viz.f
import spacy
nlp_spacy = spacy.load('en_core_web_sm')
doc = nlp_spacy(sents)
def put_entities(doc, props):
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
props[ent.label_]=ent.text
facet="%s|%s"%(ent.label_, 'loc')
props[facet]="%d %d"%(ent.start_char, ent.end_char)
sentences=["Apple is looking at buying U.K. startup for $1 billion"]
dataset=[]
for sents in sentences:
props={}
doc = nlp_spacy(sents)
put_entities(doc, props)
dataset.append(props)
import json  # needed for json.dumps below
print(json.dumps(dataset, indent=2))
doc = nlp_spacy(u"Mr. Best flew to New York on Saturday morning.")
ents = list(doc.ents)
print(ents[0].label)
print(ents[0].label_)
print(ents[0].text)
def doc_collect(doc):
toks={'text':[], 'lemma':[], 'pos':[], 'tag':[], 'dep':[]}
for token in doc:
# print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
# token.shape_, token.is_alpha, token.is_stop)
toks['text'].append(token.text)
toks['lemma'].append(token.lemma_)
toks['pos'].append(token.pos_)
toks['tag'].append(token.tag_)
toks['dep'].append(token.dep_)
return toks
doc = nlp_spacy(u'Apple is looking at buying U.K. startup for $1 billion')
toks=doc_collect(doc)
lemmas=' '.join(toks['lemma'])
print(lemmas)
```
|
github_jupyter
|
import sagas.graph.dgraph_helper as helper
import pydgraph
client=helper.reset('''
name: string @index(exact, term) .
rated: uid @reverse @count .
title: string @lang .
''')
import json_utils
feed_json=json_utils.read_json_file('data/graph/alice.json')
_=helper.set_json(client, feed_json)
helper.run_q(client, '''{
data(func: eq(name, "Alice")) {
name
car @facets
title
friend @facets {
name
car @facets
title@ru
}
}
}''')
import sagas.graph.dgraph_helper as helper
import pydgraph
import json_utils
from tqdm import tqdm
client=helper.reset('''
name: string @index(exact, term) .
nsubj: string @index(exact, term) .
dobj: string @index(exact) .
pobj: string @index(exact) .
attr: string @index(exact) .
sents: string @index(fulltext) @lang .
lemmas: string @index(term) .
verbs: string @index(term) .
''')
def list_with_suffix(dir, suffix):
import os
rs=[]
for root, dirs, files in os.walk(dir):
for file in files:
if file.endswith(suffix):
rs.append(os.path.join(root, file))
return rs
files=list_with_suffix('data/graph', '_feed.json')
for file in tqdm(files):
feed_json=json_utils.read_json_file(file)
_=helper.set_json(client, feed_json)
vars = {'$a': 'afraid'}
helper.query_with_vars(client, '''query data($a: string){
data(func: anyofterms(lemmas, $a)) {
sents@en:.
sents@fr
sents@de
sents@zh
sents@ja
sents@es
nsubj @facets
verbs
}
}''', vars)
import numpy
array = numpy.array([[11 ,22, 33], [44, 55, 66], [77, 88, 99]])
print("Printing 2D Array")
print(array)
print("Choose random row from 2D array")
randomRow = numpy.random.randint(3, size=2)
print('pickup', randomRow)
print(array[randomRow,:])
from sagas.nlu.corpus_helper import filter_term, lines, divide_chunks
dataf = "/pi/ai/seq2seq/fra-eng-2019/fra.txt"
pairs = lines(dataf)
total=len(pairs)
print('total', total)
array = numpy.array(pairs)
random_rows = numpy.random.randint(total, size=10)
print('pickup', random_rows)
print(array[random_rows,:])
rows=array[random_rows,:]
for r in rows:
print(r[0])
print('\t', r[1].strip())
from sagas.nlu.corenlp_helper import langs, extract_lemma, extract_pos
sents='Apple is looking at buying U.K. startup for $1 billion'
nlp=langs['en']()
doc = nlp(sents)
extract_lemma(doc)
from sagas.nlu.corenlp_helper import CoreNlpViz, nlp_en, nlp_fr
viz=CoreNlpViz()
viz.analyse(sents, nlp)
viz.f
import spacy
nlp_spacy = spacy.load('en_core_web_sm')
doc = nlp_spacy(sents)
def put_entities(doc, props):
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
props[ent.label_]=ent.text
facet="%s|%s"%(ent.label_, 'loc')
props[facet]="%d %d"%(ent.start_char, ent.end_char)
sentences=["Apple is looking at buying U.K. startup for $1 billion"]
dataset=[]
for sents in sentences:
props={}
doc = nlp_spacy(sents)
put_entities(doc, props)
dataset.append(props)
import json  # needed for json.dumps below
print(json.dumps(dataset, indent=2))
doc = nlp_spacy(u"Mr. Best flew to New York on Saturday morning.")
ents = list(doc.ents)
print(ents[0].label)
print(ents[0].label_)
print(ents[0].text)
def doc_collect(doc):
toks={'text':[], 'lemma':[], 'pos':[], 'tag':[], 'dep':[]}
for token in doc:
# print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
# token.shape_, token.is_alpha, token.is_stop)
toks['text'].append(token.text)
toks['lemma'].append(token.lemma_)
toks['pos'].append(token.pos_)
toks['tag'].append(token.tag_)
toks['dep'].append(token.dep_)
return toks
doc = nlp_spacy(u'Apple is looking at buying U.K. startup for $1 billion')
toks=doc_collect(doc)
lemmas=' '.join(toks['lemma'])
print(lemmas)
| 0.136335 | 0.192957 |
# Artificial Neural Network (ANN) model
This is the code for the completed neural network. Each part of it is explained in more detail throughout this document.
```
class NeuralNetwork(object):
def __init__(self):
# Define Hyperparameters
self.inputLayerSize = 2
self.hiddenLayerSize = 3
self.outputLayerSize = 1
# Weights (parameters)
self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize)
self.W2 = np.random.randn(self.hiddenLayerSize, self.outputLayerSize)
def forwardPropagation(self, X):
# Propagate inputs through network
self.z2 = np.dot(X, self.W1)
self.a2 = self.sigmoid(self.z2)
self.z3 = np.dot(self.a2, self.W2)
yHat = self.sigmoid(self.z3)
return yHat
def sigmoid(self, z):
return 1/(1+np.exp(-z))
def sigmoidPrime(self, z):
return np.exp(-z)/((1+np.exp(-z))**2)
```
## Problem
Suppose we want to predict our test score based on how many hours we sleep and how many hours we study the night before. In other words, we want to predict the output value $y$ (the test score) from a set of input values $X$ (hours of sleep, hours of study).
```
%pylab inline
import numpy as np
# X = (hours sleeping, hours studying), y = Score on test
X = np.array(([3,5], [5,1], [10,2]), dtype=float)
y = np.array(([75], [82], [93]), dtype=float)
X
y
```
This is a supervised regression problem. It's supervised because our examples have outputs ($y$). It's regression because we're predicting the test score, which is a continuous output.
We want to scale the data so the result is in the interval $[0,1]$.
```
X = X/np.amax(X, axis=0)
y = y/100 # Max test score is 100
X
y
```
Now we can start building the neural network. It will have 2 inputs ($X$) and 1 output ($y$). We call our output $\hat{y}$ because it is an estimate of $y$. We will be using a hidden layer with 3 neurons. Finally, we will be using sigmoid activation functions.
## Forward Propagation
### Variables
|Code symbol|Math symbol|Definition|Dimensions|
|--|--|--|--|
|X|$X$|Input Data, each row is an example|(numExamples, inputLayerSize)|
|y|$y$|Target data|(numExamples, outputLayerSize)|
|W1|$W^{(1)}$|Layer 1 Weights|(inputLayerSize, hiddenLayerSize)|
|W2|$W^{(2)}$|Layer 2 Weights|(hiddenLayerSize, outputLayerSize)|
|z2|$z^{(2)}$|Layer 2 Activation|(numExamples, hiddenLayerSize)|
|a2|$a^{(2)}$|Layer 2 Activity|(numExamples, hiddenLayerSize)|
|z3|$z^{(3)}$|Layer 3 Activation|(numExamples, outputLayerSize)|
$$
\begin{align}
z^{(2)} &= XW^{(1)} \\
a^{(2)} &= f(z^{(2)}) \\
z^{(3)} &= a^{(2)}W^{(2)} \\
\hat{y} &= f(z^{(3)})
\end{align}
$$
Each input value in matrix $X$ should be multiplied by a corresponding weight and then added together with all the other results for each neuron.
$z^{(2)}$ is the activity of our second layer and it can be calculated as the following:
$$
z^{(2)} = XW^{(1)} = \begin{bmatrix}
3 & 5 \\
5 & 1 \\
10 & 2
\end{bmatrix}
\begin{bmatrix}
W_{11}^{(1)} & W_{12}^{(1)} & W_{13}^{(1)}\\
W_{21}^{(1)} & W_{22}^{(1)} & W_{23}^{(1)}
\end{bmatrix} = \begin{bmatrix}
3 W_{11}^{(1)} + 5 W_{21}^{(1)} & 3 W_{12}^{(1)} + 5 W_{22}^{(1)} & 3 W_{13}^{(1)} + 5 W_{23}^{(1)} \\
5 W_{11}^{(1)} + W_{21}^{(1)} & 5 W_{12}^{(1)} + W_{22}^{(1)} & 5 W_{13}^{(1)} + W_{23}^{(1)} \\
10 W_{11}^{(1)} + 2 W_{21}^{(1)} & 10 W_{12}^{(1)} + 2 W_{22}^{(1)} & 10 W_{13}^{(1)} + 2 W_{23}^{(1)}
\end{bmatrix}
$$
Note that each entry in $z$ is a sum of weighted inputs to each hidden neuron. $z$ is a $3\times 3$ matrix, one row for each sample, and one column for each hidden unit.
### Activation function - Sigmoid
Now that we have the activities for our second layer, $z^{(2)} = XW^{(1)}$, we need to apply the activation function. We'll independently apply the sigmoid function to each entry in the matrix $z$.
```
NN = NeuralNetwork()
testInput = np.arange(-6,6,0.01)
plot(testInput, NN.sigmoid(testInput), color='b', linewidth=2)
grid(1)
```
Let's see how the sigmoid() takes an input and returns the result:
```
NN.sigmoid(1)
NN.sigmoid(np.array([-1,0,1]))
NN.sigmoid(np.random.randn(3,3))
```
### Weight-matrices $W^{(1)}$ and $W^{(2)}$
These are initialized in the \__init__\() method with random numbers.
### Implementing forward propagation
Using our activation function $f$, we can write that our second layer activity $a^{(2)} = f(z^{(2)})$. The $a^{(2)}$ will be a matrix of the same size ($3 \times 3$).
To finish forward propagation we want to propagate $a^{(2)}$ all the way to the output $\hat{y}$.
All we have to do now is multiply $a^{(2)}$ by our second layer weights $W^{(2)}$ and apply one more activation function. The $W^{(2)}$ will be of size $3 \times 1$, one weight for each synapse:
$$z^{(3)}=a^{(2)}W^{(2)}$$
Multiplying $a^{(2)}$, a ($3 \times 3$ matrix), by $W^{(2)}$, a ($3 \times 1$ matrix) results in a $3 \times 1$ matrix $z^{(3)}$, the activity of our 3rd layer. The $z^{(3)}$ has three activity values, one for each sample.
Then we'll apply our activation function to $z^{(3)}$ yielding our estimate of test score, $\hat{y}$:
$$\hat{y}=f(z^{(3)})$$
### Getting an estimate of test score
Now we have a class capable of estimating our test score given how many hours we sleep and how many hours we study.
```
X
NN = NeuralNetwork()
yHat = NN.forwardPropagation(X)
yHat
y
bar([0,1,2], y, width=0.35, alpha=0.8, color='b')
bar([0.35,1.35,2.35], yHat, width=0.35, color='r', alpha=0.8)
grid(1)
legend(['y', 'yHat']);
```
We can see that our predictions $\hat{y}$ are pretty inaccurate.
## Gradient Descent
### Cost function $J$
To improve our model, we need to find a way of quantifying exactly how wrong our predictions are. One way of doing this is to use a cost function. For a given sample, a cost function tells us how costly our model is.
We'll use the sum of squared errors to compute an overall cost, and we'll try to minimize it. In fact, training a network means minimizing a cost function:
$$J = \frac{1}{2}\sum\limits_{i=1}^N(y_i-\hat{y}_i)^2$$
where $N$ is the number of training samples. We can make $J$ as small as possible with an optimal combination of the weights.
### Curse of dimensionality
Suppose we want to find the optimal weight value for one weight:
```
import time
weightToTry = np.linspace(-5,5,1000)
costs = np.zeros(1000)
startTime = time.time()  # time.clock() was removed in Python 3.8
for i in range(1000):
NN.W1[0,0] = weightToTry[i]
yHat = NN.forwardPropagation(X)
costs[i] = 0.5*sum((y-yHat)**2)
endTime = time.time()
elapsedTime = endTime - startTime
elapsedTime
```
It took about 0.03 seconds to check 1000 different weight values for our neural network.
Here is the plot for the 1000 weights:
```
plot(weightToTry, costs, color='b')
grid(1)
xlabel('Weight')
ylabel('Cost')
```
If we want to optimize 2 weights it will take 1000\*1000 iterations to check all the combinations. If we want to check all nine weights it will take:
```
elapsedTime*(1000**(9-1))/(3600*24*365)/1000
```
almost 1 trillion millennia. Needless to say, this is infeasible.
### Gradient descent method
There are two variants of gradient descent: batch (standard) and stochastic. We're going to use batch gradient descent to train our neural network.
The batch gradient descent method sums up the derivatives of $J$ over all samples:
$$ \sum\frac{dJ}{dW} $$
## Backpropagation of Errors
Backpropagation (backward propagation of errors) is an algorithm used to train artificial neural networks; it can update the weights very efficiently.
Basically, backpropagation is just a very computationally efficient approach to compute the derivatives of a complex cost function, and our goal is to use those derivatives to determine the weight coefficients of a multi-layer neural network.
In other words, the method calculates the gradient ($\frac{dJ}{dW}$) of a cost (loss or objective) function with respect to all the weights in the network, so that the gradient can be fed to the gradient descent method, which in turn uses it to update the weights in order to minimize the cost function.
Since backpropagation requires known target data for each input value in order to calculate the cost function gradient, it is usually used in supervised learning.
This will require additional variables, so our table now becomes:
|Code symbol|Math symbol|Definition|Dimensions|
|--|--|--|--|
|X|$X$|Input Data, each row is an example|(numExamples, inputLayerSize)|
|y|$y$|Target data|(numExamples, outputLayerSize)|
|W1|$W^{(1)}$|Layer 1 Weights|(inputLayerSize, hiddenLayerSize)|
|W2|$W^{(2)}$|Layer 2 Weights|(hiddenLayerSize, outputLayerSize)|
|z2|$z^{(2)}$|Layer 2 Activation|(numExamples, hiddenLayerSize)|
|a2|$a^{(2)}$|Layer 2 Activity|(numExamples, hiddenLayerSize)|
|z3|$z^{(3)}$|Layer 3 Activation|(numExamples, outputLayerSize)|
|J|$J$|Cost|(1, outputLayerSize)|
|dJdz3|$\frac{\partial J}{\partial z^{(3)}}$|Partial derivative of cost with respect to $z^{(3)}$|(numExamples, outputLayerSize)|
|dJdW2|$\frac{\partial J}{\partial W^{(2)}}$|Partial derivative of cost with respect to $W^{(2)}$|(hiddenLayerSize, outputLayerSize)|
|dz3dz2|$\frac{\partial z^{(3)}}{\partial z^{(2)}}$|Partial derivative of $z^{(3)}$ with respect to $z^{(2)}$|(numExamples, hiddenLayerSize)|
|dJdW1|$\frac{\partial J}{\partial W^{(1)}}$|Partial derivative of cost with respect to $W^{(1)}$|(inputLayerSize, hiddenLayerSize)|
|delta2|$\delta^{(2)}$|Backpropagating Error 2|(numExamples, hiddenLayerSize)|
|delta3|$\delta^{(3)}$|Backpropagating Error 3|(numExamples, outputLayerSize)|
### Computing gradient $\frac{dJ}{dW}$
We have a hidden layer and an output layer. So, we need to compute two gradients overall: $\frac{\partial J}{\partial W^{(1)}}$, and $\frac{\partial J}{\partial W^{(2)}}$, the gradient with respect to the weights of the hidden layer, and the gradient with respect to the weights of the output layer, respectively.
A way of quantifying exactly how wrong (or correct) our predictions are is by using a cost function.
We'll use sum of square errors, the difference between target (known) data and the value estimated by our network to compute an overall cost and we'll try to minimize it. the cost function is:
$$J=\frac{1}{2}\sum\limits_{i=1}^N(y_i-\hat{y}_i)^2$$
where $N$ is the number of training samples. Here, the $J$ is the error of the network for a single training iteration. Note that $\sum$ is required for our batch gradient descent algorithm.
To perform gradient descent, we need an equation and some code for our gradient $\frac{dJ}{dW}$.
We'll separate our $\frac{dJ}{dW}$ computation by computing $\frac{\partial J}{\partial W^{(1)}}$, and $\frac{\partial J}{\partial W^{(2)}}$ independently.
Let's work on $\frac{\partial J}{\partial W^{(2)}}$ first, which is for the output layer.
The sum in our cost function adds the error from each sample to create our overall cost. We'll take advantage of the sum rule in differentiation. We can move our $\sum$ outside and worry about the derivative of the inside expression first:
$$\frac{\partial J}{\partial W^{(2)}} = \frac{\partial(\sum \frac{1}{2}(y-\hat{y})^2)}{\partial W^{(2)}} = \sum \frac{\partial(\frac{1}{2}(y-\hat{y})^2)}{\partial W^{(2)}}$$
Unlike $\hat{y}$ which depends on $W^{(2)}$, $y$ is constant. So, $\frac{\partial y}{\partial W^{(2)}}=0$, and we have the following:
$$\frac{\partial J}{\partial W^{(2)}}=-\sum(y-\hat{y})\frac{\partial\hat{y}}{\partial W^{(2)}}$$
We now need to think about the derivative of $\frac{\partial\hat{y}}{\partial W^{(2)}}$. From our earlier equation $$\hat{y}=f(z^{(3)})$$
we know that $\hat{y}$ is our activation function of $z^{(3)}$, so we may want to apply the chain rule again to break $\frac{\partial\hat{y}}{\partial W^{(2)}}$ into $\frac{\partial\hat{y}}{\partial z^{(3)}}$ times $\frac{\partial z^{(3)}}{\partial W^{(2)}}$:
$$\frac{\partial J}{\partial W^{(2)}} = -\sum (y-\hat{y}) \frac{\partial\hat{y}}{\partial z^{(3)}} \frac{\partial z^{(3)}}{\partial W^{(2)}}$$.
To keep things simple, we'll drop our summation. Once we've computed $\frac{\partial J}{\partial W}$ for a single sample, we'll add up all our individual derivative terms.
To find the rate of change of $\hat{y}$ with respect to $z^{(3)}$, we need to differentiate our sigmoid activation function with respect to $z$:
$$f(z) = \frac {1}{1+e^{-z}}$$
$$f^\prime(z) = \frac {e^{-z}} {(1+e^{-z})^2}$$
### Code for sigmoid prime $f'(z)$
The code for the sigmoid prime function will be:
```
def sigmoidPrime(self, z):
return np.exp(-z)/((1+np.exp(-z))**2)
```
Plotted together, the sigmoid and its derivative look like this:
```
NN = NeuralNetwork()
sigTestValues = np.arange(-5,5,0.1)
plot(sigTestValues, NN.sigmoid(sigTestValues), linewidth=2, color='b')
plot(sigTestValues, NN.sigmoidPrime(sigTestValues), linewidth=2, color='r')
grid(1)
legend(['f', "f'"]);
```
### Backpropagation Errors ($\delta$)
http://www.bogotobogo.com/python/scikit-learn/Artificial-Neural-Network-ANN-4-Backpropagation.php
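The linked article works through the remaining algebra. As a sketch consistent with the variable table above (one common way to implement it, not necessarily identical to the original source), the cost and its gradients can be added as methods of the `NeuralNetwork` class:
```
# Sketch: cost and gradients via backpropagation, using the variable names
# defined in the table above. An illustrative implementation, not the
# original author's code.
def costFunction(self, X, y):
    # J = 1/2 * sum((y - yHat)^2)
    self.yHat = self.forwardPropagation(X)
    return 0.5 * np.sum((y - self.yHat)**2)

def costFunctionPrime(self, X, y):
    # Returns dJ/dW1 and dJ/dW2 for the whole batch.
    self.yHat = self.forwardPropagation(X)

    delta3 = np.multiply(-(y - self.yHat), self.sigmoidPrime(self.z3))  # backpropagating error 3
    dJdW2 = np.dot(self.a2.T, delta3)                                   # gradient w.r.t. W2

    delta2 = np.dot(delta3, self.W2.T) * self.sigmoidPrime(self.z2)     # backpropagating error 2
    dJdW1 = np.dot(X.T, delta2)                                         # gradient w.r.t. W1

    return dJdW1, dJdW2
```
With these gradients, one batch gradient descent update is simply `W1 -= alpha*dJdW1` and `W2 -= alpha*dJdW2` for some learning rate `alpha`.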
|
github_jupyter
|
class NeuralNetwork(object):
def __init__(self):
# Define Hyperparameters
self.inputLayerSize = 2
self.hiddenLayerSize = 3
self.outputLayerSize = 1
# Weights (parameters)
self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize)
self.W2 = np.random.randn(self.hiddenLayerSize, self.outputLayerSize)
def forwardPropagation(self, X):
# Propagate inputs through network
self.z2 = np.dot(X, self.W1)
self.a2 = self.sigmoid(self.z2)
self.z3 = np.dot(self.a2, self.W2)
yHat = self.sigmoid(self.z3)
return yHat
def sigmoid(self, z):
return 1/(1+np.exp(-z))
def sigmoidPrime(self, z):
return np.exp(-z)/((1+np.exp(-z))**2)
%pylab inline
import numpy as np
# X = (hours sleeping, hours studying), y = Score on test
X = np.array(([3,5], [5,1], [10,2]), dtype=float)
y = np.array(([75], [82], [93]), dtype=float)
X
y
X = X/np.amax(X, axis=0)
y = y/100 # Max test score is 100
X
y
NN = NeuralNetwork()
testInput = np.arange(-6,6,0.01)
plot(testInput, NN.sigmoid(testInput), color='b', linewidth=2)
grid(1)
NN.sigmoid(1)
NN.sigmoid(np.array([-1,0,1]))
NN.sigmoid(np.random.randn(3,3))
X
NN = NeuralNetwork()
yHat = NN.forwardPropagation(X)
yHat
y
bar([0,1,2], y, width=0.35, alpha=0.8, color='b')
bar([0.35,1.35,2.35], yHat, width=0.35, color='r', alpha=0.8)
grid(1)
legend(['y', 'yHat']);
import time
weightToTry = np.linspace(-5,5,1000)
costs = np.zeros(1000)
startTime = time.time()  # time.clock() was removed in Python 3.8
for i in range(1000):
NN.W1[0,0] = weightToTry[i]
yHat = NN.forwardPropagation(X)
costs[i] = 0.5*sum((y-yHat)**2)
endTime = time.time()
elapsedTime = endTime - startTime
elapsedTime
plot(weightToTry, costs, color='b')
grid(1)
xlabel('Weight')
ylabel('Cost')
elapsedTime*(1000**(9-1))/(3600*24*365)/1000
def sigmoidPrime(self, z):
return np.exp(-z)/((1+np.exp(-z))**2)
NN = NeuralNetwork()
sigTestValues = np.arange(-5,5,0.1)
plot(sigTestValues, NN.sigmoid(sigTestValues), linewidth=2, color='b')
plot(sigTestValues, NN.sigmoidPrime(sigTestValues), linewidth=2, color='r')
grid(1)
legend(['f', "f'"]);
| 0.783533 | 0.992123 |
### ILAS: Introduction to Programming 2017/18
# Coursework Assignment: Plant-life Report
__Complete exercises A to D.__
<br>__The exercises should be completed using Python programming skills we have covered in class. The questions are focussed on an imaginary case study:__
>It is thought that the acidification of an area of protected land is having a destructive effect on plant populations.
<br>Experts are particularly worried about the demise of a species of shrub called *winter heath*, which supports the area's insect populations, and the spread of an acid-loving poisonous weed called *darley heath*. <br>Chemical waste from local industries is thought to be responsible for the soil acidification.
<br>Your job is to process data collected over a number of years to present as part of a report.
<br>The report will be used as evidence to try to impose restrictions on the disposal of industrial waste within the area.
<img src="img/map2.png" alt="Drawing" style="width: 500px;"/>
### Input data
Data collected by a plant survey over the past 20 years is given in the folder `environmental_survey` in the `sample_data` folder of the ILAS_python repository.
The survey was conducted once a year.
The locations and characteristics of plants and trees were recorded.
Soil pH was also recorded at different locations.
### Setting up
Create a new folder in which to store your project.
Copy the `environmental_survey` folder into the project folder.
### Part A: Assembling a Data Set
__Aim: Import plant data from .csv files and manipulate the data to convert units and remove unnecessary values.__
__(1.) Input and Output: Data Frames
<br>*(5 marks)*__
<br>Write a Python program that imports the data from the file `plants2017` and stores it as a __`pandas DataFrame`__.
The data set should contain only the data for shrub plants.
<br>Remove the rows with "tree" in the plants column to leave only information about shrubs in your data set.
(Hint: After removing data from a DataFrame use `df.reset_index(drop=True)` (where 'df' is the DataFrame name) to re-assign index numbers).
__(2.) Functions__
<br>__*(5 marks)*__
<br>The GPS location information for each plant is in units of decimal degrees.
<br>To make them more "human readable", the values should be converted to represent each data point on a 2D grid, with units of metres (or kilometres).
<img src="img/lat_long.png" alt="Drawing" style="width: 400px;"/>
The following equations can be used to approximate:
- the vertical distance from the *equator* from `GPS_lat`
- the horizontal distance from the *meridian* from `GPS_lon`
The latitude in m from the equator:
$lat = \frac{40,008,000 \times GPS_{lat}}{360} $
The longitude in m from the meridian:
$lon = \frac{40,075,160 \times GPS_{lon}}{360} \times \cos(GPS_{lat})$
<img src="img/ParametricCircle.png" alt="Drawing" style="width: 200px;"/>
Write code to convert GPS_lat and GPS_lon in decimal degrees to units of m or km, using the equation above.
<br>__*Hint: `GPS_lat` and `GPS_lon` are given in degrees; `numpy.cos` expects angles in radians.*__
Encapsulate your code in a function so that it can be applied to any data frame.
(Hint: your function should take the columns of data frame to be converted as its arguments).
Show your function works by applying it to your data frame.
(You may also want to *rename* your column headings, as they are no longer GPS coordinates.)
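As an illustration only, one possible shape for such a function is sketched below; the column names `GPS_lat`, `GPS_lon`, `x_m` and `y_m` are assumptions and should be matched to your own DataFrame:
```
import numpy as np

def gps_to_metres(df, lat_col="GPS_lat", lon_col="GPS_lon"):
    """Convert decimal-degree GPS columns to metres from the equator/meridian."""
    lat_deg = df[lat_col]
    lon_deg = df[lon_col]
    df["y_m"] = 40008000 * lat_deg / 360
    df["x_m"] = 40075160 * lon_deg / 360 * np.cos(np.radians(lat_deg))
    # drop the original GPS columns now that the converted ones exist
    return df.drop(columns=[lat_col, lon_col])
```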
__(3.) Functions and Data Structures: Boolean Indexing__
<br>__*(5 marks)*__
<br>When fully grown, the four main shrubs that grow in the area can be identified by distinct features.
To include *only fully grown* plants in your data set:
- Write a function that selects only plants above a height of 50cm.
- Apply the function to your data set.
- Edit your function so that the same function may be used to:
- remove plants below 50cm by default
- remove plants below a height set by the user
### Part B: Refining the Data Set and Mapping pH
__Aim: Split the area over which the survey was taken into a grid of equally sized cells. Sort the pH samples by grid cell to show how pH varies across the area.__
__(1.) Input and Output__
<br>__*(2 marks)*__
<br>In the same Python file you wrote in __Part A__, import the data from the file `pH_2017` and store it as a new __`pandas DataFrame`__ called `pH`.
<br>
__(2.) Functions__
<br>__*(2 marks)*__
<br>Use the function that you wrote in __Part A (2.)__ to convert the the columns GPS_lat and GPS_lon in `pH` to units of m or km.
The sampled area measures approximately 3445m x 3950m.
<br>An orthoganol grid of 15 x 15 cells (3000m x 3000m) can be used to represent the sampled area:
- the grid is chosen to be slightly smaller than the sampled area so that no unsampled regions are included.
- the origin is chosen to be at
- $x = x_{min} + \frac{3445-3000}{2}$
- $y = y_{min} + \frac{3950-3000}{2}$
<img src="img/map.png" alt="Drawing" style="width: 500px;"/>
The following equation can be used to map a point, $P$, in range A to range B.
$P_B=\frac{P_A-A_{min}}{A_{max}-A_{min}} \times (B_{max}-B_{min}) + B_{min}$
__(3.) Functions and mathematical operators.__
<br>__*(5 marks)*__
Write a function called `scale` to map points in the range (origin, origin+3000) to the range (0, 3000).
By floor dividing (seminar 2) points in the range 0 to 3000 by 200, each point can be assigned an integer value in the range 0 to 14. Create an additional step in your function that uses floor division to assign an x and y grid reference to each data point.
Note:
- some grid references may be outside of the range 0 to 14.
- multiple data points will belong to the same grid reference.
Add two new columns to your DataFrame to store the x and y grid reference for each data point.
Store your code that assigns a grid index as a function so that it can be applied to any data set collected in the same area.
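One possible outline for these functions is sketched below; the column names, origin values and 200 m cell size are assumptions to adapt to your own data:
```
def scale(values, a_min, a_max, b_min=0, b_max=3000):
    """Map points from the range (a_min, a_max) to the range (b_min, b_max)."""
    return (values - a_min) / (a_max - a_min) * (b_max - b_min) + b_min

def assign_grid(df, x_col="x_m", y_col="y_m", origin_x=0, origin_y=0, cell=200):
    """Scale coordinates to (0, 3000) and floor-divide into 15 x 15 grid indices."""
    x = scale(df[x_col], origin_x, origin_x + 3000)
    y = scale(df[y_col], origin_y, origin_y + 3000)
    df["grid_x"] = (x // cell).astype(int)
    df["grid_y"] = (y // cell).astype(int)
    return df
```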
__(4.) `numpy` multi-dimensional arrays.__
<br>__*(2 marks)*__
<br>Find the mean of the pH readings taken in each grid cell.
<br>Use a 2D numpy array to store each mean reading at each 2D grid location.
__(5.) Plotting__
<br>__*(3 marks)*__
<br>Plot the mean pH for each grid cell as a colour map of the gridded area.
<br>You may use a *2D colour map* or a *3D plot*.
<br>Save your figure as a .png file in your project folder.
### Part C: Classifying Plants Using Simple Mathematical Operations
__Aim: Sort the plant samples species. Produce a total count of each species in each grid cell.__
<br>The shrub plants in your DataFrame from __Part A__ can be catagorsied as one of four species.
The *average* physical characteristics of each *plant species* are shown in the table below:
|Shrub |Height (m)|Leaf length (cm)|Leaf aspect ratio|Bud length (cm)|
|------------|----------|----------------|-----------------|---------------|
|Winter heath| 1.2| 3.5| 2.0| 2.3|
|Bell heather| 1.8| 1.5| 1.2| 2.3|
|Brush bush | 0.7| 2.1| 10.2| 1.5|
|Darley heath| 0.7| 2.2| 3.1| 1.7|
<br>The *vector quantisation algorithm* is a simple algorithm used for categorisation.
It determines which category a data point should belong to from its closest proximity to a set of values representing the possible categories.
<br>Each value represents the *average* of the corresponding category.
The *closeness* of the characteristics of a point $(c_1, c_2, c_3, ... c_n)$ to the average value of a category $(ca_1, ca_2, ca_3, ... ca_n)$ can be determined by the magnitude:
<br>$d = \sqrt{(ca_1-c_1)^2 + (ca_2-c_2)^2 + (ca_3-c_3)^2 + ... + (ca_n-c_n)^2}$ <br>
If $d$ is evaluated for each category, the category with the *minimum* value of $d$ represents the closest fit.
The vector quantisation algorithm can be applied to each data point using a for loop or numpy broadcasting.
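As an illustration, a broadcasting version of this step might look like the sketch below; the DataFrame column names and their order are assumptions and should match the characteristics in the table:
```
import numpy as np

species = ["Winter heath", "Bell heather", "Brush bush", "Darley heath"]
averages = np.array([[1.2, 3.5,  2.0, 2.3],
                     [1.8, 1.5,  1.2, 2.3],
                     [0.7, 2.1, 10.2, 1.5],
                     [0.7, 2.2,  3.1, 1.7]])

# df: the shrub DataFrame from Part A; column names here are placeholders
characteristics = df[["height", "leaf_length", "leaf_aspect", "bud_length"]].to_numpy()

# Broadcasting: (num_plants, 1, 4) - (4, 4) -> (num_plants, 4, 4)
d = np.sqrt(((characteristics[:, np.newaxis, :] - averages)**2).sum(axis=2))

# Closest category (smallest d) for each plant
df["species"] = [species[i] for i in d.argmin(axis=1)]
```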
__(1.) Mathematical computation with Numpy__
<br>__*(5 marks)*__
<br>Use the vector quantisation algorithm to determine the species of each plant.
<br>Hint: Use a for loop or use broadcasting.
<br>Add a column to your DataFrame called "species" with the species of each plant that most closely fits the plant characteristics.
__(2.) Functions__
<br>__*(1 mark)*__
<br>Use the function that you wrote for __Part B: (3.)__ to assign a grid reference to each data point. <br>Save the grid reference x and y values as two columns in your DataFrame.
__(3.) Data Structures: Lists__
<br>__*(5 marks)*__
Create a list for each of the following fields.
1. x grid index
1. y grid index
1. average pH reading
1. total count of *Winter heath* plant
1. total count of *Bell heather* plant
1. total count of *Brush bush* plant
1. total count of *Darley heath* plant
Loop through each grid cell and store a computed value for each field.
Store the lists as a list of lists (nested lists).
```
#what about the averge pH ? there is no PH in this data
# do you want a single number or all the indexes of the Bell winter etc ...
#the cos probelm ?
#check out my graph ..
```
### Part D: Using Multiple Files to Produce Time-Series Data
__Aim: Run all the steps that you coded in Parts A-C for every environmental survey collected between the years 1997-2017 to produce time-series data of the plant count and average pH.__
__(1.) Control Flow__
<br>__*(5 marks)*__
<br>Use a for loop to store a list of lists like you created in __Part C: (3.)__ for each year of the environmental survey (1997-2017)
Hint: You can loop through each plant survey using:
>```Python
annual_data=[]
for year in range(1997, 2018):
df = pd.read_csv("environmental_survey/plants" + str(year) + ".csv")
```
Hint: Append the list of lists created in __Part C: (3.)__ to the list `annual_data` each time the code loops.
>```Python
annual_data=[]
for year in range(1997, 2018):
df = pd.read_csv("environmental_survey/plants" + str(year) + ".csv")
```
__(2.) Plotting and Curve Fitting__
<br>__*(5 marks)*__
<br>The two closest industrial sites to the area of land are:
<br>__Sketchy inc.__ , established 1995, GPS coordinates lon = 136.7647, lat = 35.7336
<br>__Philamore co.__ , established 1990, GPS coordinates lon = 136.8262, lat = 35.7498
<br>Choose one grid cell that is close to an industrial site and one grid cell that is far from the industrial sites.
<br>Plot a scatter graph of the average pH and plant count for each species (y axis) against time (x axis).
<br>Fit a trendline to each data series
<br>Show the equation of the trendline and the proximity to an industrial site as labels.
|
github_jupyter
|
#what about the averge pH ? there is no PH in this data
# do you want a single number or all the indexes of the Bell winter etc ...
#the cos probelm ?
#check out my graph ..
Hint: Append the list of lists created in __Part C: (3.)__ to the list `annual_data` each time the code loops.
>```Python
annual_data=[]
for year in range(1997, 2018):
df = pd.read_csv("environmental_survey/plants" + str(year) + ".csv")
| 0.46563 | 0.960063 |
# CS229 Homework 1 Problem 1
In this exercise we use logistic regression to construct a decision boundary for a binary classification problem. In order to do so, we must first load the data.
```
import numpy as np
import pandas as pd
import logistic_regression as lr
```
Here we load the data sets. They are text files, so the numpy ```loadtxt``` function will suffice.
```
X = np.loadtxt('logistic_x.txt')
y = np.loadtxt('logistic_y.txt')
```
Next we prepend a column of ones to the design matrix ```X``` so that, when we perform logistic regression, the intercept parameter can be estimated along with the other parameters in a single matrix operation.
```
ones = np.ones((99,1))
Xsplit = np.split(X, indices_or_sections=[1], axis=1)
# Pack the intercept coordinates into X so we can calculate the
# intercept for the logistic regression.
X = np.concatenate([ones, Xsplit[0], Xsplit[1]], axis=1)
```
Here we pack the data into a DataFrame for plotting.
```
Xd = pd.DataFrame(X, columns=['x0', 'x1', 'x2'])
yd = pd.DataFrame(y, columns=['y'])
df = pd.concat((yd, Xd), axis=1)
```
Now we perform regression. The logistic regression function uses the Newton-Raphson method to estimate the parameters for the decision boundary in the data set.
```
theta, cost = lr.logistic_regression(X, y, epsilon=lr.EPSILON, max_iters=lr.MAX_ITERS)
```
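The `logistic_regression` module itself is not shown in this notebook. As a rough sketch of what it might contain, assuming labels $y \in \{-1, +1\}$, the average logistic loss, and the `EPSILON`/`MAX_ITERS` constants referenced above, a Newton-Raphson implementation could look like this:
```
import numpy as np

EPSILON = 1e-6   # convergence tolerance (assumed value)
MAX_ITERS = 20   # iteration cap (assumed value)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression(X, y, epsilon=EPSILON, max_iters=MAX_ITERS):
    """Fit theta by Newton-Raphson on the average logistic loss.

    Returns (theta, cost_history)."""
    m, n = X.shape
    theta = np.zeros(n)
    cost = []
    for _ in range(max_iters):
        margins = y * (X @ theta)
        cost.append(np.mean(np.log(1.0 + np.exp(-margins))))
        g = sigmoid(margins)
        grad = -X.T @ (y * (1.0 - g)) / m    # gradient of the average loss
        w = g * (1.0 - g)                    # per-example Hessian weights
        H = (X.T * w) @ X / m                # Hessian
        step = np.linalg.solve(H, grad)      # Newton step
        theta = theta - step
        if np.linalg.norm(step) < epsilon:   # stop when the update is tiny
            break
    return theta, cost
```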
### Exercise 1.a.
Here are the resulting parameter estimates from logistic regression
```
print('theta = {}'.format(theta))
```
with the resulting costs per iteration of Newton-Raphson. The first term is the intercept term for the line, corresponding to the first column in the design matrix ```X``` being all ones.
```
print('cost = {}'.format(cost))
```
So the logistic regression function appears to be converging. The cost functional is minimized on the last iteration.
### Exercise 1.b.
For the final step, we plot the results. We use a color map to distinguish the classification of each datum. The color purple is used for -1, and the color yellow is used for +1.
```
import matplotlib.pyplot as plt
import matplotlib.colors as clr
colors = ['red', 'blue']
levels = [0, 1]
cmap, norm = clr.from_levels_and_colors(levels=levels, colors=colors, extend='max')
cs = np.where(df['y'] < 0, 0, 1)
cs
```
Now we plot the results. First, create a polynomial p from the estimated parameters.
```
p = np.poly1d([-theta[1]/theta[2], -theta[0]/theta[2]])
x = np.linspace(0, 8, 200)
p
```
Then plot the results.
```
plt.scatter(df['x1'], df['x2'], c=cs)
plt.plot(x, p(x))
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
```
This completes the exercise.
|
github_jupyter
|
import numpy as np
import pandas as pd
import logistic_regression as lr
X = np.loadtxt('logistic_x.txt')
y = np.loadtxt('logistic_y.txt')
ones = np.ones((99,1))
Xsplit = np.split(X, indices_or_sections=[1], axis=1)
# Pack the intercept coordinates into X so we can calculate the
# intercept for the logistic regression.
X = np.concatenate([ones, Xsplit[0], Xsplit[1]], axis=1)
Xd = pd.DataFrame(X, columns=['x0', 'x1', 'x2'])
yd = pd.DataFrame(y, columns=['y'])
df = pd.concat((yd, Xd), axis=1)
theta, cost = lr.logistic_regression(X, y, epsilon=lr.EPSILON, max_iters=lr.MAX_ITERS)
print('theta = {}'.format(theta))
print('cost = {}'.format(cost))
import matplotlib.pyplot as plt
import matplotlib.colors as clr
colors = ['red', 'blue']
levels = [0, 1]
cmap, norm = clr.from_levels_and_colors(levels=levels, colors=colors, extend='max')
cs = np.where(df['y'] < 0, 0, 1)
cs
p = np.poly1d([-theta[1]/theta[2], -theta[0]/theta[2]])
x = np.linspace(0, 8, 200)
p
plt.scatter(df['x1'], df['x2'], c=cs)
plt.plot(x, p(x))
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
| 0.47317 | 0.990338 |
```
import numpy as np
import pandas as pd
import pickle
from collections import defaultdict, Counter
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
import arff
from sklearn.model_selection import train_test_split
from itertools import chain
import nltk
import sklearn
import scipy.stats
from sklearn.metrics import make_scorer
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV  # sklearn.grid_search was removed in newer scikit-learn
import sklearn_crfsuite
from sklearn_crfsuite import scorers
from sklearn_crfsuite import metrics
# Load our data and try
with open('modified_data/just_tags.txt', 'rb') as f:
just_tags = pickle.load(f)
with open('modified_data/just_words.txt', 'rb') as f:
just_words = pickle.load(f)
np.unique(just_tags)
just_tags = np.array(just_tags).reshape(len(just_words),1)
def gen_features(data):
# Generating features
# Capitalization, length, suffixes
lens = [len(w) for w in data]
caps = [1 if w[0].isupper() else 0 for w in data]
num_caps = [sum([True for a in w if a.isupper()]) for w in data]
suffixes = [w[-3:] for w in data]
isdigit = [1 if w.isdigit() else 0 for w in data]
feat_names = ['length', 'caps', 'num_caps', 'suffixes', 'isdigit']
features = [lens, caps, num_caps, suffixes, isdigit]
# features = pd.DataFrame(dict(zip(feat_names, features)))
return(list(zip(lens, caps, num_caps, suffixes, isdigit)))
# return (features)
len(just_words)
# Create train test split
words_train, words_test, tags_train, tags_test = train_test_split(just_words, just_tags, random_state = 42, test_size = 0.2)
features_train = [list(i) for i in gen_features(words_train)]
features_test = gen_features(words_test)
features_train[0]
features_train.shape
tags_train.shape
```
## Test CRF fit
```
%%time
crf = sklearn_crfsuite.CRF(
algorithm='l2sgd',
# c1=0.1,
c2=0.1,
max_iterations=100,
all_possible_transitions=True
)
crf.fit(features_train, tags_train)
features['target'] = just_tags
features['suffixes'] = features.suffixes.astype('category')
features['suffixes'][0]
# Save as ARFF file
arff.dump('word_features.arff'
, features.values
, relation = 'TrainFeatures'
, names=features.columns)
for f in features:
print(f)
' '.join([w for w in features['length']])
file_to_write = '@relation TrainFeature\n'
for f in features:
print(f)
file_to_write += '@attribute ' + f
line = ' '.join([str(w) for w in features[f]]) + '\n'
file_to_write += line
with open('temp.txt', 'w+', encoding='utf-8') as f:
f.write(','.join(['\''+str(w)+'\'' for w in list(np.unique(features['suffixes']))]))
# Save as CSV
features.to_csv('word_features.csv')
ohe_features = pd.get_dummies(features)
ohe_features.shape
np.sum(features.isna())
logreg_model = LogisticRegression()
logreg_model.fit(features.drop(columns = ['suffixes']), just_tags)
logreg_model.score(features.drop(columns = ['suffixes']), just_tags)
svc_model = SVC()
svc_model.fit(ohe_features, just_tags)
```
|
github_jupyter
|
import numpy as np
import pandas as pd
import pickle
from collections import defaultdict, Counter
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
import arff
from sklearn.model_selection import train_test_split
from itertools import chain
import nltk
import sklearn
import scipy.stats
from sklearn.metrics import make_scorer
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV  # sklearn.grid_search was removed in newer scikit-learn
import sklearn_crfsuite
from sklearn_crfsuite import scorers
from sklearn_crfsuite import metrics
# Load our data and try
with open('modified_data/just_tags.txt', 'rb') as f:
just_tags = pickle.load(f)
with open('modified_data/just_words.txt', 'rb') as f:
just_words = pickle.load(f)
np.unique(just_tags)
just_tags = np.array(just_tags).reshape(len(just_words),1)
def gen_features(data):
# Generating features
# Capitalization, length, suffixes
lens = [len(w) for w in data]
caps = [1 if w[0].isupper() else 0 for w in data]
num_caps = [sum([True for a in w if a.isupper()]) for w in data]
suffixes = [w[-3:] for w in data]
isdigit = [1 if w.isdigit() else 0 for w in data]
feat_names = ['length', 'caps', 'num_caps', 'suffixes', 'isdigit']
features = [lens, caps, num_caps, suffixes, isdigit]
# features = pd.DataFrame(dict(zip(feat_names, features)))
return(list(zip(lens, caps, num_caps, suffixes, isdigit)))
# return (features)
len(just_words)
# Create train test split
words_train, words_test, tags_train, tags_test = train_test_split(just_words, just_tags, random_state = 42, test_size = 0.2)
features_train = [list(i) for i in gen_features(words_train)]
features_test = gen_features(words_test)
features_train[0]
features_train.shape
tags_train.shape
%%time
crf = sklearn_crfsuite.CRF(
algorithm='l2sgd',
# c1=0.1,
c2=0.1,
max_iterations=100,
all_possible_transitions=True
)
crf.fit(features_train, tags_train)
features['target'] = just_tags
features['suffixes'] = features.suffixes.astype('category')
features['suffixes'][0]
# Save as ARFF file
arff.dump('word_features.arff'
, features.values
, relation = 'TrainFeatures'
, names=features.columns)
for f in features:
print(f)
' '.join([w for w in features['length']])
file_to_write = '@relation TrainFeature\n'
for f in features:
print(f)
file_to_write += '@attribute ' + f
line = ' '.join([str(w) for w in features[f]]) + '\n'
file_to_write += line
with open('temp.txt', 'w+', encoding='utf-8') as f:
f.write(','.join(['\''+str(w)+'\'' for w in list(np.unique(features['suffixes']))]))
# Save as CSV
features.to_csv('word_features.csv')
ohe_features = pd.get_dummies(features)
ohe_features.shape
np.sum(features.isna())
logreg_model = LogisticRegression()
logreg_model.fit(features.drop(columns = ['suffixes']), just_tags)
logreg_model.score(features.drop(columns = ['suffixes']), just_tags)
svc_model = SVC()
svc_model.fit(ohe_features, just_tags)
| 0.519765 | 0.468122 |
```
from tensorflow import keras
from tensorflow.keras import *
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras.regularizers import l2  # L2 regularization
import tensorflow as tf
import numpy as np
import pandas as pd
# 12-0.2
# 13-2.4
# 18-12.14
import pandas as pd
import numpy as np
normal = np.loadtxt(r'F:\张老师课题学习内容\code\数据集\试验数据(包括压力脉动和振动)\2013.9.12-未发生缠绕前\2013-9.12振动\2013-9-12振动-1450rmin-mat\1450r_normalvibx.txt', delimiter=',')
chanrao = np.loadtxt(r'F:\张老师课题学习内容\code\数据集\试验数据(包括压力脉动和振动)\2013.9.17-发生缠绕后\振动\9-17下午振动1450rmin-mat\1450r_chanraovibx.txt', delimiter=',')
print(normal.shape,chanrao.shape,"***************************************************")
data_normal=normal[14:16]  # extract two rows (rows 14-15)
data_chanrao=chanrao[14:16]  # extract two rows (rows 14-15)
print(data_normal.shape,data_chanrao.shape)
print(data_normal,"\r\n",data_chanrao,"***************************************************")
data_normal=data_normal.reshape(1,-1)
data_chanrao=data_chanrao.reshape(1,-1)
print(data_normal.shape,data_chanrao.shape)
print(data_normal,"\r\n",data_chanrao,"***************************************************")
# two pump condition signals: normal (healthy) and chanrao (entanglement fault)
data_normal=data_normal.reshape(-1, 512)  # (65536,) -> (128, 512)
data_chanrao=data_chanrao.reshape(-1,512)
print(data_normal.shape,data_chanrao.shape)
import numpy as np
def yuchuli(data,label):#(4:1)(51:13)
# shuffle the data order
np.random.shuffle(data)
train = data[0:102,:]
test = data[102:128,:]
label_train = np.array([label for i in range(0,102)])
label_test =np.array([label for i in range(0,26)])
return train,test ,label_train ,label_test
def stackkk(a,b,c,d,e,f,g,h):
aa = np.vstack((a, e))
bb = np.vstack((b, f))
cc = np.hstack((c, g))
dd = np.hstack((d, h))
return aa,bb,cc,dd
x_tra0,x_tes0,y_tra0,y_tes0 = yuchuli(data_normal,0)
x_tra1,x_tes1,y_tra1,y_tes1 = yuchuli(data_chanrao,1)
tr1,te1,yr1,ye1=stackkk(x_tra0,x_tes0,y_tra0,y_tes0 ,x_tra1,x_tes1,y_tra1,y_tes1)
x_train=tr1
x_test=te1
y_train = yr1
y_test = ye1
# shuffle the data
state = np.random.get_state()
np.random.shuffle(x_train)
np.random.set_state(state)
np.random.shuffle(y_train)
state = np.random.get_state()
np.random.shuffle(x_test)
np.random.set_state(state)
np.random.shuffle(y_test)
# standardize the training and test sets
def ZscoreNormalization(x):
"""Z-score normalization"""
x = (x - np.mean(x)) / np.std(x)
return x
x_train=ZscoreNormalization(x_train)
x_test=ZscoreNormalization(x_test)
# print(x_test[0])
# reshape into sequences and add a channel axis to match the model input (512, 1, 1)
x_train = x_train.reshape(-1,512,1,1)
x_test = x_test.reshape(-1,512,1,1)
print(x_train.shape,x_test.shape)
def to_one_hot(labels,dimension=2):
results = np.zeros((len(labels),dimension))
for i,label in enumerate(labels):
results[i,label] = 1
return results
one_hot_train_labels = to_one_hot(y_train)
one_hot_test_labels = to_one_hot(y_test)
x = layers.Input(shape=[512,1,1])
# standard convolution layer
conv1 = layers.Conv2D(filters=16, kernel_size=(2, 1), activation='relu',padding='valid',name='conv1')(x)
# pooling layer
POOL1 = MaxPooling2D((2,1))(conv1)
# standard convolution layer
conv2 = layers.Conv2D(filters=32, kernel_size=(2, 1), activation='relu',padding='valid',name='conv2')(POOL1)
# pooling layer
POOL2 = MaxPooling2D((2,1))(conv2)
# Dropout layer
Dropout=layers.Dropout(0.1)(POOL2 )
Flatten=layers.Flatten()(Dropout)
# fully connected layers
Dense1=layers.Dense(50, activation='relu')(Flatten)
Dense2=layers.Dense(2, activation='softmax')(Dense1)
model = keras.Model(x, Dense2)
model.summary()
# compile the model (loss and optimizer)
model.compile(loss='categorical_crossentropy',
optimizer='adam',metrics=['accuracy'])
import time
time_begin = time.time()
history = model.fit(x_train,one_hot_train_labels,
validation_split=0.1,
epochs=50,batch_size=10,
shuffle=True)
time_end = time.time()
elapsed = time_end - time_begin  # avoid shadowing the `time` module
print('training time:', elapsed)
import time
time_begin = time.time()
score = model.evaluate(x_test,one_hot_test_labels, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
time_end = time.time()
elapsed = time_end - time_begin
print('evaluation time:', elapsed)
# plot the accuracy/loss curves
import matplotlib.pyplot as plt
plt.plot(history.history['loss'],color='r')
plt.plot(history.history['val_loss'],color='g')
plt.plot(history.history['accuracy'],color='b')
plt.plot(history.history['val_accuracy'],color='k')
plt.title('model loss and accuracy')
plt.ylabel('loss / accuracy')
plt.xlabel('epoch')
plt.legend(['train_loss', 'val_loss', 'train_acc', 'val_acc'], loc='center right')
# plt.legend(['train_loss','train_acc'], loc='upper left')
#plt.savefig('1.png')
plt.show()
import matplotlib.pyplot as plt
plt.plot(history.history['loss'],color='r')
plt.plot(history.history['accuracy'],color='b')
plt.title('model loss and accuracy')
plt.ylabel('loss / accuracy')
plt.xlabel('epoch')
plt.legend(['train_loss', 'train_accuracy'], loc='center right')
plt.show()
```
```
import random
train_attn_path = './files/train_attn.txt'
test_attn_path = './files/test_attn.txt'
train_attn_sp_path = './files/train_attn_sp.txt'
val_attn_sp_path = './files/val_attn_sp.txt'
test_attn_sp_path = './files/test_attn_sp.txt'
train_answer_keys_path = './files/train_answer_keys.txt'
val_answer_keys_path = './files/val_answer_keys.txt'
test_answer_keys_path = './files/test_answer_keys.txt'
def add_sent_number(file_in, file_out):
print(file_in)
print(file_out)
f_in = open(file_in, 'r')
lines = f_in.readlines()
f_in.close()
f_out = open(file_out, 'w')
for i in range(len(lines)):
num = str(int(i+1))
ln = num + " " + lines[i]
f_out.write(ln)
f_out.close()
# Call
add_sent_number(train_attn_path, train_attn_sp_path)
add_sent_number(test_attn_path, test_attn_sp_path)
def get_val_sent_index():
global train_attn_sp_path
label_to_sent_num = {}
f_in = open(train_attn_sp_path, 'r')
lines = f_in.readlines()
f_in.close()
for l in lines:
l = l.strip().split(" ")[:2]
num = int(l[0])
lab = str(l[1])
if lab not in label_to_sent_num:
label_to_sent_num[lab] = []
label_to_sent_num[lab].append(num)
val_index = []
for l in label_to_sent_num:
sent_num = label_to_sent_num[l]
num = int(len(sent_num) / 10)
random.shuffle(sent_num)
random.shuffle(sent_num)
val_index += sent_num[:num]
val_index = sorted(val_index)
print("len(val_index)", len(val_index))
print("val_index[:5]", val_index[:5])
return val_index
# Call
val_index = get_val_sent_index()
def train_val_split(val_index):
global train_attn_sp_path, val_attn_sp_path
f_in = open(train_attn_sp_path, 'r')
lines = f_in.readlines()
f_in.close()
f_train = open(train_attn_sp_path, 'w')
f_val = open(val_attn_sp_path, 'w')
for l in lines:
l = l.strip().split(" ")
num = int(l[0])
lab = str(l[1])
if num in val_index:
f_val.write(" ".join(l) + "\n")
else:
f_train.write(" ".join(l) + "\n")
f_train.close()
f_val.close()
print("Train - Val - Split ")
# Call
train_val_split(val_index)
def train_val_total_check(train_attn_sp_path, val_attn_sp_path):
def get_count(file_path):
print(file_path)
label_to_sent_count = {}
f_in = open(file_path, 'r')
lines = f_in.readlines()
f_in.close()
for l in lines:
l = l.strip().split(" ")[:2]
num = int(l[0])
lab = str(l[1])
if lab not in label_to_sent_count:
label_to_sent_count[lab] = 0
label_to_sent_count[lab] += 1
return label_to_sent_count
train = get_count(train_attn_sp_path)
val = get_count(val_attn_sp_path)
c = 0
for l in train:
c += train[l]
if l in val:
c += val[l]
print(c)
# Call
train_val_total_check(train_attn_sp_path, val_attn_sp_path)
def create_answer_keys(in_file, out_file):
f_in = open(in_file, 'r')
lines = f_in.readlines()
f_in.close()
f_out = open(out_file, 'w')
for i in range(0, len(lines)):
l = lines[i].strip().split(" ")
num = str(i+1)
lab = str(l[1])
f_out.write(num + "\t" + lab)
f_out.write("\n")
f_out.close()
print(out_file + " " + "Created")
create_answer_keys(train_attn_sp_path, train_answer_keys_path)
create_answer_keys(val_attn_sp_path, val_answer_keys_path)
create_answer_keys(test_attn_sp_path, test_answer_keys_path)
```
# DPU example: Yolo_v3
This notebook shows how to run a YOLO-network-based application for object detection. The application, as well as the DPU IP, is pulled from the official [Vitis AI Github Repository](https://github.com/Xilinx/Vitis-AI).
For more information, please refer to the [Xilinx Vitis AI page](https://www.xilinx.com/products/design-tools/vitis/vitis-ai.html).
In this notebook we will be using the DNNDK **Python API** to run the DPU tasks.
## 1. Prepare the overlay
We will download the overlay onto the board. Then we will load the
corresponding DPU model.
```
from pynq_dpu import DpuOverlay
overlay = DpuOverlay("dpu.bit")
overlay.load_model("dpu_tf_yolov3.elf")
```
## 2. Constants and helper functions
You can view all of the helper functions in [DNNDK yolo example](https://github.com/Xilinx/Vitis-AI/blob/v1.1/mpsoc/vitis_ai_dnndk_samples/tf_yolov3_voc_py/tf_yolov3_voc.py).
The helper functions released along with Vitis AI cover pre-processing of
the images, so they can be normalized and resized to be compatible with
the DPU model. These functions are included in our `pynq_dpu` package.
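For reference, below is a minimal sketch of what such a pre-processing step typically looks like (resize to the 416×416 network input, convert BGR to RGB, and scale pixel values to [0, 1]). The real `pre_process` helper comes from the `pynq_dpu` package and may differ in details such as letterboxing, so treat this only as an illustration.

```
# Illustrative sketch only -- the actual `pre_process` is provided by pynq_dpu.
import cv2
import numpy as np

def pre_process_sketch(image, model_size=(416, 416)):
    """Resize a BGR frame to the network input size and scale to [0, 1]."""
    resized = cv2.resize(image, model_size)          # resize to the network input size
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)   # OpenCV frames are BGR
    return rgb.astype(np.float32) / 255.0            # normalize pixel values
```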
```
import numpy as np
import random
import cv2
import colorsys
from PIL import Image
import pylab as plt
from IPython import display
from matplotlib import pyplot as plt
import time
%matplotlib inline
from pynq_dpu.edge.dnndk.tf_yolov3_voc_py.tf_yolov3_voc import *
import requests
```
### Constants
YOLO v2 and v3 predict offsets relative to a predetermined set of boxes with
particular height-width ratios; these predetermined boxes are the
anchor boxes. We will use the predefined [anchors](https://github.com/Xilinx/Vitis-AI/blob/v1.1/mpsoc/vitis_ai_dnndk_samples/tf_yolov3_voc_py/model_data/yolo_anchors.txt).
```
anchor_list = [10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326]
anchor_float = [float(x) for x in anchor_list]
anchors = np.array(anchor_float).reshape(-1, 2)
```
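To see how these anchors are used, recall that standard YOLOv3 decoding combines each anchor's width/height with the raw network outputs (tx, ty, tw, th) to produce a box. The sketch below is only illustrative; in this notebook the decoding happens inside the `boxes_and_scores` helper.

```
# Illustrative sketch of the standard YOLOv3 box decoding (not used by the notebook itself).
def decode_box(t, cell, anchor_wh, grid_wh, input_wh):
    """Map raw outputs (tx, ty, tw, th) for one cell/anchor to a normalized box."""
    tx, ty, tw, th = t
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    bx = (sigmoid(tx) + cell[0]) / grid_wh[0]     # box center x in [0, 1]
    by = (sigmoid(ty) + cell[1]) / grid_wh[1]     # box center y in [0, 1]
    bw = anchor_wh[0] * np.exp(tw) / input_wh[0]  # box width in [0, 1]
    bh = anchor_wh[1] * np.exp(th) / input_wh[1]  # box height in [0, 1]
    return bx, by, bw, bh

# e.g. the largest anchor on the coarse 13x13 grid of a 416x416 input
decode_box((0.0, 0.0, 0.0, 0.0), (6, 6), anchors[-1], (13, 13), (416, 416))
```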
We will use the `get_class()` function in `tf_yolov3_voc` module to
get class names from predefined [class names](https://github.com/Xilinx/Vitis-AI/blob/v1.1/mpsoc/vitis_ai_dnndk_samples/tf_yolov3_voc_py/image/voc_classes.txt).
```
classes_path = "voc_classes.txt"
class_names = get_class(classes_path)
```
Depending on the number of classes, we will define a unique color for each
class.
```
num_classes = len(class_names)
hsv_tuples = [(1.0 * x / num_classes, 1., 1.) for x in range(num_classes)]
colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
colors = list(map(lambda x:
(int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)),
colors))
random.seed(0)
random.shuffle(colors)
random.seed(None)
```
We can define some DPU-related parameters, such as DPU kernel name and
input/output node names.
```
KERNEL_CONV="tf_yolov3"
CONV_INPUT_NODE="conv2d_1_convolution"
CONV_OUTPUT_NODE1="conv2d_59_convolution"
CONV_OUTPUT_NODE2="conv2d_67_convolution"
CONV_OUTPUT_NODE3="conv2d_75_convolution"
```
### Drawing bounding boxes
We now define a custom function that draws the bounding boxes around
the identified objects after we have the classification results.
```
def draw_boxes(image, boxes, scores, classes):
image_h, image_w, _ = image.shape
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 1
thickness = 3
counter=[0,0,0,0]
for i, bbox in enumerate(boxes):
[top, left, bottom, right] = bbox
width, height = right - left, bottom - top
center_x, center_y = left + width*0.5, top + height*0.5
score, class_index = scores[i], classes[i]
if(score > .6 and (class_names[class_index]=="person" or
class_names[class_index]=="car" or
class_names[class_index]=="bus"or
class_names[class_index]=="dog"or
class_names[class_index]=="motorbike"or
class_names[class_index]=="cat"
)):
if(class_names[class_index]=="person"):
counter[0] = counter[0] + 1
if(class_names[class_index]=="car"):
counter[1] = counter[1] + 1
if(class_names[class_index]=="motorbike"):
counter[2] = counter[2] + 1
if(class_names[class_index]=="dog"):
counter[3] = counter[3] + 1
#label = '{}: {:.4f}'.format(class_names[class_index], score)
label = '{}'.format(class_names[class_index])
color = (0,255,0)
cv2.rectangle(image, (left,top), (right,bottom), color, thickness)
cv2.putText(image, label, (int(left), int(top-5)) , font, fontScale, color, thickness, cv2.LINE_AA)
return image, counter
```
### Predicting classes
We need to define a function that evaluates the scores and makes predictions
based on the provided class names.
```
def evaluate(yolo_outputs, image_shape, class_names, anchors):
score_thresh = 0.2
anchor_mask = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
boxes = []
box_scores = []
input_shape = np.shape(yolo_outputs[0])[1 : 3]
input_shape = np.array(input_shape)*32
for i in range(len(yolo_outputs)):
_boxes, _box_scores = boxes_and_scores(
yolo_outputs[i], anchors[anchor_mask[i]], len(class_names),
input_shape, image_shape)
boxes.append(_boxes)
box_scores.append(_box_scores)
boxes = np.concatenate(boxes, axis = 0)
box_scores = np.concatenate(box_scores, axis = 0)
mask = box_scores >= score_thresh
boxes_ = []
scores_ = []
classes_ = []
for c in range(len(class_names)):
class_boxes_np = boxes[mask[:, c]]
class_box_scores_np = box_scores[:, c]
class_box_scores_np = class_box_scores_np[mask[:, c]]
nms_index_np = nms_boxes(class_boxes_np, class_box_scores_np)
class_boxes_np = class_boxes_np[nms_index_np]
class_box_scores_np = class_box_scores_np[nms_index_np]
classes_np = np.ones_like(class_box_scores_np, dtype = np.int32) * c
boxes_.append(class_boxes_np)
scores_.append(class_box_scores_np)
classes_.append(classes_np)
boxes_ = np.concatenate(boxes_, axis = 0)
scores_ = np.concatenate(scores_, axis = 0)
classes_ = np.concatenate(classes_, axis = 0)
return boxes_, scores_, classes_
```
## 3. Run application
We create DPU kernel and task.
```
n2cube.dpuOpen()
kernel = n2cube.dpuLoadKernel(KERNEL_CONV)
task = n2cube.dpuCreateTask(kernel, 0)
```
Now we execute the DPU task to classify an input video frame.
```
input_len = n2cube.dpuGetInputTensorSize(task, CONV_INPUT_NODE)
from IPython import display
cap = cv2.VideoCapture("test.png")
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))
size = (frame_width, frame_height)
fps=0
try:
while(cap.isOpened()):
ret, frame = cap.read()
image = frame
counter = [0,0,0,0]
if ret == True:
start_time = time.time()
image_size = image.shape[:2]
image_data = np.array(pre_process(image, (416, 416)), dtype=np.float32)
n2cube.dpuSetInputTensorInHWCFP32(
task, CONV_INPUT_NODE, image_data, input_len)
n2cube.dpuRunTask(task)
conv_sbbox_size = n2cube.dpuGetOutputTensorSize(task, CONV_OUTPUT_NODE1)
conv_out1 = n2cube.dpuGetOutputTensorInHWCFP32(task, CONV_OUTPUT_NODE1,
conv_sbbox_size)
conv_out1 = np.reshape(conv_out1, (1, 13, 13, 75))
conv_mbbox_size = n2cube.dpuGetOutputTensorSize(task, CONV_OUTPUT_NODE2)
conv_out2 = n2cube.dpuGetOutputTensorInHWCFP32(task, CONV_OUTPUT_NODE2,
conv_mbbox_size)
conv_out2 = np.reshape(conv_out2, (1, 26, 26, 75))
conv_lbbox_size = n2cube.dpuGetOutputTensorSize(task, CONV_OUTPUT_NODE3)
conv_out3 = n2cube.dpuGetOutputTensorInHWCFP32(task, CONV_OUTPUT_NODE3,
conv_lbbox_size)
conv_out3 = np.reshape(conv_out3, (1, 52, 52, 75))
yolo_outputs = [conv_out1, conv_out2, conv_out3]
boxes, scores, classes = evaluate(yolo_outputs, image_size,
class_names, anchors)
#print("FPS: ", 1.0 / (time.time() - start_time))
fps = 1.0 / (time.time() - start_time)
image, counter = draw_boxes(image, boxes, scores, classes)
plt.imshow(image[:,:,::-1])
display.clear_output(wait=True)
plt.show()
print("People:{} | Cars:{} | Motorcycles:{} | Dogs:{} | FPS:{}".format(counter[0],counter[1],counter[2],counter[3],fps))
url = "http://192.168.1.127:8080/send?humans={}&cars={}&motor={}&dogs={}&fps={:0.2f}".format(counter[0],counter[1],counter[2],counter[3],fps)
response = requests.request("GET", url)
print("Data sent to the node")
#print(".", end = '')
except:  # broad except so that the DPU task and kernel are always released below
cap.release()
cv2.destroyAllWindows()
n2cube.dpuDestroyTask(task)
n2cube.dpuDestroyKernel(kernel)
print("ok")
```
# Simplex
#### Katherine Yohanna Mazariegos Guerra
#### Abner Xocop Chacach
```
import numpy as np
import sys
import itertools
# In the file exercise01.txt, x1, x2 and x3 are the variable names; they follow the usual Linear Programming naming convention.
# The file must state min/max so the program knows whether it is a maximization or a minimization, 'st' to introduce the constraints, and 'end' to mark the end of the problem.
def parse_coefficients(coefficient_list, monomial):
"""
Este es un parseador de coeficientes. Consiste en comprobar si una cadena tiene una expresión regular en donde pueda extraer caracteres específicos, en este caso se busca extraer los coeficientes.
Args:
:rtype: None
:param coefficient_list: Lista en la que se almacenarán los coeficientes
:param monomial: Una cadena (por ejemplo, -3x1) que será analizada hasta su coeficiente (por ejemplo, -3)
Verifica qué patrón coincide. Válidos son: (s)(n)lv
Los paréntesis indican la existencia opcional
s es + o - (la ausencia significa +)
n es un número (coeficiente, la ausencia significa 1)
l es una letra latina minúscula (letra variable)
v es un número, probablemente incremental (número variable)
Import re:
Una expresión regular (o RE) especifica un conjunto de cadenas que se corresponde con ella; las funciones de este módulo le permiten comprobar si una cadena particular se corresponde con una expresión regular dada (o si una expresión regular dada se corresponde con una cadena particular, que se reduce a lo mismo)
Source: https://docs.python.org/3/library/re.html
"""
import re
if re.match('[ ]*[\+ ]?[\d]+[\.]?[\d]*', monomial):
float_cast = float(re.match('[ ]*[\+ ]?[\d]+[\.]?[\d]*', monomial).group(0))
coefficient_list.append(float_cast)
elif re.match('[ ]*[\-][\d]+[\.]?[\d]*', monomial):
float_cast = float(re.match('[ ]*[\-][\d]+[\.]?[\d]*', monomial).group(0))
coefficient_list.append(float_cast)
elif re.match('[ ]*[\+]*[a-z][\d]+', monomial):
coefficient_list.append(1)
elif re.match('[ ]*[\-][a-z][\d]+', monomial):
coefficient_list.append(-1)
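# Illustrative usage (hypothetical, for clarity only): parsing '-3x1' appends -3.0,
# while a monomial with no explicit coefficient such as '+x2' appends 1.
#   coeffs = []
#   parse_coefficients(coeffs, '-3x1')  # coeffs -> [-3.0]
#   parse_coefficients(coeffs, '+x2')   # coeffs -> [-3.0, 1]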
lines = []
def parse_lp1(input_filename):
"""
Esta función es la encargada de leer el archivo, hará uso del parser para mandar línea a línea el contenido del .txt y según los coeficientes que obtenga devolverá las matrices y arrays correspondientes.
Tiene tareas como verificar que el archivo haya sido encontrado, leer si es un problema de maximización o minimización, llenar las matrices/ arrays.
:rtype : tuple
:param input_filename: Nombre de archivo de la entrada del problema lineal
:return: Retorna A-matrix, b-vector, c-vector, MinMax
"""
import re
    error = 0 # Initialize the error flag. If error != 0 there was a problem with the input file
    try:
        infile = open('Simplex.txt')  # note: the input_filename argument is ignored; the problem is read from Simplex.txt
    except FileNotFoundError:
        error = 1
        print('\nInput file error: file not found.')  # file not found
#lines = []
if error != 1:
for line in infile:
lines.append(line)
infile.close()
for line in lines:
print(line, end='')
    minmax_line = '' # Check whether the problem is a maximization or a minimization
for line in lines:
if re.match('^[ ]*max|min', line):
minmax_line = line
minmax = 0
objective_function = ''
    if re.match('^[ ]*max', minmax_line): # if the keyword 'max' is found, the problem is a maximization
minmax = -1
objective_function = minmax_line
objective_function = objective_function.strip('max')
    elif re.match('^[ ]*min', minmax_line): # if the keyword 'min' is found, the problem is a minimization
minmax = 1
objective_function = minmax_line
objective_function = objective_function.strip('min')
    if minmax_line == '' and minmax == 0: # if neither 'max' nor 'min' is found, there is no objective function
        error = 2
        print('\nInput file error: objective function not found.')
    c_vector = [] # Fill the c-vector with the objective function coefficients
regex = re.compile('^[\+\- ]?[\d]*[\.]?[\d]*[a-z][\d+]')
while regex.match(objective_function):
monomial = regex.match(objective_function).group(0)
parse_coefficients(c_vector, monomial)
objective_function = objective_function.replace(monomial, '', 1)
    a_matrix = [] # Fill the A-matrix (coefficients) and the b-vector using the problem constraints
b_vector = []
eqin = []
st_line = ''
st_index = 0
for index, line in enumerate(lines):
if 'st' in line:
st_index = index
st_line = line
if re.match('^[ ]*st', st_line):
st_line = st_line.replace('st', ' ', 1)
if st_line == '':
error = 3
        print('\nInput file error: constraints line not found. The keyword \'st\' is missing.')
while st_index < len(lines) - 1:
sub_a_vector = []
a_matrix.append(sub_a_vector)
while True:
st_line = st_line.strip(' ')
if re.match('^[\+\- ]?[\d]*[\.]?[\d]*[a-z][\d+]', st_line):
monomial = re.match('^[\+\- ]?[\d]*[\.]?[\d]*[a-z][\d+]', st_line).group(0)
parse_coefficients(sub_a_vector, monomial)
st_line = st_line.replace(monomial, '', 1)
elif re.match('^[<>=]+', st_line):
monomial = re.match('^[<>=]+', st_line).group(0)
if monomial == '<=':
eqin.append(-1)
elif monomial == '>=':
eqin.append(1)
elif monomial == '==':
eqin.append(0)
else:
error = 4
                    print('\nInput file error: unexpected character; expected <=, >=, = but got', monomial)
st_line = st_line.replace(monomial, '', 1)
elif re.match('^[\d]+', st_line):
monomial = re.match('^[\d]+', st_line).group(0)
int_cast = int(re.match('^[\d]+', st_line).group(0))
b_vector.append(int_cast)
st_line = st_line.replace(monomial, '', 1)
else:
                if not sub_a_vector: # evaluates True when there are empty lines between constraints
a_matrix.pop()
break
        st_index += 1 # increment the line number and fetch the next line
        st_line = lines[st_index]
        if st_line == 'end\n' and error == 0: # look for the 'end' declaration and the absence of errors
            print('\nFile loaded successfully.')
            break
    return a_matrix, b_vector, c_vector, eqin, minmax # return all the lists and variables that were built
def convert_to_dual(input_filename, output_filename):
"""
Verifica si son restricciones de >=, <= o =. También tiene como tarea hacer un archivo de salida en el que muestre los resultados de las matrices que se llenaron.
:param input_filename: Nombre de archivo de la entrada del problema lineal
:param output_filename: Filename of the linear problem output
:return: Returns A-matrix, b-vector, c-vector, Variable-constraints, MinMax
"""
(a_matrix, b_vector, c_vector, eqin, minmax) = parse_lp1(input_filename) # Llamar la función parse_lp1
variable_constraints = [] # Convertir las restricciones a equivalentes duales '*' significa libre
if minmax == -1:
for el in eqin:
if el == 0:
variable_constraints.append('==')
elif el == 1:
variable_constraints.append('>=')
elif el == -1:
variable_constraints.append('<=')
    a_matrix = list(zip(a_matrix)) # intended as the transpose of the A-matrix (note: zip(a_matrix) only wraps each row in a tuple; the rows are unwrapped again later)
    minmax = -minmax # the dual of a min(max) problem is a max(min) problem
    outfile = open(output_filename, 'w') # write the problem to an output file
outfile.write('(Objective Function) b-vector: [' + ', '.join(map(str, b_vector)) + ']\n')
outfile.write('\nA-matrix: [')
thing = ''
for index, sub_a_vector in enumerate(a_matrix):
thing += '[ ' + ', '.join(map(str, sub_a_vector)) + ']'
if index != (len(a_matrix) - 1):
thing += ', '
outfile.write(thing + ']\n')
outfile.write('\n(Contraints) c-vector: [' + ', '.join(map(str, c_vector)) + ']\n')
outfile.write('\n(Variable Contraints) variable_constraints-vector: [' + ', '.join(map(str, c_vector)) + ']\n')
outfile.write('\nEqin: [' + ', '.join(map(str, eqin)) + ']\n')
outfile.write('\nMinMax: [' + str(minmax) + ']\n')
outfile.close()
return a_matrix, b_vector, c_vector, variable_constraints, eqin, minmax
(a_matrix, b_vector, c_vector, variable_contraints, eqin, minmax) = convert_to_dual('input-lp1', 'output-lp2')
"""
únicamente imprimimos los distintos arrays necesarios para realizar el programa.
"""
print(a_matrix)
print(b_vector)
print(c_vector)
print(variable_contraints)
"""
Solicita el número de restricciones para que el programa sepa las iteraciones que tiene que hacer para
la matriz de restricciones
"""
no_restricciones = int(input("Ingrese el no. de restricciones: "))
"""
Solicita el número de variables para que sepa la cantidad de columnas que tiene que tener el programa para crear una matriz
de restricciones.
Variables sirven para saber el número de columnas de la matriz
Restricciones sirven para saber el número de filas de la matriz
"""
no_variables = int(input("Ingrese el no. de variables: "))
'''
Check whether the coefficients are integers or not.
The number of variables is multiplied by the number of constraints, which gives the number
of coefficients that will be in the matrix.
A counter called "sumador_enteros" is incremented by one each time an integer coefficient is found.
When the number of coefficients matches the counter, the maximization can be solved.
'''
sumador_enteros = 0
numero_para_cant_enteros = no_variables * no_restricciones
enteros = 0
a_matrix2 = []
for i in a_matrix:
a_matrix2.append((i[0]))
print(a_matrix2)
for row in a_matrix2:
for elem in row:
num_int = round(elem)
#check_int = isinstance(elem, int)
if elem-num_int != 0:
print(elem)
print("No todos los coeficientes son enteros, vuelva ingresar")
else:
sumador_enteros+=1
print(elem)
print("Coeficiente sí es entero")
enteros = 1
print("numero_para_cant_enteros ", numero_para_cant_enteros)
print("sumador_enteros", sumador_enteros)
if sumador_enteros == numero_para_cant_enteros:
print("Ya todos son enteros, se resolverá la maximización")
else:
print("Debe actualizar para que sean sólo enteros")
sys.exit(1)
'''
Check whether the coefficients of the c-vector are integers or not.
This takes the number of variables into account.
A counter called "sumador_enteros_c" is incremented by one each time an integer coefficient is found.
When the number of variables matches the counter, the maximization can be solved.
'''
print(c_vector)
sumador_enteros_c = 0
numero_para_cant_enteros_c = no_variables
for row in c_vector:
    num_int = round(row)  # bug fix: check the current coefficient, not the leftover `elem` from the previous loop
    #check_int = isinstance(row, int)
    if row-num_int != 0:
        print(row)
        print("Not all coefficients are integers, please re-enter them")
    else:
        sumador_enteros_c+=1
        print(row)
        print("Coefficient is an integer")
        enteros = 1
print("numero_para_cant_enteros ", numero_para_cant_enteros_c)
print("sumador_enteros", sumador_enteros_c)
if sumador_enteros_c == numero_para_cant_enteros_c:
    print("All coefficients are integers, the maximization will be solved")
else:
    print("Update the coefficients so that they are all integers")
    sys.exit(1)
# Build the matrix
# here are the coefficients of the constraints
"""
Here the matrix is built with the slack variables and whatever artificial variables are needed.
"""
positions = []
a_matrix2 = []
for i in a_matrix:
a_matrix2.append((i[0]))
#print(a_matrix2[0])
print(a_matrix2)
# convert a_matrix2 into a numpy matrix so we can operate on it
mat = np.asmatrix(a_matrix2)
size = len(variable_contraints)
for i in range (len(variable_contraints)):
if variable_contraints[i] == "<=":
        # build a column of zeros whose length equals
        # the number of constraints (length of the symbol array)
        new_matrix = np.asmatrix(np.zeros(size))
        # place a 1 in the row corresponding to this constraint
        new_matrix[0,i]=1
        # reshape it into a column vector
        new_matrix.shape = (size,1)
        # get the dimensions of the current matrix
        x, y = mat.shape
        # append the slack column at the right end of the matrix
        mat = np.hstack((mat[:,:y], new_matrix, mat[:,y:]))
        print(a_matrix2[i])
        print("<= constraint")
if variable_contraints[i] == "==":
new_matrix = np.asmatrix(np.zeros(size))
new_matrix[0,i]=1
new_matrix.shape = (size,1)
x, y = mat.shape
mat = np.hstack((mat[:,:y], new_matrix, mat[:,y:]))
print(a_matrix2[i])
print("igual")
positions.append(y)
if variable_contraints[i] == ">=":
new_matrix = np.asmatrix(np.zeros(size))
new_matrix[0,i]=1
new_matrix1 = np.asmatrix(np.zeros(size))
new_matrix1[0,i]=-1
new_matrix.shape = (size,1)
new_matrix1.shape = (size,1)
x, y = mat.shape
mat = np.hstack((mat[:,:y], new_matrix, mat[:,y:]))
mat = np.hstack((mat[:,:y+1], new_matrix1, mat[:,y+1:]))
print(a_matrix2[i])
print("mayor")
positions.append(y)
#print(variable_contraints[i])
#print(variable_contraints[0])
print(mat)
# number of columns in the matrix
num_cols = mat.shape[1]
print(num_cols)
print(positions)
"""
Aquí es la orquetastación principal, porque aquí es donde se miden el número de filas y columnas que tendrá la tableau.
Así mismo se inicializa el algoritmo simplex de manera explícita. También se le indica si es un problema tipo Min o Max.Luego
empieza a buscar los puntos pivote, apoyándose de la eliminación gaussiana.
Más adelante han sido definidas dos funciones: una de maximización, donde se le envía la palabra 'MAX' y esta función
reconoce que tiene que resolver una maximización. La otra función es de minimización, que envía la palabra 'MIN'
lo que significa que se requiere de una minimización.
Estas palabras permiten que el algoritmo prepare la tableau según lo requiere la maximización y la minimización, porque
ambos tienen una resolución distinta.
"""
def simplex(of,basis,tableau,opt):
    # get the number of rows and columns of the tableau
    n_rows = tableau.shape[0]
    n_cols = tableau.shape[1]
    if opt =='MIN':
        # start the simplex algorithm
        # compute zj-cj. If cj - zj >= 0 for every column, the current solution is optimal.
        check = of - np.sum(np.reshape(of[list(basis)],(n_rows,1)) * tableau[:,0:n_cols-1],axis=0)
    else:
        # start the simplex algorithm
        # compute cj-zj. If zj - cj >= 0 for every column, the current solution is optimal.
        check = np.sum(np.reshape(of[list(basis)],(n_rows,1)) *tableau[:,0:n_cols-1],axis=0) - of
count = 0
while ~np.all(check >=0):
print(check)
        # determine the pivot column: the column with the minimum zj-cj
        pivot_col = np.argmin(check)
        # determine the positive elements in the pivot column; if there are none,
        # the optimal solution is unbounded
        positive_rows = np.where(tableau[:,pivot_col] > 0)[0]
        if positive_rows.size == 0:
            print('*******UNBOUNDED SOLUTION******')
            break
        # determine the pivot row: minimum-ratio test
divide=(tableau[positive_rows,n_cols-1]
/tableau[positive_rows,pivot_col])
pivot_row = positive_rows[np.where(divide
== divide.min())[0][-1]]
        # update the basis
        basis[pivot_row] = pivot_col
        # perform Gaussian elimination to make the pivot element one and the elements above and below it zero:
tableau[pivot_row,:]=(tableau[pivot_row,:]
/tableau[pivot_row,pivot_col])
for row in range(n_rows):
if row != pivot_row:
tableau[row,:] = (tableau[row,:]
- tableau[row,pivot_col]*tableau[pivot_row,:])
if opt =='MIN':
check = of - np.sum(np.reshape(of[list(basis)],(n_rows,1)) * tableau[:,0:n_cols-1],axis=0)
else:
check = np.sum(np.reshape(of[list(basis)],(n_rows,1)) *tableau[:,0:n_cols-1],axis=0) - of
count += 1
        print('Step %d' % count)
print(tableau)
return basis,tableau
def get_solution(of,basis,tableau):
    # get the number of columns of the tableau
    n_cols = tableau.shape[1]
    # recover the optimal solution
    solution = np.zeros(of.size)
    solution[list(basis)] = tableau[:,n_cols-1]
    # compute the optimal objective value
    value = np.sum(of[list(basis)] * tableau[:,n_cols-1])
    return solution,value
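# Quick sanity check (hypothetical example, not part of the original notebook):
# maximize 3*x1 + 5*x2 subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18.
# With slack variables s1, s2, s3 the starting tableau (use floats!) would be
#   of      = np.array([3., 5., 0., 0., 0.])
#   basis   = np.array([2, 3, 4])
#   tableau = np.array([[1., 0., 1., 0., 0., 4.],
#                       [0., 2., 0., 1., 0., 12.],
#                       [3., 2., 0., 0., 1., 18.]])
# and simplex(of, basis, tableau, 'MAX') followed by get_solution(...) should
# give x1 = 2, x2 = 6 with Z = 36.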
"""
Esta función es muy importante, ya que permite ingresar variables artificiales si son necesarias. Los símbolos que requieren
que se añadan variables artificiales son los signos de >=. El símbolo == requiere una variable artificial en vez de una de
holgura, el signo >= requiere de una variable artificial 1 y de una variable de holgura -1.
"""
n_b_vector = []
for i in b_vector:
list1 = [i]
n_b_vector.append(list1)
# This is the augmented matrix that is actually used
matrix_mat = np.concatenate((mat, n_b_vector),axis =1)
print(matrix_mat)
print(variable_contraints)
def check_availability(element, collection: iter):
return element in collection
verificar_menoresque = check_availability('<=', variable_contraints)
print(verificar_menoresque)
verificar_mayoresque = check_availability('>=', variable_contraints)
print(verificar_mayoresque)
verificar_igualesque = check_availability('==', variable_contraints)
print(verificar_igualesque)
"""
Esta función es utilizado por la maximización, por eso aquí se añaden los coeficientes de la función objetivo, y se añaden
la misma cantidad de ceros que las variables de holgura, por ejemplo si se añadieron 3 variables de holgura, esta identifica
que tiene que añadir 3 ceros. Pero si se añaden variables de holgura y artifiales, la suma de estas será la cantidad de ceros
que se añadan.
"""
matrix_cero = []
size_ceros = num_cols - no_variables
print(size_ceros)
for i in range(size_ceros):
matrix_cero.append(0)
print(matrix_cero)
objective_matrix = np.concatenate((c_vector, matrix_cero), axis=0)
print(objective_matrix )
"""
Esta función la hemos colocado, ya que nuestro algoritmo requiere que se le indican las variables de holgura, las cuales
son la base de la resolución del caso. Este array con bases es importante para el buen funcionamiento de los pivotes,
ya que si no se le dice explícitamente cuáles son las bases, sería difícil para nuestro algoritmo determinar cuál columna
sale y cuál fila entra.
"""
array_con_bases = []
numero_base = no_variables
for item in range(no_restricciones):
array_con_bases.append(item + numero_base)
print(array_con_bases)
"""
Esta es la función de maximización. Nos hemos fijado que la maximización requieres que los coeficientes de la función objetivo
no sean negativos, por eso se le manda explícitamente a la función simplex la palabra 'MAX' que identifica la maximización.
Eso permite que se diferencie la minimización con la maximización, y haga el procedimiento adecuado según lo requiera
cada caso.
"""
# Define the tableau:
tableau = np.array(matrix_mat)
print(tableau)
# Define the objective function and the initial basis
print(objective_matrix)
of = np.array(objective_matrix)
# initial basis
#basis = np.array([4,5,6])
basis = np.array(array_con_bases)
# Run the simplex algorithm
basis,tableau = simplex(of,basis,tableau,'MAX')
# Get the optimal solution
optimal_solution,optimal_value = get_solution(of,basis,tableau)
# Print the final tableau.
print('The final basis is:')
print(basis)
print('solution')
for i in range(len(optimal_solution)):
    print('X%d'%(i+1),'=',optimal_solution[i])
print('Z=',optimal_value)
#max_function()
"""
Since only maximizations are performed, only the code for those is kept.
It also prints the optimized solution.
"""
print("==== MAXIMIZATION ====")
# max_function() is not defined in this notebook; the maximization above is already run inline,
# so the call is left commented out.
#max_function()
"""
A través de un contador se verifica si las variables x óptimas son enteras. Si todas son enteras, se detiene porque el
resultado ya es entero, de lo contrario tiene que continuar con el programa donde se operarán las variables no
enteras.
"""
#Revisar si la solución es entera
cont_T = 0
for i in range (no_variables):
    # bug fix: the entries are numpy floats, so isinstance(..., int) was always False;
    # check instead whether the value has no fractional part
    check_int = float(optimal_solution[i]).is_integer()
    print(check_int)
    if check_int == True:
        cont_T = cont_T + 1
if cont_T == no_variables:
    print("All values are integers")
    sys.exit(1)
else:
    print("Not all values are integers, continue")
#print(optimal_solution)
"""
Se guarda un array con las soluciones aproximadas de las variables x de la solución óptima. A través de un for
se recorre según el número de variables, para que se garantice que todas las variables x hayan pasado por la prueba.
"""
round_sol = []
posible_sol = []
for i in range(no_variables):
round_sol.append(round(optimal_solution[i]))
# rounded solutions
print(round_sol)
# multiply the candidate answers by the objective function to test them
# here is the objective function
print(c_vector)
"""
Son dos arrays en los que se guardan los números enteros que aún cumplen con las restricciones.
"""
validos_x1=[] #este empieza en dos que es la solucion y termina en 5 que es la restricción
validos_x2=[] #este empieza en 3 y termina en 0
"""
Esta es una función que recibe el vector c 'c_vector' y la solución que se encontró. Tiene una variable temp inicializada
en cero, para luego recorrer con for el rango del vector c. Dentro del for, se hace una operación para que el número dentro
de vector_c[i] se multiplique con el número en sol[i], este resultado debe sumarse en la variable temp y la función
retorna el valor de la variable temp.
"""
#Valores optimos funcionales
def posible_funct_val(c_vector, sol):
temp=0
for i in range(len(c_vector)):
temp += c_vector[i]*sol[i]
return temp
"""
Se llama la función posible_funct_val, los arrays que se le mandan son el vector c original y la solución
aproximada a que se ha encontrado en unos pasos anteriores.
"""
posible_funct_val(c_vector,round_sol)
"""
Se definió una función pqcclr que recibe como parámetros a_matrix2, punto, variable_constraints y el b_vector.
básicamente lo que hace es que toma el número de restricciones para hacer un for, que dentro de sí contiene otro for
que se repite según el número de variables. Lo que hace es que suma a la variable temp inicializada en cero, sumarle el
producto de a_matrix2 en la posición [i][j] por el punto [j].
Se hacen unas comprobaciones para saber si el constraint es <=, >= o ==.
Si se encuentra el constraint <= y la variable temporal es > que b_b_vector[i], devuelve falso.
Si se encuentra el constraint >= y la variable temporal es < que b_b_vector[i], devuelve falso.
Si se encuentra el constraint == y la variable temporal es != que b_b_vector[i], devuelve falso.
Si ninguna de las anteriores se cumple, devuelve true.
Devolver true, significa que cumple con las restricciones, de lo contrario no cumple con las restricciones.
"""
#punto que cumplen con las restricciones
def pqcclr(a_matrix2, punto, variable_contraints, b_vector):
for i in range(no_restricciones):
temp = 0
for j in range(no_variables):
temp += a_matrix2[i][j]*punto[j]
if variable_contraints[i]=='<=' and temp > b_vector[i]:
return False
if variable_contraints[i]=='>=' and temp < b_vector[i]:
return False
if variable_contraints[i]=='==' and temp != b_vector[i]:
return False
return True
"""
Se mandan a la función pqcclr los parámetros a_matrix2, round_sol, variable_constraints y b_vector.
Devolver true, significa que cumple con las restricciones, de lo contrario no cumple con las restricciones.
"""
pqcclr(a_matrix2, round_sol, variable_contraints, b_vector)
"""
Se definió esta función para verificar si los puntos y las variables x aproximadas a enteros proporcionan una solución
óptima y que cumpla con las restricciones necesarias. Se prueban cada una de x aproximadas y se hace uso de la función
pqcclr realizada anteriormente para verificar que los puntos sigan estando dentro de las restricciones.
Al final, imprime las soluciones óptimas.
"""
import re
def mvp(c_vector,round_sol,a_matrix2, variable_contraints):
valores=[]
puntos=[]
rangos_variables=[]
for i in round_sol:
temp=[]
for j in range(int(i-i), int(i+(2*i))):
temp.append(j)
rangos_variables.append(temp)
#print(rangos_variables)
for i in range(no_variables):
print("Rango para variable x", i+1, ": ",rangos_variables[i])
for i in itertools.product(*rangos_variables):
combinacion=[]
for j in i:
combinacion.append(j)
#print(combinacion)
if(pqcclr(a_matrix2, combinacion, variable_contraints, b_vector)):
puntos.append(combinacion)
valores.append(posible_funct_val(c_vector,combinacion))
greater=valores[0]
for item in range(0, len(valores)):
if valores[item]>greater:
greater=valores[item]
grater_index = valores.index(valores[item])
#print(grater_index)
#print(puntos[grater_index])
print("Z óptimo entero: ", max(valores))
print("Solución con variables enteras: ", puntos[grater_index])
mvp(c_vector,round_sol,a_matrix2, variable_contraints)
```
|
github_jupyter
|
import numpy as np
import sys
import itertools
# En archivo exercise01.txt x1, x2 y x3 es el nombre de las variables, se usaron esas para seguir con el estándar de nombramiento en Programación Lineal.
# Es necesario escribir si es min/max para que el programa identifique si es de maximización o minimización, st para indicar que esas son las restricciones y end para indicar al programa que el problema termina ahí
def parse_coefficients(coefficient_list, monomial):
"""
Este es un parseador de coeficientes. Consiste en comprobar si una cadena tiene una expresión regular en donde pueda extraer caracteres específicos, en este caso se busca extraer los coeficientes.
Args:
:rtype: None
:param coefficient_list: Lista en la que se almacenarán los coeficientes
:param monomial: Una cadena (por ejemplo, -3x1) que será analizada hasta su coeficiente (por ejemplo, -3)
Verifica qué patrón coincide. Válidos son: (s)(n)lv
Los paréntesis indican la existencia opcional
s es + o - (la ausencia significa +)
n es un número (coeficiente, la ausencia significa 1)
l es una letra latina minúscula (letra variable)
v es un número, probablemente incremental (número variable)
Import re:
Una expresión regular (o RE) especifica un conjunto de cadenas que se corresponde con ella; las funciones de este módulo le permiten comprobar si una cadena particular se corresponde con una expresión regular dada (o si una expresión regular dada se corresponde con una cadena particular, que se reduce a lo mismo)
Source: https://docs.python.org/3/library/re.html
"""
import re
if re.match('[ ]*[\+ ]?[\d]+[\.]?[\d]*', monomial):
float_cast = float(re.match('[ ]*[\+ ]?[\d]+[\.]?[\d]*', monomial).group(0))
coefficient_list.append(float_cast)
elif re.match('[ ]*[\-][\d]+[\.]?[\d]*', monomial):
float_cast = float(re.match('[ ]*[\-][\d]+[\.]?[\d]*', monomial).group(0))
coefficient_list.append(float_cast)
elif re.match('[ ]*[\+]*[a-z][\d]+', monomial):
coefficient_list.append(1)
elif re.match('[ ]*[\-][a-z][\d]+', monomial):
coefficient_list.append(-1)
lines = []
def parse_lp1(input_filename):
"""
Esta función es la encargada de leer el archivo, hará uso del parser para mandar línea a línea el contenido del .txt y según los coeficientes que obtenga devolverá las matrices y arrays correspondientes.
Tiene tareas como verificar que el archivo haya sido encontrado, leer si es un problema de maximización o minimización, llenar las matrices/ arrays.
:rtype : tuple
:param input_filename: Nombre de archivo de la entrada del problema lineal
:return: Retorna A-matrix, b-vector, c-vector, MinMax
"""
import re
error = 0 # Inicializar la variable de error. Si el error!=0 entonces hubo un problema de entrada de archivo
try:
infile = open('Simplex.txt')
except FileNotFoundError:
error = 1
print('\nInput file error: Archivo no encontrado.') # Archivo no encontrado
#lines = []
if error != 1:
for line in infile:
lines.append(line)
infile.close()
for line in lines:
print(line, end='')
minmax_line = '' # Verficar si el problema es de maximización o de minimización
for line in lines:
if re.match('^[ ]*max|min', line):
minmax_line = line
minmax = 0
objective_function = ''
if re.match('^[ ]*max', minmax_line): #Si en el archivo se encuentra la palabra 'max' entonces el problema es de maximización
minmax = -1
objective_function = minmax_line
objective_function = objective_function.strip('max')
elif re.match('^[ ]*min', minmax_line): # Si en el archivo se encuentra la palabra 'min' entonces el problema es de minimización
minmax = 1
objective_function = minmax_line
objective_function = objective_function.strip('min')
if minmax_line == '' and minmax == 0: # Si en el archivo no se encuentra ni 'max' ni 'min' entonces no hay función objetivo
error = 2
print('\nInput file error: Función objetivo no encontrada.')
c_vector = [] # Rellenar el vector c con coeficientes de función objetiva
regex = re.compile('^[\+\- ]?[\d]*[\.]?[\d]*[a-z][\d+]')
while regex.match(objective_function):
monomial = regex.match(objective_function).group(0)
parse_coefficients(c_vector, monomial)
objective_function = objective_function.replace(monomial, '', 1)
a_matrix = [] # Rellenar la matriz A (coeficientes) y el vector b utilizando las restricciones del problema
b_vector = []
eqin = []
st_line = ''
st_index = 0
for index, line in enumerate(lines):
if 'st' in line:
st_index = index
st_line = line
if re.match('^[ ]*st', st_line):
st_line = st_line.replace('st', ' ', 1)
if st_line == '':
error = 3
print('\nInput file error: Línea de restricciones no encontrada. No existe la keyword \'st\'.')
while st_index < len(lines) - 1:
sub_a_vector = []
a_matrix.append(sub_a_vector)
while True:
st_line = st_line.strip(' ')
if re.match('^[\+\- ]?[\d]*[\.]?[\d]*[a-z][\d+]', st_line):
monomial = re.match('^[\+\- ]?[\d]*[\.]?[\d]*[a-z][\d+]', st_line).group(0)
parse_coefficients(sub_a_vector, monomial)
st_line = st_line.replace(monomial, '', 1)
elif re.match('^[<>=]+', st_line):
monomial = re.match('^[<>=]+', st_line).group(0)
if monomial == '<=':
eqin.append(-1)
elif monomial == '>=':
eqin.append(1)
elif monomial == '==':
eqin.append(0)
else:
error = 4
print('\nInput file error: Caracter inesperado; esperados <=, >=, = al menos', monomial)
st_line = st_line.replace(monomial, '', 1)
elif re.match('^[\d]+', st_line):
monomial = re.match('^[\d]+', st_line).group(0)
int_cast = int(re.match('^[\d]+', st_line).group(0))
b_vector.append(int_cast)
st_line = st_line.replace(monomial, '', 1)
else:
if not sub_a_vector: # Evalúa true cuando las líneas están vacías entre las restricciones
a_matrix.pop()
break
st_index += 1 # Incrementar el número de línea y obtener el siguiente
st_line = lines[st_index]
if st_line == 'end\n' and error == 0: # Búsqueda de la declaración final y ausencia de errores
print('\nArchivo cargado exitosamente.')
break
return a_matrix, b_vector, c_vector, eqin, minmax # Devolver todas las listas y variables creadas
def convert_to_dual(input_filename, output_filename):
"""
Verifica si son restricciones de >=, <= o =. También tiene como tarea hacer un archivo de salida en el que muestre los resultados de las matrices que se llenaron.
:param input_filename: Nombre de archivo de la entrada del problema lineal
:param output_filename: Filename of the linear problem output
:return: Returns A-matrix, b-vector, c-vector, Variable-constraints, MinMax
"""
(a_matrix, b_vector, c_vector, eqin, minmax) = parse_lp1(input_filename) # Llamar la función parse_lp1
variable_constraints = [] # Convertir las restricciones a equivalentes duales '*' significa libre
if minmax == -1:
for el in eqin:
if el == 0:
variable_constraints.append('==')
elif el == 1:
variable_constraints.append('>=')
elif el == -1:
variable_constraints.append('<=')
a_matrix = list(zip(a_matrix)) # Traspuesta de A-matrix
minmax = -minmax # min(max) el problema dual es max(min)
outfile = open(output_filename, 'w') # Escribir el problema a un archivo de salida
outfile.write('(Objective Function) b-vector: [' + ', '.join(map(str, b_vector)) + ']\n')
outfile.write('\nA-matrix: [')
thing = ''
for index, sub_a_vector in enumerate(a_matrix):
thing += '[ ' + ', '.join(map(str, sub_a_vector)) + ']'
if index != (len(a_matrix) - 1):
thing += ', '
outfile.write(thing + ']\n')
    outfile.write('\n(Constraints) c-vector: [' + ', '.join(map(str, c_vector)) + ']\n')
    outfile.write('\n(Variable Constraints) variable_constraints-vector: [' + ', '.join(map(str, variable_constraints)) + ']\n')
outfile.write('\nEqin: [' + ', '.join(map(str, eqin)) + ']\n')
outfile.write('\nMinMax: [' + str(minmax) + ']\n')
outfile.close()
return a_matrix, b_vector, c_vector, variable_constraints, eqin, minmax
(a_matrix, b_vector, c_vector, variable_contraints, eqin, minmax) = convert_to_dual('input-lp1', 'output-lp2')
"""
únicamente imprimimos los distintos arrays necesarios para realizar el programa.
"""
print(a_matrix)
print(b_vector)
print(c_vector)
print(variable_contraints)
"""
Solicita el número de restricciones para que el programa sepa las iteraciones que tiene que hacer para
la matriz de restricciones
"""
no_restricciones = int(input("Enter the number of constraints: "))
"""
Solicita el número de variables para que sepa la cantidad de columnas que tiene que tener el programa para crear una matriz
de restricciones.
Variables sirven para saber el número de columnas de la matriz
Restricciones sirven para saber el número de filas de la matriz
"""
no_variables = int(input("Ingrese el no. de variables: "))
'''
Check whether the coefficients are integers or not.
The number of variables is multiplied by the number of constraints, which gives the number of
coefficients that will be in the matrix.
A counter called "sumador_enteros" is incremented by one every time an integer coefficient is found.
When the number of coefficients matches the counter, the maximization can be solved.
'''
sumador_enteros = 0
numero_para_cant_enteros = no_variables * no_restricciones
enteros = 0
a_matrix2 = []
for i in a_matrix:
a_matrix2.append((i[0]))
print(a_matrix2)
for row in a_matrix2:
    for elem in row:
        num_int = round(elem)
        #check_int = isinstance(elem, int)
        if elem-num_int != 0:
            print(elem)
            print("Not all coefficients are integers, please enter them again")
        else:
            sumador_enteros+=1
            print(elem)
            print("Coefficient is an integer")
            enteros = 1
print("numero_para_cant_enteros ", numero_para_cant_enteros)
print("sumador_enteros", sumador_enteros)
if sumador_enteros == numero_para_cant_enteros:
    print("All coefficients are integers; the maximization will be solved")
else:
    print("You must update the input so that all coefficients are integers")
    sys.exit(1)
'''
Check whether the coefficients of the C vector are integers or not.
It takes into account the number of variables.
A counter called "sumador_enteros_c" is incremented by one every time an integer coefficient is found.
When the number of variables matches the counter, the maximization can be solved.
'''
print(c_vector)
sumador_enteros_c = 0
numero_para_cant_enteros_c = no_variables
for row in c_vector:
    num_int = round(row)  # check each objective-function coefficient
    #check_int = isinstance(row, int)
    if row-num_int != 0:
        print(row)
        print("Not all coefficients are integers, please enter them again")
    else:
        sumador_enteros_c+=1
        print(row)
        print("Coefficient is an integer")
        enteros = 1
print("numero_para_cant_enteros ", numero_para_cant_enteros_c)
print("sumador_enteros", sumador_enteros_c)
if sumador_enteros_c == numero_para_cant_enteros_c:
    print("All coefficients are integers; the maximization will be solved")
else:
    print("You must update the input so that all coefficients are integers")
    sys.exit(1)
#Build the matrix
#here are the coefficients of the constraints
"""
Here the matrix is built with the slack variables and with whatever artificial variables are needed.
"""
positions = []
a_matrix2 = []
for i in a_matrix:
a_matrix2.append((i[0]))
#print(a_matrix2[0])
print(a_matrix2)
#convert a_matrix2 into a numpy matrix so we can operate on it
mat = np.asmatrix(a_matrix2)
size = len(variable_contraints)
for i in range(len(variable_contraints)):
    if variable_contraints[i] == "<=":
        #build a column of zeros whose length equals
        # the number of rows of the matrix (length of the constraint-symbol array)
        new_matrix = np.asmatrix(np.zeros(size))
        #place a 1 in the position corresponding to the row
        # of this constraint
        new_matrix[0,i]=1
        #reshape it so that it becomes a column vector
        new_matrix.shape = (size,1)
        # get the dimensions of the current matrix
        x, y = mat.shape
        # append the column of zeros (with the 1 in the corresponding position)
        # at the right-hand end of the matrix
        mat = np.hstack((mat[:,:y], new_matrix, mat[:,y:]))
        print(a_matrix2[i])
        print("<= constraint")
if variable_contraints[i] == "==":
new_matrix = np.asmatrix(np.zeros(size))
new_matrix[0,i]=1
new_matrix.shape = (size,1)
x, y = mat.shape
mat = np.hstack((mat[:,:y], new_matrix, mat[:,y:]))
print(a_matrix2[i])
print("igual")
positions.append(y)
if variable_contraints[i] == ">=":
new_matrix = np.asmatrix(np.zeros(size))
new_matrix[0,i]=1
new_matrix1 = np.asmatrix(np.zeros(size))
new_matrix1[0,i]=-1
new_matrix.shape = (size,1)
new_matrix1.shape = (size,1)
x, y = mat.shape
mat = np.hstack((mat[:,:y], new_matrix, mat[:,y:]))
mat = np.hstack((mat[:,:y+1], new_matrix1, mat[:,y+1:]))
print(a_matrix2[i])
print("mayor")
positions.append(y)
#print(variable_contraints[i])
#print(variable_contraints[0])
print(mat)
#Number of columns in the matrix
num_cols = mat.shape[1]
print(num_cols)
print(positions)
"""
Aquí es la orquetastación principal, porque aquí es donde se miden el número de filas y columnas que tendrá la tableau.
Así mismo se inicializa el algoritmo simplex de manera explícita. También se le indica si es un problema tipo Min o Max.Luego
empieza a buscar los puntos pivote, apoyándose de la eliminación gaussiana.
Más adelante han sido definidas dos funciones: una de maximización, donde se le envía la palabra 'MAX' y esta función
reconoce que tiene que resolver una maximización. La otra función es de minimización, que envía la palabra 'MIN'
lo que significa que se requiere de una minimización.
Estas palabras permiten que el algoritmo prepare la tableau según lo requiere la maximización y la minimización, porque
ambos tienen una resolución distinta.
"""
def simplex(of,basis,tableau,opt):
    # get the number of rows and columns of the tableau
n_rows = tableau.shape[0]
n_cols = tableau.shape[1]
if opt =='MIN':
        # start the simplex algorithm
        # compute zj - cj; if cj - zj >= 0 for every column, the current solution is optimal
        check = of - np.sum(np.reshape(of[list(basis)],(n_rows,1)) * tableau[:,0:n_cols-1],axis=0)
else:
        # start the simplex algorithm
        # compute cj - zj; if zj - cj >= 0 for every column, the current solution is optimal
        check = np.sum(np.reshape(of[list(basis)],(n_rows,1)) *tableau[:,0:n_cols-1],axis=0) - of
count = 0
while ~np.all(check >=0):
print(check)
        # determine the pivot column: the column corresponding to the minimum zj - cj
pivot_col = np.argmin(check)
        # determine the positive elements in the pivot column; if there are no positive
        # elements in the pivot column, the optimal solution is unbounded
positive_rows = np.where(tableau[:,pivot_col] > 0)[0]
if positive_rows.size == 0:
            print('*******UNBOUNDED SOLUTION******')
break
        # determine the pivot row: min-ratio test
divide=(tableau[positive_rows,n_cols-1]
/tableau[positive_rows,pivot_col])
pivot_row = positive_rows[np.where(divide
== divide.min())[0][-1]]
        # update the basis
basis[pivot_row] = pivot_col
        # perform Gaussian elimination to make the pivot element one and the elements above and below it zero:
tableau[pivot_row,:]=(tableau[pivot_row,:]
/tableau[pivot_row,pivot_col])
for row in range(n_rows):
if row != pivot_row:
tableau[row,:] = (tableau[row,:]
- tableau[row,pivot_col]*tableau[pivot_row,:])
if opt =='MIN':
check = of - np.sum(np.reshape(of[list(basis)],(n_rows,1)) * tableau[:,0:n_cols-1],axis=0)
else:
check = np.sum(np.reshape(of[list(basis)],(n_rows,1)) *tableau[:,0:n_cols-1],axis=0) - of
count += 1
        print('Step %d' % count)
print(tableau)
return basis,tableau
def get_solution(of,basis,tableau):
    # get the number of columns of the tableau
    n_cols = tableau.shape[1]
    # build the optimal solution vector
    solution = np.zeros(of.size)
    solution[list(basis)] = tableau[:,n_cols-1]
    # determine the optimal objective value
    value = np.sum(of[list(basis)] * tableau[:,n_cols-1])
    return solution,value
"""
Esta función es muy importante, ya que permite ingresar variables artificiales si son necesarias. Los símbolos que requieren
que se añadan variables artificiales son los signos de >=. El símbolo == requiere una variable artificial en vez de una de
holgura, el signo >= requiere de una variable artificial 1 y de una variable de holgura -1.
"""
n_b_vector = []
for i in b_vector:
list1 = [i]
n_b_vector.append(list1)
#This is the matrix we will actually use
matrix_mat = np.concatenate((mat, n_b_vector),axis =1)
print(matrix_mat)
print(variable_contraints)
def check_availability(element, collection: iter):
return element in collection
verificar_menoresque = check_availability('<=', variable_contraints)
print(verificar_menoresque)
verificar_mayoresque = check_availability('>=', variable_contraints)
print(verificar_mayoresque)
verificar_igualesque = check_availability('==', variable_contraints)
print(verificar_igualesque)
"""
Esta función es utilizado por la maximización, por eso aquí se añaden los coeficientes de la función objetivo, y se añaden
la misma cantidad de ceros que las variables de holgura, por ejemplo si se añadieron 3 variables de holgura, esta identifica
que tiene que añadir 3 ceros. Pero si se añaden variables de holgura y artifiales, la suma de estas será la cantidad de ceros
que se añadan.
"""
matrix_cero = []
size_ceros = num_cols - no_variables
print(size_ceros)
for i in range(size_ceros):
matrix_cero.append(0)
print(matrix_cero)
objective_matrix = np.concatenate((c_vector, matrix_cero), axis=0)
print(objective_matrix )
"""
Esta función la hemos colocado, ya que nuestro algoritmo requiere que se le indican las variables de holgura, las cuales
son la base de la resolución del caso. Este array con bases es importante para el buen funcionamiento de los pivotes,
ya que si no se le dice explícitamente cuáles son las bases, sería difícil para nuestro algoritmo determinar cuál columna
sale y cuál fila entra.
"""
array_con_bases = []
numero_base = no_variables
for item in range(no_restricciones):
array_con_bases.append(item + numero_base)
print(array_con_bases)
"""
Esta es la función de maximización. Nos hemos fijado que la maximización requieres que los coeficientes de la función objetivo
no sean negativos, por eso se le manda explícitamente a la función simplex la palabra 'MAX' que identifica la maximización.
Eso permite que se diferencie la minimización con la maximización, y haga el procedimiento adecuado según lo requiera
cada caso.
"""
# Define the tableau:
tableau = np.array(matrix_mat)
print(tableau)
# Define the objective function and the initial basis
print(objective_matrix)
of = np.array(objective_matrix)
# initial basis
#basis = np.array([4,5,6])
basis = np.array(array_con_bases)
# Run the simplex algorithm
basis,tableau = simplex(of,basis,tableau,'MAX')
# Get the optimal solution
optimal_solution,optimal_value = get_solution(of,basis,tableau)
# Print the final tableau.
print('The final basis is:')
print(basis)
print('Solution:')
for i in range(len(optimal_solution)):
    print('X%d'%(i+1),'=',optimal_solution[i])
print('Z=',optimal_value)
#max_function()
"""
A través de un contador se verifica si las variables x óptimas son enteras. Si todas son enteras, se detiene porque el
resultado ya es entero, de lo contrario tiene que continuar con el programa donde se operarán las variables no
enteras.
"""
#Check whether the solution is integer
cont_T = 0
for i in range(no_variables):
    check_int = float(optimal_solution[i]).is_integer()  # test integrality of the value itself (numpy floats are never int instances)
    print(check_int)
    if check_int == True:
        cont_T = cont_T + 1
if cont_T == no_variables:
    print("All answers are integers")
    sys.exit(1)
else:
    print("Not all answers are integers, proceed")
#print(optimal_solution)
"""
Se guarda un array con las soluciones aproximadas de las variables x de la solución óptima. A través de un for
se recorre según el número de variables, para que se garantice que todas las variables x hayan pasado por la prueba.
"""
round_sol = []
posible_sol = []
for i in range(no_variables):
    round_sol.append(round(optimal_solution[i]))
#Rounded (approximate) solutions
print(round_sol)
#Multiply the candidate answers by the objective function to test them
#Here is the objective function
print(c_vector)
"""
Two arrays that store the integer values which still satisfy the constraints.
"""
validos_x1=[] #this one starts at 2, which is the solution, and ends at 5, which is the constraint
validos_x2=[] #this one starts at 3 and ends at 0
"""
Esta es una función que recibe el vector c 'c_vector' y la solución que se encontró. Tiene una variable temp inicializada
en cero, para luego recorrer con for el rango del vector c. Dentro del for, se hace una operación para que el número dentro
de vector_c[i] se multiplique con el número en sol[i], este resultado debe sumarse en la variable temp y la función
retorna el valor de la variable temp.
"""
#Candidate objective-function values
def posible_funct_val(c_vector, sol):
temp=0
for i in range(len(c_vector)):
temp += c_vector[i]*sol[i]
return temp
"""
Se llama la función posible_funct_val, los arrays que se le mandan son el vector c original y la solución
aproximada a que se ha encontrado en unos pasos anteriores.
"""
posible_funct_val(c_vector,round_sol)
"""
Se definió una función pqcclr que recibe como parámetros a_matrix2, punto, variable_constraints y el b_vector.
básicamente lo que hace es que toma el número de restricciones para hacer un for, que dentro de sí contiene otro for
que se repite según el número de variables. Lo que hace es que suma a la variable temp inicializada en cero, sumarle el
producto de a_matrix2 en la posición [i][j] por el punto [j].
Se hacen unas comprobaciones para saber si el constraint es <=, >= o ==.
Si se encuentra el constraint <= y la variable temporal es > que b_b_vector[i], devuelve falso.
Si se encuentra el constraint >= y la variable temporal es < que b_b_vector[i], devuelve falso.
Si se encuentra el constraint == y la variable temporal es != que b_b_vector[i], devuelve falso.
Si ninguna de las anteriores se cumple, devuelve true.
Devolver true, significa que cumple con las restricciones, de lo contrario no cumple con las restricciones.
"""
#points that satisfy the constraints
def pqcclr(a_matrix2, punto, variable_contraints, b_vector):
for i in range(no_restricciones):
temp = 0
for j in range(no_variables):
temp += a_matrix2[i][j]*punto[j]
if variable_contraints[i]=='<=' and temp > b_vector[i]:
return False
if variable_contraints[i]=='>=' and temp < b_vector[i]:
return False
if variable_contraints[i]=='==' and temp != b_vector[i]:
return False
return True
"""
Se mandan a la función pqcclr los parámetros a_matrix2, round_sol, variable_constraints y b_vector.
Devolver true, significa que cumple con las restricciones, de lo contrario no cumple con las restricciones.
"""
pqcclr(a_matrix2, round_sol, variable_contraints, b_vector)
"""
Se definió esta función para verificar si los puntos y las variables x aproximadas a enteros proporcionan una solución
óptima y que cumpla con las restricciones necesarias. Se prueban cada una de x aproximadas y se hace uso de la función
pqcclr realizada anteriormente para verificar que los puntos sigan estando dentro de las restricciones.
Al final, imprime las soluciones óptimas.
"""
import itertools  # needed for itertools.product used in mvp below
def mvp(c_vector,round_sol,a_matrix2, variable_contraints):
valores=[]
puntos=[]
rangos_variables=[]
for i in round_sol:
temp=[]
for j in range(int(i-i), int(i+(2*i))):
temp.append(j)
rangos_variables.append(temp)
#print(rangos_variables)
for i in range(no_variables):
print("Rango para variable x", i+1, ": ",rangos_variables[i])
for i in itertools.product(*rangos_variables):
combinacion=[]
for j in i:
combinacion.append(j)
#print(combinacion)
if(pqcclr(a_matrix2, combinacion, variable_contraints, b_vector)):
puntos.append(combinacion)
valores.append(posible_funct_val(c_vector,combinacion))
    greater=valores[0]
    grater_index = 0  # index of the best point found so far
    for item in range(0, len(valores)):
        if valores[item]>greater:
            greater=valores[item]
            grater_index = valores.index(valores[item])
    #print(grater_index)
    #print(puntos[grater_index])
    print("Optimal integer Z: ", max(valores))
    print("Solution with integer variables: ", puntos[grater_index])
mvp(c_vector,round_sol,a_matrix2, variable_contraints)
# Projected correlation functions with CosmoSIS
The basic CosmoSIS pipeline in this repository will produce projected correlation functions for galaxy clustering $w_{gg}$, the galaxy density-intrinsic shear correlation $w_{g+}$, and the intrinsic shear auto-correlation $w_{++}$.
Models must be chosen for the computation of desired power spectra $P(k)$; here, we use halofit for the non-linear matter power, and we use the Non-Linear Alignment (NLA) model for the matter-intrinsic power $P_{\delta{}I}(k)$ and intrinsic auto-power $P_{II}(k)$.
Alternative choices might include models for baryonic contributions to the matter power, or more complex intrinsic alignment models, e.g. TATT, EFT, halo models, etc.
CosmoSIS implements no specific Hankel routine for the transformation of $P(k,z)$'s into $w(r_p,z)$'s, but we can use the Hankel transformers implemented for $C_\ell$'s. After running into SegFaults with nicaea, I have switched to another CosmoSIS default module: cl_to_corr.
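For orientation, the quantities being computed are line-of-sight projections of Hankel transforms of the 3D power spectra; schematically, and up to conventions and numerical prefactors (see Mandelbaum et al. 2011; Singh et al. 2016),

$$w_{gg}(r_p) \propto \int \mathrm{d}z\, W(z) \int \frac{k\,\mathrm{d}k}{2\pi}\, P_{gg}(k,z)\, J_0(k r_p), \qquad w_{g+}(r_p) \propto \int \mathrm{d}z\, W(z) \int \frac{k\,\mathrm{d}k}{2\pi}\, P_{\delta I}(k,z)\, J_2(k r_p),$$

$$w_{++}(r_p) \propto \int \mathrm{d}z\, W(z) \int \frac{k\,\mathrm{d}k}{2\pi}\, P_{II}(k,z)\, \left[J_0(k r_p) + J_4(k r_p)\right],$$

where $W(z)$ is a window function built from the redshift distributions of the correlated samples.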
The pipeline "Cosmosis_wgplus.ini" fully specifies the workflow, and contains comments describing the procedure - I recommend reading these before proceeding.
Let's generate a basic Smail-type redshift distribution $n(z)$ to demonstrate the pipeline:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# these choices will produce something vaguely GAMA-like
alpha = 2.
beta = 2.2
z0 = 0.3 / np.sqrt(2.)
z = np.linspace(0.0, 0.51, 52)
nz = z**alpha * np.exp(-(z/z0)**beta)
# normalise
nz /= np.trapz(nz, x=z)
# visualise
plt.plot(z, nz)
plt.xlabel('$z$', fontsize='x-large')
plt.ylabel('$n(z)$', fontsize='x-large')
# save as nofz.txt - to be read-in by CosmoSIS
np.savetxt('nofz.txt', np.column_stack((z, nz)), header='z\tbin_1')
```
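The same text file can also hold several samples as additional columns (bin_1, bin_2, ...), assuming the $n(z)$ loader configured in the ini file is pointed at the extra columns; a purely hypothetical second sample, for illustration only:
```
# hypothetical second Smail-type sample, written alongside the first
# (this file is not used by the pipeline below)
nz2 = z**1.5 * np.exp(-(z/z0)**beta)
nz2 /= np.trapz(nz2, x=z)
np.savetxt('nofz_two_samples.txt', np.column_stack((z, nz, nz2)), header='z\tbin_1\tbin_2')
```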
We will use this $n(z)$ to describe all of our samples here, but a real analysis will likely employ several such distributions, each describing a sample of galaxies selected on quantities such as colour, magnitude, mass, etc. In those cases, labels for each sample ("nz_test" in this pipeline) must be carefully tracked, and some of the modules written for this notebook will need generalising. For now, let us run this simplified pipeline (this can also be done from the command line with "cosmosis CosmoSIS_wgplus.ini"):
```
from cosmosis.runtime.config import Inifile
from cosmosis.runtime.pipeline import LikelihoodPipeline
from cosmosis.samplers.test.test_sampler import TestSampler
from cosmosis.output.in_memory_output import InMemoryOutput
ini = Inifile("CosmoSIS_wgplus.ini")
pipeline = LikelihoodPipeline(ini)
sampler = TestSampler(ini, pipeline, None)
sampler.config()
sampler.execute()
```
You can see from the stdout that the runtimes for the 'project' modules (the Hankel pieces) are ~30ms for $w_{gg}$ and $w_{g+}$, and twice that for $w_{++}$, which requires two calls (the sum of the $J_0$ and $J_4$ Hankel integrations; see Singh et al. 2016).
There should now be a subdirectory called 'datablock', which contains power spectra, distances, parameter values, redshift distributions, and derived quantities such as our projected correlation functions.
```
import os
ls = os.listdir('datablock')
ls.sort()
print(ls)
```
Now let's take a look at the theoretical curves, and compare them to KiDS+GAMA measurements of projected statistics (presented in Johnston et al., 2019). Note that we are using a toy $n(z)$, and ignoring the integral constraint for galaxy clustering, so we do not expect a perfect reproduction. Since we ran with unit galaxy bias $b_g$ and NLA amplitude $A_{IA}$ (see values.ini file) we will re-scale the theory spectra by the best fit parameters from Johnston et al. (2019), who fitted to scales $r_p>6\;{\rm{Mpc}}/h$.
```
from os.path import join
f, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(13, 7))
plt.yscale('log')
plt.xscale('log')
rp = np.loadtxt('datablock/projected_galaxy_intrinsic/theta.txt')
wgg = np.loadtxt('datablock/projected_galaxy_power/wgg_r_1_1.txt')
wgp = np.loadtxt('datablock/projected_galaxy_intrinsic/wgp_r_1_1.txt')
wpp = np.loadtxt('datablock/projected_intrinsic/wpp_r_1_1.txt')
# wg+ is negative by GGL convention, but we tend to measure radial alignments as positive
wgp = -wgp
# mark the non-linear regime where Johnston et al., 2019 did not fit their models
for a in ax:
a.axvspan(1e-2, 6., color='grey', alpha=0.2, lw=0)
# let's plot the measured data-points for KiDS+GAMA & SDSS Main samples,
# split by redshift and/or galaxy colour
# colours/labels/horizontal offsets, for clarity
keys = ['z2_b','z2_r','z1_b','z1_r','sdss_b','sdss_r']
names = dict(zip(keys, ['KG $z>0.26$','KG $z>0.26$','KG $z<0.26$','KG $z<0.26$','SDSS Main','SDSS Main']))
cols = dict(zip(keys, ['blue','red','darkcyan','darkorange','steelblue','maroon']))
marks = dict(zip(keys, ['^','^','v','v','o','o']))
split = dict(zip(keys, [0.97,0.97,1.,1.,1.03,1.03]))
bias = dict(zip(keys, [1.10,1.52,1.55,1.84,0.88,1.19]))
aia = dict(zip(keys, [0.21,3.18,0.21,3.18,0.21,3.18]))
cosmosis_curves = {}
for df in os.listdir('J19_measurements'):
_r, _w, _e = np.loadtxt(join('J19_measurements', df)).T
name = df.replace('wgp_','').replace('wgg_','')
if name.endswith('_r'): a = ax[0]
elif name.endswith('_b'): a = ax[1]
# scale theory correlation functions
if 'wgg' in df:
th_w = wgg * bias[name]**2.
label = None
elif 'wgp' in df:
th_w = wgp * aia[name] * bias[name]
th_wpp = wpp * aia[name]**2.
label = names[name]
# discard largest-r_p bin for low-z GAMA (see Johnston et al., 2019)
if 'z1' in df:
_r, _w, _e = map(lambda x: x[:-1], (_r, _w, _e))
# plot measurements, with open points for negative values
c = _w > 0.
eb = a.errorbar(_r[c]*split[name], _w[c], _e[c],
ls='', marker=marks[name], c=cols[name],
label=label, capsize=1.5)
if any(_w < 0.):
a.errorbar(_r[~c]*split[name], -_w[~c], _e[~c],
ls='', marker=marks[name], c=cols[name],
label=None, capsize=1.5, mfc='none')
# plot theory curves
a.plot(rp, th_w, c=eb[0].get_c())
cosmosis_curves[df] = th_w # store for comparisons
# also plot expected w++ for completeness, though low S/N
# means that we have no measurements to compare with
if 'wgp' in df:
a.plot(rp, th_wpp, c=eb[0].get_c(), ls='-.')
cosmosis_curves[df+'wpp'] = th_wpp # store for comparisons
for lab, a in zip(('Red galaxies ($g-r>0.66$)', 'Blue galaxies ($g-r<0.66$)'), ax):
a.set_title(lab, fontsize='xx-large')
l = a.legend(loc='best', ncol=2, frameon=0, fontsize='xx-large')
a.set_xlabel('$r_p\,[\\rm{Mpc/h}]$', fontsize='xx-large')
ax[0].set_ylabel('$w_{xy}(r_p)\,[\\rm{Mpc/h}]$', fontsize='xx-large')
ax[0].annotate('$w_{gg}$', xy=(10,70), xycoords='data', fontsize='x-large')
ax[0].annotate('$w_{g+}$', xy=(10,0.5), xycoords='data', fontsize='x-large')
ax[0].annotate('$w_{++}$', xy=(10,0.004), xycoords='data', fontsize='x-large')
plt.xlim(0.07, 70)
plt.ylim(1e-4, None)
plt.tight_layout()
```
One sees that the theoretical curves produced here are very close to the published, no-shortcuts versions. We can also do the Hankel transformation with CCL. Let's use the same $P(k)$'s and compare:
```
# we need to extrapolate onto a wider wavevector range for the integration
# (in CosmoSIS, this is done internally)
_k = np.loadtxt('datablock/galaxy_power/k_h.txt')
kmin, kmax = _k.min(), _k.max()
k = np.logspace(-5, 3, 80)
# load 3D power spectra
pk_z = np.loadtxt('datablock/galaxy_power/z.txt')
Pgg = np.loadtxt('datablock/galaxy_power/p_k.txt')
PgI = np.loadtxt('datablock/galaxy_intrinsic_power/p_k.txt')
PII = np.loadtxt('datablock/intrinsic_power/p_k.txt')
# also load the redshift window function W(z) generated by CosmoSIS helper module projected_alignments.py
# W(z) depends only on redshift distributions; see Mandelbaum et al. 2011 for equations
z = np.loadtxt('datablock/projected_galaxy_power/z.txt')
Wz = np.loadtxt('datablock/projected_galaxy_power/w_z.txt')
# also load and cut-down the r_p range for the output
# r_p = np.loadtxt('datablock/projected_galaxy_power/r_p.txt')
# r_p = r_p[(r_p > 0.05) & (r_p < 100.)]
r_p = np.logspace(-5, 8, 130)
# interpolate P(k)'s onto same redshifts as n(z)
from scipy.interpolate import interp2d
_Pgg = interp2d(_k, pk_z, Pgg, kind='cubic', bounds_error=False)(k, z)
_PgI = interp2d(_k, pk_z, PgI, kind='cubic', bounds_error=False)(k, z)
_PII = interp2d(_k, pk_z, PII, kind='cubic', bounds_error=False)(k, z)
# we have NaNs outside the integration range; replace with simple
# power law extrapolations, as done internally by CosmoSIS transformer module
lower = 1.
upper = -2.
def extrapolate(P):
bad_low = np.isnan(P) & (k < kmin)
bad_high = np.isnan(P) & (k > kmax)
_P = P.copy()
_P[bad_low] = P[0] * (k[bad_low] / kmin)**lower
_P[bad_high] = P[-1] * (k[bad_high] / kmax)**upper
return _P
_Pgg = np.array([extrapolate(pk) for pk in _Pgg])
_PgI = np.array([extrapolate(pk) for pk in _PgI])
_PII = np.array([extrapolate(pk) for pk in _PII])
import pyccl as ccl
# initialise a Cosmology object, with the same parameters as the CosmoSIS pipeline
cosmo = ccl.Cosmology(Omega_c=0.25, Omega_b=0.05, h=0.7, A_s=2e-9, n_s=0.96, m_nu=0.06,
matter_power_spectrum='halofit')
%%time
# Hankel transform for w_xy(r, z)
method = 'fftlog'
w_gg_rz = np.array([ccl.correlation(cosmo, k, pk, r_p, type='NN', method=method) for pk in _Pgg])
w_gp_rz = np.array([ccl.correlation(cosmo, k, pk, r_p, type='NG', method=method) for pk in _PgI])
w_pp_rz = np.array([ccl.correlation(cosmo, k, pk, r_p, type='GG+', method=method) \
+ ccl.correlation(cosmo, k, pk, r_p, type='GG-', method=method) for pk in _PII])
# and integrate (Riemann sum) over the redshift window function for projected statistics
dz = z[1] - z[0]
wgg = (w_gg_rz.T * Wz * dz).sum(axis=-1)
wgp = (w_gp_rz.T * Wz * dz).sum(axis=-1)
wpp = (w_pp_rz.T * Wz * dz).sum(axis=-1)
```
Out-of-the-box speed is of a similar order to CosmoSIS for the FFTLog integration method; there may be room for tuning of this and/or CosmoSIS via accuracy settings etc. Let's take a look at the difference between these and the CosmoSIS outputs (note that the Hankel transformer is the only real variable here, as we loaded the CosmoSIS power spectra for projection):
```
from scipy.interpolate import interp1d
from matplotlib.ticker import FuncFormatter
def compare_CosmoSIS_and_CCL(r_p, wgg, wgp, wpp):
f1, ax1 = plt.subplots(1, 3, sharex=True, sharey=False, figsize=(12,4))
ax1[0].set_xscale('log')
c = (r_p > 0.05) & (r_p < 80)
for df in os.listdir('J19_measurements'):
name = df.replace('wgp_','').replace('wgg_','')
# scale theory correlation functions
if 'wgg' in df:
th_w = wgg * bias[name]**2.
label = None
a = ax1[0]
elif 'wgp' in df:
th_w = wgp * aia[name] * bias[name]
th_wpp = wpp * aia[name]**2.
label = names[name]
a = ax1[1]
# compare with CosmoSIS version
th_w_cosmosis = interp1d(rp, cosmosis_curves[df],
kind='cubic', bounds_error=False, fill_value=np.nan)(r_p)
ratio = th_w / th_w_cosmosis - 1.
a.plot(r_p[c], ratio[c], c=cols[name], label=label)
if 'wgp' in df:
th_wpp_cosmosis = interp1d(rp, cosmosis_curves[df+'wpp'],
kind='cubic', bounds_error=False, fill_value=np.nan)(r_p)
ratio1 = th_wpp / th_wpp_cosmosis - 1.
            ax1[2].plot(r_p[c], ratio1[c], c=cols[name])
f1.text(0.5, 0.995, 'Fractional difference CCL / Cosmosis',
fontsize='xx-large', ha='center', va='center')
ax1[0].set_ylabel('$w_{gg}$', fontsize='x-large')
ax1[1].set_ylabel('$w_{g+}$', fontsize='x-large')
ax1[2].set_ylabel('$w_{++}$', fontsize='x-large')
for a in ax1:
a.axhline(0, c='k', lw=0.7)
a.set_yscale('symlog')
a.set_xlabel('$r_p\,[\\rm{Mpc/h}]$', fontsize='x-large')
yticks = a.get_yticks()
a.set_yticks(np.round(yticks,2))
formatter = FuncFormatter(lambda y, _: '{:.16g}'.format(y))
a.yaxis.set_major_formatter(formatter)
plt.tight_layout()
plt.show()
compare_CosmoSIS_and_CCL(r_p, wgg, wgp, wpp)
```
We have clear ringing and aliasing affecting the Hankel integrations - this can usually be addressed by choosing appropriate windows in $k$ over which to integrate, and/or modifying the input $P(k)$ so that it plays more nicely with CCL's FFTLog (fast) implementation; one possible mitigation is sketched below.
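As a hedged illustration (not part of the pipeline; the window edges and taper width below are arbitrary choices), one option is to roll the extrapolated $P(k)$ smoothly to zero at the ends of the $k$ range before handing it to the FFTLog transform:
```
import numpy as np

def cosine_taper(k, pk, k_lo=1e-4, k_hi=1e2, width_dex=0.5):
    """Roll P(k) smoothly to zero outside [k_lo, k_hi] over `width_dex` decades."""
    logk = np.log10(k)
    lo, hi = np.log10(k_lo), np.log10(k_hi)
    w = np.ones_like(k)
    left, right = logk < lo, logk > hi
    w[left] = 0.5 * (1. + np.cos(np.pi * np.clip((lo - logk[left]) / width_dex, 0., 1.)))
    w[right] = 0.5 * (1. + np.cos(np.pi * np.clip((logk[right] - hi) / width_dex, 0., 1.)))
    return pk * w

# e.g. taper each redshift slice before the ccl.correlation calls above:
# _Pgg_tapered = np.array([cosine_taper(k, pk) for pk in _Pgg])
```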
Christos Georgiou kindly found some time to tackle this problem in CCL, with an edit to the CCL/pyccl/correlations.py file (branch not yet merged). Let's implement his new functionality below, naively plugging in our previous variables.
```
# copy some variables
a_sample = 1. / (1. + z[::-1]) # reverse order so that a increases monotonically
k_sample = k
z_arr = z
pz = nz
pk_GI_NLA = _PgI[::-1] # also reverse for P(k)'s
pk_II_NLA = _PII[::-1]
pk_gg = _Pgg[::-1]
pk2d_GI_NLA = ccl.pk2d.Pk2D(a_arr=a_sample, lk_arr=np.log(k_sample), pk_arr=pk_GI_NLA, is_logp=False)
pk2d_II_NLA = ccl.pk2d.Pk2D(a_arr=a_sample, lk_arr=np.log(k_sample), pk_arr=pk_II_NLA, is_logp=False)
pk2d_gg = ccl.pk2d.Pk2D(a_arr=a_sample, lk_arr=np.log(k_sample), pk_arr=pk_gg, is_logp=False)
# test some array shapes
wgg_ccl = cosmo.correlation_ab(1., z_arr, pz, p_of_k_a=pk2d_gg, type='gg')
print(wgg_ccl)
wgg_ccl = cosmo.correlation_ab(np.array([1]), z_arr, pz, p_of_k_a=pk2d_gg, type='gg')
print(wgg_ccl)
wgg_ccl = cosmo.correlation_ab(np.array([1,2]), z_arr, pz, p_of_k_a=pk2d_gg, type='gg')
print(wgg_ccl)
%%time
# and now compute the projected correlations
wgg_new = cosmo.correlation_ab(r_p, z_arr, pz, p_of_k_a=pk2d_gg, type='gg')
wgp_new = cosmo.correlation_ab(r_p, z_arr, pz, p_of_k_a=pk2d_GI_NLA, type='g+')
wpp_new = cosmo.correlation_ab(r_p, z_arr, pz, p_of_k_a=pk2d_II_NLA, type='++')
```
Runtime is almost identical to CosmoSIS -- let's see how the results compare with the CosmoSIS outputs.
```
# and now the moment of truth...
compare_CosmoSIS_and_CCL(r_p, wgg_new, wgp_new, wpp_new)
```
Nearly there! Some residual disagreements to investigate in the coming weeks.
I can be reached at [email protected] for any questions regarding the example CosmoSIS pipeline, or for suggestions to improve this notebook.
If making use of this notebook, modules, etc., please consider citing arXiv:1811.09598.
```
#env library
import nltk
nltk.download('punkt')
```
### Device Check
```
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
torch.cuda.current_device()
torch.cuda.device(0)
%load_ext autoreload
%autoreload 2
```
## Loading Dataset, Model
```
# coding=utf-8
import argparse
import logging
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
import random
import numpy as np
import pandas as pd
import torch
from transformers import (BertConfig, BertForTokenClassification,
BertTokenizer)
from torch.utils.data import DataLoader
from datasets import load_datasets_and_vocabs
from model import (Aspect_Text_GAT_ours,
Pure_Bert, Aspect_Bert_GAT, Aspect_Text_GAT_only)
from trainer import train
logger = logging.getLogger(__name__)
def set_seed(args):
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
def parse_args(args):
parser = argparse.ArgumentParser()
# Required parameters
parser.add_argument('--dataset_name', type=str, default='rest',
choices=['rest', 'laptop', 'twitter'],
help='Choose absa dataset.')
parser.add_argument('--output_dir', type=str, default='/data1/SHENWZH/ABSA_online/data/output-gcn',
                        help='Directory to store intermediate data, such as vocab, embeddings, tags_vocab.')
parser.add_argument('--num_classes', type=int, default=3,
help='Number of classes of ABSA.')
parser.add_argument('--cuda_id', type=str, default='3',
help='Choose which GPUs to run')
parser.add_argument('--seed', type=int, default=2019,
help='random seed for initialization')
# Model parameters
parser.add_argument('--glove_dir', type=str, default='/data1/SHENWZH/wordvec',
help='Directory storing glove embeddings')
parser.add_argument('--bert_model_dir', type=str, default='/data1/SHENWZH/models/bert_base',
help='Path to pre-trained Bert model.')
parser.add_argument('--pure_bert', action='store_true',
help='Cat text and aspect, [cls] to predict.')
parser.add_argument('--pure_bert_layer_agg', action='store_true',
help='Pure bert layer aggregation enable/not')
parser.add_argument('--pure_bert_layer_agg_list', type=str, default="12",
help='Pure Bert layer number to aggregate')
parser.add_argument('--pure_bert_linear_layer_count', type=int, default=2,
help='Pure Bert final linear layer count')
parser.add_argument('--gat_bert', action='store_true',
help='Cat text and aspect, [cls] to predict.')
parser.add_argument('--highway', action='store_true',
help='Use highway embed.')
parser.add_argument('--num_layers', type=int, default=2,
help='Number of layers of bilstm or highway or elmo.')
parser.add_argument('--add_non_connect', type= bool, default=True,
                        help='Add a special "non-connect" relation for aspect with no direct connection.')
parser.add_argument('--multi_hop', type= bool, default=True,
help='Multi hop non connection.')
parser.add_argument('--max_hop', type = int, default=4,
help='max number of hops')
parser.add_argument('--num_heads', type=int, default=6,
help='Number of heads for gat.')
parser.add_argument('--dropout', type=float, default=0,
help='Dropout rate for embedding.')
parser.add_argument('--num_gcn_layers', type=int, default=1,
help='Number of GCN layers.')
parser.add_argument('--gcn_mem_dim', type=int, default=300,
help='Dimension of the W in GCN.')
parser.add_argument('--gcn_dropout', type=float, default=0.2,
help='Dropout rate for GCN.')
# GAT
parser.add_argument('--gat', action='store_true',
help='GAT')
parser.add_argument('--gat_our', action='store_true',
help='GAT_our')
parser.add_argument('--gat_attention_type', type = str, choices=['linear','dotprod','gcn'], default='dotprod',
help='The attention used for gat')
parser.add_argument('--embedding_type', type=str,default='glove', choices=['glove','bert'])
parser.add_argument('--embedding_dim', type=int, default=300,
help='Dimension of glove embeddings')
parser.add_argument('--dep_relation_embed_dim', type=int, default=300,
help='Dimension for dependency relation embeddings.')
parser.add_argument('--hidden_size', type=int, default=300,
help='Hidden size of bilstm, in early stage.')
parser.add_argument('--final_hidden_size', type=int, default=300,
help='Hidden size of bilstm, in early stage.')
parser.add_argument('--num_mlps', type=int, default=2,
help='Number of mlps in the last of model.')
# Training parameters
parser.add_argument("--per_gpu_train_batch_size", default=16, type=int,
help="Batch size per GPU/CPU for training.")
parser.add_argument("--per_gpu_eval_batch_size", default=32, type=int,
help="Batch size per GPU/CPU for evaluation.")
parser.add_argument('--gradient_accumulation_steps', type=int, default=2,
help="Number of updates steps to accumulate before performing a backward/update pass.")
parser.add_argument("--learning_rate", default=1e-3, type=float,
help="The initial learning rate for Adam.")
parser.add_argument("--weight_decay", default=0.0, type=float,
help="Weight deay if we apply some.")
parser.add_argument("--adam_epsilon", default=1e-8, type=float,
help="Epsilon for Adam optimizer.")
parser.add_argument("--max_grad_norm", default=1.0, type=float,
help="Max gradient norm.")
parser.add_argument("--num_train_epochs", default=30.0, type=float,
help="Total number of training epochs to perform.")
parser.add_argument("--max_steps", default=-1, type=int,
help="If > 0: set total number of training steps(that update the weights) to perform. Override num_train_epochs.")
parser.add_argument('--logging_steps', type=int, default=50,
help="Log every X updates steps.")
return parser.parse_args(args)
def check_args(args):
'''
    eliminate conflict situations
'''
logger.info(vars(args))
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt='%m/%d/%Y %H:%M:%S',
level=logging.INFO)
# Parse args
args_str = "--embedding_type bert --output_dir data/output-gcn --dropout 0.3 --hidden_size 200 --learning_rate 5e-5 --bert_model_dir ./test/saved_model --pure_bert --pure_bert_layer_agg --pure_bert_layer_agg_list 11,12 --pure_bert_linear_layer_count 2"
#args = parse_args(['--gat_our', '--highway', '--num_heads', '7', '--dropout', '0.8', '--output_dir',
# 'output/r-gat', '--glove_dir', 'glove', '--cuda_id', '0'])
args = parse_args(args_str.split(' '))
check_args(args)
# Setup CUDA, GPU training
os.environ["CUDA_VISIBLE_DEVICES"] = args.cuda_id
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
args.device = device
logger.info('Device is %s', args.device)
# Set seed
set_seed(args)
# Bert, load pretrained model and tokenizer, check if necessary to put bert here
if args.embedding_type == 'bert':
tokenizer = BertTokenizer.from_pretrained(args.bert_model_dir)
args.tokenizer = tokenizer
# Load datasets and vocabs
train_dataset, test_dataset, word_vocab, dep_tag_vocab, pos_tag_vocab= load_datasets_and_vocabs(args)
# Build Model
# model = Aspect_Text_Multi_Syntax_Encoding(args, dep_tag_vocab['len'], pos_tag_vocab['len'])
if args.pure_bert:
model = Pure_Bert(args)
elif args.gat_bert:
model = Aspect_Bert_GAT(args, dep_tag_vocab['len'], pos_tag_vocab['len']) # R-GAT + Bert
elif args.gat_our:
model = Aspect_Text_GAT_ours(args, dep_tag_vocab['len'], pos_tag_vocab['len']) # R-GAT with reshaped tree
else:
model = Aspect_Text_GAT_only(args, dep_tag_vocab['len'], pos_tag_vocab['len']) # original GAT with reshaped tree
model.to(args.device)
# Train
```
### GAT+GLOVE
```
# Train
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
# Pure Bert
## Single Layer
### Output of Bert, output[1]
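The `Pure_Bert` model itself is defined in `model.py` and is not reproduced in this notebook. As a rough, hypothetical sketch of the single-hidden-layer classification heads described by the headings below (class and argument names are made up; the 768 -> 256 -> 3 sizes follow the headings, and the tuple indexing assumes the older `transformers` API used in this notebook):
```
import torch
import torch.nn as nn
from transformers import BertModel

class SingleLayerHead(nn.Module):
    """Sketch: classify from the [CLS] token of one chosen BERT hidden layer."""
    def __init__(self, bert_dir, layer_idx=12, hidden=256, num_classes=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_dir, output_hidden_states=True)
        self.layer_idx = layer_idx
        self.fc1 = nn.Linear(768, hidden)          # (768, 256)
        self.out = nn.Linear(hidden, num_classes)  # (256, 3)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        # outputs[2] holds all hidden states (embeddings + 12 layers);
        # take the [CLS] token of the chosen layer
        cls_vec = outputs[2][self.layer_idx][:, 0, :]
        return self.out(torch.relu(self.fc1(cls_vec)))
```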
```
# Train
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
### Pure Bert
### output 0th hidden layer (pooled_output = outputs[2][0][:,0, :])
### relu((768,256) Layer_0) -> (256,3)
```
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
### Pure Bert,
### output 1st hidden layer (pooled_output = outputs[2][1][:,0, :])
### relu((768,256) Layer_1) -> (256,3)
```
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
### Pure Bert,
### output 6th hidden layer, (pooled_output = outputs[2][6][:,0, :])
### relu((768,256) Layer_6) -> (256,3)
```
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
### Pure Bert,
### output 8th hidden layer, (pooled_output = outputs[2][8][:,0, :])
### relu((768,256) Layer_8) -> (256,3)
```
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
### Pure Bert,
### output 10th hidden layer, (pooled_output = outputs[2][10][:,0, :])
### relu((768,256) Layer_10) -> (256,3)
```
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
### Pure Bert,
### output 11th hidden layer, (pooled_output = outputs[2][11][:,0, :])
### relu((768,256) Layer_11) -> (256,3)
```
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
## Addition of two Hidden Layers
### Pure Bert, FC1
### relu((768,256) Layer_11 + (768,256) Layer_12) -> (256,3)
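Again, the real implementation lives in `model.py`; the following is only a hypothetical sketch of the FC1-style aggregation named above: the [CLS] vectors of hidden layers 11 and 12 are each projected with a (768, 256) linear map, summed, passed through ReLU, and classified into 3 classes.
```
import torch
import torch.nn as nn

class TwoLayerAggHead(nn.Module):
    """Sketch: relu(W_11 cls_11 + W_12 cls_12) -> (256, 3)."""
    def __init__(self, hidden=256, num_classes=3):
        super().__init__()
        self.proj_11 = nn.Linear(768, hidden)
        self.proj_12 = nn.Linear(768, hidden)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, hidden_states):
        # hidden_states: tuple of 13 tensors (embeddings + 12 layers), each (batch, seq, 768)
        cls_11 = hidden_states[11][:, 0, :]
        cls_12 = hidden_states[12][:, 0, :]
        return self.out(torch.relu(self.proj_11(cls_11) + self.proj_12(cls_12)))
```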
```
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
### Pure Bert, FC1
### relu((768,256) Layer_0 + (768,256) Layer_12) -> (256,3)
```
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
## Pure Bert, FC2
### relu((768,256) Layer_11 + (768,256) Layer_12) -> relu(256,256) -> (256,3)
```
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
## Pure Bert, FC3
### (768,768) Layer_11 + (768,768) Layer_12 -> (768,256) -> relu -> (256,3)
```
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
|
github_jupyter
|
#env library
import nltk
nltk.download('punkt')
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
torch.cuda.current_device()
torch.cuda.device(0)
%load_ext autoreload
%autoreload 2
# coding=utf-8
import argparse
import logging
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
import random
import numpy as np
import pandas as pd
import torch
from transformers import (BertConfig, BertForTokenClassification,
BertTokenizer)
from torch.utils.data import DataLoader
from datasets import load_datasets_and_vocabs
from model import (Aspect_Text_GAT_ours,
Pure_Bert, Aspect_Bert_GAT, Aspect_Text_GAT_only)
from trainer import train
logger = logging.getLogger(__name__)
def set_seed(args):
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
def parse_args(args):
parser = argparse.ArgumentParser()
# Required parameters
parser.add_argument('--dataset_name', type=str, default='rest',
choices=['rest', 'laptop', 'twitter'],
help='Choose absa dataset.')
parser.add_argument('--output_dir', type=str, default='/data1/SHENWZH/ABSA_online/data/output-gcn',
help='Directory to store intermedia data, such as vocab, embeddings, tags_vocab.')
parser.add_argument('--num_classes', type=int, default=3,
help='Number of classes of ABSA.')
parser.add_argument('--cuda_id', type=str, default='3',
help='Choose which GPUs to run')
parser.add_argument('--seed', type=int, default=2019,
help='random seed for initialization')
# Model parameters
parser.add_argument('--glove_dir', type=str, default='/data1/SHENWZH/wordvec',
help='Directory storing glove embeddings')
parser.add_argument('--bert_model_dir', type=str, default='/data1/SHENWZH/models/bert_base',
help='Path to pre-trained Bert model.')
parser.add_argument('--pure_bert', action='store_true',
help='Cat text and aspect, [cls] to predict.')
parser.add_argument('--pure_bert_layer_agg', action='store_true',
help='Pure bert layer aggregation enable/not')
parser.add_argument('--pure_bert_layer_agg_list', type=str, default="12",
help='Pure Bert layer number to aggregate')
parser.add_argument('--pure_bert_linear_layer_count', type=int, default=2,
help='Pure Bert final linear layer count')
parser.add_argument('--gat_bert', action='store_true',
help='Cat text and aspect, [cls] to predict.')
parser.add_argument('--highway', action='store_true',
help='Use highway embed.')
parser.add_argument('--num_layers', type=int, default=2,
help='Number of layers of bilstm or highway or elmo.')
parser.add_argument('--add_non_connect', type= bool, default=True,
help='Add a sepcial "non-connect" relation for aspect with no direct connection.')
parser.add_argument('--multi_hop', type= bool, default=True,
help='Multi hop non connection.')
parser.add_argument('--max_hop', type = int, default=4,
help='max number of hops')
parser.add_argument('--num_heads', type=int, default=6,
help='Number of heads for gat.')
parser.add_argument('--dropout', type=float, default=0,
help='Dropout rate for embedding.')
parser.add_argument('--num_gcn_layers', type=int, default=1,
help='Number of GCN layers.')
parser.add_argument('--gcn_mem_dim', type=int, default=300,
help='Dimension of the W in GCN.')
parser.add_argument('--gcn_dropout', type=float, default=0.2,
help='Dropout rate for GCN.')
# GAT
parser.add_argument('--gat', action='store_true',
help='GAT')
parser.add_argument('--gat_our', action='store_true',
help='GAT_our')
parser.add_argument('--gat_attention_type', type = str, choices=['linear','dotprod','gcn'], default='dotprod',
help='The attention used for gat')
parser.add_argument('--embedding_type', type=str,default='glove', choices=['glove','bert'])
parser.add_argument('--embedding_dim', type=int, default=300,
help='Dimension of glove embeddings')
parser.add_argument('--dep_relation_embed_dim', type=int, default=300,
help='Dimension for dependency relation embeddings.')
parser.add_argument('--hidden_size', type=int, default=300,
help='Hidden size of bilstm, in early stage.')
parser.add_argument('--final_hidden_size', type=int, default=300,
help='Hidden size of bilstm, in early stage.')
parser.add_argument('--num_mlps', type=int, default=2,
help='Number of mlps in the last of model.')
# Training parameters
parser.add_argument("--per_gpu_train_batch_size", default=16, type=int,
help="Batch size per GPU/CPU for training.")
parser.add_argument("--per_gpu_eval_batch_size", default=32, type=int,
help="Batch size per GPU/CPU for evaluation.")
parser.add_argument('--gradient_accumulation_steps', type=int, default=2,
help="Number of updates steps to accumulate before performing a backward/update pass.")
parser.add_argument("--learning_rate", default=1e-3, type=float,
help="The initial learning rate for Adam.")
parser.add_argument("--weight_decay", default=0.0, type=float,
help="Weight deay if we apply some.")
parser.add_argument("--adam_epsilon", default=1e-8, type=float,
help="Epsilon for Adam optimizer.")
parser.add_argument("--max_grad_norm", default=1.0, type=float,
help="Max gradient norm.")
parser.add_argument("--num_train_epochs", default=30.0, type=float,
help="Total number of training epochs to perform.")
parser.add_argument("--max_steps", default=-1, type=int,
help="If > 0: set total number of training steps(that update the weights) to perform. Override num_train_epochs.")
parser.add_argument('--logging_steps', type=int, default=50,
help="Log every X updates steps.")
return parser.parse_args(args)
def check_args(args):
    '''
    Eliminate conflicting argument combinations.
    '''
    logger.info(vars(args))
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
                    datefmt='%m/%d/%Y %H:%M:%S',
                    level=logging.INFO)
# Parse args
args_str = "--embedding_type bert --output_dir data/output-gcn --dropout 0.3 --hidden_size 200 --learning_rate 5e-5 --bert_model_dir ./test/saved_model --pure_bert --pure_bert_layer_agg --pure_bert_layer_agg_list 11,12 --pure_bert_linear_layer_count 2"
#args = parse_args(['--gat_our', '--highway', '--num_heads', '7', '--dropout', '0.8', '--output_dir',
# 'output/r-gat', '--glove_dir', 'glove', '--cuda_id', '0'])
args = parse_args(args_str.split(' '))
check_args(args)
# Setup CUDA, GPU training
os.environ["CUDA_VISIBLE_DEVICES"] = args.cuda_id
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
args.device = device
logger.info('Device is %s', args.device)
# Set seed
set_seed(args)
# BERT: load the pretrained model and tokenizer (check whether it is necessary to load BERT here)
if args.embedding_type == 'bert':
tokenizer = BertTokenizer.from_pretrained(args.bert_model_dir)
args.tokenizer = tokenizer
# Load datasets and vocabs
train_dataset, test_dataset, word_vocab, dep_tag_vocab, pos_tag_vocab= load_datasets_and_vocabs(args)
# Build Model
# model = Aspect_Text_Multi_Syntax_Encoding(args, dep_tag_vocab['len'], pos_tag_vocab['len'])
if args.pure_bert:
model = Pure_Bert(args)
elif args.gat_bert:
model = Aspect_Bert_GAT(args, dep_tag_vocab['len'], pos_tag_vocab['len']) # R-GAT + Bert
elif args.gat_our:
model = Aspect_Text_GAT_ours(args, dep_tag_vocab['len'], pos_tag_vocab['len']) # R-GAT with reshaped tree
else:
model = Aspect_Text_GAT_only(args, dep_tag_vocab['len'], pos_tag_vocab['len']) # original GAT with reshaped tree
model.to(args.device)
# Train
_, _, all_eval_results = train(args, train_dataset, model, test_dataset)
if len(all_eval_results):
best_eval_result = max(all_eval_results, key=lambda x: x['acc'])
for key in sorted(best_eval_result.keys()):
logger.info(" %s = %s", key, str(best_eval_result[key]))
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'prepare/mesolitica-tpu.json'
b2_application_key_id = os.environ['b2_application_key_id']
b2_application_key = os.environ['b2_application_key']
from google.cloud import storage
client = storage.Client()
bucket = client.bucket('mesolitica-tpu-general')
best = '1050000'
directory = 't5-3x-super-tiny-true-case'
!rm -rf output out {directory}
!mkdir {directory}
model = best
blob = bucket.blob(f'{directory}/model.ckpt-{model}.data-00000-of-00002')
blob.download_to_filename(f'{directory}/model.ckpt-{model}.data-00000-of-00002')
blob = bucket.blob(f'{directory}/model.ckpt-{model}.data-00001-of-00002')
blob.download_to_filename(f'{directory}/model.ckpt-{model}.data-00001-of-00002')
blob = bucket.blob(f'{directory}/model.ckpt-{model}.index')
blob.download_to_filename(f'{directory}/model.ckpt-{model}.index')
blob = bucket.blob(f'{directory}/model.ckpt-{model}.meta')
blob.download_to_filename(f'{directory}/model.ckpt-{model}.meta')
blob = bucket.blob(f'{directory}/checkpoint')
blob.download_to_filename(f'{directory}/checkpoint')
blob = bucket.blob(f'{directory}/operative_config.gin')
blob.download_to_filename(f'{directory}/operative_config.gin')
with open(f'{directory}/checkpoint', 'w') as fopen:
fopen.write(f'model_checkpoint_path: "model.ckpt-{model}"')
from b2sdk.v1 import *
info = InMemoryAccountInfo()
b2_api = B2Api(info)
application_key_id = b2_application_key_id
application_key = b2_application_key
b2_api.authorize_account("production", application_key_id, application_key)
file_info = {'how': 'good-file'}
b2_bucket = b2_api.get_bucket_by_name('malaya-model')
tar = 't5-3x-super-tiny-true-case-2021-09-10.tar.gz'
os.system(f'tar -czvf {tar} {directory}')
outPutname = f'finetuned/{tar}'
b2_bucket.upload_local_file(
local_file=tar,
file_name=outPutname,
file_infos=file_info,
)
os.system(f'rm {tar}')
import tensorflow as tf
import tensorflow_datasets as tfds
import t5
model = t5.models.MtfModel(
model_dir=directory,
tpu=None,
tpu_topology=None,
model_parallelism=1,
batch_size=1,
sequence_length={"inputs": 256, "targets": 256},
learning_rate_schedule=0.003,
save_checkpoints_steps=5000,
keep_checkpoint_max=3,
iterations_per_loop=100,
mesh_shape="model:1,batch:1",
mesh_devices=["cpu:0"]
)
!rm -rf output/*
import gin
from t5.data import sentencepiece_vocabulary
DEFAULT_SPM_PATH = 'prepare/sp10m.cased.ms-en.model'
DEFAULT_EXTRA_IDS = 100
model_dir = directory
def get_default_vocabulary():
return sentencepiece_vocabulary.SentencePieceVocabulary(
DEFAULT_SPM_PATH, DEFAULT_EXTRA_IDS)
with gin.unlock_config():
gin.parse_config_file(t5.models.mtf_model._operative_config_path(model_dir))
gin.bind_parameter("Bitransformer.decode.beam_size", 1)
gin.bind_parameter("Bitransformer.decode.temperature", 0)
gin.bind_parameter("utils.get_variable_dtype.slice_dtype", "float32")
gin.bind_parameter(
"utils.get_variable_dtype.activation_dtype", "float32")
vocabulary = t5.data.SentencePieceVocabulary(DEFAULT_SPM_PATH)
estimator = model.estimator(vocabulary, disable_tpu=True)
import os
checkpoint_step = t5.models.mtf_model._get_latest_checkpoint_from_dir(model_dir)
model_ckpt = "model.ckpt-" + str(checkpoint_step)
checkpoint_path = os.path.join(model_dir, model_ckpt)
checkpoint_step, model_ckpt, checkpoint_path
from mesh_tensorflow.transformer import dataset as transformer_dataset
def serving_input_fn():
inputs = tf.placeholder(
dtype=tf.string,
shape=[None],
name="inputs")
batch_size = tf.shape(inputs)[0]
padded_inputs = tf.pad(inputs, [(0, tf.mod(-tf.size(inputs), batch_size))])
dataset = tf.data.Dataset.from_tensor_slices(padded_inputs)
dataset = dataset.map(lambda x: {"inputs": x})
dataset = transformer_dataset.encode_all_features(dataset, vocabulary)
dataset = transformer_dataset.pack_or_pad(
dataset=dataset,
length=model._sequence_length,
pack=False,
feature_keys=["inputs"]
)
dataset = dataset.batch(tf.cast(batch_size, tf.int64))
features = tf.data.experimental.get_single_element(dataset)
return tf.estimator.export.ServingInputReceiver(
features=features, receiver_tensors=inputs)
out = estimator.export_saved_model('output', serving_input_fn, checkpoint_path=checkpoint_path)
config = tf.ConfigProto()
config.allow_soft_placement = True
sess = tf.Session(config = config)
meta_graph_def = tf.saved_model.loader.load(
sess,
[tf.saved_model.tag_constants.SERVING],
out)
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, '3x-super-tiny-true-case/model.ckpt')
strings = [
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('encoder' in n.op
or 'decoder' in n.name
or 'shared' in n.name
or 'inputs' in n.name
or 'output' in n.name
or 'SentenceTokenizer' in n.name
or 'self/Softmax' in n.name)
and 'adam' not in n.name
and 'Assign' not in n.name
]
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names,
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('3x-super-tiny-true-case', strings)
import struct
unknown = b'\xff\xff\xff\xff'
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
for node in graph_def.node:
if node.op == 'RefSwitch':
node.op = 'Switch'
            for index in range(len(node.input)):
if 'moving_' in node.input[index]:
node.input[index] = node.input[index] + '/read'
elif node.op == 'AssignSub':
node.op = 'Sub'
if 'use_locking' in node.attr: del node.attr['use_locking']
elif node.op == 'AssignAdd':
node.op = 'Add'
if 'use_locking' in node.attr: del node.attr['use_locking']
elif node.op == 'Assign':
node.op = 'Identity'
if 'use_locking' in node.attr: del node.attr['use_locking']
if 'validate_shape' in node.attr: del node.attr['validate_shape']
if len(node.input) == 2:
node.input[0] = node.input[1]
del node.input[1]
if 'Reshape/shape' in node.name or 'Reshape_1/shape' in node.name:
b = node.attr['value'].tensor.tensor_content
arr_int = [int.from_bytes(b[i:i + 4], 'little') for i in range(0, len(b), 4)]
if len(arr_int):
arr_byte = [unknown] + [struct.pack('<i', i) for i in arr_int[1:]]
arr_byte = b''.join(arr_byte)
node.attr['value'].tensor.tensor_content = arr_byte
if len(node.attr['value'].tensor.int_val):
node.attr['value'].tensor.int_val[0] = -1
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('3x-super-tiny-true-case/frozen_model.pb')
i = g.get_tensor_by_name('import/inputs:0')
o = g.get_tensor_by_name('import/SelectV2_3:0')
i, o
test_sess = tf.Session(graph = g)
import sentencepiece as spm
sp_model = spm.SentencePieceProcessor()
sp_model.Load(DEFAULT_SPM_PATH)
string1 = 'FORMAT TERBUKA. FORMAT TERBUKA IALAH SUATU FORMAT FAIL UNTUK TUJUAN MENYIMPAN DATA DIGITAL, DI MANA FORMAT INI DITAKRIFKAN BERDASARKAN SPESIFIKASI YANG DITERBITKAN DAN DIKENDALIKAN PERTUBUHAN PIAWAIAN , SERTA BOLEH DIGUNA PAKAI KHALAYAK RAMAI .'
string2 = 'Husein ska mkn ayam dkat kampng Jawa'
strings = [string1, string2]
[f'kes benar: {s}' for s in strings]
%%time
o_ = test_sess.run(o, feed_dict = {i: [f'kes benar: {s}' for s in strings]})
o_.shape
for k in range(len(o_)):
print(k, sp_model.DecodeIds(o_[k].tolist()))
from tensorflow.tools.graph_transforms import TransformGraph
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights(minimum_size=1536000)',
#'quantize_weights(fallback_min=-10240, fallback_max=10240)',
'strip_unused_nodes',
'sort_by_execution_order']
pb = '3x-super-tiny-true-case/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
transformed_graph_def = TransformGraph(input_graph_def,
['inputs'],
['SelectV2_3'], transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
g = load_graph('3x-super-tiny-true-case/frozen_model.pb.quantized')
i = g.get_tensor_by_name('import/inputs:0')
o = g.get_tensor_by_name('import/SelectV2_3:0')
i, o
test_sess = tf.InteractiveSession(graph = g)
file = '3x-super-tiny-true-case/frozen_model.pb.quantized'
outPutname = 'true-case/3x-super-tiny-t5-quantized/model.pb'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
file = '3x-super-tiny-true-case/frozen_model.pb'
outPutname = 'true-case/3x-super-tiny-t5/model.pb'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
```
```
import numpy as np
import cv2
from collections import deque
#default called trackbar function
def setValues(x):
print("")
# Creating the trackbars needed for adjusting the marker colour
cv2.namedWindow("Color detectors")
cv2.createTrackbar("Upper Hue", "Color detectors", 153, 180,setValues)
cv2.createTrackbar("Upper Saturation", "Color detectors", 255, 255,setValues)
cv2.createTrackbar("Upper Value", "Color detectors", 255, 255,setValues)
cv2.createTrackbar("Lower Hue", "Color detectors", 64, 180,setValues)
cv2.createTrackbar("Lower Saturation", "Color detectors", 72, 255,setValues)
cv2.createTrackbar("Lower Value", "Color detectors", 49, 255,setValues)
# Giving different arrays to handle colour points of different colour
bpoints = [deque(maxlen=1024)]
gpoints = [deque(maxlen=1024)]
rpoints = [deque(maxlen=1024)]
ypoints = [deque(maxlen=1024)]
# These indexes will be used to mark the points in particular arrays of specific colour
blue_index = 0
green_index = 0
red_index = 0
yellow_index = 0
#The kernel to be used for dilation purpose
kernel = np.ones((5,5),np.uint8)
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 255, 255)]
colorIndex = 0
# Here is code for Canvas setup
paintWindow = np.zeros((471,636,3)) + 255
paintWindow = cv2.rectangle(paintWindow, (40,1), (140,65), (0,0,0), 2)
paintWindow = cv2.rectangle(paintWindow, (160,1), (255,65), colors[0], -1)
paintWindow = cv2.rectangle(paintWindow, (275,1), (370,65), colors[1], -1)
paintWindow = cv2.rectangle(paintWindow, (390,1), (485,65), colors[2], -1)
paintWindow = cv2.rectangle(paintWindow, (505,1), (600,65), colors[3], -1)
cv2.putText(paintWindow, "ERASE ALL", (49, 33), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(paintWindow, "BLUE", (185, 33), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(paintWindow, "GREEN", (298, 33), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(paintWindow, "RED", (420, 33), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(paintWindow, "YELLOW", (520, 33), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0,0,0), 2, cv2.LINE_AA)
cv2.namedWindow('Paint', cv2.WINDOW_AUTOSIZE)
# Loading the default webcam of PC.
cap = cv2.VideoCapture(0)
# Keep looping
while True:
# Reading the frame from the camera
ret, frame = cap.read()
    # Flip the frame so the view mirrors your movements
frame = cv2.flip(frame, 1)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
u_hue = cv2.getTrackbarPos("Upper Hue", "Color detectors")
u_saturation = cv2.getTrackbarPos("Upper Saturation", "Color detectors")
u_value = cv2.getTrackbarPos("Upper Value", "Color detectors")
l_hue = cv2.getTrackbarPos("Lower Hue", "Color detectors")
l_saturation = cv2.getTrackbarPos("Lower Saturation", "Color detectors")
l_value = cv2.getTrackbarPos("Lower Value", "Color detectors")
Upper_hsv = np.array([u_hue,u_saturation,u_value])
Lower_hsv = np.array([l_hue,l_saturation,l_value])
# Adding the colour buttons to the live frame for colour access
frame = cv2.rectangle(frame, (40,1), (140,65), (122,122,122), -1)
frame = cv2.rectangle(frame, (160,1), (255,65), colors[0], -1)
frame = cv2.rectangle(frame, (275,1), (370,65), colors[1], -1)
frame = cv2.rectangle(frame, (390,1), (485,65), colors[2], -1)
frame = cv2.rectangle(frame, (505,1), (600,65), colors[3], -1)
cv2.putText(frame, "ERASE ALL", (49, 33), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(frame, "BLUE", (185, 33), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(frame, "GREEN", (298, 33), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(frame, "RED", (420, 33), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(frame, "YELLOW", (520, 33), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
# Identifying the pointer by making its mask
Mask = cv2.inRange(hsv, Lower_hsv, Upper_hsv)
Mask = cv2.erode(Mask, kernel, iterations=1)
Mask = cv2.morphologyEx(Mask, cv2.MORPH_OPEN, kernel)
Mask = cv2.dilate(Mask, kernel, iterations=1)
    # Find contours for the pointer after identifying it
cnts,_ = cv2.findContours(Mask.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
center = None
    # If any contours were found
if len(cnts) > 0:
# sorting the contours to find biggest
cnt = sorted(cnts, key = cv2.contourArea, reverse = True)[0]
# Get the radius of the enclosing circle around the found contour
((x, y), radius) = cv2.minEnclosingCircle(cnt)
# Draw the circle around the contour
cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
# Calculating the center of the detected contour
M = cv2.moments(cnt)
center = (int(M['m10'] / M['m00']), int(M['m01'] / M['m00']))
# Now checking if the user wants to click on any button above the screen
if center[1] <= 65:
if 40 <= center[0] <= 140: # Clear Button
bpoints = [deque(maxlen=512)]
gpoints = [deque(maxlen=512)]
rpoints = [deque(maxlen=512)]
ypoints = [deque(maxlen=512)]
blue_index = 0
green_index = 0
red_index = 0
yellow_index = 0
paintWindow[67:,:,:] = 255
elif 160 <= center[0] <= 255:
colorIndex = 0 # Blue
elif 275 <= center[0] <= 370:
colorIndex = 1 # Green
elif 390 <= center[0] <= 485:
colorIndex = 2 # Red
elif 505 <= center[0] <= 600:
colorIndex = 3 # Yellow
else :
if colorIndex == 0:
bpoints[blue_index].appendleft(center)
elif colorIndex == 1:
gpoints[green_index].appendleft(center)
elif colorIndex == 2:
rpoints[red_index].appendleft(center)
elif colorIndex == 3:
ypoints[yellow_index].appendleft(center)
    # Append new deques when nothing is detected, to avoid joining separate strokes
else:
bpoints.append(deque(maxlen=512))
blue_index += 1
gpoints.append(deque(maxlen=512))
green_index += 1
rpoints.append(deque(maxlen=512))
red_index += 1
ypoints.append(deque(maxlen=512))
yellow_index += 1
# Draw lines of all the colors on the canvas and frame
points = [bpoints, gpoints, rpoints, ypoints]
for i in range(len(points)):
for j in range(len(points[i])):
for k in range(1, len(points[i][j])):
if points[i][j][k - 1] is None or points[i][j][k] is None:
continue
cv2.line(frame, points[i][j][k - 1], points[i][j][k], colors[i], 2)
cv2.line(paintWindow, points[i][j][k - 1], points[i][j][k], colors[i], 2)
# Show all the windows
cv2.imshow("Air Canvas Tracking", frame)
cv2.imshow("Air Canvas Paint", paintWindow)
cv2.imshow("Air Canvas mask",Mask)
# If the 'q' key is pressed then stop the application
if cv2.waitKey(1) & 0xFF == ord("q"):
break
# Release the camera and all resources
cap.release()
cv2.destroyAllWindows()
```
# 實價登錄 (Actual-Price Registration of real estate transactions)
```
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from time import sleep
import os
import re
import requests as rq
options = webdriver.ChromeOptions()
options.add_argument("--start-maximized")
options.add_argument("--incognito")
options.add_argument("--disable-popup-blocking")
driver = webdriver.Chrome( options = options )
# Visit the 實價登錄 (actual-price registration) open data site
def visit():
driver.get("https://plvr.land.moi.gov.tw/DownloadOpenData");
sleep(2)
# Selection for dataset
def select():
    # Switch to past-period downloads (not the current period)
select_past = driver.find_element(By.CSS_SELECTOR, "a#ui-id-2")
select_past.click()
sleep(2)
    # File format -- csv
select_csv = Select(driver.find_element(By.CSS_SELECTOR, "select#fileFormatId"))
select_csv.select_by_value("csv")
sleep(2)
    # Download mode -- advanced
select_type = driver.find_element(By.CSS_SELECTOR, "#downloadTypeId2")
select_type.click()
sleep(2)
    # The three New Taipei City checkboxes
select_city1 = driver.find_element(By.CSS_SELECTOR, "#table5 > tbody > tr:nth-child(8) > td:nth-child(2) > input")
select_city1.click()
sleep(1)
select_city2 = driver.find_element(By.CSS_SELECTOR, "#table5 > tbody > tr:nth-child(8) > td:nth-child(3) > input")
select_city2.click()
sleep(1)
select_city3 = driver.find_element(By.CSS_SELECTOR, "#table5 > tbody > tr:nth-child(8) > td:nth-child(4) > input")
select_city3.click()
sleep(1)
    # Release date (season)
    select_season = Select(driver.find_element(By.NAME, "season"))
opt = select_season.options
for index in range(0, len(opt)):
select_season.select_by_index(index)
sleep(1)
    # Click download
    select_download = driver.find_element(By.CSS_SELECTOR, "#downloadBtnId")
select_download.click()
sleep(15)
def close():
driver.quit()
if __name__ == '__main__':
visit()
select()
close()
```
# Bus stop
```
import numpy as np
import pandas as pd
import json
import csv
for i in range(0, 33):
busURL = "https://data.ntpc.gov.tw/api/datasets/34B402A8-53D9-483D-9406-24A682C2D6DC/csv?page=" + str(i) + "&size=1500"
busreq = rq.get(busURL)
url_con = busreq.content
with open("./data/bus/" + str(i) + ".csv", "wb") as f:
f.write(url_con)
```
# Nearby facilities
```
# Read the cleaned master table (includes each property's latitude/longitude)
df1 = pd.read_csv("./combine/presale_location.csv")
# Extract the unique land-section addresses into a list
df2 = df1.drop_duplicates(subset = ["土地區段位置建物區段門牌"])
add = list(df2["土地區段位置建物區段門牌"])
# Then collect each address's latitude/longitude into lists
x_h = list(df2["x_h"])
y_h = list(df2["y_h"])
```
## Undesirable (NIMBY) facilities
```
# Undesirable-facility API
# Undesirable facilities within 500 m
h_dic1 = {}
for i in range(0, len(x_h)):
addr = add[i]
badURL = "https://api.nlsc.gov.tw/other/MarkBufferAnlys/dis/" + str(x_h[i]) + "/" + str(y_h[i]) + "/500"
bad_h = rq.get(badURL).json()
h_dic1[addr] = bad_h
# Undesirable facilities within 100 m
h_dic2 = {}
for i in range(0, len(x_h)):
addr = add[i]
badURL = "https://api.nlsc.gov.tw/other/MarkBufferAnlys/dis/" + str(x_h[i]) + "/" + str(y_h[i]) + "/100"
bad_h = rq.get(badURL).json()
h_dic2[addr] = bad_h
# Check which locations have an undesirable facility within 100 m
badList1 = []
for i in range(0, len(x_h)):
if len(list(h_dic2.values())[i]) != 0:
badList1.append(list(h_dic2.keys())[i])
badList1
# Check whether a funeral home, airport, cemetery, etc. lies within 500 m
h_list1 = []
for i in range(0, len(x_h)):
    badURL = "https://api.nlsc.gov.tw/other/MarkBufferAnlys/dis/" + str(x_h[i]) + "/" + str(y_h[i]) + "/500"
    bad_h = rq.get(badURL).json()
    h_list1.append(bad_h)
word = "殯儀" # funeral home; can be replaced with other keywords
for a in h_list1:
for b in a:
if word in b["name"]:
print("YES")
else:
print("NO")
# Save to disk (500 m and 100 m results)
with open("./data/spot/bad5m.json", "w", encoding = "utf-8") as a:
a.write(json.dumps(h_dic1, ensure_ascii = False))
with open("./data/spot/bad1m.json", "w", encoding = "utf-8") as b:
b.write(json.dumps(h_dic2, ensure_ascii = False))
```
## Educational and cultural facilities
```
# Educational-facility API
# 100m
h_dic1 = {}
for i in range(0, len(x_h)):
addr = add[i]
eduURL = "https://api.nlsc.gov.tw/other/MarkBufferAnlys/edu/" + str(x_h[i]) + "/" + str(y_h[i]) + "/100"
edu_h = rq.get(eduURL).json()
h_dic1[addr] = edu_h
# 500m
h_dic2 = {}
for i in range(0, len(x_h)):
addr = add[i]
eduURL = "https://api.nlsc.gov.tw/other/MarkBufferAnlys/edu/" + str(x_h[i]) + "/" + str(y_h[i]) + "/500"
edu_h = rq.get(eduURL).json()
h_dic2[addr] = edu_h
# 1000m
h_dic3 = {}
for i in range(0, len(x_h)):
addr = add[i]
eduURL = "https://api.nlsc.gov.tw/other/MarkBufferAnlys/edu/" + str(x_h[i]) + "/" + str(y_h[i]) + "/1000"
edu_h = rq.get(eduURL).json()
h_dic3[addr] = edu_h
# Save to disk (1000 m and 500 m results)
with open("./data/spot/edu10m.json", "w", encoding = "utf-8") as a:
a.write(json.dumps(h_dic3, ensure_ascii = False))
with open("./data/spot/edu5m.json", "w", encoding = "utf-8") as b:
b.write(json.dumps(h_dic2, ensure_ascii = False))
```
## Medical facilities
```
# Medical-facility API
# 100m
h_dic4 = {}
for i in range(0, len(x_h)):
addr = add[i]
medURL = "https://api.nlsc.gov.tw/other/MarkBufferAnlys/med/" + str(x_h[i]) + "/" + str(y_h[i]) + "/100"
med_h = rq.get(medURL).json()
h_dic4[addr] = med_h
# 500m
h_dic5 = {}
for i in range(0, len(x_h)):
addr = add[i]
medURL = "https://api.nlsc.gov.tw/other/MarkBufferAnlys/med/" + str(x_h[i]) + "/" + str(y_h[i]) + "/500"
med_h = rq.get(medURL).json()
h_dic5[addr] = med_h
# 1000m
h_dic6 = {}
for i in range(0, len(x_h)):
addr = add[i]
medURL = "https://api.nlsc.gov.tw/other/MarkBufferAnlys/med/" + str(x_h[i]) + "/" + str(y_h[i]) + "/1000"
med_h = rq.get(medURL).json()
h_dic6[addr] = med_h
# Save to disk (100 m, 500 m and 1000 m results)
with open("./data/spot/med1m.json", "w", encoding = "utf-8") as a:
a.write(json.dumps(h_dic4, ensure_ascii = False))
with open("./data/spot/med5m.json", "w", encoding = "utf-8") as b:
b.write(json.dumps(h_dic5, ensure_ascii = False))
with open("./data/spot/med10m.json", "w", encoding = "utf-8") as c:
c.write(json.dumps(h_dic6, ensure_ascii = False))
```
# 000-Index.ipynb
## 002-US-States_HOT_spots_DEATHS.ipynb
#### <a href="002-US-States_HOT_spots_DEATHS.ipynb">LINK: 002-US-States_HOT_spots_DEATHS.ipynb</a>
How to retrieve data from:
https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_US.csv<br />
Creating lists of counties with multiple consecutive days of increase, using a threshold.<br />
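The consecutive-increase search described above can be sketched roughly as follows (an illustrative snippet, not the linked notebook's actual code; the function name `hot_counties`, the default threshold and window, and the assumption of 12 leading metadata columns before the date columns are all placeholders):
```
import pandas as pd

URL = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
       "csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_US.csv")

def hot_counties(threshold=5, days=3):
    df = pd.read_csv(URL)
    date_cols = df.columns[12:]                 # date columns follow the metadata columns
    daily = df[date_cols].diff(axis=1)          # new deaths per day in each county
    exceeds = (daily > threshold).astype(int)
    # a county is a "hot spot" if it exceeds the threshold on `days` consecutive dates
    streaks = exceeds.T.rolling(window=days).sum()   # transpose so dates run down the rows
    is_hot = streaks.eq(days).any(axis=0)
    return df.loc[is_hot, "Combined_Key"].tolist()
```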
<a href="000-Index.ipynb">Index.ipynb</a><br />
<a href="002-US-States_HOT_spots_DEATHS.ipynb">US-States_HOT_spots_DEATHS</a><br />
<a href="003-Available-Matplotlib-Colors.ipynb">Available-Matplotlib-Colors.ipynb</a><br />
<a href="007-Drawing-Maps-with-Shapefiles-and-US_State_Bounding_Boxes.ipynb">Drawing-Maps-with-Shapefiles-and-US_State_Bounding_Boxes</a><br />
<a href="008-Plotting-on-Basemaps.ipynb">Plotting-on-Basemaps.ipynb</a><br />
<a href="009-Get-Adjacent-Counties.ipynb">Get-Adjacent-Counties.ipynb</a><br />
<a href="010-Series-Castle-Rock-Restaurant.ipynb">Series-Castle-Rock-Restaurant.ipynb</a><br />
<a href="010-Series-COMMACK,NY—Protesters-May.ipynb">Series-COMMACK,NY—Protesters-May.ipynb</a><br />
<a href="010-Series-FLORIDA-research.ipynb">Series-FLORIDA-research.ipynb</a><br />
<a href="010-Series-NewYork.ipynb">Series-NewYork.ipynb</a><br />
<a href="010-Series-Protesters-on-Michigan-Capitol.ipynb">Series-Protesters-on-Michigan-Capitol.ipynb</a><br />
<a href="010-Series-ReOpen-Maryland-Annapolis-April-18.ipynb">Series-ReOpen-Maryland-Annapolis-April-18.ipynb</a><br />
<a href="012a-Pandas-Folium-Choropleth.ipynb">Pandas-Folium-Choropleth.ipynb</a><br />
<a href="012-Folium-maps.ipynb">Folium-maps.ipynb</a><br />
<a href="013-USA-COUNTIES-Finding-Hot-Spots.ipynb">USA-COUNTIES-Finding-Hot-Spots.ipynb</a><br />
<a href="013-ver_2-USA-COUNTIES-Finding-Hot-Spots.ipynb">ver_2-USA-COUNTIES-Finding-Hot-Spots.ipynb</a><br />
<a href="015-LEGENDS-Understanding-Plot-Legends.ipynb">LEGENDS-Understanding-Plot-Legends.ipynb</a><br />
<a href="019-Module-CountyData_py.py.ipynb">Module-CountyData_py.py.ipynb</a><br />
<a href="020-Plotting-CHINA.ipynb">Plotting-CHINA.ipynb</a><br />
<a href="021-plotly.graph-Deaths-by-Country.ipynb">plotly.graph-Deaths-by-Country.ipynb</a><br />
<a href="022-GlobalData-Country.ipynb">GlobalData-Country.ipynb</a><br />
<a href="022-review-shapefile-CSSEGISandData-COVID-19_Creating-Plots.ipynb">review-shapefile-CSSEGISandData-COVID-19_Creating-Plots.ipynb</a><br />
<a href="023-Getting-html-DataFrames-Resources-Statisics.ipynb">Getting-html-DataFrames-Resources-Statisics.ipynb</a><br />
<a href="024-python-matplotlib-and-cartopy-custom-legends.ipynb">python-matplotlib-and-cartopy-custom-legends.ipynb</a><br />
<a href="025-python2.7-Post-Plots-to-Twitter.ipynb">python2.7-Post-Plots-to-Twitter.ipynb</a><br />
<a href="026-Basemap-shapefiles-review.ipynb">Basemap-shapefiles-review.ipynb</a><br />
<a href="027-get-cities-and-counties-info.ipynb">get-cities-and-counties-info.ipynb</a><br />
<a href="028-Twitter-plot-bot.ipynb">Twitter-plot-bot.ipynb</a><br />
<a href="029-Python Maps with Folium-Advanced.ipynb">Python Maps with Folium-Advanced.ipynb</a><br />
<a href="030-Click-list-save-Latitude-and-Longitude-With-Mouse.ipynb">Click-list-save-Latitude-and-Longitude-With-Mouse.ipynb</a><br />
<a href="030-Create-a-Shapefile.ipynb">Create-a-Shapefile.ipynb</a><br />
<a href="031-naturalearthdata.ipynb">naturalearthdata.ipynb</a><br />
<a href="032-Experimental-resources-mouse-npy-cut-and-paste-arrays.ipynb">Experimental-resources-mouse-npy-cut-and-paste-arrays.ipynb</a><br />
<a href="033-Experiments-Tests.ipynb">Experiments-Tests.ipynb</a><br />
<a href="034-Working-with-SQLITE3-storage.ipynb">Working-with-SQLITE3-storage.ipynb</a><br />
<a href="050-grb-test.ipynb">grb-test.ipynb</a><br />
<a href="https://colab.research.google.com/github/Nisarg03/Astrophysics/blob/main/Basics%20of%20Image%20Reduction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Electromagnetic follow-up of Gravitational Wave events (EMGW)
## Main Motive
Understanding standard telescope data and making it ready for further science analysis. This step is usually termed calibration, or pre-processing, of the data.
## Key steps
- Understanding the data acquisition.
- Handling fits files.
- Pre-processing RAW images using bias and flat fields (a short sketch of the arithmetic follows below).
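At its core, the pre-processing referenced above reduces to simple array arithmetic once the calibration frames have been combined. A minimal sketch of the idea (an illustration only, not this notebook's code; `raw_science`, `master_bias` and `master_flat` are placeholder names for 2-D numpy arrays):
```
import numpy as np

def calibrate(raw_science, master_bias, master_flat):
    """Basic CCD reduction: subtract the bias level, divide by the normalised flat field."""
    bias_subtracted = raw_science - master_bias
    flat_norm = master_flat / np.median(master_flat)   # normalise the flat to ~1
    return bias_subtracted / flat_norm
```
The later cells build the master frames by median-combining the individual bias and flat exposures before applying this kind of correction to the science images.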
**A few important things before we get started:-**
- A python3 environment is recommended for this notebook, with the following modules installed (you can also use conda to create such an environment):
- numpy
- matplotlib
- astropy
- photutils
If any of these modules are not installed, a simple pip install might do the job, i.e. `pip install <module>`. You can also use conda to install these modules if you are working in a conda environment; just make sure that the environment is active and that pip is installed within it.
**We also require a few additional astrometic software dependency :-**
- SExtractor https://www.astromatic.net/software
## Let's get started!
We will first import all necessary modules and do some important checks!
```
! pip install astroquery
! pip install astroscrappy
! pip install astropy
! pip install photutils
# Importing all necessary modules
import os
import glob
import numpy as np
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
import photutils
import astroscrappy
```
Where is your data sitting? For this notebook, keep all of the given files in the folder 'data' in the directory where you're running this notebook.
Directory structure should be:
- CurrentWorkingDirectory <br>
- data <br>
- bias <br>
- flats <br>
- science <br>
```
# mounting google drive to import data files
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
cwd = "/content/drive/MyDrive"
# Assigning names for all folders
bias_path = os.path.join(cwd,'data','bias')
flat_path = os.path.join(cwd,'data','flats')
science_path = os.path.join(cwd,'data','science')
os.chdir(cwd)
os.mkdir('reduced') #creates new folder in \content
reduced_path = os.path.join(cwd,'reduced')
# A simple function to check whether some directory exists or not. Useful while using these scripts on new data.
def do_path_check(path_list):
a_exist = [f for f in path_list if os.path.isdir(f)]
not_exist=list(set(a_exist) ^ set(path_list))
print("The following directories exists:\n {} \n".format(a_exist))
if len(not_exist) > 0:
print("Please check the path you have given for {}.\n \nIt does not exist!!! \n".format(not_exist))
return
else:
print("All paths exist. \n")
def make_folder_check(path):
if os.path.exists(path):
print("{} Directory exists.".format(path.split("/")[-1]))
else:
print("{} does not exist.".format(path))
make_folder_check(bias_path)
make_folder_check(flat_path)
make_folder_check(science_path)
# make_folder_check(reduced_path)
# Making a list for the available data. Useful reference.
bias_list = glob.glob(bias_path +'/*.fits')
flat_list = glob.glob(flat_path +'/*.fits')
sci_list = glob.glob(science_path +'/*wcs.fits')
print("Number of bias frames available: {}".format(len(bias_list)))
print("Number of flat frames available: {}".format(len(flat_list)))
print("Number of science frames available: {}".format(len(sci_list)))
test_hdu = fits.open(bias_list[0])
# Let's check the hdu info
test_hdu.info()
```
Now let's have an actual look at the image itself using matplotlib tools.
This can be done easily using `plt.imshow`. But before that let's set up the matplotlib for nicer visualisation.
```
plt.rc('axes', labelsize=14)
plt.rc('axes', titlesize=16)
plt.rc('axes', titleweight='bold')
plt.rc('axes', labelweight='bold')
plt.rc('font', family='sans-serif')
# Display the image with a 'gray' colormap
fig = plt.figure(figsize = (10,10))
plt.imshow()
plt.colorbar()
```
Ahh!!! This is certainly not what you were expecting, right? <br>
So now, what can we do to improve the visibility of the image?
Hint: look at the colorbar scale and the statistics of the image.
Try to manipulate the image display style.
```
fig = plt.figure(figsize = (10,10))
plt.imshow()
plt.colorbar()
# Combining median frame to create a master frame
def median_combine_frames(img_list):
x_len = fits.getval(img_list[0], 'NAXIS2')
y_len = fits.getval(img_list[0], 'NAXIS1')
n_images = len(img_list)
all_frames = np.empty((n_images, x_len, y_len))
for i in range(n_images):
all_frames[i,:,:] = fits.open(img_list[i])[0].data
master_img = np.median(all_frames, axis=0)
return master_img
```
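For the display exercise a little further up, one possible approach (just a sketch, reusing the bias frame already opened into `test_hdu`) is to clip the colour scale with sigma-clipped statistics:
```
# Sketch: scale the display using sigma-clipped statistics of the bias frame
img = test_hdu[0].data
mean_b, median_b, std_b = sigma_clipped_stats(img)
fig = plt.figure(figsize=(10, 10))
plt.imshow(img, cmap='gray', vmin=median_b - 5*std_b, vmax=median_b + 5*std_b)
plt.colorbar()
plt.show()
```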
Now create a Master Bias! Don't forget to understand every line in the function above.
```
# Creating a Master Bias
master_bias =
plt.figure(figsize = (10,10))
plt.imshow()
plt.colorbar()
```
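One way the cell above could be completed (a sketch that simply reuses `median_combine_frames` and the same display trick):
```
# Sketch: median-combine all bias frames and display the result
master_bias = median_combine_frames(bias_list)

mean_b, median_b, std_b = sigma_clipped_stats(master_bias)
plt.figure(figsize=(10, 10))
plt.imshow(master_bias, vmin=median_b - 5*std_b, vmax=median_b + 5*std_b)
plt.colorbar()
plt.show()
```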
# Let's make a masterflat.
There are a couple of things you should keep in mind.
We use flat frames to correct for the pixel response. This response function depends on the wavelength of light, and hence on which filter was used for the image, so in practice we make master flats for each filter individually. You need not worry here, though: all the images we have given you are in the same filter, and you already know how to verify that. <br>
The other thing to think about is whether you need to correct the flats using the master bias before combining them.
# So, this brings us to the next exercise!
Let's make a master flat in the same manner as we made the master bias frame.
# Exercise
Print the statistics of all flats to verify whether they are usable. We will discard the 'bad' flats if necessary. Find out what the median counts of each flat are.
```
Median_Counts
```
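A minimal sketch for this exercise, printing the median counts (and a robust spread) of every flat frame:
```
# Sketch: inspect the counts of each flat frame before combining
for flat_file in flat_list:
    flat_data = fits.open(flat_file)[0].data
    mean_c, median_c, std_c = sigma_clipped_stats(flat_data)
    print("{}: median counts = {:.1f}, std = {:.1f}".format(flat_file.split("/")[-1], median_c, std_c))
```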
After verifying that all flats are good, we will now combine them to create a master-flat. You have already combined bias images to create a master-bias.
```
def flat_combine():
return master_img
master_flat = flat_combine()
print(np.median(master_flat))
mean_f, median_f, std_f = sigma_clipped_stats(master_flat)
fig = plt.figure(figsize = (10,10))
plt.imshow(master_flat, vmin = median_f - 5*std_f, vmax = median_f + 5*std_f)
plt.colorbar()
```
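One possible `flat_combine`, written as a sketch with explicit arguments for clarity; it assumes each flat should first be bias-subtracted and that the final master flat is normalised by its median, so that well-illuminated pixels sit near 1:
```
# Sketch: bias-subtract each flat, median-combine, then normalise by the median
def flat_combine(img_list, master_bias):
    x_len = fits.getval(img_list[0], 'NAXIS2')
    y_len = fits.getval(img_list[0], 'NAXIS1')
    all_frames = np.empty((len(img_list), x_len, y_len))
    for i in range(len(img_list)):
        all_frames[i, :, :] = fits.open(img_list[i])[0].data - master_bias
    master_img = np.median(all_frames, axis=0)
    return master_img / np.median(master_img)

# master_flat = flat_combine(flat_list, master_bias)
```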
Are you able to see some patterns in the image? If you look carefully, there are two patterns in this image.
Can you tell what the possible reasons behind these patterns could be?
```
## Set the unexposed pixels to NaN, and display again
master_flat_norm = np.copy(master_flat)*1.0 # Use 'copy' to preserve the original masterFlat
# Set a mask for all pixels where the counts are less than 0.8
plt.figure(figsize=(8,8))
plt.imshow()
plt.colorbar()
plt.show()
```
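A sketch of how the masking cell above could be filled in, assuming the master flat is normalised so that vignetted pixels fall below 0.8:
```
# Sketch: flag under-illuminated pixels and blank them out
mask = master_flat_norm < 0.8
master_flat_norm[mask] = np.nan

plt.figure(figsize=(8, 8))
plt.imshow(master_flat_norm, vmin=median_f - 5*std_f, vmax=median_f + 5*std_f)
plt.colorbar()
plt.show()
```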
# Finally!
The bias and flat corrections: the big step that we have been working towards so far!
Use the correction equation to get a reduced science image. Then save this new reduced science image.
Don't forget to add headers that say you have corrected for bias and flats. This is an important step that will help you understand the importance of headers. It also acts as documentation for all the analysis done on the file, which is very useful for your own reference and especially when someone else accesses your data.
```
for i in range(len(sci_list)):
# Read in the FITS data from the science image
sci_hdu =
sci_data =
sci_header =
fbcr_data =
# Write it to a new FITS file
new_hdu =
# Save the reduced image to a fits file
try:
new_hdu.header.remove('BZERO')
new_hdu.header.remove('BSCALE')
except:
print("No BZERO, BSCALE keyword found ")
fbcr_filename = sci_list[i].split("/")[-1].replace('fits','fb.fits')
new_hdu.writeto(os.path.join(reduced_path, fbcr_filename), overwrite=True)
```
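A sketch of one way to fill in the reduction loop above, assuming `master_bias` and the normalised, masked `master_flat_norm` from the previous steps:
```
# Sketch: bias-subtract and flat-field each science frame, then save it
for i in range(len(sci_list)):
    sci_hdu = fits.open(sci_list[i])
    sci_data = sci_hdu[0].data
    sci_header = sci_hdu[0].header

    fbcr_data = (sci_data - master_bias) / master_flat_norm

    new_hdu = fits.PrimaryHDU(data=fbcr_data, header=sci_header)
    new_hdu.header.add_history('Bias subtracted and flat-field corrected')
    try:
        new_hdu.header.remove('BZERO')
        new_hdu.header.remove('BSCALE')
    except KeyError:
        print("No BZERO, BSCALE keyword found")
    fbcr_filename = sci_list[i].split("/")[-1].replace('fits', 'fb.fits')
    new_hdu.writeto(os.path.join(reduced_path, fbcr_filename), overwrite=True)
```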
## But what does it look like?
Let's plot and find out!
So, plot all 4 reduced science images to visualize the corrections.
```
fbproc_list = glob.glob(reduced_path + "/*fb.fits") # list of all flat, bias corrected files.
plt.figure(figsize=(14,14))
for i in range(len(fbproc_list)):
plt.colorbar()
plt.show()
```
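A sketch for the plotting cell above, assuming the four reduced frames are shown in a 2x2 grid:
```
# Sketch: display each reduced science frame with a robust colour scale
plt.figure(figsize=(14, 14))
for i in range(len(fbproc_list)):
    img = fits.open(fbproc_list[i])[0].data
    med, std = np.nanmedian(img), np.nanstd(img)
    plt.subplot(2, 2, i + 1)
    plt.imshow(img, vmin=med - 5*std, vmax=med + 5*std)
    plt.title(fbproc_list[i].split("/")[-1])
    plt.colorbar()
plt.show()
```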
### Now on to the final stage of the pre-processing: Cosmic ray removal!
You might already know that cosmic rays are charged particles (protons and atomic nuclei) that travel through space at nearly the speed of light. They produce a shower of secondary particles as soon as they hit the upper layers of the Earth's atmosphere. Charged particles interact with a CCD differently from photons: they deposit most of their energy in a very small area and can have distinctive profiles in the image. We use this criterion to differentiate them from astrophysical sources.
`lacosmic` is one of the best available algorithms for identifying various types of cosmic ray hits. We are going to use the python package `astroscrappy` (https://astroscrappy.readthedocs.io/en/latest/), which is based on this algorithm.
```
Gain = 1.6 # electrons / ADU
read_noise = 14.0 # electrons
saturation = 96000 # electrons
for i in range(len(fbproc_list)):
fbproc_hdu = fits.open(os.path.join(reduced_path, fbproc_list[i]))
fbproc_data = fbproc_hdu[0].data
fbproc_header = fbproc_hdu[0].header
new_data = np.copy(fbproc_data)
cosmic_mask, clean_data = astroscrappy.detect_cosmics(new_data, gain=Gain, readnoise=read_noise, satlevel=saturation)
print('{} pixels are affected by cosmic rays for file {}'.format(np.sum(cosmic_mask), fbproc_list[i].split("/")[-1]))
proc_data = clean_data / Gain
    # Flag the vignetted region: pixels where the normalised flat is below 0.8
    mask = master_flat < 0.8
    if np.any(mask):
        proc_data[mask] = float('NaN') # Correcting for the vignetting region
proc_header = fbproc_header
proc_header.add_history('Cosmic ray removed')
cleaned_image = fbproc_list[i].replace("fb.fits", "proc.fits")
fits.writeto(os.path.join(reduced_path, cleaned_image), proc_data, proc_header, overwrite=True)
```
Plot the cleaned_image below and see whether the vignetting region is still present.
```
```
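A sketch for the empty cell above, reading back one of the `*proc.fits` files that were just written:
```
# Sketch: display a cosmic-ray-cleaned image and check the vignetted (NaN) region
proc_list = glob.glob(reduced_path + "/*proc.fits")
clean_img = fits.open(proc_list[0])[0].data
med, std = np.nanmedian(clean_img), np.nanstd(clean_img)
plt.figure(figsize=(10, 10))
plt.imshow(clean_img, vmin=med - 5*std, vmax=med + 5*std)
plt.colorbar()
plt.show()
```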
Let's check if the cosmic ray correction worked or not.
```
_, m1, std1 = sigma_clipped_stats(fbproc_data[2812-20:2812+20, 1396-20:1396+20])
_, m2, std2 = sigma_clipped_stats(proc_data[2812-20:2812+20, 1396-20:1396+20])
plt.imshow(fbproc_data[2812-20:2812+20, 1396-20:1396+20], vmin = m1-5*std1, vmax = m1 + 5*std1)
plt.show()
plt.imshow(proc_data[2812-20:2812+20, 1396-20:1396+20], vmin = m2-5*std2, vmax = m2 + 5*std2)
```
The three steps mentioned above (bias correction, flat fielding and cosmic ray removal) are the most common pre-processing operations. Depending on the telescope facility, other operations may also be part of pre-processing; such corrections are not applicable for general use, as they are usually facility specific.
Sometimes astronomers take multiple short exposures and stack the images to get a better S/N ratio. This also helps in getting rid of the cosmic ray removal step. Spend a minute thinking about why we can skip cosmic ray removal after stacking the images (see the home exercise at the end of the notebook).
After all these corrections, we usually perform one more step, i.e. solving the images for astrometry.
You must be thinking, what does that mean?
When we take images with a telescope, the camera simply records photon counts in each pixel; it has no information about where the photons came from. Although the telescope pointing software usually gives a rough estimate of the direction, it is impossible to associate the photons in each pixel with a specific part of the sky. When solving images for astrometry, we establish a unique relation between image pixels and sky coordinates. This allows us to identify sources in the image.
This step requires astrometry.net's `solve-field` engine to solve the images (you can download it from https://astrometry.net/use.html if you are interested) with the help of index files. These index files are rather large, ~50 GB in total. We have provided you with astrometry-solved images to avoid this heavy download. An alternative option is to upload images to the publicly available online server of astrometry.net (https://nova.astrometry.net/upload) and download the solved images from there.
*Important note*: you might want to log in first and then change a few advanced settings in order to keep your data private. Otherwise, uploaded images will be publicly available.
# "FashionMNIST with PyTorch & fastAI"
> "[Part 2] Solving FashionMNIST for Google Code-In for Julia"
- toc: false
- branch: master
- badges: true
- comments: true
- categories: [pytorch, ml, gci19]
- image: images/fmnist.png
- hide: false
- search_exclude: false
## Task Statement :
Fashion MNIST is a good way to introduce the concept of autoencoders and for classification tasks. Write an efficient Fashion MNIST implementation using Flux and benchmark it against equivalent implementations in TensorFlow and PyTorch. A good extension might be to have it run smoothly on GPUs too. The FashionMNIST dataset can be easily obtained and unpackaged into ready-to-use Julia data types with the help of MLDatasets.jl. A working example of using Flux for classification of handwritten digits from the MNIST dataset can be found here, for students who are already familiar with basic image detection techniques and want to hit the ground running. Flux's documentation can be found here.
I am going to use a pretrained CNN called resnet34. (The one thing I understood after watching the first three fastAI lectures: use this for image-classification tasks.) Hoping to understand more theory by reading this [article](https://towardsdatascience.com/understanding-and-visualizing-resnets-442284831be8)
But honestly, I don't know the complete theory behind a CNN myself. I'm still trying to learn it from the lectures given in the Deep Learning Specialisation. I completely know how to build simple multilayer perceptrons though, and the theory behind them too. xD So I'll also try to build some of them on this data-set.
Also, the fastAI course follows a top-down approach, so some concepts remain unclear, but with reference to the image classification tasks we did in lectures 1 and 2 of the course, I was able to make this!
Julia code will be submitted separately.
P.S: Special thanks to my mentor Kartikey Gupta for all his support and his [implementation](https://github.com/kraftpunk97/FashionMNIST-with-keras/blob/master/Fashion_MNIST_with_keras.ipynb) in Keras which provided me a path to write the notebook.

```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
import pandas as pd
fmnist_test = pd.read_csv("../input/fashionmnist/fashion-mnist_test.csv")
fmnist_train = pd.read_csv("../input/fashionmnist/fashion-mnist_train.csv")
%reload_ext autoreload
%autoreload 2
%matplotlib inline
#autoreload reloads modules automatically before entering the execution of code typed. It is beneficial to update matplotlib functions
# everytime a cell is run.
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
torch.cuda.is_available()
torch.backends.cudnn.enabled
print(os.listdir('../input/'))
PATH = "../input/"
TMP_PATH = "/tmp/tmp"
MODEL_PATH = "/tmp/model/"
arch = resnet34
sz = 14
```
## Data-Preprocessing
```
#collapse
fmnist_test = pd.read_csv("../input/fashionmnist/fashion-mnist_test.csv")
fmnist_train = pd.read_csv("../input/fashionmnist/fashion-mnist_train.csv")
#Shape of the data-sets.
print(f'fmnist_train shape : {fmnist_train.shape}') #60,000 rows and 785 columns
print(f'fmnist_test shape : {fmnist_test.shape}') #10,000 rows and 785 columns
#Seeing some of the data distribution.
fmnist_train.head(7)
```
* As we can see, the first column depicts the label of the image, which, according to the official repository of the data-set, corresponds to:<br>
**Labels**<br><br>
Each training and test example is assigned to one of the following labels:
| Label | Description |
| --- | --- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
```
#I'll be now splitting 20% of the training data into validation data-set.
fmnist_valid = fmnist_train.sample(frac=0.2)
print(fmnist_valid.shape, '| Shape of Validation Set')
#Dropping the sampled validation rows from the training set.
fmnist_train = fmnist_train.drop(fmnist_valid.index)
print(fmnist_train.shape, '| Shape Training Set')
#Defining labels to predict
labels = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
#Getting the images as X (reshaping the images into 28x28) and labels (flattened) as y from the data-sets. (Changing the dimensions)
def split(data):
'''returns a tuple (X, y) where
X : the training inputs which is in (samples, height, width, channel) shape
y : flattened (one-D) label vector
'''
y = data['label'].values.flatten()
X = data.drop('label', axis=1).values
X = X.reshape(X.shape[0], 28, 28)
return (X,y)
X_train, y_train = split(fmnist_train)
X_valid, y_valid = split(fmnist_valid)
X_test, y_test = split(fmnist_test)
print("Training Set Shape")
print(X_train.shape,'\n',y_train.shape)
print("Validation Set Shape")
print(X_valid.shape,'\n',y_valid.shape)
print("Test Set Shape")
print(X_test.shape,'\n',y_test.shape)
```
#### Some image processing tasks
Normalising image data [(learnt here)](https://en.wikipedia.org/wiki/Normalization_(image_processing))
<code>Scaling the values of the individual pixels from 0->255 to 0->1 for reduced computational complexity.</code>
and adding the missing colour channels to the images (the pretrained resnet34 expects 3-channel RGB input, so the single grayscale channel is stacked three times along a new last axis).
```
X_train = X_train.astype('float64') / 255
X_valid = X_valid.astype('float64') / 255
X_test = X_test.astype('float64') / 255
X_train = np.stack((X_train,) * 3, axis=-1)
X_valid = np.stack((X_valid,) * 3, axis=-1)
X_test = np.stack((X_test,) * 3, axis=-1)
```
## Visualising the images
using Matplotlib.
```
index = 42 #THE ANSWER TO LIFE, THE UNIVERSE AND EVERYTHING is a Pullover.
plt.imshow(X_train[index,], cmap='gray')
plt.title(labels[y_train[index]])
#Code inspiration from Kartikey's Keras implementation of the same
plt.figure(figsize=(10, 10))
for i in range(25):
plt.subplot(5, 5, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(X_train[i], cmap='gray')
plt.title(labels[y_train[i]])
plt.show()
```
# Training the Model using pre-trained cnn (resnet34)
for 7 epochs
```
data = ImageClassifierData.from_arrays(PATH, trn=(X_train,y_train), classes=[0,1,2,3,4,5,6,7,8,9],val=(X_valid, y_valid), tfms=tfms_from_model(arch, 28), test=X_test)
learn = ConvLearner.pretrained(arch, data, precompute=True, tmp_name=TMP_PATH, models_name=MODEL_PATH)
learn.fit(7e-3, 3, cycle_len=1, cycle_mult=2)
```
We get an accuracy of around **85.55%**, which is good and not inflated like the ~99% typically seen on the original MNIST data-set.
From what I've scavenged from the web, the one-shot high accuracy of the fastai library can be explained via:
1. TTA (test-time augmentation) involves taking a series of different versions of the original image (for example cropping different areas, or changing the zoom) and passing them through the model. The average output is then calculated for the different versions and this is given as the final output score for the image.
2. Dropout combats overfitting and so would have proved crucial in winning on a relatively small dataset such as CIFAR-10. Dropout is implemented automatically by fastai when creating a learn object, though it can be altered using the ps variable (not used here though)
```
log_predicns, _ = learn.TTA(is_test=True)
prods = np.exp(log_predicns)
prods = np.mean(prods, 0)
accuracy_np(prods, y_test)
```
# 0.8565. -> 85.65 %
Some notes on accuracy vs precision in ML for my revision. <br>
\begin{equation}
accuracy=\frac{TruePositive+TrueNegative}{TruePositive+TrueNegative+FalsePositive+FalseNegative}
\end{equation}
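For completeness (since the note is about accuracy vs precision), the standard precision formula is:
\begin{equation}
precision=\frac{TruePositive}{TruePositive+FalsePositive}
\end{equation}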
-PseudoCodeNerd
## Loading preprocessed data
```
LEVEL="sequence"
PATH = "../weights/auto_encoder{}/version_1".format(LEVEL)
BATCH_SIZE=256
LAMBDA = 10
NUM_EPOCH = 1000000
NUM_OF_ACIDS = 21
EMBEDDING_SIZE = 8
import tensorflow as tf
tf.__version__
import numpy as np
train_data = np.load("..//data//train_features_3.6.1.7LLevel_4.npy")
val_data = np.load("..//data//val_features_3.6.1.7LLevel_4.npy")
train_data.shape, val_data.shape
SEQUENCE_LENGTH=train_data.shape[1]
SEQUENCE_LENGTH
STEPS_PER_EPOCH = int(train_data.shape[0]/BATCH_SIZE)+1
STEPS_PER_EPOCH
```
# Model
## Discriminator
```
def discriminator(x, is_training):
with tf.variable_scope('discriminator', reuse=tf.AUTO_REUSE) as scope:
print('discriminator')
conv1 = tf.layers.conv2d(
inputs=x,
filters=64,
kernel_size=[3,EMBEDDING_SIZE],
strides=(2,1),
padding="same",
activation=tf.nn.leaky_relu,
name = "dconv1")
print(conv1.shape)
# Convolutional Layer #2
conv2 = tf.layers.conv2d(
inputs=conv1,
filters=128,
kernel_size=[3,EMBEDDING_SIZE],
strides=(2,1),
padding="same",
activation=tf.nn.leaky_relu,
name = "dconv2")
conv2 = tf.layers.batch_normalization(conv2, name = "dbn1")
print(conv2.shape)
conv3 = tf.layers.conv2d(
inputs=conv2,
filters=256,
kernel_size=[3,EMBEDDING_SIZE],
strides=(2,1),
padding="same",
activation=tf.nn.leaky_relu,
name = "dconv3")
conv3 = tf.layers.batch_normalization(conv3, name = "dbn2")
print(conv3.shape)
flat = tf.layers.flatten(conv3, name="dflat")
output = tf.layers.dense(inputs=flat,
activation=None,
units=1,
name="doutput")
print(output.shape)
output = tf.reshape(output, [-1])
print(output.shape)
return output
```
# Generator
```
import math
def generator(input_batch=None, is_training=True):
with tf.variable_scope('generator') as scope:
print('generator')
if input_batch is None:
input_batch = tf.random_normal([BATCH_SIZE, 128])
dim = math.floor(SEQUENCE_LENGTH/4)
print (input_batch.shape)
dense1 = tf.layers.dense(inputs=input_batch,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.02),
bias_initializer=tf.zeros_initializer (),
units=dim*EMBEDDING_SIZE*256,
activation=tf.nn.relu,
name="dense1")
        # Reshape the dense output into a (dim, EMBEDDING_SIZE, 256) feature map
        reshaped1 = tf.reshape(dense1, shape=[-1, dim, EMBEDDING_SIZE, 256], name='reshape1')
reshaped1 = tf.layers.batch_normalization(reshaped1, name = "gbn1")
print(reshaped1.shape)
up1 = tf.keras.layers.UpSampling2D(size=(2, 1))(reshaped1)
conv2 = tf.layers.conv2d(inputs=up1,
filters=128,
kernel_size=[3,EMBEDDING_SIZE],
padding="same",
activation=tf.nn.relu,
name = "conv2")
conv2 = tf.layers.batch_normalization(conv2, name = "gbn2")
print(conv2.shape)
up2 = tf.keras.layers.UpSampling2D(size=(2, 1))(conv2)
conv3 = tf.layers.conv2d(inputs=up2,
filters=64,
kernel_size=[3,EMBEDDING_SIZE],
padding="same",
activation=tf.nn.relu,
name = "conv3")
conv3 = tf.layers.batch_normalization(conv3, name = "gbn3")
print(conv3.shape)
conv4 = tf.layers.conv2d(inputs=conv3,
filters=1,
kernel_size=[3,EMBEDDING_SIZE],
padding="same",
activation=tf.nn.sigmoid,
name = "conv4")
print(conv4.shape)
return conv4
```
## Graph
```
tf.reset_default_graph()
with tf.variable_scope('input'):
real_sequences = tf.placeholder(tf.int32, [None, SEQUENCE_LENGTH], name='real_sequence')
is_training = tf.placeholder(tf.bool, name='is_train')
dataset = tf.data.Dataset.from_tensor_slices(real_sequences)
dataset = dataset.shuffle(buffer_size=10000, reshuffle_each_iteration=True)
dataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE)).repeat(NUM_EPOCH)
iterator = dataset.make_initializable_iterator()
acid_embeddings = tf.get_variable("acid_embeddings", [NUM_OF_ACIDS, EMBEDDING_SIZE])
batch_real_sequences = iterator.get_next()
embedded_real_sequences = tf.nn.embedding_lookup(acid_embeddings, batch_real_sequences)
embedded_real_sequences = tf.reshape(embedded_real_sequences, shape=[-1, SEQUENCE_LENGTH, EMBEDDING_SIZE, 1], name='embedded_real_sequences')
fake = generator(is_training=is_training)
logits_real = discriminator(embedded_real_sequences, is_training)
logits_fake = discriminator(fake, is_training)
d_loss = tf.reduce_mean(logits_fake) - tf.reduce_mean(logits_real) # This optimizes the discriminator.
g_loss = -tf.reduce_mean(logits_fake) # This optimizes the generator.
# WGAN-GP gradient penalty
with tf.name_scope("Gradient_penalty"):
eps = tf.random_uniform([BATCH_SIZE,1, 1, 1], minval=0.0,maxval=1.0)
interpolates = embedded_real_sequences + eps*(fake - embedded_real_sequences)
gradients = tf.gradients(discriminator(interpolates, is_training), [interpolates])[0]
slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1]))
gradient_penalty = tf.reduce_mean(tf.square(slopes - 1.))
d_loss += 10 * gradient_penalty
D_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,'discriminator')
G_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'generator')
trainer_d = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.5, beta2=0.9).minimize(d_loss, var_list=D_vars)
trainer_g = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.5, beta2=0.9).minimize(g_loss, var_list=G_vars)
```
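For reference, the losses built above correspond to the WGAN-GP objective, with the gradient-penalty weight hard-coded to 10 rather than taken from the `LAMBDA` constant defined earlier:
\begin{equation}
L_D = \mathbb{E}[D(\tilde{x})] - \mathbb{E}[D(x)] + \lambda \, \mathbb{E}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big], \qquad L_G = -\mathbb{E}[D(\tilde{x})]
\end{equation}
where $x$ is a real embedded sequence, $\tilde{x}$ a generated one, and $\hat{x}$ a random interpolation between the two.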
## Helpers for training model
## Review generated examples
```
def save_weights(saver, sess, path):
save_path = saver.save(sess, path)
print("Model saved in path: %s" % save_path)
def mean(l):
if len(l) == 0:
return 0
else:
return sum(l) / float(len(l))
def print_summary(steps, dLosses, gLosses):
if steps % int(STEPS_PER_EPOCH*100) == 0:
print('steps:{} \td_loss:{:.4f} \tg_loss:{:.4f}'.format(steps, mean(dLosses), mean(gLosses)))
dLosses, gLosses = [], []
return dLosses, gLosses
def display_sequence():
sequences = reverse_embedding_lookup(acid_embeddings, tf.squeeze(fake))
generated_sequences, logits = sess.run([sequences, logits_fake], feed_dict={is_training: False})
indexToLetter= {0: '0', 1: 'A', 2: 'C', 3: 'D', 4: 'E', 5: 'F', 6: 'G', 7: 'H', 8: 'I', 9: 'K', 10: 'L', 11: 'M', 12: 'N',
13: 'P', 14: 'Q', 15: 'R', 16: 'S', 17: 'T', 18: 'V', 19: 'W', 20: 'Y'}
best_sequence = "".join([ indexToLetter[acid_index] for acid_index in generated_sequences[np.argmax(logits)]])
worst_sequence = "".join([ indexToLetter[acid_index] for acid_index in generated_sequences[np.argmin(logits)]])
print("{} | Discriminator value {}".format(best_sequence, logits[np.argmax(logits)]))
print("{} | Discriminator value {}".format(worst_sequence, logits[np.argmin(logits)]))
import datetime
def save_model(saver, sess):
# Epoch ended
if steps % (STEPS_PER_EPOCH*100) == 0:
display_sequence()
        print("Epoch {}. Finished at {}".format((steps/STEPS_PER_EPOCH), str(datetime.datetime.now()).split('.')[0]))
save_weights(saver, sess, PATH)
```
## Running model
```
sess = tf.Session()
saver = tf.train.Saver(max_to_keep=3)
sess.run(tf.global_variables_initializer())
```
# CAUTION: Training the model
```
print ("Start training with batch size: {}, epoch num: {}".format(BATCH_SIZE, NUM_EPOCH))
sess.run(iterator.initializer, feed_dict={real_sequences: train_data})
steps, gen_iterations = 0, 0
dLosses, gLosses = [], []
while True:
try:
d_iters = (100 if gen_iterations < 25 or gen_iterations % 500 == 0 else 5)
for k in range(d_iters): # Discriminator
_, dLoss = sess.run([trainer_d, d_loss], feed_dict={is_training: True})
steps = steps + 1
dLosses.append(dLoss)
dLosses, gLosses = print_summary(steps, dLosses, gLosses)
save_model(saver, sess)
# Generator
_, gLoss = sess.run([trainer_g, g_loss], feed_dict={is_training: True})
gLosses.append(gLoss)
steps = steps + 1
gen_iterations = gen_iterations + 1
dLosses, gLosses = print_summary(steps, dLosses, gLosses)
save_model(saver, sess)
except tf.errors.OutOfRangeError:
print ("Training is finished")
break;
```
## Validation of discriminator
```
def reverse_embedding_lookup(acid_embeddings, embedded_sequence):
acid_embeddings_expanded = tf.tile(tf.expand_dims(acid_embeddings, axis = 0), [BATCH_SIZE, 1,1])
emb_distances = tf.matmul(
tf.nn.l2_normalize(acid_embeddings_expanded, axis=1),
tf.nn.l2_normalize(embedded_sequence, axis=1),
transpose_b=True)
return tf.argmax(emb_distances, axis=1)
# NOTE: real_reshaped, embedded_random_sequences, random_sequences and get_random_sequence
# are assumed to be defined elsewhere (they come from an earlier variant of this notebook).
# The discriminator above already uses tf.AUTO_REUSE, so no reuse argument is needed.
val_real = discriminator(real_reshaped, is_training)
val_fake = discriminator(embedded_random_sequences, is_training)
val_loss = tf.reduce_mean(val_real-val_fake)
real_predictions = tf.rint(val_real)
fake_predictions = tf.rint(val_fake)
correct_real_predictions = tf.equal(real_predictions, tf.zeros([BATCH_SIZE], dtype=tf.float32))
correct_fake_predictions = tf.equal(fake_predictions, tf.ones([BATCH_SIZE], dtype=tf.float32))
casted_real = tf.cast(correct_real_predictions, tf.float32)
casted_fake = tf.cast(correct_fake_predictions, tf.float32)
accuracy = (tf.reduce_mean(casted_real) + tf.reduce_mean(casted_fake))/2
#Validate the discriminator on the validation data set and on randomly generated sequences
print ('validating discriminator...')
sess.run(iterator.initializer,
feed_dict={real_sequences: val_data, random_sequences: get_random_sequence(val_data.shape[0])})
losses = []
accuracies = []
while True:
try:
v_loss, v_accuracy = sess.run([val_loss, accuracy], feed_dict={is_training: False})
losses.append(v_loss)
accuracies.append(v_accuracy)
except tf.errors.OutOfRangeError:
print ('Validation g_loss:{:.4f} ,accuracy :{:.4f}'.format(mean(losses), mean(accuracies)))
break
def restore_weights(saver, sess, path):
saver.restore(sess, path)
print("Model restored.")
restore_weights(saver, sess, PATH)
```
## Review generated examples
```
sequences = reverse_embedding_lookup(acid_embeddings, tf.squeeze(fake))
print (sequences.shape)
print ('Generating sequences...')
generated_sequences = sess.run([sequences], feed_dict={is_training: False})
generated_sequences[0]
indexToLetter= {0: '0', 1: 'A', 2: 'C', 3: 'D', 4: 'E', 5: 'F', 6: 'G', 7: 'H', 8: 'I', 9: 'K', 10: 'L', 11: 'M', 12: 'N',
13: 'P', 14: 'Q', 15: 'R', 16: 'S', 17: 'T', 18: 'V', 19: 'W', 20: 'Y'}
for s in generated_sequences[0]:
print("".join([ indexToLetter[acid_index] for acid_index in s ]))
print("")
print("".join([ indexToLetter[acid_index] for acid_index in s if acid_index != 0 ]))
print("---------------------")
```
```
from __future__ import print_function, division
%load_ext autoreload
import sys
sys.path.append('..')
%autoreload
import copy, os, pickle, pandas as pd, numpy as np
from mimic_direct_extract import (
save_numerics, get_values_by_name_from_df_column_or_index, get_variable_mapping, get_variable_ranges
)
from datapackage_io_util import (
load_datapackage_schema,
load_sanitized_df_from_csv,
save_sanitized_df_to_csv,
sanitize_df,
)
```
# Build Data
```
with open('../resources/testing_schemas.pkl', mode='rb') as f:
schema_data, schema_X, schema_I, schema_var_ranges, schema_var_map, schema_got_out = pickle.load(f)
GENDER, ETHNICITY, AGE = 'U', 'U', 40
DBSOURCE, LINKSTO, CATEGORY = 'TEST', 'TEST', 'TEST'
ADMITTIME, DISCHTIME = '2100-10-01 00:00:00', '2100-11-01 00:00:00'
DEATHTIME = '2101-10-01 00:00:00'
LOS_ICU, ADMISSION_TYPE = 31, 'U'
FIRST_CAREUNIT, MORT_ICU = 'U', 0
MORT_HOSP = 0
HOSPITAL_EXPIRE_FLAG = 0
HOSPSTAY_SEQ = 1
DATETIME_COLS = set([
'charttime', 'admittime', 'dischtime', 'deathtime', 'intime', 'outtime'
])
def make_datetime(df):
for col in set(df.columns).intersection(DATETIME_COLS): df[col] = pd.to_datetime(df[col])
return df
def build_sample_data(
subject_id, hadm_id, icustay_id, itemid,
itemid_label, itemid_unitname, level2, level1,
# gender, ethnicity, age,
intime, outtime,
charttime_hour1,
value_hour1,
valueuom_hour1,
outlier_low=np.NaN,
valid_low=np.NaN,
impute=np.NaN,
valid_high=np.NaN,
outlier_high=np.NaN
):
"""TODO(mmd): Generalize (slightly!!!)"""
X = schema_X.copy()
X = make_datetime(X.append(
{
'subject_id': subject_id,
'hadm_id': hadm_id,
'icustay_id': icustay_id,
'charttime': charttime_hour1,
'itemid': itemid,
'value': value_hour1,
'valueuom': valueuom_hour1,
},
ignore_index = True,
))
I = schema_I.copy()
I = I.append(
pd.DataFrame(
{
'label': itemid_label,
'dbsource': DBSOURCE,
'linksto': LINKSTO,
'category': CATEGORY,
'unitname': itemid_unitname,
},
index = pd.Index([itemid], name='itemid'),
)
)
data = schema_data.copy()
data = make_datetime(data.append(
{
'subject_id': subject_id,
'hadm_id': hadm_id,
'gender': GENDER,
'ethnicity': ETHNICITY,
'age': AGE,
'admittime': ADMITTIME,
'dischtime': DISCHTIME,
'deathtime': DEATHTIME,
'intime': intime,
'outtime': outtime,
'los_icu': LOS_ICU,
'admission_type': ADMISSION_TYPE,
'first_careunit': FIRST_CAREUNIT,
'mort_icu': MORT_ICU,
'mort_hosp': MORT_HOSP,
'hospital_expire_flag': HOSPITAL_EXPIRE_FLAG,
'hospstay_seq': HOSPSTAY_SEQ,
},
ignore_index = True
))
data.index = [icustay_id]
data.index.names = ['icustay_id']
var_map_columns = [
'LEVEL2', 'LEVEL1', 'ITEMID'
]
var_map = pd.DataFrame(
[[level2, level1, itemid]],
columns = var_map_columns,
# dtype = schema_var_map[var_map_columns].dtypes.to_dict(),
)
# TODO(mmd): var_range
var_range = schema_var_ranges.copy()
var_range = var_range.append(
pd.DataFrame(
{
'OUTLIER_LOW': outlier_low,
'VALID_LOW': valid_low,
'IMPUTE': impute,
'VALID_HIGH': valid_high,
'OUTLIER_HIGH': outlier_high,
},
index = pd.Index([level2], name='itemid'),
)
)
return X, data, I, var_map, var_range
def build_lvl2_out(
subject_id = 1, hadm_id = 1, icustay_id = 1,
aggregation_functions={
'count': [0, 1, 0, 0, 0, 0], 'mean': [np.NaN, 3, np.NaN, np.NaN, np.NaN, np.NaN],
'std': [np.NaN, 0, np.NaN, np.NaN, np.NaN, np.NaN]
},
level2 = 'test_level2'
):
tmp = pd.DataFrame(aggregation_functions)
tmp['subject_id'] = subject_id
tmp['hadm_id'] = hadm_id
tmp['icustay_id'] = icustay_id
tmp['hours_in'] = np.arange(len(tmp))
tmp.set_index(['subject_id', 'hadm_id', 'icustay_id', 'hours_in'], inplace=True)
tmp.columns = pd.MultiIndex.from_tuples(
[(level2, c) for c in tmp.columns],
names=('LEVEL2', 'Aggregation Function')
)
return tmp
def build_nogroup_out(
subject_id = 1, hadm_id = 1, icustay_id = 1,
aggregation_functions={
'count': [0, 1, 0, 0, 0, 0], 'mean': [np.NaN, 3, np.NaN, np.NaN, np.NaN, np.NaN],
'std': [np.NaN, 0, np.NaN, np.NaN, np.NaN, np.NaN]
},
level2 = 'test_level2',
level1 = 'test_level1',
itemid = 1,
label = 'test'
):
tmp = pd.DataFrame(aggregation_functions)
tmp['subject_id'] = subject_id
tmp['hadm_id'] = hadm_id
tmp['icustay_id'] = icustay_id
tmp['hours_in'] = np.arange(len(tmp))
tmp.set_index(['subject_id', 'hadm_id', 'icustay_id', 'hours_in'], inplace=True)
tmp.columns = pd.MultiIndex.from_tuples(
[(itemid, label, level1, level2, c) for c in tmp.columns],
names=('itemid','label','LEVEL1','LEVEL2', 'Aggregation Function')
)
return tmp
```
## Multiple observations data
```
X, data, I, var_map, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'test', itemid_unitname = 'N/A', level2 = 'test_level2', level1 = 'test_level1',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:15:00',
value_hour1 = '3',
valueuom_hour1 = 'm',
)
X_2, _, _, _ , _= build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'test', itemid_unitname = 'N/A', level2 = 'test_level2', level1 = 'test_level1',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:35:00',
value_hour1 = '5',
valueuom_hour1 = 'm',
)
X_3, _, _, _, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'test', itemid_unitname = 'N/A', level2 = 'test_level2', level1 = 'test_level1',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 03:35:00',
value_hour1 = '6',
valueuom_hour1 = 'm',
)
X = pd.concat([X, X_2, X_3])
# pd grouping calls std NaNs when there is only 1 obs. TODO Probably want to fix that.
# pd grouping calls counts NaNs when zero for some reason. TODO Probably want to fix that.
expected_out_level2 = build_lvl2_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 2, 0, 1, 0, 0], 'mean': [np.NaN, 4, np.NaN, 6, np.NaN, np.NaN],
'std': [np.NaN, np.sqrt(2), np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'test_level2'
)
expected_out_no_group = build_nogroup_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 2, 0, 1, 0, 0], 'mean': [np.NaN, 4, np.NaN, 6, np.NaN, np.NaN],
'std': [np.NaN, np.sqrt(2), np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'test_level2',
level1 = 'test_level1',
itemid = '1',
label = 'test'
)
X
data
I
var_map
expected_out_level2
expected_out_no_group
```
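These expected frames are presumably compared against the pipeline output elsewhere in the test suite; below is a minimal sketch of such a comparison, where `actual_out` is a hypothetical DataFrame produced by the code under test (the real call, e.g. to `save_numerics`, is not shown here):
```
# Sketch: compare a hypothetical pipeline output against an expected fixture
import pandas.testing as pdt

def assert_matches_expected(actual_out, expected):
    # Align the column order before comparing; NaNs compare as equal here
    aligned = actual_out[expected.columns]
    pdt.assert_frame_equal(aligned, expected, check_dtype=False)

# assert_matches_expected(actual_out, expected_out_level2)
```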
## Outlier Detection Data
```
X_outlier, _, _, _, var_range_outlier = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'test', itemid_unitname = 'N/A', level2 = 'test_level2', level1 = 'test_level1',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:15:00',
value_hour1 = '0',
valueuom_hour1 = 'm',
outlier_low = 1,
valid_low = 3,
impute = np.NaN,
valid_high = 10,
outlier_high = 15
)
X_2, _, _, _ , _= build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'test', itemid_unitname = 'N/A', level2 = 'test_level2', level1 = 'test_level1',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:35:00',
value_hour1 = '2',
valueuom_hour1 = 'm',
)
X_3, _, _, _, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'test', itemid_unitname = 'N/A', level2 = 'test_level2', level1 = 'test_level1',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 03:35:00',
value_hour1 = '5',
valueuom_hour1 = 'm',
)
X_4, _, _, _, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'test', itemid_unitname = 'N/A', level2 = 'test_level2', level1 = 'test_level1',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 03:50:00',
value_hour1 = '12',
valueuom_hour1 = 'm',
)
X_5, _, _, _, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'test', itemid_unitname = 'N/A', level2 = 'test_level2', level1 = 'test_level1',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 04:10:00',
value_hour1 = '18',
valueuom_hour1 = 'm',
)
X_outlier = pd.concat([X_outlier, X_2, X_3, X_4, X_5])
# pd groupby returns NaN for std when there is only 1 obs. TODO: probably want to fix that.
# pd groupby returns NaN for count when there are no observations;
# it also returns NaN for count when every observation is NaN. TODO: probably want to fix that.
expected_out_detection = build_lvl2_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 1, 0, 2, 0, 0], 'mean': [np.NaN, 3, np.NaN, 7.5, np.NaN, np.NaN],
'std': [np.NaN, np.NaN, np.NaN, np.sqrt(12.5), np.NaN, np.NaN]
}, level2 = 'test_level2'
)
expected_out_no_detection = build_lvl2_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 2, 0, 2, 1, 0], 'mean': [np.NaN, 1, np.NaN, 8.5, 18, np.NaN],
'std': [np.NaN, np.sqrt(2), np.NaN, np.sqrt(24.5), np.NaN, np.NaN]
}, level2 = 'test_level2'
)
X_outlier
var_range_outlier
expected_out_detection
expected_out_no_detection
```
## Multi-level Data
```
X_multi_level, _, I_multi_level, var_map_multi_level, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'eye', itemid_unitname = 'N/A', level2 = 'face', level1 = 'eyes',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:15:00',
value_hour1 = '3',
valueuom_hour1 = 'm',
)
X_2, _, I_2, var_map_2, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '2',
itemid_label = 'nose', itemid_unitname = 'N/A', level2 = 'face', level1 = 'nose',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:35:00',
value_hour1 = '5',
valueuom_hour1 = 'm',
)
X_3, _, I_3, var_map_3, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '3',
itemid_label = 'arm', itemid_unitname = 'N/A', level2 = 'body', level1 = 'arms',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:25:00',
value_hour1 = '6',
valueuom_hour1 = 'm',
)
X_multi_level = pd.concat([X_multi_level, X_2, X_3])
I_multi_level = pd.concat([I_multi_level, I_2, I_3])
var_map_multi_level = pd.concat([var_map_multi_level, var_map_2, var_map_3])
# pd groupby returns NaN for std when there is only 1 obs. TODO: probably want to fix that.
# pd groupby returns NaN for count when there are zero obs, for some reason. TODO: probably want to fix that.
expected_out_level2_multi = pd.concat([build_lvl2_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 1, 0, 0, 0, 0], 'mean': [np.NaN, 6, np.NaN, np.NaN, np.NaN, np.NaN],
'std': [np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'body'
),
build_lvl2_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 2, 0, 0, 0, 0], 'mean': [np.NaN, 4, np.NaN, np.NaN, np.NaN, np.NaN],
'std': [np.NaN, np.sqrt(2), np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'face'
)],axis=1)
expected_out_no_group_multi = pd.concat([build_nogroup_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 1, 0, 0, 0, 0], 'mean': [np.NaN, 3, np.NaN, np.NaN, np.NaN, np.NaN],
'std': [np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'face',
level1 = 'eyes',
itemid = '1',
label = 'eye'
),
build_nogroup_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 1, 0, 0, 0, 0], 'mean': [np.NaN, 5, np.NaN, np.NaN, np.NaN, np.NaN],
'std': [np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'face',
level1 = 'nose',
itemid = '2',
label = 'nose'
),
build_nogroup_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 1, 0, 0, 0, 0], 'mean': [np.NaN, 6, np.NaN, np.NaN, np.NaN, np.NaN],
'std': [np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'body',
level1 = 'arms',
itemid = '3',
label = 'arm'
)],axis=1)
X_multi_level
I_multi_level
var_map_multi_level
expected_out_level2_multi
expected_out_no_group_multi
```
## Missingness Data
```
X_missing, _, _, _, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'eye', itemid_unitname = 'N/A', level2 = 'face', level1 = 'eyes',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:15:00',
value_hour1 = '3',
valueuom_hour1 = 'm',
)
X_2, _, _, _, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '2',
itemid_label = 'nose', itemid_unitname = 'N/A', level2 = 'face', level1 = 'nose',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 02:35:00',
value_hour1 = '5',
valueuom_hour1 = 'm',
)
X_3, _, _, _, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '3',
itemid_label = 'arm', itemid_unitname = 'N/A', level2 = 'body', level1 = 'arms',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:25:00',
value_hour1 = '6',
valueuom_hour1 = 'm',
)
X_missing = pd.concat([X_missing, X_2, X_3])
# pd groupby returns NaN for std when there is only 1 obs. TODO: probably want to fix that.
expected_out_missing = build_lvl2_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 1, 1, 0, 0, 0], 'mean': [np.NaN, 3, 5, np.NaN, np.NaN, np.NaN],
'std': [np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'face'
)
X_missing
expected_out_missing
```
## Unit Conversion Data
```
X_unit, _, I_unit, var_map_unit, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'weight_oz', itemid_unitname = 'oz', level2 = 'weight', level1 = 'weight',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:15:00',
value_hour1 = '35.274',
valueuom_hour1 = 'oz',
)
X_2, _, _, _, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '1',
itemid_label = 'weight_oz', itemid_unitname = 'oz', level2 = 'weight', level1 = 'weight',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 03:35:00',
value_hour1 = '352.74',
valueuom_hour1 = 'oz',
)
X_3, _, I_3, var_map_3, _ = build_sample_data(
subject_id = 1, hadm_id = 1, icustay_id = 1, itemid = '3',
itemid_label = 'weight_kg', itemid_unitname = 'kg', level2 = 'weight', level1 = 'weight',
intime = '2100-10-01 00:00:00', outtime = '2100-10-01 05:00:00',
charttime_hour1 = '2100-10-01 01:35:00',
value_hour1 = '10',
valueuom_hour1 = 'kg',
)
X_unit = pd.concat([X_unit, X_2, X_3])
I_unit = pd.concat([I_unit, I_3])
var_map_unit = pd.concat([var_map_unit, var_map_3])
# pd groupby returns NaN for std when there is only 1 obs. TODO: probably want to fix that.
# pd groupby returns NaN for count when there are zero obs, for some reason. TODO: probably want to fix that.
expected_out_unit_level2 = build_lvl2_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 2, 0, 1, 0, 0], 'mean': [np.NaN, 5.5, np.NaN, 10, np.NaN, np.NaN],
'std': [np.NaN, np.sqrt(40.5), np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'weight'
)
expected_out_unit_no_group = pd.concat([build_nogroup_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 1, 0, 1, 0, 0], 'mean': [np.NaN, 1, np.NaN, 10, np.NaN, np.NaN],
'std': [np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'weight',
level1 = 'weight',
itemid = '1',
label = 'weight_oz'
),
build_nogroup_out(
subject_id = 1, hadm_id = 1, icustay_id = 1, aggregation_functions={
'count': [0, 1, 0, 0, 0, 0], 'mean': [np.NaN, 10, np.NaN, np.NaN, np.NaN, np.NaN],
'std': [np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN]
}, level2 = 'weight',
level1 = 'weight',
itemid = '3',
label = 'weight_kg'
)],axis=1)
X_unit
I_unit
var_map_unit
expected_out_unit_level2
expected_out_unit_no_group
```
# Tests
```
BASE_PARAMS = {
'outPath': None, # Should probably be out_path?
'columns_filename': None,
'subjects_filename': None,
'times_filename': None,
'dynamic_filename': None,
'dynamic_hd5_filename': None,
'group_by_level2': True,
'apply_var_limit': True,
'min_percent': 0,
}
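# All *_filename parameters stay None here; presumably save_numerics then skips writing any output
# files and just returns the aggregated dataframe, which is what the checks below compare against.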
TEST_CASES = [
(
{
'data': schema_data,
'X': schema_X,
'I': schema_I,
'var_map': schema_var_map,
'var_ranges': schema_var_ranges,
},
True,
None,
"Empty Input"
),
(
{
'data': data,
'X': X,
'I': I,
'var_map': var_map,
'var_ranges': schema_var_ranges,
},
False,
expected_out_level2,
"Multiple Observation with Grouping"
),
(
{
'data': data,
'X': X,
'I': I,
'var_map': var_map,
'var_ranges': schema_var_ranges,
'group_by_level2': False,
},
False,
expected_out_no_group,
"Multiple Observation without Grouping"
),
(
{
'data': data,
'X': X_outlier,
'I': I,
'var_map': var_map,
'apply_var_limit': False,
'var_ranges': var_range_outlier,
},
False,
expected_out_no_detection,
"Outlier Dectection Applied"
),
(
{
'data': data,
'X': X_outlier,
'I': I,
'var_map': var_map,
'var_ranges': var_range_outlier,
},
False,
expected_out_detection,
"Outlier Dectection Not Applied"
),
(
{
'data': data,
'X': X_multi_level,
'I': I_multi_level,
'var_map': var_map_multi_level,
'var_ranges': schema_var_ranges,
},
False,
expected_out_level2_multi,
"Multiple Level 2 With Grouping"
),
(
{
'data': data,
'X': X_multi_level,
'I': I_multi_level,
'var_map': var_map_multi_level,
'var_ranges': schema_var_ranges,
'group_by_level2': False,
},
False,
expected_out_no_group_multi,
"Multiple Level 2 No Grouping"
),
(
{
'data': data,
'X': X_missing,
'I': I_multi_level,
'var_map': var_map_multi_level,
'var_ranges': schema_var_ranges,
'min_percent': 30,
},
False,
expected_out_missing,
"Missing"
),
(
{
'data': data,
'X': X_unit,
'I': I_unit,
'var_map': var_map_unit,
'var_ranges': schema_var_ranges,
},
False,
expected_out_unit_level2,
"Unit conversion - grouping by level2"
),
(
{
'data': data,
'X': X_unit,
'I': I_unit,
'var_map': var_map_unit,
'var_ranges': schema_var_ranges,
'group_by_level2': False,
},
False,
expected_out_unit_no_group,
"Unit conversion - no grouping"
),
]
for test_case, (test_inputs, expect_error, expected_output, name) in enumerate(TEST_CASES):
pass_msg = "Passed Test %d: %s" % (test_case, name)
fail_msg = "Failed Test %d: %s" % (test_case, name)
inputs = copy.copy(BASE_PARAMS)
for k, v in test_inputs.items(): inputs[k] = v
try: got_out = save_numerics(**inputs)
except Exception as e:
if expect_error:
print(pass_msg)
continue
else:
print('\n'.join((fail_msg, "Test errored unexpectedly", str(e))))
print('\n\n')
continue
if expect_error:
print('\n'.join((fail_msg, "Test should've errored but didn't")))
print('\n\n')
continue
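    # NaN-aware comparison: equal_nan=True makes matching NaNs compare as close, and the isnull()
    # clause keeps positions where both frames are null, so shared missing values never fail a test.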
if (np.isclose(got_out, expected_output, equal_nan=True) | (got_out.isnull() & expected_output.isnull())).all().all(): print(pass_msg)
else:
print(fail_msg + '\nOutputs unequal!')
print("Want:")
print(expected_output)
print("Got:")
print(got_out)
print('\n\n')
continue
got_out
expected_output
```
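As an aside, the NaN-aware comparison in the loop above could also be written with pandas' own testing helper. A minimal sketch is below (not what this notebook uses; depending on how strictly dtypes and column order should match, extra flags such as `check_dtype=False` may be needed):
```
import pandas as pd

def frames_match(got: pd.DataFrame, want: pd.DataFrame) -> bool:
    """True when the two frames agree up to float tolerance, with aligned NaNs treated as equal."""
    try:
        pd.testing.assert_frame_equal(got, want, check_exact=False)
        return True
    except AssertionError:
        return False
```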
# Machine Learning Foundation
## Section 1, Part c: EDA Lab
## Introduction
We will be using the iris data set for this tutorial. This is a well-known data set containing iris species and sepal and petal measurements. The data we will use are in a file called `iris_data.csv` found in the [data](data/) directory.
```
import os
import numpy as np
import pandas as pd
```
## Question 1
Load the data from the file using the techniques learned today. Examine it.
Determine the following:
* The number of data points (rows). (*Hint:* check out the dataframe `.shape` attribute.)
* The column names. (*Hint:* check out the dataframe `.columns` attribute.)
* The data types for each column. (*Hint:* check out the dataframe `.dtypes` attribute.)
```
filepath = "data/iris_data.csv"
data = pd.read_csv(filepath)
data.head()
### BEGIN SOLUTION
# Number of rows
print(f'Number of rows: {data.shape[0]}')
# Column names
print(f'Column names: \t{data.columns.tolist()}')
# Data types
print(f'Data types:\n{data.dtypes}')
### END SOLUTION
```
## Question 2
Examine the species names and note that they all begin with 'Iris-'. Remove this portion of the name so the species name is shorter.
*Hint:* there are multiple ways to do this, but you could use either the [string processing methods](http://pandas.pydata.org/pandas-docs/stable/text.html) or the [apply method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html).
```
### BEGIN SOLUTION
print(data.head())
# The str method maps the following function to each entry as a string
data['species'] = data.species.str.replace('Iris-', '')
# alternatively
# data['species'] = data.species.apply(lambda r: r.replace('Iris-', ''))
print('-------------------------------------------------------------------')
data.head()
### END SOLUTION
```
## Question 3
Determine the following:
* The number of each species present. (*Hint:* check out the series `.value_counts` method.)
* The mean, median, quantiles, and range (max-min) for each petal and sepal measurement.
*Hint:* for the last question, the `.describe` method does have median, but it's not called median. It's the *50%* quantile. `.describe` does not have range though, and in order to get the range, you will need to create a new entry in the `.describe` table, which is `max - min`.
```
### BEGIN SOLUTION
# One way to count each species
data.species.value_counts()
# Select just the rows desired from the 'describe' method and add in the 'median'
stats_df = data.describe()
stats_df.loc['range'] = stats_df.loc['max'] - stats_df.loc['min']
out_fields = ['mean','25%','50%','75%', 'range']
stats_df = stats_df.loc[out_fields]
stats_df.rename({'50%': 'median'}, inplace=True)
stats_df
### END SOLUTION
```
## Question 4
Calculate the following **for each species** in a separate dataframe:
* The mean of each measurement (sepal_length, sepal_width, petal_length, and petal_width).
* The median of each of these measurements.
*Hint:* you may want to use Pandas [`groupby` method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) to group by species before calculating the statistic.
If you finish both of these, try calculating both statistics (mean and median) in a single table (i.e. with a single groupby call). See the section of the Pandas documentation on [applying multiple functions at once](http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once) for a hint.
```
### BEGIN SOLUTION
# The mean calculation
data.groupby('species').mean()
# The median calculation
data.groupby('species').median()
# applying multiple functions at once - 2 methods
data.groupby('species').agg(['mean', 'median']) # passing a list of recognized strings
data.groupby('species').agg([np.mean, np.median]) # passing a list of explicit aggregation functions
# If certain fields need to be aggregated differently, we can do:
from pprint import pprint
agg_dict = {field: ['mean', 'median'] for field in data.columns if field != 'species'}
agg_dict['petal_length'] = 'max'
pprint(agg_dict)
data.groupby('species').agg(agg_dict)
### END SOLUTION
```
## Question 5
Make a scatter plot of `sepal_length` vs `sepal_width` using Matplotlib. Label the axes and give the plot a title.
```
### BEGIN SOLUTION
import matplotlib.pyplot as plt
%matplotlib inline
# A simple scatter plot with Matplotlib
ax = plt.axes()
ax.scatter(data.sepal_length, data.sepal_width)
# Label the axes
ax.set(xlabel='Sepal Length (cm)',
ylabel='Sepal Width (cm)',
title='Sepal Length vs Width');
### END SOLUTION
```
## Question 6
Make a histogram of any one of the four features. Label axes and title it as appropriate.
```
### BEGIN SOLUTION
# Using Matplotlib's plotting functionality
ax = plt.axes()
ax.hist(data.petal_length, bins=25);
ax.set(xlabel='Petal Length (cm)',
ylabel='Frequency',
title='Distribution of Petal Lengths');
# Alternatively using Pandas plotting functionality
ax = data.petal_length.plot.hist(bins=25)
ax.set(xlabel='Petal Length (cm)',
ylabel='Frequency',
title='Distribution of Petal Lengths');
### END SOLUTION
```
## Question 7
Now create a single plot with histograms for each feature (`petal_width`, `petal_length`, `sepal_width`, `sepal_length`) overlaid. If you have time, next try to create four individual histogram plots in a single figure, where each plot contains one feature.
For some hints on how to do this with Pandas plotting methods, check out the [visualization guide](http://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.html) for Pandas.
```
import seaborn as sns
sns.set_context('notebook')
### BEGIN SOLUTION
# This uses the `.plot.hist` method
ax = data.plot.hist(bins=25, alpha=0.5)
ax.set_xlabel('Size (cm)');
data.hist(bins=25, figsize=(8,8))
# To create four separate plots, use Pandas `.hist` method
axList = data.hist(bins=25, figsize=(8,8))
# Add some x- and y- labels to first column and last row
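# Note: Axes.is_last_row() / Axes.is_first_col() were deprecated and later removed in newer
# Matplotlib releases; there, ax.get_subplotspec().is_last_row() / .is_first_col() can be used instead.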
for ax in axList.flatten():
if ax.is_last_row():
ax.set_xlabel('Size (cm)')
if ax.is_first_col():
ax.set_ylabel('Frequency')
### END SOLUTION
```
## Question 8
Using Pandas, make a boxplot of each petal and sepal measurement. Here is the documentation for [Pandas boxplot method](http://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.html#visualization-box).
```
### BEGIN SOLUTION
# Here we have four separate plots
data.boxplot(by='species', figsize=(16,8));
### END SOLUTION
```
## Question 9
Now make a single boxplot where the features are separated in the x-axis and species are colored with different hues.
*Hint:* you may want to check the documentation for [Seaborn boxplots](http://seaborn.pydata.org/generated/seaborn.boxplot.html).
Also note that Seaborn is very picky about data format--for this plot to work, the input dataframe will need to be manipulated so that each row contains a single data point (a species, a measurement type, and the measurement value). Check out Pandas [stack](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html) method as a starting place.
Here is an example of a data format that will work:
| | species | measurement | size |
| - | ------- | ------------ | ---- |
| 0 | setosa | sepal_length | 5.1 |
| 1 | setosa | sepal_width | 3.5 |
```
data.set_index('species').stack().to_frame()
### BEGIN SOLUTION
# First we have to reshape the data so there is
# only a single measurement in each column
plot_data = (data
.set_index('species')
.stack()
.to_frame()
.reset_index()
.rename(columns={0:'size', 'level_1':'measurement'})
)
plot_data.head()
### END SOLUTION
### BEGIN SOLUTION
# Now plot the dataframe from above using Seaborn
sns.set_style('white')
sns.set_context('notebook')
sns.set_palette('dark')
f = plt.figure(figsize=(6,4))
sns.boxplot(x='measurement', y='size',
hue='species', data=plot_data);
### END SOLUTION
```
## Question 10
Make a [pairplot](http://seaborn.pydata.org/generated/seaborn.pairplot.html) with Seaborn to examine the correlation between each of the measurements.
*Hint:* this plot may look complicated, but it is actually only a single line of code. This is the power of Seaborn and dataframe-aware plotting! See the lecture notes for reference.
```
### BEGIN SOLUTION
sns.set_context('talk')
sns.pairplot(data, hue='species');
### END SOLUTION
```
---
### Machine Learning Foundation (C) 2020 IBM Corporation
```
%reset
```
###################################################################
#Script Name :
#Description :
#Args :
#Author : Nor Raymond
#Email : [email protected]
###################################################################
```
import os, sys
import pandas as pd
import numpy as np
import yaml
from IPython.core.display import display, HTML
# Function to load yaml configuration file
def load_config(config_name):
with open(os.path.join(config_path, config_name), 'r') as file:
config = yaml.safe_load(file)
return config
config_path = "conf/base"
try:
# load yaml catalog configuration file
config = load_config("catalog.yml")
os.chdir(config["project_path"])
root_path = os.getcwd()
except:
os.chdir('..')
# load yaml catalog configuration file
config = load_config("catalog.yml")
os.chdir(config["project_path"])
root_path = os.getcwd()
```
### Functions to initialize data ingestion
```
def raw_file_checker(files):
keyword = ['RC', 'Vocab_2', 'Vocab_1']
checker = []
file_exists = {}
for fname in files:
for key in keyword:
if key in fname:
checker.append(True)
file_exists[key] = os.path.join(fname)
    if len(checker) == 3:
        print("PASS: All files exist!")
        condition = True
    else:
        print("FAIL: Not all files exist! Please check the raw data folder to ensure the RC, Vocab_1 and Vocab_2 files exist.")
        condition = False
    return condition, file_exists
def data_ingestion_initialize(root_path):
# Function to load yaml configuration file
def load_config(config_name):
with open(os.path.join(root_path, config_path, config_name), 'r') as file:
config = yaml.safe_load(file)
return config
# load yaml catalog configuration file
config = load_config("catalog.yml")
print("Initialize data ingestion and file checking...")
# define input and output data paths
raw_data_path = os.path.join(root_path, config["data_path"]["input"])
out_data_path = os.path.join(root_path, config["data_path"]["output"])
# define reference file paths
ref_path = os.path.join(root_path, config["data_path"]["ref"])
ref_filepath = os.path.join(ref_path, config["filenames"]["rc_col_ref"])
ref_data = pd.read_excel(io = ref_filepath, sheet_name="columns_check", header=None)
ref_data_cols = ref_data[0].tolist()
# get the list of files in raw folder
files = os.listdir(raw_data_path)
files = [f for f in files if f[-4:] == '.xls']
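    # Note: only files ending exactly in '.xls' are kept here; '.xlsx' exports would be skipped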
condition, file_exists = raw_file_checker(files)
## Define raw data filepaths
rc_filepath = os.path.join(raw_data_path, file_exists['RC'])
v1_filepath = os.path.join(raw_data_path, file_exists['Vocab_1'])
v2_filepath = os.path.join(raw_data_path, file_exists['Vocab_2'])
return raw_data_path, out_data_path, ref_path, ref_filepath, ref_data, ref_data_cols, files, file_exists, rc_filepath, v1_filepath, v2_filepath
raw_data_path, out_data_path, ref_path, ref_filepath, ref_data, ref_data_cols, files, file_exists, rc_filepath, v1_filepath, v2_filepath = data_ingestion_initialize(root_path)
```
### Function to create dataframes
```
def create_dataframes(file_initial, rc_filepath, v1_filepath , v2_filepath):
'''
file_initial choices -
RC: Reading Comprehension
Vocab_1: Vocabulary 1
Vocab_2: Vocabulary 2
'''
if file_initial == 'RC':
filepath = rc_filepath
elif file_initial == 'Vocab_1':
filepath = v1_filepath
elif file_initial == 'Vocab_2':
filepath = v2_filepath
# create dataframe from 'Summary' sheet
df_summary = pd.read_excel(io = filepath, sheet_name="Summary")
df_summary_cols = list(df_summary.columns)
# create dataframe from 'Data' sheet
df_data = pd.read_excel(io=filepath, sheet_name="Data")
df_data_cols = list(df_data.columns)
df_data = df_data.dropna(axis = 0, how = 'all')
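    # drop rows that are entirely empty (Excel exports often carry trailing blank rows)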
# create dataframe from 'Data' sheet
df_ans_key = pd.read_excel(io=filepath, sheet_name="Answer Key")
df_ans_key_cols = list(df_ans_key.columns)
print(f"Dataframe created from {file_initial} file")
return df_summary, df_summary_cols, df_data, df_data_cols, df_ans_key, df_ans_key_cols
#df_summary, df_summary_cols, df_data, df_data_cols, df_ans_key, df_ans_key_cols = create_dataframes('RC', rc_filepath, v1_filepath , v2_filepath)
```
### Data integrity scanning functions
```
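# ANSI escape sequences used to colour the PASS/FAIL messages printed to the console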
class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
def print_scan_results(col_condition_num, scan_num, file_initial , sheets = 'Summary'):
if scan_num == 1:
print(f"\nSCAN-{scan_num} : {file_initial} - {sheets} : Checking if the sheet contains either 'Language' and 'Market' columns ...")
if col_condition_num == True:
print(color.GREEN + "PASS" + color.END + ": 'Summary' sheet contains both 'Language' and 'Market' columns")
else:
print(color.RED + "FAIL" + color.END + ": 'Summary' sheet does not contain either 'Language' and 'Market' columns")
if scan_num == 2:
print(f"\nSCAN-{scan_num} : {file_initial} - {sheets} : Checking if Language' and 'Market' columns are empty ...")
if col_condition_num == True:
print(color.GREEN + "PASS" + color.END + ": Both 'Language' and 'Market' columns in 'Summary' contains complete data")
else:
print(color.RED + "FAIL" + color.END + ": Both or either 'Language' and 'Market' columns in 'Summary' sheet are empty or incomplete")
if scan_num == 3 or scan_num == 6:
print(f"\nSCAN-{scan_num} : {file_initial} - {sheets} : Checking if '_worker_id' column name is correct ...")
if col_condition_num == True:
print(color.GREEN + "PASS" + color.END + ": valid '_workder_id' column name")
else:
print(color.RED + "FAIL" + color.END + ": invalid '_workder_id' column name")
if scan_num == 4:
print(f"\nSCAN-{scan_num} : {file_initial} - {sheets} : Checking if sheet contains 'Language' column ...")
if col_condition_num == True:
print(color.GREEN + "PASS" + color.END + ": 'Data' sheet contains 'Language' columns")
else:
print(color.RED + "FAIL" + color.END + ": 'Data' sheet does not contain 'Language' columns")
if scan_num == 5:
print(f"\nSCAN-{scan_num} : {file_initial} - {sheets} : Checking if Language' column are empty ...")
if col_condition_num == True:
print(color.GREEN + "PASS" + color.END + ": 'Language'column in 'Data' contains complete data")
else:
print(color.RED + "FAIL" + color.END + ": 'Language' column in 'Data' sheet are empty or incomplete")
if scan_num == 7 and file_initial == 'RC':
print(f"\nSCAN-7 : {file_initial} - {sheets} : checking if columns in the 'Data' sheet are identical to the reference columns ...")
if col_condition_num == True:
print (color.GREEN + "PASS" + color.END + ": The columns in the 'Data' sheet are identical to the reference")
else:
print (color.RED + "FAIL" + color.END + ": The columns in the 'Data' sheet are not identical to the reference")
def summary_col_check(df_summary, df_summary_cols, file_initial , sheets = 'Summary'):
# --- SCAN-1 : checking if "Summary" sheet contains "Language" and "Market" columns ---------------------
# PASS -> 'Summary' sheet contains both 'Language' and 'Market' columns
scan_num = 1
cols_to_check = ['Language', 'Market']
col_checker = {}
for col in cols_to_check:
if col in df_summary_cols:
col_checker[col] = True
else:
col_checker[col] = False
condition_1 = col_checker['Language']
condition_2 = col_checker['Market']
col_condition_1 = all([condition_1, condition_2]) # both conditions has to be true
return col_condition_1, scan_num
def summary_col_value_check(df_summary, file_initial, sheets = 'Summary'):
    # --- SCAN-2 : checking if "Language" and "Market" columns in "Summary" are empty -------------------------
# PASS -> Both 'Language' and 'Market' columns in 'Summary' contains complete data
scan_num = 2
cols_to_check = ['Language', 'Market']
col_checker = {}
for col in cols_to_check:
if df_summary[col].notnull().values.all() == True:
col_checker[col] = True
else:
col_checker[col] = False
condition_3 = col_checker['Language']
condition_4 = col_checker['Market']
col_condition_2 = all([condition_3, condition_4]) # both conditions has to be true
return col_condition_2, scan_num
def col_header_check(df_summary_data, file_initial, sheets):
# --- SCAN-3 : checking if worker_id column contains _ at the start -------------------------------------
    # PASS -> if the number of characters is 10 (not 9) and the column name is _worker_id
scan_num = 3
find_worker_idx = df_summary_data.columns.str.contains('worker')
worker_idx = [i for i, x in enumerate(find_worker_idx) if x][0]
worker_col = df_summary_data.columns[worker_idx]
worker_col_len = len(worker_col)
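    # a correctly named column is '_worker_id' (10 characters, leading underscore); the malformed
    # variant this scan is meant to catch is 'worker_id' (9 characters, no leading underscore)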
    if worker_col_len == 10 and worker_col[0] == "_":
        col_condition_3 = True
    else:
        # covers the 9-character 'worker_id' case and any other unexpected column name
        col_condition_3 = False
return col_condition_3, scan_num
def data_col_check(df_data, df_data_cols, file_initial, sheets = 'Data'):
# --- SCAN-4 : checking if "Data" sheet contains "Language" column --------------------------------------
# PASS -> 'Data' sheet contains both 'Language' column
scan_num = 4
cols_to_check = ['Language']
col_checker = {}
for col in cols_to_check:
if col in df_data_cols:
col_checker[col] = True
else:
col_checker[col] = False
condition_1 = col_checker['Language']
col_condition_4 = all([condition_1])
return col_condition_4, scan_num
def data_col_value_check(df_data, file_initial, sheets = 'Data'):
    # --- SCAN-5 : checking if "Language" column in "Data" is empty -------------------------
# PASS -> 'Language' column in 'Data' contains complete data
scan_num = 5
cols_to_check = ['Language']
col_checker = {}
for col in cols_to_check:
if df_data[col].notnull().values.all() == True:
col_checker[col] = True
else:
col_checker[col] = False
condition_3 = col_checker['Language']
col_condition_5 = all([condition_3])
return col_condition_5, scan_num
def data_col_header_check(df_data_cols, ref_data_cols, file_initial, sheets = 'Data'):
# --- SCAN-7 : checking if columns in "Data" sheet are identical to the reference columns ------------------------
# refer to the file in reference > reference_checks.xlsx
# PASS -> if the two column lists are identical
scan_num = 7
    # sort copies of both lists so the caller's column lists are not mutated in place
    ref_data_cols_sorted = sorted(ref_data_cols)
    df_data_cols_sorted = sorted(df_data_cols)
    # the sorted lists are equal iff the 'Data' sheet has exactly the reference columns
    if ref_data_cols_sorted == df_data_cols_sorted:
col_condition_7 = True
else :
col_condition_7 = False
return col_condition_7, scan_num
def data_integrity_check(df_summary, df_summary_cols, df_data, df_data_cols, file_initial):
print(color.BOLD + f"Reading {file_initial} raw data and perform data integrity scanning...:\n" + color.END)
conditions_list = []
# SCAN-1
col_condition_1, scan_num = summary_col_check(df_summary, df_summary_cols, file_initial , 'Summary')
print_scan_results(col_condition_1, scan_num, file_initial , sheets = 'Summary')
conditions_list.append(col_condition_1)
# SCAN-2
# Runs only when col_condition_1 returns True
if col_condition_1 == True:
col_condition_2, scan_num = summary_col_value_check(df_summary, file_initial, 'Summary')
print_scan_results(col_condition_2, scan_num, file_initial , sheets = 'Summary')
conditions_list.append(col_condition_2)
else:
conditions_list = conditions_list
# SCAN-3
col_condition_3, scan_num = col_header_check(df_summary, file_initial, 'Summary')
print_scan_results(col_condition_3, scan_num, file_initial , sheets = 'Summary')
conditions_list.append(col_condition_3)
# SCAN-4
col_condition_4, scan_num = data_col_check(df_data, df_data_cols, file_initial, sheets = 'Data')
print_scan_results(col_condition_4, scan_num, file_initial , sheets = 'Data')
conditions_list.append(col_condition_4)
# SCAN-5
# Runs only when col_condition_4 returns True
if col_condition_4 == True:
col_condition_5, scan_num = data_col_value_check(df_data, file_initial, sheets = 'Data')
print_scan_results(col_condition_5, scan_num, file_initial , sheets = 'Data')
conditions_list.append(col_condition_5)
else:
conditions_list = conditions_list
# SCAN-6
col_condition_6, scan_num = col_header_check(df_data, file_initial, 'Data')
scan_num = 6
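    # col_header_check() always reports scan_num 3 (the Summary-sheet variant of this check), so
    # override it to 6 here so the printed report labels this as the Data-sheet '_worker_id' scan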
print_scan_results(col_condition_6, scan_num, file_initial , 'Data')
conditions_list.append(col_condition_6)
# SCAN-7
if file_initial == 'RC':
col_condition_7, scan_num = data_col_header_check(df_data_cols, ref_data_cols, file_initial, sheets = 'Data')
print_scan_results(col_condition_7, scan_num, file_initial , 'Data')
conditions_list.append(col_condition_7)
# Final data integrity results after all checks
# PASS -> when all scans return True/PASS
if len(conditions_list) > 1 :
integrity_result = all(conditions_list)
if integrity_result == True:
print(color.BOLD + f'\n{file_initial} data integrity result:' + color.GREEN + ' PASS' + color.END + '\n')
else:
print(color.BOLD + f'\n{file_initial} data integrity result:' + color.RED + ' FAIL' + color.END + '\n')
    elif len(conditions_list) == 1:
        integrity_result = False  # define the result so the return below cannot hit an unbound name
        print(color.BOLD + f'\n{file_initial} data integrity result:' + color.RED + ' FAIL' + color.END + '\n')
return integrity_result, conditions_list
```
#### Generate the data integrity report
```
def main():
file_initials = ['RC', 'Vocab_1', 'Vocab_2']
int_results = {}
for file_initial in file_initials:
df_summary, df_summary_cols, df_data, df_data_cols, df_ans_key, df_ans_key_cols = create_dataframes(file_initial, rc_filepath, v1_filepath , v2_filepath)
integrity_result, conditions_list = data_integrity_check(df_summary, df_summary_cols, df_data, df_data_cols, file_initial)
int_results[file_initial] = integrity_result
if __name__ == "__main__":
main()
```
|
github_jupyter
|
%reset
import os, sys
import pandas as pd
import numpy as np
import yaml
from IPython.core.display import display, HTML
# Function to load yaml configuration file
def load_config(config_name):
with open(os.path.join(config_path, config_name), 'r') as file:
config = yaml.safe_load(file)
return config
config_path = "conf/base"
try:
# load yaml catalog configuration file
config = load_config("catalog.yml")
os.chdir(config["project_path"])
root_path = os.getcwd()
except:
os.chdir('..')
# load yaml catalog configuration file
config = load_config("catalog.yml")
os.chdir(config["project_path"])
root_path = os.getcwd()
def raw_file_checker(files):
keyword = ['RC', 'Vocab_2', 'Vocab_1']
checker = []
file_exists = {}
for fname in files:
for key in keyword:
if key in fname:
checker.append(True)
file_exists[key] = os.path.join(fname)
if len(checker) == 3 :
print("PASS: All files exists!")
condition = True
else:
print("FAIL: Not all file exists! Please check the raw data folder to ensure RC, Vocab_1 and Vocab_2 file exists.")
condition = False
return condition, file_exists
def data_ingestion_initialize(root_path):
# Function to load yaml configuration file
def load_config(config_name):
with open(os.path.join(root_path, config_path, config_name), 'r') as file:
config = yaml.safe_load(file)
return config
# load yaml catalog configuration file
config = load_config("catalog.yml")
print("Initialize data ingestion and file checking...")
# define input and output data paths
raw_data_path = os.path.join(root_path, config["data_path"]["input"])
out_data_path = os.path.join(root_path, config["data_path"]["output"])
# define reference file paths
ref_path = os.path.join(root_path, config["data_path"]["ref"])
ref_filepath = os.path.join(ref_path, config["filenames"]["rc_col_ref"])
ref_data = pd.read_excel(io = ref_filepath, sheet_name="columns_check", header=None)
ref_data_cols = ref_data[0].tolist()
# get the list of files in raw folder
files = os.listdir(raw_data_path)
files = [f for f in files if f[-4:] == '.xls']
condition, file_exists = raw_file_checker(files)
## Define raw data filepaths
rc_filepath = os.path.join(raw_data_path, file_exists['RC'])
v1_filepath = os.path.join(raw_data_path, file_exists['Vocab_1'])
v2_filepath = os.path.join(raw_data_path, file_exists['Vocab_2'])
return raw_data_path, out_data_path, ref_path, ref_filepath, ref_data, ref_data_cols, files, file_exists, rc_filepath, v1_filepath, v2_filepath
raw_data_path, out_data_path, ref_path, ref_filepath, ref_data, ref_data_cols, files, file_exists, rc_filepath, v1_filepath, v2_filepath = data_ingestion_initialize(root_path)
def create_dataframes(file_initial, rc_filepath, v1_filepath , v2_filepath):
'''
file_initial choices -
RC: Reading Comprehension
Vocab_1: Vocabulary 1
Vocab_2: Vocabulary 2
'''
if file_initial == 'RC':
filepath = rc_filepath
elif file_initial == 'Vocab_1':
filepath = v1_filepath
elif file_initial == 'Vocab_2':
filepath = v2_filepath
# create dataframe from 'Summary' sheet
df_summary = pd.read_excel(io = filepath, sheet_name="Summary")
df_summary_cols = list(df_summary.columns)
# create dataframe from 'Data' sheet
df_data = pd.read_excel(io=filepath, sheet_name="Data")
df_data_cols = list(df_data.columns)
df_data = df_data.dropna(axis = 0, how = 'all')
# create dataframe from 'Data' sheet
df_ans_key = pd.read_excel(io=filepath, sheet_name="Answer Key")
df_ans_key_cols = list(df_ans_key.columns)
print(f"Dataframe created from {file_initial} file")
return df_summary, df_summary_cols, df_data, df_data_cols, df_ans_key, df_ans_key_cols
#df_summary, df_summary_cols, df_data, df_data_cols, df_ans_key, df_ans_key_cols = create_dataframes('RC', rc_filepath, v1_filepath , v2_filepath)
class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
def print_scan_results(col_condition_num, scan_num, file_initial , sheets = 'Summary'):
if scan_num == 1:
print(f"\nSCAN-{scan_num} : {file_initial} - {sheets} : Checking if the sheet contains either 'Language' and 'Market' columns ...")
if col_condition_num == True:
print(color.GREEN + "PASS" + color.END + ": 'Summary' sheet contains both 'Language' and 'Market' columns")
else:
print(color.RED + "FAIL" + color.END + ": 'Summary' sheet does not contain either 'Language' and 'Market' columns")
if scan_num == 2:
print(f"\nSCAN-{scan_num} : {file_initial} - {sheets} : Checking if Language' and 'Market' columns are empty ...")
if col_condition_num == True:
print(color.GREEN + "PASS" + color.END + ": Both 'Language' and 'Market' columns in 'Summary' contains complete data")
else:
print(color.RED + "FAIL" + color.END + ": Both or either 'Language' and 'Market' columns in 'Summary' sheet are empty or incomplete")
if scan_num == 3 or scan_num == 6:
print(f"\nSCAN-{scan_num} : {file_initial} - {sheets} : Checking if '_worker_id' column name is correct ...")
if col_condition_num == True:
print(color.GREEN + "PASS" + color.END + ": valid '_workder_id' column name")
else:
print(color.RED + "FAIL" + color.END + ": invalid '_workder_id' column name")
if scan_num == 4:
print(f"\nSCAN-{scan_num} : {file_initial} - {sheets} : Checking if sheet contains 'Language' column ...")
if col_condition_num == True:
print(color.GREEN + "PASS" + color.END + ": 'Data' sheet contains 'Language' columns")
else:
print(color.RED + "FAIL" + color.END + ": 'Data' sheet does not contain 'Language' columns")
if scan_num == 5:
print(f"\nSCAN-{scan_num} : {file_initial} - {sheets} : Checking if Language' column are empty ...")
if col_condition_num == True:
print(color.GREEN + "PASS" + color.END + ": 'Language'column in 'Data' contains complete data")
else:
print(color.RED + "FAIL" + color.END + ": 'Language' column in 'Data' sheet are empty or incomplete")
if scan_num == 7 and file_initial == 'RC':
print(f"\nSCAN-7 : {file_initial} - {sheets} : checking if columns in the 'Data' sheet are identical to the reference columns ...")
if col_condition_num == True:
print (color.GREEN + "PASS" + color.END + ": The columns in the 'Data' sheet are identical to the reference")
else:
print (color.RED + "FAIL" + color.END + ": The columns in the 'Data' sheet are not identical to the reference")
def summary_col_check(df_summary, df_summary_cols, file_initial , sheets = 'Summary'):
# --- SCAN-1 : checking if "Summary" sheet contains "Language" and "Market" columns ---------------------
# PASS -> 'Summary' sheet contains both 'Language' and 'Market' columns
scan_num = 1
cols_to_check = ['Language', 'Market']
col_checker = {}
for col in cols_to_check:
if col in df_summary_cols:
col_checker[col] = True
else:
col_checker[col] = False
condition_1 = col_checker['Language']
condition_2 = col_checker['Market']
col_condition_1 = all([condition_1, condition_2]) # both conditions has to be true
return col_condition_1, scan_num
def summary_col_value_check(df_summary, file_initial, sheets = 'Summary'):
# --- SCAN-2 :checking if "Language" and "Market" columns in "Summary" is empty -------------------------
# PASS -> Both 'Language' and 'Market' columns in 'Summary' contains complete data
scan_num = 2
cols_to_check = ['Language', 'Market']
col_checker = {}
for col in cols_to_check:
if df_summary[col].notnull().values.all() == True:
col_checker[col] = True
else:
col_checker[col] = False
condition_3 = col_checker['Language']
condition_4 = col_checker['Market']
col_condition_2 = all([condition_3, condition_4]) # both conditions has to be true
return col_condition_2, scan_num
def col_header_check(df_summary_data, file_initial, sheets):
# --- SCAN-3 : checking if worker_id column contains _ at the start -------------------------------------
# PASS -> if the number of character is 10 not 9 and column name is _workder_id
scan_num = 3
find_worker_idx = df_summary_data.columns.str.contains('worker')
worker_idx = [i for i, x in enumerate(find_worker_idx) if x][0]
worker_col = df_summary_data.columns[worker_idx]
worker_col_len = len(worker_col)
if worker_col_len == 10 and worker_col[0] == "_":
col_condition_3 = True
elif worker_col_len == 9 and worker_col[0] == "w":
col_condition_3 = False
return col_condition_3, scan_num
def data_col_check(df_data, df_data_cols, file_initial, sheets = 'Data'):
# --- SCAN-4 : checking if "Data" sheet contains "Language" column --------------------------------------
# PASS -> 'Data' sheet contains both 'Language' column
scan_num = 4
cols_to_check = ['Language']
col_checker = {}
for col in cols_to_check:
if col in df_data_cols:
col_checker[col] = True
else:
col_checker[col] = False
condition_1 = col_checker['Language']
col_condition_4 = all([condition_1])
return col_condition_4, scan_num
def data_col_value_check(df_data, file_initial, sheets = 'Data'):
# --- SCAN-5 :checking if "Language" column in "Data" is empty -------------------------
# PASS -> 'Language' column in 'Data' contains complete data
scan_num = 5
cols_to_check = ['Language']
col_checker = {}
for col in cols_to_check:
if df_data[col].notnull().values.all() == True:
col_checker[col] = True
else:
col_checker[col] = False
condition_3 = col_checker['Language']
col_condition_5 = all([condition_3])
return col_condition_5, scan_num
def data_col_header_check(df_data_cols, ref_data_cols, file_initial, sheets = 'Data'):
# --- SCAN-7 : checking if columns in "Data" sheet are identical to the reference columns ------------------------
# refer to the file in reference > reference_checks.xlsx
# PASS -> if the two column lists are identical
scan_num = 7
ref_data_cols_sorted = ref_data_cols
df_data_cols_sorted = df_data_cols
# sorting both the lists
ref_data_cols_sorted.sort()
df_data_cols_sorted.sort()
# using == to check if
if ref_data_cols_sorted == df_data_cols_sorted:
col_condition_7 = True
else :
col_condition_7 = False
return col_condition_7, scan_num
def data_integrity_check(df_summary, df_summary_cols, df_data, df_data_cols, file_initial):
print(color.BOLD + f"Reading {file_initial} raw data and perform data integrity scanning...:\n" + color.END)
conditions_list = []
# SCAN-1
col_condition_1, scan_num = summary_col_check(df_summary, df_summary_cols, file_initial , 'Summary')
print_scan_results(col_condition_1, scan_num, file_initial , sheets = 'Summary')
conditions_list.append(col_condition_1)
# SCAN-2
# Runs only when col_condition_1 returns True
if col_condition_1 == True:
col_condition_2, scan_num = summary_col_value_check(df_summary, file_initial, 'Summary')
print_scan_results(col_condition_2, scan_num, file_initial , sheets = 'Summary')
conditions_list.append(col_condition_2)
else:
conditions_list = conditions_list
# SCAN-3
col_condition_3, scan_num = col_header_check(df_summary, file_initial, 'Summary')
print_scan_results(col_condition_3, scan_num, file_initial , sheets = 'Summary')
conditions_list.append(col_condition_3)
# SCAN-4
col_condition_4, scan_num = data_col_check(df_data, df_data_cols, file_initial, sheets = 'Data')
print_scan_results(col_condition_4, scan_num, file_initial , sheets = 'Data')
conditions_list.append(col_condition_4)
# SCAN-5
# Runs only when col_condition_4 returns True
if col_condition_4 == True:
col_condition_5, scan_num = data_col_value_check(df_data, file_initial, sheets = 'Data')
print_scan_results(col_condition_5, scan_num, file_initial , sheets = 'Data')
conditions_list.append(col_condition_5)
else:
conditions_list = conditions_list
# SCAN-6
col_condition_6, scan_num = col_header_check(df_data, file_initial, 'Data')
scan_num = 6
print_scan_results(col_condition_6, scan_num, file_initial , 'Data')
conditions_list.append(col_condition_6)
# SCAN-7
if file_initial == 'RC':
col_condition_7, scan_num = data_col_header_check(df_data_cols, ref_data_cols, file_initial, sheets = 'Data')
print_scan_results(col_condition_7, scan_num, file_initial , 'Data')
conditions_list.append(col_condition_7)
# Final data integrity results after all checks
# PASS -> when all scans return True/PASS
if len(conditions_list) > 1 :
integrity_result = all(conditions_list)
if integrity_result == True:
print(color.BOLD + f'\n{file_initial} data integrity result:' + color.GREEN + ' PASS' + color.END + '\n')
else:
print(color.BOLD + f'\n{file_initial} data integrity result:' + color.RED + ' FAIL' + color.END + '\n')
elif len(conditions_list) == 1:
print(color.BOLD + f'\n{file_initial} data integrity result:' + color.RED + ' FAIL' + color.END + '\n')
return integrity_result, conditions_list
def main():
file_initials = ['RC', 'Vocab_1', 'Vocab_2']
int_results = {}
for file_initial in file_initials:
df_summary, df_summary_cols, df_data, df_data_cols, df_ans_key, df_ans_key_cols = create_dataframes(file_initial, rc_filepath, v1_filepath , v2_filepath)
integrity_result, conditions_list = data_integrity_check(df_summary, df_summary_cols, df_data, df_data_cols, file_initial)
int_results[file_initial] = integrity_result
if __name__ == "__main__":
main()
| 0.176494 | 0.472501 |
# Another explanation about PCA
<img src = 'pca.jpeg' width="width" height="height"/>
<sub>photo credit: Raunak Joshi</sub>
In this lab, we are going to view another explanation about Principal Component Analysis(PCA). PCA is a statistical technique invented in 1901 by Karl Pearson that uses orthogonal transformations to map a set of variables into a set of linearly uncorrelated variables called Principal Components.
PCA is based on the Singular Value Decomposition (SVD) of the covariance matrix of the original dataset. The eigenvectors of that decomposition are used as a rotation matrix and are arranged in it in decreasing order of their explained variance, which is given by the corresponding eigenvalues.
PCA is a potent technique with applications ranging from simple space transformations and dimensionality reduction to mixture separation from spectral information.
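As a quick, self-contained illustration of the covariance-matrix statement above (not part of the original lab), the following sketch builds a small correlated dataset, eigendecomposes its covariance matrix with NumPy, and checks that the eigenvectors, sorted by decreasing eigenvalue, match the components reported by scikit-learn's PCA, possibly up to a sign flip.
```
import numpy as np
from sklearn.decomposition import PCA
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.2, 0.5]])  # small correlated 2D dataset
Xc = X - X.mean(axis=0)                        # center the data
cov = np.cov(Xc, rowvar=False)                 # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)         # eigendecomposition of a symmetric matrix
order = np.argsort(eigvals)[::-1]              # sort by decreasing explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
pca = PCA(n_components=2).fit(X)
print(eigvals, pca.explained_variance_)        # eigenvalues match the explained variances
print(eigvecs.T)                               # rows match pca.components_ up to a sign flip
print(pca.components_)
```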
Follow this lab to view another explanation for PCA. In this case, we are going to use the concept of rotation matrices applied to correlated random data, just as illustrated in the next picture.
<img src=GaussianScatterPCA.svg>
Source: https://en.wikipedia.org/wiki/Principal_component_analysis
As usual, we must import the libraries that we will use in this lab.
```
import numpy as np # Linear algebra library
import matplotlib.pyplot as plt # library for visualization
from sklearn.decomposition import PCA # PCA library
import pandas as pd # Data frame library
import math # Library for math functions
import random # Library for pseudo random numbers
```
To start, let us consider a pair of random variables x, y. Consider the base case when y = n * x. The x and y variables will be perfectly correlated to each other since y is just a scaling of x.
```
n = 1 # The amount of the correlation
x = np.random.uniform(1,2,1000) # Generate 1000 samples from a uniform random variable
y = x.copy() * n # Make y = n * x
# PCA works better if the data is centered
x = x - np.mean(x) # Center x. Remove its mean
y = y - np.mean(y) # Center y. Remove its mean
data = pd.DataFrame({'x': x, 'y': y}) # Create a data frame with x and y
plt.scatter(data.x, data.y) # Plot the original correlated data in blue
pca = PCA(n_components=2) # Instantiate a PCA. Choose to get 2 output variables,
#n_components : number of components to keep
# Create the transformation model for this data. Internally, it gets the rotation
# matrix and the explained variance
pcaTr = pca.fit(data)
rotatedData = pcaTr.transform(data) # Transform the data base on the rotation matrix of pcaTr
# # Create a data frame with the new variables. We call these new variables PC1 and PC2
dataPCA = pd.DataFrame(data = rotatedData, columns = ['PC1', 'PC2'])
# Plot the transformed data in orange
plt.scatter(dataPCA.PC1, dataPCA.PC2)
plt.show()
```
Now, what is the direction in which the variables point?
## Understanding the transformation model pcaTr
As mentioned before, a PCA model is composed of a rotation matrix and its corresponding explained variance. In the next module, we will explain the details of the rotation matrices.
* `pcaTr.components_` has the rotation matrix
* `pcaTr.explained_variance_` has the explained variance of each principal component
```
print('Eigenvectors or principal component: First row must be in the direction of [1, n]')
print(pcaTr.components_)
print()
print('Eigenvalues or explained variance')
print(pcaTr.explained_variance_)
```
$cos(45^o) = 0.7071$
The rotation matrix is equal to:
$$R = \begin{bmatrix} cos(45^o) & sin(45^o) \\ -sin(45^o) & cos(45^o) \end{bmatrix}$$
And $45^o$ is the same angle formed by the variables related through y = 1 * x.
Then, PCA has identified the direction in which the original variables point.
And the explained variance is around [0.166, 0]. Remember that the variance of a uniform random variable x ~ U(1, 2), like our x and y, is equal to:
$$Var(x) = \frac {(2 - 1)^2}{12} = 0.083333$$
Then the explained variance given by the PCA can be interpreted as
$$[Var(x) + Var(y) \ 0] = [0.0833 + 0.0833 \ 0] = [0.166 \ 0]$$
This means that all the explained variance of our new system is explained by the first principal component.
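If you would like to verify this numerically, here is a quick check that reuses the `x`, `y`, and `pcaTr` objects defined in the cell above; small differences are expected because scikit-learn divides by $n-1$ while `np.var` divides by $n$.
```
# Compare the total variance of the centered inputs with the variance captured by the components
print('Var(x) + Var(y)            :', np.var(x) + np.var(y))
print('Sum of explained variances :', pcaTr.explained_variance_.sum())
print('Explained variance vector  :', pcaTr.explained_variance_)
```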
## Correlated Normal Random Variables.
Now, we will use a controlled dataset composed of 2 random variables with different variances and with a specific covariance among them. The only way I know to get such a dataset is to first create two independent normal random variables with the desired variances and then combine them using a rotation matrix. In this way, the new resulting variables will be a linear combination of the original random variables and thus be dependent and correlated.
```
import matplotlib.lines as mlines
import matplotlib.transforms as mtransforms
np.random.seed(100) # Seed NumPy's generator (random.seed would not affect np.random)
std1 = 1 # The desired standard deviation of our first random variable
std2 = 0.333 # The desired standard deviation of our second random variable
x = np.random.normal(0, std1, 1000) # Get 1000 samples from x ~ N(0, std1)
y = np.random.normal(0, std2, 1000) # Get 1000 samples from y ~ N(0, std2)
#y = y + np.random.normal(0,1,1000)*noiseLevel * np.sin(0.78)
# PCA works better if the data is centered
x = x - np.mean(x) # Center x
y = y - np.mean(y) # Center y
#Define a pair of dependent variables with a desired amount of covariance
n = 1 # Magnitude of covariance.
angle = np.arctan(1 / n) # Convert the covariance to an angle
print('angle: ', angle * 180 / math.pi)
# Create a rotation matrix using the given angle
rotationMatrix = np.array([[np.cos(angle), np.sin(angle)],
[-np.sin(angle), np.cos(angle)]])
print('rotationMatrix')
print(rotationMatrix)
xy = np.concatenate(([x] , [y]), axis=0).T # Create a matrix with columns x and y
# Transform the data using the rotation matrix. It correlates the two variables
data = np.dot(xy, rotationMatrix) # Return a nD array
# Print the rotated data
plt.scatter(data[:,0], data[:,1])
plt.show()
```
Let us plot the original and the resulting transformed systems, using the result of the PCA, in the same figure, alongside the two principal component vectors in red and green.
```
plt.scatter(data[:,0], data[:,1]) # Print the original data in blue
# Apply PCA. In theory, the Eigenvector matrix must be the
# inverse of the original rotationMatrix.
pca = PCA(n_components=2) # Instantiate a PCA. Choose to get 2 output variables
# Create the transformation model for this data. Internally it gets the rotation
# matrix and the explained variance
pcaTr = pca.fit(data)
# Create an array with the transformed data
dataPCA = pcaTr.transform(data)
print('Eigenvectors or principal component: First row must be in the direction of [1, n]')
print(pcaTr.components_)
print()
print('Eigenvalues or explained variance')
print(pcaTr.explained_variance_)
# Print the rotated data
plt.scatter(dataPCA[:,0], dataPCA[:,1])
# Plot the first component axe. Use the explained variance to scale the vector
plt.plot([0, rotationMatrix[0][0] * std1 * 3], [0, rotationMatrix[0][1] * std1 * 3], 'k-', color='red')
# Plot the second component axe. Use the explained variance to scale the vector
plt.plot([0, rotationMatrix[1][0] * std2 * 3], [0, rotationMatrix[1][1] * std2 * 3], 'k-', color='green')
plt.show()
```
The explanation of this chart is as follows:
* The rotation matrix used to create our correlated variables took the original uncorrelated variables `x` and `y` and transformed them into the blue points.
* The PCA transformation finds the rotation matrix that was used to create our correlated variables (blue points). Using the PCA model to transform our data puts the variables back into their original, uncorrelated form.
* The explained Variance of the PCA is
$$[1.0094, 0.1125] $$
which is approximately
$$[1, 0.333 * 0.333] = [std1^2, std2^2],$$
the parameters of our original random variables x and y
You can use the previous code to try other standard deviations and correlations and convince yourself of this fact.
## PCA as a strategy for dimensionality reduction
The principal components contained in the rotation matrix are sorted in decreasing order of their explained variance. This usually means that the first components retain most of the power of the data to explain the patterns that **generalize** the data. Nevertheless, for some applications, we are interested in the patterns that explain much less variance, for example, in novelty detection.
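As a small illustration of this use of PCA (reusing the `data` array and the `PCA` class from the cells above), keeping only the first component collapses the two-dimensional data into a single column:
```
pca1 = PCA(n_components=1)              # keep only the first principal component
data_1d = pca1.fit_transform(data)      # shape (n_samples, 1)
print(data.shape, '->', data_1d.shape)
print('Fraction of variance retained:', pca1.explained_variance_ratio_[0])
```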
In the next figure, we can see the original data and its corresponding projection over the first and second principal components. In other words, data described by a single variable.
```
nPoints = len(data)
# Plot the original data in blue
plt.scatter(data[:,0], data[:,1])
#Plot the projection along the first component in orange
plt.scatter(data[:,0], np.zeros(nPoints))
#Plot the projection along the second component in green
plt.scatter(np.zeros(nPoints), data[:,1])
plt.show()
```
## PCA as a strategy to plot complex data
The next chart shows a sample diagram displaying a dataset of pictures of cats and dogs. Raw pictures are composed of hundreds or even thousands of features. However, PCA allows us to reduce that many features to only two. In that reduced space of uncorrelated variables, we can easily separate cats and dogs.
<img src = 'catdog.png'>
You will learn how to generate a chart like this with word vectors in this week's programming assignment.
|
github_jupyter
|
import numpy as np # Linear algebra library
import matplotlib.pyplot as plt # library for visualization
from sklearn.decomposition import PCA # PCA library
import pandas as pd # Data frame library
import math # Library for math functions
import random # Library for pseudo random numbers
n = 1 # The amount of the correlation
x = np.random.uniform(1,2,1000) # Generate 1000 samples from a uniform random variable
y = x.copy() * n # Make y = n * x
# PCA works better if the data is centered
x = x - np.mean(x) # Center x. Remove its mean
y = y - np.mean(y) # Center y. Remove its mean
data = pd.DataFrame({'x': x, 'y': y}) # Create a data frame with x and y
plt.scatter(data.x, data.y) # Plot the original correlated data in blue
pca = PCA(n_components=2) # Instantiate a PCA. Choose to get 2 output variables,
#n_components : number of components to keep
# Create the transformation model for this data. Internally, it gets the rotation
# matrix and the explained variance
pcaTr = pca.fit(data)
rotatedData = pcaTr.transform(data) # Transform the data base on the rotation matrix of pcaTr
# # Create a data frame with the new variables. We call these new variables PC1 and PC2
dataPCA = pd.DataFrame(data = rotatedData, columns = ['PC1', 'PC2'])
# Plot the transformed data in orange
plt.scatter(dataPCA.PC1, dataPCA.PC2)
plt.show()
print('Eigenvectors or principal component: First row must be in the direction of [1, n]')
print(pcaTr.components_)
print()
print('Eigenvalues or explained variance')
print(pcaTr.explained_variance_)
import matplotlib.lines as mlines
import matplotlib.transforms as mtransforms
random.seed(100)
std1 = 1 # The desired standard deviation of our first random variable
std2 = 0.333 # The desired standard deviation of our second random variable
x = np.random.normal(0, std1, 1000) # Get 1000 samples from x ~ N(0, std1)
y = np.random.normal(0, std2, 1000) # Get 1000 samples from y ~ N(0, std2)
#y = y + np.random.normal(0,1,1000)*noiseLevel * np.sin(0.78)
# PCA works better if the data is centered
x = x - np.mean(x) # Center x
y = y - np.mean(y) # Center y
#Define a pair of dependent variables with a desired amount of covariance
n = 1 # Magnitude of covariance.
angle = np.arctan(1 / n) # Convert the covariance to and angle
print('angle: ', angle * 180 / math.pi)
# Create a rotation matrix using the given angle
rotationMatrix = np.array([[np.cos(angle), np.sin(angle)],
[-np.sin(angle), np.cos(angle)]])
print('rotationMatrix')
print(rotationMatrix)
xy = np.concatenate(([x] , [y]), axis=0).T # Create a matrix with columns x and y
# Transform the data using the rotation matrix. It correlates the two variables
data = np.dot(xy, rotationMatrix) # Return a nD array
# Print the rotated data
plt.scatter(data[:,0], data[:,1])
plt.show()
plt.scatter(data[:,0], data[:,1]) # Print the original data in blue
# Apply PCA. In theory, the Eigenvector matrix must be the
# inverse of the original rotationMatrix.
pca = PCA(n_components=2) # Instantiate a PCA. Choose to get 2 output variables
# Create the transformation model for this data. Internally it gets the rotation
# matrix and the explained variance
pcaTr = pca.fit(data)
# Create an array with the transformed data
dataPCA = pcaTr.transform(data)
print('Eigenvectors or principal component: First row must be in the direction of [1, n]')
print(pcaTr.components_)
print()
print('Eigenvalues or explained variance')
print(pcaTr.explained_variance_)
# Print the rotated data
plt.scatter(dataPCA[:,0], dataPCA[:,1])
# Plot the first component axe. Use the explained variance to scale the vector
plt.plot([0, rotationMatrix[0][0] * std1 * 3], [0, rotationMatrix[0][1] * std1 * 3], 'k-', color='red')
# Plot the second component axe. Use the explained variance to scale the vector
plt.plot([0, rotationMatrix[1][0] * std2 * 3], [0, rotationMatrix[1][1] * std2 * 3], 'k-', color='green')
plt.show()
nPoints = len(data)
# Plot the original data in blue
plt.scatter(data[:,0], data[:,1])
#Plot the projection along the first component in orange
plt.scatter(data[:,0], np.zeros(nPoints))
#Plot the projection along the second component in green
plt.scatter(np.zeros(nPoints), data[:,1])
plt.show()
| 0.882479 | 0.993275 |
# Integration Using Tables and Computer Algebra Systems
Finding antiderivatives is tricky business, often requiring sophisticated techniques of integration that are beyond the scope of this course. At times, we will run across a function for which we cannot find an antiderivative. Does this mean we cannot make further progress? Not necessarily!
In this notebook, we will introduce two tools, created by professional mathematicians and computer scientists, that are readily available to us, namely
1. Tables of Integrals, and
2. Computer Algebra Systems / Online Integrators.
### 1. Tables of Integrals
During our first class, we crowdsourced a Basic Table of Integrals, based on our knowledge of basic functions from the previous calculus course, such as
\begin{align}
\int x^n \ dx & = \frac{x^{n+1}}{n+1} + C \ \ \ \mbox{for} \ \ n \ne -1 \\
\int \frac{1}{x} \ dx & = \ln|x| + C \\
\int e^x \ dx & = e^x + C \\
\int \sin(x) \ dx & = -\cos(x) + C \\
& . \\
& . \\
& . \\
\end{align}
Mathematicians have collected antiderivatives of thousands of functions over the years, and organized them for lookup by others. A small Table of Integrals is available at the front of the textbook (Reference Pages 6-10). For convenience, we have posted a pdf file with these pages on eClass (with permission from the publisher).
In the next video, we take a look at how this Table of Integrals is organized, and we do a few examples to demonstrate how to use the Table.
<font color=blue> NOTE: At minute 5:02, there is a small mistake, namely that a factor of 2 was not carried through from one line to the next.</font>
```
from IPython.display import YouTubeVideo
YouTubeVideo('zxGYeDl-FmA')
```
It is important to keep in mind that it is not uncommon to need to use algebraic techniques (such as completing the square) and/or substitution to rewrite a given function into one of the forms listed in the Table of Integrals.
### 2. A Powerful Online Integrator
In the last decade or so, mathematicians and computer scientists have made tremendous progress in developing software to evaluate integrals. An example is WolframAlpha's computational engine, which can be accessed for free online.
For example, to evaluate $\displaystyle \int \frac{dx}{x^2 \sqrt{4x^2+9}}$:
> Navigate to https://www.wolframalpha.com and type
`integrate 1 / ( x^2 sqrt( 4x^2 + 9 ) )`
With the press of a button, we find
$$
\int \frac{dx}{x^2 \sqrt{4x^2+9}} = \frac{ - \sqrt{ 4x^2 + 9 } }{ 9x} + C.
$$
Similarly, to evaluate $\displaystyle \int_0^{\pi} x^3 \sin(x) \ dx$:
> Navigate to https://www.wolframalpha.com and type
`integrate x^3 sin(x) from 0 to pi`
We readily find
$$
\int_0^{\pi} x^3 \sin(x) \ dx = \pi ( \pi^2 - 6 ).
$$
Pretty cool, eh!
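If you prefer to check such results from within a notebook, a computer algebra system such as SymPy can evaluate the same integrals; the short sketch below assumes SymPy is installed, and the antiderivative it returns may be written in a different but equivalent form.
```
import sympy as sp
x = sp.symbols('x', positive=True)
# Indefinite integral: differentiating the result should recover the integrand
F = sp.integrate(1 / (x**2 * sp.sqrt(4*x**2 + 9)), x)
print(sp.simplify(F))                                  # equivalent to -sqrt(4x^2 + 9)/(9x)
print(sp.simplify(sp.diff(F, x)))                      # gives back 1/(x^2*sqrt(4x^2 + 9))
# Definite integral from 0 to pi
print(sp.integrate(x**3 * sp.sin(x), (x, 0, sp.pi)))   # pi**3 - 6*pi, i.e. pi*(pi^2 - 6)
```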
<font color=red>A word of caution: WolframAlpha is a very powerful tool, and it is easy to be tempted to use WolframAlpha to complete the pre-class quizzes and online homework problems. While you would earn the marks for these assessments, you would of course be cheating yourself from learning by not doing the work. We encourage you to be disciplined, and use WolframAlpha only as a tool to check your work and when told to use it (for example, for a question where an integral needs to be evaluated but the evaluation itself is not the focus).</font>
### 3. Not All Functions Have an Antiderivative That Can Be Written in Terms of Functions That We Know
Not all functions, even seemingly innocuous functions such as $\displaystyle f(x) = \frac{e^x}{x}$ or $\displaystyle g(x) = \sin(x^2)$, have an antiderivative that can be written in terms of functions that we know.
For example, let's take a look at what WolframAlpha says if we try to evaluate $\displaystyle \int \sin(x^2) \ dx$:
> Navigate to https://www.wolframalpha.com and type
`integrate sin( x^2 )`
WolframAlpha returns
$$
\int \sin(x^2) \ dx = \sqrt{ \frac{\pi}{2} } \mathcal{S}\left( \sqrt{ \frac{2}{\pi} } x \right) + C,
$$
with a note indicating that $\mathcal{S}(x)$ is the Fresnel S integral. The Fresnel S integral is a mathematical object that has applications in physics and engineering. It is a topic of study in advanced mathematics and physics courses; it is well beyond the scope of our course.
The take-home message of this last experiment is that the antiderivative of $\sin(x^2)$ cannot be written in terms of elementary functions.
Although WolframAlpha cannot evaluate the indefinite integral $\displaystyle \int \sin(x^2) \ dx$ in terms of elementary functions, it <b>can</b> evaluate a definite integral such as $\displaystyle \int_0^3 \sin(x^2) \ dx$:
> Navigate to https://www.wolframalpha.com and type
`integrate sin( x^2 ) from 0 to 3`
WolframAlpha returns
$$
\int_0^3 \sin(x^2) \ dx \approx 0.773563.
$$
Note that WolframAlpha provides a visual representation of the definite integral immediately below the numerical result: it shows that the definite integral represents the net area under the graph of $f(x) = \sin(x^2)$ between $x=0$ and $x=3$ (area of the blue regions lying above the $x$-axis minus the area of the red region lying below the $x$-axis).
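The same numerical value can also be reproduced with standard numerical integration; the following sketch assumes SciPy is available.
```
import numpy as np
from scipy.integrate import quad
value, abs_error = quad(lambda t: np.sin(t**2), 0, 3)  # numerical definite integral
print(value)       # approximately 0.773563
print(abs_error)   # estimate of the absolute error
```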
### 4. Summary
- A Table of Integrals is a useful resource in the evaluation of an integral. It is not uncommon to need to use algebraic techniques (such as completing the square) and/or substitution to rewrite a given function into one of the forms listed in the Table of Integrals.
- WolframAlpha is an excellent tool in the evaluation of both indefinite and definite integrals.
- Not all functions have an antiderivative that can be written in terms of functions that we know.
### 5. Further Study
Please refer to Section 5.7 in the textbook for additional treatment of this topic.
### 6. Don't Forget
Don't forget to return to eClass to complete the pre-class quiz.
|
github_jupyter
|
from IPython.display import YouTubeVideo
YouTubeVideo('zxGYeDl-FmA')
| 0.271252 | 0.993189 |
# cuDatashader GPU FDEB Edge Bundling
This algorithm takes a graph (nodes and edges) as input and produces a new graph by applying FDEB Edge Bundling to it. The produced graph is displayable with the Datashader/cuDatashader functions.
### General functioning of the algorithm (for more information https://ieeexplore.ieee.org/document/6385238)
```
from IPython.display import Image
Image(filename='data/FDEB_edge_bundling.gif')
```
### Computation of pairwise edge compatibility (for more information http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.212.7989&rep=rep1&type=pdf)
```
Image(filename='data/compatibility.png')
```
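For intuition, the sketch below is a rough NumPy illustration of the angle, scale, and position terms of the pairwise compatibility measure described in the paper linked above (the visibility term is omitted for brevity); it is only an illustration and not the cuDatashader implementation.
```
import numpy as np
def edge_compatibility(p0, p1, q0, q1):
    """Angle * scale * position compatibility of edges P = (p0, p1) and Q = (q0, q1)."""
    P, Q = p1 - p0, q1 - q0
    lp, lq = np.linalg.norm(P), np.linalg.norm(Q)
    l_avg = (lp + lq) / 2.0
    c_angle = abs(np.dot(P, Q) / (lp * lq))                      # parallel edges bundle well
    c_scale = 2.0 / (l_avg / min(lp, lq) + max(lp, lq) / l_avg)  # similar lengths bundle well
    pm, qm = (p0 + p1) / 2.0, (q0 + q1) / 2.0
    c_pos = l_avg / (l_avg + np.linalg.norm(pm - qm))            # nearby edges bundle well
    return c_angle * c_scale * c_pos
p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
q0, q1 = np.array([0.0, 0.2]), np.array([1.0, 0.3])
print(edge_compatibility(p0, p1, q0, q1))  # relatively high value -> good bundling candidates
```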
### Computation of Spring and Electrostatic forces (for more information https://ieeexplore.ieee.org/document/6385238)
```
Image(filename='data/forces.gif')
```
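Conceptually, each iteration moves every interior subdivision point by the sum of a spring force (pulling it towards its neighbours on the same edge) and an electrostatic attraction towards the corresponding points of compatible edges. The following very reduced NumPy sketch of one such update step is illustrative only; the constants, the compatibility threshold, and the simplified attraction term are assumptions and do not reflect the cuDatashader implementation.
```
import numpy as np
def fdeb_step(edges_pts, compat, K=0.1, step=0.04, threshold=0.6):
    """One illustrative FDEB iteration over subdivision points of shape (n_edges, n_pts, 2)."""
    new_pts = edges_pts.copy()
    n_edges, n_pts, _ = edges_pts.shape
    for e in range(n_edges):
        length = np.linalg.norm(edges_pts[e, -1] - edges_pts[e, 0])
        kp = K / (length * (n_pts - 1))                 # per-edge spring constant
        for i in range(1, n_pts - 1):                   # endpoints stay fixed
            p = edges_pts[e, i]
            f_spring = kp * (edges_pts[e, i - 1] - p) + kp * (edges_pts[e, i + 1] - p)
            f_electro = np.zeros(2)
            for o in range(n_edges):                    # attraction from compatible edges
                if o == e or compat[e, o] < threshold:
                    continue
                d = edges_pts[o, i] - p
                dist = np.linalg.norm(d)
                if dist > 1e-6:
                    f_electro += compat[e, o] * d / dist
            new_pts[e, i] = p + step * (f_spring + f_electro)
    return new_pts
pts = np.stack([np.linspace([0.0, 0.0], [1.0, 0.0], 5), np.linspace([0.0, 0.2], [1.0, 0.2], 5)])
print(fdeb_step(pts, np.ones((2, 2)))[:, 2])            # the two midpoints move towards each other
```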
### Demonstration of GPU FDEB Edge Bundling on small graphs
```
import math
import numpy as np
import pandas as pd
import cudatashader as ds
import cudatashader.transfer_functions as tf
from cudatashader.bundling import fdeb_bundle, connect_edges
import networkx as nx
import cudf
import matplotlib.pyplot as plt
import time
np.set_printoptions(threshold=np.inf)
def nodesplot(nodes, canvas=None, cat=None): # Plot nodes
canvas = ds.Canvas(plot_height=400, plot_width=400) if canvas is None else canvas
agg = canvas.points(nodes, 'x','y')
return tf.spread(tf.shade(agg, cmap=['white', 'red'], how='linear'), px=5, shape='circle', how='saturate')
def edgesplot(edges, canvas=None): # Plot edges
canvas = ds.Canvas(plot_height=400, plot_width=400) if canvas is None else canvas
return tf.spread(tf.shade(canvas.line(edges, 'x','y'), how='linear'), px=1, shape='square', how='over')
def graphplot(nodes, edges, canvas=None, cat=None): # Plot graph (nodes + edges)
if canvas is None:
xr = nodes.x.min() - 0.1, nodes.x.max() + 0.1
yr = nodes.y.min() - 0.1, nodes.y.max() + 0.1
canvas = ds.Canvas(plot_height=400, plot_width=400, x_range=xr, y_range=yr)
    ep = edgesplot(edges, canvas) # Plot edges
    np = nodesplot(nodes, canvas, cat) # Plot nodes
    return tf.stack(ep, np, how="over") # Stack the two images for the final render
def nx_layout(graph): # Produce small graphs
layout = nx.circular_layout(graph)
data = [[node]+layout[node].tolist() for node in graph.nodes]
nodes = pd.DataFrame(data, columns=['id', 'x', 'y'])
nodes.set_index('id', inplace=True)
edges = pd.DataFrame(list(graph.edges), columns=['source', 'target'])
return nodes, edges
def nx_plot_without_bundling(graph): # Plot graph without edge bundling
nodes, edges = nx_layout(graph) # Produce small graph
# Convert to cuDF DataFrames
nodes = cudf.DataFrame.from_pandas(nodes)
edges = cudf.DataFrame.from_pandas(edges)
direct = connect_edges(nodes, edges) # Simply connect edges (instead of edge bundling)
return graphplot(nodes, direct)
def nx_plot_with_bundling(graph): # Plot graph with edge bundling
nodes, edges = nx_layout(graph) # Produce small graph
# Convert to cuDF DataFrames
nodes = cudf.DataFrame.from_pandas(nodes)
edges = cudf.DataFrame.from_pandas(edges)
direct = fdeb_bundle(nodes, edges) # GPU FDEB Edge Bundling
return graphplot(nodes, direct) # Plot graph
def display_comparison(graph):
t0 = time.time()
img1 = nx_plot_without_bundling(graph) # First image (on the left) without edge bundling
print("Edge Bundling rendered in {} ms".format(round((time.time() - t0) * 1000)))
img2 = nx_plot_with_bundling(graph) # Second image (on the right) with edge bundling
f = plt.figure(dpi=200)
f.add_subplot(1,2, 1)
plt.imshow(img1.data)
f.add_subplot(1,2, 2)
plt.imshow(img2.data)
plt.show(block=True)
display_comparison(nx.tetrahedral_graph())
display_comparison(nx.complete_graph(5))
display_comparison(nx.petersen_graph())
display_comparison(nx.complete_bipartite_graph(4, 5))
display_comparison(nx.barabasi_albert_graph(10, 5))
display_comparison(nx.erdos_renyi_graph(20, 0.15))
display_comparison(nx.complete_graph(10))
```
|
github_jupyter
|
from IPython.display import Image
Image(filename='data/FDEB_edge_bundling.gif')
Image(filename='data/compatibility.png')
Image(filename='data/forces.gif')
import math
import numpy as np
import pandas as pd
import cudatashader as ds
import cudatashader.transfer_functions as tf
from cudatashader.bundling import fdeb_bundle, connect_edges
import networkx as nx
import cudf
import matplotlib.pyplot as plt
import time
np.set_printoptions(threshold=np.inf)
def nodesplot(nodes, canvas=None, cat=None): # Plot nodes
canvas = ds.Canvas(plot_height=400, plot_width=400) if canvas is None else canvas
agg = canvas.points(nodes, 'x','y')
return tf.spread(tf.shade(agg, cmap=['white', 'red'], how='linear'), px=5, shape='circle', how='saturate')
def edgesplot(edges, canvas=None): # Plot edges
canvas = ds.Canvas(plot_height=400, plot_width=400) if canvas is None else canvas
return tf.spread(tf.shade(canvas.line(edges, 'x','y'), how='linear'), px=1, shape='square', how='over')
def graphplot(nodes, edges, canvas=None, cat=None): # Plot graph (nodes + edges)
if canvas is None:
xr = nodes.x.min() - 0.1, nodes.x.max() + 0.1
yr = nodes.y.min() - 0.1, nodes.y.max() + 0.1
canvas = ds.Canvas(plot_height=400, plot_width=400, x_range=xr, y_range=yr)
ep = edgesplot(edges, canvas) # Plot nodes
np = nodesplot(nodes, canvas, cat) # Plot edges
return tf.stack(ep, np, how="over") # Stack two images for final render
def nx_layout(graph): # Produce small graphs
layout = nx.circular_layout(graph)
data = [[node]+layout[node].tolist() for node in graph.nodes]
nodes = pd.DataFrame(data, columns=['id', 'x', 'y'])
nodes.set_index('id', inplace=True)
edges = pd.DataFrame(list(graph.edges), columns=['source', 'target'])
return nodes, edges
def nx_plot_without_bundling(graph): # Plot graph without edge bundling
nodes, edges = nx_layout(graph) # Produce small graph
# Convert to cuDF DataFrames
nodes = cudf.DataFrame.from_pandas(nodes)
edges = cudf.DataFrame.from_pandas(edges)
direct = connect_edges(nodes, edges) # Simply connect edges (instead of edge bundling)
return graphplot(nodes, direct)
def nx_plot_with_bundling(graph): # Plot graph with edge bundling
nodes, edges = nx_layout(graph) # Produce small graph
# Convert to cuDF DataFrames
nodes = cudf.DataFrame.from_pandas(nodes)
edges = cudf.DataFrame.from_pandas(edges)
direct = fdeb_bundle(nodes, edges) # GPU FDEB Edge Bundling
return graphplot(nodes, direct) # Plot graph
def display_comparison(graph):
t0 = time.time()
img1 = nx_plot_without_bundling(graph) # First image (on the left) without edge bundling
print("Edge Bundling rendered in {} ms".format(round((time.time() - t0) * 1000)))
img2 = nx_plot_with_bundling(graph) # Second image (on the right) with edge bundling
f = plt.figure(dpi=200)
f.add_subplot(1,2, 1)
plt.imshow(img1.data)
f.add_subplot(1,2, 2)
plt.imshow(img2.data)
plt.show(block=True)
display_comparison(nx.tetrahedral_graph())
display_comparison(nx.complete_graph(5))
display_comparison(nx.petersen_graph())
display_comparison(nx.complete_bipartite_graph(4, 5))
display_comparison(nx.barabasi_albert_graph(10, 5))
display_comparison(nx.erdos_renyi_graph(20, 0.15))
display_comparison(nx.complete_graph(10))
| 0.677261 | 0.930395 |
# Long Short-Term Memory (LSTM)
:label:`sec_lstm`
The challenge to address long-term information preservation and short-term input
skipping in latent variable models has existed for a long time. One of the
earliest approaches to address this was the
long short-term memory (LSTM) :cite:`Hochreiter.Schmidhuber.1997`. It shares many of the properties of the
GRU.
Interestingly, LSTMs have a slightly more complex
design than GRUs but predate GRUs by almost two decades.
## Gated Memory Cell
Arguably LSTM's design is inspired
by logic gates of a computer.
LSTM introduces a *memory cell* (or *cell* for short)
that has the same shape as the hidden state
(some literature considers the memory cell
a special type of hidden state),
engineered to record additional information.
To control the memory cell
we need a number of gates.
One gate is needed to read out the entries from the
cell.
We will refer to this as the
*output gate*.
A second gate is needed to decide when to read data into the
cell.
We refer to this as the *input gate*.
Last, we need a mechanism to reset
the content of the cell, governed by a *forget gate*.
The motivation for such a
design is the same as that of GRUs,
namely to be able to decide when to remember and
when to ignore inputs in the hidden state via a dedicated mechanism. Let us see
how this works in practice.
### Input Gate, Forget Gate, and Output Gate
Just like in GRUs,
the data feeding into the LSTM gates are
the input at the current time step and
the hidden state of the previous time step,
as illustrated in :numref:`lstm_0`.
They are processed by
three fully-connected layers with a sigmoid activation function to compute the values of
the input, forget, and output gates.
As a result, values of the three gates
are in the range of $(0, 1)$.

:label:`lstm_0`
Mathematically,
suppose that there are $h$ hidden units, the batch size is $n$, and the number of inputs is $d$.
Thus, the input is $\mathbf{X}_t \in \mathbb{R}^{n \times d}$ and the hidden state of the previous time step is $\mathbf{H}_{t-1} \in \mathbb{R}^{n \times h}$. Correspondingly, the gates at time step $t$
are defined as follows: the input gate is $\mathbf{I}_t \in \mathbb{R}^{n \times h}$, the forget gate is $\mathbf{F}_t \in \mathbb{R}^{n \times h}$, and the output gate is $\mathbf{O}_t \in \mathbb{R}^{n \times h}$. They are calculated as follows:
$$
\begin{aligned}
\mathbf{I}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xi} + \mathbf{H}_{t-1} \mathbf{W}_{hi} + \mathbf{b}_i),\\
\mathbf{F}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xf} + \mathbf{H}_{t-1} \mathbf{W}_{hf} + \mathbf{b}_f),\\
\mathbf{O}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xo} + \mathbf{H}_{t-1} \mathbf{W}_{ho} + \mathbf{b}_o),
\end{aligned}
$$
where $\mathbf{W}_{xi}, \mathbf{W}_{xf}, \mathbf{W}_{xo} \in \mathbb{R}^{d \times h}$ and $\mathbf{W}_{hi}, \mathbf{W}_{hf}, \mathbf{W}_{ho} \in \mathbb{R}^{h \times h}$ are weight parameters and $\mathbf{b}_i, \mathbf{b}_f, \mathbf{b}_o \in \mathbb{R}^{1 \times h}$ are bias parameters.
### Candidate Memory Cell
Next we design the memory cell. Since we have not specified the action of the various gates yet, we first introduce the *candidate* memory cell $\tilde{\mathbf{C}}_t \in \mathbb{R}^{n \times h}$. Its computation is similar to that of the three gates described above, but uses a $\tanh$ function with a value range of $(-1, 1)$ as the activation function. This leads to the following equation at time step $t$:
$$\tilde{\mathbf{C}}_t = \text{tanh}(\mathbf{X}_t \mathbf{W}_{xc} + \mathbf{H}_{t-1} \mathbf{W}_{hc} + \mathbf{b}_c),$$
where $\mathbf{W}_{xc} \in \mathbb{R}^{d \times h}$ and $\mathbf{W}_{hc} \in \mathbb{R}^{h \times h}$ are weight parameters and $\mathbf{b}_c \in \mathbb{R}^{1 \times h}$ is a bias parameter.
A quick illustration of the candidate memory cell is shown in :numref:`lstm_1`.

:label:`lstm_1`
### Memory Cell
In GRUs, we have a mechanism to govern input and forgetting (or skipping).
Similarly,
in LSTMs we have two dedicated gates for such purposes: the input gate $\mathbf{I}_t$ governs how much we take new data into account via $\tilde{\mathbf{C}}_t$ and the forget gate $\mathbf{F}_t$ addresses how much of the old memory cell content $\mathbf{C}_{t-1} \in \mathbb{R}^{n \times h}$ we retain. Using the same pointwise multiplication trick as before, we arrive at the following update equation:
$$\mathbf{C}_t = \mathbf{F}_t \odot \mathbf{C}_{t-1} + \mathbf{I}_t \odot \tilde{\mathbf{C}}_t.$$
If the forget gate is always approximately 1 and the input gate is always approximately 0, the past memory cell $\mathbf{C}_{t-1}$ will be preserved over time and passed to the current time step.
This design was introduced to alleviate the vanishing gradient problem and to better capture
long-range dependencies within sequences.
We thus arrive at the flow diagram in :numref:`lstm_2`.

:label:`lstm_2`
### Hidden State
Last, we need to define how to compute the hidden state $\mathbf{H}_t \in \mathbb{R}^{n \times h}$. This is where the output gate comes into play. In the LSTM, it is simply a gated version of the $\tanh$ of the memory cell.
This ensures that the values of $\mathbf{H}_t$ are always in the interval $(-1, 1)$.
$$\mathbf{H}_t = \mathbf{O}_t \odot \tanh(\mathbf{C}_t).$$
Whenever the output gate approximates 1 we effectively pass all memory information through to the predictor, whereas for the output gate close to 0 we retain all the information only within the memory cell and perform no further processing.
:numref:`lstm_3` has a graphical illustration of the data flow.

:label:`lstm_3`
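To make the shapes in these equations concrete, here is a minimal plain-NumPy sketch of a single LSTM step with batch size $n=2$, $d=3$ inputs, and $h=4$ hidden units; the random parameters are purely illustrative, and this is separate from the MXNet implementation that follows.
```
import numpy as np
n, d, h = 2, 3, 4                                  # batch size, number of inputs, hidden units
rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
X = rng.normal(size=(n, d))                        # X_t
H, C = np.zeros((n, h)), np.zeros((n, h))          # H_{t-1} and C_{t-1}
W_x = {g: rng.normal(scale=0.01, size=(d, h)) for g in 'ifoc'}   # W_xi, W_xf, W_xo, W_xc
W_h = {g: rng.normal(scale=0.01, size=(h, h)) for g in 'ifoc'}   # W_hi, W_hf, W_ho, W_hc
b = {g: np.zeros((1, h)) for g in 'ifoc'}
I = sigmoid(X @ W_x['i'] + H @ W_h['i'] + b['i'])  # input gate
F = sigmoid(X @ W_x['f'] + H @ W_h['f'] + b['f'])  # forget gate
O = sigmoid(X @ W_x['o'] + H @ W_h['o'] + b['o'])  # output gate
C_tilde = np.tanh(X @ W_x['c'] + H @ W_h['c'] + b['c'])          # candidate memory cell
C = F * C + I * C_tilde                            # memory cell update
H = O * np.tanh(C)                                 # hidden state
print(I.shape, C.shape, H.shape)                   # all (n, h) = (2, 4)
```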
## Implementation from Scratch
Now let us implement an LSTM from scratch.
As in the experiments in :numref:`sec_rnn_scratch`,
we first load the time machine dataset.
```
from mxnet import np, npx
from mxnet.gluon import rnn
from d2l import mxnet as d2l
npx.set_np()
batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
```
### [**Initializing Model Parameters**]
Next we need to define and initialize the model parameters. As previously, the hyperparameter `num_hiddens` defines the number of hidden units. We initialize weights following a Gaussian distribution with 0.01 standard deviation, and we set the biases to 0.
```
def get_lstm_params(vocab_size, num_hiddens, device):
num_inputs = num_outputs = vocab_size
def normal(shape):
return np.random.normal(scale=0.01, size=shape, ctx=device)
def three():
return (normal((num_inputs, num_hiddens)),
normal((num_hiddens, num_hiddens)),
np.zeros(num_hiddens, ctx=device))
W_xi, W_hi, b_i = three() # Input gate parameters
W_xf, W_hf, b_f = three() # Forget gate parameters
W_xo, W_ho, b_o = three() # Output gate parameters
W_xc, W_hc, b_c = three() # Candidate memory cell parameters
# Output layer parameters
W_hq = normal((num_hiddens, num_outputs))
b_q = np.zeros(num_outputs, ctx=device)
# Attach gradients
params = [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc,
b_c, W_hq, b_q]
for param in params:
param.attach_grad()
return params
```
### Defining the Model
In [**the initialization function**], the hidden state of the LSTM needs to return an *additional* memory cell with a value of 0 and a shape of (batch size, number of hidden units). Hence we get the following state initialization.
```
def init_lstm_state(batch_size, num_hiddens, device):
return (np.zeros((batch_size, num_hiddens), ctx=device),
np.zeros((batch_size, num_hiddens), ctx=device))
```
[**The actual model**] is defined just like what we discussed before: providing three gates and an auxiliary memory cell. Note that only the hidden state is passed to the output layer. The memory cell $\mathbf{C}_t$ does not directly participate in the output computation.
```
def lstm(inputs, state, params):
[W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c,
W_hq, b_q] = params
(H, C) = state
outputs = []
for X in inputs:
I = npx.sigmoid(np.dot(X, W_xi) + np.dot(H, W_hi) + b_i)
F = npx.sigmoid(np.dot(X, W_xf) + np.dot(H, W_hf) + b_f)
O = npx.sigmoid(np.dot(X, W_xo) + np.dot(H, W_ho) + b_o)
C_tilda = np.tanh(np.dot(X, W_xc) + np.dot(H, W_hc) + b_c)
C = F * C + I * C_tilda
H = O * np.tanh(C)
Y = np.dot(H, W_hq) + b_q
outputs.append(Y)
return np.concatenate(outputs, axis=0), (H, C)
```
### [**Training**] and Prediction
Let us train an LSTM exactly as we did in :numref:`sec_gru`, by instantiating the `RNNModelScratch` class introduced in :numref:`sec_rnn_scratch`.
```
vocab_size, num_hiddens, device = len(vocab), 256, d2l.try_gpu()
num_epochs, lr = 500, 1
model = d2l.RNNModelScratch(len(vocab), num_hiddens, device, get_lstm_params,
init_lstm_state, lstm)
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
```
## [**Concise Implementation**]
Using high-level APIs,
we can directly instantiate an `LSTM` model.
This encapsulates all the configuration details that we made explicit above. The code is significantly faster as it uses compiled operators rather than Python for many details that we spelled out in detail before.
```
lstm_layer = rnn.LSTM(num_hiddens)
model = d2l.RNNModel(lstm_layer, len(vocab))
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
```
LSTMs are the prototypical latent variable autoregressive model with nontrivial state control.
Many variants thereof have been proposed over the years, e.g., multiple layers, residual connections, different types of regularization. However, training LSTMs and other sequence models (such as GRUs) is quite costly due to the long-range dependencies of the sequence.
Later we will encounter alternative models such as transformers that can be used in some cases.
## Summary
* LSTMs have three types of gates: input gates, forget gates, and output gates that control the flow of information.
* The hidden layer output of LSTM includes the hidden state and the memory cell. Only the hidden state is passed into the output layer. The memory cell is entirely internal.
* LSTMs can alleviate vanishing and exploding gradients.
## Exercises
1. Adjust the hyperparameters and analyze their influence on running time, perplexity, and the output sequence.
1. How would you need to change the model to generate proper words as opposed to sequences of characters?
1. Compare the computational cost for GRUs, LSTMs, and regular RNNs for a given hidden dimension. Pay special attention to the training and inference cost.
1. Since the candidate memory cell ensures that the value range is between $-1$ and $1$ by using the $\tanh$ function, why does the hidden state need to use the $\tanh$ function again to ensure that the output value range is between $-1$ and $1$?
1. Implement an LSTM model for time series prediction rather than character sequence prediction.
[Discussions](https://discuss.d2l.ai/t/343)
|
github_jupyter
|
from mxnet import np, npx
from mxnet.gluon import rnn
from d2l import mxnet as d2l
npx.set_np()
batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
def get_lstm_params(vocab_size, num_hiddens, device):
num_inputs = num_outputs = vocab_size
def normal(shape):
return np.random.normal(scale=0.01, size=shape, ctx=device)
def three():
return (normal((num_inputs, num_hiddens)),
normal((num_hiddens, num_hiddens)),
np.zeros(num_hiddens, ctx=device))
W_xi, W_hi, b_i = three() # Input gate parameters
W_xf, W_hf, b_f = three() # Forget gate parameters
W_xo, W_ho, b_o = three() # Output gate parameters
W_xc, W_hc, b_c = three() # Candidate memory cell parameters
# Output layer parameters
W_hq = normal((num_hiddens, num_outputs))
b_q = np.zeros(num_outputs, ctx=device)
# Attach gradients
params = [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc,
b_c, W_hq, b_q]
for param in params:
param.attach_grad()
return params
def init_lstm_state(batch_size, num_hiddens, device):
return (np.zeros((batch_size, num_hiddens), ctx=device),
np.zeros((batch_size, num_hiddens), ctx=device))
def lstm(inputs, state, params):
[W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c,
W_hq, b_q] = params
(H, C) = state
outputs = []
for X in inputs:
I = npx.sigmoid(np.dot(X, W_xi) + np.dot(H, W_hi) + b_i)
F = npx.sigmoid(np.dot(X, W_xf) + np.dot(H, W_hf) + b_f)
O = npx.sigmoid(np.dot(X, W_xo) + np.dot(H, W_ho) + b_o)
C_tilda = np.tanh(np.dot(X, W_xc) + np.dot(H, W_hc) + b_c)
C = F * C + I * C_tilda
H = O * np.tanh(C)
Y = np.dot(H, W_hq) + b_q
outputs.append(Y)
return np.concatenate(outputs, axis=0), (H, C)
vocab_size, num_hiddens, device = len(vocab), 256, d2l.try_gpu()
num_epochs, lr = 500, 1
model = d2l.RNNModelScratch(len(vocab), num_hiddens, device, get_lstm_params,
init_lstm_state, lstm)
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
lstm_layer = rnn.LSTM(num_hiddens)
model = d2l.RNNModel(lstm_layer, len(vocab))
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
| 0.570092 | 0.992539 |
# Learning Word2Vec Subword Representations using BlazingText
Word2Vec is a popular algorithm used for generating dense vector representations of words in large corpora using unsupervised learning. These representations are useful for many natural language processing (NLP) tasks like sentiment analysis, named entity recognition and machine translation.
Popular models that learn such representations ignore the morphology of words by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. *SageMaker BlazingText* can learn vector representations associated with character n-grams, representing words as the sum of these character n-gram representations [1]. This method enables *BlazingText* to generate vectors for out-of-vocabulary (OOV) words, as demonstrated in this notebook.
Popular tools like [FastText](https://github.com/facebookresearch/fastText) learn subword embeddings to generate OOV word representations, but scale poorly as they can run only on CPUs. BlazingText extends the FastText model to leverage GPUs, thus providing more than 10x speedup, depending on the hardware.
[1] P. Bojanowski, E. Grave, A. Joulin, T. Mikolov, [Enriching Word Vectors with Subword Information](https://arxiv.org/pdf/1607.04606.pdf)
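To get a feel for what these character n-grams are, the short sketch below enumerates them for a single word in the style described in [1], using angle brackets as word-boundary markers; the n-gram range matches the `min_char`/`max_char` hyperparameters used later in this notebook.
```
def char_ngrams(word, min_n=3, max_n=6):
    """Character n-grams of a word, with < and > marking the word boundaries."""
    padded = f"<{word}>"
    return [padded[i:i + n]
            for n in range(min_n, max_n + 1)
            for i in range(len(padded) - n + 1)]
print(char_ngrams("where"))
# ['<wh', 'whe', 'her', 'ere', 're>', '<whe', 'wher', 'here', 'ere>', ...]
```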
## Setup
Let's start by specifying:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting. If you don't specify a bucket, SageMaker SDK will create a default bucket following a pre-defined naming convention in the same region.
- The IAM role ARN used to give SageMaker access to your data. It can be fetched using the **get_execution_role** method from sagemaker python SDK.
```
import sagemaker
from sagemaker import get_execution_role
import boto3
import json
sess = sagemaker.Session()
role = get_execution_role()
print(
role
) # This is the role that SageMaker would use to leverage AWS resources (S3, CloudWatch) on your behalf
bucket = sess.default_bucket() # Replace with your own bucket name if needed
print(bucket)
prefix = "blazingtext/subwords" # Replace with the prefix under which you want to store the data if needed
```
### Data Ingestion
Next, we download a dataset from the web on which we want to train the word vectors. BlazingText expects a single preprocessed text file with space-separated tokens, and each line of the file should contain a single sentence.
In this example, let us train the vectors on the [text8](http://mattmahoney.net/dc/textdata.html) dataset (100 MB), which is a small (already preprocessed) version of a Wikipedia dump.
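The text8 file already comes in this format, but if you were starting from raw text, a minimal preprocessing step could look like the sketch below, which lowercases the text, strips punctuation, and writes one space-separated sentence per line; the file names are placeholders for illustration only.
```
import re
# "raw_corpus.txt" and "corpus_preprocessed.txt" are placeholder file names for illustration only
with open("raw_corpus.txt") as fin, open("corpus_preprocessed.txt", "w") as fout:
    for line in fin:
        for sentence in re.split(r"[.!?]+", line):
            tokens = re.findall(r"[a-z0-9']+", sentence.lower())  # lowercase, simple word tokens
            if tokens:
                fout.write(" ".join(tokens) + "\n")               # one space-separated sentence per line
```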
```
s3 = boto3.client("s3")
s3.download_file("sagemaker-sample-files", "datasets/text/text8/text8.gz", "text8.gz")
# Uncompressing
!gzip -d text8.gz -f
```
After the data downloading and uncompressing is complete, we need to upload it to S3 so that it can be consumed by SageMaker to execute training jobs. We'll use the Python SDK to upload this file to the bucket and prefix location that we set above.
```
train_channel = prefix + "/train"
sess.upload_data(path="text8", bucket=bucket, key_prefix=train_channel)
s3_train_data = "s3://{}/{}".format(bucket, train_channel)
```
Next we need to setup an output location at S3, where the model artifact will be dumped. These artifacts are also the output of the algorithm's training job.
```
s3_output_location = "s3://{}/{}/output".format(bucket, prefix)
```
## Training Setup
Now that we are done with all the setup that is needed, we are ready to train our word vectors. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job.
```
region_name = boto3.Session().region_name
container = sagemaker.image_uris.retrieve(
region=region_name, framework="blazingtext", version="latest"
)
print("Using SageMaker BlazingText container: {} ({})".format(container, region_name))
```
## Training the BlazingText model for generating word vectors
Similar to the original implementation of [Word2Vec](https://arxiv.org/pdf/1301.3781.pdf), SageMaker BlazingText provides an efficient implementation of the continuous bag-of-words (CBOW) and skip-gram architectures using Negative Sampling, on CPUs and additionally on GPU[s]. The GPU implementation uses highly optimized CUDA kernels. To learn more, please refer to [*BlazingText: Scaling and Accelerating Word2Vec using Multiple GPUs*](https://dl.acm.org/citation.cfm?doid=3146347.3146354).
Besides skip-gram and CBOW, SageMaker BlazingText also supports the "Batch Skipgram" mode, which uses efficient mini-batching and matrix-matrix operations ([BLAS Level 3 routines](https://software.intel.com/en-us/mkl-developer-reference-fortran-blas-level-3-routines)). This mode enables distributed word2vec training across multiple CPU nodes, allowing almost linear scale up of word2vec computation to process hundreds of millions of words per second. Please refer to [*Parallelizing Word2Vec in Shared and Distributed Memory*](https://arxiv.org/pdf/1604.04661.pdf) to learn more.
BlazingText also supports a *supervised* mode for text classification. It extends the FastText text classifier to leverage GPU acceleration using custom CUDA kernels. The model can be trained on more than a billion words in a couple of minutes using a multi-core CPU or a GPU, while achieving performance on par with the state-of-the-art deep learning text classification algorithms. For more information, please refer to [algorithm documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/blazingtext.html) or [the text classification notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/blazingtext_text_classification_dbpedia/blazingtext_text_classification_dbpedia.ipynb).
To summarize, the following modes are supported by BlazingText on different types instances:
| Modes | cbow (supports subwords training) | skipgram (supports subwords training) | batch_skipgram | supervised |
|:----------------------: |:----: |:--------: |:--------------: | :--------------: |
| Single CPU instance | ✔ | ✔ | ✔ | ✔ |
| Single GPU instance | ✔ | ✔ | | ✔ (Instance with 1 GPU only) |
| Multiple CPU instances | | | ✔ | |
Now, let's define the resource configuration and hyperparameters to train word vectors on *text8* dataset, using "skipgram" mode on a `c4.2xlarge` instance.
```
bt_model = sagemaker.estimator.Estimator(
container,
role,
instance_count=1,
instance_type="ml.c4.2xlarge", # Use of ml.p3.2xlarge is highly recommended for highest speed and cost efficiency
volume_size=30,
max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
```
Please refer to [algorithm documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/blazingtext_hyperparameters.html) for the complete list of hyperparameters.
```
bt_model.set_hyperparameters(
mode="skipgram",
epochs=5,
min_count=5,
sampling_threshold=0.0001,
learning_rate=0.05,
window_size=5,
vector_dim=100,
negative_samples=5,
subwords=True, # Enables learning of subword embeddings for OOV word vector generation
min_char=3, # min length of char ngrams
max_char=6, # max length of char ngrams
batch_size=11, # = (2*window_size + 1) (Preferred. Used only if mode is batch_skipgram)
evaluation=True,
) # Perform similarity evaluation on WS-353 dataset at the end of training
```
Now that the hyperparameters are set up, let us prepare the handshake between our data channels and the algorithm. To do this, we need to create `sagemaker.inputs.TrainingInput` objects from our data channels. These objects are then put in a simple dictionary, which the algorithm consumes.
```
train_data = sagemaker.inputs.TrainingInput(
s3_train_data,
distribution="FullyReplicated",
content_type="text/plain",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data}
```
We have our `Estimator` object, we have set its hyperparameters, and we have linked our data channels with the algorithm. The only remaining step is to train the algorithm. The following command starts the training job, which involves a few steps. First, the instance that we requested while creating the `Estimator` is provisioned and set up with the appropriate libraries. Then, the data from our channels is downloaded into the instance. Once this is done, the training job begins. Provisioning and data downloading take some time, depending on the size of the data, so it might be a few minutes before we start seeing training logs. The logs will also print out `Spearman's Rho` on some pre-selected validation datasets after the training has executed; this metric is a proxy for the quality of the algorithm.
Once the job has finished, a "Job complete" message will be printed. The trained model can be found in the S3 bucket that was set up as `output_path` in the estimator.
```
bt_model.fit(inputs=data_channels, logs=True)
```
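Once the job completes, you can optionally check that the artifact landed under the output prefix. This is just a convenience sketch using the `boto3` client created earlier; the exact key layout under `output/` (job name, `model.tar.gz`) depends on your training job.
```
# Optional sanity check: list whatever was written under the output prefix.
# Assumes `s3`, `bucket` and `prefix` are defined as in the setup cells above.
resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix + "/output")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```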
## Hosting / Inference
Once the training is done, we can deploy the trained model as an Amazon SageMaker real-time hosted endpoint. This will allow us to make predictions (or inference) from the model. Note that we don't have to host on the same type of instance that we used to train. Because endpoints stay up and running for a long time, it's advisable to choose a cheaper instance for inference.
```
bt_endpoint = bt_model.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
```
### Getting vector representations for words [including out-of-vocabulary (OOV) words]
Since we trained with **``subwords = True``**, we can get vector representations for any word, including misspelled words or words that were not in the training dataset.
If we train without the subwords flag, the training will be much faster but the model won't be able to generate vectors for OOV words. Instead, it will return a vector of zeros for such words.
#### Use JSON format for inference
The payload should contain a list of words with the key as "**instances**". BlazingText supports content-type `application/json`.
```
from sagemaker.serializers import JSONSerializer
bt_endpoint.serializer = JSONSerializer()
words = ["awesome", "awweeesome"]
payload = {"instances": words}
response = bt_endpoint.predict(payload)
vecs = json.loads(response)
print(vecs)
```
As expected, we get an n-dimensional vector (where n is vector_dim as specified in hyperparameters) for each of the words.
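To double-check the dimensionality, here is a small sanity check on the parsed response; it assumes the JSON format used later in the evaluation section, where each entry carries a "word" and a "vector" field.
```
# Sanity check on the parsed response (each entry has "word" and "vector" fields).
for entry in vecs:
    print(entry["word"], len(entry["vector"]))  # the length should equal vector_dim (100 here)
```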
### Evaluation
We can evaluate the quality of these representations on the task of word similarity / relatedness. We do so by computing Spearman’s rank correlation coefficient (Spearman, 1904) between human judgement and the cosine similarity between the vector representations. For English, we can use the [rare word dataset (RW)](https://nlp.stanford.edu/~lmthang/morphoNLM/), introduced by Luong et al. (2013).
```
s3.download_file("sagemaker-sample-files", "datasets/text/stanford_rare_words/rw.zip", "rw.zip")
!unzip "rw.zip"
!cut -f 1,2 rw/rw.txt | awk '{print tolower($0)}' | tr '\t' '\n' > query_words.txt
```
The above command downloads the RW dataset and dumps all the words for which we need vectors in query_words.txt. Let's read this file and hit the endpoint to get the vectors in batches of 500 words [to respect the 5MB limit of SageMaker hosting.](https://docs.aws.amazon.com/sagemaker/latest/dg/API_runtime_InvokeEndpoint.html#API_runtime_InvokeEndpoint_RequestSyntax)
```
query_words = []
with open("query_words.txt") as f:
for line in f.readlines():
query_words.append(line.strip())
query_words = list(set(query_words))
total_words = len(query_words)
vectors = {}
import numpy as np
import math
from scipy import stats
batch_size = 500
batch_start = 0
batch_end = batch_start + batch_size
while len(vectors) != total_words:
batch_end = min(batch_end, total_words)
subset_words = query_words[batch_start:batch_end]
payload = {"instances": subset_words}
response = bt_endpoint.predict(payload)
vecs = json.loads(response)
for i in vecs:
arr = np.array(i["vector"], dtype=float)
if np.linalg.norm(arr) == 0:
continue
vectors[i["word"]] = arr
batch_start += batch_size
batch_end += batch_size
```
Now that we have gotten all the vectors, we can compute the Spearman’s rank correlation coefficient between human judgement and the cosine similarity between the vector representations.
```
mysim = []
gold = []
dropped = 0
nwords = 0
def similarity(v1, v2):
n1 = np.linalg.norm(v1)
n2 = np.linalg.norm(v2)
return np.dot(v1, v2) / n1 / n2
fin = open("rw/rw.txt", "rb")
for line in fin:
tline = line.decode("utf8").split()
word1 = tline[0].lower()
word2 = tline[1].lower()
nwords += 1
if (word1 in vectors) and (word2 in vectors):
v1 = vectors[word1]
v2 = vectors[word2]
d = similarity(v1, v2)
mysim.append(d)
gold.append(float(tline[2]))
else:
dropped += 1
fin.close()
corr = stats.spearmanr(mysim, gold)
print("Correlation: %s, Dropped words: %s%%" % (corr[0] * 100, math.ceil(dropped / nwords * 100.0)))
```
We can expect a correlation coefficient of ~40, which is pretty good for a small training dataset like text8. For more details, please refer to [Enriching Word Vectors with Subword Information](https://arxiv.org/pdf/1607.04606.pdf).
### Stop / Close the Endpoint (Optional)
Finally, we should delete the endpoint before we close the notebook.
```
bt_endpoint.delete_endpoint()
```
# Documentation for the Sherlockpipe's observation planning tool
In this document we will give an example of how to use the observation planning tool of Sherlock by applying it to TIC 2527981. This star was observed by TESS in its sector 27.
## Setup
To generate an observation plan for your target, you must already have performed a fit of your signal using SHERLOCK and be in the resulting folder (usually named "fit_0"). In this part we briefly recap how to get there.
First we need a .yaml file with the properties of our target. Here is the one used in our case, named *input.yaml*:
```
TARGETS:
'TIC 2527981':
SECTORS: [27]
AUTO_DETREND_ENABLED: True
INITIAL_SMOOTH_ENABLED: True
INITIAL_HIGH_RMS_MASK: True
INITIAL_HIGH_RMS_THRESHOLD: 1.5
DETREND_METHOD: 'biweight'
DETRENDS_NUMBER: 12
DETREND_CORES: 80
MAX_RUNS: 4
SNR_MIN: 6
SDE_MIN: 7
CPU_CORES: 80
OVERSAMPLING: 3
```
Then we initiate the run with the line:
`nice -15 python3 -m sherlockpipe --properties input.yaml`
_Yes, we even "nice" it to be cool with our colleagues sharing the same clusters !_
----------------------------
After the run, we get an output folder, called mmmmmmmmmmm, where all the results appear.
We will fit the first candidate
mmmmmmm add image
You must also have a csv file containing basic information about the (ground) observatories you want to consider, such as this one (a short pandas sketch for generating it programmatically follows the parameter list below):
```
name,tz,lat,lon,alt
Trappist-North,,31.2061,-7.8664,2751
Speculoos-South,,-24.6272,-70.4042,2518
Extremely-Great-Gigantic-Prodigiously-Large-and-Abusively-Notorious-Telescope,,51.385362, -68.711408,42
```
The parameters are defined as:
1. name : name of the observatory (call it whatever works for you, it's not regulated).
2. tz : time zone of the observatory; you can leave it empty, SHERLOCK determines it by itself.
3. lat : observatory's latitude, in degrees.
4. lon : observatory's longitude, in degrees.
5. alt : observatory's altitude, in meters.
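If you prefer to generate this file programmatically, here is a minimal pandas sketch using the example values shown above; the file name `Observatories.csv` simply matches the command used in the next step.
```
import pandas as pd

# Build the observatories table from the example values above and write it to csv.
observatories = pd.DataFrame(
    [
        {"name": "Trappist-North", "tz": None, "lat": 31.2061, "lon": -7.8664, "alt": 2751},
        {"name": "Speculoos-South", "tz": None, "lat": -24.6272, "lon": -70.4042, "alt": 2518},
    ],
    columns=["name", "tz", "lat", "lon", "alt"],
)
observatories.to_csv("Observatories.csv", index=False)
```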
Once you have these files, you can execute the planning module of SHERLOCK with this line :\
`python3 -m sherlockpipe.plan --observatories Observatories.csv`
If you encounter any issue, please refer to the "Troubleshooting" file. It is still in a draft state, as we need your bugs to expand it :)\
If your error is not covered in the "Troubleshooting" file, please let us know about it, so we can work on a patch!
## Output
During the execution, SHERLOCK will create a "plan" folder in which you will find two files, one csv and one pdf.
The csv file contains the following information (a short pandas sketch for loading the plan follows this list):
- observatory : observatory's name as you defined it.
- timezone : time zone of the observatory.
- start_obs : date and time at which the observation would start. The format is yyyy-mm-dd for the date, then "T", then the time formatted as hh:mm:ss.sss (24h).
- end_obs : date and time at which the observation would end, same format as "start_obs".
- ingress : time at which the transit should begin (best estimate), same format as "start_obs".
- egress : time at which the transit should end (best estimate), same format as "start_obs".
- midtime : middle time of the transit (best estimate), same format as "start_obs".
- midtime_up_err_h : maximum time deviation from the midtime, in hours (?) mmmmmmmmmmmm.
- midtime_low_err_h : mmmmmmmmm deviation from the midtime, in hours.
- twilight_evening : earliest time at which the observation can start, same format as "start_obs".
- twilight_morning : latest time at which the observation can end, same format as "start_obs".
- observable : minimum fraction of the transit that must be observable to consider an observation.
- moon_phase : phase of the Moon, from 0 (new Moon) to 1 (full Moon).
- moon_dist : angular distance between the Moon and the target, in degrees.
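As an example of how one might use this output, here is a small pandas sketch that loads the plan and filters it; the csv file name inside the "plan" folder is a placeholder you should adapt, while the column names are the ones listed above.
```
import pandas as pd

# "plan/plan.csv" is a placeholder path; use the csv actually produced in your "plan" folder.
plan = pd.read_csv("plan/plan.csv", parse_dates=["start_obs", "end_obs", "ingress", "egress", "midtime"])
# Example: keep events reasonably far from the Moon and sort them by starting date.
good = plan[plan["moon_dist"] > 30].sort_values("start_obs")
print(good[["observatory", "start_obs", "midtime", "moon_phase", "moon_dist"]])
```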
In the pdf file, you will find a quick recap of the targeted star, the signal, a few key parameters for the observation and the observatories. After that begins a
large table containing all the elements required to schedule an observation, along with a small visual interpretation of the conditions of the observation.
The first column "Observatory" is the name of the observatory as you defined it, with the second column "TZ" its time zone. The third one, "Event times", gives
the key times for the observation, such as:
- TWE : "Twilight Evening", time in the evening from when an observation is possible.
- SO : Start of the observation.
- I : Expected time of ingress (beginning of the transit).
- M : Expected time of the middle of the transit.
- E : Expected time of egress (end of the transit).
- EO : End of the observation.
- TWM : "Twilight Morning", time in the morning until when an observation is possible.
The next column, "TT Error", gives the error margins for the time at which the transit should happen, in hours. "Moon" gives a recap of the state of the Moon
during the observation night, with first its phase (in %) and then its angular distance to the target (in °). Then comes the "Image" column, where there is
a lot to say. The abscissa is the time, which is not visually quantified as the values are in the column "Event times". The background shows whether it is
night (grey) or day (white). The blue line is a visualisation of the elevation of the target, with the values on the right axis in degrees and the air mass on the left. The bottom green
patch is the part of the sky where the target would be too low to observe. The vertical lines are:
- Black : Expected time of the middle of the transit.
- Orange : Expected times of the ingress and egress.
- Pink/violet : Start and end of the observation.
- Red : Temporal uncertainty for the ingress (left line) and egress (right line).
```
%matplotlib inline
```
`Learn the Basics <intro.html>`_ ||
`Quickstart <quickstart_tutorial.html>`_ ||
`Tensors <tensorqs_tutorial.html>`_ ||
`Datasets & DataLoaders <data_tutorial.html>`_ ||
`Transforms <transforms_tutorial.html>`_ ||
`Build the Neural Network <buildmodel_tutorial.html>`_ ||
`Autograd <autogradqs_tutorial.html>`_ ||
**Optimization** ||
`Save & Load the Model <saveloadrun_tutorial.html>`_
Optimizing Model Parameters
==========================================================================
Now that we have a model and data, it is time to train, validate, and test our model by optimizing its parameters on our data.
Training a model is an iterative process; in each iteration (called an *epoch*) the model makes a guess about the output,
computes the error of that guess (the *loss*), collects the derivatives of the error with respect to its parameters
(as we saw in the `previous section <autograd_tutorial.html>`_), and **optimizes** these parameters using gradient descent.
For a more detailed walkthrough of this process, see `3Blue1Brown's video on backpropagation <https://www.youtube.com/watch?v=tIeHLnjs5U8>`__.
Prerequisite Code
------------------------------------------------------------------------------------------
We reuse the code from the previous sections on `Datasets & DataLoaders <data_tutorial.html>`_ and
`Build the Neural Network <buildmodel_tutorial.html>`_.
```
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor()
)
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor()
)
train_dataloader = DataLoader(training_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10),
nn.ReLU()
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
model = NeuralNetwork()
```
Hyperparameters
------------------------------------------------------------------------------------------
Hyperparameters are adjustable parameters that let you control the model optimization process.
Different hyperparameter values can impact model training and convergence rates
(`read more <https://tutorials.pytorch.kr/beginner/hyperparameter_tuning_tutorial.html>`__ about hyperparameter tuning)
We define the following hyperparameters for training:
- **Number of Epochs** - the number of times to iterate over the dataset
- **Batch Size** - the number of data samples propagated through the network before the parameters are updated
- **Learning Rate** - how much to update the model parameters at each batch/epoch. Smaller values yield slower learning, while large values may result in unpredictable behavior during training.
```
learning_rate = 1e-3
batch_size = 64
epochs = 5
```
Optimization Loop
------------------------------------------------------------------------------------------
Once we set our hyperparameters, we can train and optimize our model with an optimization loop.
Each iteration of the optimization loop is called an **epoch**.
Each epoch consists of two main parts:
- **The Train Loop** - iterate over the training dataset and try to converge to optimal parameters.
- **The Validation/Test Loop** - iterate over the test dataset to check if model performance is improving.
Let's briefly familiarize ourselves with some of the concepts used in the training loop. Jump ahead to
`full-impl-label` to see the full implementation of the optimization loop.
Loss Function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When presented with some training data, our untrained network is likely not to give the correct answer. The **loss function**
measures the degree of dissimilarity between the obtained result and the target value, and it is the loss function that we want to
minimize during training. To calculate the loss, we make a prediction using the inputs of our given data sample and compare it
against the true label value.
Common loss functions include `nn.MSELoss <https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html#torch.nn.MSELoss>`_ (Mean Square Error) for regression tasks, and
`nn.NLLLoss <https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html#torch.nn.NLLLoss>`_ (Negative Log Likelihood) for classification;
`nn.CrossEntropyLoss <https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss>`_ combines ``nn.LogSoftmax`` and ``nn.NLLLoss``.
We pass our model's output logits to ``nn.CrossEntropyLoss``, which normalizes the logits and computes the prediction error.
```
# Initialize the loss function.
loss_fn = nn.CrossEntropyLoss()
```
Optimizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Optimization is the process of adjusting model parameters to reduce model error at each training step. **Optimization algorithms** define how this process is performed (in this example we use Stochastic Gradient Descent, SGD).
All optimization logic is encapsulated in the ``optimizer`` object. Here we use the SGD optimizer; in addition, PyTorch offers
`many different optimizers <https://pytorch.org/docs/stable/optim.html>`_ such as ADAM and RMSProp that work better for other kinds of models and data.
We initialize the optimizer by registering the model's parameters that need to be trained and passing in the learning rate hyperparameter.
```
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
Inside the training loop, optimization happens in three steps:
* Call ``optimizer.zero_grad()`` to reset the gradients of the model parameters. Gradients by default add up; to prevent double-counting, we explicitly zero them at each iteration.
* Backpropagate the prediction loss with a call to ``loss.backward()``. PyTorch deposits the gradients of the loss with respect to each parameter.
* Once we have the gradients, call ``optimizer.step()`` to adjust the parameters by the gradients collected in the backward pass.
Full Implementation
------------------------------------------------------------------------------------------
We define ``train_loop``, which loops over our optimization code, and ``test_loop``, which evaluates the model's performance against our test data.
```
def train_loop(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
for batch, (X, y) in enumerate(dataloader):
# Compute prediction and loss
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
def test_loop(dataloader, model, loss_fn):
size = len(dataloader.dataset)
num_batches = len(dataloader)
test_loss, correct = 0, 0
with torch.no_grad():
for X, y in dataloader:
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
```
We initialize the loss function and optimizer, and pass them to ``train_loop`` and ``test_loop``.
Feel free to increase the number of epochs to track the model's improving performance.
```
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
epochs = 10
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
train_loop(train_dataloader, model, loss_fn, optimizer)
test_loop(test_dataloader, model, loss_fn)
print("Done!")
```
Further Reading
------------------------------------------------------------------------------------------
- `Loss Functions <https://pytorch.org/docs/stable/nn.html#loss-functions>`_
- `torch.optim <https://pytorch.org/docs/stable/optim.html>`_
- `Warmstart Training a Model <https://tutorials.pytorch.kr/recipes/recipes/warmstarting_model_using_parameters_from_a_different_model.html>`_
```
#IMPORT MODULES
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
import numpy as np
import math
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn versions
from sklearn.preprocessing import normalize
from sklearn.metrics import accuracy_score
import seaborn as sns
from sklearn.metrics import confusion_matrix
# READ PROCESSED DATA
df = pd.read_csv('Final Data');
#PICK THE REQUIRED FEATURES FROM THE DATASET
#X = np.array(df[['2 min wind speed squared','Avg Wind Speed Squared','5 second wind speed squared','Fog/Ice','Heavy/Freezing Fog','Thunder']]);
X = np.array(df[['5 second wind speed squared', '2 min wind speed squared', 'Thunder','Fog/Ice']]);
y = np.array(df['Power Outage']);
#Split the data into X and Y, and then into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state = 20);
#DECLARE NUMBER OF BINS AND DATA DIMENSIONALITY
numberBins = 25 #number of bins for each feature
dim = 4
#CODE RESPONSIBLE FOR GENERATING BUCKETS FOR THE TRAINING DATA TO FALL INTO
binNumbers_train = np.zeros(dim) #The set of feature bins that each data point falls into
from collections import defaultdict
Buckets_train = defaultdict(list) #Final number of buckets
Values_train = defaultdict(list)
for i in range (0, len(X_train)):
for j in range(0,dim):
c = np.ceil(X_train[i][j] * numberBins)
binNumbers_train[j] = c
s = 0
for k in range(0, len(binNumbers_train)):
if binNumbers_train[k] != 0:
s = s + ((binNumbers_train[k]-1)*((numberBins)**(dim-k-1))) #finding bucket number
bucketNumber = s
Buckets_train[int(s)].append(X_train[i])
Values_train[int(s)].append(y_train[i])
#CODE RESPONSIBLE FOR FINDING THE BUCKET THAT EACH TESTING DATA POINT FALLS INTO
predict = np.zeros(len(X_test))
binNumbers_test = np.zeros(dim)
actualPredictedPoints = [np.array([])]*len(X_test)
for i in range(0, len(X_test)):
for j in range(0,dim):
c = np.ceil(X_test[i][j] * numberBins)
binNumbers_test[j] = c
s = 0
for k in range(0, len(binNumbers_test)):
if binNumbers_test[k] != 0:
s = s + ((binNumbers_test[k]-1)*((numberBins)**(dim-k-1))) #finding bucket number
bucketNumber = s
sumV = sum(Values_train[s])
if sumV == 0:
predict[i] = 0.0
elif sumV > len(Values_train[s])/2:
predict[i] = 1.0
else:
predict[i] = 0.0
#PRINTING ACCURACY SCORE
accuracy = accuracy_score(y_test, predict);
print(accuracy)
#GENERATING HEATMAP
cm = pd.DataFrame(confusion_matrix(y_test, predict));
sns.heatmap(cm, annot=True);
plt.show()
```
# Bayesian Linear Regression
Why not MLE? They overfit.
Why not MAP? The prior helps control the overfitting, but there is no representation of uncertainty.
ex: fit data to something (financial data), then have to predict value for new x. How certain are you? MAP doesn't say
Bayesian approach gets you level of uncertainty. Optimize the appropriate loss function.
gives us $p(y \mid x, D) $
## Setup
given data $D = [[x_1, y_1], ... [x_n, y_n]]$
model: $y_1, ... y_n$ independent given $w, Y_i \sim N(w^Tx_i, a^{-1})$
$a$ is the precision $1/\sigma^2$
$w \sim N(\boldsymbol 0, b^{-1}I), b > 0$
w is $w = (w_1, w_2, ... w_d)$, each is independent, i.i.d.
Assume a, b are known. So $\theta = w$, the only unknown.
## To model nonlinearities in the Xs
Can replace $x_i$ by $\Phi(x_i)$ for some basis functions $\Phi(x_i) = (\Phi_1(x_i), ..., \Phi_n(x_i))$
## Compute Posterior Distribution
First need likelihood: $P(D|\theta) = P(D|w) \propto exp(-a/2(y-Aw)^T(y-Aw))$
A is the design matrix:
$$A =\begin{bmatrix} x_1^T \\ \vdots \\ x_n^T\end{bmatrix}$$
$y = (y_1, ... y_n)^T$
Posterior is the prob of w given the data $P(w|D) \propto p(D|w) P(w)\, \, \, (likelihood * prior )$
$\propto exp(-a/2(y-Aw)^T(y-Aw)-\frac b 2 w^tw)$
this is quadratic in w, so it is Gaussian:
$$a(y-Aw)^T(y-Aw) + bw^Tw = a(y^Ty - 2w^TA^Ty + w^TA^TAw) + bw^Tw \\
= ay^Ty - 2aw^TA^Ty + w^T(aA^TA + bI)w
$$
Want to make this look like a Gaussian. A Gaussian, in general, has $(w - \mu)^T\Lambda (w-\mu) = w^T\Lambda w - 2w^T\Lambda \mu + const$, where $\Lambda$ is the precision matrix (inverse of covariance).
$\Lambda = aA^TA + bI$
$$2aw^TA^Ty = 2w^T\Lambda\mu \\
aA^Ty = \Lambda \mu \\
\mu = a\Lambda^{-1}A^Ty \\
\Lambda = aA^TA + bI$$
Therefore, posterior of w given data is
$$p(w|D) = N(w, \mu, \Lambda^{-1}) \\
\mu = a\Lambda^{-1}A^Ty \\
\Lambda = aA^TA + bI$$
Aside, is precision matrix invertible? In form $B+cI$. Think of eigenvalues. $Bu =\lambda u, cIu = cu$
$(B+cI)u = Bu + cIu = \lambda u + cu = (\lambda + c)u$. If all eigenvalues are strictly positive, then it is invertible.
### MAP estimate of w
Given $p(w|D) = N(w, \mu, \Lambda^{-1})$, we can get MAP. MAP is maximum a posteriori, value of w that maximizes the posterior distribution. MAP for a Gaussian is just the mean (top of bell curve!)
so, $w_{MAP} = \mu = a(aA^TA + bI)^{-1}A^Ty = \boxed{(A^TA + \frac b a I)^{-1}A^Ty}$
Compare to MLE:
$w_{MLE} = (A^TA)^{-1}A^Ty = A^+ y$
The difference is the $\frac b a I$ term, which plays the role of a *regularization parameter*. The regularization term need not correspond to a proper prior (it can be derived from an improper prior), so the regularization view is more general than the Bayesian one.
If you go back to $\propto exp(-a/2(y-Aw)^T(y-Aw)-\frac b 2 w^Tw)$, the $\frac b 2 w^Tw$ part is the regularization term: you are regularizing according to the squared norm of $w$.
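As a concrete illustration (not part of the derivation), here is a minimal NumPy sketch on synthetic data with arbitrary $a$ and $b$, comparing $w_{MLE}$ with $w_{MAP}$, i.e. ridge regression with regularization strength $b/a$:
```
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
A = rng.normal(size=(n, d))                  # design matrix, rows are x_i^T
w_true = np.array([1.0, -2.0, 0.5])
a, b = 4.0, 1.0                              # noise precision and prior precision (assumed known)
y = A @ w_true + rng.normal(scale=1 / np.sqrt(a), size=n)

w_mle = np.linalg.pinv(A) @ y                                      # (A^T A)^{-1} A^T y = A^+ y
w_map = np.linalg.solve(A.T @ A + (b / a) * np.eye(d), A.T @ y)    # MAP / ridge estimate
print("MLE:", w_mle)
print("MAP:", w_map)
```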
## Predictive Distribution
$p(y\mid x, D)$. This is a *new* y that corresponds to a new $x$.
In practice, use linear Gaussian models. But, to calculate ourselves:
$p(y\mid x, D) = \int p(y \mid x, D, w)p(w|x,D) dw$
x is just notational, the x for the y, so
$p(y \mid x, D, w) = p(y \mid x, w)$
$p(w \mid x, D) = p(w \mid D)$
Goal: manipulate the integrand into the form $\int N(w \mid \ldots)\, g(y)\, dw = g(y)$, and then show $g(y) \propto N(y \mid \ldots)$
$$p(y\mid \mathbf x, D) = \int N(y \mid w^T\mathbf x, a^{-1})N(w \mid \mu, \Lambda^{-1}) dw \\
\propto \int exp(-\frac a 2 (y-w^T\mathbf x)^2) exp(-\frac 1 2 (w-\mu)^T\Lambda(w-\mu))\\
= \int exp(-a/2 (y^2 - 2(w^T\mathbf x)y + (w^T\mathbf x)^2) - \frac 1 2 (w^T\Lambda w - 2w^T\Lambda \mu + \mu^T\Lambda\mu))
$$
pull out 1/2, expression for exp is:
$ay^2 - 2w^T\mathbf xya + aw^T\mathbf{xx}^Tw + w^T\Lambda w - 2w^T\Lambda\mu$ (dropped constant $\mu^T\Lambda\mu$)
$= w^T(a\mathbf{xx}^T + \Lambda)w -2w^T(\mathbf xya - \Lambda\mu) + ay^2$
Now we want to turn this into the Gaussian form $(w-m)^TL(w-m) = w^TLw - 2w^TLm +m^TLm$.
let $L = a\mathbf{xx}^T + \Lambda$
$Lm = \mathbf xya + \Lambda \mu$
$m = L^{-1}(ay\mathbf x + \Lambda \mu)$ (assuming $L^{-1}$ exists)
Plug in
$$= w^TLw - 2w^TLm + m^TLm - m^TLm + ay^2 \\
= (w-m)^TL(w-m) - m^TLm + ay^2$$
$p(y \mid x,D) \propto \int \exp(-\frac 1 2 (w-m)^TL(w-m)) \times \exp(\frac 1 2 m^TLm - \frac 1 2 ay^2) dw \\
\propto \exp(\frac 1 2 m^TLm - \frac 1 2 ay^2)$.
this is g(y). Now rearrange to look like a normal!
$ay^2 - m^TLm$
$$m^TLm = (ay\mathbf x + \Lambda\mu)^TL^{-1}LL^{-1}(ay\mathbf x + \Lambda\mu) \\
= ay\mathbf x^TL^{-1}(ay\mathbf x) + 2ay\mathbf x^TL^{-1}\Lambda \mu + \mu^TL^{-1}\Lambda \mu \\
= (a^2\mathbf x^TL^{-1}x)y^2 + 2(a\mathbf x^TL^{-1}\Lambda \mu) y + const $$
So,
$$ay^2 - m^TLm = (a - a^2\mathbf x^TL^{-1}x)y^2 - 2(a\mathbf x^TL^{-1}\Lambda \mu) y +c $$
let $\lambda = a(1-ax^TL^{-1}x)$
$u = \frac 1 \lambda ax^TL^{-1}\Lambda \mu$
$$ p(y \mid \mathbf x, D) \propto \exp(- \frac \lambda 2(y-u)^2)$$
Can clean up:
$\boxed{u = \mu^Tx \\ \\
\frac 1 \lambda = \frac 1 a + \mathbf x^T\Lambda^{-1} \mathbf x \\
p(y \mid \mathbf x,D) = N(y \mid u, \frac 1 \lambda)}$
So, it is a Gaussian!!!!
Show the value of $\lambda$. Write $\alpha = \mathbf x^T\Lambda^{-1}\mathbf x$ and apply the Sherman–Morrison theorem to
$$L^{-1} = (a\mathbf{xx}^T + \Lambda)^{-1}$$
$$L^{-1} = \Lambda^{-1} - \frac{\Lambda^{-1}a\mathbf{xx}^T\Lambda^{-1}}{1+a\mathbf x^T\Lambda^{-1}\mathbf x}$$
$$\mathbf x^TL^{-1}\mathbf x = \alpha - \frac{a\alpha^2}{1+a\alpha} = \frac{\alpha + a\alpha^2 - a\alpha^2}{1+a\alpha} = \frac{\alpha}{1+a\alpha}$$
$$\lambda = a\left(1-a\frac{\alpha}{1+a\alpha}\right) = \frac{a}{1+a\alpha}, \qquad \frac 1 \lambda = \frac{1+a\alpha}{a} = \frac 1 a + \mathbf x^T\Lambda^{-1}\mathbf x$$
Show that $u = \mu^T\mathbf x$. Since $u = \frac 1 \lambda a\mathbf x^TL^{-1}\Lambda \mu = \mu^T\left(\frac 1 \lambda a\Lambda L^{-1}\mathbf x\right)$, it is enough to show $\frac a \lambda \Lambda L^{-1}\mathbf x = \mathbf x$, i.e. $L\Lambda^{-1}\mathbf x = \frac a \lambda \mathbf x$:
$$L \Lambda^{-1}\mathbf x = (a\mathbf{xx}^T + \Lambda)\Lambda^{-1}\mathbf x = a\mathbf x\mathbf x^T\Lambda^{-1}\mathbf x + \mathbf x = (a\alpha + 1)\mathbf x$$
and indeed $\frac a \lambda = a\alpha + 1$ from the expression for $\lambda$ above.
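To see the posterior and predictive formulas in action, here is a short self-contained NumPy sketch (synthetic data, arbitrary $a$ and $b$):
```
import numpy as np

rng = np.random.default_rng(1)
n, d, a, b = 50, 3, 4.0, 1.0
A = rng.normal(size=(n, d))
y = A @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=1 / np.sqrt(a), size=n)

Lam = a * A.T @ A + b * np.eye(d)             # posterior precision  Lambda = a A^T A + b I
mu = a * np.linalg.solve(Lam, A.T @ y)        # posterior mean       mu = a Lambda^{-1} A^T y

x_new = rng.normal(size=d)                    # a new input x
pred_mean = mu @ x_new                                         # u = mu^T x
pred_var = 1.0 / a + x_new @ np.linalg.solve(Lam, x_new)       # 1/lambda = 1/a + x^T Lambda^{-1} x
print("predictive mean:", pred_mean, "predictive variance:", pred_var)
```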
<a href="https://colab.research.google.com/github/Iramuk-ganh/rl/blob/main/neural_network_for_function_approximation_and_qlearning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import sys, os
if 'google.colab' in sys.modules:
%tensorflow_version 1.x
if not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/grading.py -O ../grading.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week4_approx/submit.py
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0").env
env.reset()
no_of_actions = env.action_space.n
obs_state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
```
### Building a neural network for action-value function approximation
```
import tensorflow as tf
import keras
import keras.layers as L
tf.reset_default_graph()
sess = tf.InteractiveSession()
keras.backend.set_session(sess)
neural_network = keras.models.Sequential()
neural_network.add(L.InputLayer(obs_state_dim))
neural_network.add(L.Dense(200, activation='relu'))#relu is a non-saturating non-linearity
neural_network.add(L.Dense(100, activation='relu'))
neural_network.add(L.Dense(no_of_actions, activation = 'linear'))
```
😌np.random.rand() with no arguments returns a random float in the half-open interval [0, 1)
```
np.random.rand()
no_of_actions
np.random.choice(no_of_actions, 1)
def get_action(state, epsilon=0):
"""
sample actions with epsilon-greedy policy
recap: with p = epsilon pick random action, else pick action with highest Q(s,a)
"""
q_values = neural_network.predict(state[None])[0]#neural_network outputs action_value functions for a particular state and all actions
#q_values is an array with q_value corresponding to each action i.e no. of elements in q_values = no_of_actions
prob = np.random.rand()
if prob < epsilon:
action = np.random.choice(no_of_actions, 1)[0]
else:
action = np.argmax(q_values)
return action
np.random.choice([0, 1], p=(0.1, 1-0.1))
def get_action(state, epsilon=0):
"""
sample actions with epsilon-greedy policy
recap: with p = epsilon pick random action, else pick action with highest Q(s,a)
"""
q_values = neural_network.predict(state[None])[0]
choice = np.random.choice([0, 1], p=(epsilon, 1-epsilon))
if choice:#explore or take random actions
action = np.random.choice(no_of_actions, 1)[0]
else:#exploit or take the best action
action = np.argmax(q_values)
return action
assert neural_network.output_shape == (None, no_of_actions), "please make sure your model maps state s -> [Q(s,a0), ..., Q(s, a_last)]"
assert neural_network.layers[-1].activation == keras.activations.linear, "please make sure you predict q-values without nonlinearity"
# test epsilon-greedy exploration
s = env.reset()
assert np.shape(get_action(s)) == (), "please return just one action (integer)"
for eps in [0., 0.1, 0.5, 1.0]:
state_frequencies = np.bincount([get_action(s, epsilon=eps) for i in range(10000)], minlength=no_of_actions)
best_action = state_frequencies.argmax()
assert abs(state_frequencies[best_action] - 10000 * (1 - eps + eps / no_of_actions)) < 200
for other_action in range(no_of_actions):
if other_action != best_action:
assert abs(state_frequencies[other_action] - 10000 * (eps / no_of_actions)) < 200
print('e=%.1f tests passed'%eps)
```
### Q-learning via gradient descent
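For reference, the cells below implement the standard one-step Q-learning objective, minimized by gradient descent on the network parameters $\theta$ (the target in brackets is treated as a constant via `tf.stop_gradient`):
$$L = \frac{1}{N}\sum_i \Big(Q_\theta(s_i,a_i) - \big[r(s_i,a_i) + \gamma \max_{a'} Q_\theta(s'_i,a')\big]\Big)^2$$
For terminal transitions the target reduces to $r(s_i,a_i)$.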
```
# Create placeholders for the <s, a, r, s'> tuple and a special indicator for game end (is_done = True)
states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + obs_state_dim)
actions_ph = keras.backend.placeholder(dtype='int32', shape=[None])
rewards_ph = keras.backend.placeholder(dtype='float32', shape=[None])
next_states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + obs_state_dim)
is_done_ph = keras.backend.placeholder(dtype='bool', shape=[None])
```
😌tf.one_hot()
indices = [0, 1, 2]
depth = 3
tf.one_hot(indices, depth)
output: [3 x 3]
[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]
😌tf.reduce_sum()
x = tf.constant([[1, 1, 1], [1, 1, 1]])
tf.reduce_sum(x) # 6
tf.reduce_sum(x, 0) # [2, 2, 2]
tf.reduce_sum(x, 1) # [3, 3]
```
#get q-values for all actions in current states
predicted_qvalues = neural_network(states_ph)
#select q-values for chosen actions
predicted_qvalues_for_actions = tf.reduce_sum(predicted_qvalues * tf.one_hot(actions_ph, no_of_actions), axis = 1)
gamma = 0.99
# compute q-values for all actions in next states
predicted_next_qvalues = neural_network(next_states_ph)
# compute V*(next_states) using predicted next q-values
next_state_values = tf.reduce_max(predicted_next_qvalues, axis = 1)#max_qvalue for the next state over all possible actions
# compute "target q-values" for loss - it's what's inside square parentheses in the above formula.
target_qvalues_for_actions = rewards_ph + gamma * next_state_values
# at the last state we shall use simplified formula: Q(s,a) = r(s,a) since s' doesn't exist
target_qvalues_for_actions = tf.where(is_done_ph, rewards_ph, target_qvalues_for_actions)
#mean squared error loss to minimize
loss = (predicted_qvalues_for_actions - tf.stop_gradient(target_qvalues_for_actions)) ** 2
loss = tf.reduce_mean(loss)
# training function that resembles agent.update(state, action, reward, next_state) from tabular agent
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
assert tf.gradients(loss, [predicted_qvalues_for_actions])[0] is not None, "make sure you update q-values for chosen actions and not just all actions"
assert tf.gradients(loss, [predicted_next_qvalues])[0] is None, "make sure you don't propagate gradient w.r.t. Q_(s',a')"
assert predicted_next_qvalues.shape.ndims == 2, "make sure you predicted q-values for all actions in next state"
assert next_state_values.shape.ndims == 1, "make sure you computed V(s') as maximum over just the actions axis and not all axes"
assert target_qvalues_for_actions.shape.ndims == 1, "there's something wrong with target q-values, they must be a vector"
```
### Playing the game
❓sess.run()
```
sess.run(tf.global_variables_initializer())
def generate_session(env, t_max = 1000, epsilon = 0, train = False):
total_reward = 0
s = env.reset()
for t in range(t_max):
a = get_action(s, epsilon=epsilon)
next_s, r, done, _ = env.step(a)
if train:
sess.run(train_step,
{
states_ph: [s],
actions_ph: [a],
rewards_ph: [r],
next_states_ph: [next_s],
is_done_ph: [done]
})
total_reward += r
s = next_s
if done:
break
return total_reward
epsilon = 0.5
for i in range(1000):
session_rewards = [generate_session(env, epsilon=epsilon, train=True) for _ in range(100)]
print("epoch #{}\tmean reward = {:.3f}\tepsilon = {:.3f}".format(i, np.mean(session_rewards), epsilon))
epsilon *= 0.99
assert epsilon >= 1e-4, "Make sure epsilon is always nonzero during training"
if np.mean(session_rewards) > 300:
print("You Win!")
break
```
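As a quick follow-up (not part of the training cells above), you can evaluate the learned greedy policy by reusing `generate_session` with `epsilon=0` and no training updates:
```
# Evaluate the greedy policy: epsilon = 0 disables exploration, train=False disables updates.
eval_rewards = [generate_session(env, epsilon=0, train=False) for _ in range(10)]
print("mean greedy reward:", np.mean(eval_rewards))
```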
# Pulumi Automation API
Pulumi's Automation API is the programmatic interface for driving pulumi programs from within your code.
The package can be used for a number of use cases:
* Driving pulumi deployments within CI/CD workflows
* Integration testing
* Multi-stage deployments such as blue-green deployment patterns
* Deployments involving application code like database migrations
* Building higher level tools, custom CLIs over pulumi, etc
* Using pulumi behind a REST or GRPC API
* Debugging Pulumi programs (by using a single main entrypoint with "inline" programs)
This Jupyter notebook explores various facets of the Automation API and shows how to deploy infrastructure without ever leaving the notebook.
To run this example you'll need a few pre-reqs (a quick way to verify both is sketched just below):
1. A Pulumi CLI installation ([v3.0.0](https://www.pulumi.com/docs/get-started/install/versions/) or later)
2. The AWS CLI, with appropriate credentials.
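A minimal way to check both pre-reqs from inside the notebook (assuming the `pulumi` and `aws` CLIs are on your PATH) is to shell out to them:
```
# verify the Pulumi CLI is installed and report its version
!pulumi version
# verify the AWS CLI can resolve working credentials
!aws sts get-caller-identity
```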
Alright, let's get started.
### Automation API 101
In addition to fine-grained building blocks, Automation API provides two out-of-the-box ways to work with Stacks:
1. Programs locally available on-disk and addressed via a filepath (local source):
```python
stack = create_stack("myOrg/myProj/myStack", work_dir=os.path.join("..", "path", "to", "project"))
```
2. Programs defined as a function alongside your Automation API code (inline source):
```python
def pulumi_program():
bucket = s3.Bucket("bucket")
    pulumi.export("bucket_name", bucket.bucket)
stack = create_stack("myOrg/myProj/myStack", program=pulumi_program)
```
Each of these creates a stack with access to the full range of Pulumi lifecycle methods
(up/preview/refresh/destroy), as well as methods for managing config, stack, and project settings:
```python
stack.set_config("key", ConfigValue(value="value", secret=True))
preview_response = stack.preview()
```
### Pulumi programs as functions
An inline program allows you to define your infrastructure within a function alongside your other code. Consider the following function called `s3_static_site`. It creates an s3 bucket, sets it up as a basic static website and exports the URL.
```
import pulumi
from pulumi_aws import s3
def s3_static_site():
# Create a bucket and expose a website index document
site_bucket = s3.Bucket("s3-website-bucket", website=s3.BucketWebsiteArgs(index_document="index.html"))
index_content = """
<html>
<head><title>Hello S3</title><meta charset="UTF-8"></head>
<body>
<p>Hello, world!</p>
<p>Made with ❤️ with <a href="https://pulumi.com">Pulumi</a></p>
</body>
</html>
"""
# Write our index.html into the site bucket
s3.BucketObject("index",
bucket=site_bucket.id, # reference to the s3.Bucket object
content=index_content,
key="index.html", # set the key of the object
content_type="text/html; charset=utf-8") # set the MIME type of the file
# Set the access policy for the bucket so all objects are readable
s3.BucketPolicy("bucket-policy", bucket=site_bucket.id, policy={
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": "*",
"Action": ["s3:GetObject"],
# Policy refers to bucket explicitly
"Resource": [pulumi.Output.concat("arn:aws:s3:::", site_bucket.id, "/*")]
},
})
# Export the website URL
pulumi.export("website_url", site_bucket.website_endpoint)
```
### Automating your deployment
Now, let's define some functions to deploy and destroy our stacks.
```
from typing import List, Tuple, Optional, Dict
from pulumi import automation as auto
stack_name = "dev"
def noop():
pass
def deploy_project(project_name: str,
program: callable,
plugins: Optional[List[Tuple]] = None,
config: Optional[Dict[str, auto.ConfigValue]] = None):
# create (or select if one already exists) a stack that uses our inline program
stack = auto.create_or_select_stack(stack_name=stack_name,
project_name=project_name,
program=program)
if plugins:
for plugin in plugins:
stack.workspace.install_plugin(plugin[0], plugin[1])
print("plugins installed")
if config:
stack.set_all_config(config)
print("config set")
stack.refresh(on_output=print)
stack.up(on_output=print)
return stack
def destroy_project(project_name: str):
stack = auto.create_or_select_stack(stack_name=stack_name,
project_name=project_name,
program=noop)
stack.destroy(on_output=print)
stack.workspace.remove_stack(stack_name)
print(f"stack {stack_name} in project {project_name} removed")
```
### Deploy all the things!
Alright, we're ready to deploy our first project. Execute the code below and watch the output as your program progresses.
```
s3_site = deploy_project("my_first_project",
s3_static_site,
plugins=[("aws", "v4.0.0")],
config={"aws:region": auto.ConfigValue(value="us-west-2")})
```
### Using stack outputs
Now that our stack is deployed, let's make sure everything was deployed correctly by making a request to the URL we exported.
```
outputs = s3_site.outputs()
url = f"http://{outputs['website_url'].value}"
import requests
site_content = requests.get(url).text
site_content
```
Cool! Looks like we got some HTML back. Let's display it in our notebook using IPython.
```
from IPython.core.display import HTML
HTML(site_content)
```
Alright, that looks much better. We can even open our website in a new browser tab.
```
import webbrowser
outputs = s3_site.outputs()
webbrowser.open(url)
```
### Clean up
Now that we're done testing everything out, we can destroy our stack.
```
destroy_project("my_first_project")
```
# Homework 8: Dataset 2: Baggage claims
* Open your dataset up using pandas in a Jupyter notebook
* Do a .head() to get a feel for your data
* Write down 12 questions to ask your data, or 12 things to hunt for in the data
* Attempt to answer those questions using the magic of pandas
* Make three charts with your dataset
* Keep track of anything that was problematic - it can be non-standard country names, extra spaces in columns, trouble getting multiple colors in scatterplots, whatever you'd like.
```
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df=pd.read_csv('baggageclaims_data.csv')
df.head()
```
## 1. Which claim type is the most common?
```
df.columns
df['Claim Type'].value_counts()
#Passenger Property Loss is most common claim type
```
## 2. How do airlines compare on this most common claim type?
```
loss = df[df['Claim Type'] == 'Passenger Property Loss']
loss_by_airline = pd.DataFrame(loss.groupby('Airline Name')['Claim Type'].value_counts())
loss_by_airline.sort_values(by='Claim Type', ascending = False).head(20)
plt.style.use('ggplot')
loss_by_airline.sort_values(by='Claim Type', ascending = True).tail(20).plot(kind='barh', xlim=(0,900), ylim=(0,900), legend=False, figsize=(10,10))
```
## 3. How is the most common claim type distributed over item categories?
```
loss_by_item = loss.groupby('Item Category')['Claim Type'].value_counts()
loss_by_item.sort_values().tail(20)
loss_by_item_df = pd.DataFrame(loss.groupby('Item Category')['Claim Type'].value_counts())
loss_by_item_df.sort_values('Claim Type', ascending = False).head(20)
```
## 4. Is there a correlation between the ten most-lost item categories and when they get lost?
```
# convert dates into actual dates for python
# date format of 'Date Received': 28-May-15
from datetime import datetime
date_object = datetime.strptime('28-May-15', '%d-%b-%y')
date_object
#strftime.net
right_formatted_dates = []
for date in df['Date Received']:
#print(date)
date_formatted = datetime.strptime(date, '%d-%b-%y')
#print(date_formatted)
right_formatted_dates.append(date_formatted)
df['right_dates'] = right_formatted_dates
new_table = df[df['Claim Type'] == 'Passenger Property Loss']
latest = new_table[['right_dates', 'Claim Type', 'Item Category', 'Claim Number']]
latest
Clothing = latest[latest['Item Category'] == 'Clothing']
clothing_count = Clothing['Item Category'].count()
clothing_count
from datetime import date
for element in Clothing['right_dates']:
if element.date() > date(2015, 1, 1) and element.date() < date(2015, 2, 1):
clothing_count = Clothing['Item Category'].count()
print(clothing_count)
# The idea is to loop though the dataframe latest and the clothing category (and subsequently for others --> function) loop through each month (here, so far only January) and count how many entries there are for this month, by doing .count() on any variable of that list (works, see cell above)
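# A sketch of that idea without an explicit loop: since 'right_dates' holds datetimes,
# pandas can group the property-loss claims by month and count each item category in one go
# (column and category names as used above).
monthly_counts = latest.groupby([latest['right_dates'].dt.month, 'Item Category'])['Claim Number'].count()
monthly_counts.head(20)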
from datetime import date
d = date(2013, 12 ,22)
type(d)
Clothing = new_table[new_table['Item Category'] == 'Clothing']
Jewelry = new_table[new_table['Item Category'] == 'Jewelry & Watches']
Travel_Accessories = new_table[new_table['Item Category'] == 'Travel Accessories']
Personal_Electronics = new_table[new_table['Item Category'] == 'Personal Electronics']
Cosmetics = new_table[new_table['Item Category'] == 'Cosmetics & Grooming']
Computer = new_table[new_table['Item Category'] == 'Computer & Accessories']
# note: the date column created above is called 'right_dates', not 'date'
plt.scatter(y= Clothing["right_dates"], x= Clothing["right_dates"], c='c', alpha=0.75, marker='x')
plt.scatter(y= Jewelry["right_dates"], x= Jewelry["right_dates"], c='y', alpha=0.75, marker='o')
plt.scatter(y= Travel_Accessories["right_dates"], x= Travel_Accessories["right_dates"], c='m', alpha=0.75, marker='v')
plt.scatter(y= Personal_Electronics["right_dates"], x= Personal_Electronics["right_dates"], c='m', alpha=0.75, marker='s')
plt.scatter(y= Cosmetics["right_dates"], x= Cosmetics["right_dates"], c='m', alpha=0.75, marker='.')
plt.scatter(y= Computer["right_dates"], x= Computer["right_dates"], c='m', alpha=0.75, marker='*')
#markers: http://matplotlib.org/api/markers_api.html
# make x a timeline: http://stackoverflow.com/questions/1574088/plotting-time-in-python-with-matplotlib
# https://blog.mafr.de/2012/03/11/time-series-data-with-matplotlib/
```
## 5. Which airport has the most property damage claims?
```
damage = df[df['Claim Type'] == 'Property Damage']
damage['Claim Type'].value_counts()
damage_by_airport = damage.groupby('Airport Name')['Claim Type'].value_counts()
damage_by_airport.sort_values().tail(30)
```
## 6. How many of the claims were granted, how many denied?
```
end = df['Disposition'].value_counts()
end
plt.style.use('ggplot')
end.plot(kind='pie')
```
## 7. How do airlines compare on denial and full approval of claims?
```
approval = df[df['Disposition'] == 'Approve in Full']
approval['Disposition'].value_counts()
approval_by_airline = approval.groupby('Airline Name')['Disposition'].value_counts()
approval_by_airline.sort_values().tail(20)
denial = df[df['Disposition'] == 'Deny']
denial['Disposition'].value_counts()
denial_by_airline = denial.groupby('Airline Name')['Disposition'].value_counts()
denial_by_airline.sort_values().tail(20)
```
## 8. What is the average close amount? What are the min/max?
```
# strip the currency symbol, placeholder dashes and thousands separators before converting to float
# (regex=False treats '$' as a literal character rather than a regex anchor)
float_amount = df['Close Amount'].str.replace('$', '', regex=False).str.replace('-', '0', regex=False).str.replace(',', '', regex=False).astype(float)
df['Amount_float'] = float_amount
df.head(3)
df['Amount_float'].describe()
```
## 9. Per airline, what is the average close amount?
```
df.groupby('Airline Name')['Amount_float'].mean().sort_values().tail(50)
close_df = pd.DataFrame(df.groupby('Airline Name')['Amount_float'].mean())
close_df.sort_values(by='Amount_float', ascending=True).tail(20).plot(kind="barh", legend=False, figsize=(10,10))
```
## 10. Per item category, what is the average close amount?
```
#loss['Amount_float'] = float_amount
#loss.groupby('Item Category')['Amount_float'].mean()
category_amount = pd.DataFrame(df.groupby('Item Category')['Amount_float'].mean())
cleaned_category_amount = category_amount['Amount_float'] != 0
category_amount[cleaned_category_amount].sort_values('Amount_float', ascending=False).head(10)
#how to take only those with one entry? ie only "Audio/Video" instead of "Audio/Video; Audio/Video"?
#--> exclude all the cells that have a ; in it as that is how multiple entries are separated?
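# One possible way to act on that idea: keep only rows whose 'Item Category' holds a single
# entry, i.e. contains no ';' separator, before averaging (a sketch, not checked against the full file)
single_item = df[~df['Item Category'].fillna('').str.contains(';')]
single_item.groupby('Item Category')['Amount_float'].mean().sort_values(ascending=False).head(10)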
```
# Jupyter Notebook problems in the Essentials of Paleomagnetism Textbook by L. Tauxe
## Problems in Chapter 2
### Problem 1a:
Write a script to convert declination, inclination, intensity data to North, East and Down. First we need to import NumPy, the module with lots of math functions, and pandas, which has nice data manipulation functions.
```
import numpy as np
import pandas as pd
```
Let's write a little function to do the conversion.
```
def dir2cart(data):
""" Converts data array with [Declination, Inclination, Intensity]
to cartesian coordinates, X=North, Y=East, Z=Down
Returns array with [X,Y,Z]
"""
# convert degrees to radians for declination and inclination
decs,incs,ints=np.radians(data[0]),np.radians(data[1]),data[2]
X=ints*np.cos(decs)*np.cos(incs)
Y=ints*np.sin(decs)*np.cos(incs)
Z=ints*np.sin(incs)
cart=np.array([X,Y,Z]).transpose()
return cart
```
Now let's read in a data file with some geomagnetic field vectors in it.
```
# read in the data and transpose it to rows of dec, inc, int
data=np.loadtxt('Chapter_2/ps2_prob1_data.txt').transpose()
print (dir2cart(data))
```
### Problem 1b:
Get locations from 10 random spots on Earth and calculate the IGRF vectors at each place.
To solve this problem, we have to understand how the function **pmag.get_unf( )** works. To do this, we need to tell the notebook where the **pmag** module lives, import it and print out the doc string for **get_unf()**:
```
import pmagpy.pmag as pmag
help(pmag.get_unf)
```
Now we can use that function to generate a list of random points on the Earth's surface.
```
places=pmag.get_unf(10)
print (places)
```
Now let's find out about ipmag.igrf
```
import pmagpy.ipmag as ipmag
help(ipmag.igrf)
```
And now we can ship the **data** in places to **ipmag.igrf**.
```
for place in places:
print (ipmag.igrf([2006,0,place[1],place[0]]))
```
### Problem 1c:
Take the output from Problem 1b and call **dir2cart**.
```
data=[] # make a blank list
for place in places:
Dir=ipmag.igrf([2006,0,place[1],place[0]])
data.append(Dir) # append to the data list
data=np.array(data).transpose() # dir2cart takes arrays of data
print (dir2cart(data))
```
### Problem 2b:
Take the output from Problem 1c and plot as an equal area projection (first by hand and then with **ipmag** functions). The **ipmag** functions call **pmagplotlib** and use **matplotlib**, so these will have to be imported as well.
```
import pmagpy.pmagplotlib as pmagplotlib
import matplotlib.pyplot as plt
# this 'magic command' (starts with %) lets us plot things in the notebook
%matplotlib inline
ipmag.plot_net(1) # make an equal angle net
ipmag.plot_di(data[0],data[1]) # put on the dots
```
### Problem 3:
Use the dipole formula ($\tan (I) = 2 \tan (\lambda)$, where $I$ is inclination and $\lambda$ is latitude) and calculate the GAD field inclination at 36 $^{\circ}$N. Note that declination is always zero for a GAD field. We can make a **lambda** function for this!
```
lat = np.radians(36.) # remember to convert to radians!
inc = lambda lat: np.degrees(np.arctan(2.*np.tan(lat))) # and back!
print ('%7.1f'%(inc(lat))) # and print it out
```
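Since the dipole formula can be inverted, $\lambda = \tan^{-1}(\tan (I) / 2)$, a quick sanity check is to recover the latitude from the inclination we just computed (this reuses the `inc` and `lat` variables from the cell above):
```
lat_check = lambda i: np.degrees(np.arctan(np.tan(np.radians(i))/2.)) # invert the dipole formula
print ('%7.1f'%(lat_check(inc(lat)))) # should recover 36.0
```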
Let's use the pmag function **dia_vgp**. First let's figure out what it does:
```
help(pmag.dia_vgp)
```
Now we can use it to convert our directions to VGPs. Note that alpha95 is required but is not given here, so we supply a zero in its place. Note also that westward longitudes are indicated by minus signs.
```
vgp_lat,vgp_lon,dp,dm = pmag.dia_vgp(345,47,0.,36,-112)
print ('%7.1f %7.1f'%(vgp_lat,vgp_lon))
```
```
from google.colab import drive
drive.mount('/content/drive/')
%cd /content/drive/MyDrive/NIOMATA/strollerGAN_austria/app/ganspace_api/ganspace
!pip install pyrebase4
!pip install boto3
!pip install fbpca
!pip install flask_cors
!pip install flask_ngrok
```
# **import everything**
```
import os
import torch
import random
import tqdm
import flask
import json
import numpy as np
from generator import Model
from PIL import Image
from flask_cors import CORS
from flask_ngrok import run_with_ngrok
from flask import request, Response, send_file
from rules.stroller_dict import get_stroller_dict
from rules.childseat_dict import get_childseat_dict
def generate_seed(num):
return [random.randint(0,9999) for i in range(num)]
def generate_combination(start,end):
res = []
for i in range(start,end) :
for j in range(i,end+1):
if i == j:
continue
res.append((i,j))
return res
def generate_image_to_API(output_dir,model):
seeds = generate_seed(6)
combinations = generate_combination(1,6)
model.generate_transversal(output_dir=output_dir,
combinations=combinations,
num=seeds)
def generate_image_style(model,step,scale,ls,le,z):
z = model.generate_z(1)
z = model.normalize_z(z)
res = []
for i in range(-5,5):
z_styles = apply_rule(z,model.model,i,scale,ls,le)
img = model.model.sample_np(z_styles)
img = Image.fromarray((img * 255).astype(np.uint8)).resize((500,500),Image.LANCZOS)
res.append(img)
return res
def apply_rule(z,model,step,scale,ls,le):
rule_path ='rules/4W_to_3W.npy'
rule = np.load(rule_path)
z = [z]*model.get_max_latents()
for l in range(ls, le):
z[l] = z[l] + rule * step * scale
return z
```
# **INIT**
```
childseat = Model('childseat')
stroller = Model('stroller')
pushchair = Model('pushchair')
stroller_rules = get_stroller_dict()
childseat_rules = get_childseat_dict()
output_dir = '/content'
app = flask.Flask(__name__)
CORS(app)
run_with_ngrok(app)
@app.route('/', methods=['GET'])
def home():
return "<h1>HELLO!</h1><p>API to send array of images</p>"
@app.route('/generate_image',methods=['GET','POST'])
def generate_image():
cat = request.args.get('category')
if cat == 'childseat':
generate_image_to_API(output_dir,childseat)
if cat == 'stroller':
generate_image_to_API(output_dir,stroller)
if cat == 'pushchair':
generate_image_to_API(output_dir,pushchair)
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
@app.route('/generate_style',methods=['GET','POST'])
def generate_style():
cat = request.args.get('category')
filename = request.args.get('filename')
style_name = request.args.get('name')
z = np.load(os.path.join(output_dir,cat,filename))
if cat == 'childseat':
data = next((x for x in childseat_rules if x['name'] == style_name), None)
childseat.generate_style(z,data,'output')
if cat == 'stroller':
data = next((x for x in stroller_rules if x['name'] == style_name), None)
stroller.generate_style(z,data,'output')
if cat == 'pushchair':
data = next((x for x in stroller_rules if x['name'] == style_name), None)
pushchair.generate_style(z,data,'output')
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
# note: app.run() blocks while the Flask server and ngrok tunnel are live,
# so the call below only executes after the server is stopped
app.run()
generate_image_to_API(output_dir,childseat)
```
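Once the Flask app is running behind the ngrok tunnel, the endpoints can be exercised from any HTTP client. Below is a minimal sketch using `requests`; the base URL, latent file name and style name are placeholders you would replace with your own values (the tunnel URL is printed by `flask_ngrok` at startup).
```
import requests

BASE_URL = "http://<your-ngrok-subdomain>.ngrok.io" # placeholder tunnel URL

# trigger transversal image generation for one category
requests.post(BASE_URL + "/generate_image", params={"category": "childseat"})

# apply a named style rule to a previously generated latent file (placeholder names)
requests.post(BASE_URL + "/generate_style",
              params={"category": "childseat", "filename": "example_z.npy", "name": "some_style"})
```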
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)
# Discrete Bayes Filter
```
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style
load_style()
```
The Kalman filter belongs to a family of filters called *Bayesian filters*. Most textbook treatments of the Kalman filter present the Bayesian formula, perhaps show how it factors into the Kalman filter equations, but mostly keep the discussion at a very abstract level.
That approach requires a fairly sophisticated understanding of several fields of mathematics, and it still leaves much of the work of understanding and forming an intuitive grasp of the situation in the hands of the reader.
I will use a different way to develop the topic, to which I owe the work of Dieter Fox and Sebastian Thrun a great debt. It depends on building an intuition on how Bayesian statistics work by tracking an object through a hallway - they use a robot, I use a dog. I like dogs, and they are less predictable than robots, which imposes interesting difficulties for filtering. The first published example of this that I can find seems to be Fox 1999 [1], with a fuller example in Fox 2003 [2]. Sebastian Thrun also uses this formulation in his excellent Udacity course Artificial Intelligence for Robotics [3]. In fact, if you like watching videos, I highly recommend pausing this book in favor of the first few lessons of that course, and then coming back to this book for a deeper dive into the topic.
Let's now use a simple thought experiment, much like we did with the g-h filter, to see how we might reason about the use of probabilities for filtering and tracking.
## Tracking a Dog
Let's begin with a simple problem. We have a dog friendly workspace, and so people bring their dogs to work. Occasionally the dogs wander out of offices and down the halls. We want to be able to track them. So during a hackathon somebody created a little sonar sensor to attach to the dog's collar. It emits a signal, listens for the echo, and based on how quickly an echo comes back we can tell whether the dog is in front of an open doorway or not. It also senses when the dog walks, and reports in which direction the dog has moved. It connects to our network via wifi and sends an update once a second.
I want to track my dog Simon, so I attach the device to his collar and then fire up Python, ready to write code to track him through the building. At first blush this may appear impossible. If I start listening to the sensor of Simon's collar I might read 'door', 'hall', 'hall', and so on. How can I use that information to determine where Simon is?
To keep the problem small enough to plot easily we will assume that there are only 10 positions in a single hallway to consider, which we will number 0 to 9, where 1 is to the right of 0, 2 is to the right of 1, and so on. For reasons that will be clear later, we will also assume that the hallway is circular or rectangular. If you move right from position 9, you will be at position 0.
When I begin listening to the sensor I have no reason to believe that Simon is at any particular position in the hallway. He is equally likely to be in any position. There are 10 positions, so the probability that he is in any given position is 1/10.
Let's represent our belief of his position at any time in a NumPy array.
```
import numpy as np
belief = np.array([1/10]*10)
np.set_printoptions(precision=3, linewidth=50)
print(belief)
```
In Bayesian statistics this is called our *prior*. It basically means the probability prior to incorporating measurements or other information. More completely, this is called the *prior probability distribution*, but that is a mouthful and so it is normally shortened to prior. A *probability distribution* is a collection of all possible probabilities for an event. Probability distributions always sum to 1 because something had to happen; the distribution lists all the different somethings and the probability of each.
I'm sure you've used probabilities before - as in "the probability of rain today is 30%". The last paragraph sounds like more of that. But Bayesian statistics was a revolution in probability because it treats the probability as a belief about a single event. Let's take an example. I know that if I flip a fair coin infinitely many times, 50% of the flips will be heads and 50% tails. That is standard probability, not Bayesian, and is called *frequentist statistics* to distinguish it from Bayes. Now, let's say I flip the coin one more time. Which way do I believe it landed? Frequentist probability has nothing to say about that; it will merely state that 50% of coin flips land as heads. Bayes treats this as a belief about a single event - the strength of my belief that this specific coin flip is heads is 50%.
There are more differences, but for now recognize that Bayesian statistics reasons about our belief about a single event, whereas frequentists reason about collections of events. In the rest of this chapter, and most of the book, when I talk about the probability of something I am referring to the probability that some specific thing is true. When I do that I'm taking the Bayesian approach.
Now let's create a map of the hallway in another list. Suppose there are first two doors close together, and then another door quite a bit further down the hallway. We will use 1 to denote a door, and 0 to denote a wall:
```
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
```
So I start listening to Simon's transmissions on the network, and the first data I get from the sensor is "door". For the moment assume the sensor always returns the correct answer. From this I conclude that he is in front of a door, but which one? I have no idea. I have no reason to believe he is in front of the first, second, or third door. But what I can do is assign a probability to each door. All doors are equally likely, and there are three of them, so I assign a probability of 1/3 to each door.
```
from book_format import figsize, set_figsize
import book_plots as bp
import matplotlib.pyplot as plt
belief = np.array([1./3, 1./3, 0, 0, 0, 0, 0, 0, 1./3, 0])
set_figsize(y=2)
bp.bar_plot(belief)
```
This distribution is called a *categorical distribution*, which is the term used to describe a discrete distribution describing the probability of observing $n$ outcomes. It is a *multimodal distribution* because we have multiple beliefs about the position of our dog. Of course we are not saying that we think he is simultaneously in three different locations, merely that so far we have narrowed down our knowledge in his position to be one of these three locations. My (Bayesian) belief is that there is a 33.3% chance of being at door 0, 33.3% at door 1, and a 33.3% chance of being at door 8.
A few words about the *mode* of a distribution. This term comes from elementary statistics. Given a set of numbers, such as {1, 2, 2, 2, 3, 3, 4}, the *mode* is the number that occurs most often. For this set the mode is 2. A set can contain more than one mode. The set {1, 2, 2, 2, 3, 3, 4, 4, 4} contains the modes 2 and 4, because both occur three times. We say the former set is *unimodal*, and the latter is *multimodal*.
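If you want to check a set's modes with code, a few lines with Python's `collections.Counter` are enough (the `modes` helper below is just for illustration):
```
from collections import Counter

def modes(data):
    """Return every value that occurs most often in data."""
    counts = Counter(data)
    top = max(counts.values())
    return [val for val, cnt in counts.items() if cnt == top]

print(modes([1, 2, 2, 2, 3, 3, 4]))       # [2] - unimodal
print(modes([1, 2, 2, 2, 3, 3, 4, 4, 4])) # [2, 4] - multimodal
```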
I hand coded the `belief` array in the code above. How would we implement this in code? Well, hallway represents each door as a 1, and each wall as a 0, so we will multiply the hallway variable by the percentage, like so:
```
pbelief = hallway * (1/3)
print(pbelief)
```
## Extracting Information from Multiple Sensor Readings
Let's put Python aside and think about the problem a bit. Suppose we were to read the following from Simon's sensor:
* door
* move right
* door
Can we deduce where Simon is at the end of that sequence? Of course! Given the hallway's layout there is only one place where you can be in front of a door, move once to the right, and be in front of another door, and that is at the left end. Therefore we can confidently state that Simon is in front of the second doorway. If this is not clear, suppose Simon had started at the second or third door. After moving to the right, his sensor would have returned 'wall'. That doesn't match the sensor readings, so we know he didn't start there. We can continue with that logic for all the remaining starting positions. Therefore the only possibility is that he is now in front of the second door. We implement this in Python with:
```
belief = np.array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])
```
Obviously I carefully constructed the hallway layout and sensor readings to give us an exact answer quickly. Real problems will not be so clear cut. But this should trigger your intuition - the first sensor reading only gave us very low probabilities (0.333) for Simon's location, but after a position update and another sensor reading we knew much more about where he is. You might suspect, correctly, that if you had a very long hallway with a large number of doors that after several sensor readings and position updates we would either know where Simon was or have his location narrowed down to a small number of possibilities. For example, suppose we had a long sequence of "door, right, door, right, wall, right, wall, right, door, right, door, right, wall, right, wall, right, wall, right, wall, right, door". Simon could only have started in a location where his movements had a door sequence of [1,1,0,0,1,1,0,0,0,0,1] in the hallway. There might be only one match for that, or at most a few. Either way we will be far more certain about his position than when we started.
We could implement this solution now, but instead let us consider a real world complication to the problem.
## Noisy Sensors
Perfect sensors are rare. Perhaps the sensor would not detect a door if Simon sat in front of it while scratching himself, or it might report there is a door if he is facing towards the wall instead of down the hallway. So in practice when I get a report 'door' I cannot assign 1/3 as the probability for each door. I have to assign something less than 1/3 to each door, and then assign a small probability to each blank wall position.
At this point it doesn't matter exactly what numbers we assign; let us say that the sensor is 3 times more likely to be right than wrong. How would we do this?
At first this may seem like an insurmountable problem. If the sensor is noisy it casts doubt on every piece of data. How can we conclude anything if we are always unsure?
The answer, as with the problem above, is probabilities. We are already comfortable assigning a probabilistic belief about the location of the dog; now we have to incorporate the additional uncertainty caused by the sensor noise. Let's say we get a reading of 'door'. We already said that the sensor is three times as likely to be correct as incorrect, so we should scale the probability distribution by 3 where there is a door. If we do that the result will no longer be a probability distribution, but we will learn how to correct that in a moment.
Let's look at that in Python code. Here I use the variable `z` to denote the measurement as that is the customary choice in the literature (`y` is also commonly used).
```
def update(map_, belief, z, correct_scale):
for i, val in enumerate(map_):
if val == z:
belief[i] *= correct_scale
belief = np.array([0.1] * 10)
reading = 1 # 1 is 'door'
update(hallway, belief, z=reading, correct_scale=3.)
print('sum =', sum(belief))
bp.bar_plot(belief)
```
We can see that this is not a probability distribution because it does not sum to 1.0. But we can see that the code is doing mostly the right thing - the doors are assigned a number (0.3) that is 3 times higher than the walls (0.1). All we need to do is normalize the result so that the probabilities correctly sum to 1.0. Normalization is done by dividing each element by the sum of all elements in the list.
Also, it is a bit odd to be talking about "3 times as likely to be right as wrong". We are working in probabilities, so let's specify the probability of the sensor being correct, and compute the scale factor from that.
```
def normalize(distribution):
    """ Normalize distribution so it sums to 1.0"""
    assert distribution.dtype.kind == 'f'
    distribution /= sum(distribution.astype(float))
def update(map_, belief, z, prob_correct):
scale = prob_correct / (1. - prob_correct)
for i, val in enumerate(map_):
if val == z:
belief[i] *= scale
normalize(belief)
belief = np.array([0.1] * 10)
update(hallway, belief, 1, prob_correct=.75)
print('sum =', sum(belief))
print('probability of door =', belief[0])
print('probability of wall =', belief[2])
bp.bar_plot(belief)
```
We can see from the output that the sum is now 1.0, and that the probability of a door vs wall is still three times larger. The result also fits our intuition that the probability of a door must be less than 0.333, and that the probability of a wall must be greater than 0.0. Finally, it should fit our intuition that we have not yet been given any information that would allow us to distinguish between any given door or wall position, so all door positions should have the same value, and the same should be true for wall positions.
This result is called the *posterior*, which is short for *posterior probability distribution*. All this means is a probability distribution *after* incorporating the measurement information (posterior means 'after' in this context). To review, the *prior* is the probability distribution before including the measurement's information. Another term is the *likelihood* - this is the probability for the *evidence* (the measurement in this case) being true. That gives us this equation:
$$\mathtt{posterior} = \frac{\mathtt{prior}\times \mathtt{likelihood}}{\mathtt{normalization}}$$
It is very important to learn and internalize these terms as most of the literature uses them exclusively.
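To make the terms concrete, here is one way to write the hallway computation directly in terms of that equation. The likelihood of 0.75 at doors and 0.25 at walls corresponds to the sensor being three times as likely to be right as wrong, so this should reproduce the door and wall probabilities printed by `update()` above:
```
prior = np.array([0.1] * 10)
likelihood = np.where(hallway == 1, 0.75, 0.25) # sensor read 'door', 75% chance it is right

posterior = prior * likelihood # the numerator: prior times likelihood
posterior /= sum(posterior)    # the normalization term

print('door:', posterior[0])
print('wall:', posterior[2])
```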
## Incorporating Movement Data
Recall how quickly we were able to find an exact solution to our dog's position when we incorporated a series of measurements and movement updates. However, that occurred in a fictional world of perfect sensors. Might we be able to find an exact solution even in the presence of noisy sensors?
Unfortunately, the answer is no. Even if the sensor readings perfectly match an extremely complicated hallway map we could not say that we are 100% sure that the dog is in a specific position - there is, after all, the possibility that every sensor reading was wrong! Naturally, in a more typical situation most sensor readings will be correct, and we might be close to 100% sure of our answer, but never 100% sure. This may seem complicated, but let's go ahead and program the math, which as we have seen is quite simple.
First let's deal with the simple case - assume the movement sensor is perfect, and it reports that the dog has moved one space to the right. How would we alter our `belief` array?
I hope after a moment's thought it is clear that we should shift all the values one space to the right. If we previously thought there was a 50% chance of Simon being at position 3, then after the move to the right we should believe that there is a 50% chance he is at position 4. So let's implement that. Recall that the hallway is circular, so we will use modulo arithmetic to perform the shift correctly.
```
def perfect_predict(belief, move):
""" move the position by 'move' spaces, where positive is
to the right, and negative is to the left
"""
n = len(belief)
result = np.zeros(n)
for i in range(n):
result[i] = belief[(i-move) % n]
belief[:] = result # copy back to original array
belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05])
bp.bar_plot(belief, title='Before prediction')
plt.show()
perfect_predict(belief, 1)
bp.bar_plot(belief, title='After prediction')
```
We can see that we correctly shifted all values one position to the right, wrapping from the end of the array back to the beginning.
## Terminology
I introduced this terminology in the last chapter, but let's review to help solidify your knowledge.
The *system* is what we are trying to model or filter. Here the system is our dog. The state is its current configuration or value. In this chapter the state is our dog's position. We rarely know the actual state, so we say our filters produce the *estimated state* of the system. In practice this often gets called the state, so be careful to understand if the reference is to the state of the filter (which is the estimated state), or the state of the system (which is the actual state).
One cycle of prediction and updating with a measurement is called the state or system *evolution*, which is short for *time evolution* [7]. Another term is *system propagation*. This term refers to how the state of the system changes over time. For filters, this time is usually discrete. For our dog tracking, the system state is the position of the dog, so the state evolution is the position after a discrete amount of time has passed.
We model the system behavior with the *process model*. Here, our process model is that the dog moves one position at each time step. This is not a particularly accurate model of how dogs behave. The error in the model is called the *system error* or *process error*.
More terminology - the prediction is our new *prior*. Time has moved forward and we made a prediction without benefit of knowing the measurements.
## Adding Noise to the Prediction
We want to solve real world problems, and we have already stated that all sensors have noise. Therefore the code above must be wrong since it assumes we have perfect measurements. What if the sensor reported that our dog moved one space, but he actually moved two spaces, or zero? Once again this may initially sound like an insurmountable problem, but let's model it and see what happens. Since this is an example we will create a simple noise model for the sensor - later in the book we will handle more difficult errors.
We will say that when the sensor sends a movement update, it is 80% likely to be right, and it is 10% likely to overshoot one position to the right, and 10% likely to undershoot to the left. That is, if we say the movement was 4 (meaning 4 spaces to the right), the dog is 80% likely to have moved 4 spaces to the right, 10% to have moved 3 spaces, and 10% to have moved 5 spaces.
This is slightly harder than the math we have done so far, but it is still tractable. Each result in the array now needs to incorporate probabilities for 3 different situations. For example, consider position 9 for the case where the reported movement is 2. It should be clear that after the move we need to incorporate the probability that the dog was at position 7 (9-2). However, there is a small chance that our dog actually moved from either 1 or 3 spaces away due to the sensor noise, so we also need to use positions 6 and 8. How much? Well, we have the probabilities, so we can just multiply and add. It would be 80% of position 7 plus 10% of position 6 and 10% of position 8! Let's try coding that:
```
def predict(belief, move, p_correct, p_under, p_over):
n = len(belief)
result = np.zeros(n)
for i in range(n):
result[i] = (
belief[(i-move) % n] * p_correct +
belief[(i-move-1) % n] * p_over +
belief[(i-move+1) % n] * p_under)
belief[:] = result
belief = np.array([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
predict(belief, move=2, p_correct=.8, p_under=.1, p_over=.1)
bp.bar_plot(belief)
```
The simple test case that we ran appears to work correctly. We initially believed that the dog was in position 3 with 100% certainty; after the movement update we now give an 80% probability to the dog being in position 5, and a 10% chance to undershooting to position 4, and a 10% chance of overshooting to position 6. Let us look at a case where we have multiple beliefs:
```
belief = np.array([0, 0, .4, .6, 0, 0, 0, 0, 0, 0])
predict(belief, move=2, p_correct=.8, p_under=.1, p_over=.1)
bp.bar_plot(belief)
```
Here the results are more complicated, but you should still be able to work it out in your head. The 0.04 is due to the possibility that the 0.4 belief undershot by 1. The 0.38 is due to the following: the 80% chance that we moved 2 positions (0.4 $\times$ 0.8) and the 10% chance that we undershot (0.6 $\times$ 0.1). Overshooting plays no role here because if we overshot both 0.4 and 0.6 would be past this position. **I strongly suggest working some examples until all of this is very clear, as so much of what follows depends on understanding this step.**
If you look at the probabilities after performing the prediction you will probably feel dismay. In the example above we started with probabilities of 0.4 and 0.6 in two positions; after performing the prediction the probabilities are not only lowered, but they are strewn out across the map.
```
bp.bar_plot(belief)
```
This is not a coincidence, or the result of a carefully chosen example - it is always true of the evolution (predict step). This is inevitable; if our sensor is noisy we will lose a bit of information on every prediction. Suppose we were to perform the prediction an infinite number of times - what would the result be? If we lose information on every step, we must eventually end up with no information at all, and our probabilities will be equally distributed across the `belief` array. Let's try this with 500 iterations.
```
belief = np.array([1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
for i in range(500):
predict(belief, move=1, p_correct=.8, p_under=.1, p_over=.1)
bp.bar_plot(belief)
```
After 500 iterations we have lost all information, even though we were 100% sure that we started in position 0. Feel free to play with the numbers to see the effect of a differing number of updates. For example, after 100 updates we have a small amount of information left.
And, if you are viewing this on the web or in Jupyter Notebook, here is an animation of that output.
<img src="animations/02_no_info.gif">
## Generalizing with Convolution
In the code above we made the assumption that the movement error is only one position. In any real problem it will almost always be possible for the error to be two, three, or more positions.
This is easily solved with *convolution*. Convolution is the mathematical technique we use to modify one function with another function. In our case we are modifying our probability distribution with the error function that represents the probability of a movement error. In fact, the implementation of `predict()` is a convolution, though we did not call it that. Formally, convolution is defined as
$$ (f \ast g) (t) = \int_0^t \!f(\tau) \, g(t-\tau) \, \mathrm{d}\tau$$
where $f\ast g$ is the notation for convolving f by g. It does not mean multiply.
Integrals are for continuous functions, but we are using discrete functions, so we replace the integral with a summation, and the parenthesis with array brackets.
$$ (f \ast g) [t] = \sum\limits_{\tau=0}^t \!f[\tau] \, g[t-\tau]$$
If you look at that equation and compare it to the `predict()` function you can see that they are doing the same thing.
I would love to go on and on about convolution, but we have a filter to implement. Khan Academy [4] has a good mathematical introduction to convolution, and Wikipedia has some excellent animations of convolutions [5]. But the general idea is already clear to you. You slide across an array, multiplying the neighbors of the current cell with the values of a second array. This second array is called the *kernel*. In our example above we used 0.8 for the probability of moving to the correct location, 0.1 for undershooting, and 0.1 for overshooting. We make a kernel of this with the array `[0.1, 0.8, 0.1]`. So all we need to do is write a loop that goes over each element of our array, multiplying by the kernel, and summing the results. To emphasize that the belief is a probability distribution I have named the belief `distribution`.
```
def predict(distribution, offset, kernel):
N = len(distribution)
kN = len(kernel)
width = int((kN - 1) / 2)
result = np.zeros(N)
for i in range(N):
for k in range (kN):
index = (i + (width-k) - offset) % N
result[i] += distribution[index] * kernel[k]
distribution[:] = result[:] # update belief
```
We should test that this is correct:
```
belief = np.array([.05] * 10)
belief[4] = .55
predict(belief, offset=1, kernel=[.1, .8, .1])
bp.bar_plot(belief)
```
All of the elements are unchanged except the middle ones, which is correct. The value in position 4 should be $(0.1 \times 0.05)+ (0.8 \times 0.05) + (0.1 \times 0.55) = 0.1$, which it does. Position 5 should be $(0.1 \times 0.05) + (0.8 \times 0.55)+ (0.1 \times 0.05) = 0.45$, which it does. Finally, position 6 is computed the same as position 4, and it is also correct.
Finally, let's ensure that it shifts the positions correctly for movements greater than one.
```
belief = np.array([.05] * 10)
belief[4] = .55
predict(belief, offset=3, kernel=[.1, .8, .1])
bp.bar_plot(belief)
```
The position was correctly shifted by 3 positions, so the code seems to be correct.
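Since `predict()` performs a circular convolution followed by a shift, we can also cross-check it against SciPy's convolution routine (assuming SciPy is available; this is only a sanity check, not how the filter is implemented here):
```
from scipy.ndimage import convolve

belief = np.array([.05] * 10)
belief[4] = .55
kernel = [.1, .8, .1]

# reference: circular convolution of the belief with the kernel, then shift by the offset
reference = np.roll(convolve(belief, kernel, mode='wrap'), 3)

predict(belief, offset=3, kernel=kernel) # predict() modifies belief in place
print(np.allclose(belief, reference))    # should print True
```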
It will usually be true that a small error is very likely, and a large error is very unlikely. For example, a kernel might look like
[.02, .1, .2, .36, .2, .1, .02]
But maybe an overshoot is very likely, and an undershoot is impossible. That kernel might be
[.0, .0, .5, .3, .2]
You probably will never be able to compute exact probabilities for the kernel of a real problem. It is usually enough to be approximately right. What you *must* do is ensure that the terms of the kernel sum to 1. The kernel expresses the probability of any given move, and the sum of any probability distribution must be 1.
An easy way to do this is to make each term *proportionally* correct and then normalize it to sum to 1.
```
kernel = np.array([1, 2, 6, 2, 1], dtype=float)
normalize(kernel)
print(kernel)
```
## Integrating Measurements and Movement Updates
The problem of losing information during a prediction may make it seem as if our system would quickly devolve into no knowledge. However, recall that our process is not an endless series of predictions, but a cycle of predictions followed by updates. The output of the update step, where we measure the current position, is fed into the prediction. The prediction step, with a degraded certainty, is then fed back into the update step where we measure the position again.
Let's think about this intuitively. After the first update->predict round we have degraded the knowledge we gained by the measurement by a small amount. But now we take another measurement. When we try to incorporate that new measurement into our belief, do we become more certain, less certain, or equally certain? Consider a simple case - you are sitting in your office. A co-worker asks another co-worker where you are, and they report "in his office". You keep sitting there while they ask and answer "has he moved"? "No" "Where is he" "In his office". Eventually you get up and move, and let's say the person didn't see you move. At that time the questions will go "Has he moved" "no" (but you have!) "Where is he" "In the kitchen". Wow! At that moment the statement that you haven't moved conflicts strongly with the next measurement that you are in the kitchen. If we were modeling these with probabilities the probability that you are in your office would lower, and the probability that you are in the kitchen would go up a little bit. But now imagine the subsequent conversation: "has he moved" "no" "where is he" "in the kitchen". Pretty quickly the belief that you are in your office would fade away, and the belief that you are in the kitchen would increase to near certainty. The belief that you are in the office will never go to zero, nor will the belief that you are in the kitchen ever go to 1.0 because of the chances of error, but in practice your co-workers would be correct to be quite confident in their system.
That is what intuition tells us. What does the math tell us?
Well, we have already programmed the update step, and we have programmed the predict step. All we need to do is feed the result of one into the other, and we will have programmed our dog tracker!!! Let's see how it performs. We will input measurements as if the dog started at position 0 and moved right at each update. However, as in a real world application, we will start with no knowledge and assign equal probability to all positions.
```
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
belief = np.array([.1] * 10)
update(hallway, belief, z=1, prob_correct=.75)
bp.bar_plot(belief)
kernel = (.1, .8, .1)
predict(belief, 1, kernel)
bp.bar_plot(belief)
```
So after the first update we have assigned a high probability to each door position, and a low probability to each wall position. The predict step shifted these probabilities to the right, smearing them about a bit. Now let's look at what happens at the next sense.
```
update(hallway, belief, z=1, prob_correct=.75)
bp.bar_plot(belief)
```
Notice the tall bar at position 1. This corresponds with the (correct) case of starting at position 0, sensing a door, shifting 1 to the right, and sensing another door. No other position makes this set of observations as likely. Now let's predict the movement and then sense the wall.
```
predict(belief, 1, kernel)
update(hallway, belief, z=0, prob_correct=.75)
bp.bar_plot(belief)
```
This is exciting! We have a very prominent bar at position 2 with a value of around 35%. It is over twice the value of any other bar in the plot, and about 4 percentage points larger than the tallest bar in our last plot, which was around 31%. Let's run one more predict->update cycle.
```
predict(belief, 1, kernel)
update(hallway, belief, z=0, prob_correct=.75)
bp.bar_plot(belief)
```
Here things have degraded a bit due to the long string of wall positions in the map. We cannot be as sure where we are when there is an undifferentiated line of wall positions, so naturally our probabilities spread out a bit.
I spread the computation across several cells, iteratively calling `predict()` and `update()`. This chart from the **g-h Filter** chapter illustrates the algorithm.
```
import gh_internal
gh_internal.create_predict_update_chart()
```
This filter is a form of the g-h filter. Here we are using the percentages for the errors to implicitly compute the $g$ and $h$ parameters. We could express the discrete Bayes algorithm as a g-h filter, but that would obscure the logic of this filter.
We can express this in pseudocode.
**Initialization**
1. Initialize our belief in the state
**Predict**
1. Based on the system behavior, predict state at the next time step
2. Adjust belief to account for the uncertainty in prediction
**Update**
1. Get a measurement and associated belief about its accuracy
2. Compute residual between estimated state and measurement
3. Determine whether the measurement matches each state
4. Update state belief if it matches the measurement
When we cover the Kalman filter we will use this exact same algorithm; only the details of the computation will differ.
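The pseudocode maps directly onto the two functions we have written. Below is a compact sketch of that cycle using the chapter's `predict()` and `update()`; the map, kernel, and measurement sequence are purely illustrative.

```python
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
kernel = (.1, .8, .1)
belief = np.array([.1] * 10)                      # Initialization: no knowledge
for z in [1, 0, 0, 0]:                            # one sensor reading per time step
    predict(belief, 1, kernel)                    # Predict: shift and spread the belief
    update(hallway, belief, z, prob_correct=.75)  # Update: incorporate the measurement
bp.bar_plot(belief)
```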
Finally, for those viewing this in a Jupyter Notebook or on the web, here is an animation of that algorithm.
<img src="animations/02_simulate.gif">
## The Effect of Bad Sensor Data
You may be suspicious of the results above because I always passed correct sensor data into the functions. However, we are claiming that this code implements a *filter* - it should filter out bad sensor measurements. Does it do that?
To make this easy to program and visualize I will change the layout of the hallway to mostly alternating doors and walls, and run the algorithm on 5 correct measurements:
```
hallway = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]
kernel = (.1, .8, .1)
belief = np.array([.1] * 10)
measurements = [1, 0, 1, 0, 0]
for m in measurements:
update(hallway, belief, z=m, prob_correct=.75)
predict(belief, 1, kernel)
bp.bar_plot(belief)
```
At this point we have correctly identified the likely cases: we either started at position 0 or position 5, because we saw the sequence of doors and walls 1, 0, 1, 0, 0. But now let's inject a bad measurement and see what happens:
```
update(hallway, belief, z=m, prob_correct=.75)  # m is still 0 from the loop above; the correct reading here would be 1 (door)
predict(belief, 1, kernel)
bp.bar_plot(belief)
```
That one bad measurement appears to have significantly eroded our knowledge. However, note that our highest probabilities are still at 0 and 5, which is correct. Now let's continue with a series of correct measurements
```
with figsize(y=5.5):
measurements = [0, 1, 0, 1, 0, 0]
for i, m in enumerate(measurements):
update(hallway, belief, z=m, prob_correct=.75)
predict(belief, 1, kernel)
plt.subplot(3, 2, i+1)
bp.bar_plot(belief, title='step{}'.format(i+1))
```
As you can see we quickly filtered out the bad sensor reading and converged on the most likely positions for our dog.
## Drawbacks and Limitations
Do not be misled by the simplicity of the examples I chose. This is a robust and complete implementation of a histogram filter, and you may use the code in real world solutions. If you need a multimodal, discrete filter, this filter works.
With that said, while this filter is used in industry, it is not used often because it has several limitations. Getting around those limitations is the motivation behind the chapters in the rest of this book.
The first problem is scaling. Our dog tracking problem used only one variable, $pos$, to denote the dog's position. Most interesting problems will want to track several things in a large space. Realistically, at a minimum we would want to track our dog's $(x,y)$ coordinate, and probably his velocity $(\dot{x},\dot{y})$ as well. We have not covered the multidimensional case, but instead of a histogram we use a multidimensional grid to store the probabilities at each discrete location. Each `update()` and `predict()` step requires updating all values in the grid, so a simple four-variable problem would require $O(n^4)$ running time *per time step*. Realistic filters can have 10 or more variables to track, leading to exorbitant computation requirements.
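To put rough numbers on that claim, here is a small back-of-the-envelope sketch; the choice of 100 bins per state variable is an arbitrary assumption.

```python
bins_per_dim = 100   # assumed resolution per state variable
for n_vars in (1, 2, 4):
    cells = bins_per_dim ** n_vars
    print('{} variable(s): {:,} cells touched per predict/update step'.format(n_vars, cells))
```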
The second problem is that the histogram is discrete, but we live in a continuous world. The histogram requires that you model the output of your filter as a set of discrete points. In our dog in the hallway example, we used 10 positions, which is obviously far too few positions for anything but a toy problem. For example, for a 100 meter hallway you would need 10,000 positions to model the hallway to 1cm accuracy. So each update and predict operation would entail performing calculations for 10,000 different probabilities. It gets exponentially worse as we add dimensions. If our dog was roaming in a 100x100 m$^2$ courtyard, we would need 100,000,000 bins (10,000$^2$) to get 1cm accuracy.
A third problem is that the histogram is multimodal. This is not always a problem - an entire class of filters, the particle filters, are multimodal and are often used because of this property. But imagine if the GPS in your car reported to you that it is 40% sure that you are on D street, but 30% sure you are on Willow Avenue. I doubt that you would find that useful. Also, GPSs report their error - they might report that you are at (109.878W, 38.326N) with an error of 9 meters. There is no clear mathematical way to extract error information from a histogram. Heuristics suggest themselves to be sure, but there is no exact determination. You may or may not care about that while driving, but you surely do care if you are trying to send a rocket to Mars or track and hit an oncoming missile.
This difficulty is related to the fact that the filter often does not represent what is physically occurring in the world. Consider this distribution for our dog:
```
belief = [0.2245871, 0.06288015, 0.06109133,
0.0581008, 0.09334062, 0.2245871,
0.06288015, 0.06109133, 0.0581008, 0.09334062]
bp.bar_plot(belief)
```
The largest probabilities are in position 0 and position 5. This does not fit our physical intuition at all. A dog cannot be in two places at once (my dog Simon certainly tries - his food bowl and my lap often have equal allure to him). We would have to use heuristics to decide how to interpret this distribution, and there is usually no satisfactory answer. This is not always a weakness - a considerable amount of literature has been written on *Multi-Hypothesis Tracking (MHT)*. We cannot always distill our knowledge to one conclusion, and MHT uses various techniques to maintain multiple story lines at once, using backtracking schemes to go *back in time* to correct hypotheses once more information is known. This will be the subject of later chapters. In other cases we truly have a multimodal situation - we may be optically tracking pedestrians on the street and need to represent all of their positions.
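To make the point concrete, here is a sketch (not from the original text) of two obvious heuristics for collapsing that histogram to a single estimate; neither is satisfying for a bimodal belief.

```python
belief = np.array([0.2245871, 0.06288015, 0.06109133,
                   0.0581008, 0.09334062, 0.2245871,
                   0.06288015, 0.06109133, 0.0581008, 0.09334062])
print('mode (argmax): ', np.argmax(belief))               # picks 0, ignoring the equal peak at 5
print('expected value:', np.sum(np.arange(10) * belief))  # lands between the peaks, at neither
```

Both heuristics hide the fact that the filter considers two locations almost equally likely.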
In practice it is the exponential increase in computation time that leads to the discrete Bayes filter being the least frequently used of all filters in this book. Many problems are best formulated as discrete or multimodal, but we have other filter choices with better performance. With that said, if I had a small problem that this technique could handle I would choose to use it; it is trivial to implement, debug, and understand, all virtues.
## Tracking and Control
So far we have been tracking an object which is moving independently. But consider this very similar problem. I am automating a warehouse and want to use robots to collect all of the items for a customer's order. Perhaps the easiest way to do this is to have the robots travel on a train track. I want to be able to send the robot a destination and have it go correctly to that point. But train tracks and robot motors are imperfect. Wheel slippage and imperfect motors mean that the robot is unlikely to travel to exactly the position you command.
So, we add sensors. We can add some sort of device to help the robot determine its position. Perhaps we mount magnets on the track every few feet, and use a Hall sensor to count how many magnets are passed. If we have counted 10 magnets then the robot should be at the 10th magnet. Of course it is possible to either miss a magnet or to count it twice, so we have to accommodate some degree of error. In any case, it should be clear that we can use the code in the previous section to track our robot, since the magnet counting is very similar to doorway sensing.
But we are not done. A key lesson from the g-h filters chapter is to never throw information away. If you have information you should use it to improve your estimate. What information are we leaving out? Well, we know what our destination is, and we know what control inputs we are feeding to the wheels of the robot at each moment in time. For example, let's say that once a second we send a movement command to the robot - move left 1 unit, move right 1 unit, or stand still. This is obviously a simplification because I am not taking acceleration into account, but I am not trying to teach control theory. If I send the command 'move left 1 unit' I expect that in one second from now the robot will be 1 unit to the left of where it is now. But, wheels and motors are imperfect. We will assume that it never makes a mistake and goes right when told to go left, so the errors should be relatively small. Thus the robot might end up 0.9 units away, or maybe 1.2 units away.
Now the entire solution is clear. For the dog which was moving independently we assumed that he kept moving in whatever direction he was previously moving. That is a dubious assumption for my dog! Robots are far more predictable. Instead of making a dubious prediction based on assumption of behavior we will feed in the command that we sent to the robot! In other words, when we call `predict()` we will pass in the commanded movement that we gave the robot along with a kernel that describes the likelihood of that movement.
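Here is a minimal sketch of that idea using the chapter's `predict()` function; the starting belief and the 10%/80%/10% motion kernel are illustrative assumptions.

```python
belief = np.array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0.])  # we believe we are at position 2
commanded_move = 1                                            # "move right 1 unit"
motion_kernel = [.1, .8, .1]   # assumed chance of undershoot / exact move / overshoot
predict(belief, commanded_move, motion_kernel)                # feed in the command, not a guess
bp.bar_plot(belief)
```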
### Simulating the Train Behavior
To fully implement this filter we need to simulate an imperfect train. When we command it to move, it will sometimes make a small mistake, and its sensor will sometimes return the incorrect value.
```
import random   # used by move() and sense() below


class Train(object):
def __init__(self, track_len, kernel=[1.], sensor_accuracy=.9):
self.track_len = track_len
self.pos = 0
self.kernel = kernel
self.sensor_accuracy = sensor_accuracy
def move(self, distance=1):
""" move in the specified direction
with some small chance of error"""
self.pos += distance
# insert random movement error according to kernel
r = random.random()
s = 0
offset = -(len(self.kernel) - 1) / 2
for k in self.kernel:
s += k
if r <= s:
break
offset += 1
self.pos = int((self.pos + offset) % self.track_len)
return self.pos
def sense(self):
pos = self.pos
# insert random sensor error
r = random.random()
if r > self.sensor_accuracy:
if random.random() > 0.5:
pos += 1
else:
pos -= 1
return pos
```
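Before using it in the filter, a quick sanity check (not in the original text) can be reassuring: with a perfect kernel and a perfect sensor the train should report exactly the commanded positions.

```python
import random

t = Train(track_len=10, kernel=[1.], sensor_accuracy=1.0)
for _ in range(3):
    t.move(distance=2)
    print('pos =', t.pos, ' sensed =', t.sense())   # expect 2, 4, 6 with no error
```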
With that we are ready to write the simulation. We will put it in a function so that we can run it with different assumptions. I will assume that the robot always starts at the beginning of the track. The track is implemented as being 10 units long, but think of it as a track of length, say 10,000, with the magnet pattern repeated every 10 units. 10 makes it easier to plot and inspect.
```
def simulate(iterations, kernel, sensor_accuracy,
move_distance, do_print=True):
track = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
belief = np.array([0.01] * 10)
belief[0] = .9
normalize(belief)
robot = Train(len(track), kernel, sensor_accuracy)
for i in range(iterations):
robot.move(distance=move_distance)
m = robot.sense()
if do_print:
print('''time {}: pos {}, sensed {}, '''
'''at position {}'''.format(
i, robot.pos, m, track[robot.pos]))
update(track, belief, m, sensor_accuracy)
index = np.argmax(belief)
if do_print:
print(''' predicted position is {}'''
''' with confidence {:.4f}%:'''.format(
index, belief[index]*100))
if i < iterations - 1:
predict(belief, move_distance, kernel)
bp.bar_plot(belief)
if do_print:
print()
print('final position is', robot.pos)
index = np.argmax(belief)
print('''predicted position is {} with '''
'''confidence {:.4f}%:'''.format(
index, belief[index]*100))
```
Read the code and make sure you understand it. Now let's do a run with no sensor or movement error. If the code is correct it should be able to locate the robot with no error. The output is a bit tedious to read, but if you are at all unsure of how the update/predict cycle works make sure you read through it carefully to solidify your understanding.
```
import random
random.seed(3)
np.set_printoptions(precision=2, suppress=True, linewidth=60)
simulate(4, kernel=[1.], sensor_accuracy=.999,
move_distance=4, do_print=True)
```
We can see that the code was able to perfectly track the robot so we should feel reasonably confident that the code is working. Now let's see how it fares with some errors.
```
random.seed(3)
simulate(4, kernel=[.1, .8, .1], sensor_accuracy=.9,
move_distance=4, do_print=True)
```
Here we see that there was a sense error at time 1, but we are still quite confident in our position.
Now let's run a very long simulation and see how the filter responds to errors.
```
with figsize(y=5):
for i in range (4):
random.seed(3)
plt.subplot(321+i)
simulate(148+i, kernel=[.1, .8, .1],
sensor_accuracy=.8,
move_distance=4, do_print=False)
plt.title ('iteration {}'.format(148+i))
```
We can see that there was a problem on iteration 149 as the confidence degrades. But within a few iterations the filter is able to correct itself and regain confidence in the estimated position.
## Bayes Theorem
We developed the math in this chapter merely by reasoning about the information we have at each moment. In the process we discovered *Bayes Theorem*. We will go into the specifics of the math of Bayes theorem later in the book. For now we will take a more intuitive approach. Recall from the preface that Bayes theorem tells us how to compute the probability of an event given previous information. That is exactly what we have been doing in this chapter. With luck our code should match the Bayes Theorem equation!
We implemented the `update()` function with this probability calculation:
$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$
To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.
Bayes theorem is
$$P(A|B) = \frac{P(B | A)\, P(A)}{P(B)}$$
If you are not familiar with this notation, let's review. $P(A)$ means the probability of event $A$. If $A$ is the event of a fair coin landing heads, then $P(A) = 0.5$.
$P(A|B)$ is called a *conditional probability*. That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today if it also rained yesterday because rain systems tend to last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P(\mathtt{rain_{today}}|\mathtt{rain_{yesterday}})$.
In the Bayes theorem equation above $B$ is the *evidence*, $P(A)$ is the *prior*, $P(B | A)$ is the *likelihood*, and $P(A|B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $Z$ for the measurement. Hence, we want to know $P(x_i|Z)$, that is, the probability of the dog being at $x_i$ given the measurement $Z$.
So, let's plug that into the equation and solve it.
$$P(x_i|Z) = \frac{P(Z|x_i) P(x_i)}{P(Z)}$$
That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $P(Z|x_i)$. This is the probability for the measurement at every cell $x_i$. $P(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function, where `belief` is our prior $P(x_i)$.
```python
for i, val in enumerate(map_):
if val == z:
belief[i] *= correct_scale
else:
belief[i] *= 1.
```
I added the `else` here, which has no mathematical effect, to point out that every element in $x$ (called `belief` in the code) is multiplied by a probability. You may object that I am multiplying by a scale factor, which I am, but this scale factor is derived from the probability of the measurement being correct vs the probability being incorrect.
The last term to consider is the denominator $P(Z)$. This is the probability of getting the measurement $Z$ without taking the location into account. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes theorem.
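As a check, here is a sketch (not from the original text) that computes the posterior directly from Bayes theorem - likelihood times prior, divided by the sum - and compares it with `update()`. The two agree because the scale factor in `update()` differs from the true likelihood only by a constant that the normalization removes.

```python
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
z, prob_correct = 1, .75

prior = np.array([.1] * 10)
likelihood = np.where(hallway == z, prob_correct, 1 - prob_correct)
posterior = likelihood * prior
posterior /= posterior.sum()             # P(Z) is just the sum of the numerator

belief = np.array([.1] * 10)
update(hallway, belief, z, prob_correct)
print(np.allclose(posterior, belief))    # True
```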
The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as
$$P(A|B) = \frac{P(B | A)\, P(A)}{\int P(B|A)\, P(A)\, \mathtt{d}A}$$
In practice the denominator can be fiendishly difficult to solve analytically (a recent opinion piece for the Royal Statistical Society called it a "dog's breakfast" [8]), and filtering textbooks are filled with integral-laden equations which you cannot be expected to solve. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent.
## Total Probability Theorem
We know now the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the *total probability theorem*. Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$, $P(X_j^{t-1})$, multiplied by the probability of moving from cell $x_j$ to $x_i$. That is
$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$
That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation
```python
for i in range(N):
for k in range (kN):
index = (i + (width-k) - offset) % N
result[i] += prob_dist[index] * kernel[k]
```
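As the convolution references at the end of this chapter suggest, this nested loop is a circular convolution of the belief with the kernel, followed by a shift by `offset`. Here is a sketch (not from the original text) expressing it with `np.roll` and verifying that it matches `predict()`:

```python
def predict_conv(distribution, offset, kernel):
    """Circular convolution with the kernel, then shift by offset (returns a new array)."""
    kernel = np.asarray(kernel, dtype=float)
    width = (len(kernel) - 1) // 2
    conv = np.zeros(len(distribution))
    for k, w in enumerate(kernel):
        conv += w * np.roll(distribution, k - width)   # convolution, kernel centered
    return np.roll(conv, offset)                       # then apply the commanded shift

belief = np.array([.05] * 10)
belief[4] = .55
a = belief.copy()
predict(a, offset=3, kernel=[.1, .8, .1])              # in-place version from this chapter
b = predict_conv(belief, 3, [.1, .8, .1])
print(np.allclose(a, b))                               # True
```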
## Summary
The code is very short, but the result is huge! We have implemented a form of a Bayesian filter. We have learned how to start with no information and derive information from noisy sensors. Even though the sensors in this chapter are very noisy (most sensors are more than 80% accurate, for example) we quickly converge on the most likely position for our dog. We have learned how the predict step always degrades our knowledge, but the addition of another measurement, even when it might have noise in it, improves our knowledge, allowing us to converge on the most likely result.
If you followed the math carefully you will realize that all of this math is exact. The bar charts that we are displaying are not an estimate or guess - they are mathematically exact results that exactly represent our knowledge. The knowledge is probabilistic, to be sure, but it is exact, and correct.
Furthermore, through basic reasoning we were able to discover two extremely important theorems: Bayes theorem and the total probability theorem. I hope you spent time on those sections, as almost any other source will express filtering algorithms in terms of these two theorems. It will be your job to understand what these equations mean and how to turn them into code.
This book is mostly about the Kalman filter. In the g-h filter chapter I told you that the Kalman filter is a type of g-h filter. It is also a type of Bayesian filter. It also uses Bayes theorem and the total probability theorem to filter data, although with a different set of assumptions and conditions than used in this chapter.
The discrete Bayes filter allows us to filter sensors and track an object, but we are a long way from tracking an airplane or a car. This code only handles the 1 dimensional case, whereas cars and planes operate in 2 or 3 dimensions. Also, our position vector is *multimodal*. It expresses multiple beliefs at once. Imagine if your GPS told you "it's 20% likely that you are here, but 10% likely that you are on this other road, and 5% likely that you are at one of 14 other locations". That would not be very useful information. Also, the data is discrete. We split an area into 10 (or whatever) different locations, whereas in most real world applications we want to work with continuous data. We want to be able to represent moving 1 km, 1 meter, 1 mm, or any arbitrary amount, such as 2.347 cm.
Finally, the bar charts may strike you as being a bit less certain than we would want. A 25% certainty may not give you a lot of confidence in the answer. Of course, what is important here is the ratio of this probability to the other probabilities in your vector. If the next largest bar is 23% then we are not very knowledgeable about our position, whereas if the next largest is 3% we are in fact quite certain. But this is not clear or intuitive. However, there is an extremely important insight that Kalman filters implement that will significantly improve our accuracy from the same data.
**If you can understand this chapter you will be able to understand and implement Kalman filters.** I cannot stress this enough. If anything is murky, go back and reread this chapter and play with the code. The rest of this book will build on the algorithms that we use here. If you don't intuitively understand why this filter works, and can't at least work through the math, you will have little success with the rest of the material. However, if you grasp the fundamental insight - multiplying probabilities when we measure, and shifting probabilities when we predict, leads to a converging solution - then you understand everything important you need for the Kalman filter.
## References
* [1] D. Fox, W. Burgard, and S. Thrun. "Monte Carlo localization: Efficient position estimation for mobile robots." In *Journal of Artificial Intelligence Research*, 1999.
http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/jair-localize.html
* [2] Dieter Fox, et al. "Bayesian Filters for Location Estimation". In *IEEE Pervasive Computing*, September 2003.
http://swarmlab.unimaas.nl/wp-content/uploads/2012/07/fox2003bayesian.pdf
* [3] Sebastian Thrun. "Artificial Intelligence for Robotics".
https://www.udacity.com/course/cs373
* [4] Khan Academy. "Introduction to the Convolution"
https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution
* [5] Wikipedia. "Convolution"
http://en.wikipedia.org/wiki/Convolution
* [6] Wikipedia. "Law of total probability"
http://en.wikipedia.org/wiki/Law_of_total_probability
* [7] Wikipedia. "Time Evolution"
https://en.wikipedia.org/wiki/Time_evolution
* [8] "We need to rethink how we teach statistics from the ground up". Royal Statistical Society opinion piece.
http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up