Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
14,900 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Please find jax implementation of this notebook here
Step2: Residual block
Step3: Example where number of input and output channels is the same.
Step4: Example where we change the number of channels.
Step5: Example where we change the number of channels and the spatial size.
Step6: Resnet block
We define a resnet block to be a sequence of residual blocks, where the first residual block in the sequence uses a 1x1 convolution (with stride 2) to change the number of channels and halve the spatial size. However, the first such resnet block does not use the 1x1 convolution, since its input already has the required shape.
Step7: The full resnet18 model
Step21: Train on Fashion-MNIST
We upscale images from 28x28 to 96x96, so that the input to the global average pooling layer has size 3x3 (since the network downscales by a factor of 32).
Step23: Training Function
Step24: Learning curve | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import math
from IPython import display
try:
import torch
except ModuleNotFoundError:
%pip install -qq torch
import torch
try:
import torchvision
except ModuleNotFoundError:
%pip install -qq torchvision
import torchvision
from torch import nn
from torch.nn import functional as F
from torch.utils import data
from torchvision import transforms
import random
import os
import time
np.random.seed(seed=1)
torch.manual_seed(1)
!mkdir figures # for saving plots
Explanation: Please find jax implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/14/resnet_jax.ipynb
<a href="https://colab.research.google.com/github/Nirzu97/pyprobml/blob/resnet-torch/notebooks/resnet_torch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Residual networks
We implement a residual network (ResNet) CNN.
Based on sec. 7.6 of http://d2l.ai/chapter_convolutional-modern/resnet.html
End of explanation
class Residual(nn.Module):
The Residual block of ResNet.
def __init__(self, input_channels, num_channels, use_1x1conv=False, strides=1):
super().__init__()
self.conv1 = nn.Conv2d(input_channels, num_channels, kernel_size=3, padding=1, stride=strides)
self.conv2 = nn.Conv2d(num_channels, num_channels, kernel_size=3, padding=1)
if use_1x1conv:
self.conv3 = nn.Conv2d(input_channels, num_channels, kernel_size=1, stride=strides)
else:
self.conv3 = None
self.bn1 = nn.BatchNorm2d(num_channels)
self.bn2 = nn.BatchNorm2d(num_channels)
def forward(self, X):
Y = F.relu(self.bn1(self.conv1(X)))
Y = self.bn2(self.conv2(Y))
if self.conv3:
X = self.conv3(X)
Y += X
return F.relu(Y)
Explanation: Residual block
End of explanation
blk = Residual(3, 3)
X = torch.rand(4, 3, 6, 6) # batch size x channels x height x width
Y = blk(X)
Y.shape
Explanation: Example where number of input and output channels is the same.
End of explanation
blk = Residual(3, 6, use_1x1conv=True)
blk(X).shape
Explanation: Example where we change the number of channels.
End of explanation
blk = Residual(3, 6, use_1x1conv=True, strides=2)
blk(X).shape
Explanation: Example where we change the number of channels and the spatial size.
End of explanation
def resnet_block(input_channels, num_channels, num_residuals, first_block=False):
blk = []
for i in range(num_residuals):
if i == 0 and not first_block:
blk.append(Residual(input_channels, num_channels, use_1x1conv=True, strides=2))
else:
blk.append(Residual(num_channels, num_channels))
return blk
Explanation: Resnet block
We define a resnet block to be a sequence of residual blocks, where the first residual block in the sequence uses a 1x1 convolution (with stride 2) to change the number of channels and halve the spatial size. However, the first such resnet block does not use the 1x1 convolution, since its input already has the required shape.
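A quick shape check (a sketch, assuming torch and the Residual/resnet_block definitions above; the input size 24x24 is chosen only for illustration):

```python
blocks = resnet_block(64, 128, 2)
x = torch.rand(1, 64, 24, 24)
for b in blocks:
    x = b(x)
x.shape  # expected: torch.Size([1, 128, 12, 12]) - the first block halves the spatial size and doubles the channels
```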
End of explanation
b1 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)
b2 = nn.Sequential(*resnet_block(64, 64, 2, first_block=True))
b3 = nn.Sequential(*resnet_block(64, 128, 2))
b4 = nn.Sequential(*resnet_block(128, 256, 2))
b5 = nn.Sequential(*resnet_block(256, 512, 2))
# We assume 10 output classes (for Fashion-MNIST)
net = nn.Sequential(b1, b2, b3, b4, b5, nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(), nn.Linear(512, 10))
X = torch.rand(size=(1, 1, 224, 224))
for layer in net:
X = layer(X)
print(layer.__class__.__name__, "output shape:\t", X.shape)
Explanation: The full resnet18 model
End of explanation
def load_data_fashion_mnist(batch_size, resize=None):
Download the Fashion-MNIST dataset and then load it into memory.
trans = [transforms.ToTensor()]
if resize:
trans.insert(0, transforms.Resize(resize))
trans = transforms.Compose(trans)
mnist_train = torchvision.datasets.FashionMNIST(root="../data", train=True, transform=trans, download=True)
mnist_test = torchvision.datasets.FashionMNIST(root="../data", train=False, transform=trans, download=True)
return (
data.DataLoader(mnist_train, batch_size, shuffle=True, num_workers=4),
data.DataLoader(mnist_test, batch_size, shuffle=False, num_workers=4),
)
class Animator:
For plotting data in animation.
def __init__(
self,
xlabel=None,
ylabel=None,
legend=None,
xlim=None,
ylim=None,
xscale="linear",
yscale="linear",
fmts=("-", "m--", "g-.", "r:"),
nrows=1,
ncols=1,
figsize=(3.5, 2.5),
):
# Incrementally plot multiple lines
if legend is None:
legend = []
display.set_matplotlib_formats("svg")
self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [
self.axes,
]
# Use a lambda function to capture arguments
self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
# Add multiple data points into the figure
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
class Timer:
Record multiple running times.
def __init__(self):
self.times = []
self.start()
def start(self):
Start the timer.
self.tik = time.time()
def stop(self):
Stop the timer and record the time in a list.
self.times.append(time.time() - self.tik)
return self.times[-1]
def avg(self):
Return the average time.
return sum(self.times) / len(self.times)
def sum(self):
Return the sum of time.
return sum(self.times)
def cumsum(self):
Return the accumulated time.
return np.array(self.times).cumsum().tolist()
class Accumulator:
For accumulating sums over `n` variables.
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
Set the axes for matplotlib.
axes.set_xlabel(xlabel)
axes.set_ylabel(ylabel)
axes.set_xscale(xscale)
axes.set_yscale(yscale)
axes.set_xlim(xlim)
axes.set_ylim(ylim)
if legend:
axes.legend(legend)
axes.grid()
def try_gpu(i=0):
Return gpu(i) if exists, otherwise return cpu().
if torch.cuda.device_count() >= i + 1:
return torch.device(f"cuda:{i}")
return torch.device("cpu")
def accuracy(y_hat, y):
Compute the number of correct predictions.
if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:
y_hat = torch.argmax(y_hat, axis=1)
cmp_ = y_hat.type(y.dtype) == y
return float(cmp_.type(y.dtype).sum())
def evaluate_accuracy_gpu(net, data_iter, device=None):
Compute the accuracy for a model on a dataset using a GPU.
if isinstance(net, torch.nn.Module):
net.eval() # Set the model to evaluation mode
if not device:
device = next(iter(net.parameters())).device
# No. of correct predictions, no. of predictions
metric = Accumulator(2)
for X, y in data_iter:
X = X.to(device)
y = y.to(device)
metric.add(accuracy(net(X), y), y.numel())
return metric[0] / metric[1]
Explanation: Train on Fashion-MNIST
We upscale images from 28x28 to 96x96, so that the input to the global average pooling layer has size 3x3 (since the network downscales by a factor of 32).
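A small sanity check (sketch, using the net stages b1..b5 defined above) that a 96x96 input indeed reaches the global average pooling layer as a 3x3 feature map (96 / 32 = 3):

```python
X = torch.rand(size=(1, 1, 96, 96))
for blk in (b1, b2, b3, b4, b5):
    X = blk(X)
print(X.shape)  # expected: torch.Size([1, 512, 3, 3])
```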
End of explanation
def train(net, train_iter, test_iter, num_epochs, lr, device):
Train a model with a GPU (defined in Chapter 6).
def init_weights(m):
if type(m) == nn.Linear or type(m) == nn.Conv2d:
nn.init.xavier_uniform_(m.weight)
net.apply(init_weights)
print("training on", device)
net.to(device)
optimizer = torch.optim.SGD(net.parameters(), lr=lr)
loss = nn.CrossEntropyLoss()
animator = Animator(xlabel="epoch", xlim=[1, num_epochs], legend=["train loss", "train acc", "test acc"])
timer, num_batches = Timer(), len(train_iter)
for epoch in range(num_epochs):
# Sum of training loss, sum of training accuracy, no. of examples
metric = Accumulator(3)
net.train()
for i, (X, y) in enumerate(train_iter):
timer.start()
optimizer.zero_grad()
X, y = X.to(device), y.to(device)
y_hat = net(X)
l = loss(y_hat, y)
l.backward()
optimizer.step()
with torch.no_grad():
metric.add(l * X.shape[0], accuracy(y_hat, y), X.shape[0])
timer.stop()
train_l = metric[0] / metric[2]
train_acc = metric[1] / metric[2]
if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
animator.add(epoch + (i + 1) / num_batches, (train_l, train_acc, None))
test_acc = evaluate_accuracy_gpu(net, test_iter)
animator.add(epoch + 1, (None, None, test_acc))
print(f"loss {train_l:.3f}, train acc {train_acc:.3f}, " f"test acc {test_acc:.3f}")
print(f"{metric[2] * num_epochs / timer.sum():.1f} examples/sec " f"on {str(device)}")
Explanation: Training Function
End of explanation
lr, num_epochs, batch_size = 0.05, 10, 256
train_iter, test_iter = load_data_fashion_mnist(batch_size, resize=96)
train(net, train_iter, test_iter, num_epochs, lr, try_gpu())
Explanation: Learning curve
End of explanation |
14,901 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dijkstra's Shortest Path Algorithm
The notebook Set.ipynb implements sets as AVL trees.
Step1: The function shortest_path takes a node source and a dictionary Edges.
The function shortest_path takes two arguments.
- source is the start node.
- Edges is a dictionary that encodes the set of edges of the graph. For every node x the value of Edges[x] has the form
$$ \bigl[ (y_1, l_1), \cdots, (y_n, l_n) \bigr]. $$
This list is interpreted as follows
Step2: The version of shortest_path given below provides a graphical animation of the algorithm.
Step3: Code to Display the Directed Graph
Step4: The function $\texttt{toDot}(\texttt{source}, \texttt{Edges}, \texttt{Fringe}, \texttt{Distance}, \texttt{Visited})$ takes a graph that is represented by
its Edges, a set of nodes Fringe, and a dictionary Distance that has the distance of a node from the node source, and set Visited of nodes that have already been visited.
Step5: Code for Testing
Step6: Crossing the Tunnel
Four persons, Alice, Britney, Charly and Daniel have to cross a tunnel.
The tunnel is so narrow, that at most two persons can cross it together.
In order to cross the tunnel, a torch is needed. Together, they only
have a single torch.
1. Alice is the fastest and can cross the tunnel in 1 minute.
2. Britney needs 2 minutes to cross the tunnel.
3. Charly is slower and needs 4 minutes.
4. Daniel is slowest and takes 5 minutes to cross the tunnel.
What is the fastest plan to cross the tunnel?
We will model this problem as a graph theoretical problem. The nodes of the graph will be sets
of people. In particular, it will be the set of people at the entrance of the tunnel. In order to model the torch, the torch can also be a member of these sets.
Step7: The timing is modelled by a dictionary.
Step8: The function $\texttt{power}(M)$ defined below computes the power set of the set $M$, i.e. we have
Step9: If $B$ is a set of persons, then $\texttt{duration}(B)$ is the time that this group needs to cross the tunnel.
$B$ also contains 'Torch'.
Step10: $\texttt{left_right}(S)$ describes a crossing of the tunnel from the entrance at the left side left to the exit at the right side of the tunnel.
Step11: $\texttt{right_left}(S)$ describes a crossing of the tunnel from right to left.
Step12: The function shortest_path is Dijkstra's algorithm. It returns both a dictionary Parent containing
the parent nodes and a dictionary Distance with the distances. The dictionary Parent can be used to
compute the shortest path leading from the node source to some other node.
Step13: Let us see whether the goal was reachable and how long it takes to reach the goal.
Step14: Given two nodes source and goal and a dictionary containing the parent of every node, the function
find_path returns the path from source to goal. | Python Code:
%run Set.ipynb
Explanation: Dijkstra's Shortest Path Algorithm
The notebook Set.ipynb implements <em style="color:blue">sets</em> as
<a href="https://en.wikipedia.org/wiki/AVL_tree">AVL trees</a>.
The class Set provides the following API:
- Set() creates an empty set.
- S.isEmpty() checks whether the set S is empty.
- S.member(x) checks whether x is an element of the given set S.
- S.insert(x) inserts x into the set S.
This does not return a new set but rather modifies the given set S.
- S.delete(x) deletes x from the set S.
This does not return a new set but rather modifies the set S.
- S.pop() returns the <em style="color:blue">smallest element</em> of the set S.
Furthermore, this element is removed from the given set S.
Since sets are implemented as ordered binary trees, the elements of a set need to be comparable, i.e. if
x and y are inserted into a set, then the expression x < y has to be defined and has to return a
Boolean value. Furthermore, the relation < has to be a
<a href="https://en.wikipedia.org/wiki/linear_order">linear order</a>.
The class Set can be used to implement a priority queue that supports the
<em style="color:blue">removal</em> of elements.
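A minimal usage sketch of this API as a priority queue (it assumes the Set class from Set.ipynb has been loaded, as in the next cell):

```python
Fringe = Set()
Fringe.insert((7, 'b'))
Fringe.insert((3, 'a'))
Fringe.insert((5, 'c'))
Fringe.delete((7, 'b'))   # e.g. drop an entry after finding a shorter path
print(Fringe.pop())       # (3, 'a'): pairs compare lexicographically, so the smallest distance wins
print(Fringe.isEmpty())   # False, (5, 'c') is still queued
```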
End of explanation
def shortest_path(source, Edges):
Distance = { source: 0 }
Visited = { source }
Fringe = Set()
Fringe.insert( (0, source) )
while not Fringe.isEmpty():
d, u = Fringe.pop() # get and remove smallest element
for v, l in Edges[u]:
dv = Distance.get(v, None)
if dv == None or d + l < dv:
if dv != None:
Fringe.delete( (dv, v) )
Distance[v] = d + l
Fringe.insert( (d + l, v) )
Visited.add(u)
return Distance
Explanation: The function shortest_path takes a node source and a dictionary Edges.
The function shortest_path takes two arguments.
- source is the start node.
- Edges is a dictionary that encodes the set of edges of the graph. For every node x the value of Edges[x] has the form
$$ \bigl[ (y_1, l_1), \cdots, (y_n, l_n) \bigr]. $$
This list is interpreted as follows: For every $i = 1,\cdots,n$ there is an edge
$(x, y_i)$ pointing from $x$ to $y_i$ and this edge has the length $l_i$.
The function returns the dictionary Distance. For every node u such that there is a path from source to
u, Distance[u] is the length of the shortest path from source to u. The implementation uses
<a href="https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm">Dijkstra's algorithm</a> and proceeds as follows:
Distance is a dictionary mapping nodes to their estimated distance from the node
source. If d = Distance[x], then we know that there is a path of length d leading
from source to x. However, in general we do not know whether there is a path shorter
than d that also connects the source to the node x.
The function shortest_path maintains an additional variable called Visited.
This variable contains the set of those nodes that have been <em style="color:blue">visited</em>
by the algorithm.
To be more precise, Visited contains those nodes u that have been removed from the
Fringe and for which all neighboring nodes, i.e. those nodes y such that
there is an edge (u,y), have been examined. It can be shown that once a node u is added to
Visited, Distance[u] is the length of the shortest path from source to u.
Fringe is a priority queue that contains pairs of the form (d, x), where x is a node and d
is the distance that x has from the node source. This priority queue is implemented as a set,
which in turn is represented by an ordered binary tree. The fact that we store the node x and the
distance d as a pair (d,x) implies that the distances are used as priorities because pairs are
compared lexicographically.
Initially the only node that is known to be
reachable from source is the node source. Hence Fringe is initialized as the
set { (0, source) }.
As long as the set Fringe is not empty, line 7 of the implementation removes that node u
from the set Fringe that has the smallest distance d from the node source.
Next, all edges leading away from u are visited. If there is an edge (u, v) that has length l,
then we check whether the node v has already a distance assigned. If the node v already has the
distance dv assigned but the value d + l is less than dv, then we have found a
shorter path from source to v. This path leads from source to u and then proceeds
to v via the edge (u,v).
If v had already been visited before and hence dv=Distance[v] is defined, we
have to update the priority of the v in the Fringe. The easiest way to do this is to remove
the old pair (dv, v) from the Fringe and replace this pair by the new pair
(d+l, v), because d+l is the new estimate of the distance between source and v and
d+l is the new priority of v.
Once we have inspected all neighbours of the node u, u is added to the set of those nodes that have
been Visited.
When the Fringe has been exhausted, the dictionary Distance contains the distances of
every node that is reachable from the node source
End of explanation
def shortest_path(source, Edges):
Distance = { source: 0 }
Visited = { source } # set only needed for visualization
Fringe = Set()
Fringe.insert( (0, source) )
while not Fringe.isEmpty():
d, u = Fringe.pop()
display(toDot(source, u, Edges, Fringe, Distance, Visited))
print('_' * 80)
for v, l in Edges[u]:
dv = Distance.get(v, None)
if dv == None or d + l < dv:
if dv != None:
Fringe.delete( (dv, v) )
Distance[v] = d + l
Fringe.insert( (d + l, v) )
Visited.add(u)
display(toDot(source, None, Edges, Fringe, Distance, Visited))
return Distance
Explanation: The version of shortest_path given below provides a graphical animation of the algorithm.
End of explanation
import graphviz as gv
Explanation: Code to Display the Directed Graph
End of explanation
def toDot(source, p, Edges, Fringe, Distance, Visited):
V = set()
for x in Edges.keys():
V.add(x)
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
dot.attr(rankdir='LR', size='8,5')
for x in V:
if x == source:
dot.node(str(x), color='blue', shape='doublecircle')
else:
d = str(Distance.get(x, ''))
if x == p:
dot.node(str(x), label='{' + str(x) + '|' + d + '}', color='magenta')
elif x in Distance and Fringe.member( (Distance[x], x) ):
dot.node(str(x), label='{' + str(x) + '|' + d + '}', color='red')
elif x in Visited:
dot.node(str(x), label='{' + str(x) + '|' + d + '}', color='blue')
else:
dot.node(str(x), label='{' + str(x) + '|' + d + '}')
for u in V:
for v, l in Edges[u]:
dot.edge(str(u), str(v), label=str(l))
return dot
Explanation: The function $\texttt{toDot}(\texttt{source}, \texttt{Edges}, \texttt{Fringe}, \texttt{Distance}, \texttt{Visited})$ takes a graph that is represented by
its Edges, a set of nodes Fringe, and a dictionary Distance that has the distance of a node from the node source, and set Visited of nodes that have already been visited.
End of explanation
Edges = { 'a': [ ('c', 2), ('b', 9)],
'b': [('d', 1)],
'c': [('e', 5), ('g', 3)],
'd': [('f', 2), ('e', 4)],
'e': [('f', 1), ('b', 2)],
'f': [('h', 5)],
'g': [('e', 1)],
'h': []
}
s = 'a'
sp = shortest_path(s, Edges)
sp
Explanation: Code for Testing
End of explanation
All = frozenset({ 'Alice', 'Britney', 'Charly', 'Daniel', 'Torch' })
Explanation: Crossing the Tunnel
Four persons, Alice, Britney, Charly and Daniel have to cross a tunnel.
The tunnel is so narrow, that at most two persons can cross it together.
In order to cross the tunnel, a torch is needed. Together, they only
have a single torch.
1. Alice is the fastest and can cross the tunnel in 1 minute.
2. Britney needs 2 minutes to cross the tunnel.
3. Charly is slower and needs 4 minutes.
4. Daniel is slowest and takes 5 minutes to cross the tunnel.
What is the fastest plan to cross the tunnel?
We will model this problem as a graph theoretical problem. The nodes of the graph will be sets
of people. In particular, it will be the set of people at the entrance of the tunnel. In order to model the torch, the torch can also be a member of these sets.
End of explanation
Time = { 'Alice': 1, 'Britney': 2, 'Charly': 4, 'Daniel': 5, 'Torch': 0 }
Explanation: The timing is modelled by a dictionary.
End of explanation
def power(M):
if M == set():
return { frozenset() }
else:
C = set(M) # C is a copy of M as we don't want to change the set M
x = C.pop() # pop removes the element x from the set C
P1 = power(C)
P2 = { A | {x} for A in P1 }
return P1 | P2
Explanation: The function $\texttt{power}(M)$ defined below computes the power set of the set $M$, i.e. we have:
$$ \texttt{power}(M) = 2^M = \bigl\{A \mid A \subseteq M \bigr\} $$
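For example (a usage sketch; the iteration order of the resulting set is arbitrary):

```python
power({1, 2})
# {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}
```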
End of explanation
def duration(B):
return max(Time[x] for x in B)
Explanation: If $B$ is a set of persons, then $\texttt{duration}(B)$ is the time that this group needs to cross the tunnel.
$B$ also contains 'Torch'.
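A quick illustrative call, using the Time values defined above:

```python
duration({'Alice', 'Daniel', 'Torch'})  # 5: the group is as slow as its slowest member
```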
End of explanation
def left_right(S):
return [(S - B, duration(B)) for B in power(S) if 'Torch' in B and 2 <= len(B) <= 3]
Explanation: $\texttt{left_right}(S)$ describes a crossing of the tunnel from the entrance at the left side left to the exit at the right side of the tunnel.
End of explanation
def right_left(S):
return [(S | B, duration(B)) for B in power(All - S) if 'Torch' in B and 2 <= len(B) <= 3]
Edges = { S: left_right(S) + right_left(S) for S in power(All) }
len(Edges)
Explanation: $\texttt{right_left}(S)$ describes a crossing of the tunnel from right to left.
End of explanation
def shortest_path(source, Edges):
Distance = { source: 0 }
Parent = {}
Fringe = Set()
Fringe.insert( (0, source) )
while not Fringe.isEmpty():
d, u = Fringe.pop()
for v, l in Edges[u]:
dv = Distance.get(v, None)
if dv == None or d + l < dv:
if dv != None:
Fringe.delete( (dv, v) )
Distance[v] = d + l
Fringe.insert( (d + l, v) )
Parent[v] = u
return Parent, Distance
Parent, Distance = shortest_path(frozenset(All), Edges)
Explanation: The function shortest_path is Dijkstra's algorithm. It returns both a dictionary Parent containing
the parent nodes and a dictionary Distance with the distances. The dictionary Parent can be used to
compute the shortest path leading from the node source to some other node.
End of explanation
goal = frozenset()
Distance[goal]
Explanation: Let us see whether the goal was reachable and how long it takes to reach the goal.
End of explanation
def find_path(source, goal, Parent):
p = Parent.get(goal)
if p == None:
return [source]
return find_path(source, p, Parent) + [goal]
Path = find_path(frozenset(All), frozenset(), Parent)
def print_path():
total = 0
print("_" * 81);
for i in range(len(Path)):
Left = set(Path[i])
Right = set(All) - set(Left)
if Left == set() or Right == set():
print(Left, " " * 25, Right)
else:
print(Left, " " * 30, Right)
print("_" * 81);
if i < len(Path) - 1:
if "Torch" in Path[i]:
Diff = set(Path[i]) - set(Path[i+1])
time = duration(Diff)
total += time
print(" " * 20, ">>> ", Diff, ':', time, " >>>")
else:
Diff = set(Path[i+1]) - set(Path[i])
time = duration(Diff)
total += time
print(" " * 20, "<<< ", Diff, ':', time, " <<<")
print("_" * 81)
print('Total time:', total, 'minutes.')
print_path()
Explanation: Given two nodes source and goal and a dictionary containing the parent of every node, the function
find_path returns the path from source to goal.
End of explanation |
14,902 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load test file
Step1: Run Vivado Simulation
!vivado_hls /disk0/Work/xike_hls_module/hls_proj/spk_dect/solution1/script.tcl
State Machine of Each Channel
Each individual channel has a finite state machine, which operates in the FPGA during real-time data acquisition and is designed as below:
Step2: 1. This simulation generate the dout as states
Step3: Now, clearly we extract 8 peaks from 8 independent state machine, the upper 4 is from one tetrode and the below 4 from another.
The next task is to generate a pivital position
Step4: Notice that all the output is decided by one time stamps delay, so that from FPGA view point Mn is related to the value of the next value val.
3. Simulation of pivotal point
_val is the current value to be processed by state machine which you already know val is the following value from current ch
_val_nn is the current value from all channels in the same group
ch_nn[i] is the channel numbers in this group
Mn is the minimum value from all channels updated by val
start_cnt[ch]==1 means pivotal is already found at that ch and no pivotal can be labelled unless start_cnt[ch] get reset.
inactivating counter (independent for each channel)
verilog
if(start_cnt[ch])
{
cnt[ch] += 1;
if (cnt[ch] == INACTIVE_LEN)
{
cnt[ch] = 0;
start_cnt[ch] = 0;
}
}
pivotal judgement (accross channel comparison so not independent)
verilog
bool if_pivotal = 1;
Get_Com_Loop
Step5: Summary
Step6: verilog
muap.data = thr*is_peak[ch];
Step7: verilog
muap.data = _val; | Python Code:
din = np.fromfile('spkDect_test_spk.bin', dtype='float32')
data = np.zeros((40,8))
k = 0
for t in range(40):
for ch in range(8):
data[t,ch] = din[k]
k+=1
fig,ax = subplots(1,2,figsize=(15,5))
ax[0].plot(data[:,:4], '-o');
ax[1].plot(data[:,4:], '-o');
Explanation: Load test file
End of explanation
def show_sim_results(dout, thr):
y = dout.reshape(40,8)
fig, ax = subplots(4,2,figsize=(15,20))
color = ['g','c']
for ch in range(8):
x = data[:,ch]
idx = argmin(x)
# integer (floor) division maps the 8 channels onto the 4x2 grid of axes
ax[ch//2, ch%2].plot(x,'-o', c=color[ch//4], alpha=0.5)
ax[ch//2, ch%2].plot(y[:,ch]/float(2**13),'-o', alpha=0.5);
ax[ch//2, ch%2].axhline(thr ,c='m', ls='--')
ax[ch//2, ch%2].axvline(idx ,c='m', ls='-.')
Explanation: Run Vivado Simulation
!vivado_hls /disk0/Work/xike_hls_module/hls_proj/spk_dect/solution1/script.tcl
State Machine of Each Channel
Each individual channel has a finite state machine, which operates in the FPGA during real-time data acquisition and is designed as below:
//////////////////////////////////////////////////////////////////////////////////
//
// s0 s0
// s0 s0 s0 s0
// s0 s0 s0 s0 s0
// --------------------------------------------------------- threshold
// s1 s2 s1 s2
// s2 s1
// s1 s2
// s2
// s1
//
//////////////////////////////////////////////////////////////////////////////////
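A rough single-channel Python sketch of this S0/S1/S2 logic (the function name, the use of the following sample, and the omission of the cross-channel pivotal test and the inactivation counter are simplifying assumptions for illustration only):

```python
def channel_states(x, thr):
    S0, S1, S2 = 0, 1, 2               # above thr / falling below thr / rising below thr
    states = []
    for t in range(len(x) - 1):
        val, nxt = x[t], x[t + 1]      # current sample and the one that follows it
        if val >= thr:
            states.append(S0)
        elif nxt < val:                # still descending towards the trough
            states.append(S1)
        else:                          # turned upwards: the trough has been passed
            states.append(S2)
    return states

# states = channel_states(data[:, ch], thr)  # thr must be in the same units as the samples
```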
End of explanation
dout = np.fromfile('spkDect_test_compare.bin', dtype='int32')
thr = -50
show_sim_results(dout, thr)
Explanation: 1. This simulation generates dout as states
End of explanation
dout = np.fromfile('spkDect_test_compare.bin', dtype='int32')
thr = -50
show_sim_results(dout, thr)
Explanation: Now, clearly we extract 8 peaks from 8 independent state machines; the upper 4 are from one tetrode and the lower 4 from another.
The next task is to generate a pivotal position, a (t, ch) tuple, from the state machines and label it.
2. This simulation generates dout as the Min value
The Min value is stored in Mn and is updated by comparing the current val against thr:
verilog
if(val<thr && val<Mn[ch]) Mn[ch] = val;
else if(val>=thr) Mn[ch] = 0;
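A direct Python rendering of this update rule for a 1-D array samples holding one channel's values (a sketch; thr is assumed to be in the same units as the samples):

```python
Mn = 0.0
Mn_trace = []
for val in samples:
    if val < thr and val < Mn:    # new running minimum while below threshold
        Mn = val
    elif val >= thr:              # back above threshold: reset
        Mn = 0.0
    Mn_trace.append(Mn)
```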
End of explanation
dout = np.fromfile('spkDect_test_compare.bin', dtype='int32')
thr = -50
show_sim_results(dout, thr)
Explanation: Notice that all of the outputs are delayed by one time stamp, so that from the FPGA's point of view Mn is related to the next value val.
3. Simulation of pivotal point
_val is the current value being processed by the state machine; val (as you already know) is the following value from the current channel ch
_val_nn is the current value from all channels in the same group
ch_nn[i] are the channel numbers in this group
Mn is the minimum value from all channels, updated by val
start_cnt[ch]==1 means a pivotal point has already been found on that ch, and no new pivotal point can be labelled until start_cnt[ch] gets reset.
inactivating counter (independent for each channel)
verilog
if(start_cnt[ch])
{
cnt[ch] += 1;
if (cnt[ch] == INACTIVE_LEN)
{
cnt[ch] = 0;
start_cnt[ch] = 0;
}
}
pivotal judgement (accross channel comparison so not independent)
verilog
bool if_pivotal = 1;
Get_Com_Loop: // loop hls unroll
for(int i=0; i<CH_PG; i++)
{
_val = buf_2d[l][ch];
_val_nn[i] = buf_2d[l][ch_nn[i]];
if_pivotal = (_val <= Mn[ch_nn[i]]) &&
(_val <= _val_nn[i]) &&
!start_cnt[ch_nn[i]] && if_pivotal;
}
if_pivotal (not independent for each channel) at S1 to S2 (independent)
verilog
case S1:
if(_val<thr && _val>val){
state[ch] = S1;
is_peak[ch] = 0;
first_S2[ch] = 0;
}
else if(_val<thr && _val<=val){
state[ch] = S2;
is_peak[ch] = 0;
first_S2[ch] = 1;
if(if_pivotal)
{
_val.range(0,0) = 1;
is_peak[ch] = 1;
start_cnt[ch] = 1;
}
else
{
is_peak[ch] = 0;
}
}
else if(_val>=thr){
state[ch] = S0;
is_peak[ch] = 0;
first_S2[ch] = 0;
}
break;
End of explanation
dout = np.fromfile('spkDect_test_compare.bin', dtype='int32')
thr = -50
Explanation: Summary
End of explanation
y = dout.reshape(40,8)/float(2**13)
piv = np.where(y==thr)
Explanation: verilog
muap.data = thr*is_peak[ch];
End of explanation
y = dout.reshape(40,8)
piv = np.where(y%2==1)
piv
fig,ax = subplots(1,2,figsize=(20,6))
ax[0].plot(data[:,:4], '-o');
ax[1].plot(data[:,4:], '-o');
for ch, i in zip(piv[1],piv[0]):
if ch//4==0:   # channels 0-3: first tetrode
ax[0].axvline(i, c='m', ls='-.', alpha=0.7)
elif ch//4==1: # channels 4-7: second tetrode
ax[1].axvline(i, c='m', ls='-.', alpha=0.7)
Explanation: verilog
muap.data = _val;
End of explanation |
14,903 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Migration, urban-bias and the informal sector
The Harris-Todaro Model
This model is an adaptation of a standard two-sector open economy specific factors model (SFM) of migration.
* The two sectors are agriculture and 'modern-manufacturing'
* The agricultural labor market is competitive. Labor is mobile between the agricultural and manufacturing sectors.
If the market of jobs in the modern manufacturing sector were also competitive this would be a standard specific factors model and migration between sectors would occur until the wage was equalized across sectors, or same thing, at the market-clearing equilibrium wage $w^e$ the sum of labor demands from each sector equaled the economy-wide labor supply $\bar L$.
$$L_a (w^e) + L_m (w^e) = \bar L $$
However, for institutional/political economy reasons wages in the modern manufacturing sector will be set artificially high, for example by union activity or minimum-wage policies in that sector. This high wage will lead firms in that sector to cut back hiring but will also attract migrants to urban areas, lured by the prospect of possibly landing a high-wage job in the urban sector. Since not all migrants succeed in landing these rationed jobs, however, this migration will lead to the endogenous emergence of an informal urban sector in the economy and 'urban-bias' (a larger than efficient urban sector).
Laborers can now either stay in the rural sector to earn the equilibrium rural wage $w_r$, or migrate to the urban area, where they may land either in (a) the informal sector, where they earn a low-productivity determined wage $w_u$, or (b) the high-wage modern manufacturing sector, where they earn the institutionally-determined wage $w_m$. The model assumes that only urban dwellers can apply for modern-manufacturing jobs and that jobs are allocated among such applicants by lottery whenever those jobs are in excess demand.
A fixed constant informal sector wage can be justified by assuming that the informal sector is competitive and production takes place with a simple low-productivity linear technology. Wages in that sector are then pinned to $w_u = a_u$ where $a_u$ is the constant marginal product of labor in the informal sector.
Migration will now take place until the rural wage is equalized to the expected wage of an urban resident
Step1: The efficient competitive equilibrium is given by the point where the two labor demand curves intersect. We solve for the level of agricultural employment at which there is zero excess demand for agricultural labor. This gives an equilibrium agricultural labor demand economy-wide equilibrium wage.
Step2: A Harris-Todaro equilibrium is one where the rural wage equals the expected urban wage. Diagramatically the equilibrium level of rural employtment is given by the intersection of the rural labor demand curve and the rectangular hyperbola running through $(w_m, L_m(w_m))$.
Step3: This next function plots the diagram.
Step4: The high institutional wage $w_m$ lowers demand for labor in the formal manufacturing sector relative to a competitive equilibiurm. In the Harris-Todaro model it's not apriori obvious whether the high institutional wage in the formal manufacturing sector will increase or decrease the size of the urban sector relative to the efficient competitive equilibrium. Migrants have to weigh the lower probability of landing a formal sector job against the higher wage they will capture if they are lucky enough to get a job.
If we assume the informal sector (or unemployment) wage is zero, then for our Cobb-Douglas demands the following diagram suggests the policy creates an informal urban sector but overall reduces the size of the size of the urban sector relative to the rural sector, compared to the efficient competitive equilibrium.
Step5: But if we make the informal sector sufficiently attractive it is possible to get 'urban-bias' or an excessively large urban sector relative to the efficient allocation.
For the following diagram we first make note of the efficient equilibrium wage and rural sector size
Step6: Suppose that workers in the informal sector have access to a low-productivity occupation which pays a wage below this efficient level. That means that in an efficient allocation workers would earn more in either formal manufacturing or the rural sector and hence there would be no informal sector in equilibrium.
However the existence of this low-productivity option in the urban informal sector raises the expected return to migration in the Harris-Todaro distorted equilibrium and as the following example illustrates this may result in a smaller than efficient rural sector
Step7: Interactive plot
Step8: Extensions
Harris Todaro and Inequality
Jonathan Temple's (2005) "Growth and Wage Inequality in a dual economy" makes some simple points about wage inequality in the HT model. He shows that in the case of $wu = $ the Gini coefficient can be written simply | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact
from scipy.optimize import bisect,newton
%matplotlib inline
Tbar = 200 # Fixed specific land in ag.
Kbar = 200 # Fixed specific capital in manuf
Lbar = 400 # Total number of mobile workers
LbarMax = 400 # Lbar will be on slider, max value.
A = 1
p = 1.00 # initial rel price of ag goods, p = Pa/Pm
alpha, beta = 0.75, 0.5 # labor share in ag, manuf
def F(L,T, A=1, alpha=0.5):
return A*(T**alpha)*(L**(1-alpha))
def G(L, K, A=1, beta=0.5):
return A*(K**beta)*(L**(1-beta))
def mplr(L,T=Tbar, A=1, alpha=0.5):
return (1-alpha)*F(L,T,A,alpha)/L
def mplm(L, K=Kbar, A=1, beta=0.5):
return (1-beta)*G(L,K,A,beta)/L
def Lm(w):
return Kbar*((p/w)*(A*(1-beta)))**(1/beta)
def expret(Lr,wm):
return wm*Lm(wm)/(Lbar-Lr)
def expwage(Lr,wm,wu):
return (wm*Lm(wm) + wu*(Lbar-Lm(wm)-Lr) )/(Lbar-Lr)
Explanation: Migration, urban-bias and the informal sector
The Harris-Todaro Model
This model is an adaptation of a standard two-sector open economy specific factors model (SFM) of migration.
* The two sectors are agriculture and 'modern-manufacturing'
* The agricultural labor market is competitive. Labor is mobile between the agricultural and manufacturing sectors.
If the market of jobs in the modern manufacturing sector were also competitive this would be a standard specific factors model and migration between sectors would occur until the wage was equalized across sectors, or same thing, at the market-clearing equilibrium wage $w^e$ the sum of labor demands from each sector equaled the economy-wide labor supply $\bar L$.
$$L_a (w^e) + L_m (w^e) = \bar L $$
However, for institutional/political economy reasons wages in the modern manufacturing sector will be set artificially high, for example by union activity or minimum-wage policies in that sector. This high wage will lead firms in that sector to cut back hiring but will also attract migrants to urban areas. lured by the prospect of possibly landing a high-wage job in the urban sector. Since not all migrants succeed in landing these rationed jobs however this migration will lead to the endogenous emergence of an informal urban sector in the economy and 'urban-bias' (a larger than efficient urban sector).
Laborers can now either stay in the rural sector to earn wage equilibrium rural wage $w_r$ or migrate to the urban area where they may land either in (a) the informal sector where they earn a low-productivity determined wage $w_u$ or (b) in the high-wage modern manufacturing sector where they earn the institutionally-determined wage $w_m$. We model assumes that only urban dwellers can apply for modern-manufacturing and that jobs are allocated among such applicants by lottery whenever those jobs are in excess demand.
A fixed constant informal sector wage can be justified by assuming that the informal sector is competitive and production takes place with a simple low-productivity linear technology. Wages in that sector are then pinned to $w_u = a_u$ where $a_u$ is the constant marginal product of labor in the informal sector.
Migration will now take place until the rural wage is equalized to the expected wage of an urban resident:
$$w_r = \frac{L_m (w_m)}{L_u + L_m (w_m)} \cdot w_m + \frac{L_u}{L_u + L_m (w_m)} \cdot w_u $$
Labor demand in each sector will depend on product prices. Without loss of generality and to simplify let's normalize $P_r = 1$ and now call $p = \frac{P_r}{P_m}$ the relative price of agricultural goods.
We can then write $L_r(w)$ as the solution to
$$p \cdot F_L(\bar T, L_r) = w$$
and labor demand $L_m(w)$ as the solution to
$$G_L(\bar K, L_m) = w$$
Given the assumption that jobs in the high-wage manufacturing sector are allocated by fair lottery the equilibium probability of getting such a job will be given simply by the share of the urban sector labor force in that sector.
If, without loss of generality we normalize the informal sector wage $w_u$ to zero (we'll change that later) the equilbrium condition becomes just:
$$w_r = \frac{L_m (w_m)}{L_u + L_m (w_m)} \cdot w_m $$
Since $w_m$ is fixed labor use in the modern manufacturing sector will be $L_m (w_m)$ and the market can be thought of as clearing at a rural wage $w_r$ and a size of the urban informal sector $L_u$ of just the right size to dissuade any further migration.
Note that this condition can be re-written more simply as follows:
$$ w_m \cdot L_m = w_r \cdot ({\bar L - L_r}) $$
Since $\bar L$, $w_m$ and hence also $L_m = L_m(w_m)$ are all fixed quantities this is an equation in two unknowns.
We can solve for the two unknowns $w_r$ and $L_r$ from a system of two equations.
The first is this last equation which is a rectangular hyperbola of the form $x \cdot y = \kappa$, where here $x = \bar L - L_r$ and $y = w_r$).
The other equation is the competitive equilibrium condition
$$p \cdot F_L(\bar T, L_r) = w_r$$
that at a competitive optimum the rural wage $w_r$ will be given by the marginal value product of labor in agriculture.
Diagram analysis
Although this is a simple system of two non-linear equations in two unknowns, it's hard to get a tidy closed form solution for Cobb Douglas production functions. It is easy to see the solution graphically and solve for it numerically, however.
Suppose production in the agricultural and manufacturing sectors is carried out by identical firms in each sector each employing the following linear homogenous Cobb-Douglas technologies:
$$F(\bar T, L_r)=A_r \bar T^{\alpha} \cdot L_r^{1-\alpha}$$
$$G(\bar K, L_m)=A_m \bar K^{\beta} \cdot L_m^{1-\beta}$$
Labor demand in manufacturing as a function of $w$:
$$L_m(w_m) = \left [ { \frac{A_m (1-\beta) \bar K}{w_m/P_m} } \right ]^\frac{1}{\beta} $$
and rural labor demand:
$$L_r(w_r) = \left [ { \frac{A_r (1-\alpha) \bar T}{w_r/P_r} } \right ]^\frac{1}{\alpha} $$
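For reference, a sketch of this closed-form rural labor demand, written by analogy with the Lm(w) function in the code above (the names Tbar, A, alpha and the price p follow that code; this helper is not used elsewhere in the notebook):

```python
def Lr(w, p=1.0):
    # invert p * MPL_r(L) = w for the Cobb-Douglas agricultural technology F(L, T) defined above
    return Tbar * ((p * A * (1 - alpha)) / w)**(1 / alpha)
```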
End of explanation
def effeq():
ed = lambda L: mplr(L) - mplm(Lbar-L)
LE = bisect(ed,10,Lbar-10)
return mplr(LE), LE
Explanation: The efficient competitive equilibrium is given by the point where the two labor demand curves intersect. We solve for the level of agricultural employment at which there is zero excess demand for agricultural labor. This gives an equilibrium agricultural labor demand economy-wide equilibrium wage.
End of explanation
def harristodaro(wm,wu):
LM = Lm(wm)
WE, LE = effeq()
hteq = lambda L: mplr(L) - (wm*LM + wu*(Lbar-LM-L) )/(Lbar-L)
LR = newton(hteq, LE)
WR = mplr(LR)
return WR, LR, LM, WE, LE
Explanation: A Harris-Todaro equilibrium is one where the rural wage equals the expected urban wage. Diagramatically the equilibrium level of rural employtment is given by the intersection of the rural labor demand curve and the rectangular hyperbola running through $(w_m, L_m(w_m))$.
End of explanation
def HTplot(wm, wu):
WR, LR, LM, WE, LE = harristodaro(wm, wu)
print('(wr, Lr), Lm, (we, le)=({:5.2f},{:5.0f}),{:5.0f},({:5.2f},{:5.0f},)'.format(WR, LR, LM, WE, LE))
lr = np.arange(1,Lbar)
lup = np.arange(LR-20, Lbar-LM+20)
fig, ax = plt.subplots(figsize=(10,6))
ax.plot(lr[:-50], mplr(lr[:-50]), lw=2)
ax.plot(lr[50:], mplm(Lbar-lr[50:]), lw=2)
ax.plot(lup, expwage(lup, wm, wu), 'k',lw=1.5)
ax.vlines(LR,0,WR, linestyles="dashed")
ax.vlines(Lbar-LM,0,wm,linestyles="dashed")
ax.hlines(wm,Lbar,Lbar-LM, linestyles="dashed")
ax.hlines(WR,LR,Lbar, linestyles="dashed")
ax.plot(Lbar-LM,wm,'ob')
ax.text(Lbar,wm,'$w_m$',fontsize=16)
ax.text(LE,WE*1.05,'$E$',fontsize=16)
ax.text(LR,WR*1.10,'$Z$',fontsize=16)
ax.text(Lbar-LM-10,wm*1.05,'$D$',fontsize=16)
ax.text(Lbar,WR,'$w_r$',fontsize=16)
ax.plot([LE,LR, Lbar-LM],[WE, WR, wm],'ok')
ax.set_xlim(0, Lbar)
ax.set_ylim(0, 1.25)
ax.set_xlabel(r'$c_1$', fontsize=18)
ax.set_ylabel('$c_2$', fontsize=18)
ax.spines['top'].set_visible(False)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
Explanation: This next function plots the diagram.
End of explanation
WM, WU = 0.9, 0
HTplot(WM, WU)
Explanation: The high institutional wage $w_m$ lowers demand for labor in the formal manufacturing sector relative to a competitive equilibiurm. In the Harris-Todaro model it's not apriori obvious whether the high institutional wage in the formal manufacturing sector will increase or decrease the size of the urban sector relative to the efficient competitive equilibrium. Migrants have to weigh the lower probability of landing a formal sector job against the higher wage they will capture if they are lucky enough to get a job.
If we assume the informal sector (or unemployment) wage is zero, then for our Cobb-Douglas demands the following diagram suggests the policy creates an informal urban sector but overall reduces the size of the size of the urban sector relative to the rural sector, compared to the efficient competitive equilibrium.
End of explanation
WE, LE = effeq()
print('The efficient competitive wage is w ={:5.2f} and the rural sector is Lr ={:5.0f}'.format(WE, LE))
Explanation: But if we make the informal sector sufficiently attractive it is possible to get 'urban-bias' or an excessively large urban sector relative to the efficient allocation.
For the following diagram we first make note of the efficient equilibrium wage and rural sector size:
End of explanation
WM, WU = 0.9, 0.45
HTplot(WM, WU)
Explanation: Suppose that workers in the informal sector have access to a low-productivity occupation which pays a wage below this efficient level. That means that in an efficient allocation workers would earn more in either formal manufacturing or the rural sector and hence there would be no informal sector in equilibrium.
However the existence of this low-productivity option in the urban informal sector raises the expected return to migration in the Harris-Todaro distorted equilibrium and as the following example illustrates this may result in a smaller than efficient rural sector:
End of explanation
interact(HTplot, wm =(WE,3.5*WE,0.05),wu=(0,WE,0.05))
Explanation: Interactive plot
End of explanation
def htlorenz(wm, wu):
WR, LR, LM, WE, LE = harristodaro(wm, wu)
lrp = LR/Lbar
lmp = LM/Lbar
lup = 1 - lrp -lmp
ytot = wu*(1-lrp-lmp) + WR*lrp + wm*lmp
yup = wu*(1-lrp-lmp)/ytot
yrp = WR*lrp/ytot
ymp = wm*lmp/ytot
A = 0.5 - (yup*((1-lup)+ 0.5*lup)+(yrp-yup)*(lmp+0.5*lrp)+0.5*lmp*(ymp))
Gini = 2*A
gtext = "Gini ={:5.2f}".format(Gini)
fig, ax = plt.subplots(figsize=(6,6))
ax.plot([0,lup,lup+lrp,1],[0,yup,yup+yrp,1] , lw=2)
ax.plot([0,1],[0,1], 'k--', lw=1)
ax.text(0.2,0.8,gtext,fontsize=16)
WM, WU = 0.9,0
WR, LR, LM, WE, LE = harristodaro(WM, WU)
interact(htlorenz, wm =(WE,3.5*WE,0.05),wu=(0,WE,0.05))
Explanation: Extensions
Harris Todaro and Inequality
Jonathan Temple's (2005) "Growth and Wage Inequality in a dual economy" makes some simple points about wage inequality in the HT model. He shows that in the case of $w_u = 0$ the Gini coefficient can be written simply:
$$Gini = L_u(2-\frac{L_u}{u})$$
where here $L_u$ is the proportion of the labor force in unemployment and $u$ (slightly redefining what we had above... or, same thing, if we normalize the total labor force to 1) and $u$ is urban unemployment rate or the fraction of the unemployed in the urban population (i.e. $u=\frac{L_u}{L_u+L_m}$). From this one can prove that inequality will unambiguously rise if one of the following statements holds if the urban unemployment rate $u$:
rises, and the number of unemployed $L_u$ goes up.
is constant, and the number of unemployed $L_u$ rises. Modern sector employment rises, and agricultural employment falls.
rises, and the number of unemployed is constant. Modern sector employment falls, and agricultural employment rises
Another result is that rural growth (driven say by improved agricultural technology) leads to an unambiguous reduction in wage inequality. The effects of urban growth are ambiguous.
Below we plot the Lorenz curve and slightly extend Temple's analysis to the case where 'the unemployed' (or informal sector workers) earn a non-zero wage. For now we simply focus on calculating the Gini numerically.
(note/fix: GINI CALCULATION DOES NOT SEEM RIGHT)
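As a cross-check on the numerical calculation (and on the note above), here is a sketch of Temple's closed-form expression taken at face value for the $w_u=0$ case; the shares are computed from the Harris-Todaro equilibrium quantities used elsewhere in this notebook:

```python
def temple_gini(wm):
    # Gini = Lu * (2 - Lu/u), with Lu the unemployed share of the labor force
    # and u the urban unemployment rate (formula as stated above, wu = 0)
    WR, LR, LM, WE, LE = harristodaro(wm, 0)
    Lu = (Lbar - LR - LM) / Lbar
    u = (Lbar - LR - LM) / (Lbar - LR)
    return Lu * (2 - Lu / u)

# temple_gini(0.9)  # compare with the Gini printed by htlorenz(0.9, 0)
```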
End of explanation |
14,904 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Video Codec Unit (VCU) Demo Example
Step1: Run the Demo
Step2: Insert file path
Step3: Transcode
Step4: Advanced options | Python Code:
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
Explanation: Video Codec Unit (VCU) Demo Example: TRANSCODE -> FILE
Introduction
The Video Codec Unit (VCU) in the ZynqMP SoC is capable of encoding and decoding AVC/HEVC compressed video streams in real time.
This notebook example shows a video transcoding use case: the process of converting from one encoding format to another. The example reads a compressed file, decodes it using the VCU, re-encodes it with a different codec, and saves the file in a different format. Video decoding and re-encoding use the VCU, while (optional) audio is handled by a software GStreamer element.
Implementation Details
<img src="pictures/block-diagram-transcode-file.png" align="center" alt="Drawing" style="width: 600px; height: 200px"/>
Board Setup
Connect an Ethernet cable and check Internet connectivity. It is required for downloading the compressed file from a web server.
Connect a serial cable to monitor logs on the serial console.
Connect a USB camera to the board.
If the board is connected to a private network, then export proxy settings in the /home/root/.bashrc file as below:
create/open the bashrc file using "vi ~/.bashrc"
Insert the lines below into the bashrc file
export http_proxy="< private network proxy address >"
export https_proxy="< private network proxy address >"
Save and close bashrc file.
End of explanation
from ipywidgets import interact
import ipywidgets as widgets
from common import common_vcu_demo_transcode_to_file
import os
from ipywidgets import HBox, VBox, Text, Layout
Explanation: Run the Demo
End of explanation
input_path=widgets.Text(value='',
placeholder='Insert file path',
description='Input File:',
#style={'description_width': 'initial'},
disabled=False)
output_path=widgets.Text(value='',
placeholder='(optional) /mnt/sata/op.h265',
description='Output Path:',
disabled=False)
HBox([input_path, output_path])
Explanation: Insert file path
End of explanation
codec_type=widgets.RadioButtons(
options=['avc', 'hevc'],
description='Video Codec:',
disabled=False)
audio_codec_type=widgets.RadioButtons(
options=['none', 'aac', 'vorbis'],
description='Audio Codec:',
disabled=False)
HBox([codec_type, audio_codec_type])
Explanation: Transcode
End of explanation
bit_rate=widgets.Text(value='',
placeholder='(optional) 1000, 20000',
description='Bit Rate(Kbps):',
style={'description_width': 'initial'},
disabled=False)
periodicity_idr=widgets.Text(value='',
placeholder='(optional) 30, 40, 50, etc',
description='Periodicity Idr:',
style={'description_width': 'initial'},
#layout=Layout(width='33%', height='30px'),
disabled=False)
gop_length=widgets.Text(value='',
placeholder='(optional) 30, 60',
description='Gop Length:',
disabled=False)
HBox([periodicity_idr , gop_length, bit_rate])
entropy_buffers=widgets.Dropdown(
options=['2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15'],
value='5',
description='Entropy Buffers Nos:',
style={'description_width': 'initial'},
disabled=False,)
show_fps=widgets.Checkbox(
value=False,
description='show-fps',
#style={'description_width': 'initial'},
disabled=False)
HBox([entropy_buffers, show_fps])
#HBox([periodicity_idr, output_path])
from IPython.display import clear_output
from IPython.display import Javascript
def run_all(ev):
display(Javascript('IPython.notebook.execute_cells_below()'))
def clear_op(event):
clear_output(wait=True)
return
button1 = widgets.Button(
description='Clear Output',
style= {'button_color':'lightgreen'},
#style= {'button_color':'lightgreen', 'description_width': 'initial'},
layout={'width': '300px'}
)
button2 = widgets.Button(
description='',
style= {'button_color':'white'},
#style= {'button_color':'lightgreen', 'description_width': 'initial'},
layout={'width': '80px'},
disabled=True
)
button1.on_click(run_all)
button1.on_click(clear_op)
def start_demo(event):
#clear_output(wait=True)
arg = common_vcu_demo_transcode_to_file.cmd_line_args_generator(input_path.value, bit_rate.value, codec_type.value, audio_codec_type.value, output_path.value, entropy_buffers.value, gop_length.value, periodicity_idr.value, show_fps.value);
#!sh vcu-demo-transcode-to-file.sh $arg > logs.txt 2>&1
!sh vcu-demo-transcode-to-file.sh $arg
return
button = widgets.Button(
description='click to start vcu-transcode-to-file demo',
style= {'button_color':'lightgreen'},
#style= {'button_color':'lightgreen', 'description_width': 'initial'},
layout={'width': '300px'}
)
button.on_click(start_demo)
HBox([button, button2, button1])
Explanation: Advanced options:
End of explanation |
14,905 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tuples
Tuples are like Lists, but they are immutable, which means that once we assign values to a tuple they cannot be changed.
Tuple values are enclosed in (). Tuple can hold values of different types.
You can think of them as constant arrays.
Step1: You can slice a tuple like you do in Lists.
Step2: Remember that we are slicing the tuple for display purpose only. We cannot change a tuple.
Now lets try to change the value in a tuple and see what happens.
Step3: So we have an error saying that tuple do not support item assignment.
By this constraint tuples are of fixed size. A Tuple cannot grow, which means we cannot add an item once a tuple is created.
Step4: Basic functions for tuples
There are only two functions available for tuples count() and index().
The index() returns the index of the value supplied and the count() returns the number of time a value appears in a tuple.
Step5: A tuple value can be any python object, it can be a List, dictonary etc. | Python Code:
t = (1,2.0,'Three')
t
t[0]
Explanation: Tuples
Tuples are like Lists, but they are immutable, which means that once we assign values to a tuple they cannot be changed.
Tuple values are enclosed in (). Tuple can hold values of different types.
You can think of them as constant arrays.
End of explanation
# Slicing
t[1:]
# Reversing a tuple
t[::-1]
Explanation: You can slice a tuple like you do in Lists.
End of explanation
t[0] = 10
Explanation: Remember that we are slicing the tuple for display purpose only. We cannot change a tuple.
Now lets try to change the value in a tuple and see what happens.
End of explanation
t[3] = 4
t.append(4)
Explanation: So we get an error saying that tuples do not support item assignment.
Because of this constraint, tuples are of fixed size: a tuple cannot grow, which means we cannot add an item once the tuple is created.
End of explanation
# Finding the index of the value: 2.0
t.index(2.0)
# Number of values in a tuple
t.count('Three')
Explanation: Basic functions for tuples
There are only two methods available on tuples: count() and index().
index() returns the index of the supplied value, and count() returns the number of times a value appears in the tuple.
End of explanation
t = ('hello', {'element': 'Oxygen', 'weight': 15.999}, 1234)
t
# accessing the 2nd value in the tuple.
t[1]
# accessing the values in the dictionary which is the 2nd element in our tuple.
t[1]['element']
Explanation: A tuple value can be any Python object; it can be a List, a dictionary, etc.
End of explanation |
14,906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 3
Imports
Step1: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is
Step2: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step3: Next make a visualization using one of the pcolor functions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 3
Imports
End of explanation
def well2d(x, y, nx, ny, L=1.0):
sine1 = np.sin(nx*np.pi*x/L)
sine2 = np.sin(ny*np.pi*y/L)
eq = 2/L * sine1 * sine2
return(eq)
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
Explanation: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is:
$$ \psi_{n_x,n_y}(x,y) = \frac{2}{L}
\sin{\left( \frac{n_x \pi x}{L} \right)}
\sin{\left( \frac{n_y \pi y}{L} \right)} $$
This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.
Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.
End of explanation
#Set given variables
nx = 3
ny = 2
L = 1
x = np.linspace(0,1,100)
y = np.linspace(0,1,100)
X, Y = np.meshgrid(x, y)
#We need a 2D matrix, so set a second function
psi2 = well2d(X, Y, nx, ny, L)
#Contour graph
plt.contour(X, Y, psi2)
#Style Graph
plt.xlabel("X")
plt.ylabel("Y")
plt.title("2-D Infinite Well")
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
assert True # use this cell for grading the contour plot
Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:
Use $n_x=3$, $n_y=2$ and $L=0$.
Use the limits $[0,1]$ for the x and y axis.
Customize your plot to make it effective and beautiful.
Use a non-default colormap.
Add a colorbar to you visualization.
First make a plot using one of the contour functions:
End of explanation
res = 70
# Because pcolor resolution depends on the size of psi and psi2 arrays
# I label a new variable "res"
# Greater "res" allows for greater resolution
psi2 = well2d(X, Y, nx, ny, L)
#Create matrix for "z" argument
z = sum(np.meshgrid(psi, psi2))
#Set colors for max/min contours (note: for this problem, this is unnecessary. It defaults the same values.)
maxd = np.amax(psi2)
mind = np.amin(psi2)
#Call the graph x, y are the lengths of the box, z is the function matrix values
plt.pcolor(X, Y, psi2, cmap='RdBu', vmin=mind, vmax=maxd)
#Style Graph
plt.xlabel("X")
plt.ylabel("Y")
plt.title("2D Infinite Well")
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.show()
assert True # use this cell for grading the pcolor plot
Explanation: Next make a visualization using one of the pcolor functions:
End of explanation |
14,907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Machine Learning
LA Team Submission 5 ##
Lukas Mosser, Alfredo De la Fuente
In this approach for solving the facies classfication problem ( https
Step1: Data Preprocessing
Step2: We procceed to run Paolo Bestagini's routine to include a small window of values to acount for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.
Step3: Data Analysis
In this section we will run a Cross Validation routine
Step4: Prediction | Python Code:
%%sh
pip install pandas
pip install scikit-learn
pip install tpot
from __future__ import print_function
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier
from scipy.signal import medfilt
Explanation: Facies classification using Machine Learning
LA Team Submission 5 ##
Lukas Mosser, Alfredo De la Fuente
In this approach for solving the facies classfication problem ( https://github.com/seg/2016-ml-contest. ) we will explore the following statregies:
- Features Exploration: based on Paolo Bestagini's work, we will consider imputation, normalization and augmentation routines for the initial features.
- Model tuning:
Libraries
We will need to install the following libraries and packages.
End of explanation
#Load Data
data = pd.read_csv('../facies_vectors.csv')
# Parameters
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
print(data.head())
data['PE'] = data.groupby("Facies").PE.transform(lambda x: x.fillna(x.mean()))
Explanation: Data Preprocessing
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X_reg, y_reg, train_size=0.75, test_size=0.25)
# Store features and labels
X = data[feature_names].values
y = data['Facies'].values
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
X_aug, padded_rows = augment_features(X, well, depth)
# Initialize model selection methods
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
split_list.append({'train':train, 'val':val})
def preprocess():
# Preprocess data to use in model
X_train_aux = []
X_test_aux = []
y_train_aux = []
y_test_aux = []
# For each data split
split = split_list[5]
# Remove padded rows
split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
# Select training and validation data from current split
X_tr = X_aug[split_train_no_pad, :]
X_v = X_aug[split['val'], :]
y_tr = y[split_train_no_pad]
y_v = y[split['val']]
# Select well labels for validation data
well_v = well[split['val']]
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
X_train_aux.append( X_tr )
X_test_aux.append( X_v )
y_train_aux.append( y_tr )
y_test_aux.append ( y_v )
X_train = np.concatenate( X_train_aux )
X_test = np.concatenate ( X_test_aux )
y_train = np.concatenate ( y_train_aux )
y_test = np.concatenate ( y_test_aux )
return X_train , X_test , y_train , y_test
Explanation: We procceed to run Paolo Bestagini's routine to include a small window of values to acount for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.
End of explanation
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = preprocess()
tpot = TPOTClassifier(generations=5, population_size=20,
verbosity=2,max_eval_time_mins=20,
max_time_mins=100,scoring='f1_micro',
random_state = 17)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('FinalPipeline_LM_mean_per_facies.py')
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
#clf = make_pipeline(make_union(VotingClassifier([("est", ExtraTreesClassifier(criterion="gini", max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), XGBClassifier(learning_rate=0.73, max_depth=10, min_child_weight=10, n_estimators=500, subsample=0.27))
#clf = make_pipeline( KNeighborsClassifier(n_neighbors=5, weights="distance") )
#clf = make_pipeline(MaxAbsScaler(),make_union(VotingClassifier([("est", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)),ExtraTreesClassifier(criterion="entropy", max_features=0.0001, n_estimators=500))
# * clf = make_pipeline( make_union(VotingClassifier([("est", BernoulliNB(alpha=60.0, binarize=0.26, fit_prior=True))]), FunctionTransformer(lambda X: X)),RandomForestClassifier(n_estimators=500))
clf = make_pipeline ( XGBClassifier(learning_rate=0.12, max_depth=3, min_child_weight=10, n_estimators=150, seed = 17, colsample_bytree = 0.9) )
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)
return y_v_hat
Explanation: Data Analysis
In this section we will run a Cross Validation routine
End of explanation
#Load testing data
test_data = pd.read_csv('../validation_data_nofacies.csv')
# Prepare training data
X_tr = X
y_tr = y
# Augment features
X_tr, padded_rows = augment_features(X_tr, well, depth)
# Removed padded rows
X_tr = np.delete(X_tr, padded_rows, axis=0)
y_tr = np.delete(y_tr, padded_rows, axis=0)
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
# Augment features
X_ts, padded_rows = augment_features(X_ts, well_ts, depth_ts)
# Predict test labels
y_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts)
# Save predicted labels
test_data['Facies'] = y_ts_hat
test_data.to_csv('Prediction_XX_Final.csv')
Explanation: Prediction
End of explanation |
14,908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Read the pregnancy file.
Step1: Select live births, then make a CDF of <tt>totalwgt_lb</tt>.
Step2: Display the CDF.
Step3: Find out how much you weighed at birth, if you can, and compute CDF(x).
If you are a first child, look up your birthweight in the CDF of first children; otherwise use the CDF of other children.
Compute the percentile rank of your birthweight
Compute the median birth weight by looking up the value associated with p=0.5.
Compute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75.
Make a random selection from <tt>cdf</tt>.
Draw a random sample from <tt>cdf</tt>.
Draw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks.
Generate 1000 random values using <tt>random.random()</tt> and plot their PMF.
Step4: Assuming that the PMF doesn't work very well, try plotting the CDF instead. | Python Code:
%matplotlib inline
import nsfg
preg = nsfg.ReadFemPreg()
Explanation: Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Read the pregnancy file.
End of explanation
import thinkstats2 as ts
live = preg[preg.outcome == 1]
wgt_cdf = ts.Cdf(live.totalwgt_lb, label = 'weight')
Explanation: Select live births, then make a CDF of <tt>totalwgt_lb</tt>.
End of explanation
import thinkplot as tp
tp.Cdf(wgt_cdf, label = 'weight')
tp.Show()
Explanation: Display the CDF.
End of explanation
import random
random.random?
import random
thousand = [random.random() for x in range(1000)]
thousand_pmf = ts.Pmf(thousand, label = 'rando')
tp.Pmf(thousand_pmf, linewidth=0.1)
tp.Show()
t_hist = ts.Hist(thousand)
tp.Hist(t_hist, label = "rando")
tp.Show()
Explanation: Find out how much you weighed at birth, if you can, and compute CDF(x).
If you are a first child, look up your birthweight in the CDF of first children; otherwise use the CDF of other children.
Compute the percentile rank of your birthweight
Compute the median birth weight by looking up the value associated with p=0.5.
Compute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75.
Make a random selection from <tt>cdf</tt>.
Draw a random sample from <tt>cdf</tt>.
Draw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks.
Generate 1000 random values using <tt>random.random()</tt> and plot their PMF.
End of explanation
thousand_cdf = ts.Cdf(thousand, label='rando')
tp.Cdf(thousand_cdf)
tp.Show()
import scipy.stats
scipy.stats?
Explanation: Assuming that the PMF doesn't work very well, try plotting the CDF instead.
End of explanation |
14,909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
functions for doing the work
For a given filename, colnum and rank create the desired matrix, and possibly save it in a comma separate value format that is readable by excel, origin, etc.
Step1: create the matrix with a specified rank from a given column
Step2: plot the heatmap
Other possible heatmaps are listed in matplotlib documentation
Step3: different colum and colormap | Python Code:
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
def create_matrix(filename, colnum, rank, sep=':', savecsv=False):
df = pd.read_csv(filename, sep=sep, header=None, comment='#')
matrix = df.iloc[:, [colnum]].values.reshape(rank, rank)
if savecsv:
mname = os.path.splitext(os.path.basename(filename))[0] + '_col-{0:d}'.format(colnum) + '.csv'
np.savetxt(mname, matrix)
return matrix
def heatmap(matrix, size, cmap='viridis', show_colorbar=True):
'''
Args:
matrix : array_like
Matrix to be colormapped
size : int
Size of the plot in inches
'''
plt.figure(figsize=(size, size))
plt.pcolor(m, cmap=cmap)
ax = plt.gca()
ax.set_aspect('equal')
if show_colorbar:
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.2)
plt.colorbar(cax=cax)
plt.tight_layout()
plt.show()
Explanation: functions for doing the work
For a given filename, colnum and rank create the desired matrix, and possibly save it in a comma separate value format that is readable by excel, origin, etc.
End of explanation
m = create_matrix('data/result_slice01.txt', colnum=1, rank=58)
Explanation: create the matrix with a specified rank from a given column
End of explanation
heatmap(m, 10)
Explanation: plot the heatmap
Other possible heatmaps are listed in matplotlib documentation
End of explanation
m = create_matrix('data/result_slice01.txt', colnum=6, rank=58)
heatmap(m, 10, cmap='Greens_r')
Explanation: different colum and colormap
End of explanation |
14,910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Occupancy data
Step2: Parameter projection
Because the visualizer only displays results across two parameters, we need some way of reducing the dimension to 2.
Our approach
Step3: As of Scikit-learn 0.18, cv_results has replaced grid_scores as the grid search results format
Step4: Demo the use of param_projection... It identifies the unique values of the the two parameter values and gets the best score for each (here taking the max over gamma values)
Step5: GridSearchColorPlot
This visualizer wraps the GridSearchCV object and plots the values obtained from param_projection.
Step6: If there are missing values in the grid, these are filled with a hatch (see https
Step7: Choose a different metric...
Step8: Quick Method
Because grid search can take a long time and we may want to interactively cut the results a few different ways, by default the quick method assumes that the GridSearchCV object is already fit if no X data is passed in.
Step9: Parameter errors
Bad param values
Step10: Bad metric option
Step11: Metric option exists in cv_results but is not numeric -> not valid | Python Code:
## [from examples/examples.py]
from download import download_all
## The path to the test data sets
FIXTURES = os.path.join(os.getcwd(), "data")
## Dataset loading mechanisms
datasets = {
"credit": os.path.join(FIXTURES, "credit", "credit.csv"),
"concrete": os.path.join(FIXTURES, "concrete", "concrete.csv"),
"occupancy": os.path.join(FIXTURES, "occupancy", "occupancy.csv"),
"mushroom": os.path.join(FIXTURES, "mushroom", "mushroom.csv"),
}
def load_data(name, download=True):
Loads and wrangles the passed in dataset by name.
If download is specified, this method will download any missing files.
# Get the path from the datasets
path = datasets[name]
# Check if the data exists, otherwise download or raise
if not os.path.exists(path):
if download:
download_all()
else:
raise ValueError((
"'{}' dataset has not been downloaded, "
"use the download.py module to fetch datasets"
).format(name))
# Return the data frame
return pd.read_csv(path)
# Load the classification data set
data = load_data('occupancy')
print(len(data))
data.head()
# Specify the features of interest and the classes of the target
features = ["temperature", "relative humidity", "light", "C02", "humidity"]
classes = ['unoccupied', 'occupied']
# Searching the whole dataset takes a while (15 mins on my mac)...
# For demo purposes, we reduce the size
X = data[features].head(2000)
y = data.occupancy.head(2000)
Explanation: Occupancy data
End of explanation
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from yellowbrick.gridsearch.base import param_projection
# Fit a vanilla grid search... these are the example parameters from sklearn's gridsearch docs.
svc = SVC()
grid = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4], 'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]
gs = GridSearchCV(svc, grid, n_jobs=4)
%%time
gs.fit(X, y)
Explanation: Parameter projection
Because the visualizer only displays results across two parameters, we need some way of reducing the dimension to 2.
Our approach: for each value of the parameters of interest, display the maximum score across all the other parameters.
Here we demo the param_projection utility function that does this
End of explanation
gs.cv_results_
Explanation: As of Scikit-learn 0.18, cv_results has replaced grid_scores as the grid search results format
End of explanation
param_1 = 'C'
param_2 = 'kernel'
param_1_vals, param2_vals, best_scores = param_projection(gs.cv_results_, param_1, param_2)
param_1_vals, param2_vals, best_scores
Explanation: Demo the use of param_projection... It identifies the unique values of the the two parameter values and gets the best score for each (here taking the max over gamma values)
End of explanation
from yellowbrick.gridsearch import GridSearchColorPlot
gs_viz = GridSearchColorPlot(gs, 'C', 'kernel')
gs_viz.fit(X, y).poof()
gs_viz = GridSearchColorPlot(gs, 'kernel', 'C')
gs_viz.fit(X, y).poof()
gs_viz = GridSearchColorPlot(gs, 'C', 'gamma')
gs_viz.fit(X, y).poof()
Explanation: GridSearchColorPlot
This visualizer wraps the GridSearchCV object and plots the values obtained from param_projection.
End of explanation
gs_viz = GridSearchColorPlot(gs, 'kernel', 'gamma')
gs_viz.fit(X, y).poof()
Explanation: If there are missing values in the grid, these are filled with a hatch (see https://stackoverflow.com/a/35905483/7637679)
End of explanation
gs_viz = GridSearchColorPlot(gs, 'C', 'kernel', metric='mean_fit_time')
gs_viz.fit(X, y).poof()
Explanation: Choose a different metric...
End of explanation
from yellowbrick.gridsearch import gridsearch_color_plot
%%time
# passing the GridSearchCV object pre-fit
gridsearch_color_plot(gs, 'C', 'kernel')
%%time
# trying a different cut across parameters
gridsearch_color_plot(gs, 'C', 'gamma')
%%time
# When we provide X, the `fit` method will call fit (takes longer)
gridsearch_color_plot(gs, 'C', 'kernel', X=X, y=y)
%%time
# can also choose a different metric
gridsearch_color_plot(gs, 'C', 'kernel', metric='mean_fit_time')
Explanation: Quick Method
Because grid search can take a long time and we may want to interactively cut the results a few different ways, by default the quick method assumes that the GridSearchCV object is already fit if no X data is passed in.
End of explanation
gs_viz = GridSearchColorPlot(gs, 'foo', 'kernel')
gs_viz.fit(X, y).poof()
gs_viz = GridSearchColorPlot(gs, 'C', 'foo')
gs_viz.fit(X, y).poof()
Explanation: Parameter errors
Bad param values
End of explanation
gs_viz = GridSearchColorPlot(gs, 'C', 'kernel', metric='foo')
gs_viz.fit(X, y).poof()
Explanation: Bad metric option
End of explanation
gs_viz = GridSearchColorPlot(gs, 'C', 'kernel', metric='param_kernel')
gs_viz.fit(X, y).poof()
Explanation: Metric option exists in cv_results but is not numeric -> not valid
End of explanation |
14,911 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Продажи австралийского вина
Известны ежемесячные продажи австралийского вина в тысячах литров с января 1980 по июль 1995, необходимо построить прогноз на следующие три года.
Step1: Проверка стационарности и STL-декомпозиция ряда
Step2: Стабилизация дисперсии
Сделаем преобразование Бокса-Кокса для стабилизации дисперсии
Step3: Стационарность
Критерий Дики-Фуллера отвергает гипотезу нестационарности, но визуально в данных виден тренд. Попробуем сезонное дифференцирование; сделаем на продифференцированном ряде STL-декомпозицию и проверим стационарность
Step4: Критерий Дики-Фуллера не отвергает гипотезу нестационарности, и полностью избавиться от тренда не удалось. Попробуем добавить ещё обычное дифференцирование
Step5: Гипотеза нестационарности отвергается, и визуально ряд выглядит лучше — тренда больше нет.
Подбор модели
Посмотрим на ACF и PACF полученного ряда
Step6: Начальные приближения
Step7: Если в предыдущей ячейке возникает ошибка, убедитесь, что обновили statsmodels до версии не меньше 0.8.0rc1.
Step8: Лучшая модель
Step9: Её остатки
Step10: Остатки несмещены (подтверждается критерием Стьюдента) стационарны (подтверждается критерием Дики-Фуллера и визуально), неавтокоррелированы (подтверждается критерием Льюнга-Бокса и коррелограммой).
Посмотрим, насколько хорошо модель описывает данные
Step11: Прогноз | Python Code:
%pylab inline
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
import warnings
from itertools import product
def invboxcox(y,lmbda):
if lmbda == 0:
return(np.exp(y))
else:
return(np.exp(np.log(lmbda*y+1)/lmbda))
wine = pd.read_csv('monthly-australian-wine-sales.csv',',', index_col=['month'], parse_dates=['month'], dayfirst=True)
wine.sales = wine.sales * 1000
plt.figure(figsize(15,7))
wine.sales.plot()
plt.ylabel('Wine sales')
pylab.show()
Explanation: Australian wine sales
Monthly sales of Australian wine (in thousands of litres) are known from January 1980 to July 1995; the task is to build a forecast for the next three years.
End of explanation
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(wine.sales).plot()
print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(wine.sales)[1])
Explanation: Checking stationarity and STL decomposition of the series:
End of explanation
wine['sales_box'], lmbda = stats.boxcox(wine.sales)
plt.figure(figsize(15,7))
wine.sales_box.plot()
plt.ylabel(u'Transformed wine sales')
print("Оптимальный параметр преобразования Бокса-Кокса: %f" % lmbda)
print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(wine.sales_box)[1])
Explanation: Variance stabilization
Let's apply a Box-Cox transformation to stabilize the variance:
End of explanation
wine['sales_box_diff'] = wine.sales_box - wine.sales_box.shift(12)
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(wine.sales_box_diff[12:]).plot()
print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(wine.sales_box_diff[12:])[1])
Explanation: Stationarity
The Dickey-Fuller test rejects the hypothesis of non-stationarity, but a trend is clearly visible in the data. Let's try seasonal differencing; we then run an STL decomposition on the differenced series and check stationarity again:
End of explanation
wine['sales_box_diff2'] = wine.sales_box_diff - wine.sales_box_diff.shift(1)
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(wine.sales_box_diff2[13:]).plot()
print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(wine.sales_box_diff2[13:])[1])
Explanation: The Dickey-Fuller test does not reject the hypothesis of non-stationarity, and the trend has not been removed completely. Let's add ordinary (first-order) differencing on top of the seasonal one:
End of explanation
plt.figure(figsize(15,8))
ax = plt.subplot(211)
sm.graphics.tsa.plot_acf(wine.sales_box_diff2[13:].values.squeeze(), lags=48, ax=ax)
pylab.show()
ax = plt.subplot(212)
sm.graphics.tsa.plot_pacf(wine.sales_box_diff2[13:].values.squeeze(), lags=48, ax=ax)
pylab.show()
Explanation: The hypothesis of non-stationarity is rejected, and the series looks better visually — the trend is gone.
Model selection
Let's look at the ACF and PACF of the resulting series:
End of explanation
ps = range(0, 5)
d=1
qs = range(0, 3)
Ps = range(0, 2)
D=1
Qs = range(0, 2)
parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
len(parameters_list)
%%time
results = []
best_aic = float("inf")
warnings.filterwarnings('ignore')
for param in parameters_list:
# try/except is needed because the model fails to fit for some parameter combinations
try:
model=sm.tsa.statespace.SARIMAX(wine.sales_box, order=(param[0], d, param[1]),
seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1)
# report the parameters the model could not be fitted with and move on to the next set
except ValueError:
print('wrong parameters:', param)
continue
aic = model.aic
# keep the best model, its AIC and its parameters
if aic < best_aic:
best_model = model
best_aic = aic
best_param = param
results.append([param, model.aic])
warnings.filterwarnings('default')
Explanation: Initial approximations read off the ACF/PACF plots above: Q=1, q=2, P=1, p=4
End of explanation
result_table = pd.DataFrame(results)
result_table.columns = ['parameters', 'aic']
print(result_table.sort_values(by = 'aic', ascending=True).head())
Explanation: If the previous cell raises an error, make sure you have updated statsmodels to at least version 0.8.0rc1.
End of explanation
print(best_model.summary())
Explanation: The best model:
End of explanation
plt.figure(figsize(15,8))
plt.subplot(211)
best_model.resid[13:].plot()
plt.ylabel(u'Residuals')
ax = plt.subplot(212)
sm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax)
print("Критерий Стьюдента: p=%f" % stats.ttest_1samp(best_model.resid[13:], 0)[1])
print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1])
Explanation: Its residuals:
End of explanation
wine['model'] = invboxcox(best_model.fittedvalues, lmbda)
plt.figure(figsize(15,7))
wine.sales.plot()
wine.model[13:].plot(color='r')
plt.ylabel('Wine sales')
pylab.show()
Explanation: The residuals are unbiased (confirmed by Student's t-test), stationary (confirmed by the Dickey-Fuller test and visually) and not autocorrelated (confirmed by the Ljung-Box test and the correlogram).
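A quick sketch of the Ljung-Box check mentioned above (the exact return format of acorr_ljungbox depends on the statsmodels version):
from statsmodels.stats.diagnostic import acorr_ljungbox
print("Ljung-Box test:", acorr_ljungbox(best_model.resid[13:], lags=10))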
Let's see how well the model describes the data:
End of explanation
# these imports are needed to build the future date index below
import datetime
from dateutil.relativedelta import relativedelta
wine2 = wine[['sales']]
date_list = [datetime.datetime.strptime("1994-09-01", "%Y-%m-%d") + relativedelta(months=x) for x in range(0,36)]
future = pd.DataFrame(index=date_list, columns= wine2.columns)
wine2 = pd.concat([wine2, future])
wine2['forecast'] = invboxcox(best_model.predict(start=176, end=211), lmbda)
plt.figure(figsize(15,7))
wine2.sales.plot()
wine2.forecast.plot(color='r')
plt.ylabel('Wine sales')
pylab.show()
Explanation: Forecast
End of explanation |
14,912 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem 24
A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are
Step1: Problem 25
The Fibonacci sequence is defined by the recurrence relation
Step2: Problem 26
A unit fraction contains 1 in the numerator. The decimal representation of the unit fractions with denominators 2 to 10 are given | Python Code:
import itertools as it
def lexicographicPermutations():
l=list(range(10))
r=[''.join(map(str,x)) for x in list(it.permutations(l))]
#print(len(r))
print("Millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9: "+r[999999])
lexicographicPermutations()
Explanation: Problem 24
A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are:
012 021 102 120 201 210
What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?
End of explanation
def fibonacciSequence():
old=1
current=1
new=0
i=2
while True:
i+=1
new=old+current
if checkLength(new):
break
old=current
current=new
return i
def checkLength(n):
if len(str(n))==1000:
return True
else:
return False
print("Index of the first term in the Fibonacci sequence to contain 1000 digits: "+str(fibonacciSequence()))
Explanation: Problem 25
The Fibonacci sequence is defined by the recurrence relation:
Fn = Fn−1 + Fn−2, where F1 = 1 and F2 = 1.
Hence the first 12 terms will be:
F1 = 1
F2 = 1
F3 = 2
F4 = 3
F5 = 5
F6 = 8
F7 = 13
F8 = 21
F9 = 34
F10 = 55
F11 = 89
F12 = 144
The 12th term, F12, is the first term to contain three digits.
What is the index of the first term in the Fibonacci sequence to contain 1000 digits?
End of explanation
def cycleLength(n):
# length of the recurring cycle of 1/n, i.e. the smallest j such that 10**j % n == 1
res = 10
j = 0
while res != 10 or j < 1:
res = (res % n) * 10
j += 1
return j
def Euler26():
long = 0
for i in range(2, 1000):
if i%2 != 0 and i%5 != 0:
length = cycleLength(i)
if length > long:
long = length
res = i
return res
print(Euler26())
Explanation: Problem 26
A unit fraction contains 1 in the numerator. The decimal representation of the unit fractions with denominators 2 to 10 are given:
1/2 = 0.5
1/3 = 0.(3)
1/4 = 0.25
1/5 = 0.2
1/6 = 0.1(6)
1/7 = 0.(142857)
1/8 = 0.125
1/9 = 0.(1)
1/10 = 0.1
Where 0.1(6) means 0.166666..., and has a 1-digit recurring cycle. It can be seen that 1/7 has a 6-digit recurring cycle.
Find the value of d < 1000 for which 1/d contains the longest recurring cycle in its decimal fraction part.
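Two quick sanity checks against the cycle lengths quoted in the problem statement:
assert cycleLength(3) == 1   # 1/3 = 0.(3)
assert cycleLength(7) == 6   # 1/7 = 0.(142857)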
End of explanation |
14,913 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 1
Read 2 arrays x,y containing floating point values
Calculate mean of x & y
Calculate variance for x
$$variance(x)=sum((x-mean(x))^2)$$
Calculate covariance of x & y
$$covariance = sum((x(i) - mean(x)) * (y(i) - mean(y)))$$
Calculate value of c
$$c = covariance(x,y)/variance(x)$$
Calculate value of m
$$m = mean(y) -c* mean(x)$$
Step1: Part 2
Plot graph for actual values against predicted value
Calculate root mean square error. | Python Code:
import tensorflow as tf
with tf.name_scope("var"):
with tf.name_scope("mean_x"):
a=tf.constant([5.0,7.0,20.2,17.32],shape=[1,4],name='a')
b=tf.constant([7.0,9.0,19.0,18.0],shape=[1,4],name='b')
x=tf.reduce_mean(a)
sess=tf.Session()
print("mean",sess.run(x))
#mean x
#mean of y
with tf.name_scope("mean_y"):
y=tf.reduce_mean(b)
sess=tf.Session()
print("mean",sess.run(y))
#variance of x
d=tf.subtract(a,x)
sess=tf.Session()
print(sess.run(d ))
e=tf.square(d)
f=tf.reduce_sum(e)
sess=tf.Session()
print(sess.run(f))
#covariance
with tf.name_scope("covariance"):
g=tf.subtract(b,y)
sess=tf.Session()
g=tf.multiply(d,g)
h=tf.reduce_sum(g)
print(sess.run(h))
#value of c
with tf.name_scope("value_of_c"):
j=tf.divide(h,f)
print(sess.run(j))
#m value
with tf.name_scope("value_m"):
i=tf.multiply(j,x)
k=tf.subtract(y,i)
print(sess.run(k))  # print m (= k), not the slope j
Explanation: Part 1
Read 2 arrays x,y containing floating point values
Calculate mean of x & y
Calculate variance for x
$$variance(x)=sum((x-mean(x))^2)$$
Calculate covariance of x & y
$$covariance = sum((x(i) - mean(x)) * (y(i) - mean(y)))$$
Calculate value of c
$$c = covariance(x,y)/variance(x)$$
Calculate value of m
$$m = mean(y) -c* mean(x)$$
End of explanation
with tf.Session() as lb:
Writer =tf.summary.FileWriter("/tmp/tboard/output3",lb.graph)
Writer.close()
Explanation: Part 2
Plot graph for actual values against predicted value
Calculate root mean square error.
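A rough sketch of Part 2, reusing the tensors defined above (a and b are the data, j is the slope c, k is the intercept m); the plotting details are illustrative:
import matplotlib.pyplot as plt
pred = tf.add(tf.multiply(j, a), k)                       # predicted y = c*x + m
rmse = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(b, pred))))
with tf.Session() as sess:
    x_vals, y_vals, y_pred, err = sess.run([a, b, pred, rmse])
print("RMSE:", err)
plt.scatter(x_vals, y_vals, label='actual')
plt.plot(x_vals.flatten(), y_pred.flatten(), 'r', label='predicted')
plt.legend()
plt.show()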
End of explanation |
14,914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generating simple audio samples with music21
We'd like to synthesize simple audio samples containing a single note or a chord. The samples, however, should be parameterized by several attributes. We'd like to modify
Step1: We're about to create a Note object which represents a single note and both its pitch and duration.
Step2: If we have MuseScore installed, we can we the music sheet representation.
Step3: Note that there's some rest at the beginning and end of the MIDI file. It looks like a quarter-note rest. The reason is that "MIDI controllers may not be able to play notes at deltaTime=0" See
Step4: Properties of the Note
Step5: Creating Note with parameters
Step6: Changing duration
Step7: Changing volume
Volume can be specified by parameters
Step8: How to set tempo?
Step9: Just add a metronome mark at the beginning of the stream.
Step11: Sequence of notes
Step12: Let's make a sequence. Note that by just passing a list of notes to the Stream we get a chord, not a sequence, so we must append each note separately.
Step13: In the previous example we see, that notes may overlap. So let's add some rests to make better separation. | Python Code:
import music21
from music21.chord import Chord
from music21.duration import Duration
from music21.instrument import Instrument
from music21.note import Note, Rest
from music21.stream import Stream
from music21.tempo import MetronomeMark
from music21.volume import Volume
import os
data_dir = 'data/working/example-parametric-note'
os.makedirs(data_dir, exist_ok=True)
Explanation: Generating simple audio samples with music21
We'd like to synthesize simple audio samples containing a single note or a chord. The samples, however, should be parameterized by several attributes. We'd like to modify:
pitch
[precise frequency]
instrument
volume
duration
location in the sample
The main output should be MIDI, which can be synthesized to audio. Besides that we'd like to store the parameters and metadata like the pitch class, frequency, etc.
In this notebook we'll explore how to accomplish this with the music21 library.
End of explanation
Note('C')
s = Stream([Note('C')])
Explanation: We're about to create a Note object which represents a single note and both its pitch and duration.
End of explanation
s.show()
s.show('midi')
Explanation: If we have MuseScore installed, we can view the sheet-music representation.
End of explanation
s.write('midi', data_dir + '/c.midi')
Explanation: Note that there's some rest at the beginning and end of the MIDI file. It looks like a quarter-note rest. The reason is that "MIDI controllers may not be able to play notes at deltaTime=0" See: https://groups.google.com/d/msg/music21list/ne_P_ZUvRNk/XynaiODzAgAJ
Anyway this might be good in the audio processing, since there will be a space for the windowing function.
End of explanation
n = Note('C')
n
def describe_note(note):
p = note.pitch
print(note)
print('pitch:', note.pitch)
print('duration:', note.duration)
print('name:', p.name)
print('full name:', p.fullName)
print('pitch class:', p.pitchClass)
print('octave:', p.octave)
print('frequency', p.frequency, 'Hz')
print('midi:', p.midi)
print('pitch space:', p.ps) # like MIDI, but floating point
describe_note(n)
Explanation: Properties of the Note
End of explanation
# different note in the default octave
describe_note(Note('E'))
# a note in the specific octave
describe_note(Note('G#3'))
# note specified by its octave and pitch class within an octave
describe_note(Note(octave=2, pitchClass=3))
# note specified by its integer MIDI number
describe_note(Note(midi=21))
# microtonal pitch using the pitch space attribute (like MIDI but floating point)
describe_note(Note(ps=21.25))
Explanation: Creating Note with parameters
End of explanation
# note with duration of half of a quarter note
note = Note(midi=21, duration=Duration(0.5))
describe_note(note)
# note with duration of half of a quarter note
note = Note(midi=21, duration=Duration(2.5))
describe_note(note)
Explanation: Changing duration
End of explanation
for v in [0, 32, 64, 127]:
print(Volume(velocity=v))
for v in [0, 0.25, 0.5, 1.0]:
print(Volume(velocityScalar=v))
Chord(['C']).volume
c = Chord([Note('C')])
c.volume = Volume(velocityScalar=0.25)
c.volume
Explanation: Changing volume
Volume can be specified by parameters:
velocity with range from 0 to 127 or by
velocityScalar with range from 0.0 to 1.0
End of explanation
metronome = MetronomeMark(number=60)
metronome.durationToSeconds(Duration(1.0))
Explanation: How to set tempo?
End of explanation
Stream([MetronomeMark(number=60), Note('C')]).show()
Explanation: Just add a metronome mark at the beginning of the stream.
End of explanation
def make_instrument(id):
i = Instrument()
i.midiProgram = id
return i
def chord_with_volume(chord, volume):
chord.volume = Volume(velocityScalar=volume)
return chord
def generate_single_note(midi_number, midi_instrument=0, volume=1.0, duration=1.0, tempo=120):
Generates a stream containing a single note with given parameters.
midi_number - MIDI note number, 0 to 127
midi_instrument - MIDI instrument number, 0 to 127
duration - floating point number (in quarter note lengths)
volume - 0.0 to 1.0
tempo - number of quarter notes per minute (eg. 120)
Note that there's a quarter note rest at the beginning and at the end.
return Stream([
MetronomeMark(number=tempo),
make_instrument(midi_instrument),
chord_with_volume(Chord([
Note(midi=midi_number, duration=Duration(duration))
]), volume)
])
generate_single_note(60).show('midi')
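A few parameter variations written to disk for comparison (the instrument numbers and file names below are just examples):
for instrument_id, vol in [(0, 1.0), (40, 0.5), (73, 0.25)]:
    s = generate_single_note(60, midi_instrument=instrument_id, volume=vol,
                             duration=2.0, tempo=90)
    s.write('midi', data_dir + '/note_i{:03d}.midi'.format(instrument_id))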
Explanation: Sequence of notes
End of explanation
s = Stream()
s.append(make_instrument(50))
s.append(Note(midi=60))
s.append(Note(midi=64))
s.append(Note(midi=67))
s.write('midi', data_dir + '/sequence_separated.midi')
s.show('midi')
Explanation: Let's make a sequence. Note that by just passing a list of notes to the Stream we get a chord, not a sequence, so we must append each note separately.
End of explanation
s = Stream()
s.append(make_instrument(50))
s.append(Note(midi=60))
s.append(Rest(duration=Duration(2.0)))
s.append(Note(midi=64))
s.append(Rest(duration=Duration(2.0)))
s.append(Note(midi=67))
s.write('midi', data_dir + '/sequence_separated.midi')
s.show('midi')
Explanation: In the previous example we see, that notes may overlap. So let's add some rests to make better separation.
End of explanation |
14,915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Los modelos lineales son fundamentales tanto en estadística como en el aprendizaje automático, pues muchos métodos se apoyan en la combinación lineal de variables que describen los datos. Lo más sencillo será ajustar una línea recta con LinearRegression, pero veremos que contamos con un abaníco mucho más grande de herramientas.
Para mostrar cómo funcionan estos modelos vamos a emplear uno de los dataset que ya incorpora scikit-learn.
Step1: El Boston dataset es un conjunto de datos para el análisis de los precios de las viviendas en la región de Boston. Con boston.DESCR podemos obtener una descripción del dataset, con información sobre el mismo, como el tipo de atributos.
Step2: Vemos que tenemos 506 muestras con 13 atributos que nos ayudarán a predecir el precio medio de la vivienda. Ahora bien, no todos los atributos serán significativos ni todos tendrán el mismo peso a la hora de determinar el precio de la vivienda; pero eso es algo que iremos viendo conforme adquiramos experiencia e intuición.
LinearRegression
Ya tenemos los datos, vamos a ajustar una línea recta para ver cuál es la tendencia que siguen los precios en función del atributo.
Lo primero es importar LinearRegression y crear un objeto.
Step3: Una vez tenemos claro el modelo a emplear, el siguiente paso es entrenarlo con los datos de variables independientes y variables dependientes que tenemos. Para ello, en scikit-learn tenemos funciones del tipo
Step4: Éste, al tratarse de un modelo sencillo y con muy pocas muestra tardará muy poco en entrenarse. Una vez completado el proceso podemos ver los coeficientes que ha asignado a cada atributo y así ver de qué manera contribuyen al precio final de la vivienda.
Step5: Con esto ya tendríamos una pequeña idea de cuales son los factores que más contribuyen a incrementar o disminuir el precio de la vivienda. Pero no vayamos a sacar conclusiones precipitadas como han hecho en su día Reinhart y Rogoff y visualicemos los datos primero.
Step6: En este caso hemos representado el precio medio la vivienda frente a la proporción de viviendas anteriores a 1940 que hay en la zona. Y como poder ver cláramente, emplear sólo un parámetro (AGE) para determinar el precio de la vivienda mediante una línea recta no parece lo ideal. Pero si tomamos en cuenta todas las variables las predicciones posiblemente mejoren.
Por tanto vamos a utilizar el modelo ya entrenado para predecir los precios de las viviendas. Aunque en este caso no vamos a utilizar datos nuevos, sino los mismos datos que hemos empleado para entrenar el modelo y así ver las diferencias.
Step7: Podemos ver que el error medio es despreciable y que la mayoría de los valores se concentran entorno al 0. Pero, ¿cómo hemos llegado a esos valores?
La idea detrás de la regresión lineal es encontrar unos coeficientes $\beta$ que satisfagan
$$y = X\beta,$$
donde $X$ es nuestra matriz de datos e $y$ son nuestros valores objetivo. Puesto que es muy poco probable que a partir de nuestros valores de $X$ obtengamos los coeficientes que plenamente satisfagan la ecuación, es necesario añadir un término de error $\varepsilon$, tal que
$$y = X\beta + \varepsilon.$$
Con el fin de obtener ese conjunto de coeficientes $\beta$ que relacionan $X$ con $y$, LinearRegression recurre al método de mínimos cuadrados
$$\underset{\beta}{min\,} {|| X \beta - y||_2}^2.$$
Para éste problema también existe una solución analítica,
$$\beta = (X^T X)^{-1}X^Ty,$$
pero, ¿qué ocurre si nuestros datos no son independientes? En ese caso, $X^T X$ no es invertible y si contamos con columnas que son función de otras, o están de alguna manera correlacionadas, la estimación por mínimos cuadrados se vuelve altamente sensible a errores aleatorios incrementándose la varianza.
Regularización
Para esos casos emplearemos el modelo Ridge que añade un factor de regularización $\alpha$ que en español se conoce como factor de Tíjinov.
$$\underset{\beta}{min\,} {{|| X \beta - y||_2}^2 + \alpha {||\beta||_2}^2},$$
y así la solución analítica queda como
$$\beta = (X^T X + \alpha^2I)^{-1}X^Ty.$$
Veamos un ejemplo. Para ello, en vez de cargar un dataset crearemos nosotros uno con tres atributos, y donde sólo dos sean linealmente independientes. Para ello utilizamos la función make_regression.
Step8: Nos interesará también optimizar el valor de $\alpha$. Eso lo haremos con la validación cruzada mediante el objeto RidgeCV que emplea una técnica similar al leave-one-out cross-validation (LOOCV), i.e., dejando uno fuera para test mientras entrena con el resto de las muestras.
Step9: A la hora de crear el objeto le vamos a indicar los valores de $\alpha$ a evaluar. También guardamos los datos que obtenemos al realizar la validación cruzada con store_cv_values=True para representarlos gráficamente.
Step10: Con rcv.alpha_ obtenemos el valor de $\alpha$ que nuestro que nuestro método RidgeCV ha considerado minimiza el error, lo cual también acamos de comprobar gráficamente.
Pero métodos para regresión lineal hay muchos, y en la documentación de scikit-learn podréis encontrar una descripción bastante completa de cada alternativa.
Regresión no lineal
Ahora bien, ¿qué hacer cuando la relación no es lineal y creemos que un polinomio haría un mejor ajuste? Si tomamos como ejemplo una función $f$ que toma la forma
$$f(x) = a + bx + cx^2 $$
la función $f$ es no lineal en función de $x$ pero si es lineal en función de los parámetros desconocidos $a$, $b$, y $c$. O visto de otra manera
Step11: Scikit-learn tiene un objeto PolynomialFeatures que nos va a servir para convertir nuestra variable $x$ en un array $z$ del tipo $z = [1, x, x^2, \ldots, n^n]$, que es lo que nos interesa.
El resultado de esa transformación se la pasamos a nuestro modelo Ridge. Para facilitar la tarea en este tipo de casos —donde se realizan varios pasos que van desde el pre-tratamiento de los datos hasta un posible post-tratamiento pasando por el entrenamiento—, podemos hacer uso de las Pipeline que nos permiten encadenar multiples estimadores en uno. Esto es especialmente útil cuando hay secuencia de pasos predefinidos en el procesado de datos con, por ejemplo, selección de atributos, normalización y clasificación.
Step12: Acabamos de utilizar un modelo Ridge que implementa regularización, pero sin optimizar. ¿Qué pasaría si optimizamos el parámetro de regularización $\alpha$ con RidgeCV? | Python Code:
from sklearn import datasets
boston = datasets.load_boston()
Explanation: Linear models are fundamental both in statistics and in machine learning, since many methods rely on a linear combination of the variables that describe the data. The simplest option is to fit a straight line with LinearRegression, but we will see that we have a much wider range of tools available.
To show how these models work we will use one of the datasets that ships with scikit-learn.
End of explanation
print(boston.DESCR)
Explanation: The Boston dataset is a dataset for analysing house prices in the Boston area. With boston.DESCR we can get a description of the dataset, with information about it such as the type of attributes.
End of explanation
from sklearn.linear_model import LinearRegression
lr = LinearRegression(normalize=True)
Explanation: We can see that we have 506 samples with 13 attributes that will help us predict the median house price. Now, not all attributes will be significant, nor will they all carry the same weight when determining the price of a house; but that is something we will discover as we gain experience and intuition.
LinearRegression
We already have the data, so let's fit a straight line to see the trend that prices follow as a function of the attribute.
The first step is to import LinearRegression and create an object.
End of explanation
lr.fit(boston.data, boston.target)
Explanation: Once we are clear about the model to use, the next step is to train it with the independent and dependent variables we have. For that, scikit-learn provides functions of the form:
model.fit(X, y)
End of explanation
for (feature, coef) in zip(boston.feature_names, lr.coef_):
print('{:>7}: {: 9.5f}'.format(feature, coef))
Explanation: Since this is a simple model with very few samples, it will take very little time to train. Once the process is complete we can look at the coefficients assigned to each attribute and see how they contribute to the final house price.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def plot_feature(feature):
f = (boston.feature_names == feature)
plt.scatter(boston.data[:,f], boston.target, c='b', alpha=0.3)
plt.plot(boston.data[:,f], boston.data[:,f]*lr.coef_[f] + lr.intercept_, 'k')
plt.legend(['Predicted value', 'Actual value'])
plt.xlabel(feature)
plt.ylabel("Median value in $1000's")
plot_feature('AGE')
Explanation: With this we would already have a rough idea of which factors contribute most to increasing or decreasing the house price. But let's not jump to hasty conclusions, as Reinhart and Rogoff famously did, and let's visualize the data first.
End of explanation
predictions = lr.predict(boston.data)
f, ax = plt.subplots(1)
ax.hist(boston.target - predictions, bins=50, alpha=0.7)
ax.set_title('Histograma de residuales')
ax.text(0.95, 0.90, 'Media de residuales: {:.3e}'.format(np.mean(boston.target - predictions)),
transform=ax.transAxes, verticalalignment='top', horizontalalignment='right')
Explanation: In this case we have plotted the median house price against the proportion of houses built before 1940 in the area. As we can clearly see, using only one attribute (AGE) to determine the house price with a straight line does not look ideal. But if we take all the variables into account, the predictions will probably improve.
So let's use the trained model to predict house prices. In this case we will not use new data, but the same data we used to train the model, so that we can look at the differences.
End of explanation
from sklearn.datasets import make_regression
reg_data, reg_target = make_regression(n_samples=2000, n_features=3, effective_rank=2, noise=10)
Explanation: We can see that the mean error is negligible and that most of the values are concentrated around 0. But how did we arrive at those values?
The idea behind linear regression is to find coefficients $\beta$ that satisfy
$$y = X\beta,$$
where $X$ is our data matrix and $y$ are our target values. Since it is very unlikely that our values of $X$ yield coefficients that satisfy the equation exactly, we need to add an error term $\varepsilon$, so that
$$y = X\beta + \varepsilon.$$
In order to obtain the set of coefficients $\beta$ that relate $X$ to $y$, LinearRegression resorts to the method of least squares
$$\underset{\beta}{min\,} {|| X \beta - y||_2}^2.$$
This problem also has an analytical solution,
$$\beta = (X^T X)^{-1}X^Ty,$$
but what happens if our data are not independent? In that case $X^T X$ is not invertible, and if we have columns that are functions of other columns, or that are correlated in some way, the least-squares estimate becomes highly sensitive to random errors and its variance increases.
Regularization
For those cases we will use the Ridge model, which adds a regularization factor $\alpha$, also known as the Tikhonov factor.
$$\underset{\beta}{min\,} {{|| X \beta - y||_2}^2 + \alpha {||\beta||_2}^2},$$
and so the analytical solution becomes
$$\beta = (X^T X + \alpha^2I)^{-1}X^Ty.$$
Let's look at an example. Instead of loading a dataset, we will create one ourselves with three attributes, of which only two are linearly independent. For that we use the make_regression function.
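As a quick numerical sketch of the two closed-form solutions above, using only NumPy on the generated data (the value of alpha is just illustrative):
alpha = 0.1
XtX = reg_data.T @ reg_data
beta_ols = np.linalg.pinv(XtX) @ reg_data.T @ reg_target      # pinv: X^T X is nearly singular here
beta_ridge = np.linalg.inv(XtX + alpha**2 * np.eye(XtX.shape[0])) @ reg_data.T @ reg_target
print(beta_ols, beta_ridge)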
End of explanation
from sklearn.linear_model import RidgeCV
Explanation: We will also want to optimize the value of $\alpha$. We will do that with cross-validation via the RidgeCV object, which uses a technique similar to leave-one-out cross-validation (LOOCV), i.e., leaving one sample out for testing while training on the rest.
End of explanation
alphas = np.linspace(0.01, 0.5)
rcv = RidgeCV(alphas=alphas, store_cv_values=True)
rcv.fit(reg_data, reg_target)
plt.rc('text', usetex=False)
f, ax = plt.subplots()
ax.plot(alphas, rcv.cv_values_.mean(axis=0))
ax.text(0.05, 0.90, 'alpha que minimiza el error: {:.3f}'.format(rcv.alpha_),
transform=ax.transAxes)
Explanation: When creating the object we pass it the values of $\alpha$ to evaluate. We also store the data obtained during cross-validation with store_cv_values=True so that we can plot it.
End of explanation
f, ax = plt.subplots()
x = np.linspace(0, 2*np.pi)
y = np.sin(x)
ax.plot(x, np.sin(x), 'r', label='sin ruido')
# add some noise
xr = x + np.random.normal(scale=0.1, size=x.shape)
yr = y + np.random.normal(scale=0.2, size=y.shape)
ax.scatter(xr, yr, label='con ruido')
ax.legend()
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
Explanation: With rcv.alpha_ we obtain the value of $\alpha$ that our RidgeCV method considers to minimize the error, which we have also just verified graphically.
But there are many methods for linear regression, and in the scikit-learn documentation you will find a fairly complete description of each alternative.
Non-linear regression
Now, what do we do when the relationship is not linear and we believe a polynomial would give a better fit? If we take as an example a function $f$ of the form
$$f(x) = a + bx + cx^2 $$
the function $f$ is non-linear in $x$ but it is linear in the unknown parameters $a$, $b$ and $c$. Put another way: we can substitute our variable $x$ with an array $z$ such that
$$ z = [1, x, x^2] $$
with which we could rewrite our function $f$ as
$$ f(z) = az_0 + bz_1 + cz_2$$
For that, scikit-learn provides the PolynomialFeatures tool. Let's look at an example.
In this case we will take the sine function between 0 and 2$\pi$ and add a little noise to it.
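A quick look at the transformation itself — each value x becomes the row [1, x, x^2] (a tiny standalone example):
print(PolynomialFeatures(degree=2).fit_transform(np.array([[2.0], [3.0]])))
# [[1. 2. 4.]
#  [1. 3. 9.]]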
End of explanation
f, ax = plt.subplots()
ax.plot(x, np.sin(x), 'r', label='sin ruido')
ax.scatter(xr, yr, label='con ruido')
X = xr[:, np.newaxis]
for degree in [3, 4, 5]:
model = make_pipeline(PolynomialFeatures(degree), Ridge())
model.fit(X, y)
y = model.predict(x[:, np.newaxis])
ax.plot(x, y, '--', lw=2, label="degree %d" % degree)
ax.legend()
Explanation: Scikit-learn has a PolynomialFeatures object that we will use to convert our variable $x$ into an array $z$ of the form $z = [1, x, x^2, \ldots, x^n]$, which is what we are after.
The result of that transformation is passed to our Ridge model. To make this kind of workflow easier — where there are several steps from data pre-processing through training to possible post-processing — we can use Pipelines, which let us chain multiple estimators into one. This is especially useful when there is a predefined sequence of data-processing steps, for example feature selection, normalization and classification.
End of explanation
f, ax = plt.subplots()
ax.plot(x, np.sin(x), 'r', label='sin ruido')
ax.scatter(xr, yr, label='con ruido')
X = xr[:, np.newaxis]
for degree in [3, 4, 5]:
model = make_pipeline(PolynomialFeatures(degree), RidgeCV(alphas=alphas))
model.fit(X, y)
y = model.predict(x[:, np.newaxis])
ax.plot(x, y, '--', lw=2, label="degree %d" % degree)
ax.legend()
Explanation: We have just used a Ridge model, which implements regularization, but without optimizing it. What would happen if we optimize the regularization parameter $\alpha$ with RidgeCV?
End of explanation |
14,916 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right
Step1: Motivating KDE
Step2: We have previously seen that the standard count-based histogram can be created with the plt.hist() function.
By specifying the normed parameter of the histogram, we end up with a normalized histogram where the height of the bins does not reflect counts, but instead reflects probability density
Step3: Notice that for equal binning, this normalization simply changes the scale on the y-axis, leaving the relative heights essentially the same as in a histogram built from counts.
This normalization is chosen so that the total area under the histogram is equal to 1, as we can confirm by looking at the output of the histogram function
Step4: One of the issues with using a histogram as a density estimator is that the choice of bin size and location can lead to representations that have qualitatively different features.
For example, if we look at a version of this data with only 20 points, the choice of how to draw the bins can lead to an entirely different interpretation of the data!
Consider this example
Step5: On the left, the histogram makes clear that this is a bimodal distribution.
On the right, we see a unimodal distribution with a long tail.
Without seeing the preceding code, you would probably not guess that these two histograms were built from the same data
Step6: The problem with our two binnings stems from the fact that the height of the block stack often reflects not on the actual density of points nearby, but on coincidences of how the bins align with the data points.
This mis-alignment between points and their blocks is a potential cause of the poor histogram results seen here.
But what if, instead of stacking the blocks aligned with the bins, we were to stack the blocks aligned with the points they represent?
If we do this, the blocks won't be aligned, but we can add their contributions at each location along the x-axis to find the result.
Let's try this
Step7: The result looks a bit messy, but is a much more robust reflection of the actual data characteristics than is the standard histogram.
Still, the rough edges are not aesthetically pleasing, nor are they reflective of any true properties of the data.
In order to smooth them out, we might decide to replace the blocks at each location with a smooth function, like a Gaussian.
Let's use a standard normal curve at each point instead of a block
Step8: This smoothed-out plot, with a Gaussian distribution contributed at the location of each input point, gives a much more accurate idea of the shape of the data distribution, and one which has much less variance (i.e., changes much less in response to differences in sampling).
These last two plots are examples of kernel density estimation in one dimension
Step9: The result here is normalized such that the area under the curve is equal to 1.
Selecting the bandwidth via cross-validation
The choice of bandwidth within KDE is extremely important to finding a suitable density estimate, and is the knob that controls the bias–variance trade-off in the estimate of density
Step10: Now we can find the choice of bandwidth which maximizes the score (which in this case defaults to the log-likelihood)
Step11: The optimal bandwidth happens to be very close to what we used in the example plot earlier, where the bandwidth was 1.0 (i.e., the default width of scipy.stats.norm).
Example
Step12: With this data loaded, we can use the Basemap toolkit (mentioned previously in Geographic Data with Basemap) to plot the observed locations of these two species on the map of South America.
Step13: Unfortunately, this doesn't give a very good idea of the density of the species, because points in the species range may overlap one another.
You may not realize it by looking at this plot, but there are over 1,600 points shown here!
Let's use kernel density estimation to show this distribution in a more interpretable way
Step15: Compared to the simple scatter plot we initially used, this visualization paints a much clearer picture of the geographical distribution of observations of these two species.
Example
Step16: The anatomy of a custom estimator
Let's step through this code and discuss the essential features
Step18: Next we can plot the cross-validation score as a function of bandwidth
Step19: We see that this not-so-naive Bayesian classifier reaches a cross-validation accuracy of just over 96%; this is compared to around 80% for the naive Bayesian classification | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
<!--NAVIGATION-->
< In Depth: Gaussian Mixture Models | Contents | Application: A Face Detection Pipeline >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.13-Kernel-Density-Estimation.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
In-Depth: Kernel Density Estimation
In the previous section we covered Gaussian mixture models (GMM), which are a kind of hybrid between a clustering estimator and a density estimator.
Recall that a density estimator is an algorithm which takes a $D$-dimensional dataset and produces an estimate of the $D$-dimensional probability distribution which that data is drawn from.
The GMM algorithm accomplishes this by representing the density as a weighted sum of Gaussian distributions.
Kernel density estimation (KDE) is in some senses an algorithm which takes the mixture-of-Gaussians idea to its logical extreme: it uses a mixture consisting of one Gaussian component per point, resulting in an essentially non-parametric estimator of density.
In this section, we will explore the motivation and uses of KDE.
We begin with the standard imports:
End of explanation
def make_data(N, f=0.3, rseed=1):
rand = np.random.RandomState(rseed)
x = rand.randn(N)
x[int(f * N):] += 5
return x
x = make_data(1000)
Explanation: Motivating KDE: Histograms
As already discussed, a density estimator is an algorithm which seeks to model the probability distribution that generated a dataset.
For one dimensional data, you are probably already familiar with one simple density estimator: the histogram.
A histogram divides the data into discrete bins, counts the number of points that fall in each bin, and then visualizes the results in an intuitive manner.
For example, let's create some data that is drawn from two normal distributions:
End of explanation
hist = plt.hist(x, bins=30, density=True)
Explanation: We have previously seen that the standard count-based histogram can be created with the plt.hist() function.
By specifying density=True in the histogram call, we end up with a normalized histogram where the height of the bins does not reflect counts, but instead reflects probability density:
End of explanation
density, bins, patches = hist
widths = bins[1:] - bins[:-1]
(density * widths).sum()
Explanation: Notice that for equal binning, this normalization simply changes the scale on the y-axis, leaving the relative heights essentially the same as in a histogram built from counts.
This normalization is chosen so that the total area under the histogram is equal to 1, as we can confirm by looking at the output of the histogram function:
End of explanation
x = make_data(20)
bins = np.linspace(-5, 10, 10)
fig, ax = plt.subplots(1, 2, figsize=(12, 4),
sharex=True, sharey=True,
subplot_kw={'xlim':(-4, 9),
'ylim':(-0.02, 0.3)})
fig.subplots_adjust(wspace=0.05)
for i, offset in enumerate([0.0, 0.6]):
ax[i].hist(x, bins=bins + offset, density=True)
ax[i].plot(x, np.full_like(x, -0.01), '|k',
markeredgewidth=1)
Explanation: One of the issues with using a histogram as a density estimator is that the choice of bin size and location can lead to representations that have qualitatively different features.
For example, if we look at a version of this data with only 20 points, the choice of how to draw the bins can lead to an entirely different interpretation of the data!
Consider this example:
End of explanation
fig, ax = plt.subplots()
bins = np.arange(-3, 8)
ax.plot(x, np.full_like(x, -0.1), '|k',
markeredgewidth=1)
for count, edge in zip(*np.histogram(x, bins)):
for i in range(count):
ax.add_patch(plt.Rectangle((edge, i), 1, 1,
alpha=0.5))
ax.set_xlim(-4, 8)
ax.set_ylim(-0.2, 8)
Explanation: On the left, the histogram makes clear that this is a bimodal distribution.
On the right, we see a unimodal distribution with a long tail.
Without seeing the preceding code, you would probably not guess that these two histograms were built from the same data: with that in mind, how can you trust the intuition that histograms confer?
And how might we improve on this?
Stepping back, we can think of a histogram as a stack of blocks, where we stack one block within each bin on top of each point in the dataset.
Let's view this directly:
End of explanation
x_d = np.linspace(-4, 8, 2000)
density = sum((abs(xi - x_d) < 0.5) for xi in x)
plt.fill_between(x_d, density, alpha=0.5)
plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
plt.axis([-4, 8, -0.2, 8]);
Explanation: The problem with our two binnings stems from the fact that the height of the block stack often reflects not on the actual density of points nearby, but on coincidences of how the bins align with the data points.
This mis-alignment between points and their blocks is a potential cause of the poor histogram results seen here.
But what if, instead of stacking the blocks aligned with the bins, we were to stack the blocks aligned with the points they represent?
If we do this, the blocks won't be aligned, but we can add their contributions at each location along the x-axis to find the result.
Let's try this:
End of explanation
from scipy.stats import norm
x_d = np.linspace(-4, 8, 1000)
density = sum(norm(xi).pdf(x_d) for xi in x)
plt.fill_between(x_d, density, alpha=0.5)
plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
plt.axis([-4, 8, -0.2, 5]);
Explanation: The result looks a bit messy, but is a much more robust reflection of the actual data characteristics than is the standard histogram.
Still, the rough edges are not aesthetically pleasing, nor are they reflective of any true properties of the data.
In order to smooth them out, we might decide to replace the blocks at each location with a smooth function, like a Gaussian.
Let's use a standard normal curve at each point instead of a block:
End of explanation
from sklearn.neighbors import KernelDensity
# instantiate and fit the KDE model
kde = KernelDensity(bandwidth=1.0, kernel='gaussian')
kde.fit(x[:, None])
# score_samples returns the log of the probability density
logprob = kde.score_samples(x_d[:, None])
plt.fill_between(x_d, np.exp(logprob), alpha=0.5)
plt.plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1)
plt.ylim(-0.02, 0.22)
Explanation: This smoothed-out plot, with a Gaussian distribution contributed at the location of each input point, gives a much more accurate idea of the shape of the data distribution, and one which has much less variance (i.e., changes much less in response to differences in sampling).
These last two plots are examples of kernel density estimation in one dimension: the first uses a so-called "tophat" kernel and the second uses a Gaussian kernel.
We'll now look at kernel density estimation in more detail.
Kernel Density Estimation in Practice
The free parameters of kernel density estimation are the kernel, which specifies the shape of the distribution placed at each point, and the kernel bandwidth, which controls the size of the kernel at each point.
In practice, there are many kernels you might use for a kernel density estimation: in particular, the Scikit-Learn KDE implementation supports one of six kernels, which you can read about in Scikit-Learn's Density Estimation documentation.
While there are several versions of kernel density estimation implemented in Python (notably in the SciPy and StatsModels packages), I prefer to use Scikit-Learn's version because of its efficiency and flexibility.
It is implemented in the sklearn.neighbors.KernelDensity estimator, which handles KDE in multiple dimensions with one of six kernels and one of a couple dozen distance metrics.
Because KDE can be fairly computationally intensive, the Scikit-Learn estimator uses a tree-based algorithm under the hood and can trade off computation time for accuracy using the atol (absolute tolerance) and rtol (relative tolerance) parameters.
The kernel bandwidth, which is a free parameter, can be determined using Scikit-Learn's standard cross validation tools as we will soon see.
Let's first show a simple example of replicating the above plot using the Scikit-Learn KernelDensity estimator:
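As a side note (a sketch that is not part of the original text), the same estimator also accepts the other kernel names; for example, `kernel='tophat'` reproduces the block-stacking idea from the earlier plot, assuming the `x` and `x_d` arrays defined above:
```python
# same API, different kernel shape
kde_tophat = KernelDensity(bandwidth=0.5, kernel='tophat').fit(x[:, None])
density_tophat = np.exp(kde_tophat.score_samples(x_d[:, None]))
```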
End of explanation
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import LeaveOneOut
bandwidths = 10 ** np.linspace(-1, 1, 100)
grid = GridSearchCV(KernelDensity(kernel='gaussian'),
{'bandwidth': bandwidths},
cv=LeaveOneOut())
grid.fit(x[:, None]);
Explanation: The result here is normalized such that the area under the curve is equal to 1.
Selecting the bandwidth via cross-validation
The choice of bandwidth within KDE is extremely important to finding a suitable density estimate, and is the knob that controls the bias–variance trade-off in the estimate of density: too narrow a bandwidth leads to a high-variance estimate (i.e., over-fitting), where the presence or absence of a single point makes a large difference. Too wide a bandwidth leads to a high-bias estimate (i.e., under-fitting) where the structure in the data is washed out by the wide kernel.
There is a long history in statistics of methods to quickly estimate the best bandwidth based on rather stringent assumptions about the data: if you look up the KDE implementations in the SciPy and StatsModels packages, for example, you will see implementations based on some of these rules.
In machine learning contexts, we've seen that such hyperparameter tuning often is done empirically via a cross-validation approach.
With this in mind, the KernelDensity estimator in Scikit-Learn is designed such that it can be used directly within the Scikit-Learn's standard grid search tools.
Here we will use GridSearchCV to optimize the bandwidth for the preceding dataset.
Because we are looking at such a small dataset, we will use leave-one-out cross-validation, which minimizes the reduction in training set size for each cross-validation trial:
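As an aside (a sketch, not part of the original notebook), the rule-of-thumb bandwidths mentioned above are what SciPy's gaussian_kde applies by default:
```python
from scipy.stats import gaussian_kde

kde_scott = gaussian_kde(x)                       # bandwidth from Scott's rule of thumb
plt.fill_between(x_d, kde_scott(x_d), alpha=0.5)  # compare against the cross-validated fit
```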
End of explanation
grid.best_params_
Explanation: Now we can find the choice of bandwidth which maximizes the score (which in this case defaults to the log-likelihood):
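If you also want the full validation curve rather than just the winner, the grid's cv_results_ can be plotted (a sketch; `grid` and `bandwidths` are the objects defined above):
```python
scores = grid.cv_results_['mean_test_score']
plt.semilogx(bandwidths, scores)
plt.xlabel('bandwidth')
plt.ylabel('mean log-likelihood')
```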
End of explanation
from sklearn.datasets import fetch_species_distributions
# this step might fail based on permissions and network access
# if in Docker, specify --network=host
# if in docker-compose specify version 3.4 and build -> network: host
data = fetch_species_distributions()
# Get matrices/arrays of species IDs and locations
latlon = np.vstack([data.train['dd lat'],
data.train['dd long']]).T
species = np.array([d.decode('ascii').startswith('micro')
for d in data.train['species']], dtype='int')
Explanation: The optimal bandwidth happens to be very close to what we used in the example plot earlier, where the bandwidth was 1.0 (i.e., the default width of scipy.stats.norm).
Example: KDE on a Sphere
Perhaps the most common use of KDE is in graphically representing distributions of points.
For example, in the Seaborn visualization library (see Visualization With Seaborn), KDE is built in and automatically used to help visualize points in one and two dimensions.
Here we will look at a slightly more sophisticated use of KDE for visualization of distributions.
We will make use of some geographic data that can be loaded with Scikit-Learn: the geographic distributions of recorded observations of two South American mammals, Bradypus variegatus (the Brown-throated Sloth) and Microryzomys minutus (the Forest Small Rice Rat).
With Scikit-Learn, we can fetch this data as follows:
End of explanation
# !conda install -c conda-forge basemap-data-hires -y
# RESTART KERNEL
#Hack to fix missing PROJ4 env var
import os
import conda
conda_file_dir = conda.__file__
conda_dir = conda_file_dir.split('lib')[0]
proj_lib = os.path.join(os.path.join(conda_dir, 'share'), 'proj')
os.environ["PROJ_LIB"] = proj_lib
from mpl_toolkits.basemap import Basemap
from sklearn.datasets.species_distributions import construct_grids
xgrid, ygrid = construct_grids(data)
# plot coastlines with basemap
m = Basemap(projection='cyl', resolution='c',
llcrnrlat=ygrid.min(), urcrnrlat=ygrid.max(),
llcrnrlon=xgrid.min(), urcrnrlon=xgrid.max())
m.drawmapboundary(fill_color='#DDEEFF')
m.fillcontinents(color='#FFEEDD')
m.drawcoastlines(color='gray', zorder=2)
m.drawcountries(color='gray', zorder=2)
# plot locations
m.scatter(latlon[:, 1], latlon[:, 0], zorder=3,
c=species, cmap='rainbow', latlon=True);
Explanation: With this data loaded, we can use the Basemap toolkit (mentioned previously in Geographic Data with Basemap) to plot the observed locations of these two species on the map of South America.
End of explanation
# Set up the data grid for the contour plot
X, Y = np.meshgrid(xgrid[::5], ygrid[::5][::-1])
land_reference = data.coverages[6][::5, ::5]
land_mask = (land_reference > -9999).ravel()
xy = np.vstack([Y.ravel(), X.ravel()]).T
xy = np.radians(xy[land_mask])
# Create two side-by-side plots
fig, ax = plt.subplots(1, 2)
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05)
species_names = ['Bradypus Variegatus', 'Microryzomys Minutus']
cmaps = ['Purples', 'Reds']
for i, axi in enumerate(ax):
axi.set_title(species_names[i])
# plot coastlines with basemap
m = Basemap(projection='cyl', llcrnrlat=Y.min(),
urcrnrlat=Y.max(), llcrnrlon=X.min(),
urcrnrlon=X.max(), resolution='c', ax=axi)
m.drawmapboundary(fill_color='#DDEEFF')
m.drawcoastlines()
m.drawcountries()
# construct a spherical kernel density estimate of the distribution
kde = KernelDensity(bandwidth=0.03, metric='haversine')
kde.fit(np.radians(latlon[species == i]))
# evaluate only on the land: -9999 indicates ocean
Z = np.full(land_mask.shape[0], -9999.0)
Z[land_mask] = np.exp(kde.score_samples(xy))
Z = Z.reshape(X.shape)
# plot contours of the density
levels = np.linspace(0, Z.max(), 25)
axi.contourf(X, Y, Z, levels=levels, cmap=cmaps[i])
Explanation: Unfortunately, this doesn't give a very good idea of the density of the species, because points in the species range may overlap one another.
You may not realize it by looking at this plot, but there are over 1,600 points shown here!
Let's use kernel density estimation to show this distribution in a more interpretable way: as a smooth indication of density on the map.
Because the coordinate system here lies on a spherical surface rather than a flat plane, we will use the haversine distance metric, which will correctly represent distances on a curved surface.
There is a bit of boilerplate code here (one of the disadvantages of the Basemap toolkit) but the meaning of each code block should be clear:
End of explanation
from sklearn.base import BaseEstimator, ClassifierMixin
class KDEClassifier(BaseEstimator, ClassifierMixin):
    """Bayesian generative classification based on KDE

    Parameters
    ----------
    bandwidth : float
        the kernel bandwidth within each class
    kernel : str
        the kernel name, passed to KernelDensity
    """
def __init__(self, bandwidth=1.0, kernel='gaussian'):
self.bandwidth = bandwidth
self.kernel = kernel
def fit(self, X, y):
self.classes_ = np.sort(np.unique(y))
training_sets = [X[y == yi] for yi in self.classes_]
self.models_ = [KernelDensity(bandwidth=self.bandwidth,
kernel=self.kernel).fit(Xi)
for Xi in training_sets]
self.logpriors_ = [np.log(Xi.shape[0] / X.shape[0])
for Xi in training_sets]
return self
def predict_proba(self, X):
logprobs = np.array([model.score_samples(X)
for model in self.models_]).T
result = np.exp(logprobs + self.logpriors_)
return result / result.sum(1, keepdims=True)
def predict(self, X):
return self.classes_[np.argmax(self.predict_proba(X), 1)]
Explanation: Compared to the simple scatter plot we initially used, this visualization paints a much clearer picture of the geographical distribution of observations of these two species.
Example: Not-So-Naive Bayes
This example looks at Bayesian generative classification with KDE, and demonstrates how to use the Scikit-Learn architecture to create a custom estimator.
In In Depth: Naive Bayes Classification, we took a look at naive Bayesian classification, in which we created a simple generative model for each class, and used these models to build a fast classifier.
For Gaussian naive Bayes, the generative model is a simple axis-aligned Gaussian.
With a density estimation algorithm like KDE, we can remove the "naive" element and perform the same classification with a more sophisticated generative model for each class.
It's still Bayesian classification, but it's no longer naive.
The general approach for generative classification is this:
Split the training data by label.
For each set, fit a KDE to obtain a generative model of the data.
This allows you for any observation $x$ and label $y$ to compute a likelihood $P(x~|~y)$.
From the number of examples of each class in the training set, compute the class prior, $P(y)$.
For an unknown point $x$, the posterior probability for each class is $P(y~|~x) \propto P(x~|~y)P(y)$.
The class which maximizes this posterior is the label assigned to the point.
The algorithm is straightforward and intuitive to understand; the more difficult piece is couching it within the Scikit-Learn framework in order to make use of the grid search and cross-validation architecture.
This is the code that implements the algorithm within the Scikit-Learn framework; we will step through it following the code block:
End of explanation
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
digits = load_digits()
bandwidths = 10 ** np.linspace(0, 2, 100)
grid = GridSearchCV(KDEClassifier(), {'bandwidth': bandwidths})
grid.fit(digits.data, digits.target)
# scores = [val.mean_validation_score for val in grid.grid_scores_]
scores = grid.cv_results_['mean_test_score']
Explanation: The anatomy of a custom estimator
Let's step through this code and discuss the essential features:
```python
from sklearn.base import BaseEstimator, ClassifierMixin
class KDEClassifier(BaseEstimator, ClassifierMixin):
    """Bayesian generative classification based on KDE

    Parameters
    ----------
    bandwidth : float
        the kernel bandwidth within each class
    kernel : str
        the kernel name, passed to KernelDensity
    """
```
Each estimator in Scikit-Learn is a class, and it is most convenient for this class to inherit from the BaseEstimator class as well as the appropriate mixin, which provides standard functionality.
For example, among other things, here the BaseEstimator contains the logic necessary to clone/copy an estimator for use in a cross-validation procedure, and ClassifierMixin defines a default score() method used by such routines.
We also provide a doc string, which will be captured by IPython's help functionality (see Help and Documentation in IPython).
Next comes the class initialization method:
```python
    def __init__(self, bandwidth=1.0, kernel='gaussian'):
        self.bandwidth = bandwidth
        self.kernel = kernel
```
This is the actual code that is executed when the object is instantiated with KDEClassifier().
In Scikit-Learn, it is important that initialization contains no operations other than assigning the passed values by name to self.
This is due to the logic contained in BaseEstimator required for cloning and modifying estimators for cross-validation, grid search, and other functions.
Similarly, all arguments to __init__ should be explicit: i.e. *args or **kwargs should be avoided, as they will not be correctly handled within cross-validation routines.
Next comes the fit() method, where we handle training data:
```python
    def fit(self, X, y):
        self.classes_ = np.sort(np.unique(y))
        training_sets = [X[y == yi] for yi in self.classes_]
        self.models_ = [KernelDensity(bandwidth=self.bandwidth,
                                      kernel=self.kernel).fit(Xi)
                        for Xi in training_sets]
        self.logpriors_ = [np.log(Xi.shape[0] / X.shape[0])
                           for Xi in training_sets]
        return self
```
Here we find the unique classes in the training data, train a KernelDensity model for each class, and compute the class priors based on the number of input samples.
Finally, fit() should always return self so that we can chain commands. For example:
```python
label = model.fit(X, y).predict(X)
```
Notice that each persistent result of the fit is stored with a trailing underscore (e.g., self.logpriors_).
This is a convention used in Scikit-Learn so that you can quickly scan the members of an estimator (using IPython's tab completion) and see exactly which members are fit to training data.
Finally, we have the logic for predicting labels on new data:
```python
def predict_proba(self, X):
logprobs = np.vstack([model.score_samples(X)
for model in self.models_]).T
result = np.exp(logprobs + self.logpriors_)
return result / result.sum(1, keepdims=True)
def predict(self, X):
return self.classes_[np.argmax(self.predict_proba(X), 1)]
```
Because this is a probabilistic classifier, we first implement `predict_proba()`, which returns an array of class probabilities of shape `[n_samples, n_classes]`.
Entry `[i, j]` of this array is the posterior probability that sample `i` is a member of class `j`, computed by multiplying the likelihood by the class prior and normalizing.
Finally, the predict() method uses these probabilities and simply returns the class with the largest probability.
Using our custom estimator
Let's try this custom estimator on a problem we have seen before: the classification of hand-written digits.
Here we will load the digits, and compute the cross-validation score for a range of candidate bandwidths using the GridSearchCV meta-estimator (refer back to Hyperparameters and Model Validation):
End of explanation
plt.semilogx(bandwidths, scores)
plt.xlabel('bandwidth')
plt.ylabel('accuracy')
plt.title('KDE Model Performance')
print(grid.best_params_)
print('accuracy =', grid.best_score_)
Explanation: Next we can plot the cross-validation score as a function of bandwidth:
End of explanation
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
cross_val_score(GaussianNB(), digits.data, digits.target).mean()
Explanation: We see that this not-so-naive Bayesian classifier reaches a cross-validation accuracy of just over 96%; this is compared to around 80% for the naive Bayesian classification:
End of explanation |
14,917 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How can I perform regression in sklearn, using SVM and a gaussian kernel? | Problem:
import numpy as np
import pandas as pd
import sklearn
X, y = load_data()
assert type(X) == np.ndarray
assert type(y) == np.ndarray
# fit, then predict X
from sklearn.svm import SVR
svr_rbf = SVR(kernel='rbf')
svr_rbf.fit(X, y)
predict = svr_rbf.predict(X) |
14,918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Demo-iadftscaleproperty" data-toc-modified-id="Demo-iadftscaleproperty-1"><span class="toc-item-num">1 </span>Demo iadftscaleproperty</a></div><div class="lev2 toc-item"><a href="#Description" data-toc-modified-id="Description-11"><span class="toc-item-num">1.1 </span>Description</a></div><div class="lev2 toc-item"><a href="#Image-read,-ROI-selection-and-display" data-toc-modified-id="Image-read,-ROI-selection-and-display-12"><span class="toc-item-num">1.2 </span>Image read, ROI selection and display</a></div><div class="lev2 toc-item"><a href="#DFT-of-the-ROI-image" data-toc-modified-id="DFT-of-the-ROI-image-13"><span class="toc-item-num">1.3 </span>DFT of the ROI image</a></div><div class="lev2 toc-item"><a href="#Image-expansion-(without-interpolation)-and-DFT" data-toc-modified-id="Image-expansion-(without-interpolation)-and-DFT-14"><span class="toc-item-num">1.4 </span>Image expansion (without interpolation) and DFT</a></div><div class="lev2 toc-item"><a href="#DFT-of-the-expansion-without-interpolation" data-toc-modified-id="DFT-of-the-expansion-without-interpolation-15"><span class="toc-item-num">1.5 </span>DFT of the expansion without interpolation</a></div><div class="lev2 toc-item"><a href="#Comparing-in-the-frequency-domain." data-toc-modified-id="Comparing-in-the-frequency-domain.-16"><span class="toc-item-num">1.6 </span>Comparing in the frequency domain.</a></div><div class="lev2 toc-item"><a href="#Comparing-in-the-spatial-domain." data-toc-modified-id="Comparing-in-the-spatial-domain.-17"><span class="toc-item-num">1.7 </span>Comparing in the spatial domain.</a></div>
# Demo iadftscaleproperty
Illustrate the scale property of the Discrete Fourier Transform.
## Description
The scale property of the Discrete Fourier Transform (DFT) is not the same as in the continuous Fourier Transform. In the discrete case the property is the following. If the image is enlarged in such a way that the new pixels have value zero, then its DFT is filled with a replication of the original DFT.
In this demonstration, a small original image is expanded and its DFT is compared to the replicated DFT of the original image. The results should be the same.
## Image read, ROI selection and display
The image is read and a small portion (64x64) is selected.
Step1: DFT of the ROI image
The DFT of the ROI image is taken and its spectrum is displayed
Step2: Image expansion (without interpolation) and DFT
The image is expanded by 4, but filling the new pixels with 0
Step3: DFT of the expansion without interpolation
the resulting DFT is a periodic replication of the original DFT.
Step4: Comparing in the frequency domain.
Alternatively, the original DFT (F) is replicated by 4 in each direction and compared with the DFT of the expanded image. For quantitative comparison, the sum of the absolute errors over all pixels is computed and displayed.
Step5: Comparing in the spatial domain. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
f = mpimg.imread('../data/cameraman.tif')
froi = f[19:19+64,99:99+64] # ROI selection
plt.imshow(f,cmap='gray');
plt.imshow(froi,cmap='gray')
plt.colorbar();
nb = ia.nbshow(2)
nb.nbshow(f,'original',)
nb.nbshow(froi,'pequena')
nb.nbshow()
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Demo-iadftscaleproperty" data-toc-modified-id="Demo-iadftscaleproperty-1"><span class="toc-item-num">1 </span>Demo iadftscaleproperty</a></div><div class="lev2 toc-item"><a href="#Description" data-toc-modified-id="Description-11"><span class="toc-item-num">1.1 </span>Description</a></div><div class="lev2 toc-item"><a href="#Image-read,-ROI-selection-and-display" data-toc-modified-id="Image-read,-ROI-selection-and-display-12"><span class="toc-item-num">1.2 </span>Image read, ROI selection and display</a></div><div class="lev2 toc-item"><a href="#DFT-of-the-ROI-image" data-toc-modified-id="DFT-of-the-ROI-image-13"><span class="toc-item-num">1.3 </span>DFT of the ROI image</a></div><div class="lev2 toc-item"><a href="#Image-expansion-(without-interpolation)-and-DFT" data-toc-modified-id="Image-expansion-(without-interpolation)-and-DFT-14"><span class="toc-item-num">1.4 </span>Image expansion (without interpolation) and DFT</a></div><div class="lev2 toc-item"><a href="#DFT-of-the-expansion-without-interpolation" data-toc-modified-id="DFT-of-the-expansion-without-interpolation-15"><span class="toc-item-num">1.5 </span>DFT of the expansion without interpolation</a></div><div class="lev2 toc-item"><a href="#Comparing-in-the-frequency-domain." data-toc-modified-id="Comparing-in-the-frequency-domain.-16"><span class="toc-item-num">1.6 </span>Comparing in the frequency domain.</a></div><div class="lev2 toc-item"><a href="#Comparing-in-the-spatial-domain." data-toc-modified-id="Comparing-in-the-spatial-domain.-17"><span class="toc-item-num">1.7 </span>Comparing in the spatial domain.</a></div>
# Demo iadftscaleproperty
Illustrate the scale property of the Discrete Fourier Transform.
## Description
The scale property of the Discrete Fourier Transform (DFT) is not the same as in the continuous Fourier Transform. In the discrete case the property is the following. If the image is enlarged in such a way that the new pixels have value zero, then its DFT is filled with a replication of the original DFT.
In this demonstration, a small original image is expanded and its DFT is compared to the replicated DFT of the original image. The results should be the same.
## Image read, ROI selection and display
The image is read and a small portion (64x64) is selected.
End of explanation
fd = froi.astype(float)
F = np.fft.fft2(fd) # F is the DFT of f
#F = ia.dft(fd)
ia.adshow(froi)
ia.adshow(ia.dftview(F))
Explanation: DFT of the ROI image
The DFT of the ROI image is taken and its spectrum is displayed
End of explanation
H,W = froi.shape
fx4 = np.zeros((4*H,4*W),'uint8') # size is 4 times larger
fx4[::4,::4] = froi # filling the expanded image
ia.adshow(froi)
ia.adshow(fx4)
print(fx4.mean(),froi.mean())
Explanation: Image expansion (without interpolation) and DFT
The image is expanded by 4, but filling the new pixels with 0
End of explanation
fdx4 = fx4.astype(float)
Fx4 = np.fft.fft2(fdx4) # Fx4 is the DFT of fx4 (expanded f)
ia.adshow(ia.dftview(F))
ia.adshow(ia.dftview(Fx4))
Explanation: DFT of the expansion without interpolation
the resulting DFT is a periodic replication of the original DFT.
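A short sketch of why this happens (the notation below is introduced only for this note): if the expanded image $g$, of size $mH \times mW$, is defined by $g(mx, my) = f(x, y)$ and zero elsewhere, then
$$ G(u,v) = \sum_{x=0}^{H-1}\sum_{y=0}^{W-1} f(x,y)\, e^{-j2\pi\left(\frac{u\,(mx)}{mH} + \frac{v\,(my)}{mW}\right)} = F(u \bmod H,\; v \bmod W), $$
so $G$ is simply $F$ replicated $m$ times along each axis (here $m = 4$).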
End of explanation
aux = np.concatenate((F,F,F,F))
FFx4 = np.concatenate((aux,aux,aux,aux), 1) # replicate the DFT of f
ia.adshow(ia.dftview(FFx4))
diff = abs(FFx4 - Fx4).sum() # compare the replicated DFT with DFT of expanded f
print(diff) # print the total absolute error
Explanation: Comparing in the frequency domain.
Alternatively, the original DFT (F) is replicated by 4 in each direction and compared with the DFT of the expanded image. For quantitative comparison, the sum of the absolute errors over all pixels is computed and displayed.
End of explanation
ffdx4 = np.fft.ifft2(FFx4)
fimag = ffdx4.imag
print(fimag.sum())
ffdx4 = np.floor(0.5 + ffdx4.real) # round
ia.adshow(ia.normalize(ffdx4.astype('int32')))
error = abs(fdx4 - ffdx4).sum()
print(error)
Explanation: Comparing in the spatial domain.
End of explanation |
14,919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced features tutorial
The following tutorials highlight advanced functionality and provide in-depth
material on ensemble APIs.
=============================== ==============================================
Tutorial Content
=============================== ==============================================
propa-tutorial Propagate feature input features through layers
\ to allow several layers to see the same input.
proba-tutorial Build layers that output class probabilities from each base
\ learner so that the next layer or meta estimator learns
\ from probability distributions.
subsemble-tutorial Learn homogenous partitions of feature space
\ that maximize base learner's performance on each partition.
sequential-tutorial How to build ensembles with different layer classes
memory-tutorial Avoid loading data into the parent process by specifying a
\ file path to a memmaped array or a csv file.
model-selection-tutorial Build transformers that replicate layers in ensembles for
\ model selection of higher-order layers and / or meta learners.
=============================== ==============================================
We use the same preliminary settings as in the
getting started <getting-started> section.
Step2: Propagating input features
When stacking several layers of base learners, the variance of the input
will typically get smaller as learners get better and better at predicting
the output and the remaining errors become increasingly difficult to correct
for. This multicollinearity can significantly limit the ability of the
ensemble to improve upon the best score of the subsequent layer as there is too
little variation in predictions for the ensemble to learn useful combinations.
One way to increase this variation is to propagate features from the original
input and / or earlier layers. To achieve this in ML-Ensemble, we use the
propagate_features attribute. To see how this works, let's compare
a three-layer ensemble with and without feature propagation.
Step3: Without feature propagation, the meta learner will learn from the predictions
of the penultimate layers
Step4: When we propagate features, some (or all) of the input seen by one layer is
passed along to the next layer. For instance, we can propagate some or all of
the input array through our two intermediate layers to the meta learner input
of the meta learner
Step6: In this scenario, the meta learner will see both the predictions made by the
penultimate layer, as well as the second and fourth feature of the original
input. By propagating
features, the issue of multicollinearity in deep ensembles can be mitigated.
In particular, it can give the meta learner greater opportunity to identify
neighborhoods in the original feature space where base learners struggle. We
can get an idea of how feature propagation works with our toy example. First,
we need a simple ensemble evaluation routine. In our case, propagating the
original features through two layers of the same
library of base learners gives a dramatic increase in performance on the test
set
Step7: .. py
Step8: In the above example, the two first features of the original input data
will be propagated through both layers, but the second layer will not be
trained on it. Instead, it will only see the predictions made by the base
learners in the first layer.
Step10: Probabilistic ensemble learning
When the target to predict is a class label, it can often be beneficial to
let higher-order layers or the meta learner learn from class probabilities,
as opposed to the predicted class. Scikit-learn classifiers can return a
matrix that, for each observation in the test set, gives the probability that
the observation belongs to a given class. While we are ultimately
interested in class membership, this information is much richer than just
feeding the predicted class to the meta learner. In essence, using class
probabilities allow the meta learner to weigh in not just the predicted
class label (the highest probability), but also with what confidence each
estimator makes the prediction, and how estimators consider the alternative.
First, let us set a benchmark ensemble performance when learning is by
predicted class membership.
Step11: As in the ensemble guide <ensemble-guide>, we fit on the first half,
and test on the remainder.
Step12: Now, to enable probabilistic learning, we set proba=True in the add
method for all layers except the final meta learner layer.
Step14: In this case, using probabilities has a drastic effect on predictive
performance, increasing some 40 percentage points. For an applied example
see the ensemble used to beat the Scikit-learn mnist benchmark.
Advanced Subsemble techniques
.. currentmodule
Step16: During training, the base learners are copied to each partition,
so the output of each layer gets multiplied by the number of partitions. In this
case, we have 2 base learners for 3 partitions, giving 6 prediction features.
By creating partitions, subsembles scale significantly better <bench>
than the
Step17: The Iris dataset can actually separate the classes perfectly with a KMeans
estimator which leads to zero label variation in each partition. For that
reason the above code fits the KMeans estimator on only the first two
columns. But this approach is not a very good way of doing it since we lose
the rest of the data when fitting the estimators too. Instead, we could
customize the
partitioning estimator to make the subset selection itself. For instance,
we can use Scikit-learn's
Step20: In general, you may need to wrap an estimator around a custom class to modify
its output to generate good partitions. For instance, in regression problems,
the output of a supervised estimator needs to be binarized to give a discrete
number of partitions. Here's a minimalist way of wrapping a Scikit-learn
estimator
Step22: Importantly, your partition estimator should implement a get_params
method to avoid unexpected errors. If you don't, you may encounter
a NotFittedError when calling predict.
To summarize the functionality in one example,
let's implement a simple (but rather useless) partition estimator that splits
the data in half based on the sum of the features.
Step23: A final word of caution. When implementing custom estimators from scratch, some
care needs to be taken if you plan on copying the Subsemble. It is advised that
the estimator inherits the
Step24: The below table maps the types of layers available in the
Step25: In this case, the multi-layer
Step26: If you are following the examples on your machine,
don't forget to remove the temporary directory.
Step27: Ensemble model selection
Ensembles benefit from a diversity of base learners, but often it is not clear
how to parametrize the base learners. In fact, combining base learners with
lower predictive power can often yield a superior ensemble. This hinges on the
errors made by the base learners being relatively uncorrelated, thus allowing
a meta estimator to learn how to overcome each model's weakness. But with
highly correlated errors, there is little for the ensemble to learn from.
To fully exploit the learning capacity in an ensemble, it is beneficial to
conduct careful hyper parameter tuning, treating the base learner's parameters
as the parameters of the ensemble. By far the most critical part of the
ensemble is the meta learner, but selecting an appropriate meta learner can be
an arduous task if the entire ensemble has to be evaluated each time.
.. py
Step28: We can now compare the performance of the best fit for each candidate
meta learner. | Python Code:
import numpy as np
from pandas import DataFrame
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_iris
seed = 2017
np.random.seed(seed)
data = load_iris()
idx = np.random.permutation(150)
X = data.data[idx]
y = data.target[idx]
Explanation: Advanced features tutorial
The following tutorials highlight advanced functionality and provide in-depth
material on ensemble APIs.
=============================== ==============================================
Tutorial Content
=============================== ==============================================
propa-tutorial Propagate feature input features through layers
\ to allow several layers to see the same input.
proba-tutorial Build layers that output class probabilities from each base
\ learner so that the next layer or meta estimator learns
\ from probability distributions.
subsemble-tutorial Learn homogenous partitions of feature space
\ that maximize base learner's performance on each partition.
sequential-tutorial How to build ensembles with different layer classes
memory-tutorial Avoid loading data into the parent process by specifying a
\ file path to a memmaped array or a csv file.
model-selection-tutorial Build transformers that replicate layers in ensembles for
\ model selection of higher-order layers and / or meta learners.
=============================== ==============================================
We use the same preliminary settings as in the
getting started <getting-started> section.
End of explanation
from mlens.ensemble import SuperLearner
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
def build_ensemble(incl_meta, propagate_features=None):
    """Return an ensemble."""
if propagate_features:
n = len(propagate_features)
propagate_features_1 = propagate_features
propagate_features_2 = [i for i in range(n)]
else:
propagate_features_1 = propagate_features_2 = None
estimators = [RandomForestClassifier(random_state=seed), SVC()]
ensemble = SuperLearner()
ensemble.add(estimators, propagate_features=propagate_features_1)
ensemble.add(estimators, propagate_features=propagate_features_2)
if incl_meta:
ensemble.add_meta(LogisticRegression())
return ensemble
Explanation: Propagating input features
When stacking several layers of base learners, the variance of the input
will typically get smaller as learners get better and better at predicting
the output and the remaining errors become increasingly difficult to correct
for. This multicollinearity can significantly limit the ability of the
ensemble to improve upon the best score of the subsequent layer as there is too
little variation in predictions for the ensemble to learn useful combinations.
One way to increase this variation is to propagate features from the original
input and / or earlier layers. To achieve this in ML-Ensemble, we use the
propagate_features attribute. To see how this works, let's compare
a three-layer ensemble with and without feature propagation.
End of explanation
base = build_ensemble(False)
base.fit(X, y)
pred = base.predict(X)[:5]
print("Input to meta learner :\n %r" % pred)
Explanation: Without feature propagation, the meta learner will learn from the predictions
of the penultimate layers:
End of explanation
base = build_ensemble(False, [1, 3])
base.fit(X, y)
pred = base.predict(X)[:5]
print("Input to meta learner :\n %r" % pred)
Explanation: When we propagate features, some (or all) of the input seen by one layer is
passed along to the next layer. For instance, we can propagate some or all of
the input array through our two intermediate layers to the meta learner input
of the meta learner:
End of explanation
def evaluate_ensemble(propagate_features):
    """Wrapper for ensemble evaluation."""
ens = build_ensemble(True, propagate_features)
ens.fit(X[:75], y[:75])
pred = ens.predict(X[75:])
return accuracy_score(pred, y[75:])
score_no_prep = evaluate_ensemble(None)
score_prep = evaluate_ensemble([0, 1, 2, 3])
print("Test set score no feature propagation : %.3f" % score_no_prep)
print("Test set score with feature propagation: %.3f" % score_prep)
Explanation: In this scenario, the meta learner will see both the predictions made by the
penultimate layer, as well as the second and fourth feature of the original
input. By propagating
features, the issue of multicollinearity in deep ensembles can be mitigated.
In particular, it can give the meta learner greater opportunity to identify
neighborhoods in the original feature space where base learners struggle. We
can get an idea of how feature propagation works with our toy example. First,
we need a simple ensemble evaluation routine. In our case, propagating the
original features through two layers of the same
library of base learners gives a dramatic increase in performance on the test
set:
End of explanation
from mlens.preprocessing import Subset
estimators = [RandomForestClassifier(random_state=seed), SVC()]
ensemble = SuperLearner()
# Initial layer, propagate as before
ensemble.add(estimators, propagate_features=[0, 1])
# Intermediate layer, keep propagating, but add a preprocessing
# pipeline that selects a subset of the input
ensemble.add(estimators,
preprocessing=[Subset([2, 3])],
propagate_features=[0, 1])
Explanation: .. py:currentmodule:: mlens.preprocessing
By combining feature propagation with the :class:Subset transformer, you can
propagate the feature through several layers without any of the base estimators
in those layers seeing the propagated features. This can be desirable if you
want to propagate the input features to the meta learner without intermediate
base learners always having access to the original input data. In this case,
we specify propagation as above, but add a preprocessing pipeline to
intermediate layers:
End of explanation
ensemble.fit(X, y)
n = list(ensemble.layer_2.learners[0].learner)[0].estimator.feature_importances_.shape[0]
m = ensemble.predict(X).shape[1]
print("Num features seen by estimators in intermediate layer: %i" % n)
print("Num features in the output array of the intermediate layer: %i" % m)
Explanation: In the above example, the two first features of the original input data
will be propagated through both layers, but the second layer will not be
trained on it. Instead, it will only see the predictions made by the base
learners in the first layer.
End of explanation
from mlens.ensemble import BlendEnsemble
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
def build_ensemble(proba, **kwargs):
    """Return an ensemble."""
estimators = [RandomForestClassifier(random_state=seed),
SVC(probability=proba)]
ensemble = BlendEnsemble(**kwargs)
ensemble.add(estimators, proba=proba) # Specify 'proba' here
ensemble.add_meta(LogisticRegression())
return ensemble
Explanation: Probabilistic ensemble learning
When the target to predict is a class label, it can often be beneficial to
let higher-order layers or the meta learner learn from class probabilities,
as opposed to the predicted class. Scikit-learn classifiers can return a
matrix that, for each observation in the test set, gives the probability that
the observation belongs to a given class. While we are ultimately
interested in class membership, this information is much richer than just
feeding the predicted class to the meta learner. In essence, using class
probabilities allow the meta learner to weigh in not just the predicted
class label (the highest probability), but also with what confidence each
estimator makes the prediction, and how estimators consider the alternative.
First, let us set a benchmark ensemble performance when learning is by
predicted class membership.
End of explanation
ensemble = build_ensemble(proba=False)
ensemble.fit(X[:75], y[:75])
preds = ensemble.predict(X[75:])
print("Accuracy:\n%r" % accuracy_score(preds, y[75:]))
Explanation: As in the ensemble guide <ensemble-guide>, we fit on the first half,
and test on the remainder.
End of explanation
ensemble = build_ensemble(proba=True)
ensemble.fit(X[:75], y[:75])
preds = ensemble.predict(X[75:])
print("\nAccuracy:\n%r" % accuracy_score(preds, y[75:]))
Explanation: Now, to enable probabilistic learning, we set proba=True in the add
method for all layers except the final meta learner layer.
End of explanation
from mlens.ensemble import Subsemble
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
def build_subsemble():
    """Build a subsemble with random partitions"""
sub = Subsemble(partitions=3, folds=2)
sub.add([SVC(), LogisticRegression()])
return sub
sub= build_subsemble()
sub.fit(X, y)
s = sub.predict(X[:10]).shape[1]
print("No. prediction features: %i " % s)
Explanation: In this case, using probabilities has a drastic effect on predictive
performance, increasing some 40 percentage points. For an applied example
see the ensemble used to beat the Scikit-learn mnist benchmark.
Advanced Subsemble techniques
.. currentmodule:: mlens.ensemble
Subsembles leverage the idea that neighborhoods of feature space have a
specific local structure. When we fit an estimator across all of feature space,
it is very hard to capture several such local properties. Subsembles partition
the feature space and fit each base learner to each partition, thereby
allowing base learners to optimize locally. Instead, the task of generalizing
across neighborhoods is left to the meta learner. This strategy can be very
powerful when the local structure first needs to be extracted, before an
estimator can learn to generalize. Suppose you want to learn the probability
distribution of some variable $y$. Often, the true distribution is
multi-modal, which is an extremely hard problem. In fact, most
machine learning algorithms, especially with convex optimization objectives, are
ill equipped to solve this problem. Subsembles can overcome this issue allowing
base estimators to fit one mode of the distribution at a time, which yields a
better representation of the distribution and greatly facilitates the learning
problem of the meta learner.
.. py:currentmodule:: mlens.ensemble
By default, the :class:Subsemble class partitions the dataset randomly.
Note however that partitions are created on the data "as is", so if the ordering
of observations is not random, neither will the partitioning be. For this
reason, it is recommended to shuffle the data (e.g. via the shuffle
option at initialization). To build a subsemble with random partitions, the
only parameter needed is the number of partitions when instantiating
the :class:Subsemble.
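For example, shuffling can be requested at instantiation (a sketch; the shuffle option is the one mentioned above, while random_state and the estimator list are illustrative assumptions):
```python
sub = Subsemble(partitions=3, folds=2, shuffle=True, random_state=seed)
sub.add([SVC(), LogisticRegression()])
```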
End of explanation
from sklearn.cluster import KMeans
def build_clustered_subsemble(estimator):
    """Build a subsemble with clustered partitions"""
sub = Subsemble(partitions=2,
partition_estimator=estimator,
folds=2, verbose=2)
sub.add([SVC(), LogisticRegression()])
sub.add_meta(SVC())
return sub
sub = build_clustered_subsemble(KMeans(2))
sub.fit(X[:, [0, 1]], y)
Explanation: During training, the base learners are copied to each partition,
so the output of each layer gets multiplied by the number of partitions. In this
case, we have 2 base learners for 3 partitions, giving 6 prediction features.
By creating partitions, subsembles scale significantly better <bench>
than the
:class:SuperLearner, but in contrast to :class:BlendEnsemble,
the full training data is leveraged during training. But randomly partitioning
the data does not, however, exploit the full advantage of locality, since it is
only by luck that we happen to create such partitions. A better way is to
learn how to best partition the data. We can either use
unsupervised algorithms to generate clusters, or supervised estimators and
create partitions based on their predictions. In ML-Ensemble, this is
achieved by passing an estimator as partition_estimator. This estimator
can differ between layers.
.. currentmodule:: mlens.index
Very few limitations are imposed on the estimator: you can specify whether
you want to fit it before generating partitions, whether to use
labels in the partitioning, and what method to call to generate the
partitions. See :class:ClusteredSubsetIndex for the full documentation.
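Concretely, these choices are passed alongside the partition estimator in the add call (a sketch; fit_estimator and attr are the only extra arguments used here, and both appear later in this tutorial):
```python
sub = Subsemble(partitions=2, folds=2)
sub.add([SVC(), LogisticRegression()],
        partition_estimator=KMeans(2),
        fit_estimator=True,   # fit the partitioner before generating partitions
        attr='predict')       # method called on the partitioner to assign partitions
```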
This level of generality does impose some
responsibility on the user. In particular, it is up to the user to ensure that
sensible partitions are created. Problems to watch out for are too small
partitions (too many clusters, too uneven cluster sizes) and clusters with too
little variation: for instance with only a single class label in the entire
partition, base learners have nothing to learn. Let's see how to do this in
practice. For instance, we can use an unsupervised K-Means
clustering estimator to partition the data, like so:
End of explanation
from mlens.preprocessing import Subset
from sklearn.pipeline import make_pipeline
# This partition estimator is equivalent to the one used above
pe = make_pipeline(Subset([0, 1]), KMeans(2))
sub = build_clustered_subsemble(pe)
sub.fit(X, y)
Explanation: The Iris dataset can actually separate the classes perfectly with a KMeans
estimator which leads to zero label variation in each partition. For that
reason the above code fits the KMeans estimator on only the first two
columns. But this approach is not a very good way of doing it since we lose
the rest of the data when fitting the estimators too. Instead, we could
customize the
partitioning estimator to make the subset selection itself. For instance,
we can use Scikit-learn's :class:sklearn.pipeline.Pipeline
class to put a dimensionality reduction transformer before the partitioning
estimator, such as a :class:sklearn.decomposition.PCA, or the
:class:mlens.preprocessing.Subset transformer to drop some features before
estimation. We then use this pipeline as our partition estimator and fit
the subsemble on all features.
End of explanation
from sklearn.linear_model import LinearRegression
class MyClass(LinearRegression):
def __init__(self, **kwargs):
super(MyClass, self).__init__(**kwargs)
def fit(self, X, y):
        """Fit estimator."""
super(MyClass, self).fit(X, y)
return self
def predict(self, X):
        """Generate partition"""
p = super(MyClass, self).predict(X)
return 1 * (p > p.mean())
Explanation: In general, you may need to wrap an estimator around a custom class to modify
its output to generate good partitions. For instance, in regression problems,
the output of a supervised estimator needs to be binarized to give a discrete
number of partitions. Here's a minimalist way of wrapping a Scikit-learn
estimator:
End of explanation
class SimplePartitioner():
def __init__(self):
pass
def our_custom_function(self, X, y=None):
        """Split the data in half based on the sum of features"""
# Labels should be numerical
return 1 * (X.sum(axis=1) > X.sum(axis=1).mean())
def get_params(self, deep=False):
return {}
# Note that the number of partitions the estimator creates *must* match the
# ``partitions`` argument passed to the subsemble.
sub = Subsemble(partitions=2, folds=3, verbose=1)
sub.add([SVC(), LogisticRegression()],
partition_estimator=SimplePartitioner(),
fit_estimator=False,
attr="our_custom_function")
sub.fit(X, y)
Explanation: Importantly, your partition estimator should implement a get_params
method to avoid unexpected errors. If you don't, you may encounter
a NotFittedError when calling predict.
To summarize the functionality in one example,
let's implement a simple (but rather useless) partition estimator that splits
the data in half based on the sum of the features.
End of explanation
from mlens.ensemble import SequentialEnsemble
ensemble = SequentialEnsemble()
# The initial layer is a blended layer, same as a layer in the BlendEnsemble
ensemble.add('blend',
[SVC(), RandomForestClassifier(random_state=seed)])
# The second layer is a stacked layer, same as a layer of the SuperLearner
ensemble.add('stack', [SVC(), RandomForestClassifier(random_state=seed)])
# The third layer is a subsembled layer, same as a layer of the Subsemble
ensemble.add('subsemble', [SVC(), RandomForestClassifier(random_state=seed)])
# The meta estimator is added as in any other ensemble
ensemble.add_meta(SVC())
Explanation: A final word of caution. When implementing custom estimators from scratch, some
care needs to be taken if you plan on copying the Subsemble. It is advised that
the estimator inherits the :class:sklearn.base.BaseEstimator class to
provide a Scikit-learn compatible interface. For further information,
see the API documentation of the :class:Subsemble
and :class:mlens.base.indexer.ClusteredSubsetIndex.
For an example of using clustered subsemble, see the subsemble
used to beat the Scikit-learn mnist benchmark.
General multi-layer ensemble learning
.. currentmodule:: mlens.ensemble
To alternate between the type of layer with each add call,
the :class:SequentialEnsemble class can be used to specify what type of
layer (i.e. stacked, blended, subsemble-style) to add. This is particularly
powerful if facing a large dataset, as the first layer can use a fast approach
such as blending, while subsequent layers fitted on the remaining data can
use more computationally intensive approaches.
End of explanation
preds = ensemble.fit(X[:75], y[:75]).predict(X[75:])
accuracy_score(preds, y[75:])
Explanation: The below table maps the types of layers available in the :class:SequentialEnsemble with the corresponding ensemble.
=================== ============================
Ensemble equivalent SequentialEnsemble parameter
=================== ============================
'SuperLearner' 'stack'
'BlendEnsemble' 'blend'
'Subsemble' 'subsemble'
=================== ============================
Once instantiated, the :class:SequentialEnsemble behaves as expected:
End of explanation
import os
import tempfile
# We create a temporary folder in the current working directory
temp = tempfile.TemporaryDirectory(dir=os.getcwd())
# Dump the X and y array in the temporary directory, here as csv files
fx = os.path.join(temp.name, 'X.csv')
fy = os.path.join(temp.name, 'y.csv')
np.savetxt(fx, X)
np.savetxt(fy, y)
# We can now fit any ensemble simply by passing the file pointers ``fx`` and
# ``fy``. Remember to set ``array_check=0``.
ensemble = build_ensemble(False, array_check=0)
ensemble.fit(fx, fy)
preds = ensemble.predict(fx)
print(preds[:10])
Explanation: In this case, the multi-layer :class:SequentialEnsemble with an initial
blended layer and a second stacked layer achieves similar performance to the
:class:BlendEnsemble with probabilistic learning. Note that we could have
made any of the layers probabilistic by setting proba=True.
Passing file paths as data input
With large datasets, it can be expensive to load the full data into memory as
a numpy array. Since ML-Ensemble uses a memmaped cache, the need to keep the
full array in memory can be entirely circumvented by passing a file path as
entry to X and y. There are two important things to note when doing
this.
First, ML-Ensemble deploys Scikit-learn's array checks, and passing a
string will cause an error. To avoid this, the ensemble must be initialized
with array_check=0, in which case there will be no checks on the array.
The user should make certain that the data is appropriate for estimation
by converting missing values and infinities to numerical representations,
ensuring that all features are numerical, and removing any headers,
index columns, and footers.
Second, ML-Ensemble expects the file to be either a csv,
an npy or mmap file and will treat these differently.
- If a path to a ``csv`` file is passed, the ensemble will first **load**
the file into memory, then dump it into the cache, before discarding the
file from memory by replacing it with a pointer to the memmaped file.
The loading module used for the ``csv``
file is the :func:`numpy.loadtxt` function.
- If a path to a ``npy`` file is passed, a memmaped pointer to it will be
loaded.
- If a path to a ``mmap`` file is passed, it will be used as the memmaped
input array for estimation.
End of explanation
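# A sketch of the ``npy`` variant described above: dump the arrays with numpy.save
# and pass those file paths instead. The file names are illustrative; remember
# ``array_check=0`` here as well.
fx_npy = os.path.join(temp.name, 'X.npy')
fy_npy = os.path.join(temp.name, 'y.npy')
np.save(fx_npy, X)
np.save(fy_npy, y)
ensemble_npy = build_ensemble(False, array_check=0)
ensemble_npy.fit(fx_npy, fy_npy)
print(ensemble_npy.predict(fx_npy)[:10])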
try:
temp.cleanup()
del temp
except OSError:
# This can fail on Windows
pass
Explanation: If you are following the examples on your machine,
don't forget to remove the temporary directory.
End of explanation
from mlens.model_selection import Evaluator
from mlens.metrics import make_scorer
from scipy.stats import uniform, randint
# Set up two competing ensemble bases as preprocessing transformers:
# one stacked ensemble base with proba and one without
base_learners = [RandomForestClassifier(random_state=seed),
SVC(probability=True)]
proba_transformer = SequentialEnsemble(
model_selection=True, random_state=seed).add(
'blend', base_learners, proba=True)
class_transformer = SequentialEnsemble(
model_selection=True, random_state=seed).add(
'blend', base_learners, proba=False)
# Set up a preprocessing mapping
# Each pipeline in this map is fitted once on each fold before
# evaluating candidate meta learners.
preprocessing = {'proba': [('layer-1', proba_transformer)],
'class': [('layer-1', class_transformer)]}
# Set up candidate meta learners
# We can specify a dictionary if we wish to try different candidates on
# different cases, or a list if all estimators should be run on all
# preprocessing pipelines (as in this example)
meta_learners = [SVC(), ('rf', RandomForestClassifier(random_state=seed))]
# Set parameter mapping
# Here, we differentiate distributions between cases for the random forest
params = {'svc': {'C': uniform(0, 10)},
'class.rf': {'max_depth': randint(2, 10)},
'proba.rf': {'max_depth': randint(2, 10),
'max_features': uniform(0.5, 0.5)}
}
scorer = make_scorer(accuracy_score)
evaluator = Evaluator(scorer=scorer, random_state=seed, cv=2)
evaluator.fit(X, y, meta_learners, params, preprocessing=preprocessing, n_iter=2)
Explanation: Ensemble model selection
Ensembles benefit from a diversity of base learners, but often it is not clear
how to parametrize the base learners. In fact, combining base learners with
lower predictive power can often yield a superior ensemble. This hinges on the
errors made by the base learners being relatively uncorrelated, thus allowing
a meta estimator to learn how to overcome each model's weakness. But with
highly correlated errors, there is little for the ensemble to learn from.
To fully exploit the learning capacity in an ensemble, it is beneficial to
conduct careful hyper parameter tuning, treating the base learner's parameters
as the parameters of the ensemble. By far the most critical part of the
ensemble is the meta learner, but selecting an appropriate meta learner can be
an arduous task if the entire ensemble has to be evaluated each time.
.. py:currentmodule:: mlens.preprocessing
The task can be made considerably easier by treating the lower layers of an
ensemble as a preprocessing pipeline, and performing model selection on
higher-order layers or meta learners. To use an ensemble for this purpose,
set the model_selection parameter to True before fitting. This will
modify how the transform method behaves, to ensure predict is called
on test folds.
<div class="alert alert-danger"><h4>Warning</h4><p>Remember to turn model selection off when done.</p></div>
End of explanation
print("Results:\n%s" % evaluator.results)
Explanation: We can now compare the performance of the best fit for each candidate
meta learner.
End of explanation |
14,920 | Given the following text description, write Python code to implement the functionality described.
Description:
Task
We are given two strings s and c. You have to delete all the characters in s that are equal to any character in c,
then check if the result string is a palindrome.
A string is called palindrome if it reads the same backward as forward.
You should return a tuple containing the result string and True/False for the check.
Example
For s = "abcde", c = "ae", the result should be ('bcd',False)
For s = "abcdef", c = "b" the result should be ('acdef',False)
For s = "abcdedcba", c = "ab", the result should be ('cdedc',True)
| Python Code:
def reverse_delete(s,c):
s = ''.join([char for char in s if char not in c])
return (s,s[::-1] == s) |
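# Quick checks against the examples given in the task description.
print(reverse_delete("abcde", "ae")) # ('bcd', False)
print(reverse_delete("abcdef", "b")) # ('acdef', False)
print(reverse_delete("abcdedcba", "ab")) # ('cdedc', True)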
14,921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vector Laplacian in curvilinear coordinates
The vector Laplacian is
$$
\nabla^2 \vec{u} = \nabla \cdot \nabla \vec{u}
$$
A vector identity gives the vector Laplacian as
$$
\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}
$$
We will check if this identity holds for shenfun using both cylindrical and spherical coordinates.
For reference, the vector Laplacian is given here
Cylinder coordinates are mapped to Cartesian through
$$
\begin{align}
x &= r \cos \theta \\
y &= r \sin \theta \\
z &= z
\end{align}
$$
and we use a domain $(r, \theta, z) \in [0, 1] \times [0, 2 \pi] \times [0, 2 \pi]$.
Spherical coordinates are mapped as
$$
\begin{align}
x &= r \sin(\theta) \cos(\phi)\\
y &= r \sin(\theta) \sin(\phi)\\
z &= r \cos(\theta)
\end{align}
$$
for a domain $(r, \theta, \phi) \in [0, 1] \times [0, \pi] \times [0, 2 \pi]$.
This is all we need to know for using these coordinate systems with shenfun.
Cylinder coordinates
Step1: The vector Laplacian can now be found as
Step2: We can look at du using the following
Step3: Note that the basis vectors $\mathbf{b}_i$ are not unit vectors (i.e., of length 1). For this reason the equation does not look exactly like the one here. The basis vectors are
Step4: Notice that $|\mathbf{b}_{\theta}|=r$. Shenfun can use either non-normalized covariant basis vectors or normalized (physical) basis vectors of length 1 for describing all vectors and higher order tensors. The vector components shown are contravariant and as such use a superscript $u^{\theta}$ and not a subscript $u_{\theta}$. Note that for orthogonal coordinates the scaled unit vectors are the same for either contra- or covariant basis vectors, and as such this distinction is not necessary here. The distinction is only required for non-orthogonal coordinate systems. Shenfun can handle both orthogonal and non-orthogonal coordinates, but requires that the equations to be solved are separable.
Now check the vector identity
$$
\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}
$$
Step5: We see that the order is different, but the vector is actually identical to the previous one (du). To show that they are equal we can subtract one from the other and simplify.
Step6: If you are not convinced we can assemble some matrices and check that du and dv behave the same way.
Step7: A0 and A1 now contains lists of tensor product matrices, because the vector identities contain a lot of different terms (as we have seen above). To check that A0 and A1 are identical, we test the vector product of the matrices with a random vector. Since we are working with vectors we use a BlockMatrix for the combined tensor product matrices.
Step8: Spherical coordinates
We now turn to spherical coordinates and run the same test.
Step9: This proves that for shenfun the vector identity $\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}$ holds true also for spherical coordinates. | Python Code:
from shenfun import *
from IPython.display import Math
import sympy as sp
config['basisvectors'] = 'normal'  # or 'covariant'
r, theta, z = psi = sp.symbols('x,y,z', real=True, positive=True)
rv = (r*sp.cos(theta), r*sp.sin(theta), z)
N = 10
F0 = FunctionSpace(N, 'F', dtype='d')
F1 = FunctionSpace(N, 'F', dtype='D')
L = FunctionSpace(N, 'L', domain=(0, 1))
T = TensorProductSpace(comm, (L, F1, F0), coordinates=(psi, rv))
V = VectorSpace(T)
u = TrialFunction(V)
du = div(u)
Math(du.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
du.tosympy(basis=(r*sp.cos(theta), sp.sin(theta), z), psi=psi)
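# Sanity check (a small illustrative sketch): the cylindrical map above should
# satisfy x**2 + y**2 == r**2, so sympy should simplify the difference to zero.
print(sp.simplify(rv[0]**2 + rv[1]**2 - r**2)) # expected: 0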
Explanation: Vector Laplacian in curvilinear coordinates
The vector Laplacian is
$$
\nabla^2 \vec{u} = \nabla \cdot \nabla \vec{u}
$$
A vector identity gives the vector Laplacian as
$$
\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}
$$
We will check if this identity holds for shenfun using both cylindrical and spherical coordinates.
For reference, the vector Laplacian is given here
Cylinder coordinates are mapped to Cartesian through
$$
\begin{align}
x &= r \cos \theta \\
y &= r \sin \theta \\
z &= z
\end{align}
$$
and we use a domain $(r, \theta, z) \in [0, 1] \times [0, 2 \pi] \times [0, 2 \pi]$.
Spherical coordinates are mapped as
$$
\begin{align}
x &= r \sin(\theta) \cos(\phi)\\
y &= r \sin(\theta) \sin(\phi)\\
z &= r \cos(\theta)
\end{align}
$$
for a domain $(r, \theta, \phi) \in [0, 1] \times [0, \pi] \times [0, 2 \pi]$.
This is all we need to know for using these coordinate systems with shenfun.
Cylinder coordinates
End of explanation
du = div(grad(u))
#Math((div(grad(TrialFunction(T)))).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
T.coors.sg
Explanation: The vector Laplacian can now be found as
End of explanation
Math((du).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
Explanation: We can look at du using the following
End of explanation
Math(T.coors.latex_basis_vectors(symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
Explanation: Note that the basis vectors $\mathbf{b}_i$ are not unit vectors (i.e., of length 1). For this reason the equation does not look exactly like the one here. The basis vectors are
End of explanation
dv = grad(div(u)) - curl(curl(u))
dv.simplify()
Math((dv).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
Explanation: Notice that $|\mathbf{b}_{\theta}|=r$. Shenfun can use either non-normalized covariant basis vectors or normalized (physical) basis vectors of length 1 for describing all vectors and higher order tensors. The vector components shown are contravariant and as such use a superscript $u^{\theta}$ and not a subscript $u_{\theta}$. Note that for orthogonal coordinates the scaled unit vectors are the same for either contra- or covariant basis vectors, and as such this distinction is not necessary here. The distinction is only required for non-orthogonal coordinate systems. Shenfun can handle both orthogonal and non-orthogonal coordinates, but requires that the equations to be solved are separable.
Now check the vector identity
$$
\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}
$$
End of explanation
dw = du-dv
dw.simplify()
Math(dw.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
Explanation: We see that the order is different, but the vector is actually identical to the previous one (du). To show that they are equal we can subtract one from the other and simplify.
End of explanation
v = TestFunction(V)
A0 = inner(v, du)
A1 = inner(v, dv)
Explanation: If you are not convinced we can assemble some matrices and check that du and dv behave the same way.
End of explanation
u_hat = Function(V)
u_hat[:] = np.random.random(u_hat.shape) + np.random.random(u_hat.shape)*1j
a0 = BlockMatrix(A0)
a1 = BlockMatrix(A1)
b0 = Function(V)
b1 = Function(V)
b0 = a0.matvec(u_hat, b0)
b1 = a1.matvec(u_hat, b1)
print('Error ', np.linalg.norm(b0-b1))
Explanation: A0 and A1 now contains lists of tensor product matrices, because the vector identities contain a lot of different terms (as we have seen above). To check that A0 and A1 are identical, we test the vector product of the matrices with a random vector. Since we are working with vectors we use a BlockMatrix for the combined tensor product matrices.
End of explanation
r, theta, phi = psi = sp.symbols('x,y,z', real=True, positive=True)
rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta))
N = 6
F = FunctionSpace(N, 'F', dtype='d')
L0 = FunctionSpace(N, 'L', domain=(0, 1))
L1 = FunctionSpace(N, 'L', domain=(0, np.pi))
T = TensorProductSpace(comm, (L0, L1, F), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta))))
V = VectorSpace(T)
u = TrialFunction(V)
du = div(grad(u))
dv = grad(div(u)) - curl(curl(u))
dv.simplify()
dw = du-dv
dw.simplify()
Math(dw.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'}))
Explanation: Spherical coordinates
We now turn to spherical coordinates and run the same test.
End of explanation
Math(du.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'}))
Math(T.coors.latex_basis_vectors(symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'}))
Math((grad(u)).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'}))
Math((grad(u)[0]).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'}))
Explanation: This proves that for shenfun the vector identity $\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}$ holds true also for spherical coordinates.
End of explanation |
14,922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning Curves and Bias-Variance Tradeoff
In practice, much of the task of machine learning involves selecting algorithms,
parameters, and sets of data to optimize the results of the method. All of these
things can affect the quality of the results, but it’s not always clear which is
best. For example, if your results have an error that’s larger than you hoped,
you might imagine that increasing the training set size will always lead to better
results. But this is not the case! Below, we’ll explore the reasons for this.
Much of the material in this section was adapted from Andrew Ng’s excellent set
of machine learning video lectures. See http://www.ml-class.org.
Step1: Polynomial regression can be done with the functions polyfit
and polyval, available in numpy. For example
Step2: Using a 1st-degree polynomial fit (that is, fitting a straight line to x and y),
we predicted the value of y for a new input. This prediction has an absolute
error of about 0.2 for the few test points which we tried. We can visualize
the fit with the following function
Step3: When the error of predicted results is larger than desired,
there are a few courses of action that can be taken
Step4: Run the following code to produce an example plot
Step5: In the above figure, we see fits for three different values of $d$.
For $d = 1$, the data is under-fit. This means that the model is too
simplistic
Step6: In order to quantify the effects of bias and variance and construct
the best possible estimator, we will split our training data into
three parts
Step7: This figure compactly shows the reason that cross-validation is
important. On the left side of the plot, we have very low-degree
polynomial, which under-fits the data. This leads to a very high
error for both the training set and the cross-validation set. On
the far right side of the plot, we have a very high degree
polynomial, which over-fits the data. This can be seen in the fact
that the training error is very low, while the cross-validation
error is very high. Plotted for comparison is the intrinsic error
(this is the scatter artificially added to the data
Step8: Here we show the learning curve for $d = 1$. From the above
discussion, we know that $d = 1$ is a high-bias estimator which
under-fits the data. This is indicated by the fact that both the
training and cross-validation errors are very high. If this is
the case, adding more training data will not help matters
Step9: Here we show the learning curve for $d = 20$. From the above
discussion, we know that $d = 20$ is a high-variance estimator
which over-fits the data. This is indicated by the fact that the
training error is much less than the cross-validation error. As
we add more samples to this training set, the training error will
continue to climb, while the cross-validation error will continue
to decrease, until they meet in the middle. In this case, our
intrinsic error was set to 1.0, and we can infer that adding more
data will allow the estimator to very closely match the best
possible cross-validation error. | Python Code:
%pylab inline
Explanation: Learning Curves and Bias-Variance Tradeoff
In practice, much of the task of machine learning involves selecting algorithms,
parameters, and sets of data to optimize the results of the method. All of these
things can affect the quality of the results, but it’s not always clear which is
best. For example, if your results have an error that’s larger than you hoped,
you might imagine that increasing the training set size will always lead to better
results. But this is not the case! Below, we’ll explore the reasons for this.
Much of the material in this section was adapted from Andrew Ng’s excellent set
of machine learning video lectures. See http://www.ml-class.org.
In this section we’ll work with an extremely simple learning model: polynomial
regression. This simply fits a polynomial of degree d to the data: if d = 1,
then it is simple linear regression.
First we'll ensure that we're in pylab mode, with figures being displayed inline:
End of explanation
import numpy as np
np.random.seed(42)
x = np.random.random(20)
y = np.sin(2 * x)
p = np.polyfit(x, y, 1) # fit a 1st-degree polynomial (i.e. a line) to the data
print(p)  # slope and intercept
x_new = np.random.random(3)
y_new = np.polyval(p, x_new) # evaluate the polynomial at x_new
print(abs(np.sin(2 * x_new) - y_new))
Explanation: Polynomial regression can be done with the functions polyfit
and polyval, available in numpy. For example:
End of explanation
import pylab as pl
def plot_fit(x, y, p):
xfit = np.linspace(0, 1, 1000)
yfit = np.polyval(p, xfit)
pl.scatter(x, y, c='k')
pl.plot(xfit, yfit)
pl.xlabel('x')
pl.ylabel('y')
plot_fit(x, y, p)
Explanation: Using a 1st-degree polynomial fit (that is, fitting a straight line to x and y),
we predicted the value of y for a new input. This prediction has an absolute
error of about 0.2 for the few test points which we tried. We can visualize
the fit with the following function:
End of explanation
def test_func(x, err=0.5):
return np.random.normal(10 - 1. / (x + 0.1), err)
def compute_error(x, y, p):
yfit = np.polyval(p, x)
return np.sqrt(np.mean((y - yfit) ** 2))
Explanation: When the error of predicted results is larger than desired,
there are a few courses of action that can be taken:
Increase the number of training points N. This might give us a
training set with more coverage, and lead to greater accuracy.
Increase the degree d of the polynomial. This might allow us to
more closely fit the training data, and lead to a better result
Add more features. If we were to, for example, perform a linear
regression using $x$, $\sqrt{x}$, $x^{-1}$, or other functions, we might
hit on a functional form which can better be mapped to the value of y.
The best course to take will vary from situation to situation, and from
problem to problem. In this situation, number 2 and 3 may be useful, but
number 1 will certainly not help: our model does not intrinsically fit the
data very well. In machine learning terms, we say that it has high bias
and that the data is under-fit. The ability to quickly figure out how
to tune and improve your model is what separates good machine learning
practitioners from the bad ones. In this section we’ll discuss some tools
that can help determine which course is most likely to lead to good results.
Bias, Variance, Overfitting, and Underfitting
We’ll work with a simple example. Imagine that you would like to build
an algorithm which will predict the price of a house given its size.
Naively, we’d expect that the cost of a house grows as the size increases,
but there are many other factors which can contribute. Imagine we approach
this problem with the polynomial regression discussed above. We can tune
the degree $d$ to try to get the best fit.
First let's define some utility functions:
End of explanation
N = 8
np.random.seed(42)
x = 10 ** np.linspace(-2, 0, N)
y = test_func(x)
xfit = np.linspace(-0.2, 1.2, 1000)
titles = ['d = 1 (under-fit)', 'd = 2', 'd = 6 (over-fit)']
degrees = [1, 2, 6]
pl.figure(figsize = (9, 3.5))
pl.subplots_adjust(left = 0.06, right=0.98,
bottom=0.15, top=0.85,
wspace=0.05)
for i, d in enumerate(degrees):
pl.subplot(131 + i, xticks=[], yticks=[])
pl.scatter(x, y, marker='x', c='k', s=50)
p = np.polyfit(x, y, d)
yfit = np.polyval(p, xfit)
pl.plot(xfit, yfit, '-b')
pl.xlim(-0.2, 1.2)
pl.ylim(0, 12)
pl.xlabel('house size')
if i == 0:
pl.ylabel('price')
pl.title(titles[i])
Explanation: Run the following code to produce an example plot:
End of explanation
Ntrain = 100
Ncrossval = 100
Ntest = 50
error = 1.0
# randomly sample the data
np.random.seed(0)
x = np.random.random(Ntrain + Ncrossval + Ntest)
y = test_func(x, error)
# select training set
# data is already random, so we can just choose a slice.
xtrain = x[:Ntrain]
ytrain = y[:Ntrain]
# select cross-validation set
xcrossval = x[Ntrain:Ntrain + Ncrossval]
ycrossval = y[Ntrain:Ntrain + Ncrossval]
# select test set
xtest = x[Ntrain + Ncrossval:]
ytest = y[Ntrain + Ncrossval:]
pl.scatter(xtrain, ytrain, color='red')
pl.scatter(xcrossval, ycrossval, color='blue')
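# Confirm the split sizes before moving on (100 train / 100 cross-validation / 50 test here).
print(len(xtrain), len(xcrossval), len(xtest))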
Explanation: In the above figure, we see fits for three different values of $d$.
For $d = 1$, the data is under-fit. This means that the model is too
simplistic: no straight line will ever be a good fit to this data. In
this case, we say that the model suffers from high bias. The model
itself is biased, and this will be reflected in the fact that the data
is poorly fit. At the other extreme, for $d = 6$ the data is over-fit.
This means that the model has too many free parameters (6 in this case)
which can be adjusted to perfectly fit the training data. If we add a
new point to this plot, though, chances are it will be very far from
the curve representing the degree-6 fit. In this case, we say that the
model suffers from high variance. The reason for this label is that if
any of the input points are varied slightly, it could result in an
extremely different model.
In the middle, for $d = 2$, we have found a good mid-point. It fits
the data fairly well, and does not suffer from the bias and variance
problems seen in the figures on either side. What we would like is a
way to quantitatively identify bias and variance, and optimize the
metaparameters (in this case, the polynomial degree d) in order to
determine the best algorithm. This can be done through a process
called cross-validation.
Cross-validation and Testing
Let's start by defining a new dataset which we can use to explore
cross-validation. We will use a simple x vs. y regression estimator
for ease of visualization, but the concepts also readily apply to
more complicated datasets and models.
End of explanation
degrees = np.arange(1, 21)
train_err = np.zeros(len(degrees))
crossval_err = np.zeros(len(degrees))
test_err = np.zeros(len(degrees))
for i, d in enumerate(degrees):
p = np.polyfit(xtrain, ytrain, d)
train_err[i] = compute_error(xtrain, ytrain, p)
crossval_err[i] = compute_error(xcrossval, ycrossval, p)
pl.figure()
pl.title('Error for 100 Training Points')
pl.plot(degrees, crossval_err, lw=2, label = 'cross-validation error')
pl.plot(degrees, train_err, lw=2, label = 'training error')
pl.plot([0, 20], [error, error], '--k', label='intrinsic error')
pl.legend()
pl.xlabel('degree of fit')
pl.ylabel('rms error')
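# A sketch of the final step discussed below: pick the degree with the lowest
# cross-validation error and evaluate that single model once on the held-out test set.
best_d = degrees[np.argmin(crossval_err)]
p_best = np.polyfit(xtrain, ytrain, best_d)
print('best degree:', best_d)
print('test set rms error:', compute_error(xtest, ytest, p_best))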
Explanation: In order to quantify the effects of bias and variance and construct
the best possible estimator, we will split our training data into
three parts: a training set, a cross-validation set, and a
test set. As a general rule, the training set should be about
60% of the samples, and the cross-validation and test sets should
be about 20% each.
The general idea is as follows. The model parameters (in our case,
the coefficients of the polynomials) are learned using the training
set as above. The error is evaluated on the cross-validation set,
and the meta-parameters (in our case, the degree of the polynomial)
are adjusted so that this cross-validation error is minimized.
Finally, the labels are predicted for the test set. These labels
are used to evaluate how well the algorithm can be expected to
perform on unlabeled data.
Why do we need both a cross-validation set and a test set? Many
machine learning practitioners use the same set of data as both
a cross-validation set and a test set. This is not the best approach,
for the same reasons we outlined above. Just as the parameters can
be over-fit to the training data, the meta-parameters can be over-fit
to the cross-validation data. For this reason, the minimal
cross-validation error tends to under-estimate the error expected
on a new set of data.
The cross-validation error of our polynomial classifier can be visualized by plotting the error as a function of the polynomial degree d. We can do this as follows. This will spit out warnings about "poorly conditioned" polynomials: that is OK for now.
End of explanation
# suppress warnings from Polyfit
import warnings
warnings.filterwarnings('ignore', message='Polyfit*')
def plot_learning_curve(d):
sizes = np.linspace(2, Ntrain, 50).astype(int)
train_err = np.zeros(sizes.shape)
crossval_err = np.zeros(sizes.shape)
for i, size in enumerate(sizes):
p = np.polyfit(xtrain[:size], ytrain[:size], d)
crossval_err[i] = compute_error(xcrossval, ycrossval, p)
train_err[i] = compute_error(xtrain[:size], ytrain[:size], p)
fig = pl.figure()
pl.plot(sizes, crossval_err, lw=2, label='cross-val error')
pl.plot(sizes, train_err, lw=2, label='training error')
pl.plot([0, Ntrain], [error, error], '--k', label='intrinsic error')
pl.xlabel('training set size')
pl.ylabel('rms error')
pl.legend(loc = 0)
pl.ylim(0, 4)
pl.xlim(0, 99)
pl.title('d = %i' % d)
plot_learning_curve(d=1)
Explanation: This figure compactly shows the reason that cross-validation is
important. On the left side of the plot, we have very low-degree
polynomial, which under-fits the data. This leads to a very high
error for both the training set and the cross-validation set. On
the far right side of the plot, we have a very high degree
polynomial, which over-fits the data. This can be seen in the fact
that the training error is very low, while the cross-validation
error is very high. Plotted for comparison is the intrinsic error
(this is the scatter artificially added to the data: click on the
above image to see the source code). For this toy dataset,
error = 1.0 is the best we can hope to attain. Choosing $d=6$ in
this case gets us very close to the optimal error.
The astute reader will realize that something is amiss here: in
the above plot, $d = 6$ gives the best results. But in the previous
plot, we found that $d = 6$ vastly over-fits the data. What’s going
on here? The difference is the number of training points used.
In the previous example, there were only eight training points.
In this example, we have 100. As a general rule of thumb, the more
training points used, the more complicated model can be used.
But how can you determine for a given model whether more training
points will be helpful? A useful diagnostic for this are learning curves.
Learning Curves
A learning curve is a plot of the training and cross-validation
error as a function of the number of training points. Note that
when we train on a small subset of the training data, the training
error is computed using this subset, not the full training set.
These plots can give a quantitative view into how beneficial it
will be to add training samples.
End of explanation
plot_learning_curve(d=20)
Explanation: Here we show the learning curve for $d = 1$. From the above
discussion, we know that $d = 1$ is a high-bias estimator which
under-fits the data. This is indicated by the fact that both the
training and cross-validation errors are very high. If this is
the case, adding more training data will not help matters: both
lines have converged to a relatively high error.
End of explanation
plot_learning_curve(d=6)
Explanation: Here we show the learning curve for $d = 20$. From the above
discussion, we know that $d = 20$ is a high-variance estimator
which over-fits the data. This is indicated by the fact that the
training error is much less than the cross-validation error. As
we add more samples to this training set, the training error will
continue to climb, while the cross-validation error will continue
to decrease, until they meet in the middle. In this case, our
intrinsic error was set to 1.0, and we can infer that adding more
data will allow the estimator to very closely match the best
possible cross-validation error.
End of explanation |
14,923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a name="pagetop"></a>
<div style="width
Step1: We got a Pandas dataframe back, which is great. Sadly, Pandas does not play well with units, so we need to attach units and make some other kind of data structure. We've provided a helper function for this - it takes the dataframe with our special .units attribute and returns a dictionary where the keys are column (data series) names and the values are united arrays. This means we can still use the dictionary access syntax and mostly forget that it is not a data frame any longer.
Fist, let's look at the special attribute siphon added
Step2: Now let's import the helper and the units registry from MetPy and get units attached.
Step3: <a href="#pagetop">Top</a>
<hr style="height
Step4: Exercise
Part 1
Download your own data using the Wyoming upper-air archive. Have a look at the documentation to help get started.
Attach units using the unit helper.
Step5: Solution
Step6: Part 2
Make a figure and SkewT object.
Plot the temperature and dewpoint in red and green lines.
Set the axis limits to sensible limits with set_xlim and set_ylim.
Step7: Solution
Step8: Part 3
Plot wind barbs using the plot_barbs method of the SkewT object.
Add the fiducial lines for dry adiabats, moist adiabats, and mixing ratio lines using the plot_dry_adiabats(), plot_moist_adiabats(), plot_mixing_lines() functions.
Step9: Solution
Step10: <a href="#pagetop">Top</a>
<hr style="height
Step11: We can this as a point on our sounding using the scatter method.
Step12: We can also calculate the ideal parcel profile and plot it.
Step13: Exercise
Part 1
Calculate the LFC and EL for the sounding.
Plot them as horizontal line markers (see how we did it above for the LCL).
Step14: Solution
Step15: Bonus
Use the function surface_based_cape_cin in the MetPy calculations module to calculate the CAPE and CIN of this sounding. Print out the values
Using the methods shade_cape and shade_cin on the SkewT object, shade the areas representing CAPE and CIN.
Step16: Solution
Step17: <a href="#pagetop">Top</a>
<hr style="height
Step18: We can even add wind vectors, which is helpful for learning/teaching hodographs.
Step19: This is great, but we generally don't care about wind shear for the entire sounding. Let's say we want to view it in the lowest 10km of the atmosphere. We can do this with the powerful, but complex get_layer function. Let's get a subset of the u-wind, v-wind, and windspeed.
Step20: Let's make the same hodograph again, but we'll also color the line by the value of the windspeed and we'll use the trimmed data we just created.
Step21: Exercise
In this exercise you'll create a hodograph that is colored by a variable that is not displayed - height above ground level. We generally wouldn't want to color this in a continuous fashion, so we'll make a hodograph that is segmented by height.
Part 1
Make a variable to hold the height above ground level (subtract the surface height from the heights in the sounding).
Make an list of boundary values that we'll use to segment the hodograph from 0-1, 1-3, 3-5, and 5-8 km. (Hint the array should have one more value than the number of segments desired.)
Make a list of colors for each segment.
Step22: Solution
Step23: Part 2
Make a new figure and hodograph object.
Using the bounds and colors keyword arguments to plot_colormapped create the segmented hodograph.
BONUS
Step24: Solution
Step25: <a href="#pagetop">Top</a>
<hr style="height | Python Code:
# Create a datetime for our request - notice the times are from laregest (year) to smallest (hour)
from datetime import datetime
request_time = datetime(1999, 5, 3, 12)
# Store the station name in a variable for flexibility and clarity
station = 'OUN'
# Import the Wyoming simple web service and request the data
# Don't worry about a possible warning from Pandas - it's related to our handling of units
from siphon.simplewebservice.wyoming import WyomingUpperAir
df = WyomingUpperAir.request_data(request_time, station)
# Let's see what we got in return
df.head()
Explanation: <a name="pagetop"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Upper Air and the Skew-T Log-P</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="https://unidata.github.io/MetPy/latest/_images/sphx_glr_Advanced_Sounding_001.png" alt="Example Skew-T" style="height: 500px;"></div>
Overview:
Teaching: 25 minutes
Exercises: 25 minutes
Questions
Where can upper air data be found and what format is it in?
How can I obtain upper air data programatically?
How can MetPy be used to make a Skew-T Log-P diagram and associated fiducial lines?
How are thermodynamic calculations performed on upper-air data?
Table of Contents
<a href="#upperairdata">Obtain upper air data</a>
<a href="#makeskewt">Make a Skew-T</a>
<a href="#thermo">Thermodynamics</a>
<a href="#hodograph">Plotting a Hodograph</a>
<a href="#advanced">Advanced Layout</a>
<hr style="height:2px;">
<a name="upperairdata"></a>
Obtain upper air data
Overview
Upper air observations are generally reported as a plain text file in a tabular format that represents the down sampled raw data transmitted by the rawinsonde. Data are reported an mandatory levels and at levels of significant change. An example of sounding data may look like this:
```
PRES HGHT TEMP DWPT RELH MIXR DRCT SKNT THTA THTE THTV
hPa m C C % g/kg deg knot K K K
1000.0 270
991.0 345 -0.3 -2.8 83 3.15 0 0 273.6 282.3 274.1
984.0 403 10.2 -7.8 27 2.17 327 4 284.7 291.1 285.0
963.0 581 11.8 -9.2 22 1.99 226 17 288.0 294.1 288.4
959.7 610 11.6 -9.4 22 1.96 210 19 288.1 294.1 288.5
```
Data are available to download from the University of Wyoming archive, the Iowa State archive, and the Integrated Global Radiosonde Archive (IGRA). There is no need to download data manually. We can use the siphon library (also developed at Unidata) to request and download these data. Be sure to checkout the documentation on all of siphon's capabilities.
Getting our data
First, we need to create a datetime object that has the time of observation we are looking for. We can then request the data for a specific station. Note that if you provide an invalid time or station where no sounding data are present you will receive an error.
End of explanation
df.units
Explanation: We got a Pandas dataframe back, which is great. Sadly, Pandas does not play well with units, so we need to attach units and make some other kind of data structure. We've provided a helper function for this - it takes the dataframe with our special .units attribute and returns a dictionary where the keys are column (data series) names and the values are united arrays. This means we can still use the dictionary access syntax and mostly forget that it is not a data frame any longer.
First, let's look at the special attribute siphon added:
End of explanation
from metpy.units import pandas_dataframe_to_unit_arrays, units
sounding = pandas_dataframe_to_unit_arrays(df)
sounding
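# The helper returns a plain dictionary of united arrays; list the available fields.
print(list(sounding.keys()))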
Explanation: Now let's import the helper and the units registry from MetPy and get units attached.
End of explanation
import matplotlib.pyplot as plt
from metpy.plots import SkewT
%matplotlib inline
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(10, 10))
skew = SkewT(fig)
# Plot the data using normal plotting functions, all of the transforms
# happen in the background!
skew.plot(sounding['pressure'], sounding['temperature'], color='tab:red')
skew.ax.set_ylim(1050,100)
skew.ax.set_xlim(-50,20)
# Redisplay the figure
fig
# Plot a isotherm using axvline (axis vertical line)
skew.ax.axvline([0] * units.degC, color='cyan', linestyle='--')
# Redisplay the figure
fig
Explanation: <a href="#pagetop">Top</a>
<hr style="height:2px;">
<a name="makeskewt"></a>
Make a Skew-T
Now that we have data, we can actually start making our Skew-T Log-P diagram. This consists of:
Import matplotlib
Importing the SkewT object
Creating a figure
Creating a SkewT object based upon that figure
Plotting our data
End of explanation
# Import the Wyoming simple web service upper air object
# YOUR CODE GOES HERE
# Create the datetime and station variables you'll need
# YOUR CODE GOES HERE
# Make the request for the data
# YOUR CODE GOES HERE
# Attach units to the data
# YOUR CODE GOES HERE
Explanation: Exercise
Part 1
Download your own data using the Wyoming upper-air archive. Have a look at the documentation to help get started.
Attach units using the unit helper.
End of explanation
# %load solutions/skewt_get_data.py
df
Explanation: Solution
End of explanation
# Make a figure
# Make a SkewT object
# Plot the temperature and dewpoint
Explanation: Part 2
Make a figure and SkewT object.
Plot the temperature and dewpoint in red and green lines.
Set the axis limits to sensible limits with set_xlim and set_ylim.
End of explanation
# %load solutions/skewt_make_figure.py
Explanation: Solution
End of explanation
# Plot wind barbs
# Add dry adiabats
# Add moist adiabats
# Add mixing ratio lines
# Redisplay figure
Explanation: Part 3
Plot wind barbs using the plot_barbs method of the SkewT object.
Add the fiducial lines for dry adiabats, moist adiabats, and mixing ratio lines using the plot_dry_adiabats(), plot_moist_adiabats(), plot_mixing_lines() functions.
End of explanation
# %load solutions/skewt_wind_fiducials.py
Explanation: Solution
End of explanation
# Grab data for our original case and make a basic figure for us to keep working with.
df = WyomingUpperAir.request_data(datetime(1999, 5, 3, 12), 'OUN')
sounding = pandas_dataframe_to_unit_arrays(df)
# Create a new figure and SkewT object
fig = plt.figure(figsize=(10, 10))
skew = SkewT(fig)
skew.plot(sounding['pressure'], sounding['temperature'], color='tab:red')
skew.plot(sounding['pressure'], sounding['dewpoint'], color='tab:blue')
skew.ax.set_xlim(-60, 30)
skew.ax.set_ylim(1000, 100)
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
import metpy.calc as mpcalc
lcl_pressure, lcl_temperature = mpcalc.lcl(sounding['pressure'][0],
sounding['temperature'][0],
sounding['dewpoint'][0])
print(lcl_pressure, lcl_temperature)
Explanation: <a href="#pagetop">Top</a>
<hr style="height:2px;">
<a name="thermo"></a>
Thermodynamics
Using MetPy's calculation functions we can calculate thermodynamic parameters like LCL, LFC, EL, CAPE, and CIN. Let's start off with the LCL.
End of explanation
skew.ax.plot(lcl_temperature, lcl_pressure, marker="_", color='k', markersize=30, markeredgewidth=3)
fig
Explanation: We can plot this as a point on our sounding using a horizontal line marker.
End of explanation
sounding['profile'] = mpcalc.parcel_profile(sounding['pressure'], sounding['temperature'][0], sounding['dewpoint'][0])
print(sounding['profile'])
# Plot the profile
skew.plot(sounding['pressure'], sounding['profile'], color='black')
# Redisplay the figure
fig
Explanation: We can also calculate the ideal parcel profile and plot it.
End of explanation
# Get data for the sounding
df = WyomingUpperAir.request_data(datetime(1999, 5, 3, 12), 'OUN')
# Calculate the ideal surface parcel path
sounding['profile'] = mpcalc.parcel_profile(sounding['pressure'],
sounding['temperature'][0],
sounding['dewpoint'][0]).to('degC')
# Calculate the LCL
lcl_pressure, lcl_temperature = mpcalc.lcl(sounding['pressure'][0],
sounding['temperature'][0],
sounding['dewpoint'][0])
# Calculate the LFC
# YOUR CODE GOES HERE
# Calculate the EL
# YOUR CODE GOES HERE
# Create a new figure and SkewT object
fig = plt.figure(figsize=(10, 10))
skew = SkewT(fig)
# Plot the profile and data
skew.plot(sounding['pressure'], sounding['profile'], color='black')
skew.plot(sounding['pressure'], sounding['temperature'], color='tab:red')
skew.plot(sounding['pressure'], sounding['dewpoint'], color='tab:blue')
# Plot the LCL, LFC, and EL as horizontal lines
# YOUR CODE GOES HERE
# Set axis limits
skew.ax.set_xlim(-60, 30)
skew.ax.set_ylim(1000, 100)
# Add fiducial lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
Explanation: Exercise
Part 1
Calculate the LFC and EL for the sounding.
Plot them as horizontal line markers (see how we did it above for the LCL).
End of explanation
# %load solutions/skewt_thermo.py
Explanation: Solution
End of explanation
# Calculate surface based cape/cin
# YOUR CODE GOES HERE
# Print CAPE and CIN
# YOUR CODE GOES HERE
# Shade CAPE
# YOUR CODE GOES HERE
# Shade CIN
# YOUR CODE GOES HERE
# Redisplay the figure
fig
Explanation: Bonus
Use the function surface_based_cape_cin in the MetPy calculations module to calculate the CAPE and CIN of this sounding. Print out the values
Using the methods shade_cape and shade_cin on the SkewT object, shade the areas representing CAPE and CIN.
End of explanation
# %load solutions/skewt_cape_cin.py
Explanation: Solution
End of explanation
# Import the hodograph class
from metpy.plots import Hodograph
# Make a figure and axis
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
# Create a hodograph
h = Hodograph(ax, component_range=60.)
# Add "range rings" to the plot
h.add_grid(increment=20)
# Plot the wind data
h.plot(sounding['u_wind'], sounding['v_wind'], color='tab:red')
Explanation: <a href="#pagetop">Top</a>
<hr style="height:2px;">
<a name="hodograph"></a>
Plotting a Hodograph
Hodographs are a great way to look at wind shear - they are created by drawing wind vectors, all starting at the origin of a plot, and then connecting the vector tips. They are often thought of as a polar plot where the range rings (lines of constant radius) represent speed and the angle represents the compass angle of the wind.
In MetPy we can create a hodograph in a similar way to a skew-T - we create a hodograph object and attach it to an axes.
End of explanation
# Add vectors
h.wind_vectors(sounding['u_wind'], sounding['v_wind'])
# Redisplay figure
fig
Explanation: We can even add wind vectors, which is helpful for learning/teaching hodographs.
End of explanation
(_, u_trimmed, v_trimmed,
speed_trimmed, height_trimmed) = mpcalc.get_layer(sounding['pressure'],
sounding['u_wind'],
sounding['v_wind'],
sounding['speed'],
sounding['height'],
heights=sounding['height'],
depth=10 * units.km)
Explanation: This is great, but we generally don't care about wind shear for the entire sounding. Let's say we want to view it in the lowest 10km of the atmosphere. We can do this with the powerful, but complex get_layer function. Let's get a subset of the u-wind, v-wind, and windspeed.
End of explanation
from metpy.plots import colortables
import numpy as np
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
h = Hodograph(ax, component_range=60.)
h.add_grid(increment=20)
norm, cmap = colortables.get_with_range('ir_rgbv', np.nanmin(speed_trimmed),
np.nanmax(speed_trimmed))
h.plot_colormapped(u_trimmed, v_trimmed, speed_trimmed,
cmap=cmap, norm=norm)
h.wind_vectors(u_trimmed[::3], v_trimmed[::3])
Explanation: Let's make the same hodograph again, but we'll also color the line by the value of the windspeed and we'll use the trimmed data we just created.
End of explanation
# Calculate the height above ground level (AGL)
# YOUR CODE GOES HERE
# Make an array of segment boundaries - don't forget units!
# YOUR CODE GOES HERE
# Make a list of colors for the segments
# YOUR CODE GOES HERE
Explanation: Exercise
In this exercise you'll create a hodograph that is colored by a variable that is not displayed - height above ground level. We generally wouldn't want to color this in a continuous fashion, so we'll make a hodograph that is segmented by height.
Part 1
Make a variable to hold the height above ground level (subtract the surface height from the heights in the sounding).
Make a list of boundary values that we'll use to segment the hodograph from 0-1, 1-3, 3-5, and 5-8 km. (Hint: the array should have one more value than the number of segments desired.)
Make a list of colors for each segment.
End of explanation
# %load solutions/hodograph_preprocessing.py
Explanation: Solution
End of explanation
# Create figure/axis
# YOUR CODE GOES HERE
# Create a hodograph object/fiducial lines
# YOUR CODE GOES HERE
# Plot the data
# YOUR CODE GOES HERE
# BONUS - add a colorbar
# YOUR CODE GOES HERE
Explanation: Part 2
Make a new figure and hodograph object.
Using the bounds and colors keyword arguments to plot_colormapped create the segmented hodograph.
BONUS: Add a colorbar!
End of explanation
# %load solutions/hodograph_segmented.py
Explanation: Solution
End of explanation
# Get the data we want
df = WyomingUpperAir.request_data(datetime(1998, 10, 4, 0), 'OUN')
sounding = pandas_dataframe_to_unit_arrays(df)
# Calculate thermodynamics
lcl_pressure, lcl_temperature = mpcalc.lcl(sounding['pressure'][0],
sounding['temperature'][0],
sounding['dewpoint'][0])
lfc_pressure, lfc_temperature = mpcalc.lfc(sounding['pressure'],
sounding['temperature'],
sounding['dewpoint'])
el_pressure, el_temperature = mpcalc.el(sounding['pressure'],
sounding['temperature'],
sounding['dewpoint'])
parcel_profile = mpcalc.parcel_profile(sounding['pressure'],
sounding['temperature'][0],
sounding['dewpoint'][0])
# Some new imports
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from metpy.plots import add_metpy_logo
# Make the plot
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(9, 9))
add_metpy_logo(fig, 630, 80, size='large')
# Grid for plots
gs = gridspec.GridSpec(3, 3)
skew = SkewT(fig, rotation=45, subplot=gs[:, :2])
# Plot the sounding using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(sounding['pressure'], sounding['temperature'], 'tab:red')
skew.plot(sounding['pressure'], sounding['dewpoint'], 'tab:green')
skew.plot(sounding['pressure'], parcel_profile, 'k')
# Mask barbs to be below 100 hPa only
mask = sounding['pressure'] >= 100 * units.hPa
skew.plot_barbs(sounding['pressure'][mask], sounding['u_wind'][mask], sounding['v_wind'][mask])
skew.ax.set_ylim(1000, 100)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Shade areas
skew.shade_cin(sounding['pressure'], sounding['temperature'], parcel_profile)
skew.shade_cape(sounding['pressure'], sounding['temperature'], parcel_profile)
# Good bounds for aspect ratio
skew.ax.set_xlim(-30, 40)
if lcl_pressure:
skew.ax.plot(lcl_temperature, lcl_pressure, marker="_", color='black', markersize=30, markeredgewidth=3)
if lfc_pressure:
skew.ax.plot(lfc_temperature, lfc_pressure, marker="_", color='brown', markersize=30, markeredgewidth=3)
if el_pressure:
skew.ax.plot(el_temperature, el_pressure, marker="_", color='blue', markersize=30, markeredgewidth=3)
# Create a hodograph
agl = sounding['height'] - sounding['height'][0]
mask = agl <= 10 * units.km
intervals = np.array([0, 1, 3, 5, 8]) * units.km
colors = ['tab:red', 'tab:green', 'tab:blue', 'tab:olive']
ax = fig.add_subplot(gs[0, -1])
h = Hodograph(ax, component_range=30.)
h.add_grid(increment=10)
h.plot_colormapped(sounding['u_wind'][mask], sounding['v_wind'][mask], agl[mask], bounds=intervals, colors=colors)
Explanation: <a href="#pagetop">Top</a>
<hr style="height:2px;">
<a name="advanced"></a>
Advanced Layout
This section is meant to show you some fancy matplotlib to make nice Skew-T/Hodograph combinations. It's a good starting place to make your custom plot for your needs.
End of explanation |
14,924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Step 12
Step2: Publish model to Firebase ML
Step 1. Upload the private key (json file) for your service account and Initialize Firebase Admin
Step3: Step 2. Upload the model file to Cloud Storage
Step4: Step 3. Deploy the model to Firebase | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
# Import dependencies
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
print("TensorFlow version:", tf.__version__)
# Import MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Add a color dimension to the images in "train" and "validate" dataset to
# leverage Keras's data augmentation utilities later.
train_images = np.expand_dims(train_images, axis=3)
test_images = np.expand_dims(test_images, axis=3)
# Define data augmentation configs
datagen = keras.preprocessing.image.ImageDataGenerator(
rotation_range=30,
width_shift_range=0.25,
height_shift_range=0.25,
shear_range=0.25,
zoom_range=0.2
)
# Generate augmented data from MNIST dataset
train_generator = datagen.flow(train_images, train_labels)
test_generator = datagen.flow(test_images, test_labels)
# Define and train the Keras model.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Dropout(0.25),
keras.layers.Flatten(),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_generator, epochs=5, validation_data=test_generator)
# Convert Keras model to TF Lite format and quantize.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open('mnist_v2.tflite', "wb") as f:
f.write(tflite_model)
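# Report the size of the quantized model file before uploading it.
print('Quantized model size: %d bytes' % len(tflite_model))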
Explanation: Step 12: Deploy a second model to Firebase ML
This is the notebook for step 12 of the codelab Add Firebase to your TensorFlow Lite-powered app.
In this notebook, we will train an improved version of the handwritten digit classification model using data augmentation. Then we will upload the model to Firebase using the Firebase ML Model Management API.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/lite/codelabs/digit_classifier/ml/step7_improve_accuracy.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/lite/codelabs/digit_classifier/ml/step7_improve_accuracy.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
Train an improved TensorFlow Lite model
Let's start by training the improved model.
We will not go into details about the model training here but if you are interested to learn more about why we apply data augmentation to this model and other details, check out this notebook.
End of explanation
import os
from google.colab import files
import firebase_admin
from firebase_admin import ml
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]='/content/' + fn
projectID = fn.rsplit("-firebase")[0]
firebase_admin.initialize_app(
options={'projectId': projectID,
'storageBucket': projectID + '.appspot.com' })
Explanation: Publish model to Firebase ML
Step 1. Upload the private key (json file) for your service account and Initialize Firebase Admin
End of explanation
# This uploads it to your bucket as mnist_v2.tflite
source = ml.TFLiteGCSModelSource.from_keras_model(model, 'mnist_v2.tflite')
print (source.gcs_tflite_uri)
Explanation: Step 2. Upload the model file to Cloud Storage
End of explanation
# Create a Model Format
model_format = ml.TFLiteFormat(model_source=source)
# Create a Model object
sdk_model_1 = ml.Model(display_name="mnist_v2", model_format=model_format)
# Make the Create API call to create the model in Firebase
firebase_model_1 = ml.create_model(sdk_model_1)
print(firebase_model_1.as_dict())
# Publish the model
model_id = firebase_model_1.model_id
firebase_model_1 = ml.publish_model(model_id)
Explanation: Step 3. Deploy the model to Firebase
End of explanation |
14,925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MeshCat Animations
MeshCat.jl also provides an animation interface, built on top of the three.js animation system. While it is possible to construct animation clips and tracks manually, just as you would in Three.js, it's generally easier to use the MeshCat Animation type.
Let's show off building a simple animation. We first have to create our scene
Step1: Building an Animation
We construct an animation by first creating a blank Animation() object. We can then use the at_frame method to set properties or transforms of the animation at specific frames of the animation. Three.js will automatically interpolate between whatever values we provide.
For example, let's animate moving the box from [0, 0, 0] to [0, 1, 0]
Step2: You should see the box slide 1 meter to the right in the viewer. If you missed the animation, you can run it again from the viewer. Click "Open Controls", find the "Animations" section, and click "play".
Animating the Camera
The camera is just another object in the MeshCat scene. To set its transform, we just need to index into the visualizer with the right path (note the leading /)
Step3: To animate the camera, we just have to do that same kind of set_transform to individual frames in an animation
Step4: We can also animate object properties. For example, let's animate the camera's zoom property to smoothly zoom out and then back in. Note that to do this, we have to access a deeper path in the visualizer to get to the actual camera object. For more information, see https://github.com/rdeits/meshcat#camera-control
Step5: Recording an Animation
To record an animation at a smooth, fixed frame rate, click on "Open Controls" in the viewer, and then go to "Animations" -> "default" -> "Recording" -> "record". This will play the entire animation, recording every frame and then let you download the resulting frames to your computer.
To record activity in the MeshCat window that isn't a MeshCat animation, we suggest using a screen-capture tool like Quicktime for macOS or RecordMyDesktop for Linux.
Converting the Animation into a Video
Currently, meshcat can only save an animation as a .tar file consisting of a list of .png images, one for each frame. To convert that into a video, you will need to install the ffmpeg program, and then you can run | Python Code:
import meshcat
from meshcat.geometry import Box
vis = meshcat.Visualizer()
## To open the visualizer in a new browser tab, do:
# vis.open()
## To open the visualizer inside this jupyter notebook, do:
# vis.jupyter_cell()
vis["box1"].set_object(Box([0.1, 0.2, 0.3]))
Explanation: MeshCat Animations
MeshCat.jl also provides an animation interface, built on top of the three.js animation system. While it is possible to construct animation clips and tracks manually, just as you would in Three.js, it's generally easier to use the MeshCat Animation type.
Let's show off building a simple animation. We first have to create our scene:
End of explanation
from meshcat.animation import Animation
import meshcat.transformations as tf
anim = Animation()
with anim.at_frame(vis, 0) as frame:
# `frame` behaves like a Visualizer, in that we can
# call `set_transform` and `set_property` on it, but
# it just stores information inside the animation
# rather than changing the current visualization
frame["box1"].set_transform(tf.translation_matrix([0, 0, 0]))
with anim.at_frame(vis, 30) as frame:
frame["box1"].set_transform(tf.translation_matrix([0, 1, 0]))
# `set_animation` actually sends the animation to the
# viewer. By default, the viewer will play the animation
# right away. To avoid that, you can also pass `play=false`.
vis.set_animation(anim)
Explanation: Building an Animation
We construct an animation by first creating a blank Animation() object. We can then use the at_frame method to set properties or transforms of the animation at specific frames of the animation. Three.js will automatically interpolate between whatever values we provide.
For example, let's animate moving the box from [0, 0, 0] to [0, 1, 0]:
End of explanation
vis["/Cameras/default"].set_transform(tf.translation_matrix([0, 0, 1]))
Explanation: You should see the box slide 1 meter to the right in the viewer. If you missed the animation, you can run it again from the viewer. Click "Open Controls", find the "Animations" section, and click "play".
Animating the Camera
The camera is just another object in the MeshCat scene. To set its transform, we just need to index into the visualizer with the right path (note the leading /):
End of explanation
anim = Animation()
with anim.at_frame(vis, 0) as frame:
frame["/Cameras/default"].set_transform(tf.translation_matrix([0, 0, 0]))
with anim.at_frame(vis, 30) as frame:
frame["/Cameras/default"].set_transform(tf.translation_matrix([0, 0, 1]))
# we can repeat the animation playback with the
# repetitions argument:
vis.set_animation(anim, repetitions=2)
Explanation: To animate the camera, we just have to do that same kind of set_transform to individual frames in an animation:
End of explanation
anim = Animation()
camera_path = "/Cameras/default/rotated/<object>"
with anim.at_frame(vis, 0) as frame:
frame[camera_path].set_property("zoom", "number", 1)
with anim.at_frame(vis, 30) as frame:
frame[camera_path].set_property("zoom", "number", 0.5)
with anim.at_frame(vis, 60) as frame:
frame[camera_path].set_property("zoom", "number", 1)
# While we're animating the camera zoom, we can also animate any other
# properties we want. Let's simultaneously translate the box during
# the same animation:
with anim.at_frame(vis, 0) as frame:
frame["box1"].set_transform(tf.translation_matrix([0, -1, 0]))
with anim.at_frame(vis, 60) as frame:
frame["box1"].set_transform(tf.translation_matrix([0, 1, 0]))
vis.set_animation(anim)
Explanation: We can also animate object properties. For example, let's animate the camera's zoom property to smoothly zoom out and then back in. Note that to do this, we have to access a deeper path in the visualizer to get to the actual camera object. For more information, see: https://github.com/rdeits/meshcat#camera-control
End of explanation
from meshcat.animation import convert_frames_to_video
convert_frames_to_video("/home/rdeits/Downloads/meshcat_1528401494656.tar", overwrite=True)
Explanation: Recording an Animation
To record an animation at a smooth, fixed frame rate, click on "Open Controls" in the viewer, and then go to "Animations" -> "default" -> "Recording" -> "record". This will play the entire animation, recording every frame and then let you download the resulting frames to your computer.
To record activity in the MeshCat window that isn't a MeshCat animation, we suggest using a screen-capture tool like Quicktime for macOS or RecordMyDesktop for Linux.
Converting the Animation into a Video
Currently, meshcat can only save an animation as a .tar file consisting of a list of .png images, one for each frame. To convert that into a video, you will need to install the ffmpeg program, and then you can run:
End of explanation |
14,926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Theory and Practice of Visualization Exercise 2
Imports
Step1: Violations of graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information.
CNN
Fox News
Time
Upload the image for the visualization to this directory and display the image inline in this notebook. | Python Code:
from IPython.display import Image
Explanation: Theory and Practice of Visualization Exercise 2
Imports
End of explanation
# Add your filename and uncomment the following line:
Image(filename='TheoryAndPracticeEx02graph.png')
Explanation: Violations of graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information.
CNN
Fox News
Time
Upload the image for the visualization to this directory and display the image inline in this notebook.
End of explanation |
14,927 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
9 June 2017
Wayne Nixalo
This notebook started out trying to generate convolutional test features using Sequential.predict_generator, by using bcolz to save the generated features to disk, in batches as they were created. This was successful after a few days of work. (roughly: 6 - 9 June).
Step1: Manual iteration through test images to generate convolutional test features. Saves each batch to disk instead of loading in memory.
Step2: I think conv_feat below should be conv_test_feat
Step3: Question
Step4: As expected (& which motivated this) the full set of convolutional test features does not fit at once in memory.
Step5: Loading train/valid features; defining & fitting NN model
Step6: Made a mistake on the last loop above. The penultimate batch -- the last full 4096-image batch -- was added onto the end of the predictions array twice. The final 2194 image predictions were never run.
Easy enough to fix
Step7: Redoing predictions here
Step8: Oh I forgot, predictions through a FC NN are fast. CNNs are where it takes a long time.
This is just quick testing that it works. Full/polished will be in the reworked statefarm-codealong (or just statefarm) JNB | Python Code:
import theano
import os, sys
sys.path.insert(1, os.path.join('utils'))
from __future__ import print_function, division
path = 'data/statefarm/'
import utils; reload(utils)
from utils import *
batch_size=16
vgg = Vgg16()
model = vgg.model
last_conv_idx = [i for i, l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx + 1]
conv_model = Sequential(conv_layers)
gen = image.ImageDataGenerator()
test_batches = get_batches(path + 'test', batch_size=batch_size, shuffle=False)
Explanation: 9 June 2017
Wayne Nixalo
This notebook started out trying to generate convolutional test features using Sequential.predict_generator, by using bcolz to save the generated features to disk, in batches as they were created. This was successful after a few days of work. (roughly: 6 - 9 June).
This notebook's continued on to build a full set of submittable predictions. Once that's solved, I can build a strong model, unconstrained by system memory limits. Video-memory is another matter, at this time.
End of explanation
# conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
Explanation: Manual iteration through test images to generate convolutional test features. Saves each batch to disk instead of loading in memory.
End of explanation
fname = path + 'results/conv_test_feat.dat'
%rm -r $fname
for i in xrange(test_batches.n // batch_size + 1):
conv_test_feat = conv_model.predict_on_batch(test_batches.next()[0])
if not i:
        c = bcolz.carray(conv_test_feat, rootdir= path + '/results/conv_test_feat.dat', mode='a')
else:
        c.append(conv_test_feat)
c.flush()
Explanation: I think conv_feat below should be conv_test_feat
End of explanation
# apparently you can just open a (massive) bcolz carray this way
# without crashing memory... okay I'm learning things
# carr = bcolz.open(fname)
# forgot to add the '+1' so missed the last 14 images. Doing that here:
# NOTE: below code only adds on the missed batch
# iterate generator until final missed batch, then work:
fname = path + 'results/conv_test_feat.dat'
test_batches.reset()
iters = test_batches.n // batch_size
for i in xrange(iters): test_batches.next()
conv_test_feat = conv_model.predict_on_batch(test_batches.next()[0])
# c = bcolz.carray(conv_test_feat, rootdir=fname, mode='a')
c = bcolz.open(fname)
c.append(conv_test_feat)
c.flush()
Explanation: Question: Why does it look like I can have the entire conv_test_feat array open at once, when opened w/ bcolz; but when it's explicitly loaded as a Numpy array via bcolz.open(fname)[:], all of a sudden the RAM takes a severe memory hit?
End of explanation
fname = path + 'results/conv_test_feat.dat'
x = bcolz.open(fname)
len(x)
Explanation: As expected (& which motivated this) the full set of convolutional test features does not fit at once in memory.
End of explanation
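As a side note (this sketch is an addition, not part of the original notebook), the same on-disk carray can be streamed block-by-block with bcolz.iterblocks -- the function referenced in a commented-out line further down -- so only one block is ever materialized as a NumPy array:
# Hedged sketch: stream the carray in fixed-size blocks instead of loading it whole.
carr = bcolz.open(fname)
n_rows = 0
for block in bcolz.iterblocks(carr, blen=4096):   # each block is a plain NumPy array
    n_rows += len(block)                          # do the per-block work (e.g. predict) here
print(n_rows)                                     # should equal len(carr)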
# conv_train_feat_batches = get_batches(path + '/results/conv_feat.dat')
# conv_valid_feat_batches = get_batches(path + '/results/conv_val_feat.dat')
conv_trn_feat = load_array(path + '/results/conv_feat.dat')
conv_val_feat = load_array(path + '/results/conv_val_feat.dat')
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
p = 0.8
bn_model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
])
bn_model.compile(Adam(lr=1e-3), loss='categorical_crossentropy', metrics=['accuracy'])
# Sequential.fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose=1, callbacks=None, validation_data=None, nb_val_samples=None, class_weight=None, max_q_size=10, nb_worker=1, pickle_safe=False, initial_epoch=0, **kwargs)
# bn_model.fit_generator((conv_train_feat_batches, trn_labels), conv_train_feat_batches.nb_sample, nb_epoch=1,
# validation_data=(conv_valid_feat_batches, val_labels), nb_val_samples=conv_valid_feat_batches.nb_sample)
bn_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data = (conv_val_feat, val_labels))
bn_model.optimizer.lr=1e-2
bn_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data = (conv_val_feat, val_labels))
# bn_model.save_weights(path + 'models/da_conv8.h5')
bn_model.load_weights(path + 'models/da_conv8.h5')
# conv_test_feat_batches = bcolz.iterblocks(path + fname)
fname = path + 'results/conv_test_feat.dat'
idx, inc = 0, 4096
preds = []
while idx < test_batches.n - inc:
conv_test_feat = bcolz.open(fname)[idx:idx+inc]
idx += inc
if len(preds):
next_preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
preds = np.concatenate([preds, next_preds])
else:
preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
conv_test_feat = bcolz.open(fname)[idx:]
next_preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
preds = np.concatenate([preds, next_preds])
print(len(preds))
if len(preds) != len(bcolz.open(fname)):
print("Ya done fucked up, son.")
Explanation: Loading train/valid features; defining & fitting NN model
End of explanation
print(81920 - 79726)
print(79726 % 4096)
print(81920 % 4096) # <-- that's yeh problem right there, kid
x = preds[len(preds) - 4096]
print(preds[-1])
print(x)
preds[0]
# ??image.ImageDataGenerator.flow_from_directory
# ??Sequential.predict()
Explanation: Made a mistake on the last loop above. The penultimate batch -- the last full 4096-image batch -- was added onto the end of the predictions array twice. The final 2194 image predictions were never run.
Easy enough to fix: modify the above code to work perfectly. Then either:
* create entirely new predictions from scratch (~ 1 hour)
* remove the last increment (4096) of predictions from the array, and add the last batch.
Gonna take option 2.
EDIT:
actually, option 1. preds was stored in memory, which was erased when I closed this machine for the night. So this time I'll just build the predictions array properly.
Below is testing/debugging output from the night before
End of explanation
fname = path + 'results/conv_test_feat.dat'
idx, inc = 4096, 4096
preds = []
conv_test_feat = bcolz.open(fname)[:idx]
preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
while idx < test_batches.n - inc:
conv_test_feat = bcolz.open(fname)[idx:idx+inc]
idx += inc
next_preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
preds = np.concatenate([preds, next_preds])
conv_test_feat = bcolz.open(fname)[idx:]
next_preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
preds = np.concatenate([preds, next_preds])
print(len(preds))
if len(preds) != len(bcolz.open(fname)):
print("Ya done fucked up, son.")
Explanation: Redoing predictions here:
End of explanation
def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx)
subm = do_clip(preds, 0.93)
subm_name = path + 'results/subm01.gz'
trn_batches = get_batches(path + 'train', batch_size=batch_size, shuffle=False)
# make sure training batches defined before this:
classes = sorted(trn_batches.class_indices, key=trn_batches.class_indices.get)
import pandas as pd
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [f[8:] for f in test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
from IPython.display import FileLink
FileLink(subm_name)
Explanation: Oh I forgot, predictions through a FC NN are fast. CNNs are where it takes a long time.
This is just quick testing that it works. Full/polished will be in the reworked statefarm-codealong (or just statefarm) JNB:
End of explanation |
14,928 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Introduction to Spark In-memory Computing via Python PySpark </center>
Spark is an implementation of the MapReduce programming paradigm that operates on in-memory data and allows data reuses across multiple computations.
Performance of Spark is significantly better than its predecessor, Hadoop MapReduce.
Spark's primary data abstraction is Resilient Distributed Dataset (RDD)
Step1: 1. Getting Started
Spark stores data in memory. This memory space is represented by variable sc (SparkContext).
Step2: 2. What does Spark do with my data?
Storage Level
Step3: By default, each transformed RDD may be recomputed each time you run an action on it.
It is also possible to persist RDD in memory using persist() or cache()
persist() allows you to specify level of storage for RDD
cache() only persists RDD in memory
To retire RDD from memory, unpersist() is called
3. WordCount
Data operations in Spark are categorized into two groups, transformation and action.
- A transformation creates new dataset from existing data. Examples of transformation include map, filter, reduceByKey, and sort.
- An action returns a value to the driver program (aka memory space of this notebook) after running a computation on the data set. Examples of action include count, collect, reduce, and save.
"All transformations in Spark are lazy, in that they do not compute their results right away. Instead, they just remember the transformations applied to some base dataset (e.g. a file). The transformations are only computed when an action requires a result to be returned to the driver program." -- Spark Documentation
RDD Operations in Spark
Transformations
Step4: Step-by-step actions
Step5: Challenge
Augment the mapping process of WordCount with a function to filter out punctuations and capitalization from the unique words
To stop the Spark job, call sc.stop() | Python Code:
!module list
Explanation: <center> Introduction to Spark In-memory Computing via Python PySpark </center>
Spark is an implementation of the MapReduce programming paradigm that operates on in-memory data and allows data reuses across multiple computations.
Performance of Spark is significantly better than its predecessor, Hadoop MapReduce.
Spark's primary data abstraction is Resilient Distributed Dataset (RDD):
Read-only, partitioned collection of records
Created (aka written) through deterministic operations on data:
Loading from stable storage
Transforming from other RDDs
Generating through coarse-grained operations such as map, join, filter ...
Do not need to be materialized at all time and are recoverable via data lineage
<img src="pictures/18/spark2_arch.png" width="600"/>
End of explanation
!cypress-kinit
!klist
import sys
import os
sys.path.insert(0, '/usr/hdp/current/spark2-client/python')
sys.path.insert(0, '/usr/hdp/current/spark2-client/python/lib/py4j-0.10.4-src.zip')
os.environ['SPARK_HOME'] = '/usr/hdp/current/spark2-client/'
os.environ['SPARK_CONF_DIR'] = '/etc/hadoop/synced_conf/spark2/'
os.environ['PYSPARK_PYTHON'] = '/software/anaconda3/4.2.0/bin/python'
import pyspark
conf = pyspark.SparkConf()
conf.setMaster("yarn")
conf.set("spark.driver.memory","4g")
conf.set("spark.executor.memory","60g")
conf.set("spark.num.executors","3")
conf.set("spark.executor.cores","12")
sc = pyspark.SparkContext(conf=conf)
sc
textFile = sc.textFile("/repository/gutenberg-shakespeare.txt")
print (textFile)
Explanation: 1. Getting Started
Spark stores data in memory. This memory space is represented by variable sc (SparkContext).
End of explanation
textFile.getStorageLevel()
textFile.getNumPartitions()
textFile.cache()
textFile.getStorageLevel()
Explanation: 2. What does Spark do with my data?
Storage Level:
- Does RDD use disk?
- Does RDD use memory?
- Does RDD use off-heap memory?
- Should an RDD be serialized (while persisting)?
- How many replicas (default: 1) to use (can only be less than 40)?
End of explanation
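As a quick illustration (a hedged sketch, not part of the original lab), persist() accepts an explicit StorageLevel and unpersist() releases it; an already-cached RDD has to be unpersisted before its level can be changed:
from pyspark import StorageLevel
textFile.unpersist()                              # drop the cache() level set above
textFile.persist(StorageLevel.MEMORY_AND_DISK)    # spill partitions to disk if RAM is tight
print(textFile.getStorageLevel())
textFile.unpersist()                              # retire the RDD from memory again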
textFile = sc.textFile("/repository/gutenberg-shakespeare.txt")
textFile
%%time
textFile.count()
wordcount = textFile.flatMap(lambda line: line.split(" ")) \
.map(lambda word: (word, 1)) \
.reduceByKey(lambda a, b: a + b)
wordcount
!hdfs dfs -rm -r intro-to-spark
!hdfs dfs -mkdir intro-to-spark
wordcount.saveAsTextFile("intro-to-spark/output-wordcount-01")
!hdfs dfs -cat intro-to-spark/output-wordcount-01/part-00000 \
2>/dev/null | head -n 20
Explanation: By default, each transformed RDD may be recomputed each time you run an action on it.
It is also possible to persist RDD in memory using persist() or cache()
persist() allows you to specify level of storage for RDD
cache() only persists RDD in memory
To retire RDD from memory, unpersist() is called
3. WordCount
Data operations in Spark are categorized into two groups, transformation and action.
- A transformation creates new dataset from existing data. Examples of transformation include map, filter, reduceByKey, and sort.
- An action returns a value to the driver program (aka memory space of this notebook) after running a computation on the data set. Examples of action include count, collect, reduce, and save.
"All transformations in Spark are lazy, in that they do not compute their results right away. Instead, they just remember the transformations applied to some base dataset (e.g. a file). The transformations are only computed when an action requires a result to be returned to the driver program." -- Spark Documentation
RDD Operations in Spark
Transformations:
map(f: T -> U) : RDD[T] -> RDD[U]
filter(f: T -> Bool) : RDD[T] -> RDD[T]
flatMap(f: T -> Seq[U]) : RDD[T] -> RDD[U]
sample(fraction: Float) : RDD[T] -> RDD[T] (deterministic sampling)
groupByKey() : RDD[(K,V)] -> RDD[(K, Seq[V])]
reduceByKey(f: (V,V) -> V) : RDD[(K,V)] -> RDD[(K,V)]
union() : (RDD[T], RDD[T]) -> RDD[T]
join() : (RDD[(K,V)], RDD[(K,W)]) -> RDD[(K,(V,W))]
cogroup() : (RDD[(K,V)], RDD[(K,W)] -> RDD[(K, (Seq[V],Seq[W]))]
crossProduct() : (RDD[T], RDD[U]) -> RDD[(T,U)]
mapValues(f: V -> W) : RDD[(K,V)] -> RDD[(K,W)] (preserves partitioning)
sort(c: Comparator[K]) : RDD[(K,V)] -> RDD[(K,V)]
partitionBy(p: Partitioner[K]) : RDD[(K,V)] -> RDD[(K,V)]
Actions:
count() : RDD[T] -> Long
collect() : RDD[T] -> Seq[T]
reduce(f: (T,T) -> T) : RDD[T] -> T
lookup(k : K) : RDD[(K,V)] -> Seq[V] (on hash/range partitionied RDDs)
save(path: String) : Outputs RDD to a storage system
End of explanation
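A tiny self-contained sketch (toy data assumed, not part of the original lab) showing how a few of the operations listed above compose -- transformations stay lazy until an action forces evaluation:
nums = sc.parallelize(range(10))
evens = nums.filter(lambda x: x % 2 == 0)                # transformation (lazy)
pairs = evens.map(lambda x: (x % 4, x))                  # transformation (lazy)
print(pairs.reduceByKey(lambda a, b: a + b).collect())   # actions trigger the computation
print(nums.count())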
!hdfs dfs -cat /repository/gutenberg-shakespeare.txt \
2>/dev/null | head -n 100
wordcount_step_01 = textFile.flatMap(lambda line: line.split(" "))
wordcount_step_01
wordcount_step_01.take(20)
wordcount_step_02 = wordcount_step_01.map(lambda word: (word, 1))
wordcount_step_02.take(20)
wordcount_step_03 = wordcount_step_02.reduceByKey(lambda a, b: a + b)
wordcount_step_03.take(20)
Explanation: Step-by-step actions:
End of explanation
sc.stop()
Explanation: Challenge
Augment the mapping process of WordCount with a function to filter out punctuations and capitalization from the unique words
To stop the Spark job, call sc.stop()
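One possible solution sketch for the challenge (an assumption on my part, not the official answer; it must run while the SparkContext is still alive, i.e. before sc.stop()):
import string
def clean(word):
    # lower-case and strip leading/trailing punctuation
    return word.lower().strip(string.punctuation)
wordcount_clean = textFile.flatMap(lambda line: line.split(" ")) \
                          .map(clean) \
                          .filter(lambda w: w != "") \
                          .map(lambda w: (w, 1)) \
                          .reduceByKey(lambda a, b: a + b)
print(wordcount_clean.takeOrdered(10, key=lambda kv: -kv[1]))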
End of explanation |
14,929 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Darwin's bibliography
<p><img src="https
Step1: Data
Step2: Tokenize
Step3: Stemming
<p>As we are analysing 20 full books, the stemming algorithm can take several minutes to run and, in order to make the process faster, we will directly load the final results from a pickle file and review the method used to generate it.</p>
Step4: Modelling
Building a bag-of-words model
<p>Now that we have transformed the texts into stemmed tokens, we need to build models that will be useable by downstream algorithms.</p>
<p>First, we will create a universe of all words contained in our corpus of Charles Darwin's books, which we call <em>a dictionary</em>. Then, using the stemmed tokens and the dictionary, we will create <strong>bag-of-words models</strong> (BoW) of each of our texts. The BoW models will represent our books as a list of all unique tokens they contain associated with their respective number of occurrences. </p>
<p>To better understand the structure of such a model, we will print the five first elements of one of the "<em>On the Origin of Species</em>" BoW model.</p>
Step5: The most common words of a given book
<p>The results returned by the bag-of-words model are certainly easy to use for a computer but hard to interpret for a human. It is not straightforward to understand which stemmed tokens are present in a given book from Charles Darwin, and how many occurrences we can find.</p>
<p>In order to better understand how the model has been generated and visualize its content, we will transform it into a DataFrame and display the 10 most common stems for the book "<em>On the Origin of Species</em>".</p>
Step6: Build a tf-idf model
<p>If it wasn't for the presence of the stem "<em>speci</em>", we would have a hard time guessing this BoW model comes from the <em>On the Origin of Species</em> book. The most recurring words are, apart from a few exceptions, very common and unlikely to carry any information peculiar to the given book. We need to use an additional step in order to determine which tokens are the most specific to a book.</p>
<p>To do so, we will use a <strong>tf-idf model</strong> (term frequency–inverse document frequency). This model defines the importance of each word depending on how frequent it is in this text and how infrequent it is in all the other documents. As a result, a high tf-idf score for a word will indicate that this word is specific to this text.</p>
<p>After computing those scores, we will print the 10 words most specific to the "<em>On the Origin of Species</em>" book (i.e., the 10 words with the highest tf-idf score).</p>
Step7: The results of the tf-idf model
<p>Once again, the format of those results is hard to interpret for a human. Therefore, we will transform it into a more readable version and display the 10 most specific words for the "<em>On the Origin of Species</em>" book.</p>
Step8: Compute distance between texts
<p>The results of the tf-idf algorithm now return stemmed tokens which are specific to each book. We can, for example, see that topics such as selection, breeding or domestication are defining "<em>On the Origin of Species</em>" (and yes, in this book, Charles Darwin talks quite a lot about pigeons too). Now that we have a model associating tokens to how specific they are to each book, we can measure how related two books are to each other.</p>
<p>To this purpose, we will use a measure of similarity called <strong>cosine similarity</strong> and we will visualize the results as a distance matrix, i.e., a matrix showing all pairwise distances between Darwin's books.</p>
Step9: The book most similar to "On the Origin of Species"
<p>We now have a matrix containing all the similarity measures between any pair of books from Charles Darwin! We can now use this matrix to quickly extract the information we need, i.e., the distance between one book and one or several others. </p>
<p>As a first step, we will display which books are the most similar to "<em>On the Origin of Species</em>," more specifically we will produce a bar chart showing all books ranked by how similar they are to Darwin's landmark work.</p>
Step10: Which books have similar content?
<p>This turns out to be extremely useful if we want to determine a given book's most similar work. For example, we have just seen that if you enjoyed "<em>On the Origin of Species</em>," you can read books discussing similar concepts such as "<em>The Variation of Animals and Plants under Domestication</em>" or "<em>The Descent of Man, and Selection in Relation to Sex</em>." If you are familiar with Darwin's work, these suggestions will likely seem natural to you. Indeed, <em>On the Origin of Species</em> has a whole chapter about domestication and <em>The Descent of Man, and Selection in Relation to Sex</em> applies the theory of natural selection to human evolution. Hence, the results make sense.</p>
<p>However, we now want to have a better understanding of the big picture and see how Darwin's books are generally related to each other (in terms of topics discussed). To this purpose, we will represent the whole similarity matrix as a dendrogram, which is a standard tool to display such data. <strong>This last approach will display all the information about book similarities at once.</strong> For example, we can find a book's closest relative but, also, we can visualize which groups of books have similar topics (e.g., the cluster about Charles Darwin personal life with his autobiography and letters). If you are familiar with Darwin's bibliography, the results should not surprise you too much, which indicates the method gives good results. Otherwise, next time you read one of the author's book, you will know which other books to read next in order to learn more about the topics it addressed.</p> | Python Code:
import glob
import re, os
from tqdm import tqdm_notebook
import pickle
import pandas as pd
from nltk.stem import PorterStemmer
from gensim import corpora
from gensim.models import TfidfModel
from gensim import similarities
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.cluster import hierarchy
ps = PorterStemmer()
Explanation: Darwin's bibliography
<p><img src="https://assets.datacamp.com/production/project_607/img/CharlesDarwin.jpg" alt="Charles Darwin" width="300px"></p>
<p>Charles Darwin is one of the few universal figures of science. His most renowned work is without a doubt his "<em>On the Origin of Species</em>" published in 1859 which introduced the concept of natural selection. But Darwin wrote many other books on a wide range of topics, including geology, plants or his personal life. In this notebook, we will automatically detect how closely related his books are to each other.</p>
<p>To this purpose, we will develop the bases of <strong>a content-based book recommendation system</strong>, which will determine which books are close to each other based on how similar the discussed topics are. The methods we will use are commonly used in text- or documents-heavy industries such as legal, tech or customer support to perform some common task such as text classification or handling search engine queries.</p>
<p>Let's take a look at the books we'll use in our recommendation system.</p>
Imports
Dependencies
End of explanation
folder = "datasets/"
files = glob.glob(folder + '*.txt')
files.sort()
txts = []
titles = []
for n in files:
f = open(n, encoding='utf-8-sig')
# Remove all non-alpha-numeric characters
txts.append(re.sub('[\W_]+', ' ', f.read()))
titles.append(os.path.basename(n).replace(".txt", ""))
# ['{} - {:,}'.format(title, len(txt)) for title, txt in zip(titles, txts)]
pd.DataFrame(data = [
(title, len(txt)) for title, txt in zip(titles, txts)
], columns=['Title', '#characters']).sort_values('#characters', ascending=False)
# for i in range(len(titles)):
# if titles[i] == 'OriginofSpecies':
# ori = i
book_index = titles.index('OriginofSpecies')
book_index
Explanation: Data
End of explanation
%%time
# stop words
stoplist = set('for a of the and to in to be which some is at that we i who whom show via may my our might as well'.split())
txts_lower_case = [txt.lower() for txt in txts]
txts_split = [txt.split() for txt in txts_lower_case]
texts = [[word for word in txt if word not in stoplist] for txt in txts_split]
print(texts[book_index][:20])
Explanation: Tokenize
End of explanation
# # Load the stemmed tokens list from the pregenerated pickle file
# texts_stem = pickle.load( open( 'datasets/texts_stem.p', 'rb' ) )
%%time
# texts_stem = [[ps.stem(word) for word in text] for text in texts]
texts_stem = []
for i in tqdm_notebook(range(len(texts))):
book_stemmed = []
for word in texts[i]:
book_stemmed.append( ps.stem(word) )
texts_stem.append(book_stemmed)
print(texts_stem[book_index][:20])
Explanation: Stemming
<p>As we are analysing 20 full books, the stemming algorithm can take several minutes to run and, in order to make the process faster, we will directly load the final results from a pickle file and review the method used to generate it.</p>
End of explanation
dictionary = corpora.Dictionary(texts_stem)
# Create a bag-of-words model for each book, using the previously generated dictionary
bows = [dictionary.doc2bow(txt) for txt in texts_stem]
print(bows[book_index][:5])
Explanation: Modelling
Building a bag-of-words model
<p>Now that we have transformed the texts into stemmed tokens, we need to build models that will be useable by downstream algorithms.</p>
<p>First, we will create a universe of all words contained in our corpus of Charles Darwin's books, which we call <em>a dictionary</em>. Then, using the stemmed tokens and the dictionary, we will create <strong>bag-of-words models</strong> (BoW) of each of our texts. The BoW models will represent our books as a list of all unique tokens they contain associated with their respective number of occurrences. </p>
<p>To better understand the structure of such a model, we will print the five first elements of one of the "<em>On the Origin of Species</em>" BoW model.</p>
End of explanation
# Convert the BoW model for "On the Origin of Species" into a DataFrame
df_bow_origin = pd.DataFrame(bows[book_index], columns=['index', 'occurrences'])
# Add a column containing the token corresponding to the dictionary index
df_bow_origin['token'] = df_bow_origin['index'].apply(lambda i: dictionary[i])
df_bow_origin.sort_values('occurrences', ascending=False).head(10)
Explanation: The most common words of a given book
<p>The results returned by the bag-of-words model are certainly easy to use for a computer but hard to interpret for a human. It is not straightforward to understand which stemmed tokens are present in a given book from Charles Darwin, and how many occurrences we can find.</p>
<p>In order to better understand how the model has been generated and visualize its content, we will transform it into a DataFrame and display the 10 most common stems for the book "<em>On the Origin of Species</em>".</p>
End of explanation
model = TfidfModel(bows)
# Print the model for "On the Origin of Species"
print(len(model[bows[book_index]]))
Explanation: Build a tf-idf model
<p>If it wasn't for the presence of the stem "<em>speci</em>", we would have a hard time guessing this BoW model comes from the <em>On the Origin of Species</em> book. The most recurring words are, apart from a few exceptions, very common and unlikely to carry any information peculiar to the given book. We need to use an additional step in order to determine which tokens are the most specific to a book.</p>
<p>To do so, we will use a <strong>tf-idf model</strong> (term frequency–inverse document frequency). This model defines the importance of each word depending on how frequent it is in this text and how infrequent it is in all the other documents. As a result, a high tf-idf score for a word will indicate that this word is specific to this text.</p>
<p>After computing those scores, we will print the 10 words most specific to the "<em>On the Origin of Species</em>" book (i.e., the 10 words with the highest tf-idf score).</p>
End of explanation
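<p>For reference (this formula is not spelled out in the original text, and gensim's exact weighting scheme is configurable), the default score is essentially term frequency times inverse document frequency,</p>
$$\mathrm{tfidf}(t,d) = \mathrm{tf}(t,d) \times \log_2 \frac{N}{\mathrm{df}(t)},$$
<p>where N is the number of books in the corpus and df(t) is the number of books containing token t; each document vector is then length-normalized, so stems that appear in every book get weights close to zero while book-specific stems stand out.</p>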
# Convert the tf-idf model for "On the Origin of Species" into a DataFrame
df_tfidf = pd.DataFrame(model[bows[book_index]], columns=['id', 'score'])
# Add the tokens corresponding to the numerical indices for better readability
df_tfidf['token'] = df_tfidf['id'].apply(lambda i: dictionary[i])
df_tfidf.sort_values('score', ascending=False).head(10)
Explanation: The results of the tf-idf model
<p>Once again, the format of those results is hard to interpret for a human. Therefore, we will transform it into a more readable version and display the 10 most specific words for the "<em>On the Origin of Species</em>" book.</p>
End of explanation
sims = similarities.MatrixSimilarity(model[bows])
sim_df = pd.DataFrame(list(sims))
sim_df.columns = titles
sim_df.index = titles
print(sim_df)
Explanation: Compute distance between texts
<p>The results of the tf-idf algorithm now return stemmed tokens which are specific to each book. We can, for example, see that topics such as selection, breeding or domestication are defining "<em>On the Origin of Species</em>" (and yes, in this book, Charles Darwin talks quite a lot about pigeons too). Now that we have a model associating tokens to how specific they are to each book, we can measure how related two books are to each other.</p>
<p>To this purpose, we will use a measure of similarity called <strong>cosine similarity</strong> and we will visualize the results as a distance matrix, i.e., a matrix showing all pairwise distances between Darwin's books.</p>
End of explanation
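<p>For reference, the cosine similarity used here between two tf-idf vectors u and v is</p>
$$\cos(u,v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert},$$
<p>which, for non-negative tf-idf weights, ranges from 0 (no shared weighted vocabulary) to 1 (identical direction).</p>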
v = sim_df.OriginofSpecies
v_sorted = v.sort_values()
# v_sorted = v_sorted[:-1]
plt.barh(range(len(v_sorted)), v_sorted.values)
plt.xlabel('Similarity')
plt.ylabel('Books')
plt.yticks(range(len(v_sorted)), v_sorted.index)
plt.xlim((0, 1))
plt.title('Books most similar to the "Origin of Species"')
plt.show()
Explanation: The book most similar to "On the Origin of Species"
<p>We now have a matrix containing all the similarity measures between any pair of books from Charles Darwin! We can now use this matrix to quickly extract the information we need, i.e., the distance between one book and one or several others. </p>
<p>As a first step, we will display which books are the most similar to "<em>On the Origin of Species</em>," more specifically we will produce a bar chart showing all books ranked by how similar they are to Darwin's landmark work.</p>
End of explanation
Z = hierarchy.linkage(sim_df, method='ward')
a = hierarchy.dendrogram(
Z,
leaf_font_size=8,
labels=sim_df.index,
orientation="left"
)
Explanation: Which books have similar content?
<p>This turns out to be extremely useful if we want to determine a given book's most similar work. For example, we have just seen that if you enjoyed "<em>On the Origin of Species</em>," you can read books discussing similar concepts such as "<em>The Variation of Animals and Plants under Domestication</em>" or "<em>The Descent of Man, and Selection in Relation to Sex</em>." If you are familiar with Darwin's work, these suggestions will likely seem natural to you. Indeed, <em>On the Origin of Species</em> has a whole chapter about domestication and <em>The Descent of Man, and Selection in Relation to Sex</em> applies the theory of natural selection to human evolution. Hence, the results make sense.</p>
<p>However, we now want to have a better understanding of the big picture and see how Darwin's books are generally related to each other (in terms of topics discussed). To this purpose, we will represent the whole similarity matrix as a dendrogram, which is a standard tool to display such data. <strong>This last approach will display all the information about book similarities at once.</strong> For example, we can find a book's closest relative but, also, we can visualize which groups of books have similar topics (e.g., the cluster about Charles Darwin personal life with his autobiography and letters). If you are familiar with Darwin's bibliography, the results should not surprise you too much, which indicates the method gives good results. Otherwise, next time you read one of the author's book, you will know which other books to read next in order to learn more about the topics it addressed.</p>
End of explanation |
14,930 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB
Add a column to dinos that contains the decimal equivalent of the sha256 hash. Hint.
Step1: LAB
Sort dinos by the column sha256 -- this will be an alphabetical sort.
Step2: How about in descending order? | Python Code:
dinos.assign(Decimal = dinos.sha256.apply(lambda x: int(x, base=16)))
Explanation: LAB
Add a column to dinos that contains the decimal equivalent of the sha256 hash. Hint.
End of explanation
dinos.sort_values(by='sha256').head(10)
Explanation: LAB
Sort dinos by the column sha256 -- this will be an alphabetical sort.
End of explanation
dinos.sort_values(by='sha256', ascending=False).head(10)
Explanation: How about in descending order?
End of explanation |
14,931 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conditional Probability Solution
First we'll modify the code to have some fixed purchase probability regardless of age, say 40%
Step1: Next we will compute P(E|F) for some age group, let's pick 30 year olds again
Step2: Now we'll compute P(E) | Python Code:
from numpy import random
random.seed(0)
totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
totalPurchases = 0
for _ in range(100000):
ageDecade = random.choice([20, 30, 40, 50, 60, 70])
purchaseProbability = 0.4
totals[ageDecade] += 1
if (random.random() < purchaseProbability):
totalPurchases += 1
purchases[ageDecade] += 1
Explanation: Conditional Probability Solution
First we'll modify the code to have some fixed purchase probability regardless of age, say 40%:
End of explanation
PEF = float(purchases[30]) / float(totals[30])
print("P(purchase | 30s): " + str(PEF))
Explanation: Next we will compute P(E|F) for some age group, let's pick 30 year olds again:
End of explanation
PE = float(totalPurchases) / 100000.0
print("P(Purchase):" + str(PE))
Explanation: Now we'll compute P(E)
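As a quick sanity check (this snippet is an addition, not part of the original solution): with a fixed 40% purchase probability, age and purchase should be independent, so P(E|F) should be close to P(E) and P(E)P(F) close to P(E,F).
print("P(E|F) / P(E): " + str(PEF / PE))              # ~1.0 under independence
PF = float(totals[30]) / 100000.0
print("P(E)P(F): " + str(PE * PF))
print("P(E,F):   " + str(float(purchases[30]) / 100000.0))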
End of explanation |
14,932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Machine Learning
LA Team Submission 6 ##
Lukas Mosser, Alfredo De la Fuente
In this approach for solving the facies classification problem ( https://github.com/seg/2016-ml-contest ) we will explore feature exploration and model tuning strategies.
Step1: Data Preprocessing
Step2: We proceed to run Paolo Bestagini's routine to include a small window of values to account for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.
Step3: Data Analysis
In this section we will run a Cross Validation routine
Step4: Prediction | Python Code:
%%sh
pip install pandas
pip install scikit-learn
pip install tpot
from __future__ import print_function
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier
from scipy.signal import medfilt
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
Explanation: Facies classification using Machine Learning
LA Team Submission 6 ##
Lukas Mosser, Alfredo De la Fuente
In this approach for solving the facies classification problem ( https://github.com/seg/2016-ml-contest ) we will explore the following strategies:
- Features Exploration: based on Paolo Bestagini's work, we will consider imputation, normalization and augmentation routines for the initial features.
- Model tuning:
Libraries
We will need to install the following libraries and packages.
End of explanation
#Load Data
data = pd.read_csv('../facies_vectors.csv')
# Parameters
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
# Store features and labels
X = data[feature_names].values
y = data['Facies'].values
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
# Fill 'PE' missing values with mean
imp = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(X)
X = imp.transform(X)
Explanation: Data Preprocessing
End of explanation
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
X_aug, padded_rows = augment_features(X, well, depth)
# Initialize model selection methods
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
split_list.append({'train':train, 'val':val})
def preprocess():
# Preprocess data to use in model
X_train_aux = []
X_test_aux = []
y_train_aux = []
y_test_aux = []
# For each data split
split = split_list[5]
# Remove padded rows
split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
# Select training and validation data from current split
X_tr = X_aug[split_train_no_pad, :]
X_v = X_aug[split['val'], :]
y_tr = y[split_train_no_pad]
y_v = y[split['val']]
# Select well labels for validation data
well_v = well[split['val']]
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
X_train_aux.append( X_tr )
X_test_aux.append( X_v )
y_train_aux.append( y_tr )
y_test_aux.append ( y_v )
X_train = np.concatenate( X_train_aux )
X_test = np.concatenate ( X_test_aux )
y_train = np.concatenate ( y_train_aux )
y_test = np.concatenate ( y_test_aux )
return X_train , X_test , y_train , y_test
Explanation: We proceed to run Paolo Bestagini's routine to include a small window of values to account for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.
End of explanation
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = preprocess()
tpot = TPOTClassifier(generations=5, population_size=100,
verbosity=2, max_eval_time_mins=30,
max_time_mins=6*60, scoring='f1_micro',
random_state = 17)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
#tpot.export('FinalPipeline_LM_long_2.py')
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer, StandardScaler
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
#clf = make_pipeline(make_union(VotingClassifier([("est", ExtraTreesClassifier(criterion="gini", max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), XGBClassifier(learning_rate=0.73, max_depth=10, min_child_weight=10, n_estimators=500, subsample=0.27))
#clf = make_pipeline( KNeighborsClassifier(n_neighbors=5, weights="distance") )
#clf = make_pipeline(MaxAbsScaler(),make_union(VotingClassifier([("est", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)),ExtraTreesClassifier(criterion="entropy", max_features=0.0001, n_estimators=500))
# * clf = make_pipeline( make_union(VotingClassifier([("est", BernoulliNB(alpha=60.0, binarize=0.26, fit_prior=True))]), FunctionTransformer(lambda X: X)),RandomForestClassifier(n_estimators=500))
clf = make_pipeline(
make_union(VotingClassifier([("est", BernoulliNB(alpha=0.41000000000000003, binarize=0.43, fit_prior=True))]), FunctionTransformer(lambda X: X)),
StandardScaler(),
RandomForestClassifier(n_estimators=500)
)
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)
return y_v_hat
Explanation: Data Analysis
In this section we will run a Cross Validation routine
End of explanation
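A hedged sketch of that cross-validation loop (the exact routine behind the submitted scores is not shown here): evaluate train_and_test on every leave-two-wells-out split and report the micro-averaged F1 score. Note this retrains the classifier once per split, so it is slow.
cv_f1 = []
for split in split_list:
    split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
    y_v_hat = train_and_test(X_aug[split_train_no_pad, :], y[split_train_no_pad],
                             X_aug[split['val'], :], well[split['val']])
    cv_f1.append(f1_score(y[split['val']], y_v_hat, average='micro'))
print('Average F1 across splits: %.3f' % np.mean(cv_f1))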
#Load testing data
test_data = pd.read_csv('../validation_data_nofacies.csv')
# Prepare training data
X_tr = X
y_tr = y
# Augment features
X_tr, padded_rows = augment_features(X_tr, well, depth)
# Removed padded rows
X_tr = np.delete(X_tr, padded_rows, axis=0)
y_tr = np.delete(y_tr, padded_rows, axis=0)
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
# Augment features
X_ts, padded_rows = augment_features(X_ts, well_ts, depth_ts)
# Predict test labels
y_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts)
# Save predicted labels
test_data['Facies'] = y_ts_hat
test_data.to_csv('Prediction_XXI_LM_Final.csv')
Explanation: Prediction
End of explanation |
14,933 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
S&P 500 Components Time Series
Get time series of all S&P 500 components
Step1: Current S&P500 symbols.
See my SP500 project that generates the sp500.csv file.
Step2: Create cache directory for current sp500 symbol timeseries
Step3: Update time series for the symbols below.
Time series will be fetched for any symbols not already cached. | Python Code:
from datetime import datetime
import pandas as pd
import pinkfish as pf
# -*- encoding: utf-8 -*-
%matplotlib inline
Explanation: S&P 500 Components Time Series
Get time series of all S&P 500 components
End of explanation
filename = 'sp500.csv'
symbols = pd.read_csv(filename)
symbols = sorted(list(symbols['Symbol']))
print(symbols)
Explanation: Current S&P500 symbols.
See my SP500 project that generates the sp500.csv file.
End of explanation
now = datetime.now()
dt_string = now.strftime('%m-%d-%Y') # mm-dd-YYYY
dir_name = 'sp500-components-{}'.format(dt_string)
Explanation: Create cache directory for current sp500 symbol timeseries
End of explanation
pf.update_cache_symbols(symbols=symbols, dir_name=dir_name, from_year=2018)
Explanation: Update time series for the symbols below.
Time series will be fetched for any symbols not already cached.
End of explanation |
14,934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build a numpy.ndarray, an equivalent dataFrame, and a numpy.rec.array
Step1: Simple Array Operation
Step2: pandas.dataFrame
Step3: pandas.dataFrame
Step4: numpy.rec.array
Step5: pandas.dataFrame with object type, expected to be slow
Step6: I would have expected pandas.dataFrame.sum to be more competitive with numpy.ndarray.sum, where the type of the dataFrame column was specified.
List comprehension style iteration in numpy.ndarray
Step7: List comprehension style sum on a list
Step8: So for a dataFrame with object type, doing array operations like sum (admittedly silly), is about as good as doing this with a list comprehension. But iterating through the dataFrame rows using a list comprehension style is much worse. | Python Code:
import numpy as np
import pandas as pd
from numpy.testing import assert_almost_equal  # assumed source of assert_almost_equal used below

rows = 10000000
# Equivalent numpy array
arr = np.random.uniform(size=rows*3).reshape(rows, 3)
# The pandas dataFrame with column names
df = pd.DataFrame(arr, columns=['x','y','z'])
# a `numpy.recarray`
rec = df.to_records()
df.head()
df.dtypes
Explanation: Build a numpy.ndarray, an equivalent dataFrame, and a numpy.rec.array
End of explanation
%timeit arr[:, 2].sum()
arrsum = arr[:, 2].sum()
Explanation: Simple Array Operation: Sum
numpy.ndarray
End of explanation
%timeit df.z.sum()
pdattsum = df.z.sum()
%timeit df.z.values.sum()
pdattsum = df.z.values.sum()
%timeit df.z.values.sum()
pdattsum = df.z.values.sum()
assert_almost_equal(arrsum, pdattsum)
Explanation: pandas.dataFrame
End of explanation
%timeit df['z'].sum()
pdnstyle = df['z'].sum()
Explanation: pandas.dataFrame
End of explanation
%timeit rec['z'].sum()
reccolnames = rec['z'].sum()
Explanation: numpy.rec.array
End of explanation
df['z'] = df['z'].astype('object')
df.dtypes
%timeit df['z'].sum()
objectSum = df['z'].sum()
Explanation: pandas.dataFrame with object type, expected to be slow
End of explanation
%timeit sum(i for i in arr[:, 2])
itersumnumpy = sum(i for i in arr[:, 2])
assert_almost_equal(itersumnumpy,arrsum, decimal=5)
Explanation: I would have expected pandas.dataFrame.sum to be more competitive with numpy.ndarray.sum, where the type of the dataFrame column was specified.
List comprehension style iteration in numpy.ndarray
End of explanation
l = arr[:, 2].tolist()
%timeit sum(i for i in l)
listsum = sum(i for i in l)
assert_almost_equal(listsum, arrsum, 5)
%timeit sum(i for i in df['z'])
pandasitersum = sum(i for i in df['z'])
t = tuple(l)
%timeit sum(i for i in t)
tuplesum = sum(i for i in t)
assert_almost_equal(pandasitersum, arrsum, 5)
assert_almost_equal(tuplesum, arrsum, 5)
Explanation: List comprehension style sum on a list: Again expected to be slow
End of explanation
%timeit df.values
Explanation: So for a dataFrame with object type, doing array operations like sum (admittedly silly), is about as good as doing this with a list comprehension. But iterating through the dataFrame rows using a list comprehension style is much worse.
End of explanation |
14,935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Output Containers and Layout Managers
Output containers are objects that hold a collection of other objects, and display all of their contents, even when they are complex interactive objects or MIME types.
By default the contents are just stacked up on the page, but you can configure them to get tabs, grid, cycling, or other layout methods.
Without Output
Step1: Stacked Output Containers
Step2: Tabbed Output Containers
Step3: Grid Output Containers
Step4: Cycling Output Container | Python Code:
# Defining a variable doesn't produce any output
x = "some string"
Explanation: Output Containers and Layout Managers
Output containers are objects that hold a collection of other objects, and display all of their contents, even when they are complex interactive objects or MIME types.
By default the contents are just stacked up on the page, but you can configure them to get tabs, grid, cycling, or other layout methods.
Without Output
End of explanation
from beakerx import *
o = OutputContainer()
o.addItem("simplest example")
o.addItem([2, 3, 5, 7])
o.addItem(HTML("<h1>title</h1>"))
o.addItem(None)
o
Explanation: Stacked Output Containers
End of explanation
import pandas as pd
rates = pd.read_csv("../../../doc/resources/data/interest-rates.csv")
c = Color(120, 120, 120, 100)
plot1 = Plot(initWidth= 300, initHeight= 400)
plot1.add(Points(x= rates.y1, y=rates.y30, size= 3, displayName="y1 vs y30"))
plot1.add(Points(x= rates.m3, y=rates.y5, size= 3, displayName="m3 vs y5"))
plot1.add(Line(x= rates.y1, y=rates.y30, color= c))
plot1.add(Line(x= rates.m3, y=rates.y5, color= c))
plot1.setShowLegend(False)
plot2 = SimpleTimePlot(rates, ["m3", "y1"], showLegend=False, initWidth= 300, initHeight= 400)
plot3 = SimpleTimePlot(rates, ["y5", "y10"], showLegend=False, initWidth= 300, initHeight= 400)
table = pd.DataFrame(rates.head(n=10), columns=["m3", "y1", "y5", "y10"])
l = TabbedOutputContainerLayoutManager()
l.setBorderDisplayed(False)
o = OutputContainer()
o.setLayoutManager(l)
o.addItem(plot1, "Scatter with History")
o.addItem(plot2, "Short Term")
o.addItem(plot3, "Long Term")
o.addItem(table, "1990/01")
o
Explanation: Tabbed Output Containers
End of explanation
plot1.setShowLegend(False)
bars = CategoryPlot(initWidth= 300, initHeight= 400)
bars.add(CategoryBars(value= [[1.1, 2.4, 3.8], [1, 3, 5]]))
lg = GridOutputContainerLayoutManager(3)
og = OutputContainer()
og.setLayoutManager(lg)
og.addItem(plot1, "Scatter with History")
og.addItem(plot2, "Short Term")
og.addItem(plot3, "Long Term1")
og.addItem(plot3, "Long Term2")
og.addItem(table, "1990/01")
og.addItem(bars, "Bar Chart")
og
Explanation: Grid Output Containers
End of explanation
l = CyclingOutputContainerLayoutManager()
l.setPeriod(2000); # milliseconds
l.setBorderDisplayed(False);
o = OutputContainer()
o.setLayoutManager(l)
o.addItem(plot1, "Scatter with History")
o.addItem(plot2, "Short Term")
o.addItem(table, "1990/01")
o.addItem(plot3, "Long Term")
o
Explanation: Cycling Output Container
End of explanation |
14,936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
02 - Introduction to Machine Learning
by Alejandro Correa Bahnsen
version 0.1, Feb 2016
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Jake Vanderplas
What is Machine Learning?
In this section we will begin to explore the basic principles of machine learning.
Machine Learning is about building programs with tunable parameters (typically an
array of floating point values) that are adjusted automatically so as to improve
their behavior by adapting to previously seen data.
Machine Learning can be considered a subfield of Artificial Intelligence since those
algorithms can be seen as building blocks to make computers learn to behave more
intelligently by somehow generalizing rather than just storing and retrieving data items
like a database system would do.
We'll take a look at two very simple machine learning tasks here.
The first is a classification task
Step1: A classification algorithm may be used to draw a dividing boundary
between the two clusters of points
Step2: This may seem like a trivial task, but it is a simple version of a very important concept.
By drawing this separating line, we have learned a model which can generalize to new
data
Step3: Again, this is an example of fitting a model to data, such that the model can make
generalizations about new data. The model has been learned from the training
data, and can be used to predict the result of test data
Step4: Quick Question
Step5: This data is four dimensional, but we can visualize two of the dimensions
at a time using a simple scatter-plot
Step6: Dimensionality Reduction
Step7: Clustering
Step8: Let's then evaluate the performance of the clustering versus the ground truth
Step9: Classification Logistic Regression
Step10: Recap | Python Code:
# Import libraries
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set();
cmap = mpl.colors.ListedColormap(sns.color_palette("hls", 3))
# Create a random set of examples
from sklearn.datasets.samples_generator import make_blobs
X, Y = make_blobs(n_samples=50, centers=2,random_state=23, cluster_std=2.90)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=cmap)
plt.show()
Explanation: 02 - Introduction to Machine Learning
by Alejandro Correa Bahnsen
version 0.1, Feb 2016
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Jake Vanderplas
What is Machine Learning?
In this section we will begin to explore the basic principles of machine learning.
Machine Learning is about building programs with tunable parameters (typically an
array of floating point values) that are adjusted automatically so as to improve
their behavior by adapting to previously seen data.
Machine Learning can be considered a subfield of Artificial Intelligence since those
algorithms can be seen as building blocks to make computers learn to behave more
intelligently by somehow generalizing rather than just storing and retrieving data items
like a database system would do.
We'll take a look at two very simple machine learning tasks here.
The first is a classification task: the figure shows a
collection of two-dimensional data, colored according to two different class
labels.
End of explanation
from sklearn.linear_model import SGDClassifier
clf = SGDClassifier(loss="hinge", alpha=0.01, n_iter=200, fit_intercept=True)
clf.fit(X, Y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max] x [y_min, y_max].
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, .05), np.arange(y_min, y_max, .05))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contour(xx, yy, Z)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=cmap)
plt.show()
Explanation: A classification algorithm may be used to draw a dividing boundary
between the two clusters of points:
End of explanation
a = 0.5
b = 1.0
# x from 0 to 10
x = 30 * np.random.random(20)
# y = a*x + b with noise
y = a * x + b + np.random.normal(size=x.shape)
plt.scatter(x, y)
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
clf.fit(x[:, None], y)
# underscore at the end indicates a fit parameter
print(clf.coef_)
print(clf.intercept_)
x_new = np.linspace(0, 30, 100)
y_new = clf.predict(x_new[:, None])
plt.scatter(x, y)
plt.plot(x_new, y_new)
Explanation: This may seem like a trivial task, but it is a simple version of a very important concept.
By drawing this separating line, we have learned a model which can generalize to new
data: if you were to drop another point onto the plane which is unlabeled, this algorithm
could now predict whether it's a blue or a red point.
The next simple task we'll look at is a regression task: a simple best-fit line
to a set of data:
End of explanation
from IPython.core.display import Image, display
imp_path = 'https://raw.githubusercontent.com/jakevdp/sklearn_pycon2015/master/notebooks/images/'
display(Image(url=imp_path+'iris_setosa.jpg'))
print("Iris Setosa\n")
display(Image(url=imp_path+'iris_versicolor.jpg'))
print("Iris Versicolor\n")
display(Image(url=imp_path+'iris_virginica.jpg'))
print("Iris Virginica")
display(Image(url='https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/6160065e1e574a20edddc47116a0512d20656e26/notebooks/iris_with_length.png'))
print('Iris versicolor and the petal and sepal width and length')
print('From, Python Data Analytics, Apress, 2015.')
Explanation: Again, this is an example of fitting a model to data, such that the model can make
generalizations about new data. The model has been learned from the training
data, and can be used to predict the result of test data:
here, we might be given an x-value, and the model would
allow us to predict the y value. Again, this might seem like a trivial problem,
but it is a basic example of a type of operation that is fundamental to
machine learning tasks.
Representation of Data in Scikit-learn
Machine learning is about creating models from data: for that reason, we'll start by
discussing how data can be represented in order to be understood by the computer. Along
with this, we'll build on our matplotlib examples from the previous section and show some
examples of how to visualize data.
Most machine learning algorithms implemented in scikit-learn expect data to be stored in a
two-dimensional array or matrix. The arrays can be
either numpy arrays, or in some cases scipy.sparse matrices.
The size of the array is expected to be [n_samples, n_features]
n_samples: The number of samples: each sample is an item to process (e.g. classify).
A sample can be a document, a picture, a sound, a video, an astronomical object,
a row in database or CSV file,
or whatever you can describe with a fixed set of quantitative traits.
n_features: The number of features or distinct traits that can be used to describe each
item in a quantitative manner. Features are generally real-valued, but may be boolean or
discrete-valued in some cases.
The number of features must be fixed in advance. However it can be very high dimensional
(e.g. millions of features) with most of them being zeros for a given sample. This is a case
where scipy.sparse matrices can be useful, in that they are
much more memory-efficient than numpy arrays.
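As a tiny illustration of that point (a sketch only; the array shape and values here are made up and are not part of the iris example that follows):
from scipy.sparse import csr_matrix
import numpy as np
dense = np.zeros((3, 1000))
dense[0, 5] = 1.0            # only a single non-zero entry
sparse = csr_matrix(dense)   # stores just the non-zero values and their positions
print(sparse.nnz)            # number of stored non-zero elements: 1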
A Simple Example: the Iris Dataset
As an example of a simple dataset, we're going to take a look at the
iris data stored by scikit-learn.
The data consists of measurements of three different species of irises.
There are three species of iris in the dataset, which we can picture here:
End of explanation
from sklearn.datasets import load_iris
iris = load_iris()
iris.keys()
n_samples, n_features = iris.data.shape
print((n_samples, n_features))
print(iris.data[0])
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
print(iris.target_names)
Explanation: Quick Question:
If we want to design an algorithm to recognize iris species, what might the data be?
Remember: we need a 2D array of size [n_samples x n_features].
What would the n_samples refer to?
What might the n_features refer to?
Remember that there must be a fixed number of features for each sample, and feature
number i must be a similar kind of quantity for each sample.
Loading the Iris Data with Scikit-Learn
Scikit-learn has a very straightforward set of data on these iris species. The data consist of
the following:
Features in the Iris dataset:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Target classes to predict:
Iris Setosa
Iris Versicolour
Iris Virginica
scikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:
End of explanation
import pandas as pd # Pandas is a topic of next session
data_temp = pd.DataFrame(iris.data, columns=iris.feature_names)
data_temp['target'] = iris.target
data_temp['target'] = data_temp['target'].astype('category')
data_temp['target'].cat.categories = iris.target_names
sns.pairplot(data_temp, hue='target', palette=sns.color_palette("hls", 3))
Explanation: This data is four dimensional, but we can visualize two of the dimensions
at a time using a simple scatter-plot:
End of explanation
X, y = iris.data, iris.target
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
pca.fit(X)
X_reduced = pca.transform(X)
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap=cmap)
X, y = iris.data, iris.target
from sklearn.manifold import Isomap
pca = Isomap(n_components=3)
pca.fit(X)
X_reduced = pca.transform(X)
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap=cmap)
X_reduced.shape
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap=cmap)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
ax.set_title('Iris Dataset by PCA', size=14)
ax.scatter(X_reduced[:,0],X_reduced[:,1],X_reduced[:,2], c=y, cmap=cmap)
ax.set_xlabel('First eigenvector')
ax.set_ylabel('Second eigenvector')
ax.set_zlabel('Third eigenvector')
ax.w_xaxis.set_ticklabels(())
ax.w_yaxis.set_ticklabels(())
ax.w_zaxis.set_ticklabels(())
plt.show()
Explanation: Dimensionality Reduction: PCA
Principal Component Analysis (PCA) is a dimension reduction technique that can find the combinations of variables that explain the most variance.
Consider the iris dataset. It cannot be visualized in a single 2D plot, as it has 4 features. We are going to extract 2 combinations of sepal and petal dimensions to visualize it:
End of explanation
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0) # Fixing the RNG in kmeans
k_means.fit(X)
y_pred = k_means.predict(X)
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y_pred, cmap=cmap);
Explanation: Clustering: K-means
Clustering groups together observations that are homogeneous with respect to a given criterion, finding ''clusters'' in the data.
Note that these clusters will uncover relevant hidden structure of the data only if the criterion used highlights it.
End of explanation
from sklearn.metrics import confusion_matrix
# Compute confusion matrix
cm = confusion_matrix(y, y_pred)
np.set_printoptions(precision=2)
print(cm)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(iris.target_names))
    plt.xticks(tick_marks, iris.target_names, rotation=45)
    plt.yticks(tick_marks, iris.target_names)
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
plt.figure()
plot_confusion_matrix(cm)
Explanation: Let's then evaluate the performance of the clustering versus the ground truth
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn import cross_validation
errors = []
for i in range(1000):
    X_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data, iris.target, test_size=0.4, random_state=i)
    clf = LogisticRegression()
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    acc = (y_pred == y_test).sum()
    err = 1 - acc / y_test.shape[0]
    errors.append(err)
plt.plot(list(range(1000)), errors)
errors = np.array(errors)
print(errors.max(), errors.min(), errors.mean(), errors.std())
from sklearn.ensemble import RandomForestClassifier
errors = []
for i in range(1000):
    X_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data, iris.target, test_size=0.4, random_state=i)
    clf = RandomForestClassifier()
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    acc = (y_pred == y_test).sum()
    err = 1 - acc / y_test.shape[0]
    errors.append(err)
plt.plot(list(range(1000)), errors)
errors = np.array(errors)
print(errors.max(), errors.min(), errors.mean(), errors.std())
Explanation: Classification Logistic Regression
End of explanation
from IPython.display import Image
Image(url="http://scikit-learn.org/dev/_static/ml_map.png")
Explanation: Recap: Scikit-learn's estimator interface
Scikit-learn strives to have a uniform interface across all methods,
and we'll see examples of these below. Given a scikit-learn estimator
object named model, the following methods are available:
Available in all Estimators
model.fit() : fit training data. For supervised learning applications,
this accepts two arguments: the data X and the labels y (e.g. model.fit(X, y)).
For unsupervised learning applications, this accepts only a single argument,
the data X (e.g. model.fit(X)).
Available in supervised estimators
model.predict() : given a trained model, predict the label of a new set of data.
This method accepts one argument, the new data X_new (e.g. model.predict(X_new)),
and returns the learned label for each object in the array.
model.predict_proba() : For classification problems, some estimators also provide
this method, which returns the probability that a new observation has each categorical label.
In this case, the label with the highest probability is returned by model.predict().
model.score() : for classification or regression problems, most (all?) estimators implement
a score method. Scores are between 0 and 1, with a larger score indicating a better fit.
Available in unsupervised estimators
model.predict() : predict labels in clustering algorithms.
model.transform() : given an unsupervised model, transform new data into the new basis.
This also accepts one argument X_new, and returns the new representation of the data based
on the unsupervised model.
model.fit_transform() : some estimators implement this method,
which more efficiently performs a fit and a transform on the same input data.
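As a rough sketch of that pattern (reusing the LogisticRegression import and the last train/test split created above; any other estimator could stand in for the model):
model = LogisticRegression()
model.fit(X_train, y_train)            # learn from the training data
y_pred = model.predict(X_test)         # predict labels for unseen data
print(model.score(X_test, y_test))     # mean accuracy, between 0 and 1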
Flow Chart: How to Choose your Estimator
This is a flow chart created by scikit-learn super-contributor Andreas Mueller which gives a nice summary of which algorithms to choose in various situations. Keep it around as a handy reference!
End of explanation |
14,937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom Factors
When we first looked at factors, we explored the set of built-in factors. Frequently, a desired computation isn't included as a built-in factor. One of the most powerful features of the Pipeline API is that it allows us to define our own custom factors. When a desired computation doesn't exist as a built-in, we define a custom factor.
Conceptually, a custom factor is identical to a built-in factor. It accepts inputs, window_length, and mask as constructor arguments, and returns a Factor object each day.
Let's take an example of a computation that doesn't exist as a built-in
Step1: Next, let's define our custom factor to calculate the standard deviation over a trailing window using numpy.nanstd
Step2: Finally, let's instantiate our factor in make_pipeline()
Step3: When this pipeline is run, StdDev.compute() will be called every day with data as follows
Step4: Default Inputs
When writing a custom factor, we can set default inputs and window_length in our CustomFactor subclass. For example, let's define the TenDayMeanDifference custom factor to compute the mean difference between two data columns over a trailing window using numpy.nanmean. Let's set the default inputs to [USEquityPricing.close, USEquityPricing.open] and the default window_length to 10
Step5: <i>Remember in this case that close and open are each 10 x ~8000 2D numpy arrays.</i>
If we call TenDayMeanDifference without providing any arguments, it will use the defaults.
Step6: The defaults can be manually overridden by specifying arguments in the constructor call.
Step7: Further Example
Let's take another example where we build a momentum custom factor and use it to create a filter. We will then use that filter as a screen for our pipeline.
Let's start by defining a Momentum factor to be the division of the most recent close price by the close price from n days ago where n is the window_length.
Step8: Now, let's instantiate our Momentum factor (twice) to create a 10-day momentum factor and a 20-day momentum factor. Let's also create a positive_momentum filter returning True for securities with both a positive 10-day momentum and a positive 20-day momentum.
Step9: Next, let's add our momentum factors and our positive_momentum filter to make_pipeline. Let's also pass positive_momentum as a screen to our pipeline.
Step10: Running this pipeline outputs the standard deviation and each of our momentum computations for securities with positive 10-day and 20-day momentum. | Python Code:
from quantopian.pipeline import CustomFactor
import numpy
Explanation: Custom Factors
When we first looked at factors, we explored the set of built-in factors. Frequently, a desired computation isn't included as a built-in factor. One of the most powerful features of the Pipeline API is that it allows us to define our own custom factors. When a desired computation doesn't exist as a built-in, we define a custom factor.
Conceptually, a custom factor is identical to a built-in factor. It accepts inputs, window_length, and mask as constructor arguments, and returns a Factor object each day.
Let's take an example of a computation that doesn't exist as a built-in: standard deviation. To create a factor that computes the standard deviation over a trailing window, we can subclass quantopian.pipeline.CustomFactor and implement a compute method whose signature is:
def compute(self, today, asset_ids, out, *inputs):
...
*inputs are M x N numpy arrays, where M is the window_length and N is the number of securities (usually around ~8000 unless a mask is provided). *inputs are trailing data windows. Note that there will be one M x N array for each BoundColumn provided in the factor's inputs list. The data type of each array will be the dtype of the corresponding BoundColumn.
out is an empty array of length N. out will be the output of our custom factor each day. The job of compute is to write output values into out.
asset_ids will be an integer array of length N containing security ids corresponding to the columns in our *inputs arrays.
today will be a pandas Timestamp representing the day for which compute is being called.
Of these, *inputs and out are most commonly used.
An instance of CustomFactor that’s been added to a pipeline will have its compute method called every day. For example, let's define a custom factor that computes the standard deviation of the close price over the last 5 days. To start, let's add CustomFactor and numpy to our import statements.
End of explanation
class StdDev(CustomFactor):
    def compute(self, today, asset_ids, out, values):
        # Calculates the column-wise standard deviation, ignoring NaNs
        out[:] = numpy.nanstd(values, axis=0)
Explanation: Next, let's define our custom factor to calculate the standard deviation over a trailing window using numpy.nanstd:
End of explanation
def make_pipeline():
    std_dev = StdDev(inputs=[USEquityPricing.close], window_length=5)
    return Pipeline(
        columns={
            'std_dev': std_dev
        }
    )
Explanation: Finally, let's instantiate our factor in make_pipeline():
End of explanation
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result
Explanation: When this pipeline is run, StdDev.compute() will be called every day with data as follows:
values: An M x N numpy array, where M is 5 (window_length), and N is ~8000 (the number of securities in our database on the day in question).
out: An empty array of length N (~8000). In this example, the job of compute is to populate out with an array of 5-day close price standard deviations.
End of explanation
class TenDayMeanDifference(CustomFactor):
    # Default inputs.
    inputs = [USEquityPricing.close, USEquityPricing.open]
    window_length = 10
    def compute(self, today, asset_ids, out, close, open):
        # Calculates the column-wise mean difference, ignoring NaNs
        out[:] = numpy.nanmean(close - open, axis=0)
Explanation: Default Inputs
When writing a custom factor, we can set default inputs and window_length in our CustomFactor subclass. For example, let's define the TenDayMeanDifference custom factor to compute the mean difference between two data columns over a trailing window using numpy.nanmean. Let's set the default inputs to [USEquityPricing.close, USEquityPricing.open] and the default window_length to 10:
End of explanation
# Computes the 10-day mean difference between the daily open and close prices.
close_open_diff = TenDayMeanDifference()
Explanation: <i>Remember in this case that close and open are each 10 x ~8000 2D numpy arrays.</i>
If we call TenDayMeanDifference without providing any arguments, it will use the defaults.
End of explanation
# Computes the 10-day mean difference between the daily high and low prices.
high_low_diff = TenDayMeanDifference(inputs=[USEquityPricing.high, USEquityPricing.low])
Explanation: The defaults can be manually overridden by specifying arguments in the constructor call.
End of explanation
class Momentum(CustomFactor):
    # Default inputs
    inputs = [USEquityPricing.close]
    # Compute momentum
    def compute(self, today, assets, out, close):
        out[:] = close[-1] / close[0]
Explanation: Further Example
Let's take another example where we build a momentum custom factor and use it to create a filter. We will then use that filter as a screen for our pipeline.
Let's start by defining a Momentum factor to be the division of the most recent close price by the close price from n days ago where n is the window_length.
End of explanation
ten_day_momentum = Momentum(window_length=10)
twenty_day_momentum = Momentum(window_length=20)
positive_momentum = ((ten_day_momentum > 1) & (twenty_day_momentum > 1))
Explanation: Now, let's instantiate our Momentum factor (twice) to create a 10-day momentum factor and a 20-day momentum factor. Let's also create a positive_momentum filter returning True for securities with both a positive 10-day momentum and a positive 20-day momentum.
End of explanation
def make_pipeline():
    ten_day_momentum = Momentum(window_length=10)
    twenty_day_momentum = Momentum(window_length=20)
    positive_momentum = ((ten_day_momentum > 1) & (twenty_day_momentum > 1))
    std_dev = StdDev(inputs=[USEquityPricing.close], window_length=5)
    return Pipeline(
        columns={
            'std_dev': std_dev,
            'ten_day_momentum': ten_day_momentum,
            'twenty_day_momentum': twenty_day_momentum
        },
        screen=positive_momentum
    )
Explanation: Next, let's add our momentum factors and our positive_momentum filter to make_pipeline. Let's also pass positive_momentum as a screen to our pipeline.
End of explanation
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result
Explanation: Running this pipeline outputs the standard deviation and each of our momentum computations for securities with positive 10-day and 20-day momentum.
End of explanation |
14,938 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization 1
Step1: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
Step2: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from __future__ import print_function
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
randx = np.random.randn(500)
randy = np.random.randn(500)
plt.scatter(randx, randy, color = "g", marker = "x")
plt.xlabel("Random X")
plt.ylabel("Random Y")
plt.title("Random Data!!!!!")
plt.box(False)
plt.grid(True)
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
data = np.random.randn(500000)
def plothist(bins, numdata):
    plt.hist(np.random.randn(numdata), bins=bins, color = "k", ec = "w")
interact(plothist, bins=widgets.IntSlider(min=1,max=100,step=1,value=10), numdata=\
widgets.IntSlider(min=10,max=10000,step=10,value=10));
plt.xlabel("Random Variable X")
plt.ylabel("Counts")
plt.title("Distribution of a random variable in abjustable bins.")
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation |
14,939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Classifiers - support vector machines (SVMs)
SVMs try to construct a hyperplane maximizing the margin between the two classes. It selects a subset of the
input, called the support vectors, which are the observations closest to the separating hyperplane.
Step1: There are several support vector machine implementations in scikit-learn . The most commonly used ones
are svm.SVC , svm.NuSVC and svm.LinearSVC ; “SVC” stands for Support Vector Classifier.
Using kernels
Classes are not always separable by a hyperplane, so it would be desirable to have a decision function that is
not linear but that may be for instance polynomial or exponential
Step2: Polynomial kernel
Step3: RBF kernel (Radial Basis Function)
Step6: Example | Python Code:
from sklearn import svm
import matplotlib.pyplot as plt
from sklearn import datasets
import numpy as np
%matplotlib inline
iris = datasets.load_iris()
iris_X = iris.data
iris_y = iris.target
np.unique(iris_y)
svc = svm.SVC(kernel='linear')
svc.fit(iris.data, iris.target)
Explanation: Linear Classifiers - support vector machines (SVMs)
SVMs try to construct a hyperplane maximizing the margin between the two classes. It selects a subset of the
input, called the support vectors, which are the observations closest to the separating hyperplane.
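As a quick check of that last point (a small sketch using the svc fitted on the iris data above), the chosen support vectors can be inspected directly:
print(svc.support_vectors_.shape)   # coordinates of the support vectors
print(svc.n_support_)               # number of support vectors per class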
End of explanation
svc = svm.SVC(kernel='linear')
Explanation: There are several support vector machine implementations in scikit-learn . The most commonly used ones
are svm.SVC , svm.NuSVC and svm.LinearSVC ; “SVC” stands for Support Vector Classifier.
Using kernels
Classes are not always separable by a hyperplane, so it would be desirable to have a decision function that is
not linear but that may be for instance polynomial or exponential:
Linear kernel
End of explanation
svc = svm.SVC(kernel='poly', degree=3)
Explanation: Polynomial kernel
End of explanation
svc = svm.SVC(kernel='rbf')
Explanation: RBF kernel (Radial Basis Function)
End of explanation
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
def make_meshgrid(x, y, h=.02):
    """Create a mesh of points to plot in

    Parameters
    ----------
    x: data to base x-axis meshgrid on
    y: data to base y-axis meshgrid on
    h: stepsize for meshgrid, optional

    Returns
    -------
    xx, yy : ndarray
    """
    x_min, x_max = x.min() - 1, x.max() + 1
    y_min, y_max = y.min() - 1, y.max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))
    return xx, yy
def plot_contours(ax, clf, xx, yy, **params):
    """Plot the decision boundaries for a classifier.

    Parameters
    ----------
    ax: matplotlib axes object
    clf: a classifier
    xx: meshgrid ndarray
    yy: meshgrid ndarray
    params: dictionary of params to pass to contourf, optional
    """
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    out = ax.contourf(xx, yy, Z, **params)
    return out
# import some data to play with
iris = datasets.load_iris()
# Take the first two features. We could avoid this by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
# we create an instance of SVM and fit our data. We do not scale our
# data since we want to plot the support vectors
C = 1.0 # SVM regularization parameter
models = (svm.SVC(kernel='linear', C=C),
svm.LinearSVC(C=C),
svm.SVC(kernel='rbf', gamma=0.7, C=C),
svm.SVC(kernel='poly', degree=3, C=C))
models = (clf.fit(X, y) for clf in models)
# title for the plots
titles = ('SVC with linear kernel',
'LinearSVC (linear kernel)',
'SVC with RBF kernel',
'SVC with polynomial (degree 3) kernel')
# Set-up 2x2 grid for plotting.
fig, sub = plt.subplots(2, 2)
plt.subplots_adjust(wspace=0.4, hspace=0.4)
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
for clf, title, ax in zip(models, titles, sub.flatten()):
    plot_contours(ax, clf, xx, yy,
                  cmap=plt.cm.coolwarm, alpha=0.8)
    ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
    ax.set_xlim(xx.min(), xx.max())
    ax.set_ylim(yy.min(), yy.max())
    ax.set_xlabel('Sepal length')
    ax.set_ylabel('Sepal width')
    ax.set_xticks(())
    ax.set_yticks(())
    ax.set_title(title)
plt.show()
Explanation: Example
End of explanation |
14,940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EEG forward operator with a template MRI
This tutorial explains how to compute the forward operator from EEG data
using the standard template MRI subject fsaverage.
.. caution
Step1: Load the data
We use here EEG data from the BCI dataset.
<div class="alert alert-info"><h4>Note</h4><p>See `plot_montage` to view all the standard EEG montages
available in MNE-Python.</p></div>
Step2: Setup source space and compute forward
Step3: From here on, standard inverse imaging methods can be used!
Infant MRI surrogates
We don't have a sample infant dataset for MNE, so let's fake a 10-20 one
Step4: Get an infant MRI template
To use an infant head model for M/EEG data, you can use
Step5: It comes with several helpful built-in files, including a 10-20 montage
in the MRI coordinate frame, which can be used to compute the
MRI<->head transform trans
Step6: There are also BEM and source spaces
Step7: You can ensure everything is as expected by plotting the result | Python Code:
import os.path as op
import numpy as np
import mne
from mne.datasets import eegbci
from mne.datasets import fetch_fsaverage
# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)
# The files live in:
subject = 'fsaverage'
trans = 'fsaverage' # MNE has a built-in fsaverage transformation
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')
Explanation: EEG forward operator with a template MRI
This tutorial explains how to compute the forward operator from EEG data
using the standard template MRI subject fsaverage.
.. caution:: Source reconstruction without an individual T1 MRI from the
subject will be less accurate. Do not over interpret
activity locations which can be off by multiple centimeters.
Adult template MRI (fsaverage)
First we show how fsaverage can be used as a surrogate subject.
End of explanation
raw_fname, = eegbci.load_data(subject=1, runs=[6])
raw = mne.io.read_raw_edf(raw_fname, preload=True)
# Clean channel names to be able to use a standard 1005 montage
new_names = dict(
(ch_name,
ch_name.rstrip('.').upper().replace('Z', 'z').replace('FP', 'Fp'))
for ch_name in raw.ch_names)
raw.rename_channels(new_names)
# Read and set the EEG electrode locations:
montage = mne.channels.make_standard_montage('standard_1005')
raw.set_montage(montage)
raw.set_eeg_reference(projection=True) # needed for inverse modeling
# Check that the locations of EEG electrodes is correct with respect to MRI
mne.viz.plot_alignment(
raw.info, src=src, eeg=['original', 'projected'], trans=trans,
show_axes=True, mri_fiducials=True, dig='fiducials')
Explanation: Load the data
We use here EEG data from the BCI dataset.
<div class="alert alert-info"><h4>Note</h4><p>See `plot_montage` to view all the standard EEG montages
available in MNE-Python.</p></div>
End of explanation
fwd = mne.make_forward_solution(raw.info, trans=trans, src=src,
bem=bem, eeg=True, mindist=5.0, n_jobs=1)
print(fwd)
Explanation: Setup source space and compute forward
End of explanation
ch_names = \
'Fz Cz Pz Oz Fp1 Fp2 F3 F4 F7 F8 C3 C4 T7 T8 P3 P4 P7 P8 O1 O2'.split()
data = np.random.RandomState(0).randn(len(ch_names), 1000)
info = mne.create_info(ch_names, 1000., 'eeg')
raw = mne.io.RawArray(data, info)
Explanation: From here on, standard inverse imaging methods can be used!
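For instance, a minimal sketch of those next steps could look like the following (hypothetical continuation: it assumes an evoked response and a noise covariance have already been computed from epoched data, which is not done in this tutorial):
from mne.minimum_norm import make_inverse_operator, apply_inverse
# 'evoked' and 'noise_cov' are assumed to exist; see the MNE inverse tutorials
inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1. / 9., method='dSPM')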
Infant MRI surrogates
We don't have a sample infant dataset for MNE, so let's fake a 10-20 one:
End of explanation
subject = mne.datasets.fetch_infant_template('6mo', subjects_dir, verbose=True)
Explanation: Get an infant MRI template
To use an infant head model for M/EEG data, you can use
:func:mne.datasets.fetch_infant_template to download an infant template:
End of explanation
fname_1020 = op.join(subjects_dir, subject, 'montages', '10-20-montage.fif')
mon = mne.channels.read_dig_fif(fname_1020)
mon.rename_channels(
{f'EEG{ii:03d}': ch_name for ii, ch_name in enumerate(ch_names, 1)})
trans = mne.channels.compute_native_head_t(mon)
raw.set_montage(mon)
print(trans)
Explanation: It comes with several helpful built-in files, including a 10-20 montage
in the MRI coordinate frame, which can be used to compute the
MRI<->head transform trans:
End of explanation
bem_dir = op.join(subjects_dir, subject, 'bem')
fname_src = op.join(bem_dir, f'{subject}-oct-6-src.fif')
src = mne.read_source_spaces(fname_src)
print(src)
fname_bem = op.join(bem_dir, f'{subject}-5120-5120-5120-bem-sol.fif')
bem = mne.read_bem_solution(fname_bem)
Explanation: There are also BEM and source spaces:
End of explanation
fig = mne.viz.plot_alignment(
raw.info, subject=subject, subjects_dir=subjects_dir, trans=trans,
src=src, bem=bem, coord_frame='mri', mri_fiducials=True, show_axes=True,
surfaces=('white', 'outer_skin', 'inner_skull', 'outer_skull'))
mne.viz.set_3d_view(fig, 25, 70, focalpoint=[0, -0.005, 0.01])
Explanation: You can ensure everything is as expected by plotting the result:
End of explanation |
14,941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
quant-econ Solutions
Step1: Exercise 1
This exercise asked you to validate the laws of motion for $\gamma$ and $\mu$ given in the lecture, based on the stated result about Bayesian updating in a scalar Gaussian setting. The stated result tells us that after observing average output $X$ of the $M$ firms, our posterior beliefs will be
$$
N(\mu_0, 1/\gamma_0)
$$
where
$$
\mu_0 = \frac{\mu \gamma + M X \gamma_x}{\gamma + M \gamma_x}
\quad \text{and} \quad
\gamma_0 = \gamma + M \gamma_x
$$
If we take a random variable $\theta$ with this distribution and then evaluate the distribution of $\rho \theta + \sigma_\theta w$ where $w$ is independent and standard normal, we get the expressions for $\mu'$ and $\gamma'$ given in the lecture.
Exercise 2
First let's replicate the plot that illustrates the law of motion for precision, which is
$$
\gamma_{t+1} =
\left(
\frac{\rho^2}{\gamma_t + M \gamma_x} + \sigma_\theta^2
\right)^{-1}
$$
Here $M$ is the number of active firms. The next figure plots $\gamma_{t+1}$ against $\gamma_t$ on a 45 degree diagram for different values of $M$
Step2: The points where the curves hit the 45 degree lines are the long run steady states corresponding to each $M$, if that value of $M$ was to remain fixed. As the number of firms falls, so does the long run steady state of precision.
Next let's generate time series for beliefs and the aggregates -- that is, the number
of active firms and average output.
Step3: First let's see how well $\mu$ tracks $\theta$ in these simulations
Step4: Now let's plot the whole thing together | Python Code:
%matplotlib inline
from __future__ import division
import matplotlib.pyplot as plt
import numpy as np
import quantecon as qe
import seaborn as sns
import itertools
Explanation: quant-econ Solutions: Uncertainty Traps
Solutions for http://quant-econ.net/py/uncertainty_traps.html
End of explanation
palette = itertools.cycle(sns.color_palette())
econ = qe.models.UncertaintyTrapEcon()
rho, sig_theta, gx = econ.rho, econ.sig_theta, econ.gx # simplify names
g = np.linspace(1e-10, 3, 200) # gamma grid
fig, ax = plt.subplots(figsize=(9, 9))
ax.plot(g, g, 'k-') # 45 degree line
for M in range(7):
    g_next = 1 / (rho**2 / (g + M * gx) + sig_theta**2)
    label_string = r"$M = {}$".format(M)
    ax.plot(g, g_next, lw=2, label=label_string, color=next(palette))
ax.legend(loc='lower right', fontsize=14)
ax.set_xlabel(r'$\gamma$', fontsize=16)
ax.set_ylabel(r"$\gamma'$", fontsize=16)
ax.grid()
plt.show()
Explanation: Exercise 1
This exercise asked you to validate the laws of motion for $\gamma$ and $\mu$ given in the lecture, based on the stated result about Bayesian updating in a scalar Gaussian setting. The stated result tells us that after observing average output $X$ of the $M$ firms, our posterior beliefs will be
$$
N(\mu_0, 1/\gamma_0)
$$
where
$$
\mu_0 = \frac{\mu \gamma + M X \gamma_x}{\gamma + M \gamma_x}
\quad \text{and} \quad
\gamma_0 = \gamma + M \gamma_x
$$
If we take a random variable $\theta$ with this distribution and then evaluate the distribution of $\rho \theta + \sigma_\theta w$ where $w$ is independent and standard normal, we get the expressions for $\mu'$ and $\gamma'$ given in the lecture.
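Spelling out that last step (it is just the usual rule for an affine transformation of a Gaussian):
$$
\mu' = \mathbb{E}[\rho \theta + \sigma_\theta w] = \rho \mu_0,
\qquad
\frac{1}{\gamma'} = \mathrm{Var}(\rho \theta + \sigma_\theta w)
= \frac{\rho^2}{\gamma_0} + \sigma_\theta^2
= \frac{\rho^2}{\gamma + M \gamma_x} + \sigma_\theta^2
$$
which is exactly the law of motion replicated in Exercise 2 below.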
Exercise 2
First let's replicate the plot that illustrates the law of motion for precision, which is
$$
\gamma_{t+1} =
\left(
\frac{\rho^2}{\gamma_t + M \gamma_x} + \sigma_\theta^2
\right)^{-1}
$$
Here $M$ is the number of active firms. The next figure plots $\gamma_{t+1}$ against $\gamma_t$ on a 45 degree diagram for different values of $M$
End of explanation
sim_length=2000
mu_vec = np.empty(sim_length)
theta_vec = np.empty(sim_length)
gamma_vec = np.empty(sim_length)
X_vec = np.empty(sim_length)
M_vec = np.empty(sim_length)
mu_vec[0] = econ.mu
gamma_vec[0] = econ.gamma
theta_vec[0] = 0
w_shocks = np.random.randn(sim_length)
for t in range(sim_length-1):
    X, M = econ.gen_aggregates()
    X_vec[t] = X
    M_vec[t] = M
    econ.update_beliefs(X, M)
    econ.update_theta(w_shocks[t])
    mu_vec[t+1] = econ.mu
    gamma_vec[t+1] = econ.gamma
    theta_vec[t+1] = econ.theta
# Record final values of aggregates
X, M = econ.gen_aggregates()
X_vec[-1] = X
M_vec[-1] = M
Explanation: The points where the curves hit the 45 degree lines are the long run steady states corresponding to each $M$, if that value of $M$ was to remain fixed. As the number of firms falls, so does the long run steady state of precision.
Next let's generate time series for beliefs and the aggregates -- that is, the number
of active firms and average output.
End of explanation
fig, ax = plt.subplots(figsize=(9, 6))
ax.plot(range(sim_length), theta_vec, alpha=0.6, lw=2, label=r"$\theta$")
ax.plot(range(sim_length), mu_vec, alpha=0.6, lw=2, label=r"$\mu$")
ax.legend(fontsize=16)
plt.show()
Explanation: First let's see how well $\mu$ tracks $\theta$ in these simulations
End of explanation
fig, axes = plt.subplots(4, 1, figsize=(12, 20))
# Add some spacing
fig.subplots_adjust(hspace=0.3)
series = (theta_vec, mu_vec, gamma_vec, M_vec)
names = r'$\theta$', r'$\mu$', r'$\gamma$', r'$M$'
for ax, vals, name in zip(axes, series, names):
# determine suitable y limits
s_max, s_min = max(vals), min(vals)
s_range = s_max - s_min
y_max = s_max + s_range * 0.1
y_min = s_min - s_range * 0.1
ax.set_ylim(y_min, y_max)
# Plot series
ax.plot(range(sim_length), vals, alpha=0.6, lw=2)
ax.set_title("time series for {}".format(name), fontsize=16)
ax.grid()
plt.show()
Explanation: Now let's plot the whole thing together
End of explanation |
14,942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BioPandas
Authors
Step1: Working with mmCIF Structures in DataFrames
Loading mmCIF Files
There are several ways to load a mmCIF structure into a PandasMmcif object.
1 -- Loading an mmCIF file from the Protein Data Bank
MmCIF files can be directly fetched from The Protein Data Bank at http
Step2: 2 -- Loading an mmCIF file from the AlphaFold Structure Database
(New in version 0.4.0)
PDB files can be directly fetched from The AlphaFold Structure Database at https
Step3: 3 a) -- Loading a mmCIF structure from a local file
Alternatively, we can load mmCIF files from local directories as regular mmCIF files using read_mmcif
Step4: [File link
Step5: [File link
Step6: The most interesting / useful attribute is the PandasMmcif.df DataFrame dictionary though, which gives us access to the mmCIF files as pandas DataFrames. Let's print the first 3 lines from the ATOM coordinate section to see how it looks like
Step7: But more on that in the next section.
4 -- Loading a mmCIF file from a Python List
Mmcif files can also be loaded into a PandasMmcif object from a Python list
Step8: Looking at mmCIF files in DataFrames
mmCIF files are parsed according to the mmCIF file format description.
For more information, we recommend the helpful Beginner’s Guide to PDB Structures and the PDBx/mmCIF Format guide.
After loading an mmCIF file from rcsb.org or our local drive, the PandasMmcif.df attribute should contain the following 3 DataFrame objects
Step9: [File link
Step10: 'group_PDB'
Step11: However, there are a few naming differences in the ANISOU columns, for instance, the 'ATOM' and 'HETATM' DataFrames feature the following columns that are not contained in ANISOU
Step12: Vice versa, ANISOU contains the following columns that are not in the 'ATOM' and 'HETATM' DataFrames
Step13: BioPandas tries to stay to the original column names as close as possible, and for more details, we recommend checking the original descriptions
Step14: [File link
Step15: Or main chain atoms
Step16: It's also easy to strip our coordinate section from hydrogen atoms if there are any ...
Step17: Or, let's compute the average temperature factor of our protein main chain
Step18: Plotting
Since we are using pandas under the hood, which in turn uses matplotlib under the hood, we can produce quick summary plots of our mmCIF structures relatively conveniently
Step19: [File link
Step20: Computing the Root Mean Square Deviation
BioPandas also comes with certain convenience functions, for example, ...
The Root-mean-square deviation (RMSD) is simply a measure of the average distance between atoms of 2 protein or ligand structures. This calculation of the Cartesian error follows the equation
Step21: [File links
Step22: Similarly, we can compute the RMSD between 2 related protein structures
Step23: Or the RMSD between the main chains only
Step24: <br>
Filtering PDBs by Distance
We can use the distance method to compute the distance between each atom (or a subset of atoms) in our data frame and a three-dimensional reference point. For example
Step25: [File link
Step26: And we can use this Series object, for instance, to select certain atoms in our DataFrame that fall within a desired distance threshold. For example, let's select all atoms that are within 7A of our reference point
Step27: Visualized in PyMOL, this subset (yellow surface) would look as follows
Step28: As shown above, the amino3to1 method returns a DataFrame containing the auth_asym_id (chain ID) and auth_comp_id (residue name) of the translated 1-letter amino acids. If you like to work with the sequence as a Python list of string characters, you could do the following
Step29: And if you prefer to work with the sequence as a string, you can use the join method
Step30: To iterate over the sequences of multi-chain proteins, you can use the unique method as shown below | Python Code:
%load_ext watermark
%watermark -d -u -p pandas,biopandas
import pandas as pd
pd.set_option('display.width', 600)
pd.set_option('display.max_columns', 8)
Explanation: BioPandas
Authors:
- Sebastian Raschka mail@sebastianraschka.com
- Arian Jamasb arian@jamasb.io
License: BSD 3 clause
Project Website: http://rasbt.github.io/biopandas/
Code Repository: https://github.com/rasbt/biopandas
End of explanation
from biopandas.mmcif import PandasMmcif
# Initialize a new PandasMmcif object
# and fetch the mmCIF file from rcsb.org
pmmcif = PandasMmcif().fetch_mmcif('3eiy')
Explanation: Working with mmCIF Structures in DataFrames
Loading mmCIF Files
There are several ways to load a mmCIF structure into a PandasMmcif object.
1 -- Loading an mmCIF file from the Protein Data Bank
MmCIF files can be directly fetched from The Protein Data Bank at http://www.rcsb.org via its unique 4-letter code after initializing a new PandasMmcif object and calling the fetch_mmcif method:
End of explanation
from biopandas.mmcif import PandasMmcif
# Initialize a new PandasPdb object
# and fetch the PDB file from alphafold.ebi.ac.uk
ppdb = PandasMmcif().fetch_mmcif(uniprot_id='Q5VSL9', source='alphafold2-v2')
Explanation: 2 -- Loading an mmCIF file from the AlphaFold Structure Database
(New in version 0.4.0)
PDB files can be directly fetched from The AlphaFold Structure Database at https://alphafold.ebi.ac.uk/ via its unique UniProt Identifier after initializing a new PandasPdb object and calling the fetch_af2 method:
End of explanation
pmmcif.read_mmcif('./data/3eiy.cif')
Explanation: 3 a) -- Loading a mmCIF structure from a local file
Alternatively, we can load mmCIF files from local directories as regular mmCIF files using read_mmcif:
End of explanation
pmmcif.read_mmcif('./data/3eiy.cif.gz')
Explanation: [File link: 3eiy.cif]
3 b) -- Loading a mmCIF structure from a local gzipped mmCIF file
Or, we can load them from gzip archives like so (note that the file must end with a '.gz' suffix in order to be recognized as a gzip file):
End of explanation
print('mmCIF Code: %s' % pmmcif.code)
print('mmCIF Header Line: %s' % pmmcif.header)
print('\nRaw mmCIF file contents:\n\n%s\n...' % pmmcif.pdb_text[:1000])
Explanation: [File link: 3eiy.cif.gz]
After the file was successfully loaded, we have access to the following attributes:
End of explanation
pmmcif.df['ATOM'].head(3)
Explanation: The most interesting / useful attribute is the PandasMmcif.df DataFrame dictionary though, which gives us access to the mmCIF files as pandas DataFrames. Let's print the first 3 lines from the ATOM coordinate section to see how it looks like:
End of explanation
with open('./data/3eiy.cif', 'r') as f:
    three_eiy = f.read()
pmmcif2 = PandasMmcif()
pmmcif2.read_mmcif_from_list(three_eiy)
pmmcif2.df['ATOM'].head()
Explanation: But more on that in the next section.
4 -- Loading a mmCIF file from a Python List
Mmcif files can also be loaded into a PandasMmcif object from a Python list:
End of explanation
from biopandas.mmcif import PandasMmcif
pmmcif = PandasMmcif()
pmmcif.read_mmcif('./data/3eiy.cif')
pmmcif.df.keys()
Explanation: Looking at mmCIF files in DataFrames
mmCIF files are parsed according to the mmCIF file format description.
For more information, we recommend the helpful Beginner’s Guide to PDB Structures and the PDBx/mmCIF Format guide.
After loading an mmCIF file from rcsb.org or our local drive, the PandasMmcif.df attribute should contain the following 3 DataFrame objects:
End of explanation
pmmcif.df['ATOM'].columns
Explanation: [File link: 3eiy.cif]
'ATOM': contains the entries from the ATOM coordinate section
'HETATM': ... entries from the "HETATM" coordinate section
'ANISOU': ... entries from the "ANISOU" coordinate section
The columns for 'ATOM' DataFrame are as follows:
End of explanation
pmmcif.df['HETATM'].head(2)
set(pmmcif.df['HETATM'].columns) == set(pmmcif.df['ATOM'].columns)
Explanation: 'group_PDB':
The group of atoms to which the atom site belongs. This data
item is provided for compatibility with the original Protein
Data Bank format, and only for that purpose.
'id': The value of _atom_site.id must uniquely identify a record in the
ATOM_SITE list. Note that this item need not be a number; it can be any unique
identifier.
'type_symbol': The code used to identify the atom species (singular or plural)
representing this atom type. Normally this code is the element
symbol. The code may be composed of any character except
an underscore with the additional proviso that digits designate
an oxidation state and must be followed by a + or - character.
'label_atom_id': An atom name identifier, e.g., N, CA, C, O, ...
'label_alt_id': A place holder to indicate alternate conformation. The alternate conformation
can be an entire polymer chain, or several residues or
partial residue (several atoms within one residue). If
an atom is provided in more than one position, then a
non-blank alternate location indicator must be used for
each of the atomic positions.
'label_comp_id': For protein polymer entities, this is the three-letter code for
the amino acid. For nucleic acid polymer entities, this is the one-letter code
for the base.
'label_asym_id': A value that uniquely identifies a record in
the STRUCT_ASYM list.
'label_entity_id': A value that uniquely identifies a record in
the ENTITY list.
'label_seq_id': A value that uniquely identifies a record in
the ENTITY_POLY_SEQ list.
'pdbx_PDB_ins_code': PDB insertion code.
'Cartn_x': The x atom-site coordinate in angstroms
'Cartn_y': The y atom-site coordinate in angstroms
'Cartn_z': The z atom-site coordinate in angstroms
'occupancy': The fraction of the atom type present at this site.
The sum of the occupancies of all the atom types at this site
may not significantly exceed 1.0 unless it is a dummy site.
'B_iso_or_equiv': Isotropic atomic displacement parameter, or equivalent isotropic
atomic displacement parameter, B_eq, calculated from the
anisotropic displacement parameters.
'pdbx_formal_charge': The net integer charge assigned to this atom. This is the
formal charge assignment normally found in chemical diagrams.
'auth_seq_id': An alternative identifier for _atom_site.label_seq_id that
may be provided by an author in order to match the identification
used in the publication that describes the structure.
'auth_comp_id': An alternative identifier for _atom_site.label_comp_id that
may be provided by an author in order to match the identification
used in the publication that describes the structure.
'auth_asym_id': An alternative identifier for _atom_site.label_asym_id that
may be provided by an author in order to match the identification
used in the publication that describes the structure.
'auth_atom_id': An alternative identifier for _atom_site.label_atom_id that
may be provided by an author in order to match the identification
used in the publication that describes the structure.
'pdbx_PDB_model_num': PDB model number.
The columns of the 'HETATM' DataFrame are identical to the 'ATOM' DataFrame that we've seen earlier:
End of explanation
set(pmmcif.df['ATOM'].columns) - set(pmmcif.df['ANISOU'].columns)
Explanation: However, there are a few naming differences in the ANISOU columns, for instance, the 'ATOM' and 'HETATM' DataFrames feature the following columns that are not contained in ANISOU:
End of explanation
set(pmmcif.df['ANISOU'].columns) - set(pmmcif.df['ATOM'].columns)
Explanation: Vice versa, ANISOU contains the following columns that are not in the 'ATOM' and 'HETATM' DataFrames:
End of explanation
from biopandas.mmcif import PandasMmcif
pmmcif = PandasMmcif()
pmmcif.read_mmcif('./data/3eiy.cif.gz')
pmmcif.df['ATOM'].head()
Explanation: BioPandas tries to stay as close as possible to the original column names, and for more details, we recommend checking the original descriptions:
ATOM/HETATM
ANISOU
<br>
<br>
Working with mmCIF DataFrames
In the previous sections, we've seen how to load mmCIF structures into DataFrames, and how to access them. Now, let's talk about manipulating mmCIF files in DataFrames.
End of explanation
pmmcif.df['ATOM'][pmmcif.df['ATOM']['auth_comp_id'] == 'PRO'].head()
Explanation: [File link: 3eiy.cif.gz]
Okay, there's actually not that much to say ...
Once we have our mmCIF file in the DataFrame format, we have the whole convenience of pandas right there at our fingertips.
For example, let's get all Proline residues:
End of explanation
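Because these selections are ordinary pandas boolean masks, they can be combined with the usual operators. The sketch below picks the C-alpha atoms of proline residues in one chain; the chain ID 'A' is just an assumed example value:
atom_df = pmmcif.df['ATOM']
# C-alpha atoms of proline residues in chain 'A' (sketch)
atom_df[(atom_df['auth_comp_id'] == 'PRO') &
        (atom_df['label_atom_id'] == 'CA') &
        (atom_df['auth_asym_id'] == 'A')].head()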
pmmcif.df['ATOM'][pmmcif.df['ATOM']['label_atom_id'] == 'CA'].head()
Explanation: Or the C-alpha (CA) atoms:
End of explanation
pmmcif.df['ATOM'][pmmcif.df['ATOM']['type_symbol'] != 'H'].head()
Explanation: It's also easy to strip hydrogen atoms from our coordinate section, if there are any ...
End of explanation
mainchain = pmmcif.df['ATOM'][(pmmcif.df['ATOM']['label_atom_id'] == 'C') |
(pmmcif.df['ATOM']['label_atom_id'] == 'O') |
(pmmcif.df['ATOM']['label_atom_id'] == 'N') |
(pmmcif.df['ATOM']['label_atom_id'] == 'CA')]
bfact_mc_avg = mainchain['B_iso_or_equiv'].mean()
print('Average B-Factor [Main Chain]: %.2f' % bfact_mc_avg)
Explanation: Or, let's compute the average temperature factor of our protein main chain:
End of explanation
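If a per-chain breakdown is preferred instead of a single average, plain pandas groupby works on the same columns; this is just a minimal sketch:
# average main-chain B-factor per chain (sketch)
mainchain.groupby('auth_asym_id')['B_iso_or_equiv'].mean()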
from biopandas.mmcif import PandasMmcif
pmmcif = PandasMmcif().read_mmcif('./data/3eiy.cif.gz')
Explanation: Plotting
Since we are using pandas under the hood, which in turn uses matplotlib, we can produce quick summary plots of our mmCIF structures relatively conveniently:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
pmmcif.df['ATOM']['B_iso_or_equiv'].plot(kind='hist')
plt.title('Distribution of B-Factors')
plt.xlabel('B-factor')
plt.ylabel('count')
plt.show()
pmmcif.df['ATOM']['B_iso_or_equiv'].plot(kind='line')
plt.title('B-Factors Along the Amino Acid Chain')
plt.xlabel('Residue Number')
plt.ylabel('B-factor in $A^2$')
plt.show()
pmmcif.df['ATOM']['type_symbol'].value_counts().plot(kind='bar')
plt.title('Distribution of Atom Types')
plt.xlabel('elements')
plt.ylabel('count')
plt.show()
Explanation: [File link: 3eiy.cif.gz]
End of explanation
from biopandas.mmcif import PandasMmcif
l_1 = PandasMmcif().read_mmcif('./data/lig_conf_1.cif')
l_2 = PandasMmcif().read_mmcif('./data/lig_conf_2.cif')
r = PandasMmcif.rmsd(l_1.df['HETATM'], l_2.df['HETATM'],
s=None) # all atoms, including hydrogens
print('RMSD: %.4f Angstrom' % r)
Explanation: Computing the Root Mean Square Deviation
BioPandas also comes with certain convenience functions, for example, ...
The Root-mean-square deviation (RMSD) is simply a measure of the average distance between atoms of 2 protein or ligand structures. This calculation of the Cartesian error follows the equation:
$$
RMSD(a, b) = \sqrt{\frac{1}{n} \sum^{n}_{i=1} \big((a_{ix} - b_{ix})^2 + (a_{iy} - b_{iy})^2 + (a_{iz} - b_{iz})^2 \big)}
= \sqrt{\frac{1}{n} \sum^{n}_{i=1} \| a_i - b_i \|_2^2}
$$
So, assuming that we have the following 2 conformations of a ligand molecule
we can compute the RMSD as follows:
End of explanation
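To make the equation above concrete, the same quantity can be computed by hand with NumPy from the coordinate columns. This is only a sketch and assumes both DataFrames list the atoms in the same order:
import numpy as np
a = l_1.df['HETATM'][['Cartn_x', 'Cartn_y', 'Cartn_z']].values
b = l_2.df['HETATM'][['Cartn_x', 'Cartn_y', 'Cartn_z']].values
# root of the mean squared per-atom Cartesian distance
rmsd_manual = np.sqrt(((a - b)**2).sum(axis=1).mean())
print('RMSD (manual): %.4f Angstrom' % rmsd_manual)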
r = PandasMmcif.rmsd(l_1.df['HETATM'], l_2.df['HETATM'],
s='carbon') # carbon atoms only
print('RMSD: %.4f Angstrom' % r)
r = PandasMmcif.rmsd(l_1.df['HETATM'], l_2.df['HETATM'],
s='heavy') # heavy atoms only
print('RMSD: %.4f Angstrom' % r)
Explanation: [File links: lig_conf_1.cif, lig_conf_2.cif]
End of explanation
p_1 = PandasMmcif().read_mmcif('./data/1t48_995.cif')
p_2 = PandasMmcif().read_mmcif('./data/1t49_995.cif')
r = PandasMmcif.rmsd(p_1.df['ATOM'], p_2.df['ATOM'], s='heavy')
print('RMSD: %.4f Angstrom' % r)
Explanation: Similarly, we can compute the RMSD between 2 related protein structures:
The hydrogen-free RMSD:
End of explanation
p_1 = PandasMmcif().read_mmcif('./data/1t48_995.cif')
p_2 = PandasMmcif().read_mmcif('./data/1t49_995.cif')
r = PandasMmcif.rmsd(p_1.df['ATOM'], p_2.df['ATOM'], s='main chain')
print('RMSD: %.4f Angstrom' % r)
Explanation: Or the RMSD between the main chains only:
End of explanation
p_1 = PandasMmcif().read_mmcif('./data/3eiy.cif')
reference_point = (9.362, 41.410, 10.542)
distances = p_1.distance(xyz=reference_point, records=('ATOM',))
Explanation: <br>
Filtering PDBs by Distance
We can use the distance method to compute the distance between each atom (or a subset of atoms) in our data frame and a three-dimensional reference point. For example:
End of explanation
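For intuition, the same distances can be computed by hand from the coordinate columns documented earlier; a minimal NumPy sketch:
import numpy as np
xyz = p_1.df['ATOM'][['Cartn_x', 'Cartn_y', 'Cartn_z']].values
# Euclidean distance of every atom to the reference point (sketch)
manual_distances = np.sqrt(((xyz - np.array(reference_point))**2).sum(axis=1))
manual_distances[:5]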
distances.head()
Explanation: [File link: 3eiy.cif]
The distance method returns a Pandas Series object:
End of explanation
all_within_7A = p_1.df['ATOM'][distances < 7.0]
all_within_7A.tail()
Explanation: And we can use this Series object, for instance, to select certain atoms in our DataFrame that fall within a desired distance threshold. For example, let's select all atoms that are within 7A of our reference point:
End of explanation
from biopandas.mmcif import PandasMmcif
pmmcif = PandasMmcif().fetch_mmcif('5mtn')
sequence = pmmcif.amino3to1()
sequence.tail()
Explanation: Visualized in PyMOL, this subset (yellow surface) would look as follows:
Converting Amino Acid codes from 3- to 1-letter codes
Residues in the auth_comp_id field can be converted into 1-letter amino acid codes, which may be useful for further sequence analysis, for example, pair-wise or multiple sequence alignments:
End of explanation
sequence_list = list(sequence.loc[sequence['auth_asym_id'] == 'A', 'auth_comp_id'])
sequence_list[-5:] # last 5 residues of chain A
Explanation: As shown above, the amino3to1 method returns a DataFrame containing the auth_asym_id (chain ID) and auth_comp_id (residue name) of the translated 1-letter amino acids. If you like to work with the sequence as a Python list of string characters, you could do the following:
End of explanation
''.join(sequence.loc[sequence['auth_asym_id'] == 'A', 'auth_comp_id'])
Explanation: And if you prefer to work with the sequence as a string, you can use the join method:
End of explanation
for chain_id in sequence['auth_asym_id'].unique():
print('\nChain ID: %s' % chain_id)
print(''.join(sequence.loc[sequence['auth_asym_id'] == chain_id, 'auth_comp_id']))
Explanation: To iterate over the sequences of multi-chain proteins, you can use the unique method as shown below:
End of explanation |
14,943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Barotropic Model
Here will will use pyqg to reproduce the results of the paper
Step1: McWilliams performed freely-evolving 2D turbulence ($R_d = \infty$, $\beta =0$) experiments on a $2\pi\times 2\pi$ periodic box.
Step2: Initial condition
The initial condition is random, with a prescribed spectrum
$$
|\hat{\psi}|^2 = A \,\kappa^{-1}\left[1 + \left(\frac{\kappa}{6}\right)^4\right]^{-1}\,,
$$
where $\kappa$ is the wavenumber magnitude. The constant A is determined so that the initial energy is $KE = 0.5$.
Step3: Runing the model
Here we demonstrate how to use the run_with_snapshots feature to periodically stop the model and perform some action (in this case, visualization).
Step4: The genius of McWilliams (1984) was that he showed that the initial random vorticity field organizes itself into strong coherent vortices. This is true in significant part of the parameter space. This was previously suspected but unproven, mainly because people did not have computer resources to run the simulation long enough. Thirty years later we can perform such simulations in a couple of minutes on a laptop!
Also, note that the energy is nearly conserved, as it should be, and this is a nice test of the model.
Plotting spectra | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pyqg
Explanation: Barotropic Model
Here we will use pyqg to reproduce the results of the paper: <br />
J. C. McWilliams (1984). The emergence of isolated coherent vortices in turbulent flow. Journal of Fluid Mechanics, 146, pp 21-43 doi:10.1017/S0022112084001750
End of explanation
# create the model object
m = pyqg.BTModel(L=2.*np.pi, nx=256,
beta=0., H=1., rek=0., rd=None,
tmax=40, dt=0.001, taveint=1,
ntd=4)
# in this example we used ntd=4, four threads
# if your machine has more (or fewer) cores available, you could try changing it
Explanation: McWilliams performed freely-evolving 2D turbulence ($R_d = \infty$, $\beta =0$) experiments on a $2\pi\times 2\pi$ periodic box.
End of explanation
# generate McWilliams 84 IC condition
fk = m.wv != 0
ckappa = np.zeros_like(m.wv2)
ckappa[fk] = np.sqrt( m.wv2[fk]*(1. + (m.wv2[fk]/36.)**2) )**-1
nhx,nhy = m.wv2.shape
Pi_hat = np.random.randn(nhx,nhy)*ckappa +1j*np.random.randn(nhx,nhy)*ckappa
Pi = m.ifft( Pi_hat[np.newaxis,:,:] )
Pi = Pi - Pi.mean()
Pi_hat = m.fft( Pi )
KEaux = m.spec_var( m.wv*Pi_hat )
pih = ( Pi_hat/np.sqrt(KEaux) )
qih = -m.wv2*pih
qi = m.ifft(qih)
# initialize the model with that initial condition
m.set_q(qi)
# define a quick function for plotting and visualize the initial condition
def plot_q(m, qmax=40):
fig, ax = plt.subplots()
pc = ax.pcolormesh(m.x,m.y,m.q.squeeze(), cmap='RdBu_r')
pc.set_clim([-qmax, qmax])
ax.set_xlim([0, 2*np.pi])
ax.set_ylim([0, 2*np.pi]);
ax.set_aspect(1)
plt.colorbar(pc)
plt.title('Time = %g' % m.t)
plt.show()
plot_q(m)
Explanation: Initial condition
The initial condition is random, with a prescribed spectrum
$$
|\hat{\psi}|^2 = A \,\kappa^{-1}\left[1 + \left(\frac{\kappa}{6}\right)^4\right]^{-1}\,,
$$
where $\kappa$ is the wavenumber magnitude. The constant A is determined so that the initial energy is $KE = 0.5$.
End of explanation
for _ in m.run_with_snapshots(tsnapstart=0, tsnapint=10):
plot_q(m)
Explanation: Running the model
Here we demonstrate how to use the run_with_snapshots feature to periodically stop the model and perform some action (in this case, visualization).
End of explanation
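The same mechanism works for any per-snapshot action, not just plotting. The commented sketch below would simply print the model time at each snapshot; it is left commented out because it assumes a freshly initialized model, whereas the run above has already advanced to tmax:
# sketch: print the model time at each snapshot instead of plotting
# for _ in m.run_with_snapshots(tsnapstart=0, tsnapint=10):
#     print('model time:', m.t)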
energy = m.get_diagnostic('KEspec')
enstrophy = m.get_diagnostic('Ensspec')
# this makes it easy to calculate an isotropic spectrum
from pyqg import diagnostic_tools as tools
kr, energy_iso = tools.calc_ispec(m,energy.squeeze())
_, enstrophy_iso = tools.calc_ispec(m,enstrophy.squeeze())
ks = np.array([3.,80])
es = 5*ks**-4
plt.loglog(kr,energy_iso)
plt.loglog(ks,es,'k--')
plt.text(2.5,.0001,r'$k^{-4}$',fontsize=20)
plt.ylim(1.e-10,1.e0)
plt.xlabel('wavenumber')
plt.title('Energy Spectrum')
ks = np.array([3.,80])
es = 5*ks**(-5./3)
plt.loglog(kr,enstrophy_iso)
plt.loglog(ks,es,'k--')
plt.text(5.5,.01,r'$k^{-5/3}$',fontsize=20)
plt.ylim(1.e-3,1.e0)
plt.xlabel('wavenumber')
plt.title('Enstrophy Spectrum')
Explanation: The genius of McWilliams (1984) was that he showed that the initial random vorticity field organizes itself into strong coherent vortices. This is true in a significant part of the parameter space. This was previously suspected but unproven, mainly because people did not have the computer resources to run the simulation long enough. Thirty years later we can perform such simulations in a couple of minutes on a laptop!
Also, note that the energy is nearly conserved, as it should be, and this is a nice test of the model.
Plotting spectra
End of explanation |
14,944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-1', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CSIRO-BOM
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:56
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
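For illustration only, a filled-in call follows the documented signature; the name and email below are hypothetical placeholders, not real values, so the line is left commented:
# Hypothetical example (replace with the real author details):
# DOC.set_author("Jane Doe", "jane.doe@example.org")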
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
14,945 | Given the following text description, write Python code to implement the functionality described.
Description:
Given a grid with N rows and N columns (N >= 2) and a positive integer k,
each cell of the grid contains a value. Every integer in the range [1, N * N]
inclusive appears exactly once on the cells of the grid.
You have to find the minimum path of length k in the grid. You can start
from any cell, and in each step you can move to any of the neighbor cells,
in other words, you can go to cells which share an edge with you current
cell.
Please note that a path of length k means visiting exactly k cells (not
necessarily distinct).
You CANNOT go off the grid.
A path A (of length k) is considered less than a path B (of length k) if
after making the ordered lists of the values on the cells that A and B go
through (let's call them lst_A and lst_B), lst_A is lexicographically less
than lst_B, in other words, there exist an integer index i (1 <= i <= k)
such that lst_A[i] < lst_B[i] and for any j (1 <= j < i) we have
lst_A[j] = lst_B[j].
It is guaranteed that the answer is unique.
Return an ordered list of the values on the cells that the minimum path goes through.
Examples:
Input: grid = [ [1,2,3], [4,5,6], [7,8,9]], k = 3
Output: [1, 2, 1]
Input: grid = [ [5,9,3], [4,1,6], [7,8,2]], k = 1
Output: [1]
| Python Code:
def minPath(grid, k):
n = len(grid)
val = n * n + 1
for i in range(n):
for j in range(n):
if grid[i][j] == 1:
temp = []
if i != 0:
temp.append(grid[i - 1][j])
if j != 0:
temp.append(grid[i][j - 1])
if i != n - 1:
temp.append(grid[i + 1][j])
if j != n - 1:
temp.append(grid[i][j + 1])
val = min(temp)
ans = []
for i in range(k):
if i % 2 == 0:
ans.append(1)
else:
ans.append(val)
return ans |
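A quick sanity check against the two examples from the problem statement (the function finds the smallest neighbor of the cell containing 1 and alternates it with 1):
print(minPath([[1, 2, 3], [4, 5, 6], [7, 8, 9]], 3))  # expected [1, 2, 1]
print(minPath([[5, 9, 3], [4, 1, 6], [7, 8, 2]], 1))  # expected [1]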
14,946 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Chapter 22
Copyright 2017 Allen Downey
License
Step1: Vectors
A Vector object represents a vector quantity. In the context of mechanics, vector quantities include position, velocity, acceleration, and force, all of which might be in 2D or 3D.
You can define a Vector object without units, but if it represents a physical quantity, you will often want to attach units to it.
I'll start by grabbing the units we'll need.
Step2: Here's a two dimensional Vector in meters.
Step3: We can access the elements by name.
Step4: The magnitude is the length of the vector.
Step5: The angle is the number of radians between the vector and the positive x axis.
Step6: If we make another Vector with the same units,
Step7: We can add Vector objects like this
Step8: And subtract like this
Step9: We can compute the Euclidean distance between two Vectors.
Step10: And the difference in angle
Step11: If we are given the magnitude and angle of a vector, what we have is the representation of the vector in polar coordinates.
Step12: We can use pol2cart to convert from polar to Cartesian coordinates, and then use the Cartesian coordinates to make a Vector object.
In this example, the Vector we get should have the same components as A.
Step13: Another way to represent the direction of A is a unit vector, which is a vector with magnitude 1 that points in the same direction as A. You can compute a unit vector by dividing a vector by its magnitude
Step14: Or by using the hat function, so named because unit vectors are conventionally decorated with a hat, like this
Step15: Exercise
Step16: Degrees and radians
Pint provides units to represent degree and radians.
Step17: If you have an angle in degrees,
Step18: You can convert to radians.
Step19: If it's already in radians, to does the right thing.
Step20: You can also convert from radians to degrees.
Step21: As an alternative, you can use np.deg2rad, which works with Pint quantities, but it also works with simple numbers and NumPy arrays
Step22: Exercise
Step23: Baseball
Here's a Params object that contains parameters for the flight of a baseball.
Step25: And here's the function that uses the Params object to make a System object.
Step26: Here's how we use it
Step28: Here's a function that computes drag force using vectors
Step29: We can test it like this.
Step31: Here's the slope function that computes acceleration due to gravity and drag.
Step32: Always test the slope function with the initial conditions.
Step34: We can use an event function to stop the simulation when the ball hits the ground
Step35: Now we can call run_ode_solver
Step36: The final label tells us the flight time.
Step37: The final value of x tells us how far the ball landed from home plate
Step38: Visualizing the results
The simplest way to visualize the results is to plot x and y as functions of time.
Step39: We can plot the velocities the same way.
Step40: The x velocity slows down due to drag.
The y velocity drops quickly while drag and gravity are in the same direction, then more slowly after the ball starts to fall.
Another way to visualize the results is to plot y versus x. The result is the trajectory of the ball through its plane of motion.
Step41: Animation
One of the best ways to visualize the results of a physical model is animation. If there are problems with the model, animation can make them apparent.
The ModSimPy library provides animate, which takes as parameters a TimeSeries and a draw function.
The draw function should take as parameters a State object and the time. It should draw a single frame of the animation.
Inside the draw function, you almost always have to call set_xlim and set_ylim. Otherwise matplotlib auto-scales the axes, which is usually not what you want.
Step42: Exercise
Step43: A ModSimVector is a specialized kind of Pint Quantity.
Step44: There's one gotcha you might run into with Vectors and Quantities. If you multiply a ModSimVector and a Quantity, you get a ModSimVector
Step45: But if you multiply a Quantity and a Vector, you get a Quantity
Step46: With a ModSimVector you can get the coordinates using dot notation, as well as mag, mag2, and angle
Step47: With a Quantity, you can't. But you can use indexing to get the coordinates
Step48: And you can use vector functions to get the magnitude and angle.
Step49: And often you can avoid the whole issue by doing the multiplication with the ModSimVector on the left.
Exercises
Exercise
Step50: Exercise
Step51: Exercise
Step53: Modify the model to include the dependence of C_d on velocity, and see how much it affects the results. Hint | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
Explanation: Modeling and Simulation in Python
Chapter 22
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
Explanation: Vectors
A Vector object represents a vector quantity. In the context of mechanics, vector quantities include position, velocity, acceleration, and force, all of which might be in 2D or 3D.
You can define a Vector object without units, but if it represents a physical quantity, you will often want to attach units to it.
I'll start by grabbing the units we'll need.
End of explanation
A = Vector(3, 4) * m
Explanation: Here's a two dimensional Vector in meters.
End of explanation
A.x
A.y
Explanation: We can access the elements by name.
End of explanation
A.mag
Explanation: The magnitude is the length of the vector.
End of explanation
A.angle
Explanation: The angle is the number of radians between the vector and the positive x axis.
End of explanation
B = Vector(1, 2) * m
Explanation: If we make another Vector with the same units,
End of explanation
A + B
Explanation: We can add Vector objects like this
End of explanation
A - B
Explanation: And subtract like this:
End of explanation
A.dist(B)
Explanation: We can compute the Euclidean distance between two Vectors.
End of explanation
A.diff_angle(B)
Explanation: And the difference in angle
End of explanation
mag = A.mag
angle = A.angle
Explanation: If we are given the magnitude and angle of a vector, what we have is the representation of the vector in polar coordinates.
End of explanation
x, y = pol2cart(angle, mag)
Vector(x, y)
Explanation: We can use pol2cart to convert from polar to Cartesian coordinates, and then use the Cartesian coordinates to make a Vector object.
In this example, the Vector we get should have the same components as A.
End of explanation
A / A.mag
Explanation: Another way to represent the direction of A is a unit vector, which is a vector with magnitude 1 that points in the same direction as A. You can compute a unit vector by dividing a vector by its magnitude:
End of explanation
A.hat()
Explanation: Or by using the hat function, so named because unit vectors are conventionally decorated with a hat, like this: $\hat{A}$:
End of explanation
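As a quick check (a minimal sketch), the resulting unit vector should have magnitude 1:
A.hat().mag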
# Solution
a_grav = Vector(0, -9.8) * m / s**2
Explanation: Exercise: Create a Vector named a_grav that represents acceleration due to gravity, with x component 0 and y component $-9.8$ meters / second$^2$.
End of explanation
degree = UNITS.degree
radian = UNITS.radian
Explanation: Degrees and radians
Pint provides units to represent degree and radians.
End of explanation
angle = 45 * degree
angle
Explanation: If you have an angle in degrees,
End of explanation
angle_rad = angle.to(radian)
Explanation: You can convert to radians.
End of explanation
angle_rad.to(radian)
Explanation: If it's already in radians, to does the right thing.
End of explanation
angle_rad.to(degree)
Explanation: You can also convert from radians to degrees.
End of explanation
np.deg2rad(angle)
Explanation: As an alternative, you can use np.deg2rad, which works with Pint quantities, but it also works with simple numbers and NumPy arrays:
End of explanation
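For example, a minimal sketch with a plain number and a NumPy array:
np.deg2rad(45), np.deg2rad(np.array([0, 90, 180]))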
# Solution
N = UNITS.newton
mag = 0.5 * N
angle = 45 * degree
theta = angle.to(radian)
x, y = pol2cart(theta, mag)
force = Vector(x, y)
mass = 0.3 * kg
a_force = force / mass
a_force
# Solution
a_force + a_grav
Explanation: Exercise: Create a Vector named a_force that represents acceleration due to a force of 0.5 Newton applied to an object with mass 0.3 kilograms, in a direction 45 degrees up from the positive x-axis.
Add a_force to a_grav from the previous exercise. If that addition succeeds, that means that the units are compatible. Confirm that the total acceleration seems to make sense.
End of explanation
t_end = 10 * s
dt = t_end / 100
params = Params(x = 0 * m,
y = 1 * m,
g = 9.8 * m/s**2,
mass = 145e-3 * kg,
diameter = 73e-3 * m,
rho = 1.2 * kg/m**3,
C_d = 0.33,
angle = 45 * degree,
velocity = 40 * m / s,
t_end=t_end, dt=dt)
Explanation: Baseball
Here's a Params object that contains parameters for the flight of a baseball.
End of explanation
def make_system(params):
Make a system object.
params: Params object with angle, velocity, x, y,
diameter, g, mass, rho, C_d, t_end, and dt
returns: System object
angle, velocity = params.angle, params.velocity
# convert angle to radians
theta = np.deg2rad(angle)
# compute x and y components of velocity
vx, vy = pol2cart(theta, velocity)
# make the initial state
R = Vector(params.x, params.y)
V = Vector(vx, vy)
init = State(R=R, V=V)
# compute area from diameter
diameter = params.diameter
area = np.pi * (diameter/2)**2
return System(params, init=init, area=area)
Explanation: And here's the function that uses the Params object to make a System object.
End of explanation
system = make_system(params)
Explanation: Here's how we use it:
End of explanation
def drag_force(V, system):
Computes drag force in the opposite direction of `v`.
V: velocity Vector
system: System object with rho, C_d, area
returns: Vector drag force
rho, C_d, area = system.rho, system.C_d, system.area
mag = rho * V.mag**2 * C_d * area / 2
direction = -V.hat()
f_drag = direction * mag
return f_drag
Explanation: Here's a function that computes drag force using vectors:
End of explanation
V_test = Vector(10, 10) * m/s
drag_force(V_test, system)
Explanation: We can test it like this.
End of explanation
def slope_func(state, t, system):
Computes derivatives of the state variables.
state: State (x, y, x velocity, y velocity)
t: time
system: System object with g, rho, C_d, area, mass
returns: sequence (vx, vy, ax, ay)
R, V = state
mass, g = system.mass, system.g
a_drag = drag_force(V, system) / mass
a_grav = Vector(0, -g)
A = a_grav + a_drag
return V, A
Explanation: Here's the slope function that computes acceleration due to gravity and drag.
End of explanation
slope_func(system.init, 0, system)
Explanation: Always test the slope function with the initial conditions.
End of explanation
def event_func(state, t, system):
Stop when the y coordinate is 0.
state: State object
t: time
system: System object
returns: y coordinate
R, V = state
return R.y
event_func(system.init, 0, system)
Explanation: We can use an event function to stop the simulation when the ball hits the ground:
End of explanation
results, details = run_ode_solver(system, slope_func, events=event_func)
details
Explanation: Now we can call run_ode_solver
End of explanation
flight_time = get_last_label(results) * s
Explanation: The final label tells us the flight time.
End of explanation
R_final = get_last_value(results.R)
x_dist = R_final.x
Explanation: The final value of x tells us how far the ball landed from home plate:
End of explanation
xs = results.R.extract('x')
ys = results.R.extract('y')
xs.plot()
ys.plot()
decorate(xlabel='Time (s)',
ylabel='Position (m)')
savefig('figs/chap22-fig01.pdf')
Explanation: Visualizing the results
The simplest way to visualize the results is to plot x and y as functions of time.
End of explanation
vx = results.V.extract('x')
vy = results.V.extract('y')
vx.plot(label='vx')
vy.plot(label='vy')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
Explanation: We can plot the velocities the same way.
End of explanation
def plot_trajectory(results):
xs = results.R.extract('x')
ys = results.R.extract('y')
plot(xs, ys, color='C2', label='trajectory')
decorate(xlabel='x position (m)',
ylabel='y position (m)')
plot_trajectory(results)
savefig('figs/chap22-fig02.pdf')
Explanation: The x velocity slows down due to drag.
The y velocity drops quickly while drag and gravity are in the same direction, then more slowly after the ball starts to fall.
Another way to visualize the results is to plot y versus x. The result is the trajectory of the ball through its plane of motion.
End of explanation
xs = results.R.extract('x')
ys = results.R.extract('y')
def draw_func(state, t):
set_xlim(xs)
set_ylim(ys)
x, y = state.R
plot(x, y, 'bo')
decorate(xlabel='x position (m)',
ylabel='y position (m)')
animate(results, draw_func)
Explanation: Animation
One of the best ways to visualize the results of a physical model is animation. If there are problems with the model, animation can make them apparent.
The ModSimPy library provides animate, which takes as parameters a TimeSeries and a draw function.
The draw function should take as parameters a State object and the time. It should draw a single frame of the animation.
Inside the draw function, you almost always have to call set_xlim and set_ylim. Otherwise matplotlib auto-scales the axes, which is usually not what you want.
End of explanation
V = Vector(3, 4)
type(V)
Explanation: Exercise: Delete the lines that set the x and y axes (or comment them out) and see what the animation does.
Under the hood
Vector is a function that returns a ModSimVector object.
End of explanation
isinstance(V, Quantity)
Explanation: A ModSimVector is a specialized kind of Pint Quantity.
End of explanation
V1 = V * m
type(V1)
Explanation: There's one gotcha you might run into with Vectors and Quantities. If you multiply a ModSimVector and a Quantity, you get a ModSimVector:
End of explanation
V2 = m * V
type(V2)
Explanation: But if you multiply a Quantity and a Vector, you get a Quantity:
End of explanation
V1.x, V1.y, V1.mag, V1.angle
Explanation: With a ModSimVector you can get the coordinates using dot notation, as well as mag, mag2, and angle:
End of explanation
V2[0], V2[1]
Explanation: With a Quantity, you can't. But you can use indexing to get the coordinates:
End of explanation
vector_mag(V2), vector_angle(V2)
Explanation: And you can use vector functions to get the magnitude and angle.
End of explanation
# Hint
system_no_drag = System(system, C_d=0)
# Solution
results_no_drag, details = run_ode_solver(system_no_drag, slope_func, events=event_func)
details
# Solution
plot_trajectory(results)
plot_trajectory(results_no_drag)
# Solution
x_dist = get_last_value(results.R).x
# Solution
xdist_no_drag = get_last_value(results_no_drag.R).x
# Solution
xdist_no_drag - x_dist
Explanation: And often you can avoid the whole issue by doing the multiplication with the ModSimVector on the left.
Exercises
Exercise: Run the simulation with and without air resistance. How wrong would we be if we ignored drag?
End of explanation
# Hint
system2 = System(system, rho=1.0*kg/m**3)
# Solution
results2, details2 = run_ode_solver(system2, slope_func, events=event_func)
x = results2.R.extract('x')
x_dist2 = get_last_value(x)
# Solution
x_dist2 - x_dist
Explanation: Exercise: The baseball stadium in Denver, Colorado is 1,580 meters above sea level, where the density of air is about 1.0 kg / meter$^3$. How much farther would a ball hit with the same velocity and launch angle travel?
End of explanation
baseball_drag = pd.read_csv('data/baseball_drag.csv')
mph = Quantity(baseball_drag['Velocity in mph'], UNITS.mph)
mps = mph.to(m/s)
baseball_drag.index = magnitude(mps)
baseball_drag.index.name = 'Velocity in meters per second'
baseball_drag
Explanation: Exercise: The model so far is based on the assumption that coefficient of drag does not depend on velocity, but in reality it does. The following figure, from Adair, The Physics of Baseball, shows coefficient of drag as a function of velocity.
<img src="data/baseball_drag.png" width="400">
I used an online graph digitizer to extract the data and save it in a CSV file. Here's how we can read it:
End of explanation
# Solution
drag_interp = interpolate(baseball_drag['Drag coefficient'])
vs = linspace(0, 60)
cds = drag_interp(vs)
plot(vs, cds)
decorate(xlabel='Velocity (m/s)', ylabel='C_d')
# Solution
def drag_force(V, system):
Computes drag force in the opposite direction of `v`.
v: velocity
system: System object with rho, C_d, area
returns: Vector drag force
rho, C_d, area = system.rho, system.C_d, system.area
C_d = drag_interp(V.mag)
mag = -rho * V.mag**2 * C_d * area / 2
direction = V.hat()
f_drag = direction * mag
return f_drag
C_d = drag_interp(43 * m / s)
# Solution
system = System(system, drag_interp=drag_interp)
V = Vector(30, 30) * m/s
f_drag = drag_force(V, system)
# Solution
slope_func(system.init, 0, system)
# Solution
results, details = run_ode_solver(system, slope_func, events=event_func)
details
# Solution
results.tail()
# Solution
x = results.R.extract('x')
x_dist3 = get_last_value(x)
# Solution
x_dist - x_dist3
# Solution
# Here are the highest and lowest speeds
vs = results.V.extract('mag')
interval = min(vs), max(vs)
# Solution
# And here are the drag coefficients at the highest and lowest speed.
# They are substantially different.
drag_interp(interval)
Explanation: Modify the model to include the dependence of C_d on velocity, and see how much it affects the results. Hint: use interpolate.
End of explanation |
14,947 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nansat
Step1: Open file with Nansat
Step2: Read information ABOUT the data (METADATA)
Step3: Read the actual DATA
Step4: Check what kind of data we have
Step5: Find where the image is taken | Python Code:
import os
import shutil
import nansat
idir = os.path.join(os.path.dirname(nansat.__file__), 'tests', 'data/')
Explanation: Nansat: First Steps
Overview
The NANSAT package contains several classes:
Nansat - open and read satellite data
Domain - define grid for the region of interest
Figure - create raster images (PNG, TIF)
NSR - define spatial reference (SR)
Copy sample data
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
from nansat import Nansat
n = Nansat(idir+'gcps.tif')
Explanation: Open file with Nansat
End of explanation
print(n)
Explanation: Read information ABOUT the data (METADATA)
End of explanation
b1 = n[1]
Explanation: Read the actual DATA
End of explanation
%whos
plt.imshow(b1);plt.colorbar()
plt.show()
Explanation: Check what kind of data we have
End of explanation
n.write_figure('map.png', pltshow=True)
Explanation: Find where the image is taken
End of explanation |
14,948 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Selection via Validation
Step1: Cross-validation
Step2: We can use different splitting strategies, such as random splitting (There exists many different cross-validation strategies in scikit-learn. They are often useful to take in account non iid datasets)
Step3: Hyperparameter optimization with cross-validation | Python Code:
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn import model_selection
from sklearn import metrics
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y,
test_size=0.25, random_state=0)
for Model in [GaussianNB, KNeighborsClassifier, LinearSVC]:
clf = Model().fit(X_train, y_train)
y_pred = clf.predict(X_test)
print('%s: %s' %
(Model.__name__, metrics.f1_score(y_test, y_pred, average="macro")))
Explanation: Model Selection via Validation
End of explanation
clf = KNeighborsClassifier()
from sklearn.model_selection import cross_val_score
cross_val_score(clf, X, y, cv=5)
Explanation: Cross-validation
End of explanation
from sklearn.model_selection import ShuffleSplit
cv = ShuffleSplit(n_splits=5)
cross_val_score(clf, X, y, cv=cv)
Explanation: We can use different splitting strategies, such as random splitting. (There exist many different cross-validation strategies in scikit-learn; they are often useful for taking non-iid datasets into account.)
End of explanation
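# A minimal sketch of one more strategy, reusing the clf, X, y defined above:
# StratifiedKFold keeps the class proportions of y in every fold, which helps
# when the classes are imbalanced.
from sklearn.model_selection import StratifiedKFold
cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))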
from sklearn.datasets import load_diabetes
data = load_diabetes()
X, y = data.data, data.target
print(X.shape)
from sklearn.linear_model import Ridge, Lasso
for Model in [Ridge, Lasso]:
model = Model()
print('%s: %s' % (Model.__name__, cross_val_score(model, X, y).mean()))
Explanation: Hyperparameter optimization with cross-validation
End of explanation |
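# The Ridge/Lasso comparison above uses default hyperparameters. A minimal sketch of an
# actual search over alpha with GridSearchCV, reusing the diabetes X, y loaded above
# (the candidate grid is illustrative, not tuned):
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(Ridge(), param_grid={'alpha': [0.01, 0.1, 1.0, 10.0]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)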
14,949 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graphistry Tutorial
Step1: Connect to Graphistry + Test
Step2: Connect to TigerGraph and Test
Step3: Query Tigergraph
Step4: Visualize result of TigerGraph query
Step5: In-Tool UI Walkthrough
1. Clustering, Pan/Zoom, Data Table + Data Brush
Open Visual guide in a separate tab
Toggle visual clustering
Step6: Adding Graphs
Step7: Custom Nodes and Attributes + Saving Sessions | Python Code:
TIGER_CONFIG = {
'fqdn': 'http://MY_TIGER_SERVER:9000'
}
Explanation: Graphistry Tutorial: Notebooks + TigerGraph via raw REST calls
Connect to Graphistry, TigerGraph
Load data from TigerGraph into a Pandas Dataframes
Plot in Graphistry as a Graph and Hypergraph
Explore in Graphistry
Advanced notebooks
Configuration
End of explanation
#!pip install graphistry
import pandas as pd
import requests
### COMMON ISSUES: wrong server, wrong key, wrong protocol, network notebook->graphistry firewall permissions
import graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
graphistry.__version__
### EXPECTED RESULT: Visualization of a curved triangle
### COMMON ISSUES: Blank box as HTTPS not configured on Graphistry server so browser disallows iframe. Try plot(render=False)
g = graphistry\
.edges(pd.DataFrame({'s': [0,1,2], 'd': [1,2,0], 'a': ['quick', 'brown', 'fox'] }))\
.bind(source='s', destination='d')
g.plot() #g.plot(render=False)
Explanation: Connect to Graphistry + Test
End of explanation
### EXPECTED RESULT: {'GET /statistics': ...}
### COMMON ISSUES: returns '{}' (may need to run a few times); wrong fqdn; firewall issues; ...
requests.get(TIGER_CONFIG['fqdn'] + '/statistics?seconds=60').json()
Explanation: Connect to TigerGraph and Test
End of explanation
# string -> dict
def query_raw(query_string):
url = TIGER_CONFIG['fqdn'] + "/query/" + query_string
r = requests.get(url)
return r.json()
def flatten (lst_of_lst):
try:
if type(lst_of_lst[0]) == list:
return [item for sublist in lst_of_lst for item in sublist]
else:
return lst_of_lst
except:
print('fail', lst_of_lst)
return lst_of_lst
#str * dict -> dict
def named_edge_to_record(name, edge):
record = {k: edge[k] for k in edge.keys() if not (type(edge[k]) == dict) }
record['type'] = name
nested = [k for k in edge.keys() if type(edge[k]) == dict]
if len(nested) == 1:
for k in edge[nested[0]].keys():
record[k] = edge[nested[0]][k]
else:
        for prefix in nested:
            for k in edge[prefix].keys():
                record[prefix + "_" + k] = edge[prefix][k]
return record
def query(query_string):
results = query_raw(query_string)['results']
out = {}
for o in results:
for k in o.keys():
if type(o[k]) == list:
out[k] = flatten(o[k])
out = flatten([[named_edge_to_record(k,v) for v in out[k]] for k in out.keys()])
print('# results', len(out))
return pd.DataFrame(out)
def graph_edges(edges):
return graphistry.bind(source='from_id', destination='to_id').edges(edges)
df = query("connection_mining?A=1&B=10&k=1000")
print('rows: ', len(df))
df.sample(3)
Explanation: Query Tigergraph
End of explanation
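# A small sanity check of the flattener defined above on a hypothetical edge dict
# (the key names are made up for illustration): nested attributes get merged into the record.
named_edge_to_record('phone_call', {'from_id': 'A', 'to_id': 'B', 'attributes': {'duration': 42}})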
### EXPECTED RESULT: GRAPH VISUALIZATION
### COMMON ISSUES: try inspecting query_raw('connection_mining?A=1&B=10&k=2')
graph_edges(query("connection_mining?A=1&B=10&k=1000")).plot()
Explanation: Visualize result of TigerGraph query
End of explanation
df = pd.read_csv('https://github.com/graphistry/pygraphistry/raw/master/demos/data/transactions.csv')
df.sample(10)
hg = graphistry.hypergraph(df[:1000], entity_types=['Source', 'Destination', 'Transaction ID'])
print('Hypergraph parts', hg.keys())
hg['graph'].plot()
help(graphistry.hypergraph)
Explanation: In-Tool UI Walkthrough
1. Clustering, Pan/Zoom, Data Table + Data Brush
Open Visual guide in a separate tab
Toggle visual clustering: Click to start, click to stop. (Edges invisible during clustering.)
Pan/zoom: Just like Google maps
Autocenter button when lost
Click node or edge to see details.
Data Table with Nodes, Edges, (Events) tabs
Use Data brush mode to click-drag to select region and filter data table
Challenge: What node has the most edges? What do its edges have in common?
2. Histograms and Using data for sizes & colors
For point:degree histogram on bottom right, press each button and see what it does
Set node size based on attribute. Then, Scene settings -> Point size slider.
Make histogram log scale in case of an extreme distribution
Pick any color. If UI doesn't update, try running clustering for one tick.
Add a histogram for point:_title
Try coloring via a categorical vs gradient : What is the difference?
3. Filtering
Add histogram edge:from_type
Click-drag the degree histogram to filter for multiple bins
Open/close filter panel and toggle on/off the filter
Toggle cull isolated nodes to remove noisey nodes with no edges left
Click filter on histogram to remove
You can manually create SQL WHERE clauses here. filters -> edge:e_type -> edge:e_type ilike "%phone%"
Toggle visual clustering and then off when stablized
Challenge: How many distinct phone networks are there?
4. Data table
Search points, e.g., 135 area code
Export CSV (currently returns filtered as well)
Advanced Notebooks
Hypergraph
If you have a CSV and not a graph, hypergraphs are a quick way to analyze the data as a graph. They turn each entity into a node, and link them together if they are in the same row of the CSV. E.g., link together a phone and address. It does so indirectly -- it creates a node for the row, and connects the row to each entity mentioned.
Challenge: What was the last tainted transaction, and the amount on it?
End of explanation
df1 = query("connection_mining?A=1&B=10&k=1000").assign(data_source='query1')
df2 = query("connection_mining?A=1&B=12&k=1000").assign(data_source='query2')
edges2 = pd.concat([df1, df2], ignore_index=True)
graph_edges(edges2).plot()
Explanation: Adding Graphs
End of explanation
conn = query("connection_mining?A=1&B=10&k=1000")
froms = conn.rename(columns={'from_id': 'id', 'from_type': 'node_type'})[['id', 'node_type']]
tos = conn.rename(columns={'to_id': 'id', 'to_type': 'node_type'})[['id', 'node_type']]
nodes = pd.concat([froms, tos], ignore_index=True).drop_duplicates().dropna()
nodes.sample(3)
nodes['node_type'].unique()
#https://labs.graphistry.com/docs/docs/palette.html
type2color = {
'phone_call': 0,
'citizen': 1,
'bank_account': 2,
'phone_number': 3,
'bank_transfer_event': 4,
'hotel_room_event': 5
}
nodes['color'] = nodes['node_type'].apply(lambda type_str: type2color[type_str])
nodes.sample(3)
g = graphistry.bind(source='from_id', destination='to_id').edges(conn)
#updating colors
g = g.bind(node='id', point_color='color').nodes(nodes)
#saving sessions
g = g.settings(url_params={'workbook': 'my_workbook1'})
g.plot()
Explanation: Custom Nodes and Attributes + Saving Sessions
End of explanation |
14,950 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Theory and Practice of Visualization Exercise 1
Imports
Step1: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook. | Python Code:
from IPython.display import Image
Explanation: Theory and Practice of Visualization Exercise 1
Imports
End of explanation
# Add your filename and uncomment the following line:
Image(filename='TheoryAndPracticeEx01graph.png')
Explanation: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook.
End of explanation |
14,951 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deploying a scikit-learn model on Verta
Within Verta, a "Model" can be any arbitrary function
Step1: 0.1 Verta import and setup
Step2: 1. Model Training
1.1 Load training data
Step3: Define hyperparams
Step4: 1.3 Train/test code
Step5: 2. Register Model for deployment
Step6: 2.1 Register from the model object
If you are in the same file where you have the model object handy, use the code below to package the model
Step7: 2.2 (OR) Register a serialized version of the model using the VertaModelBase
Step8: 3. Deploy model to endpoint | Python Code:
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
Explanation: Deploying a scikit-learn model on Verta
Within Verta, a "Model" can be any arbitrary function: a traditional ML model (e.g., sklearn, PyTorch, TF, etc); a function (e.g., squaring a number, making a DB function etc.); or a mixture of the above (e.g., pre-processing code, a DB call, and then a model application.) See more here.
This notebook provides an example of how to deploy a scikit-learn model on Verta as a Verta Standard Model either via convenience functions or by extending VertaModelBase.
0. Imports
End of explanation
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
import os
# Ensure credentials are set up, if not, use below
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
# os.environ['VERTA_HOST'] =
from verta import Client
PROJECT_NAME = "Census"
EXPERIMENT_NAME = "sklearn"
client = Client(os.environ['VERTA_HOST'])
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
Explanation: 0.1 Verta import and setup
End of explanation
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
train_data_url = "http://s3.amazonaws.com/verta-starter/census-train.csv"
train_data_filename = wget.detect_filename(train_data_url)
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
test_data_url = "http://s3.amazonaws.com/verta-starter/census-test.csv"
test_data_filename = wget.detect_filename(test_data_url)
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
y_test = df_test.iloc[:, -1]
df_train.head()
Explanation: 1. Model Training
1.1 Load training data
End of explanation
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
Explanation: Define hyperparams
End of explanation
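# Quick look at what the itertools.product expansion above produced: each dict is one
# complete configuration (2 values of C x 1 solver x 2 values of max_iter = 4 runs).
print(len(hyperparam_sets))
hyperparam_sets[0]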
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
    # create and train model on the validation-split training data
    model = linear_model.LogisticRegression(**hyperparams)
    model.fit(X_val_train, y_val_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# NOTE: run_experiment() could also be defined in a module, and executed in parallel
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
Explanation: 1.3 Train/test code
End of explanation
registered_model = client.get_or_create_registered_model(
name="census-sklearn", labels=["tabular", "sklearn"])
Explanation: 2. Register Model for deployment
End of explanation
from verta.environment import Python
model_version_v1 = registered_model.create_standard_model_from_sklearn(
model,
environment=Python(requirements=["scikit-learn"]),
name="v1",
)
Explanation: 2.1 Register from the model object
If you are in the same file where you have the model object handy, use the code below to package the model
End of explanation
import cloudpickle
with open("model.pkl", "wb") as f:
cloudpickle.dump(model, f)
from verta.registry import VertaModelBase
class CensusIncomeClassifier(VertaModelBase):
def __init__(self, artifacts):
self.model = cloudpickle.load(open(artifacts["serialized_model"], "rb"))
def predict(self, batch_input):
results = []
for one_input in batch_input:
results.append(self.model.predict(one_input))
return results
artifacts_dict = {"serialized_model" : "model.pkl"}
clf = CensusIncomeClassifier(artifacts_dict)
clf.predict([X_test.values.tolist()[:5]])
model_version_v2 = registered_model.create_standard_model(
model_cls=CensusIncomeClassifier,
environment=Python(requirements=["scikit-learn"]),
artifacts=artifacts_dict,
name="v2"
)
Explanation: 2.2 (OR) Register a serialized version of the model using the VertaModelBase
End of explanation
census_endpoint = client.get_or_create_endpoint("census-model")
census_endpoint.update(model_version_v1, wait=True)
deployed_model = census_endpoint.get_deployed_model()
deployed_model.predict(X_test.values.tolist()[:5])
census_endpoint.update(model_version_v2, wait=True)
deployed_model = census_endpoint.get_deployed_model()
deployed_model.predict([X_test.values.tolist()[:5]])
Explanation: 3. Deploy model to endpoint
End of explanation |
14,952 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
python_subdict - Documentation
The markdown version of this document is here.
Installing
You can pip-install python_subdict in your environment by typing the following code on your shell
Step1: If we need only the keys 'a' and 'd', we can do this
Step2: We can also specify 'subkeys' by using a dotted-syntax
Step3: The dotted-syntax can have any needed level of depth
Step4: Specifying invalid keys behavior
Let's consider the following dict from now on
Step5: By default, invalid keys passed to the extract_subdict function are ignored
Step6: However, by passing True to the strict parameter of the function, invalid keys will raise a KeyError exception
Step7: Successive extractions
Extracting only 'name' and 'albums' from the person dict
Step8: Now, extracting only the 'name' of each album
Step9: The result is the following | Python Code:
d = {
'a': 'A',
'b': 'B',
'c': 'C',
'd': {
'x': 'D_X',
'y': 'D_Y',
'z': {
'I': 'D_Z_I',
'II': {
'1': 'D_Z_II_1',
'2': 'D_Z_II_2'
},
'III': 'D_Z_III'
}
}
}
Explanation: python_subdict - Documentation
The markdown version of this document is here.
Installing
You can pip-install python_subdict in your environment by typing the following code on your shell:
pip install subdict
Usage example
As an example, let's say that we have the following dict:
End of explanation
from subdict import extract_subdict # The main function of the library
from pprint import pprint # Just for a nice presentation here
pprint( extract_subdict(d, ['a', 'd']) )
Explanation: If we need only the keys 'a' and 'd', we can do this:
End of explanation
pprint( extract_subdict(d, ['a', 'd.x', 'd.z']) )
Explanation: We can also specify 'subkeys' by using a dotted-syntax:
End of explanation
pprint( extract_subdict(d, ['a', 'd.z.II.1']) )
Explanation: The dotted-syntax can have any needed level of depth:
End of explanation
person = {
'name': 'John Frusciante',
'birth': '1970-03-05',
'city': {
'name': 'New York City',
'state': {'name': 'New York', 'country': 'USA'}
},
'albums': [
{
'year': 2001,
'name': 'To Record Only Water For Ten Days',
'label': {
'name': 'Warner Bros Records',
'link': 'https://en.wikipedia.org/wiki/Warner_Bros._Records'
}
},
{
'year': 2004,
'name': 'Shadows Collide With People',
'label': {
'name': 'Warner Bros Records',
'link': 'https://en.wikipedia.org/wiki/Warner_Bros._Records'
}
},
{
'year': 2009,
'name': 'The Empyrean',
'label': {
'name': 'Record Collection',
'link': 'https://en.wikipedia.org/wiki/Record_Collection'
}
}
]
}
Explanation: Specifying invalid keys behavior
Let's consider the following dict from now on:
End of explanation
extract_subdict(person, ['name', 'birth', 'hair_color']) # 'hair_color' is invalid
Explanation: By default, invalid keys passed to the extract_subdict function are ignored:
End of explanation
extract_subdict(person, ['name', 'birth', 'hair_color'], strict=True)
Explanation: However, by passing True to the strict parameter of the function, invalid keys will raise a KeyError exception:
End of explanation
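# A small sketch of handling strict mode explicitly (same person dict as above):
# catching the KeyError lets the caller decide how to treat missing keys.
try:
    extract_subdict(person, ['name', 'hair_color'], strict=True)
except KeyError as e:
    print('missing key:', e)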
subdict = extract_subdict(person, ['name', 'albums'])
pprint(subdict)
Explanation: Successive extractions
Extracting only 'name' and 'albums' from the person dict:
End of explanation
for index in range(len(subdict['albums'])):
subdict['albums'][index] = extract_subdict(subdict['albums'][index], ['name'])
Explanation: Now, extracting only the 'name' of each album:
End of explanation
pprint(subdict)
Explanation: The result is the following:
End of explanation |
14,953 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Define the column names and read data from source file
Step1: Quick check on the summary statistics of the data set
Step2: There are only 5399 unique citations when there are 5391 judgments in the data set. This suggests that there may be duplicate entries.
Removing duplicate entries based on the citation
Step3: We drop the duplicate rows from the data set.
Step4: These are not duplicates but are instead different entries with identical citations.
Step5: A check on http
Step6: Cleaning the citation column
A quick way of checking whether any values in the citation column are in the wrong format is to split the values.
Step7: Check whether the neutral citation has the right format.
Step8: Judgment 15625 did not split correctly because of a missing space.
Step9: Judgment 17912 and 17913 appear to be erroneous records so they are dropped.
Step10: Quick check of the different components of the neutral citation.
Step11: We know that the highest number of cases in a year was 427 in 2010, so we shouldn't be seeing 4 digit numbers in the final part of the neutral citation.
Step12: Summary statistics of the data set
Step13: There are 5382 judgments in the dataset. The 'author' column has 888 missing values. The 'catchwords' column has 104 missing values
Check the datatypes of each column
Step14: Converting the date column from object to datetime
Step15: Converting the date column into datetime raises an error so we verify whether the dates are in the right format (day, month and year).
Step16: The day and month for the date column in 15157 are joined together. This is fixed by adding the missing space.
Step17: Attempting to convert the date column into datetime still raises the same error so we need to split the date values and check the day, month and year values.
Step18: The day values appear to be fine.
Step19: For the month values, we see that there are 4 instances where February was misspelled as Febuary.
Step20: For the year values, there are 6 instances where there is a trailing dot.
Step21: The errors are fixed by correcting the misspelled months and removing the trailing dots.
Step22: The conversion is successful this time.
Step23: Checking the values in the court column
Step24: There is one inconsistent value which shows 'CA/Court of Appeal' instead of the usual 'Court of Appeal'. This is corrected.
Step25: Summary statistics of the coram column
We check the coram column to ensure that the data is clean. As there is a sizeable number of missing values in the author column, it may be useful to fill in these missing values from the coram values where possible.
Step26: Find out what is the delimiter used when there are multiple judges in the coram column
Step27: It appears that there is no one consistent delimiter for all cases. The delimiter could be a comma, semicolon, or the word 'and'.
Cleaning the data in the coram column
The coram values are first split using the delimiters
Step28: If the split was done correctly, the judicial titles such as 'J', 'JA', or 'JC' should appear at the end of each judge name.
Step29: However, when an attempt is made to split the values into 'name' and 'title' format (using default space), a 'list index out of range' error is raised when accessing the value at index [-1], which means there are empty strings in the series.
Step30: There are 5 instances of empty strings, which means there are delimiters in unexpected places (beginning or end of string, or next to one another).
Step31: There is one instance of semicolons incorrectly appearing at the beginning and/or end of the judge's name. This is fixed by stripping the semicolons.
Step32: There are two instances of a comma appearing just before the word 'and'. This is fixed by replacing them with a semicolon.
Step33: There are two instances of a semicolon appearing just before the word 'and'. This is fixed by replacing them with a semicolon.
Step34: After fixing the incorrectly placed delimiters, the split is attempted again to extract the individual judge names and judicial titles.
Step35: The split is successful for the majority of cases. However, there are 23 instances where the judge's name and title were not split correctly because of the additional words '(as he then was)' added to the judge's name. A number of other names were also split incorrectly due to commas appearing within the names.
Step36: This is fixed by removing the words '(as he then was)' and the commas from the affected names.
Step37: Some of the names and judicial titles were not separated by a space. This is fixed by replacing the name with the correct separating space.
Step38: There is one instance where the words 'plaintiff' and 'defendant.' unexpectedly appear in the coram column.
Step39: It turns out that the values for the coram, counsel and catchwords columns were misaligned. This is fixed by shifting the values to their correct positions.
Step40: To fill in the missing values in the 'author' and 'coram' columns, the judge's name was obtained from http
Step41: The handful of remaining cases that did not split correctly into 'name' and 'title' format are eyeballed to identify the errors or inconsistencies. We first check whether the judicial titles are missing from the coram values by checking the end-of-string values.
Step42: The coram values listed above are either missing the judicial title or have it placed before the name. This is fixed by placing the judicial titles after the names.
Step43: It is noted that the occurrence of an incorrectly split 'Boon' did not turn up in this end-of-string check. We try checking for the other possibility of an incorrect split due to a misplaced semicolon after the word 'Boon'.
Step44: We find the instances where there is a misplaced semicolon where there should have been a space instead (the semicolons here are likely to have been a side effect of preprocessor.py which replaces '\n' characters with ';' when parsing the html to ensure that catchwords are delimited correctly). The semicolons are replaced with a space.
Step45: Finally for consistency, all comma and 'and' delimiters throughout the coram column are replaced with semicolons.
Step46: It may be useful to write a function that gets the judge names since this code is repeated.
Step47: After fixing the names and using a consistent delimiter, the split is attempted again to extract the judge names and titles.
Step48: The judge names and judicial titles appear to be splitting correctly now. The next step is to verify that the names are correct and consistent.
Step49: Errors (missing spaces) are fixed and some names are changed for consistency (e.g. 'Quentin Loh Sze-On' to 'Quentin Loh')
Step50: The coram lists will need to be sorted so that we can compare them across different cases.
Step51: Check whether there is any CA judgment with only one judge value in the coram column, as we expect to see more than one judge.
Step52: Checking the html files reveals that
Step53: Court of Appeal cases with 2 judges in the coram
Step54: Filling in the missing values in the author column using the coram values
Check how many CA and HC judgments have a missing value in the author column.
Step55: It is possible to fill in the missing values in the 'author' column using the 'coram' values, particularly for cases where there is only one judge in the coram.
Step56: Copy values from coram column into author column if there is only one judge in the coram (that judge must have been the author).
Step57: This reduced the number of missing values in High Court judgments from 679 to 11.
Step58: If there are multiple judges in the coram then one of them must be the author.
Step59: Look in the source html files for words "delivered by" to find which of the possible authors delivered the judgment.
Step60: Check how many remaining judgments have a missing value in the author column.
Step61: The number of missing values in Court of Appeal judgments has reduced from 207 to 161.
Check how many judgments with a missing value in the author column were delivered after 2004.
Step62: Look up the missing values at http
Step63: Filling in the missing values in catchwords column
Step64: Clean the counsel column
Create a category file to store the labels | Python Code:
col_names = ['index', 'name', 'citation', 'author', 'number', 'date', 'court', 'coram', 'counsel', 'catchwords']
df = pd.read_table('raw.tsv', encoding='utf-8', header=None, names=col_names, index_col=0, parse_dates=True)
df.head()
Explanation: Define the column names and read data from source file
End of explanation
df.describe()
Explanation: Quick check on the summary statistics of the data set
End of explanation
df[df.duplicated('citation')]
df[df.citation.str.contains('\[2014\] SGHCR 4')]
Explanation: There are only 5399 unique citations when there are 5391 judgments in the data set. This suggests that there may be duplicate entries.
Removing duplicate entries based on the citation
End of explanation
df = df.drop(15509)
df[df.citation.str.contains('\[2014\] SGHC 192')]
df = df.drop(15746)
df[df.citation.str.contains('\[2014\] SGHC 207')]
df = df.drop(15751)
df[df.citation.str.contains('\[2014\] SGHC 213')]
df = df.drop(15755)
df[df.citation.str.contains('\[2014\] SGHC 262')]
Explanation: We drop the duplicate rows from the data set.
End of explanation
df[df.citation.str.contains('\[2014\] SGHC 242')]
df[df.name.str.contains('Ong Kian Hoy')]
Explanation: These are not duplicates but are instead different entries with identical citations.
End of explanation
df = df.drop(15800)
df[df.citation.str.contains('\[2015\] SGHC 134')]
df = df.drop(16030)
df[df.citation.str.contains('\[2015\] SGCA 59')]
df = df.drop(18274)
df[df.citation.str.contains('\[2015\] SGCA 60')]
df = df.drop(18283)
Explanation: A check on http://commonlii.org shows that the citation for judgment 15800 should be [2014] SGHC 242 and it turns out there is already an identical judgment 15814 with the correct citation in the data set, so we drop 15800 in favour of 15814.
End of explanation
cits = df.citation.str.split(';', expand=True)
cits.columns = ['neutral', 'slr']
cits.head(20)
swap_idx = cits.slr.notnull()
cits.loc[swap_idx, ['neutral', 'slr']] = cits.loc[swap_idx, ['slr', 'neutral']].values
cits.head(20)
Explanation: Cleaning the citation column
A quick way of checking whether any values in the citation column are in the wrong format is to split the values.
End of explanation
cits[cits.neutral.str.split().str.len()!=3]
Explanation: Check whether the neutral citation has the right format.
End of explanation
df.loc[15265, 'citation'] = '[2013] SGHC 115'
df.loc[[17912, 17913]]
df[df.name.str.contains('ABJ')]
Explanation: Judgment 15625 did not split correctly because of a missing space.
End of explanation
df = df.drop([17912, 17913])
Explanation: Judgments 17912 and 17913 appear to be erroneous records so they are dropped.
End of explanation
for i in range(3):
print(cits.neutral.str.split(expand=True)[i].value_counts())
Explanation: Quick check of the different components of the neutral citation.
End of explanation
cits[cits.neutral.str.split(expand=True)[2].str.len() > 3]
df.loc[[14786]]
df[df.name.str.contains('Erin Brooke')]
df = df.drop(14786)
df.loc[[17915]]
df[df.name.str.contains('Ashik bin Aris')]
df = df.drop(17915)
df.loc[[17916]]
df[df.name.str.contains('Ferrero SPA')]
df = df.drop(17916)
df.loc[[17921]]
df[df.number.str.contains('591 of 2011')]
df = df.drop(17921)
df.loc[[18026]]
df[df.name.str.contains('Woo Kah Wai')]
df = df.drop(18026)
df.loc[[18039]]
df[df.name.str.contains('Muthukumaran s/o Varthan')]
df = df.drop(18039)
df.loc[[18040]]
df[df.name.str.contains('Grains and Industrial')]
df = df.drop(18040)
df.loc[[18144]]
df[df.name.str.contains('Malini Ventura')]
df = df.drop(18144)
Explanation: We know that the highest number of cases in a year was 427 in 2010, so we shouldn't be seeing 4 digit numbers in the final part of the neutral citation.
End of explanation
df.describe()
len(df[df.author.isnull()])
len(df[df.catchwords.isnull()])
Explanation: Summary statistics of the data set
End of explanation
df.info()
Explanation: There are 5382 judgments in the dataset. The 'author' column has 888 missing values. The 'catchwords' column has 104 missing values
Check the datatypes of each column
End of explanation
try:
df.date = pd.to_datetime(df.date)
except Exception as e:
print (e)
Explanation: Converting the date column from object to datetime
End of explanation
df[df.date.str.split().str.len() != 3]
Explanation: Converting the date column into datetime raises an error so we verify whether the dates are in the right format (day, month and year).
End of explanation
df.loc[15157, 'date'] = '11 March 2013'
try:
df.date = pd.to_datetime(df.date)
except Exception as e:
print (e)
Explanation: The day and month for the date column in 15157 are joined together. This is fixed by adding the missing space.
End of explanation
dates = df.date.str.split(' ', expand=True)
dates[0].value_counts()
Explanation: Attempting to convert the date column into datetime still raises the same error so we need to split the date values and check the day, month and year values.
End of explanation
dates[1].value_counts()
Explanation: The day values appear to be fine.
End of explanation
dates[2].value_counts()
Explanation: For the month values, we see that there are 4 instances where February was misspelled as Febuary.
End of explanation
df.date = df.date.str.replace('Febuary', 'February').str.strip('.')
Explanation: For the year values, there are 6 instances where there is a trailing dot.
End of explanation
try:
df.date = pd.to_datetime(df.date)
except Exception as e:
print (e)
Explanation: The errors are fixed by correcting the misspelled months and removing the trailing dots.
End of explanation
df.date.describe()
%matplotlib inline
df[df.court.str.contains('Appeal')].groupby(df.date.dt.year).size().plot(kind='bar')
df[df.court.str.contains('High')].groupby(df.date.dt.year).size().plot(kind='bar')
df[df.court.str.contains('Appeal|High')].groupby([df.court, df.date.dt.year]).size().unstack('court').plot(grid=True)
Explanation: The conversion is successful this time.
End of explanation
df.court.value_counts()
Explanation: Checking the values in the court column
End of explanation
df.court = df.court.str.replace('CA/', '')
df.court.value_counts()
Explanation: There is one inconsistent value which shows 'CA/Court of Appeal' instead of the usual 'Court of Appeal'. This is corrected.
End of explanation
df.coram.describe()
Explanation: Summary statistics of the coram column
We check the coram column to ensure that the data is clean. As there is a sizeable number of missing values in the author column, it may be useful to fill in these missing values from the coram values where possible.
End of explanation
df[df.coram.str.contains('and')].head()
df[df.coram.str.contains(';')].head()
df[df.coram.str.contains(',')].head()
Explanation: Find out what is the delimiter used when there are multiple judges in the coram column
End of explanation
judges = pd.DataFrame([judge for judge_list in df.coram.str.split(
'\s*,\s*|\s*and\s*|\s*;\s*').tolist() for judge in judge_list])
Explanation: It appears that there is no one consistent delimiter for all cases. The delimiter could be a comma, semicolon, or the word 'and'.
Cleaning the data in the coram column
The coram values are first split using the delimiters: comma, semicolon, or the word 'and'. The results are then combined into a single dataframe for review.
End of explanation
try:
judges[0].str.split().str[-1].value_counts()
except Exception as e:
print (e)
Explanation: If the split was done correctly, the judicial titles such as 'J', 'JA', or 'JC' should appear at the end of each judge name.
End of explanation
len(judges[judges[0]==''])
Explanation: However, when an attempt is made to split the values into 'name' and 'title' format (using default space), a 'list index out of range' error is raised when accessing the value at index [-1], which means there are empty strings in the series.
End of explanation
df[df.coram.str.startswith(';', na=False) | df.coram.str.endswith(';', na=False)]
Explanation: There are 5 instances of empty strings, which means there are delimiters in unexpected places (beginning or end of string, or next to one another).
End of explanation
df['coram']= df.coram.str.strip('; ')
df.loc[[15878]]
df[df.coram.str.contains(', and', na=False)]
Explanation: There is one instance of semicolons incorrectly appearing at the beginning and/or end of the judge's name. This is fixed by stripping the semicolons.
End of explanation
df.coram = df.coram.str.replace(', and', ';')
df.loc[[15319, 15900]]
df[df.coram.str.contains('; and', na=False)]
Explanation: There are two instances of a comma appearing just before the word 'and'. This is fixed by replacing them with a semicolon.
End of explanation
df.coram = df.coram.str.replace('; and', ';')
Explanation: There are two instances of a semicolon appearing just before the word 'and'. This is fixed by replacing them with a semicolon.
End of explanation
judges = pd.DataFrame([judge for judge_list in df.coram.str.split(
',\s*|\s*and\s*|\s*;\s').tolist() for judge in judge_list])
judges[0].str.split().str[-1].value_counts()
Explanation: After fixing the incorrectly placed delimiters, the split is attempted again to extract the individual judge names and judicial titles.
End of explanation
df[df.coram.str.contains('was\)') | df.coram.str.contains('Yi-Ling,') | df.coram.str.contains('Sern,')].head()
Explanation: The split is successful for the majority of cases. However, there are 23 instances where the judge's name and title were not split correctly because of the additional words '(as he then was)' added to the judge's name. A number of other names were also split incorrectly due to commas appearing within the names.
End of explanation
df.coram = df.coram.str.replace(' \(as he then was\)', '').str.replace('Yi-Ling,','Yi-Ling').str.replace('Sern,', 'Sern')
Explanation: This is fixed by removing the words '(as he then was)' and the commas from the affected names.
End of explanation
df.coram = df.coram.str.replace('Loh;J', 'Loh J')
Explanation: Some of the names and judicial titles were not separated by a space. This is fixed by replacing the name with the correct separating space.
End of explanation
df[df.coram.str.contains('plaintiff|defendant')]
Explanation: There is one instance where the words 'plaintiff' and 'defendant.' unexpectedly appear in the coram column.
End of explanation
df.loc[[14912], ['coram', 'counsel', 'catchwords']] = df.loc[[14912], ['coram', 'counsel', 'catchwords']].shift(1, axis=1)
df.loc[[14912]]
Explanation: It turns out that the values for the coram, counsel and catchwords columns were misaligned. This is fixed by shifting the values to their correct positions.
End of explanation
df.loc[14912,['author','coram']] = 'Lai Siu Chiu J'
Explanation: To fill in the missing values in the 'author' and 'coram' columns, the judge's name was obtained from http://commonlii.org/sg/cases/SGHC/2012/
End of explanation
df[df.coram.str.contains('Teck$|Elaine$|Ping$|Lee$|Peng$|Chua$|Boon$|Tung$|Abdullah$|Lionel$')]
Explanation: The handful of remaining cases that did not split correctly into 'name' and 'title' format are eyeballed to identify the errors or inconsistencies. We first check whether the judicial titles are missing from the coram values by checking the end-of-string values.
End of explanation
df.loc[14695, ['author', 'coram']] = 'Leo Zhen Wei Lionel AR'
df.loc[14967, ['author', 'coram']] = df.loc[14988, ['author', 'coram']] = 'Chew Yi-Ling Elaine AR'
df.loc[14977, ['author', 'coram']] = 'Eunice Chua AR'
df.loc[15146, ['author', 'coram']] = 'Chee Min Ping AR'
df.loc[15199, 'coram'] = df.loc[17937, 'coram'] = 'Choo Han Teck J'
df.loc[15954, ['author', 'coram']] = 'Hoo Sheau Peng JC'
df.loc[15955, ['author', 'coram']] = 'Aedit Abdullah JC'
df.loc[17969, ['author', 'coram']] = 'Amy Tung AR'
df.loc[18212, ['author', 'coram']] = 'James Elisha Lee AR'
Explanation: The coram values listed above are either missing the judicial title or have it placed before the name. This is fixed by placing the judicial titles after the names.
End of explanation
df[df.coram.str.contains('Boon;')]
Explanation: It is noted that the occurrence of an incorrectly split 'Boon' did not turn up in this end-of-string check. We try checking for the other possibility of an incorrect split due to a misplaced semicolon after the word 'Boon'.
End of explanation
df.coram = df.coram.str.replace('Boon;\s*', 'Boon ')
Explanation: We find the instances where there is a misplaced semicolon where there should have been a space instead (the semicolons here are likely to have been a side effect of preprocessor.py which replaces '\n' characters with ';' when parsing the html to ensure that catchwords are delimited correctly). The semicolons are replaced with a space.
End of explanation
df.coram = df.coram.str.replace(',| and', ';')
Explanation: Finally for consistency, all comma and 'and' delimiters throughout the coram column are replaced with semicolons.
End of explanation
def get_judges(df):
judges = pd.DataFrame([judge for judge_list in df.coram.str.split('; ').tolist() for judge in judge_list])
judges.drop_duplicates(inplace=True)
judges = judges[0].str.rsplit(' ', expand=True, n=1)
judges.columns = ['name', 'title']
return judges
Explanation: It may be useful to write a function that gets the judge names since this code is repeated.
End of explanation
judges = get_judges(df)
judges.title.value_counts()
Explanation: After fixing the names and using a consistent delimiter, the split is attempted again to extract the judge names and titles.
End of explanation
judges[judges.name.str.match('^A|B')].sort_values(by='name')
judges[judges.name.str.match('^C|D|E')].sort_values(by='name')
judges[judges.name.str.match('^F|G|H|I|J')].sort_values(by='name')
judges[judges.name.str.match('^K|L|M|N')].sort_values(by='name')
judges[judges.name.str.match('^P|Q|R|S')].sort_values(by='name')
judges[judges.name.str.match('^T|U|V|W|X|Y|Z')].sort_values(by='name')
Explanation: The judge names and judicial titles appear to be splitting correctly now. The next step is to verify that the names are correct and consistent.
End of explanation
df.coram = df.coram.str.replace('BoonLeong', 'Boon Leong').str.replace('AndrewAng', 'Andrew Ang')
df.coram = df.coram.str.replace('AndrewPhang', 'Andrew Phang').str.replace('ChanSeng', 'Chan Seng').str.replace('Chao;', 'Chao ')
df.coram = df.coram.str.replace('George;?Wei', 'George Wei').str.replace('Judith;', 'Judith ').str.replace('Lai;', 'Lai ')
df.coram = df.coram.str.replace('LeeKim', 'Lee Kim').str.replace('LeeSeiu', 'Lee Seiu')
df.coram = df.coram.str.replace('Lionel;?Yee', 'Lionel Yee').str.replace(' Sze-On', '').str.replace(' Li Shiong', '')
df.coram = df.coram.str.replace('CJAndrew', 'CJ; Andrew').str.replace('SiongThye', 'Siong Thye')
df.coram = df.coram.str.replace('TayYong', 'Tay Yong').str.replace('V;?K', 'V K').str.replace('WooBih', 'Woo Bih')
judges = get_judges(df)
judges.name.value_counts()
Explanation: Errors (missing spaces) are fixed and some names are changed for consistency (e.g. 'Quentin Loh Sze-On' to 'Quentin Loh')
End of explanation
df.coram = df.coram.str.split('; ').apply(lambda x: "; ".join(sorted(x)))
df.coram.value_counts().head(20)
Explanation: The coram lists will need to be sorted so that we can compare them across different cases.
End of explanation
df[(~df.coram.str.contains(';')) & (df.court=='Court of Appeal')]
Explanation: Check whether there is any CA judgment with only one judge value in the coram column, as we expect to see more than one judge.
End of explanation
df.loc[21949, 'author'] = 'Chao Hick Tin JA'
df.loc[21949, 'coram'] = 'Chao Hick Tin JA; Tan Lee Meng J'
Explanation: Checking the html files reveals that:
1. There is another judge's name at the bottom of the judgment in 21949 so the error is fixed by adding the name to the coram. The author is also indicated in the judgment.
2. The cases 22411 and 22413 were dealt with by a single judge Court of Appeal under s 36(1) of the Supreme Court of Judicature Act (Cap 322, 1999 Rev Ed) so the coram values are correct.
End of explanation
df[~(df.coram.str.contains(';.+;')) & (df.court=='Court of Appeal')]
Explanation: Court of Appeal cases with 2 judges in the coram
End of explanation
df[df.author.isnull()].court.value_counts()
Explanation: Filling in the missing values in the author column using the coram values
Check how many CA and HC judgments have a missing value in the author column.
End of explanation
df[df.author.isnull() & (~df.coram.str.contains(';'))].head()
Explanation: It is possible to fill in the missing values in the 'author' column using the 'coram' values, particularly for cases where there is only one judge in the coram.
End of explanation
df.loc[df.author.isnull() & (~df.coram.str.contains(';')),'author'] = df.loc[df.author.isnull() & (~df.coram.str.contains(';')),'coram'].values
df[df.author.isnull()].court.value_counts()
Explanation: Copy values from coram column into author column if there is only one judge in the coram (that judge must have been the author).
End of explanation
df[df.author.isnull()].coram.value_counts()
Explanation: This reduced the number of missing values in High Court judgments from 679 to 11.
End of explanation
possible_authors = df[df.author.isnull()].coram.str.split('; ')
possible_authors.head()
Explanation: If there are multiple judges in the coram then one of them must be the author.
End of explanation
import bs4
import re
p_tags = bs4.SoupStrainer('p')
for index, row in possible_authors.iteritems():
soup = bs4.BeautifulSoup(open('html/' + str(index) + '.html', 'r', encoding='utf-8').read(), 'lxml', parse_only=p_tags)
texts = soup(string=re.compile('\xa0'))
for t in texts:
text = t.replace('\xa0', ' ')
t.replace_with(text)
for judge in row:
if soup.find(string=re.compile("Delivered by\s+" + judge + "|" + judge + "\s+\(delivering the\s+")):
df.loc[index, 'author'] = judge
elif soup.find(string=re.compile("Delivered by")):
tag = soup.find(string=re.compile("Delivered by"))
if judge in (tag.next_element.string , tag.next_element.next_element.string):
df.loc[index, 'author'] = judge
Explanation: Look in the source html files for words "delivered by" to find which of the possible authors delivered the judgment.
End of explanation
df[df.author.isnull()].court.value_counts()
Explanation: Check how many remaining judgments have a missing value in the author column.
End of explanation
df[df.author.isnull() & (pd.to_datetime(df.date).dt.year > 2004)]
Explanation: The number of missing values in Court of Appeal judgments has reduced from 207 to 161.
Check how many judgments with a missing value in the author column were delivered after 2004.
End of explanation
df.loc[13540, 'author'] = 'Chan Sek Keong CJ'
df.loc[15284, 'author'] = 'V K Rajah JA'
df[df.author.isnull()].court.value_counts()
Explanation: Look up the missing values at http://commonlii.org which has a database of SG judgments since 2005
End of explanation
len(df[df.catchwords.isnull()])
cases = df[df.catchwords.isnull() & (df.date.dt.year > 2012)]
cases
df.loc[15337, 'catchwords'] = 'Civil Procedure — Foreign Judgments — Reciprocal Enforcement of Commonwealth Judgments Act'
df.loc[15415, 'catchwords'] = 'Civil Procedure — Striking Out'
df.loc[15606, 'catchwords'] = ('Civil Procedure — Jurisdiction;Civil Procedure — Service;Choses in Action — Assignment;'
'Conflict of Laws — Natural Forum;Conflict of Laws — Jurisdiction;Insolvency Law — Bankruptcy;'
'Res Judicata— Issue Estoppel')
df.loc[15607, 'catchwords'] = 'Administrative Law— Judicial Review'
df.loc[15815, 'catchwords'] = 'Family Law— Custody— Care and Control'
df.loc[15948, 'catchwords'] = 'Criminal law— Offences — Property— Criminal breach of trust'
df.loc[18028, 'catchwords'] = 'Family Law— Matrimonial Assets'
df.loc[18060, 'catchwords'] = 'Arbitration — Arbitrability and public policy;Arbitration — Arbitral tribunal — Competence'
df.loc[18134, 'catchwords'] = 'Injunctions — Interlocutory injunctions'
df.loc[18542, 'catchwords'] = ('Criminal Procedure and Sentencing— Sentencing— Rape;Criminal Procedure and Sentencing— '
'Sentencing— Aggravated outrage of modesty;Criminal Procedure and Sentencing— Sentencing— '
'Criminal intimidation')
df.loc[18546, 'catchwords'] = 'Unincorporated associations and trade unions — Societies'
df.loc[22600, 'catchwords'] = 'Employment Law — Pay — Recovery'
len(df[df.catchwords.isnull()])
df.catchwords.describe()
df.catchwords.head(10)
df.catchwords.tail(10)
df.catchwords = df.catchwords.str.lower().str.replace('\s*[—–-]+\s*', '-').str.replace('\s+', '_')
df.catchwords.tail(10)
df.catchwords.str.split(';', expand=True).head(10)
import numpy as np
def get_L1_catchwords(df):
ds = df.catchwords.str.split(';').str[0].str.split('-').str[0].fillna('')
for i in range(1, 11):
ds = ds + ';' + df.catchwords.str.split(';').str[i].str.split('-').str[0].fillna('')
return ds.str.split(';').apply(lambda x: ";".join(sorted(set(filter(None, x))))).replace('', np.nan)
def get_split_counts(ds):
return pd.Series(
[word for word_list in ds.fillna('').str.split(';').tolist() for word in word_list]).replace('', np.nan).value_counts()
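# Quick illustrative check of get_L1_catchwords (not part of the original analysis):
# a made-up one-row frame shows that only the top-level categories survive,
# deduplicated and sorted.
toy = pd.DataFrame({'catchwords': ['equity-estoppel-proprietary_estoppel;civil_procedure-striking_out;equity-trusts']})
get_L1_catchwords(toy) # expected value: 'civil_procedure;equity'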
L1_catchwords = get_L1_catchwords(df)
L1_catchwords.tail(10)
L1_counts = get_split_counts(get_L1_catchwords(df)).sort_index()
len(L1_counts)
L1_counts[:20]
df[df.catchwords.str.contains('_recourse_against_award', na=False)]
df.loc[14873, 'catchwords'] = 'arbitration-award-additional_award;arbitration-award-recourse_against_award'
df[df.catchwords.str.contains('admiralty', na=False) & (~df.catchwords.str.contains('admiralty_', na=False))]
df.loc[14774, 'catchwords'] = 'admiralty_and_shipping;conflict_of_laws-forum_non_conveniens'
df[df.catchwords.str.contains(';bail$', na=False)]
df.loc[14897, 'catchwords'] = 'criminal_procedure_and_sentencing-extradition-bail'
df.catchwords = df.catchwords.str.replace('adminstrative', 'administrative').str.replace('_−', '-').str.replace('_–_', '-')
L1_counts[21:40]
df[df.catchwords.str.contains('^caveats', na=False)]
df.loc[14773, 'catchwords'] = 'land-caveats;equity'
df.catchwords = df.catchwords.str.replace('construction_contracts,_contractors’_duties', 'construction_law-contractors’_duties')
df.catchwords = df.catchwords.str.replace('building_and_construction_contracts', 'building_and_construction_law')
df.catchwords = df.catchwords.str.replace('industry_security_of_payment_act', 'law-security_of_payment_act')
df.catchwords = df.catchwords.str.replace('civil_procedue', 'civil_procedure')
df[df.catchwords.str.contains('civil_procedure_', na=False)]
df[df.catchwords.str.contains('\xad', na=False)]
df.catchwords = df.catchwords.str.replace('\xad', '').str.replace('_-', '-')
L1_counts[41:60]
df.catchwords = df.catchwords.str.replace('company_law', 'companies').str.replace('conflicts_of_laws', 'conflict_of_laws')
df.catchwords = df.catchwords.str.replace('constitutional_interpretation', 'constitutional_law')
df.catchwords = df.catchwords.str.replace('conflict_of_laws_forum_non_conveniens', 'conflict_of_laws-forum_non_conveniens')
L1_counts[61:100]
df[df.catchwords.str.startswith('contracts', na=False)]
df.loc[14876, 'catchwords'] = 'contract-building_contracts'
df[df.catchwords.str.contains(';damage-', na=False)]
df.loc[14933, 'catchwords']
df[df.catchwords.str.contains('criminal_procedure-', na=False)]
df[df.catchwords.str.contains('criminal_procedure$', na=False)]
df.loc[15098, 'catchwords'] = 'criminal_procedure_and_sentencing'
df[df.catchwords.str.contains('election_of_remedies', na=False)]
df.loc[14290, 'catchwords'] = 'civil_procedure-amendment_of_pleadings-election_of_remedies'
df[df.catchwords.str.contains(';estoppel-', na=False)]
df.loc[18364, 'catchwords']
df.loc[18364, 'catchwords'] = ('civil_procedure-originating_processes;companies-memorandum_and_articles_of_association-effect;'
'companies-accounts;companies-capacity-pre-incorporation_contracts;contract-collateral_contracts;'
'contract-consideration;contract-ratification;equity-estoppel-estoppel_by_representation')
df.catchwords = df.catchwords.str.replace('credit_&_security', 'credit_and_security')
df.catchwords = df.catchwords.str.replace('credit_and_securities', 'credit_and_security')
df.catchwords = df.catchwords.str.replace('criminal_procedure-', 'criminal_procedure_and_sentencing-')
df.catchwords = df.catchwords.str.replace(';damage-', ';damages-').str.replace('evidence_limitation', 'evidence-limitation')
df.catchwords = df.catchwords.str.replace('family_law,_insolvency_law', 'family_law;insolvency_law')
L1_counts[101:140]
df[df.catchwords.str.contains('^injunction-', na=False)]
df.loc[18122, 'catchwords'] = 'injunctions-interlocutory_injunction;injunctions-springboard_injunction'
df[df.catchwords.str.contains('^insolvency-', na=False)]
df.catchwords = df.catchwords.str.replace('^insolvency-', 'insolvency_law-')
df[df.catchwords.str.contains('^intellectual_property', na=False)]
df.loc[18545, 'catchwords']
df[df.catchwords.str.contains('copyright_infringement', na=False)]
df[df.catchwords.str.contains('-groundless_threat', na=False)]
df.loc[18545, 'catchwords'] = ('tort-passing_off;copyright-copyright_infringement;copyright-groundless_threat')
df[df.catchwords.str.contains('interest_of_the_public', na=False)]
df.loc[17938, 'catchwords'] = 'legal_profession-reinstatement-interest_of_the_public'
df[df.catchwords.str.contains(';international_arbitration_act_\(cap_143a\)', na=False)]
df.catchwords = df.catchwords.str.replace(';international_arbitration_act_\(cap_143a\)', ';arbitration')
df[df.catchwords.str.contains('^offences', na=False)]
df.loc[15068, 'catchwords'] = 'criminal_law-offences-property-cheating'
df[df.catchwords.str.contains('^procedure', na=False)]
df.loc[18090, 'catchwords'] = 'civil_procedure-summary_judgment'
df[df.catchwords.str.contains(';pre', na=False)]
df.loc[14598, 'catchwords'] = 'civil_procedure-discovery_of_documents-pre-action_discovery'
df[df.catchwords.str.contains('^rape', na=False)]
df.loc[17993, 'catchwords'] = 'criminal_law-offences-rape'
df[df.catchwords.str.contains(';ratification', na=False)]
df.catchwords = df.catchwords.str.replace(';ratification', ';agency-ratification')
df.catchwords = df.catchwords.str.replace('land_law', 'land').str.replace('legal_professional', 'legal_profession')
df.catchwords = df.catchwords.str.replace('probate_&_administration', 'probate_and_administration')
L1_counts[141:]
df[df.catchwords.str.contains(';sale_of_goods', na=False)]
df.loc[18561, 'catchwords'] = 'contract-misrepresentation;commercial_transactions-sale_of_goods-implied_terms_as_to_quality'
df[df.catchwords.str.contains(';sentencing', na=False)]
df.loc[18599, 'catchwords'] = 'criminal_procedure_and_sentencing-sentencing'
df[df.catchwords.str.contains('^trade_marks-', na=False)]
df.loc[18213, 'catchwords'] = 'trade_marks_and_trade_names-infringement-assessment_of_damages-statutory_damages'
df[df.catchwords.str.contains('trademarks', na=False)]
df.loc[14860, 'catchwords'] = ('trade_marks_and_trade_names-infringement-defence_of_prior_use;'
'trade_marks_and_trade_names-infringement-well_known_marks;tort-passing_off-goodwill')
df[df.catchwords.str.contains('unincorporated_associations$', na=False)]
df.loc[14906, 'catchwords'] = 'civil_procedure-striking_out;unincorporated_associations_and_trade_unions'
df[df.catchwords.str.contains('work_injury_compensation_act', na=False)]
df.loc[15035, 'catchwords'] = 'employment_law-work_injury_compensation_act'
L1_counts = get_split_counts(get_L1_catchwords(df)).sort_index()
L1_counts.head(20)
L1_counts[21:40]
L1_counts[41:60]
L1_counts[61:]
len(L1_counts)
def get_L2_catchwords(df):
ds = df.catchwords.str.split(';').str[0].str.split('-').str[:2].str.join('-').fillna('')
for i in range(1, 11):
ds = ds + ';' + df.catchwords.str.split(';').str[i].str.split('-').str[:2].str.join('-').fillna('')
return ds.str.split(';').apply(lambda x: ";".join(sorted(set(filter(None, x))))).replace('', np.nan)
L2_counts = get_split_counts(get_L2_catchwords(df)).sort_index()
L2_counts.sort_values(ascending=False).head(20)
df.describe()
df.sort_values(by='date').tail(10)
L1_catchwords = get_L1_catchwords(df)
L1_catchwords.head()
L1_catchwords.tail()
L1_catchwords = L1_catchwords.str.replace(';', ' ')
L1_catchwords.tail()
Explanation: Filling in the missing values in the catchwords column
End of explanation
pd.options.display.max_colwidth = 1000
with open('cats.txt', 'w') as f:
f.write(
'\n'.join(' '.join(line.split()) for line in L1_catchwords.to_string(na_rep='uncategorised', header=False).split('\n'))
)
pd.options.display.max_colwidth = 50
Explanation: Clean the counsel column
Create a category file to store the labels
End of explanation |
14,954 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have an array of experimental values and a probability density function that supposedly describes their distribution: | Problem:
import numpy as np
import scipy as sp
from scipy import integrate,stats
def bekkers(x, a, m, d):
p = a*np.exp((-1*(x**(1/3) - m)**2)/(2*d**2))*x**(-2/3)
return(p)
range_start = 1
range_end = 10
estimated_a, estimated_m, estimated_d = 1,1,1
sample_data = [1.5,1.6,1.8,2.1,2.2,3.3,4,6,8,9]
def bekkers_cdf(x,a,m,d,range_start,range_end):
values = []
for value in x:
integral = integrate.quad(lambda k: bekkers(k,a,m,d),range_start,value)[0]
normalized = integral/integrate.quad(lambda k: bekkers(k,a,m,d),range_start,range_end)[0]
values.append(normalized)
return np.array(values)
s, p_value = stats.kstest(sample_data, lambda x: bekkers_cdf(x, estimated_a, estimated_m, estimated_d, range_start,range_end))
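# result = True means the KS test rejects (at the 5% level) the hypothesis that the
# sample was drawn from the fitted Bekkers distribution; False means no evidence against it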
if p_value >= 0.05:
result = False
else:
result = True |
14,955 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial for polydispersity with bulk Monte Carlo simulations in the structural-color package
Copyright 2016, Vinothan N. Manoharan, Victoria Hwang, Annie Stephenson
This file is part of the structural-color python package.
This package is free software
Step1: Start by running Monte Carlo code for a single sphere
This is essentially the same as running MC for a sphere as described in montecarlo_tutorial.ipynb, only we return a few extra parameters from calc_refl_trans() and use them to calculate the phase function, scattering coefficient, and absorption coefficient for the bulk Monte Carlo simulation.
Set parameters
We have to set a few extra parameters for the bulk simulation
Step2: Sample sphere boundary sizes
Calculate a list of sphere boundary diameters based on the polydispersity of the spheres. This list will be used to calculate phase functions and scattering/absorption coefficients for single spheres, to use in the bulk model. In this example, we assume each sphere has the same particle packings inside.
Step3: Run Monte Carlo for each of the sphere boundary sizes
Run Monte Carlo simulations for a sphere boundary, for all the sizes of spheres calculated above. This will give two scattering parameters for sphere size and each wavelength.
Step4: Sample distribution of sphere boundary radii
Given the pdi of the sphere boundary radius and the mean radii of the sphere boundaries, sample the particle radii for each event and trajectory.
Step5: Calculate reflectance of bulk polydisperse film
The only difference from a normal bulk reflectance calculation (see bulk_montecarlo_tutorial.ipynb) is that we use the function pfs.sample_angles_step_poly() instead of sample_angles() and sample_step()
Note that for mixtures of different sphere types, absorption only works in the bulk matrix, not in the spheres themselves. This is because sampling the different absorption lengths for different sphere types has not yet been implemented.
Step6: Plot results | Python Code:
%matplotlib inline
import numpy as np
import time
import structcol as sc
import structcol.refractive_index as ri
from structcol import montecarlo as mc
from structcol import detector as det
from structcol import phase_func_sphere as pfs
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.special import factorial  # scipy.misc.factorial was removed in newer SciPy releases
import os
Explanation: Tutorial for polydispersity with bulk Monte Carlo simulations in the structural-color package
Copyright 2016, Vinothan N. Manoharan, Victoria Hwang, Annie Stephenson
This file is part of the structural-color python package.
This package is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This package is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this package. If not, see http://www.gnu.org/licenses/.
Introduction to polydispersity with bulk Monte Carlo simulations
One of the advantages of the bulk Monte Carlo model is that we can sample phase functions and scattering lengths for spheres of different sizes in a bulk film. This means we can predict the reflectance of bulk films made of mixtures of spheres with different sizes, allowing us to simulate polydispersity using the bulk Monte Carlo model.
Below is an example that calculates a reflectance spectrum from a bulk film made of polydisperse spheres, where the internal structure of each sphere is the same.
Loading and using the package and module
You'll need the following imports
End of explanation
# Properties of the source
wavelengths = sc.Quantity(np.arange(400., 801.,10),'nm') # wavelengths at which to calculate reflectance
# Geometric properties of the sample
num_diams = 3 # number of diams from which to sample,
# higher number takes longer but gives a more precise calculation
sphere_boundary_diam_mean = sc.Quantity(10,'um') # mean diameter of the microspheres
pdi = 0.2 # poldispersity index
particle_radius = sc.Quantity(160,'nm') # radii of the two species of particles
volume_fraction_bulk = sc.Quantity(0.63,'') # volume fraction of the spheres in the bulk film
volume_fraction_particles = sc.Quantity(0.55, '') # volume fraction of the particles in the sphere boundary
bulk_thickness = sc.Quantity('50 um') # thickness of the bulk film
boundary = 'sphere' # geometry of sample
boundary_bulk = 'film' # geometry of the bulk sample
# Refractive indices
n_particle = ri.n('vacuum', wavelengths) # refractive index of particle
n_matrix = ri.n('polystyrene', wavelengths) + 2e-5*1j # refractive index of matrix
n_matrix_bulk = ri.n('vacuum', wavelengths) # refractive index of the bulk matrix
n_medium = ri.n('vacuum', wavelengths) # refractive index of medium outside the bulk sample.
# Monte Carlo parameters
ntrajectories = 500 # number of trajectories to run with a spherical boundary
nevents = 300 # number of scattering events for each trajectory in a spherical boundary
ntrajectories_bulk = 1000 # number of trajectories to run in the bulk film
nevents_bulk = 300 # number of events to run in the bulk film
# Plot settings
sns.set_style('white') # sets white plotting background
Explanation: Start by running Monte Carlo code for a single sphere
This is essentially the same as running MC for a sphere as described in montecarlo_tutorial.ipynb, only we return a few extra parameters from calc_refl_trans() and use them to calculate the phase function, scattering coefficient, and absorption coefficient for the bulk Monte Carlo simulation.
Set parameters
We have to set a few extra parameters for the bulk simulation
End of explanation
# calculate diameter list to sample from
sphere_boundary_diameters = pfs.calc_diam_list(num_diams, sphere_boundary_diam_mean, pdi, equal_spacing = False)
Explanation: Sample sphere boundary sizes
Calculate a list of sphere boundary diameters based on the polydispersity of the spheres. This list will be used to calculate phase functions and scattering/absorption coefficients for single spheres, to use in the bulk model. In this example, we assume each sphere has the same particle packings inside.
End of explanation
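# Optional sanity check (not in the original tutorial): inspect the sampled boundary
# diameters before running the per-sphere Monte Carlo calculations.
print(sphere_boundary_diameters)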
reflectance_sphere = np.zeros(wavelengths.size)
p_bulk = np.zeros((sphere_boundary_diameters.size, wavelengths.size, 200))
mu_scat_bulk = sc.Quantity(np.zeros((sphere_boundary_diameters.size, wavelengths.size)),'1/um')
mu_abs_bulk = sc.Quantity(np.zeros((sphere_boundary_diameters.size, wavelengths.size)),'1/um')
for j in range(sphere_boundary_diameters.size):
    # print the diameter to keep track of where we are in the calculation
print('diameter: ' + str(sphere_boundary_diameters[j]))
for i in range(wavelengths.size):
# caculate the effective index of the sample
n_sample = ri.n_eff(n_particle[i], n_matrix[i], volume_fraction_particles)
# Calculate the phase function and scattering and absorption coefficients from the single scattering model
# (this absorption coefficient is of the scatterer, not of an absorber added to the system)
p, mu_scat, mu_abs = mc.calc_scat(particle_radius, n_particle[i], n_sample,
volume_fraction_particles, wavelengths[i])
# Initialize the trajectories
r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_matrix_bulk[i], n_sample,
boundary, sample_diameter = sphere_boundary_diameters[j])
r0 = sc.Quantity(r0, 'um')
k0 = sc.Quantity(k0, '')
W0 = sc.Quantity(W0, '')
# Create trajectories object
trajectories = mc.Trajectory(r0, k0, W0)
# Generate a matrix of all the randomly sampled angles first
sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)
# Create step size distribution
step = mc.sample_step(nevents, ntrajectories, mu_scat)
# Run photons
trajectories.absorb(mu_abs, step)
trajectories.scatter(sintheta, costheta, sinphi, cosphi)
trajectories.move(step)
# Calculate reflection and transmition
(refl_indices,
trans_indices,
_, _, _,
refl_per_traj, trans_per_traj,
_,_,_,_,
reflectance_sphere[i],
_,_, norm_refl, norm_trans) = det.calc_refl_trans(trajectories, sphere_boundary_diameters[j],
n_matrix_bulk[i], n_sample, boundary,
run_fresnel_traj = False,
return_extra = True)
### Calculate phase function and lscat ###
# use output of calc_refl_trans to calculate phase function, mu_scat, and mu_abs for the bulk
p_bulk[j,i,:], mu_scat_bulk[j,i], mu_abs_bulk[j,i] = pfs.calc_scat_bulk(refl_per_traj, trans_per_traj,
trans_indices,
norm_refl, norm_trans,
volume_fraction_bulk,
sphere_boundary_diameters[j],
n_matrix_bulk[i],
wavelengths[i],
plot=False, phi_dependent=False)
Explanation: Run Monte Carlo for each of the sphere boundary sizes
Run Monte Carlo simulations for a sphere boundary, for all the sizes of spheres calculated above. This will give two scattering parameters for each sphere size and each wavelength.
End of explanation
# sample
sphere_diams_sampled = pfs.sample_diams(pdi, sphere_boundary_diameters,
sphere_boundary_diam_mean,
ntrajectories_bulk,
nevents_bulk)
# plot
sns.distplot(np.ndarray.flatten(sphere_diams_sampled), kde = False)
plt.ylabel('number sampled')
plt.xlabel('diameter (' + str(sphere_boundary_diameters.units) + ')')
Explanation: Sample distribution of sphere boundary radii
Given the pdi of the sphere boundary radius and the mean radii of the sphere boundaries, sample the particle radii for each event and trajectory.
End of explanation
reflectance_bulk_poly = np.zeros(wavelengths.size)
for i in range(wavelengths.size):
# print the wavelength keep track of where we are in calculation
print('wavelength: ' + str(wavelengths[i]))
# Initialize the trajectories
r0, k0, W0 = mc.initialize(nevents_bulk, ntrajectories_bulk, n_medium[i], n_matrix_bulk[i],
boundary_bulk)
r0 = sc.Quantity(r0, 'um')
W0 = sc.Quantity(W0, '')
k0 = sc.Quantity(k0, '')
# Sample angles and calculate step size based on sampled radii
sintheta, costheta, sinphi, cosphi, step, _, _ = pfs.sample_angles_step_poly(nevents_bulk, ntrajectories_bulk,
p_bulk[:,i,:],
sphere_diams_sampled,
mu_scat_bulk[:,i],
param_list = sphere_boundary_diameters)
# Create trajectories object
trajectories = mc.Trajectory(r0, k0, W0)
# Run photons
trajectories.absorb(mu_abs_bulk[0,i], step) # Note: polydisperse absorption does not currently work in the bulk
# so we arbitrarily use index 0, assuming that all scattering events
# have the same amount of absorption
trajectories.scatter(sintheta, costheta, sinphi, cosphi)
trajectories.move(step)
# calculate reflectance
reflectance_bulk_poly[i], transmittance = det.calc_refl_trans(trajectories, bulk_thickness,
n_medium[i], n_matrix_bulk[i], boundary_bulk)
Explanation: Calculate reflectance of bulk polydisperse film
The only difference from a normal bulk reflectance calculation (see bulk_montecarlo_tutorial.ipynb) is that we use the function pfs.sample_angles_step_poly() instead of sample_angles() and sample_step()
Note that for mixtures of different sphere types, absorption only works in the bulk matrix, not in the spheres themselves. This is because sampling the different absorption lengths for different sphere types has not yet been implemented.
End of explanation
plt.figure()
plt.plot(wavelengths, reflectance_bulk_poly, linewidth = 3)
plt.ylim([0,1])
plt.xlim([400,800])
plt.xlabel('Wavelength (nm)')
plt.ylabel('Reflectance')
plt.title('Bulk Reflectance')
Explanation: Plot results
End of explanation |
14,956 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, you'll need some data to load up.
You can download example HARPS data files (and results files) to play around with linked in the documentation.
Here we'll assume that you have the data 51peg_e2ds.hdf5 saved in the wobble/data directory.
By default, loading the data will load all echelle orders and all epochs in the data file; you can change this with the optional orders and epochs kwargs, which each take lists (or 1-d numpy arrays) of indices for the desired orders/epochs to load.
Step1: The data we just loaded are assumed to be continuum normalized, with regions of bad data (negative flux values or very low SNR) "masked out" by setting their uncertainties to be infinite. In this example, the data are also in units of log(wavelength) vs. log(flux).
Step2: Now let's create a results object in which to store the outputs of wobble
Step3: This object is not currently populated with useful information (because we haven't optimized anything yet!), but once it is we'll be able to save it with the results.write('filename.hdf5') function.
A saved results file can be loaded as
Step4: The results have been automatically saved and we can now view them by generating a plot
Step5: The residuals look good for the star but not great around the tellurics. Let's try running with variable tellurics.
Step6: Looks better! Here are the RVs for this single order. Once we run on all orders, we can combine the order-by-order velocities using results.combine_orders('star') and access the final (non-barycentric-corrected RVs as results.star_time_rvs.
Step7: Now let's generalize this to multiple orders and get RVs for the full* spectrum
Step8: In the following loop, we'll continually be overwriting the "model" variable. That's ok! All optimized results will be copied over to the "results" object automatically, and as long as the star component is given the same name in the model for every order, they'll be associated in the results object as we'd expect.
Step9: Now that we have RVs, let's do some post-processing on them
Step10: Finally, we can look at the resulting RVs and save them to a text file
Step11: The above command saved the RVs; we probably also want to save the spectral fits and other diagnostic information for future reference. We can do that with the following command, which preserves the entire Results object (including RVs and spectra) | Python Code:
data = wobble.Data('../data/51peg_e2ds.hdf5')
Explanation: First, you'll need some data to load up.
You can download example HARPS data files (and results files) to play around with, linked in the documentation.
Here we'll assume that you have the data 51peg_e2ds.hdf5 saved in the wobble/data directory.
By default, loading the data will load all echelle orders and all epochs in the data file; you can change this with the optional orders and epochs kwargs, which each take lists (or 1-d numpy arrays) of indices for the desired orders/epochs to load.
End of explanation
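# The orders/epochs kwargs described above take index lists or arrays; an illustrative
# (arbitrary) example, not needed for the rest of this tutorial, would be:
# data = wobble.Data('../data/51peg_e2ds.hdf5', orders=np.arange(65, 70), epochs=np.arange(0, 20))
# Here we keep everything and just confirm how many orders were loaded:
print(len(data.orders), 'orders loaded')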
r = 0 # index of echelle order to plot
n = 0 # index of epoch to plot
plt.plot(data.xs[r][n], data.ys[r][n], 'k.', ms=6)
mask = data.ivars[r][n] <= 1.e-8 # masked-out bad data
plt.plot(data.xs[r][n][mask], data.ys[r][n][mask], 'w.', ms=4)
plt.ylabel('ln(flux)')
plt.xlabel('ln(wave)');
Explanation: The data we just loaded are assumed to be continuum normalized, with regions of bad data (negative flux values or very low SNR) "masked out" by setting their uncertainties to be infinite. In this example, the data are also in units of log(wavelength) vs. log(flux).
End of explanation
results = wobble.Results(data=data)
Explanation: Now let's create a results object in which to store the outputs of wobble:
End of explanation
r = 67 # index into data.orders for the desired order
model = wobble.Model(data, results, r)
model.add_star('star')
model.add_telluric('tellurics')
wobble.optimize_order(model)
Explanation: This object is not currently populated with useful information (because we haven't optimized anything yet!), but once it is we'll be able to save it with the results.write('filename.hdf5') function.
A saved results file can be loaded as:
results = wobble.Results(filename='filename.hdf5')
Here's a minimal example of optimizing a model consisting of a star and tellurics for a single order:
End of explanation
n = 40 # epoch to plot
results.plot_spectrum(r, n, data, 'demo1.png')
from IPython.display import Image
Image(filename='demo1.png')
Explanation: The results have been automatically saved and we can now view them by generating a plot:
End of explanation
results2 = wobble.Results(data=data)
model = wobble.Model(data, results2, r)
model.add_star('star')
model.add_telluric('tellurics', variable_bases=2)
wobble.optimize_order(model)
results2.plot_spectrum(r, n, data, 'demo2.png')
Image(filename='demo2.png')
Explanation: The residuals look good for the star but not great around the tellurics. Let's try running with variable tellurics.
End of explanation
plt.errorbar(results2.dates, results2.star_rvs[r] + results2.bervs,
1./np.sqrt(results2.star_ivars_rvs[r]),
fmt='o', ms=5, elinewidth=1)
plt.xlabel('JD')
plt.ylabel(r'RV (m s$^{-1}$)')
plt.xlim([2456505, 2456570]);
Explanation: Looks better! Here are the RVs for this single order. Once we run on all orders, we can combine the order-by-order velocities using results.combine_orders('star') and access the final (non-barycentric-corrected) RVs as results.star_time_rvs.
End of explanation
data = wobble.Data('../data/51peg_e2ds.hdf5', orders=np.arange(65,70))
Explanation: Now let's generalize this to multiple orders and get RVs for the full* spectrum:
* not actually the complete spectrum in this tutorial because that would take a long time to run
Here we'll overwrite the Data object with one that contains only a subset of spectral orders. The following commands could (and should) be used on the entire object containing all orders, but that can take a long time (up to an hour) so for the sake of the tutorial we'll use a smaller subsample.
End of explanation
results = wobble.Results(data=data)
for r in range(len(data.orders)):
print('starting order {0} of {1}'.format(r+1, len(data.orders)))
model = wobble.Model(data, results, r)
model.add_star('star')
model.add_telluric('tellurics', variable_bases=2)
wobble.optimize_order(model)
Explanation: In the following loop, we'll continually be overwriting the "model" variable. That's ok! All optimized results will be copied over to the "results" object automatically, and as long as the star component is given the same name in the model for every order, they'll be associated in the results object as we'd expect.
End of explanation
results.combine_orders('star')
results.apply_drifts('star') # instrumental drift corrections
results.apply_bervs('star') # barycentric corrections
Explanation: Now that we have RVs, let's do some post-processing on them:
End of explanation
plt.errorbar(data.dates, results.star_time_rvs - np.mean(results.star_time_rvs),
results.star_time_sigmas,
fmt='o', ms=5, elinewidth=1)
plt.xlabel('JD')
plt.ylabel(r'RV (m s$^{-1}$)');
results.write_rvs('star', 'demo_rvs.txt')
Explanation: Finally, we can look at the resulting RVs and save them to a text file:
End of explanation
results.write('demo_results.hdf5')
Explanation: The above command saved the RVs; we probably also want to save the spectral fits and other diagnostic information for future reference. We can do that with the following command, which preserves the entire Results object (including RVs and spectra):
End of explanation |
14,957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises Electric Machinery Fundamentals
Chapter 4
Problem 4-29
Step1: Description
A 100-MVA, 14.4-kV 0.8-PF-lagging, Y-connected synchronous generator has a negligible armature
resistance and a synchronous reactance of 1.0 per-unit. The generator is connected in parallel with a 60-
Hz, 14.4-kV infinite bus that is capable of supplying or consuming any amount of real or reactive power
with no change in frequency or terminal voltage.
Step2: (a)
What is the synchronous reactance of the generator in ohms?
(b)
What is the internal generated voltage $E_A$ of this generator under rated conditions?
(c)
What is the armature current $I_A$ in this machine at rated conditions?
(d)
Suppose that the generator is initially operating at rated conditions. If the internal generated voltage $E_A$ is decreased by 5 percent
What will the new armature current $I_A$ be?
(e)
Repeat part (d) for 10, 15, 20, and 25 percent reductions in $E_A$ .
(f)
Plot the magnitude of the armature current $I_A$ as a function of $E_A$ .
SOLUTION
(a)
The rated phase voltage of this generator is
Step3: The base impedance of this generator is
Step4: Therefore,
Step5: (b)
The rated armature current is
Step6: The power factor is 0.8 lagging, so
Step7: Therefore, the internal generated voltage is
Step8: (c)
From the above calculations
Step9: (d)
If $E_A$ is decreased by 5%, the armature current will change as shown below. Note that the infinite bus will keep $V_\phi$ and $\omega_m$ constant. Also, since the prime mover hasn’t changed, the power supplied by the generator will be constant.
<img src="figs/Problem_4-29.png" width="60%">
$P = \frac{3V_\phi E_A}{X_S}\sin{\delta} =$ constant, so
Step10: $$\delta_2 = \arcsin\left(\frac{E_{A1}}{E_{A2}}\sin{\delta_1}\right)$$
Step11: Therefore, the new armature current is
Step12: (e)
Repeating part (d)
Step13: Therefore, the new armature current is
Step14: With a 15% decrease,
Step15: Therefore, the new armature current is
Step16: With a 20% decrease,
Step17: Therefore, the new armature current is
Step18: With a 25% decrease,
Step19: Therefore, the new armature current is
Step20: (f)
We are going to plot the magnitude of the armature current $I_A$ as a function of $E_A$ below.
Define values for this generator
Step21: Calculate delta for each $E_A$
Step22: Calculate Ia for each flux
Step23: Plot the armature current versus Ea | Python Code:
%pylab notebook
Explanation: Exercises Electric Machinery Fundamentals
Chapter 4
Problem 4-29
End of explanation
Sbase = 100e6 # [VA]
Vbase = 14.4e3 # [V]
ra = 0.0 # pu
xs = 1.0 # pu
PF = 0.8
Explanation: Description
A 100-MVA, 14.4-kV 0.8-PF-lagging, Y-connected synchronous generator has a negligible armature
resistance and a synchronous reactance of 1.0 per-unit. The generator is connected in parallel with a 60-
Hz, 14.4-kV infinite bus that is capable of supplying or consuming any amount of real or reactive power
with no change in frequency or terminal voltage.
End of explanation
Vphi_base = Vbase / sqrt(3)
print('Vphi = {:.0f} V'.format(Vphi_base))
Explanation: (a)
What is the synchronous reactance of the generator in ohms?
(b)
What is the internal generated voltage $E_A$ of this generator under rated conditions?
(c)
What is the armature current $I_A$ in this machine at rated conditions?
(d)
Suppose that the generator is initially operating at rated conditions. If the internal generated voltage $E_A$ is decreased by 5 percent
What will the new armature current $I_A$ be?
(e)
Repeat part (d) for 10, 15, 20, and 25 percent reductions in $E_A$ .
(f)
Plot the magnitude of the armature current $I_A$ as a function of $E_A$ .
SOLUTION
(a)
The rated phase voltage of this generator is:
End of explanation
Zbase = (3*Vphi_base**2) / Sbase
print('Zbase = {:.2f} Ω'.format(Zbase))
Explanation: The base impedance of this generator is:
$$Z_\text{base} = \frac{3V^2_{\phi,\text{base}}}{S_\text{base}}$$
End of explanation
Ra = ra * Zbase
Xs = xs * Zbase
print('''
Ra = {:.1f} Ω Xs = {:.1f} Ω
========================'''.format(Ra, Xs))
Explanation: Therefore,
End of explanation
ia = Sbase / (sqrt(3)*Vbase)
print('ia = {:.0f} A'.format(ia))
Explanation: (b)
The rated armature current is:
$$I_A = I_F = \frac{S}{\sqrt{3}V_T}$$
End of explanation
Ia_angle = -arccos(PF)
Ia = ia * (cos(Ia_angle) + sin(Ia_angle) * 1j)
print('Ia = {:.0f} A ∠{:.2f}°'.format(abs(Ia), Ia_angle/pi*180))
Explanation: The power factor is 0.8 lagging, so
End of explanation
EA = Vphi_base + Ra*Ia + Xs*1j *Ia
EA_angle = arctan(EA.imag/EA.real)
print('''
EA = {:.0f} V ∠{:.2f}°
===================='''.format(abs(EA), EA_angle/pi*180))
Explanation: Therefore, the internal generated voltage is:
$$\vec{E}A = \vec{V}\phi + R_A\vec{I}_A + jX_S\vec{I}_A$$
End of explanation
print('''
Ia = {:.0f} V ∠{:.2f}°
===================='''.format(abs(Ia), Ia_angle/pi*180))
Explanation: (c)
From the above calculations
End of explanation
Ea1 = abs(EA)
Ea2 = Ea1 * 0.95
print('Ea1 = {:.0f} V Ea2 = {:.0f} V'.format(Ea1, Ea2))
Explanation: (d)
If $E_A$ is decreased by 5%, the armature current will change as shown below. Note that the infinite bus will keep $V_\phi$ and $\omega_m$ constant. Also, since the prime mover hasn’t changed, the power supplied by the generator will be constant.
<img src="figs/Problem_4-29.png" width="60%">
$P = \frac{3V_\phi E_A}{X_S}\sin{\delta} =$ constant, so: $E_{A1}\sin{\delta_1} = E_{A2}\sin{\delta_2}$
With a 5% decrease,
End of explanation
delta1 = EA_angle
delta2 = arcsin(Ea1/Ea2 * sin(delta1))
print('delta2 = {:.1f}°'.format(delta2/pi*180))
EA2 = Ea2 * exp(1j*delta2)
Explanation: $$\delta_2 = \arcsin\left(\frac{E_{A1}}{E_{A2}}\sin{\delta_1}\right)$$
End of explanation
Ia2 = (EA2 - Vphi_base) / (Xs*1j)
Ia2_angle = arctan(Ia2.imag/Ia2.real)
print('''
Ia2 = {:.0f} V ∠{:.1f}°
===================='''.format(abs(Ia2), Ia2_angle/pi*180))
Explanation: Therefore, the new armature current is:
$$\vec{I}A = \frac{\vec{E}{A2} - \vec{V}_\phi}{jX_S}$$
End of explanation
Ea1 = abs(EA)
Ea3 = Ea1 * 0.9
print('Ea1 = {:.0f} V Ea3 = {:.0f} V'.format(Ea1, Ea3))
delta1 = EA_angle
delta3 = arcsin(Ea1/Ea3 * sin(delta1))
print('delta3 = {:.1f}°'.format(delta3/pi*180))
EA3 = Ea3 * exp(1j*delta3)
Explanation: (e)
Repeating part (d):
With a 10% decrease,
End of explanation
Ia3 = (EA3 - Vphi_base) / (Xs*1j)
Ia3_angle = arctan(Ia3.imag/Ia3.real)
print('''
Ia3 = {:.0f} A ∠{:.1f}°
====================='''.format(abs(Ia3), Ia3_angle/pi *180))
Explanation: Therefore, the new armature current is:
End of explanation
Ea1 = abs(EA)
Ea4 = Ea1 * 0.85
print('Ea1 = {:.0f} V Ea4 = {:.0f} V'.format(Ea1, Ea4))
delta1 = EA_angle
delta4 = arcsin(Ea1/Ea4 * sin(delta1))
print('delta4 = {:.1f}°'.format(delta4/pi*180))
EA4 = Ea4 * exp(1j*delta4)
Explanation: With a 15% decrease,
End of explanation
Ia4 = (EA4 - Vphi_base) / (Xs*1j)
Ia4_angle = arctan(Ia4.imag/Ia4.real)
print('''
Ia4 = {:.0f} A ∠{:.1f}°
====================='''.format(abs(Ia4), Ia4_angle/pi *180))
Explanation: Therefore, the new armature current is:
End of explanation
Ea1 = abs(EA)
Ea5 = Ea1 * 0.80
print('Ea1 = {:.0f} V Ea5 = {:.0f} V'.format(Ea1, Ea5))
delta1 = EA_angle
delta5 = arcsin(Ea1/Ea5 * sin(delta1))
print('delta5 = {:.1f}°'.format(delta5/pi*180))
EA5 = Ea5 * exp(1j*delta5)
Explanation: With a 20% decrease,
End of explanation
Ia5 = (EA5 - Vphi_base) / (Xs*1j)
Ia5_angle = arctan(Ia5.imag/Ia5.real)
print('''
Ia5 = {:.0f} A ∠{:.1f}°
====================='''.format(abs(Ia5), Ia5_angle/pi *180))
Explanation: Therefore, the new armature current is:
End of explanation
Ea1 = abs(EA)
Ea6 = Ea1 * 0.75
print('Ea1 = {:.0f} V Ea6 = {:.0f} V'.format(Ea1, Ea6))
delta1 = EA_angle
delta6 = arcsin(Ea1/Ea6 * sin(delta1))
print('delta6 = {:.1f}°'.format(delta6/pi*180))
EA6 = Ea6 * exp(1j*delta6)
Explanation: With a 25% decrease,
End of explanation
Ia6 = (EA6 - Vphi_base) / (Xs*1j)
Ia6_angle = arctan(Ia6.imag/Ia6.real)
print('''
Ia6 = {:.0f} A ∠{:.1f}°
====================='''.format(abs(Ia6), Ia6_angle/pi *180))
Explanation: Therefore, the new armature current is:
End of explanation
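# Optional consolidation (not part of the original solution): parts (d) and (e) repeat
# the same three steps, so the five reductions can also be computed in a single loop
# using EA, EA_angle, Vphi_base and Xs from above.
for frac in [0.95, 0.90, 0.85, 0.80, 0.75]:
    Ea_new = frac * abs(EA)
    delta_new = arcsin(abs(EA) / Ea_new * sin(EA_angle)) # E_A1*sin(delta_1) = E_A2*sin(delta_2)
    Ia_new = (Ea_new * exp(1j * delta_new) - Vphi_base) / (Xs * 1j)
    print('{:.0f}% of EA: Ia = {:.0f} A at {:.1f} deg'.format(
        frac * 100, abs(Ia_new), arctan(Ia_new.imag / Ia_new.real) / pi * 180))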
Ea = linspace(0.55, 1.00, 46) * abs(EA)
d1 = EA_angle
Explanation: (f)
We are going to plot the magnitude of the armature current $I_A$ as a function of $E_A$ below.
Define values for this generator:
End of explanation
d_ = arcsin( abs(EA) / Ea * sin(d1))
Explanation: Calculate delta for each $E_A$
End of explanation
Ea_ = Ea * exp(1j*d_)
Ia_ = ( Ea_ - Vphi_base ) / (Xs*1j)
Explanation: Calculate Ia for each flux:
End of explanation
rc('text', usetex=True) # enable LaTeX commands for plot
title(r'Armature current versus $E_A$')
xlabel(r'$E_A$ [kV]')
ylabel(r'$I_A$ [A]')
plot(abs(Ea_)/1000,abs(Ia_), linewidth = 2)
grid()
Explanation: Plot the armature current versus Ea:
End of explanation |
14,958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explore variables one at a time
Step1: MSSubClass
Step2: MSSubClass is categorical, though it is coded as numeric. Combine all the 1 and 1.5 story dwelling types as 1, 2 and 2.5 story types as 2, and the rest as 0.
Step3: MSZoning
Step4: MSZoning has a severe lack of variability, providing little-to-no information in predicting SalePrice. Thus, this variable will not be used in the model.
LotFrontage
Step5: LotArea
Step6: Street and PavedDrive
Step7: Street and PavedDrive have severe lack of variability, providing little-to-no information in predicting SalePrice. Thus, these variables will not be used in the model.
Alley
Step8: NA means that houses have no alley, which is not the same as a missing value. Drop this variable since it lacks variability and provides limited information towards the prediction.
LotShape
Step9: LandContour
Step10: Drop LandContour due to lack of variability in its values.
Utilities
Drop Utilities due to severe lack of variability!
Step11: LotConfig
Step12: LandSlope
Drop LandSlope due to lack of variability
Step13: Neighborhood
Step14: Condition1 and Condition2
Step15: Most houses have normal condition. If either conditions are normal, record the overall condition as normal.
Step16: Drop this variable since it lacks variability.
BldgType
Step17: Drop BldgType due to lack of variability
HouseStyle
Step18: Combine 1-1.5 story as 1, 2-2.5 story as 2, and the rest as 0.
Step19: OverallQual
Step20: Combine very poor/poor/fair/less than average as -1, average/above average/good as 0, very good/excellent/very excellent as 1.
Step21: Drop OverallQual due to lack of variability
OverallCond
Drop OverallCond due to lack of variability
Step22: YearBuilt
Step23: Years usually needs to be binned. Let 0 = ancient, 1 = older, 2 = newer, 3 = modern indicate the age of the house.
Step24: YearRemodAdd
Step25: RoofStyle
Drop RoofStyle due to lack of variability
Step26: RoofMatl
Drop this variable since it heavily lacks variability.
Step27: Exterior1st and Exterior2nd
Step28: Since both variables roughly have the same distribution and most houses have only one exterior material, keep only one of them in the model.
Step29: MasVnrType and MasVnrArea
Step30: Since about half the houses don't have masonry veneer walls, the area is mostly 0.
Step31: It makes more sense to simply record if a house has masonry veneer rather than worrying about unnecessary details, such as its area and type.
Step32: ExterQual and ExterCond
Step33: Compare the original quality of the material to its current condition.
Step34: Let -1 = depreciated, 0 = no change, 1 = improved indicate the change in condition over time
Step35: Foundation
Step36: BsmtQual
This variable records the basement height.
Step37: NA means no basement, and not missing.
Step38: BsmtCond
Step39: NA means no basement, and not missing.
Step40: Drop BsmtCond due to lack of variability
BsmtExposure
Step41: NA means no basement, and not missing.
Step42: BsmtFinType1 and BsmtFinType2
Step43: NA means no basement, and not missing.
Step44: Most basements can be used as a living quarter, however that space is unfinished. Hence, it makes more sense here to record if the basement is unfinished and use TotalBsmtSF to provide the total basement area.
Step45: Drop BsmtFinType due to lack of variability
BsmtFinSF1, BsmtFinSF2, BsmtUnfSF
Step46: Since most basements are unfinished, there are just as many zeroes as in type1 and type2.
Step47: Drop this variable since it is redundant with BsmtFinType.
Step48: There are many nonzero values since most basements are unfinished. Drop this variable since it is redundant with BsmtFinType.
TotalBsmtSF
Step49: Heating
Step50: Drop Heating since it severely lacks variability
HeatingQC
Step51: CentralAir
Drop CentralAir since it lacks variability
Step52: Electrical
Drop Electrical since it lacks variability
Step53: 1stFlrSF
Step54: 2ndFlrSF
Step55: GrLivArea
Step56: There is a positive moderate correlation between GrLivArea and 1stFlrSF.
Step57: TotalArea
Step58: LowQualFinSF
Drop this variable due to lack of data
Step59: BsmtFullBath, BsmtHalfBath, FullBath, HalfBath
Step60: Combine all full bathrooms, regardless of whether it is in the basement or not.
Step61: Similarly, combine all half bathrooms.
Step62: Has half bathroom?
Step63: Bedroom
Step64: KitchenAbvGr
Drop this variable due to lack of data. Also, KitchenQual assumes that a kitchen is available.
Step65: KitchenQual
Step66: TotRmsAbvGrd
Step67: Functional
Drop due to lack of variability
Step68: Fireplaces
Step69: FireplaceQu
Step70: NA means no fireplace, and not missing.
Step71: GarageType
Step72: NA means No Garage, and doesn't mean missing.
Step73: GarageYrBlt
Step74: Since 159 houses don't have garages, replace NA with 0, including the max year!
Step75: Let 0 = ancient, 1 = older, 2 = newer, 3 = modern indicate the age of the garage
Step76: GarageFinish
Step77: NA means no garage, and not missing.
Step78: GarageCars and GarageArea
Step79: It is clear from the graph that there is a relationship between GarageCars and GarageArea. In fact, both variables have a strong positive correlation of 0.89. Garages that can fit 1-3 cars form the three biggest clusters in the graph. We choose to merge garages that can fit more than 3 cars with the garages that can fit exactly three cars since they share same area square footage range and that data doesn't have much density. Also, in order to reduce redundancy, we keep only one of the two variables. We picked GarageCars since most people understand how many cars they can fit in their garage as opposed to its area. In fact, the number of cars that a garage can fit can be viewed as binning classes for the garage area.
Step80: GarageQual and GarageCond
Step81: NA means no garage, and not missing.
Step82: Let's compare the original garage quality (GarageQual) to the current garage condition (GarageCond).
Step83: Drop GarageRemod due to lack of variability
WoodDeckSF
Step84: OpenPorchSF, EnclosedPorch, 3SsnPorch, ScreenPorch
Step85: PoolArea and PoolQC
Step86: An overwhelming majority of the houses don't have a pool! Drop both PoolArea and PoolQC.
Step87: Fence
Step88: NA means no fence, and not missing. Drop Fence due to lack of variability
Step89: MiscFeature and MiscVal
Step90: NA means no miscellaneous features, and not missing. Since vast majority of the houses don't have miscellaneous features, there is no need to investigate its dollar value. Drop both variables.
Step91: MoSold
Step92: YrSold
Step93: SaleType
Step94: Drop SaleType since most houses had conventional warranty deed
SaleCondition
Step95: Drop SaleCondition since most sales were normal
SalePrice
Step96: Pre-processed New Data
Step97: Random Forest | Python Code:
# drop ID
data.drop(["Id"], axis = 1, inplace=True)
data.head()
Explanation: Explore variables one at a time
End of explanation
data["MSSubClass"].isnull().sum()
sns.countplot(x="MSSubClass", data=data, palette=sns.color_palette("Blues", 1));
Explanation: MSSubClass
End of explanation
MSSubClass = data["MSSubClass"] \
.replace([20, 30, 40, 45, 50, 120, 150], 1) \
.replace([60, 70, 75, 160], 2) \
.replace([80, 85, 90, 180, 190], 0)
new = pd.DataFrame({"MSSubClass": MSSubClass}) # create new dataframe
new["MSSubClass"].value_counts()
Explanation: MSSubClass is categorical, though it is coded as numeric. Combine all the 1 and 1.5 story dwelling types as 1, 2 and 2.5 story types as 2, and the rest as 0.
End of explanation
data["MSZoning"].isnull().sum()
data["MSZoning"].value_counts()
# Residential zone?: Y (1) or N (0)
MSZoning = data["MSZoning"].fillna("RL").map(lambda x: 1 if (x == "RL") or (x == "RM") else 0)
pd.Series(MSZoning).value_counts()
Explanation: MSZoning
End of explanation
data["LotFrontage"].describe()
new["LotFrontage"] = data["LotFrontage"].fillna(data["LotFrontage"].median())
new["LotFrontage"].describe()
sns.distplot(new["LotFrontage"]);
# Handle outliers
lowerbound, upperbound = np.percentile(new["LotFrontage"], [1, 99]) # calculate 1st and 99th percentile
new["LotFrontage"] = np.clip(new["LotFrontage"], upperbound, lowerbound) # clip values outside these percentile range
sns.distplot(new["LotFrontage"]);
# Rescale numeric features to mean 0 and std dev 1
scaler = StandardScaler()
new["LotFrontage"] = scaler.fit_transform(new[["LotFrontage"]])
sns.distplot(new["LotFrontage"]);
Explanation: MSZoning has a severe lack of variability, providing little-to-no information in predicting SalePrice. Thus, this variable will not be used in the model.
LotFrontage
End of explanation
data["LotArea"].describe()
sns.distplot(data["LotArea"]);
lowerbound, upperbound = np.percentile(data["LotArea"], [1, 95])
new["LotArea"] = np.clip(data["LotArea"], lowerbound, upperbound)
sns.distplot(new["LotArea"]);
new["LotArea"] = scaler.fit_transform(new[["LotArea"]])
sns.distplot(new["LotArea"]);
Explanation: LotArea
End of explanation
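# Optional helper (not in the original notebook): LotFrontage and LotArea above, and
# several square-footage columns below, all follow the same clip-outliers-then-standardize
# pattern, so it could be factored out. Assumes missing values were already filled.
def clip_and_scale(series, lower_pct=1, upper_pct=99):
    lo, hi = np.percentile(series, [lower_pct, upper_pct])
    clipped = np.clip(series, lo, hi)
    return StandardScaler().fit_transform(clipped.values.reshape(-1, 1)).ravel()
# e.g. new["LotArea"] = clip_and_scale(data["LotArea"], upper_pct=95)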
data["Street"].isnull().sum()
data["Street"].value_counts()
data["PavedDrive"].isnull().sum()
data["PavedDrive"].value_counts()
# Paved driveway?: Y (1) or No (0)
PavedDrive = np.where(data["PavedDrive"] == "Y", 1, 0)
pd.Series(PavedDrive).value_counts()
Explanation: Street and PavedDrive
End of explanation
data["Alley"].isnull().sum()
Explanation: Street and PavedDrive severely lack variability, providing little-to-no information in predicting SalePrice. Thus, these variables will not be used in the model.
Alley
End of explanation
data["LotShape"].isnull().sum()
data["LotShape"].value_counts()
# Regular lot shape?: Y (1) or No (0)
new["LotShape"] = np.where(data["LotShape"] == "Reg", 1, 0)
new["LotShape"].value_counts()
Explanation: NA means that houses have no alley, which is not the same as a missing value. Drop this variable since it lacks variability and provides limited information towards the prediction.
LotShape
End of explanation
data["LandContour"].isnull().sum()
data["LandContour"].value_counts()
# Flat land?: Y (1) or No (0)
LandContour = np.where(data["LandContour"] == "Lvl", 1, 0)
pd.Series(LandContour).value_counts()
Explanation: LandContour
End of explanation
data["Utilities"].value_counts()
Explanation: Drop LandContour due to lack of variability in its values.
Utilities
Drop Utilities due to severe lack of variability!
End of explanation
data["LotConfig"].isnull().sum()
data["LotConfig"].value_counts()
# Inside lot?: Y (1) or No (0)
new["LotConfig"] = np.where(data["LotConfig"] == "Inside", 1, 0)
new["LotConfig"].value_counts()
Explanation: LotConfig
End of explanation
data["LandSlope"].isnull().sum()
data["LandSlope"].value_counts()
# Gentle slope?: Y (1) or No (0)
LandSlope = np.where(data["LandSlope"] == "Gtl", 1, 0)
pd.Series(LandSlope).value_counts()
Explanation: LandSlope
Drop LandSlope due to lack of variability
End of explanation
data["Neighborhood"].isnull().sum()
data["Neighborhood"].value_counts()
# Frequency Encoding
freqNghd = data.groupby("Neighborhood").size() / len(data)
new["Neighborhood"] = data["Neighborhood"].map(freqNghd)
new["Neighborhood"].value_counts()
Explanation: Neighborhood
End of explanation
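# Optional helper (not in the original notebook): the same frequency-encoding recipe is
# reused later for Exterior and Foundation, so it can be wrapped once. Equivalent to the
# groupby version above when the column has no missing values.
def frequency_encode(column):
    freqs = column.value_counts(normalize=True) # category -> relative frequency
    return column.map(freqs)
# e.g. new["Neighborhood"] = frequency_encode(data["Neighborhood"])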
data["Condition1"].value_counts()
data["Condition2"].value_counts()
Explanation: Condition1 and Condition2
End of explanation
Condition = np.logical_or(data["Condition1"] == "Norm", data["Condition2"] == "Norm")
Condition.value_counts()
Explanation: Most houses have normal condition. If either conditions are normal, record the overall condition as normal.
End of explanation
data["BldgType"].isnull().sum()
data["BldgType"].value_counts()
# Single-family detached?: Y (1) or N (0)
BldgType = np.where(data["BldgType"] == "1Fam", 1, 0)
pd.Series(BldgType).value_counts()
Explanation: Drop this variable since it lacks variability.
BldgType
End of explanation
data["HouseStyle"].isnull().sum()
data["HouseStyle"].value_counts()
Explanation: Drop BldgType due to lack of variability
HouseStyle
End of explanation
new["HouseStyle"] = data["HouseStyle"].replace(["1Story", "1.5Fin", "1.5Unf"], 1) \
.replace(["2Story", "2.5Fin", "2.5Unf"], 2) \
.replace(["SFoyer", "SLvl"], 0)
new["HouseStyle"].value_counts()
Explanation: Combine 1-1.5 story as 1, 2-2.5 story as 2, and the rest as 0.
End of explanation
data["OverallQual"].isnull().sum()
sns.countplot(x="OverallQual", data=data, palette=sns.color_palette("Blues", 1));
Explanation: OverallQual
End of explanation
OverallQual = data["OverallQual"].map(lambda x: -1 if x < 4 else 0 if x < 8 else 1)
pd.Series(OverallQual).value_counts()
Explanation: Combine very poor/poor/fair/less than average as -1, average/above average/good as 0, very good/excellent/very excellent as 1.
End of explanation
data["OverallCond"].isnull().sum()
sns.countplot(x="OverallCond", data=data, palette=sns.color_palette("Blues", 1));
# Above average condition?: Y (1) or No (0)
OverallCond = np.where(data["OverallCond"] > 4, 1, 0)
pd.Series(OverallCond).value_counts()
Explanation: Drop OverallQual due to lack of variability
OverallCond
Drop OverallCond due to lack of variability
End of explanation
data["YearBuilt"].describe()
Explanation: YearBuilt
End of explanation
new["YearBuilt"] = pd.qcut(data["YearBuilt"], q = 4, labels = [0, 1, 2, 3])
pd.concat((new["YearBuilt"], data["YearBuilt"]), axis = 1).head()
new["YearBuilt"].value_counts()
Explanation: Years usually need to be binned. Let 0 = ancient, 1 = older, 2 = newer, 3 = modern indicate the age of the house.
End of explanation
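# Optional check (not in the original notebook): retbins=True exposes the actual year
# cut-points behind the four quartile "age" labels used above.
_, year_bins = pd.qcut(data["YearBuilt"], q=4, labels=[0, 1, 2, 3], retbins=True)
year_bins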
data["YearRemodAdd"].describe()
remodel = np.subtract(data["YearRemodAdd"], data["YearBuilt"])
remodel.describe()
# House remodeled? Y (1) or No (0)
new["RemodAdd"] = pd.Series(remodel.map(lambda x: 0 if x <= 0 else 1))
new["RemodAdd"].value_counts()
Explanation: YearRemodAdd
End of explanation
data["RoofStyle"].isnull().sum()
data["RoofStyle"].value_counts()
# Gable roof?: Y (1) or No (0)
RoofStyle = np.where(data["RoofStyle"] == "Gable", 1, 0)
pd.Series(RoofStyle).value_counts()
Explanation: RoofStyle
Drop RoofStyle due to lack of variability
End of explanation
data["RoofMatl"].isnull().sum()
data["RoofMatl"].value_counts()
# Standard shingle?: Y (1) or N (0)
RoofMatl = pd.Series(np.where(data["RoofMatl"] == "CompShg", 1, 0))
RoofMatl.value_counts()
Explanation: RoofMatl
Drop this variable since it heavily lacks variability.
End of explanation
print data["Exterior1st"].isnull().sum()
print data["Exterior2nd"].isnull().sum()
data["Exterior1st"].value_counts()
Exterior1st = data["Exterior1st"].fillna("Other") \
.replace(["BrkFace", "WdShing", "AsbShng", "Stucco", "BrkComm", "AsphShn", "Stone", "CBlock", "ImStucc", "Other"], "Other")
Exterior1st.value_counts()
data["Exterior2nd"].value_counts()
Exterior2nd = data["Exterior2nd"].fillna("Other") \
.replace(["Wd Shng", "BrkFace", "Stucco", "AsbShng", "Brk Cmn", "ImStucc", "Stone", "AsphShn", "CBlock", "Other"], "Other")
Exterior2nd.value_counts()
np.equal(Exterior1st, Exterior2nd).value_counts() # check if both columns are same
Explanation: Exterior1st and Exterior2nd
End of explanation
new["Exterior"] = Exterior1st
new["Exterior"].value_counts()
# Frequency Encoding
freqExt = new.groupby("Exterior").size() / len(new)
new["Exterior"] = new["Exterior"].map(freqExt)
new["Exterior"].value_counts()
Explanation: Since both variables roughly have the same distribution and most houses have only one exterior material, keep only one of them in the model.
End of explanation
data["MasVnrArea"].describe()
data["MasVnrArea"] = data["MasVnrArea"].fillna(0)
np.sum(data["MasVnrArea"] == 0)
Explanation: MasVnrType and MasVnrArea
End of explanation
data["MasVnrType"].isnull().sum()
data["MasVnrType"] = data["MasVnrType"].fillna("None")
data["MasVnrType"].value_counts()
Explanation: Since about half the houses don't have masonry veneer walls, the area is mostly 0.
End of explanation
new["MasVnr"] = np.where(data["MasVnrArea"] == 0, 0, 1)
new["MasVnr"].value_counts()
Explanation: It makes more sense to simply record if a house has masonry veneer rather than worrying about unnecessary details, such as its area and type.
End of explanation
print data["ExterQual"].isnull().sum()
print data["ExterCond"].isnull().sum()
Explanation: ExterQual and ExterCond
End of explanation
original = data["ExterQual"].map({"Ex": 5, "Gd": 4, "TA": 3, "Fa": 2, "Po": 1})
current = data["ExterCond"].map({"Ex": 5, "Gd": 4, "TA": 3, "Fa": 2, "Po": 1})
original.value_counts()
current.value_counts()
# Get the change in condition over time
cond = np.subtract(current, original)
cond.value_counts()
Explanation: Compare the original quality of the material to its current condition.
End of explanation
new["ExterCond"] = pd.Series(cond.map(lambda x: -1 if x < 0 else 1 if x > 0 else 0))
new["ExterCond"].value_counts()
Explanation: Let -1 = depreciated, 0 = no change, 1 = improved indicate the change in condition over time
End of explanation
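# Equivalent shortcut (optional, not in the original notebook): np.sign produces the same
# -1 / 0 / +1 coding of the condition change directly.
np.sign(cond).value_counts()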
data["Foundation"].isnull().sum()
data["Foundation"].value_counts()
new["Foundation"] = data["Foundation"].replace(["BrkTil", "Slab", "Stone", "Wood"], "Other")
new["Foundation"].value_counts()
# Frequency Encoding
freqFd = new.groupby("Foundation").size() / len(new)
new["Foundation"] = new["Foundation"].map(freqFd)
new["Foundation"].value_counts()
Explanation: Foundation
End of explanation
data["BsmtQual"].isnull().sum()
data["BsmtQual"].value_counts()
Explanation: BsmtQual
This variable records the basement height.
End of explanation
new["BsmtQual"] = data["BsmtQual"].fillna("None").map({"Ex": 4, "Gd": 3, "TA": 2, "Fa": 1, "Po": 1, "None": 0})
new["BsmtQual"].value_counts()
Explanation: NA means no basement, and not missing.
End of explanation
data["BsmtCond"].isnull().sum()
data["BsmtCond"].value_counts()
Explanation: BsmtCond
End of explanation
BsmtCond = data["BsmtCond"].fillna("None")
BsmtCond.value_counts()
# Typical basement condition?: Y (1) or No (0)
BsmtCond = data["BsmtCond"].fillna("None")
BsmtCond = np.where(BsmtCond == "TA", 1, 0)
pd.Series(BsmtCond).value_counts()
Explanation: NA means no basement, and not missing.
End of explanation
data["BsmtExposure"].isnull().sum()
Explanation: Drop BsmtCond due to lack of variability
BsmtExposure
End of explanation
data["BsmtExposure"].value_counts()
# Has a walkout or garden level walls?: Y (1) or No (0)
new["BsmtExposure"] = data["BsmtExposure"].fillna("None").map({"Gd": 1, "Av": 1, "Mn": 1, "No": 0, "None": 0})
new["BsmtExposure"].value_counts()
Explanation: NA means no basement, and not missing.
End of explanation
print data["BsmtFinType1"].isnull().sum()
print data["BsmtFinType2"].isnull().sum()
Explanation: BsmtFinType1 and BsmtFinType2
End of explanation
data["BsmtFinType1"].value_counts()
data["BsmtFinType2"].value_counts()
type1 = data["BsmtFinType1"].fillna("None").map({"GLQ": 3, "ALQ": 2, "Rec": 2, "BLQ": 1, "LwQ": 1, "Unf": 0, "None": 0})
type2 = data["BsmtFinType2"].fillna("None").map({"GLQ": 3, "ALQ": 2, "Rec": 2, "BLQ": 1, "LwQ": 1, "Unf": 0, "None": 0})
type1.value_counts()
type2.value_counts()
np.equal(type1, type2).value_counts() # most basements have a 2nd rating
Explanation: NA means no basement, and not missing.
End of explanation
# Unfinished basement?
BsmtFinType = pd.Series(np.logical_or(type1 == 0.0, type2 == 0.0))
pd.Series(BsmtFinType).value_counts()
Explanation: Most basements can be used as living quarters; however, that space is usually unfinished. Hence, it makes more sense here to record whether the basement is unfinished and to use TotalBsmtSF to provide the total basement area.
End of explanation
data["BsmtFinSF1"].describe()
data["BsmtFinSF2"].describe()
print((data["BsmtFinSF1"] == 0).sum())
print((data["BsmtFinSF2"] == 0).sum())
Explanation: Drop BsmtFinType due to lack of variability
BsmtFinSF1, BsmtFinSF2, BsmtUnfSF
End of explanation
BsmtFinSF = np.logical_or(data["BsmtFinSF1"] == 0, data["BsmtFinSF2"] == 0)
(BsmtFinSF == True).sum()
Explanation: Since most basements are unfinished, there are just as many zeroes as in type1 and type2.
End of explanation
data["BsmtUnfSF"].describe()
(data["BsmtUnfSF"] != 0).sum()
Explanation: Drop this variable since it is redundant with BsmtFinType.
End of explanation
data["TotalBsmtSF"].describe()
new["TotalBsmtSF"] = data["TotalBsmtSF"].fillna(data["TotalBsmtSF"].median())
sns.distplot(new["TotalBsmtSF"]);
lowerbound, upperbound = np.percentile(new["TotalBsmtSF"], [1, 99])
new["TotalBsmtSF"] = np.clip(new["TotalBsmtSF"], lowerbound, upperbound)
sns.distplot(new["TotalBsmtSF"]);
new["TotalBsmtSF"] = scaler.fit_transform(new[["TotalBsmtSF"]])
sns.distplot(new["TotalBsmtSF"]);
Explanation: There are many nonzero values since most basements are unfinished. Drop this variable since it is redundant with BsmtFinType.
TotalBsmtSF
End of explanation
data["Heating"].value_counts()
# Has gas air furnace?: Y (1) or No (0)
Heating = pd.Series(np.where(data["Heating"] == "GasA", 1, 0))
Heating.value_counts()
Explanation: Heating
End of explanation
data["HeatingQC"].isnull().sum()
data["HeatingQC"].value_counts()
new["HeatingQC"] = data["HeatingQC"].map({"Ex": 3, "Gd": 2, "TA": 2, "Fa": 1, "Po": 1})
new["HeatingQC"].value_counts()
Explanation: Drop Heating since it severely lacks variability
HeatingQC
End of explanation
data["CentralAir"].isnull().sum()
data["CentralAir"].value_counts()
Explanation: CentralAir
Drop CentralAir since it lacks variability
End of explanation
data["Electrical"].isnull().sum()
data["Electrical"].value_counts()
# Standard breaker?: Y (1) or No (0)
Electrical = np.where(data["Electrical"] == "SBrkr", 1, 0)
pd.Series(Electrical).value_counts()
Explanation: Electrical
Drop Electrical since it lacks variability
End of explanation
data["1stFlrSF"].describe()
sns.distplot(data["1stFlrSF"]);
lowerbound, upperbound = np.percentile(data["1stFlrSF"], [1, 98])
new["1stFlrSF"] = np.clip(data["1stFlrSF"], lowerbound, upperbound)
sns.distplot(new["1stFlrSF"]);
new["1stFlrSF"] = scaler.fit_transform(new[["1stFlrSF"]])
sns.distplot(new["1stFlrSF"]);
Explanation: 1stFlrSF
End of explanation
data["2ndFlrSF"].describe()
np.sum(data["2ndFlrSF"] == 0)
# Has 2nd floor?: Y (1) or No (0)
new["2ndFlr"] = np.where(data["2ndFlrSF"] == 0, 0, 1)
new["2ndFlr"].value_counts()
Explanation: 2ndFlrSF
End of explanation
data["GrLivArea"].describe()
sns.distplot(data["GrLivArea"]);
np.corrcoef(data["GrLivArea"], data["1stFlrSF"])
Explanation: GrLivArea
End of explanation
lowerbound, upperbound = np.percentile(data["GrLivArea"], [1, 99])
new["GrLivArea"] = np.clip(data["GrLivArea"], lowerbound, upperbound)
sns.distplot(new["GrLivArea"]);
new["GrLivArea"] = scaler.fit_transform(new[["GrLivArea"]])
sns.distplot(new["GrLivArea"]);
Explanation: There is a positive moderate correlation between GrLivArea and 1stFlrSF.
End of explanation
new["TotalArea"] = pd.Series(np.sum([data["GrLivArea"], data["TotalBsmtSF"]], axis = 0))
new["TotalArea"].describe()
sns.distplot(new["TotalArea"]);
lowerbound, upperbound = np.percentile(new["TotalArea"], [1, 99])
new["TotalArea"] = np.clip(new["TotalArea"], lowerbound, upperbound)
sns.distplot(new["TotalArea"]);
new["TotalArea"] = scaler.fit_transform(new[["TotalArea"]])
sns.distplot(new["TotalArea"]);
Explanation: TotalArea: NEW feature
Ground living area is calculated by measuring the outside perimeter of the house and includes only finished, habitable, above-grade living space. Finished basements and unfinished attic areas are not included in total gross living area. Let's combine the ground living area with the total basement area to inform homeowners how much total area will be available to them. Note that the living area on the first floor is included in the ground living area calculations, which explains the correlation between the two variables.
End of explanation
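A quick sanity check of the reasoning above (a sketch; it assumes the new frame built so far) is to confirm that the combined feature tracks both of its components:
print(np.corrcoef(new["TotalArea"], new["GrLivArea"])[0, 1])
print(np.corrcoef(new["TotalArea"], new["TotalBsmtSF"])[0, 1])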
data["LowQualFinSF"].describe()
np.sum(data["LowQualFinSF"] == 0)
Explanation: LowQualFinSF
Drop this variable due to lack of data
End of explanation
print(data["BsmtFullBath"].isnull().sum())
print(data["BsmtHalfBath"].isnull().sum())
print(data["FullBath"].isnull().sum())
print(data["HalfBath"].isnull().sum())
data["BsmtFullBath"].value_counts()
data["FullBath"].value_counts()
Explanation: BsmtFullBath, BsmtHalfBath, FullBath, HalfBath
End of explanation
data["BsmtFullBath"].fillna(0, inplace=True)
full = pd.Series(np.sum([data["BsmtFullBath"], data["FullBath"]], axis = 0))
full.value_counts()
new["FullBath"] = full.replace([0, 1], 1).replace([3, 4, 6], 3)
new["FullBath"].value_counts()
Explanation: Combine all full bathrooms, regardless of whether it is in the basement or not.
End of explanation
data["BsmtHalfBath"].value_counts()
data["HalfBath"].value_counts()
data["BsmtHalfBath"] = data["BsmtHalfBath"].fillna(0)
half = pd.Series(np.sum([data["BsmtHalfBath"], data["HalfBath"]], axis = 0))
half.value_counts()
Explanation: Similarly, combine all half bathrooms.
End of explanation
new["HalfBath"] = half.map(lambda x: 1 if x > 0 else 0)
new["HalfBath"].value_counts()
Explanation: Has half bathroom?: Y (1) or No (0)
End of explanation
data["BedroomAbvGr"].value_counts()
data["BedroomAbvGr"].isnull().sum()
new["BedroomAbvGr"] = data["BedroomAbvGr"].replace(0, 1).replace([4, 5, 6, 8], 4)
new["BedroomAbvGr"].value_counts()
Explanation: Bedroom
End of explanation
data["KitchenAbvGr"].value_counts()
Explanation: KitchenAbvGr
Drop this variable due to lack of data. Also, KitchenQual assumes that a kitchen is available.
End of explanation
data["KitchenQual"].value_counts()
data["KitchenQual"].isnull().sum()
new["KitchenQual"] = data["KitchenQual"].fillna("TA").map({"Ex": 3, "Gd": 3, "TA": 2, "Fa": 1, "Po": 1})
new["KitchenQual"].value_counts()
Explanation: KitchenQual
End of explanation
data["TotRmsAbvGrd"].isnull().sum()
sns.countplot(x="TotRmsAbvGrd", data=data, palette=sns.color_palette("Blues", 1));
new["TotRmsAbvGrd"] = data["TotRmsAbvGrd"].replace([2, 3, 4], 4).replace([9, 10, 11, 12, 13, 14, 15], 9)
new["TotRmsAbvGrd"].value_counts()
Explanation: TotRmsAbvGrd
End of explanation
data["Functional"].value_counts()
data["Functional"].isnull().sum()
# Typical home functionality?: Y (1) or No (0)
data["Functional"].fillna("Typ", inplace=True)
Functional = np.where(data["Functional"] == "Typ", 1, 0)
pd.Series(Functional).value_counts()
Explanation: Functional
Drop due to lack of variability
End of explanation
data["Fireplaces"].isnull().sum()
data["Fireplaces"].value_counts()
# Has a fireplace?: Y (1) or No (0)
new["Fireplaces"] = np.where(data["Fireplaces"] > 0, 1, 0)
new["Fireplaces"].value_counts()
Explanation: Fireplaces
End of explanation
data["FireplaceQu"].isnull().sum()
data["FireplaceQu"].value_counts()
Explanation: FireplaceQu
End of explanation
new["FireplaceQu"] = data["FireplaceQu"].fillna("None").map({"Ex": 3, "Gd": 3, "TA": 2, "Fa": 1, "Po": 1, "None": 0})
new["FireplaceQu"].value_counts()
Explanation: NA means no fireplace, and not missing.
End of explanation
data["GarageType"].isnull().sum()
Explanation: GarageType
End of explanation
data["GarageType"].fillna("None", inplace=True)
data["GarageType"].value_counts()
# Attached garage?: Y (1) or No (0)
new["GarageType"] = np.where(data["GarageType"] == "Attchd", 1, 0)
new["GarageType"].value_counts()
Explanation: NA means No Garage, and doesn't mean missing.
End of explanation
data["GarageYrBlt"].describe()
data["GarageYrBlt"].isnull().sum()
Explanation: GarageYrBlt
End of explanation
data["GarageYrBlt"] = data["GarageYrBlt"].fillna(0).replace(data["GarageYrBlt"].max(), 0)
data["GarageYrBlt"].describe()
Explanation: Since 159 houses don't have garages, replace NA with 0. The maximum GarageYrBlt value is also replaced with 0 because it is an implausible outlier (a likely data-entry error).
End of explanation
# Bin years
new["GarageYrBlt"] = pd.qcut(data["YearBuilt"], q = 4, labels = [0, 1, 2, 3])
new["GarageYrBlt"].value_counts()
Explanation: Let 0 = ancient, 1 = older, 2 = newer, 3 = modern indicate the age of the garage
End of explanation
data["GarageFinish"].isnull().sum()
Explanation: GarageFinish
End of explanation
data["GarageFinish"].fillna("None", inplace=True)
data["GarageFinish"].value_counts()
new["GarageFinish"] = data["GarageFinish"]
new["GarageFinish"].value_counts()
# Frequency Encoding
freqGrg = new.groupby("GarageFinish").size() / len(new)
new["GarageFinish"] = new["GarageFinish"].map(freqGrg)
new["GarageFinish"].value_counts()
Explanation: NA means no garage, and not missing.
End of explanation
data["GarageCars"].isnull().sum()
data["GarageCars"].value_counts()
data["GarageArea"].describe()
data["GarageCars"] = data["GarageCars"].fillna(2)
data["GarageArea"] = data["GarageArea"].fillna(data["GarageArea"].median())
np.corrcoef(data["GarageCars"], data["GarageArea"])
sns.boxplot(x="GarageCars", y="GarageArea", data=data, palette=sns.color_palette("Blues", 1));
Explanation: GarageCars and GarageArea
End of explanation
new["GarageCars"] = data["GarageCars"].replace([3, 4, 5], 3)
new["GarageCars"].value_counts()
Explanation: It is clear from the graph that there is a relationship between GarageCars and GarageArea; in fact, the two variables have a strong positive correlation of 0.89. Garages that fit 1-3 cars form the three biggest clusters in the graph. We merge garages that fit more than 3 cars with those that fit exactly 3, since they span the same square-footage range and there are few of them. To reduce redundancy we also keep only one of the two variables. We pick GarageCars because most people know how many cars fit in their garage rather than its exact area; in effect, the number of cars a garage can hold acts as a set of bins for the garage area.
End of explanation
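As a rough check of the binning claim above (a sketch; it reuses the already-filled columns), the area range per GarageCars value can be inspected directly:
print(data.groupby("GarageCars")["GarageArea"].agg(["mean", "min", "max"]))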
data["GarageQual"].isnull().sum()
Explanation: GarageQual and GarageCond
End of explanation
data["GarageQual"].fillna("None", inplace=True)
data["GarageQual"].value_counts()
data["GarageCond"].isnull().sum()
data["GarageCond"].fillna("None", inplace=True)
data["GarageCond"].value_counts()
Explanation: NA means no garage, and not missing.
End of explanation
original = data["GarageQual"].map({"Ex": 4, "Gd": 4, "TA": 2, "Fa": 1, "Po": 1, "None": 0})
current = data["GarageCond"].map({"Ex": 4, "Gd": 4, "TA": 2, "Fa": 1, "Po": 1, "None": 0})
cond = np.subtract(current, original)
cond.value_counts()
GarageRemod = pd.Series(cond.map(lambda x: "depreciated" if x < 0 else "improved" if x > 0 else "no change"))
pd.Series(GarageRemod).value_counts()
# Was garage remodeled?: Y (1) or No (0)
GarageRemod = np.where(GarageRemod == "no change", 1, 0)
pd.Series(GarageRemod).value_counts()
Explanation: Let's compare the original garage quality (GarageQual) to the current garage condition (GarageCond).
End of explanation
data["WoodDeckSF"].describe()
np.sum(data["WoodDeckSF"] == 0)
# Has a wood deck?: Y(1) or No (0)
new["WoodDeck"] = np.where(data["WoodDeckSF"] == 0, 0, 1)
new["WoodDeck"].value_counts()
Explanation: Drop GarageRemod due to lack of variability
WoodDeckSF
End of explanation
data["OpenPorchSF"].describe()
np.sum(data["OpenPorchSF"] == 0)
data["EnclosedPorch"].describe()
np.sum(data["EnclosedPorch"] == 0)
data["3SsnPorch"].describe()
np.sum(data["3SsnPorch"] == 0)
data["ScreenPorch"].describe()
np.sum(data["ScreenPorch"] == 0)
new["TotalPorchSF"] = np.sum([data["OpenPorchSF"],
data["EnclosedPorch"],
data["3SsnPorch"],
data["ScreenPorch"]],
axis = 0)
new["TotalPorchSF"].describe()
np.sum(new["TotalPorchSF"] == 0)
sns.distplot(new["TotalPorchSF"]);
lowerbound, upperbound = np.percentile(new["TotalPorchSF"], [1, 99])
new["TotalPorchSF"] = np.clip(new["TotalPorchSF"], upperbound, lowerbound)
sns.distplot(new["TotalPorchSF"]);
new["TotalPorchSF"] = scaler.fit_transform(new[["TotalPorchSF"]])
sns.distplot(new["TotalPorchSF"]);
Explanation: OpenPorchSF, EnclosedPorch, 3SsnPorch, ScreenPorch
End of explanation
data["PoolArea"].describe()
np.sum(data["PoolArea"] == 0)
Explanation: PoolArea and PoolQC
End of explanation
data["PoolQC"].value_counts()
Explanation: An overwhelming majority of the houses don't have a pool! Drop both PoolArea and PoolQC.
End of explanation
data["Fence"].isnull().sum()
Explanation: Fence
End of explanation
# Has Fence?: Y (1) or No (0)
Fence = np.where(data["Fence"].isnull(), 0 , 1)
pd.Series(Fence).value_counts()
Explanation: NA means no fence, and not missing. Drop Fence due to lack of variability
End of explanation
data["MiscFeature"].isnull().sum()
Explanation: MiscFeature and MiscVal
End of explanation
data["MiscVal"].describe()
np.sum(data["MiscVal"] == 0)
Explanation: NA means no miscellaneous features, and not missing. Since the vast majority of the houses don't have miscellaneous features, there is no need to investigate their dollar value. Drop both variables.
End of explanation
data["MoSold"].isnull().sum()
sns.countplot(x="MoSold", data=data, palette=sns.color_palette("Blues", 1));
new["MoSold"] = data["MoSold"].map({12: "winter", 1: "winter", 2: "winter",
3: "spring", 4: "spring", 5: "spring",
6: "summer", 7: "summer", 8: "summer",
9: "fall", 10: "fall", 11: "fall"})
new["MoSold"].value_counts()
# Frequency Encoding
freqMo = new.groupby("MoSold").size() / len(new)
new["MoSold"] = new["MoSold"].map(freqMo)
new["MoSold"].value_counts()
Explanation: MoSold
End of explanation
data["YrSold"].value_counts()
data["YrSold"].describe()
new["YrSold"] = data["YrSold"]
Explanation: YrSold
End of explanation
data["SaleType"].value_counts()
Explanation: SaleType
End of explanation
data["SaleCondition"].value_counts()
Explanation: Drop SaleType since most houses had conventional warranty deed
SaleCondition
End of explanation
tr["SalePrice"].describe()
sns.distplot(tr["SalePrice"]);
lowerbound, upperbound = np.percentile(tr["SalePrice"], [1, 99])
tr["SalePrice"] = np.clip(tr["SalePrice"], upperbound, lowerbound)
sns.distplot(tr["SalePrice"]);
tr["SalePrice"] = np.log1p(tr[["SalePrice"]]) # log(x+1) transform
sns.distplot(tr["SalePrice"]);
Explanation: Drop SaleCondition since most sales were normal
SalePrice
End of explanation
new.head()
new.columns
xTrain = new[:1460] # extract rows 0 to 1459
test = new[1460:] # extract rest of the rows
yTrain = tr["SalePrice"] # append SalePrice column
print(xTrain.shape)
print(test.shape)
Explanation: Pre-processed New Data
End of explanation
# Tune parameters
# gridRF = GridSearchCV(RandomForestRegressor(min_samples_leaf=3, n_jobs=-1),
# cv=5,
# param_grid={"n_estimators": [100, 300, 500, 700, 900]})
# gridRF.fit(xTrain, yTrain)
# gridRF.best_params_
# Create validation dataset with 60/40 split
xTrainVal, xTestVal, yTrainVal, yTestVal = train_test_split(xTrain, yTrain, test_size=.4)
# Train on validation train data
rf = RandomForestRegressor(n_estimators=700,
min_samples_leaf=3,
n_jobs=-1)
rf.fit(X=xTrainVal, y=yTrainVal)
# Predict on validation test data
yPredVal = rf.predict(xTestVal)
# Root Mean Squared Error (RMSE)
np.sqrt(mean_squared_error(yTestVal, yPredVal))
# Feature importance
featImp = sorted(zip(map(lambda x: round(x, 4), rf.feature_importances_), xTrain.columns),
reverse=True)
featImp
# Predict on test data and write to CSV
yTest = rf.predict(test)
submissionRF = pd.DataFrame({"Id": range(1461, len(yTest)+1461),
"SalePrice": np.expm1(yTest)}) # inverse of log(x+1)
submissionRF.to_csv("../submission/submissionRF.csv", index=False)
Explanation: Random Forest
End of explanation |
14,959 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mixed NB
gnb
Step1: training MultiNB & parameter tuning
cat_X => countvec
Step2: X_counts로 cv했을때
alpha
Step3: X_tfidf로 cv했을때
alpha
Step4: Tuning & Improvement
Step5: Retraining with new parameters & 1sigma rule
Step7: Gaussian & Multinomial NB fitting
gnb
Step8: Final Test
Step9: score | Python Code:
df = pd.read_csv('../resource/final_df3.csv')
sample = df.title
y = df['rating(y)'].values
real_X = df[['avg_rating']].values
cat_X = df.text.fillna("").values
Explanation: Mixed NB
gnb : uses only the single 'avg_rating' feature
mnb : alpha re-tuned with CV because the text features changed (commas and whitespace removed)
ngram_range : (1, 2)
tfidf : true
sub_alpha : 0.3
score(mae) : 0.7674
End of explanation
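Before diving in, a minimal sketch of how the two classifiers are combined below (the helper name mixed_proba is hypothetical; the code below does the same product inline and then applies a softmax, which preserves the per-row argmax):
# Treat the two models as independent: multiply their class posteriors, then renormalize per row
def mixed_proba(p_gauss, p_multi):
    p = p_gauss * p_multi
    return p / p.sum(axis=1, keepdims=True)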
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, TfidfTransformer
count_vect = CountVectorizer()
X_counts = count_vect.fit_transform(cat_X)
tfidf_vect = TfidfVectorizer()
X_tfidf = tfidf_vect.fit_transform(cat_X)
from sklearn.cross_validation import StratifiedKFold
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import mean_absolute_error
Explanation: training MultiNB & parameter tuning
cat_X => countvec
End of explanation
cv = StratifiedKFold(y, n_folds=5, random_state=51)
i_range = []
score_range = []
sigma = []
for a in np.arange(0, 2, 0.01):
mnb = MultinomialNB(alpha = a)
scores = np.zeros(5)
for i, (train_idx, test_idx) in enumerate(cv):
X_train = X_counts[train_idx]
y_train = y[train_idx]
X_test = X_counts[test_idx]
y_test = y[test_idx]
mnb.fit(X_train, y_train)
y_pred = mnb.predict(X_test)
scores[i] = mean_absolute_error(y_test, y_pred)
i_range.append(a)
score_range.append(np.mean(scores))
sigma.append(np.std(scores))
best_idx = np.argmin(score_range)
best_alpha = i_range[best_idx]
best_score = score_range[best_idx]
sigma
plt.figure(figsize = (15, 5))
plt.plot(i_range, score_range)
plt.plot(i_range, np.array(score_range) + sigma, 'b--')
plt.plot(i_range, np.array(score_range) - sigma, 'b--')
plt.axhline(best_score + sigma[best_idx], linestyle=':', color='r')
plt.axvline(best_alpha, linestyle=':', color='r')
def find_nearest(array, value):
idx = (np.abs(array-value)).argmin()
return idx
sub_alpha = i_range[find_nearest(score_range, best_score+sigma[best_idx])]
sub_score = best_score+sigma[best_idx]
plt.scatter(sub_alpha, sub_score, s=100, c='red')
plt.xlim(0, 2)
plt.ylabel('CV score(mae)')
plt.xlabel('alpha')
print("best alpha : ", best_alpha)
print("best score : ", best_score)
print(' 1-sigma : ', round(sigma[best_idx], 4))
print('='*25)
print("sub_opt alpha : ", sub_alpha)
print("sub_opt score : ", sub_score)
Explanation: CV results with X_counts (raw counts)
alpha : 0.74
score : 0.819739769701
End of explanation
cv = StratifiedKFold(y, n_folds=5, random_state=51)
i_range = []
score_range = []
sigma = []
for a in np.arange(0, 1, 0.01):
mnb = MultinomialNB(alpha = a)
scores = np.zeros(5)
for i, (train_idx, test_idx) in enumerate(cv):
X_train = X_tfidf[train_idx]
y_train = y[train_idx]
X_test = X_tfidf[test_idx]
y_test = y[test_idx]
mnb.fit(X_train, y_train)
y_pred = mnb.predict(X_test)
scores[i] = mean_absolute_error(y_test, y_pred)
i_range.append(a)
score_range.append(np.mean(scores))
sigma.append(np.std(scores))
best_idx = np.argmin(score_range)
best_alpha = i_range[best_idx]
best_score = score_range[best_idx]
sigma
plt.figure(figsize = (15, 5))
plt.plot(i_range, score_range)
plt.plot(i_range, np.array(score_range) + sigma, 'b--')
plt.plot(i_range, np.array(score_range) - sigma, 'b--')
plt.axhline(best_score + sigma[best_idx], linestyle=':', color='r')
plt.axvline(best_alpha, linestyle=':', color='r')
def find_nearest(array, value):
idx = (np.abs(array-value)).argmin()
return idx
sub_alpha = i_range[find_nearest(score_range, best_score+sigma[best_idx])]
sub_score = best_score+sigma[best_idx]
plt.scatter(sub_alpha, sub_score, s=100, c='red')
plt.xlim(0, 1)
plt.ylabel('CV score(mae)')
plt.xlabel('alpha')
print("best alpha : ", best_alpha)
print("best score : ", best_score)
print(' 1-sigma : ', round(sigma[best_idx], 4))
print('='*25)
print("sub_opt alpha : ", sub_alpha)
print("sub_opt score : ", sub_score)
Explanation: CV results with X_tfidf (TF-IDF features)
alpha : 0.23
score : 0.791257638511
End of explanation
from sklearn.pipeline import Pipeline
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
from sklearn.grid_search import GridSearchCV
parameters = {
'vect__ngram_range': [(1, 1), (1, 2), (1, 3), (1, 4), ],
'tfidf__use_idf' : [True, False],
'clf__alpha' : np.arange(0, 1, 0.01),
}
gs_clf = GridSearchCV(text_clf, parameters, cv=5, scoring='mean_absolute_error', n_jobs=-1)
gs_clf = gs_clf.fit(cat_X, y)
best_parameters, score, _ = max(gs_clf.grid_scores_, key=lambda x: x[1])
for param_name in sorted(parameters.keys()):
print("{name}: {best}".format(
name=param_name, best=best_parameters[param_name]
))
print("="*25)
print('score :', score)
Explanation: Tuning & Improvement
End of explanation
cv = StratifiedKFold(y, n_folds=5, random_state=51)
i_range = []
score_range = []
sigma = []
for a in np.arange(0, 0.45, 0.01):
text_clf = Pipeline([
('vect', CountVectorizer(ngram_range=(1, 2))),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB(alpha=a)),
])
scores = np.zeros(5)
for i, (train_idx, test_idx) in enumerate(cv):
X_train = cat_X[train_idx]
y_train = y[train_idx]
X_test = cat_X[test_idx]
y_test = y[test_idx]
text_clf.fit(X_train, y_train)
y_pred = text_clf.predict(X_test)
scores[i] = mean_absolute_error(y_test, y_pred)
i_range.append(a)
score_range.append(np.mean(scores))
sigma.append(np.std(scores))
best_idx = np.argmin(score_range)
best_alpha = i_range[best_idx]
best_score = score_range[best_idx]
sigma
plt.figure(figsize = (15, 5))
plt.plot(i_range, score_range)
plt.plot(i_range, np.array(score_range) + sigma, 'b--')
plt.plot(i_range, np.array(score_range) - sigma, 'b--')
plt.axhline(best_score + sigma[best_idx], linestyle=':', color='r')
plt.axvline(best_alpha, linestyle=':', color='r')
def find_nearest(array, value):
idx = (np.abs(array-value)).argmin()
return idx
sub_alpha = i_range[find_nearest(score_range, best_score+sigma[best_idx])]
sub_score = best_score+sigma[best_idx]
plt.scatter(sub_alpha, sub_score, s=100, c='red')
plt.xlim(0, 0.45)
plt.ylabel('CV score(mae)')
plt.xlabel('alpha')
print("best alpha : ", best_alpha)
print("best score : ", best_score)
print(' 1-sigma : ', round(sigma[best_idx], 4))
print('='*25)
print("sub_opt alpha : ", sub_alpha)
print("sub_opt score : ", sub_score)
Explanation: Retraining with new parameters & 1sigma rule
End of explanation
from sklearn.naive_bayes import GaussianNB, MultinomialNB
gnb = GaussianNB()
mnb = Pipeline([
('vect', CountVectorizer(ngram_range=(1, 2),)),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB(alpha=0.3)),
])
gnb.fit(real_X, y)
gnb_pred = gnb.predict(real_X)
gnb_prob = gnb.predict_proba(real_X)
mnb.fit(cat_X, y)
mnb_pred = mnb.predict(cat_X)
mnb_prob = mnb.predict_proba(cat_X)
mix_prob = np.multiply(gnb_prob, mnb_prob)
mix_prob.shape
def softmax(w, t=1.0):
"""Calculate the softmax of a list of numbers w.
Parameters
----------
w : list of numbers
t : float
Return
------
a list of the same length as w of non-negative numbers
Examples
--------
>>> softmax([0.1, 0.2])
array([ 0.47502081, 0.52497919])
>>> softmax([-0.1, 0.2])
array([ 0.42555748, 0.57444252])
>>> softmax([0.9, -10])
array([ 9.99981542e-01, 1.84578933e-05])
>>> softmax([0, 10])
array([ 4.53978687e-05, 9.99954602e-01])
"""
e = np.exp(np.array(w) / t)
dist = e / np.sum(e)
return dist
mix_prob_softmax = np.zeros((544, 5))
for i in range(544):
mix_prob_softmax[i] = softmax(mix_prob[i])
mix_prob_softmax
np.sum(mix_prob_softmax[0])
mix_pred = np.zeros(544, )
for i in range(544):
mix_pred[i] = np.argmax(mix_prob_softmax[i])
mix_pred += 1  # ratings are integers from 1 to 5, so shift the argmax index up by one
mix_pred
Explanation: Gaussian & Multinomial NB fitting
gnb : uses only the single 'avg_rating' feature
mnb : alpha re-tuned with CV because the text features changed (commas and whitespace removed)
ngram_range : (1, 2)
tfidf : true
sub_alpha : 0.3
score(mae) : 0.7674
End of explanation
test_df = pd.read_excel('../resource/test_df.xlsx')
test_sample = test_df.title
test_y = test_df['my_rating'].values
test_real_X = test_df[['avg_rating']].values
test_cat_X = test_df.text
test_watcha_y = test_df['watcha_rating'].values
gnb_test_pred = gnb.predict(test_real_X)
gnb_test_prob = gnb.predict_proba(test_real_X)
mnb_test_pred = mnb.predict(test_cat_X)
mnb_test_prob = mnb.predict_proba(test_cat_X)
mix_test_prob = np.multiply(gnb_test_prob, mnb_test_prob)
mix_test_prob_softmax = np.zeros((12, 5))
for i in range(12):
mix_test_prob_softmax[i] = softmax(mix_test_prob[i])
mix_test_prob_softmax
np.sum(mix_test_prob_softmax[0])
mix_test_pred = np.zeros(12, )
for i in range(12):
mix_test_pred[i] = np.argmax(mix_test_prob_softmax[i])
mix_test_pred += 1  # ratings are integers from 1 to 5, so shift the argmax index up by one
mix_test_pred
test_df['predict'] = mix_test_pred
test_df
Explanation: Final Test
End of explanation
mix_score = mean_absolute_error(mix_test_pred, test_y)
watcha_score = mean_absolute_error(test_watcha_y, test_y)
print('mix_score :', mix_score)
print('watcha_score :', watcha_score)
# also try rounding watcha_rating to integers and measuring that score
test_watchar_round_y = np.round(test_watcha_y,)
mean_absolute_error(test_watchar_round_y, test_y)
Explanation: score
End of explanation |
14,960 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grade
Step1: 1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
Step2: 2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
Step3: 3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?
Tip
Step4: 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
Step5: 5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present?
Tip
Step6: 6) What section talks about motorcycles the most?
Tip
Step7: 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?
<p>Tip
Step8: 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews? | Python Code:
import requests
Explanation: Grade: 8 / 8
All API's: http://developer.nytimes.com/
Article search API: http://developer.nytimes.com/article_search_v2.json
Best-seller API: http://developer.nytimes.com/books_api.json#/Documentation
Test/build queries: http://developer.nytimes.com/
Tip: Remember to include your API key in all requests! And their interactive web thing is pretty bad. You'll need to register for the API key.
End of explanation
dates = ['2009-05-10', '2010-05-09', '2009-06-21', '2010-06-20']
for date in dates:
response = requests.get('https://api.nytimes.com/svc/books/v3/lists//.json?list-name=hardcover-fiction&published-date=' + date + '&api-key=1a25289d587a49b7ba8128badd7088a2')
data = response.json()
print('On', date, 'this was the hardcover fiction NYT best-sellers list:')
for item in data['results']:
for book in item['book_details']:
print(book['title'])
print('')
Explanation: 1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
End of explanation
cat_dates = ['2009-06-06', '2015-06-06']
for date in cat_dates:
cat_response = requests.get('https://api.nytimes.com/svc/books/v3/lists/names.json?published-date=' + date + '&api-key=1a25289d587a49b7ba8128badd7088a2')
cat_data = cat_response.json()
print('On', date + ', these were the different book categories the NYT ranked:')
categories = []
for result in cat_data['results']:
categories.append(result['list_name'])
print(', '.join(set(categories)))
print('')
Explanation: 2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
End of explanation
gaddafis = ['Gadafi', 'Gaddafi', 'Kadafi', 'Qaddafi']
for gaddafi in gaddafis:
g_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=' + gaddafi + '+libya&api-key=1a25289d587a49b7ba8128badd7088a2')
g_data = g_response.json()
print('There are', g_data['response']['meta']['hits'], 'instances of the spelling', gaddafi + '.')
# TA-COMMENT: As per usual, your commented code is excellent! I love how you're thinking through what might work.
# #HELP try 1.
# #Doesn't show next pages.
# gaddafis = ['Gadafi', 'Gaddafi', 'Kadafi', 'Qaddafi']
# for gaddafi in gaddafis:
# g_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=' + gaddafi + '+libya&page=0&api-key=1a25289d587a49b7ba8128badd7088a2')
# g_data = g_response.json()
# print('There are', len(g_data['response']['docs']), 'instances of the spelling', gaddafi)
# #HELP try 2. What I want to do next is
# #if the number of articles != 10 , stop
# #else, add 1 to the page number
# #Tell it to loop until the end result is not 10
# #but right now it keeps crashing
# #Maybe try by powers of 2.
# import time, sys
# pages = range(400)
# total_articles = 0
# for page in pages:
# g_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=gaddafi+libya&page=' + str(page) + '&api-key=1a25289d587a49b7ba8128badd7088a2')
# g_data = g_response.json()
# articles_on_pg = len(g_data['response']['docs'])
# total_articles = total_articles + articles_on_pg
# print(total_articles)
# time.sleep(0.6)
#HELP try 3. Trying by powers of 2.
#OMG does 'hits' means the number of articles with this text?? If so, where could I find that in the README??
# numbers = range(10)
# pages = []
# for number in numbers:
# pages.append(2 ** number)
# #temp
# print(pages)
# import time, sys
# total_articles = 0
# for page in pages:
# g_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=gaddafi+libya&page=' + str(page) + '&api-key=1a25289d587a49b7ba8128badd7088a2')
# g_data = g_response.json()
# articles_on_pg = len(g_data['response']['docs'])
# #temp
# meta_on_pg = g_data['response']['meta']
# print(page, articles_on_pg, meta_on_pg)
# time.sleep(1)
# #HELP (troubleshooting the page number that returns a keyerror)
# #By trial and error, it seems like "101" breaks it. 100 is fine.
# g_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=gadafi+libya&page=101&api-key=1a25289d587a49b7ba8128badd7088a2')
# g_data = g_response.json()
# articles_on_pg = len(g_data['response']['docs'])
# print(articles_on_pg)
Explanation: 3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?
Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy.
End of explanation
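As a follow-up to the commented "HELP" attempts in the previous cell, a hedged sketch of the page-by-page counting loop described there might look like the following (count_by_paging is a hypothetical helper; the 'hits' field already gives the total directly, so this is only illustrative):
import time
def count_by_paging(query, api_key, max_pages=100):
    total = 0
    for page in range(max_pages):
        r = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json',
                         params={'q': query, 'page': page, 'api-key': api_key})
        docs = r.json()['response']['docs']
        total = total + len(docs)
        if len(docs) < 10:  # a short (or empty) page means we've run out of results
            break
        time.sleep(1)  # stay under the API rate limit
    return total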
hip_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&begin_date=19950101&sort=oldest&api-key=1a25289d587a49b7ba8128badd7088a2')
hip_data = hip_response.json()
first_hipster = hip_data['response']['docs'][0]
print('The first hipster article of 1995 was titled', first_hipster['headline']['main'] + '.\nCheck it out:\n' + first_hipster['lead_paragraph'])
Explanation: 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
End of explanation
decade_range = range(5)
date_attributes = []
for decade in decade_range:
date_attributes.append('begin_date=' + str(1950 + decade*10) +'0101&end_date=' + str(1959 + decade*10) + '1231')
date_attributes.append('begin_date=20100101')
for date in date_attributes:
gm_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q="gay+marriage"&' + date + '&api-key=1a25289d587a49b7ba8128badd7088a2')
gm_data = gm_response.json()
hits = gm_data['response']['meta']['hits']
print(hits)
Explanation: 5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present?
Tip: You'll want to put quotes around the search term so it isn't just looking for "gay" and "marriage" in the same article.
Tip: Write code to find the number of mentions between Jan 1, 1950 and Dec 31, 1959.
End of explanation
#I searched for motorcyle or motorcycles
# for motorcyles:
# {'count': 10, 'term': 'New York and Region'}
# {'count': 10, 'term': 'New York and Region'}
# {'count': 7, 'term': 'World'}
# {'count': 6, 'term': 'Arts'}
# {'count': 6, 'term': 'Business'}
# {'count': 5, 'term': 'U.S.'}
# for motorcycle:
# {'count': 24, 'term': 'Sports'}
# {'count': 24, 'term': 'Sports'}
# {'count': 20, 'term': 'New York and Region'}
# {'count': 16, 'term': 'U.S.'}
# {'count': 14, 'term': 'Arts'}
# {'count': 8, 'term': 'Business'}
moto_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcyle+OR+motorcyles&facet_field=section_name&api-key=1a25289d587a49b7ba8128badd7088a2')
moto_data = moto_response.json()
# #temp. Answer: dict
# print(type(moto_data))
# #temp. Answer: ['status', 'copyright', 'response']
# print(moto_data.keys())
# #temp. Answer: dict
# print(type(moto_data['response']))
# #temp. Answer: ['docs', 'meta', 'facets']
# print(moto_data['response'].keys())
# #temp. Answer: dict
# print(type(moto_data['response']['facets']))
# #temp. Answer: 'section_name'
# print(moto_data['response']['facets'].keys())
# #temp. Answer: dict
# print(type(moto_data['response']['facets']['section_name']))
# #temp. Answer:'terms'
# print(moto_data['response']['facets']['section_name'].keys())
# #temp. Answer: list
# print(type(moto_data['response']['facets']['section_name']['terms']))
# #temp. It's a list of dictionaries, with a count and a section name for each one.
# print(moto_data['response']['facets']['section_name']['terms'][0])
sections = moto_data['response']['facets']['section_name']['terms']
the_most = 0
for section in sections:
if section['count'] > the_most:
the_most = section['count']
the_most_name = section['term']
print(the_most_name, 'talks about motorcycles the most, with', the_most, 'articles.')
# #Q: WHY DO SO FEW ARTICLES MENTION MOTORCYCLES?
# #A: MAYBE BECAUSE MANY ARTICLES AREN'T IN SECTIONS?
# #temp. Answer: {'hits': 312, 'offset': 0, 'time': 24}
# print(moto_data['response']['meta'])
# #temp. Answer: ['document_type', 'blog', 'multimedia', 'pub_date',
# #'news_desk', 'keywords', 'byline', '_id', 'headline', 'snippet',
# #'source', 'lead_paragraph', 'web_url', 'print_page', 'slideshow_credits',
# #'abstract', 'section_name', 'word_count', 'subsection_name', 'type_of_material']
# print(moto_data['response']['docs'][0].keys())
# #temp. Answer: Sports
# #print(moto_data['response']['docs'][0]['section_name'])
# #temp.
# # Sports
# # Sports
# # Sports
# # None
# # Multimedia/Photos
# # Multimedia/Photos
# # Multimedia/Photos
# # New York and Region
# # None
# # New York and Region
# # New York and Region
# for article in moto_data['response']['docs']:
# print(article['section_name'])
# #temp. 10. There are only 10 because only 10 show up in search results.
# print(len(moto_data['response']['docs']))
Explanation: 6) What section talks about motorcycles the most?
Tip: You'll be using facets
End of explanation
offsets = range(3)
picks_by_group = []
for offset in offsets:
picks_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=' + str(offset * 20) + '&api-key=1a25289d587a49b7ba8128badd7088a2')
picks_data = picks_response.json()
results = picks_data['results']
picks = 0
for result in results:
if result['critics_pick'] == 1:
picks = picks + 1
picks_by_group.append(picks)
print('In the most recent', offset * 20, 'to', offset * 20 + 20, 'movies, the critics liked', picks, 'movies.')
print('In the past', (offset + 1) * 20, 'reviews, the critics liked', sum(picks_by_group), 'movies.')
print('')
# #temp. Answer: ['has_more', 'status', 'results', 'copyright', 'num_results']
# print(picks_data.keys())
# #temp. 20
# #not what we're looking for
# print(picks_data['num_results'])
# #temp. Answer: list
# print(type(picks_data['results']))
# #temp.
# print(picks_data['results'][0])
# #temp. Answer: ['display_title', 'headline', 'mpaa_rating', 'critics_pick',
# #'publication_date', 'link', 'summary_short', 'byline', 'opening_date', 'multimedia', 'date_updated']
# print(picks_data['results'][0].keys())
Explanation: 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?
<p>Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.
End of explanation
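Following the tip above, an alternative sketch keeps one cumulative list and slices it (the variable names are hypothetical; the API key is the same one used throughout this notebook):
all_results = []
for offset in (0, 20, 40):
    r = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?offset='
                     + str(offset) + '&api-key=1a25289d587a49b7ba8128badd7088a2')
    all_results = all_results + r.json()['results']
for n in (20, 40, 60):
    n_picks = sum(result['critics_pick'] for result in all_results[:n])
    print("Critics' Picks in the last", n, 'reviews:', n_picks)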
offsets = range(2)
bylines = []
for offset in offsets:
picks_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=' + str(offset * 20) + '&api-key=1a25289d587a49b7ba8128badd7088a2')
picks_data = picks_response.json()
for result in picks_data['results']:
bylines.append(result['byline'])
print(bylines)
# I tried Counter, but there were two most common results, and it only gave me one.
# from collections import Counter
# print(collections.Counter(bylines))
# print(Counter(bylines).most_common(1))
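# A sketch of a Counter-based alternative that keeps every critic tied for the top spot
# (this addresses the issue noted above: most_common(1) returns only one of the tied entries).
from collections import Counter
byline_counts = Counter(bylines)
top = max(byline_counts.values())
print([(name, n) for name, n in byline_counts.items() if n == top])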
sorted_bylines = (sorted(bylines))
numbers = range(40)
most_bylines = 0
for number in numbers:
if most_bylines < sorted_bylines.count(sorted_bylines[number]):
most_bylines = sorted_bylines.count(sorted_bylines[number])
for number in numbers:
if most_bylines == sorted_bylines.count(sorted_bylines[number]) and sorted_bylines[number] != sorted_bylines[number - 1]:
print(sorted_bylines[number], sorted_bylines.count(sorted_bylines[number]))
Explanation: 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?
End of explanation |
14,961 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Writing your own optimization loop
In this example, we will use the pyswarms.backend module to write our own optimization loop. We will try to recreate the Global best PSO using the native backend in PySwarms. Hopefully, this short tutorial can give you an idea on how to use this for your own custom swarm implementation. The idea is simple, again, let's refer to this diagram
Step1: Native global-best PSO implementation
Now, the global best PSO pseudocode looks like the following (adapted from A. Engelbrecht, "Computational Intelligence
Step2: Now, let's write our optimization loop!
Step3: Of course, we can just use the GlobalBestPSO implementation in PySwarms (it has boundary support, tolerance, initial positions, etc.) | Python Code:
# Import modules
import numpy as np
# Import sphere function as objective function
from pyswarms.utils.functions.single_obj import sphere as f
# Import backend modules
import pyswarms.backend as P
from pyswarms.backend.topology import Star
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Writing your own optimization loop
In this example, we will use the pyswarms.backend module to write our own optimization loop. We will try to recreate the Global best PSO using the native backend in PySwarms. Hopefully, this short tutorial can give you an idea on how to use this for your own custom swarm implementation. The idea is simple, again, let's refer to this diagram:
Some things to note:
- Initialize a Swarm class and update its attributes for every iteration.
- Initialize a Topology class (in this case, we'll use a Star topology), and use its methods to operate on the Swarm.
- We can also use some additional methods in pyswarms.backend depending on our needs.
Thus, for each iteration:
1. We take an attribute from the Swarm class.
2. Operate on it according to our custom algorithm with the help of the Topology class; and
3. Update the Swarm class with the new attributes.
End of explanation
my_topology = Star() # The Topology Class
my_options = {'c1': 0.6, 'c2': 0.3, 'w': 0.4} # arbitrarily set
my_swarm = P.create_swarm(n_particles=50, dimensions=2, options=my_options) # The Swarm Class
print('The following are the attributes of our swarm: {}'.format(my_swarm.__dict__.keys()))
Explanation: Native global-best PSO implementation
Now, the global best PSO pseudocode looks like the following (adapted from A. Engelbrecht, "Computational Intelligence: An Introduction, 2002):
```python
Python-version of gbest algorithm from Engelbrecht's book
for i in range(iterations):
for particle in swarm:
# Part 1: If current position is less than the personal best,
if f(current_position[particle]) < f(personal_best[particle]):
# Update personal best
personal_best[particle] = current_position[particle]
# Part 2: If personal best is less than global best,
if f(personal_best[particle]) < f(global_best):
# Update global best
global_best = personal_best[particle]
# Part 3: Update velocity and position matrices
update_velocity()
update_position()
```
As you can see, the standard PSO has a three-part scheme: update the personal best, update the global best, and update the velocity and position matrices. We'll follow this three-part scheme in our native implementation using the PySwarms backend.
Let's make a 2-dimensional swarm with 50 particles that will optimize the sphere function. First, let's initialize the important attributes in our algorithm:
End of explanation
iterations = 100 # Set 100 iterations
for i in range(iterations):
# Part 1: Update personal best
my_swarm.current_cost = f(my_swarm.position) # Compute current cost
my_swarm.pbest_cost = f(my_swarm.pbest_pos) # Compute personal best pos
my_swarm.pbest_pos, my_swarm.pbest_cost = P.compute_pbest(my_swarm) # Update and store
# Part 2: Update global best
# Note that gbest computation is dependent on your topology
if np.min(my_swarm.pbest_cost) < my_swarm.best_cost:
my_swarm.best_pos, my_swarm.best_cost = my_topology.compute_gbest(my_swarm)
# Let's print our output
if i%20==0:
print('Iteration: {} | my_swarm.best_cost: {:.4f}'.format(i+1, my_swarm.best_cost))
# Part 3: Update position and velocity matrices
# Note that position and velocity updates are dependent on your topology
my_swarm.velocity = my_topology.compute_velocity(my_swarm)
my_swarm.position = my_topology.compute_position(my_swarm)
print('The best cost found by our swarm is: {:.4f}'.format(my_swarm.best_cost))
print('The best position found by our swarm is: {}'.format(my_swarm.best_pos))
Explanation: Now, let's write our optimization loop!
End of explanation
from pyswarms.single import GlobalBestPSO
optimizer = GlobalBestPSO(n_particles=50, dimensions=2, options=my_options) # Reuse our previous options
optimizer.optimize(f, iters=100)
Explanation: Of course, we can just use the GlobalBestPSO implementation in PySwarms (it has boundary support, tolerance, initial positions, etc.):
End of explanation |
14,962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Built-in Constraints
There are a number of built-in constraints that can be applied to our system. Those added by default are listed below as well as in the API docs for b.add_constraint
Step3: esinw, ecosw
These constraints handle computing the projected eccentricity which can be helpful in that they are better representations of the geometry of a light curve and result in symmetric posteriors for near-circular orbits.
Both can be inverted to also automatically solve for 'ecc' or 'per0'.
Step4: t0
This constraint handles converting between different t0 conventions - namely providing a reference time at periastron passage (t0_perpass) and at superior conjunction (t0_supconj).
Currently, this constraint only supports inverting to be solved for 't0_supconj' (ie you cannot automatically invert this constraint to constraint phshift or per0).
Step5: freq
This constraint handles the simple conversion to frequency from period - whether that be rotational or orbital - and does support inversion to solve for 'period'.
Step6: mass
This constraint handles solving for the mass of a component by obeying Kepler's third law within the parent orbit.
It can be inverted to solve for 'sma', 'q', or 'period' (in addition to 'mass').
Step7: component sma
This constraint handles computing the semi-major axis of a component about the center of mass of its parent orbit. Note that this is not the same as the semi-major axis of the parent orbit.
This currently can be inverted to solve for 'sma' of the parent orbit, but not 'q'.
Step8: component asini
This constraint handles computing the projected semi-major axis of a component about the center of mass of its parent orbit. Note that this is not the same as the asini of the parent orbit.
This currently can be inverted to solve for 'sma' of the parent orbit, but not 'q' or 'incl'.
Step9: requiv_max
This constraint handles solving for the maxium equivalent radius (for a detached system).
For a semi-detached system, the radius itself is constrained to be exactly this value.
Step10: rotation period
This constraint handles computing the rotation period of a star given its synchronicity parameter (syncpar).
It can be inverted to solve for any of the three parameters 'period' (both rotational and orbital) and 'syncpar'.
Step11: pitch/yaw (incl/long_an)
pitch constrains the relation between the orbital and rotational inclination whereas yaw constrains the relation between the orbital and rotational long_an. When pitch and yaw are set to 0, the system is aligned. | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Advanced: Built-In Constraints
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
b.filter(qualifier='asini', context='constraint')
b.get_parameter(qualifier='asini', component='binary', context='constraint')
Explanation: Built-in Constraints
There are a number of built-in constraints that can be applied to our system. Those added by default are listed below as well as in the API docs for b.add_constraint:
asini
These constraints handle computing the projected semi-major axis (either for an orbit or a star) along the line of sight and can be automatically inverted to solve for either 'asini', 'sma', or 'incl'.
End of explanation
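As a quick illustration of what "inverted" means here (a sketch, left commented so it does not alter the bundle used in the cells below): PHOEBE's flip_constraint can swap which parameter the constraint solves for.
# b.flip_constraint('asini@binary', solve_for='incl')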
b.get_parameter(qualifier='esinw', context='constraint')
b.get_parameter(qualifier='ecosw', context='constraint')
Explanation: esinw, ecosw
These constraints handle computing the projected eccentricity which can be helpful in that they are better representations of the geometry of a light curve and result in symmetric posteriors for near-circular orbits.
Both can be inverted to also automatically solve for 'ecc' or 'per0'.
End of explanation
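For reference, a minimal sketch of the algebra behind these two constraints (standard definitions written with plain numpy rather than PHOEBE's constraint system; the sample values are hypothetical):
# esinw = ecc * sin(per0) and ecosw = ecc * cos(per0), so the inversion is:
esinw_val, ecosw_val = 0.01, 0.02  # hypothetical projected-eccentricity values
ecc_val = np.sqrt(esinw_val**2 + ecosw_val**2)
per0_val = np.degrees(np.arctan2(esinw_val, ecosw_val))
print(ecc_val, per0_val)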
b.get_parameter(qualifier='t0_perpass', context='constraint')
Explanation: t0
This constraint handles converting between different t0 conventions - namely providing a reference time at periastron passage (t0_perpass) and at superior conjunction (t0_supconj).
Currently, this constraint only supports inverting to solve for 't0_supconj' (i.e., you cannot automatically invert this constraint to constrain phshift or per0).
End of explanation
b.filter(qualifier='freq', context='constraint')
b.get_parameter(qualifier='freq', component='binary', context='constraint')
b.get_parameter(qualifier='freq', component='primary', context='constraint')
Explanation: freq
This constraint handles the simple conversion to frequency from period - whether that be rotational or orbital - and does support inversion to solve for 'period'.
End of explanation
b.filter(qualifier='mass', context='constraint')
b.get_parameter(qualifier='mass', component='primary', context='constraint')
Explanation: mass
This constraint handles solving for the mass of a component by obeying Kepler's third law within the parent orbit.
It can be inverted to solve for 'sma', 'q', or 'period' (in addition to 'mass').
End of explanation
b.filter(qualifier='sma', context='constraint')
b.get_parameter(qualifier='sma', component='primary', context='constraint')
Explanation: component sma
This constraint handles computing the semi-major axis of a component about the center of mass of its parent orbit. Note that this is not the same as the semi-major axis of the parent orbit.
This currently can be inverted to solve for 'sma' of the parent orbit, but not 'q'.
End of explanation
b.filter(qualifier='asini', context='constraint')
b.get_parameter(qualifier='asini', component='primary', context='constraint')
Explanation: component asini
This constraint handles computing the projected semi-major axis of a component about the center of mass of its parent orbit. Note that this is not the same as the asini of the parent orbit.
This currently can be inverted to solve for 'sma' of the parent orbit, but not 'q' or 'incl'.
End of explanation
b.filter(qualifier='requiv_max', context='constraint')
b.get_parameter(qualifier='requiv_max', component='primary', context='constraint')
Explanation: requiv_max
This constraint handles solving for the maximum equivalent radius (for a detached system).
For a semi-detached system, the radius itself is constrained to be exactly this value.
End of explanation
b.filter(qualifier='period', context='constraint')
b.get_parameter(qualifier='period', component='primary', context='constraint')
Explanation: rotation period
This constraint handles computing the rotation period of a star given its synchronicity parameter (syncpar).
It can be inverted to solve for any of the three parameters 'period' (both rotational and orbital) and 'syncpar'.
End of explanation
b.filter(qualifier='incl', context='constraint')
b.get_parameter(qualifier='incl', component='primary', context='constraint')
b.filter(qualifier='long_an', context='constraint')
b.get_parameter(qualifier='long_an', component='primary', context='constraint')
Explanation: pitch/yaw (incl/long_an)
pitch constrains the relation between the orbital and rotational inclination whereas yaw constrains the relation between the orbital and rotational long_an. When pitch and yaw are set to 0, the system is aligned.
End of explanation |
14,963 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Let's say I have a 1d numpy positive integer array like this: | Problem:
import numpy as np
a = np.array([1, 0, 3])
b = np.zeros((a.size, a.max()+1))
b[np.arange(a.size), a]=1 |
14,964 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hosts
Hosts are identified in the HSC overlap check notebook. For DR1 they are (in "300 kpc" circles)
Step2: Generate queries
These queries are meant for the HSC casjobs at https
Step3: Alice Catalog Experiments
Step4: Write out
Step5: Matches with SDSS
Step6: Matches with DECaLS | Python Code:
import hosts
hostobjs = hosts.get_saga_hosts_from_google()
hosts.use_base_catalogs(hostobjs)
hschosts = tuple([h for h in hostobjs if h.name in ('Alice', 'Othello', 'Dune')])
assert len(hschosts) == 3
hschosts
for h in hschosts:
h.hscfn = os.path.join('catalogs', 'hsc_pdr1_{}.csv.gz'.format(h.name))
Explanation: Hosts
Hosts are identified in the HSC overlap check notebook. For DR1 they are (in "300 kpc" circles):
Alice: 75%
Othello: 66%
Dune: 100%
End of explanation
hsc_qry_templ = """
SELECT {cols}
FROM {table}
WHERE coneSearch(coord, {hra}, {hdec}, {radius})
"""[1:-1]
bands = 'gri'
cols = 'object_id,ra,dec'.split(',')
for band in bands:
magcol_to_add = []
magcol_to_add.append('{}cmodel_mag'.format(band))
magcol_to_add.append('{}mag_psf'.format(band))
magcol_to_add.append('{}mag_kron'.format(band))
magcol_to_add.append('{}mag_aperture10'.format(band))
magcol_to_add.append('{}mag_aperture20'.format(band))
magcol_to_add.append('{}mag_aperture30'.format(band))
for magcol in magcol_to_add:
cols.append(magcol)
cols.append(magcol+'_err')
cols.append('{}flux_kron_radius'.format(band))
cols.append('a_{}'.format(band))
cols = ','.join(cols)
table_to_query='pdr1_wide.forced'
for h in hschosts:
print('Query for host', h.name, 'which should be saved to', h.hscfn)
qry = hsc_qry_templ.format(table=table_to_query, cols=cols,
hra=h.ra, hdec=h.dec,
radius=h.environsarcmin*u.arcmin.to(u.arcsec))
print(qry, '\n')
Explanation: Generate queries
These queries are meant for the HSC casjobs at https://hsc-release.mtk.nao.ac.jp/datasearch/ (which requires login)
End of explanation
def compute_ap_sb(mags, aperturerad):
A = 2.5*np.log10(np.pi*(aperturerad.to(u.arcsec).value)**2)
return np.array(mags + A) * u.mag * u.arcsec**-2
alice = [h for h in hostobjs if h.name=='Alice']
assert len(alice)==1
alice = alice[0]
alice.hsc_cat = table.Table.read(alice.hscfn, format='ascii.csv')
alice.hsc_cat['# object_id'].name = 'object_id'
alice.hsc_cat
alice.hsc_cat['coord'] = SkyCoord.guess_from_table(alice.hsc_cat, unit=u.deg)
alice.hsc_cat['r_sb10'] = compute_ap_sb(alice.hsc_cat['rmag_aperture10'], 0.5*u.arcsec)
alice.hsc_cat['r_sb20'] = compute_ap_sb(alice.hsc_cat['rmag_aperture20'], 1.0*u.arcsec)
alice.hsc_cat['r_sb_kron'] = compute_ap_sb(alice.hsc_cat['rmag_kron'], alice.hsc_cat['rflux_kron_radius']*u.arcsec)
fig, (ax1, ax2) = plt.subplots(2,1,figsize=(10, 8))
Ar = alice.hsc_cat['a_r']
ax1.scatter(alice.hsc_cat['r_sb_kron']-Ar, alice.hsc_cat['rflux_kron_radius'], alpha=.05, s=1, lw=0)
ax2.scatter(alice.hsc_cat['r_sb_kron']-Ar, alice.hsc_cat['rcmodel_mag']-Ar, alpha=.05, s=1, lw=0)
for ax in (ax1, ax2):
ax.set_xlim(20, 30)
ax.set_xlabel('r_sb_kron')
ax1.set_ylim(0, 5)
ax2.set_ylim(30, 20)
ax1.set_ylabel('rflux_kron_radius')
ax2.set_ylabel('rcmodel_mag')
Explanation: Alice Catalog Experiments
End of explanation
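For reference, the conversion implemented in compute_ap_sb above is the usual mean surface brightness within a circular aperture, mu = m + 2.5*log10(pi*r**2); a quick numeric check with plain floats (the values are hypothetical):
r_arcsec = 0.5  # radius of a 1-arcsec-diameter aperture
m_ap = 20.0     # hypothetical aperture magnitude
print(m_ap + 2.5 * np.log10(np.pi * r_arcsec**2))  # mag / arcsec^2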
coo = alice.hsc_cat['coord']
del alice.hsc_cat['coord']
alice.hsc_cat.write('catalogs/Alice_HSC_reprocessed.fits', format='fits')
!gzip catalogs/Alice_HSC_reprocessed.fits
!ls -lh catalogs/Alice*
coo = alice.hsc_cat['coord'] = coo
Explanation: Write out
End of explanation
sdsscat = alice.get_sdss_catalog()
idx, d2d, _ = sdsscat['coord'].match_to_catalog_sky(alice.hsc_cat['coord'])
plt.hist(d2d.arcsec, range=(0, 10), log=True, bins=100, histtype='step');
matched = d2d < 1*u.arcsec
plt.scatter(sdsscat['r'][matched], alice.hsc_cat['rcmodel_mag'][idx[matched]], alpha=.2, lw=0, s=2)
plt.plot([17, 24], [17, 24], c='k')
plt.xlim(17, 24)
plt.ylim(17, 24)
plt.xlabel('SDSS r')
plt.ylabel('HSC r')
Explanation: Matches with SDSS
End of explanation
deccat = table.Table.read('catalogs/Alice_decals_dr3.fits')
deccat['coord'] = SkyCoord(deccat['ra'], deccat['dec'], unit=u.deg)
idx, d2d, _ = deccat['coord'].match_to_catalog_sky(alice.hsc_cat['coord'])
plt.hist(d2d.arcsec, range=(0, 10), log=True, bins=100, histtype='step');
matched = d2d < 1*u.arcsec
plt.scatter(deccat['mag_r'][matched], alice.hsc_cat['rcmodel_mag'][idx[matched]], alpha=.2, lw=0, s=2)
plt.plot([17, 24], [17, 24], c='k')
plt.xlim(17, 24)
plt.ylim(17, 24)
plt.xlabel('DECaLS r')
plt.ylabel('HSC r')
Explanation: Matches with DECaLS
End of explanation |
14,965 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Selection of secondary sampling units (SSUs) <a name="section2"></a>
To select the second stage sample, we need the second stage frame which is the list of all the households in the 10 selected clusters (psus). DHS, PHIA, MICS and other large scale surveys visit the selected clusters and construct the list of all households in the selected clusters.
Before starting the second stage selection, let us import the data from the first stage sampling information
Step1: In this tutorial, we will simulate the second stage frame. For the simulation, assume that the psu frame was obtained from a previous census conducted several years before. We also assume that, the change in the number of households since the previous census follows a normal distribution with a mean equal to 5% higher than the census value and a variance of 0.15 times the number of households from the census. Under these assumptions, we generate the following second stage frame of households. Note that the frame is created only for the selected PSUs.
Step2: According to the simulated second stage frame, we get the same number of households in cluster 7 as the census. However, in strata 10, 16, 29, and 64, we listed more households than during the census. And finally, we found fewer households in the remaining clusters than in the census.
Now that we have a second stage frame, let's use samplics to calculate the probabilities of selection and to select a sample. The second stage sample size is 150 households and the strategy is to select 15 households per cluster.
SSU (household) Probability of Selection <a name="section21"></a>
The second stage probabilities of selection are conditional on the first stage realization. For this stage, simple random selection (srs) and systematic selection (sys) are common methods used to select households. For this example, we use srs to select 15 households from each cluster. Conditional on the first stage, the second stage selection is a stratified srs where the clusters are the strata. More generally, we have that \begin{equation} p_{hij} = \frac{m_{hi}}{M_{hi}^{'}} \end{equation} where $p_{hij}$ is the conditional probability of selection for unit $j$ from stratum $h$ and cluster $i$, and $m_{hi}$ and $M_{hi}^{'}$ are the sample size and the number of secondary sampling units listed for stratum $h$ and cluster $i$, respectively.
In this scenario, sample size is the same in each stratum. Hence, the parameter sample_size does not need to be a Python dictionary; we will only provide 15 in the function call.
Step3: SSU (household) Selection <a name="section22"></a>
The second stage sample is selected from the SSU frame (ssu_frame) using the variable cluster as the stratification variable. The sample is selected without replacement according to the specification of the second stage design. Hence, both ssu_sample and ssu_hits sum to 150 and each selected household was hit only once (i.e. ssu_hits = 1).
```python
ssu_frame["ssu_sample"].sum()
150
ssu_frame["ssu_hits"].sum()
150
```
Step4: To use systematic selection, we just need to replace method="srs" by method="sys".
Another common approach is to use a rate for selecting the sample. Instead of selecting 15 households from 130 in the first cluster, we may want to select with a rate of 15/130, and similarly for the other clusters.
Step5: A sample is selected using the rates as follows
Step6: Let's store the first and second stages samples. | Python Code:
%%capture
%run psu_selection.ipynb
Explanation: Selection of secondary sampling units (SSUs) <a name="section2"></a>
To select the second stage sample, we need the second stage frame which is the list of all the households in the 10 selected clusters (psus). DHS, PHIA, MICS and other large scale surveys visit the selected clusters and construct the list of all households in the selected clusters.
Before starting the second stage selection, let us import the data from the first stage sampling information
End of explanation
# Create a synthetic second stage frame
census_size = psu_frame.loc[
psu_frame["psu_sample"] == 1, "number_households_census"
].values
stratum_names = psu_frame.loc[psu_frame["psu_sample"] == 1, "region"].values
cluster = psu_frame.loc[psu_frame["psu_sample"] == 1, "cluster"].values
np.random.seed(15)
listing_size = np.zeros(census_size.size)
for k in range(census_size.size):
listing_size[k] = np.random.normal(1.05 * census_size[k], 0.15 * census_size[k])
listing_size = listing_size.astype(int)
hh_id = rr_id = cl_id = []
for k, s in enumerate(listing_size):
hh_k1 = np.char.array(np.repeat(stratum_names[k], s)).astype(str)
hh_k2 = np.char.array(np.arange(1, s + 1)).astype(str)
cl_k = np.repeat(cluster[k], s)
hh_k = np.char.add(np.char.array(cl_k).astype(str), hh_k2)
hh_id = np.append(hh_id, hh_k)
rr_id = np.append(rr_id, hh_k1)
cl_id = np.append(cl_id, cl_k)
ssu_frame = pd.DataFrame(cl_id.astype(int))
ssu_frame.rename(columns={0: "cluster"}, inplace=True)
ssu_frame["region"] = rr_id
ssu_frame["household"] = hh_id
nb_obs = 15
print(f"\nFirst {nb_obs} observations of the SSU frame\n")
ssu_frame.head(nb_obs)
psu_sample = psu_frame.loc[psu_frame["psu_sample"] == 1]
ssu_counts = ssu_frame.groupby("cluster").count()
ssu_counts.drop(columns="region", inplace=True)
ssu_counts.reset_index(inplace=True)
ssu_counts.rename(columns={"household": "number_households_listed"}, inplace=True)
pd.merge(
psu_sample[["cluster", "region", "number_households_census"]],
ssu_counts[["cluster", "number_households_listed"]],
on=["cluster"],
)
Explanation: In this tutorial, we will simulate the second stage frame. For the simulation, assume that the psu frame was obtained from a previous census conducted several years before. We also assume that, the change in the number of households since the previous census follows a normal distribution with a mean equal to 5% higher than the census value and a variance of 0.15 times the number of households from the census. Under these assumptions, we generate the following second stage frame of households. Note that the frame is created only for the selected PSUs.
End of explanation
stage2_design = SampleSelection(
method="srs", stratification=True, with_replacement=False
)
ssu_frame["ssu_prob"] = stage2_design.inclusion_probs(
ssu_frame["household"], 15, ssu_frame["cluster"]
)
ssu_frame.sample(20)
Explanation: According to the simulated second stage frame, we get the same number of households in cluster 7 as the census. However, in strata 10, 16, 29, and 64, we listed more households than during the census. And finally, we found fewer households in the remaining clusters than in the census.
Now that we have a second stage frame, let's use samplics to calculate the probabilities of selection and to select a sample. The second stage sample size is 150 households and the strategy is to select 15 households per cluster.
SSU (household) Probability of Selection <a name="section21"></a>
The second stage probabilities of selection are conditional on the first stage realization. For this stage, simple random selection (srs) and systematic selection (sys) are common methods used to select households. For this example, we use srs to select 15 households from each cluster. Conditional on the first stage, the second stage selection is a stratified srs where the clusters are the strata. More generally, we have that \begin{equation} p_{hij} = \frac{m_{hi}}{M_{hi}^{'}} \end{equation} where $p_{hij}$ is the conditional probability of selection for unit $j$ from stratum $h$ and cluster $i$, and $m_{hi}$ and $M_{hi}^{'}$ are the sample size and the number of secondary sampling units listed for stratum $h$ and cluster $i$, respectively.
In this scenario, sample size is the same in each stratum. Hence, the parameter sample_size does not need to be a Python dictionary; we will only provide 15 in the function call.
End of explanation
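As a quick check of the formula above, the conditional probabilities produced by inclusion_probs can be reproduced directly from the listing counts; this is a minimal sketch that assumes ssu_counts and ssu_frame from the earlier cells are still in scope.
# Sketch: p_hij = m_hi / M'_hi, i.e. 15 households out of those listed in each cluster
check = ssu_counts.copy()
check["prob_by_formula"] = 15 / check["number_households_listed"]
check = check.merge(ssu_frame.groupby("cluster", as_index=False)["ssu_prob"].first(), on="cluster")
check[["cluster", "prob_by_formula", "ssu_prob"]]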
np.random.seed(11)
ssu_sample, ssu_hits, ssu_probs = stage2_design.select(
ssu_frame["household"], 15, ssu_frame["cluster"]
)
ssu_frame["ssu_sample"] = ssu_sample
ssu_frame["ssu_hits"] = ssu_hits
ssu_frame["ssu_probs"] = ssu_probs
ssu_frame[ssu_frame["ssu_sample"] == 1].sample(15)
Explanation: SSU (household) Selection <a name="section22"></a>
The second stage sample is selected from the SSU frame (ssu_frame) using the variable cluster as the stratification variable. The sample is selected without replacement according to the specification of the second stage design. Hence, both ssu_sample and ssu_hits sum to 150 and each selected household was hit only once (i.e. ssu_hits = 1).
```python
ssu_frame["ssu_sample"].sum()
150
ssu_frame["ssu_hits"].sum()
150
```
End of explanation
rates = np.repeat(15, 10) / ssu_counts["number_households_listed"].values
ssu_rates = dict(zip(np.unique(ssu_frame["cluster"]), rates))
ssu_rates
Explanation: To use systematic selection, we just need to replace method="srs" by method="sys".
Another common approach is to use a rate for selecting the sample. Instead of selecting 15 households from 130 in the first cluster, we may want to select with a rate of 15/130, and similarly for the other clusters.
End of explanation
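For concreteness, a minimal sketch of that substitution with the same fixed size of 15 households per cluster, assuming (as the text implies) that the sys method accepts the same arguments as the srs call above.
# Sketch: systematic selection with a fixed per-cluster sample size
stage2_design_sys = SampleSelection(method="sys", stratification=True, with_replacement=False)
sys_sample, sys_hits, sys_probs = stage2_design_sys.select(ssu_frame["household"], 15, ssu_frame["cluster"])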
np.random.seed(22)
stage2_design2 = SampleSelection(
method="sys", stratification=True, with_replacement=False
)
ssu_sample_r, ssu_hits_r, _ = stage2_design2.select(
ssu_frame["household"], stratum=ssu_frame["cluster"], samp_rate=ssu_rates
)
ssu_sample2 = pd.DataFrame(
data={
"household": ssu_frame["household"],
"ssu_sample_r": ssu_sample_r,
"ssu_hits_r": ssu_hits_r,
}
)
ssu_sample2.head(25)
Explanation: A sample is selected using the rates as follows:
End of explanation
psu_sample[["cluster", "region", "psu_prob"]].to_csv("psu_sample.csv")
ssu_sample = ssu_frame.loc[ssu_frame["ssu_sample"] == 1]
ssu_sample[["cluster", "household", "ssu_prob"]].to_csv("ssu_sample.csv")
Explanation: Let's store the first and second stages samples.
End of explanation |
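When the two files are used later, the overall inclusion probability of a sampled household is the product of its first-stage probability and the conditional second-stage probability; a small sketch, assuming the CSV files written above.
# Sketch: combine the two stages into an overall inclusion probability and a design weight
psu_s = pd.read_csv("psu_sample.csv")
ssu_s = pd.read_csv("ssu_sample.csv")
combined = ssu_s.merge(psu_s[["cluster", "psu_prob"]], on="cluster")
combined["incl_prob"] = combined["psu_prob"] * combined["ssu_prob"]
combined["design_weight"] = 1 / combined["incl_prob"]
combined.head()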
14,966 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation such as periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print(text[3920:3960])
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
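A tiny round-trip check of the two dictionaries (the ids depend on set ordering, so the exact numbers may vary):
# Sketch: vocab_to_int and int_to_vocab invert each other
v2i, i2v = create_lookup_tables(['moe', 'gets', 'a', 'beer', 'moe'])
print(v2i)
print([i2v[i] for i in range(len(i2v))])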
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
dict_punc = {
'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_parentheses||',
'--': '||dash||',
'\n': '||return||'
}
return dict_punc
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation such as periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
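The dictionary is consumed by the provided helper during preprocessing; conceptually the substitution looks roughly like this sketch (the helper's exact implementation may differ).
# Rough sketch of how the tokens get spliced in before splitting on spaces
sample = 'Moe gets it, okay? (laughs)'
for key, token in token_lookup().items():
    sample = sample.replace(key, ' {} '.format(token))
print(sample.lower().split())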
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
#
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
# TODO: Implement Function
return (input, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.5)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop]*2) # In Anna Karina example, it is multiplied by num_layers, and num_layers was set 2.
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
# TODO: Implement Function
return (cell, initial_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
# TODO: Implement Function
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return (outputs, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, rnn_size) # embed_dim can be rnn_size? should we use something else?
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs,vocab_size,
weights_initializer=tf.truncated_normal_initializer(mean=0.0,stddev=0.01),
biases_initializer=tf.zeros_initializer(),
activation_fn=None)
# TODO: Implement Function
return (logits, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
#n_batches = len(int_text)//batch_size
# ignore texts that do not fit into the last batch size
#mytext = int_text[:n_batches*batch_size]
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
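A quick sanity check on the real data, assuming the preprocessed int_text loaded at the checkpoint: the returned array should have shape (number of batches, 2, batch size, sequence length).
# Sketch: shape check for get_batches
demo_batches = get_batches(int_text, 128, 5)
print(demo_batches.shape)
print(demo_batches[0][0][:2])  # first two input sequences of the first batch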
# Number of Epochs
num_epochs = 500
# Batch Size
batch_size = 500
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = None
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.005
# Show stats for every n number of batches
show_every_n_batches = 100
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
InputTensor = loaded_graph.get_tensor_by_name('input:0')
InitialStateTensor = loaded_graph.get_tensor_by_name('initial_state:0')
FinalStateTensor = loaded_graph.get_tensor_by_name('final_state:0')
ProbsTensor = loaded_graph.get_tensor_by_name('probs:0')
return (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
p = np.squeeze(probabilities)
idx = np.argsort(p)[-1]
return int_to_vocab[idx]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
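The implementation above always takes the arg-max, which tends to make the generated script repetitive; a common alternative, sketched here, is to sample the next word from the predicted distribution instead.
# Sketch: sample the next word rather than always picking the most likely one
def pick_word_sampled(probabilities, int_to_vocab):
    p = np.squeeze(probabilities)
    p = p / p.sum()  # guard against small rounding drift
    idx = np.random.choice(len(p), p=p)
    return int_to_vocab[idx]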
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
14,967 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return x / 255.0
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
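Dividing by 255 relies on the input being 8-bit pixel values; a slightly more general variant, sketched below, rescales whatever range the data actually has.
# Sketch: min-max scaling that does not assume a 0-255 input range
def normalize_minmax(x):
    x = np.asarray(x, dtype=np.float32)
    return (x - x.min()) / (x.max() - x.min())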
def make_one_hots(n):
one_hots = {}
for i in range(n):
oh = np.zeros(n)
oh[i] = 1
one_hots[i] = oh
return one_hots
one_hots = make_one_hots(10)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
return np.array([ one_hots[i] for i in x ])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
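In the spirit of the hint, the same encoding can also be produced with an identity-matrix lookup; a minimal sketch assuming labels in the range 0 to 9.
# Sketch: one-hot encoding via np.eye
def one_hot_encode_eye(x):
    return np.eye(10)[np.asarray(x)]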
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
x = tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name="x")
return x
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
y = tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
return y
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
xshape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([
conv_ksize[0], conv_ksize[1], xshape[3], conv_num_outputs], stddev=0.05))
bias = tf.Variable(tf.constant(0.1, shape=[conv_num_outputs]))
padding = 'SAME'
strides = [1, conv_strides[0], conv_strides[1], 1]
conv2d = tf.nn.conv2d(x_tensor, weight, strides, padding) + bias
conv2d = tf.nn.relu(conv2d)
ksize = [1, pool_ksize[0], pool_ksize[1], 1]
strides = [1, pool_strides[0], pool_strides[1], 1]
conv2d = tf.nn.max_pool(conv2d, ksize, strides, padding)
return conv2d
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
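With 'SAME' padding the spatial size after each convolution or pooling step depends only on the stride (the input size divided by the stride, rounded up), which helps when sizing the flattened layer later; a small sketch of that arithmetic.
# Sketch: output spatial size under 'SAME' padding is ceil(input_size / stride)
import math
def same_pad_out(size, stride):
    return math.ceil(size / stride)
print(same_pad_out(32, 1), same_pad_out(32, 2), same_pad_out(16, 2))  # 32 16 8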
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
dim = np.prod(x_tensor.get_shape().as_list()[1:])
x2 = tf.reshape(x_tensor, [-1, dim])
return x2
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
xshape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([xshape[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.constant(0.1, shape=[num_outputs]))
fully = tf.nn.relu(tf.matmul(x_tensor, weight) + bias)
return fully
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
xshape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([xshape[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.constant(0.1, shape=[num_outputs]))
o = tf.matmul(x_tensor, weight) + bias
return o
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
conv_num_outputs_1 = 16
conv_ksize_1 = (5,5)
conv_strides_1 = (1,1)
pool_ksize_1 = (2,2)
pool_strides_1 = (1,1)
conv_num_outputs_2 = 64
conv_ksize_2 = (5,5)
conv_strides_2 = (1,1)
pool_ksize_2 = (2,2)
pool_strides_2 = (2,2)
conv_num_outputs_3 = 96
conv_ksize_3 = (2,2)
conv_strides_3 = (2,2)
pool_ksize_3 = (2,2)
pool_strides_3 = (2,2)
fully_numouts_1 = 300
fully_numouts_2 = 100
fully_numouts_3 = 20
num_outputs = 10
print('\nMODEL:')
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
x_tensor = conv2d_maxpool(x, conv_num_outputs_1, conv_ksize_1, conv_strides_1, pool_ksize_1, pool_strides_1)
print('CONV', x_tensor.get_shape().as_list())
x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs_2, conv_ksize_2, conv_strides_2, pool_ksize_2, pool_strides_2)
print('CONV', x_tensor.get_shape().as_list())
x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs_3, conv_ksize_3, conv_strides_3, pool_ksize_3, pool_strides_3)
print('CONV', x_tensor.get_shape().as_list())
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_tensor = flatten(x_tensor)
print('FLAT', x_tensor.get_shape().as_list())
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x_tensor = fully_conn(x_tensor, fully_numouts_1)
print('FC', x_tensor.get_shape().as_list())
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
print('DROP')
x_tensor = fully_conn(x_tensor, fully_numouts_2)
print('FC', x_tensor.get_shape().as_list())
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
print('DROP')
x_tensor = fully_conn(x_tensor, fully_numouts_3)
print('FC', x_tensor.get_shape().as_list())
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
print('DROP')
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
o = output(x_tensor, num_outputs)
print('OUT:', o.get_shape().as_list())
# TODO: return output
return o
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={ x:feature_batch, y:label_batch, keep_prob:keep_probability} )
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
cst = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss %f - Accuracy %.1f%%' % (cst, acc*100))
pass
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 50
batch_size = 64
keep_probability = .5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
14,968 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Timeseries with pandas
Working with time-series data is an important part of data analysis.
Starting with v0.8, the pandas library has included a rich API for time-series manipulations.
The pandas time-series API includes
Step1: Example using tick data
Sample trade ticks from 2011-11-01 to 2011-11-03 for a single security
Step2: parse_dates
Step3: Narrow Data down to just Timestamp (key), Price and Volume.
head shows the first few rows.
Step4: resample
Step5: Compute a VWAP using resample
Step6: Convenient indexing for time series data
Step7: at_time
Step8: between_time
Step9: fillna
Step10: Simple plotting
Step11: Lead/lag
Step12: shift realigns values
Step13: tshift manipulates index values
Step14: SSS
Step15: We shouldn't use shift here because
Step16: Let's play
Step17: Convert to percentage volume
Step18: Verify
Step19: Vivaldi FTW
Step20: Expanding window of hourly means for volume
Step21: Compute deviations from the hourly means
Step22: Date range creation
pd.date_range
Step23: Frequency constants
<table>
<tr><td>Name</td><td>Description</td></tr>
<tr><td>D</td><td>Calendar day</td></tr>
<tr><td>B</td><td>Business day</td></tr>
<tr><td>M</td><td>Calendar end of month</td></tr>
<tr><td>MS</td><td>Calendar start of month</td></tr>
<tr><td>BM</td><td>Business end of month</td></tr>
<tr><td>BMS</td><td>Business start of month</td></tr>
<tr><td>W-{MON, TUE,...}</td><td>Week ending on Monday, Tuesday, ...</td></tr>
<tr><td>Q-{JAN, FEB,...}</td><td>Quarter end with year ending January, February...</td></tr>
<tr><td>QS-{JAN, FEB,...}</td><td>Quarter start with year ending January, February...</td></tr>
<tr><td>BQ-{JAN, FEB,...}</td><td>Business quarter end with year ending January, February...</td></tr>
<tr><td>BQS-{JAN, FEB,...}</td><td>Business quarter start with year ending January, February...</td></tr>
<tr><td>A-{JAN, FEB, ...}</td><td>Year end (December)</td></tr>
<tr><td>AS-{JAN, FEB, ...}</td><td>Year start (December)</td></tr>
<tr><td>BA-{JAN, FEB, ...}</td><td>Business year end (December)</td></tr>
<tr><td>BAS-{JAN, FEB, ...}</td><td>Business year start (December)</td></tr>
<tr><td>H</td><td>Hour</td></tr>
<tr><td>T</td><td>Minute</td></tr>
<tr><td>s</td><td>Second</td></tr>
<tr><td>L, ms</td><td>Millisecond</td></tr>
<tr><td>U</td><td>Microsecond</td></tr>
</table>
Anchored offsets
Step24: Week anchor indicates end of week
Step25: Year anchor indicates year ending month
Step26: DatetimeIndex is a subclass of Index
Step27: Use it for Series/DataFrame labelling
Error Stop here.
Step28: Label indexing
Step29: Partial indexing
Step30: positional indexing still works
Step31: Elements of DatetimeIndex
Elements boxed as Timestamp (subclass of datetime.datetime)
Step32: Why do we need this subclass?
Step33: Implemented internally using numpy.datetime64 (dtype='M8[ns]')
Step34: Upgrade Numpy to 1.7b to fix repr issue
Step35: Or use DatetimeIndex.asobject for workaround
Step36: Other views
Step37: Integer representation
Step38: More fun with resampling and asfreq
Step39: Upsampling
Step40: asfreq
Step41: closed
Step42: label
Step43: loffset
Step44: Time zones
Localization
Step45: Localization assumes naive time is local (and not UTC)
Step46: TZ conversions
Step47: Period representation
A lot of time series data is better represented as intervals of time rather than points in time.
This is represented in pandas as Period and PeriodIndex
Creating periods
Step48: No xxx-start frequencies
Step49: PeriodRange
Step50: Converting between representations
Step51: Bugggg | Python Code:
from datetime import datetime, date, time
import sys
sys.version
import pandas as pd
from pandas import Series, DataFrame, Panel
pd.__version__
import numpy as np
np.__version__
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rc('figure', figsize=(10, 8))
mpl.__version__
Explanation: Timeseries with pandas
Working with time-series data is an important part of data analysis.
Starting with v0.8, the pandas library has included a rich API for time-series manipulations.
The pandas time-series API includes:
Creating date ranges
From files
From scratch
Manipulations: Shift, resample, filter
Field accessors (e.g., hour of day)
Plotting
Time zones (localization and conversion)
Dual representations (point-in-time vs interval)
End of explanation
import os.path
os.path.exists('data/data.csv')
with open('data/data.csv', 'r') as fh:
print(fh.readline()) # headers
print(fh.readline()) # first row
Explanation: Example using tick data
Sample trade ticks from 2011-11-01 to 2011-11-03 for a single security
End of explanation
data = pd.read_csv('data/data.csv',
parse_dates={'Timestamp': ['Date', 'Time']},
index_col='Timestamp')
data.head()
Explanation: parse_dates: use a list or dict for flexible (possibly multi-column) date parsing
End of explanation
ticks = data.ix[:, ['Price', 'Volume']]
ticks.head()
type(data)
Explanation: Narrow Data down to just Timestamp (key), Price and Volume.
head shows the first few rows.
End of explanation
ticks.count()
bars = ticks.Price.resample('1min').ohlc()
bars
bars.describe()
minute_range = bars.high - bars.low
minute_range.describe()
minute_return = bars.close / bars.open - 1
minute_return.describe()
Explanation: resample: regularization and frequency conversion
End of explanation
volume = ticks.Volume.resample('1min').sum()
value = ticks.prod(axis=1).resample('1min').sum()
vwap = value / volume
Explanation: Compute a VWAP using resample
End of explanation
vwap.ix['2011-11-01 09:27':'2011-11-01 09:32']
Explanation: Convenient indexing for time series data
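On newer pandas versions, where .ix no longer exists, the same partial-string slice can be written with .loc (an equivalent sketch, not part of the original notebook):
vwap.loc['2011-11-01 09:27':'2011-11-01 09:32']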
End of explanation
bars.open.at_time('9:30')
bars.close.at_time('16:00')
Explanation: at_time: same (b)at_time (same bat channel)
End of explanation
filtered = vwap.between_time('10:00', '16:00')
filtered.head(20)
vol = volume.between_time('10:00', '16:00')
vol.head(20)
Explanation: between_time: intraday time range
End of explanation
filtered.ix['2011-11-03':'2011-11-04'].head(20)
filled = filtered.fillna(method='pad', limit=1)
filled.ix['2011-11-03':'2011-11-04'].head(20)
vol = vol.fillna(0.)
vol.head(20)
Explanation: fillna: handling missing data
End of explanation
filled.ix['2011-11-03':'2011-11-04'].plot()
plt.ylim(103.5, 104.5)
vwap.ix['2011-11-03':'2011-11-04'].plot()
plt.ylim(103.5, 104.5)
vol.ix['2011-11-03':'2011-11-04'].plot(secondary_y=True, style='r')
Explanation: Simple plotting
End of explanation
ticks.head()
Explanation: Lead/lag
End of explanation
ticks.shift(1).head()
ticks.shift(-1).head()
Explanation: shift realigns values
End of explanation
ticks.tshift(1, 'min').head()
Explanation: tshift manipulates index values
End of explanation
minute_return.head()
mr = minute_return.between_time('9:30', '16:00')
mr.head()
lagged = mr.shift(1)
lagged.head()
Explanation: SSS: stupidly simple strategy
End of explanation
lagged.at_time('9:30')
mr.at_time('16:00')
lagged = minute_return.tshift(1, 'min').between_time('9:30', '16:00')
lagged.at_time('9:30')
Explanation: We shouldn't use shift here because shift(1) slides values across the overnight gap, so the 9:30 observation would be lagged against the previous day's 16:00 bar (compare the two at_time checks above), whereas tshift(1, 'min') moves the index itself and keeps the lag aligned within the trading session.
End of explanation
pd.ols(y=mr, x=lagged)
mr = vwap / bars.open - 1
mr = mr.between_time('9:30', '16:00')
lagged = mr.tshift(1, 'min').between_time('9:30', '16:00')
pd.ols(y=mr, x=lagged)
inter = mr * vol
inter = inter.between_time('9:30', '16:00')
lagged_inter = inter.tshift(1, 'min').between_time('9:30', '16:00')
pd.ols(y=mr, x=lagged_inter)
Explanation: Let's play
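Since pd.ols was later removed from pandas, a hedged equivalent of the last regression above using statsmodels (assuming statsmodels is installed) would be:
import statsmodels.api as sm
aligned = pd.concat({'y': mr, 'x': lagged_inter}, axis=1).dropna()
sm.OLS(aligned['y'], sm.add_constant(aligned['x'])).fit().summary()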
End of explanation
vol = vol.groupby(vol.index.day).transform(lambda x: x/x.sum())
vol.head()
Explanation: Convert to percentage volume
End of explanation
vol.resample('D', how='sum')
inter = mr * vol
inter = inter.between_time('9:30', '16:00')
lagged_inter = inter.tshift(1, 'min').between_time('9:30', '16:00')
pd.ols(y=mr, x=lagged_inter)
Explanation: Verify
End of explanation
hour = vol.index.hour
hourly_volume = vol.groupby(hour).mean()
hourly_volume.plot(kind='bar')
Explanation: Vivaldi FTW
End of explanation
hourly = vol.resample('H')
def calc_mean(hr):
hr = time(hour=hr)
data = hourly.at_time(hr)
return pd.expanding_mean(data)
df = pd.concat([calc_mean(hr) for hr in range(10, 16)])
df = df.sort_index()
df
Explanation: Expanding window of hourly means for volume
End of explanation
clean_vol = vol.between_time('10:00', '15:59')
dev = clean_vol - df.reindex(clean_vol.index, method='pad') # be careful over day boundaries
dev
inter = mr * dev
inter = inter.between_time('10:00', '15:59')
pd.ols(y=mr, x=inter.tshift(1, 'min'))
Explanation: Compute deviations from the hourly means
End of explanation
rng = pd.date_range('2005', '2012', freq='M')
rng
pd.date_range('2005', periods=7*12, freq='M')
pd.date_range(end='2012', periods=7*12, freq='M')
Explanation: Date range creation
pd.date_range
End of explanation
pd.date_range('2005', periods=4, freq='Q')
pd.date_range('2005', periods=4, freq='Q-NOV')
Explanation: Frequency constants
<table>
<tr><td>Name</td><td>Description</td></tr>
<tr><td>D</td><td>Calendar day</td></tr>
<tr><td>B</td><td>Business day</td></tr>
<tr><td>M</td><td>Calendar end of month</td></tr>
<tr><td>MS</td><td>Calendar start of month</td></tr>
<tr><td>BM</td><td>Business end of month</td></tr>
<tr><td>BMS</td><td>Business start of month</td></tr>
<tr><td>W-{MON, TUE,...}</td><td>Week ending on Monday, Tuesday, ...</td></tr>
<tr><td>Q-{JAN, FEB,...}</td><td>Quarter end with year ending January, February...</td></tr>
<tr><td>QS-{JAN, FEB,...}</td><td>Quarter start with year ending January, February...</td></tr>
<tr><td>BQ-{JAN, FEB,...}</td><td>Business quarter end with year ending January, February...</td></tr>
<tr><td>BQS-{JAN, FEB,...}</td><td>Business quarter start with year ending January, February...</td></tr>
<tr><td>A-{JAN, FEB, ...}</td><td>Year end (December)</td></tr>
<tr><td>AS-{JAN, FEB, ...}</td><td>Year start (December)</td></tr>
<tr><td>BA-{JAN, FEB, ...}</td><td>Business year end (December)</td></tr>
<tr><td>BAS-{JAN, FEB, ...}</td><td>Business year start (December)</td></tr>
<tr><td>H</td><td>Hour</td></tr>
<tr><td>T</td><td>Minute</td></tr>
<tr><td>s</td><td>Second</td></tr>
<tr><td>L, ms</td><td>Millisecond</td></tr>
<tr><td>U</td><td>Microsecond</td></tr>
</table>
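For instance, two of the aliases from the table in action (an illustrative addition, not part of the original notebook):
pd.date_range('2012-01-01', periods=3, freq='BM')     # business month ends
pd.date_range('2012-01-01', periods=3, freq='W-FRI')  # weeks anchored on Friday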
Anchored offsets
End of explanation
wkrng = pd.date_range('2012-10-25', periods=3, freq='W')
wkrng
wkrng[0].dayofweek
Explanation: Week anchor indicates end of week
End of explanation
pd.date_range('2005', periods=3, freq='A-JUN')
Explanation: Year anchor indicates year ending month
End of explanation
isinstance(rng, pd.Index)
rng[2:4]
Explanation: DatetimeIndex is a subclass of Index
End of explanation
s = Series(np.random.randn(len(rng)), rng)
s.head()
df = DataFrame(np.random.randn(len(rng), 3), rng, ['X', 'Y', 'Z'])
df.head()
Explanation: Use it for Series/DataFrame labelling
Error Stop here.
End of explanation
s[datetime(2005, 1, 31) : datetime(2006, 12, 31)] #slice end inclusive
df['2005-1-31':'2006-12-31']
Explanation: Label indexing
End of explanation
s['2005':'2006']
Explanation: Partial indexing
End of explanation
df[:2] # slice end exclusive
Explanation: positional indexing still works
End of explanation
elm = rng[0]
elm
isinstance(elm, datetime)
Explanation: Elements of DatetimeIndex
Elements boxed as Timestamp (subclass of datetime.datetime)
End of explanation
elm.nanosecond
Explanation: Why do we need this subclass? Because Timestamp stores times at nanosecond resolution (note the nanosecond attribute above), which the standard datetime.datetime type cannot represent.
End of explanation
val = rng.values
type(val)
val.dtype
Explanation: Implemented internally using numpy.datetime64 (dtype='M8[ns]')
End of explanation
val[0]
Explanation: Upgrade Numpy to 1.7b to fix repr issue
End of explanation
rng.asobject.values[0]
Explanation: Or use DatetimeIndex.asobject for workaround
End of explanation
rng.asobject
rng.to_pydatetime()
rng.to_pydatetime()[0]
Explanation: Other views
End of explanation
type(rng.asi8)
rng.asi8.dtype
rng.asi8[0]
Explanation: Integer representation
End of explanation
s.index.freqstr
s.resample('30D').head(10)
s.resample('30D', fill_method='ffill').head(10)
Explanation: More fun with resampling and asfreq
End of explanation
s.ix[:3].resample('W')
s.ix[:3].resample('W', fill_method='ffill')
Explanation: Upsampling
End of explanation
s.asfreq('Q').head()
s.resample('Q', 'last').head()
Explanation: asfreq
End of explanation
s.resample('Q').head()
s.ix[3:6].mean()
s.resample('Q', closed='left').head()
s.ix[2:5].mean()
Explanation: closed: 'left' or 'right' bin edge is closed (default is 'right')
End of explanation
s.resample('Q').head()
s.resample('Q', label='left').head()
Explanation: label: label the bin with 'left' or 'right' edge (default is 'right')
End of explanation
s.resample('Q', label='left', loffset='-1D').head()
Explanation: loffset: shift the result index
End of explanation
rng.tz
d = rng[0]
d
d.tz
localized = rng.tz_localize('US/Eastern')
Explanation: Time zones
Localization
End of explanation
localized[0]
localized.asi8[0]
rng.asi8[0]
d_utc = d.tz_localize('UTC')
d_utc
d_utc.tz_localize('US/Eastern')
Explanation: Localization assumes naive time is local (and not UTC)
End of explanation
localized.tz_convert('UTC')
d_ny = d_utc.tz_convert('US/Eastern')
d_ny
rng.tz_convert('US/Eastern')
Explanation: TZ conversions
End of explanation
p = pd.Period('2005', 'A')
p
pd.Period('2006Q1', 'Q-MAR')
pd.Period('2007-1-1', 'B')
Explanation: Period representation
A lot of time series data is better represented as intervals of time rather than points in time.
This is represented in pandas as Period and PeriodIndex
Creating periods
End of explanation
pd.Period('2005', 'AS')
Explanation: No xxx-start frequencies
End of explanation
pd.period_range('2005', '2012', freq='A')
prng = pd.period_range('2005', periods=7, freq='A')
prng
Explanation: PeriodRange
End of explanation
p
p.to_timestamp()
p.to_timestamp('M', 's')
p.to_timestamp('M', 'e')
prng.to_timestamp(how='e')
prng.to_timestamp('M', 'e')
rng
rng.to_period()
rng.to_period('D')
Explanation: Converting between representations
End of explanation
p
p.end_time
datetime(2005, 12, 31, 10, 0, 0) < p.end_time # WAT?!
Explanation: Bugggg
End of explanation |
14,969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.soft - Unit tests, setup and software engineering
We always check that code works when we write it, but that does not mean it will keep working in the future. The robustness of a piece of code comes from everything done around it to make sure it keeps running correctly.
Step1: A short story
Suppose you have implemented three functions that depend on one another: function f3 uses functions f1 and f2.
Step2: Six months later, you create a function f5 that calls a function f4 and the function f2.
Step3: Oh, and by the way, in doing so you modify function f2, and you have somewhat forgotten what function f3 was doing... In short, you do not know whether f3 will be affected by the change introduced in f2. This is the kind of problem encountered every day when software is written by several people over a long period. This notebook presents the classic building blocks used to keep a piece of software robust:
unit tests
a source-control tool
code coverage
continuous integration
writing a setup
writing the documentation
publishing on PyPi
Write a function
Any function that performs a computation, for example a function that solves a quadratic equation.
Step4: Write a unit test
A unit test is a function that checks that another function returns the expected result. The simplest option is to use the standard unittest module and to leave notebooks in favour of plain files. Other alternatives include pytest and nose.
Step5: There are badges for almost everything.
Write a setup
The setup.py file determines how the Python module must be installed for a user who did not develop it. How to build a setup | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyensae.graphhelper import draw_diagram
Explanation: 1A.soft - Unit tests, setup and software engineering
We always check that code works when we write it, but that does not mean it will keep working in the future. The robustness of a piece of code comes from everything done around it to make sure it keeps running correctly.
End of explanation
draw_diagram("blockdiag { f0 -> f1 -> f3; f2 -> f3;}")
Explanation: A short story
Suppose you have implemented three functions that depend on one another: function f3 uses functions f1 and f2.
End of explanation
draw_diagram('blockdiag { f0 -> f1 -> f3; f2 -> f3; f2 -> f5 [color="red"]; f4 -> f5 [color="red"]; }')
Explanation: Six months later, you create a function f5 that calls a function f4 and the function f2.
End of explanation
def solve_polynom(a, b, c):
# ....
return None
Explanation: Oh, and by the way, in doing so you modify function f2, and you have somewhat forgotten what function f3 was doing... In short, you do not know whether f3 will be affected by the change introduced in f2. This is the kind of problem encountered every day when software is written by several people over a long period. This notebook presents the classic building blocks used to keep a piece of software robust:
unit tests
a source-control tool
code coverage
continuous integration
writing a setup
writing the documentation
publishing on PyPi
Write a function
Any function that performs a computation, for example a function that solves a quadratic equation. One possible implementation is sketched below.
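As an illustration only, a possible way to fill in the solve_polynom stub above, returning the real roots of a*x**2 + b*x + c = 0 or None when there is none (the return convention is an assumption, not part of the exercise statement):
import math

def solve_polynom_example(a, b, c):
    # Degenerate case: the equation is linear.
    if a == 0:
        return None if b == 0 else (-c / b,)
    delta = b * b - 4 * a * c
    if delta < 0:
        return None                      # no real root
    if delta == 0:
        return (-b / (2 * a),)           # double root
    sq = math.sqrt(delta)
    return ((-b - sq) / (2 * a), (-b + sq) / (2 * a))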
End of explanation
from IPython.display import Image
try:
im = Image("https://travis-ci.com/sdpython/ensae_teaching_cs.png")
except TimeoutError:
im = None
im
from IPython.display import SVG
try:
im = SVG("https://codecov.io/github/sdpython/ensae_teaching_cs/coverage.svg")
except TimeoutError:
im = None
im
Explanation: Write a unit test
A unit test is a function that checks that another function returns the expected result. The simplest option is to use the standard unittest module and to leave notebooks in favour of plain files. Other alternatives include pytest and nose. A minimal sketch of such a test is shown at the end of this section.
Coverage
Code coverage is the set of lines executed by the unit tests. It does not always mean those lines are correct, only that they were executed one or more times without raising an error. The simplest module for this is coverage. It produces reports such as: mlstatpy/coverage.
Create a GitHub account
GitHub is the site hosting the majority of open-source project code. Create an account if you do not have one (it is free for open-source projects), then create a project and push your code into it. Your computer needs:
git
GitHub Desktop
You can read "GitHub Pour les Nuls : Pas de Panique, Lancez-Vous ! (Première Partie)" and of course do plenty of web searches.
Note
Everything you put on GitHub for an open-source project is publicly accessible. Be careful not to put anything personal there. A GitHub account is also one of the first things a recruiter will look at.
Continuous integration
Continuous integration aims to reduce the time between a modification and its release to production. Typically, a developer makes a change and a machine runs all the unit tests; if everything passes, the software is considered to work from every angle and can safely be made available to users. In short, continuous integration consists of running a battery of tests as soon as a modification is detected. If everything works, the software is built and ready to be shared, or deployed if it is a website.
Here again, for open-source projects, some sites offer this service for free:
travis - Linux
appveyor - Windows - one job at a time, no longer than one hour.
circle-ci - Linux and Mac OSX (paid)
GitLab-ci
Apart from GitLab-ci, these three services run the unit tests on machines hosted by each company. You have to register on the site, define a .travis.yml, .appveyor.yml or circle.yml file, then activate the project on the corresponding site. A few examples are available in pyquickhelper or scikit-learn. The file must be added to the project on GitHub and activated on the chosen continuous-integration site. The slightest modification will trigger a new build.
Most of these sites let you embed a badge to signal that the build works.
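For illustration, a minimal unittest sketch for the solve_polynom function above, assuming it is implemented to return the real roots as a tuple (the file name and the return convention are assumptions):
# test_solve_polynom.py (hypothetical file name)
import unittest
from solve_polynom import solve_polynom

class TestSolvePolynom(unittest.TestCase):
    def test_two_real_roots(self):
        # x**2 - 3x + 2 = 0 has roots 1 and 2
        roots = solve_polynom(1, -3, 2)
        self.assertEqual(sorted(roots), [1.0, 2.0])

    def test_no_real_root(self):
        self.assertIsNone(solve_polynom(1, 0, 1))

if __name__ == "__main__":
    unittest.main()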
End of explanation
try:
im = SVG("https://badge.fury.io/py/ensae_teaching_cs.svg")
except TimeoutError:
im = None
im
Explanation: There are badges for almost everything.
Write a setup
The setup.py file determines how the Python module must be installed for a user who did not develop it. How to build a setup: see setup.
Write the documentation
The most widely used tool is sphinx. Will you manage to use it?
Last step: PyPi
PyPi is a server that makes a module available to everyone. You simply upload the module there... See Packaging and Distributing Projects or How to submit a package to PyPI. PyPi also supports badges.
End of explanation |
14,970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 2 pre-class assignment
Goals for today's pre-class assignment
Make sure that you can get a Jupyter notebook up and running!
Learn about algorithms, computer programs, and their relationship
To devise and think about the components of an algorithm for a simple task
Learn about Python, IPython, and IPython notebooks and understand why we're using it in class.
Assignment instructions
Pre-class assignments will be composed of a combination of videos, text to read, and small assignments. The goal of these assignments is to prepare you for class the following day. You should watch the videos and read the text, and then do the assigned work. You will be graded on making a good-faith effort, not on correctness!
To make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' or 'return' key (you'll have to do this to get movies to run). To edit a cell (to add answers, for example) you double-click, add your text, and then enter it by holding down 'shift' and pressing 'enter'.
This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 2.
Step1: Algorithms
Step2: Further reading on algorithms and computer programs
note | Python Code:
# The command below this comment imports the functionality that we need to display
# YouTube videos in a Jupyter Notebook. You need to run this cell before you
# run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
Explanation: Day 2 pre-class assignment
Goals for today's pre-class assignment
Make sure that you can get a Jupyter notebook up and running!
Learn about algorithms, computer programs, and their relationship
To devise and think about the components of an algorithm for a simple task
Learn about Python, IPython, and IPython notebooks and understand why we're using it in class.
Assignment instructions
Pre-class assignments will be composed of a combination of videos, text to read, and small assignments. The goal of these assignments is to prepare you for class the following day. You should watch the videos and read the text, and then do the assigned work. You will be graded on making a good-faith effort, not on correctness!
To make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' or 'return' key (you'll have to do this to get movies to run). To edit a cell (to add answers, for example) you double-click, add your text, and then enter it by holding down 'shift' and pressing 'enter'.
This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 2. Submission instructions can be found at the end of the notebook.
End of explanation
# the command below this comment actually displays a specific YouTube video,
# with a given width and height. You can watch the video in full-screen (much higher
# resolution) mode by clicking the little box in the bottom-right corner of the video.
YouTubeVideo("jT0KZ849fak",width=640,height=360)
Explanation: Algorithms
End of explanation
YouTubeVideo("L03BzGmLUUE",width=640,height=360)
Explanation: Further reading on algorithms and computer programs
note: This isn't mandatory, but might be helpful!
Wikipedia page on algorithms
Wikipedia page on computer programs
Assignment: Algorithms and computer programs
Question 1: Come up with an algorithm for a simple task that you do every day (i.e., putting on your shoes). What are the steps of this algorithm?
Put your answer to Question 1 here! (double-click on this text to edit this cell, and hit shift+enter to save the text)
Question 2: Think about the algorithm you devised in the previous question and the video you just watched. Identify the various parts of your algorithm, as defined by the video.
Put your answer to question 2 here!
Python, IPython, and IPython notebooks
End of explanation |
14,971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tracking an Unknown Number of Objects
While SVI can be used to learn components and assignments of a mixture model, pyro.contrib.tracking provides more efficient inference algorithms to estimate assignments. This notebook demonstrates how to use the MarginalAssignmentPersistent inside SVI.
Step1: Let's consider a model with deterministic dynamics, say sinusoids with known period but unknown phase and amplitude.
Step2: It's tricky to define a fully generative model, so instead we'll separate our data generation process generate_data() from a factor graph model() that will be used in inference.
Step3: This guide uses a smart assignment solver but a naive state estimator. A smarter implementation would use message passing also for state estimation, e.g. a Kalman filter-smoother.
Step4: We'll define a global config object to make it easy to port code to argparse.
Step5: Generate data
Step6: Train | Python Code:
import math
import os
import torch
from torch.distributions import constraints
from matplotlib import pyplot
import pyro
import pyro.distributions as dist
import pyro.poutine as poutine
from pyro.contrib.tracking.assignment import MarginalAssignmentPersistent
from pyro.distributions.util import gather
from pyro.infer import SVI, TraceEnum_ELBO
from pyro.optim import Adam
%matplotlib inline
assert pyro.__version__.startswith('1.7.0')
smoke_test = ('CI' in os.environ)
Explanation: Tracking an Unknown Number of Objects
While SVI can be used to learn components and assignments of a mixture model, pyro.contrib.tracking provides more efficient inference algorithms to estimate assignments. This notebook demonstrates how to use the MarginalAssignmentPersistent inside SVI.
End of explanation
def get_dynamics(num_frames):
time = torch.arange(float(num_frames)) / 4
return torch.stack([time.cos(), time.sin()], -1)
Explanation: Let's consider a model with deterministic dynamics, say sinusoids with known period but unknown phase and amplitude.
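As a quick sanity check (an added snippet, not in the original notebook), the helper returns one (cos, sin) pair per frame:
dyn = get_dynamics(4)
print(dyn.shape)  # torch.Size([4, 2]): rows are [cos(t), sin(t)] at t = 0, 0.25, 0.5, 0.75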
End of explanation
def generate_data(args):
# Object model.
num_objects = int(round(args.expected_num_objects)) # Deterministic.
states = dist.Normal(0., 1.).sample((num_objects, 2))
# Detection model.
emitted = dist.Bernoulli(args.emission_prob).sample((args.num_frames, num_objects))
num_spurious = dist.Poisson(args.expected_num_spurious).sample((args.num_frames,))
max_num_detections = int((num_spurious + emitted.sum(-1)).max())
observations = torch.zeros(args.num_frames, max_num_detections, 1+1) # position+confidence
positions = get_dynamics(args.num_frames).mm(states.t())
noisy_positions = dist.Normal(positions, args.emission_noise_scale).sample()
for t in range(args.num_frames):
j = 0
for i, e in enumerate(emitted[t]):
if e:
observations[t, j, 0] = noisy_positions[t, i]
observations[t, j, 1] = 1
j += 1
n = int(num_spurious[t])
if n:
observations[t, j:j+n, 0] = dist.Normal(0., 1.).sample((n,))
observations[t, j:j+n, 1] = 1
return states, positions, observations
def model(args, observations):
with pyro.plate("objects", args.max_num_objects):
exists = pyro.sample("exists",
dist.Bernoulli(args.expected_num_objects / args.max_num_objects))
with poutine.mask(mask=exists.bool()):
states = pyro.sample("states", dist.Normal(0., 1.).expand([2]).to_event(1))
positions = get_dynamics(args.num_frames).mm(states.t())
with pyro.plate("detections", observations.shape[1]):
with pyro.plate("time", args.num_frames):
# The combinatorial part of the log prob is approximated to allow independence.
is_observed = (observations[..., -1] > 0)
with poutine.mask(mask=is_observed):
assign = pyro.sample("assign",
dist.Categorical(torch.ones(args.max_num_objects + 1)))
is_spurious = (assign == args.max_num_objects)
is_real = is_observed & ~is_spurious
num_observed = is_observed.float().sum(-1, True)
pyro.sample("is_real",
dist.Bernoulli(args.expected_num_objects / num_observed),
obs=is_real.float())
pyro.sample("is_spurious",
dist.Bernoulli(args.expected_num_spurious / num_observed),
obs=is_spurious.float())
# The remaining continuous part is exact.
observed_positions = observations[..., 0]
with poutine.mask(mask=is_real):
bogus_position = positions.new_zeros(args.num_frames, 1)
augmented_positions = torch.cat([positions, bogus_position], -1)
predicted_positions = gather(augmented_positions, assign, -1)
pyro.sample("real_observations",
dist.Normal(predicted_positions, args.emission_noise_scale),
obs=observed_positions)
with poutine.mask(mask=is_spurious):
pyro.sample("spurious_observations", dist.Normal(0., 1.),
obs=observed_positions)
Explanation: It's tricky to define a fully generative model, so instead we'll separate our data generation process generate_data() from a factor graph model() that will be used in inference.
End of explanation
def guide(args, observations):
# Initialize states randomly from the prior.
states_loc = pyro.param("states_loc", lambda: torch.randn(args.max_num_objects, 2))
states_scale = pyro.param("states_scale",
lambda: torch.ones(states_loc.shape) * args.emission_noise_scale,
constraint=constraints.positive)
positions = get_dynamics(args.num_frames).mm(states_loc.t())
# Solve soft assignment problem.
real_dist = dist.Normal(positions.unsqueeze(-2), args.emission_noise_scale)
spurious_dist = dist.Normal(0., 1.)
is_observed = (observations[..., -1] > 0)
observed_positions = observations[..., 0].unsqueeze(-1)
assign_logits = (real_dist.log_prob(observed_positions) -
spurious_dist.log_prob(observed_positions) +
math.log(args.expected_num_objects * args.emission_prob /
args.expected_num_spurious))
assign_logits[~is_observed] = -float('inf')
exists_logits = torch.empty(args.max_num_objects).fill_(
math.log(args.max_num_objects / args.expected_num_objects))
assignment = MarginalAssignmentPersistent(exists_logits, assign_logits)
with pyro.plate("objects", args.max_num_objects):
exists = pyro.sample("exists", assignment.exists_dist, infer={"enumerate": "parallel"})
with poutine.mask(mask=exists.bool()):
pyro.sample("states", dist.Normal(states_loc, states_scale).to_event(1))
with pyro.plate("detections", observations.shape[1]):
with poutine.mask(mask=is_observed):
with pyro.plate("time", args.num_frames):
assign = pyro.sample("assign", assignment.assign_dist, infer={"enumerate": "parallel"})
return assignment
Explanation: This guide uses a smart assignment solver but a naive state estimator. A smarter implementation would use message passing also for state estimation, e.g. a Kalman filter-smoother.
End of explanation
args = type('Args', (object,), {}) # A fake ArgumentParser.parse_args() result.
args.num_frames = 5
args.max_num_objects = 3
args.expected_num_objects = 2.
args.expected_num_spurious = 1.
args.emission_prob = 0.8
args.emission_noise_scale = 0.1
assert args.max_num_objects >= args.expected_num_objects
Explanation: We'll define a global config object to make it easy to port code to argparse.
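A hedged sketch of what that argparse port could look like; the flag names mirror the attributes above and the defaults repeat the same illustrative values:
import argparse

parser = argparse.ArgumentParser(description="track an unknown number of objects")
parser.add_argument("--num-frames", type=int, default=5)
parser.add_argument("--max-num-objects", type=int, default=3)
parser.add_argument("--expected-num-objects", type=float, default=2.0)
parser.add_argument("--expected-num-spurious", type=float, default=1.0)
parser.add_argument("--emission-prob", type=float, default=0.8)
parser.add_argument("--emission-noise-scale", type=float, default=0.1)
# args = parser.parse_args()  # would replace the fake Args object above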
End of explanation
pyro.set_rng_seed(0)
true_states, true_positions, observations = generate_data(args)
true_num_objects = len(true_states)
max_num_detections = observations.shape[1]
assert true_states.shape == (true_num_objects, 2)
assert true_positions.shape == (args.num_frames, true_num_objects)
assert observations.shape == (args.num_frames, max_num_detections, 1+1)
print("generated {:d} detections from {:d} objects".format(
(observations[..., -1] > 0).long().sum(), true_num_objects))
Explanation: Generate data
End of explanation
def plot_solution(message=''):
assignment = guide(args, observations)
states_loc = pyro.param("states_loc")
positions = get_dynamics(args.num_frames).mm(states_loc.t())
pyplot.figure(figsize=(12,6)).patch.set_color('white')
pyplot.plot(true_positions.numpy(), 'k--')
is_observed = (observations[..., -1] > 0)
pos = observations[..., 0]
time = torch.arange(float(args.num_frames)).unsqueeze(-1).expand_as(pos)
pyplot.scatter(time[is_observed].view(-1).numpy(),
pos[is_observed].view(-1).numpy(), color='k', marker='+',
label='observation')
for i in range(args.max_num_objects):
p_exist = assignment.exists_dist.probs[i].item()
position = positions[:, i].detach().numpy()
pyplot.plot(position, alpha=p_exist, color='C0')
pyplot.title('Truth, observations, and predicted tracks ' + message)
pyplot.plot([], 'k--', label='truth')
pyplot.plot([], color='C0', label='prediction')
pyplot.legend(loc='best')
pyplot.xlabel('time step')
pyplot.ylabel('position')
pyplot.tight_layout()
pyro.set_rng_seed(1)
pyro.clear_param_store()
plot_solution('(before training)')
infer = SVI(model, guide, Adam({"lr": 0.01}), TraceEnum_ELBO(max_plate_nesting=2))
losses = []
for epoch in range(101 if not smoke_test else 2):
loss = infer.step(args, observations)
if epoch % 10 == 0:
print("epoch {: >4d} loss = {}".format(epoch, loss))
losses.append(loss)
pyplot.plot(losses);
plot_solution('(after training)')
Explanation: Train
End of explanation |
14,972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pysam
Pysam is a package that provides the functionality needed to manipulate files in SAM/BAM format.
Import the pysam module
Step1: How to read the alignments from a BAM file
AlignmentFile is the class that represents a set of alignments.
An object of type AlignmentFile can be built from a SAM/BAM file as follows
Step2: Build the AlignmentFile object from the file sample.bam
Index the file sample.bam with the index() function.
Step3: Build the AlignmentFile object.
Step4: The get_index_statistics() method gives some general information.
Step5: The number of mapped and unmapped reads is obtained through mapped and unmapped
Step6: To get the references involved in the BAM file
Step7: To get the number of references involved in the file
Step8: To get the lengths of the references involved in the file
Step9: To get the length of one reference
Step10: To get the number of reads mapped to a given region of the reference
Step11: How to get the alignments from the AlignmentFile object
The instructions
Step12: The number of extracted alignments is
Step13: Now extract all the alignments that refer to reference X.
Step14: Finally, extract all the alignments on reference X that fall in the region between positions 280000 and 285000.
Step15: The instruction
Step16: How to manipulate the alignments
Step17: Retrieve the first alignment of the list.
Step18: AlignedSegment objects have a set of attributes describing the alignment.
Step19: AlignedSegment objects have a set of methods, among which
Step20: A few AlignmentFile methods
find_introns() returns, in a Counter object, the introns and their support in terms of aligned reads
bamfile.find_introns(alignment_iterator)
To find all the introns supported by the alignments of the input file it is enough to write
Step21: pileup() returns an iterator containing one PileupColumn object for each base covered by the alignments falling in the region given as argument.
bamfile.pileup()
bamfile.pileup(ref_name)
bamfile.pileup(ref_name, start_region, end_region)
Each base of the reference covered by alignments is called a pileup column.
Find all the bases of the reference covered by the alignments present in the BAM file.
Step22: The alignments in the BAM file cover a total of
Step23: Access the 1000-th pileup column (base of the reference) and
Step24: access the position on the reference
Step25: retrieve the number of alignments (segments) covering the base.
Step26: get the identifiers of the reads (queries) covering the base.
Step27: get the list of alignments covering the base.
Step28: For the first PileupRead object, access the position on the read that corresponds to the pileup base on the reference.
Step29: The alignment of the first "piled-up" read is | Python Code:
import pysam
Explanation: Pysam
Pysam is a package that provides the functionality needed to manipulate files in SAM/BAM format.
Import the pysam module
End of explanation
from pysam import AlignmentFile
help(AlignmentFile)
Explanation: How to read the alignments from a BAM file
AlignmentFile is the class that represents a set of alignments.
An object of type AlignmentFile can be built from a SAM/BAM file as follows:
samfile = pysam.AlignmentFile(sam_file_name, 'r')
bamfile = pysam.AlignmentFile(bam_file_name, 'rb')
Import the AlignmentFile class.
End of explanation
pysam.index('./sample.bam')
Explanation: Build the AlignmentFile object from the file sample.bam
Index the file sample.bam with the index() function.
End of explanation
bamfile = AlignmentFile('./sample.bam', 'rb')
Explanation: Build the AlignmentFile object.
End of explanation
bamfile.get_index_statistics()
Explanation: The get_index_statistics() method gives some general information.
End of explanation
bamfile.mapped
bamfile.unmapped
Explanation: The number of mapped and unmapped reads is obtained through mapped and unmapped:
End of explanation
bamfile.references
Explanation: To get the references involved in the BAM file:
End of explanation
bamfile.nreferences
Explanation: To get the number of references involved in the file:
End of explanation
bamfile.lengths
Explanation: To get the lengths of the references involved in the file:
End of explanation
bamfile.get_reference_length('X')
Explanation: To get the length of one reference:
End of explanation
bamfile.count('X', 280000, 300000)
Explanation: To get the number of reads mapped to a given region of the reference:
End of explanation
all_alignments = bamfile.fetch()
Explanation: How to get the alignments from the AlignmentFile object
The instructions:
bamfile.fetch()
bamfile.fetch(ref_name)
bamfile.fetch(ref_name, start_region, end_region)
return (respectively) an iterator containing:
- all the alignments of the BAM file
- all the alignments that refer to the reference ref_name
- all the alignments that fall in the sub-region between positions start_region and end_region of the reference ref_name.
A single alignment between query and reference is an object of type AlignedSegment.
Extract all the alignments of the BAM file.
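As an illustrative addition (not in the original notebook), the fetched alignments can also be filtered on the fly, for example keeping only mapped reads above an arbitrary mapping-quality threshold:
good = [aln for aln in bamfile.fetch('X', 280000, 285000)
        if not aln.is_unmapped and aln.mapping_quality >= 30]
len(good)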
End of explanation
len(list(all_alignments))
Explanation: The number of extracted alignments is:
End of explanation
all_alignments = bamfile.fetch('X')
len(list(all_alignments))
Explanation: Now extract all the alignments that refer to reference X.
End of explanation
all_alignments = bamfile.fetch('X', 280000, 285000)
len(list(all_alignments))
Explanation: Finally, extract all the alignments on reference X that fall in the region between positions 280000 and 285000.
End of explanation
first_alignment = bamfile.head(1000)
len(list(first_alignment))
Explanation: The instruction:
bamfile.head(n)
returns the first n alignments of the BAM file.
Extract the first 1000 alignments.
End of explanation
all_alignments = bamfile.fetch()
all_alignments = list(all_alignments)
help(pysam.libcalignedsegment.AlignedSegment)
Explanation: How to manipulate the alignments
End of explanation
first_alignment = all_alignments[0]
Explanation: Retrieve the first alignment of the list.
End of explanation
first_alignment.cigarstring
first_alignment.cigartuples
first_alignment.flag
first_alignment.is_paired
first_alignment.is_reverse
first_alignment.is_secondary
first_alignment.mapping_quality
first_alignment.reference_start
first_alignment.query_name
first_alignment.query_qualities
first_alignment.query_alignment_sequence
Explanation: AlignedSegment objects have a set of attributes describing the alignment.
End of explanation
first_alignment.get_tags()
first_alignment.get_forward_sequence()
#first_alignment.get_reference_sequence()
first_alignment.to_dict()
print(first_alignment.to_string())
Explanation: AlignedSegment objects have a set of methods, among which:
End of explanation
bamfile.find_introns(bamfile.fetch())
Explanation: A few AlignmentFile methods
find_introns() returns, in a Counter object, the introns and their support in terms of aligned reads
bamfile.find_introns(alignment_iterator)
To find all the introns supported by the alignments of the input file it is enough to write:
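Since the result is a collections.Counter, the best-supported introns can then be listed directly (an illustrative addition, not in the original notebook):
introns = bamfile.find_introns(bamfile.fetch('X'))
introns.most_common(5)  # the five intron intervals with the highest read support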
End of explanation
#help(pysam.libcalignedsegment.PileupColumn)
pileup_iter = bamfile.pileup()
pileup_columns = list(pileup_iter)
Explanation: pileup() returns an iterator containing one PileupColumn object for each base covered by the alignments falling in the region given as argument.
bamfile.pileup()
bamfile.pileup(ref_name)
bamfile.pileup(ref_name, start_region, end_region)
Each base of the reference covered by alignments is called a pileup column.
Find all the bases of the reference covered by the alignments present in the BAM file. A small coverage sketch follows below.
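For illustration only, per-base coverage over a small (arbitrarily chosen) region could be collected like this:
coverage = {col.pos: col.nsegments
            for col in bamfile.pileup('X', 280000, 280100)}
sorted(coverage.items())[:5]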
End of explanation
len(pileup_columns)
Explanation: The alignments in the BAM file cover a total of:
End of explanation
pileup_columns[999].set_min_base_quality(0)
Explanation: Access the 1000-th pileup column (base of the reference) and:
set the minimum base quality to 0
End of explanation
pileup_columns[999].pos
Explanation: access the position on the reference
End of explanation
pileup_columns[999].nsegments
pileup_columns[999].get_num_aligned()
Explanation: retrieve the number of alignments (segments) covering the base.
End of explanation
pileup_columns[999].get_query_names()
Explanation: get the identifiers of the reads (queries) covering the base.
End of explanation
pileup_columns[999].pileups
Explanation: get the list of alignments covering the base.
End of explanation
pileup_columns[999].pileups[0].query_position
Explanation: For the first PileupRead object, access the position on the read that corresponds to the pileup base on the reference.
End of explanation
pileup_columns[999].pileups[0].alignment
Explanation: The alignment of the first "piled-up" read is:
End of explanation |
14,973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project Title
Step1: The above table shows the first 5 tuples of the dataset which contains two columns namely the roll no and text of the assignment.
Step2: The dataset contains 1028 entries (tuples) and 2 columns as described above.
Splitting Training and testing sets
Spliting the dataset into a training and testing set by using the train_test_split method in sklearn.
Spliting the data by using the following variables
Step3: Applying Bag of Words processing to our dataset
We have split the data, next we will generate Bag of words and convert our data into the desired matrix format.
We will be using CountVectorizer() which is in sklearn library.<br />
-> First we have to fit our training data (X_train) into CountVectorizer() and return the matrix.<br />
-> Later we have to transform our testing data (X_test) to return the matrix.<br />
Here X_train is our training data for the 'textData' column in our dataset and we will be using this to train our model.<br/>
X_test is our testing data for the 'textData' column and this is the data we will be using(after transformation to a matrix) to make predictions on. We will then compare those predictions with y_test later.
Step4: Learning a vocabulary dictionary for the training data and then transforming the data into a document-term matrix and next for the testing data here we are only transforming the data into a document-term matrix. <br />
We have passed arguments to customize the count_vector which involved removing stop words of english language and puntuations.
Naive Bayes implementation using scikit-learn
Step5: Our algorithm has been trained using the training data set we can now make some predictions on the test data
stored in 'testing_data' using predict().
Step6: Evaluating our model | Python Code:
# Importing pandas library
import pandas as pd
# Loading the data set
df = pd.read_table('data.csv',
sep=',',
header=None,
names=['rollNo','textData'])
# Output printing out first 5 columns
df.head()
# from sklearn.feature_extraction import text
Explanation: Project Title : Author Labeling by text classification
Introduction :
Text classification is one of the major applications of Machine Learning. Most of the text classification projects are done by implementing any of the Machine Learning Algorithms. In this project we will use Naive_Bayes algorithm to label the text.
Input Data Preprocessing :
The student assignments of English class are used as input for this project and we have to label the text with respective author(student). The data we received has repetative content for every student, we have dropped such type of files from the input data and the student records with fewer files were also dropped.
Thus evolved data is processed to generate the ".csv" file which is used as input dataset for this project. It contains two columns, one with student roll number and other with corresponding text.
Working Theme
End of explanation
# Shape is used to get the details of the data set.
df.shape
Explanation: The above table shows the first 5 tuples of the dataset which contains two columns namely the roll no and text of the assignment.
End of explanation
# split into training and testing sets
# USE from sklearn.model_selection import train_test_split to avoid seeing deprecation warning.
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df['textData'],
df['rollNo'],
random_state=1)
# Printing out the number of rows we have in each our training and testing data.
print('Number of rows in the total set: {}'.format(df.shape[0]))
print('Number of rows in the training set: {}'.format(X_train.shape[0]))
print('Number of rows in the test set: {}'.format(X_test.shape[0]))
Explanation: The dataset contains 1028 entries (tuples) and 2 columns as described above.
Splitting Training and testing sets
Splitting the dataset into a training and testing set by using the train_test_split method in sklearn.
Splitting the data by using the following variables:<br />
-> X_train is our training data for the 'textData' column. <br />
-> y_train is our training data for the 'rollNo' column<br />
-> X_test is our testing data for the 'textData' column.<br />
-> y_test is our testing data for the 'rollNo' column.<br />
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
# Instantiate the CountVectorizer method
count_vector = CountVectorizer(stop_words="english", token_pattern=u'(?u)\\b\\w\\w+\\b')
# Fit the training data and then return the matrix
training_data = count_vector.fit_transform(X_train)
# Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer()
testing_data = count_vector.transform(X_test)
Explanation: Applying Bag of Words processing to our dataset
We have split the data, next we will generate Bag of words and convert our data into the desired matrix format.
We will be using CountVectorizer() which is in sklearn library.<br />
-> First we have to fit our training data (X_train) into CountVectorizer() and return the matrix.<br />
-> Later we have to transform our testing data (X_test) to return the matrix.<br />
Here X_train is our training data for the 'textData' column in our dataset and we will be using this to train our model.<br/>
X_test is our testing data for the 'textData' column and this is the data we will be using(after transformation to a matrix) to make predictions on. We will then compare those predictions with y_test later.
End of explanation
from sklearn.naive_bayes import MultinomialNB
naive_bayes = MultinomialNB()
naive_bayes.fit(training_data, y_train)
Explanation: Learning a vocabulary dictionary for the training data and then transforming the data into a document-term matrix and next for the testing data here we are only transforming the data into a document-term matrix. <br />
We have passed arguments to customize the count_vector which involved removing stop words of english language and puntuations.
Naive Bayes implementation using scikit-learn :
We will use sklearns sklearn.naive_bayes method to make predictions on our dataset.
Specifically, we will use the multinomial Naive Bayes implementation which is suitable for classification with discrete features (such as in our case, word counts for text classification). It takes in integer word counts as its input.
Loading the training data into the variable 'training_data' and the testing data into the variable 'testing_data'.
We will import the MultinomialNB classifier and fit the training data into the classifier using fit() and we will train the classifier using 'training_data' and 'y_train' which we have from our split.
End of explanation
predictions = naive_bayes.predict(testing_data)
Explanation: Our algorithm has been trained using the training data set we can now make some predictions on the test data
stored in 'testing_data' using predict().
End of explanation
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy score: ', format(accuracy_score(y_test, predictions)))
print('Precision score: ', format(precision_score(y_test, predictions,average="weighted")))
print('Recall score: ', format(recall_score(y_test, predictions,average="weighted")))
print('F1 score: ', format(f1_score(y_test, predictions,average="weighted")))
Explanation: Evaluating our model :
Computing the accuracy, precision, recall and F1 scores of our model using your test data 'y_test' and the predictions
we made earlier stored in the 'predictions' variable.
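As an optional extra diagnostic (not part of the original project), a per-author breakdown can be printed with scikit-learn's classification_report, reusing the same y_test and predictions:
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))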
End of explanation |
14,974 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PLEASE MAKE A COPY BEFORE CHANGING
Copyright 2022 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: Mount Google Drive
Step2: Import Appsflyer's Install Report as csv from Google Drive
Step3: Prepare and check dataframe
Step4: Plots
Step5: Contribution Ratio | Python Code:
## Import Packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: PLEASE MAKE A COPY BEFORE CHANGING
Copyright 2022 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
<b>Important</b>
This content are intended for educational and informational purposes only.
Instructions
1. Export Install Report from Appsflyer
2. Upload csv to Google Drive
3. Configure the locations below then run this colab.
Import necessary packages
End of explanation
## Mount to Google Drive
from google.colab import drive
drive.mount('/content/drive')
print("Log: Google Drive mounted on 'Files' tab")
Explanation: Mount Google Drive
End of explanation
file_path = "/content/drive/location/file.csv" # @param {type:"string"}
low_memory = False
df = pd.read_csv(file_path, low_memory=low_memory)
Explanation: Import Appsflyer's Install Report as csv from Google Drive
End of explanation
# @title Select necessary columns and prepare dataframe { vertical-output: true, display-mode: "form" }
## Select necessary columns
df = df[df['Event Name']=='install']
df = df[['Attributed Touch Type'
, 'Attributed Touch Time'
, 'Install Time'
, 'Media Source'
, 'Country Code'
, 'Contributor 1 Touch Type'
, 'Contributor 1 Touch Time'
, 'Contributor 1 Media Source'
, 'Contributor 2 Touch Type'
, 'Contributor 2 Touch Time'
, 'Contributor 2 Media Source'
, 'Contributor 3 Touch Type'
, 'Contributor 3 Touch Time'
, 'Contributor 3 Media Source'
]]
## Calculate time Touch to install time
df['Install-Touch Timestamp'] = (pd.to_datetime(df['Install Time']) -\
pd.to_datetime(df['Attributed Touch Time']))
df['Install-Touch sec'] = pd.to_timedelta(df['Install-Touch Timestamp'], unit='s')
df['Install-Touch sec'] = df['Install-Touch sec'].dt.total_seconds()
df.rename(columns={'Media Source': 'Attributed Media Source'}, inplace=True)
df.head(3)
# @title Describe data { vertical-output: true, display-mode: "form" }
grouping = "Attributed Media Source" #@param ["Attributed Media Source", "Contributor 1 Media Source", "Contributor 2 Media Source", "Contributor 3 Media Source"]
df_cont = df.groupby(grouping).agg(['count', 'mean','min','max','std'])
column = 'Install-Touch sec' # @param['Install-Touch sec']
min_entries = 500 # @param {type:"number"}
df_cont=df_cont[column].sort_values(by=['count'], ascending=False)
df_cont=df_cont[df_cont['count']>=min_entries]
##Affects next card
medias = list(df_cont.index.values)
df_cont
Explanation: Prepare and check dataframe
End of explanation
# @title Use Violin Plots to compare distributions side by side { vertical-output: true, display-mode: "form" }
col_x = 'Attributed Media Source'
col_y = 'Install-Touch sec'
sns.set(rc={'axes.facecolor':'white', 'figure.facecolor':'white'}, font_scale=1.15)
sns.set_theme(style="whitegrid")
sec_min = 0 # @param {type:"number"}
sec_max = 960 # @param {type:"number"}
f, ax = plt.subplots(figsize=(30, 8))
ax = sns.violinplot(x=col_x
, y=col_y
, data=df[((df[col_y]<=sec_max))],
palette = "tab20_r",bw=.2, cut=1, linewidth=1, order=medias)
#@title Plot histogram to compare distributions { vertical-output: true, display-mode: "form" }
max_sec = 960 # @param {type:"number"}
bsize = 10 # @param {type:"number"}
#Change baseline to desired media source
baseline = 'googleadwords_int' # @param["googleadwords_int"] {allow-input: true}
#Change media_source to compare
media_source = 'googleadwords_int' # @param["googleadwords_int"] {allow-input: true}
df_filtered = df[(df['Install-Touch sec']<= max_sec) & (df['Install-Touch sec']>= 0)]
df_filtered1 = df_filtered[df_filtered['Attributed Media Source']==baseline]
df_filtered2 = df_filtered[df_filtered['Attributed Media Source']==media_source]
sns.set(rc={'axes.facecolor':'white', 'figure.facecolor':'white'})
f, ax = plt.subplots(figsize=(20, 10))
sns.histplot( df_filtered1['Install-Touch sec'], stat='density', kde=False,
color="slategray", label=baseline, bins=range(0, max_sec + bsize, bsize))
sns.histplot( df_filtered2['Install-Touch sec'], stat='density', kde=False,
color="deeppink", label=media_source, bins=range(0, max_sec + bsize, bsize))
plt.legend()
plt.show()
Explanation: Plots
End of explanation
#@title Evaluate contribution/attribution ratio { vertical-output: true, display-mode: "form" }
df_contrib = pd.DataFrame(df['Attributed Media Source'].value_counts())\
.join(pd.DataFrame(df['Contributor 1 Media Source'].value_counts()),how='outer')\
.join(pd.DataFrame(df['Contributor 2 Media Source'].value_counts()),how='outer')\
.join(pd.DataFrame(df['Contributor 3 Media Source'].value_counts()),how='outer').fillna(0)
df_contrib['Contributions']= df_contrib[list(df_contrib.columns)[1:]].sum(axis=1)
df_contrib['Ratio']=df_contrib['Contributions'] / df_contrib['Attributed Media Source']
df_contrib=df_contrib.sort_values(by=['Attributed Media Source'],ascending=False)
df_contrib.style.format({'Attributed Media Source':"{:,}",\
'Contributor 1 Media Source':"{:,}",\
'Contributor 2 Media Source':"{:,}",\
'Contributor 3 Media Source':"{:,}",\
'Contributions':"{:,}",\
'Ratio': "{:.2%}"})
Explanation: Contribution Ratio
End of explanation |
14,975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Polarization
Following Detlefs[2012] the electric field of a monochromatic plane wave can be described as
$$
\begin{equation}
\begin{split}
\vec{E}(t,\vec{x}) =& \Re[(V_0\hat{e}_0 + V_1\hat{e}_1)e^{i(\vec{k}\cdot\vec{x} - \omega t)}]\
\hat{e}_0 =& \hat{e}_1\times\hat{e}_2\
\hat{e}_1 =& \hat{e}_2\times\hat{e}_0\
\hat{e}_2 =&\hat{k}
\end{split}
\end{equation}
$$
The complex random vector $\vec{V}=[V_0,V_1]^T$ is known as the Jones vector. The state of polarization of this field refers to its second order statistics, which can be expressed as the expectation of the outer product of the Jones vector with itself ($x^*$ denotes the complex conjugate of $x$ and $\langle x\rangle$ the expectation)
$$
\begin{equation}
C = \langle\vec{V}\otimes\vec{V}\rangle=
\begin{bmatrix} \langle V_0 V_0^*\rangle & \langle V_0 V_1^*\rangle\
\langle V_1 V_0^*\rangle & \langle V_1 V_1^*\rangle
\end{bmatrix}
\end{equation}
$$
Under Stokes' formalism the coherency matrix $C$ is decomposed as follows
$$
\begin{equation}
C = \frac{1}{2}\begin{bmatrix} S_0+S_1 & S_2-iS_3\
S_2+iS_3 & S_0-S_1
\end{bmatrix}
\end{equation}
$$
The Poynting vector can be written as a function of the Jones vector
Step1: Fully polarized
Plot the electric field vector in a plane perpendicular to the propagation direction
Step2: Partially polarized
Plot the electric field vector in a plane perpendicular to the propagation direction
Step3: Thomson scattering
The scattering direction defined by the wave vector $\vec{k}_\mathrm{sc}$ in the coordinate system of the primary beam $\vec{k}$
Step4: Thomson cross-section
The Thomson cross-section (elastic cross-section for a free electron) can be written as
$$
\begin{equation}
\begin{split}
\frac{d\sigma_T}{d\Omega} =& r_e^2\frac{I_{sc}(\phi,\theta)}{I_0}\
=&r_e^2 K(\phi,\theta)\
\sigma_T =& r_e^2\int_0^{2\pi}\int_0^\pi K(\phi,\theta) \sin\theta d\theta d\phi\
=&\frac{8\pi}{3}r_e^2
\end{split}
\end{equation}
$$
Step5: Rayleigh cross-section
The differential Rayleigh cross-section (elastic scattering cross-section of an atom) is proportional to the differential Thomson cross-section and the squared atomic form factor
$$
\begin{equation}
\frac{d\mu_R}{d\Omega} = r_e^2 K(\phi,\theta) \frac{N_A}{M}f^2(E,\theta)
\end{equation}
$$
with SI units $cm^2/g/sr$, $M$ the molar mass of the atom ($g/mol$), $N_A$ the Avogadro constant ($1/mol$) and $f$ the atomic form factor.
Step6: Compton scattering
Using the same incident and reference frames as for Thomson scattering, the Stokes parameters are transformed as follows (McMaster[1961])
$$
\begin{equation}
\begin{split}
S_{sc}=&M\cdot S\
M =& \frac{E_{sc}^2}{E^2}\begin{bmatrix}
a+b&(1-a)\cos2\beta&-(1-a)\sin2\beta&0\
(1-a)&a\cos2\beta&-a\sin2\beta&0\
0&\cos\theta\sin2\beta&\cos\theta\cos2\beta&0\
0&0&0&(b+1)\cos\theta
\end{bmatrix}\
\beta=&\frac{\pi}{2}-\phi=\arccos(\hat{e}_0\cdot\hat{e}_0^\prime)\
a=&\frac{1+\cos^2\theta}{2}\
b=&\frac{E-E_{sc}}{2 m_e c^2}(1-\cos\theta)\
E_{sc}=&\frac{E}{1+\frac{E}{m_e c^2}(1-\cos\theta)}
\end{split}
\end{equation}
$$
where $E$ and $E_{sc}$ the incident and scattered photon energy. Note that when $E_{sc}=E$ we obtain the same expressions as for Thomson scattering. The scattered intensity can be written as
$$
\begin{equation}
\begin{split}
I_{sc}=&I_0K(\phi,\theta)\
K(\phi,\theta)=&\frac{E_{sc}^2}{E^2}\left(a+b-(1-a)\left(\frac{S_1}{S_0}\cos2\phi+\frac{S_2}{S_0}\sin2\phi\right)\right)
\end{split}
\end{equation}
$$
Two common examples are $S_1=S_2=S_3=0$ (unpolarized) and $S_1=S_0$, $S_2=S_3=0$ (linear polarized in the horizontal direction)
$$
\begin{equation}
\begin{split}
K_{un}=&\frac{1}{2}\frac{E_{sc}^2}{E^2}\left(\frac{E}{E_{sc}}+\frac{E_{sc}}{E}-\sin^2\theta\right)\
K_{linh}=&\frac{1}{2}\frac{E_{sc}^2}{E^2}\left(\frac{E}{E_{sc}}+\frac{E_{sc}}{E}-2\sin^2\theta\cos^2\phi\right)
\end{split}
\end{equation}
$$
Step7: Klein-Nishina cross-section
The Klein-Nishina cross-section (inelastic cross-section for a free electron) can be written as
$$
\begin{equation}
\begin{split}
\frac{d\sigma_{KN}}{d\Omega} =& r_e^2\frac{I_{sc}(\phi,\theta)}{I_0}\
=&r_e^2 K(\phi,\theta)\
\sigma_{KN} =& r_e^2\int_0^{2\pi}\int_0^\pi K(\phi,\theta) \sin\theta d\theta d\phi\
=&2\pi r_e^2 \left(
\frac{1+k}{k^2}\left(\frac{2+2k}{1+2k}-\frac{\ln (1+2k)}{k}\right)+\frac{\ln (1+2k)}{2k}-\frac{1+3k}{(1+2k)^2}
\right)\
k =& \frac{E}{m_e c^2}
\end{split}
\end{equation}
$$
Step8: Compton cross-section
The differential Compton cross-section (inelastic scattering cross-section of an atom) is proportional to the differential Klein-Nishina cross-section and the incoherent scattering function
$$
\begin{equation}
\frac{d\mu_C}{d\Omega} = r_e^2 K(\phi,\theta) \frac{N_A}{M}S(E,\theta)
\end{equation}
$$
with SI units $cm^2/g/sr$, $M$ the molar mass of the atom ($g/mol$), $N_A$ the Avogadro constant ($1/mol$) and $S$ the incoherent scattering function of the atom.
Step9: The abscence of forward scattering can be attributed to the incoherent scattering function.
Compare Rayleigh and Compton differential cross-sections | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from spectrocrunch.sources import polarization
Explanation: Polarization
Following Detlefs[2012] the electric field of a monochromatic plane wave can be described as
$$
\begin{equation}
\begin{split}
\vec{E}(t,\vec{x}) =& \Re[(V_0\hat{e}_0 + V_1\hat{e}_1)e^{i(\vec{k}\cdot\vec{x} - \omega t)}]\
\hat{e}_0 =& \hat{e}_1\times\hat{e}_2\
\hat{e}_1 =& \hat{e}_2\times\hat{e}_0\
\hat{e}_2 =&\hat{k}
\end{split}
\end{equation}
$$
The complex random vector $\vec{V}=[V_0,V_1]^T$ is known as the Jones vector. The state of polarization of this field refers to its second order statistics, which can be expressed as the expectation of the outer product of the Jones vector with itself ($x^*$ denotes the complex conjugate of $x$ and $\langle x\rangle$ the expectation)
$$
\begin{equation}
C = \langle\vec{V}\otimes\vec{V}\rangle=
\begin{bmatrix} \langle V_0 V_0^*\rangle & \langle V_0 V_1^*\rangle\
\langle V_1 V_0^*\rangle & \langle V_1 V_1^*\rangle
\end{bmatrix}
\end{equation}
$$
Under Stokes' formalism the coherency matrix $C$ is decomposed as follows
$$
\begin{equation}
C = \frac{1}{2}\begin{bmatrix} S_0+S_1 & S_2-iS_3\
S_2+iS_3 & S_0-S_1
\end{bmatrix}
\end{equation}
$$
The Poynting vector can be written as a function of the Jones vector:
$$
\begin{equation}
\begin{split}
\vec{P}(t,\vec{x})=& \frac{1}{\mu_0}\vec{E}(t,\vec{x})\times\vec{B}(t,\vec{x})\
=& \frac{1}{c\mu_0}\left(\vec{E}(t,\vec{x})\cdot\vec{E}(t,\vec{x})\right)\hat{k}\
=& \frac{c\epsilon_0}{2}\left( \Re[\vec{V}\cdot\vec{V}^*] + \Re[\vec{V}\cdot\vec{V}e^{2i(\vec{k}\cdot\vec{x} - \omega t)} ] \right)\hat{k}\
\Re[\vec{V}\cdot\vec{V}^*]=&\vec{V}\cdot\vec{V}^*=\left\lvert V_0\right\rvert^2 +\left\lvert V_1\right\rvert^2\
\Re[\vec{V}\cdot\vec{V}e^{2i(\vec{k}\cdot\vec{x} - \omega t)}]=&
\left\lvert V_0\right\rvert^2 \cos(2(\phi_0+\alpha))
+\left\lvert V_1\right\rvert^2 \cos(2(\phi_1+\alpha))\
\alpha=&\vec{k}\cdot\vec{x} - \omega t\
\end{split}
\end{equation}
$$
where we used $\cos 2x = 2\cos^2 x -1$ and SI units $V/m$ for $\left\lvert V_j\right\rvert$.
For an isolated wave $\vec{V}\in\mathbb{C}^2$, the intensity (SI units: $W/m^2$) can be obtained by time averaging the Poynting vector (second term vanishes):
$$
\begin{equation}
\begin{split}
I(\vec{x})=&{\left\langle \left\lVert \vec{P}(t,\vec{x}) \right\rVert \right\rangle}_{t}\
=&\frac{c\epsilon_0}{2}\vec{V}\cdot\vec{V}^*\quad\quad {\langle \cos(2(\phi_j+\alpha)) \rangle}_{t}=0\
=&\frac{c\epsilon_0}{2}\mathrm{Tr}(C)\
=&\frac{c\epsilon_0}{2}S_0
\end{split}
\end{equation}
$$
For an ensemble of waves (with the same angular frequency $\omega$) the Jones vector is a random vector and the intensity is the expectation value of the Poynting vector norm instead of the time average
$$
\begin{equation}
\begin{split}
I(\vec{x})=&{\left\langle \left\lVert \vec{P}(t,\vec{x}) \right\rVert \right\rangle}\
=&\frac{c\epsilon_0}{2}\langle\vec{V}\cdot\vec{V}^*\rangle\quad\quad \langle \left\lvert V_j\right\rvert^2\cos(2(\phi_j+\alpha))\rangle = 0\
=&\frac{c\epsilon_0}{2}\mathrm{Tr}(C)\
=&\frac{c\epsilon_0}{2}S_0
\end{split}
\end{equation}
$$
where we assume that $\left\lvert V_j\right\rvert$ and $\phi_j$ are independent random variables and $\phi_j$ uniform, in which case the ensemble average is independent of time.
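As a small numerical sketch of the decomposition above (plain numpy, not part of spectrocrunch), we can recover the Stokes parameters and the degree of polarization from a coherency matrix:
def stokes_from_coherency(C):
    # Invert C = 1/2 [[S0+S1, S2-iS3], [S2+iS3, S0-S1]]
    S0 = np.real(C[0, 0] + C[1, 1])
    S1 = np.real(C[0, 0] - C[1, 1])
    S2 = np.real(C[0, 1] + C[1, 0])
    S3 = np.imag(C[1, 0] - C[0, 1])
    return np.array([S0, S1, S2, S3])
# Fully polarized example: Jones vector V = [1, 1j] (circular polarization)
V = np.array([1.0, 1.0j])
C = np.outer(V, V.conj())
S = stokes_from_coherency(C)
dop = np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0]  # degree of polarization, 1 here
print(S, dop)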
End of explanation
# Define incident intensity and polarization
intensity = 1 # W/m^2
dolp = 0.4 # degree of linear polarization (in [0,1])
polangle = 40 # angle of semi-major axis with respect to the horizontal direction (in [-90,90])
handedness = "left"
phase0 = 0 # phase of V0 in degrees
# Describe with Jones' formalism because fully polarized
J1 = polarization.Jones.from_params(intensity=intensity,dolp=dolp,polangle=polangle,\
handedness=handedness,phase0=phase0)
# We're looking upstream:
J1.plot_efield()
plt.show()
Explanation: Fully polarized
Plot the electric field vector in a plane perpendicular to the propagation direction:
End of explanation
# Define incident intensity and polarization
intensity = 1 # W/m^2
dop = 0.9 # degree of polarization (in [0,1])
dolp = 0.5*dop # degree of linear polarization (in [0,dop])
polangle = -30 # angle of polarization ellipse with respect to the horizontal direction (in [-90,90])
handedness = "right"
# Describe with Stokes' formalism because not fully polarized
S1 = polarization.Stokes.from_params(intensity=intensity,dop=dop,dolp=dolp,\
polangle=polangle,handedness=handedness)
# Plot the polarized component:
print("Intensity: {} (pol, W/m²), {} (unpol, W/m²)".format(S1.intensity_polarized,S1.intensity_unpolarized))
S1.decompose()["pol"].plot_efield()
plt.show()
Explanation: Partially polarized
Plot the electric field vector in a plane perpendicular to the propagation direction:
End of explanation
# Define incident intensity and polarization
intensity = 1 # W/m^2
dop = 0.4 # degree of polarization (in [0,1])
dolp = 0.8*dop # degree of linear polarization (in [0,dop])
polangle = 0 # angle of polarization ellipse with respect to the horizontal direction (in [-90,90])
handedness = "left"
# Fully polarized
J1 = polarization.Jones.from_params(intensity=intensity,dolp=dolp/dop,\
polangle=polangle,handedness=handedness)
# Partially polarized
S1 = polarization.Stokes.from_params(intensity=intensity,dop=dop,dolp=dolp,\
polangle=polangle,handedness=handedness)
# Scattering direction: spherical coordinates
azimuth = 20
polar = 80 # scattering angle
J2 = J1.thomson_scattering(azimuth,polar)
S2 = S1.thomson_scattering(azimuth,polar)
print("Incident (Jones):")
print(J1)
print("\nScattered (Jones:")
print(J2)
print("\n\nIncident (Stokes):")
print(S1)
print("\nScattered (Stokes):")
print(S2)
# We're looking upstream:
fig,axs = plt.subplots(1,2,figsize=(12,8))
print("\nIncident beam (polarized component):")
plt.sca(axs[0])
J1.plot_efield()
if S1.dop==0:
axs[1].set_aspect('equal')
axs[1].axis('off')
else:
plt.sca(axs[1])
S1.decompose()["pol"].plot_efield()
plt.show()
fig,axs = plt.subplots(1,2,figsize=(12,8))
print("Scattered beam (polarized component in scattered reference frame):")
plt.sca(axs[0])
J2.plot_efield()
if S2.dop==0:
axs[1].set_aspect('equal')
axs[1].axis('off')
else:
plt.sca(axs[1])
S2.decompose()["pol"].plot_efield()
plt.show()
# Define incident intensity and polarization
intensity = 1 # W/m^2
dop = 0.7 # degree of polarization (in [0,1])
dolp = 0.95*dop # degree of linear polarization (in [0,dop])
polangle = 0 # angle of polarization ellipse with respect to the horizontal direction (in [-90,90])
handedness = "left"
S1 = polarization.Stokes.from_params(intensity=intensity,dop=dop,dolp=dolp,\
polangle=polangle,handedness=handedness)
if S1.dop!=0:
print("\nIncident beam (polarized component):")
S1.decompose()["pol"].plot_efield()
plt.show()
azimuth = np.linspace(0,360,100)
polar = np.linspace(0,180,50)
extent = [azimuth[0],azimuth[-1],polar[0],polar[-1]]
azimuth,polar = np.meshgrid(azimuth,polar)
print("Incident intensity: {} W/m²".format(S1.intensity))
print("Scattered intensity (phi=0deg,theta=90deg) = {} W/m²".format(S1.thomson_intensity(0,90)))
print("Scattered intensity (phi=90deg,theta=90deg) = {} W/m²".format(S1.thomson_intensity(90,90)))
img = S1.thomson_intensity(azimuth,polar)
plt.imshow(img,origin="lower",extent=extent)
plt.axhline(y=90)
for x in [90,180,270]:
plt.axvline(x=x)
plt.xlabel("Azimuth (deg)")
plt.ylabel("Polar (deg)")
plt.title("Incident intensity = {} W/m$^2$".format(S1.intensity))
plt.colorbar(label="W/m$^2$")
plt.show()
Explanation: Thomson scattering
The scattering direction defined by the wave vector $\vec{k}_\mathrm{sc}$ in the coordinate system of the primary beam $\vec{k}$:
$$
\begin{equation}
\begin{split}
\hat{k}_\mathrm{sc}\cdot\hat{e}_0 =& \cos\phi\sin\theta\
\hat{k}_\mathrm{sc}\cdot\hat{e}_1 =& \sin\phi\sin\theta\
\hat{k}_\mathrm{sc}\cdot\hat{e}_2 =& \cos\theta\
\end{split}
\end{equation}
$$
If we define the reference frame of the scattered beam as
$$
\begin{equation}
\begin{split}
\hat{e}_2^\prime =& \hat{k}_\mathrm{sc}\
(\hat{k}_\mathrm{sc}\times\hat{k})\cdot\hat{e}_0^\prime=1\
\hat{e}_1^\prime =&\hat{e}_2^\prime\times\hat{e}_0^\prime
\end{split}
\end{equation}
$$
which means that $\hat{e}_0^\prime$ is perpendicular to the scattering plane defined by $\hat{k}_\mathrm{sc}$ and $\hat{k}$, then the Stokes parameters after Thomson scattering can be expressed in this reference frame as
$$
\begin{equation}
\begin{split}
S_{sc}=&M\cdot S\
M =& \begin{bmatrix}
a&(1-a)\cos2\beta&-(1-a)\sin2\beta&0\
(1-a)&a\cos2\beta&-a\sin2\beta&0\
0&\cos\theta\sin2\beta&\cos\theta\cos2\beta&0\
0&0&0&\cos\theta
\end{bmatrix}\
\beta=&\frac{\pi}{2}-\phi=\arccos(\hat{e}_0\cdot\hat{e}_0^\prime)\
a=&\frac{1+\cos^2\theta}{2}
\end{split}
\end{equation}
$$
The scattered intensity can be written as
$$
\begin{equation}
\begin{split}
I_{sc}=&I_0K(\phi,\theta)\
K(\phi,\theta)=&a-(1-a)\left(\frac{S_1}{S_0}\cos2\phi+\frac{S_2}{S_0}\sin2\phi\right)
\end{split}
\end{equation}
$$
Two common examples are $S_1=S_2=S_3=0$ (unpolarized) and $S_1=S_0$, $S_2=S_3=0$ (linear polarized in the horizontal direction)
$$
\begin{equation}
\begin{split}
K_{un}=&\frac{1+\cos^2\theta}{2}\
K_{linh}=&1-\sin^2\theta\cos^2\phi
\end{split}
\end{equation}
$$
End of explanation
from scipy import integrate
K = S1.thomson_K
integrand = lambda azimuth,polar: K(azimuth,polar)*np.sin(polar)
thomsonsc = integrate.dblquad(integrand, 0, np.pi, lambda x:0, lambda x:2*np.pi)[0]
print(thomsonsc,8*np.pi/3) # units of r_e^2
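As a quick sketch of the closed forms $K_{un}$ and $K_{linh}$ quoted above (plain numpy, angles in radians, independent of spectrocrunch):
def K_unpolarized(polar):
    # K for an unpolarized beam: (1 + cos^2(theta)) / 2
    return 0.5 * (1.0 + np.cos(polar)**2)
def K_linear_horizontal(azimuth, polar):
    # K for a beam fully linearly polarized along e0: 1 - sin^2(theta) cos^2(phi)
    return 1.0 - np.sin(polar)**2 * np.cos(azimuth)**2
phi, theta = np.radians(30), np.radians(90)
print(K_unpolarized(theta), K_linear_horizontal(phi, theta))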
Explanation: Thomson cross-section
The Thomson cross-section (elastic cross-section for a free electron) can be written as
$$
\begin{equation}
\begin{split}
\frac{d\sigma_T}{d\Omega} =& r_e^2\frac{I_{sc}(\phi,\theta)}{I_0}\
=&r_e^2 K(\phi,\theta)\
\sigma_T =& r_e^2\int_0^{2\pi}\int_0^\pi K(\phi,\theta) \sin\theta d\theta d\phi\
=&\frac{8\pi}{3}r_e^2
\end{split}
\end{equation}
$$
End of explanation
from spectrocrunch.materials import element
from spectrocrunch.sources import xray as xraysources
source = xraysources.factory("synchrotron")
print(source)
if source.stokes.dop!=0:
print("\nPolarized component:")
source.stokes.decompose()["pol"].plot_efield()
plt.show()
el = element.Element("Fe")
polar = np.linspace(0,2*np.pi,500)[:-1]
fig,axs = plt.subplots(1,2,figsize=(15, 5),subplot_kw={"projection":"polar"})
for ax,azimuth in zip(axs,[0,np.pi/2]):
plt.sca(ax)
for energy in [2,5,10,20]:
diffcs = el.diff_rayleigh_cross_section(energy,source)
r = diffcs(azimuth,polar)
ax.plot(polar, r, label="{}keV".format(energy))
ax.set_title("Azimuth = {} deg".format(np.degrees(azimuth)))
ax.legend()
energy = 10
diffcs = el.diff_rayleigh_cross_section(energy,source)
integrand = lambda azimuth,polar: diffcs(azimuth,polar)*np.sin(polar)
cs1 = integrate.dblquad(integrand, 0, np.pi, lambda x:0, lambda x:2*np.pi)[0]
cs2 = el.rayleigh_cross_section(energy)
print("Rayleigh cross-section ({} keV): {} cm²/g (integrated)".format(energy,cs1))
print("Rayleigh cross-section ({} keV): {} cm²/g (tabulated)".format(energy,cs2))
plt.show()
Explanation: Rayleigh cross-section
The differential Rayleigh cross-section (elastic scattering cross-section of an atom) is proportional to the differential Thomson cross-section and the squared atomic form factor
$$
\begin{equation}
\frac{d\mu_R}{d\Omega} = r_e^2 K(\phi,\theta) \frac{N_A}{M}f^2(E,\theta)
\end{equation}
$$
with SI units $cm^2/g/sr$, $M$ the molar mass of the atom ($g/mol$), $N_A$ the Avogadro constant ($1/mol$) and $f$ the atomic form factor.
End of explanation
# Define incident intensity and polarization
intensity = 1 # W/m^2
dop = 0.4 # degree of polarization (in [0,1])
dolp = 0.7*dop # degree of linear polarization (in [0,dop])
polangle = 0 # angle of polarization ellipse with respect to the horizontal direction (in [-90,90])
handedness = "left"
# Partially polarized
S1 = polarization.Stokes.from_params(intensity=intensity,dop=dop,dolp=dolp,\
polangle=polangle,handedness=handedness)
# Scattering direction: spherical coordinates
azimuth = 20
polar = 50 # scattering angle
energy = 5
S2 = S1.compton_scattering(azimuth,polar,energy)
# We're looking upstream:
if S1.dop!=0:
print("\nIncident beam (polarized component):")
S1.decompose()["pol"].plot_efield()
plt.show()
if S2.dop!=0:
print("Scattered beam (polarized component in diffraction reference frame):")
S2.decompose()["pol"].plot_efield()
plt.show()
# Define incident intensity and polarization
intensity = 1 # W/m^2
dop = 0.7 # degree of polarization (in [0,1])
dolp = 0.95*dop # degree of linear polarization (in [0,dop])
polangle = 0 # angle of polarization ellipse with respect to the horizontal direction (in [-90,90])
handedness = "left"
S1 = polarization.Stokes.from_params(intensity=intensity,dop=dop,dolp=dolp,\
polangle=polangle,handedness=handedness)
if S1.dop!=0:
print("\nIncident beam (polarized component):")
S1.decompose()["pol"].plot_efield()
plt.show()
azimuth = np.linspace(0,360,100)
polar = np.linspace(0,180,50)
extent = [azimuth[0],azimuth[-1],polar[0],polar[-1]]
azimuth,polar = np.meshgrid(azimuth,polar)
energy = 5
print("Incident intensity: {} W/m²".format(S1.intensity))
print("Scattered intensity (phi=0deg,theta=90deg) = {} W/m²".format(S1.compton_intensity(0,90,energy)))
print("Scattered intensity (phi=90deg,theta=90deg) = {} W/m²".format(S1.compton_intensity(90,90,energy)))
img = S1.compton_intensity(azimuth,polar,energy)
plt.imshow(img,origin="lower",extent=extent)
plt.axhline(y=90)
for x in [90,180,270]:
plt.axvline(x=x)
plt.xlabel("Azimuth (deg)")
plt.ylabel("Polar (deg)")
plt.title("Incident intensity = {} W/m$^2$".format(S1.intensity))
plt.colorbar(label="W/m$^2$")
plt.show()
Explanation: Compton scattering
Using the same incident and reference frames as for Thomson scattering, the Stokes parameters are transformed as follows (McMaster[1961])
$$
\begin{equation}
\begin{split}
S_{sc}=&M\cdot S\
M =& \frac{E_{sc}^2}{E^2}\begin{bmatrix}
a+b&(1-a)\cos2\beta&-(1-a)\sin2\beta&0\
(1-a)&a\cos2\beta&-a\sin2\beta&0\
0&\cos\theta\sin2\beta&\cos\theta\cos2\beta&0\
0&0&0&(b+1)\cos\theta
\end{bmatrix}\
\beta=&\frac{\pi}{2}-\phi=\arccos(\hat{e}_0\cdot\hat{e}_0^\prime)\
a=&\frac{1+\cos^2\theta}{2}\
b=&\frac{E-E_{sc}}{2 m_e c^2}(1-\cos\theta)\
E_{sc}=&\frac{E}{1+\frac{E}{m_e c^2}(1-\cos\theta)}
\end{split}
\end{equation}
$$
where $E$ and $E_{sc}$ the incident and scattered photon energy. Note that when $E_{sc}=E$ we obtain the same expressions as for Thomson scattering. The scattered intensity can be written as
$$
\begin{equation}
\begin{split}
I_{sc}=&I_0K(\phi,\theta)\
K(\phi,\theta)=&\frac{E_{sc}^2}{E^2}\left(a+b-(1-a)\left(\frac{S_1}{S_0}\cos2\phi+\frac{S_2}{S_0}\sin2\phi\right)\right)
\end{split}
\end{equation}
$$
Two common examples are $S_1=S_2=S_3=0$ (unpolarized) and $S_1=S_0$, $S_2=S_3=0$ (linear polarized in the horizontal direction)
$$
\begin{equation}
\begin{split}
K_{un}=&\frac{1}{2}\frac{E_{sc}^2}{E^2}\left(\frac{E}{E_{sc}}+\frac{E_{sc}}{E}-\sin^2\theta\right)\
K_{linh}=&\frac{1}{2}\frac{E_{sc}^2}{E^2}\left(\frac{E}{E_{sc}}+\frac{E_{sc}}{E}-2\sin^2\theta\cos^2\phi\right)
\end{split}
\end{equation}
$$
End of explanation
from spectrocrunch.patch.pint import ureg
K = S1.compton_K(energy)
integrand = lambda azimuth,polar: K(azimuth,polar)*np.sin(polar)
comptonsc = integrate.dblquad(integrand, 0, np.pi, lambda x:0, lambda x:2*np.pi)[0]
E = ureg.Quantity(energy,"keV").to("m_e*c^2").magnitude
c = 2*E+1
logc = np.log(c)
comptonsc1 = 2*np.pi*((1+E)/E**2*((1+c)/c-logc/E)+logc/(2*E)-(1+3*E)/c**2)
P0 = 2*E**4 + 18*E**3 + 16*E**2 + 4*E
P1 = 4*E**4 - 4*E**3 - 15*E**2 - 10*E - 2
P2 = 4*E**5 + 4*E**4 + E**3
comptonsc2 = np.pi*(P0 + P1*np.log(2*E + 1))/P2
print(comptonsc,comptonsc1,comptonsc2) # units of r_e^2
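A small sketch of the Compton kinematics used above, written directly from the formulas in this notebook and taking $m_e c^2 \approx 511$ keV:
def compton_energy(energy_kev, polar):
    # Scattered photon energy E_sc = E / (1 + E/(m_e c^2) (1 - cos(theta)))
    k = energy_kev / 511.0
    return energy_kev / (1.0 + k * (1.0 - np.cos(polar)))
def K_compton_unpolarized(energy_kev, polar):
    # K_un = 0.5 (E_sc/E)^2 (E/E_sc + E_sc/E - sin^2(theta))
    esc = compton_energy(energy_kev, polar)
    r = esc / energy_kev
    return 0.5 * r**2 * (1.0 / r + r - np.sin(polar)**2)
print(compton_energy(10.0, np.radians(90)), K_compton_unpolarized(10.0, np.radians(90)))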
Explanation: Klein-Nishina cross-section
The Klein-Nishina cross-section (inelastic cross-section for a free electron) can be written as
$$
\begin{equation}
\begin{split}
\frac{d\sigma_{KN}}{d\Omega} =& r_e^2\frac{I_{sc}(\phi,\theta)}{I_0}\
=&r_e^2 K(\phi,\theta)\
\sigma_{KN} =& r_e^2\int_0^{2\pi}\int_0^\pi K(\phi,\theta) \sin\theta d\theta d\phi\
=&2\pi r_e^2 \left(
\frac{1+k}{k^2}\left(\frac{2+2k}{1+2k}-\frac{\ln (1+2k)}{k}\right)+\frac{\ln (1+2k)}{2k}-\frac{1+3k}{(1+2k)^2}
\right)\
k =& \frac{E}{m_e c^2}
\end{split}
\end{equation}
$$
End of explanation
source = xraysources.factory("synchrotron")
print(source)
if source.stokes.dop!=0:
print("\nPolarized component:")
source.stokes.decompose()["pol"].plot_efield()
plt.show()
el = element.Element("Fe")
polar = np.linspace(0,2*np.pi,500)
fig,axs = plt.subplots(1,2,figsize=(15, 5),subplot_kw={"projection":"polar"})
for ax,azimuth in zip(axs,[0,np.pi/2]):
plt.sca(ax)
for energy in [2,5,10,20]:
diffcs = el.diff_compton_cross_section(energy,source)
r = diffcs(azimuth,polar)
ax.plot(polar, r, label="{}keV".format(energy))
ax.set_title("Azimuth = {} deg".format(np.degrees(azimuth)))
ax.legend()
energy = 10
diffcs = el.diff_compton_cross_section(energy,source)
integrand = lambda azimuth,polar: diffcs(azimuth,polar)*np.sin(polar)
cs1 = integrate.dblquad(integrand, 0, np.pi, lambda x:0, lambda x:2*np.pi)[0]
cs2 = el.compton_cross_section(energy)
print("Compton cross-section ({} keV): {} cm²/g (integrated)".format(energy,cs1))
print("Compton cross-section ({} keV): {} cm²/g (tabulated)".format(energy,cs2))
plt.show()
Explanation: Compton cross-section
The differential Compton cross-section (inelastic scattering cross-section of an atom) is proportional to the differential Klein-Nishina cross-section and the incoherent scattering function
$$
\begin{equation}
\frac{d\mu_C}{d\Omega} = r_e^2 K(\phi,\theta) \frac{N_A}{M}S(E,\theta)
\end{equation}
$$
with SI units $cm^2/g/sr$, $M$ the molar mass of the atom ($g/mol$), $N_A$ the Avogadro constant ($1/mol$) and $S$ the incoherent scattering function of the atom.
End of explanation
for source in ['synchrotron', 'tube']:
source = xraysources.factory(source)
print(source)
if source.stokes.dop!=0:
print("\nPolarized component:")
source.stokes.decompose()["pol"].plot_efield()
plt.show()
el = element.Element("Fe")
polar = np.linspace(0,2*np.pi,500)[:-1]
energy = 5
fig,axs = plt.subplots(1,2,figsize=(15, 5),subplot_kw={"projection":"polar"})
for ax,azimuth in zip(axs,[0,np.pi/2]):
plt.sca(ax)
diffcs = el.diff_rayleigh_cross_section(energy,source)
rr = diffcs(azimuth,polar)
ax.plot(polar, rr, label="Rayleigh")
diffcs = el.diff_compton_cross_section(energy,source)
rc = diffcs(azimuth,polar)
m = int(np.round(np.max(rr)/np.max(rc))/10)*10
ax.plot(polar, rc*m, label="{}x Compton".format(m))
ax.set_title("Azimuth = {} deg".format(np.degrees(azimuth)))
ax.legend()
plt.show()
Explanation: The absence of forward scattering can be attributed to the incoherent scattering function.
Compare Rayleigh and Compton differential cross-sections:
End of explanation |
14,976 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
dog face detector using haar cascade
| Python Code::
import cv2
import numpy as np
dog_cascade = cv2.CascadeClassifier('dog_face_haar_cascade.xml')
dog_face = dog_cascade.detectMultiScale(image)
for (x, y, w, h) in dog_face:
start_point, end_point = (x, y), (x+ w, y+h)
cv2.rectangle(image, pt1= start_point, pt2 = end_point, color = (0, 255, 0), thickness = 2)
cv2.imshow('img', image)
|
14,977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Why automate your work flow, and how to approach the process
Questions for students to consider
Step1: Questions for students
Step2: Scope of Variables
Global variables
Global variables are available in the environment your script is working in. Every variable we have made at this point is a global variable.
Local variables
Local variables will be useful to understand when we start using functions in the automation of our code. Local variables only exist in the function environment, not the global environment your linear workflow code is.
Other useful conventions with variables to follow
Set-up variables at the begining of your page, after importing libraries
use variables instead of file names, or exact values or strings so that if you need to change the value of something you don't have to search through all your code to make sure you made the change everywhere, simply change the value of the variable at the top. -- This will also make your code more reproducible in the end.
See what variables exist in the Jupyter notebook
Step3: Let's Get Started
To get started we will import the python modules that we will use in the session. These modules are developed by programmers and made available as open source packages for python. We would normally have to install each of these ourself but they are included as part of the Anaconda Python Distribution.
The %matplotlib inline statement is part of the Jupyter and IPython magic that enables plaots generated by the matplotlib package to be discplayed as output in the Jupyter Notebook instead of open in a separate window.
Step4: We will continue where the data exploration module left off but importing the cleaned gapminder dataset and setting it equal to a new varaible named df to denote that we have imported a pandas dataframe.
As validation that we have imported the data we will also look at the top five rows of data using the head method of pandas.
Step5: Lesson 2 (10-15 min)<a id="lesson-2"></a>
Learning Objectives
- Define "Don't Repeat Yourself" (DRY) and provide examples of how you would implement DRY in your code
- Identify code that can be modularized following DRY and implement a modular workflow using functions.
As you write software there comes a time when you are going to encounter a situation where you want to do the same analysis step as you have already done in your analysis. Our natural tendancy is the copy the code that we wrote and paste it into teh new location for reuse. Sounds easy, right. Copy, paste, move on...not so fast.
What happens if there is a problem with the code or you decide to tweak it, just a little, to change a format or enahce it?
You wil have to change the code in every place you ahve copied it. How do you know if you got all of the copies? What happens if one of the copies is not changed?
These examples illustrate the principle of "Don't Repeat Yourself". We are going to look at how to refactor our code and pull pieces out by making them functions. They we will call the function everytime we want to use that code.
Step6: Lesson 3 <a id="lesson-3"></a>
Learning Objectives
- Know how to construct a function
Step7: Lesson 4 <a id="lesson-4"></a>
Learning Objective
- Organize a set of functions within a python (.py) script and use it (import it into) in a Jupyter notebook.
Now we are going to cut and paste the functions that we have created above. We are going to save them to a single file so we will paste all of the code that we want in that file below. Additionally, we will use the Jupyter / IPython magic, specifically the magic command writefile to save teh contents of the cell below to a file. The file name follows the %%writefile command below.
We are assuming that the notebook is in the project code directory and notebook subdirectory.
Step8: If you would like to see the contents of your file you can type
Step9: Lets get some help on our new code
Step12: Docstring
Get details on DocStrings here.
Step13: Lesson 5 <a id="lesson-5"></a>
Optional lesson
- Use asserts to test validity of function inputs and outputs
Python Testing | Python Code:
# write out three variables, assign a number, string, list
x = 'Asia' # String
y = 1952 # an integer
z = 1.5 # a floating point number
cal_1 = y * z
print(cal_1)
# or
x, y = 'Asia', 'Africa'
w = x
w = x + x #concatinating strings (combinging strings)
print(w)
h = 'Africa'
list_1 = ['Asia', 'Africa', 'Europe'] # list
print(list_1)
Explanation: Why automate your work flow, and how to approach the process
Questions for students to consider:
1) What happens when you get a new dataset that you need to analyze in the same way you analyzed a previous data set?
2) What processes do you do often? How do you implement these?
3) Do you have a clear workflow you could replicate?
4) Or even better, could you plug this new data set into your old workflow?
Learning Objectives of Automation Module:
Lesson 1 (10-15 min)
Employ best practices of naming a variable including: don’t use existing function names, avoid periods in names, don’t use numbers at the beginning of a variable name.
Lesson 2 (10-15 min)
Define "Don't Repeat Yourself" (DRY) and provide examples of how you would implement DRY in your code
Identify code that can be modularized following DRY and implement a modular workflow using functions.
Lesson 3 (60 min)
Know how to construct a function: variables, function name, syntax, documentation, return values
Demonstrate use of function within the notebook / code.
Construct and compose function documentation that clearly defines inputs, output variables and behaviour.
Lesson 4 (10-15 min)
Organize a set of functions within a python (.py) script and use it (import it into) in a Jupyter notebook.
Lesson 5 (20-30 min)
Optional lesson
- Use asserts to test validity of function inputs and outputs
Lesson 6 (To be added at a later date)
(right now this is not useful in people's current workflows in python)
Demonstration of how to pull all components we went over together into one finished document
Basic Overview of the suggested workflow using Socrative (Optional)
Use Socrative quiz to collect answers from student activities (students can run their code in their notebooks, and post to socrative). This will allow the instructor to see what solutions students came up with, and identify any places where misconceptions and confusion are coming up. Using Socrative quizes also allows for a record of the student work to be analyzed after class to see how students are learning and where they are having troubles.
sharing of prepared Socrative Quizes designed to be used with the automation module can be shared by URL links to each teacher so they do not have to be remade.
Level of Python / Jupiter Automation
Good - Documenting all analysis steps in enough details that will enable them to be reproduced successfully.
Better - Script your analysis
Best - Script your analysis and write tests to validate each step.
Setup
Please download the cleaned data file:
https://github.com/Reproducible-Science-Curriculum/automation-RR-Jupyter/blob/master/gapminder_cleaned.csv
Lesson 1 <a id="lesson-1"></a>
Let's begin by creating a new Jupyter notebook.
Question:
According to the organization we set up, where should we put this notebook?
Review of good variable practices
Learning Objective: Employ best practices of naming a variable including: don’t use existing function names, avoid periods in names, don’t use numbers at the beginning of a variable name
Types of variables:
strings, integers, etc..
References:
- PEP8 - Style Guide for Python Code - https://www.python.org/dev/peps/pep-0008/
- https://www.tutorialspoint.com/python3/python_variable_types.htm
Keep in mind that code is read many more times than it is written!
Naming conventions that should be followed
Rules
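A short illustration of these conventions (the names below are only examples):
# Hard to read / fragile choices
l = ['Asia', 'Africa', 'Europe']   # single letters carry no meaning
# 2nd_list = []                    # names cannot start with a number
# list = []                        # shadows the built-in list() function
# Better: descriptive lowercase names with underscores
continent_names = ['Asia', 'Africa', 'Europe']
start_year = 1952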
End of explanation
countries = ['Asia', 'Africa', 'Europe']
Explanation: Questions for students:
1. what do you think will happen with this code? x * z
1. what do you think will happen with this code? list_1[0]
1. what do you think will happen with this code? list_1[1:2]
Lists and Indexing
Python indexing is from 0 to length of list - 1
Example:
list_1 = ['Asia', 'Africa', 'Europe']
Asia index = 0,
Africa index = 1,
Europe index = 2
list_1 is not a very descriptive and identifiable variable.
What would be a better name for the variable that holds these values?
End of explanation
%who
Explanation: Scope of Variables
Global variables
Global variables are available in the environment your script is working in. Every variable we have made at this point is a global variable.
Local variables
Local variables will be useful to understand when we start using functions in the automation of our code. Local variables only exist in the function environment, not the global environment your linear workflow code is.
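A minimal sketch of the difference (the function below is only an illustration, not part of the analysis):
greeting = 'hello'            # global variable, visible everywhere in the notebook
def shout(word):
    loud_word = word.upper()  # local variable, only exists inside the function
    return loud_word
print(shout(greeting))  # HELLO
# print(loud_word)      # would raise NameError: loud_word does not exist globally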
Other useful conventions with variables to follow
Set up variables at the beginning of your script, after importing libraries
Use variables instead of hard-coded file names, exact values, or strings, so that if you need to change a value you don't have to search through all your code to make the change everywhere; simply change the value of the variable at the top. -- This will also make your code more reproducible in the end.
See what variables exist in the Jupyter notebook:
%who
End of explanation
import numpy as np
import pandas as pd
import pylab as plt
import matplotlib
%matplotlib inline
Explanation: Let's Get Started
To get started we will import the Python modules that we will use in the session. These modules are developed by programmers and made available as open source packages for Python. We would normally have to install each of these ourselves, but they are included as part of the Anaconda Python Distribution.
The %matplotlib inline statement is part of the Jupyter and IPython magic that enables plots generated by the matplotlib package to be displayed as output in the Jupyter Notebook instead of opening in a separate window.
End of explanation
cleaned_data_location = 'gapminder_cleaned.csv'
df = pd.read_csv(cleaned_data_location)
df.head()
df['year'].unique()
Explanation: We will continue where the data exploration module left off by importing the cleaned gapminder dataset and setting it equal to a new variable named df to denote that we have imported a pandas dataframe.
As validation that we have imported the data we will also look at the top five rows of data using the head method of pandas.
End of explanation
# Define which continent / category we will use
category = 'lifeexp'
continent = 'asia'
# Create a mask that selects the continent of choice
mask_continent = df['continent'] == continent
df_continent = df[mask_continent]
# Loop through years and calculate the statistic of interest
years = df_continent['year'].unique()
summary = []
for year in years:
mask_year = df_continent['year'] == year
df_year = df_continent[mask_year]
value = np.mean(df_year[category])
summary.append((continent, year, value))
# Turn the summary into a dataframe so that we can visualize easily
summary = pd.DataFrame(summary, columns=['continent', 'year', category])
summary.plot.line('year', 'lifeexp')
Explanation: Lesson 2 (10-15 min)<a id="lesson-2"></a>
Learning Objectives
- Define "Don't Repeat Yourself" (DRY) and provide examples of how you would implement DRY in your code
- Identify code that can be modularized following DRY and implement a modular workflow using functions.
As you write software there comes a time when you want to repeat an analysis step that you have already performed elsewhere in your analysis. Our natural tendency is to copy the code that we wrote and paste it into the new location for reuse. Sounds easy, right? Copy, paste, move on...not so fast.
What happens if there is a problem with the code, or you decide to tweak it just a little, to change a format or enhance it?
You will have to change the code in every place you have copied it. How do you know if you got all of the copies? What happens if one of the copies is not changed?
These examples illustrate the principle of "Don't Repeat Yourself" (DRY). We are going to look at how to refactor our code and pull pieces out by making them functions. Then we will call the function every time we want to use that code.
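A toy illustration of the refactor, before we apply the same idea to the gapminder analysis below:
# Repeating yourself: the same cleaning logic pasted twice
name_a = ' Asia '.strip().lower()
name_b = ' Africa '.strip().lower()
# DRY: put the logic in one function and call it everywhere
def clean_name(name):
    return name.strip().lower()
name_a = clean_name(' Asia ')
name_b = clean_name(' Africa ')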
End of explanation
def calculate_statistic_over_time(data, category, continent, func=None):
if func is None:
func = np.mean
# Create a mask that selects the continent of choice
mask_continent = data['continent'] == continent
data_continent = data[mask_continent]
# Loop through years and calculate the statistic of interest
years = data_continent['year'].unique()
summary = []
for year in years:
mask_year = data_continent['year'] == year
data_year = data_continent[mask_year]
value = func(data_year[category])
summary.append((continent, year, value))
# Turn the summary into a dataframe so that we can visualize easily
summary = pd.DataFrame(summary, columns=['continent', 'year', category])
return summary
category = 'lifeexp'
continents = df['continent'].unique()
fig, ax = plt.subplots()
for continent in continents:
output = calculate_statistic_over_time(df, category, continent)
output.plot.line('year', category, ax=ax)
category = 'lifeexp'
mean_values = df.groupby('continent').mean()[category]
mean_values = mean_values.sort_values(ascending=False)
continents = mean_values.index.values
n_continents = len(continents)
cmap = plt.cm.coolwarm_r
fig, ax = plt.subplots()
for ii, continent in enumerate(continents):
this_color = cmap(float(ii / n_continents))
output = calculate_statistic_over_time(df, category, continent)
output.plot.line('year', category, ax=ax, label=continent,
color=this_color)
plt.legend(loc=(1.02, 0))
ax.set(ylabel=category, xlabel='Year',
title='{} over time'.format(category))
plt.setp(ax.lines, lw=4, alpha=.4)
def plot_statistic_over_time(data, category, func=None,
cmap=None, ax=None, legend=True,
sort=True):
if ax is None:
fig, ax = plt.subplots()
if cmap is None:
cmap = plt.cm.viridis
if sort is True:
# Sort the continents by the category of choice
        mean_values = data.groupby('continent').mean()[category]
mean_values = mean_values.sort_values(ascending=False)
continents = mean_values.index.values
else:
        continents = np.unique(data['continent'])
n_continents = len(continents)
# Loop through continents, calculate its stat, and add a line
for ii, continent in enumerate(continents):
this_color = cmap(float(ii / n_continents))
output = calculate_statistic_over_time(data, category, continent)
output.plot.line('year', category, ax=ax, label=continent,
color=this_color)
if legend is True:
plt.legend(loc=(1.02, 0))
else:
ax.get_legend().set(visible=False)
ax.set(ylabel=category, xlabel='Year',
title='{} over time'.format(category))
plt.setp(ax.lines, lw=4, alpha=.4)
return ax
plot_statistic_over_time(df, category, cmap=plt.cm.coolwarm)
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
categories = ['pop', 'gdppercap']
for ax, i_category in zip(axs, categories):
    plot_statistic_over_time(df, i_category,
ax=ax, sort=False)
plt.setp(axs[0].get_legend(), visible=False)
plt.tight_layout()
fig, axs = plt.subplots(1, 2, figsize=(10, 5), sharey=True)
cmaps = [plt.cm.viridis, plt.cm.coolwarm]
for ax, cmap in zip(axs, cmaps):
    plot_statistic_over_time(df, category,
cmap=cmap, ax=ax, legend=False)
ax = df.groupby(['continent', 'year']).mean()['lifeexp']\
.unstack('continent').plot(cmap=plt.cm.viridis, alpha=.4, lw=3)
Explanation: Lesson 3 <a id="lesson-3"></a>
Learning Objectives
- Know how to construct a function: variables, function name, syntax, documentation, return values
- Demonstrate use of function within the notebook / code.
- Construct and compose function documentation that clearly defines inputs, output variables and behaviour
End of explanation
%%writefile stats_and_plot.py
import numpy as np
import matplotlib.pyplot as plt
def calculate_statistic_over_time(data, category, continent, func=None):
import numpy as np
import pandas as pd
if func is None:
func = np.mean
# Create a mask that selects the continent of choice
mask_continent = data['continent'] == continent
data_continent = data[mask_continent]
# Loop through years and calculate the statistic of interest
years = data_continent['year'].unique()
summary = []
for year in years:
mask_year = data_continent['year'] == year
data_year = data_continent[mask_year]
value = func(data_year[category])
summary.append((continent, year, value))
# Turn the summary into a dataframe so that we can visualize easily
summary = pd.DataFrame(summary, columns=['continent', 'year', category])
return summary
def plot_statistic_over_time(data, category, func=None, cmap=None, ax=None, legend=True, sort=True):
if ax is None:
fig, ax = plt.subplots()
if cmap is None:
cmap = plt.cm.viridis
if sort is True:
# Sort the continents by the category of choice
        mean_values = data.groupby('continent').mean()[category]
mean_values = mean_values.sort_values(ascending=False)
continents = mean_values.index.values
else:
        continents = np.unique(data['continent'])
n_continents = len(continents)
# Loop through continents, calculate its stat, and add a line
for ii, continent in enumerate(continents):
this_color = cmap(float(ii / n_continents))
output = calculate_statistic_over_time(data, category, continent)
output.plot.line('year', category, ax=ax, label=continent,
color=this_color)
if legend is True:
plt.legend(loc=(1.02, 0))
else:
ax.get_legend().set(visible=False)
ax.set(ylabel=category, xlabel='Year',
title='{} over time'.format(category))
plt.setp(ax.lines, lw=4, alpha=.4)
return ax
Explanation: Lesson 4 <a id="lesson-4"></a>
Learning Objective
- Organize a set of functions within a python (.py) script and use it (import it into) in a Jupyter notebook.
Now we are going to cut and paste the functions that we have created above. We are going to save them to a single file, so we will paste all of the code that we want in that file below. Additionally, we will use the Jupyter / IPython magic, specifically the magic command writefile, to save the contents of the cell below to a file. The file name follows the %%writefile command below.
We are assuming that the notebook is in the project code directory and notebook subdirectory.
End of explanation
from stats_and_plot import calculate_statistic_over_time
# Saving for publication
cmaps = [plt.cm.magma, plt.cm.rainbow]
for ii, cmap in enumerate(cmaps):
fig, ax = plt.subplots(figsize=(10, 10), sharey=True)
    plot_statistic_over_time(df, category,
cmap=cmap, ax=ax, legend=False)
labels = [ax.get_xticklabels(), ax.get_yticklabels(),
ax.yaxis.label, ax.xaxis.label, ax.title]
_ = plt.setp(labels, fontsize=30)
# ax.set_axis_off()
fig.savefig('fig_{}.png'.format(ii), transparent=True, bbox_inches='tight', dpi=300)
Explanation: If you would like to see the contents of your file you can type:
%load stats_and_plot.py
or try
%pycat stats_and_plot.py
Imports
Some information on imports:
In order to use external code within Python two things have to happen: 1) the code has to exist on your local computer, and 2) we have to reference or import the code to use it in our program.
The first requirement is satisfied when we install the software on our computer using conda or pip, or, as we will see in a minute, when we create custom functions ourselves. Secondly, we need to tell Python how to access and refer to the packages or source code we want to use.
Import Guidelines
https://www.python.org/dev/peps/pep-0008/#imports
Imports should be grouped in the following order:
standard library imports
related third party imports
local application/library specific imports
You should put a blank line between each group of imports.
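For example, an import block following these guidelines could look like:
# standard library imports
import os
# related third party imports
import numpy as np
import pandas as pd
# local application/library specific imports
from stats_and_plot import calculate_statistic_over_time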
End of explanation
calculate_statistic_over_time?
help(calculate_statistic_over_time)
Explanation: Let's get some help on our new code:
End of explanation
Form a complex number.
Keyword arguments:
real -- the real part (default 0.0)
imag -- the imaginary part (default 0.0)
%%writefile stats_and_plot.py
import numpy as np
import matplotlib.pyplot as plt
def calculate_statistic_over_time(data, category, continent, func=None):
Calculate a statistic on the continent. The default statistic is numpys' mean.
Keyword arguments:
data -- the dataframe data source
category -- the category to be summarized
continent -- the continent to be examined
func -- the function to be applied to the data (default numpy.mean)
import numpy as np
import pandas as pd
if func is None:
func = np.mean
# Create a mask that selects the continent of choice
mask_continent = data['continent'] == continent
data_continent = data[mask_continent]
# Loop through years and calculate the statistic of interest
years = data_continent['year'].unique()
summary = []
for year in years:
mask_year = data_continent['year'] == year
data_year = data_continent[mask_year]
value = func(data_year[category])
summary.append((continent, year, value))
# Turn the summary into a dataframe so that we can visualize easily
summary = pd.DataFrame(summary, columns=['continent', 'year', category])
return summary
def plot_statistic_over_time(data, category, func=None, cmap=None, ax=None, legend=True, sort=True):
if ax is None:
fig, ax = plt.subplots()
if cmap is None:
cmap = plt.cm.viridis
if sort is True:
# Sort the continents by the category of choice
        mean_values = data.groupby('continent').mean()[category]
mean_values = mean_values.sort_values(ascending=False)
continents = mean_values.index.values
else:
        continents = np.unique(data['continent'])
n_continents = len(continents)
# Loop through continents, calculate its stat, and add a line
for ii, continent in enumerate(continents):
this_color = cmap(float(ii / n_continents))
output = calculate_statistic_over_time(data, category, continent)
output.plot.line('year', category, ax=ax, label=continent,
color=this_color)
if legend is True:
plt.legend(loc=(1.02, 0))
else:
ax.get_legend().set(visible=False)
ax.set(ylabel=category, xlabel='Year',
title='{} over time'.format(category))
plt.setp(ax.lines, lw=4, alpha=.4)
return ax
Explanation: Docstring
Get details on DocStrings here.
End of explanation
import pytest
# Check to see if pytest is installed:
pytest.__version__
# content of test_sample.py
def func(x):
return x + 1
def test_answer():
assert func(3) == 5
!pytest
pytest.assert?
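A sketch of an assert-based check for our own function (assuming the cleaned gapminder dataframe df and calculate_statistic_over_time are available in this session):
def test_calculate_statistic_over_time():
    summary = calculate_statistic_over_time(df, 'lifeexp', 'asia')
    # Only the requested continent, the expected columns, and no missing values
    assert list(summary.columns) == ['continent', 'year', 'lifeexp']
    assert set(summary['continent']) == {'asia'}
    assert summary['lifeexp'].notnull().all()
test_calculate_statistic_over_time()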
Explanation: Lesson 5 <a id="lesson-5"></a>
Optional lesson
- Use asserts to test validity of function inputs and outputs
Python Testing
End of explanation |
14,978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scan
In short
Mechanism to perform loops in a Theano graph
Supports nested loops and reusing results from previous iterations
Highly generic
Implementation
You've previous seen that a Theano function graph is composed of two types of nodes; Variable nodes which represent data and Apply node which apply Ops (which represent some computation) to Variables to produce new Variables.
From this point of view, a node that applies a Scan op is just like any other. Internally, however, it is very different from most Ops.
Inside a Scan op is yet another Theano graph which represents the computation to be performed at every iteration of the loop. During compilation, that graph is compiled into a function and, during execution, the Scan op will call that function repeatedly on its inputs to produce its outputs.
Example 1
Step1: Next, we call the scan() function. It has many parameters but, because our use case is simple, we only need two of them. We'll introduce other parameters in the next examples.
The parameter sequences allows us to specify variables that Scan should iterate over as it loops. The first iteration will take as input the first element of every sequence, the second iteration will take as input the second element of every sequence, etc. These individual element have will have one less dimension than the original sequences. For example, for a matrix sequence, the individual elements will be vectors.
The parameter fn receives a function or lambda expression that expresses the computation to do at every iteration. It operates on the symbolic inputs to produce symbolic outputs. It will only ever be called once, to assemble the Theano graph used by Scan at every the iterations.
Since we wish to iterate over both vector1 and vector2 simultaneously, we provide them as sequences. This means that every iteration will operate on two inputs
Step2: Calling scan(), we see that it returns two outputs.
The first output contains the outputs of fn from every timestep concatenated into a tensor. In our case, the output of a single timestep is a scalar so output is a vector where output[i] is the output of the i-th iteration.
The second output details if and how the execution of the Scan updates any shared variable in the graph. It should be provided as an argument when compiling the Theano function.
Step3: If updates is omitted, the state of any shared variables modified by Scan will not be updated properly. Random number sampling, for instance, relies on shared variables. If updates is not provided, the state of the random number generator won't be updated properly and the same numbers might be sampled repeatedly. Always provide updates when compiling your Theano function.
Now that we've defined how to do elementwise multiplication with Scan, we can see that the result is as expected
Step4: An interesting thing is that we never explicitly told Scan how many iteration it needed to run. It was automatically inferred; when given sequences, Scan will run as many iterations as the length of the shortest sequence
Step5: Example 2
Step6: For the sake of variety, in this example we define the computation to be done at every iteration of the loop using a Python function, step(), instead of a lambda expression.
To have the full weight matrix W and the full bias vector b available at every iteration, we use the argument non_sequences. Contrary to sequences, non-sequences are not iterated upon by Scan. Every non-sequence is passed as input to every iteration.
This means that our step() function will need to operate on three symbolic inputs; one for our sequence X and one for each of our non-sequences W and b.
The inputs that correspond to the non-sequences are always last and in the same order at the non-sequences are provided to Scan. This means that the correspondence between the inputs of the step() function and the arguments to scan() is the following
Step7: We can now compile our Theano function and see that it gives the expected results.
Step8: Example 3
Step9: The trick part is informing Scan that our step function expects as input the output of a previous iteration. To achieve this, we need to use a new parameter of the scan() function
Step10: We can now compile and test the Theano function
Step11: An important thing to notice here, is that the output computed by the Scan does not include the initial state that we provided. It only outputs the states that it has computed itself.
If we want to have both the initial state and the computed states in the same Theano variable, we have to join them ourselves.
Example 4
Step12: The next step is defining the value of outputs_info.
Recall that, for non-recurrent outputs, the value is None and, for simple recurrent outputs, the value is a single initial state. For general recurrent outputs, where iteration $t$ may depend on multiple past values, the value is a dictionary. That dictionary has two values
Step13: Now that we've defined the step function and the properties of our outputs, we can call the scan() function. Because the step() function has multiple outputs, the first output of scan() function will be a list of tensors
Step14: Let's compile our Theano function which will take a vector of consecutive values from the Fibonacci sequence and compute the next 10 values
Step15: Precisions about the order of the arguments to the step function
When we start using many sequences, recurrent outputs and non-sequences, it's easy to get confused regarding the order in which the step function receives the corresponding inputs. Below is the full order
Step16: Solution
Step19: Exercise 2 - Sampling without replacement
In this exercise, the goal is to implement a Theano function that
Step20: Solution | Python Code:
import theano
import theano.tensor as T
import numpy as np
vector1 = T.vector('vector1')
vector2 = T.vector('vector2')
Explanation: Scan
In short
Mechanism to perform loops in a Theano graph
Supports nested loops and reusing results from previous iterations
Highly generic
Implementation
You've previous seen that a Theano function graph is composed of two types of nodes; Variable nodes which represent data and Apply node which apply Ops (which represent some computation) to Variables to produce new Variables.
From this point of view, a node that applies a Scan op is just like any other. Internally, however, it is very different from most Ops.
Inside a Scan op is yet another Theano graph which represents the computation to be performed at every iteration of the loop. During compilation, that graph is compiled into a function and, during execution, the Scan op will call that function repeatedly on its inputs to produce its outputs.
Example 1 : As simple as it gets
Scan's interface is complex and, thus, best introduced by examples. So, let's dive right in and start with a simple example; perform an element-wise multiplication between two vectors.
This particular example is simple enough that Scan is not the best way to do things but we'll gradually work our way to more complex examples where Scan gets more interesting.
Let's first setup our use case by defining Theano variables for the inputs :
End of explanation
output, updates = theano.scan(fn=lambda a, b : a * b,
sequences=[vector1, vector2])
Explanation: Next, we call the scan() function. It has many parameters but, because our use case is simple, we only need two of them. We'll introduce other parameters in the next examples.
The parameter sequences allows us to specify variables that Scan should iterate over as it loops. The first iteration will take as input the first element of every sequence, the second iteration will take as input the second element of every sequence, etc. These individual element have will have one less dimension than the original sequences. For example, for a matrix sequence, the individual elements will be vectors.
The parameter fn receives a function or lambda expression that expresses the computation to do at every iteration. It operates on the symbolic inputs to produce symbolic outputs. It will only ever be called once, to assemble the Theano graph used by Scan at every iteration.
Since we wish to iterate over both vector1 and vector2 simultaneously, we provide them as sequences. This means that every iteration will operate on two inputs: an element from vector1 and the corresponding element from vector2.
Because what we want is the elementwise product between the vectors, we provide a lambda expression that, given an element a from vector1 and an element b from vector2 computes and return the product.
End of explanation
f = theano.function(inputs=[vector1, vector2],
outputs=output,
updates=updates)
Explanation: Calling scan(), we see that it returns two outputs.
The first output contains the outputs of fn from every timestep concatenated into a tensor. In our case, the output of a single timestep is a scalar so output is a vector where output[i] is the output of the i-th iteration.
The second output details if and how the execution of the Scan updates any shared variable in the graph. It should be provided as an argument when compiling the Theano function.
End of explanation
vector1_value = np.arange(0, 5).astype(theano.config.floatX) # [0,1,2,3,4]
vector2_value = np.arange(1, 6).astype(theano.config.floatX) # [1,2,3,4,5]
print(f(vector1_value, vector2_value))
Explanation: If updates is omitted, the state of any shared variables modified by Scan will not be updated properly. Random number sampling, for instance, relies on shared variables. If updates is not provided, the state of the random number generator won't be updated properly and the same numbers might be sampled repeatedly. Always provide updates when compiling your Theano function.
Now that we've defined how to do elementwise multiplication with Scan, we can see that the result is as expected :
End of explanation
print(f(vector1_value, vector2_value[:4]))
Explanation: An interesting thing is that we never explicitly told Scan how many iterations it needed to run. It was automatically inferred; when given sequences, Scan will run as many iterations as the length of the shortest sequence :
End of explanation
X = T.matrix('X') # Minibatch of data
W = T.matrix('W') # Weights of the layer
b = T.vector('b') # Biases of the layer
Explanation: Example 2 : Non-sequences
In this example, we introduce another of Scan's features; non-sequences. To demonstrate how to use them, we use Scan to compute the activations of a linear MLP layer over a minibatch.
It is not yet a use case where Scan is truly useful but it introduces a requirement that sequences cannot fulfill; if we want to use Scan to iterate over the minibatch elements and compute the activations for each of them, then we need some variables (the parameters of the layer) to be available 'as is' at every iteration of the loop. We do not want Scan to iterate over them and give only part of them at every iteration.
Once again, we begin by setting up our Theano variables :
End of explanation
def step(v, W, b):
return T.dot(v, W) + b
output, updates = theano.scan(fn=step,
sequences=[X],
non_sequences=[W, b])
Explanation: For the sake of variety, in this example we define the computation to be done at every iteration of the loop using a Python function, step(), instead of a lambda expression.
To have the full weight matrix W and the full bias vector b available at every iteration, we use the argument non_sequences. Contrary to sequences, non-sequences are not iterated upon by Scan. Every non-sequence is passed as input to every iteration.
This means that our step() function will need to operate on three symbolic inputs; one for our sequence X and one for each of our non-sequences W and b.
The inputs that correspond to the non-sequences are always last and in the same order as the non-sequences are provided to Scan. This means that the correspondence between the inputs of the step() function and the arguments to scan() is the following :
v : individual element of the sequence X
W and b : non-sequences W and b, respectively
End of explanation
f = theano.function(inputs=[X, W, b],
outputs=output,
updates=updates)
X_value = np.arange(-3, 3).reshape(3, 2).astype(theano.config.floatX)
W_value = np.eye(2).astype(theano.config.floatX)
b_value = np.arange(2).astype(theano.config.floatX)
print(f(X_value, W_value, b_value))
Explanation: We can now compile our Theano function and see that it gives the expected results.
End of explanation
def step(m_row, cumulative_sum):
return m_row + cumulative_sum
Explanation: Example 3 : Reusing outputs from the previous iterations
In this example, we will use Scan to compute a cumulative sum over the first dimension of a matrix $M$. This means that the output will be a matrix $S$ in which the first row will be equal to the first row of $M$, the second row will be equal to the sum of the two first rows of $M$, and so on.
Another way to express this, which is the way we will implement here, is that $S[t] = S[t-1] + M[t]$. Implementing this with Scan would involve iterating over the rows of the matrix $M$ and, at every iteration, reusing the cumulative row that was output at the previous iteration and returning the sum of it and the current row of $M$.
If we assume for a moment that we can get Scan to provide the output value from the previous iteration as an input for every iteration, implementing a step function is simple :
End of explanation
M = T.matrix('X')
s = T.vector('s') # Initial value for the cumulative sum
output, updates = theano.scan(fn=step,
sequences=[M],
outputs_info=[s])
Explanation: The tricky part is informing Scan that our step function expects as input the output of a previous iteration. To achieve this, we need to use a new parameter of the scan() function: outputs_info. This parameter is used to tell Scan how we intend to use each of the outputs that are computed at each iteration.
This parameter can be omitted (as we have done so far) when the step function doesn't depend on any output of a previous iteration. However, now that we wish to have recurrent outputs, we need to start using it.
outputs_info takes a sequence with one element for every output of the step() function :
* For a non-recurrent output (like in every example before this one), the element should be None.
* For a simple recurrent output (iteration $t$ depends on the value at iteration $t-1$), the element must be a tensor. Scan will interpret it as being an initial state for a recurrent output and give it as input to the first iteration, pretending it is the output value from a previous iteration. For subsequent iterations, Scan will automatically handle giving the previous output value as an input.
The step() function needs to expect one additional input for each simple recurrent output. These inputs correspond to outputs from the previous iteration and are always after the inputs that correspond to sequences but before those that correspond to non-sequences. They are received by the step() function in the order in which the recurrent outputs are declared in the outputs_info sequence.
End of explanation
f = theano.function(inputs=[M, s],
outputs=output,
updates=updates)
M_value = np.arange(9).reshape(3, 3).astype(theano.config.floatX)
s_value = np.zeros((3, ), dtype=theano.config.floatX)
print(f(M_value, s_value))
Explanation: We can now compile and test the Theano function :
End of explanation
def step(f_minus2, f_minus1):
new_f = f_minus2 + f_minus1
ratio = new_f / f_minus1
return new_f, ratio
Explanation: An important thing to notice here is that the output computed by Scan does not include the initial state that we provided. It only outputs the states that it has computed itself.
If we want to have both the initial state and the computed states in the same Theano variable, we have to join them ourselves.
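For instance, in the cumulative-sum example above, a minimal sketch of that join (using the symbolic variables s and output already defined) could look like:
# Prepend the initial state, reshaped to a 1-row matrix, to the states computed by Scan.
full_output = T.concatenate([s.dimshuffle('x', 0), output], axis=0)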
Example 4 : Reusing outputs from multiple past iterations
The Fibonacci sequence is a sequence of numbers F where the first two numbers are both 1 and every subsequent number is defined as $F_n = F_{n-1} + F_{n-2}$. Thus, the Fibonacci sequence goes : 1, 1, 2, 3, 5, 8, 13, ...
In this example, we will cover how to compute part of the Fibonacci sequence using Scan. Most of the tools required to achieve this have been introduced in the previous examples. The only one missing is the ability to use, at iteration $i$, outputs from iterations older than $i-1$.
Also, since every example so far had only one output at every iteration of the loop, we will also compute, at each timestep, the ratio between the new term of the Fibonacci sequence and the previous term.
Writing an appropriate step function given two inputs, representing the two previous terms of the Fibonacci sequence, is easy:
End of explanation
f_init = T.fvector()
outputs_info = [dict(initial=f_init, taps=[-2, -1]),
None]
Explanation: The next step is defining the value of outputs_info.
Recall that, for non-recurrent outputs, the value is None and, for simple recurrent outputs, the value is a single initial state. For general recurrent outputs, where iteration $t$ may depend on multiple past values, the value is a dictionary. That dictionary has two values:
* taps : list declaring which previous values of that output every iteration will need. [-3, -2, -1] would mean every iteration should take as input the last 3 values of that output. [-2] would mean every iteration should take as input the value of that output from two iterations ago.
* initial : tensor of initial values. If every initial value has $n$ dimensions, initial will be a single tensor of $n+1$ dimensions with as many initial values as the oldest requested tap. In the case of the Fibonacci sequence, the individual initial values are scalars so the initial will be a vector.
In our example, we have two outputs. The first output is the next computed term of the Fibonacci sequence so every iteration should take as input the two last values of that output. The second output is the ratio between successive terms and we don't reuse its value so this output is non-recurrent. We define the value of outputs_info as such :
End of explanation
output, updates = theano.scan(fn=step,
outputs_info=outputs_info,
n_steps=10)
next_fibonacci_terms = output[0]
ratios_between_terms = output[1]
Explanation: Now that we've defined the step function and the properties of our outputs, we can call the scan() function. Because the step() function has multiple outputs, the first output of the scan() function will be a list of tensors: the first tensor containing all the states of the first output and the second tensor containing all the states of the second output.
In every previous example, we used sequences and Scan automatically inferred the number of iterations it needed to run from the length of these
sequences. Now that we have no sequences, we need to explicitly tell Scan how many iterations to run using the n_steps parameter. This value can be numeric or symbolic.
End of explanation
f = theano.function(inputs=[f_init],
outputs=[next_fibonacci_terms, ratios_between_terms],
updates=updates)
out = f([1, 1])
print(out[0])
print(out[1])
Explanation: Let's compile our Theano function which will take a vector of consecutive values from the Fibonacci sequence and compute the next 10 values :
End of explanation
coefficients = theano.tensor.vector("coefficients")
x = T.scalar("x")
max_coefficients_supported = 10000
def step(coeff, power, free_var):
return coeff * free_var ** power
# Generate the components of the polynomial
full_range=theano.tensor.arange(max_coefficients_supported)
components, updates = theano.scan(fn=step,
outputs_info=None,
sequences=[coefficients, full_range],
non_sequences=x)
polynomial = components.sum()
calculate_polynomial = theano.function(inputs=[coefficients, x],
outputs=polynomial,
updates=updates)
test_coeff = np.asarray([1, 0, 2], dtype=theano.config.floatX)
print(calculate_polynomial(test_coeff, 3))
# 19.0
Explanation: Details about the order of the arguments to the step function
When we start using many sequences, recurrent outputs and non-sequences, it's easy to get confused regarding the order in which the step function receives the corresponding inputs. Below is the full order:
Element from the first sequence
...
Element from the last sequence
First requested tap from first recurrent output
...
Last requested tap from first recurrent output
...
First requested tap from last recurrent output
...
Last requested tap from last recurrent output
First non-sequence
...
Last non-sequence
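As an illustrative sketch (the names here are hypothetical), a step function used with two sequences, one recurrent output with taps [-2, -1] and a single non-sequence would therefore have a signature like:
def step(seq1_t, seq2_t,      # current elements of the two sequences
         out_tm2, out_tm1,    # taps -2 and -1 of the recurrent output
         non_seq):            # non-sequence, passed unchanged at every iteration
    return seq1_t * out_tm1 + seq2_t * out_tm2 + non_seq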
When to use Scan and when not to
Scan is not appropriate for every problem. Here's some information to help you figure out if Scan is the best solution for a given use case.
Execution speed
Using Scan in a Theano function typically makes it slightly slower compared to the equivalent Theano graph in which the loop is unrolled. Both of these approaches tend to be much slower than a vectorized implementation in which large chunks of the computation can be done in parallel.
Compilation speed
Scan also adds an overhead to the compilation, potentially making it slower, but using it can also dramatically reduce the size of your graph, making compilation much faster. In the end, the effect of Scan on compilation speed will heavily depend on the size of the graph with and without Scan.
The compilation speed of a Theano function using Scan will usually be comparable to one in which the loop is unrolled if the number of iterations is small. If the number of iterations is large, however, the compilation will usually be much faster with Scan.
In summary
If you have one of the following cases, Scan can help :
* A vectorized implementation is not possible (due to the nature of the computation and/or memory usage)
* You want to do a large or variable number of iterations
If you have one of the following cases, you should consider other options :
* A vectorized implementation could perform the same computation => Use the vectorized approach. It will often be faster during both compilation and execution.
* You want to do a small, fixed, number of iterations (ex: 2 or 3) => It's probably better to simply unroll the computation
Exercises
Exercise 1 - Computing a polynomial
In this exercise, the initial version already works. It computes the value of a polynomial ($n_0 + n_1 x + n_2 x^2 + ... $) of degree at most 10000 given the coefficients of the various terms and the value of x.
You must modify it such that the reduction (the sum() call) is done by Scan.
End of explanation
%load scan_ex1_solution.py
Explanation: Solution : run the cell below to display the solution to this exercise.
End of explanation
probabilities = T.vector()
nb_samples = T.iscalar()
rng = T.shared_randomstreams.RandomStreams(1234)
def sample_from_pvect(pvect):
Provided utility function: given a symbolic vector of
probabilities (which MUST sum to 1), sample one element
and return its index.
onehot_sample = rng.multinomial(n=1, pvals=pvect)
sample = onehot_sample.argmax()
return sample
def set_p_to_zero(pvect, i):
Provided utility function: given a symbolic vector of
probabilities and an index 'i', set the probability of the
i-th element to 0 and renormalize the probabilities so they
sum to 1.
new_pvect = T.set_subtensor(pvect[i], 0.)
new_pvect = new_pvect / new_pvect.sum()
return new_pvect
# TODO use Scan to sample from the vector of probabilities and
# symbolically obtain 'samples' the vector of sampled indices.
samples = None
# Compiling the function
f = theano.function(inputs=[probabilities, nb_samples],
outputs=[samples])
# Testing the function
test_probs = np.asarray([0.6, 0.3, 0.1], dtype=theano.config.floatX)
for i in range(10):
print(f(test_probs, 2))
Explanation: Exercise 2 - Sampling without replacement
In this exercise, the goal is to implement a Theano function that :
* takes as input a vector of probabilities and a scalar
* performs sampling without replacement from those probabilities as many times as the value of the scalar
* returns a vector containing the indices of the sampled elements.
Partial code is provided to help with the sampling of random numbers since this is not something that was covered in this tutorial.
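One possible way to fill in the TODO above (a sketch, not necessarily identical to the loaded solution) uses a recurrent output to carry the renormalized probability vector from one iteration to the next; note that the updates returned by scan must also be passed to theano.function because the random stream is a shared variable:
def sampling_step(pvect):
    sample = sample_from_pvect(pvect)
    new_pvect = set_p_to_zero(pvect, sample)
    return sample, new_pvect

scan_outputs, scan_updates = theano.scan(fn=sampling_step,
                                         outputs_info=[None, probabilities],
                                         n_steps=nb_samples)
samples = scan_outputs[0]

f = theano.function(inputs=[probabilities, nb_samples],
                    outputs=[samples],
                    updates=scan_updates)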
End of explanation
%load scan_ex2_solution.py
Explanation: Solution : run the cell below to display the solution to this exercise.
End of explanation |
14,979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encoder-Decoder Analysis
Model Architecture
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores for the ground truth, while high scores can expose hyper-common generations
Step7: Alignment Analysis
This analysis computs the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores | Python Code:
report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drbef/encdec_noing10_200_512_04drbef.json'
log_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drbef/encdec_noing10_200_512_04drbef_logs.json'
import json
import matplotlib.pyplot as plt
with open(report_file) as f:
report = json.loads(f.read())
with open(log_file) as f:
logs = json.loads(f.read())
print 'Encoder: \n\n', report['architecture']['encoder']
print 'Decoder: \n\n', report['architecture']['decoder']
Explanation: Encoder-Decoder Analysis
Model Architecture
End of explanation
print('Train Perplexity: ', report['train_perplexity'])
print('Valid Perplexity: ', report['valid_perplexity'])
print('Test Perplexity: ', report['test_perplexity'])
Explanation: Perplexity on Each Dataset
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Loss vs. Epoch
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
Explanation: Perplexity vs. Epoch
End of explanation
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
for i, sample in enumerate(report['train_samples']):
print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)
for i, sample in enumerate(report['valid_samples']):
print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)
for i, sample in enumerate(report['test_samples']):
print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)
Explanation: Generations
End of explanation
def print_bleu(bleu_struct):
print 'Overall Score: ', bleu_struct['score'], '\n'
print '1-gram Score: ', bleu_struct['components']['1']
print '2-gram Score: ', bleu_struct['components']['2']
print '3-gram Score: ', bleu_struct['components']['3']
print '4-gram Score: ', bleu_struct['components']['4']
# Training Set BLEU Scores
print_bleu(report['train_bleu'])
# Validation Set BLEU Scores
print_bleu(report['valid_bleu'])
# Test Set BLEU Scores
print_bleu(report['test_bleu'])
# All Data BLEU Scores
print_bleu(report['combined_bleu'])
Explanation: BLEU Analysis
End of explanation
# Training Set BLEU n-pairs Scores
print_bleu(report['n_pairs_bleu_train'])
# Validation Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_valid'])
# Test Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_test'])
# Combined n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_all'])
# Ground Truth n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_gold'])
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores for the ground truth, while high scores can expose hyper-common generations
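As a rough sketch of how such pairwise scores could be computed (the helper below is illustrative and assumes NLTK's corpus_bleu; it is not necessarily how the numbers in this report were generated):
import random
from nltk.translate.bleu_score import corpus_bleu

def n_pairs_bleu(sentences, n=1000):
    # Randomly pair up sentences and score one side of each pair against the
    # other, as if it were a reference translation.
    pairs = [random.sample(sentences, 2) for _ in range(n)]
    references = [[first.split()] for first, _ in pairs]
    hypotheses = [second.split() for _, second in pairs]
    return corpus_bleu(references, hypotheses)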
End of explanation
print 'Average (Train) Generated Score: ', report['average_alignment_train']
print 'Average (Valid) Generated Score: ', report['average_alignment_valid']
print 'Average (Test) Generated Score: ', report['average_alignment_test']
print 'Average (All) Generated Score: ', report['average_alignment_all']
print 'Average Gold Score: ', report['average_alignment_gold']
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores
End of explanation |
14,980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Selection For Machine Learning
In this exercise, we will explore methods to do model selection in a machine learning context, in particular cross-validation and information criteria. At the same time, we'll learn about scikit-learn's class structure and how to build a pipeline.
Why Model Selection?
There are several reasons why you might want to perform model selection
Step2: First, we're going to need some data. We'll work with the star-galaxy data from the first session. This uses the astroquery package and then queries the top 10000 observations from SDSS (see this exercise for more details)
Step3: Exercise 1
Step4: Exercise 2c
Step5: Exercise 5
Step6: Note
Step7: Comparing Algorithms
So far, we've just picked PCA because it's common. But what if there's a better algorithm for dimensionality reduction out there for our problem? Or what if you'd want to compare random forests to other classifiers?
In this case, your best option is to split off a separate validation set, perform cross-validation for each algorithm separately, and then compare the results using hold-out cross validation and your validation set (Note
Step8: Challenge Problem
Step11: Even More Challenging Challenge Problem
Step12: Here are the important things about writing transformer objects for use in scikit-learn
Step13: Now let's make a feature set that combines this feature with the PCA features
Step14: Now we can build the pipeline
Step15: Choosing The Right Scoring Function
As a standard, the algorithms in scikit-learn use accuracy to score results. The accuracy is basically the raw fraction of correctly classified samples in your validation or test set.
Question
Step16: We have now made a really imbalanced data set with many galaxies and only a few stars
Step17: Exercise 10
Step18: Exercise 11 | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# comment out this line if you don't have seaborn installed
import seaborn as sns
sns.set_palette("colorblind")
import numpy as np
Explanation: Model Selection For Machine Learning
In this exercise, we will explore methods to do model selection in a machine learning context, in particular cross-validation and information criteria. At the same time, we'll learn about scikit-learn's class structure and how to build a pipeline.
Why Model Selection?
There are several reasons why you might want to perform model selection:
You might not be sure which machine learning algorithm is most appropriate.
The algorithm you have chosen might have a regularization parameter whose value you want to determine.
The algorithm you have chosen might have other parameters (e.g. the depth of a decision tree, the number of clusters in KMeans, ...) you would like to determine.
You might not be sure which of your features are the most useful/predictive.
Question: Can you think of other reasons and contexts in which model selection might be important?
Your decisions for how to do model selection depend very strongly (like everything else in machine learning) on the purpose of your machine learning procedure. Is your main purpose to accurately predict outcomes for new samples? Or are you trying to infer something about the system?
Inference generally restricts the number of algorithms you can reasonably use, and also the number of model selection procedures you can apply. In the following, assume that everything below works for prediction problems; I will point out methods for inference where appropriate. Additionally, assume that everything below works for supervised machine learning. We will cover unsupervised methods further below.
Imports
Let's first import some stuff we're going to need.
End of explanation
# execute this line:
from astroquery.sdss import SDSS
TSquery = SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
ORDER BY p.objid ASC
SDSSts = SDSS.query_sql(TSquery)
SDSSts
Explanation: First, we're going to need some data. We'll work with the star-galaxy data from the first session. This uses the astroquery package and then queries the top 10000 observations from SDSS (see this exercise for more details):
End of explanation
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
# set the random state
rs = 23
# extract feature names, remove class
# cast astropy table to pandas and then to a numpy array, remove classes
# our classes are the outcomes to classify on
# let's do a split in training and test set:
# we'll leave the test set for later.
# instantiate the random forest classifier:
# do a grid search over the free random forest parameters:
pars =
grid_results =
Explanation: Exercise 1: Visualize this data set. What representation is most appropriate, do you think?
Exercise 2: Let's now do some machine learning. In this exercise, you are going to use a random forest classifier to classify this data set. Here are the steps you'll need to perform:
* Split the column with the classes (stars and galaxies) from the rest of the data
* Cast the features and the classes to numpy arrays
* Split the data into a test set and a training set. The training set will be used to train the classifier; the test set we'll reserve for the very end to test the final performance of the model (more on this on Friday). You can use the scikit-learn function train_test_split for this task
* Define a RandomForest object from the sklearn.ensemble module. Note that the RandomForest class has three parameters:
- n_estimators: The number of decision trees in the random forest
- max_features: The maximum number of features to use for the decision trees
- min_samples_leaf: The minimum number of samples that need to end up in a terminal leaf (this effectively limits the number of branchings each tree can make)
* We'll want to use cross-validation to decide between parameters. You can do this with the scikit-learn class GridSearchCV. This class takes a classifier as an input, along with a dictionary of the parameter values to search over.
In the earlier lecture, you learned about four different types of cross-validation:
* hold-out cross validation, where you take a single validation set to compare your algorithm's performance to
* k-fold cross validation, where you split your training set into k subsets, each of which holds out a different portion of the data
* leave-one-out cross validation, where you have N different subsets, each of which leaves just one sample as a validation set
* random subset cross validation, where you pick a random subset of your data points k times as your validation set.
Exercise 2a: Which of the four algorithms is most appropriate here? And why?
Answer: In this case, k-fold CV or random subset CV seem to be the most appropriate algorithms to use.
* Using hold-out cross validation leads to a percentage of the data not being used for training at all.
* Given that the data set is not too huge, using k-fold CV probably won't slow down the ML procedure too much.
* LOO CV is particularly useful for small data sets, where even training on a subset of the training data is difficult (for example because there are only very few examples of a certain class).
* Random subset CV could also yield good results, since there's no real ordering to the training data. Do not use this algorithm when the ordering matters (for example in Hidden Markov Models)
Important: One important thing to remember is that cross-validation crucially depends on your samples being independent of each other. Be sure that this is the case before using it. For example, say you want to classify images of galaxies, but your data set is small, and you're not sure whether your algorithm is rotation independent. So you might choose to use the same images multiple times in your training data set, but rotated by a random degree. In this case, you have to make sure all versions of the same image are included in the same data set (either the training, the validation or the test set), and not split across data sets! If you don't, your algorithm will be unreasonably confident in its accuracy (because you are training and validating essentially on the same data points).
Note that scikit-learn can actually deal with that! The class GroupKFold allows k-fold cross validation using an array of indices for your training data. Validation sets will only be split among samples with different indices.
But this was just an aside. Last time, you used a random forest and used k-fold cross validation to effectively do model selection for the different parameters that the random forest classifier uses.
Exercise 2b: Now follow the instructions above and implement your random forest classifier.
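One possible way to fill in the blanks above (a sketch; the exact parameter grid is a judgment call, and the names X and y for the feature matrix and labels are reused in later cells):
feats = list(SDSSts.columns)
feats.remove('class')          # assumes the class column is named 'class'

X = np.array(SDSSts[feats].to_pandas())   # feature matrix
y = np.array(SDSSts['class'])             # class labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                     random_state=rs)

rfc = RandomForestClassifier(random_state=rs)
pars = {'n_estimators': [10, 100, 300],
        'max_features': [2, 4, 8],
        'min_samples_leaf': [1, 10]}

grid_results = GridSearchCV(rfc, pars, cv=5, n_jobs=-1)
grid_results.fit(X_train, y_train)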
End of explanation
from sklearn.decomposition import PCA
# instantiate the PCA object
pca =
# fit and transform the samples:
X_pca =
# make a plot of the PCA components colour-coded by stars and galaxies
fig, ax = plt.subplots(1, 1, figsize=(12,8))
Explanation: Exercise 2c: Take a look at the different validation scores for the different parameter combinations. Are they very different or are they similar?
It looks like the scores are very similar, with very small variance between the different cross-validation folds. It can be useful to look at this kind of summary to see, for example, whether there is a large variance in the cross-validation results.
Cross-validating Multiple Model Components
In most machine learning applications, your machine learning algorithm might not be the only component having free parameters. You might not even be sure which machine learning algorithm to use!
For demonstration purposes, imagine you have many features, but many of them might be correlated. A standard dimensionality reduction technique to use is Principal Component Analysis.
Exercise 4: The number of features in our present data set is pretty small, but let's nevertheless attempt to reduce dimensionality with PCA. Run a PCA decomposition in 2 dimensions and plot the results. Colour-code stars versus galaxies. How well do they separate along the principal components?
Hint: Think about whether you can run PCA on training and test set separately, or whether you need to run it on both together before doing the train-test split?
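One possible sketch of the fill-in (here PCA is fit on the full feature matrix X from Exercise 2 before any splitting, which is the simplest choice but means the transformation has seen the test features):
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

fig, ax = plt.subplots(1, 1, figsize=(12, 8))
for cls in [b'STAR', b'GALAXY']:
    idx = (y == cls)
    ax.scatter(X_pca[idx, 0], X_pca[idx, 1], s=5, alpha=0.5, label=cls)
ax.set_xlabel('PCA component 1')
ax.set_ylabel('PCA component 2')
ax.legend()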
End of explanation
# Train PCA on training data set
# apply to test set
# instantiate the random forest classifier:
# do a grid search over the free random forest parameters:
pars =
grid_results =
Explanation: Exercise 5: Re-do the classification on the PCA components instead of the original features. Does it work better or worse than the classification on the original features?
End of explanation
from sklearn.pipeline import Pipeline
# make a list of name-estimator tuples
estimators =
# instantiate the pipeline
pipe =
# make a dictionary of parameters
params =
# perform the grid search
grid_search =
Explanation: Note: In general, you should (cross-)validate both your data transformations and your classifiers!
But how do we know whether two was really the right number of components to choose? Perhaps it should have been three? Or four? Ideally, we would like to include the feature engineering in our cross validation procedure. In principle, you can do this by running a complicated for-loop. In practice, this is what scikit-learn's Pipeline is for! A Pipeline object takes a list of tuples of ("string", ScikitLearnObject) pairs as input and strings them together (your feature vector X will be put first through the first object, then the second object and so on sequentially).
Note: scikit-learn distinguishes between transformers (i.e. classes that transform the features into something else, like PCA, t-SNE, StandardScaler, ...) and predictors (i.e. classes that produce predictions, such as random forests, logistic regression, ...). In a pipeline, all but the last objects must be transformers; the last object can be either.
Exercise 6: Make a pipeline including (1) a PCA object and (2) a random forest classifier. Cross-validate both the PCA components and the parameters of the random forest classifier. What is the best number of PCA components to use?
Hint: You can also use the convenience function make_pipeline to create your pipeline.
Hint: Check the documentation for the precise notation to use for cross-validating parameters.
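A minimal sketch of such a pipeline (the double-underscore notation ties each parameter to the named step; X_train and y_train are assumed from the earlier split):
estimators = [('pca', PCA()), ('clf', RandomForestClassifier(random_state=rs))]
pipe = Pipeline(estimators)

params = {'pca__n_components': [2, 3, 4, 6, 8],
          'clf__n_estimators': [100, 300],
          'clf__min_samples_leaf': [1, 10]}

grid_search = GridSearchCV(pipe, params, cv=5, n_jobs=-1)
grid_search.fit(X_train, y_train)
print(grid_search.best_params_)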
End of explanation
# First, let's redo the train-test split to split the training data
# into training and hold-out validation set
# make a list of name-estimator tuples
estimators =
# instantiate the pipeline
pipe =
# make a dictionary of parameters
params =
# perform the grid search
grid_search =
# complete the print functions
print("Best score: ")
print("Best parameter set: " )
print("Validation score for model with PCA: ")
# Now repeat the same procedure with the second algorithm you've picked.
Explanation: Comparing Algorithms
So far, we've just picked PCA because it's common. But what if there's a better algorithm for dimensionality reduction out there for our problem? Or what if you'd want to compare random forests to other classifiers?
In this case, your best option is to split off a separate validation set, perform cross-validation for each algorithm separately, and then compare the results using hold-out cross validation and your validation set (Note: Do not use your test set for this! Your test set is only used for your final error estimate!)
Doing CV across algorithms is difficult, since the KFoldCV object needs to know which parameters belong to which algorithms, which is difficult to do.
Exercise 7: Pick an algorithm from the manifold learning library in scikit-learn, cross-validate a random forest for both, and compare the performance of both.
Important: Do not choose t-SNE. The reason is that t-SNE does not generalize to new samples! This means while it's useful for data visualization, you cannot train a t-SNE transformation (in the scikit-learn implementation) on one part of your data and apply it to another!
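A sketch of one possible comparison, using Isomap as the alternative (any manifold learner with a transform method that generalizes to new samples would work; X_train and y_train are assumed from the earlier split):
from sklearn.manifold import Isomap

X_train_cv, X_val, y_train_cv, y_val = train_test_split(
    X_train, y_train, test_size=0.3, random_state=rs)

for name, transformer in [('pca', PCA()), ('isomap', Isomap())]:
    pipe = Pipeline([(name, transformer),
                     ('clf', RandomForestClassifier(random_state=rs))])
    params = {name + '__n_components': [2, 4, 6],
              'clf__n_estimators': [100, 300]}
    grid = GridSearchCV(pipe, params, cv=5, n_jobs=-1)
    grid.fit(X_train_cv, y_train_cv)
    print("{}: validation score {:.4f}".format(name, grid.score(X_val, y_val)))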
End of explanation
from sklearn.linear_model import LogisticRegressionCV
lr =
Explanation: Challenge Problem: Interpreting Results
Earlier today, we talked about interpreting machine learning models. Let's see how you would go about this in practice.
Repeat your classification with a logistic regression model.
Is the logistic regression model easier or harder to interpret? Why?
Assume you're interested in which features are the most relevant to your classification (because they might have some bearing on the underlying physics). Would you do your classification on the original features or the PCA transformation? Why?
Change the subset of parameters used in the logistic regression models. Look at the weights. Do they change? How? Does that affect your interpretability?
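A minimal sketch for the logistic regression part (X_train and y_train assumed as before; LogisticRegressionCV picks the regularization strength by internal cross-validation):
lr = LogisticRegressionCV(Cs=10, cv=5)
lr.fit(X_train, y_train)

# The learned weights attach directly to the input features, which is what
# makes this model comparatively easy to interpret.
print(lr.coef_)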
End of explanation
from sklearn.base import BaseEstimator, TransformerMixin
class RebinTimeseries(BaseEstimator, TransformerMixin):
def __init__(self, n=4, method="average"):
Initialize hyperparameters
:param n: number of samples to bin
:param method: "average" or "sum" the samples within a bin?
:return:
self.n = n ## save number of bins to average together
self.method = method
return
def fit(self,X):
I don't really need a fit method!
## set number of light curves (L) and
## number of samples per light curve (k)
return self
def transform(self, X):
self.L, self.K = X.shape
## set the number of binned samples per light curve
K_binned = int(self.K/self.n)
## if the number of samples in the original light curve
## is not divisible by n, then chop off the last few samples of
## the light curve to make it divisible
#print("X shape: " + str(X.shape))
if K_binned*self.n < self.K:
X = X[:,:self.n*K_binned]
## the array for the new, binned light curves
X_binned = np.zeros((self.L, K_binned))
if self.method in ["average", "mean"]:
method = np.mean
elif self.method == "sum":
method = np.sum
else:
raise Exception("Method not recognized!")
#print("X shape: " + str(X.shape))
#print("L: " + str(self.L))
for i in xrange(self.L):
t_reshape = X[i,:].reshape((K_binned, self.n))
X_binned[i,:] = method(t_reshape, axis=1)
return X_binned
def predict(self, X):
pass
def score(self, X):
pass
def fit_transform(self, X, y=None):
self.fit(X)
X_binned = self.transform(X)
return X_binned
Explanation: Even More Challenging Challenge Problem: Implementing Your Own Estimator
Sometimes, you might want to use algorithms, for example for feature engineering, that are not implemented in scikit-learn. But perhaps these transformations still have free parameters to estimate. What to do?
scikit-learn classes inherit from certain base classes that make it easy to implement your own objects. Below is an example I wrote for a machine learning model on time series, where I wanted to re-bin the time series in different ways and and optimize the rebinning factor with respect to the classification afterwards.
End of explanation
class PSFMagThreshold(BaseEstimator, TransformerMixin):
def __init__(self, p=1.45,):
def fit(self,X):
def transform(self, X):
def predict(self, X):
def score(self, X):
def fit_transform(self, X, y=None):
Explanation: Here are the important things about writing transformer objects for use in scikit-learn:
* The class must have the following methods:
- fit: fit your training data
- transform: transform your training data into the new representation
- predict: predict new examples
- score: score predictions
- fit_transform is optional (I think)
* The __init__ method only sets up parameters. Don't put any relevant code in there (this is convention more than anything else, but it's a good one to follow!)
* The fit method is always called in a Pipeline object (either on its own or as part of fit_transform). It usually modifies the internal state of the object, so returning self (i.e. the object itself) is usually fine.
* For transformer objects, which don't need scoring and prediction methods, you can simply leave those methods as pass, as above.
Exercise 8: Last time, you learned that the SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data:
$$\mathtt{psfMag} - \mathtt{cmodelMag} \gt 0.145,$$
sources that satisfy this criterion are considered galaxies.
Implement an object that takes $\mathtt{psfMag}$ and $\mathtt{cmodelMag}$ as inputs and has a free parameter p that sets the value above which a source is considered a galaxy.
Implement a transform method that returns a single binary feature that is one if $$\mathtt{psfMag} - \mathtt{cmodelMag} \gt p$$ and zero otherwise.
Add this feature to your optimized set of features consisting of either the PCA or your alternative representation, and run a random forest classifier on both. Run a CV on all components involved.
Hint: $\mathtt{psfMag}$ and $\mathtt{cmodelMag}$ are the first and the last column in your feature vector, respectively.
Hint: You can use FeatureUnion to combine the outputs of two transformers in a single data set. (Note that putting all three objects in a single Pipeline would chain them sequentially, rather than computing the feature union followed by a classifier.) You can input your FeatureUnion object into Pipeline.
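A sketch of one possible implementation of the transformer (per the hint, psfMag is assumed to be column 0 and cModelMag the last column of the feature matrix):
class PSFMagThreshold(BaseEstimator, TransformerMixin):

    def __init__(self, p=1.45):
        # only store the hyperparameter here
        self.p = p

    def fit(self, X, y=None):
        # nothing to learn for a fixed threshold
        return self

    def transform(self, X):
        # binary feature: 1 if psfMag - cModelMag > p, else 0
        diff = X[:, 0] - X[:, -1]
        return (diff > self.p).astype(np.float64).reshape(-1, 1)

    def fit_transform(self, X, y=None):
        return self.fit(X, y).transform(X)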
End of explanation
from sklearn.pipeline import FeatureUnion
transformers =
feat_union =
X_transformed =
Explanation: Now let's make a feature set that combines this feature with the PCA features:
End of explanation
# combine the transformers
transformers =
# make the feature union
feat_union =
# combine estimators for the pipeline
estimators =
# define the pipeline object
pipe_c =
# make the parameter set
params =
# perform the grid search
grid_search_c =
# complete the print statements:
print("Best score: ")
print("Best parameter set: ")
print("Validation score: ")
Explanation: Now we can build the pipeline:
End of explanation
# all stars
star_ind = np.argwhere(y == b"STAR").T[0]
# all galaxies
galaxy_ind = np.argwhere(y == b"GALAXY").T[0]
np.random.seed(100)
# new array with much fewer stars
star_ind_new = np.random.choice(star_ind, replace=False, size=int(len(star_ind)/80.0))
X_new = np.vstack((X[galaxy_ind], X[star_ind_new]))
y_new = np.hstack((y[galaxy_ind], y[star_ind_new]))
Explanation: Choosing The Right Scoring Function
As a standard, the algorithms in scikit-learn use accuracy to score results. The accuracy is basically the raw fraction of correctly classified samples in your validation or test set.
Question: Is this scoring function always the best method to use? Why (not)? Can you think of alternatives to use?
Let's make a heavily biased data set:
End of explanation
print(len(y_new[y_new == b"GALAXY"]))
print(len(y_new[y_new == b"STAR"]))
Explanation: We have now made a really imbalanced data set with many galaxies and only a few stars:
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score
X_train2, X_test2, y_train2, y_test2 = train_test_split(X_new, y_new,
test_size = 0.3,
random_state = 20)
C_all =
for C in C_all:
lr =
# ... insert code here ...
# make predictions for the validation set
y_pred =
# print accuracy score for this regularization:
# make and print a confusion matrix
cm =
Explanation: Exercise 10: Run a logistic regression classifier on this data, for a very low regularization (0.0001) and a
very large regularization (10000) parameter. Print the accuracy and a confusion matrix of the results for each run. How many mis-classified samples are in each? Where do the mis-classifications end up? If you were to run a cross validation on this, could you be sure to get a good model? Why (not)?
As part of this exercise, you should plot a confusion matrix. A confusion matrix takes the true labels and the predicted labels, and then plots a grid for all samples where true labels and predicted labels match versus do not match. You can use the scikit-learn function confusion_matrix to create one. pyplot.matshow is useful for plotting it, but just printing it on the screen works pretty well, too (at least for the two classes considered here).
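One possible fill-in for the loop above (a sketch):
C_all = [0.0001, 10000]

for C in C_all:
    lr = LogisticRegression(C=C)
    lr.fit(X_train2, y_train2)
    y_pred = lr.predict(X_test2)

    print("C = {}".format(C))
    print("accuracy: {:.4f}".format(accuracy_score(y_test2, y_pred)))
    print(confusion_matrix(y_test2, y_pred))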
End of explanation
for C in C_all:
lr =
# ... insert code here ...
# predict the validation set
y_pred = lr.predict(X_test2)
# print both accuracy and F1 score for comparison:
# create and plot a confusion matrix:
cm =
Explanation: Exercise 11: Take a look at the metrics implemented for model evaluation in scikit-learn, in particular the different versions of the F1 score. Is there a metric that may be more suited to the task above? Which one?
Hint: Our imbalanced class, the one we're interested in, is the STAR class. Make sure you set the keyword pos_label in the f1_score function correctly.
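A sketch of the same loop scored with the F1 score for the STAR class (reusing C_all from above; pos_label must match the label of the minority class):
from sklearn.metrics import f1_score

for C in C_all:
    lr = LogisticRegression(C=C)
    lr.fit(X_train2, y_train2)
    y_pred = lr.predict(X_test2)

    print("C = {}".format(C))
    print("accuracy: {:.4f}".format(accuracy_score(y_test2, y_pred)))
    print("F1 (STAR): {:.4f}".format(f1_score(y_test2, y_pred, pos_label=b"STAR")))
    print(confusion_matrix(y_test2, y_pred))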
End of explanation |
14,981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural network hybrid recommendation system on Google Analytics data preprocessing
This notebook demonstrates how to implement a hybrid recommendation system using a neural network to combine content-based and collaborative filtering recommendation models using Google Analytics data. We are going to use the learned user embeddings from wals.ipynb and combine that with our previous content-based features from content_based_using_neural_networks.ipynb
First we are going to preprocess our data using BigQuery and Cloud Dataflow to be used in our later neural network hybrid recommendation model.
Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select python2.
Step1: Now restart notebook's session kernel!
Step3: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
First, let's create our hybrid dataset query that we will use in our Cloud Dataflow pipeline. This will combine some content-based features and the user and item embeddings learned from our WALS Matrix Factorization Collaborative filtering lab that we extracted from our trained WALSMatrixFactorization Estimator and uploaded to BigQuery.
Step4: Let's pull a sample of our data into a dataframe to see what it looks like.
Step5: Let's check our files to make sure everything went as expected
Step7: <h2> Create vocabularies using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Now we'll create our vocabulary files for our categorical features.
Step10: Also get vocab counts from the length of the vocabularies
Step11: Let's check our files to make sure everything went as expected | Python Code:
%%bash
conda update -y -n base -c defaults conda
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz
pip install apache-beam[gcp]==2.9.0
Explanation: Neural network hybrid recommendation system on Google Analytics data preprocessing
This notebook demonstrates how to implement a hybrid recommendation system using a neural network to combine content-based and collaborative filtering recommendation models using Google Analytics data. We are going to use the learned user embeddings from wals.ipynb and combine that with our previous content-based features from content_based_using_neural_networks.ipynb
First we are going to preprocess our data using BigQuery and Cloud Dataflow to be used in our later neural network hybrid recommendation model.
Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select python2.
End of explanation
# Import helpful libraries and setup our project, bucket, and region
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "1.13"
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: Now restart notebook's session kernel!
End of explanation
query_hybrid_dataset =
WITH CTE_site_history AS (
SELECT
fullVisitorId as visitor_id,
(SELECT MAX(IF(index = 10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id,
(SELECT MAX(IF(index = 7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category,
(SELECT MAX(IF(index = 6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title,
(SELECT MAX(IF(index = 2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list,
SPLIT(RPAD((SELECT MAX(IF(index = 4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') AS year_month_array,
LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) AS nextCustomDimensions
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND
fullVisitorId IS NOT NULL
AND
hits.time != 0
AND
hits.time IS NOT NULL
AND
(SELECT MAX(IF(index = 10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
),
CTE_training_dataset AS (
SELECT
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) AS next_content_id,
visitor_id,
content_id,
category,
REGEXP_REPLACE(title, r",", "") AS title,
REGEXP_EXTRACT(author_list, r"^[^,]+") AS author,
DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970, 1, 1), MONTH) AS months_since_epoch
FROM
CTE_site_history
WHERE
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL)
SELECT
CAST(next_content_id AS STRING) AS next_content_id,
CAST(training_dataset.visitor_id AS STRING) AS visitor_id,
CAST(training_dataset.content_id AS STRING) AS content_id,
CAST(IFNULL(category, 'None') AS STRING) AS category,
CONCAT("\\"", REPLACE(TRIM(CAST(IFNULL(title, 'None') AS STRING)), "\\"",""), "\\"") AS title,
CAST(IFNULL(author, 'None') AS STRING) AS author,
CAST(months_since_epoch AS STRING) AS months_since_epoch,
IFNULL(user_factors._0, 0.0) AS user_factor_0,
IFNULL(user_factors._1, 0.0) AS user_factor_1,
IFNULL(user_factors._2, 0.0) AS user_factor_2,
IFNULL(user_factors._3, 0.0) AS user_factor_3,
IFNULL(user_factors._4, 0.0) AS user_factor_4,
IFNULL(user_factors._5, 0.0) AS user_factor_5,
IFNULL(user_factors._6, 0.0) AS user_factor_6,
IFNULL(user_factors._7, 0.0) AS user_factor_7,
IFNULL(user_factors._8, 0.0) AS user_factor_8,
IFNULL(user_factors._9, 0.0) AS user_factor_9,
IFNULL(item_factors._0, 0.0) AS item_factor_0,
IFNULL(item_factors._1, 0.0) AS item_factor_1,
IFNULL(item_factors._2, 0.0) AS item_factor_2,
IFNULL(item_factors._3, 0.0) AS item_factor_3,
IFNULL(item_factors._4, 0.0) AS item_factor_4,
IFNULL(item_factors._5, 0.0) AS item_factor_5,
IFNULL(item_factors._6, 0.0) AS item_factor_6,
IFNULL(item_factors._7, 0.0) AS item_factor_7,
IFNULL(item_factors._8, 0.0) AS item_factor_8,
IFNULL(item_factors._9, 0.0) AS item_factor_9,
FARM_FINGERPRINT(CONCAT(CAST(visitor_id AS STRING), CAST(content_id AS STRING))) AS hash_id
FROM
CTE_training_dataset AS training_dataset
LEFT JOIN
`cloud-training-demos.GA360_test.user_factors` AS user_factors
ON CAST(training_dataset.visitor_id AS FLOAT64) = CAST(user_factors.user_id AS FLOAT64)
LEFT JOIN
`cloud-training-demos.GA360_test.item_factors` AS item_factors
ON CAST(training_dataset.content_id AS STRING) = CAST(item_factors.item_id AS STRING)
Explanation: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
First, let's create our hybrid dataset query that we will use in our Cloud Dataflow pipeline. This will combine some content-based features and the user and item embeddings learned from our WALS Matrix Factorization Collaborative filtering lab that we extracted from our trained WALSMatrixFactorization Estimator and uploaded to BigQuery.
End of explanation
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
df_hybrid_dataset = bq.query(query_hybrid_dataset + "LIMIT 100").to_dataframe()
df_hybrid_dataset.head()
df_hybrid_dataset.describe()
import apache_beam as beam
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = "next_content_id,visitor_id,content_id,category,title,author,months_since_epoch".split(",")
FACTOR_COLUMNS = ["user_factor_{}".format(i) for i in range(10)] + ["item_factor_{}".format(i) for i in range(10)]
# Write out rows for each input row for each column in rowdict
data = ",".join(["None" if k not in rowdict else (rowdict[k].encode("utf-8") if rowdict[k] is not None else "None") for k in CSV_COLUMNS])
data += ","
data += ",".join([str(rowdict[k]) if k in rowdict else "None" for k in FACTOR_COLUMNS])
yield ("{}".format(data))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = "preprocess-hybrid-recommendation-features" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
if in_test_mode:
print("Launching local job ... hang on")
OUTPUT_DIR = "./preproc/features"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print("Launching Dataflow job {} ... hang on".format(job_name))
OUTPUT_DIR = "gs://{0}/hybrid_recommendation/preproc/features/".format(BUCKET)
try:
subprocess.check_call("gsutil -m rm -r {}".format(OUTPUT_DIR).split())
except:
pass
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": job_name,
"project": PROJECT,
"teardown_policy": "TEARDOWN_ALWAYS",
"no_save_main_session": True,
"max_num_workers": 6
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = "DirectRunner"
else:
RUNNER = "DataflowRunner"
p = beam.Pipeline(RUNNER, options = opts)
query = query_hybrid_dataset
if in_test_mode:
query = query + " LIMIT 100"
for step in ["train", "eval"]:
if step == "train":
selquery = "SELECT * FROM ({}) WHERE ABS(MOD(hash_id, 10)) < 9".format(query)
else:
selquery = "SELECT * FROM ({}) WHERE ABS(MOD(hash_id, 10)) = 9".format(query)
(p
| "{}_read".format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| "{}_csv".format(step) >> beam.FlatMap(to_csv)
| "{}_out".format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, "{}.csv".format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
Explanation: Let's pull a sample of our data into a dataframe to see what it looks like.
End of explanation
%%bash
rm -rf features
mkdir features
!gsutil -m cp -r gs://{BUCKET}/hybrid_recommendation/preproc/features/*.csv* features/
!head -3 features/*
Explanation: Let's check our files to make sure everything went as expected
End of explanation
query_vocabularies = """
SELECT
  CAST((SELECT MAX(IF(index = index_value, value, NULL)) FROM UNNEST(hits.customDimensions)) AS STRING) AS grouped_by
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
  UNNEST(hits) AS hits
WHERE
  # only include hits on pages
  hits.type = "PAGE"
  AND (SELECT MAX(IF(index = index_value, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
  grouped_by
"""
import apache_beam as beam
import datetime, os
def to_txt(rowdict):
# Pull columns from BQ and create a line
# Write out rows for each input row for grouped by column in rowdict
return "{}".format(rowdict["grouped_by"].encode("utf-8"))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = "preprocess-hybrid-recommendation-vocab-lists" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
if in_test_mode:
print("Launching local job ... hang on")
OUTPUT_DIR = "./preproc/vocabs"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print("Launching Dataflow job {} ... hang on".format(job_name))
OUTPUT_DIR = "gs://{0}/hybrid_recommendation/preproc/vocabs/".format(BUCKET)
try:
subprocess.check_call("gsutil -m rm -r {}".format(OUTPUT_DIR).split())
except:
pass
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": job_name,
"project": PROJECT,
"teardown_policy": "TEARDOWN_ALWAYS",
"no_save_main_session": True,
"max_num_workers": 6
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = "DirectRunner"
else:
RUNNER = "DataflowRunner"
p = beam.Pipeline(RUNNER, options = opts)
def vocab_list(index, name):
query = query_vocabularies.replace("index_value", "{}".format(index))
(p
| "{}_read".format(name) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))
| "{}_txt".format(name) >> beam.Map(to_txt)
| "{}_out".format(name) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, "{0}_vocab.txt".format(name))))
)
# Call vocab_list function for each
vocab_list(10, "content_id") # content_id
vocab_list(7, "category") # category
vocab_list(2, "author") # author
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
Explanation: <h2> Create vocabularies using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write the results out as text files.
Now we'll create our vocabulary files for our categorical features.
End of explanation
import apache_beam as beam
import datetime, os
def count_to_txt(rowdict):
# Pull columns from BQ and create a line
# Write out count
return "{}".format(rowdict["count_number"])
def mean_to_txt(rowdict):
# Pull columns from BQ and create a line
# Write out mean
return "{}".format(rowdict["mean_value"])
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = "preprocess-hybrid-recommendation-vocab-counts" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
if in_test_mode:
print("Launching local job ... hang on")
OUTPUT_DIR = "./preproc/vocab_counts"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print("Launching Dataflow job {} ... hang on".format(job_name))
OUTPUT_DIR = "gs://{0}/hybrid_recommendation/preproc/vocab_counts/".format(BUCKET)
try:
subprocess.check_call("gsutil -m rm -r {}".format(OUTPUT_DIR).split())
except:
pass
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": job_name,
"project": PROJECT,
"teardown_policy": "TEARDOWN_ALWAYS",
"no_save_main_session": True,
"max_num_workers": 6
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = "DirectRunner"
else:
RUNNER = "DataflowRunner"
p = beam.Pipeline(RUNNER, options = opts)
def vocab_count(index, column_name):
        query = """
        SELECT
            COUNT(*) AS count_number
        FROM ({})
        """.format(query_vocabularies.replace("index_value", "{}".format(index)))
(p
| "{}_read".format(column_name) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))
| "{}_txt".format(column_name) >> beam.Map(count_to_txt)
| "{}_out".format(column_name) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, "{0}_vocab_count.txt".format(column_name))))
)
def global_column_mean(column_name):
        query = """
        SELECT
            AVG(CAST({1} AS FLOAT64)) AS mean_value
        FROM ({0})
        """.format(query_hybrid_dataset, column_name)
(p
| "{}_read".format(column_name) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))
| "{}_txt".format(column_name) >> beam.Map(mean_to_txt)
| "{}_out".format(column_name) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, "{0}_mean.txt".format(column_name))))
)
# Call vocab_count function for each column we want the vocabulary count for
vocab_count(10, "content_id") # content_id
vocab_count(7, "category") # category
vocab_count(2, "author") # author
# Call global_column_mean function for each column we want the mean for
global_column_mean("months_since_epoch") # months_since_epoch
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
Explanation: Also compute the size (entry count) of each vocabulary and the global mean of months_since_epoch.
End of explanation
%%bash
rm -rf vocabs
mkdir vocabs
!gsutil -m cp -r gs://{BUCKET}/hybrid_recommendation/preproc/vocabs/*.txt* vocabs/
!head -3 vocabs/*
%%bash
rm -rf vocab_counts
mkdir vocab_counts
!gsutil -m cp -r gs://{BUCKET}/hybrid_recommendation/preproc/vocab_counts/*.txt* vocab_counts/
!head -3 vocab_counts/*
Explanation: Let's check our files to make sure everything went as expected
End of explanation |
14,982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Netdata Anomaly Detection Deepdive
This notebook will walk through a simplified python based implementation of the C & C++ code in netdata/netdata/ml/ used to power the anomaly detection capabilities of the Netdata agent.
The main goal here is to help interested users learn more about how the machine learning works under the hood. If you just want to get started by enabling ml on your agent you can check out these simple configuration steps.
🚧 Note
Step3: Inputs & Parameters
A full list of all the anomaly detection configuration parameters, and descriptions of each, can be found in the configuration section of the ml readme.
Below we will focus on some basic params to decide what data to pull and the main ml params of importance in understanding how it all works.
training size/scheduling parameters
Step4: 1. Get raw data<a id="get-raw-data"></a>
Next we will use the get_data() function from the netdata-pandas library to just pull down our raw data from the agent into a Pandas dataframe.
Step5: 2. Add some anomalous data<a id="add-some-anomalous-data"></a>
Below we will pick the last n_tail_anomalous observations and mess them up in some random but noticable way. In this case we randomly shuffle the data and then multiply each observation by some integer randomly chosen from integers_to_pick_randomly
Step6: In the plot below it should be clear that the light yellow section of the data has been messed with and is now "anomalous" or "strange looking" in comparison to all the data that comes before it.
Our goal now is to create some sort of anomaly score that can easily capture this.
Step7: 3. Lets do some ML!<a id="lets-do-some-ml"></a>
In this notebook we will just use good old kmeans from scikit-learn.
In reality the Netdata Agent uses the awesome dlib c++ library and the find_clusters_using_kmeans function along with a few others. You can see the Netdata KMeans code here.
The code below
Step8: Now we are ready to just loop over each row of data and produce anomaly scores once we have some trained models and train or retrain periodically as defined by train_every.
Note
Step9: The hard work is now all done. The above cell has processed all the data, trained or retrained models as defined by the inital config, and saved all anomaly scores and anomaly bits.
The rest of the notebook will try to help make more sense of all this.
Step10: In the dataframe above we see that each observation now also has a column with the __anomaly_score and one with the __anomaly_bit.
4. Lets visualize all this!<a id="lets-visualize-all-this"></a>
Now that we have our raw data, our anomaly scores, and our anomaly bits - we can plot this all side by side to get a clear picture of how it all works together.
In the plots below we see that during the light yellow "anomalous" period the "anomaly scores" get elevated to such an extend that many "anomaly bits" start flipping from 0 to 1 and essentially "turning on" to signal potentially anomalous data.
Step11: The last concept to introduce now is the "anomaly rate" which is really just an average over "anomaly bits".
For example, in the next cell we will just average all the anomaly bits across the light yellow window of time to find the anomaly rate for the metric within this window.
Step12: 5. So, how does it actually work?<a id="so-how-does-it-actually-work"></a>
In this final section of the notebook below we will dig in to try understand this a bit more intuitivley.
First we will "featureize" or "preprocess" all the data. Then we will explore what these feature vectors actually are, how they look, and how we derive anomaly scores based on thier distance to the models cluster centroids.
Step13: Now that we have preprocessed all of our data, lets just take a look at it.
You will see that we have essentially just added num_samples_to_lag additional columns to the dataframe, one for each lag. The numbers themselve also are now longer the original raw metric values, instead they have first been differenced (just take difference of latest value with pervious value so that we are working with delta's as opposed to original raw metric) and also smoothed (in this case by just averaging the previous num_samples_to_smooth previous differenced values).
The idea here is to define the representation that the model will work in. In this case the model will decide if a recent observation is anomalous based on it's corresponding feature vector which is a differenced, smoothed, and lagged array or list of recent values.
Step14: The model works based on these feature vectors. A lot of ML is about training a model to define some "compressed representation" of the training data that can then be useful for new data in some way.
This is exactly what our cluster models are trying to do. They process a big bag of preprocessed feature vectors, covering num_samples_to_train raw observations, during training to come up with the best, synthetic, n_clusters_per_dimension feature vectors as a useful compressed representation of the training data.
The cell below will just show you what those n_clusters_per_dimension (in this case 2) synthetic (made up by the kemans algo) feature vectors are.
Step15: At inference time we can now use our n_clusters_per_dimension cluster centers as a sort of set of "reference" feature vectors we can compare against.
When we see a new feature vector that is very far away from these "reference" feature vectors, we can take that as a signal that the recent data the feature vecotr was derived from may look significantly different than most of the data the clusters where initially train on. And as such it may be "anomalous" or "strange" in some way that might be meaningful to you are a user trying to monitor and troubleshoot systems based on these metrics.
To try make this visually clearer we will take 10 random feature vectors from the first half of our data where things were generally normal and we will also take 10 random feature vectors from the yellow anomalous period of time. Lastly we will also include the cluster centroids themselves to see how they compare to both sets of 10 feature vectors.
Basically this is represented in the heatmap below where each row is a processed feature vectors corresponding to some timestamp t.
Step16: You should see from the above heatmap that the top 10 rows all tend to have lower anomaly scores (AS) and anomaly bits (AB) that are 0. While its the opposite for rows 11-20.
The final two rows are the cluster centroids themselve which should look more similar to the first 10 rows, fairly different to rows 11-20. And of course, you would expect that each cluster centroid itself has a low anomaly score and non-anomalous anomaly bit.
Another way to present this visually is via line or bar plots. Below we just plot each of the rows above as a line. First all at the same time on one plot and then each one individually as a bar plot.
In the plot below the dotted lines correspond to the feature vectors sampled from the yellow anomalous period and as such we expect them to look very different to the solid lines (sampled from the normal period) and the solid circled lines which correspond to the two centroids. (Admittedly its not the clearest of plots to read since the normal lines all bunch together).
Step17: We can also just plot each feature vector itself as a bar plot with one bar for each of our 6 features. We have set the y-axis in the below plots to be fixed such that the differences are more obvious.
Step18: Lastly, an alternative way to visualize the distances between the three groups of points from the heatmap would be a series of scatter plots, one for each pair of features.
This should give a good intuition for the distance measures (in 6 dimensional feature space, lastest preprocessed observation plus the 5 lagged values) that underpin the raw anomaly score.
Generally we would expect to see the blue 'normal' points group closer together and 'near' the green cluster centroids, meanwhile the anomalous orange points should tend to be further away from the other two groups. | Python Code:
# uncomment the line below (when running in google colab) to install the netdata-pandas library, comment it again when done.
#!pip install netdata-pandas
from datetime import datetime, timedelta
import itertools
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.patches as mpatches
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist
from netdata_pandas.data import get_data
# helper functions
def preprocess_df(df, lags_n, diffs_n, smooth_n):
Given a pandas dataframe preprocess it to take differences, add smoothing, lags and abs values.
if diffs_n >= 1:
# take differences
df = df.diff(diffs_n).dropna()
if smooth_n >= 2:
# apply a rolling average to smooth out the data a bit
df = df.rolling(smooth_n).mean().dropna()
if lags_n >= 1:
# for each dimension add a new columns for each of lags_n lags of the differenced and smoothed values for that dimension
df_columns_new = [f'{col}_lag{n}' for n in range(lags_n+1) for col in df.columns]
df = pd.concat([df.shift(n) for n in range(lags_n + 1)], axis=1).dropna()
df.columns = df_columns_new
# sort columns to have lagged values next to each other for clarity when looking at the feature vectors
df = df.reindex(sorted(df.columns), axis=1)
# take absolute values as last step
df = abs(df)
return df
def add_shading_to_plot(ax, a, b, t, c='y', alpha=0.2):
Helper function to add shading to plot and add legend item.
plt.axvspan(a, b, color=c, alpha=alpha, lw=0)
handles, labels = ax.get_legend_handles_labels()
patch = mpatches.Patch(color=c, label=t, alpha=alpha)
handles.append(patch)
plt.legend(handles=handles)
Explanation: Netdata Anomaly Detection Deepdive
This notebook will walk through a simplified python based implementation of the C & C++ code in netdata/netdata/ml/ used to power the anomaly detection capabilities of the Netdata agent.
The main goal here is to help interested users learn more about how the machine learning works under the hood. If you just want to get started by enabling ml on your agent you can check out these simple configuration steps.
🚧 Note: This functionality is still under active development and considered experimental. Changes might cause the feature to break. We dogfood it internally and among early adopters within the Netdata community to build the feature. If you would like to get involved and help us with some feedback, email us at [email protected] or come join us in the 🤖-ml-powered-monitoring channel of the Netdata discord. Alternativley, if GitHub is more of your thing, feel free to create a GitHub discussion.
In this notebook we will:
Get raw data: Pull some recent data from one of our demo agents.
Add some anomalous data: Be evil and mess up the tail end of the data to make it obviously "anomalous".
Lets do some ML!: Implement an unsupervised clustering based approach to anomaly detection.
Lets visualize all this!: Plot and explore all this visually.
So, how does it actually work?: Dig a little deeper on what's going on under the hood.
Imports & Helper Functions
Uncomment and run the next cell to install netdata-pandas which we will use to easily pull data from the Netdata agent REST API into a nice clean Pandas DataFrame where it will be easier to work with.
Once you have netdata-pandas installed you can comment it back out and rerun the cell to clear the output.
End of explanation
# data params
hosts = ['london.my-netdata.io']
charts = ['system.cpu']
# if want to just focus on a subset of dims, in this case lets just pick one for simplicity
dims = ['system.cpu|user']
last_n_hours = 2
# based on last_n_hours define the relevant 'before' and 'after' params for the netdata rest api on the agent
before = int(datetime.utcnow().timestamp())
after = int((datetime.utcnow() - timedelta(hours=last_n_hours)).timestamp())
# ml params
train_every = 3600
num_samples_to_train = 3600
num_samples_to_diff = 1
num_samples_to_smooth = 3
num_samples_to_lag = 5
dimension_anomaly_score_threshold = 0.99
n_clusters_per_dimension = 2
max_iterations = 1000
Explanation: Inputs & Parameters
A full list of all the anomaly detection configuration parameters, and descriptions of each, can be found in the configuration section of the ml readme.
Below we will focus on some basic params to decide what data to pull and the main ml params of importance in understanding how it all works.
training size/scheduling parameters:
train_every: How often to train or retrain each model.
num_samples_to_train: How much of the recent data to train on, for example 3600 would mean training on the last 1 hour of raw data. The default in the netdata agent currently is 14400, so last 4 hours.
feature preprocessing related parameters:
num_samples_to_diff: This is really just a 1 or 0 flag to turn on or off differencing in the feature preprocessing. It defaults to 1 (to take differences) and generally should be left alone.
num_samples_to_smooth: The extent of smoothing (averaging) applied as part of feature preprocessing.
num_samples_to_lag: The number of previous values to also include in our feature vector.
anomaly score related parameters:
dimension_anomaly_score_threshold: The threshold on the anomaly score, above which the data is considered anomalous and the anomaly bit is set to 1 (it's actually set to 100 in reality, but this is just to make it behave more like a rate when aggregated in the netdata agent api). By default this is 0.99, which means anything with an anomaly score above 99% is considered anomalous. Decreasing this threshold makes the model more sensitive and will lead to more anomaly bits; increasing it does the opposite.
model parameters:
n_clusters_per_dimension: This is the number of clusters to fit for each model, by default it is set to 2 such that 2 cluster centroids will be fit for each model.
max_iterations: The maximum number of iterations the fitting of the clusters is allowed to take. In reality the clustering will converge a lot sooner than this.
Note: There is a much more detailed discussion of all of these configuration parameters in the "Configuration" section of the ml readme.
End of explanation
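# quick illustration (not part of the original notebook) of how the threshold above turns a
# normalized anomaly score into the anomaly bit; the same rule is applied inside the main loop further below
example_scores = [0.20, 0.95, 0.995]
print([100 if score >= dimension_anomaly_score_threshold else 0 for score in example_scores])  # -> [0, 0, 100]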
# get raw data
df = get_data(hosts=hosts, charts=charts, after=after, before=before)
# filter df for just the dims if set
if len(dims):
df = df[[dim for dim in dims]]
# set some variables based on our data
df_timestamp_min = df.index.min()
df_timestamp_max = df.index.max()
# print some info
print(df.shape)
print(df_timestamp_min, df_timestamp_max)
display(df.head())
# lets just plot each dimension to have a look at it
for col in df.columns:
# plot dimension, setting index to datetime so its more readable on the plot
df[[col]].set_index(pd.to_datetime(df.index, unit='s')).plot(title=f'Raw Data - {col}', figsize=(16,6))
Explanation: 1. Get raw data<a id="get-raw-data"></a>
Next we will use the get_data() function from the netdata-pandas library to just pull down our raw data from the agent into a Pandas dataframe.
End of explanation
# size of anomalous data
n_tail_anomalous = 500
integers_to_pick_randomly = [0,1,5,10]
# randomly scramble data and multiply randomly by some numbers to make it anomalous looking
anomalous_shape = (n_tail_anomalous, len(df.columns))
randomly_scrambled_data = np.random.choice(df.tail(n_tail_anomalous).values.reshape(-1,), anomalous_shape)
random_integers = np.random.choice(integers_to_pick_randomly, anomalous_shape)
data_anomalous = randomly_scrambled_data * random_integers
# create anomalous dataframe
df_anomalous = pd.DataFrame(data = data_anomalous, columns = df.columns)
# make sure it has the expected index since we don't want to shuffle that
df_anomalous.index = df.tail(n_tail_anomalous).index
# overwrite last n_tail observations with anomalous data
df.update(df_anomalous)
Explanation: 2. Add some anomalous data<a id="add-some-anomalous-data"></a>
Below we will pick the last n_tail_anomalous observations and mess them up in some random but noticeable way. In this case we randomly shuffle the data and then multiply each observation by some integer randomly chosen from integers_to_pick_randomly.
End of explanation
# let's just plot each dimension now that we have added some anomalous data
for col in df.columns:
ax = df.set_index(pd.to_datetime(df.index, unit='s')).plot(title=f'Anomalous Data Appended - {col}', figsize=(16,6))
add_shading_to_plot(ax, df_timestamp_max - n_tail_anomalous, df_timestamp_max, 'anomalous data')
Explanation: In the plot below it should be clear that the light yellow section of the data has been messed with and is now "anomalous" or "strange looking" in comparison to all the data that comes before it.
Our goal now is to create some sort of anomaly score that can easily capture this.
End of explanation
# initialize an empty kmeans model for each dimension
models = {
dim: {
'model' : KMeans(n_clusters=n_clusters_per_dimension, max_iter=max_iterations),
'fitted': False
} for dim in df.columns
}
# initialize dictionary for storing anomaly scores for each dim
anomaly_scores = {
dim: {
't' : [],
'anomaly_score': []
} for dim in df.columns
}
# initialize dictionary for storing anomaly bits for each dim
anomaly_bits = {
dim: {
't' : [],
'anomaly_bit': []
}
for dim in df.columns
}
Explanation: 3. Lets do some ML!<a id="lets-do-some-ml"></a>
In this notebook we will just use good old kmeans from scikit-learn.
In reality the Netdata Agent uses the awesome dlib c++ library and the find_clusters_using_kmeans function along with a few others. You can see the Netdata KMeans code here.
The code below:
Will initialize some empty objects to use during model training and inference.
Will loop over every observation and run training and inference in a similar way to how the Agent would process each observation.
Of course the Agent implementation is a lot more efficient, using streaming and buffer-based approaches as opposed to the fairly naive implementation below.
The idea in this notebook is to make the general approach as readable and understandable as possible.
End of explanation
# loop over each row of data in dataframe
for t, row in df.iterrows():
# get n based on timestamp
n = t - df_timestamp_min
# for each dimension, if we have a fitted model then make predictions
for dim in df.columns:
# if we have a fitted model, get anomaly score
if models[dim]['fitted']:
#################################
# Inference / Scoring
#################################
# get a buffer of recent data
buffer_size = num_samples_to_diff + num_samples_to_smooth + num_samples_to_lag * 2
df_dim_recent = df[[dim]].loc[(t-buffer_size):t]
# preprocess/featurize recent data
df_dim_recent_preprocessed = preprocess_df(
df_dim_recent,
num_samples_to_lag,
num_samples_to_diff,
num_samples_to_smooth
)
# take most recent feature vector
X = df_dim_recent_preprocessed.tail(1).values
# get the existing trained cluster centers
cluster_centers = models[dim]['model'].cluster_centers_
# get anomaly score based on the sum of the euclidian distances between the
# feature vector and each cluster centroid
raw_anomaly_score = np.sum(cdist(X, cluster_centers, metric='euclidean'), axis=1)[0]
# normalize anomaly score based on min-max normalization
# https://en.wikipedia.org/wiki/Feature_scaling#Rescaling_(min-max_normalization)
# the idea here is to convert the raw_anomaly_score we just computed into a number on a
# [0, 1] scale such that it behaves more like a percentage. We use the min and max raw scores
# observed during training to achieve this. This would mean that a normalized score of 1 would
# correspond to a distance as big as the biggest distance (most anomalous) observed on the
# training data. So scores that are 99% or higher will tend to be as strange or more strange
# as the most strange 1% observed during training.
# normalize based on scores observed during training the model
train_raw_anomaly_score_min = models[dim]['train_raw_anomaly_score_min']
train_raw_anomaly_score_max = models[dim]['train_raw_anomaly_score_max']
train_raw_anomaly_score_range = train_raw_anomaly_score_max - train_raw_anomaly_score_min
# normalize
anomaly_score = (raw_anomaly_score - train_raw_anomaly_score_min) / train_raw_anomaly_score_range
# The Netdata Agent does not actually store the normalized_anomaly_score since doing so would require more storage
# space for each metric, essentially doubling the amount of metrics that need to be stored. Instead, the Netdata Agent
# makes use of an existing bit (the anomaly bit) in the internal storage representation used by netdata. So if the
# normalized_anomaly_score passed the dimension_anomaly_score_threshold netdata will flip the corresponding anomaly_bit
# from 0 to 1 to signify that the observation the scored feature vector is considered "anomalous".
# All without any extra storage overhead required for the Netdata Agent database! Yes it's almost magic :)
# get anomaly bit
anomaly_bit = 100 if anomaly_score >= dimension_anomaly_score_threshold else 0
# save anomaly score
anomaly_scores[dim]['t'].append(t)
anomaly_scores[dim]['anomaly_score'].append(anomaly_score)
# save anomaly bit
anomaly_bits[dim]['t'].append(t)
anomaly_bits[dim]['anomaly_bit'].append(anomaly_bit)
# check if the model needs (re)training
if (n >= num_samples_to_train) & (n % train_every == 0):
#################################
# Train / Re-Train
#################################
train_before = t - num_samples_to_train
train_after = t
print(f'train at t={t}, (n={n}, train_after={train_after}, train_before={train_before})')
# loop over each dimension/model
for dim in df.columns:
# get training data based on most recent num_samples_to_train
df_dim_train = df[[dim]].loc[(t-num_samples_to_train):t]
# preprocess/featurize training data
df_dim_train_preprocessed = preprocess_df(
df_dim_train,
num_samples_to_lag,
num_samples_to_diff,
num_samples_to_smooth
)
# fit model using the fit method of kmeans
models[dim]['model'].fit(df_dim_train_preprocessed.values)
models[dim]['fitted'] = True # mark model as fitted
# get cluster centers of model we just trained
cluster_centers = models[dim]['model'].cluster_centers_
# get training scores, needed to get min and max scores for normalization at inference time
train_raw_anomaly_scores = np.sum(cdist(df_dim_train_preprocessed.values, cluster_centers, metric='euclidean'), axis=1)
# save min and max anomaly score during training, used to normalize all scores to be 0,1 scale
models[dim]['train_raw_anomaly_score_min'] = min(train_raw_anomaly_scores)
models[dim]['train_raw_anomaly_score_max'] = max(train_raw_anomaly_scores)
Explanation: Now we are ready to just loop over each row of data and produce anomaly scores once we have some trained models and train or retrain periodically as defined by train_every.
Note: The Netdata Agent implementation spreads out the training across each train_every window as opposed to trying to train all models in one go like the below implementation. It also avoids some obvious edge cases where there is no need to retrain, for example when the data has not changed at all since the last model was trained.
End of explanation
# create dataframe of anomaly scores
df_anomaly_scores = pd.DataFrame()
for dim in anomaly_scores:
df_anomaly_scores_dim = pd.DataFrame(data=zip(anomaly_scores[dim]['t'],anomaly_scores[dim]['anomaly_score']),columns=['time_idx',f'{dim}__anomaly_score']).set_index('time_idx')
df_anomaly_scores = df_anomaly_scores.join(df_anomaly_scores_dim, how='outer')
# create dataframe of anomaly bits
df_anomaly_bits = pd.DataFrame()
for dim in anomaly_bits:
df_anomaly_bits_dim = pd.DataFrame(data=zip(anomaly_bits[dim]['t'],anomaly_bits[dim]['anomaly_bit']),columns=['time_idx',f'{dim}__anomaly_bit']).set_index('time_idx')
df_anomaly_bits = df_anomaly_bits.join(df_anomaly_bits_dim, how='outer')
# join anomaly scores to raw df
df_final = df.join(df_anomaly_scores, how='outer')
# join anomaly bits to raw df
df_final = df_final.join(df_anomaly_bits, how='outer')
# let's look at a sample of some scored observations
display(df_final.tail(num_samples_to_train).sample(5))
Explanation: The hard work is now all done. The above cell has processed all the data, trained or retrained models as defined by the initial config, and saved all anomaly scores and anomaly bits.
The rest of the notebook will try to help make more sense of all this.
End of explanation
figsize = (20,4)
for dim in models:
# create a dim with the raw data, anomaly score and anomaly bit for the dim
df_final_dim = df_final[[dim,f'{dim}__anomaly_score',f'{dim}__anomaly_bit']]
# plot raw data, including the anomalous data
ax = df_final_dim[[dim]].set_index(pd.to_datetime(df_final_dim.index, unit='s')).plot(
title=f'Raw Data (Anomalous Appended) - {dim}', figsize=figsize
)
add_shading_to_plot(ax, df_timestamp_max - n_tail_anomalous, df_timestamp_max, 'Anomalous Data')
# plot the corresponding anomaly scores
ax = df_final_dim[[f'{dim}__anomaly_score']].set_index(pd.to_datetime(df_final_dim.index, unit='s')).plot(
title=f'Anomaly Score - {dim}', figsize=figsize
)
add_shading_to_plot(ax, df_timestamp_max - n_tail_anomalous, df_timestamp_max, 'Anomalous Data')
# plot the corresponding anomaly bits
ax = df_final_dim[[f'{dim}__anomaly_bit']].set_index(pd.to_datetime(df_final_dim.index, unit='s')).plot(
title=f'Anomaly Bit - {dim}', figsize=figsize
)
add_shading_to_plot(ax, df_timestamp_max - n_tail_anomalous, df_timestamp_max, 'Anomalous Data')
# finally, plot it all on the same plot (which might not be so easy or clear to read)
df_final_dim_normalized = (df_final_dim-df_final_dim.min())/(df_final_dim.max()-df_final_dim.min())
ax = df_final_dim_normalized.set_index(pd.to_datetime(df_final_dim_normalized.index, unit='s')).plot(
title=f'Combined (Raw, Score, Bit) - {dim}', figsize=figsize
)
add_shading_to_plot(ax, df_timestamp_max - n_tail_anomalous, df_timestamp_max, 'Anomalous Data')
Explanation: In the dataframe above we see that each observation now also has a column with the __anomaly_score and one with the __anomaly_bit.
4. Lets visualize all this!<a id="lets-visualize-all-this"></a>
Now that we have our raw data, our anomaly scores, and our anomaly bits - we can plot this all side by side to get a clear picture of how it all works together.
In the plots below we see that during the light yellow "anomalous" period the "anomaly scores" get elevated to such an extent that many "anomaly bits" start flipping from 0 to 1 and essentially "turning on" to signal potentially anomalous data.
End of explanation
# average the anomaly bits within the n_tail_anomalous period of the data
n_tail_anomalous_anomaly_rate = df_final_dim[[f'{dim}__anomaly_bit']].tail(n_tail_anomalous).mean()[0]
print(f'n_tail_anomalous_anomaly_rate = {n_tail_anomalous_anomaly_rate}%')
print(f'\nThis means the "anomaly rate" within the yellow period of anomalous data was {n_tail_anomalous_anomaly_rate}%')
print(f'\nAnother way to think of this is that {n_tail_anomalous_anomaly_rate}% of the observations during the yellow \nwindow were considered anomalous based on the latest trained model.')
Explanation: The last concept to introduce now is the "anomaly rate" which is really just an average over "anomaly bits".
For example, in the next cell we will just average all the anomaly bits across the light yellow window of time to find the anomaly rate for the metric within this window.
End of explanation
# preprocess/featurize all data
df_preprocessed = preprocess_df(
df,
num_samples_to_lag,
num_samples_to_diff,
num_samples_to_smooth
)
Explanation: 5. So, how does it actually work?<a id="so-how-does-it-actually-work"></a>
In this final section of the notebook we will dig in to try to understand this a bit more intuitively.
First we will "featurize" or "preprocess" all the data. Then we will explore what these feature vectors actually are, how they look, and how we derive anomaly scores based on their distance to the model's cluster centroids.
End of explanation
print(df_preprocessed.shape)
df_preprocessed.sample(5)
Explanation: Now that we have preprocessed all of our data, let's just take a look at it.
You will see that we have essentially just added num_samples_to_lag additional columns to the dataframe, one for each lag. The numbers themselves are also no longer the original raw metric values; instead they have first been differenced (taking the difference of the latest value with the previous value, so that we are working with deltas as opposed to the original raw metric) and then smoothed (in this case by averaging the previous num_samples_to_smooth differenced values).
The idea here is to define the representation that the model will work in. In this case the model will decide if a recent observation is anomalous based on its corresponding feature vector, which is a differenced, smoothed, and lagged array or list of recent values.
End of explanation
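To make the preprocessing concrete, here is a small illustrative example (not from the original notebook; the values are made up) that runs the same preprocess_df helper on a toy series using the parameters defined earlier.
# illustrative only: made-up values to show what diff -> smooth -> lag produces
toy = pd.DataFrame({'example_dim': [10.0, 12.0, 11.0, 15.0, 14.0, 18.0, 17.0, 21.0, 20.0, 24.0]})
toy_preprocessed = preprocess_df(toy, num_samples_to_lag, num_samples_to_diff, num_samples_to_smooth)
print(toy_preprocessed)  # each row: |smoothed diff| at time t plus its num_samples_to_lag lagged values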
# lets pick the first model to look at
model = list(models.keys())[0]
# get the cluster centroids and put them in a dataframe similar to above
df_cluster_centers = pd.DataFrame(models[model]['model'].cluster_centers_, columns=df_preprocessed.columns)
df_cluster_centers.index = [f'centroid {i}' for i in df_cluster_centers.index.values]
display(df_cluster_centers)
Explanation: The model works based on these feature vectors. A lot of ML is about training a model to define some "compressed representation" of the training data that can then be useful for new data in some way.
This is exactly what our cluster models are trying to do. They process a big bag of preprocessed feature vectors, covering num_samples_to_train raw observations, during training to come up with the best, synthetic, n_clusters_per_dimension feature vectors as a useful compressed representation of the training data.
The cell below will just show you what those n_clusters_per_dimension (in this case 2) synthetic (made up by the kmeans algo) feature vectors are.
End of explanation
# prepare heatmap
df_heatmap = pd.concat([df_preprocessed.sample(10),df_preprocessed.tail(n_tail_anomalous).sample(10), df_cluster_centers])
df_heatmap = df_heatmap.round(2)
# get scores
heatmap_scores = np.sum(cdist(df_heatmap, models[dim]['model'].cluster_centers_, metric='euclidean'), axis=1)
heatmap_bits = [1 if score >= dimension_anomaly_score_threshold else 0 for score in heatmap_scores]
# add anomaly score to index
heatmap_index_inputs = list(zip(range(1, len(df_heatmap)+1), df_heatmap.index, heatmap_scores, heatmap_bits))
df_heatmap.index = [f'{x[0]}. t={x[1]} (AS={round(x[2]*100)}%, AB={x[3]})' for x in heatmap_index_inputs]
fig, ax = plt.subplots(figsize=(10,10))
sns.heatmap(df_heatmap, annot=True, ax=ax, cmap='RdYlBu')
Explanation: At inference time we can now use our n_clusters_per_dimension cluster centers as a sort of set of "reference" feature vectors we can compare against.
When we see a new feature vector that is very far away from these "reference" feature vectors, we can take that as a signal that the recent data the feature vector was derived from may look significantly different from most of the data the clusters were initially trained on. As such it may be "anomalous" or "strange" in some way that might be meaningful to you as a user trying to monitor and troubleshoot systems based on these metrics.
To try to make this visually clearer we will take 10 random feature vectors from the first half of our data where things were generally normal, and we will also take 10 random feature vectors from the yellow anomalous period of time. Lastly we will also include the cluster centroids themselves to see how they compare to both sets of 10 feature vectors.
Basically this is represented in the heatmap below, where each row is a processed feature vector corresponding to some timestamp t.
End of explanation
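As a minimal sketch of that comparison (not in the original notebook), we can score the most recent preprocessed feature vector of the dimension picked above against its trained cluster centers: the raw score is just the sum of euclidean distances, exactly as computed inside the main loop.
# score one feature vector as the sum of euclidean distances to the cluster centers
X_new = df_preprocessed.tail(1).values
centers = models[model]['model'].cluster_centers_
raw_score = np.sum(cdist(X_new, centers, metric='euclidean'), axis=1)[0]
print(raw_score)  # larger distance -> stranger relative to the training data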
line_styles = ['-' for i in range(10)] + ['--' for i in range(10)] + ['o-' for i in range(2)]
df_heatmap.transpose().plot(legend=False, style=line_styles, figsize=(10,5), rot=15)
Explanation: You should see from the above heatmap that the top 10 rows all tend to have lower anomaly scores (AS) and anomaly bits (AB) of 0, while it's the opposite for rows 11-20.
The final two rows are the cluster centroids themselves, which should look more similar to the first 10 rows and fairly different from rows 11-20. And of course, you would expect that each cluster centroid itself has a low anomaly score and a non-anomalous anomaly bit.
Another way to present this visually is via line or bar plots. Below we just plot each of the rows above as a line. First all at the same time on one plot and then each one individually as a bar plot.
In the plot below the dotted lines correspond to the feature vectors sampled from the yellow anomalous period and as such we expect them to look very different to the solid lines (sampled from the normal period) and the solid circled lines which correspond to the two centroids. (Admittedly its not the clearest of plots to read since the normal lines all bunch together).
End of explanation
for i,row in df_heatmap.iterrows():
ax = row.plot(kind='bar', figsize=(10,3),title=f'{i}', rot=15)
ax.set_ylim(np.min(df_heatmap.values),np.max(df_heatmap.values))
plt.show()
Explanation: We can also just plot each feature vector itself as a bar plot with one bar for each of our 6 features. We have set the y-axis in the below plots to be fixed such that the differences are more obvious.
End of explanation
# get list of feature pair combinations
feature_pairs = list(itertools.combinations(df_heatmap.columns, 2))
feature_pairs_sample = random.sample(feature_pairs,5) # just sample 5 pairs
# define colors of points
point_colors = ['blue' for i in range(10)] + ['orange' for i in range(10)] + ['green' for i in range(2)]
# plot each pair of features
for feature_pair in feature_pairs_sample:
df_heatmap.plot.scatter(x=feature_pair[0], y=feature_pair[1], color=point_colors, subplots=True)
Explanation: Lastly, an alternative way to visualize the distances between the three groups of points from the heatmap would be a series of scatter plots, one for each pair of features.
This should give a good intuition for the distance measures (in 6-dimensional feature space: the latest preprocessed observation plus the 5 lagged values) that underpin the raw anomaly score.
Generally we would expect to see the blue 'normal' points group closer together and 'near' the green cluster centroids, while the anomalous orange points should tend to be further away from the other two groups.
End of explanation |
14,983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames
Step1: Table 3 - Photometry
Step2: Drop source 12 because it was shown to be a galaxy.
Step3: %%bash
mkdir ../data/Allers2006
Step4: Bonus | Python Code:
%pylab inline
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
Explanation: ApJdataFrames: Allers2006
Title: Young, Low-Mass Brown Dwarfs with Mid-Infrared Excesses
Authors: AKCJ
Data is from this paper:
http://iopscience.iop.org/0004-637X/644/1/364/
End of explanation
names = ['Source Number','RA (J2000.0)','DEC (J2000.0)','I', 'Ierr','J', 'Jerr',
'H', 'Herr','Ks', 'Kserr','[3.6]', '[3.6]err','[4.5]', '[4.5]err',
'[5.8]', '[5.8]err','[8.0]', '[8.0]err']
tbl3 = pd.read_csv("http://iopscience.iop.org/0004-637X/644/1/364/fulltext/64106.tb3.txt",
sep=r'\t|\\pm', names = names)
tbl3
Explanation: Table 3 - Photometry
End of explanation
tbl3.drop(11, axis=0, inplace=True)
Explanation: Drop source 12 because it was shown to be a galaxy.
End of explanation
tbl3.to_csv('../data/Allers2006/tbl3.csv', index=False)
Explanation: %%bash
mkdir ../data/Allers2006
End of explanation
mgs2010 = pd.DataFrame([(1, 11.5), (2,6), (4,2), (5,11), (6,5), (7,6),
(8,3),(9,5.5),(10,3),(11,8), (13,6), (14,8), (15,2),
(16,4), (17,12.5),(18,7), (19,4.5)], columns=["Source Number", "SpT"])
out = pd.merge(tbl3, mgs2010, on="Source Number", how="left")
out.head(2)
out.to_csv("../data/Allers2006/mgs2010_bonus.csv", index=False)
Explanation: Bonus: Get the spectral types from my Cool Stars paper.
I published the spectral types for Katelyn's 2006 sources in this cool stars proceedings.
The irony is that there is no machine readable table.
End of explanation |
14,984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstration of SHyFT API implementation of Kalman Filtering on gridded data
This notebook gives an example of Met.no data post-processing to correct temperature forecasts based on comparison to observations. The following steps are described
Step1: setup
Step2: setup 2. Transform observation with bias to grid using kriging
Step3: setup 3. Create 3 forecasts sets for the 1x1 km grid
Step4: grid-pp
Step5: grid-pp
Step6: grid-pp
Step7: Presentation&Test
Step8: 9. Plot the results | Python Code:
# first you should import the third-party python modules which you'll use later on
# the first line enables that figures are shown inline, directly in the notebook
%pylab inline
import os
from os import path
import sys
from matplotlib import pyplot as plt
# once the shyft_path is set correctly, you should be able to import shyft modules
import shyft
from shyft.api import shyftdata_dir
from shyft import api
# if you have problems here, it may be related to having your LD_LIBRARY_PATH
# pointing to the appropriate libboost_python libraries (.so files)
from shyft.repository.default_state_repository import DefaultStateRepository
from shyft.orchestration.configuration import yaml_configs
from shyft.orchestration.simulators.config_simulator import ConfigSimulator
from shyft.time_series import Calendar
from shyft.time_series import deltahours
from shyft.time_series import TimeAxis
from shyft.time_series import TimeSeries
from shyft.time_series import time_shift
from shyft.api import TemperatureSource
from shyft.api import TemperatureSourceVector
from shyft.api import GeoPoint
from shyft.api import GeoPointVector
from shyft.api import bayesian_kriging_temperature
from shyft.api import BTKParameter
from shyft.api import idw_temperature
from shyft.api import IDWTemperatureParameter
from shyft.api import KalmanFilter
from shyft.api import KalmanState
from shyft.api import KalmanBiasPredictor
from shyft.time_series import create_periodic_pattern_ts
from shyft.time_series import POINT_AVERAGE_VALUE as stair_case
# now you can access the api of shyft with tab completion and help, try this:
#help(api.GeoPoint) # remove the hashtag and run the cell to print the documentation of the api.GeoPoint class
#api. # remove the hashtag, set the pointer behind the dot and use
# tab completion to see the available attributes of the shyft api
Explanation: Demonstration of SHyFT API implementation of Kalman Filtering on gridded data
This notebook gives an example of Met.no data post-processing to correct temperature forecasts based on comparison to observations. The following steps are described:
Loading required python modules and setting path to SHyFT installation
The setup steps are about creating synthetic data and backtesting it,
so that we have a known forecast that gives a certain response at the
four observation points
Generate synthetic data for temperature observation time-series
Transform observations from set to grid (Kriging)
Create 3 forecast sets for the 1x1 km grid
The grid-pp steps are about orchestrating a grid-pp algorithm given
our synthetic data from above
Transform forecasts from grid to observation points (IDW)
Calculate the bias time-series using Kalman filter on the difference of observation and forecast set at the observation points
Transform bias from set to grid (Kriging) and apply bias to the grid forecast
Final steps to plot and test the results from the grid-pp steps
Transform corrected forecasts grid to the observation points (IDW)
Plot the results and bias
1. Loading required python modules and setting path to SHyFT installation
End of explanation
# Create time-axis for our syntethic sample
utc = Calendar() # provide conversion and math for utc time-zone
t0 = utc.time(2016, 1, 1)
dt = deltahours(1)
n = 24*3 # 3 days length
#ta = TimeAxisFixedDeltaT(t0, dt, n)
ta = TimeAxis(t0, dt, n) # same as ta, but needed for now(we work on aligning them)
# 1. Create the terrain based geo-points for the 1x1km grid and the observations
# a. Create the grid, based on a syntethic terrain model
# specification of 1 x 1 km
grid_1x1 = GeoPointVector()
for x in range(10):
for y in range(10):
grid_1x1.append(GeoPoint(x*1000, y*1000, (x+y)*50)) # z from 0 to 1000 m
# b. Create the observation points, for metered temperature
# reasonable withing that grid_1x1, and with elevation z
# that corresponds approximately to the position
obs_points = GeoPointVector()
obs_points.append(GeoPoint( 100, 100, 10)) # observation point at the lowest part
obs_points.append(GeoPoint(5100, 100, 270 )) # halfway out in x-direction @ 270 masl
obs_points.append(GeoPoint( 100, 5100, 250)) # halfway out in y-direction @ 250 masl
obs_points.append(GeoPoint(10100,10100, 1080 )) # x-y at max, so @1080 masl
# 2. Create time-series having a constant temperature of 15 degC
# and add them to the syntetic observation set
# make sure there is some reality, like temperature gradient etc.
ts = TimeSeries(ta, fill_value=20.0,point_fx=stair_case) # 20 degC at z_t= 0 meter above sea-level
# assume set temp.gradient to -0.6 degC/100m, and estimate the other values accordingly
tgrad = -0.6/100.0 # in our case in units of degC/m
z_t = 0 # meter above sea-level
# Create a TemperatureSourceVector to hold the set of observation time-series
constant_bias=[-1.0,-0.6,0.7,+1.0]
obs_set = TemperatureSourceVector()
obs_set_w_bias = TemperatureSourceVector()
for geo_point,bias in zip(obs_points,constant_bias):
temp_at_site = ts + tgrad*(geo_point.z-z_t)
obs_set.append(TemperatureSource(geo_point,temp_at_site))
obs_set_w_bias.append(TemperatureSource(geo_point,temp_at_site + bias))
Explanation: setup: 1. Generate synthetic data for temperature observation time-series
End of explanation
# Generate the observation grid by kriging the observations out to 1x1km grid
# first create idw and kriging parameters that we will utilize in the next steps
# kriging parameters
btk_params = BTKParameter() # we could tune parameters here if needed
# idw parameters,somewhat adapted to the fact that we
# know we interpolate from a grid, with a lot of neigbours around
idw_params = IDWTemperatureParameter() # here we could tune the paramete if needed
idw_params.max_distance = 20*1000.0 # max at 10 km because we search for max-gradients
idw_params.max_members = 20 # for grid, this include all possible close neighbors
idw_params.gradient_by_equation = True # resolve horisontal component out
# now use kriging for our 'syntethic' observations with bias
obs_grid = bayesian_kriging_temperature(obs_set_w_bias,grid_1x1,ta.fixed_dt,btk_params)
# if we idw/btk back to the sites, we should have something that equals the with_bias:
# we should get close to zero differences in this to-grid-and-back operation
back_test = idw_temperature(obs_grid, obs_points, ta.fixed_dt, idw_params) # note the ta.fixed_dt here!
for bt,wb in zip(back_test,obs_set_w_bias):
print("IDW Diff {} : {} ".format(bt.mid_point(),abs((bt.ts-wb.ts).values.to_numpy()).max()))
#back_test = bayesian_kriging_temperature(obs_grid, obs_points, ta, btk_params)
#for bt,wb in zip(back_test,obs_set_w_bias):
# print("BTK Diff {} : {} ".format(bt.mid_point(),abs((bt.ts-wb.ts).values.to_numpy()).max()))
Explanation: setup 2. Transform observation with bias to grid using kriging
End of explanation
# Create a forecast grid by copying the obs_grid time-series
# since we know that idw of them to obs_points will give approx.
# the obs_set_w_bias time-series
# for the simplicity, we assume the same forecast for all 3 days
fc_grid = TemperatureSourceVector()
fc_grid_1_day_back = TemperatureSourceVector() # this is previous day
fc_grid_2_day_back = TemperatureSourceVector() # this is fc two days ago
one_day_back_dt = deltahours(-24)
two_days_back_dt = deltahours(-24*2)
noise_bias = [0.0 for i in range(len(obs_grid))] # we could generate white noise ts into these to test kalman
for fc,bias in zip(obs_grid,noise_bias):
fc_grid.append(TemperatureSource(fc.mid_point(),fc.ts + bias ))
fc_grid_1_day_back.append(
TemperatureSource(
fc.mid_point(),
time_shift(fc.ts + bias, one_day_back_dt) #time-shift the signal back
)
)
fc_grid_2_day_back.append(
TemperatureSource(
fc.mid_point(),
time_shift(fc.ts + bias, two_days_back_dt)
)
)
grid_forecasts = [fc_grid_2_day_back, fc_grid_1_day_back, fc_grid ]
Explanation: setup 3. Create 3 forecast sets for the 1x1 km grid
End of explanation
# Now we have 3 simulated forecasts at a 1x1 km grid
# fc_grid, fc_grid_1_day_back, fc_grid_2_day_back
# we start to do the grid pp algorithm stuff
# - we know the our forecasts have some degC. bias, and we would hope that
# the kalman filter 'learns' the offset
# as a first step we project the grid_forecasts to the observation points
# making a list of historical forecasts at each observation point.
fc_at_observation_points = [idw_temperature(fc, obs_points, ta.fixed_dt, idw_params)\
for fc in grid_forecasts]
historical_forecasts = []
for i in range(len(obs_points)): # correlate obs.point and fc using common i
fc_list = TemperatureSourceVector() # the kalman bias predictor below accepts TsVector of forecasts
for fc in fc_at_observation_points:
fc_list.append(fc[i]) # pick out the fc_ts only, for the i'th observation point
#print("{} adding fc pt {} t0={}".format(i,fc[i].mid_point(),utc.to_string(fc[i].ts.time(0))))
historical_forecasts.append(fc_list)
# historical_forecasts now cntains 3 forecasts for each observation point
Explanation: grid-pp: 1. Transform forecasts from grid to observation points (IDW)
End of explanation
# Create a TemperatureSourceVector to hold the set of bias time-series
bias_set = TemperatureSourceVector()
# Create the Kalman filter having 8 samples spaced every 3 hours to represent a daily periodic pattern
kalman_dt_hours = 3
kalman_dt =deltahours(kalman_dt_hours)
kta = TimeAxis(t0, kalman_dt, int(24//kalman_dt_hours))
# Calculate the coefficients of Kalman filter and
# Create bias time-series based on the daily periodic pattern
for i in range(len(obs_set)):
kf = KalmanFilter() # each observation location has its own kf & predictor
kbp = KalmanBiasPredictor(kf)
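    # the predictor is fed the historical forecasts plus the observed series below; it learns a
    # daily periodic bias pattern on the 8 x 3h kalman time-axis (kta) defined above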
#print("Diffs for obs", i)
#for fc in historical_forecasts[i]:
# print((fc.ts-obs_set[i].ts).values.to_numpy())
kbp.update_with_forecast(historical_forecasts[i], obs_set[i].ts, kta)
pattern = KalmanState.get_x(kbp.state)
#print(pattern)
bias_ts = create_periodic_pattern_ts(pattern, kalman_dt, ta.time(0), ta)
bias_set.append(TemperatureSource(obs_set[i].mid_point(), bias_ts))
Explanation: grid-pp: 2. Calculate the bias time-series using Kalman filter on the observation set
End of explanation
# Generate the bias grid by kriging the bias out on the 1x1km grid
btk_params = BTKParameter()
btk_bias_params = BTKParameter(temperature_gradient=-0.6, temperature_gradient_sd=0.25, sill=25.0, nugget=0.5, range=5000.0, zscale=20.0)
bias_grid = bayesian_kriging_temperature(bias_set, grid_1x1, ta.fixed_dt, btk_bias_params)
# Correct forecasts by applying bias time-series on the grid
fc_grid_improved = TemperatureSourceVector()
for i in range(len(fc_grid)):
fc_grid_improved.append(
TemperatureSource(
fc_grid[i].mid_point(),
fc_grid[i].ts - bias_grid[i].ts # By convention, sub bias time-series(hmm..)
)
)
# Check the first value of the time-series. It should be around 15
tx =ta.time(0)
print("Comparison original synthetic grid cell [0]\n\t at the lower left corner,\n\t at t {}\n\toriginal grid: {}\n\timproved grid: {}\n\t vs bias grid: {}\n\t nearest obs: {}"
.format(utc.to_string(tx),
fc_grid[0].ts(tx),
fc_grid_improved[0].ts(tx),
bias_grid[0].ts(tx),
obs_set[0].ts(tx)
)
)
Explanation: grid-pp: 3. Spread the bias at observation points out to the grid using kriging
End of explanation
# Generate the corrected forecast set by Krieging transform of temperature model
fc_at_observations_improved = idw_temperature(fc_grid_improved, obs_points, ta.fixed_dt, idw_params)
fc_at_observations_raw =idw_temperature(fc_grid, obs_points, ta.fixed_dt, idw_params)
Explanation: Presentation & Test: 8. Finally, transform the corrected forecasts from the grid back to the observation points (IDW) to see whether we reached the goal of adjusting the forecast
End of explanation
# Make a time-series plot of temperature sets
import datetime  # explicit import so datetime.datetime below works even if %pylab does not provide it
for i in range(len(bias_set)):
fig, ax = plt.subplots(figsize=(20, 10))
timestamps = [datetime.datetime.utcfromtimestamp(p.start) for p in obs_set[i].ts.time_axis]
ax.plot(timestamps, obs_set[i].ts.values, label = str(i+1) + ' Observation')
ax.plot(timestamps, fc_at_observations_improved[i].ts.values, label = str(i+1) + ' Forecast improved')
ax.plot(timestamps, fc_at_observations_raw[i].ts.values,linestyle='--', label = str(i+1) + ' Forecast (raw)')
#ax.plot(timestamps, bias_set[i].ts.values, label = str(i+1) + ' Bias')
fig.autofmt_xdate()
ax.legend(title='Temperature')
ax.set_ylabel('Temp ($^\circ$C)')
# Make a scatter plot of grid temperature forecasts at ts.value(0)
x = [fc.mid_point().x for fc in bias_grid]
y = [fc.mid_point().y for fc in bias_grid]
fig, ax = plt.subplots(figsize=(10, 5))
temps = np.array([bias.ts.value(0) for bias in bias_grid])
plot = ax.scatter(x, y, c=temps, marker='o', s=500, lw=0)
plt.colorbar(plot).set_label('Temp bias correction ($^\circ$C)')
Explanation: 9. Plot the results
End of explanation |
14,985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3.1 Problem description
Try to build a classifier for the MNIST dataset that achieves over 97% accuracy
on the test set. Hint
Step1: Split test and training data
Step2: 3.2 Training a Random Forest Classifier for baseline
The reason to use a Random Forest Classifier is that it runs faster than a linear model, which makes it a convenient baseline.
Step3: 3.3 Training a KNeighborsClassifier Classifier with default settings
Seems like we have to have n_jobs = -1 so the prediction runs within reasonable time.
Step4: 3.4 GridSearchCV | Python Code:
from scipy.io import loadmat
mnist = loadmat('./datasets/mnist-original.mat')
mnist
X, y = mnist['data'], mnist['label']
X = X.T
X.shape
y = y.T
y.shape
type(y)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np  # needed below for np.random.permutation
Explanation: 3.1 Problem description
Try to build a classifier for the MNIST dataset that achieves over 97% accuracy
on the test set. Hint: the KNeighborsClassifier works quite well for this task;
you just need to find good hyperparameter values (try a grid search on the
weights and n_neighbors hyperparameters).
Load the data
End of explanation
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
len(X_train)
shuffle_index = np.random.permutation(len(X_train))
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
Explanation: Split test and training data
End of explanation
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
forest_clf.fit(X_train, y_train)
forest_pred = forest_clf.predict(X_test)
forest_pred = forest_pred.reshape(10000,1)
accuracy = (forest_pred == y_test).sum() / len(y_test)
print(accuracy)
Explanation: 3.2 Training a Random Forest Classifier for baseline
The reason to use Random Forest Classifier is it runs faster than linear model
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_jobs=-1)
knn_clf.fit(X_train, y_train)
knn_clf.predict([X_test[0]])
# for i in range(1000):
# knn_clf.predict([X_test[i]])
knn_pred = knn_clf.predict(X_test)
knn_pred = knn_pred.reshape(10000, 1)
accuracy = (knn_pred == y_test).sum() / len(y_test)
print(accuracy)
Explanation: 3.3 Training a KNeighborsClassifier Classifier with default settings
Seems like we have to have n_jobs = 1 so the prediction runs within reasonable time.
End of explanation
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_jobs': [-1], 'n_neighbors': [3, 5, 11, 19], 'weights': ['uniform', 'distance']}
]
grid_search = GridSearchCV(knn_clf, param_grid, cv=3, scoring='accuracy', n_jobs=-1)
grid_search.fit(X_train, y_train)
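# Added sketch: one way to evaluate the fitted grid search -- report the best hyperparameters
# and the resulting accuracy on the held-out test set, reusing the reshape convention above.
print(grid_search.best_params_)
print(grid_search.best_score_)
grid_pred = grid_search.predict(X_test)
grid_pred = grid_pred.reshape(10000, 1)
print((grid_pred == y_test).sum() / len(y_test))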
Explanation: 3.4 GridSearchCV
End of explanation |
14,986 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sample Notebook for exploring gnomAD in BigQuery
This notebook contains sample queries to explore the gnomAD dataset which is hosted through the Google Cloud Public Datasets Program.
Setup and Authentication
If you just want to look at sample results, you can scroll down to see the output of the existing queries without having to run anything. If you would like to re-run the queries or make changes, you will need to authenticate as your user and set the Google Cloud project in which to run the analysis.
Step1: User Authentication
Before running any queries using BigQuery, you need to first authenticate yourself by running the following cell. If you are running it for the first time, it will ask you to follow a link to log in using your Google identity account, and accept the data access requests to your profile. Once this is done, it will generate a string of verification code, which you should paste back to the cell below and press enter. This should be a Google account which you can login to and which has access to run BigQuery jobs in the Google Cloud project specified in the next step.
Step2: Set Google Cloud Project
To run queries in BigQuery, you need to specify the Google Cloud project that will be used. The queries below report the number of bytes billed by each query. The first 1 TB of query data processed in a project per month is free. For more details, see the BigQuery Pricing page.
To find your Project ID, go to the Project Settings page in the Google Cloud Console. You can select the project you want using the drop-down menu at the top of the page.
Step4: gnomAD Queries Type1
Step6: After you found the [X, Y] range for your gene of interst you can run Type1 queries efficiently. Here are a couple of examples
Step8: Query 1.1b - Variant Type (Python)
You can also find the number of INDELs and SNVs in the region of interest by doing the aggregation and count in Python using the dataframe.
Step10: Instead of aggregating the results in BigQuery to count the number of each variant type, we could return all rows and process them here. The following query adds a few more columns to the previous query.
Query 1.2 - Allele Count by Sex
A query to retrieve all variants in the region of interest along with AN and AC values split by sex.
AN
Step11: We can then perform further analysis on the dataframe such as filtering out variants with a low allele count (AC).
Step12: Or we could filter to find variants that were most common in females that were not found in any male samples.
Step14: Instead of splitting AN and AC values by sex we can analyze ancestry.
Query 1.3 - Allele Count by Ancestry
A query to retrieve all variants in the region of interest along with AN and AC values for the following ancestries
Step15: An example here would be to report the most common variant for each ancestry that was not present in any of the others.
Step17: Query 1.4 - gnomAD Columns
gnomAD tables have many more columns, you can find the full list of columns along with their description using the following query.
Step18: Using column_info dataframe you can find other available values for the ancestry slice
Step19: Note that the corresponding values for AC and AF (Alternate allele frequency) exist under the alternate_bases column.
Step21: Please refer to gnomAD release announcements (v2.1 and v3.0) for more details about demographics and annotation slices.
The next query showcases how to use AN and AC values.
Query 1.5 - Burden of Mutation
Given a region of interest, compute the burden of mutation for the gene along with other summary statistics.
Step22: The other column to use is alternate_bases.vep which contains the VEP annotaions for each variant.
Step24: The next query showcases how to use some of the vep annotation values.
Query 1.6 - VEP Annotations
Given a region of interest, examine vep annotations to pull out missense variants.
Step25: gnomAD Queries Type2
Step27: Query 2.1 - Find alleles that occur at least in 90% of samples
Find all variants on the selected chromosome that were observed in at least 90% of samples. In other words, this query finds variants where allele frequency is very high for non-REF alleles.
Step28: We can condense the result and only list gene symbols and the number of variants found in the previous query
Step30: Query 2.2 - Top variants by ancenstry difference
Find top 1,000 variants on the selected chromosome that show the most significant differences between male samples of African-American ancestry versus Finnish ancestry
Step32: Query 2.3 - Find genes with high number of INDELs
Find top 1000 genes with the highest number of INDELs on the selected chromosome.
Step34: Query 2.4 - Find distribution of SNVs across a chromosome
Find the distribution of SNVs across the selected chromosome. In order to be able to plot the result we group base pairs into buckets of size 10,000. | Python Code:
# Import libraries
import numpy as np
import os
# Imports for using and authenticating BigQuery
from google.colab import auth
Explanation: Sample Notebook for exploring gnomAD in BigQuery
This notebook contains sample queries to explore the gnomAD dataset which is hosted through the Google Cloud Public Datasets Program.
Setup and Authentication
If you just want to look at sample results, you can scroll down to see the output of the existing queries without having to run anything. If you would like to re-run the queries or make changes, you will need to authenticate as your user and set the Google Cloud project in which to run the analysis.
End of explanation
auth.authenticate_user()
Explanation: User Authentication
Before running any queries using BigQuery, you need to first authenticate yourself by running the following cell. If you are running it for the first time, it will ask you to follow a link to log in using your Google identity account, and accept the data access requests to your profile. Once this is done, it will generate a string of verification code, which you should paste back to the cell below and press enter. This should be a Google account which you can login to and which has access to run BigQuery jobs in the Google Cloud project specified in the next step.
End of explanation
# Replace project_id with your Google Cloud Project ID.
os.environ["GOOGLE_CLOUD_PROJECT"]='project-id'
Explanation: Set Google Cloud Project
To run queries in BigQuery, you need to specify the Google Cloud project that will be used. The queries below report the number of bytes billed by each query. The first 1 TB of query data processed in a project per month is free. For more details, see the BigQuery Pricing page.
To find your Project ID, go to the Project Settings page in the Google Cloud Console. You can select the project you want using the drop-down menu at the top of the page.
End of explanation
import ipywidgets as widgets
print("Variables for Region (Type 1) Queries")
gnomad_version_widget_region = widgets.Dropdown(
options=['v2_1_1_exomes', 'v2_1_1_genomes', 'v3_genomes'],
value='v3_genomes',
description='gnomAD version:',
disabled=False,
style={'description_width': 'initial'}
)
display(gnomad_version_widget_region)
chromosome_widget_region = widgets.Dropdown(
options=['chr1', 'chr2', 'chr3', 'chr4', 'chr5', 'chr6', 'chr7', 'chr8',
'chr9', 'chr10', 'chr11', 'chr12', 'chr13', 'chr14', 'chr15',
'chr16', 'chr17', 'chr18', 'chr19', 'chr20', 'chr21', 'chr22',
'chrX', 'chrY'],
value='chr17',
description='Chromosome:',
disabled=False,
style={'description_width': 'initial'}
)
display(chromosome_widget_region)
gene_symbol_widget_region= widgets.Text(
value='BRCA1',
placeholder='gene_symbol',
description='Gene Symbol:',
disabled=False,
style={'description_width': 'initial'}
)
display(gene_symbol_widget_region)
# Set the variables for the rest of the Type 1 queries based on the values above.
gnomad_version_region=gnomad_version_widget_region.value
chromosome_region=chromosome_widget_region.value
gene_symbol_region=gene_symbol_widget_region.value
print('Running Region (Type 1) queries on gnomAD version: {}, chromosome: {}, gene symbol: {}'.format(
gnomad_version_region,
chromosome_region,
gene_symbol_region
))
if gnomad_version_region.startswith('v3'):
# Variant type (snv, indel, multi-snv, multi-indel, or mixed) is stored under difference columns in V2 and V3
variant_type_col = 'variant_type'
extra_columns = ''
else:
variant_type_col = 'alternate_bases.allele_type'
# These vep columns only exist in V2
extra_columns = 'vep.STRAND AS STRAND, vep.Protein_position AS Protein_pos,'
from google.cloud import bigquery
client = bigquery.Client()
def run_query(query):
query_job = client.query(query)
result = query_job.to_dataframe(progress_bar_type='tqdm_notebook')
gb_processed = (query_job.total_bytes_billed / 1024 ** 3)
print('This query processed {} GB of data which is {}% of your 1 TB monthly free quota.'.format(gb_processed, round(gb_processed / 1024 * 100, 4)))
return result
query_template =
SELECT MIN(start_position) AS X, MAX(end_position) AS Y
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table
WHERE EXISTS
(SELECT 1 FROM UNNEST(main_table.alternate_bases) AS alternate_bases
WHERE EXISTS (SELECT 1 from alternate_bases.vep WHERE SYMBOL = '{GENE}'))
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region,
GENE=gene_symbol_region)
limits = run_query(query)
print(limits)
x = limits.at[0, 'X']
y = limits.at[0, 'Y']
Explanation: gnomAD Queries Type1: Explore a particular genomic region
This category include queries that extract information from a region of the genome, for example a gene. Because gnomAD BigQuery tables utilize integer range partitioning they are optimized for this type of query.
The main requirement to use this feature is to limit queries to a particular region by adding these conditions to the WHERE clause:
WHERE start_position >= X AND start_position <= Y
Where [X, Y] is the region of interest.
You can find values of X and Y by refering to an external databses. For example the following table sumarizes the start and end positions for 4 genes on chromosome 17 extracted from an external resource:
| Gene | X | Y | Source |
|:-: |- |- |- |
| BRCA1 | 43044295 | 43125364 | link |
| COL1A1 | 50184096 | 50201649 | link |
| TP53 | 31094927 | 31377677 | link |
| NF1 | 56593699 | 56595611 | link |
Alternatively you could use the following query, which extracts the same information directly from the gnomAD tables.
In the following example we are using BRCA1 on chr17 as an example. You can enter your gene of interest and chromosome to modify all the following queries. If your query returns NaN this might be because you specified the wrong chromosome, which will query the wrong table.
Also you can choose which version of the gnomAD dataset you'd like to use for all the queries:
* v2_1_1_exomes
* v2_1_1_genomes
* v3_genomes
End of explanation
# NOTE: For v2_1_1 the "variant_type" column must be replaced with "alternate_bases.allele_type AS variant_type"
query_template =
SELECT COUNT(1) AS num, variant_type
FROM (
SELECT DISTINCT
start_position,
reference_bases,
alternate_bases.alt,
{VAR_TYPE_COL} AS variant_type,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases
WHERE start_position >= {X} AND start_position <= {Y}
)
GROUP BY 2
ORDER BY 1 DESC
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region,
VAR_TYPE_COL=variant_type_col, X=x, Y=y)
summary = run_query(query)
summary.head()
Explanation: Once you have found the [X, Y] range for your gene of interest you can run Type1 queries efficiently. Here are a couple of examples:
Query 1.1a - Variant Type (BigQuery)
Find the number of INDELs and SNVs in the region of interest using BigQuery
End of explanation
# NOTE: For v2_1_1 the "variant_type" column must be replaced with "alternate_bases.allele_type AS variant_type"
query_template =
SELECT DISTINCT
start_position,
reference_bases,
alternate_bases.alt,
{VAR_TYPE_COL} AS variant_type,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases
WHERE start_position >= {X} AND start_position <= {Y}
ORDER BY 1,2
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region,
VAR_TYPE_COL=variant_type_col, X=x, Y=y)
summary_dataframe = run_query(query)
# Count the number of each variant type in Python instead of in BigQuery
print('Number of variants by type:')
for v in summary_dataframe.variant_type.unique():
print('{}: {}'.format(v,
np.count_nonzero(summary_dataframe['variant_type'] == v)))
Explanation: Query 1.1b - Variant Type (Python)
You can also find the number of INDELs and SNVs in the region of interest by doing the aggregation and count in Python using the dataframe.
End of explanation
# NOTE: For v2_1_1 the "variant_type" column must be replaced with "alternate_bases.allele_type AS variant_type"
query_template =
SELECT reference_name AS CHROM,
start_position AS POS,
names AS ID,
reference_bases AS REF,
alternate_bases.alt AS ALT,
AN,
AN_male,
AN_female,
alternate_bases.AC AS AC,
alternate_bases.AC_male AS AC_male,
alternate_bases.AC_female AS AC_female,
alternate_bases.nhomalt AS nhomalt,
alternate_bases.nhomalt_male AS nhomalt_male,
alternate_bases.nhomalt_female AS nhomalt_female,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases
WHERE start_position >= {X} AND start_position <= {Y}
ORDER BY 1,2
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region, X=x, Y=y)
stats_sex = run_query(query)
stats_sex.head()
Explanation: Instead of aggregating the results in BigQuery to count the number of each variant type, we could return all rows and process them here. The following query adds a few more columns to the previous query.
Query 1.2 - Allele Count by Sex
A query to retrieve all variants in the region of interest along with AN and AC values split by sex.
AN: Total number of alleles in samples
AC: Alternate allele count for samples
nhomalt: The number of individuals that are called homozygous for the alternate allele.
End of explanation
stats_sex_filtered_ac=stats_sex.loc[stats_sex['AC'] > 10]
stats_sex_filtered_ac.head()
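# Added sketch: alternatively filter on allele frequency rather than raw allele count,
# using the AN and AC columns already returned by the query above.
stats_sex_nonzero = stats_sex.loc[stats_sex['AN'] > 0].copy()
stats_sex_nonzero['AF'] = stats_sex_nonzero['AC'] / stats_sex_nonzero['AN']
stats_sex_nonzero.loc[stats_sex_nonzero['AF'] > 0.01].head()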
Explanation: We can then perform further analysis on the dataframe such as filtering out variants with a low allele count (AC).
End of explanation
stats_sex_no_male=stats_sex.loc[stats_sex['AC_male'] == 0].sort_values(by=('AC_female'),
ascending = False)
stats_sex_no_male.head(10)
Explanation: Or we could filter to find variants that were most common in females that were not found in any male samples.
End of explanation
# NOTE: For v2_1_1 the "variant_type" column must be replaced with "alternate_bases.allele_type AS variant_type"
query_template =
SELECT reference_name AS CHROM,
start_position AS POS,
names AS ID,
reference_bases AS REF,
alternate_bases.alt AS ALT,
AN_afr,
AN_amr,
AN_eas,
AN_nfe,
alternate_bases.AC_afr AS AC_afr,
alternate_bases.AC_amr AS AC_amr,
alternate_bases.AC_eas AS AC_eas,
alternate_bases.AC_nfe AS AC_nfe,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases
WHERE start_position >= {X} AND start_position <= {Y}
ORDER BY 1,2
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region, X=x, Y=y)
stats_ancestry = run_query(query)
stats_ancestry.head()
Explanation: Instead of splitting AN and AC values by sex we can analyze ancestry.
Query 1.3 - Allele Count by Ancestry
A query to retrieve all variants in the region of interest along with AN and AC values for the following ancestries:
* afr: African-American/African ancestry
* amr: Latino ancestry
* eas: East Asian ancestry
* nfe: Non-Finnish European ancestry
End of explanation
stats_ancestry_amr=stats_ancestry.loc[
(stats_ancestry['AC_amr'] > 0) &
(stats_ancestry['AC_afr'] == 0) &
(stats_ancestry['AC_eas'] == 0) &
(stats_ancestry['AC_nfe'] == 0)].sort_values(by=('AC_amr'),
ascending = False)
stats_ancestry_amr.head(10)
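# Added sketch: repeat the "found in only one ancestry group" filter for each of the four
# ancestry groups already present in stats_ancestry, and report how many variants qualify.
ancestries = ['afr', 'amr', 'eas', 'nfe']
for anc in ancestries:
    mask = stats_ancestry['AC_' + anc] > 0
    for other in ancestries:
        if other != anc:
            mask &= (stats_ancestry['AC_' + other] == 0)
    print(anc, ':', int(mask.sum()), 'variants private to this group')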
Explanation: An example here would be to report the most common variant for each ancestry that was not present in any of the others.
End of explanation
query_template =
SELECT column_name, field_path, description
FROM `bigquery-public-data`.gnomAD.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
WHERE table_name = "{GNOMAD_VER}__{CHROM}"
AND column_name IN (
SELECT COLUMN_NAME
FROM `bigquery-public-data`.gnomAD.INFORMATION_SCHEMA.COLUMNS
WHERE table_name = "{GNOMAD_VER}__{CHROM}")
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region)
column_info = run_query(query)
print('There are {} columns in `bigquery-public-data.gnomAD.{}__{}` table'.format(len(column_info.index),
gnomad_version_region,
chromosome_region))
column_info.head(7)
Explanation: Query 1.4 - gnomAD Columns
gnomAD tables have many more columns, you can find the full list of columns along with their description using the following query.
End of explanation
AN_columns = column_info[column_info['column_name'].str.startswith('AN')] # Retain only rows that column_name starts with "AN"
AN_columns = AN_columns[['column_name', 'description']] # Drop extra column (field_path)
AN_columns = AN_columns.sort_values(by=['column_name']) # Sort by column_name
AN_columns.head(11)
Explanation: Using column_info dataframe you can find other available values for the ancestry slice:
End of explanation
AC_columns = column_info[column_info['field_path'].str.startswith('alternate_bases.AC')] # Retain only rows that field_path starts with "alternate_bases.AC"
AC_columns = AC_columns[['field_path', 'description']] # Drop extra column (column_name)
AC_columns = AC_columns.sort_values(by=['field_path']) # Sort by field_path
AC_columns.head(11)
Explanation: Note that the corresponding values for AC and AF (Alternate allele frequency) exist under the alternate_bases column.
End of explanation
query_template =
WITH summary_stats AS (
SELECT
COUNT(1) AS num_variants,
SUM(ARRAY_LENGTH(alternate_bases)) AS num_alts, # This data appears to be bi-allelic.
SUM((SELECT alt.AC FROM UNNEST(alternate_bases) AS alt)) AS sum_AC,
APPROX_QUANTILES((SELECT alt.AC FROM UNNEST(alternate_bases) AS alt), 10) AS quantiles_AC,
SUM(AN) AS sum_AN,
APPROX_QUANTILES(AN, 10) AS quantiles_AN,
-- Also include some information from Variant Effect Predictor (VEP).
STRING_AGG(DISTINCT (SELECT annot.symbol FROM UNNEST(alternate_bases) AS alt,
UNNEST(vep) AS annot LIMIT 1), ', ') AS genes
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table
WHERE start_position >= {X} AND start_position <= {Y})
---
--- The resulting quantiles and burden_of_mutation score give a very rough idea of the mutation
--- rate within these particular regions of the genome. This query could be further refined to
--- compute over smaller windows within the regions of interest and/or over different groupings
--- of AC and AN by population.
---
SELECT
ROUND(({Y} - {X}) / num_variants, 3) AS burden_of_mutation,
*,
FROM summary_stats
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region, X=x, Y=y)
burden_of_mu = run_query(query)
burden_of_mu.head()
Explanation: Please refer to gnomAD release announcements (v2.1 and v3.0) for more details about demographics and annotation slices.
The next query showcases how to use AN and AC values.
Query 1.5 - Burden of Mutation
Given a region of interest, compute the burden of mutation for the gene along with other summary statistics.
End of explanation
vep_columns = column_info[column_info['field_path'].str.startswith('alternate_bases.vep')] # Retain only rows that field_path starts with "alternate_bases.vep"
vep_columns = vep_columns[['field_path', 'description']] # Drop extra column (column_name)
vep_columns.head(22)
Explanation: The other column to use is alternate_bases.vep, which contains the VEP annotations for each variant.
End of explanation
query_template =
SELECT reference_name AS CHROM,
start_position AS POS,
names AS ID,
reference_bases AS REF,
alternate_bases.alt AS ALT,
vep.Consequence AS Consequence,
vep.IMPACT AS Impact,
vep.SYMBOL AS Symbol,
vep.Gene AS Gene,
vep.EXON AS EXON,
vep.INTRON AS INTRON,
{EXTRA_COLS}
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases,
alternate_bases.vep AS vep
WHERE start_position >= {X} AND start_position <= {Y} AND
REGEXP_CONTAINS(vep.Consequence, r"missense_variant")
ORDER BY start_position, reference_bases
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region,
EXTRA_COLS=extra_columns, X=x, Y=y)
neg_variants = run_query(query)
neg_variants.head()
Explanation: The next query showcases how to use some of the vep annotation values.
Query 1.6 - VEP Annotations
Given a region of interest, examine vep annotations to pull out missense variants.
End of explanation
import ipywidgets as widgets
print("Variables for Chromosome (Type 2) queries")
gnomad_version_widget_chr = widgets.Dropdown(
options=['v2_1_1_exomes', 'v2_1_1_genomes', 'v3_genomes'],
value='v2_1_1_exomes',
description='gnomAD version:',
disabled=False,
style={'description_width': 'initial'}
)
display(gnomad_version_widget_chr)
chromosome_widget_chr = widgets.Dropdown(
options=['chr1', 'chr2', 'chr3', 'chr4', 'chr5', 'chr6', 'chr7', 'chr8',
'chr9', 'chr10', 'chr11', 'chr12', 'chr13', 'chr14', 'chr15',
'chr16', 'chr17', 'chr18', 'chr19', 'chr20', 'chr21', 'chr22',
'chrX', 'chrY'],
value='chr17',
description='Chromosome:',
disabled=False,
style={'description_width': 'initial'}
)
display(chromosome_widget_chr)
# Set the variables for the rest of the Chromosome (Type 2) queries based on the values above.
gnomad_version_chr=gnomad_version_widget_chr.value
chromosome_chr=chromosome_widget_chr.value
print('Running chromosome (Type 2) queries on gnomAD version: {}, chromosome: {}'.format(
gnomad_version_chr,
chromosome_chr
))
if gnomad_version_chr.startswith('v3'):
# Variant type (snv, indel, multi-snv, multi-indel, or mixed) is stored under difference columns in V2 and V3
variant_type_col = 'variant_type'
extra_columns = ''
else:
variant_type_col = 'alternate_bases.allele_type'
# These vep columns only exist in V2
extra_columns = 'vep.STRAND AS STRAND, vep.Protein_position AS Protein_pos,'
Explanation: gnomAD Queries Type2: Explore an entire chromosome
This section queries across an entire chromosome.
End of explanation
query_template =
SELECT reference_name AS CHROM,
start_position AS POS,
names AS ID,
reference_bases AS REF,
alternate_bases.alt AS ALT,
vep.SYMBOL AS Symbol,
vep.Gene AS Gene,
AN,
alternate_bases.AC AS AC,
alternate_bases.AF AS AF,
vep.EXON AS EXON,
vep.INTRON AS INTRON,
{EXTRA_COLS}
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases,
alternate_bases.vep AS vep
WHERE AN > 0 AND AF > 0.9
ORDER BY AN DESC
query = query_template.format(GNOMAD_VER=gnomad_version_chr,
CHROM=chromosome_chr,
EXTRA_COLS=extra_columns)
high_af = run_query(query)
high_af.head()
Explanation: Query 2.1 - Find alleles that occur at least in 90% of samples
Find all variants on the selected chromosome that were observed in at least 90% of samples. In other words, this query finds variants where allele frequency is very high for non-REF alleles.
End of explanation
high_af.groupby('Symbol').count()[['POS']].sort_values(by=['POS'],
ascending=False).head(10)
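# Added sketch: the same condensed result as a quick bar plot of the ten genes with the most
# high-frequency variants, reusing the grouped counts computed above.
import matplotlib.pyplot as plt
top_genes = high_af.groupby('Symbol').count()[['POS']].sort_values(by=['POS'], ascending=False).head(10)
fig, ax = plt.subplots(figsize=(10, 5))
ax.bar(top_genes.index, top_genes.POS)
ax.set_ylabel('Number of variants with AF > 0.9')
plt.xticks(rotation=45)
plt.show()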
Explanation: We can condense the result and only list gene symbols and the number of variants found in the previous query:
End of explanation
query_template =
SELECT reference_name AS CHROM,
start_position AS POS,
names AS ID,
reference_bases AS REF,
alternate_bases.alt AS ALT,
vep.SYMBOL AS Symbol,
vep.Gene AS Gene,
AN,
alternate_bases.AC_fin_male AS AC_fin_m,
alternate_bases.AC_afr_male AS AC_afr_m,
ROUND(ABS(alternate_bases.AC_fin_male - alternate_bases.AC_afr_male) / alternate_bases.AC_male, 3) AS fin_afr_diff,
vep.EXON AS EXON,
vep.INTRON AS INTRON,
{EXTRA_COLS}
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases,
alternate_bases.vep AS vep
WHERE vep.SYMBOL IS NOT NULL AND
alternate_bases.AC_male > 20 AND alternate_bases.AC_fin_male > 0 AND alternate_bases.AC_afr_male > 0
order by fin_afr_diff DESC
LIMIT 1000
query = query_template.format(GNOMAD_VER=gnomad_version_chr,
CHROM=chromosome_chr,
EXTRA_COLS=extra_columns)
stats_chr_ancestry = run_query(query)
stats_chr_ancestry.head()
Explanation: Query 2.2 - Top variants by ancestry difference
Find top 1,000 variants on the selected chromosome that show the most significant differences between male samples of African-American ancestry versus Finnish ancestry
End of explanation
query_template =
SELECT Symbol, count(1) AS num_indels
FROM
(
SELECT DISTINCT
start_position AS str_pos,
alternate_bases.alt AS alt,
vep.SYMBOL AS Symbol,
{VAR_TYPE_COL} AS variant_type,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases,
alternate_bases.vep AS vep
WHERE vep.SYMBOL IS NOT NULL AND variant_type IN ('ins', 'del', 'indel')
)
GROUP BY 1
ORDER BY 2 DESC
LIMIT 1000
query = query_template.format(GNOMAD_VER=gnomad_version_chr,
CHROM=chromosome_chr,
VAR_TYPE_COL=variant_type_col)
indel_stats = run_query(query)
indel_stats.head(10)
Explanation: Query 2.3 - Find genes with high number of INDELs
Find top 1000 genes with the highest number of INDELs on the selected chromosome.
End of explanation
bucket_size = 10000
query_template =
SELECT CAST(FLOOR(DIV(start_position, {BUCKET})) AS INT64) AS start_pos_bucket ,
count(1) AS num_snv
FROM
(
SELECT DISTINCT
start_position,
alternate_bases.alt AS alt,
{VAR_TYPE_COL} AS variant_type,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases
WHERE variant_type = 'snv'
)
GROUP BY 1
ORDER BY 1
query = query_template.format(GNOMAD_VER=gnomad_version_chr,
CHROM=chromosome_chr,
VAR_TYPE_COL=variant_type_col,
BUCKET=bucket_size)
snv_dist = run_query(query)
snv_dist.head()
import matplotlib.pyplot as plt
plt.figure(dpi=150)
plt.bar(snv_dist.start_pos_bucket, snv_dist.num_snv)
plt.xlabel("Bucket number of start_pos")
plt.ylabel("No of SNVs in each bucket")
plt.title("Distribution of SNVs on {} for buckets of {} base pairs".format(chromosome_chr, bucket_size))
plt.show()
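# Added sketch: locate the densest bucket and translate it back into genomic coordinates
# (each bucket spans `bucket_size` base pairs, as defined above).
peak = snv_dist.loc[snv_dist.num_snv.idxmax()]
print('Densest bucket starts near position {:,} with {} SNVs'.format(
    int(peak.start_pos_bucket) * bucket_size, int(peak.num_snv)))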
Explanation: Query 2.4 - Find distribution of SNVs across a chromosome
Find the distribution of SNVs across the selected chromosome. In order to be able to plot the result we group base pairs into buckets of size 10,000.
End of explanation |
14,987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GCE Lab 4 - Dwarf Galaxy - Chemical Evolution Trend
In this notebook, you will tune model parameters to fit the chemical evolution trend derived from stellar spectroscopy, for the dwarf spheroidal galaxy Fornax.
Step1: 1. Stellar Abundances
To plot the observed stellar abundances of a galaxy, STELLAB first needs to be launched without input parameter.
Step2: The next step is to select a galaxy. So far, STELLAB includes the Milky Way, Sculptor, Carina, Fornax, and the LMC. Simply use the galaxy parameter and enter the name of the galaxy with lower cases (spaces need to be replaced by underscores). Then, use the plot_spectro function to visualize the selected abundance ratios.
Step3: 2. Fit Stellar Abundances with Models | Python Code:
# Import standard Python packages
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# One-zone galactic chemical evolution code
import NuPyCEE.omega as omega
# Stellar abundances plotting code
import NuPyCEE.stellab as stellab
# Matplotlib option
%matplotlib inline
Explanation: GCE Lab 4 - Dwarf Galaxy - Chemical Evolution Trend
In this notebook, you will tune model parameters to fit the chemical evolution trend derived from stellar spectroscopy, for the dwarf spheroidal galaxy Fornax.
End of explanation
# Launch the STELLAB code
s = stellab.stellab()
Explanation: 1. Stellar Abundances
To plot the observed stellar abundances of a galaxy, STELLAB first needs to be launched without input parameter.
End of explanation
# Select the galaxy
galaxy = "fornax"
# Plot the abundance ratios
matplotlib.rcParams.update({'font.size': 16.0})
s.plot_spectro(galaxy=galaxy, xaxis='[Fe/H]', yaxis='[Mg/Fe]')
plt.ylim(-1.6, 1.6)
Explanation: The next step is to select a galaxy. So far, STELLAB includes the Milky Way, Sculptor, Carina, Fornax, and the LMC. Simply use the galaxy parameter and enter the name of the galaxy with lower cases (spaces need to be replaced by underscores). Then, use the plot_spectro function to visualize the selected abundance ratios.
End of explanation
# \\\\\\\\\\ Modify below \\\\\\\\\\\\
# ====================================
# Define the galactic chemical evolution (GCE) parameters
# Dimensionless star formation efficiency (must be below 1.0)
# Original value --> 0.3
sfe = 0.3
# Strength of galactic outflows
# Original value --> 2.0
mass_loading = 2.0
# Number of Type Ia supernova that occurs per stellar mass formed
# Original value --> 0.4e-3
nb_1a_per_m = 0.4e-3
# ====================================
# ////////// Modify above ////////////
# Run the GCE code OMEGA
o_1 = omega.omega(galaxy=galaxy, SF_law=True, sfe=sfe, mass_loading=mass_loading, \
nb_1a_per_m=nb_1a_per_m, transitionmass=10.0)
# Choose your elemental abundance ratios
yaxis = '[Mg/Fe]'
xaxis = '[Fe/H]'
# Plot stellar abundances derived from observations
matplotlib.rcParams.update({'font.size': 16.0})
s.plot_spectro(galaxy=galaxy, xaxis=xaxis, yaxis=yaxis)
# Overplot the predictions on top of observations
xy_0_1 = o_1.plot_spectro(xaxis=xaxis, yaxis=yaxis, return_x_y=True)
plt.plot(xy_0_1[0], xy_0_1[1], color='w', linewidth=3.0)
plt.plot(xy_0_1[0], xy_0_1[1], color='m', linewidth=1.5, label='GCE prediction 1')
# Update the legend and set the X and Y axis limits.
plt.legend(loc='center left', bbox_to_anchor=(1.01, 0.5), markerscale=0.8, fontsize=16)
plt.ylim(-1.6, 1.6)
plt.xlim(-3.2, 0.2)
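# Added sketch: overplot a second OMEGA run with a different star formation efficiency to see
# how that parameter shifts the predicted trend. Same calls as above; only sfe changes, and
# the value 0.6 is just an illustrative choice.
o_2 = omega.omega(galaxy=galaxy, SF_law=True, sfe=0.6, mass_loading=mass_loading, \
                  nb_1a_per_m=nb_1a_per_m, transitionmass=10.0)
xy_0_2 = o_2.plot_spectro(xaxis=xaxis, yaxis=yaxis, return_x_y=True)
plt.plot(xy_0_2[0], xy_0_2[1], color='c', linewidth=1.5, label='GCE prediction 2')
plt.legend(loc='center left', bbox_to_anchor=(1.01, 0.5), markerscale=0.8, fontsize=16)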
Explanation: 2. Fit Stellar Abundances with Models
End of explanation |
14,988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Citation-https
Step1: conda install numpy
Step2: Download the RetinaNet model file that will be used for object detection via this link
https | Python Code:
! pip install tensorflow
! pip install --upgrade pip
Explanation: Citation-https://towardsdatascience.com/object-detection-with-10-lines-of-code-d6cb4d86f606
End of explanation
! pip install numpy -I
import numpy.core.multiarray
!pip install spacy
! pip install scipy
! pip install opencv-python
! pip install pillow
! pip install matplotlib
! pip install h5py
! pip install keras
! pip3 install https://github.com/OlafenwaMoses/ImageAI/releases/download/2.0.2/imageai-2.0.2-py3-none-any.whl
Explanation: conda install numpy
End of explanation
import os
os.getcwd()
os.chdir('C:\\Users\\KOGENTIX\\Desktop\\image')
os.getcwd()
from imageai.Detection import ObjectDetection
import os
execution_path = os.getcwd()
detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath( os.path.join(execution_path , "resnet50_coco_best_v2.0.1.h5"))
detector.loadModel()
detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path , "image.jpeg"), output_image_path=os.path.join(execution_path , "imagenew.jpeg"))
for eachObject in detections:
print(eachObject["name"] , " : " , eachObject["percentage_probability"] )
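# Added sketch (not part of the cited tutorial): display the annotated output image that
# detectObjectsFromImage wrote to disk, using the Pillow and matplotlib packages installed above.
from PIL import Image
import matplotlib.pyplot as plt
annotated = Image.open(os.path.join(execution_path, "imagenew.jpeg"))
plt.figure(figsize=(10, 8))
plt.imshow(annotated)
plt.axis('off')
plt.show()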
Explanation: Download the RetinaNet model file that will be used for object detection via this link
https://github.com/OlafenwaMoses/ImageAI/releases/download/1.0/resnet50_coco_best_v2.0.1.h5
End of explanation |
14,989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 1, figures 3 and 4
This notebook will show you how to produce figures 1.3 and 1.4 after the predictive modeling is completed.
The predictive modeling itself, unfortunately, doesn't fit in a notebook. The number-crunching can take several hours, and although logistic regression itself is not complicated, the practical details -- dates, authors, multiprocessing to speed things up, etc -- turn it into a couple thousand lines of code. (If you want to dig into that, see chapter1/code/biomodel.py, and the scripts in /logistic at the top level of the repo.)
Without covering those tangled details, this notebook can still explore the results of modeling in enough depth to give you a sense of some important choices made along the way.
Define modeling parameters
I start by finding an optimal number of features for the model, and also a value for C (the regularization constant). To do this I run a "grid search" that tests different values of both parameters. (I use the "gridsearch" option in biomodel, aka
Step1: assessment
There's a lot of random variation with this small sample size, but it's still perfectly clear that accuracy rises across this timeline. It may not be a linear relationship
Step2: The first number is the correlation coefficient; the second a p value.
Plotting individual volume probabilities
Step3: caveats
The pattern you see above is real, and makes a nice visual emblem of generic differentiation. However, there are some choices involved worth reflection. The probabilities plotted above were produced by six models, trained on 50-year segments of the timeline, using 1100 features and a C setting of 0.00008. That C setting works fine, but it's much lower than the one I chose as optimal for assessing accuracy. What happens if we use instead C = 0.015, and in fact simply reuse the evidence from figure 1.3 unchanged?
The accuracies recorded in finalpredictbio.csv come from a series of models named cleanpredictbio (plus some more info). I haven't saved all of them, but we have the last model in each sequence of 15. We can plot those probabilities. | Python Code:
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import random
accuracy_df = pd.read_csv('../modeloutput/finalbiopredicts.csv')
accuracy_df.head()
# I "jitter" results horizontally because we often have multiple results with the same x and y coordinates.
def jitteraframe(df, yname):
jitter = dict()
for i in df.index:
x = df.loc[i, 'center']
y = df.loc[i, yname]
if x not in jitter:
jitter[x] = set()
elif y in jitter[x]:
dodge = random.choice([-6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6])
x = x + dodge
df.loc[i, 'center'] = x
if x not in jitter:
jitter[x] = set()
jitter[x].add(y)
jitteraframe(accuracy_df, 'accuracy')
fig, ax = plt.subplots(figsize = (9, 9))
ax.margins(0.1)
ax.plot(accuracy_df.center, accuracy_df.accuracy, marker = 'o', linestyle = '', alpha = 0.5)
ax.annotate('accuracy', xy = (1700,1), fontsize = 16)
plt.show()
Explanation: Chapter 1, figures 3 and 4
This notebook will show you how to produce figures 1.3 and 1.4 after the predictive modeling is completed.
The predictive modeling itself, unfortunately, doesn't fit in a notebook. The number-crunching can take several hours, and although logistic regression itself is not complicated, the practical details -- dates, authors, multiprocessing to speed things up, etc -- turn it into a couple thousand lines of code. (If you want to dig into that, see chapter1/code/biomodel.py, and the scripts in /logistic at the top level of the repo.)
Without covering those tangled details, this notebook can still explore the results of modeling in enough depth to give you a sense of some important choices made along the way.
Define modeling parameters
I start by finding an optimal number of features for the model, and also a value for C (the regularization constant). To do this I run a "grid search" that tests different values of both parameters. (I use the "gridsearch" option in biomodel, aka: python3 biomodel.py gridsearch.) The result looks like this:
where darker red squares indicate higher accuracies. I haven't labeled the axes correctly, but the vertical axis here is number of features (from 800 to 2500), and the horizontal axis is the C parameter (from .0012 to 10, logarithmically).
It's important to use the same sample size for this test that you plan to use in the final model: in this case a rather small group of 150 volumes (75 positive and 75 negative), because I want to be able to run models in periods as small as 20 years. With such a small sample, it's important to run the gridsearch several times, since the selection of a particular 150 volumes introduces considerable random variability into the process.
One could tune the C parameter for each sample, and I try that in a different chapter, but my experience is that it introduces complexity without actually changing results--plus I get anxious about overfitting through parameter selection. Probably better just to confirm results with multiple samples and multiple C settings. A robust result should hold up.
I've tested the differentiation of genres with multiple parameter settings, and it does hold up. But for figure 1.3, I settled on 1100 features (words) and C = 0.015 as settings that fairly consistently produce good results for the biography / fiction boundary. Then it's possible to
Assess accuracy across time: Figure 1.3
I do this by running python3 biomodel.py usenewdata (the contrast between 'new' and 'old' metadata will become relevant later in this notebook). That produces a file of results visualized below.
End of explanation
from scipy.stats import pearsonr
pearsonr(accuracy_df.floor, accuracy_df.accuracy)
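# Added sketch: overlay a simple least-squares trend line on the accuracy scatter to make the
# rising trend visible, using np.polyfit on the same columns used for the correlation.
slope, intercept = np.polyfit(accuracy_df.floor, accuracy_df.accuracy, 1)
fig, ax = plt.subplots(figsize=(9, 9))
ax.plot(accuracy_df.center, accuracy_df.accuracy, marker='o', linestyle='', alpha=0.5)
ax.plot(accuracy_df.floor, intercept + slope * accuracy_df.floor, color='k')
plt.show()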
Explanation: assessment
There's a lot of random variation with this small sample size, but it's still perfectly clear that accuracy rises across this timeline. It may not be a linear relationship: it looks like the boundary between fiction and biography may be sharpest around 1910, and rather than a smooth line, it might be two regimes divided around 1850. But it's still quite clear that accuracy rises: if we modeled it simply as a linear correlation, it would be strong and significant.
End of explanation
root = '../modeloutput/'
frames = []
for floor in range(1700, 2000, 50):
sourcefile = root + 'theninehundred' + str(floor) + '.csv'
thisframe = pd.read_csv(sourcefile)
frames.append(thisframe)
df = pd.concat(frames)
df.head()
groups = df.groupby('realclass')
groupnames = {0: 'biography', 1: 'fiction'}
groupcolors = {0: 'k', 1: 'r'}
fig, ax = plt.subplots(figsize = (9, 9))
ax.margins(0.1)
for code, group in groups:
ax.plot(group.dateused, group.logistic, marker='o', linestyle='', ms=6, alpha = 0.66, color = groupcolors[code], label=groupnames[code])
ax.legend(numpoints = 1, loc = 'upper left')
plt.show()
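# Added sketch: the same evidence summarized as a single number -- the share of volumes that
# the period models place on the correct side of p = 0.5.
correct = ((df.logistic > 0.5) == (df.realclass == 1)).mean()
print('fraction of volumes on the correct side of 0.5:', round(correct, 3))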
Explanation: The first number is the correlation coefficient; the second a p value.
Plotting individual volume probabilities: Figure 1.4
In a sense plotting individual volumes is extremely simple. My modeling process writes files that record the metadata for each volume along with a column logistic that reports the predicted probability of being in the positive class (in this case, fiction). We can just plot the probabilities on the y axis, and dates used for modeling on the x axis. Have done that below.
End of explanation
root = '../modeloutput/'
frames = []
for floor in range(1700, 2000, 20):
if floor == 1720:
continue
# the first model covers 40 years
sourcefile = root + 'cleanpredictbio' + str(floor) + '2017-10-15.csv'
thisframe = pd.read_csv(sourcefile)
frames.append(thisframe)
df = pd.concat(frames)
bio = []
fic = []
for i in range (1710, 1990):
segment = df[(df.dateused > (i - 10)) & (df.dateused < (i + 10))]
bio.append(np.mean(segment[segment.realclass == 0].logistic))
fic.append(np.mean(segment[segment.realclass == 1].logistic))
groups = df.groupby('realclass')
groupnames = {0: 'biography', 1: 'fiction'}
groupcolors = {0: 'k', 1: 'r'}
fig, ax = plt.subplots(figsize = (9, 9))
ax.margins(0.1)
for code, group in groups:
ax.plot(group.dateused, group.logistic, marker='o', linestyle='', ms=6, alpha = 0.5, color = groupcolors[code], label=groupnames[code])
ax.plot(list(range(1710,1990)), bio, c = 'k')
ax.plot(list(range(1710,1990)), fic, c = 'r')
ax.legend(numpoints = 1, loc = 'upper left')
plt.show()
Explanation: caveats
The pattern you see above is real, and makes a nice visual emblem of generic differentiation. However, there are some choices involved worth reflection. The probabilities plotted above were produced by six models, trained on 50-year segments of the timeline, using 1100 features and a C setting of 0.00008. That C setting works fine, but it's much lower than the one I chose as optimal for assessing accuracy. What happens if we use instead C = 0.015, and in fact simply reuse the evidence from figure 1.3 unchanged?
The accuracies recorded in finalpredictbio.csv come from a series of models named cleanpredictbio (plus some more info). I haven't saved all of them, but we have the last model in each sequence of 15. We can plot those probabilities.
End of explanation |
14,990 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Exam question solution
Juan Valdez can earn u = 10 as a farm worker. Alternatively, if he can raise a lump-sum of I=60, he can start a risky coffee-growing project. Juan is risk-neutral. The project can ‘succeed’ and generate project return Xs =100 or it can ‘fail’ in which case it will generate Xf =0. The probability of success depends on Juan’s chosen level of ‘diligence.’ When he is diligent the project succeeds with probability p = 0.90 (for an expected gross return of 0.9100+0.10= 90). If instead he chooses to be ‘non-diligent’ he diverts effort and/or financial resources to other projects that give him private benefits B=10 but this lowers the probability of success on his coffee project to q = 0.45 (for expected project returns of 45).
a) Show that if Juan has 60 of his own funds he will start the project and be diligent. That is, show that he earns more from self-funding and being diligent than he does from either working as an unskilled worker or self-funding and being non-diligent (include the private benefits he would capture).
$$\max_{c_s, c_f} p (X_s - c_s) + (1-p) (X_f - c_f) - I$$
subject to the agent's participation (PC) constraint
Step2: Consider a project with the following characteristics
Step3: This project fails only 1 percent of the time when the agent is non-diligent (corrupt) but fails 50 percent of the time when they are non-diligent (corrupt). We associate non-diligence with an opportunity to divert $\bar B$ in funds to private uses.
As derived above the optimal renumeration contract calls for the agent to pay a big fine for failure and earn a positive reward for success
Step4: In expectation this covers the agent's opportunity cost of funds $\bar u$. Since the incentive compatibility constraint is met (by construction) when she is diligent
Step5: The principal extracts
Step6: Under this contract the agent will be diligent even though they cannot be observed. Had they been non-diligent the principal would earn
Step7: Limited Liability constraints
In the example above the agent is asked to pay the principal in the event of failure ($c_f <0$). Suppose however that the agent cannot be made to pay the fine (e.g. they cannot post bond and run away before paying the fine).
Suppose in fact that the worst fine we can impose is that they pay $c_f = -5$.
When that is the case the cheapest way to satisfy the incentive compatibility constraint is to set
Step8: Which then means that in expectation the agent earns an information rent of
Step9: This is above their reservation utility $bar u$ and this contract is therefore quite a bit more expensive to the principal. Their net return is now
$$E(X,p) - I - E(C,p)$$
$$E(X,p) - I - p/(p-q)$$
And the net benefits to the project are greatly reduced to
Step10: The net benefits have been reduced by the heavy cost of incentive contract to keep the agent diligent or non-corrupt.
But this is still better than allowing the agent to be corrupt.
Step11: As we can see from the diagram above the principal can contract with agents who face limited liability but they earn less from agents where the LL constraint binds. The limited liability constraint means the agent must earn a rent in excess of their reservation utility.
Suppose the most we can take away from the agent is an amount $A$, equal for example to the amount of resources that can be seized or that they posted as bond or collateral.
The cheapest way to satisfy the IC is then
Step12: This is an important expression. This tells us that unless the agent can post a minimum bond or collateral of this amount then the principal cannot provide them with strong enough incentives for them to be diligent and still allow the principal to break even on the transaction.
The take away lesson is that sometimes in asymmetric information situations one has to pay employees a rent (expected payment in excess of their next best option) in order to motivate their behavior. It also means however that if the principal (employer, lender, etc) has a choice of agent to deal with they will prefer to deal with those who can post collateral.
Monitoring by an intermediary
Suppose an intermediary can 'monitor' the project. By expending resources $m$ the monitor can reduce the agent's private benefits from non-diligence from $\bar B$ to $\bar B(m) < \bar B$.
For example the intermediar might visit the agent at random times to check up on the progress of the project. This does not completely eliminate the scope for corruption but limits how much can be privately captured (perhaps because the agent now has to spend additional resources hiding her diversions of effort and funds).
The obvious advantage of this is that it reduces the size of the information rent to
Step13: Which shows that over a range monitoring by the intermediary lowers the information rent that must be left with the agent faster than the cost of monitoring but eventually diminishing returns to this activity kick in (at somewhat less than 4 units of monitoring).
Who monitors the monitor? Two-layered moral hazard
More likely the principal cannot directly contract on the intermediary's level of monitoring. The intermediary is supposed to spend resources $m$ to monitor the agent but if the government has no way to directly verify if this is happening or not, the intermediary may well be tempted to monitor at expense of zero but claim that it has monitored at expense $m$?
The only way for the government to avoid this from happening is to put also put the intermediary on an incentive contract. The way to do this is to make the intermediary share in the agent's successess and failures.
The Principal's contract design problem is to choose renumeration packages $(c_s,c_f)$ and $(w_s, w_f)$ to maximize
Step14: In the competitive scenario more monitoring is employed but this monitoring is effective at bringing down the total cost of implementation, leaving more net project benefits. When the market for intermediaries is not competitive (and/or intermediaries are subject to limited liability constraints themselves) then monitoring still works (given our parameter assumptions in this example) but more of the project returns must be used for costly contractual bonus payments to motivate both the agent and the intermediary. Less monitoring will be employed and the net project returns will be reduced
Keep in mind that net project returns could be even larger if agents themselves could be made to post bond (credibly pay fines) in the event of project failure. Then no monitoring at all would be required and the agent could be paid the reservation wage (of zero in this example) and maximum project returns of $E(X|p) - I$ or | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interact, fixed
def E(xs,xf,p):
Expectation operator
return p*xs + (1-p)*xf
Explanation: Exam question solution
Juan Valdez can earn u = 10 as a farm worker. Alternatively, if he can raise a lump-sum of I=60, he can start a risky coffee-growing project. Juan is risk-neutral. The project can ‘succeed’ and generate project return Xs = 100 or it can ‘fail’ in which case it will generate Xf = 0. The probability of success depends on Juan’s chosen level of ‘diligence.’ When he is diligent the project succeeds with probability p = 0.90 (for an expected gross return of 0.9·100 + 0.1·0 = 90). If instead he chooses to be ‘non-diligent’ he diverts effort and/or financial resources to other projects that give him private benefits B=10 but this lowers the probability of success on his coffee project to q = 0.45 (for expected project returns of 45).
a) Show that if Juan has 60 of his own funds he will start the project and be diligent. That is, show that he earns more from self-funding and being diligent than he does from either working as an unskilled worker or self-funding and being non-diligent (include the private benefits he would capture).
$$\max_{c_s, c_f} p (X_s - c_s) + (1-p) (X_f - c_f) - I$$
subject to the agent's participation (PC) constraint:
$$ p c_s + (1-p) c_f \geq \bar u $$
Without loss of generality in what follows we will set $\bar u = 0$, so in the ideal of circumstances the government would be able to hire the agent away from their next best opportunity by paying them amount normalized to zero.
and an incentive compatibility (IC) constraint:
$$ p c_s + (1-p) c_f \geq q c_s + (1-q) c_f + \bar B $$
Note that we will at times write this problem compactly as:
$$\max_{c_s,c_f} E(X|p) - E(c|p) - I$$
s.t.
$$E(c|p) \geq \bar u$$
$$E(c|p) \geq E(c|q) + \bar B$$
The IC constraint can be rewritten:
$$ c_s \geq c_f + \frac{\bar B}{p-q} $$
This can be satisfied at minimum cost when this constraint binds. This tells us that in the event of project success the agent must receive a 'bonus' of $\frac{\bar B}{p-q}$ over what they get paid for failure outcomes. This higher reward for success compared to failure is what induces the agent to want to be diligent and increase the probability of success from $q$ to $p$. The contractual cost of this renumeration strategy is then $p c_s + (1-p) c_f$ or:
$$E(c|p) = c_f + p \frac{\bar B}{\Delta}$$
where $\Delta = p-q$, which then means that the expected net benefit of the government project is:
$$E(X|p) -I - c_f - p \frac{\bar B}{\Delta}$$
Note that we earlier normalized the agent's next best employment opportunity to a remuneration of zero. If the government could get local agents to bid competitively against each other for the government contract, the agent's participation constraint could be made to bind, but this in turn would require:
$$c_f = - p \frac{\bar B}{\Delta}+\bar u$$
$$c_s = (1-p) \frac{\bar B}{\Delta}+\bar u$$
One way to think of this is that the agent is made to pay a fine of $- p \frac{\bar B}{\Delta}$ when the project fails while if the project succeeds she earns a reward of $(1-p) \frac{\bar B}{\Delta}$
A possible problem with this type of project is that it may be difficult for the government to impose a penalty on agents when the project fails (e.g. the local contractor leaves town when the bridge collapses or the incidence of malaria cases surges). One way to try to resolve that problem is by asking local contractors to post a bond but this solution may be hard to implement particularly in poor communities where the agents are poor to start with.
The consequence of not being able to impose a fine when the project fails is that we have to now impose yet another constraint on the contract design problem, a limited liability constraint of the form
$$c_f \geq 0$$
for example if the heaviest fine that can be imposed is to pay the local agent nothing when the project fails. The lowest cost way to remunerate the agent will be for this limited liability constraint and the incentive compatibility constraint to bind (to set the punishment as high as possible and the bonus as low as possible, compatible with maintaining incentives). With $c_f = 0$ an extra bonus must now be paid following success outcomes to continue to satisfy the incentive constraint. But this increases the expected cost of remuneration and reduces expected benefits from the project to:
$$E(X|p) - I - p \frac{\bar B}{\Delta}$$
The last term $p\frac{\bar B}{\Delta}$ is sometimes referred to as an 'information rent' that must be paid to the agent that arises due to the asymmetric information problem.
An Example
End of explanation
I = 60 # Lump sum investment to start project
Xs = 100 # project success return
Xf = 0 # project failure return
p = 0.90 # probability of success when diligent
q = 0.45 # probability of success when non-diligent
EX = E(Xs,Xf,p) # Gross expected project return
ubar = 10 # Consumer reservation income
B = 10 # private benefits to being non-diligent
print('Expected returns Diligent (p): {}, Non-diligent (q): {}'.format(E(Xs,Xf,p), E(Xs,Xf,q)))
Explanation: Consider a project with the following characteristics:
End of explanation
cf = -p*B/(p-q) + ubar
cs = (1-p)*B/(p-q) + ubar
print('(c_f, c_s) = ({:5.1f}, {:5.1f})'.format(cf, cs))
print('consumer and bank expected payments:')
E(cs,cf,p), E(Xs-cs, Xf-cf,p) - I
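# Added check (not in the original): confirm numerically that this contract satisfies the
# participation and incentive compatibility constraints derived above.
print('E[c | diligent]          =', round(E(cs, cf, p), 3), ' (participation requires at least', ubar, ')')
print('E[c | non-diligent] + B  =', round(E(cs, cf, q) + B, 3), ' (IC requires this to be no larger than E[c | diligent])')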
Explanation: This project fails only 10 percent of the time when the agent is diligent but fails 55 percent of the time when they are non-diligent (corrupt). We associate non-diligence with an opportunity to divert $\bar B$ in funds to private uses.
As derived above, the optimal remuneration contract calls for the agent to pay a big fine for failure and earn a positive reward for success:
End of explanation
def zeroprofit(c):
return EX/p -((1-p)/p)*c - I/p
def IC(c):
return c + B/(p-q)
def BPC(c,ubar):
return ubar/p - ((1-p)/p)*c
cf_min, cf_max = -40,10
c = np.arange(cf_min, cf_max)
ax = plt.subplot(111)
ax.plot(c,zeroprofit(c), 'k--',label='zero $\Pi$')
ax.plot(c,IC(c), label='IC')
ax.plot(c,BPC(c,ubar), label='PC',color='b')
ax.plot(cf,cs,marker='o')
ax.legend(loc='upper center')
ax.set_xlabel('$c_f$'), ax.set_ylabel('$c_s$')
ax.axvline(0, color='k')
ax.set_ylim(0,50)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
Explanation: In expectation this covers the agent's opportunity cost of funds $\bar u$. Since the incentive compatibility constraint is met (by construction) when she is diligent:
Diagram
End of explanation
EX - I - ubar
Explanation: The principal extracts
End of explanation
q*(Xs- cs) + (1-q)*(Xf-cf) - I
q*(cs) + (1-q)*(cf) +B
Explanation: Under this contract the agent will be diligent even though they cannot be observed. Had they been non-diligent the principal would earn
End of explanation
cf = -5
cs = -5 + B/(p-q)
Explanation: Limited Liability constraints
In the example above the agent is asked to pay the principal in the event of failure ($c_f <0$). Suppose however that the agent cannot be made to pay the fine (e.g. they cannot post bond and run away before paying the fine).
Suppose in fact that the worst fine we can impose is that they pay $c_f = -5$.
When that is the case the cheapest way to satisfy the incentive compatibility constraint is to set:
End of explanation
E(cs,cf,p)
Explanation: Which then means that the agent's expected remuneration is
End of explanation
EX - I - E(cs,cf,p)
Explanation: This is above their reservation utility $\bar u$, and this contract is therefore quite a bit more expensive to the principal. The principal's net return is now
$$E(X|p) - I - E(c|p)$$
$$= E(X|p) - I - c_f - \frac{p \bar B}{p-q}$$
And the net benefits to the project are greatly reduced to:
End of explanation
q*(Xs-cs)+(1-q)*(Xf-cf) - I
Explanation: The net benefits have been reduced by the heavy cost of the incentive contract needed to keep the agent diligent or non-corrupt.
But this is still better than allowing the agent to be corrupt.
End of explanation
Amc = p*B/(p-q) - (EX - I)
Amc
Explanation: As we can see from the diagram above the principal can contract with agents who face limited liability but they earn less from agents where the LL constraint binds. The limited liability constraint means the agent must earn a rent in excess of their reservation utility.
Suppose the most we can take away from the agent is an amount $A$, equal for example to the amount of resources that can be seized or that they posted as bond or collateral.
The cheapest way to satisfy the IC is then:
$c_f = -A$
$c_s = -A + B/(p-q)$
Which implies the agent's expected repayment is:
$$E(c|p) = - A + \frac{p B}{p-q}$$
which will be more than their reservation wage $\bar u$ as long as $A < p \frac{\bar B}{p-q} - \bar u$
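A small numerical sketch of this expression, reusing p, q, B and ubar from the running example (the collateral grid below is purely illustrative):
A_grid = np.array([0, 5, 10, 15])           # hypothetical collateral levels
expected_pay = -A_grid + p*B/(p-q)          # E(c|p) = -A + pB/(p-q)
print(np.c_[A_grid, expected_pay, expected_pay - ubar])  # last column: rent over ubar, which turns negative past the threshold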
Minimum collateral requirement
What is the minimum collateral requirement below which the contract cannot both satisfy the incentive compatibility constraint and guarantee at least zero profits to the principal?
Substitute the expected repayment under limited liability (above) into the principal's zero profit condition and solve for $A$ (on the diagram above this is the $c_f$ at the intersection of the IC constraint and the principal's zero profit condition):
$E(X|p) - E(c|p) - I = 0$
$E(X|p) + A - \frac{p \bar B}{p-q} - I = 0$
$$\underline{A} = \frac{p B}{p-q} - [E(X|p) - I] $$
For our running example this minimum collateral requirement is:
End of explanation
m = np.linspace(0,10,20)
D = p - q  # Delta, the likelihood spread used in the incentive constraint (defined here in case it is not already defined above)
plt.plot(m, EX - I - p*(B/(1+m))/D - m)
Explanation: This is an important expression. This tells us that unless the agent can post a minimum bond or collateral of this amount then the principal cannot provide them with strong enough incentives for them to be diligent and still allow the principal to break even on the transaction.
The take away lesson is that sometimes in asymmetric information situations one has to pay employees a rent (expected payment in excess of their next best option) in order to motivate their behavior. It also means however that if the principal (employer, lender, etc) has a choice of agent to deal with they will prefer to deal with those who can post collateral.
Monitoring by an intermediary
Suppose an intermediary can 'monitor' the project. By expending resources $m$ the monitor can reduce the agent's private benefits from non-diligence from $\bar B$ to $\bar B(m) < \bar B$.
For example the intermediary might visit the agent at random times to check up on the progress of the project. This does not completely eliminate the scope for corruption but limits how much can be privately captured (perhaps because the agent now has to spend additional resources hiding her diversions of effort and funds).
The obvious advantage of this is that it reduces the size of the information rent to:
$$\frac{\bar B(m)}{\Delta}$$
And this in turn will reduce the total cost of remunerating the agent. Intuitively, since the private benefit that can be captured has been directly reduced by monitoring, the contract does not have to rely so much on costly bonus payments to motivate diligence.
Now of course the Principal will have to pay the intermediary to compensate them for their expense $m$, and this will add to the cost. But so long as this extra cost is smaller than the reduction in the cost of remunerating the agent, net project benefits will improve.
Under the assumption that the Principal can specify and observe the intermediary's monitoring effort, the net benefits from the project will now be:
$$E(X|p) - I - p \frac{\bar B(m)}{\Delta} - m$$
To take a concrete example, suppose that the 'monitoring function' were given by:
$$\bar B(m) = \frac{\bar B}{1+m}$$
then the net benefit from the project (after paying the agent's information rent and the monitoring cost) would look as follows as a function of $m$:
End of explanation
plt.plot(m, EX - I - p*(B/(1+m))/D - m)
plt.plot(m, EX - I - p*(B/(1+m))/D - p*m/D,'r--')
Explanation: Which shows that over a range, monitoring by the intermediary lowers the information rent that must be left with the agent faster than the monitoring cost rises, but eventually diminishing returns to this activity kick in (at somewhat less than 4 units of monitoring).
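The location of that peak can be checked directly (a sketch reusing m, EX, I, p, q and B from above):
net_benefit = EX - I - p*(B/(1+m))/(p-q) - m      # net benefits when the monitoring cost is just m
print('monitoring level that maximizes net benefits:', m[np.argmax(net_benefit)])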
Who monitors the monitor? Two-layered moral hazard
More likely the principal cannot directly contract on the intermediary's level of monitoring. The intermediary is supposed to spend resources $m$ to monitor the agent, but if the government has no way to directly verify whether this is happening, the intermediary may well be tempted to monitor at zero expense while claiming to have monitored at expense $m$.
The only way for the government to avoid this is to also put the intermediary on an incentive contract. The way to do this is to make the intermediary share in the agent's successes and failures.
The Principal's contract design problem is to choose remuneration packages $(c_s,c_f)$ and $(w_s, w_f)$ to maximize:
$$ E(X|p) - E(c|p) - E(m|p) - I $$
subject to participation constraints for both the agent and the intermediary
$$ E(c|p) \geq 0$$
$$ E(w|p) \geq 0$$
the (now modified) incentive compatibility constraint for the agent:
$$E(c|p) \geq E(c|q) + \bar B(m)$$
and an incentive compatibility constraint for the intermediary monitor:
$$E(w|p) \geq E(w|q) + m$$
As was the case with the agent the cost of providing incentives to the intermediary monitor will depend on whether the intermediary can be made to lose money when the project fails or not.
Let's first consider the case where it cannot. In that event the intermediary is paid $0$ when the project fails and
$$w_s = \frac{m}{\Delta}$$
when the project succeeds. Note this is very much like the expression we derived for the bonus that had to be paid to the agent. The expected cost of this intermediary remuneration contract (when $w_f = 0$) is then:
$$E(w|p) = p \frac{m}{\Delta}$$
which is always larger than $m$ so long as $0 < q < p$. This suggests that the intermediary will also earn an information rent equal to
$$E(w|p) - m = p \frac{m}{\Delta} - m > 0$$
since the monitor has to pay expenses $m$ while monitoring.
If on the other hand intermediaries can be made to bear liability for the projects that they monitor and that fail, then this rent can be eliminated. Consider the case of competitive bidding for the intermediary monitor job. Different firms will compete to offer their services until the expected return to being an intermediary monitor is equal to what they could earn in their next best occupation, which we assume to be zero. Then
$$E(w|p) = m $$
which implies
$$w_f + p \frac{m}{\Delta} = m $$
or
$$ w_f = -p \frac{m}{\Delta} + m$$
(which then implies $w_s = (1-p)\frac{m}{\Delta} +m $)
One way to think of this is that the principal asks the intermediary to put up a portion
$$I_m = p \frac{m}{\Delta} - m$$
of the total cost I of the project while the uninformed principal puts up the remainder
$$I_u = I - I_m$$
Then if the project fails the intermediary loses $I_m + m$ (their investment and their monitoring cost). If the project succeeds the intermediary pockets, net of their monitoring cost:
$$w_s - m = (1-p) \frac{m}{\Delta}$$
for a zero-profit expected return overall.
In this last competitive scenario the cost to the Principal of adding the intermediary to the contract is just the monitoring cost $m$ that must be compensated and not the larger information rent $p\frac{m}{\Delta}$ that we saw in the non-competitive case.
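A small numerical sketch of the two cases, reusing p and q from the example (the monitoring level m_mon below is an illustrative choice, not derived from the model):
m_mon = 2.0                                   # hypothetical monitoring expense
D = p - q
print('non-competitive monitor rent:', p*m_mon/D - m_mon)   # rent when w_f = 0
w_f = m_mon - p*m_mon/D                       # competitive case: monitor bears a loss on failure
w_s = w_f + m_mon/D                           # and is rewarded on success
print('competitive contract (w_f, w_s, I_m):', w_f, w_s, -w_f)  # expected payment equals m_mon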
The diagram below depicts net benefits under the competitive (solid line) and non-competitive (dashed line) scenarios:
End of explanation
EX - I
Explanation: In the competitive scenario more monitoring is employed, but this monitoring is effective at bringing down the total cost of implementation, leaving more net project benefits. When the market for intermediaries is not competitive (and/or intermediaries are subject to limited liability constraints themselves), monitoring still works (given our parameter assumptions in this example), but more of the project returns must be used for costly contractual bonus payments to motivate both the agent and the intermediary. Less monitoring will be employed and the net project returns will be reduced.
Keep in mind that net project returns could be even larger if agents themselves could be made to post bond (credibly pay fines) in the event of project failure. Then no monitoring at all would be required and the agent could be paid the reservation wage (of zero in this example) and maximum project returns of $E(X|p) - I$ or
End of explanation |
14,991 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Writing an algorithm (using pure scientific Python)
In this notebook, we show how to write an algorithm for the NeuroFinder challenge using pure scientific Python. Elsewhere in the challenge we've used Thunder to define a standard format and API for inputs and outputs. Many of the algorithm examples have additionally used Spark and Thunder to manipulate the data and run analyses, but this is not neccecssary.
Here, we'll write a function that uses Thunder only for IO, and runs an algorithm purely using scientific Python libraries, like numpy and skimage.
Load the data
This is the only step where we need Thunder (and the ThunderContext variable tsc), because we're using its API to load the images from Amazon S3
Step1: Write an algorithm
We need to write a function that takes an images variable as an input, as well as an info dictionary with data-set specific metadata, and returns identified sources as an output. The function will look like this
Step2: The images input is a class from Thunder, but we can convert it directly to a numpy array using the function collectValuesAsArray.
NOTE
Step3: Now we have a numpy array! So long as we perform this same first step in our run function, we can write the rest of our algorithm using our favorite Python libraries
Step4: Run our function on the images
Step5: And use basic matplotlib to look at the result | Python Code:
bucket = "s3n://neuro.datasets/"
path = "challenges/neurofinder/01.00/"
images = tsc.loadImages(bucket + path + 'images', startIdx=0, stopIdx=100)
Explanation: Writing an algorithm (using pure scientific Python)
In this notebook, we show how to write an algorithm for the NeuroFinder challenge using pure scientific Python. Elsewhere in the challenge we've used Thunder to define a standard format and API for inputs and outputs. Many of the algorithm examples have additionally used Spark and Thunder to manipulate the data and run analyses, but this is not neccecssary.
Here, we'll write a function that uses Thunder only for IO, and runs an algorithm purely using scientific Python libraries, like numpy and skimage.
Load the data
This is the only step where we need Thunder (and the ThunderContext variable tsc), because we're using its API to load the images from Amazon S3
End of explanation
def run(data, info=None):
# do an analysis on the images
# optionally make use of the metadata
# return a set of sources
pass
Explanation: Write an algorithm
We need to write a function that takes an images variable as an input, as well as an info dictionary with data-set specific metadata, and returns identified sources as an output. The function will look like this:
End of explanation
a = images.collectValuesAsArray()
a.shape
Explanation: The images input is a class from Thunder, but we can convert it directly to a numpy array using the function collectValuesAsArray.
NOTE: this will take a few seconds because we're loading the data here.
End of explanation
def run(data, info=None):
# convert to array
a = images.collectValuesAsArray()
# compute the standard deviation over time
s = a.std(axis=0)
# extract local peaks
from skimage.feature import peak_local_max
peaks = peak_local_max(s, min_distance=10, num_peaks=100).tolist()
# draw a circle around each peak
from skimage.draw import circle
from numpy import array
def makecircle(center):
rr, cc = circle(center[0], center[1], 5, a.shape[1:])
return array(zip(rr, cc))
circles = [makecircle(p) for p in peaks]
# put list of coordinates into SourceModel
from thunder import SourceModel
return SourceModel([c for c in circles])
Explanation: Now we have a numpy array! So long as we perform this same first step in our run function, we can write the rest of our algorithm using our favorite Python libraries: numpy, scipy, sklearn, skimage, etc. We do need to use Thunder's SourceModel at the end to wrap the output in the appropriate format at the end. Here's a very simple algorithm that just finds local maxima in the temporal standard deviation.
End of explanation
s = run(images)
s
Explanation: Run our function on the images
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(15,8))
plt.imshow(s.masks((512,512)), cmap='Greys_r', interpolation='none')
plt.axis('off');
Explanation: And use basic matplotlib to look at the result
End of explanation |
14,992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Poynting Vector of Half-Wave Antenna
PROGRAM
Step1: In this problem, I plot the magnitude of the time averaged Poynting vector for a half-wave antenna. The antenna is oriented vertically in this problem.
- In step 1, I define constants in the problem.
- $\mu_{0}$ is the permeability of free space.
- $c$ is the speed of light.
- $I_{0}$ is the magnitude of the current. I've chosen it to be 5 A.
- In step 2, I define a function to calculate the time averaged Poynting vector magnitude, $<S> = (\frac{\mu_{0} c I_{0}^{2}}{8\pi^{2}r^{2}})\frac{cos^2(\frac{\pi}{2}cos\theta)}{sin^2\theta}$.
- In step 3, I plot the Poynting vector (represented by a point at the tip of the Poynting vector) at different angles around the half-wave antenna, one meter from the antenna.
- In step 4, I plot the Poynting vector at one meter, two meters, and three meters from the antenna. Its strength (magnitude, or length of vector) decreases.
- In step 5, I plot the Poynting vector at different radii as a vector field. This is another way to see that its strength decreases farther from the antenna, and that it is strongest at angles close to 90 degrees and 270 degrees.
1 - Define Constants
Step2: 2 - Calculate Time Averaged Poynting Vector
Step3: 3 - Plot Poynting Vector at Different Angles Around Antenna
Step4: 3 - Plot Poynting Vector Magnitude at Different Angles Around Antenna for Different Distances from Origin | Python Code:
import numpy as np
import matplotlib.pylab as plt
Explanation: Poynting Vector of Half-Wave Antenna
PROGRAM: Poynting vector of half-wave antenna
CREATED: 5/30/2018
Import packages.
End of explanation
#Define constants - permeability of free space, speed of light, current amplitude.
u_0 = 1.26 * 10**(-6)
c = 2.997925 * 10**8
I_0 = 5
#Choose any current amplitude value, I_0.
Explanation: In this problem, I plot the magnitude of the time averaged Poynting vector for a half-wave antenna. The antenna is oriented vertically in this problem.
- In step 1, I define constants in the problem.
- $\mu_{0}$ is the permeability of free space.
- $c$ is the speed of light.
- $I_{0}$ is the magnitude of the current. I've chosen it to be 5 A.
- In step 2, I define a function to calculate the time averaged Poynting vector magnitude, $<S> = (\frac{\mu_{0} c I_{0}^{2}}{8\pi^{2}r^{2}})\frac{cos^2(\frac{\pi}{2}cos\theta)}{sin^2\theta}$.
- In step 3, I plot the Poynting vector (represented by a point at the tip of the Poynting vector) at different angles around the half-wave antenna, one meter from the antenna.
- In step 4, I plot the Poynting vector at one meter, two meters, and three meters from the antenna. Its strength (magnitude, or length of vector) decreases.
- In step 5, I plot the Poynting vector at different radii as a vector field. This is another way to see that its strength decreases farther from the antenna, and that it is strongest at angles close to 90 degrees and 270 degrees.
1 - Define Constants
End of explanation
def S_avg(x, r):
    return (u_0 * c * I_0**2)/(8 * np.pi**2 * r**2) * np.cos(np.pi/2 * np.cos(x))**2/np.sin(x)**2  # pi**2 matches the <S> formula stated above
Explanation: 2 - Calculate Time Averaged Poynting Vector
End of explanation
#Plot average Poynting vector magnitude at different angles.
fig = plt.figure(figsize = (8, 8))
ax = fig.add_subplot(1, 1, 1, projection = 'polar')
#Define a range of angles.
theta = np.arange(0, 2 * np.pi, 0.01)
#Plot Poynting vector magnitude at different radii.
ax.plot(theta, S_avg(theta, r = 1), color = 'red', label = 'r = 1')
#Plot an example vector.
x = 0
y = 0
u = S_avg(np.pi/3, r = 1) * np.sin(np.pi/3)
v = S_avg(np.pi/3, r = 1) * np.cos(np.pi/3)
ax.quiver(x, y, u, v, scale_units = 'xy', scale = 0.5, color = 'red')
#Adjust plot labels.
ax.set_rticks([100, 200, 300])
ax.set_rlabel_position(0)
ax.grid(True)
#Flip plot axes to match antenna position from notes.
ax.set_theta_offset(np.pi/2)
ax.set_theta_direction(-1)
#Add title.
ax.set_title('Time Average of Poynting Vector at Different Angles Around Antenna, \n One Meter Away', fontsize = 16, verticalalignment = 'bottom')
plt.legend(title = 'Radius in Meters', loc = [0.91, 0.91])
plt.tight_layout()
plt.show()
Explanation: 3 - Plot Poynting Vector at Different Angles Around Antenna
End of explanation
#Define a range of angles.
theta = np.arange(0, 2 * np.pi, 0.01)
def S_avg(x, r):
    return (u_0 * c * I_0**2)/(8 * np.pi**2 * r**2) * np.cos(np.pi/2 * np.cos(x))**2/np.sin(x)**2  # pi**2 matches the <S> formula stated above
#Plot average Poynting vector magnitude at different angles.
fig = plt.figure(figsize = (8, 8))
ax = fig.add_subplot(1, 1, 1, projection = 'polar')
#Plot Poynting vector magnitude at different radii.
ax.plot(theta, S_avg(theta, r = 1), color = 'red', label = 'r = 1')
ax.plot(theta, S_avg(theta, r = 2), color = 'blue', label = 'r = 2')
ax.plot(theta, S_avg(theta, r = 3), color = 'purple', label = 'r = 3')
#Plot an example vector.
theta = np.pi/3
x = 0
y = 0
u = S_avg(theta, r = 1)
v = 0
ax.quiver(x, y, u*np.sin(theta), u*np.cos(theta), scale_units = 'xy', scale = 0.5, color = 'red')
#Adjust plot labels.
ax.set_rticks([100, 200, 300])
ax.set_rlabel_position(90)
ax.grid(True)
#Flip plot axes to match antenna position from notes.
ax.set_theta_offset(np.pi/2)
ax.set_theta_direction(-1)
#Add title.
ax.set_title('Time Average of Poynting Vector at Different Angles Around Antenna', fontsize = 16, verticalalignment = 'bottom')
plt.legend(title = 'Radius in Meters', loc = [0.91, 0.91])
plt.tight_layout()
plt.savefig('Time Average of Poynting Vector at Different Angles Around Antenna.png')
plt.show()
def S_avg(x, r):
    return (u_0 * c * I_0**2)/(8 * np.pi**2 * r**2) * np.cos(np.pi/2 * np.cos(x))**2/np.sin(x)**2  # pi**2 matches the <S> formula stated above
#Plot average Poynting vector magnitude at different angles.
fig = plt.figure(figsize = (8, 8))
ax = fig.add_subplot(1, 1, 1, projection = 'polar')
meters = 1
n = 36
theta = np.linspace(0, 2 * np.pi , n)
r = np.linspace(meters, meters, 1)
X, Y = np.meshgrid(theta, r)
u = S_avg(X,Y)
v = 0
ax.quiver(X, Y, u*np.sin(X), u*np.cos(X), color = 'red', width = 0.005)
#Adjust plot labels.
ax.set_rticks([1, 2, 3, 4])
ax.set_rlabel_position(90)
ax.grid(True)
#Flip plot axes to match antenna position from notes.
ax.set_theta_offset(np.pi/2)
ax.set_theta_direction(-1)
#Add title.
ax.set_title('Time Average Poynting Vector at Different Angles Around Antenna', fontsize = 16, verticalalignment = 'bottom')
#plt.savefig('Time Average Poynting Vector at One Meter from Antenna.png')
plt.show()
Explanation: 3 - Plot Poynting Vector Magnitude at Different Angles Around Antenna for Different Distances from Origin
End of explanation |
14,993 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filtering and resampling data
This tutorial covers filtering and resampling, and gives examples of how
filtering can be used for artifact repair.
Step1: Background on filtering
A filter removes or attenuates parts of a signal. Usually, filters act on
specific frequency ranges of a signal — for example, suppressing all
frequency components above or below a certain cutoff value. There are many
ways of designing digital filters; see disc-filtering for a longer
discussion of the various approaches to filtering physiological signals in
MNE-Python.
Repairing artifacts by filtering
Artifacts that are restricted to a narrow frequency range can sometimes
be repaired by filtering the data. Two examples of frequency-restricted
artifacts are slow drifts and power line noise. Here we illustrate how each
of these can be repaired by filtering.
Slow drifts
Low-frequency drifts in raw data can usually be spotted by plotting a fairly
long span of data with the
Step2: A half-period of this slow drift appears to last around 10 seconds, so a full
period would be 20 seconds, i.e., $\frac{1}{20} \mathrm{Hz}$. To be
sure those components are excluded, we want our highpass to be higher than
that, so let's try $\frac{1}{10} \mathrm{Hz}$ and $\frac{1}{5}
\mathrm{Hz}$ filters to see which works best
Step3: Looks like 0.1 Hz was not quite high enough to fully remove the slow drifts.
Notice that the text output summarizes the relevant characteristics of the
filter that was created. If you want to visualize the filter, you can pass
the same arguments used in the call to
Step4: Notice that the output is the same as when we applied this filter to the data
using
Step5: Power line noise
Power line noise is an environmental artifact that manifests as persistent
oscillations centered around the AC power line frequency_. Power line
artifacts are easiest to see on plots of the spectrum, so we'll use
Step6: It should be evident that MEG channels are more susceptible to this kind of
interference than EEG that is recorded in the magnetically shielded room.
Removing power-line noise can be done with a notch filter,
applied directly to the
Step7:
Step8: Resampling
EEG and MEG recordings are notable for their high temporal precision, and are
often recorded with sampling rates around 1000 Hz or higher. This is good
when precise timing of events is important to the experimental design or
analysis plan, but also consumes more memory and computational resources when
processing the data. In cases where high-frequency components of the signal
are not of interest and precise timing is not needed (e.g., computing EOG or
ECG projectors on a long recording), downsampling the signal can be a useful
time-saver.
In MNE-Python, the resampling methods (
Step9: Because resampling involves filtering, there are some pitfalls to resampling
at different points in the analysis stream | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(0, 60).load_data() # use just 60 seconds of data, to save memory
Explanation: Filtering and resampling data
This tutorial covers filtering and resampling, and gives examples of how
filtering can be used for artifact repair.
:depth: 2
We begin as always by importing the necessary Python modules and loading some
example data <sample-dataset>. We'll also crop the data to 60 seconds
(to save memory on the documentation server):
End of explanation
mag_channels = mne.pick_types(raw.info, meg='mag')
raw.plot(duration=60, order=mag_channels, proj=False,
n_channels=len(mag_channels), remove_dc=False)
Explanation: Background on filtering
A filter removes or attenuates parts of a signal. Usually, filters act on
specific frequency ranges of a signal — for example, suppressing all
frequency components above or below a certain cutoff value. There are many
ways of designing digital filters; see disc-filtering for a longer
discussion of the various approaches to filtering physiological signals in
MNE-Python.
Repairing artifacts by filtering
Artifacts that are restricted to a narrow frequency range can sometimes
be repaired by filtering the data. Two examples of frequency-restricted
artifacts are slow drifts and power line noise. Here we illustrate how each
of these can be repaired by filtering.
Slow drifts
Low-frequency drifts in raw data can usually be spotted by plotting a fairly
long span of data with the :meth:~mne.io.Raw.plot method, though it is
helpful to disable channel-wise DC shift correction to make slow drifts
more readily visible. Here we plot 60 seconds, showing all the magnetometer
channels:
End of explanation
for cutoff in (0.1, 0.2):
raw_highpass = raw.copy().filter(l_freq=cutoff, h_freq=None)
fig = raw_highpass.plot(duration=60, order=mag_channels, proj=False,
n_channels=len(mag_channels), remove_dc=False)
fig.subplots_adjust(top=0.9)
fig.suptitle('High-pass filtered at {} Hz'.format(cutoff), size='xx-large',
weight='bold')
Explanation: A half-period of this slow drift appears to last around 10 seconds, so a full
period would be 20 seconds, i.e., $\frac{1}{20} \mathrm{Hz}$. To be
sure those components are excluded, we want our highpass to be higher than
that, so let's try $\frac{1}{10} \mathrm{Hz}$ and $\frac{1}{5}
\mathrm{Hz}$ filters to see which works best:
End of explanation
filter_params = mne.filter.create_filter(raw.get_data(), raw.info['sfreq'],
l_freq=0.2, h_freq=None)
Explanation: Looks like 0.1 Hz was not quite high enough to fully remove the slow drifts.
Notice that the text output summarizes the relevant characteristics of the
filter that was created. If you want to visualize the filter, you can pass
the same arguments used in the call to :meth:raw.filter()
<mne.io.Raw.filter> above to the function :func:mne.filter.create_filter
to get the filter parameters, and then pass the filter parameters to
:func:mne.viz.plot_filter. :func:~mne.filter.create_filter also requires
parameters data (a :class:NumPy array <numpy.ndarray>) and sfreq
(the sampling frequency of the data), so we'll extract those from our
:class:~mne.io.Raw object:
End of explanation
mne.viz.plot_filter(filter_params, raw.info['sfreq'], flim=(0.01, 5))
Explanation: Notice that the output is the same as when we applied this filter to the data
using :meth:raw.filter() <mne.io.Raw.filter>. You can now pass the filter
parameters (and the sampling frequency) to :func:~mne.viz.plot_filter to
plot the filter:
End of explanation
def add_arrows(axes):
# add some arrows at 60 Hz and its harmonics
for ax in axes:
freqs = ax.lines[-1].get_xdata()
psds = ax.lines[-1].get_ydata()
for freq in (60, 120, 180, 240):
idx = np.searchsorted(freqs, freq)
# get ymax of a small region around the freq. of interest
y = psds[(idx - 4):(idx + 5)].max()
ax.arrow(x=freqs[idx], y=y + 18, dx=0, dy=-12, color='red',
width=0.1, head_width=3, length_includes_head=True)
fig = raw.plot_psd(fmax=250, average=True)
add_arrows(fig.axes[:2])
Explanation: Power line noise
Power line noise is an environmental artifact that manifests as persistent
oscillations centered around the AC power line frequency_. Power line
artifacts are easiest to see on plots of the spectrum, so we'll use
:meth:~mne.io.Raw.plot_psd to illustrate. We'll also write a little
function that adds arrows to the spectrum plot to highlight the artifacts:
End of explanation
meg_picks = mne.pick_types(raw.info, meg=True)
freqs = (60, 120, 180, 240)
raw_notch = raw.copy().notch_filter(freqs=freqs, picks=meg_picks)
for title, data in zip(['Un', 'Notch '], [raw, raw_notch]):
fig = data.plot_psd(fmax=250, average=True)
fig.subplots_adjust(top=0.85)
fig.suptitle('{}filtered'.format(title), size='xx-large', weight='bold')
add_arrows(fig.axes[:2])
Explanation: It should be evident that MEG channels are more susceptible to this kind of
interference than EEG that is recorded in the magnetically shielded room.
Removing power-line noise can be done with a notch filter,
applied directly to the :class:~mne.io.Raw object, specifying an array of
frequencies to be attenuated. Since the EEG channels are relatively
unaffected by the power line noise, we'll also specify a picks argument
so that only the magnetometers and gradiometers get filtered:
End of explanation
raw_notch_fit = raw.copy().notch_filter(
freqs=freqs, picks=meg_picks, method='spectrum_fit', filter_length='10s')
for title, data in zip(['Un', 'spectrum_fit '], [raw, raw_notch_fit]):
fig = data.plot_psd(fmax=250, average=True)
fig.subplots_adjust(top=0.85)
fig.suptitle('{}filtered'.format(title), size='xx-large', weight='bold')
add_arrows(fig.axes[:2])
Explanation: :meth:~mne.io.Raw.notch_filter also has parameters to control the notch
width, transition bandwidth and other aspects of the filter. See the
docstring for details.
It's also possible to try to use a spectrum fitting routine to notch filter.
In principle it can automatically detect the frequencies to notch, but our
implementation generally does not do so reliably, so we specify the
frequencies to remove instead, and it does a good job of removing the
line noise at those frequencies:
End of explanation
raw_downsampled = raw.copy().resample(sfreq=200)
for data, title in zip([raw, raw_downsampled], ['Original', 'Downsampled']):
fig = data.plot_psd(average=True)
fig.subplots_adjust(top=0.9)
fig.suptitle(title)
plt.setp(fig.axes, xlim=(0, 300))
Explanation: Resampling
EEG and MEG recordings are notable for their high temporal precision, and are
often recorded with sampling rates around 1000 Hz or higher. This is good
when precise timing of events is important to the experimental design or
analysis plan, but also consumes more memory and computational resources when
processing the data. In cases where high-frequency components of the signal
are not of interest and precise timing is not needed (e.g., computing EOG or
ECG projectors on a long recording), downsampling the signal can be a useful
time-saver.
In MNE-Python, the resampling methods (:meth:raw.resample()
<mne.io.Raw.resample>, :meth:epochs.resample() <mne.Epochs.resample> and
:meth:evoked.resample() <mne.Evoked.resample>) apply a low-pass filter to
the signal to avoid aliasing, so you don't need to explicitly filter it
yourself first. This built-in filtering that happens when using
:meth:raw.resample() <mne.io.Raw.resample>, :meth:epochs.resample()
<mne.Epochs.resample>, or :meth:evoked.resample() <mne.Evoked.resample> is
a brick-wall filter applied in the frequency domain at the Nyquist
frequency of the desired new sampling rate. This can be clearly seen in the
PSD plot, where a dashed vertical line indicates the filter cutoff; the
original data had an existing lowpass at around 172 Hz (see
raw.info['lowpass']), and the data resampled from 600 Hz to 200 Hz gets
automatically lowpass filtered at 100 Hz (the Nyquist frequency_ for a
target rate of 200 Hz):
End of explanation
current_sfreq = raw.info['sfreq']
desired_sfreq = 90 # Hz
decim = np.round(current_sfreq / desired_sfreq).astype(int)
obtained_sfreq = current_sfreq / decim
lowpass_freq = obtained_sfreq / 3.
raw_filtered = raw.copy().filter(l_freq=None, h_freq=lowpass_freq)
events = mne.find_events(raw_filtered)
epochs = mne.Epochs(raw_filtered, events, decim=decim)
print('desired sampling frequency was {} Hz; decim factor of {} yielded an '
'actual sampling frequency of {} Hz.'
.format(desired_sfreq, decim, epochs.info['sfreq']))
Explanation: Because resampling involves filtering, there are some pitfalls to resampling
at different points in the analysis stream:
Performing resampling on :class:~mne.io.Raw data (before epoching) will
negatively affect the temporal precision of Event arrays, by causing
jitter_ in the event timing. This reduced temporal precision will
propagate to subsequent epoching operations.
Performing resampling after epoching can introduce edge artifacts on
every epoch, whereas filtering the :class:~mne.io.Raw object will only
introduce artifacts at the start and end of the recording (which is often
far enough from the first and last epochs to have no effect on the
analysis).
The following section suggests best practices to mitigate both of these
issues.
Best practices
To avoid the reduction in temporal precision of events that comes with
resampling a :class:~mne.io.Raw object, and also avoid the edge artifacts
that come with filtering an :class:~mne.Epochs or :class:~mne.Evoked
object, the best practice is to:
low-pass filter the :class:~mne.io.Raw data at or below
$\frac{1}{3}$ of the desired sample rate, then
decimate the data after epoching, by either passing the decim
parameter to the :class:~mne.Epochs constructor, or using the
:meth:~mne.Epochs.decimate method after the :class:~mne.Epochs have
been created.
<div class="alert alert-danger"><h4>Warning</h4><p>The recommendation for setting the low-pass corner frequency at
$\frac{1}{3}$ of the desired sample rate is a fairly safe rule of
thumb based on the default settings in :meth:`raw.filter()
<mne.io.Raw.filter>` (which are different from the filter settings used
inside the :meth:`raw.resample() <mne.io.Raw.resample>` method). If you
use a customized lowpass filter (specifically, if your transition
bandwidth is wider than 0.5× the lowpass cutoff), downsampling to 3× the
lowpass cutoff may still not be enough to avoid `aliasing`_, and
MNE-Python will not warn you about it (because the :class:`raw.info
<mne.Info>` object only keeps track of the lowpass cutoff, not the
transition bandwidth). Conversely, if you use a steeper filter, the
warning may be too sensitive. If you are unsure, plot the PSD of your
filtered data *before decimating* and ensure that there is no content in
the frequencies above the `Nyquist frequency`_ of the sample rate you'll
end up with *after* decimation.</p></div>
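One way to run that check with the objects used in the decimation example below (a sketch, reusing raw_filtered and obtained_sfreq; any substantial power above the post-decimation Nyquist frequency would indicate trouble):
new_nyquist = obtained_sfreq / 2.
raw_filtered.plot_psd(fmax=2 * new_nyquist, average=True)  # inspect for content above new_nyquist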
Note that this method of manually filtering and decimating is exact only when
the original sampling frequency is an integer multiple of the desired new
sampling frequency. Since the sampling frequency of our example data is
600.614990234375 Hz, ending up with a specific sampling frequency like (say)
90 Hz will not be possible:
End of explanation |
14,994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Alignment report
Step1: Read distribution by MQ
Step2: Read distribution by alignment fate
Step3: Mapped rate and Alignment accuracy parametrized by MQ | Python Code:
# From SO: https://stackoverflow.com/a/28073228/2512851
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to show/hide raw code."></form>''')
import json
import matplotlib.pyplot as plt
import bokeh
import pandas as pd
import numpy as np
from bokeh.plotting import figure, show
from bokeh.io import output_notebook, gridplot
from bokeh.models import ColumnDataSource, HoverTool
output_notebook()
%matplotlib inline
d = json.load(open('inputs.json'))
fname = d['csvA']
df = pd.DataFrame.from_csv(fname, index_col=None, sep=',')
Explanation: Alignment report
End of explanation
def aa_mq(df):
correct_0 = df[df['derr']=='d = 0'][['MQ', 'count']].groupby('MQ').sum()
correct_0.columns = ['correct_0']
correct_50 = df[(df['derr']=='0 < d <= 50') | (df['derr']=='d = 0')][['MQ', 'count']].groupby('MQ').sum()
correct_50.columns = ['correct_50']
total = df[['MQ', 'count']].groupby('MQ').sum()
total.columns = ['total']
data = pd.concat((correct_0, correct_50, total), axis=1)
data['perr_0'] = 1 - data['correct_0'] / data['total']
data['perr_50'] = 1 - data['correct_50'] / data['total']
data['perr_ideal'] = 10 ** (-data.index / 10)
return data
def plot_aa_mq(data):
max_y = 10 ** np.ceil(np.log10(data['perr_0'].max()))
min_y = 10 ** np.floor(np.log10(data['perr_50'].min()))
source = ColumnDataSource(data)
hover = HoverTool(tooltips=[
('perr 0', '@perr_0'),
('perr 50', '@perr_50'),
('perr ideal', '@perr_ideal'),
('Reads', '@total')
])
p = figure(plot_height=200, plot_width=500,
x_axis_label='MQ',
y_axis_label='p_err',
tools=[hover, 'reset'],
y_axis_type="log", y_range=[min_y, max_y])
h1 = p.circle(x='MQ', y='perr_0', size=2, source=source)
h1 = p.circle(x='MQ', y='perr_0', size=10, alpha=0.5, color='yellow', source=source)
h3 = p.line(x='MQ', y='perr_ideal', source=source)
return p
def plot_read_hist(data):
max_y = 10 ** np.ceil(np.log10(data['total'].max()))
min_y = 10 ** np.floor(np.log10(data['total'].min()))
source = ColumnDataSource(data)
hover = HoverTool(tooltips=[
('Reads', '@total')
])
p = figure(plot_height=200, plot_width=500,
x_axis_label='MQ',
y_axis_label='Reads',
tools=[hover, 'reset'],
y_axis_type="log", y_range=[min_y, max_y])
h1 = p.vbar(x='MQ', bottom=min_y, top='total', width=0.7, source=source)
return p
data = aa_mq(df)
s = [
[plot_aa_mq(data)],
[plot_read_hist(data)]
]
p = gridplot(s)
show(p)
# read_count_mq = df.groupby('MQ').sum()
# max_y = 10 ** np.ceil(np.log10(read_count_mq['count'].max()))
# min_y = 10 ** np.floor(np.log10(read_count_mq['count'].min()))
# source = ColumnDataSource(read_count_mq)
# tools = ["reset"]
# p = figure(plot_height=200, plot_width=500,
# x_axis_label='MQ',
# y_axis_label='Reads',
# tools=tools,
# y_axis_type="log", y_range=[min_y, max_y])
# h1 = p.vbar(x='MQ', bottom=min_y, top='count', width=0.7, source=source)
# p.add_tools(HoverTool(renderers=[h1], tooltips=[("reads", "@count")]))
# show(p)
Explanation: Read distribution by MQ
End of explanation
read_count_by_fate = df.groupby('derr').sum()
read_count_by_fate['y'] = read_count_by_fate.index
max_x = 10 ** np.ceil(np.log10(read_count_by_fate['count'].max()))
min_x = 10 ** np.floor(np.log10(read_count_by_fate['count'].min()))
source = ColumnDataSource(read_count_by_fate)
tools = ["reset"]
p = figure(plot_height=200, plot_width=500,
x_axis_label='Reads',
y_axis_label='Read fate',
tools=tools,
y_range=read_count_by_fate.index.tolist(),
x_axis_type="log",
x_range=[min_x, max_x])
h1 = p.hbar(y='y', left=min_x, right='count', height=0.7, source=source)
p.add_tools(HoverTool(renderers=[h1], tooltips=[("reads", "@count")]))
show(p)
Explanation: Read distribution by alignment fate
End of explanation
# Matplotlib version of the plots
def heng_li_plot(df, category, ax):
sub_df = df[df['category']==category]
#correct = sub_df[(sub_df['derr']=='0 < d <= 50') | (sub_df['derr']=='d = 0')][['MQ', 'count']].groupby('MQ').sum()
correct = sub_df[sub_df['derr']=='d = 0'][['MQ', 'count']].groupby('MQ').sum()
correct.columns = ['correct']
mapped = sub_df[sub_df['derr']!='unmapped'][['MQ', 'count']].groupby('MQ').sum()
mapped.columns = ['mapped']
total = sub_df[['MQ', 'count']].groupby('MQ').sum()
total.columns = ['total']
data = pd.concat((correct, mapped, total), axis=1)
x = np.zeros(61, dtype=float)
y = np.zeros(61, dtype=float)
for mq in range(61):
data_sub = data.iloc[mq:70]
x[mq] = 100 * data_sub['mapped'].sum() / total.sum()
y[mq] = 100 * data_sub['correct'].sum() / data_sub['mapped'].sum()
ax.plot(x, y)
ax.plot(x[0], y[0], 'ko')
plt.setp(ax,
xlim=(95, 101), xticks=range(96,101),
ylim=(79, 101), yticks=range(80,101, 5),
title=category)
def plot_heng_li_panels(df):
fig = plt.figure(figsize=(10, 20))
for n, cat in enumerate(['Ref', 'SNP', 'Multi',
'INS <= 10', 'INS 11-50', 'INS > 50',
'DEL <= 10', 'DEL 11-50', 'DEL > 50']):
ax = plt.subplot(4, 3, n + 1)
heng_li_plot(df, cat, ax)
#plt.setp(ax, yscale='log')
if n != 6:
plt.setp(ax, xticklabels=[], yticklabels=[])
else:
plt.setp(ax, xlabel='% Mapped', ylabel='% Correct')
#plot_heng_li_panels(df)
def heng_li_plot(df, category):
sub_df = df[df['category']==category]
#correct = sub_df[(sub_df['derr']=='0 < d <= 50') | (sub_df['derr']=='d = 0')][['MQ', 'count']].groupby('MQ').sum()
correct = sub_df[sub_df['derr']=='d = 0'][['MQ', 'count']].groupby('MQ').sum()
correct.columns = ['correct']
mapped = sub_df[sub_df['derr']!='unmapped'][['MQ', 'count']].groupby('MQ').sum()
mapped.columns = ['mapped']
total = sub_df[['MQ', 'count']].groupby('MQ').sum()
total.columns = ['total']
data = pd.concat((correct, mapped, total), axis=1)
x = np.zeros(61, dtype=float)
y = np.zeros(61, dtype=float)
for mq in range(61):
data_sub = data.iloc[mq:70]
x[mq] = 100 * data_sub['mapped'].sum() / total.sum()
y[mq] = 100 * data_sub['correct'].sum() / data_sub['mapped'].sum()
source = ColumnDataSource(data=dict(
mapped=x,
correct=y,
mq=range(61)
))
hover = HoverTool(tooltips=[
('MQ', '≥ @mq'),
('Map', '@mapped %'),
('Correct', '@correct %')
])
s1 = figure(width=250, plot_height=250,
tools=[hover, 'pan', 'reset'],
title=category)
s1.circle(x[0], y[0], size=10)
s1.line('mapped', 'correct', source=source)
return s1
def plot_heng_li_panels(df):
s = []
row_cnt = 3
row_s = []
for n, cat in enumerate(['Ref', 'SNP', 'Multi',
'INS <= 10', 'INS 11-50', 'INS > 50',
'DEL <= 10', 'DEL 11-50', 'DEL > 50']):
if n % row_cnt == 0:
if len(row_s):
s.append(row_s)
row_s = []
row_s.append(heng_li_plot(df, cat))
if len(row_s):
s.append(row_s)
p = gridplot(s)
show(p)
plot_heng_li_panels(df)
Explanation: Mapped rate and Alignment accuracy parametrized by MQ
End of explanation |
14,995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ciência dos Dados - PROJETO 1
Gabriel Heusi Pereira Bueno de Camargo
Título
O comportamento da segurança alimentar no território brasileiro.
Introdução
A diversidade do território brasileiro se apresenta em diversos fatores, entre eles há um negativo que deve ser destacado
Step1: MUDANÇA DA VARIÁVEL INICIAL QUE MOSTRA O ANO DE PESQUISA.
Step2: DEFINIÇÃO DAS REGIÕES E TRANSFORMAÇÃO EM UMA CATEGORIA;
Step3: DIVISÃO EM ZONA RURAL E URBANA, A SEGUNDA VARIÁVEL DE ANÁLISE
Step4: CRIACÃO DA VARIÁVEL INSEGURANÇA ALIMENTAR
Step5: CRIAÇÃO DO "PROBLEMA ALIMENTAR"
Step6: FILTRAGEM INICIAL
Step7: TABELA 1 - 2013
Step8: TABELA 1 - 2009
Step9: PRIMEIRA OBSERVAÇÃO
Step10: APROFUNDAMENTO NAS REGIÕES
Step11: Nesse caso pode-se observar uma clara coerência entre os dados percentuais e absolutos, isso porque as regiões Norte e Nordeste mostram a maior frequência e número de pessoas que já passaram por situação de insegurança alimentar.
Step12: OBSERVAÇÃO DA SITUAÇÃO NA ZONA URBANA E RURAL
Step13: CRUZAMENTO DE DADOS
Step14: SEQUÊNCIA DE ANÁLISE PARA CADA ANO
Step15: ANÁLISE INICIAL E NOVA FILTRAGEM
Step16: TABELA 2 - 2013
Step17: TABELA 2 - 2009
Step18: Caracterização dos problemas alimentares | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import os
from numpy import zeros_like
print('Esperamos trabalhar no diretório')
print(os.getcwd())
base = pd.read_csv('DOM2013.csv',sep=',')
base9 = pd.read_csv('DOM2009.csv',sep=',')
Explanation: Ciência dos Dados - PROJETO 1
Gabriel Heusi Pereira Bueno de Camargo
Title
The behavior of food security across the Brazilian territory.
Introduction
The diversity of the Brazilian territory shows up in many ways, and among them one negative aspect deserves attention: food insecurity, or rather, food security restricted to only part of the population. From this starting point we analyze how it behaves, looking at the differences between regions, between urban and rural areas, and also highlighting how the problem is distributed according to household income. With the help of an IBGE survey, the PNAD, the data will be analyzed and explained throughout the study, leading to a conclusion at the end. Note that we work with the 2009 and 2013 databases, the latter being the most recent release that covers food-related questions. The project as a whole is oriented toward answering the following question about food security: comparing the 2009 and 2013 surveys, in which household income bracket is the largest number of people who have experienced food insecurity concentrated? The comparison will be made only for the 2 regions that show the greatest rural/urban disparity in this problem.
End of explanation
base.V0101=base.V0101.astype("int")
base9.V0101=base9.V0101.astype("int")
Explanation: CONVERTING THE INITIAL VARIABLE THAT STORES THE SURVEY YEAR.
End of explanation
base.loc[(base.UF<18),"REGIAO"]="NORTE"
base.loc[(base.UF>20)&(base.UF<30),"REGIAO"]="NORDESTE"
base.loc[(base.UF>30)&(base.UF<36),"REGIAO"]="SUDESTE"
base.loc[(base.UF>35)&(base.UF<44),"REGIAO"]="SUL"
base.loc[(base.UF>43)&(base.UF<54),"REGIAO"]="CENTRO-OESTE"
base.REGIAO=base.REGIAO.astype("category")
base9.loc[(base9.UF<18),"REGIAO"]="NORTE"
base9.loc[(base9.UF>20)&(base9.UF<30),"REGIAO"]="NORDESTE"
base9.loc[(base9.UF>30)&(base9.UF<36),"REGIAO"]="SUDESTE"
base9.loc[(base9.UF>35)&(base9.UF<44),"REGIAO"]="SUL"
base9.loc[(base9.UF>43)&(base9.UF<54),"REGIAO"]="CENTRO-OESTE"
base9.REGIAO=base9.REGIAO.astype("category")
Explanation: DEFINING THE REGIONS AND CONVERTING THEM INTO A CATEGORY.
End of explanation
base.loc[(base.V4105<4),"ZONA"]="Urbana"
base.loc[(base.V4105>3),"ZONA"]="Rural"
base.ZONA=base.ZONA.astype("category")
base9.loc[(base9.V4105<4),"ZONA"]="Urbana"
base9.loc[(base9.V4105>3),"ZONA"]="Rural"
base9.ZONA=base9.ZONA.astype("category")
Explanation: SPLITTING INTO RURAL AND URBAN ZONES, THE SECOND VARIABLE OF THE ANALYSIS
End of explanation
base.loc[(base.V2103==1) | (base.V2105==1) | (base.V2107==1) | (base.V2109==1),'Insegurança_Alimentar'] = 'Sim'
base.loc[(base.V2103==3) & (base.V2105==3) & (base.V2107==3) & (base.V2109==3),'Insegurança_Alimentar'] = 'Não'
base.V2103=base.V2103.astype("category")
base.V2105=base.V2105.astype("category")
base.V2107=base.V2107.astype("category")
base.V2109=base.V2109.astype("category")
base9.loc[(base9.V2103==1) | (base9.V2105==1) | (base9.V2107==1) | (base9.V2109==1),'Insegurança_Alimentar'] = 'Sim'
base9.loc[(base9.V2103==3) & (base9.V2105==3) & (base9.V2107==3) & (base9.V2109==3),'Insegurança_Alimentar'] = 'Não'
base9.V2103=base9.V2103.astype("category")
base9.V2105=base9.V2105.astype("category")
base9.V2107=base9.V2107.astype("category")
base9.V2109=base9.V2109.astype("category")
Explanation: CREATING THE FOOD INSECURITY VARIABLE (Insegurança_Alimentar):
BELOW, THE VARIABLES (THE QUESTIONS ABOUT FOOD INSECURITY) ARE COMBINED INTO A SINGLE ONE CALLED "INSEGURANÇA ALIMENTAR". THE REASON IS THAT THE 4 QUESTIONS DESCRIBE SITUATIONS OF DIFFICULTY IN GETTING FOOD, SO A PERSON IS CONSIDERED TO HAVE EXPERIENCED FOOD INSECURITY IF AT LEAST ONE QUESTION WAS ANSWERED "YES". THE 4 QUESTIONS ARE ALSO CONVERTED TO THE CATEGORY TYPE.
End of explanation
base.loc[(base.V2113==1) | (base.V2115==1) | (base.V2117==1) | (base.V2121==1),'Problema_Alimentar'] = 'Sim'
base.loc[(base.V2113==3) & (base.V2115==3) & (base.V2117==3) & (base.V2121==3),'Problema_Alimentar'] = 'Não'
base.V2113=base.V2113.astype("category")
base.V2115=base.V2115.astype("category")
base.V2117=base.V2117.astype("category")
base.V2121=base.V2121.astype("category")
base9.loc[(base9.V2111==1) | (base9.V2113==1) | (base9.V2115==1) | (base9.V2117==1) | (base9.V2119==1) | (base9.V2120==1) | (base9.V2121==1),'Problema_Alimentar'] = 'Sim'
base9.loc[(base9.V2111==3) & (base9.V2113==3) & (base9.V2115==3) & (base9.V2117==3) & (base9.V2119==3) & (base9.V2120==3) & (base9.V2121==3),'Problema_Alimentar'] = 'Não'
base9.V2113=base9.V2113.astype("category")
base9.V2115=base9.V2115.astype("category")
base9.V2117=base9.V2117.astype("category")
base9.V2117=base9.V2119.astype("category")
base9.V2121=base9.V2120.astype("category")
base9.V2121=base9.V2121.astype("category")
Explanation: CREATING THE "FOOD PROBLEM" VARIABLE (Problema_Alimentar):
NEXT THERE ARE MORE QUESTIONS ASKED ONLY OF THOSE WHO REPORTED FOOD INSECURITY, SO THE SAME
PROCESS AS IN THE CELL ABOVE WAS USED. THESE QUESTIONS REFLECT PROBLEMS PEOPLE MAY HAVE FACED, AND THEY WERE ASKED ONLY IF AT LEAST ONE "YES" WAS GIVEN TO THE 4 INITIAL QUESTIONS.
End of explanation
base=base.loc[:,["V0101","REGIAO","ZONA","V4614",'Insegurança_Alimentar',"Problema_Alimentar"]]
base.columns=["ANO","REGIAO","ZONA","RENDA",'Insegurança_Alimentar',"Problema_Alimentar"]
base=base.dropna(subset=["RENDA","Insegurança_Alimentar"])
base
Explanation: INITIAL FILTERING:
THE VARIABLE CODES ARE RENAMED TO THE VARIABLES OF INTEREST, FOLLOWED BY A FILTER TO DROP PEOPLE WHO DID NOT ANSWER (NaN)
THE 4 INITIAL QUESTIONS OR INCOME. NOTE THAT THIS WAS NOT APPLIED TO THE "Problema_Alimentar" VARIABLE, BECAUSE THOSE WHO DID NOT
EXPERIENCE FOOD INSECURITY WERE NEVER ASKED THOSE QUESTIONS, SO DATA WOULD BE LOST.
End of explanation
writer = pd.ExcelWriter('Tabela1-2013.xlsx',engine='xlsxwriter')
base.to_excel(writer,sheet_name="Projeto_1")
writer.save()
base9=base9.loc[:,["V0101","REGIAO","ZONA","V4614",'Insegurança_Alimentar',"Problema_Alimentar"]]
base9.columns=["ANO","REGIAO","ZONA","RENDA",'Insegurança_Alimentar',"Problema_Alimentar"]
base9=base9.dropna(subset=["RENDA","Insegurança_Alimentar"])
base9
Explanation: TABLE 1 - 2013
End of explanation
writer = pd.ExcelWriter('Tabela1-2009.xlsx',engine='xlsxwriter')
base9.to_excel(writer,sheet_name="Projeto_1")
writer.save()
Explanation: TABLE 1 - 2009
End of explanation
g1 = (base.Insegurança_Alimentar.value_counts(sort=False, normalize=True)*100).round(decimals=1)
plot = g1.plot(kind='bar',title='DIFICULDADE ALIMENTAR 2013 (G1)',figsize=(5, 5),color=('b','g'))
print(g1,"\n")
g2 = (base9.Insegurança_Alimentar.value_counts(sort=False, normalize=True)*100).round(decimals=1)
plot = g2.plot(kind='bar',title='DIFICULDADE ALIMENTAR 2009 (G2)',figsize=(5, 5),color=('b','g'))
print(g2,"\n")
Explanation: FIRST OBSERVATION:
OCCURRENCE OF PEOPLE WHO HAVE ALREADY EXPERIENCED FOOD INSECURITY ("Sim"), BEFORE GOING ON TO ANALYZE THE DIFFERENCES BETWEEN REGIONS AND ZONES.
End of explanation
tb1= (pd.crosstab(base.REGIAO,base.Insegurança_Alimentar,margins=True,rownames=["REGIÃO"],colnames=["Insegurança Alimentar"],normalize='index')*100).round(decimals=1)
plot = tb1.plot(kind="bar",title="Distribuição Regional de Insegurança Alimentar 2013 (G3)")
abs1=pd.crosstab(base.REGIAO,base.Insegurança_Alimentar, margins=True, rownames=['REGIÃO'], colnames=['INSEGURANÇA ALIMENTAR'])
abs1=abs1.loc[['NORTE','NORDESTE','SUDESTE','SUL','CENTRO-OESTE']]
abs1
Explanation: A CLOSER LOOK AT THE REGIONS:
A FREQUENCY CHART FOLLOWED BY A TABLE THAT STRENGTHENS THE ANALYSIS, SINCE IT SHOWS THE ABSOLUTE VALUES AND HELPS CHECK THE CONSISTENCY AND COHERENCE OF THE NUMBERS.
End of explanation
tb19= (pd.crosstab(base9.REGIAO,base9.Insegurança_Alimentar,margins=True,rownames=["REGIÃO"],colnames=["Insegurança Alimentar"],normalize='index')*100).round(decimals=1)
plot = tb19.plot(kind="bar",title="Distribuição Regional de Insegurança Alimentar 2009 (G4)")
abs19=pd.crosstab(base9.REGIAO,base9.Insegurança_Alimentar, margins=True, rownames=['REGIÃO'], colnames=['INSEGURANÇA ALIMENTAR'])
abs19=abs19.loc[['NORTE','NORDESTE','SUDESTE','SUL','CENTRO-OESTE']]
abs19
Explanation: Here we can see a clear consistency between the percentage and the absolute figures, since the North and Northeast regions show both the highest frequency and the largest number of people who have experienced food insecurity.
End of explanation
tb2 = (pd.crosstab(base.ZONA,base.Insegurança_Alimentar,margins=True,rownames=["ZONA"],colnames=["Insegurança Alimentar"],normalize='index')*100).round(decimals=1)
plot = tb2.plot(kind="bar",title="Distribuição em Zonas de Insegurança Alimentar 2013 (G5)")
abs2=pd.crosstab(base.ZONA,base.Insegurança_Alimentar, margins=True, rownames=['ZONA'], colnames=['INSEGURANÇA ALIMENTAR'])
abs2=abs2.loc[['Rural','Urbana']]
abs2
tb29 = (pd.crosstab(base9.ZONA,base9.Insegurança_Alimentar,margins=True,rownames=["ZONA"],colnames=["Insegurança Alimentar"],normalize='index')*100).round(decimals=1)
plot = tb29.plot(kind="bar",title="Distribuição em Zonas de Insegurança Alimentar 2009 (G6)")
abs29=pd.crosstab(base9.ZONA,base9.Insegurança_Alimentar, margins=True, rownames=['ZONA'], colnames=['INSEGURANÇA ALIMENTAR'])
abs29=abs29.loc[['Rural','Urbana']]
abs29
Explanation: LOOKING AT THE SITUATION IN THE URBAN AND RURAL ZONES:
AS IN THE CELL ABOVE, AN INITIAL PERCENTAGE CHART FOLLOWED BY A TABLE WITH ABSOLUTE VALUES THAT MAKES IT POSSIBLE TO SEE THE DIFFERENCE BETWEEN THE TWO ZONES
End of explanation
ct1=(pd.crosstab([base.REGIAO, base.ZONA],base.Insegurança_Alimentar, normalize='index')*100).round(decimals=1)
ct1
print(ct1,'\n')
plot = ct1.plot(kind='bar',title="Análise de Insegurança Alimentar 2013 (G7)")
ax = plt.subplot(111)
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('Freq.Relativa (em %)')
plt.show()
ct2=(pd.crosstab([base9.REGIAO, base9.ZONA],base9.Insegurança_Alimentar, normalize='index')*100).round(decimals=1)
ct2
print(ct2,'\n')
plot = ct2.plot(kind='bar',title="Análise de Insegurança Alimentar 2009 (G8)")
ax = plt.subplot(111)
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('Freq.Relativa (em %)')
plt.show()
Explanation: CROSS-TABULATING THE DATA:
A MORE DETAILED BREAKDOWN, EACH ZONE SPLIT BY REGION WITH ITS FREQUENCY; THE GOAL OF THIS CHART IS TO SHOW IN A SINGLE IMAGE THE NOTABLE DIFFERENCES BETWEEN THE TERRITORIAL FACTORS ANALYZED AND THUS FOCUS DIRECTLY ON THE REGIONS NEEDED TO ANSWER THE GUIDING QUESTION
End of explanation
faixa = np.arange(0,7350,350)
frenda = pd.cut(base.RENDA[(base.Insegurança_Alimentar=='Sim')&(base.REGIAO=="NORTE")], bins=faixa, right=False)
t1 = (frenda.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t1,"\n")
plot = base.RENDA[(base.Insegurança_Alimentar=='Sim')&(base.REGIAO=="NORTE")].plot.hist(bins=faixa,title="Histograma - Insegurança Alimentar - NORTE - 2013 (H1)", weights=zeros_like(base.RENDA[(base.Insegurança_Alimentar=='Sim')&(base.REGIAO=="NORTE")])+1./base.RENDA[(base.Insegurança_Alimentar=='Sim')&(base.REGIAO=="NORTE")].size*100, figsize=(6, 6), alpha=0.5)
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()
faixa = np.arange(0,7350,350)
frenda2 = pd.cut(base.RENDA[(base.Insegurança_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")], bins=faixa, right=False)
t2 = (frenda2.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t2,"\n")
plot = base.RENDA[(base.Insegurança_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")].plot.hist(bins=faixa,title="Histograma - Insegurança Alimentar - NORDESTE - 2013(H2)", weights=zeros_like(base.RENDA[(base.Insegurança_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")])+1./base.RENDA[(base.Insegurança_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")].size*100, figsize=(6, 6), alpha=0.5,color="red")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()
frenda9 = pd.cut(base9.RENDA[(base9.Insegurança_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")], bins=faixa, right=False)
t19 = (frenda9.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t19,"\n")
plot = base9.RENDA[(base9.Insegurança_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")].plot.hist(bins=faixa,title="Histograma - Insegurança Alimentar - CENTRO-OESTE - 2009(H3)", weights=zeros_like(base9.RENDA[(base9.Insegurança_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")])+1./base9.RENDA[(base9.Insegurança_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")].size*100, figsize=(6, 6), alpha=0.5,color="chocolate")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()
frenda29 = pd.cut(base9.RENDA[(base9.Insegurança_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")], bins=faixa, right=False)
t29 = (frenda29.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t29,"\n")
plot = base9.RENDA[(base9.Insegurança_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")].plot.hist(bins=faixa,title="Histograma - Insegurança Alimentar - NORDESTE - 2009(H4)", weights=zeros_like(base9.RENDA[(base9.Insegurança_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")])+1./base9.RENDA[(base9.Insegurança_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")].size*100, figsize=(6, 6), alpha=0.5,color="darkslategray")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()
Explanation: ANALYSIS SEQUENCE FOR EACH YEAR:
Looking at the last two charts, we can identify precisely the two regions with the largest disparity between urban and rural zones. For 2013 (first chart), the North and the Northeast are the two regions that will be analysed to answer the project's guiding question; for 2009, they are the Center-West and the Northeast.
QUANTITATIVE ANALYSIS:
Observe how food insecurity behaves as a function of household income.
The first histogram shows the frequency among those who answered at least one "Sim" (yes) to the 4 initial questions and who are therefore considered to be in food insecurity.
End of explanation
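The same relative-frequency histogram recipe is repeated many times below, changing only the DataFrame, the region, the title and the color. A minimal helper sketch (assuming the base/base9 DataFrames and the faixa bins defined above; the name hist_renda is just an illustrative choice) could wrap it:
def hist_renda(dados, regiao, titulo, cor):
    # income of respondents in food insecurity for one region
    renda = dados.RENDA[(dados.Insegurança_Alimentar=='Sim')&(dados.REGIAO==regiao)]
    # weights that turn counts into relative frequency (in %)
    pesos = np.ones_like(renda)/renda.size*100
    renda.plot.hist(bins=faixa, weights=pesos, figsize=(6, 6), alpha=0.5, color=cor, title=titulo)
    plt.ylabel('Frequência relativa (em %)')
    plt.xlabel('Renda (em reais)')
    plt.show()
# example: hist_renda(base, "NORTE", "Histograma - Insegurança Alimentar - NORTE - 2013 (H1)", "blue")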
base=base[(base.Insegurança_Alimentar=="Sim")]
base
Explanation: INITIAL ANALYSIS AND NEW FILTERING:
With the precise values shown above, we can see where the concentration is highest in each region of interest, following the disparity observed earlier in the charts. From this point on, the analysis focuses only on those who experienced food insecurity, opening up a new variable, called Problema_Alimentar (food problem), based on questions that indicate a lack of food or a restricted diet due to lack of money.
End of explanation
writer = pd.ExcelWriter('Tabela2-2013.xlsx',engine='xlsxwriter')
base.to_excel(writer,sheet_name="Projeto_1")
writer.save()
base9=base9[(base9.Insegurança_Alimentar=="Sim")]
base9
Explanation: TABLE 2 - 2013
End of explanation
writer = pd.ExcelWriter('Tabela2-2009.xlsx',engine='xlsxwriter')
base9.to_excel(writer,sheet_name="Projeto_1")
writer.save()
Explanation: TABLE 2 - 2009
End of explanation
frenda3 = pd.cut(base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORTE")], bins=faixa, right=False)
t3 = (frenda3.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t3,"\n")
plot = base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORTE")].plot.hist(bins=faixa,title="Problema Alimentar - NORTE - 2013 (H5)", weights=zeros_like(base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORTE")])+1./base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORTE")].size*100, figsize=(6, 6), alpha=0.5,color="purple")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()
frenda4 = pd.cut(base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")], bins=faixa, right=False)
t4 = (frenda4.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t4,"\n")
plot = base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")].plot.hist(bins=faixa,title="Problema Alimentar - NORDESTE - 2013(H6)", weights=zeros_like(base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")])+1./base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")].size*100, figsize=(6, 6), alpha=0.5,color="darkgreen")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()
frenda39 = pd.cut(base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")], bins=faixa, right=False)
t39 = (frenda39.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t39,"\n")
plot = base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")].plot.hist(bins=faixa,title="Problema Alimentar - CENTRO-OESTE - 2009(H7)", weights=zeros_like(base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")])+1./base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")].size*100, figsize=(6, 6), alpha=0.5,color="black")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()
frenda49 = pd.cut(base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")], bins=faixa, right=False)
t49 = (frenda49.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t49,"\n")
plot = base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")].plot.hist(bins=faixa,title="Problema Alimentar - NORDESTE - 2009(H8) ", weights=zeros_like(base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")])+1./base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")].size*100, figsize=(6, 6), alpha=0.5,color="orange")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()
Explanation: Characterizing the food problems:
The next charts aim to evaluate the behaviour of the "food problem" (Problema_Alimentar) variable as a function of monthly household income, and to compare it with the distribution of "food insecurity" (Insegurança_Alimentar), i.e. to check whether the distribution analysed earlier is broadly preserved in this variable, which is by construction dependent on the initial one.
End of explanation |
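As a small optional sketch of the comparison described above (reusing the filters from the histograms already plotted), the two income distributions for one region/year can also be overlaid in a single figure:
renda_inseg = base.RENDA[(base.Insegurança_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")]
renda_prob = base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")]
plt.hist(renda_inseg, bins=faixa, weights=np.ones_like(renda_inseg)/renda_inseg.size*100, alpha=0.5, label='Insegurança Alimentar')
plt.hist(renda_prob, bins=faixa, weights=np.ones_like(renda_prob)/renda_prob.size*100, alpha=0.5, label='Problema Alimentar')
plt.title('NORDESTE - 2013: Insegurança vs. Problema Alimentar')
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.legend()
plt.show()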
14,996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CrowdTruth for Sparse Multiple Choice Tasks
Step1: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class
Step2: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Event Extraction task
Step3: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data
Step4: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics
Step5: results is a dict object that contains the quality metrics for sentences, events and crowd workers.
The sentence metrics are stored in results["units"]
Step6: The uqs column in results["units"] contains the sentence quality scores, capturing the overall worker agreement over each sentence. Here we plot its histogram
Step7: The unit_annotation_score column in results["units"] contains the sentence-event scores, capturing the likelihood that an event is expressed in a sentence. For each sentence, we store a dictionary mapping each candidate event to its sentence-event score.
Step8: The worker metrics are stored in results["workers"]
Step9: The wqs column in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
Step10: Open to Closed Task Transformation
The goal of this crowdsourcing task is to understand how clearly a word or a word phrase is expressing an event or an action across all the sentences in the dataset and not at the level of a single sentence as previously. Therefore, in the remainder of this tutorial we show how to translate an open task to a closed task by processing both the input units and the annotations of a crowdsourcing task.
The answers from the crowd are stored in the selected_events column.
Step11: As you already know, each word can be expressed in a canonical form, i.e., as a lemma. For example, the words
Step12: The following functions create the values of the annotation vector and extract the lemma of the events selected by each worker.
Step13: Effect on CrowdTruth metrics
Finally, we can compare the effect of the transformation from an open task to a closed task on the CrowdTruth sentence quality score. | Python Code:
import pandas as pd
test_data = pd.read_csv("../data/event-text-sparse-multiple-choice.csv")
test_data.head()
Explanation: CrowdTruth for Sparse Multiple Choice Tasks: Event Extraction
In this tutorial, we will apply CrowdTruth metrics to a sparse multiple choice crowdsourcing task for Event Extraction from sentences. The workers were asked to read a sentence and then pick from a multiple choice list the words or word phrases in the sentence that are events or actions. The options available in the multiple choice list change with the input sentence. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.
In this tutorial, we will also show how to translate an open task to a closed task by processing both the input units and the annotations of a crowdsourcing task, and how this impacts the results of the CrowdTruth quality metrics. We start with an open-ended extraction task, where the crowd was asked to read a sentence and then pick from a multiple choice list the words or word phrases in the sentence that are events or actions.
To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript.
This is a screenshot of the task as it appeared to workers:
A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. Now you can check your data:
End of explanation
import crowdtruth
from crowdtruth.configuration import DefaultConfig
Explanation: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:
End of explanation
class TestConfig(DefaultConfig):
inputColumns = ["doc_id", "events", "events_count", "original_sentence", "processed_sentence", "sentence_id", "tokens"]
outputColumns = ["selected_events"]
annotation_separator = ","
    # this first configuration treats the task as open-ended
open_ended_task = True
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
# remove square brackets from annotations
judgments[col] = judgments[col].apply(lambda x: str(x).replace('[',''))
judgments[col] = judgments[col].apply(lambda x: str(x).replace(']',''))
# remove the quotes around the annotations
judgments[col] = judgments[col].apply(lambda x: str(x).replace('"',''))
return judgments
Explanation: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Event Extraction task:
inputColumns: list of input columns from the .csv file with the input data
outputColumns: list of output columns from the .csv file with the answers from the workers
annotation_separator: string that separates the crowd annotations in outputColumns
open_ended_task: boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); in this first part of the tutorial we treat the task as open-ended, so the variable is set to True; later, when the task is transformed into a closed one, it will be set to False
annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; it is therefore omitted here and only appears in the closed-task configuration further below
processJudgments: method that defines processing of the raw crowd data; here we lowercase the answers and strip the square brackets and quotes around them
The complete configuration class is declared below:
End of explanation
data_open, config = crowdtruth.load(
file = "../data/event-text-sparse-multiple-choice.csv",
config = TestConfig()
)
data_open['judgments'].head()
Explanation: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data:
End of explanation
results_open = crowdtruth.run(data_open, config)
Explanation: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics:
End of explanation
results_open["units"].head()
Explanation: results_open is a dict object that contains the quality metrics for sentences, events and crowd workers.
The sentence metrics are stored in results_open["units"]:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(results_open["units"]["uqs"])
plt.xlabel("Sentence Quality Score")
plt.ylabel("Sentences")
Explanation: The uqs column in results_open["units"] contains the sentence quality scores, capturing the overall worker agreement over each sentence. Here we plot its histogram:
End of explanation
results_open["units"]["unit_annotation_score"].head(10)
Explanation: The unit_annotation_score column in results_open["units"] contains the sentence-event scores, capturing the likelihood that an event is expressed in a sentence. For each sentence, we store a dictionary mapping each candidate event to its sentence-event score.
End of explanation
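A quick optional sketch (assuming the dict-valued column shown above): the per-unit annotation scores can be expanded into a regular DataFrame, one column per candidate answer, which makes it easier to sort or filter sentences by the score of a specific event.
annotation_scores = pd.DataFrame(
    list(results_open["units"]["unit_annotation_score"]),
    index=results_open["units"].index).fillna(0)
annotation_scores.head()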
results_open["workers"].head()
Explanation: The worker metrics are stored in results_open["workers"]:
End of explanation
plt.hist(results_open["workers"]["wqs"])
plt.xlabel("Worker Quality Score")
plt.ylabel("Workers")
Explanation: The wqs column in results_open["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
End of explanation
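A brief follow-up that is often useful in practice: list the workers with the lowest quality scores, whose judgments CrowdTruth down-weights and which may be worth inspecting manually.
results_open["workers"].sort_values(by="wqs").head()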
test_data["selected_events"][0:30]
Explanation: Open to Closed Task Transformation
The goal of this crowdsourcing task is to understand how clearly a word or a word phrase is expressing an event or an action across all the sentences in the dataset and not at the level of a single sentence as previously. Therefore, in the remainder of this tutorial we show how to translate an open task to a closed task by processing both the input units and the annotations of a crowdsourcing task.
The answers from the crowd are stored in the selected_events column.
End of explanation
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet
def nltk2wn_tag(nltk_tag):
if nltk_tag.startswith('J'):
return wordnet.ADJ
elif nltk_tag.startswith('V'):
return wordnet.VERB
elif nltk_tag.startswith('N'):
return wordnet.NOUN
elif nltk_tag.startswith('R'):
return wordnet.ADV
else:
return None
def lemmatize_events(event):
nltk_tagged = nltk.pos_tag(nltk.word_tokenize(str(event.lower().split("__")[0])))
wn_tagged = map(lambda x: (str(x[0]), nltk2wn_tag(x[1])), nltk_tagged)
res_words = []
for word, tag in wn_tagged:
if tag is None:
res_word = wordnet._morphy(str(word), wordnet.NOUN)
if res_word == []:
res_words.append(str(word))
else:
if len(res_word) == 1:
res_words.append(str(res_word[0]))
else:
res_words.append(str(res_word[1]))
else:
res_word = wordnet._morphy(str(word), tag)
if res_word == []:
res_words.append(str(word))
else:
if len(res_word) == 1:
res_words.append(str(res_word[0]))
else:
res_words.append(str(res_word[1]))
lematized_keyword = " ".join(res_words)
return lematized_keyword
Explanation: As you already know, each word can be expressed in a canonical form, i.e., as a lemma. For example, the words run, runs and running all have the lemma run. As you can see in the previous cell, events in text can appear under multiple forms. To evaluate the clarity of each event, we will process both the input units and the crowd annotations so that they refer to a word in its canonical form, i.e., we will lemmatize them.
Below, we define the function used to lemmatize the options that are shown to the workers in the crowdsourcing task:
End of explanation
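A tiny illustration of the idea, using NLTK's WordNetLemmatizer imported above (the lemmatization actually used in this tutorial is the custom lemmatize_events function defined above):
lemmatizer = WordNetLemmatizer()
for word in ["run", "runs", "running"]:
    # treat each surface form as a verb and reduce it to its canonical form
    print(word, "->", lemmatizer.lemmatize(word, wordnet.VERB))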
def define_annotation_vector(eventsList):
events = []
for i in range(len(eventsList)):
currentEvents = eventsList[i].split("###")
for j in range(len(currentEvents)):
if currentEvents[j] != "no_event":
lematized_keyword = lemmatize_events(currentEvents[j])
if lematized_keyword not in events:
events.append(lematized_keyword)
events.append("no_event")
return events
def lemmatize_keywords(keywords, separator):
keywords_list = keywords.split(separator)
lematized_keywords = []
for keyword in keywords_list:
lematized_keyword = lemmatize_events(keyword)
lematized_keywords.append(lematized_keyword)
return separator.join(lematized_keywords)
class TestConfig(DefaultConfig):
inputColumns = ["doc_id", "events", "events_count", "original_sentence", "processed_sentence", "sentence_id", "tokens"]
outputColumns = ["selected_events"]
annotation_separator = ","
# processing of a closed task
open_ended_task = False
annotation_vector = define_annotation_vector(test_data["events"])
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
# remove square brackets from annotations
judgments[col] = judgments[col].apply(lambda x: str(x).replace("[",""))
judgments[col] = judgments[col].apply(lambda x: str(x).replace("]",""))
# remove the quotes around the annotations
judgments[col] = judgments[col].apply(lambda x: str(x).replace('"',''))
judgments[col] = judgments[col].apply(lambda x: lemmatize_keywords(str(x), self.annotation_separator))
return judgments
data_closed, config = crowdtruth.load(
file = "data/event-text-sparse-multiple-choice.csv",
config = TestConfig()
)
data_closed['judgments'].head()
results_closed = crowdtruth.run(data_closed, config)
results_closed["annotations"]
Explanation: The following functions create the values of the annotation vector and extract the lemma of the events selected by each worker.
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.scatter(
results["units"]["uqs"],
results_closed["units"]["uqs"],
)
plt.plot([0, 1], [0, 1], 'red', linewidth=1)
plt.title("Sentence Quality Score")
plt.xlabel("open task")
plt.ylabel("closed task")
plt.scatter(
results["workers"]["wqs"],
results_closed["workers"]["wqs"],
)
plt.plot([0, 1], [0, 1], 'red', linewidth=1)
plt.title("Worker Quality Score")
plt.xlabel("open task")
plt.ylabel("closed task")
Explanation: Effect on CrowdTruth metrics
Finally, we can compare the effect of the transformation from an open task to a closed task on the CrowdTruth sentence and worker quality scores.
End of explanation |
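A brief numeric complement to the scatter plots (a sketch; it relies on the unit and worker indices matching between the two runs, which holds here since the same input file was processed twice): the rank correlation between open-task and closed-task scores.
print("units:  ", results_open["units"]["uqs"].corr(results_closed["units"]["uqs"], method="spearman"))
print("workers:", results_open["workers"]["wqs"].corr(results_closed["workers"]["wqs"], method="spearman"))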
14,997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running Computations in Parallel
Contents
Introduction
Single-Variable Functions
Multi-Variable Functions
IPython Based Parallel Processing
Step1: <a id='intro'></a>
Introduction
Often one is interested in the output of a given function as a single parameter is varied. For instance, we can calculate the steady-state response of our system as the driving frequency is varied. In cases such as this, where each iteration is independent of the others, we can speed up the calculation by performing the iterations in parallel. In QuTiP, parallel computations may be performed using the parallel_map function or the parfor (parallel-for-loop) function.
To use these functions we need to define a function of one or more variables, and the range over which one of these variables is to be evaluated. For example
Step2: or
Step3: Note that the return values are arranged differently for the parallel_map and the parfor functions, as illustrated below. In particular, the return value of parallel_map is not enforced to be NumPy arrays, which can avoid unnecessary copying if all that is needed is to iterate over the resulting list
Step4: <a id='multi'></a>
Multi-Variable Functions
The parallel_map and parfor functions are not limited to just numbers, but also work for a variety of outputs
Step5: One can also define functions with multiple input arguments and even keyword arguments. Here the parallel_map and parfor functions behave differently
Step6: Note that the keyword arguments can be anything you like, but the keyword values are not iterated over. The keyword argument num_cpus is reserved as it sets the number of CPUs used by parfor. By default, this value is set to the total number of physical processors on your system. You can change this number to a lower value, however setting it higher than the number of CPUs will cause a drop in performance. In parallel_map, keyword arguments to the task function are specified using the task_kwargs argument, so there are no special reserved keyword arguments.
The parallel_map function also supports a progress bar, using the keyword argument progress_bar, which can be set to True or to an instance of BaseProgressBar. There is a function called serial_map that works as a non-parallel drop-in replacement for parallel_map, which allows easy switching between serial and parallel computation.
Step7: <a id='ipython'></a>
IPython Based Parallel Processing
When QuTiP is used with the IPython interpreter, there is an alternative parallel for-loop implementation in the QuTiP module qutip.ipynbtools, see parallel_map. This parallel_map implementation is built on IPython's powerful framework for parallelization, so the compute processes are not confined to run on the same host as the main process, which enables cluster computations.
<div class="success">
**Note** | Python Code:
import numpy as np
from qutip import *
Explanation: Running Computations in Parallel
Contents
Introduction
Single-Variable Functions
Multi-Variable Functions
IPython Based Parallel Processing
End of explanation
def func1(x):
return x, x**2, x**3
a, b, c = parfor(func1, range(10))
print(a)
print(b)
print(c)
Explanation: <a id='intro'></a>
Introduction
Often one is interested in the output of a given function as a single parameter is varied. For instance, we can calculate the steady-state response of our system as the driving frequency is varied. In cases such as this, where each iteration is independent of the others, we can speed up the calculation by performing the iterations in parallel. In QuTiP, parallel computations may be performed using the parallel_map function or the parfor (parallel-for-loop) function.
To use these functions we need to define a function of one or more variables, and the range over which one of these variables is to be evaluated. For example:
<a id='single'></a>
Single-Variable Functions
End of explanation
result = parallel_map(func1, range(10))
result_array = np.array(result)
print(result_array[:, 0]) # == a
print(result_array[:, 1]) # == b
print(result_array[:, 2]) # == c
Explanation: or
End of explanation
result = parfor(func1, range(5))
print(result)
result = parallel_map(func1, range(5))
print(result)
Explanation: Note that the return values are arranged differently for the parallel_map and the parfor functions, as illustrated below. In particular, the return value of parallel_map is not enforced to be NumPy arrays, which can avoid unnecessary copying if all that is needed is to iterate over the resulting list:
End of explanation
def func2(x):
return x, Qobj(x), 'a' * x
a, b, c = parfor(func2, range(5))
print(a)
print(b)
print(c)
result = parallel_map(func2, range(5))
result_array = np.array(result)
print(result_array[:, 0]) # == a
print(result_array[:, 1]) # == b
print(result_array[:, 2]) # == c
Explanation: <a id='multi'></a>
Multi-Variable Functions
The parallel_map and parfor functions are not limited to just numbers, but also work for a variety of outputs:
End of explanation
def sum_diff(x, y, z=0):
return x + y, x - y, z
parfor(sum_diff, [1, 2, 3], [4, 5, 6], z=5.0)
parallel_map(sum_diff, [1, 2, 3], task_args=(np.array([4, 5, 6]),), task_kwargs=dict(z=5.0))
Explanation: One can also define functions with multiple input arguments and even keyword arguments. Here the parallel_map and parfor functions behave differently:
While parallel_map only iterates over the values argument, the parfor function simultaneously iterates over all of its list arguments:
End of explanation
def func(x):
return x
result = parallel_map(func, range(50), progress_bar=True)
Explanation: Note that the keyword arguments can be anything you like, but the keyword values are not iterated over. The keyword argument num_cpus is reserved as it sets the number of CPUs used by parfor. By default, this value is set to the total number of physical processors on your system. You can change this number to a lower value, however setting it higher than the number of CPUs will cause a drop in performance. In parallel_map, keyword arguments to the task function are specified using the task_kwargs argument, so there are no special reserved keyword arguments.
The parallel_map function also supports a progress bar, using the keyword argument progress_bar, which can be set to True or to an instance of BaseProgressBar. There is a function called serial_map that works as a non-parallel drop-in replacement for parallel_map, which allows easy switching between serial and parallel computation.
End of explanation
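A minimal sketch of the serial drop-in replacement mentioned above; switching between serial and parallel execution then only requires changing the function name.
from qutip import serial_map
result_serial = serial_map(func, range(50), progress_bar=True)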
from qutip.ipynbtools import parallel_map
result = parallel_map(func, range(50), progress_bar=True)
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/guide.css", "r").read()
return HTML(styles)
css_styling()
Explanation: <a id='ipython'></a>
IPython Based Parallel Processing
When QuTiP is used with the IPython interpreter, there is an alternative parallel for-loop implementation in the QuTiP module qutip.ipynbtools, see parallel_map. This parallel_map implementation is built on IPython's powerful framework for parallelization, so the compute processes are not confined to run on the same host as the main process, which enables cluster computations.
<div class="success">
**Note**: In order to run the IPython `parallel_map` function, you must first turn on the IPython cluster engine.
</div>
End of explanation |
14,998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ATM 623
Step1: Contents
The one-dimensional diffusion equation
Discretizing the diffusion operator in space
Coding the discretized diffusion operator in numpy
Discretizing the time derivative
Stability analysis of the FTCS scheme
Numerical tests with a shorter timestep
The need for a more efficient method
Implicit time method
Your homework assignment
<a id='section1'></a>
1. The one-dimensional diffusion equation
Suppose that a quantity $u(x)$ is mixed down-gradient by a diffusive process.
The diffusive flux is
$$ F = - K \frac{\partial u}{\partial x} $$
There will be local changes in $u$ wherever this flux is convergent or divergent
Step2: Here we will divide our domain up into 20 grid points.
Step3: The fluxes will be solved on the staggered grid with 21 points.
$u$ will be solved on the 20 point grid.
Step4: Here's one way to implement the finite difference, using array indexing.
Step5: We can also use the function numpy.diff() to accomplish the same thing
Step6: Here is a function that computes the diffusive flux $F$ on the staggered grid, including the boundaries.
Step7: The time tendency of $u$ is just the convergence of this flux, which requires one more finite difference
Step8: A smooth example
Suppose we have an initial $u$ field that has a local maximum in the interior.
The gaussian (bell curve) function is a convenient way to create such a field.
Step9: Hopefully this makes sense. The diffusion is acting to smooth out $u$ by reducing the peak and increasing $u$ on the flanks of the gaussian bump.
Some non-smooth examples
Use a random number generator to create some noisy initial conditions.
Step10: <a id='section4'></a>
4. Discretizing the time derivative
The simplest way to discretize the time derivative is the forward Euler method
Step11: Let's loop through a number of timesteps.
Step12: The numerics were easy to implement, and the scheme seems to work very well! The results are physically sensible.
Now, suppose that you wanted to double the spatial resolution
Try setting $J=40$ and repeat the above procedure.
What happens?
Step13: Suddenly our scheme is producing numerical noise that grows in time and overwhelms the smooth physical solution we are trying to model.
This is bad!
What went wrong, and what can we do about it?
<a id='section5'></a>
5. Stability analysis of the FTCS scheme
Following Press et al. (1988), "Numerical Recipes in C
Step14: Success! The graph now looks like a smoother (higher resolution) version of our first integration with the coarser grid.
But at a big cost | Python Code:
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
Explanation: ATM 623: Climate Modeling
Brian E. J. Rose, University at Albany
Lecture 20: A peek at numerical methods for diffusion models
Warning: content out of date and not maintained
You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date.
Here you are likely to find broken links and broken code.
About these notes:
This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways:
The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware
The latest versions can be viewed as static web pages rendered on nbviewer
A complete snapshot of the notes as of May 2017 (end of spring semester) are available on Brian's website.
Also here is a legacy version from 2015.
Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display, Math, Latex
Explanation: Contents
The one-dimensional diffusion equation
Discretizing the diffusion operator in space
Coding the discretized diffusion operator in numpy
Discretizing the time derivative
Stability analysis of the FTCS scheme
Numerical tests with a shorter timestep
The need for a more efficient method
Implicit time method
Your homework assignment
<a id='section1'></a>
1. The one-dimensional diffusion equation
Suppose that a quantity $u(x)$ is mixed down-gradient by a diffusive process.
The diffusive flux is
$$ F = - K \frac{\partial u}{\partial x} $$
There will be local changes in $u$ wherever this flux is convergent or divergent:
$$ \frac{\partial u}{\partial t} = - \frac{\partial F}{\partial x} $$
Putting this together gives the classical diffusion equation in one dimension
$$ \frac{\partial u}{\partial t} = \frac{\partial}{\partial x} \left( K \frac{\partial u}{\partial x} \right) $$
For simplicity, we are going to limit ourselves to Cartesian geometry rather than meridional diffusion on a sphere.
We will also assume here that $K$ is a constant, so our governing equation is
$$ \frac{\partial u}{\partial t} = K \frac{\partial^2 u}{\partial x^2} $$
This equation represents a time-dependent diffusion process. It is an initial-boundary value problem. We want to integrate the model forward in time to model the changes in the field $u(x)$.
<a id='section2'></a>
2. Discretizing the diffusion operator in space
Solving a differential equation on a computer always requires some approximation to represent the continuous function $u(x,t)$ and its derivatives in terms of discrete quantities (arrays of numbers).
We have already dealt with simple discretization of the time derivative back in Lecture 2. We used the forward Euler method to step all of our radiation models forward in time so far.
Some notation for discretization of $u(x,t)$
We will discretize time and space on grids
$$ x_j , ~~~ t^n $$
so that
$$ u_j^n = u(x_j, ~t^n) $$
Discretizing the diffusive flux
The governing equation can be written in terms of the convergence of the diffusive flux:
$$ \frac{\partial u}{\partial t} = - \frac{\partial F}{\partial x} $$
It is sensible to use a centered difference to approximate this derivative:
$$ \frac{\partial F}{\partial x} \bigg|_j \approx \frac{F_{j+\frac{1}{2}} - F_{j-\frac{1}{2}}}{x_{j+\frac{1}{2}} - x_{j-\frac{1}{2}}} $$
The time tendency at point $x_j$ can thus be written
$$ \frac{\partial u}{\partial t} \bigg|_j \approx - \frac{F_{j+\frac{1}{2}} - F_{j-\frac{1}{2}}}{x_{j+\frac{1}{2}} - x_{j-\frac{1}{2}}} $$
The flux itself depends on a spatial derivative of $u$. We will apply the same centered difference approximation. At point $x_j$ this would look like
$$ \frac{\partial u}{\partial x} \approx \frac{u_{j+\frac{1}{2}} - u_{j-\frac{1}{2}}}{x_{j+\frac{1}{2}} - x_{j-\frac{1}{2}}} $$
But we actually want to approximate $F_{j+\frac{1}{2}}$ and $F_{j-\frac{1}{2}}$, so we apply the centered difference formula at these intermediate points to get
$$ F_{j+\frac{1}{2}} \approx -K \frac{u_{j+1} - u_{j}}{x_{j+1} - x_{j}} $$
and
$$ F_{j-\frac{1}{2}} \approx -K \frac{u_{j} - u_{j-1}}{x_{j} - x_{j-1}} $$
Putting this all together, we can write the time tendency at $x_j$ as
$$ \frac{\partial u}{\partial t} \bigg|_j \approx K \frac{ \frac{u_{j+1} - u_{j}}{x_{j+1} - x_{j}} - \frac{u_{j} - u_{j-1}}{x_{j} - x_{j-1}}}{x_{j+\frac{1}{2}} - x_{j-\frac{1}{2}}} $$
We'll make things easy on ourselves by using uniform grid spacing in $x$, so
$$ x_{j+1} - x_{j} = x_{j} - x_{j-1} = x_{j+\frac{1}{2}} - x_{j-\frac{1}{2}} = \Delta x $$
So our final formula for the diffusive flux convergence is
$$ \frac{\partial u}{\partial t} \bigg|_j \approx K \frac{ u_{j+1} - 2 u_{j} + u_{j-1}}{\Delta x^2} $$
No-flux boundary conditions
Suppose the domain is $0 \le x \le 1$, with solid walls at $x=0, 1$.
The physical boundary condition at the walls is that there can be no flux in or out of the walls:
$$ F(0) = F(1) = 0 $$
So the boundary conditions on $u$ are
$$ \frac{\partial u}{\partial x} = 0 ~~~ \text{at} ~~~ x=0,1 $$
The staggered grid
Suppose we have a grid of $J+1$ total points between $x=0$ and $x=1$, including the boundaries:
$x^*_0 = 0 $
$x^*_1 = \Delta x$
$x^*_2 = 2~\Delta x$
...
$x^*_j = j~\Delta x$
...
$x^*_{J-1} = (J-1)~\Delta x = 1 - \Delta x $
$x^*_J = J ~ \Delta x = 1 $
Clearly then the grid spacing must be $\Delta x = 1/J$.
We'll define the fluxes on this grid. The boundary conditions can thus be written
$$ F_0 = F_J = 0 $$
Since our centered difference discretization defines $F$ at points halfway between the $u$ points, it is sensible to locate $u$ on another grid that is offset by $\Delta x / 2$.
The first grid point for $u$ is thus a distance $\Delta x / 2$ from the wall, and there are a total of $J$ points:
$x_0 = \Delta x / 2$
$x_1 = \Delta x / 2 + \Delta x$
$x_2 = \Delta x / 2 + 2~\Delta x$
...
$x_j = \Delta x / 2 + j~\Delta x$
...
$x_{J-1} = \Delta x / 2 + (J-1)~\Delta x = 1 - \Delta x / 2 $
Implementing the boundary condition on the staggered grid
At $x_0$ we have
$$ \frac{\partial u}{\partial t} \bigg|_0 \approx -\frac{ F_1 - F_0}{\Delta x} $$
Subbing in $F_0 = 0$ and the normal discretization for $F_1$ gives
$$ \frac{\partial u}{\partial t} \bigg|_0 \approx K \frac{ u_1 - u_0 }{\Delta x^2} $$
The same procedure at the other wall yields
$$ \frac{\partial u}{\partial t} \bigg|_{J-1} \approx - K \frac{ u_{J-1} - u_{J-2} }{\Delta x^2} $$
Pulling this all together we have a complete discretization of the diffusion operator including its boundary conditions:
$$ \frac{\partial u}{\partial t} \bigg|_0 \approx K \frac{ u_1 - u_0 }{\Delta x^2} $$
$$ \frac{\partial u}{\partial t} \bigg|_j \approx K \frac{ u_{j+1} - 2 u_{j} + u_{j-1}}{\Delta x^2}, ~~~~~~ j=1,...,J-2 $$
$$ \frac{\partial u}{\partial t} \bigg|_{J-1} \approx - K \frac{ u_{J-1} - u_{J-2} }{\Delta x^2} $$
<a id='section3'></a>
3. Coding the discretized diffusion operator in numpy
End of explanation
J1 = 20
J = J1
deltax = 1./J
display(Math(r'J = %i' %J))
display(Math(r'\Delta x = %0.3f' %deltax))
Explanation: Here we will divide our domain up into 20 grid points.
End of explanation
xstag = np.linspace(0., 1., J+1)
x = xstag[:-1] + deltax/2
print( x)
u = np.zeros_like(x)
Explanation: The fluxes will be solved on the staggered grid with 21 points.
$u$ will be solved on the 20 point grid.
End of explanation
dudx = (u[1:] - u[:-1]) / (x[1:] - x[:-1])
dudx.shape
Explanation: Here's one way to implement the finite difference, using array indexing.
End of explanation
help(np.diff)
np.diff(u).shape
Explanation: We can also use the function numpy.diff() to accomplish the same thing:
End of explanation
def diffusive_flux(u, deltax, K=1):
# Take the finite difference
F = np.diff(u)/deltax
# add a zero as the first element (no flux on boundary)
F = np.insert(F, 0, 0.)
# add another zero as the last element (no flux on boundary)
F = np.append(F, 0.)
# flux is DOWN gradient, proportional to D
return -K*F
diffusive_flux(u,deltax).shape
Explanation: Here is a function that computes the diffusive flux $F$ on the staggered grid, including the boundaries.
End of explanation
def diffusion(u, deltax, K=1):
# compute flux
F = diffusive_flux(u, deltax, K)
# take convergence of flux
return -np.diff(F) / deltax
Explanation: The time tendency of $u$ is just the convergence of this flux, which requires one more finite difference:
End of explanation
def gaussian(x, mean, std):
return np.exp(-(x-mean)**2/(2*std**2))/np.sqrt(2*np.pi*std**2)
K = 0.01
u = gaussian(x, 0.5, 0.08)
dudt = diffusion(u, deltax, K=K)
fig, ax = plt.subplots(1)
ax.plot(x, u, label='$u(x)$')
ax.plot(x, dudt, label='$du/dt$')
ax.legend()
Explanation: A smooth example
Suppose we have an initial $u$ field that has a local maximum in the interior.
The gaussian (bell curve) function is a convenient way to create such a field.
End of explanation
fig = plt.figure(figsize=(10,8))
for n in range(4):
u = np.random.random(J)
dudt = diffusion(u, deltax, K)
ax = fig.add_subplot(2,2,n+1)
ax.plot(x, u)
ax.plot(x, dudt)
Explanation: Hopefully this makes sense. The diffusion is acting to smooth out $u$ by reducing the peak and increasing $u$ on the flanks of the gaussian bump.
Some non-smooth examples
Use a random number generator to create some noisy initial conditions.
End of explanation
def step_forward(u, deltax, deltat, K=1):
dudt = diffusion(u, deltax, K)
return u + deltat * dudt
K = 0.01
deltat = 0.125
deltat1 = deltat
u0 = gaussian(x, 0.5, 0.08)
u1 = step_forward(u0, deltax, deltat1, K)
fig, ax = plt.subplots(1)
ax.plot(x, u0, label='initial')
ax.plot(x, u1, label='next')
ax.legend()
Explanation: <a id='section4'></a>
4. Discretizing the time derivative
The simplest way to discretize the time derivative is the forward Euler method:
$$ \frac{d u}{dt} \bigg|^n \approx \frac{u^{n+1} - u^n}{\Delta t} $$
We have already used this method to step our prognostic variables forward in time.
Solving the above for the future value of $u$ gives
$$ u^{n+1} = u^n + \Delta t \frac{d u}{dt} \bigg|^n $$
We apply our discretization of the diffusion operator to the current value of the field $u^n_j$, to get our formula for the future values:
$$ u_j^{n+1} = u_j^n + \frac{K \Delta t}{\Delta x^2} \left( u^n_{j+1} - 2 u^n_{j} + u^n_{j-1} \right) $$
(except at the boundaries, where the diffusion operator is slightly different -- see above).
Together, this scheme is known as Forward Time, Centered Space or FTCS.
It is very simple to implement in numpy code.
End of explanation
# regular resolution
J = 20
deltax = 1./J
xstag = np.linspace(0., 1., J+1)
x = xstag[:-1] + deltax/2
u = gaussian(x, 0.5, 0.08)
niter = 11
for n in range(niter):
u = step_forward(u, deltax, deltat1, K)
plt.plot(x, u, label=n)
plt.legend()
Explanation: Let's loop through a number of timesteps.
End of explanation
# double the resolution
scaling_factor = 2
J = J1 * scaling_factor
deltax = 1./J
xstag = np.linspace(0., 1., J+1)
x = xstag[:-1] + deltax/2
u = gaussian(x, 0.5, 0.08)
for n in range(niter):
u = step_forward(u, deltax, deltat1, K)
plt.plot(x, u, label=n)
plt.legend()
Explanation: The numerics were easy to implement, and the scheme seems to work very well! The results are physically sensible.
Now, suppose that you wanted to double the spatial resolution
Try setting $J=40$ and repeat the above procedure.
What happens?
End of explanation
# double the resolution
J = J1 * scaling_factor
deltax = 1./J
xstag = np.linspace(0., 1., J+1)
x = xstag[:-1] + deltax/2
K = 0.01
# The maximum stable timestep
deltat_max = deltax**2 / 2 / K
print( 'The maximum allowable timestep is %f' %deltat_max)
deltat = deltat1 / scaling_factor**2
print( 'The timestep 4x shorter than the previous one is %f' %deltat)
u = gaussian(x, 0.5, 0.08)
for n in range(niter):
for t in range(scaling_factor**2):
u = step_forward(u, deltax, deltat, K)
plt.plot(x, u, label=n)
plt.legend()
Explanation: Suddenly our scheme is producing numerical noise that grows in time and overwhelms the smooth physical solution we are trying to model.
This is bad!
What went wrong, and what can we do about it?
<a id='section5'></a>
5. Stability analysis of the FTCS scheme
Following Press et al. (1988), "Numerical Recipes in C: The Art of Scientific Computing", Cambridge University Press.
This is an example of the so-called von Neumann Stability Analysis. It is a form of normal mode analysis for a discrete system.
We look for normal mode solutions (i.e. wavy sines and cosines) of the finite difference equations of the form
$$ u_j^n = \xi^n \exp(i~k~j~ \Delta x) $$
where $k$ is some real number that represents a spatial wavenumber (which can have any value), and $\xi = \xi(k)$ is a complex number that depends on $k$.
The number $\xi$ is called the amplification factor at a given wavenumber $k$.
The question is, under what conditions do wavy solutions grow with time? (This is bad, as it means small numerical noise will become large numerical noise and make our differencing scheme unusable)
Let's substitute the normal mode solution into our finite difference equation
$$ \frac{u_j^{n+1} - u_j^n}{\Delta t} = \frac{K}{\Delta x^2} \left( u^n_{j+1} - 2 u^n_{j} + u^n_{j-1} \right) $$
$$ \frac{\xi^{n+1} \exp(i~k~j~ \Delta x) - \xi^n \exp(i~k~j~ \Delta x)}{\Delta t} = \frac{K}{\Delta x^2} \left( \xi^n \exp(i~k~(j+1)~ \Delta x) - 2 \xi^n \exp(i~k~j~ \Delta x) + \xi^n \exp(i~k~(j-1)~ \Delta x) \right) $$
Divide through by $\xi^n \exp(i~k~j~\Delta x)$:
$$ \frac{\xi^{n+1}}{\xi^n} - 1 = \frac{K \Delta t}{\Delta x^2} \left(\exp(i~k~\Delta x) - 2 + \exp(-i~k~\Delta x) \right) $$
The exponentials simplify
$$ \frac{\xi^{n+1}}{\xi^n} = 1 + \frac{K \Delta t}{\Delta x^2} \left(2 \cos(k~\Delta x) - 2 \right) $$
Or using a double angle identity,
$$ \frac{\xi^{n+1}}{\xi^n} = 1 - \frac{4 K \Delta t}{\Delta x^2} \sin^2 \left( \frac{k~\Delta x}{2} \right) $$
The wavy solution must not grow with time
We need to prevent growing normal modes. So successive amplitudes should be
$$ \bigg| \frac{\xi^{n+1}}{\xi^n} \bigg| \le 1 $$
The stability condition is thus
$$ \bigg| 1 - \frac{4 K \Delta t}{\Delta x^2} \sin^2 \left( \frac{k~\Delta x}{2} \right) \bigg| \le 1 $$
and this condition must be met for EVERY possible wavenumber $k$.
Because $0 \le \sin^2(\phi) \le 1$ for any $\phi$, our condition can only be violated if
$$ \frac{4 K \Delta t}{\Delta x^2} > 2 $$
We conclude the the FTCS scheme is stable so long as this stability condition is met:
$$ \Delta t \le \frac{\Delta x^2}{2 K} $$
We have just discovered an important constraint on the allowable timestep
The maximum timestep we can use with the FTCS scheme for the diffusion equation is proportional to $\Delta x^2$.
A doubling of the spatial resolution would require a 4x shorter timestep to preserve numerical stability.
Physically, the restriction is that the maximum allowable timestep is approximately the diffusion time across a grid cell of width $\Delta x$.
<a id='section6'></a>
6. Numerical tests with a shorter timestep
Going back to our Gaussian example, let's double the resolution but shorten the timestep by a factor of 4.
End of explanation
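As a quick numerical cross-check of the stability analysis above (using the grid variables already defined in this notebook), we can evaluate the FTCS amplification factor for every resolvable wavenumber: its magnitude stays at or below 1 for the maximum allowable timestep, but exceeds 1 as soon as the timestep is larger.
def ftcs_amplification(deltat, deltax, K, k):
    # amplification factor of the FTCS scheme at wavenumber k
    return 1 - 4*K*deltat/deltax**2 * np.sin(k*deltax/2)**2
k = np.pi * np.arange(1, J+1)   # resolvable wavenumbers on the grid
for this_deltat in [deltat_max, 2*deltat_max]:
    xi = ftcs_amplification(this_deltat, deltax, K, k)
    print( 'deltat = %f, max |amplification factor| = %f' %(this_deltat, np.abs(xi).max()) )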
%load_ext version_information
%version_information numpy, matplotlib
Explanation: Success! The graph now looks like a smoother (higher resolution) version of our first integration with the coarser grid.
But at a big cost: our calculation required 4 times more timesteps to do the same integration.
The total increase in computational cost was actally a factor of 8 to get a factor of 2 increase in spatial resolution.
<a id='section7'></a>
7. The need for a more efficient method
In practice the condition
$$ \Delta t \le \frac{\Delta x^2}{2 K} $$
is often too restrictive to be practical!
Consider our diffusive EBM. Suppose we want a spatial resolution of 1º latitude. Then we have 180 grid points from pole to pole, and our physical length scale is
$$ \Delta x \approx 10^5 \text{m} $$
We were using a diffusivity of $D = 0.6 ~ \text{W m}^{-2}~\text{K}^{-1}$ and a heat capacity of $C = 4 \times 10^7 ~ \text{J m}^{-2} ~\text{K}^{-1}$ (for 10 m of water, see Lecture 17).
Accounting for the spherical geometry in our EBM, this translates to
$$ K = \frac{2 \pi a^2 D}{C} = \frac{2 \pi ~ (6.4 \times 10^6 ~\text{m})^2 ~(0.6 ~ \text{W m}^{-2}~\text{K}^{-1})}{4 \times 10^7 ~ \text{J m}^{-2} ~\text{K}^{-1}} \approx 4 \times 10^{6} ~ \text{m}^2 ~ \text{s}^{-1} $$
Recall that this is the diffusivity associated with the large-scale motion of the atmosphere (mostly). If we take a typical velocity scale for a mid-latitude eddy, $V \approx 20~\text{m s}^{-1}$, and a typical length scale for that eddy, $L \approx 2000~\text{km}$, the diffusivity then scales as
$$ K = V~ L = 4 \times 10^{6} ~ \text{m}^2 ~ \text{s}^{-1} $$
Using these numbers the stability condition is roughly
$$ \Delta t \le 10^3 ~\text{s}$$
which is less than one hour!
And if we wanted to double the resolution to 0.5º, we would need a timestep of just a few minutes.
This can be a very onerous requirement for a model that would like to integrate out for many years. We can do better, but we need a different time discretization!
<a id='section8'></a>
8. Implicit time method
With numerical methods for partial differential equations, it often turns out that a small change in the discretization can make an enormous difference in the results.
The implicit time scheme applies exactly the same centered difference scheme to the spatial derivatives in the diffusion operator.
But instead of applying the operator to the field $u^n$ at time $n$, we instead apply it to the field at the future time $u^{n+1}$.
The scheme looks like
$$ \frac{u_j^{n+1} - u_j^n}{\Delta t} = \frac{K}{\Delta x^2} \left( u^{n+1}_{j+1} - 2 u^{n+1}_{j} + u^{n+1}_{j-1} \right) $$
in the interior, and at the boundaries:
$$ \frac{u_0^{n+1} - u_0^n}{\Delta t} = \frac{K}{\Delta x^2} \left( u^{n+1}_1 - u^{n+1}_0 \right) $$
and
$$ \frac{u_{J-1}^{n+1} - u_{J-1}^n}{\Delta t} = - \frac{K}{\Delta x^2} \left( u_{J-1}^{n+1} - u_{J-2}^{n+1} \right) $$
This might seem like a strange way to write the system, since we don't know the future state of the system at $t^{n+1}$. That's what we're trying to solve for!
Let's move all terms evaluated at $t^{n+1}$ to the left hand side:
$$ u_j^{n+1} - \frac{K \Delta t}{\Delta x^2} \left( u^{n+1}_{j+1} - 2 u^{n+1}_{j} + u^{n+1}_{j-1} \right) = u_j^n $$
or
$$ -K^* u^{n+1}_{j+1} + \left(1+2K^* \right) u_j^{n+1} - K^* u_{j-1}^{n+1} = u_j^n $$
(in the interior)
where we have introduced a non-dimensional diffusivity
$$ K^* = \frac{K \Delta t}{\Delta x^2} $$
The implicit scheme as a matrix problem
We can write this as a matrix equation
$$ \mathbf{A} ~ \mathbf{U}^{n+1} = \mathbf{U}^n $$
where $\mathbf{U}$ is a $J\times1$ column vector giving the field $u(x)$ at a particular instant in time:
$$ \mathbf{U}^n = \left[ \begin{array}{c}
u^n_0 \\
u^n_1 \\
u^n_2 \\
... \\
u^n_{J-2} \\
u^n_{J-1} \\
\end{array}
\right]
$$
and $\mathbf{U}^{n+1}$ is the same vector at $t^{n+1}$.
$\mathbf{A}$ is a $J\times J$ tridiagonal matrix:
$$ \mathbf{A} = \left[ \begin{array}{cccccccc}
1+K^* & -K^* & 0 & 0 & ... & 0 & 0 & 0 \\
-K^* & 1+2K^* & -K^* & 0 & ... & 0 & 0 & 0 \\
0 & -K^* & 1+2K^* & -K^* &... & 0 & 0 & 0 \\
... & ... & ... & ... & ... & ... & ... & ... \\
0 & 0 & 0 & 0 & ... & -K^* & 1+2K^* & -K^* \\
0 & 0 & 0 & 0 & ... & 0 & -K^* & 1+K^* \\
\end{array}
\right]
$$
Solving for the future state of the system $\mathbf{U}^{n+1}$ is then just the solution of the linear system
$$ \mathbf{U}^{n+1} = \mathbf{A}^{-1} \mathbf{U}^{n}$$
Solving a tridiagonal matrix problem like this is a very common operation in computer science, and efficient numerical routines are available in many languages (including Python / numpy!)
Stability analysis of the implicit scheme
We'll skip the details, but the amplification factor for this scheme is (see Numerical Recipes book or other text on numerical methods):
$$ \frac{\xi^{n+1}}{\xi^n} = \frac{1}{1+4 K^* \sin^2 \left( \frac{k \Delta x}{2} \right) } $$
so the stability criterion of $$ \bigg| \frac{\xi^{n+1}}{\xi^n} \bigg| \le 1 $$
is met for any value of $K^*$ and thus for any timestep $\Delta t$.
The implicit method (also called backward time) is unconditionally stable for any choice of timestep.
<a id='section9'></a>
9. Your homework assignment
Write Python code to solve the diffusion equation using this implicit time method. Demonstrate that it is numerically stable for much larger timesteps than we were able to use with the forward-time method. One way to do this is to use a much higher spatial resolution.
Some final thoughts:
We have just scratched the surface of the wonders and sorrows of numerical methods here. The implicit method is very stable but is not the most accurate method for a diffusion problem, particularly when you are interested in some of the faster dynamics of the system (as opposed to just getting the system quickly to its equilibrium state).
There are always trade-offs in the choice of a numerical method.
The equations for most climate models are sufficiently complex that more than one numerical method is necessary. Even in the simple diffusive EBM, the radiation terms are handled by a forward-time method while the diffusion term is solved implicitly.
Once you have worked through the above problem (diffusion only), you might want to look in the climlab code to see how the diffusion solver is implemented there, and how it is used when you integrate the EBM.
<div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>
Version information
End of explanation |
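One possible sketch of the implicit update described in section 8 above, offered only as a starting point for the homework (it assumes uniform grid spacing and the same no-flux boundary conditions used throughout this notebook):
def step_forward_implicit(u, deltax, deltat, K=1):
    J = u.size
    Kstar = K*deltat/deltax**2
    # tridiagonal matrix A: (1+2K*) on the diagonal, -K* on the off-diagonals
    A = ( np.diag((1+2*Kstar)*np.ones(J))
        + np.diag(-Kstar*np.ones(J-1), k=1)
        + np.diag(-Kstar*np.ones(J-1), k=-1) )
    # no-flux boundaries: only one neighbor contributes at each wall
    A[0, 0] = 1 + Kstar
    A[-1, -1] = 1 + Kstar
    # solve A u^{n+1} = u^n for the future state
    return np.linalg.solve(A, u)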
14,999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Best practices
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: This section lists some best practices for creating a circuit that performs well
on Google hardware devices. This is an area of active research, so users are
encouraged to try multiple approaches to improve results.
This guide is split into three parts
Step2: Using CircuitOperation to reduce circuit size
Particularly large batches (or sweeps) of circuits may encounter errors when
sent to Quantum Engine due to an upper limit on request size. If the circuits
in question have a repetitive structure, cirq.CircuitOperations can be used
to reduce the request size and avoid this limit.
Step3: When compiling circuits with CircuitOperations, providing a context
with deep=True will preserve the CircuitOperations while
optimizing their contents. This is useful for producing a concise,
device-compatible circuit.
Step4: Running circuits faster
The following sections give tips and tricks that allow you to improve your
repetition rate (how many repetitions per second the device will run).
This will allow you to make the most out of limited time on the
device by getting results faster. The shorter experiment time may
also reduce error due to drift of qubits away from calibration.
There are costs to sending circuits over the network, to compiling each
circuit into waveforms, to initializing the device,
and to sending results back over the network.
These tips will aid you in removing some of this overhead by combining your
circuits into sweeps or batches.
Use sweeps when possible
Round trip network time to and from the engine typically adds latency on the order of a second
to the overall computation time. Reducing the number of trips and allowing the engine to
properly batch circuits can improve the throughput of your calculations. One way to do this
is to use parameter sweeps to send multiple variations of a circuit at once.
One example is to turn single-qubit gates on or off by using parameter sweeps.
For instance, the following code illustrates how to combine measuring in the
Z basis or the X basis in one circuit.
Step5: One word of caution is there is a limit to the total number of repetitions. Take some care
that your parameter sweeps, especially products of sweeps, do not become so excessively large
that they exceed this limit.
Use batches if sweeps are not possible
The engine has a method called run_batch() that can be used to send multiple
circuits in a single request. This can be used to increase the efficiency
of your program so that more repetitions are completed per second.
The circuits that are grouped into the same batch must
measure the same qubits and have the same number of repetitions for each
circuit. Otherwise, the circuits will not be batched together
on the device, and there will be no gain in efficiency.
Flatten sympy formulas into symbols
Symbols are extremely useful for constructing parameterized circuits (see above). However,
only some sympy formulas can be serialized for network transport to the engine.
Currently, sums and products of symbols, including linear combinations, are supported.
See cirq_google.arg_func_langs for details.
The sympy library is also infamous for being slow, so avoid using complicated formulas if you
care about performance. Avoid using parameter resolvers that have formulas in them.
One way to eliminate formulas in your gates is to flatten your expressions.
The following example shows how to take a gate with a formula and flatten it
to a single symbol with the formula pre-computed for each value of the sweep | Python Code:
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
import cirq
print("installed cirq.")
import cirq_google as cg
import sympy
Explanation: Best practices
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/google/best_practices"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/google/best_practices.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/google/best_practices.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/google/best_practices.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
End of explanation
# Create your circuit here
my_circuit = cirq.Circuit()
# Convert the circuit to run on a Google target gateset.
# The google specific `cirq.CompilationTargetGateset` specifies the target gateset
# and a sequence of appropriate optimization routines that should be executed to compile
# a circuit to run on this target.
sycamore_circuit = cirq.optimize_for_target_gateset(my_circuit, gateset=cg.SycamoreTargetGateset())
Explanation: This section lists some best practices for creating a circuit that performs well
on Google hardware devices. This is an area of active research, so users are
encouraged to try multiple approaches to improve results.
This guide is split into three parts:
* Getting your circuit to run
* Making it run faster
* Lowering error
Getting a circuit to run on hardware
In order to run on hardware, the circuit must only use qubits and gates that the
device supports. Using inactive qubits, non-adjacent qubits, or non-native
gates will immediately cause a circuit to fail.
Validating a circuit with a device, such as
cg.Sycamore.validate_circuit(circuit) will test a lot of these
conditions. Calling the validate_circuit function will work with any
device, including those retrieved directly from the API using the
engine object, which can help
identify any qubits used in the circuit that have been disabled on the actual
device.
Using built-in transformers as a first pass
Using built-in transformers will allow you to compile to the correct gate set. As they are
automated solutions, they will not always perform as well as a hand-crafted solution, but
they provide a good starting point for creating a circuit that is likely to run successfully
on hardware. Best practice is to inspect the circuit after optimization to make sure
that it has compiled without unintended consequences.
End of explanation
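A minimal validation sketch, assuming the sycamore_circuit from the cell above (which is empty here, so validation passes trivially); in practice you would validate the fully built circuit before submitting it:
# Check the compiled circuit against the target device. validate_circuit
# raises an exception (typically ValueError) if the circuit uses unsupported
# qubits or gates, and returns None otherwise.
try:
    cg.Sycamore.validate_circuit(sycamore_circuit)
    print("Circuit is valid for the Sycamore device.")
except ValueError as e:
    print(f"Circuit failed device validation: {e}")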
# Repeatedly apply Hadamard and measurement to 10 qubits.
my_circuit = cirq.Circuit()
qubits = cirq.GridQubit.rect(2, 5)
for i in range(100):
my_circuit.append(cirq.H.on_each(*qubits))
for qb in qubits:
my_circuit.append(cirq.measure(qb, key=cirq.MeasurementKey.parse_serialized(f'{i}:m{qb}')))
print(my_circuit)
# The same circuit, but defined using CircuitOperations.
# This is ~1000x smaller when serialized!
q = cirq.NamedQubit("q")
sub_circuit = cirq.FrozenCircuit(cirq.H(q), cirq.measure(q, key='m'))
circuit_op = cirq.CircuitOperation(sub_circuit).repeat(100)
short_circuit = cirq.Circuit(
circuit_op.with_qubits(q).with_measurement_key_mapping({'m': f'm{q}'}) for q in qubits
)
print(short_circuit)
Explanation: Using CircuitOperation to reduce circuit size
Particularly large batches (or sweeps) of circuits may encounter errors when
sent to Quantum Engine due to an upper limit on request size. If the circuits
in question have a repetitive structure, cirq.CircuitOperation can be used
to reduce the request size and stay under this limit (a rough size comparison follows this explanation).
End of explanation
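A rough size comparison to illustrate the savings described above, assuming my_circuit and short_circuit from the cell above; JSON length is only a proxy for the actual request size, so exact numbers will vary:
# Compare JSON-serialized sizes of the flat circuit and the
# CircuitOperation-based circuit.
flat_size = len(cirq.to_json(my_circuit))
compact_size = len(cirq.to_json(short_circuit))
print(f"Flat circuit JSON size: {flat_size} characters")
print(f"CircuitOperation circuit JSON size: {compact_size} characters")
print(f"Reduction factor: {flat_size / compact_size:.1f}x")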
syc_circuit = cirq.optimize_for_target_gateset(
short_circuit,
gateset=cg.SycamoreTargetGateset(),
context=cirq.TransformerContext(deep=True)
)
print(syc_circuit)
Explanation: When compiling circuits with CircuitOperations, providing a context
with deep=True will preserve the CircuitOperations while
optimizing their contents. This is useful for producing a concise,
device-compatible circuit.
End of explanation
q = cirq.GridQubit(1, 1)
sampler = cirq.Simulator()
# STRATEGY #1: Have a separate circuit and sample call for each basis.
circuit_z = cirq.Circuit(
cirq.measure(q, key='out'))
circuit_x = cirq.Circuit(
cirq.H(q),
cirq.measure(q, key='out'))
samples_z = sampler.sample(circuit_z, repetitions=5)
samples_x = sampler.sample(circuit_x, repetitions=5)
print("Measurement in Z Basis:", samples_z, sep="\n")
print("Measurement in X Basis:", samples_x, sep="\n")
# STRATEGY #2: Have a parameterized circuit.
circuit_sweep = cirq.Circuit(
cirq.H(q)**sympy.Symbol('t'),
cirq.measure(q, key='out'))
samples_sweep = sampler.sample(circuit_sweep,
repetitions=5,
params=[{'t': 0}, {'t': 1}])
print(samples_sweep)
Explanation: Running circuits faster
The following sections give tips and tricks that allow you to improve your
repetition rate (how many repetitions per second the device will run).
This will allow you to make the most out of limited time on the
device by getting results faster. The shorter experiment time may
also reduce error due to drift of qubits away from calibration.
There are costs to sending circuits over the network, to compiling each
circuit into waveforms, to initializing the device,
and to sending results back over the network.
These tips will aid you in removing some of this overhead by combining your
circuits into sweeps or batches.
Use sweeps when possible
Round-trip network time to and from the engine typically adds latency on the order of a second
to the overall computation time. Reducing the number of trips and allowing the engine to
properly batch circuits can improve the throughput of your calculations. One way to do this
is to use parameter sweeps to send multiple variations of a circuit at once (a sweep-object sketch follows this explanation).
One example is to turn single-qubit gates on or off by using parameter sweeps.
For instance, the following code illustrates how to combine measuring in the
Z basis or the X basis in one circuit.
End of explanation
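A small sketch of driving the same parameterized circuit with a cirq.Linspace sweep object instead of an explicit list of resolvers, assuming circuit_sweep and sampler from the cell above; the product-sweep lines only illustrate how sweep sizes multiply:
# Sweep 't' over 5 evenly spaced values in [0, 1] with a single sample() call.
linspace_sweep = cirq.Linspace('t', start=0, stop=1, length=5)
sweep_samples = sampler.sample(circuit_sweep, repetitions=5, params=linspace_sweep)
print(sweep_samples)
# Products of sweeps multiply in size; this one has 5 * 3 = 15 parameter points,
# which all count toward the total repetition limit.
product_sweep = linspace_sweep * cirq.Points('phase', [0.0, 0.5, 1.0])
print(f"Total parameter points in the product sweep: {len(product_sweep)}")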
# Suppose we have a gate with a complicated formula, e.g. "2^t - 1".
# This formula cannot be serialized for network transport to the engine,
# and it could potentially encounter sympy slowness.
gate_with_formula = cirq.XPowGate(exponent=2 ** sympy.Symbol('t') - 1)
sweep = cirq.Linspace('t', start=0, stop=1, length=5)
# Instead of sweeping the formula, we will pre-compute the values of the formula
# at every point and store them in a new symbol called '<2**t - 1>'
sweep_for_gate, flat_sweep = cirq.flatten_with_sweep(gate_with_formula, sweep)
print(repr(sweep_for_gate))
print(list(flat_sweep.param_tuples()))
Explanation: One word of caution: there is a limit to the total number of repetitions. Take some care
that your parameter sweeps, especially products of sweeps, do not become so excessively large
that they exceed this limit.
Use batches if sweeps are not possible
The engine has a method called run_batch() that can be used to send multiple
circuits in a single request, which can improve the efficiency
of your program so that more repetitions are completed per second (a minimal sketch is shown after this explanation).
The circuits that are grouped into the same batch must
measure the same qubits and use the same number of repetitions for each
circuit. Otherwise, the circuits will not be batched together
on the device, and there will be no gain in efficiency.
Flatten sympy formulas into symbols
Symbols are extremely useful for constructing parameterized circuits (see above). However,
only some sympy formulas can be serialized for network transport to the engine.
Currently, sums and products of symbols, including linear combinations, are supported.
See cirq_google.arg_func_langs for details.
The sympy library is also infamous for being slow, so avoid using complicated formulas if you
care about performance. Avoid using parameter resolvers that have formulas in them.
One way to eliminate formulas in your gates is to flatten your expressions.
The following example shows how to take a gate with a formula and flatten it
to a single symbol with the formula pre-computed for each value of the sweep:
End of explanation |
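A minimal sketch of the two ideas above, using the local simulator as the sampler (on hardware you would obtain a sampler from your engine or processor instead); the second part assumes sweep_for_gate and flat_sweep from the cell above:
# run_batch: send several circuits that measure the same qubits, with the
# same repetition count, in a single call.
q = cirq.GridQubit(1, 1)
circuit_a = cirq.Circuit(cirq.X(q), cirq.measure(q, key='out'))
circuit_b = cirq.Circuit(cirq.H(q), cirq.measure(q, key='out'))
batch_sampler = cirq.Simulator()
batch_results = batch_sampler.run_batch([circuit_a, circuit_b], repetitions=5)
for circuit_index, circuit_results in enumerate(batch_results):
    for result in circuit_results:
        print(f"Circuit {circuit_index}: {result.measurements['out'].ravel()}")
# Using the flattened sweep: build a circuit from the flattened gate and
# sweep the pre-computed symbol values, so no formulas are sent to the engine.
flat_circuit = cirq.Circuit(sweep_for_gate.on(q), cirq.measure(q, key='out'))
flat_samples = batch_sampler.sample(flat_circuit, repetitions=5, params=flat_sweep)
print(flat_samples)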