# Your first image!
Ok, now that you understand what Docker is and why to use it (at least I hope so; if not, you will soon!), it is time to get more technical.
## Goal
In this module we will learn:
* The **concrete** difference between an **image** and a **container**
* How to create an image
* What a Dockerfile is and how to use it
* How to create a container out of an image
And we will run our very first container based on our very first image!

## Installation
First things first, we need to install Docker.
You just have to follow [Docker's well-made documentation](https://docs.docker.com/engine/install/ubuntu/).
If you're running Windows or Mac, see https://docs.docker.com/engine/install/ instead.
By default, Docker needs `sudo` for every command. Let's change that by adding your user to the `docker` group:
```bash
sudo gpasswd -a $USER docker
```
You will need to log out and back in before the change takes effect.
## How do we create an image
In order to create an image (or import one), we can either:
### Go to [Docker Hub](https://hub.docker.com/)
which is like GitHub for Docker images.
You need an image with a SQL DB in it? [There is one](https://hub.docker.com/r/mysalt/mysql).
You need an image with python 3.6? [There is one](https://hub.docker.com/r/silverlogic/python3.6).
And a lot of other images!
### Create your own Dockerfile!
In many cases, we will want to create our own images, with our own files and our own scripts.
In order to do that we will create what we call a `Dockerfile`.
This is just a file that is named `Dockerfile` and that contains a script that Docker can understand.
Based on that, it will create an image.
## Let's create our image
It's time! We will create a `Dockerfile` and use a Python image as our base, meaning that we start from an existing image to build our own.
So we don't have to start from scratch each time.
In this file we will add a line to tell Docker that we want to start from the official Python 3.7 image.
The `FROM` keyword is used to tell Docker which base image we will use.
```Dockerfile
FROM python:3.7
```
Now let's add another line to copy a file. In the folder you're in, there is a file named `hello_world.py`. This file contains a single line:
```python
print("Hello world!")
```
We will create a folder called `app` and put our file in it. As the Python image is built on top of a Debian-based Linux, we can use all the commands that work on such a system.
Let's see some useful keywords that can be used in a Dockerfile:
* The `RUN` keyword can be used to run a command on the system.
* The `COPY` keyword can be used to copy a file into the image.
* The `WORKDIR` keyword can be used to set the working directory in which all subsequent commands will run.
* The `CMD` keyword can be used to define the command that the container will run when it is launched.
```Dockerfile
RUN mkdir /app
RUN mkdir /app/code
COPY code/hello_world.py /app/code/hello_world.py
WORKDIR /app
CMD ["python", "code/hello_world.py"]
```
## Let's build our image!
Now we're ready to create our first image! Exciting right?
We say that we `build` our image. That's the term.
The command is: `docker build . -t hello`
* `docker` to specify that we use docker
* `build` to specify that we want to create an image
* `.` to specify that the Dockerfile is in the current directory
* `-t hello` to give our image a name. If we don't, we will have to refer to it by the ID that Docker assigns, which isn't easy to remember.
I already created the Dockerfile for you. Have a look at it!
```
!docker build . -t hello
```
As you can see, our image has been successfully built!
If you look at the last line of the output, you see:
```
Successfully tagged hello:latest
```
Our image has been tagged `hello:latest`. Since we didn't specify a tag at the end of the image name, Docker added `latest` for us.
If we make changes to our image and re-build it, the new image will get the `latest` tag and the old one will no longer have it.
That's useful when you always want to use the most recent version of your image.
We can also add our own tag, as follows:
```
!docker build . -t hello:v1.0
```
If we try to list all of our images, we will see that we have 3 images.
* One `hello` with the tag `latest`
* One `hello` with the tag `v1.0`
* One `python` with the tag `3.7` *(that's the one we used as base image)*
We can see it with:
```bash
docker image ls
```
```
!docker image ls
```
## Manage images
As you can see, images take up a lot of space on your hard drive! And the more complex your images are and the more dependencies they have, the bigger they will be.
It rapidly becomes a pain...
Thankfully, we can remove the ones we don't use anymore with the command:
```bash
docker image rm <IMAGE_ID>
```
As we see in the `docker image ls` output, each image has an ID. We will use that to remove them. Let's say we want to remove our `hello:v1.0` image.
```
hello latest 8f7ca704c0a8 7 minutes ago 876MB
```
Here the ID is `8f7ca704c0a8`. But multiple entries share the same ID. That's because multiple tags point to the same underlying image.
Confused? It's because Docker is really smart! We tried to create multiple images from the same Dockerfile, and nothing changed between the first build and the last one, neither in the files nor in the Dockerfile. So Docker knows it doesn't have to build a new image: it creates one image and links the additional tags to it.
So if we try to delete by ID, Docker will give us a warning and refuse, because multiple tags are linked to the same image.
So instead of using IDs we will use tags.
```
!docker image rm hello:v1.0
```
The tag has been removed!
```
!docker image ls
```
## Run it
Perfect! You understand what an image is now! Let's run it.
When we run the image, Docker will create a `container` (an instance of the image) and execute the command we defined with the `CMD` keyword.
We will do it with the command:
```bash
docker run -t hello:latest
```
* `run` tells Docker to create and start a container.
* `hello:latest` specifies which image it should use to create and run the `container` (the `-t` flag allocates a pseudo-terminal for its output).
If you don't add `:<YOUR_TAG>`, Docker will use `:latest` by default.
```
!docker run -t hello
```
Our container ran successfully. We can see that it printed 'Hello world!' as asked.
Ok. So we created a container and we ran it. Can we see it stored somewhere?
Let's try with `docker container ls` maybe?
```
!docker container ls
```
Damn it, the command seems to be right but there is nothing here!
Well, `docker container ls` only shows running containers, and this one is not running anymore because it completed the task we asked it to do!
So if we want to see all the containers, including stopped ones, we can do:
```bash
docker container ls -a
```
```
!docker container ls -a
```
Ok good! We can of course remove it with:
```bash
docker container rm <CONTAINER_ID>
```
So in this case `cd585b842f8b`
```
!docker container rm cd585b842f8b
```
It worked!
## Conclusion
Great! You now have a complete understanding of images and containers.
In the next module, we will dive deeper into containers and see what we can do with them.

```
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
import time
import cv2
from torch.utils.data import DataLoader
from torchvision import datasets as ds, transforms as tf
from torchvision.utils import make_grid
from collections import OrderedDict
data_root = 'C:/Datasets/Places365/train'
worker = 2
batch_size = 20
image_size = (32, 32)
image_size_out = (256, 256)
lr = 1e-3
beta1 = 9e-1
ngpu = 1
weight_decay = 50
is_cuda = torch.cuda.is_available()
device = torch.device('cuda:0' if is_cuda else 'cpu')
device
_transforms = {
"32x32": tf.Compose([
tf.Resize(image_size[0]),
tf.CenterCrop(image_size[0]),
tf.ToTensor()
]),
"256x256": tf.Compose([
tf.Resize(image_size_out[0]),
tf.CenterCrop(image_size_out[0]),
tf.ToTensor()
])
}
dataset32 = ds.ImageFolder(root=data_root, transform=_transforms["32x32"])
dataloader32 = DataLoader(dataset32, batch_size=batch_size, shuffle=False, num_workers=worker, pin_memory=is_cuda)
dataset256 = ds.ImageFolder(root=data_root, transform=_transforms["256x256"])
dataloader256 = DataLoader(dataset256, batch_size=batch_size, shuffle=False, num_workers=worker, pin_memory=is_cuda)
grid32 = next(iter(dataloader32))
grid256 = next(iter(dataloader256))
im32 = make_grid(grid32[0], nrow=10, normalize=True, padding=1)
im256 = make_grid(grid256[0], nrow=10, normalize=True)
plt.figure(figsize=(15, 7))
plt.subplot(211)
plt.imshow(np.transpose(im32, axes=(1, 2, 0)))
plt.axis('off')
plt.title('32 x 32 Image Sample')
plt.subplot(212)
plt.imshow(np.transpose(im256, axes=(1, 2, 0)))
plt.axis('off')
plt.title('256 x 256 Image Sample')
```
## Super Sampling Model Development.
```
class SuperSamplingModel(nn.Module):
def __init__(self, in_channels=3, out_channels=3):
super(SuperSamplingModel, self).__init__()
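        # global residual path: bicubic upsampling of the input by a factor of 8 (32x32 -> 256x256 here)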
self.bicubic_upsample = nn.Upsample(scale_factor=8, mode='bicubic', align_corners=False)
self.feature_extraction = nn.Sequential(OrderedDict([
(
'conv1', nn.Sequential(
nn.Conv2d(in_channels=in_channels, out_channels=32, kernel_size=3, stride=1),
nn.ReflectionPad2d(1),
nn.BatchNorm2d(32),
nn.PReLU()
)
),(
'conv2', nn.Sequential(
nn.Conv2d(in_channels=32, out_channels=26, kernel_size=3, stride=1),
nn.ReflectionPad2d(1),
nn.BatchNorm2d(26),
nn.PReLU()
)
),(
'conv3', nn.Sequential(
nn.Conv2d(in_channels=26, out_channels=22, kernel_size=3, stride=1),
nn.ReflectionPad2d(1),
nn.BatchNorm2d(22),
nn.PReLU()
)
),(
'conv4', nn.Sequential(
nn.Conv2d(in_channels=22, out_channels=18, kernel_size=3, stride=1),
nn.ReflectionPad2d(1),
nn.BatchNorm2d(18),
nn.PReLU()
)
),(
'conv5', nn.Sequential(
nn.Conv2d(in_channels=18, out_channels=14, kernel_size=3, stride=1),
nn.ReflectionPad2d(1),
nn.BatchNorm2d(14),
nn.PReLU()
)
),(
'conv6', nn.Sequential(
nn.Conv2d(in_channels=14, out_channels=11, kernel_size=3, stride=1),
nn.ReflectionPad2d(1),
nn.BatchNorm2d(11),
nn.PReLU()
)
),(
'conv7', nn.Sequential(
nn.Conv2d(in_channels=11, out_channels=8, kernel_size=3, stride=1),
nn.ReflectionPad2d(1),
nn.BatchNorm2d(8),
nn.PReLU()
)
)
]))
self.reconstruction = nn.Sequential(OrderedDict([
(
'conv1', nn.Sequential(
nn.ConvTranspose2d(in_channels=131, out_channels=24, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(24),
nn.PReLU()
)
),(
'conv2', nn.Sequential(
nn.ConvTranspose2d(in_channels=24, out_channels=8, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(8),
nn.PReLU()
)
),(
'conv3', nn.Sequential(
nn.ConvTranspose2d(in_channels=8, out_channels=8, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(8),
nn.PReLU()
)
),(
'conv4', nn.Sequential(
nn.ConvTranspose2d(in_channels=8, out_channels=3, kernel_size=1, stride=1),
nn.BatchNorm2d(3),
nn.PReLU()
)
)
]))
self.feature_extraction_children = OrderedDict(self.feature_extraction.named_children())
def forward(self, frame):
conv1 = self.feature_extraction_children['conv1'](frame)
conv2 = self.feature_extraction_children['conv2'](conv1)
conv3 = self.feature_extraction_children['conv3'](conv2)
conv4 = self.feature_extraction_children['conv4'](conv3)
conv5 = self.feature_extraction_children['conv5'](conv4)
conv6 = self.feature_extraction_children['conv6'](conv5)
conv7 = self.feature_extraction_children['conv7'](conv6)
upscaled = self.bicubic_upsample(frame)
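        # concatenate all seven feature maps: 32+26+22+18+14+11+8 = 131 channels, matching the reconstruction head's in_channels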
concat = torch.cat([conv1, conv2, conv3, conv4, conv5, conv6, conv7], dim=1)
reconstructed = self.reconstruction(concat)
reconstructed = torch.add(reconstructed, upscaled)
return reconstructed
netDLSS = SuperSamplingModel().to(device)
netDLSS
with torch.no_grad():
x = torch.ones(10, 3, 32, 32, device=device)
out = netDLSS(x)
print(out.shape)
criterion = nn.MSELoss().to(device)
optimizer = torch.optim.Adam(netDLSS.parameters(), lr=lr, betas=(beta1, 0.999))
windowName = "TRAINING VISUALIZATION"
cv2.namedWindow(windowName, flags=cv2.WINDOW_NORMAL)
cv2.resizeWindow(windowName, 1280, 256)
epochs = 5
batch_limit = 10000
print_limit = 500
is_broke = False
losses = []
start = time.time()
for i in range(epochs):
i += 1
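    # iterate the 32x32 inputs and the 256x256 targets in lockstep; shuffle=False keeps the two loaders aligned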
for batch, (X, y) in enumerate(zip(dataloader32, dataloader256)):
batch += 1
optimizer.zero_grad()
X = X[0].to(device)
y = y[0].to(device)
out = netDLSS(X)
loss = criterion(out, y)
loss.backward()
optimizer.step()
if batch == 1 or batch % print_limit == 0:
print(f'Epoch: {i}/{epochs}, Batch: {batch}/{batch_limit} -> Loss: {loss:.9f}')
            losses.append(loss.item())  # store a plain float (not a tensor with grad) so the list can be plotted later
if batch == 1 or batch % 100 == 0:
grid = make_grid(out.detach().cpu(), nrow=10, normalize=True)
grid = np.transpose(grid.numpy(), axes=(1, 2, 0))
grid = cv2.cvtColor(grid, code=cv2.COLOR_RGB2BGR)
cv2.imshow(windowName, grid)
if cv2.waitKey(1) & 0xFFF == 27:
is_broke = True
break
if batch_limit == batch:
break
if is_broke:
cv2.destroyAllWindows()
break
duration = time.time() - start
print(f'Execution Time -> {duration / 60:.4f} minutes')
cv2.destroyAllWindows()
plt.plot(losses)
plt.xlabel("EPOCHS")
plt.ylabel("LOSS")
plt.title("Loss x Epochs GRAPH")
```
```
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style ("darkgrid")
STATS_DIR = "/hg191/corpora/legaldata/data/stats/"
texts_file = os.path.join (STATS_DIR, "ops.texts")
dates_file = os.path.join (STATS_DIR, "ops.dates")
def count_occurrences (filename, mentions):
isPresent = list()
with open (filename) as fin:
for i, line in enumerate (fin):
tokens = set (line.strip().lower().split(" "))
isPresent.append ([mention in tokens for mention in mentions])
if (i+1) % 1000000 == 0:
print ("lines done: {0}".format (i+1))
print (i+1)
return isPresent
def readTexts (filename, mentions):
isPresent = list ()
with open (filename) as fin:
for i,line in enumerate (fin):
tokens = set (line.strip().split(" "))
isPresent.append ([mention in tokens for mention in mentions])
if (i+1) % 500000 == 0:
print ("lines done: {0}".format (i+1))
return isPresent
mentionsPresent = readTexts (texts_file, ["fertilization", "land", "soil", "seed", "womb", "pregnancy", "abortion", "women",
"laundering", "washing", "clothes", "cleaning", "money", "funds", "illegal"])
print (len (mentionsPresent))
# `years` is not defined above; load it from the dates file
# (assumption: ops.dates has one entry per line, aligned with ops.texts and starting with the year)
years = [int(line.strip()[:4]) for line in open(dates_file)]
df = pd.DataFrame()
df["years"] = pd.Series (years)
df["fertilization"] = pd.Series ([item[0] for item in mentionsPresent])
df["land"] = pd.Series ([item[1] for item in mentionsPresent])
df["soil"] = pd.Series ([item[2] for item in mentionsPresent])
df["seed"] = pd.Series ([item[3] for item in mentionsPresent])
df["womb"] = pd.Series ([item[4] for item in mentionsPresent])
df["pregnancy"] = pd.Series ([item[5] for item in mentionsPresent])
df["abortion"] = pd.Series ([item[6] for item in mentionsPresent])
df["women"] = pd.Series ([item[7] for item in mentionsPresent])
df["laundering"] = pd.Series ([item[8] for item in mentionsPresent])
df["washing"] = pd.Series ([item[9] for item in mentionsPresent])
df["clothes"] = pd.Series ([item[10] for item in mentionsPresent])
df["cleaning"] = pd.Series ([item[11] for item in mentionsPresent])
df["money"] = pd.Series ([item[12] for item in mentionsPresent])
df["funds"] = pd.Series ([item[13] for item in mentionsPresent])
df["illegal"] = pd.Series ([item[14] for item in mentionsPresent])
def aggregate_df (df, word1, word2, word3):
rows = list ()
grouped = df.groupby(by="years")
for name, group in grouped:
row = list ()
row.append (name)
row.append (group[group[word1] == True].__len__ ())
row.append ((group[(group[word1] == True) & (group[word2] == True)]).__len__())
row.append ((group[(group[word1] == True) & (group[word3] == True)]).__len__())
rows.append (row)
agg = pd.DataFrame (rows, columns=["years", "nw1", "nw1&w2", "nw1&w3"])
return agg
fertilization_df = aggregate_df (df, "fertilization", "land", "pregnancy")
fertilization_df = fertilization_df[fertilization_df["nw1"] != 0]
fertilization_df["pw2|w1"] = fertilization_df["nw1&w2"] / fertilization_df["nw1"]
fertilization_df["pw3|w1"] = fertilization_df["nw1&w3"] / fertilization_df["nw1"]
laundering_df = aggregate_df (df, "laundering", "cleaning", "funds")
laundering_df = laundering_df[laundering_df["nw1"] != 0]
laundering_df["pw2|w1"] = laundering_df["nw1&w2"] / laundering_df["nw1"]
laundering_df["pw3|w1"] = laundering_df["nw1&w3"] / laundering_df["nw1"]
laundering_df[(laundering_df["years"] >= 1970) & (laundering_df["years"] < 1980)]
%matplotlib inline
sns.set_context ("paper")
fig, ax = plt.subplots(1, 1, figsize=(4,3))
sns.lineplot(x="years", y="pw2|w1", data=laundering_df, ax=ax, color='blue')
sns.lineplot(x="years", y="pw3|w1", data=laundering_df, ax=ax, color='red')
ax.axvline(x=1975, c="black")
ax.set_xlabel("Years")
ax.legend(("P(land | fertilization)", "P(abortion | fertilization)"), fontsize=10, loc="upper center")
ax.set_ylabel("Probability")
fertilization_df = aggregate_df (df, "fertilization", "land", "pregnancy")
fertilization_df.to_csv ("../data/frames/fertilization.timeseries", sep=",", header=True, index=False)
laundering_df = aggregate_df (df, "laundering", "cleaning", "funds")
laundering_df.to_csv ("../data/frames/laundering.timeseries", sep=",", header=True, index=False)
```
<img width="100" src="https://carbonplan-assets.s3.amazonaws.com/monogram/dark-small.png" style="margin-left:0px;margin-top:20px"/>
# Forest Emissions Tracking - Validation
_CarbonPlan ClimateTrace Team_
This notebook compares our estimates of country-level forest emissions to prior estimates from other
groups. The notebook currently compares against:
- Global Forest Watch (Zarin et al. 2016)
- Global Carbon Project (Friedlingstein et al. 2020)
```
import geopandas
import pandas as pd
from io import StringIO
import matplotlib.pyplot as plt
import xarray as xr
from carbonplan_styles.mpl import set_theme
from carbonplan_styles.colors import colors
from carbonplan_trace.v1.emissions_workflow import open_fire_mask
import urllib3
import numpy as np
urllib3.disable_warnings()
set_theme()
```
# load in the 3km rasters of v1 and v0
```
coarse_v1 = xr.open_zarr("s3://carbonplan-climatetrace/v1.2/results/global/3000m/raster_split.zarr")
coarse_v0 = xr.open_zarr("s3://carbonplan-climatetrace/v0.4/global/3000m/raster_split.zarr")
coarse_v0 = coarse_v0.assign_coords({"lat": coarse_v1.lat.values, "lon": coarse_v1.lon.values})
```
# load in a sample 30m tile (this one covers the PNW)
```
tile = "50N_130W"
version = "v1.2"
biomass_30m = xr.open_zarr(f"s3://carbonplan-climatetrace/{version}/results/tiles/{tile}.zarr")
biomass_30m = biomass_30m.rename({"time": "year"})
biomass_30m = biomass_30m.assign_coords({"year": np.arange(2014, 2021)})
emissions_30m_v0 = xr.open_zarr(f"s3://carbonplan-climatetrace/v0.4/tiles/30m/{tile}_tot.zarr/")
emissions_30m_v0 = emissions_30m_v0.rename({"emissions": "Emissions [tCO2/ha]"})
emissions_30m_v1 = xr.open_zarr(
f"s3://carbonplan-climatetrace/{version}/results/tiles/30m/{tile}_split.zarr/"
)
emissions_30m_v1["Emissions\n[tCO2/ha]"] = (
emissions_30m_v1["emissions_from_clearing"] + emissions_30m_v1["emissions_from_fire"]
)
min_lat, max_lat, min_lon, max_lon = 47.55, 47.558, -121.834, -121.82 # north bend
# min_lat, max_lat, min_lon, max_lon = -6.32,-6.318, -53.446,-53.445#amazon
small_subset = {"lat": slice(min_lat, max_lat), "lon": slice(min_lon, max_lon)}
small_subset_reversed = {"lat": slice(max_lat, min_lat), "lon": slice(min_lon, max_lon)}
def format_panel_plot(fg, titles, label_coords):
for year, panel in zip(titles, fg.axes[0]):
panel.plot(
label_coords["lon"],
label_coords["lat"],
marker="o",
fillstyle="none",
markeredgecolor="blue",
)
panel.set_xlabel("")
panel.set_ylabel("")
panel.set_title(year)
panel.ticklabel_format(useOffset=False)
```
# make the figures that go into the v1 methods doc where we track how biomass changes and how that translates into emissions
```
sample_pixel = {"lat": 47.554, "lon": -121.825}
fg = (
biomass_30m.sel(small_subset)
.rename({"AGB": "AGB [t/ha]"})["AGB [t/ha]"]
.plot(col="year", vmax=200, vmin=0, cmap="earth_light", figsize=(22, 3))
)
format_panel_plot(fg, biomass_30m.year.values, sample_pixel)
fg = (
emissions_30m_v1.sel(small_subset)
.rename({"Emissions\n[tCO2/ha]": "Emissions - v1 - [tCO2/ha]"})["Emissions - v1 - [tCO2/ha]"]
.plot(col="year", vmax=400, cmap="fire_light", figsize=(19, 3))
)
format_panel_plot(fg, emissions_30m_v1.year.values, sample_pixel)
fg = (
emissions_30m_v1.sel(small_subset)
.rename({"sinks": "Sinks [tCO2/ha]"})["Sinks [tCO2/ha]"]
.plot(col="year", cmap="water_light_r", figsize=(19, 3))
)
format_panel_plot(fg, emissions_30m_v1.year.values, sample_pixel)
```
# compare those emissions to the same region in v0 (emissions only come off in 2018, since that is the only year with a loss according to Hansen)
```
fg = (
emissions_30m_v0.sel(small_subset_reversed)
.sel(year=slice(2015, 2020))
.rename({"Emissions [tCO2/ha]": "Emissions - v0 - [tCO2/ha]"})["Emissions - v0 - [tCO2/ha]"]
.plot(col="year", vmax=400, cmap="fire_light", figsize=(19, 3))
)
format_panel_plot(fg, emissions_30m_v0.sel(year=slice(2015, 2020)).year.values, sample_pixel)
```
# look at an individual pixel and track how the carbon pools change over time
```
# north_bend_cell was not defined in this notebook snapshot; assume it is the same North Bend
# pixel as sample_pixel above, snapped to the nearest cell of the 30m grid
_nearest = biomass_30m.sel(sample_pixel, method="nearest")
north_bend_cell = {"lat": float(_nearest.lat), "lon": float(_nearest.lon)}
# the total carbon pool must be defined before it is plotted
all_carbon_pools = (
    biomass_30m["AGB"] + biomass_30m["BGB"] + biomass_30m["dead_wood"] + biomass_30m["litter"]
)
fig, axarr = plt.subplots(nrows=2, sharex=True)
all_carbon_pools.sel(north_bend_cell).plot(label="Total carbon pool", ax=axarr[0], color="k")
biomass_30m.sel(north_bend_cell).AGB.plot(label="AGB", ax=axarr[0])
biomass_30m.sel(north_bend_cell).BGB.plot(label="BGB", ax=axarr[0])
biomass_30m.sel(north_bend_cell).dead_wood.plot(label="Dead wood", ax=axarr[0])
biomass_30m.sel(north_bend_cell).litter.plot(label="Litter", ax=axarr[0])
ax = axarr[0].set_ylabel("Biomass\n[t/hectare]")
emissions_30m_v1["Emissions\n[tCO2/ha]"].sel(north_bend_cell).plot(
label="Emissions", ax=axarr[1], color="k", linestyle="--"
)
axarr[1].set_ylabel("Emissions\n[tCO2/ha]")
lines, labels = [], []
for ax in axarr:
axLine, axLabel = ax.get_legend_handles_labels()
ax.set_title("")
ax.set_xlabel("")
lines.extend(axLine)
labels.extend(axLabel)
# fig.legend(lines, labels,
# loc = 'upper right')
fig.legend(lines, labels, bbox_to_anchor=(0, 0), loc="upper left", ncol=3)
plt.xlabel("")
plt.savefig("single_cell_timeseries.png", dpi=200)
```
# then compare the emissions for that pixel between v0 and v1
```
emissions_30m_v0["emissions"].sel(year=slice(2015, 2020)).sel(north_bend_cell).plot(label="v0")
emissions_30m_v1["total_emissions"].sel(north_bend_cell).plot(label="v1")
plt.legend()
```
# create a one degree roll up to inspect global emissions
```
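# aggregate 40 x 40 blocks of the 3 km raster (~120 km, roughly one degree)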
one_degree_raster = coarse_v1.coarsen(lat=40, lon=40).sum().compute() # .drop("spatial_ref")
```
# sources annual average
```
(
(coarse_v1["emissions_from_clearing"] + coarse_v1["emissions_from_fire"]).sum(dim="year") / 6
).plot(cmap="fire_light", robust=True)
```
# sources 1 degree
```
(
(one_degree_raster["emissions_from_clearing"] + one_degree_raster["emissions_from_fire"]).sum(
dim="year"
)
/ 6
).plot(cmap="fire_light", robust=True)
```
# sources 1 degree v0
```
one_degree_raster_v0 = coarse_v0.coarsen(lat=40, lon=40).sum().compute() # .drop("spatial_ref")
(
(
one_degree_raster_v0["emissions_from_clearing"]
+ one_degree_raster_v0["emissions_from_fire"]
).sum(dim="year")
/ 6
).plot(cmap="fire_light", robust=True)
```
# sinks at one degree
```
(one_degree_raster["sinks"].sum(dim="year") / 6).plot(robust=True, cmap="water_light_r")
```
# net emissions averaged over 2015-2020
```
# net 1 degree
(one_degree_raster.to_array("variable").sum(dim="variable").sum(dim="year") / 6).plot(
robust=True, cmap="orangeblue_light_r"
)
```
# or every year separately- the disparities among regions being net sources or sinks gets stronger in 2020
```
one_degree_raster.to_array("variable").sum(dim="variable").plot(
vmin=-1.5e6, vmax=1.5e6, col="year", col_wrap=3, cmap="orangeblue_light_r"
)
one_degree_raster.sel(year=2020).to_array("variable").sum(dim="variable").plot(
robust=True, cmap="orangeblue_light_r"
)
```
# total global emissions timeseries 2015-2020
```
coarse_v0["total_emissions"].sel(year=slice(2015, 2020)).sum(dim=["lat", "lon"]).plot(label="v0")
coarse_v1["total_emissions"].sel(year=slice(2015, 2020)).sum(dim=["lat", "lon"]).plot(label="v1")
plt.ylabel("Global emissions \n[tCO2/year]")
plt.xlabel("")
plt.legend()
```
In this unit, you'll learn about variables in Python. Variables are containers that you can store data in and use at a different time.
There are four main types of data variables that you'll encounter throughout this content:
- Integers (int): Whole numbers, like 1, 4, 10, -5.
- Floats: Decimal numbers, like 0.3, 1.6, 17.4, -3.5.
- String: Chains of characters that are surrounded by single or double quotes, like "hi", "NASA", "Space Rocks", "54321".
- Boolean: Represents either True or False.
Try running this code to create some of your own variables. Note that you won't see any output in the cell:
```
# Integer Variable
numberOfRocks = 5
# Float Variable
tempInSpace = -457.87
# String Variable
roverName = "Artemis Rover"
# Boolean Variable
rocketOn = False
```
When you have created your variables, you can view the values by writing the variable's name and selecting the run button.
```
roverName
```
A unique aspect of Python is that you don't need to tell the program what type of variable you're making. For example, in some languages, if you want to make an integer, you must first let the computer know that you're about to create an integer. In Java, for example, you might write:
`int intVar = 0;`.
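For instance, here is a small illustrative snippet (the variable names here are just examples, not part of the lesson's program) showing that Python infers the type from the value you assign, which you can confirm with the built-in `type()` function:
```
# Python infers the type from the value assigned; no declaration is needed
rockCount = 5              # int
rockTemp = -457.87         # float
rockLabel = "basalt"       # str
print(type(rockCount), type(rockTemp), type(rockLabel))
```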
Later on in our program, we'll be using variables to store the number of a certain type of rock we find. One of the main types of moon rock is called basalt. We can make a variable named `basaltRockCount` and give it a value of 0 by using the following code:
```
# Create integer variable named basaltRockCount with value 0
basaltRockCount = 0
```
We can also change the value of the variable:
```
basaltRockCount = 3
basaltRockCount = basaltRockCount + 1
basaltRockCount
```
This can be useful if we want to inform the computer that we've found three rocks so far and we've just found another.
An easy way to update a variable is to write the operator you want to apply followed by an equal sign. This performs the operation and assigns the new value back to the variable. For example:
```
basaltRockCount = 5
basaltRockCount += 3 #Add 3
basaltRockCount -= 2 #Remove 2
```
Finally, you can use all of the arithmetic examples we've used so far with variables.
```
# Find out how many miles until rocket reaches moon
distanceToMoon = 238855
distanceTraveled = 10234
distanceToMoon - distanceTraveled
```
Although you can give a variable any name, it's recommended that you choose something that represents the data it holds. For example, you could have written the above code as:
```
# Find out how many miles until rocket reaches moon
a = 238855
b = 10234
a - b
```
This code is likely to be confusing because `a` and `b` could represent numbers, rather than the values intended. Using `distanceToMoon` and `distanceTraveled` is clearer.
## Reading the corpus
```
from gensim.models import Word2Vec
corpus = []
for line in open("data/phone_review_mecab.txt", "r", encoding="utf-8").readlines():
tokens = line.strip().split()
corpus.append(tokens)
corpus
```
## Training the Skip-Gram embedding
```
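# sg=1 selects the Skip-Gram architecture; size=100 is the embedding dimensionality (gensim 3.x keywords)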
embedding_model = Word2Vec(corpus,
size=100,
window = 2,
workers=4,
sg=1)
embedding_model.save("embedding/word2vec")
```
## Loading the Skip-Gram embedding
```
from gensim.models import Word2Vec
embedding_model = Word2Vec.load("embedding/word2vec")
embedding_model.wv["나"]
embedding_model.most_similar("한국", topn=5)
vocab = embedding_model.wv.index2word
```
## Visualizing the embedding
```
from bokeh.io import output_notebook, show
output_notebook()
import pandas as pd
from sklearn.manifold import TSNE
from bokeh.plotting import figure
from bokeh.models import LinearColorMapper, ColumnDataSource, LabelSet
def visualize_words(words, vecs, palette="Viridis256"):
tsne = TSNE(n_components=2)
tsne_results = tsne.fit_transform(vecs)
df = pd.DataFrame(columns=['x', 'y', 'word'])
df['x'], df['y'], df['word'] = tsne_results[:, 0], tsne_results[:, 1], list(words)
source = ColumnDataSource(ColumnDataSource.from_df(df))
labels = LabelSet(x="x", y="y", text="word", y_offset=8,
text_font_size="15pt", text_color="#555555",
source=source, text_align='center')
color_mapper = LinearColorMapper(palette=palette, low=min(tsne_results[:, 1]), high=max(tsne_results[:, 1]))
plot = figure(plot_width=800, plot_height=1000)
plot.scatter("x", "y", size=12, source=source, color={'field': 'y', 'transform': color_mapper}, line_color=None,
fill_alpha=0.8)
plot.add_layout(labels)
show(plot)
import random
words = random.sample(vocab[:3000], 100)
vecs = [embedding_model.wv[word] for word in words]
visualize_words(words, vecs)
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from bokeh.models import ColorBar, BasicTicker
def visualize_between_words(words, vecs, palette="Viridis256"):
df_list = []
for word1_idx, word1 in enumerate(words):
for word2_idx, word2 in enumerate(words):
vec1 = vecs[word1_idx]
vec2 = vecs[word2_idx]
if np.any(vec1) and np.any(vec2):
score = cosine_similarity(X=[vec1], Y=[vec2])
df_list.append({'x': word1, 'y': word2, 'similarity': score[0][0]})
df = pd.DataFrame(df_list)
color_mapper = LinearColorMapper(palette=palette, low=1, high=0)
TOOLS = "hover,save,pan,box_zoom,reset,wheel_zoom"
p = figure(x_range=list(words), y_range=list(reversed(list(words))),
x_axis_location="above", plot_width=900, plot_height=900,
toolbar_location='below', tools=TOOLS,
tooltips=[('words', '@x @y'), ('similarity', '@similarity')])
p.grid.grid_line_color = None
p.axis.axis_line_color = None
p.axis.major_tick_line_color = None
p.axis.major_label_standoff = 0
p.xaxis.major_label_orientation = 3.14 / 3
p.rect(x="x", y="y", width=1, height=1,
source=df,
fill_color={'field': 'similarity', 'transform': color_mapper},
line_color=None)
color_bar = ColorBar(ticker=BasicTicker(desired_num_ticks=5),
color_mapper=color_mapper, major_label_text_font_size="7pt",
label_standoff=6, border_line_color=None, location=(0, 0))
p.add_layout(color_bar, 'right')
show(p)
words = random.sample(vocab[:3000], 30)
vecs = [embedding_model.wv[word] for word in words]
visualize_between_words(words, vecs)
```
## Saving the embedding as text
```
with open("embedding/word2vec.txt", "w", encoding="utf-8") as f:
for token in vocab:
vec = [str(el) for el in embedding_model.wv[token]]
line = token + " " + " ".join(vec) + "\n"
f.writelines(line)
```
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
from matplotlib import rc, font_manager
ticks_font = font_manager.FontProperties(family='serif', style='normal',
size=24, weight='normal', stretch='normal')
import scipy.integrate as integrate
import scipy.optimize as optimize
b = np.sqrt(3)/2
ap = np.sqrt(2/3)/b
c_i = 0.3
def kappa_i_integral(x):
return np.exp(-x**2/2)/np.sqrt(2*np.pi)
def L(b,c_i,kappa_i):
f = lambda x: np.exp(-x**2/2)/np.sqrt(2*np.pi)
y = integrate.quad(f,kappa_i,np.inf)
return b/(3*y[0]*c_i) - 1
# solve L(b, c_i, kappa_i) = 0 for kappa_i (fsolve varies the first argument of the callable)
kappa_i = optimize.fsolve(lambda k: L(b, c_i, float(k)), 1)
L(b, c_i, float(kappa_i))  # residual should be ~0
def y(x):
return x**2 - 2*x
optimize.root(y,1)
y
x = np.linspace(-4,4)
plt.plot(x,y(x))
plt.axhline(0)
# Mo
inputdata = {
"E_f_v" :2.96 ,
"E_f_si" : 7.419 ,
"a_0" : 3.14,
"E_w" : 0.146,
"G": 51,
"rho" : 4e13
}
experiment_conditions = {
"T" : 300,
"strain_r" : 0.001
}
4e13/10**13
class Suzuki_model_RWASM:
def __init__(self,
inputdata,
composition,
experiment_conditions):
# conditions
self.strain_r = experiment_conditions['strain_r']
self.T = experiment_conditions['T']
# constants
self.boltzmann_J = 1.380649e-23
self.boltzmann_eV = 8.617333262145e-5
self.kT = self.boltzmann_J * self.T
self.J2eV = self.boltzmann_eV/self.boltzmann_J
self.eV2J = 1/self.J2eV
self.Debye = 5 * 10**(12) # Debye frequency /s
self.rho = inputdata['rho']
# properties
self.E_f_v = inputdata['E_f_v'] * self.eV2J
self.E_f_si = inputdata['E_f_si'] * self.eV2J
self.a_0 = inputdata['a_0']
self.E_w = inputdata['E_w'] * self.eV2J
self.c = composition
self.G = inputdata['G']
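        # b = a_0*sqrt(3)/2 is the 1/2<111> Burgers vector length; a_p = a_0*sqrt(2/3) is the Peierls valley spacing (bcc)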
self.b = self.a_0 * np.sqrt(3) / 2
self.a_p = self.a_0 * np.sqrt(2/3)
self.E_vac = 0.707 * self.E_f_v /self.b
self.E_int = 0.707 * self.E_f_si /self.b
self.lambda_k = self.b * 10
def L(self,kappa_i):
f = lambda x: np.exp(-x**2/2)/np.sqrt(2*np.pi)
y = integrate.quad(f,kappa_i,np.inf)
        return self.b/(3*y[0]*self.c) - 1
    def tau_y_optimize(self):
        # tau_j term built from the vacancy- and interstitial-jog energies
        self.tau_j = lambda kappa_i: (self.E_int + self.E_vac)/(4*self.b*self.L(kappa_i))
        # activation volume (uses the local tau_k argument)
        self.Delta_V = lambda tau_k, kappa_i: 3 * kappa_i**2 * self.E_w**2 * self.c / (2*tau_k**2*self.a_p*self.b**2) + \
            tau_k**2 * self.a_p**3 * self.b**4 * self.lambda_k**2 / (6*kappa_i**2 * self.E_w**2 * self.c)
        self.S = lambda tau_k, kappa_i: 18 * kappa_i**2 * self.E_w**2 * self.c * self.kT / (self.a_p**3 * self.b**4 * self.lambda_k**2) * \
            np.log( (5*np.pi*self.kT)**2 * self.Debye * self.a_p * self.b / ((self.G*self.b*self.Delta_V(tau_k, kappa_i))**2 * self.strain_r) )
        self.R = lambda kappa_i: 27 * kappa_i**4 * self.E_w**4 * self.c**2 / (self.a_p**4 * self.b**6 * self.lambda_k**2)
        self.tau_k_opt_func = lambda tau_k, kappa_i: tau_k**4 + tau_k*self.S(tau_k, kappa_i) - self.R(kappa_i)
        self.tau_y = lambda tau_k, kappa_i: self.tau_j(kappa_i) + self.tau_k_opt_func(tau_k, kappa_i)
        # minimize over x = [tau_k, kappa_i]; the non-zero starting point is an assumption,
        # since Delta_V (and therefore S) diverges at tau_k = 0
        self.result = optimize.minimize(lambda x: self.tau_y(x[0], x[1]), [1e-3, 1.0])
        return self.result
model = Suzuki_model_RWASM(inputdata,
1,
experiment_conditions)
model.tau_y_optimize()
```
# Credit Risk Resampling Techniques
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
```
# Read the CSV and Perform Basic Data Cleaning
```
columns = [
"loan_amnt", "int_rate", "installment", "home_ownership",
"annual_inc", "verification_status", "issue_d", "loan_status",
"pymnt_plan", "dti", "delinq_2yrs", "inq_last_6mths",
"open_acc", "pub_rec", "revol_bal", "total_acc",
"initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt",
"total_pymnt_inv", "total_rec_prncp", "total_rec_int", "total_rec_late_fee",
"recoveries", "collection_recovery_fee", "last_pymnt_amnt", "next_pymnt_d",
"collections_12_mths_ex_med", "policy_code", "application_type", "acc_now_delinq",
"tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il",
"open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il",
"il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc",
"all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl",
"inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy",
"bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct",
"mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc",
"mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0",
"num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies",
"tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit",
"total_il_high_credit_limit", "hardship_flag", "debt_settlement_flag"
]
target = ["loan_status"]
# Load the data
file_path = Path('LoanStats_2019Q1.zip')
df = pd.read_csv(file_path, skiprows=1)[:-2]
df = df.loc[:, columns].copy()
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
# Remove the `Issued` loan status
issued_mask = df['loan_status'] != 'Issued'
df = df.loc[issued_mask]
# convert interest rate to numerical
df['int_rate'] = df['int_rate'].str.replace('%', '')
df['int_rate'] = df['int_rate'].astype('float') / 100
# Convert the target column values to low_risk and high_risk based on their values
x = {'Current': 'low_risk'}
df = df.replace(x)
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk')
df = df.replace(x)
df.reset_index(inplace=True, drop=True)
df.head()
# Check for missing values
missing_data = df.isnull()
for column in missing_data.columns :
print(column,':')
print(missing_data[column].value_counts())
print()
df['loan_status'].value_counts()
# Convert string values to numerical
df = pd.get_dummies(df, columns=["home_ownership", "verification_status","issue_d", "pymnt_plan",
"initial_list_status", "next_pymnt_d","application_type", "hardship_flag",
"debt_settlement_flag"])
df['loan_status'].replace(to_replace=['low_risk','high_risk'], value=[0,1],inplace=True)
df.head()
```
# Split the Data into Training and Testing
```
# Create our features
X = df.copy()
X = X.drop("loan_status", axis=1)
X.head()
# Create our target
y = df["loan_status"].values
X.describe()
# Normal train-test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
```
# Oversampling
In this section, you will compare two oversampling algorithms to determine which algorithm results in the best performance. You will oversample the data using the naive random oversampling algorithm and the SMOTE algorithm. For each algorithm, be sure to complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests. (A quick example of step 1 on the raw training split follows this note.)
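For reference, step 1 can also be run on the untouched training split before any resampling; a minimal check like the one below (assuming the train-test split from the previous cell) makes the class imbalance explicit.
```
# Class counts in the original training data, before any resampling
Counter(y_train)
```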
### Naive Random Oversampling
```
# Resample the training data with the RandomOversampler
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=1)
X_resampled, y_resampled = ros.fit_resample(X_train, y_train)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
y_pred = model.predict(X_test)
confusion_matrix(y_test, y_pred)
# Calculated the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
balanced_accuracy_score(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
```
### SMOTE Oversampling
```
# Resample the training data with SMOTE
from imblearn.over_sampling import SMOTE
X_resampled, y_resampled = SMOTE(random_state=1, sampling_strategy=1.0).fit_resample(
X_train, y_train
)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
# Calculated the balanced accuracy score
y_pred = model.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred))
```
# Undersampling
In this section, you will test an undersampling algorithm to determine whether it results in better performance than the oversampling algorithms above. You will undersample the data using the Cluster Centroids algorithm and complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.
```
# Resample the data using the ClusterCentroids resampler
# Warning: This is a large dataset, and this step may take some time to complete
from imblearn.under_sampling import ClusterCentroids
cc = ClusterCentroids(random_state=1)
X_resampled, y_resampled = cc.fit_resample(X_train, y_train)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=78)
model.fit(X_resampled, y_resampled)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
y_pred = model.predict(X_test)
confusion_matrix(y_test, y_pred)
# Calculate the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
balanced_accuracy_score(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
```
# Combination (Over and Under) Sampling
In this section, you will test a combination over- and under-sampling algorithm to determine whether it results in better performance than the other sampling algorithms above. You will resample the data using the SMOTEENN algorithm and complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.
```
# Resample the training data with SMOTEENN
# Warning: This is a large dataset, and this step may take some time to complete
from imblearn.combine import SMOTEENN
# Use random_state=1 to match the note above, and resample only the
# training split so that no test-set information leaks into training
smote_enn = SMOTEENN(random_state=1)
X_resampled, y_resampled = smote_enn.fit_resample(X_train, y_train)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
y_pred = model.predict(X_test)
confusion_matrix(y_test, y_pred)
# Calculated the balanced accuracy score
balanced_accuracy_score(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
```
# Clustering
In this notebook we will study about **K-Means** algorithm but first we will start with **Loading Data**. Before exploring data, let us have a look at the data dictionary
Following is the Data Dictionary for Credit Card dataset :-
**CUST_ID** : Identification of Credit Card holder (Categorical) <br/>
**BALANCE** : Balance amount left in their account to make purchases <br/>
**PURCHASES** : Amount of purchases made from account <br/>
**INSTALLMENTS_PURCHASES** : Amount of purchase done in installment <br/>
**CASH_ADVANCE** : Cash in advance given by the user <br/>
**CREDIT_LIMIT** : Limit of Credit Card for user <br/>
**PAYMENTS** : Amount of Payment done by user <br/>
**MINIMUM_PAYMENTS** : Minimum amount of payments made by user <br/>
**TENURE** : Tenure of credit card service for user
## Loading Data
```
import pandas as pd
import numpy as np
from sklearn.cluster import KMeans
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
```
**Task 1:** Read the CSV file "credit_card.csv" from the system. It is important to make a copy of the data first.
```
#write code here
data =pd.read_csv("credit_card.csv")
df= data.copy()
```
**Task 2:** Get the shape of data
```
#write code here
df.shape
```
**Task 3:** Display first five rows
```
#write code here
df.head()
```
**Task 4:** Display data types of Data
```
#write code here
df.dtypes
```
**Task 5:** Check missing values
```
#write code here
df.isnull().sum()
```
**Task 6:** Check the statistics
```
#write code here
df.describe()
```
**Task 7:** Remove **CUST_ID**
```
#Write code here
X=df.drop(["CUST_ID"],axis=1)
df.head()
```
# KMeans
K-means clustering is a type of unsupervised learning, which is used when you have unlabeled data (i.e., data without defined categories or groups). The goal of this algorithm is to find groups in the data, with the number of groups represented by the variable K. The algorithm works iteratively to assign each data point to one of K groups based on the features that are provided. Data points are clustered based on feature similarity. The results of the K-means clustering algorithm are:
<br><br>
<li>The centroids of the K clusters, which can be used to label new data</li>
<li>Labels for the training data (each data point is assigned to a single cluster)</li><br>
Rather than defining groups before looking at the data, clustering allows you to find and analyze the groups that have formed organically. The "Choosing K" section below describes how the number of groups can be determined.
Each centroid of a cluster is a collection of feature values which define the resulting groups. Examining the centroid feature weights can be used to qualitatively interpret what kind of group each cluster represents.
```
kmeans = KMeans(n_clusters=5, random_state=0).fit(X)
```
The `fit` method runs the K-means algorithm on the provided dataset.
Now let's make a copy of X in a new variable ***pred***.
To find out which observation belongs to which cluster, the fitted model exposes the attribute ***labels_***. We take this list of labels and assign it to a new column ***kmean1***.
```
pred = X.copy()
pred['kmean1'] = kmeans.labels_
pred.head()
```
The **kmean1** column shows the labels assigned by the K-means algorithm. For example, row index 0 belongs to cluster 0, row 1 belongs to cluster 1, row 2 belongs to cluster 4, and so on.
```
pred['kmean1'].value_counts()
```
The above output shows the number of observations in each cluster.
# Scaling
#### Why do we need scaling?
<br>Since the ranges of values in raw data vary widely, the objective functions of some machine learning algorithms will not work properly without normalization.
### Scaling using min max
Also known as min-max scaling or min-max normalization, this is the simplest method and consists of rescaling the range of each feature to [0, 1]. Selecting the target range depends on the nature of the data. The general formula is given as:
<br>
*Formula*
<br>z_i = (x_i − min(x)) / (max(x) − min(x)) (a small manual example follows below)
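To make the formula concrete, here is a small sketch (not part of the original notebook) that applies the min-max formula manually with pandas; assuming the `X` DataFrame from Task 7 is available, the result should match what `MinMaxScaler` produces in the next section.
```
# Manual min-max scaling, column by column, using the formula above
manual_scaled = (X - X.min()) / (X.max() - X.min())
manual_scaled.describe()  # every column should now lie in [0, 1]
```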
### Scaling using MinMaxScaler function
```
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
new=scaler.fit_transform(X)
type(new)
new
```
In the above step, the scaling is done by the built-in `MinMaxScaler` function.
```
col_names=["BALANCE", "PURCHASES","INSTALLMENTS_PURCHASES","CASH_ADVANCE","CREDIT_LIMIT", "PAYMENTS", "MINIMUM_PAYMENTS","TENURE"]
scaled=pd.DataFrame(columns=col_names,data=new)
scaled.head()
scaled.describe()
```
Now we will use the scaled variables and see how our clusters differ
**Task 8:** Apply ***fit*** on **scaled** dataset and put the labels in the predicted data.
Also display value count
```
#Write code here
kmean2 =KMeans(n_clusters=5,random_state=0)
#Write code to fit
kmean2.fit(scaled)
pred=scaled.copy()
#Write code to put labels into predicted data
pred['kmean2'] = kmean2.labels_
#View the final data set i.e top 5 rows
pred.head()
#Write code here to view value counts
pred["kmean2"].value_counts()
kmean2.inertia_ #cost function
```
From the above output you can see that now the distribution of the clusters has changed
## Choosing K
### Elbow Analysis
The Elbow method is a method of interpretation and validation of consistency within cluster analysis, designed to help find the appropriate number of clusters in a dataset.
### Working
One method to validate the number of clusters is the elbow method. The idea is to run k-means clustering on the dataset for a range of values of k (say, k from 1 to 15, as in the code below) and, for each value of k, calculate the sum of squared errors (SSE). Then, plot a line chart of the SSE for each value of k. If the line chart looks like an arm, the "elbow" of the arm is the best value of k. The idea is that we want a small SSE, but the SSE tends to decrease toward 0 as we increase k (the SSE is 0 when k equals the number of data points, because then each data point is its own cluster and there is no error between it and the center of its cluster). So our goal is to choose a small value of k that still has a low SSE, and the elbow usually represents the point where increasing k starts to give diminishing returns.
```
cost = []
for k in range(1, 15):
kmeanModel = KMeans(n_clusters=k, random_state=0).fit(scaled)
cost.append([k,kmeanModel.inertia_])
cost
plt.figure(figsize=(15,6))
sns.set_context('poster')
plt.plot(pd.DataFrame(cost)[0], pd.DataFrame(cost)[1])
plt.xlabel('k')
plt.ylabel('Cost')
plt.title('The Elbow Method showing the optimal k')
plt.show()
```
From the above graph we can see that the elbow forms at 3 clusters.
<br>But before proceeding, let us check the **Silhouette Score**
### Silhouette Score
The Silhouette Coefficient is calculated using the mean intra-cluster distance (a) and the mean nearest-cluster distance (b) for each sample. The Silhouette Coefficient for a sample is (b - a) / max(a, b). To clarify, b is the distance between a sample and the nearest cluster that the sample is not a part of. Note that the Silhouette Coefficient is only defined if the number of labels satisfies 2 <= n_labels <= n_samples - 1.
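As a small illustration (not part of the original notebook), `silhouette_samples` returns the per-sample coefficient (b - a) / max(a, b), and `silhouette_score` is simply its mean; the sketch below assumes the `scaled` DataFrame from the previous section is available.
```
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score
import numpy as np

labels = KMeans(n_clusters=3, random_state=0).fit_predict(scaled)
per_sample = silhouette_samples(scaled, labels)  # (b - a) / max(a, b) for each row
print(per_sample[:5])
print(np.isclose(per_sample.mean(), silhouette_score(scaled, labels)))  # True
```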
```
from sklearn.metrics import silhouette_score
#add plot
s_score = []
for k in range(2, 15):
kmeans = KMeans(n_clusters=k, random_state=0).fit(scaled)
s_score.append([k, silhouette_score(scaled, kmeans.labels_)])
s_score
plt.figure(figsize=(15,6))
sns.set_context('poster')
plt.plot( pd.DataFrame(s_score)[0], pd.DataFrame(s_score)[1])
plt.xlabel('clusters')
plt.ylabel('score')
plt.title('The silhouette score')
plt.show()
```
## Final clusters using K-Means
After checking the **Elbow Method** and **Silhouette Score**, we can conclude that the number of clusters k should be 3.
**Task 9:** Apply kmeans algorithm with number of clusters = 3. Also assign values to the predicted data and check value count.
```
#Write code here
kmean3 = KMeans(n_clusters=3,random_state=0)
kmean3.fit(scaled)
#write code to fit
pred=scaled.copy()
#Write code to assign labels to predicted data
pred['kmean3'] = kmean3.labels_
#Write code to display value counts
pred['kmean3'].value_counts()
```
## Profiling
**Profiling and its usage**<br>
Having decided (for now) how many clusters to use, we would like to get a better understanding of what values are in those clusters and how to interpret them.
Data analytics is used to eventually make decisions, and that is feasible only when we are comfortable (enough) with our understanding of the analytics results, including our ability to clearly interpret them.
To this purpose, one needs to spend time visualizing and understanding the data within each of the selected clusters. For example, one can see how the summary statistics (e.g. averages, standard deviations, etc) of the profiling attributes differ across the segments.
In our case, assuming we decided to use the 3 clusters found with the k-means algorithm as outlined above, we can see how the responses change across clusters. The median values of the features within each cluster are:
```
p_ = pred[["BALANCE", "PURCHASES","INSTALLMENTS_PURCHASES","CASH_ADVANCE","CREDIT_LIMIT", "PAYMENTS", "MINIMUM_PAYMENTS","TENURE",'kmean3']]
pivoted = p_.groupby('kmean3')["BALANCE", "PURCHASES","INSTALLMENTS_PURCHASES","CASH_ADVANCE","CREDIT_LIMIT", "PAYMENTS", "MINIMUM_PAYMENTS","TENURE"].median().reset_index()
pivoted
```
# Radar Plot
The radar chart is a chart and/or plot that consists of a sequence of equi-angular spokes, called radii, with each spoke representing one of the variables. The data length of a spoke is proportional to the magnitude of the variable for the data point relative to the maximum magnitude of the variable across all data points. A line is drawn connecting the data values for each spoke. This gives the plot a star-like appearance and the origin of one of the popular names for this plot.
<img src="https://upload.wikimedia.org/wikipedia/commons/0/00/Spider_Chart.svg" />
```
!pip install plotly
```
[Sign UP](https://plot.ly/Auth/login/?action=signup#/) on Plotly, verify your email address and regenerate your API key
```
import plotly
plotly.tools.set_credentials_file(username='AliEhtasham', api_key='XxnTB7ApsDkKYmpKxpMA')
import plotly.plotly as py
import plotly.graph_objs as go
radar_data = [
go.Scatterpolar(
r = list(pivoted.loc[0,["BALANCE", "PURCHASES","INSTALLMENTS_PURCHASES","CASH_ADVANCE","CREDIT_LIMIT", "PAYMENTS", "MINIMUM_PAYMENTS","TENURE", 'BALANCE']]),
theta = ["BALANCE", "PURCHASES","INSTALLMENTS_PURCHASES","CASH_ADVANCE","CREDIT_LIMIT", "PAYMENTS", "MINIMUM_PAYMENTS","TENURE", 'BALANCE'],
fill = None,
fillcolor=None,
name = 'Cluster 0'
),
go.Scatterpolar(
r = list(pivoted.loc[1,["BALANCE", "PURCHASES","INSTALLMENTS_PURCHASES","CASH_ADVANCE","CREDIT_LIMIT", "PAYMENTS", "MINIMUM_PAYMENTS","TENURE", 'BALANCE']]),
theta = ["BALANCE", "PURCHASES","INSTALLMENTS_PURCHASES","CASH_ADVANCE","CREDIT_LIMIT", "PAYMENTS", "MINIMUM_PAYMENTS","TENURE", 'BALANCE'],
fill = None,
fillcolor=None,
name = 'Cluster 1'
),
go.Scatterpolar(
r = list(pivoted.loc[2,["BALANCE", "PURCHASES","INSTALLMENTS_PURCHASES","CASH_ADVANCE","CREDIT_LIMIT", "PAYMENTS", "MINIMUM_PAYMENTS","TENURE", 'BALANCE']]),
theta = ["BALANCE", "PURCHASES","INSTALLMENTS_PURCHASES","CASH_ADVANCE","CREDIT_LIMIT", "PAYMENTS", "MINIMUM_PAYMENTS","TENURE", 'BALANCE'],
fill = None,
fillcolor=None,
name = 'Cluster 2'
)
]
radar_layout = go.Layout(polar = dict(radialaxis = dict(visible = True,range = [0, 9000])), showlegend = True)
fig = go.Figure(data=radar_data, layout=radar_layout)
py.iplot(fig, filename = "radar")
```
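The cell above relies on the legacy `plotly.plotly` (Chart Studio) upload API, which requires an online account and API key and is no longer part of the core plotly package. A minimal offline sketch, assuming a reasonably recent plotly installation and that the `radar_data` and `radar_layout` objects above have been built (the two credential-related lines can simply be skipped), might look like this:
```
# Render the same radar figure locally instead of uploading it to Chart Studio
import plotly.graph_objects as go  # modern alias of plotly.graph_objs

fig_offline = go.Figure(data=radar_data, layout=radar_layout)
fig_offline.show()
```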
<a href="https://colab.research.google.com/github/mfsen/tensorflowExcercises/blob/main/Regression_neural_network_case_study.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Regression neural network case study**
In this project, we will perform a regression analysis with artificial neural networks using the diabetes dataset that ships with the sklearn library, following the steps below.
1. Downloading and loading the data
2. Inspecting and preprocessing the loaded data
3. Building the regression model with TensorFlow
4. Training the model and plotting the training loss values
5. Evaluating the trained model on the test data and computing its error values
```
# Import the required libraries
import matplotlib.pyplot as plt
import tensorflow as tf
import pandas as pd
from sklearn import datasets
from sklearn.model_selection import train_test_split
# Download and load the data
diabetesData = datasets.load_diabetes()
# Inspect the downloaded data and preprocess it
diabetesData.keys() # keys describing the dataset; we will use the `data` and `target` entries
# Features available in the dataset
diabetesData["feature_names"]
# Put the dataset features into a DataFrame variable
dataframe = pd.DataFrame(diabetesData["data"],columns=diabetesData["feature_names"])
dataframe
# Perform some preprocessing on the DataFrame.
# First, check whether the dataset contains any missing values
dataframe.isnull().sum()
dataframe.describe()
# Prepare the train and test sets
Xtrain,Xtest,Ytrain,Ytest = train_test_split(dataframe,diabetesData["target"],test_size = 0.2, random_state=42) # 20% of the data is held out for testing; a fixed random state keeps the split reproducible
# Set a random seed for reproducible parameter initialization
tf.random.set_seed(42)
# Build the model
regmodel = tf.keras.Sequential([
    tf.keras.layers.Dense(100),
    tf.keras.layers.Dense(50),
    tf.keras.layers.Dense(100),
    tf.keras.layers.Dense(1)
])
# Together with the input layer, the model consists of 5 layers. Since the output is a single number, the output layer has a single neuron.
# Compile the model in this step
regmodel.compile(loss=tf.losses.mae,
                 optimizer=tf.keras.optimizers.Adam(learning_rate = 0.001),
                 metrics=["mae"])
history = regmodel.fit(Xtrain,Ytrain,epochs=200,verbose=0) # verbose=0 suppresses the per-epoch logs
# Plot the loss values over the training epochs
pd.DataFrame(history.history).plot()
plt.ylabel("loss")
plt.xlabel("epochs")
# Evaluate the trained model on the test data
pred = regmodel.predict(Xtest)
# Compute the mean absolute error between the model predictions and the true values
tf.metrics.mean_absolute_error(Ytest,tf.squeeze(pred))
```
**Conclusion**
The model we trained reached an error value of 44 on the test set according to the MAE metric. Several actions can be taken to improve this value; they are listed below, and a small sketch illustrating the first three follows the list.
1. The number of parameters or layers in the model can be increased.
2. The learning rate or the optimizer used in the model can be changed.
3. The model can be trained for a longer time.
4. Not relevant for this example, but in everyday practice the model could be trained on more data.
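As a rough illustration of the first three options, the sketch below (not part of the original notebook) assumes `Xtrain`, `Ytrain`, `Xtest`, and `Ytest` from the cells above are still in scope; the layer sizes, learning rate, and epoch count are arbitrary illustrative choices, so whether the MAE actually improves has to be checked by running it.
```
# Hypothetical variant: a wider model, a smaller learning rate, and longer training
import tensorflow as tf

tf.random.set_seed(42)
regmodel_v2 = tf.keras.Sequential([
    tf.keras.layers.Dense(200, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(1)
])
regmodel_v2.compile(loss=tf.losses.mae,
                    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
                    metrics=["mae"])
regmodel_v2.fit(Xtrain, Ytrain, epochs=500, verbose=0)
pred_v2 = regmodel_v2.predict(Xtest)
print(tf.metrics.mean_absolute_error(Ytest, tf.squeeze(pred_v2)))
```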
# Quantum Approximate Optimization Algorithm: Maximum-Cut Problem
## *Abstract*
*In this tutorial, we use Qiskit to simulate a hybrid quantum-classical variational algorithm called the Quantum Approximate Optimization Algorithm (QAOA). We use Maximum-Cut as an example to show how to implement the QAOA circuit through the combination of a quantum algorithm and a gradient-descent algorithm to find a local maximum.*
## Introduction
Roughly speaking, the main idea of QAOA [[1](https://arxiv.org/abs/1411.4028v1)] is "marking" the optimized states by successive sets of $U(C,\alpha)$ operators and $U(B,\beta)$ operators. When we measure the output, the mean value varies with our choice of the parameters defined in the $R_{z}$ gates and $R_{x}$ gates. Using proper classical optimization, an approximate local optimum is obtained by changing the variational parameters.
What makes QAOA stand out is that it only asks for a low-depth circuit to get an approximate optimized result. This is a huge advantage because lower depth means fewer errors on near-term quantum devices. However, the cost of this advantage is having to find a good set of parameters efficiently with classical algorithms [[2](https://arxiv.org/abs/1812.01041v1)].
QAOA has been proposed and proved to be a powerful quantum algorithm to tackle combinatorial optimization problems. One of the toy examples that has been well studied is the MaxCut problem. Apart from combinatorial optimization, many applications of QAOA have appeared, including the lattice protein folding problem, the transverse-field Ising model and many other problems.
This tutorial applies QAOA to the MaxCut problem using Qiskit and includes two major parts: quantum circuit implementation and classical optimization.
## Introduction to the maximum cut problem
Here we investigate an unweighted, undirected graph consisting of five vertexes and six edges. The problem is to split the vertexes into two complementary subsets so that the number of cut edges (edges joining a vertex in one subset to a vertex in the other) is maximal. For instance, we can denote each vertex by a state, $z_{i}$, and assign it the value 1 or 0 to indicate which of the two subsets it belongs to.
$$|z_{i}\rangle = |1\rangle , |0\rangle$$
<img src="graph1.png" width = "20%">
Any arbitrary combination is represented by state $|s\rangle$ :
$$|s\rangle = |z_{0}z_{1}z_{2} \cdots z_{n} \rangle$$
It is obvious that there are $2^{n}$ possible combinations for an n-vertex graph. For a graph of 5 vertexes, the maximum cut can be found by trying all 32 possible states.
<img src="maxcut4.png" width = "25%">
The graph shown is the optimized state, $|s_{max}\rangle$, and the corresponding number of cuts is 5.
$$| s_{max} \rangle = |01010\rangle, |10101\rangle$$
Yet the time complexity grows exponentially with the number of vertexes involved ($O(2^{n})$). Luckily, we can make full use of quantum superposition and entanglement to find the optimum more efficiently.
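Before turning to the quantum approach, it is easy to verify the claim above classically; the short sketch below (not part of the original tutorial) enumerates all $2^{5}$ bit strings for the same edge list used later in the notebook and confirms that the best cut has 5 edges.
```
import itertools

graph = [[0,1],[1,2],[1,4],[2,3],[3,4],[4,0]]  # the example graph used later

def cut_value(s):
    # number of edges whose two endpoints lie in different subsets
    return sum(1 for i, j in graph if s[i] != s[j])

states = list(itertools.product([0, 1], repeat=5))
best = max(cut_value(s) for s in states)
maximizers = [''.join(map(str, s)) for s in states if cut_value(s) == best]
print(best, maximizers)  # prints 5 and the bit strings achieving it (|01010> and |10101> among them)
```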
## Quantum Approximate Optimization Algorithm
### Objective Function
We start with the definition of the objective function. This function takes any state as input and outputs the number of cuts.
$$C(s) = \sum^{m}_{\langle ij \rangle}C_{\langle ij \rangle}(|z_{i}z_{j}\rangle)$$
The equivalent operator acts on each edge in the graph, and its eigenvalue is 1 or 0. This operator yields 1 if the edge joins vertexes belonging to the two different subsets, and 0 otherwise.
$$C = \sum_{\langle ij \rangle}C_{\langle ij \rangle}$$
$$ C_{\langle ij \rangle} |z_{i}z_{j} \rangle = 1 \ |z_{i}z_{j} \rangle , \ z_{i} \neq z_{j} \\
C_{\langle ij \rangle} |z_{i}z_{j} \rangle = 0 \ |z_{i}z_{j} \rangle , \ z_{i} = z_{j}
$$
### Operator $U(C,\alpha)$
We now introduce the operator function $U(C,\alpha)$:
$$ U(C,\alpha) = e^{-i \alpha C} = e^{-i \ \alpha \ \sum_{\langle ij \rangle}C_{\langle ij \rangle}} = \prod^{m}_{\langle ij \rangle} e^{- i \alpha C_{\langle ij \rangle} } $$
Where $\langle ij \rangle$ is each edge joined by vertexes $z_{i}$ and $z_{j}$. This unitary operator is diagonal in the computational basis, therefore it can be written through its spectral decomposition:
$$e^{- i \alpha C_{\langle ij \rangle}} = |00\rangle\ \langle00| + |11\rangle\ \langle 11| + e^{- i \alpha}|01\rangle\ \langle01| + e^{- i \alpha}|10\rangle\ \langle10|$$
This means that if we apply the operator $U(C,\alpha)$ to $|z_{i}z_{j}\rangle$, this state will be multiplied by a phase $e^{-i\alpha}$ if $z_{i} \neq z_{j}$:
$$ e^{- i \alpha C_{\langle ij \rangle} } \vert 00 \rangle = \vert 00 \rangle \\
e^{- i \alpha C_{\langle ij \rangle} } \vert 11 \rangle = \vert 11 \rangle \\
e^{- i \alpha C_{\langle ij \rangle} } \vert 10 \rangle = e^{- i \alpha} \vert 10 \rangle\\
e^{- i \alpha C_{\langle ij \rangle} } \vert 01 \rangle = e^{- i \alpha} \vert 01 \rangle$$
In this way we can "mark" the optimized state, and later on we will "rotate" this state by the operator $U(B,\beta)$.
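As a quick numerical sanity check of this phase structure (not part of the original tutorial, and assuming SciPy is available alongside Qiskit), one can exponentiate the diagonal edge operator $C_{\langle ij \rangle} = \mathrm{diag}(0,1,1,0)$ for an arbitrary test angle and confirm that only the $|01\rangle$ and $|10\rangle$ components pick up the phase $e^{-i\alpha}$:
```
import numpy as np
from scipy.linalg import expm

alpha = 0.7                                # an arbitrary test angle
C_ij = np.diag([0, 1, 1, 0])               # eigenvalue 1 only on |01> and |10>
U = expm(-1j * alpha * C_ij)
expected = np.diag([1, np.exp(-1j*alpha), np.exp(-1j*alpha), 1])
print(np.allclose(U, expected))            # True
```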
### Operator $U(B,\beta)$
The operator $B$ is defined as:
$$B = \sum^{n}_{j=1}\sigma^{x}_{j}$$
It is natural to define the corresponding rotation operator:
$$U(B,\beta)=e^{-i\beta B}=e^{-i\beta\sum^{n}_{j=1}\sigma^{x}_{j}}= \prod^{n}_{j=1}e^{-i\beta \sigma^{x}_{j}}$$
In the Bloch-sphere picture, the rotation operator $e^{-i\beta\sigma^{x}}$ rotates the state about the $x$ axis. Hence the probabilities of the basis states change, and we will acquire an informative result when we measure them.
### Algorithm
The implementation of QAOA for the MaxCut problem [[3](https://www.cs.umd.edu/class/fall2018/cmsc657/projects/group_16.pdf)] is shown as follows (n=5):
1. Initialization
Register an n-qubit state, and create a superposition with n Hadamard gates.
$$|\psi_{0} \rangle = |00000 \rangle $$
$$ |\psi_{0} \rangle \stackrel{H^{\otimes n}}{\longrightarrow}|\psi_{i}\rangle = (\frac{|0\rangle + |1\rangle}{\sqrt{2}})^{\otimes n} =\sum_{s\in\{0,1\}^n} \frac{1}{\sqrt{2^{n}}} \vert s \rangle $$
2. Applying $U(C,\alpha)$ and $U(B,\beta)$
By alternately applying the unitary operators $U(C,\alpha)$ and $U(B,\beta)$ to the initialized state $|\psi_{i}\rangle$, a new state $|\vec{\alpha},\vec{\beta}\rangle$ is generated by two sets of parameters $\vec{\alpha}$ and $\vec{\beta}$.
$$|\vec{\alpha},\vec{\beta}\rangle = U(B,\beta_{p})U(C,\alpha_{p})\cdots U(B,\beta_{2})U(C,\alpha_{2})U(B,\beta_{1})U(C,\alpha_{1})|\psi_{i}\rangle $$
Where $\vec{\alpha} =(\alpha_{1},\alpha_{2},...,\alpha_{p})$ and $\vec{\beta} = (\beta_{1},\beta_{2},...,\beta_{p})$ and $p$ is the depth of this algorithm.
3. Measurement and Expectation
Measure the classical bits of $|\vec{\alpha},\vec{\beta}\rangle$, then calculate the mean value of the objective operator $C$.
$$|\vec{\alpha},\vec{\beta}\rangle \stackrel{C}{\longrightarrow}C|\vec{\alpha},\vec{\beta}\rangle \stackrel{}{\longrightarrow}M_{p}(\vec{\alpha},\vec{\beta})=\langle \vec{\alpha},\vec{\beta}| C |\vec{\alpha},\vec{\beta}\rangle $$
4. Classical Optimization
Feed the parameter set and $M_{p}$ into the classical optimizer and update a new set of $\vec{\alpha},\vec{\beta}$. Then go back to steps 2 and 3 to output a new mean value $M_{p}$. Go through steps 2 and 3 repeatedly until a local maximum is found.
<img src="loop.png" width = "35%">
## The Implementation of QAOA with Qiskit
We now code the quantum circuit of QAOA with Qiskit.
*NOTICE: The number of shots should be large enough, otherwise the classical optimization involving gradient calculation will fail*
### Import Packages and Tools
```
import itertools
import qiskit
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from numpy import *
from qiskit import BasicAer, IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
from qiskit.compiler import transpile
from qiskit.tools.visualization import plot_histogram
from qiskit.quantum_info.analysis import *
from qiskit.tools.visualization import circuit_drawer, plot_bloch_multivector
pi = np.pi
backend = BasicAer.get_backend('qasm_simulator')
shots = 99999 ## This value should be large enough
```
### Preload Functions
All the functions we will use in this tutorial are preloaded here, but these functions will be explained in detail in the following sections. So we can skip this part for now and go on with the tutorial.
```
def uz(circ, node, aux, graph, a): # theta takes value in [0, 2pi]
step_len = 1/100
for[i,j] in graph:
circ.cx(node[i],node[j])
circ.rz(-2*step_len*pi*a,node[j])
circ.cx(node[i],node[j])
circ.rz(2*step_len*pi*a,aux)
circ.barrier()
def ux(circ,node,b):
step_len = 1/100
circ.rx(4*step_len*pi*b, node)
circ.barrier()
def maxcut(circ, node, aux, creq,graph, ang_zip):
for a,b in ang_zip:
uz(circ, node, aux, graph, a)
ux(circ, node, b)
circ.measure(node, creq)
# Calculate the mean value from the measurement
def mean(graph,answer):
    sum2 = 0
    for k,v in answer.items():
        # Qiskit orders bitstrings with classical bit 0 as the rightmost character,
        # so reverse the key before indexing it by vertex number
        k = k[::-1]
        sum1 = 0
        for [i,j] in graph:
            if k[i] != k[j]:
                sum1 += 1
        sum2 += sum1*v
    mean = sum2/shots
    return(mean)
def outcome(n,graph,ang1,ang2):
ang_zip = zip(ang1,ang2)
node = QuantumRegister(n)
aux = QuantumRegister(1)
creq = ClassicalRegister(n)
circ = QuantumCircuit(node,aux,creq)
circ.h(node)
maxcut(circ, node, aux, creq,graph, ang_zip)
results = execute(circ, backend = backend, shots = shots).result()
answer = results.get_counts()
out = mean(graph,answer)
return(out)
def gradient(n,graph,ang1,ang2,step_len=1):
ang1_dif = []
ang2_dif = []
res = outcome(n,graph,ang1,ang2)
for i in range(len(ang1)):
angp = ang1.copy()
angp[i] += step_len
resp = (outcome(n,graph,angp,ang2)-res)/step_len
ang1_dif = np.append(ang1_dif,resp)
for i in range(len(ang2)):
angp = ang2.copy()
angp[i] += step_len
resp = (outcome(n,graph,ang1,angp)-res)/step_len
ang2_dif = np.append(ang2_dif,resp)
return(np.vstack((ang1_dif,ang2_dif)))
## gradient descend algorithm
def gra_desc(n,graph,ang1,ang2,step_size=1):
iter_num = 100
threshold = 0.01
t = 0
s = step_size
x0 = np.vstack((ang1,ang2))
gra0 = gradient(n,graph,x0[0],x0[1])
x1 = x0 + s * gra0
gra1 = gradient(n,graph,x1[0],x1[1])
f1 = outcome(n,graph,x1[0],x1[1])
times = 0
for i in range(iter_num):
del_x = np.hstack(x1 - x0)
del_df = np.hstack(gra1 - gra0)
s = abs(np.dot(del_x,del_df)/(np.dot(del_df,del_df)))
x2 = x1 + s * gra1
f2 = outcome(n,graph,x2[0],x2[1])
gra2 = gradient(n,graph,x2[0],x2[1])
times += 1
print(x2,f2)
if(abs(f2-f1) < threshold):
print('ok')
return(x2,f2)
while(f1 > f2 and t < 10):
t = t + 1
s = s * 0.9
x2 = x1 + s * gra1
f2 = outcome(n,graph,x2[0],x2[1])
x0 = x1
gra0 = gra1
gra1 = gra2
x1 = x2
f1 = f2
return(x1,f1)
def init_localmax():
ang1 = [random.randint(0,100)]
ang2 = [random.randint(0,100)]
res = gra_desc(n,graph,ang1,ang2)
opti_ang = res[0]
return(opti_ang)
def QAOA(n,graph,pmax):
print('[a,b] M(a,b)')
p = 1
print('p = %d' % p)
opti_ang = init_localmax()
while(p < pmax):
p = p + 1
print('p = %d' % p)
opti_ang1 = opti_ang[0]
opti_ang2 = opti_ang[1]
start_ang1 = np.zeros(p)
start_ang2 = np.zeros(p)
for i in range(p):
if i == 0:
start_ang1[i] = opti_ang1[0]
start_ang2[i] = opti_ang2[0]
if i == p-1:
start_ang1[i] = opti_ang1[p-2]
start_ang2[i] = opti_ang2[p-2]
else:
start_ang1[i] = (i/(p-1))*opti_ang1[i-1] + ((p-i-1)/(p-1))*opti_ang1[i]
start_ang2[i] = (i/(p-1))*opti_ang2[i-1] + ((p-i-1)/(p-1))*opti_ang2[i]
res = gra_desc(n,graph,start_ang1,start_ang2)
opti_ang = res[0]
opti_val = res[1]
print('done')
```
### Initialization
We still use the example graph mentioned previously. Each vertex is represented by a quantum state $|z_{i}\rangle$ and each edge is denoted by its two adjacent vertexes $|z_{i}z_{j}\rangle$.
<img src="graph.png" width = "20%">
```
# Initialization
graph = [[0,1],[1,2],[1,4],[2,3],[3,4],[4,0]] # define the graph
n = 5 # the number of vertexes
node = QuantumRegister(n) # register qubits
aux = QuantumRegister(1) # register an auxiliary qubit
creq = ClassicalRegister(n)
```
### Implementation of $U(C,\alpha)$
For a specific edge $\langle ij \rangle$, the two-bit unitary operator $U(C_{ij},\alpha)$ can be implemented in this form:
<img src="twobits.png" width = "25%">
This is because any two-bit controlled unitary matrix is algebraically equivalent to the combination of two $CX$ gates and three unitary $2 \times 2$ gates $A, B, C$ [[4](https://arxiv.org/abs/quant-ph/9503016)]. We denote this two-bit network by the operator $U(2)$ and apply it to the computational basis of two qubits by definition.
$$ U(2)|00\rangle = I \otimes CBA |00\rangle \\
U(2)|11\rangle = I \otimes C \cdot \sigma^{x} \cdot B \cdot \sigma^{x} \cdot A |11\rangle \\
U(2)|10\rangle = I \otimes C \cdot \sigma^{x} \cdot B \cdot \sigma^{x} \cdot A |10\rangle \\
U(2)|01\rangle = I \otimes CBA |01\rangle
$$
These equations are then compared to the equalvalance by applying $U(C_{ij},\alpha) = e^{-i \alpha C_{\langle ij \rangle}} = U(2)$
$$ e^{- i \alpha C_{\langle ij \rangle} } \vert 00 \rangle = \vert 00 \rangle \\
e^{- i \alpha C_{\langle ij \rangle} } \vert 11 \rangle = \vert 11 \rangle \\
e^{- i \alpha C_{\langle ij \rangle} } \vert 10 \rangle = e^{- i \alpha} \vert 10 \rangle\\
e^{- i \alpha C_{\langle ij \rangle} } \vert 01 \rangle = e^{- i \alpha} \vert 01 \rangle$$
We now deduce some resulting equations:
$$
CBA|0\rangle = |0\rangle \\
C \cdot \sigma^{x} \cdot B \cdot \sigma^{x} \cdot A |1\rangle = |1\rangle \\
C \cdot \sigma^{x} \cdot B \cdot \sigma^{x} \cdot A |0\rangle = e^{- i \alpha}|0\rangle \\
CBA |1\rangle = e^{- i \alpha}|1\rangle
$$
There exists a possible solution:
$$
A = I \\
C = I \\
B|0\rangle = |0\rangle \\
B|1\rangle = e^{-i\alpha} |1\rangle
$$
The role of $B$ can be played by two $Rz$ gates, along with an auxiliary qubit $|0\rangle$ [[3](https://www.cs.umd.edu/class/fall2018/cmsc657/projects/group_16.pdf)].
$$ Rz (-\alpha) \otimes Rz(\alpha) |0\rangle |0\rangle_{aux} = (e^{-i \frac{-\alpha}{2}} |0\rangle ) \otimes (e^{-i \frac{\alpha}{2}}|0\rangle_{aux})=|0\rangle |0\rangle_{aux} $$
$$ Rz (-\alpha) \otimes Rz(\alpha) |1\rangle |0\rangle_{aux} = (e^{-i \frac{\alpha}{2}} |1\rangle) \otimes (e^{-i \frac{\alpha}{2}}|0\rangle_{aux})=
e^{-i\alpha}|1\rangle |0\rangle_{aux}$$
The two-bit operator $U(2)$ therefore can be implemented as:
<img src="uc.png" width = "25%">
```
## The definition of U(C,a)
def uz(circ, node, aux, graph, a): # theta takes value in [0, 2pi]
step_len = 1/100
for[i,j] in graph:
circ.cx(node[i],node[j])
circ.rz(-2*step_len*pi*a,node[j])
circ.cx(node[i],node[j])
circ.rz(2*step_len*pi*a,aux)
circ.barrier()
```
### Implementation of $U(B,\beta)$
It is straightforward to implement $U(B,\beta)$ by simply applying an $Rx$ operator to each qubit.
```
def ux(circ,node,b):
step_len = 1/100
circ.rx(4*step_len*pi*b, node)
circ.barrier()
```
### Implementation of QAOA Quantum Circuit
To create the QAOA circuit, we need to initialize the depth of this circuit, $p$, and the parameter sets $\vec{\alpha}$ and $\vec{\beta}$. For the convenience of our analysis, we define $a,b$ lying in the interval $[0,100]$ to replace $\alpha,\beta$ lying in $[0,2\pi]$
$$ p=2 \\
\vec{\alpha} = (32,45) \\
\vec{\beta} = (41,23) \\
$$
```
# This function is used to implement QAOA circuit
def maxcut(circ, node, aux, creq,graph, ang_zip):
for a,b in ang_zip:
uz(circ, node, aux, graph, a)
ux(circ, node, b)
circ.measure(node, creq)
# Calculate the mean value from the measurement
def mean(graph,answer):
    sum2 = 0
    for k,v in answer.items():
        # Qiskit orders bitstrings with classical bit 0 as the rightmost character,
        # so reverse the key before indexing it by vertex number
        k = k[::-1]
        sum1 = 0
        for [i,j] in graph:
            if k[i] != k[j]:
                sum1 += 1
        sum2 += sum1*v
    mean = sum2/shots
    return(mean)
```
The code of the QAOA circuit is shown below:
```
a = [32,45]
b = [41,23]
ang_zip = zip(a,b)
circ = QuantumCircuit(node,aux,creq)
circ.h(node)
maxcut(circ, node, aux, creq,graph, ang_zip)
circ.draw(output = 'mpl')
```
We then analyze the result of classical measurement:
```
results = execute(circ, backend = backend, shots = shots).result()
answer = results.get_counts()
plot_histogram(answer, figsize=(20,10))
```
This histogram graph explains how QAOA works with the help of $U(C,\alpha)$ and $U(B,\beta)$. These operators change the probabilities of some states. In this case, the probabilities for states like $|00101\rangle$, $|01101\rangle$, $|10010\rangle$ are higher than any other state. The next step is to calculate the mean value based on the measurement.
```
# The mean value for a = [32,45], b = [41,23]
mean(graph,answer)
```
But this is not a locally optimized result, so a classical optimization is needed to get a better outcome.
### Classical Optimization
In this part, we would like to introduce two frequently used classical techniques involved in QAOA: the gradient descent algorithm (ascent, in the case of MaxCut) combined with interpolation [[2](https://arxiv.org/abs/1812.01041v1)]. We send the result to the optimizer, obtain updated parameters, and run through the same steps of the quantum algorithm. This loop is terminated when a local maximum is reached.
We need to modify our quantum algorithm with two classical algorithms. The workflow of our modified hybrid algorithm is:
<img src="workflow.png" width = "40%">
The whole loop ends when the designated depth $p$ is reached.
#### Gradient Descent (Ascent) Algorithm
Gradient descent uses the gradient of the function at the current point to take steps to the next point. If the update follows the positive gradient and takes sufficiently small steps, then convergence to a local maximum is guaranteed.
$$
seed \stackrel{randomize}{\longrightarrow} (\vec{\alpha},\vec{\beta})_{0} \stackrel{\nabla f_{0}}{\longrightarrow} (\vec{\alpha},\vec{\beta})_{1}\stackrel{\nabla f_{1}}{\longrightarrow} \dots \stackrel{\nabla f_{n}}{\longrightarrow} (\vec{\alpha},\vec{\beta})_{loc}
$$
```
## the gradient of current point
def gradient(n,graph,ang1,ang2,step_len=1):
ang1_dif = []
ang2_dif = []
res = outcome(n,graph,ang1,ang2)
for i in range(len(ang1)):
angp = ang1.copy()
angp[i] += step_len
resp = (outcome(n,graph,angp,ang2)-res)/step_len
ang1_dif = np.append(ang1_dif,resp)
for i in range(len(ang2)):
angp = ang2.copy()
angp[i] += step_len
resp = (outcome(n,graph,ang1,angp)-res)/step_len
ang2_dif = np.append(ang2_dif,resp)
return(np.vstack((ang1_dif,ang2_dif)))
## gradient descend algorithm
def gra_desc(n,graph,ang1,ang2,step_size=1):
iter_num = 100
threshold = 0.01
t = 0
s = step_size
x0 = np.vstack((ang1,ang2))
gra0 = gradient(n,graph,x0[0],x0[1])
x1 = x0 + s * gra0
gra1 = gradient(n,graph,x1[0],x1[1])
f1 = outcome(n,graph,x1[0],x1[1])
times = 0
for i in range(iter_num):
del_x = np.hstack(x1 - x0)
del_df = np.hstack(gra1 - gra0)
s = abs(np.dot(del_x,del_df)/(np.dot(del_df,del_df)))
x2 = x1 + s * gra1
f2 = outcome(n,graph,x2[0],x2[1])
gra2 = gradient(n,graph,x2[0],x2[1])
times += 1
print(x2,f2)
if(abs(f2-f1) < threshold):
print('ok')
return(x2,f2)
while(f1 > f2 and t < 10):
t = t + 1
s = s * 0.9
x2 = x1 + s * gra1
f2 = outcome(n,graph,x2[0],x2[1])
x0 = x1
gra0 = gra1
gra1 = gra2
x1 = x2
f1 = f2
return(x1,f1)
def init_localmax():
ang1 = [random.randint(0,100)]
ang2 = [random.randint(0,100)]
res = gra_desc(n,graph,ang1,ang2)
opti_ang = res[0]
return(opti_ang)
```
#### Interpolation for $p+1$ Parameters
When a $p$-level local maximum and the corresponding $(\vec{\alpha}_{loc},\vec{\beta}_{loc})_{p}$ are found by gradient ascent, we can go on to the $p+1$ level circuit. The starting point of the $p+1$ level parameter set $(\vec{\alpha}_{0},\vec{\beta}_{0})_{p+1}$ is produced by the interpolation formula below, which is based on the value of $(\vec{\alpha}_{loc},\vec{\beta}_{loc})_{p}$:
$$[(\vec{\alpha}_{0},\vec{\beta}_{0})_{p+1}]_{i} = \frac{i-1}{p}[(\vec{\alpha}_{loc},\vec{\beta}_{loc})_{p}]_{i-1} + \frac{p-i+1}{p}[(\vec{\alpha}_{loc},\vec{\beta}_{loc})_{p}]_{i}$$
where $i = 1, 2, 3, \cdots, p+1$ and $[(\vec{\alpha}_{loc},\vec{\beta}_{loc})_{p}]_{0} = [(\vec{\alpha}_{loc},\vec{\beta}_{loc})_{p}]_{p+1} =(0,0)$
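To make the formula concrete, here is a hypothetical standalone helper (not part of the notebook's code) that builds the level-$(p+1)$ starting angles from a level-$p$ optimum; it mirrors the interpolation carried out inside the `QAOA` function below.
```
import numpy as np

def interpolate_angles(opt_ang):
    # opt_ang: length-p array of locally optimal angles found at level p
    p = len(opt_ang)
    padded = np.concatenate(([0.0], opt_ang, [0.0]))  # boundary terms are zero
    start = np.zeros(p + 1)
    for i in range(1, p + 2):
        start[i - 1] = (i - 1) / p * padded[i - 1] + (p - i + 1) / p * padded[i]
    return start

print(interpolate_angles(np.array([30.0, 50.0])))  # [30. 40. 50.]
```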
### Main Function and Some Examples
Now we integrate all the functions and calculations into one QAOA main function. We feed the number of vertices $n$, the specifically defined graph and the maximum depth of the circuit $p_{max}$ to this function, obtaining locally optimized variational parameters $\vec{a},\vec{b}$ and the local maximum $M_{p}(\vec{a},\vec{b})$. In theory, as $p$ goes to infinity, $M_{p}$ approaches the ideal maximum value. In a real calculation, a good approximation can be obtained even with a low-depth circuit.
```
def QAOA(n,graph,pmax):
print('[a,b] M(a,b)')
p = 1
print('p = %d' % p)
opti_ang = init_localmax()
while(p < pmax):
p = p + 1
print('p = %d' % p)
opti_ang1 = opti_ang[0]
opti_ang2 = opti_ang[1]
start_ang1 = np.zeros(p)
start_ang2 = np.zeros(p)
for i in range(p):
if i == 0:
start_ang1[i] = opti_ang1[0]
start_ang2[i] = opti_ang2[0]
if i == p-1:
start_ang1[i] = opti_ang1[p-2]
start_ang2[i] = opti_ang2[p-2]
else:
start_ang1[i] = (i/(p-1))*opti_ang1[i-1] + ((p-i-1)/(p-1))*opti_ang1[i]
start_ang2[i] = (i/(p-1))*opti_ang2[i-1] + ((p-i-1)/(p-1))*opti_ang2[i]
res = gra_desc(n,graph,start_ang1,start_ang2)
opti_ang = res[0]
opti_val = res[1]
print('done')
```
#### Graph 1
<img src="graph1.png" width = "20%">
```
graph = [[0,1],[1,2],[1,4],[2,3],[3,4],[4,0]] # define the graph
n = 5
pmax = 3
QAOA(n,graph,pmax)
```
#### Graph 2
<img src="graph2.png" width = "20%">
```
graph = [[0,1],[1,2],[2,3],[3,4],[4,5],[5,0]] # define the graph
n = 6 # the number of vertexes
pmax = 3
QAOA(n,graph,pmax)
```
#### Graph 3
<img src="graph3.png" width = "20%">
```
graph = [[0,1],[1,2],[2,3],[3,4],[4,0]] # define the graph
n = 5 # the number of vertexes
pmax = 2
QAOA(n,graph,pmax)
```
## *References*
[1] Farhi E, Goldstone J, Gutmann S. A quantum approximate optimization algorithm\[J\]. arXiv preprint arXiv:1411.4028, 2014.
[2] Zhou L, Wang S T, Choi S, et al. Quantum approximate optimization algorithm: performance, mechanism, and implementation on near-term devices\[J\]. arXiv preprint arXiv:1812.01041, 2018.
[3] Wang Q, Abdullah T. An Introduction to Quantum Optimization Approximation Algorithm\[J\]. 2018.
[4] Barenco A, Bennett C H, Cleve R, et al. Elementary gates for quantum computation\[J\]. Physical review A, 1995, 52(5): 3457.
#### Removing Data Part II
So, you now have seen how we can fit a model by dropping rows with missing values. This is great in that sklearn doesn't break! However, this means future observations will not obtain a prediction if they have missing values in any of the columns.
In this notebook, you will answer a few questions about what happened in the last screencast, and take a few additional steps.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import seaborn as sns
import RemovingData as t
%matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
#Subset to only quantitative vars
num_vars = df[['Salary', 'CareerSatisfaction', 'HoursPerWeek', 'JobSatisfaction', 'StackOverflowSatisfaction']]
num_vars.head()
```
#### Question 1
**1.** What proportion of individuals in the dataset reported a salary?
```
prop_sals = 1 - df.isnull()['Salary'].mean()# Proportion of individuals in the dataset with salary reported
prop_sals
t.prop_sals_test(prop_sals) #test
```
#### Question 2
**2.** Remove the rows associated with nan values in Salary (only Salary) from the dataframe **num_vars**. Store the dataframe with these rows removed in **sal_rm**.
```
sal_rm = num_vars.dropna(subset=['Salary'], axis=0)# dataframe with rows for nan Salaries removed
sal_rm.head()
t.sal_rm_test(sal_rm) #test
```
#### Question 3
**3.** Using **sal_rm**, let **X** be a dataframe (matrix) of all of the numeric feature variables. Then, let **y** be the response vector you would like to predict (Salary). Run the cell below once you have split the data, and use the result of the code to assign the correct letter to **question3_solution**.
```
X = sal_rm[['CareerSatisfaction', 'HoursPerWeek', 'JobSatisfaction', 'StackOverflowSatisfaction']]
y = sal_rm['Salary']
# Split data into training and test data, and fit a linear model
X_train, X_test, y_train, y_test = train_test_split(X, y , test_size=.30, random_state=42)
lm_model = LinearRegression(normalize=True)
# If our model works, it should just fit our model to the data. Otherwise, it will let us know.
try:
lm_model.fit(X_train, y_train)
except:
print("Oh no! It doesn't work!!!")
a = 'Python just likes to break sometimes for no reason at all.'
b = 'It worked, because Python is magic.'
c = 'It broke because we still have missing values in X'
question3_solution = c
#test
t.question3_check(question3_solution)
```
#### Question 4
**4.** Remove the rows associated with nan values in any column from **num_vars** (this was the removal process used in the screencast). Store the dataframe with these rows removed in **all_rm**.
```
all_rm = num_vars.dropna(axis=0) # dataframe with rows containing any nan values removed
all_rm.head()
t.all_rm_test(all_rm) #test
```
#### Question 5
**5.** Using **all_rm**, let **X_2** be a dataframe (matrix) of all of the numeric feature variables. Then, let **y_2** be the response vector you would like to predict (Salary). Run the cell below once you have split the data, and use the result of the code to assign the correct letter to **question5_solution**.
```
X_2 = all_rm[['CareerSatisfaction', 'HoursPerWeek', 'JobSatisfaction', 'StackOverflowSatisfaction']]
y_2 = all_rm['Salary']
# Split data into training and test data, and fit a linear model
X_2_train, X_2_test, y_2_train, y_2_test = train_test_split(X_2, y_2 , test_size=.30, random_state=42)
lm_2_model = LinearRegression(normalize=True)
# If our model works, it should just fit our model to the data. Otherwise, it will let us know.
try:
lm_2_model.fit(X_2_train, y_2_train)
except:
print("Oh no! It doesn't work!!!")
a = 'Python just likes to break sometimes for no reason at all.'
b = 'It worked, because Python is magic.'
c = 'It broke because we still have missing values in X'
question5_solution = b
#test
t.question5_check(question5_solution)
```
#### Question 6
**6.** Now, use **lm_2_model** to predict the **y_2_test** response values, and obtain an r-squared value for how well the predicted values compare to the actual test values.
```
y_test_preds = lm_2_model.predict(X_2_test)# Predictions here
r2_test = r2_score(y_2_test, y_test_preds) # Rsquared here
# Print r2 to see result
r2_test
t.r2_test_check(r2_test)
```
#### Question 7
**7.** Use what you have learned in this notebook (and as many cells as you need to find the answers) to complete the dictionary with the variables that link to the corresponding descriptions.
```
a = 5009
b = 'Other'
c = 645
d = 'We still want to predict their salary'
e = 'We do not care to predict their salary'
f = False
g = True
question7_solution = {'The number of reported salaries in the original dataset': a,
'The number of test salaries predicted using our model': c,
'If an individual does not rate stackoverflow, but has a salary': d,
'If an individual does not have a a job satisfaction, but has a salary': d,
'Our model predicts salaries for the two individuals described above.': f}
#Check your answers against the solution - you should be told you were right if your answers are correct!
t.question7_check(question7_solution)
print("The number of salaries in the original dataframe is " + str(np.sum(df.Salary.notnull())))
print("The number of salaries predicted using our model is " + str(len(y_test_preds)))
print("This is bad because we only predicted " + str((len(y_test_preds))/np.sum(df.Salary.notnull())) + " of the salaries in the dataset.")
```
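As a final, standalone illustration of why this matters (a toy example, not using the survey data): a model fit on complete rows still cannot predict for an observation that has a missing feature value.
```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Fit on complete rows, then try to predict a row with a NaN
train = pd.DataFrame({"x1": [1.0, 2.0, 3.0], "x2": [1.0, 0.0, 1.0], "y": [2.0, 3.0, 5.0]})
model = LinearRegression().fit(train[["x1", "x2"]], train["y"])

new_obs = pd.DataFrame({"x1": [4.0], "x2": [np.nan]})   # missing feature value
try:
    model.predict(new_obs)
except ValueError as err:
    print("Prediction failed:", err)   # sklearn refuses inputs containing NaN
```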
```
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
from sstcam_simulation import Camera, SSTCameraMapping, PhotoelectronSource
from sstcam_simulation.plotting import CameraImage
camera = Camera(
mapping=SSTCameraMapping(n_pixels=8),
digital_trigger_length=60,
)
pe = PhotoelectronSource(camera=camera, seed=1).get_uniform_illumination(time=100, illumination=1, laser_pulse_width=3)
pe.time[5:] += 50
```
# Tutorial 7: NNSuperpixelAboveThreshold
This tutorial demonstrates the stages in the default `Trigger` logic. The methods described in this tutorial do not need to be called manually (they are called within `EventAcquisition.get_trigger`). However, they may be useful in particular investigations (e.g. in tutorial 4_bias_scan).
```
from sstcam_simulation.event.trigger import NNSuperpixelAboveThreshold
trigger = NNSuperpixelAboveThreshold(camera=camera)
NNSuperpixelAboveThreshold?
```
## Continuous readout
Obtain a `continuous_readout` for this demonstration
```
from sstcam_simulation.event.acquisition import EventAcquisition
acquisition = EventAcquisition(camera=camera, seed=1)
readout = acquisition.get_continuous_readout(pe)
# Divide by photoelectron pulse height to convert sample units to photoelectrons
plt.plot(camera.continuous_readout_time_axis, readout.T / camera.photoelectron_pulse.height)
plt.xlabel("Time (ns)")
_ = plt.ylabel("Amplitude (p.e.)")
plt.plot(camera.continuous_readout_time_axis[400:1000], readout[:, 400:1000].T / camera.photoelectron_pulse.height)
plt.xlabel("Time (ns)")
_ = plt.ylabel("Amplitude (p.e.)")
```
## Superpixel Digital Trigger Line
The continuous readout from each pixel is continuously summed with the readout from the other pixels of the same superpixel. This summed readout is then compared with the trigger threshold, to produce a boolean array.
To account for the coincidence window (for the backplane trigger between neighbouring superpixels, simulated later), each True reading is extended by the camera's `digital_trigger_length` (60 ns in this example). This is performed by `extend_by_digital_trigger_length`.
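Conceptually, and ignoring the library internals, this stage is just a per-superpixel sum followed by a threshold comparison. A toy NumPy sketch (shapes and threshold chosen arbitrarily, not taken from the camera object):
```python
import numpy as np

rng = np.random.default_rng(1)
readout = rng.normal(size=(8, 10))            # 8 pixels, 10 readout samples
pixel_to_superpixel = np.repeat([0, 1], 4)    # 4 pixels per superpixel
n_superpixels = 2
threshold = 1.5                               # arbitrary threshold for the toy data

superpixel_sum = np.zeros((n_superpixels, readout.shape[1]))
np.add.at(superpixel_sum, pixel_to_superpixel, readout)   # sum pixels per superpixel
digital_trigger_line = superpixel_sum > threshold          # boolean array, one row per superpixel
```
The actual `sum_superpixels` helper and `get_superpixel_digital_trigger_line` method used below work with the camera's real mapping and trigger threshold.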
```
trigger.get_superpixel_digital_trigger_line?
camera.update_trigger_threshold(3)
digital = trigger.get_superpixel_digital_trigger_line(readout)
digital_coincidence = trigger.extend_by_digital_trigger_length(digital)
from sstcam_simulation.event.trigger import sum_superpixels
superpixel_sum = sum_superpixels(readout, camera.mapping.pixel_to_superpixel, camera.mapping.n_superpixels) / camera.photoelectron_pulse.height
plt.plot(camera.continuous_readout_time_axis[400:1000], superpixel_sum[:, 400:1000].T )
plt.plot(camera.continuous_readout_time_axis[400:1000], digital[:, 400:1000].T * superpixel_sum.max(), 'g-', label="Digital Trigger")
plt.plot(camera.continuous_readout_time_axis[400:1000], digital_coincidence[:, 400:1000].T * superpixel_sum.max(), 'r--', label="Digital Trigger w/ coincidence")
plt.axhline(camera.trigger_threshold, color='black', label='Trigger Threshold')
plt.xlabel("Time (ns)")
plt.ylabel("Superpixel Amplitude (p.e.)")
_ = plt.legend(loc='best')
print("Number of triggers per superpixel: ", trigger.get_n_superpixel_triggers(digital))
```
## Backplane Trigger
If two neighbouring superpixels are high, within the coincidence window length, then a backplane trigger is generated. The time of the trigger occurs on the rising edge of the coincidence (bitwise AND), with the digital trigger sampled every nanosecond.
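As a rough sketch of the coincidence logic only (not the library implementation), with two toy boolean trigger lines sampled every nanosecond:
```python
import numpy as np

sp_a = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0], dtype=bool)
sp_b = np.array([0, 1, 1, 1, 0, 0, 0, 0, 1, 1], dtype=bool)

coincidence = sp_a & sp_b                           # both neighbouring superpixels high
rising_edge = coincidence & ~np.roll(coincidence, 1)
rising_edge[0] = coincidence[0]                     # count a high first sample as an edge
print(np.flatnonzero(rising_edge))                  # trigger times in ns -> [2 8]
```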
```
trigger.get_backplane_trigger?
camera.update_trigger_threshold(3)
digital = trigger.get_superpixel_digital_trigger_line(readout)
backplane_triggers, _ = trigger.get_backplane_trigger(digital)
superpixel_sum = sum_superpixels(readout, camera.mapping.pixel_to_superpixel, camera.mapping.n_superpixels) / camera.photoelectron_pulse.height
plt.plot(camera.continuous_readout_time_axis[400:1000], superpixel_sum[:, 400:1000].T )
plt.plot(camera.continuous_readout_time_axis[400:1000], digital[:, 400:1000].T * superpixel_sum.max(), '-', label="Digital Trigger")
plt.axhline(camera.trigger_threshold, color='black', label='Trigger Threshold')
plt.xlabel("Time (ns)")
plt.ylabel("Superpixel Amplitude (p.e.)")
_ = plt.legend(loc='best')
print("N Backplane Triggers: ", backplane_triggers.size)
camera.update_trigger_threshold(1)
digital = trigger.get_superpixel_digital_trigger_line(readout)
backplane_triggers, _ = trigger.get_backplane_trigger(digital)
superpixel_sum = sum_superpixels(readout, camera.mapping.pixel_to_superpixel, camera.mapping.n_superpixels) / camera.photoelectron_pulse.height
plt.plot(camera.continuous_readout_time_axis[400:1000], superpixel_sum[:, 400:1000].T )
plt.plot(camera.continuous_readout_time_axis[400:1000], digital[:, 400:1000].T * superpixel_sum.max(), '-', label="Digital Trigger")
plt.axhline(camera.trigger_threshold, color='black', label='Trigger Threshold')
plt.xlabel("Time (ns)")
plt.ylabel("Superpixel Amplitude (p.e.)")
_ = plt.legend(loc='best')
print("N Backplane Triggers: ", backplane_triggers.size)
camera.update_trigger_threshold(0.5)
digital = trigger.get_superpixel_digital_trigger_line(readout)
extended = trigger.extend_by_digital_trigger_length(digital)
backplane_triggers, _ = trigger.get_backplane_trigger(extended)
superpixel_sum = sum_superpixels(readout, camera.mapping.pixel_to_superpixel, camera.mapping.n_superpixels) / camera.photoelectron_pulse.height
plt.plot(camera.continuous_readout_time_axis[400:1000], superpixel_sum[:, 400:1000].T )
plt.plot(camera.continuous_readout_time_axis[400:1000], extended[:, 400:1000].T * superpixel_sum.max(), '-', label="Digital Trigger")
plt.axhline(camera.trigger_threshold, color='black', label='Trigger Threshold')
plt.axvline(backplane_triggers[0], label="Backplane Trigger")
plt.xlabel("Time (ns)")
plt.ylabel("Superpixel Amplitude (p.e.)")
_ = plt.legend(loc='best')
print("N Backplane Triggers: ", backplane_triggers.size)
```
# Parsing Quick Start Guide
This quick start walks through creating YANGify parsers for both LLDP neighbors and interfaces. The goal of this is to take native data, such as show commands, from a traditional network device and convert it to structured data (JSON) that adheres to a given YANG model.
## Parsing LLDP Neighbors & Interfaces
Our example will use a Cisco IOS device, and the goal is to represent and create LLDP and interface data that adheres to OpenConfig YANG models. The OC model for LLDP actually accounts for quite a bit of data, from basic LLDP configuration of a device, to its neighbors, to low-level details such as counters, timers, and TTL information, often found in a combination of show commands including `show run`, `show lldp neighbors`, and `show lldp neighbor detail` just to name a few. The point is that multiple commands are required to generate and fully populate the data required for a complete YANG model. It's often unnecessary to worry about every feature knob, so in this exercise we will only worry about key aspects of LLDP, focusing on the LLDP neighbors of a given device. The reason we'll also be parsing interfaces is that the LLDP model actually makes references to interfaces that must be valid and exist on the device, so to properly parse LLDP, we need to parse interfaces too.
## Getting Familiar with Data
Before we get started, it's quite helpful to visually see the data that we'll be working with.
Let's look at the starting config along with the tree output of the YANG model we're generating data for, and the JSON representations of the data that we'll be creating with Yangify parsers.
### Native Configuration and Operational Data
View the baseline configuration:
```
%cat ../data/ios/ntc-r1/config.txt
```
View the LLDP neighbors output:
```
%cat ../data/ios/ntc-r1/lldp.txt
```
Let's also gain some insight into what the OpenConfig YANG model looks like for LLDP neighbors.
### Tree Output of the OpenConfig LLDP YANG Model
> Note: you can install `pyang` with `pip`.
```
ntc@nautobot:models (master)$ pyang -f tree /yangify/docs/tutorials/yang/yang-modules/openconfig/openconfig-lldp.yang
module: openconfig-lldp
+--rw lldp
+--rw config
| +--rw enabled? boolean
| +--rw hello-timer? uint64
| +--rw suppress-tlv-advertisement* identityref
| +--rw system-name? string
| +--rw system-description? string
| +--rw chassis-id? string
| +--rw chassis-id-type? oc-lldp-types:chassis-id-type
+--ro state
| +--ro enabled? boolean
| +--ro hello-timer? uint64
| +--ro suppress-tlv-advertisement* identityref
| +--ro system-name? string
| +--ro system-description? string
| +--ro chassis-id? string
| +--ro chassis-id-type? oc-lldp-types:chassis-id-type
| +--ro counters
| +--ro frame-in? yang:counter64
| +--ro frame-out? yang:counter64
| +--ro frame-error-in? yang:counter64
| +--ro frame-discard? yang:counter64
| +--ro tlv-discard? yang:counter64
| +--ro tlv-unknown? yang:counter64
| +--ro last-clear? yang:date-and-time
| +--ro tlv-accepted? yang:counter64
| +--ro entries-aged-out? yang:counter64
+--rw interfaces
+--rw interface* [name]
+--rw name -> ../config/name
+--rw config
| +--rw name? oc-if:base-interface-ref
| +--rw enabled? boolean
+--ro state
| +--ro name? oc-if:base-interface-ref
| +--ro enabled? boolean
| +--ro counters
| +--ro frame-in? yang:counter64
| +--ro frame-out? yang:counter64
| +--ro frame-error-in? yang:counter64
| +--ro frame-discard? yang:counter64
| +--ro tlv-discard? yang:counter64
| +--ro tlv-unknown? yang:counter64
| +--ro last-clear? yang:date-and-time
| +--ro frame-error-out? yang:counter64
+--ro neighbors
+--ro neighbor* [id]
+--ro id -> ../state/id
+--ro config
+--ro state
| +--ro system-name? string
| +--ro system-description? string
| +--ro chassis-id? string
| +--ro chassis-id-type? oc-lldp-types:chassis-id-type
| +--ro id? string
| +--ro age? uint64
| +--ro last-update? int64
| +--ro ttl? uint16
| +--ro port-id? string
| +--ro port-id-type? oc-lldp-types:port-id-type
| +--ro port-description? string
| +--ro management-address? string
| +--ro management-address-type? string
+--ro custom-tlvs
| +--ro tlv* [type oui oui-subtype]
| +--ro type -> ../state/type
| +--ro oui -> ../state/oui
| +--ro oui-subtype -> ../state/oui-subtype
| +--ro config
| +--ro state
| +--ro type? int32
| +--ro oui? string
| +--ro oui-subtype? string
| +--ro value? binary
+--ro capabilities
+--ro capability* [name]
+--ro name -> ../state/name
+--ro config
+--ro state
+--ro name? identityref
+--ro enabled? boolean
ntc@nautobot:models (master)$
```
For this exercise, we are only interested in a sub-set of the data from the full model. You get to choose exactly what parts of the model you want to parse with Yangify.
This is the sub-set of the YANG model our script and parsers will generate data for:
```
ntc@nautobot:models (master)$ pyang -f tree lldp/openconfig-lldp.yang
module: openconfig-lldp
+--rw lldp
+--rw config
| +--rw system-name? string
+--rw interfaces
+--rw interface* [name]
+--rw name -> ../config/name
+--rw config
| +--rw name? oc-if:base-interface-ref
+--ro neighbors
+--ro neighbor* [id]
+--ro id -> ../state/id
+--ro state
| +--ro id? string
| +--ro port-id? string
ntc@nautobot:models (master)$
```
And now a glimpse into native YANG:
### Native YANG Output
```
container lldp {
description
"Top-level container for LLDP configuration and state data";
container config { // this is the container that'll hold the system-name
description
"Configuration data ";
// shortened for brevity
```
### JSON Representation of OpenConfig LLDP YANG Data
Most importantly, here is JSON data of what we'll be building given the sub-set of the model in question. This is the whole premise behind Yangify. It's to provide a framework that makes it much easier to parse (and translate) data that maps directly back to YANG models. You should be able to cross-reference the ASCII tree back to the JSON based on the keys used in the following output:
```
{
"openconfig-lldp:lldp": {
"config": {
"system-name": "ntc-r1"
},
"interfaces": {
"interface": [
{
"name": "GigabitEthernet1",
"config": {
"name": "GigabitEthernet1"
},
"neighbors": {
"neighbor": [
{
"id": "ntc-r6",
"state": {
"id": "ntc-r6",
"port-id": "Gi0/4"
}
}
]
}
},
{
"name": "GigabitEthernet2",
"config": {
"name": "GigabitEthernet2"
},
"neighbors": {
"neighbor": [
{
"id": "ntc-r2",
"state": {
"id": "ntc-r2",
"port-id": "Gi0/1"
}
},
{
"id": "ntc-s2",
"state": {
"id": "ntc-s2",
"port-id": "Gi0/1"
}
}
]
}
}
]
}
}
}
```
That covers the data we'll be starting with as well as a view into what will be generated.
## Parsing Configurations
First, we'll look at how Yangify parses native "traditional CLI" configs.
As a reminder the following is our starting (partial configuration).
```
%cat ../data/ios/ntc-r1/config.txt
```
Let's get familiar with the text tree parser, the `parse_indented_config` function, used by Yangify.
```
import json
from yangify.parser.text_tree import parse_indented_config
with open("../data/ios/ntc-r1/config.txt", "r") as f:
config = f.read()
parsed_config = parse_indented_config(config.splitlines())
print(json.dumps(parsed_config, indent=4))
```
You can find more information about this parser in the API documentation, but it's essentially a smart parser that creates a dictionary for each command (or command block) while also understanding hierarchy. For commands with multiple words on the same line, it creates a nested dictionary per word, with the remainder of the line stored at the `#text` key at each level, and a `#standalone` marker under the key for the last word.
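For example, an interface block containing `description Hello GigE1` and `shutdown` ends up shaped roughly like this (the same structure shows up again in the `pdb` exploration later in this guide):
```python
{
    "description": {
        "#text": "Hello GigE1",
        "Hello": {
            "#text": "GigE1",
            "GigE1": {"#standalone": True}
        }
    },
    "shutdown": {"#standalone": True}   # single-word command -> just "#standalone"
}
```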
## Using Yangify
The first step is to create a script that we'll use to consume the parsers we're building. Our script will just print out the generated structured data. Using it beyond this for network operations is out of scope of this guide.
```
import json
from yangify import parser
from yangify.parser.text_tree import parse_indented_config
from yangson.datamodel import DataModel
import task1_lldp_parser
dm = DataModel.from_file("../yang/yang-library-data.json", ["../yang/yang-modules/ietf", "../yang/yang-modules/openconfig"])
class IOSParser(parser.RootParser):
class Yangify(parser.ParserData):
def init(self) -> None:
self.root_native = {}
self.root_native["show run"] = parse_indented_config(
self.native["show run"].splitlines()
)
self.native = self.root_native
# interfaces is YANG container. Variable names are always
# YANG containers or lists
interfaces = task1_lldp_parser.Interfaces
with open("../data/ios/ntc-r1/config.txt", "r") as f:
config = f.read()
data = {}
data["show run"] = config
```
Few things to note in the script so far:
* You'll always need to instantiate a `DataModel` object as shown at the top of the script with the variable `dm`. You'll need to pass in a JSON file that is a library of your available models (per an RFC) and then a list of available paths Yangify will search for those models.
* You'll always start with a base class that has an arbitrary name inheriting `parser.RootParser`, with a nested class with the special class name `Yangify`. Here you'll always set two properties, `root_native` and `native`, that'll be used as the data to be parsed. The object types of both of these variables don't matter, but it'll be common to use a dictionary just in case we need to pass in data from multiple show commands (or API calls) into the Parser.
* The variable above called `interfaces` is the outermost element being built by the parser. In YANG speak, `interfaces` is a YANG container, and this is how you'll likely start most of the development when using Yangify, rooting the parsing at a specific container.
> Note: `interfaces` is a YANG container.
We've mentioned `root_native` and `native`, but what exactly are they?
`root_native` is always the original starting point of the _native_ configuration. We'll soon see this could be text or any other data type. In fact, we'll start with a `show run` for our example and add a dictionary to it later on. When things are kicked off, we set `native` equal to `root_native`, so they are the same when parsing begins. As parsing takes place, `native` does transform within the parser as lists are processed. We will see this shortly.
## Building out Yangify Parsers
Before we take a look at the `task1_lldp_parser` Python module, let's take a quick look at the abbreviated YANG ASCII tree for interfaces that we are using for the tutorial:
```bash
ntc@nautobot:openconfig (lldp-dev)$ pyang -f tree openconfig-interfaces.yang
module: openconfig-interfaces
+--rw interfaces
+--rw interface* [name] // yang list
+--rw name -> ../config/name
+--rw config
| +--rw name? string
| +--rw type identityref
| +--rw mtu? uint16
| +--rw loopback-mode? boolean
| +--rw description? string
| +--rw enabled? boolean
+--rw subinterfaces
+--rw subinterface* [index]
+--rw index -> ../config/index
+--rw config
+--rw index? uint32
+--rw description? string
+--rw enabled? boolean
```
Take note of the structure: it's interfaces (YANG Container) -> interface (YANG list with key of name) -> a YANG leaf of name, and then another container called `config`. We won't be using anything else shown for the parsing.
Keep in mind, we already referenced the `interfaces` container in our starting script with the following statement:
```python
interfaces = task1_lldp_parser.Interfaces
```
This tells us the first class we'll need to create is called `Interfaces` in the `task1_lldp_parser` module.
Let's dive into the parser.
While the actual goal is to build a parser for LLDP neighbors, the LLDP neighbors OpenConfig YANG model references the OpenConfig interfaces model ensuring valid interfaces are being used in the LLDP model. This can be seen by a specific line in the LLDP model--refer back to the ASCII tree:
```
+--rw name? oc-if:base-interface-ref
```
Once we finish parsing interfaces, we'll move onto LLDP.
Let's now instantiate the `IOSParser` and test the initial parser (yet to be covered).
We are passing in two arguments below to get us going. The positional argument of the `DataModel` built earlier and then `data` which contains what we'll parse.
> Note: when you execute the next step, you're supposed to get an error.
```
from yangson.exceptions import SchemaError
p = IOSParser(dm, native=data)
try:
result = p.process()
print(json.dumps(result.raw_value(), indent=4))
except SchemaError as e:
print("Supressing output for readability:")
print(f"error: {e}")
```
This script uses a parser called `task1_lldp_parser` as you may have noticed in the imports. You can tell it didn't actually work. Let's start to explore the parser and see why it failed.
```
%cat task1_lldp_parser.py
```
### Parser Classes
Reviewing the parser, the first class to build is the `Interfaces` class since we already had it in our script.
Few things to note as you look at the parser above:
* Class names **ARE** arbitrary.
* Variable names **ARE NOT** arbitrary.
* Variable names are either YANG containers or lists.
* Methods in each Class are YANG leaf objects
With that said, the class has a variable called `interface`. This is the YANG list that contains the interfaces. We've assigned that variable a value of `Interface` and if we look in that class we see a method called `name` which is a YANG leaf. You can start to conceptualize the model with the variable and method names by cross-referencing the YANG tree output we saw earlier.
Don't worry, we'll go through the parser in more detail in just a minute.
Okay, so why didn't the processing work the first time? It's because we violated the YANG model.
Recall this part of the tree output:
```
module: openconfig-interfaces
+--rw interfaces
+--rw interface* [name] // yang list
+--rw name -> ../config/name
+--rw config
| +--rw name? string
| +--rw type identityref
```
Next to a leaf, there is a `?` if it's optional. We can see that there is a leaf called `type` that is required (no question mark), thus the `config` container is required.
Let's process the parser again, but this time disable YANG validation.
```
p = IOSParser(dm, native=data)
result = p.process(validate=False)
print(json.dumps(result.raw_value(), indent=4))
```
This time it worked because we disabled validation; it failed the first time because the `config` container is a required container for each interface.
Let's update the parser, now with a new name, to do more testing while adding the `config` container with a few leaves.
```
%cat task2_lldp_parser.py
```
Although we added a bit to the parser, we still need to understand the built-in attributes like `yy` being used along with the `Yangify` class and the `extract_elements` method. Let's take a look.
### Diving Deeper into the Yangify Parsing Logic
This is where things get interesting. YANG lists become fun to work with.
> Keep in mind, after doing a few of these, the process will become pretty methodical.
So what goes on inside this `Interface` class?
Let's look at the variable `config` and method called `name`.
By looking at the model and remembering what we described already, we should know `config` is either a YANG container or list, and based on the model itself, we do in fact know it's a YANG container. `name` on the other hand is a YANG leaf because all methods are YANG leaves!
#### The Yangify Class
Since the class we're inside of, `Interface`, is representative of a YANG list and the list requires a _key_ denoted by `[name]` in the ASCII tree output of the model, and that key is a YANG leaf itself, it's represented as a method inside the class. Again, **all method names are YANG leaves.**
In order to bring this all together, we need to look at the `Yangify` class being used inside `Interface`.
You'll basically use the `Yangify` class whenever you need to work with the original data we passed into `IOSParser`. This is a requirement when working with YANG lists because the data usually needs to be looped over, and the `extract_elements` method within the `Yangify` class is a hard requirement of the Yangify framework. It must be used for lists.
#### Parsing YANG Lists
In order to _extract individual list elements_ required to build the desired data set, we need to understand the existing data that we can consume. First and foremost, `native` and `root_native` are available, e.g. the data we passed in to kick things off in the first place. This data is now consumable as a dictionary created by the `parse_indented_config` function that we talked about earlier.
#### Accessing Available Data
Let's look at the available data both inside `extract_elements` and the leaf, e.g. the method called `name`.
The [YANG] **key** for the YANG list is `name`, which is the interface name itself, e.g. GigabitEthernet1 and so on. You can look back and see that the interface names are actually keys in the dictionary at `self.native["show run"]["interface"]`, which is the result of the text tree parser, e.g. the result of `parse_indented_config`. That's how we'll access the data.
As the interfaces are looped over and processed, we need to skip any key that is equal to `#text` due to an artifact of the `parse_indented_config` function (as documented in the Yangify docs) and skip any key with a `.` since that denotes a sub-interface. Sub-interfaces are handled differently per the interfaces YANG model and we are skipping them for now.
As the data (interfaces for now) is looped over, we need to return, or more precisely _yield_ the right data within the `extract_elements` method so the other methods can consume this data. Using `yield` allows the function to maintain its state between function calls (unlike returning data with the `return` statement).
The `extract_elements` function should always return a tuple that is a key-value pair, the key being the YANG list key and the value being the relevant configuration data for that element.
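To make that concrete, a minimal `extract_elements` for the interfaces list could look roughly like the sketch below; the task parser files shown via `%cat` are the source of truth, and this is only a summary of the rules just described:
```python
def extract_elements(self):
    # Iterate over the interfaces found in the parsed "show run"
    for interface, interface_data in self.native["show run"]["interface"].items():
        # "#text" is an artifact of parse_indented_config, not a real interface
        if interface == "#text":
            continue
        # keys containing "." are sub-interfaces, handled elsewhere in the model
        if "." in interface:
            continue
        # yield (key, value): the YANG list key and that interface's config data
        yield interface, interface_data
```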
Within the `Interface` class as shown above, we know the method called `name` is the key for the YANG list. It's shown here:
```python
def name(self) -> str:
return self.yy.key
```
The neat thing is that inside the `Interface` class, the key that was _yielded_ from `extract_elements`, e.g. `interface`, is accessible inside the methods in that class using `self.yy.key`. Note: You can actually access the value too using `self.yy.native`. Don't worry, we'll use that in an example soon.
This means that `self.yy.key` in the `name` method is equivalent to `interface` that was in the `yield` statement for each iteration of the loop.
Also take note of the `config` variable that was added, e.g. the YANG container assigned the value of `InterfaceConfig`. This container is required for the data to be valid. This new class has 2 methods, each method being a YANG leaf.
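As a rough sketch of that pattern (assuming the usual `parser.Parser` base class; the exact leaves live in the task file, and the `enabled` logic here is the same one revisited in the troubleshooting section further down):
```python
class InterfaceConfig(parser.Parser):
    def name(self) -> str:
        # same YANG list key yielded by extract_elements in the Interface class
        return self.yy.key

    def enabled(self) -> bool:
        # "shutdown" shows up as a standalone key in the parsed running config
        shutdown = self.yy.native.get("shutdown", {}).get("#standalone")
        return not shutdown
```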
Let's re-instantiate `IOSParser` and generate a new parsed output that includes the `config` container.
```
import task2_lldp_parser
class IOSParser(parser.RootParser):
class Yangify(parser.ParserData):
def init(self) -> None:
self.root_native = {}
self.root_native["show run"] = parse_indented_config(
self.native["show run"].splitlines()
)
self.native = self.root_native
interfaces = task2_lldp_parser.Interfaces
p = IOSParser(dm, native=data)
result = p.process(validate=False)
print(json.dumps(result.raw_value(), indent=4))
```
You should also know that this would still fail if we tried to validate the model because, as noted earlier, the leaf called `type` is required.
Let's add more YANG leafs to the config container.
```
%cat task3_lldp_parser.py
```
This class, `InterfaceConfig`, represents the `config` container for each interface instance within the interfaces list. Each method in this class is a YANG leaf.
When using lists like interfaces and you have a class that represents an instance like this, note that you still have access to `self.yy`.
For example, you'll note the `name` method in this class is the same as the `name` method in the `Interface` class, e.g. they are returning the same value. It's all about being aware of what data you have access to.
You'll also note that `self.yy.native` is used. We alluded to this already. `self.yy.native` is actually the _value_ in the key-value pair that `extract_elements` returned within the `Interface` class. However, it's not always equal to that value.
Everywhere you see `self.yy.native`, it is either equal to `self.native` or equal to the value returned by `extract_elements` during each loop iteration. It all depends on scoping and what type of data you're working with.
Let's re-instantiate `IOSParser` and generate our parsed output, this time also re-enabling validation.
```
import task3_lldp_parser
class IOSParser(parser.RootParser):
class Yangify(parser.ParserData):
def init(self) -> None:
self.root_native = {}
self.root_native["show run"] = parse_indented_config(
self.native["show run"].splitlines()
)
self.native = self.root_native
interfaces = task3_lldp_parser.Interfaces
p = IOSParser(dm, native=data)
result = p.process(validate=True)
print(json.dumps(result.raw_value(), indent=4))
```
### Troubleshooting Tip
If something isn't making sense, like some of these built-in attributes, just add `import pdb; pdb.set_trace()` to the line of interest, so you can use the Python debugger to explore the variables.
Let's try it:
```python
def enabled(self) -> bool:
import pdb; pdb.set_trace()
shutdown = self.yy.native.get("shutdown", {}).get("#standalone")
if shutdown:
return False
else:
return True
```
Run the script.
You'll see this output:
```python
(yangify) ntc@nautobot:lldp (lldp-dev)$ python lldp-sample.py
IOSParser: doesn't implement ietf-yang-library:modules-state
/openconfig-interfaces:interfaces/interface/config: doesn't implement openconfig-interfaces:mtu
> /ntc-data/repos/yangify/docs/lldp/lldp_parser.py(24)enabled()
-> shutdown = self.yy.native.get("shutdown", {}).get("#standalone")
(Pdb)
```
Now you can explore:
```python
(Pdb)
(Pdb) print(self.yy.key)
GigabitEthernet1
(Pdb)
(Pdb) import json
(Pdb) print(json.dumps(self.yy.native, indent=4))
{
"#list": [
{
"description": {
"#text": "Hello GigE1",
"Hello": {
"#text": "GigE1",
"GigE1": {
"#standalone": true
}
}
}
}
],
"#text": "shutdown",
"description": {
"#text": "Hello GigE1",
"Hello": {
"#text": "GigE1",
"GigE1": {
"#standalone": true
}
}
},
"shutdown": {
"#standalone": true
},
"#standalone": true
}
(Pdb)
(Pdb) exit
```
Once you understand the data, it becomes much easier to return the right data, of course. Using `pdb` is a bit cleaner than adding print statements all over the script.
**Make sure to remove the `pdb` statement.**
## Parsing LLDP Neighbors
At this point, we're halfway there. We've parsed interfaces and are ready to begin parsing LLDP neighbors.
Our interest is to generate data for LLDP very much focused on the YANG container called `lldp` that we looked at earlier in this tutorial.
This means we need to:
a) add the data required for the LLDP parsers to consume
b) add the line in the script to create the `lldp` container
c) update `root_native` to include the new data
d) update the `taskN_lldp_parser` Python module with the appropriate parser classes
Reminder, these are the neighbors we are working with:
```
%cat ../data/ios/ntc-r1/lldp.txt
```
### Parsing the LLDP data
We've added the data, but the only native text parser in Yangify is the text tree parser meant for traditional CLI configurations, not operational data commands like `show lldp neighbors`. However, there are a number of pre-built TextFSM templates in [ntc-templates](https://github.com/networktocode/ntc-templates) that we can leverage. Let's try it, but first make sure you `pip install textfsm` and clone the `ntc-templates` repository.
We can use the following code block to use the TextFSM template and then subsequently transform the data into a more usable structure:
```
import textfsm
with open("../data/ios/ntc-r1/lldp.txt", "r") as f:
lldp_txt = f.read()
template = '../data/ntc-templates/cisco_ios_show_lldp_neighbors.template'
table = textfsm.TextFSM(open(template))
converted_data = table.ParseText(lldp_txt)
print(converted_data)
neighbors = {}
for item in converted_data:
neighbor = item[0]
local_interface = item[1].replace("Gi", "GigabitEthernet")
neighbor_interface = item[2]
if not neighbors.get(local_interface):
neighbors[local_interface] = []
neighbor = dict(
local_interface=local_interface,
neighbor=neighbor,
neighbor_interface=neighbor_interface,
)
neighbors[local_interface].append(neighbor)
print(json.dumps(neighbors, indent=4))
```
Add the parsed neighbors to `data`, which is the data we actually parse to generate the YANG-based JSON.
```
data["lldp_data"] = neighbors
```
Now let's add a basic parser for LLDP - note the two new classes at the bottom of the new parser:
```
%cat task4_lldp_parser.py
```
Using the new parser in a script to create the `lldp` container:
```
import task4_lldp_parser
class IOSParser(parser.RootParser):
class Yangify(parser.ParserData):
def init(self) -> None:
self.root_native = {}
self.root_native["show run"] = parse_indented_config(
self.native["show run"].splitlines()
)
self.root_native["lldp_data"] = self.native["lldp_data"]
self.native = self.root_native
interfaces = task4_lldp_parser.Interfaces
lldp = task4_lldp_parser.LLdp
```
Execute the script to generate JSON data for the `lldp->config` containers with a leaf of `system-name`.
```
p = IOSParser(dm, native=data)
result = p.process(validate=True)
print(json.dumps(result.raw_value(), indent=4))
```
We are going to add the `interfaces` container but just return the `name` of each interface to get started.
Because `interface` under `interfaces` is a YANG list, we have to implement `extract_elements`. It's the same approach we used earlier when parsing interfaces. The only difference is we have a little more logic in `extract_elements` now:
```
%cat task5_lldp_parser.py
```
As stated in the docstring, we're using the show run data to build out the proper framework for the interfaces on the device, in contrast to `lldp_data`, which only contains interfaces that have active neighbors.
```
import task5_lldp_parser
class IOSParser(parser.RootParser):
class Yangify(parser.ParserData):
def init(self) -> None:
self.root_native = {}
self.root_native["show run"] = parse_indented_config(
self.native["show run"].splitlines()
)
self.root_native["lldp_data"] = self.native["lldp_data"]
self.native = self.root_native
interfaces = task5_lldp_parser.Interfaces
lldp = task5_lldp_parser.LLdp
p = IOSParser(dm, native=data)
result = p.process(validate=False)
print(json.dumps(result.raw_value(), indent=4))
```
Just like before, if you do want to see it with validation on, it'll fail because a `name` leaf is required in a container called `config` per interface.
```
from yangson.exceptions import SemanticError
p = IOSParser(dm, native=data)
try:
result = p.process(validate=True)
print(json.dumps(result.raw_value(), indent=4))
except SemanticError as e:
print("Supressing output for readability:")
print(f"error: {e}")
```
Let's update the parser with the `config` container that has a single leaf called `name` using the `task6_lldp_parser`:
```
%cat task6_lldp_parser.py
import task6_lldp_parser
class IOSParser(parser.RootParser):
class Yangify(parser.ParserData):
def init(self) -> None:
self.root_native = {}
self.root_native["show run"] = parse_indented_config(
self.native["show run"].splitlines()
)
self.root_native["lldp_data"] = self.native["lldp_data"]
self.native = self.root_native
interfaces = task6_lldp_parser.Interfaces
lldp = task6_lldp_parser.LLdp
p = IOSParser(dm, native=data)
aresult = p.process(validate=True)
print(json.dumps(aresult.raw_value(), indent=4))
```
Next, we need to add the `neighbors` container for each interface; this new container will sit at the same level of the hierarchy as `config` does now. Let's look at it in `task7_lldp_parser`:
```
%cat task7_lldp_parser.py
```
Pay extra attention to the notes in the docstring this time around, because it's the first time we're showing how to use `self.keys`. In the current loop (`extract_elements`), the interfaces from the `show run` are being iterated over; now we need to take that key and use it to access the neighbors for that interface stored in `lldp_data`. As you can see, `self.keys` is a Yangify dictionary that helps maintain context on the current object being processed. To see this better, you can explore it using `pdb` as mentioned earlier.
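To make this more concrete, here is a rough, hypothetical sketch of what the neighbor extraction could look like. This is not the actual contents of `task7_lldp_parser.py`; the path string used to index `self.keys` and the use of `self.root_native` inside a child parser are assumptions based on the model tree and the `RootParser` shown in this tutorial.
```
from yangify import parser


class Neighbor(parser.Parser):
    class Yangify(parser.ParserData):
        def extract_elements(self):
            # Hypothetical sketch only -- the exact path key is an assumption.
            # self.keys tracks the keys of the lists we are currently nested in,
            # e.g. the interface name from the outer `interface` list.
            interface = self.keys["/openconfig-lldp:lldp/interfaces/interface"]
            # Look up the neighbors parsed from `show lldp neighbors` for that interface.
            for entry in self.root_native["lldp_data"].get(interface, []):
                yield entry["neighbor"], entry

    def id(self) -> str:
        return self.yy.key
```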
```
import json
from yangify import parser
from yangify.parser.text_tree import parse_indented_config
import textfsm
import task7_lldp_parser
class IOSParser(parser.RootParser):
class Yangify(parser.ParserData):
def init(self) -> None:
self.root_native = {}
self.root_native["show run"] = parse_indented_config(
self.native["show run"].splitlines()
)
self.root_native["lldp_data"] = self.native["lldp_data"]
self.native = self.root_native
interfaces = task7_lldp_parser.Interfaces
lldp = task7_lldp_parser.LLdp
p = IOSParser(dm, native=data)
result = p.process(validate=False)
print(json.dumps(result.raw_value(), indent=4))
```
What's still missing is the `port-id` of each neighbor. Let's add that in (within the `state` container):
```
%cat task8_lldp_parser.py
import json
from yangify import parser
from yangify.parser.text_tree import parse_indented_config
import textfsm
import task8_lldp_parser
class IOSParser(parser.RootParser):
class Yangify(parser.ParserData):
def init(self) -> None:
self.root_native = {}
self.root_native["show run"] = parse_indented_config(
self.native["show run"].splitlines()
)
self.root_native["lldp_data"] = self.native["lldp_data"]
self.native = self.root_native
interfaces = task8_lldp_parser.Interfaces
lldp = task8_lldp_parser.LLdp
p = IOSParser(dm, native=data, state=True)
result = p.process(validate=True)
print(json.dumps(result.raw_value(), indent=4))
```
Hopefully this helps really convey what Yangify is all about, how it can be used, and gets you on your way to building Yangify parsers!
```
import csv
from datetime import date, timedelta, datetime
import matplotlib.pyplot as plt
import numpy as np
from math import sqrt
```
#### Read the file
Skip irrelevant data: death dates not within the expected year, and pages created after the beginning of that year (to filter out the bias from posthumously created pages)
```
def read_file(year):
d = []
filename = "deaths{}.csv".format(year)
with open(filename, 'r') as f:
r = csv.reader(f)
for row in r:
if row[0]=='Name': continue
name = row[0]
birth = date.fromisoformat(row[1])
death = date.fromisoformat(row[2])
covid = (row[3] == 'True')
created = date.fromisoformat(row[4])
if death < date(year,1,1) or death >=date(year+1,1,1):
print ("Page {} weird death date {}".format(name, death))
continue
if created >= date(year,1,1):
#print ("Page {} created too late: {}".format(name, death))
continue
d.append([name,birth,death,covid,created])
return d
d2020 = read_file(2020)
def hist_death(data, start_day, end_day=None, filt=None):
if not filt: filt = lambda _: True
if not end_day: end_day = date(start_day.year+1,1,1)
num_days = (end_day-start_day).days
dates = [start_day+timedelta(days=j) for j in range(num_days)]
count = np.zeros(num_days)
for row in data:
if not filt(row):
continue
j=(row[2]-start_day).days
if j<0 or j>=num_days:
continue
count[j]+=1
return dates, count
end_day = date(2020,12,31) # Skip Dec 31 to make a 365 day year
start_day = date(2020,1,1)
dates2020, count_2020 = hist_death(d2020, start_day, end_day)
_, cvd2020 = hist_death(d2020, start_day, end_day, lambda row: row[3])
def ma(x, win=7):
w = np.ones(win)/win
return np.convolve(x, w, 'valid')
ma2020 = ma(count_2020)
d2019 = read_file(2019)
dates2019, count2019 = hist_death(d2019, date(2019,1,1))
ma2019 = ma(count2019)
d2021 = read_file(2021)
end_day = date(2021,5,18)
start_day = date(2021,1,1)
dates2021, count_2021 = hist_death(d2021, start_day, end_day)
ma2021 = ma(count_2021)
_,cvd2021 = hist_death(d2021, start_day, end_day, lambda row: row[3])
```
#### Plotting daily mortality
`All` - all deaths; `COVID` - deaths with COVID or coronavirus mentioned; `Base` - non-COVID deaths; `2019` - corresponding data from 2019 for comparison.
The solid lines are 7-day moving averages.
```
plt.figure(figsize=(12,8))
plt.plot_date(dates2020[:len(count_2021)], count_2021, 'gx')
plt.plot_date(dates2020[:len(count_2021)], cvd2021, 'g^')
plt.plot_date(dates2020, count_2020, 'kx')
plt.plot_date(dates2020, cvd2020, 'r^')
plt.plot_date(dates2020, count2019, 'yx')
plt.plot_date(dates2020[3:len(count_2021)-3], ma2021, 'g-')
plt.plot_date(dates2020[3:-3], ma2020, 'k-')
plt.plot_date(dates2020[3:-3], ma2019, 'y-')
plt.legend(['2021', '2021 COVID', '2020', '2020 COVID', '2019'])
(np.sum(cvd2020) + np.sum(cvd2021))/(np.sum(count_2020)+np.sum(count_2021))
ages = [(row[2] - row[1])/timedelta(days=365.25) for row in d2020+d2021]
ages_no = [(row[2] - row[1])/timedelta(days=365.25) for row in d2020+d2021 if not row[3]]
ages_cvd = [(row[2] - row[1])/timedelta(days=365.25) for row in d2020+d2021 if row[3]]
ages2019 = [(row[2] - row[1])/timedelta(days=365.25) for row in d2019]
```
#### Age at death histogram
Blue = non-COVID deaths; orange = COVID deaths (both plotted as normalized densities)
```
plt.figure(figsize=(10,6))
plt.hist(ages_no, bins=22, histtype='step', density=True, range=(0,110))
plt.hist(ages_cvd, bins=22, histtype='step', density=True, range=(0,110))
wt = len(count2019)/(len(count_2020)+len(count_2021))
plt.figure(figsize=(10,6))
h=plt.hist(ages, weights=np.ones(len(ages))*wt, bins=22, histtype='step', color='g', range=(0,110))
ho=plt.hist(ages2019, bins=22, histtype='step', color='b', range=(0,110))
plt.legend(['2020-2021', '2019'])
plt.figure(figsize=(10,6))
plt.bar(np.arange(0,110,5),h[0]-ho[0], width=5, align='edge')
np.median(ages_no), np.median(ages_cvd), np.average(ages_no), np.average(ages_cvd)
```
### Cumulative anomaly compared to the 2017-2019 average since the start of the year
```
d2018 = read_file(2018)
dates2018, count2018 = hist_death(d2018, date(2018,1,1))
d2017 = read_file(2017)
dates2017, count2017 = hist_death(d2017, date(2017,1,1))
c2021 = np.cumsum(count_2021)
c2020 = np.cumsum(count_2020)
c2019 = np.cumsum(count2019)
c2018 = np.cumsum(count2018)
c2017 = np.cumsum(count2017)
print(c2017[-1], c2018[-1], c2019[-1], c2020[-1], c2021[-1])
plt.figure(figsize=(12,6))
plt.plot_date(dates2020[:len(c2021)], c2021)
plt.plot_date(dates2020, c2020)
plt.plot_date(dates2020, c2019)
plt.plot_date(dates2020, c2018)
plt.plot_date(dates2020, c2017)
plt.legend(['2021', '2020', '2019', '2018', '2017'])
```
#### Same but trying to control against the corpus size
```
wp_size = np.array([5321200, 5541900, 5773600, 5989400, 6219700])
weight = wp_size[-1]/wp_size
print(weight)
cs2017, cs2018, cs2019, cs2020, cs2021 = c2017*weight[0], c2018*weight[1], c2019*weight[2], c2020*weight[3], c2021*weight[4]
plt.figure(figsize=(12,6))
csavg = (cs2019+cs2018+cs2017)/3.0
plt.plot_date(dates2020[:len(c2021)], cs2021 - csavg[:len(c2021)])
plt.plot_date(dates2020, cs2020-csavg)
plt.plot_date(dates2020, cs2019-csavg)
plt.plot_date(dates2020, cs2018-csavg)
plt.plot_date(dates2020, cs2017-csavg)
plt.legend(['2021', '2020', '2019', '2018', '2017'])
```
That doesn't look very good; the corpus of biography pages probably grows more slowly than Wikipedia as a whole. OK, let's instead assume the corpus grew linearly (at the roughly 10% scale involved here, linear growth doesn't differ significantly from exponential).
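To spell out that parenthetical: for small growth rates, $(1+r)^n \approx e^{nr} \approx 1 + nr + \tfrac{(nr)^2}{2}$, so with total growth $nr \approx 0.10$ the exponential and linear approximations differ by only about $\tfrac{(0.10)^2}{2} = 0.005$, i.e. roughly half a percent.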
```
f1,f0 = np.polyfit([2017,2018,2019], [c2017[-1],c2018[-1],c2019[-1]], deg=1)
eoy = f0+f1*np.array([2017,2018,2019,2020,2021])
print(eoy)
yearly = (c2019+c2018+c2017)/3.0 - eoy[1]*np.arange(365)/365
plt.figure(figsize=(12,6))
plt.plot_date(dates2020[:len(c2021)], c2021-eoy[4]*np.arange(len(c2021))/365-yearly[:len(c2021)])
plt.plot_date(dates2020, c2020-eoy[3]*np.arange(len(c2020))/365-yearly[:len(c2020)])
plt.plot_date(dates2020, c2019-eoy[2]*np.arange(365)/365-yearly)
plt.plot_date(dates2020, c2018-eoy[1]*np.arange(365)/365-yearly)
plt.plot_date(dates2020, c2017-eoy[0]*np.arange(365)/365-yearly)
plt.legend(['2021','2020','2019','2018','2017'])
expected2020 = (c2019+c2018+c2017)/3.0 + 2*f1*np.arange(365)/365
plt.figure(figsize=(12,6))
plt.plot_date(dates2020,expected2020)
plt.plot_date(dates2020, c2020)
print(c2020[-1]/expected2020[-1])  # ratio of observed to expected cumulative deaths for 2020
weeks2020 = [dates2020[7*k] for k in range(52)]
c_wk2020 = np.array([c2020[7*(k+1)]-c2020[7*k] for k in range(52)])
e_wk2020 = np.array([expected2020[7*(k+1)]-expected2020[7*k] for k in range(52)])
weeks2021 = [dates2021[7*k] for k in range(len(dates2021)//7)]
expected2021 = (c2019+c2018+c2017)/3.0 + 3*f1*np.arange(365)/365
c_wk2021 = np.array([c2021[7*(k+1)]-c2021[7*k] for k in range(len(weeks2021))])
e_wk2021 = np.array([expected2021[7*(k+1)]-expected2021[7*k] for k in range(len(weeks2021))])
plt.figure(figsize=(12,6))
e_weekly = expected2020[-1]/52
plt.bar(weeks2020, (c_wk2020-e_wk2020)/e_weekly,
width=7, color=np.where(c_wk2020>e_wk2020, 'r', 'b'), ls='--')
plt.bar(weeks2021, (c_wk2021-e_wk2021)/e_weekly,
width=7, color=np.where(c_wk2021>e_wk2021, 'r', 'b'), ls='--')
```
# Lambda School Data Science - Logistic Regression
Logistic regression is the baseline for classification models, as well as a handy way to predict probabilities (since those too live in the unit interval). While relatively simple, it is also the foundation for more sophisticated classification techniques such as neural networks (many of which can effectively be thought of as networks of logistic models).
## Lecture - Where Linear goes Wrong
### Return of the Titanic 🚢
You've likely already explored the rich dataset that is the Titanic - let's use regression and try to predict survival with it. The data is [available from Kaggle](https://www.kaggle.com/c/titanic/data), so we'll also play a bit with [the Kaggle API](https://github.com/Kaggle/kaggle-api).
### Get data, option 1: Kaggle API
#### Sign up for Kaggle and get an API token
1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one.
2. [Follow these instructions](https://github.com/Kaggle/kaggle-api#api-credentials) to create a Kaggle “API Token” and download your `kaggle.json` file. If you are using Anaconda, put the file in the directory specified in the instructions.
_This will enable you to download data directly from Kaggle. If you run into problems, don’t worry — I’ll give you an easy alternative way to download today’s data, so you can still follow along with the lecture hands-on. And then we’ll help you through the Kaggle process after the lecture._
#### Put `kaggle.json` in the correct location
- ***If you're using Anaconda,*** put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-api#api-credentials).
- ***If you're using Google Colab,*** upload the file to your Google Drive, and run this cell:
```
from google.colab import drive
drive.mount('/content/drive')
%env KAGGLE_CONFIG_DIR=/content/drive/My Drive/
```
#### Install the Kaggle API package and use it to get the data
You also have to join the Titanic competition to have access to the data
```
!pip install kaggle
!kaggle competitions download -c titanic
```
### Get data, option 2: Download from the competition page
1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one.
2. [Go to the Titanic competition page](https://www.kaggle.com/c/titanic) to download the [data](https://www.kaggle.com/c/titanic/data).
### Get data, option 3: Use Seaborn
```
import seaborn as sns
train = sns.load_dataset('titanic')
```
But Seaborn's version of the Titanic dataset is not identical to Kaggle's version, as we'll see during this lesson!
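For instance, you could compare the column names of the two versions side by side (a quick sketch; the exact differences depend on which versions of the files you have):
```
import pandas as pd
import seaborn as sns

kaggle_train = pd.read_csv('train.csv')       # Kaggle's version: capitalized columns like 'Survived', 'Pclass'
seaborn_train = sns.load_dataset('titanic')   # Seaborn's version: lowercase columns plus derived ones like 'alone'

# Columns that appear in only one of the two versions
print(set(kaggle_train.columns) ^ set(seaborn_train.columns))
```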
### Read data
```
import pandas as pd
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
train.shape, test.shape
train.sample(n=5)
test.sample(n=5)
target = 'Survived'
train[target].value_counts(normalize=True)
train.describe(include='number')
train.describe(exclude='number')
```
### How would we try to do this with linear regression?
https://scikit-learn.org/stable/modules/impute.html
```
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
features = ['Pclass', 'Age', 'Fare']
target = 'Survived'
X_train = train[features]
y_train = train[target]
X_test = test[features]
imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train)
X_test_imputed = imputer.transform(X_test)
lin_reg = LinearRegression()
lin_reg.fit(X_train_imputed, y_train)
X_train.shape, X_train_imputed.shape, X_test.shape, X_test_imputed.shape
X_train.head(6)
X_train_imputed[:6]
X_train['Age'].mean()
X_test.tail()
X_test_imputed[-5:]
X_test['Age'].mean()
pd.concat([X_train, X_test])['Age'].mean()
import numpy as np
test_case = np.array([[1, 5, 500]]) # Rich 5-year old in first class
lin_reg.predict(test_case)
y_pred = lin_reg.predict(X_test_imputed)
pd.Series(y_pred).describe()
pd.Series(lin_reg.coef_, X_train.columns)
lin_reg.intercept_
```
### How would we do this with Logistic Regression?
```
from sklearn.linear_model import LogisticRegression
# Linear Regression
# lin_reg = LinearRegression()
# lin_reg.fit(X_train_imputed, y_train)
# lin_reg.predict(test_case)
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train_imputed, y_train)
print('Prediction for rich 5 year old:', log_reg.predict(test_case))
print('Predicted probabilities for rich 5 year old:', log_reg.predict_proba(test_case))
threshold = 0.5
(log_reg.predict_proba(X_test_imputed)[:, 1] > threshold).astype(int)
manual_predictions = (log_reg.predict_proba(X_test_imputed)[:,1] > threshold).astype(int)
direct_predictions = log_reg.predict(X_test_imputed)
all(manual_predictions == direct_predictions)
```
### How accurate is the Logistic Regression?
```
score = log_reg.score(X_train_imputed, y_train)
print('Train Accuracy Score', score)
# Note: Kaggle's Titanic test set has no 'Survived' labels, so y_test is not defined here;
# this will raise a NameError unless you hold out a labeled validation split yourself.
score = log_reg.score(X_test_imputed, y_test)
print('Test Accuracy Score', score)
from sklearn.model_selection import cross_val_score
scores = cross_val_score(log_reg, X_train_imputed, y_train, cv=10)
print('Cross-Validation Accuracy Scores', scores)
scores = pd.Series(scores)
scores.min(), scores.mean(), scores.max()
X_train_imputed.shape
y_pred = log_reg.predict(X_train_imputed)
len(y_pred)
len(y_train)
y_pred[:5]
y_train[:5].values
correct_predictions = 4
total_predictions = 5
accuracy = correct_predictions / total_predictions
print(accuracy)
from sklearn.metrics import accuracy_score
accuracy_score(y_train[:5], y_pred[:5])
y_train.value_counts(normalize=True)
```
### What's the math for the Logistic Regression?
https://en.wikipedia.org/wiki/Logistic_function
https://en.wikipedia.org/wiki/Logistic_regression#Probability_of_passing_an_exam_versus_hours_of_study
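In short, logistic regression passes a linear combination of the features through the logistic sigmoid to turn it into a probability:

$$P(y=1 \mid x) = \sigma(\beta_0 + \beta_1 x_1 + \dots + \beta_n x_n), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}$$

The coefficients $\beta$ are what `log_reg.coef_` and `log_reg.intercept_` hold below.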
```
log_reg.coef_
log_reg.intercept_
# The logistic sigmoid "squishing" function,
# implemented to work with numpy arrays
def sigmoid(x):
return 1 / (1 + np.e**(-x))
sigmoid(np.dot(log_reg.coef_, test_case.T) + log_reg.intercept_)
sigmoid(log_reg.coef_ @ test_case.T + log_reg.intercept_)
log_reg.predict_proba(test_case)
```
## Feature Engineering
Get the [Category Encoder](http://contrib.scikit-learn.org/categorical-encoding/) library
If you're running on Google Colab:
```
!pip install category_encoders
```
If you're running locally with Anaconda:
```
!conda install -c conda-forge category_encoders
```
```
import seaborn as sns
sns_titanic = sns.load_dataset('titanic')
print(sns_titanic.shape)
train.shape
sns_titanic.head()
# Titanic
train.head()
def make_features(X):
X = X.copy()
X['adult_male'] = (X['Sex'] == 'male') & (X['Age'] >= 16)
X['alone'] = (X['SibSp'] == 0) & (X['Parch'] == 0)
X['last_name'] = X['Name'].str.split(',').str[0]
X['title'] = X['Name'].str.split(',').str[1].str.split('.').str[0]
return X
train = make_features(train)
test = make_features(test)
train['adult_male'].value_counts()
train['alone'].value_counts()
train['title'].value_counts()
sns_titanic.head()
```
## Assignment: real-world classification
We're going to check out a larger dataset - the [FMA Free Music Archive data](https://github.com/mdeff/fma). It has a selection of CSVs with metadata and calculated audio features that you can load and try to use to classify genre of tracks. To get you started:
### Get and unzip the data
#### Google Colab
```
!wget https://os.unil.cloud.switch.ch/fma/fma_metadata.zip
!unzip fma_metadata.zip
```
#### Windows
- Download the [zip file](https://os.unil.cloud.switch.ch/fma/fma_metadata.zip)
- You may need to use [7zip](https://www.7-zip.org/download.html) to unzip it
#### Mac
- Download the [zip file](https://os.unil.cloud.switch.ch/fma/fma_metadata.zip)
- You may need to use [p7zip](https://superuser.com/a/626731) to unzip it
### Look at the first few lines of the raw file
```
!head -n 4 fma_metadata/tracks.csv
```
### Read with pandas
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
```
tracks = pd.read_csv('fma_metadata/tracks.csv', header=[0,1], index_col=0)
tracks.head()
```
### Fit Logistic Regression!
This dataset is bigger than many you've worked with so far, and while it should fit in Colab, it can take a while to run. That's part of the challenge!
Your tasks:
- Clean up the variable names in the dataframe
- Use logistic regression to fit a model predicting (primary/top) genre
- Inspect, iterate, and improve your model
- Answer the following questions (written, ~paragraph each):
- What are the best predictors of genre?
- What information isn't very useful for predicting genre?
- What surprised you the most about your results?
*Important caveats*:
- This is going to be difficult data to work with - don't let the perfect be the enemy of the good!
- Be creative in cleaning it up - if the best way you know how to do it is download it locally and edit as a spreadsheet, that's OK!
- If the data size becomes problematic, consider sampling/subsetting, or [downcasting numeric datatypes](https://www.dataquest.io/blog/pandas-big-data/).
- You do not need perfect or complete results - just something plausible that runs, and that supports the reasoning in your written answers
If you find that fitting a model to classify *all* genres isn't very good, it's totally OK to limit to the most frequent genres, or perhaps trying to combine or cluster genres as a preprocessing step. Even then, there will be limits to how good a model can be with just this metadata - if you really want to train an effective genre classifier, you'll have to involve the other data (see stretch goals).
This is real data - there is no "one correct answer", so you can take this in a variety of directions. Just make sure to support your findings, and feel free to share them as well! This is meant to be practice for dealing with other "messy" data, a common task in data science.
## Resources and stretch goals
- Check out the other .csv files from the FMA dataset, and see if you can join them or otherwise fit interesting models with them
- [Logistic regression from scratch in numpy](https://blog.goodaudience.com/logistic-regression-from-scratch-in-numpy-5841c09e425f) - if you want to dig in a bit more to both the code and math (also takes a gradient descent approach, introducing the logistic loss function)
- Create a visualization to show predictions of your model - ideally show a confidence interval based on error!
- Check out and compare classification models from scikit-learn, such as [SVM](https://scikit-learn.org/stable/modules/svm.html#classification), [decision trees](https://scikit-learn.org/stable/modules/tree.html#classification), and [naive Bayes](https://scikit-learn.org/stable/modules/naive_bayes.html). The underlying math will vary significantly, but the API (how you write the code) and interpretation will actually be fairly similar.
- Sign up for [Kaggle](https://kaggle.com), and find a competition to try logistic regression with
- (Not logistic regression related) If you enjoyed the assignment, you may want to read up on [music informatics](https://en.wikipedia.org/wiki/Music_informatics), which is how those audio features were actually calculated. The FMA includes the actual raw audio, so (while this is more of a longterm project than a stretch goal, and won't fit in Colab) if you'd like you can check those out and see what sort of deeper analysis you can do.
```
from pprint import pprint
from demo.server.demo_helpers import get_schema, execute_query, pretty_print_data
```
## The autogenerated schema is real GraphQL SDL
```
print(get_schema())
```
## Queries are in real GraphQL syntax, but have different semantics
### Naming fields does not cause them to be output
- Explicit `@output` directive required to request that a field is output.
- This allows filtering on fields without outputting them, and many other nice things.
```
# Get some details about the BOS airport.
query = '''{
Airport {
name @output(out_name: "airport_name")
city_served @output(out_name: "airport_city")
iata_code @filter(op_name: "=", value: ["$code"]) # filtered but not output
}
}'''
args = {
'code': 'BOS',
}
_, result = execute_query(query, args)
pretty_print_data(result)
```
### Why not just use `input`-typed objects for filtering? Several reasons:
- We already discussed that the necessary autogenerated `input` objects are going to be massive. On a multi-million-line schema (not counting documentation), this adds up quickly!
- Besides, what if we want to apply a particular kind of filter more than once on a given field?
```
# Get the airlines and their registration countries where
# the airline name contains both "Airways" and "Jet".
query = '''{
Airline {
name @filter(op_name: "has_substring", value: ["$name_substring1"])
@filter(op_name: "has_substring", value: ["$name_substring2"])
@output(out_name: "airline")
out_Airline_RegisteredIn {
name @output(out_name: "country")
}
}
}'''
args = {
'name_substring1': 'Airways',
'name_substring2': 'Jet',
}
_, result = execute_query(query, args)
pretty_print_data(result)
```
### The output format is shaped like a dataframe (list of dicts, rows and columns)
- This is what the database returned as well; producing "proper" nested GraphQL requires postprocessing.
- We avoid doing any kind of postprocessing in general since that costs performance.
```
pprint(result)
```
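For illustration only, here is one way the flat rows could be nested after the fact, grouping the airline/country rows returned by the query above (hypothetical post-processing; the compiler deliberately skips this step to avoid the extra cost):
```
from collections import defaultdict

# Group the flat rows by airline, nesting the registration countries under each one.
nested = defaultdict(list)
for row in result:
    nested[row['airline']].append(row['country'])

pprint(dict(nested))
```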
### We also expose advanced database features via directives, such as recursive JOINs (edge traversals)
- Check out the compiler's `@recurse`, `@optional`, and `@tag` directives: https://github.com/kensho-technologies/graphql-compiler
```
# Get the names of all countries in Europe that end in "land".
query = '''{
Region {
name @filter(op_name: "=", value: ["$region"])
out_GeographicArea_SubArea @recurse(depth: 5) {
... on Country {
name @output(out_name: "country_name") @filter(op_name: "ends_with", value: ["$suffix"])
}
}
}
}'''
args = {
'region': 'Europe',
'suffix': 'land',
}
_, result = execute_query(query, args)
pretty_print_data(result)
```
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/transformers/HuggingFace%20in%20Spark%20NLP%20-%20XLM-RoBERTa.ipynb)
## Import XLM-RoBERTa models from HuggingFace 🤗 into Spark NLP 🚀
Let's keep in mind a few things before we start 😊
- This feature is only in `Spark NLP 3.1.x` and after. So please make sure you have upgraded to the latest Spark NLP release
- You can import models for XLM-RoBERTa from HuggingFace but they have to be compatible with `TensorFlow` and they have to be in `Fill Mask` category. Meaning, you cannot use XLM-RoBERTa models trained/fine-tuned on a specific task such as token/sequence classification.
- Some of the `XLM-RoBERTa` models are larger than 2GB, which is the limit in Spark NLP. Unfortunately, the models must be less than 2GB in size
## Export and Save HuggingFace model
- Let's install `HuggingFace` and `TensorFlow`. You don't need `TensorFlow` to be installed for Spark NLP, however, we need it to load and save models from HuggingFace.
- We pin TensorFlow to version `2.4.1` and Transformers to `4.6.1`. This doesn't mean it won't work with future releases, but these are the versions that have been tested successfully.
- XLMRobertaTokenizer requires the `SentencePiece` library, so we install that as well
```
!pip install -q transformers==4.6.1 tensorflow==2.4.1 sentencepiece
```
- HuggingFace comes with a native `saved_model` feature inside the `save_pretrained` function for TensorFlow-based models. We will use that to save it as a TF `SavedModel`.
- We'll use the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) model from HuggingFace as an example
- In addition to `TFXLMRobertaModel` we also need to save the `XLMRobertaTokenizer`. This is the same for every model; these are the assets needed for tokenization inside Spark NLP.
- Since the `xlm-roberta-base` model is PyTorch-based, we will use the `from_pt=True` param to convert it to TensorFlow
```
from transformers import XLMRobertaTokenizer, TFXLMRobertaModel
import tensorflow as tf
# xlm-roberta-base
MODEL_NAME = 'xlm-roberta-base'
XLMRobertaTokenizer.from_pretrained(MODEL_NAME, return_tensors="pt").save_pretrained("./{}_tokenizer".format(MODEL_NAME))
TFXLMRobertaModel.from_pretrained(MODEL_NAME, from_pt=True).save_pretrained("./{}".format(MODEL_NAME), saved_model=True)
```
Let's have a look inside these two directories and see what we are dealing with:
```
!ls -l {MODEL_NAME}
!ls -l {MODEL_NAME}/saved_model/1
!ls -l {MODEL_NAME}_tokenizer
```
- as you can see, we need the SavedModel from the `saved_model/1/` path
- we will also need the `sentencepiece.bpe.model` file from the tokenizer
- all we need to do is copy the `sentencepiece.bpe.model` file into `saved_model/1/assets`, which is where Spark NLP will look for it
```
# let's copy sentencepiece.bpe.model file to saved_model/1/assets
!cp {MODEL_NAME}_tokenizer/sentencepiece.bpe.model {MODEL_NAME}/saved_model/1/assets
```
## Import and Save XLM-RoBERTa in Spark NLP
- Let's install and setup Spark NLP in Google Colab
- This part is pretty easy via our simple script
```
! wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
```
Let's start Spark with Spark NLP included via our simple `start()` function
```
import sparknlp
# let's start Spark with Spark NLP
spark = sparknlp.start()
```
- Let's use `loadSavedModel` functon in `XlmRoBertaEmbeddings` which allows us to load TensorFlow model in SavedModel format
- Most params can be set later when you load this model into `XlmRoBertaEmbeddings` at runtime, so don't worry too much about what you set now
- `loadSavedModel` accepts two params: the first is the path to the TF SavedModel, and the second is the SparkSession, i.e. the `spark` variable we previously started via `sparknlp.start()`
- `setStorageRef` is very important. When you are training a task like NER or any text classification, we use this reference to bind the trained model to these specific embeddings, so you won't load different embeddings by mistake and see terrible results 😊
- It's up to you what you put in `setStorageRef`, but it cannot be changed later on. We usually use the name of the model to be clear, but you can get creative if you want!
- The `dimension` param is purely cosmetic and won't change anything. It's mostly for you to know later via `.getDimension` what the dimension of your model is, so set this accordingly.
- NOTE: `loadSavedModel` only accepts local paths and not distributed file systems such as `HDFS`, `S3`, `DBFS`, etc. That is why we use `write.save` so we can use `.load()` from any file systems.
```
from sparknlp.annotator import *
xlm_roberta = XlmRoBertaEmbeddings.loadSavedModel(
'{}/saved_model/1'.format(MODEL_NAME),
spark
)\
.setInputCols(["sentence",'token'])\
.setOutputCol("embeddings")\
.setCaseSensitive(True)\
.setDimension(768)\
.setStorageRef('xlm_roberta_base')
```
- Let's save it on disk so it is easier to be moved around and also be used later via `.load` function
```
xlm_roberta.write().overwrite().save("./{}_spark_nlp".format(MODEL_NAME))
```
Let's clean up stuff we don't need anymore
```
!rm -rf {MODEL_NAME}_tokenizer {MODEL_NAME}
```
Awesome 😎 !
This is your XLM-RoBERTa model from HuggingFace 🤗 loaded and saved by Spark NLP 🚀
```
! ls -l {MODEL_NAME}_spark_nlp
```
Now let's see how we can use it on other machines, clusters, or any place you wish to use your new and shiny XLM-RoBERTa model 😊
```
xlm_roberta_loaded = XlmRoBertaEmbeddings.load("./{}_spark_nlp".format(MODEL_NAME))\
.setInputCols(["sentence",'token'])\
.setOutputCol("embeddings")\
.setCaseSensitive(True)
xlm_roberta_loaded.getStorageRef()
```
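As a quick smoke test (a minimal sketch, assuming the default Spark NLP annotators are available in your session), you can drop the loaded embeddings into a small pipeline:
```
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer

document_assembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

sentence_detector = SentenceDetector()\
    .setInputCols(["document"])\
    .setOutputCol("sentence")

tokenizer = Tokenizer()\
    .setInputCols(["sentence"])\
    .setOutputCol("token")

pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, xlm_roberta_loaded])

test_df = spark.createDataFrame([["Spark NLP loves multilingual models."]], ["text"])
result = pipeline.fit(test_df).transform(test_df)
result.select("embeddings.embeddings").show(truncate=80)
```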
That's it! You can now go wild and use hundreds of XLM-RoBERTa models from HuggingFace 🤗 in Spark NLP 🚀
# CERTUTIL hunt
This notebook collects `cmd.exe` and `certutil.exe` process executions in order to find suspicious activity.
The example demonstrates how to find suspicious executables that were downloaded using certutil.exe and then used to carry out an attack.
```
from pyclient.stix_shifter_dataframe import StixShifterDataFrame
from dateutil import parser
import re
import pandas as pd
carbon_black_stix_bundle_1 = 'https://raw.githubusercontent.com/opencybersecurityalliance/stix-shifter/master/data/cybox/carbon_black/carbon_black_observable.json'
sb_config_1 = {
'translation_module': 'stix_bundle',
'transmission_module': 'stix_bundle',
'connection': {
"host": carbon_black_stix_bundle_1,
"port": 443
},
'configuration': {
"auth": {
"username": None,
"password": None
}
},
'data_source': '{"type": "identity", "id": "identity--3532c56d-ea72-48be-a2ad-1a53f4c9c6d3", "name": "stix_boundle", "identity_class": "events"}'
}
carbon_black_stix_bundle_2 = 'https://raw.githubusercontent.com/opencybersecurityalliance/stix-shifter/develop/data/cybox/carbon_black/cb_observed_156.json'
sb_config_2 = {
'translation_module': 'stix_bundle',
'transmission_module': 'stix_bundle',
'connection': {
"host": carbon_black_stix_bundle_2,
"port": 443
},
'configuration': {
"auth": {
"username": None,
"password": None
}
},
'data_source': '{"type": "identity", "id": "identity--3532c56d-ea72-48be-a2ad-1a53f4c9c6d3", "name": "stix_boundle", "identity_class": "events"}'
}
def get_duration(duration):
days, seconds = duration.days, duration.seconds
hours = seconds // 3600
minutes = (seconds % 3600) // 60
seconds = seconds % 60
return f"{days}d {hours}h {minutes}m {seconds}.{duration.microseconds//1000}s"
def defang(url):
return re.sub('http', 'hxxp', url)
```
# Fetch process data for processes spawned by cmd
```
ssdf = StixShifterDataFrame()
ssdf.add_config('cb_stix_bundle_1', sb_config_1)
ssdf.add_config('cb_stix_bundle_2', sb_config_2)
# stix-shifter uses STIX patterning as its query language
# See http://docs.oasis-open.org/cti/stix/v2.0/cs01/part5-stix-patterning/stix-v2.0-cs01-part5-stix-patterning.html
cmd_query = "[process:name = 'cmd.exe']"
df = ssdf.search_df(query=cmd_query, config_names=['cb_stix_bundle_1', 'cb_stix_bundle_2'])
df['first_observed'] = pd.to_datetime(df['first_observed'], infer_datetime_format=True, utc=True)
df['last_observed'] = pd.to_datetime(df['last_observed'], infer_datetime_format=True, utc=True)
df['duration'] = df['last_observed'] - df['first_observed']
df['duration'] = df['duration'].map(lambda dur: get_duration(dur))
df.head()
```
# Find suspicious command line
```
# Use a regex to find suspicious certutil usage
susp = df[df['process:command_line'].str.contains(r'certutil.*[0-9a-zA-Z_-]*\.(exe|dat)')]
# Look at the matches (defang any URLs in there since jupyter makes them clickable!)
list(map(defang, susp['process:command_line'].head().to_list()))
```
# Attack steps
```
fields = ['first_observed', 'last_observed', 'duration',
'process:name', 'process:pid',
'process:binary_ref.name', 'process:parent_ref.name',
'network-traffic:dst_ref.value', 'network-traffic:src_ref.value',
'process:command_line', 'user-account:user_id'
]
df[fields].sort_values(by=['first_observed'])
```
In this notebook, we finally found that this is an APT attack: ```c64.exe f64.data "9839D7F1A0 -m"```
Ref: https://www.fireeye.com/blog/threat-research/2019/08/game-over-detecting-and-stopping-an-apt41-operation.html
## First Assignment
#### 1) Apply the appropriate string methods to the **x** variable (such as '.upper') to change it exactly to: "$Dichlorodiphenyltrichloroethane$".
```
x = "DiClOrod IFeNi lTRicLOr oETaNo DiChlorod iPHeny lTrichL oroEThaNe"
y = x.replace(' ','')
print(y[27:].capitalize())
```
#### 2) Assign respectively the values: 'word', 15, 3.14 and 'list' to variables A, B, C and D in a single line of code. Then, print them in that same order on a single line separated by a space, using only one print statement.
```
A, B, C, D = 'word', 15, 3.14, 'list'
print(f'{A} {B} {C} {D}')
```
#### 3) Use the **input()** function to receive an input in the form **'68.4 1.71'**, that is, two floating point numbers in a line separated by space. Then, assign these numbers to the variables **w** and **h** respectively, which represent an individual's weight and height (hint: take a look at the '.split()' method). With this data, calculate the individual's Body Mass Index (BMI) from the following relationship:
\begin{equation}
BMI = \dfrac{weight}{height^2}
\end{equation}
```
weight, height = input("Enter weight and height, separated by space: ").split()
BMI = float(weight)/float(height)**2
print(BMI)
```
#### This value can also be classified according to ranges of values, following to the table below. Use conditional structures to classify and print the classification assigned to the individual.
<center><img src="https://healthtravelguide.com/wp-content/uploads/2020/06/Body-mass-index-table.png" width="30%"></center>
(source: https://healthtravelguide.com/bmi-calculator/)
```
if BMI < 18.5:
print("Underweight")
elif BMI <= 24.9:
print("Normal weight")
elif BMI <= 29.9:
print("Pre-obesity")
elif BMI <= 34.9:
print("Obesity class I")
elif BMI <= 39.9:
print("Obesity class II")
else:
print("Obesity class III")
```
#### 4) Receive an integer as an input and, using a loop, calculate the factorial of this number, that is, the product of all the integers from one to the number provided.
```
number = int(input("Insert an integer:"))
fact = 1
for i in range(1, number+1, 1):
fact = fact * i
print(fact)
```
#### 5) Using a while loop and the input function, read an indefinite number of integers until the number read is -1. Present the sum of all these numbers in the form of a print, excluding the -1 read at the end.
```
sum_input, number_input = 0, 0
while number_input != -1:
sum_input += number_input
number_input = int(input("Enter an integer: "))
print(f"Sum of input = {sum_input}")
```
#### 6) Read the **first name** of an employee, his **amount of hours worked** and the **salary per hour** in a single line separated by commas. Next, calculate the **total salary** for this employee and show it to two decimal places.
```
name, hours, salary = input("Insert name of the employee, amount of hours worked and salary per hour, separated by commas:").split(",")
totalSalary = float(hours)*float(salary)
print(f"{name} earns {round(totalSalary,2)}")
```
#### 7) Read three floating point values **A**, **B** and **C** respectively. Then calculate itens a, b, c, d and e:
```
x, y, z = input("Enter the values for A, B, C separated by comma").split(",")
A, B, C = float(x), float(y), float(z)
```
a) the area of the triangle rectangle with A as the base and C as the height.
```
print("Area of the triangle:", A*C/2)
```
b) the area of the circle of radius C. (pi = 3.14159)
```
print("Area of the circle:", 3.14159*C**2)
```
c) the area of the trapezoid that has A and B for bases and C for height.
```
print("Area of the trapezoid:", (A+B)*C/2)
```
d) the area of the square that has side B.
```
print("Area of the square:", B**2)
```
e) the area of the rectangle that has sides A and B.
```
print("Area of the rectangle:", A*B)
```
#### 8) Read **the values a, b and c** and calculate the **roots of the second degree equation** $ax^{2}+bx+c=0$ using [this formula](https://en.wikipedia.org/wiki/Quadratic_equation). If it is not possible to calculate the roots, display the message **“There are no real roots”**.
```
a, b, c = [float(x) for x in input("Enter the values of a, b and c, separated by commas: ").split(",")]
root = (b**2) - (4*a*c)
if root < 0:
print("There are no real roots")
elif root == 0:
x = -b/(2*a)
print(f"Root: {x}")
else:
x1 = (-b+root**0.5)/(2*a)
x2 = (-b-root**0.5)/(2*a)
print(f"Roots: {x1}, {x2}")
```
#### 9) Read four floating point numerical values corresponding to the coordinates of two geographical coordinates in the cartesian plane. Each point will come in a line with its coordinates separated by space. Then calculate and show the distance between these two points.
(obs: $d=\sqrt{(x_1-x_2)^2 + (y_1-y_2)^2}$)
```
x1, x2 = [float(a) for a in input("Enter coordinates x1 and x2, separated by comma ").split(",")]
y1, y2 = [float(a) for a in input("Enter coordinates y1 and y2: separated by comma ").split(",")]
d = ((x1-x2)**2+(y1-y2)**2)**0.5
print(f"d = {d}")
```
#### 10) Read **two floating point numbers** on a line that represent **coordinates of a cartesian point**. With this, use **conditional structures** to determine if you are at the origin, printing the message **'origin'**; in one of the axes, printing **'x axis'** or **'y axis'**; or in one of the four quadrants, printing **'q1'**, **'q2**', **'q3'** or **'q4'**.
```
x, y = [float(x) for x in input("Insert a float for x, y: ").split(",")]
if x == 0 and y == 0:
print('Origin')
elif x == 0:
print('y axis')
elif y == 0:
print('x axis')
elif x > 0 and y > 0:
print('q1')
elif x > 0:
print('q4')
elif y > 0:
print('q2')
else:
print('q3')
```
#### 11) Read an integer that represents a phone code for international dialing.
#### Then, inform to which country the code belongs to, considering the generated table below:
(You just need to consider the first 10 entries)
```
import pandas as pd
df = pd.read_html('https://en.wikipedia.org/wiki/Telephone_numbers_in_Europe')[1]
df = df.iloc[:,:2]
df.head(20)
country_code = int(input("Enter the country calling code:"))
code_dict = {43:'Austria',
32:'Belgium',
359:'Bulgaria',
385:'Croatia',
357:'Cyprus',
420:'Czech Republic',
45:'Denmark',
372:'Estonia',
358:'Finland',
33:'France',
}
if country_code in code_dict.keys():
print(code_dict[country_code])
else:
print('Cannot find country.')
```
#### 12) Write a piece of code that reads 6 numbers in a row. Next, show the number of positive values entered. On the next line, print the average of the values to one decimal place.
```
numbers = [float(x) for x in input("Enter six values separated by commas: ").split(",")]
pos, total = 0, 0
for i in numbers:
total += i
if i > 0:
pos += 1
print(f"{pos} positive numbers")
print(f"Average = {round(total/6 , 1)}")
```
#### 13) Read an integer **N**. Then print the **square of each of the even values**, from 1 to N, including N, if applicable, arranged one per line.
```
N = int(input("Enter integer: "))
for x in range(1,N+1,1):
if x%2==0:
print(x**2)
```
#### 14) Using **input()**, read an integer and print its classification as **'even / odd'** and **'positive / negative'** . The two classes for the number must be printed on the same line separated by a space. In the case of zero, print only **'null'**.
```
number = int(input("Enter an integer:"))
class_even = "even" if number%2 == 0 else "odd"
class_pos = "positive" if number == abs(number) else "negative"
if number == 0:
print("null")
else:
print(class_even, class_pos)
```
## Challenge
#### 15) Ordering problems are recurrent in the history of programming. Over time, several algorithms have been developed to fulfill this function. The simplest of these algorithms is the [**Bubble Sort**](https://en.wikipedia.org/wiki/Bubble_sort), which is based on pairwise comparisons of elements in a loop of passes through the list. Your mission, if you decide to accept it, will be to input six whole numbers in random order. Then implement the **Bubble Sort** principle to order these six numbers **using only loops and conditionals**.
#### At the end, print the six numbers in ascending order on a single line separated by spaces.
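One possible solution sketch is shown below (it assumes the six integers are entered on one line separated by spaces; the swap uses tuple assignment, so the sorting itself needs only loops and conditionals):
```
# Read six integers, then order them with Bubble Sort
numbers = [int(x) for x in input("Enter six integers separated by spaces: ").split()]
n = len(numbers)
for i in range(n - 1):                    # one pass per position still unsorted
    for j in range(n - 1 - i):            # compare neighbouring elements two by two
        if numbers[j] > numbers[j + 1]:   # swap when they are out of order
            numbers[j], numbers[j + 1] = numbers[j + 1], numbers[j]
print(" ".join(str(x) for x in numbers))
```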
<p align="center">
<img src="https://github.com/GeostatsGuy/GeostatsPy/blob/master/TCG_color_logo.png?raw=true" width="220" height="240" />
</p>
## Interactive Bayesian Coin Demonstration from Sivia (1996)
### Sivia, D.S., 1996, Data Analysis: A Bayesian Tutorial
* interactive plot demonstration with ipywidget package
#### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)
#### The Bayesian Coin Example
I have a coin and you need to figure out if it is a fair coin!
* a fair coin would have a 50% probability of heads (and a 50% probability of tails)
You start with your prior assessment of my coin, a prior distribution over the probability of heads $Prob(Coin)$
* it could be based on how honest you think I am
Then you perform a set of coin tosses to build a likelihood distribution, $P(Tosses | Coin)$
* the more coin tosses, the narrower this distribution
Then you update the prior distribution with the likelihood distribution to get the posterior distribution, $P(Coin | Tosses)$.
\begin{equation}
P( Coin | Tosses ) = \frac{P( Tosses | Coin ) P( Coin )}{P( Tosses )}
\end{equation}
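Before building the interactive version, here is a minimal non-interactive sketch of the same grid-based update (the prior mean, prior spread and toss counts below are illustrative values, not part of the demonstration):
```
# Grid-based Bayesian update for the coin bias (illustrative values)
import numpy as np
from scipy.stats import norm, binom

x = np.linspace(0.0, 1.0, num=1000)          # candidate values for P(heads)
prior = norm.pdf(x, loc=0.5, scale=0.1)      # prior belief: roughly a fair coin
prior = prior / np.sum(prior)
heads, tosses = 62, 100                      # hypothetical coin toss experiment
likelihood = binom.pmf(heads, tosses, x)     # P(tosses | coin bias)
likelihood = likelihood / np.sum(likelihood)
posterior = prior * likelihood               # Bayes: prior x likelihood, then normalize
posterior = posterior / np.sum(posterior)
print(x[np.argmax(posterior)])               # most probable coin bias after updating
```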
#### Objective
Provide an example and demonstration for:
1. interactive plotting in Jupyter Notebooks with Python packages matplotlib and ipywidgets
2. provide an intuitive hands-on example of Bayesian updating
#### Getting Started
Here's the steps to get setup in Python with the GeostatsPy package:
1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
#### Load the Required Libraries
The following code loads the required libraries.
```
%matplotlib inline
from ipywidgets import interactive # widgets and interactivity
from ipywidgets import widgets # widgets and interactivity
import matplotlib.pyplot as plt # plotting
import numpy as np # working with arrays
from scipy.stats import triang # parametric distributions
from scipy.stats import binom
from scipy.stats import norm
```
#### Make Our Interactive Plot
For this demonstration we will:
* declare a set of 4 widgets in a HBox (horizontal box of widgets).
* define a function 'f' that will read the output from these widgets and make a plot
You may have some flicker and lag. I have not tried to optimize performance for this demonstration.
```
# 4 slider bars for the model input
a = widgets.FloatSlider(min=0.0, max = 1.0, value = 0.5, description = 'coin bias')
d = widgets.FloatSlider(min=0.01, max = 1.0, value = 0.1, step = 0.01, description = 'coin uncert.')
b = widgets.FloatSlider(min = 0, max = 1.0, value = 0.5, description = 'prop. heads')
c = widgets.IntSlider(min = 5, max = 1000, value = 100, description = 'coin tosses')
ui = widgets.HBox([a,d,b,c],)
def f(a, b, c, d): # function to make the plot
heads = int(c * b)
tails = c - heads
x = np.linspace(0.0, 1.0, num=1000)
prior = norm.pdf(x,loc = a, scale = d)
prior = prior / np.sum(prior)
plt.subplot(221)
plt.plot(x, prior) # prior distribution of coin fairness
plt.xlim(0.0,1.0)
plt.xlabel('P(Coin Heads)'); plt.ylabel('Density'); plt.title('Prior Distribution')
plt.ylim(0, 0.05)
plt.grid()
plt.subplot(222) # results from the coin tosses
plt.pie([heads, tails],labels = ['heads','tails'],radius = 0.5*(c/1000)+0.5, autopct='%1.1f%%', colors = ['#ff9999','#66b3ff'], explode = [.02,.02], wedgeprops = {"edgecolor":"k",'linewidth': 1} )
plt.title(str(c) + ' Coin Tosses')
likelihood = binom.pmf(heads,c,x)
likelihood = likelihood/np.sum(likelihood)
plt.subplot(223) # likelihood distribution given the coin tosses
plt.plot(x, likelihood)
plt.xlim(0.0,1.0)
plt.xlabel('P(Tosses | Coin Bias)'); plt.ylabel('Density'); plt.title('Likelihood Distribution')
plt.ylim(0, 0.05)
plt.grid()
post = prior * likelihood
post = post / np.sum(post)
plt.subplot(224) # posterior distribution
plt.plot(x, post)
plt.xlim(0.0,1.0)
plt.xlabel('P(Coin Bias | Tosses)'); plt.ylabel('Density'); plt.title('Posterior Distribution')
plt.ylim(0, 0.05)
plt.grid()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.6, wspace=0.2, hspace=0.3)
plt.show()
interactive_plot = widgets.interactive_output(f, {'a': a, 'd': d, 'b': b, 'c': c})
interactive_plot.clear_output(wait = True) # reduce flickering by delaying plot updating
```
### Bayesian Coin Example from Sivia, 1996, Data Analysis: A Bayesian Tutorial
* interactive plot demonstration with ipywidget package
#### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)
### The Problem
What is the PDF for the coin probability of heads, P(Coin Heads)? Start with a prior model and update with coin tosses.
* **coin bias**: expectation for your prior distribution for probability of heads
* **coin uncert.**: standard deviation for your prior distribution for probability of heads
* **prop. heads**: proportion of heads in the coin toss experiment
* **coin tosses**: number of coin tosses in the coin toss experiment
```
display(ui, interactive_plot) # display the interactive plot
```
#### Comments
This was a simple demonstration of interactive plots in Jupyter Notebook Python with the ipywidgets and matplotlib packages.
I have many other demonstrations on data analytics and machine learning, e.g. on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.
I hope this was helpful,
*Michael*
#### The Author:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at [email protected].
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
# A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.

In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
```
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
```
Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
```
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
```
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a **single ReLU hidden layer**. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a **sigmoid activation on the output layer** to get values matching the input.

> **Exercise:** Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, `tf.layers`. For instance, you would use [`tf.layers.dense(inputs, units, activation=tf.nn.relu)`](https://www.tensorflow.org/api_docs/python/tf/layers/dense) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this `tf.nn.sigmoid_cross_entropy_with_logits` ([documentation](https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits)). You should note that `tf.nn.sigmoid_cross_entropy_with_logits` takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
```
# Size of the encoding layer (the hidden layer)
encoding_dim = 32
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from the logits
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
```
## Training
```
# Create the session
sess = tf.Session()
```
Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling `mnist.train.next_batch(batch_size)` will return a tuple of `(images, labels)`. We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with `sess.run(tf.global_variables_initializer())`. Then, run the optimizer and get the loss with `batch_cost, _ = sess.run([cost, opt], feed_dict=feed)`.
```
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
```
## Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
```
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
```
## Up Next
We're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.
In practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
%matplotlib inline
np.random.seed(sum(map(ord, "relational")))
```
```
tips = sns.load_dataset("tips")
sns.relplot(x="total_bill", y="tip", data=tips);
```
```
sns.relplot(x="total_bill", y="tip", hue="smoker", data=tips);
```
```
sns.relplot(x="total_bill", y="tip", hue="smoker", style="smoker",
data=tips);
```
```
sns.relplot(x="total_bill", y="tip", hue="smoker", style="time", data=tips);
```
```
sns.relplot(x="total_bill", y="tip", hue="size", data=tips);
```
```
sns.relplot(x="total_bill", y="tip", hue="size", palette="ch:r=-.5,l=.75", data=tips);
```
```
sns.relplot(x="total_bill", y="tip", size="size", data=tips);
```
```
sns.relplot(x="total_bill", y="tip", size="size", sizes=(15, 200), data=tips);
```
```
df = pd.DataFrame(dict(time=np.arange(500),
value=np.random.randn(500).cumsum()))
g = sns.relplot(x="time", y="value", kind="line", data=df)
g.fig.autofmt_xdate()
```
```
df = pd.DataFrame(np.random.randn(500, 2).cumsum(axis=0), columns=["x", "y"])
sns.relplot(x="x", y="y", sort=False, kind="line", data=df);
```
```
fmri = sns.load_dataset("fmri")
sns.relplot(x="timepoint", y="signal", kind="line", data=fmri);
```
```
sns.relplot(x="timepoint", y="signal", ci=None, kind="line", data=fmri);
```
```
sns.relplot(x="timepoint", y="signal", kind="line", ci="sd", data=fmri);
```
```
sns.relplot(x="timepoint", y="signal", estimator=None, kind="line", data=fmri);
```
```
sns.relplot(x="timepoint", y="signal", hue="event", kind="line", data=fmri);
```
```
sns.relplot(x="timepoint", y="signal", hue="region", style="event",
kind="line", data=fmri);
```
```
sns.relplot(x="timepoint", y="signal", hue="region", style="event",
dashes=False, markers=True, kind="line", data=fmri);
```
```
sns.relplot(x="timepoint", y="signal", hue="event", style="event",
kind="line", data=fmri);
```
```
sns.relplot(x="timepoint", y="signal", hue="region",
units="subject", estimator=None,
kind="line", data=fmri.query("event == 'stim'"));
```
```
dots = sns.load_dataset("dots").query("align == 'dots'")
sns.relplot(x="time", y="firing_rate",
hue="coherence", style="choice",
kind="line", data=dots);
```
```
palette = sns.cubehelix_palette(light=.8, n_colors=6)
sns.relplot(x="time", y="firing_rate",
hue="coherence", style="choice",
palette=palette,
kind="line", data=dots);
```
```
from matplotlib.colors import LogNorm
palette = sns.cubehelix_palette(light=.7, n_colors=6)
sns.relplot(x="time", y="firing_rate",
hue="coherence", style="choice",
hue_norm=LogNorm(),
kind="line",
data=dots.query("coherence > 0"));
```
```
sns.relplot(x="time", y="firing_rate",
size="coherence", style="choice",
kind="line", data=dots);
```
```
sns.relplot(x="time", y="firing_rate",
hue="coherence", size="choice",
palette=palette,
kind="line", data=dots);
```
```
df = pd.DataFrame(dict(time=pd.date_range("2017-1-1", periods=500),
value=np.random.randn(500).cumsum()))
g = sns.relplot(x="time", y="value", kind="line", data=df)
g.fig.autofmt_xdate()
```
```
sns.relplot(x="total_bill", y="tip", hue="smoker",
col="time", data=tips);
```
```
subject_number = fmri["subject"].str[1:].astype(int)
fmri= fmri.iloc[subject_number.argsort()]
sns.relplot(x="timepoint", y="signal", hue="subject",
col="region", row="event", height=3,
kind="line", estimator=None, data=fmri);
```
```
sns.relplot(x="timepoint", y="signal", hue="event", style="event",
col="subject", col_wrap=5,
height=3, aspect=.75, linewidth=2.5,
kind="line", data=fmri.query("region == 'frontal'"));
```
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# SAR Single Node on MovieLens (Python, CPU)
Simple Algorithm for Recommendation (SAR) is a fast and scalable algorithm for personalized recommendations based on user transaction history. It produces easily explainable and interpretable recommendations and handles "cold item" and "semi-cold user" scenarios. SAR is a kind of neighborhood based algorithm (as discussed in [Recommender Systems by Aggarwal](https://dl.acm.org/citation.cfm?id=2931100)) which is intended for ranking top items for each user. More details about SAR can be found in the [deep dive notebook](../02_model_collaborative_filtering/sar_deep_dive.ipynb).
SAR recommends items that are most ***similar*** to the ones that the user already has an existing ***affinity*** for. Two items are ***similar*** if the users that interacted with one item are also likely to have interacted with the other. A user has an ***affinity*** to an item if they have interacted with it in the past.
### Advantages of SAR:
- High accuracy for an easy to train and deploy algorithm
- Fast training, only requiring simple counting to construct matrices used at prediction time.
- Fast scoring, only involving multiplication of the similarity matrix with an affinity vector
### Notes to use SAR properly:
- Since it does not use item or user features, it can be at a disadvantage against algorithms that do.
- It's memory-hungry, requiring the creation of an $m \times m$ sparse square matrix (where $m$ is the number of items). This can also be a problem for many matrix factorization algorithms.
- SAR favors an implicit rating scenario and it does not predict ratings.
This notebook provides an example of how to utilize and evaluate SAR in Python on a CPU.
# 0 Global Settings and Imports
```
%load_ext autoreload
%autoreload 2
import logging
import numpy as np
import pandas as pd
import scrapbook as sb
from sklearn.preprocessing import minmax_scale
from reco_utils.common.python_utils import binarize
from reco_utils.common.timer import Timer
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_stratified_split
from reco_utils.evaluation.python_evaluation import (
map_at_k,
ndcg_at_k,
precision_at_k,
recall_at_k,
rmse,
mae,
logloss,
rsquared,
exp_var
)
from reco_utils.recommender.sar import SAR
import sys
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
```
# 1 Load Data
SAR is intended to be used on interactions with the following schema:
`<User ID>, <Item ID>, <Time>, [<Event Type>], [<Event Weight>]`.
Each row represents a single interaction between a user and an item. These interactions might be different types of events on an e-commerce website, such as a user clicking to view an item, adding it to a shopping basket, following a recommendation link, and so on. Each event type can be assigned a different weight, for example, we might assign a “buy” event a weight of 10, while a “view” event might only have a weight of 1.
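To make the schema concrete, here is a small hypothetical interaction table (the column names and values below are illustrative only; the MovieLens columns used later are different):
```
# A tiny hypothetical interaction table matching the schema above (illustrative only)
import pandas as pd

interactions = pd.DataFrame({
    "userID": [1, 1, 2],
    "itemID": [10, 11, 10],
    "timestamp": ["2019-01-01", "2019-01-02", "2019-01-03"],
    "eventType": ["view", "buy", "view"],
    "eventWeight": [1, 10, 1],
})
interactions
```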
The MovieLens dataset is well formatted interactions of Users providing Ratings to Movies (movie ratings are used as the event weight) - we will use it for the rest of the example.
```
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
```
### 1.1 Download and use the MovieLens Dataset
```
data = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE
)
# Convert the float precision to 32-bit in order to reduce memory consumption
data['rating'] = data['rating'].astype(np.float32)
data.head()
```
### 1.2 Split the data using the python random splitter provided in utilities:
We split the full dataset into a `train` and `test` dataset to evaluate performance of the algorithm against a held-out set not seen during training. Because SAR generates recommendations based on user preferences, all users that are in the test set must also exist in the training set. For this case, we can use the provided `python_stratified_split` function which holds out a percentage (in this case 25%) of items from each user, but ensures all users are in both `train` and `test` datasets. Other options are available in the `dataset.python_splitters` module which provide more control over how the split occurs.
```
train, test = python_stratified_split(data, ratio=0.75, col_user='userID', col_item='itemID', seed=42)
print("""
Train:
Total Ratings: {train_total}
Unique Users: {train_users}
Unique Items: {train_items}
Test:
Total Ratings: {test_total}
Unique Users: {test_users}
Unique Items: {test_items}
""".format(
train_total=len(train),
train_users=len(train['userID'].unique()),
train_items=len(train['itemID'].unique()),
test_total=len(test),
test_users=len(test['userID'].unique()),
test_items=len(test['itemID'].unique()),
))
```
# 2 Train the SAR Model
### 2.1 Instantiate the SAR algorithm and set the index
We will use the single node implementation of SAR and specify the column names to match our dataset (timestamp is an optional column that is used and can be removed if your dataset does not contain it).
Other options are specified to control the behavior of the algorithm as described in the [deep dive notebook](../02_model_collaborative_filtering/sar_deep_dive.ipynb).
```
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(levelname)-8s %(message)s')
model = SAR(
col_user="userID",
col_item="itemID",
col_rating="rating",
col_timestamp="timestamp",
similarity_type="jaccard",
time_decay_coefficient=30,
timedecay_formula=True,
normalize=True
)
```
### 2.2 Train the SAR model on our training data, and get the top-k recommendations for our testing data
SAR first computes an item-to-item ***co-occurrence matrix***. Co-occurrence represents the number of times two items appear together for any given user. Once we have the co-occurrence matrix, we compute an ***item similarity matrix*** by rescaling the co-occurrences by a given metric (Jaccard similarity in this example).
We also compute an ***affinity matrix*** to capture the strength of the relationship between each user and each item. Affinity is driven by different event types (like *rating* or *viewing* a movie) and by the time of the event.
Recommendations are achieved by multiplying the affinity matrix $A$ and the similarity matrix $S$. The result is a ***recommendation score matrix*** $R$. We compute the ***top-k*** results for each user in the `recommend_k_items` function seen below.
A full walkthrough of the SAR algorithm can be found [here](../02_model_collaborative_filtering/sar_deep_dive.ipynb).
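As a rough illustration of these matrix operations, here is a toy NumPy sketch of co-occurrence counting, Jaccard similarity and scoring (it ignores the time decay, event weights and normalization applied by the actual SAR implementation):
```
# Toy sketch: binary interactions -> co-occurrence -> Jaccard similarity -> scores
import numpy as np

A = np.array([[1, 1, 0, 0],    # user 0 interacted with items 0 and 1
              [1, 0, 1, 0],    # user 1 interacted with items 0 and 2
              [0, 1, 1, 1]])   # user 2 interacted with items 1, 2 and 3

C = A.T @ A                                  # co-occurrence: C[i, j] = users with both i and j
diag = np.diag(C)                            # C[i, i] = users who interacted with item i
S = C / (diag[:, None] + diag[None, :] - C)  # Jaccard similarity between items
R = A @ S                                    # recommendation scores for each user and item
print(np.round(R, 2))
```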
```
with Timer() as train_time:
model.fit(train)
print("Took {} seconds for training.".format(train_time.interval))
with Timer() as test_time:
top_k = model.recommend_k_items(test, remove_seen=True)
print("Took {} seconds for prediction.".format(test_time.interval))
top_k.head()
```
### 2.3. Evaluate how well SAR performs
We evaluate how well SAR performs for a few common ranking metrics provided in the `python_evaluation` module in reco_utils. We will consider the Mean Average Precision (MAP), Normalized Discounted Cumulative Gain (NDCG), Precision, and Recall for the top-k items per user we computed with SAR. User, item and rating column names are specified in each evaluation method.
```
eval_map = map_at_k(test, top_k, col_user='userID', col_item='itemID', col_rating='rating', k=TOP_K)
eval_ndcg = ndcg_at_k(test, top_k, col_user='userID', col_item='itemID', col_rating='rating', k=TOP_K)
eval_precision = precision_at_k(test, top_k, col_user='userID', col_item='itemID', col_rating='rating', k=TOP_K)
eval_recall = recall_at_k(test, top_k, col_user='userID', col_item='itemID', col_rating='rating', k=TOP_K)
eval_rmse = rmse(test, top_k, col_user='userID', col_item='itemID', col_rating='rating')
eval_mae = mae(test, top_k, col_user='userID', col_item='itemID', col_rating='rating')
eval_rsquared = rsquared(test, top_k, col_user='userID', col_item='itemID', col_rating='rating')
eval_exp_var = exp_var(test, top_k, col_user='userID', col_item='itemID', col_rating='rating')
positivity_threshold = 2
test_bin = test.copy()
test_bin['rating'] = binarize(test_bin['rating'], positivity_threshold)
top_k_prob = top_k.copy()
top_k_prob['prediction'] = minmax_scale(
top_k_prob['prediction'].astype(float)
)
eval_logloss = logloss(test_bin, top_k_prob, col_user='userID', col_item='itemID', col_rating='rating')
print("Model:\t",
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"R2:\t%f" % eval_rsquared,
"Exp var:\t%f" % eval_exp_var,
"Logloss:\t%f" % eval_logloss,
sep='\n')
# Now let's look at the results for a specific user
user_id = 876
ground_truth = test[test['userID']==user_id].sort_values(by='rating', ascending=False)[:TOP_K]
prediction = model.recommend_k_items(pd.DataFrame(dict(userID=[user_id])), remove_seen=True)
pd.merge(ground_truth, prediction, on=['userID', 'itemID'], how='left')
```
Above, we see that one of the highest rated items from the test set was recovered by the model's top-k recommendations; however, the others were not. Offline evaluations are difficult as they can only use what was seen previously in the test set and may not represent the user's actual preferences across the entire set of items. Adjustments to how the data is split, how the algorithm is used, and its hyper-parameters can improve the results here.
```
# Record results with papermill for tests - ignore this cell
sb.glue("map", eval_map)
sb.glue("ndcg", eval_ndcg)
sb.glue("precision", eval_precision)
sb.glue("recall", eval_recall)
sb.glue("train_time", train_time.interval)
sb.glue("test_time", test_time.interval)
```
# Heatmap of Global Temperature Anomaly
Global temperature anomaly (GTA, $^\circ$C) was downloaded from [NCDC](https://www.ncdc.noaa.gov/cag/global/time-series/globe/land_ocean/all/3/1958-2018). The data come from the Global Historical Climatology Network-Monthly (GHCN-M) data set and the International Comprehensive Ocean-Atmosphere Data Set (ICOADS), which have data from 1880 to the present. These two datasets are blended into a single product to produce the combined global land and ocean temperature anomalies. The available time series of global-scale temperature anomalies are calculated with respect to the 20th-century average.
The period from Jan/1958 to Mar/2018 was used in this notebook. The data are presented as a heatmap, a graphical representation of data where the individual values contained in a matrix are represented as colors. It is really useful for displaying a general view of numerical data, not for extracting specific data points.
## 1. Load all needed libraries
```
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
import calendar
%matplotlib inline
```
## 2. Read global temperature anomaly
```
gta = pd.read_csv('data/gta_1958_2018.csv',
sep=",",
skiprows=5,
names = ["Month", "GTA"])
gta['Month'] = pd.to_datetime(gta['Month'], format='%Y%m', errors='ignore')
gta.set_index('Month', inplace=True)
```
### 2.1 Convert to Year * Months
```
gta = pd.pivot(gta.index.year, gta.index.month, gta['GTA']) \
.rename(columns=calendar.month_name.__getitem__)
gta.index.name = None
gta.head()
```
## 3. Visualize
### 3.1 Heatmap
```
def plot(returns,
title="Global Temperature Anomaly ($^\circ$C)\n",
title_color="black",
title_size=14,
annot_size=5,
vmin = -1.0,
vmax = 1.0,
figsize=None,
cmap='RdBu_r',
cbar=True,
square=False):
if figsize is None:
size = list(plt.gcf().get_size_inches())
figsize = (size[0], size[0] // 2)
plt.close()
fig, ax = plt.subplots(figsize=figsize)
ax = sns.heatmap(returns, ax=ax, annot=False, vmin=vmin, vmax=vmax, center=0,
annot_kws={"size": annot_size},
fmt="0.2f", linewidths=0.5,
square=square, cbar=cbar, cbar_kws={'fraction':0.10},
cmap=cmap)
ax.set_title(title, fontsize=title_size,
color=title_color, fontweight="bold")
fig.subplots_adjust(hspace=0)
plt.yticks(rotation=0)
plt.show()
plot(gta, figsize=[8, 25])
```
### 3.2 Classic line plots
```
gta.plot(title="Global Temperature Anomaly ($^\circ$C)", figsize=[15, 7])
```
## Summary
The heatmap presents global temperature change in a visually appealing and straightforward way. The pace of change is immediately obvious, especially over the past few decades. However, the heatmap becomes harder to read as the record grows longer. In that case an animated presentation can be more appealing and attractive. For example, Dr. Ed Hawkins’ global warming [spiral](http://www.climate-lab-book.ac.uk/2016/spiralling-global-temperatures/) shows the extent of global temperature increase from 1850 to the present.
Rising global temperatures are expected to have far-reaching, long-lasting and, in many cases, devastating consequences for planet Earth. Recently, new [research](http://www.iiasa.ac.at/web/home/about/news/180516-byers-temperature-rise.html#.Wv_uKnZgcjk.linkedin) identifying climate vulnerability hotspots found that the number of people affected by multiple climate change risks could double if the global temperature rises by 2°C instead of 1.5°C.
## References
NOAA National Centers for Environmental information, Climate at a Glance: Global Time Series, published May 2018, retrieved on May 20, 2018 from http://www.ncdc.noaa.gov/cag/
John D. Hunter. Matplotlib: A 2D Graphics Environment, Computing in Science & Engineering, 9, 90-95 (2007), DOI:10.1109/MCSE.2007.55
Wes McKinney. Data Structures for Statistical Computing in Python, Proceedings of the 9th Python in Science Conference, 51-56 (2010)
https://seaborn.pydata.org/index.html
# Data Analysis Study Session #4 (2020/01/14)
## Data preprocessing
Dataset used: [Kickstarter Projects](https://www.kaggle.com/kemical/kickstarter-projects) <br>
Reference URLs:<br>
[テービーテックのデータサイエンス: "Let's take on Kaggle! ~Code Explanation 1~"](https://ds-blog.tbtech.co.jp/entry/2019/04/19/Kaggle%E3%81%AB%E6%8C%91%E6%88%A6%E3%81%97%E3%82%88%E3%81%86%EF%BC%81_%EF%BD%9E%E3%82%B3%E3%83%BC%E3%83%89%E8%AA%AC%E6%98%8E%EF%BC%91%EF%BD%9E)<br>
[テービーテックのデータサイエンス: "Let's take on Kaggle! ~Code Explanation 2~"](https://ds-blog.tbtech.co.jp/entry/2019/04/27/Kaggle%E3%81%AB%E6%8C%91%E6%88%A6%E3%81%97%E3%82%88%E3%81%86%EF%BC%81_%EF%BD%9E%E3%82%B3%E3%83%BC%E3%83%89%E8%AA%AC%E6%98%8E%EF%BC%92%EF%BD%9E)
# import libraries
For details on libraries, see [this article](https://www.tech-teacher.jp/blog/python-import/). In short, in Python a file that bundles functions, classes and so on is called a module, and a collection of modules is called a library. They become available once imported.
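As a quick illustration (a minimal sketch using only standard-library and common third-party modules), the usual ways to import are:
```
# Import a whole module and access its contents through the module namespace
import math
print(math.sqrt(2))

# Import specific names from a module
from datetime import date
print(date.today())

# Import a module under an alias (the convention used below for pandas and numpy)
import numpy as np
print(np.mean([1, 2, 3]))
```
The imports actually used in this notebook follow.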
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import log_loss, accuracy_score, precision_recall_fscore_support, confusion_matrix
from sklearn.metrics import mean_absolute_error
import datetime as dt
%matplotlib inline
sns.set()
```
# read data
Here we actually load the data. The distributed data is compressed as a zip file, so we extract it first and then read it.
```
!unzip input/archive.zip -d input
import codecs
with codecs.open('input/ks-projects-201612.csv', 'r', 'utf-8', 'ignore') as file:
df = pd.read_csv(file, delimiter=",")
df.columns = df.columns.str.replace(" ", "")
df = df.loc[:, ~df.columns.str.match('Unnamed')]
df.head()
```
# data handling
Here we preprocess the raw data to build a dataset suitable for analysis. Acquired data almost always contains missing values and outliers, and it is rarely analyzed in its raw state. The raw data must be transformed before analysis, and this is <span style="color: red; ">such an important step that it accounts for roughly 80% of the effort in a data analysis project</span>.
## Variables in the data
| Variable | Description |
| ------------- | ---------------------------------------- |
| ID | Unique ID of the crowdfunding project |
| name | Name of the crowdfunding project |
| category | Detailed category |
| main_category | Broad category |
| currency | Currency used |
| deadline | Deadline date/time |
| goal | Funding goal amount |
| launched | Launch date/time |
| pledged | Amount of money pledged |
| state | Project status (successful, failed, canceled, etc.) |
| backer | Number of backers |
| country | Country where the project was launched |
| usd pledged | Pledged amount converted to US dollars |
```
df.info()
df.isnull().sum()
```
## create variable
Some variables in this dataset are difficult to use as-is, for example "deadline" and "launched". The specific dates themselves tell us little; what matters more is the number of days from the project launch to its deadline, so we create an elapsed-days variable.
```
df['deadline'] = pd.to_datetime(df['deadline'], errors = 'coerce')
df['launched'] = pd.to_datetime(df['launched'], errors = 'coerce')
df['period'] = (df['deadline'] - df['launched']).dt.days
df = df.drop(['deadline', 'launched'], axis=1)
```
## data type conversion
In programming, every variable has a **type**. Data types matter in data analysis as well: the type can change the result of an analysis, or even make the analysis impossible. For more about Python data types, see, for example, [this article](https://ai-inter1.com/python-data_type/).
```
df['goal'] = pd.to_numeric(df['goal'], errors ='coerce')
df['pledged'] = pd.to_numeric(df['pledged'], errors ='coerce')
df['backers'] = pd.to_numeric(df['backers'], errors ='coerce')
df['usdpledged'] = pd.to_numeric(df['usdpledged'], errors ='coerce')
df.info()
```
## data select
There is no point in loading data we will not use for the analysis, and keeping it in memory only wastes mental resources thinking about it, so we keep only the data we will actually use. If something is needed later, we can simply reload it then.
```
df = df.drop(['ID','name','category','country'], axis=1)
df.head()
```
## missing value handling
We apply some treatment to records (rows) that contain missing values. If missing values remain, we can no longer process those columns.
```
df.isnull().sum()
df.info()
df = df.dropna()
df.reset_index(inplace=True, drop=True)
df.isnull().sum()
df.info()
df
```
## Correlation
We check the correlations in the data. When strongly correlated variables are present, training can become unstable and the interpretability of the coefficients can no longer be trusted.
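For reference, `df.corr()` computes the Pearson correlation coefficient for each pair of columns:
$$
r_{xy} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}
$$
Values close to +1 or -1 indicate a strong linear relationship between the two variables.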
```
df.corr()
sns.heatmap(df.corr(), cmap='Blues', annot=True, fmt='1.3f')
plt.show()
```
## Convert the correlation to zero.
Since strong correlation was found in the data, we decorrelate the correlated variables.<br>
For the mathematical background of decorrelation, see [this article](https://qiita.com/Hawaii/items/37c0398a7d2afc8dbedb).
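Concretely, the code below takes the sample covariance matrix $\Sigma$ of the two pledged columns, computes its eigendecomposition, and rotates the data onto the eigenvectors; because $\Sigma$ is symmetric, the rotated data has a diagonal covariance matrix, i.e. zero correlation:
$$
\Sigma = S \Lambda S^{\top}, \qquad \mathbf{y} = S^{\top}\mathbf{x}, \qquad \mathrm{Cov}(\mathbf{y}) = S^{\top}\Sigma S = \Lambda
$$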
```
df_pledged = pd.DataFrame({'pledged' : df['pledged'], 'usdpledged' : df['usdpledged']})
df_pledged.reset_index(inplace=True, drop=True)
cov = np.cov(df_pledged, rowvar=0)
_, S = np.linalg.eig(cov)
pledged_decorr = np.dot(S.T, df_pledged.T).T
print('Correlation coefficient: {:.3f}'.format(np.corrcoef(pledged_decorr[:, 0], pledged_decorr[:, 1])[0,1]))
plt.grid(which='major',color='black',linestyle=':')
plt.grid(which='minor',color='black',linestyle=':')
plt.plot(pledged_decorr[:, 0], pledged_decorr[:, 1], 'o')
plt.show()
# Put the decorrelated variables back into the original dataset.
pledged_decorr = pd.DataFrame(pledged_decorr, columns=['pledged','uspledged'])
df['pledged'] = pledged_decorr.loc[:,'pledged']
df['usdpledged'] = pledged_decorr.loc[:,'uspledged']
```
## Convert the variable to dummy.
```
df['main_category'].unique()
df_dummy = pd.get_dummies(df['main_category'])
df = pd.concat([df.drop(['main_category'],axis=1),df_dummy],axis=1)
df_dummy = pd.get_dummies(df['currency'])
df = pd.concat([df.drop(['currency'],axis=1),df_dummy],axis=1)
df.head()
```
## Hold out
The hold-out method splits the data into a training set and a test set in advance; the model is trained on the training data and evaluated on the held-out data to check its accuracy.
```
y = df['state'].values
X = df.drop('state', axis=1).values
test_size = 0.3
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=1234)
```
## standardization
Standardization transforms the data so that it has mean 0 and variance 1, putting variables on the same scale and making them easier to compute with and compare.
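In formula form, each feature is transformed as
$$
z = \frac{x - \mu_{\mathrm{train}}}{\sigma_{\mathrm{train}}}
$$
where the mean and standard deviation are estimated on the training set (`fit_transform`) and reused unchanged on the test set (`transform`), so that no information leaks from the test data.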
```
stdsc = StandardScaler()
X_train = stdsc.fit_transform(X_train)
X_test = stdsc.transform(X_test)
```
# reconfirm the data
Finally, let's recheck the result of today's preprocessing. If you compare it again with the data loaded in [read data](#read-data), you will see that it has changed considerably.
```
df.info()
# default=20
pd.set_option('display.max_columns', 50)
df
```
# Working with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this lab, you'll explore *environments* and *compute targets* for experiments.
## Connect to Your Workspace
The first thing you need to do is to connect to your workspace using the Azure ML SDK.
> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
```
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
```
## Prepare Data
In this lab, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if you already created it in a previous lab, the code will find the existing version.)
```
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
```
## Create a Training Script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and logs accuracy and AUC metrics.
```
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
# Set regularization hyperparameter (passed as an argument to the script)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
args = parser.parse_args()
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['diabetes'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
```
## Define an Environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
Run the following cell to create an environment for the diabetes experiment.
```
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn'],
pip_packages=['azureml-defaults', 'azureml-dataprep[pandas]'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
```
Now you can use the environment for the experiment by assigning it to an Estimator (or RunConfig).
The following code assigns the environment you created to a generic estimator, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.
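For reference, a roughly equivalent RunConfig-based submission might look like the sketch below (this assumes a recent azureml-core SDK where `ScriptRunConfig` accepts an `environment` argument; the dataset input is omitted here for brevity, so it is not a drop-in replacement for the estimator above):
```
from azureml.core import Experiment, ScriptRunConfig

# Attach the environment directly to a ScriptRunConfig instead of an estimator
src = ScriptRunConfig(source_directory=experiment_folder,
                      script='diabetes_training.py',
                      arguments=['--regularization', '0.1'],
                      environment=diabetes_env)

# Submit the run in the same way as with the estimator
experiment = Experiment(workspace=ws, name='diabetes-training')
run = experiment.submit(config=src)
```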
```
from azureml.train.estimator import Estimator
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Set the script parameters
script_params = {
'--regularization': 0.1
}
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create an estimator
estimator = Estimator(source_directory=experiment_folder,
inputs=[diabetes_ds.as_named_input('diabetes')],
script_params=script_params,
compute_target = 'local',
environment_definition = diabetes_env,
entry_script='diabetes_training.py')
# Create an experiment
experiment = Experiment(workspace = ws, name = 'diabetes-training')
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
```
The experiment successfully used the environment, which included all of the packages it required.
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
```
# Register the environment
diabetes_env.register(workspace=ws)
```
## Run an Experiment on a Remote Compute Target
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud.
Azure ML supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them. In this case, we'll run the diabetes training experiment on a compute cluster with a unique name of your choosing, so let's verify that it exists (and if not, create it) so we can use it to run training experiments.
> **Important**: Change *your-compute-cluster* to a unique name for your compute cluster in the code below before running it! Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
```
Now you're ready to run the experiment on the compute you created. You can do this by specifying the **compute_target** parameter in the estimator (you can set this to either the name of the compute target, or a **ComputeTarget** object.)
You'll also reuse the environment you registered previously.
```
from azureml.train.estimator import Estimator
from azureml.core import Environment, Experiment
from azureml.widgets import RunDetails
# Get the environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Set the script parameters
script_params = {
'--regularization': 0.1
}
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create an estimator
estimator = Estimator(source_directory=experiment_folder,
inputs=[diabetes_ds.as_named_input('diabetes')],
script_params=script_params,
compute_target = cluster_name, # Run the experiment on the remote compute target
environment_definition = registered_env,
entry_script='diabetes_training.py')
# Create an experiment
experiment = Experiment(workspace = ws, name = 'diabetes-training')
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
```
The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment with a large volume of data that would take several hours on your local workstation - dynamically creating more scalable compute may reduce the overall time significantly.
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com).
> **Note**: After some time, the widget may stop updating. You'll be able to tell the experiment run has completed by the information displayed immediately below the widget and by the fact that the kernel indicator at the top right of the notebook window has changed from **⚫** (indicating the kernel is running code) to **◯** (indicating the kernel is idle).
After the experiment has finished, you can get the metrics and files generated by the experiment run. The files will include logs for building the image and managing the compute.
```
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
```
**More Information**:
- For more information about environments in Azure Machine Learning, see [Reuse environments for training and deployment by using Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/how-to-use-environments).
- For more information about compute targets in Azure Machine Learning, see [What are compute targets in Azure Machine Learning?](https://docs.microsoft.com/azure/machine-learning/concept-compute-target).
# Word-level language modeling using PyTorch
## Contents
1. [Background](#Background)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Host](#Host)
---
## Background
This example trains a multi-layer LSTM RNN model on a language modeling task based on [PyTorch example](https://github.com/pytorch/examples/tree/master/word_language_model). By default, the training script uses the Wikitext-2 dataset. We will train a model on SageMaker, deploy it, and then use deployed model to generate new text.
For more information about the PyTorch in SageMaker, please visit [sagemaker-pytorch-containers](https://github.com/aws/sagemaker-pytorch-containers) and [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk) github repositories.
---
## Setup
_This notebook was created and tested on an ml.p2.xlarge notebook instance._
Let's start by creating a SageMaker session and specifying:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
- The IAM role arn used to give training and hosting access to your data. See [the documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the sagemaker.get_execution_role() with appropriate full IAM role arn string(s).
```
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/DEMO-pytorch-rnn-lstm'
role = sagemaker.get_execution_role()
```
## Data
### Getting the data
As mentioned above we are going to use [the wikitext-2 raw data](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/). This data is from Wikipedia and is licensed CC-BY-SA-3.0. Before you use this data for any other purpose than this example, you should understand the data license, described at https://creativecommons.org/licenses/by-sa/3.0/
```
%%bash
wget http://research.metamind.io.s3.amazonaws.com/wikitext/wikitext-2-raw-v1.zip
unzip -n wikitext-2-raw-v1.zip
cd wikitext-2-raw
mv wiki.test.raw test && mv wiki.train.raw train && mv wiki.valid.raw valid
```
Let's preview what data looks like.
```
!head -5 wikitext-2-raw/train
```
### Uploading the data to S3
We are going to use the `sagemaker.Session.upload_data` function to upload our datasets to an S3 location. The return value `inputs` identifies that location, and we will use it later when we start the training job.
```
inputs = sagemaker_session.upload_data(path='wikitext-2-raw', bucket=bucket, key_prefix=prefix)
print('input spec (in this case, just an S3 path): {}'.format(inputs))
```
## Train
### Training script
We need to provide a training script that can run on the SageMaker platform. The training script is very similar to a training script you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, such as:
* `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to.
These artifacts are uploaded to S3 for model hosting.
* `SM_OUTPUT_DATA_DIR`: A string representing the filesystem path to write output artifacts to. Output artifacts may
include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed
and uploaded to S3 to the same S3 prefix as the model artifacts.
Supposing one input channel, 'training', was used in the call to the PyTorch estimator's `fit()` method,
the following will be set, following the format `SM_CHANNEL_[channel_name]`:
* `SM_CHANNEL_TRAINING`: A string representing the path to the directory containing data in the 'training' channel.
A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to `model_dir` so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an `argparse.ArgumentParser` instance.
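For illustration only, a minimal training-script skeleton (not the actual `train.py` from the repository) might parse its hyperparameters and the SageMaker environment variables like this:
```
import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # Hyperparameters passed to the estimator arrive as command-line arguments
    parser.add_argument('--epochs', type=int, default=6)
    # SageMaker exposes the model and data locations as environment variables
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING'))
    args = parser.parse_args()

    # ...load data from args.data_dir, train for args.epochs epochs,
    # and save the final model artifacts to args.model_dir...
```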
In this notebook example, we will use Git integration. That is, you can specify a training script that is stored in a GitHub, CodeCommit or other Git repository as the entry point for the estimator, so that you don't have to download the scripts locally. If you do so, source directory and dependencies should be in the same repo if they are needed.
To use Git integration, pass a dict `git_config` as a parameter when you create the `PyTorch` Estimator object. In the `git_config` parameter, you specify the fields `repo`, `branch` and `commit` to locate the specific repo you want to use. If authentication is required to access the repo, you can specify fields `2FA_enabled`, `username`, `password` and token accordingly.
The script that we will use in this example is stored in GitHub repo
[https://github.com/awslabs/amazon-sagemaker-examples/tree/training-scripts](https://github.com/awslabs/amazon-sagemaker-examples/tree/training-scripts),
under the branch `training-scripts`. It is a public repo so we don't need authentication to access it. Let's specify the `git_config` argument here:
```
git_config = {'repo': 'https://github.com/awslabs/amazon-sagemaker-examples.git', 'branch': 'training-scripts'}
```
Note that we do not specify `commit` in `git_config` here, in which case the latest commit of the specified repo and branch will be used by default.
The script run by this notebook is:
[https://github.com/awslabs/amazon-sagemaker-examples/blob/training-scripts/pytorch-rnn-scripts/train.py](https://github.com/awslabs/amazon-sagemaker-examples/blob/training-scripts/pytorch-rnn-scripts/train.py).
For more information about training environment variables, please visit [SageMaker Containers](https://github.com/aws/sagemaker-containers).
In the current example we also need to provide a source directory, because the training script imports data and model classes from other modules. The source directory is
[https://github.com/awslabs/amazon-sagemaker-examples/blob/training-scripts/pytorch-rnn-scripts/](https://github.com/awslabs/amazon-sagemaker-examples/blob/training-scripts/pytorch-rnn-scripts/). We should provide 'pytorch-rnn-scripts', a relative path inside the Git repository, as `source_dir` when creating the Estimator object.
### Run training in SageMaker
The PyTorch class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script and source directory, an IAM role, the number of training instances, and the training instance type. In this case we will run our training job on ```ml.p2.xlarge``` instance. As you can see in this example you can also specify hyperparameters.
```
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point='train.py',
role=role,
framework_version='1.2.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
source_dir='pytorch-rnn-scripts',
git_config=git_config,
# available hyperparameters: emsize, nhid, nlayers, lr, clip, epochs, batch_size,
# bptt, dropout, tied, seed, log_interval
hyperparameters={
'epochs': 6,
'tied': True
})
```
After we've constructed our PyTorch object, we can fit it using the data we uploaded to S3. SageMaker makes sure our data is available in the local filesystem, so our training script can simply read the data from disk.
```
estimator.fit({'training': inputs})
```
## Host
### Hosting script
We are going to provide custom implementation of `model_fn`, `input_fn`, `output_fn` and `predict_fn` hosting functions in a separate file, which is in the same Git repo as the training script:
[https://github.com/awslabs/amazon-sagemaker-examples/blob/training-scripts/pytorch-rnn-scripts/generate.py](https://github.com/awslabs/amazon-sagemaker-examples/blob/training-scripts/pytorch-rnn-scripts/generate.py).
We will use Git integration for hosting too since the hosting code is also in the Git repo.
You can also put your training and hosting code in the same file but you would need to add a main guard (`if __name__=='__main__':`) for the training code, so that the container does not inadvertently run it at the wrong point in execution during hosting.
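A minimal sketch of that single-file layout (function bodies are placeholders, not the notebook's actual code):
```
import argparse

def model_fn(model_dir):
    # hosting: load and return the model (placeholder)
    return None

def predict_fn(input_data, model):
    # hosting: run inference on the deserialized input (placeholder)
    return input_data

def train(args):
    # training logic would go here (placeholder)
    print('training for', args.epochs, 'epochs')

if __name__ == '__main__':
    # runs only when the script is executed directly for training,
    # never when the hosting container imports the module for inference
    parser = argparse.ArgumentParser()
    parser.add_argument('--epochs', type=int, default=1)
    train(parser.parse_args())
```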
### Import model into SageMaker
The PyTorch model uses a npy serializer and deserializer by default. For this example, since we have a custom implementation of all the hosting functions and plan on using JSON instead, we need a predictor that can serialize and deserialize JSON.
```
from sagemaker.predictor import RealTimePredictor, json_serializer, json_deserializer
class JSONPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(JSONPredictor, self).__init__(endpoint_name, sagemaker_session, json_serializer, json_deserializer)
```
Since the hosting functions are implemented outside of the training script, we can't just use the estimator object to deploy the model. Instead we need to create a PyTorchModel object, using the latest training job to get the S3 location of the trained model data. Besides the model data location in S3, we also need to configure the PyTorchModel with the entry script, the source directory (because our `generate` script requires the model and data classes from the source directory), and an IAM role.
```
from sagemaker.pytorch import PyTorchModel
training_job_name = estimator.latest_training_job.name
desc = sagemaker_session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
trained_model_location = desc['ModelArtifacts']['S3ModelArtifacts']
model = PyTorchModel(model_data=trained_model_location,
role=role,
framework_version='1.0.0',
entry_point='generate.py',
source_dir='pytorch-rnn-scripts',
git_config=git_config,
predictor_cls=JSONPredictor)
```
### Create endpoint
Now the model is ready to be deployed at a SageMaker endpoint, and we are going to use the `sagemaker.pytorch.model.PyTorchModel.deploy` method to do this. We can use a CPU-based instance for inference (in this case an ml.m4.xlarge) even though we trained on a GPU instance, because at the end of training we moved the model to the CPU before returning it. This way we can load the trained model on any device and then move it to the GPU if CUDA is available.
```
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
### Evaluate
We are going to use our deployed model to generate text by providing a random seed, a temperature (higher values increase diversity) and the number of words we would like to get.
```
input = {
'seed': 111,
'temperature': 2.0,
'words': 100
}
response = predictor.predict(input)
print(response)
```
### Cleanup
After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
```
sagemaker_session.delete_endpoint(predictor.endpoint)
```
# Chapter 7: Databases
```
%system curl -O http://www.cl.ecei.tohoku.ac.jp/nlp100/data/artist.json.gz
```
## 60. Building a KVS
```
import gzip
import json
import unicodedata
import redis
db = redis.Redis()
with gzip.open('artist.json.gz') as fd:
for line in fd:
data_json = json.loads(line)
key = unicodedata.normalize('NFKC', data_json['name']) + '\t' + str(data_json['id'])
value = data_json.get('area', '')
db.set(key, value)
```
## 61. Searching the KVS
```
artist = input('> ')
for key in db.keys(artist + '\t*'):
print(key.decode('utf8'), db.get(key).decode('utf8'))
```
## 62. Iterating over the KVS
```
count = 0
for key in db.keys('*'):
if db.get(key) == b'Japan':
count += 1
print(count)
```
## 63. KVS storing objects as values
```
db.flushdb()
with gzip.open('artist.json.gz') as fd:
for line in fd:
data_json = json.loads(line)
key = unicodedata.normalize('NFKC', data_json['name']) + '\t' + str(data_json['id'])
for tag in data_json.get('tags', []):
db.hset(key, tag['value'], tag['count'])
artist = input('> ')
for key in db.keys(artist + '\t*'):
print(key.decode('utf8'), db.hgetall(key))
```
## 64. Building MongoDB
```
import rethinkdb as r
r.connect().repl()
r.db("test").table_create("artists").run()
import gzip
import json
import unicodedata
artists = []
with gzip.open('artist.json.gz') as fd:
for (i, line) in enumerate(fd):
data_json = json.loads(line)
key = unicodedata.normalize('NFKC', data_json['name']) + '\t' + str(data_json['id'])
artists.append({'name': key,
'area': data_json.get('area', ''),
'aliases.name': [tag['name'] for tag in data_json.get('aliases', [])],
'tags.value': [tag['value'] for tag in data_json.get('tags', [])],
'rating.value': data_json.get('rating', {}).get('value', 0)})
if i % 100000 == 0:
r.table("artists").insert(artists).run()
artists = []
r.table("artists").insert(artists).run()
```
## 65. Searching MongoDB
```
artist = input('> ')
cursor = r.table("artists").filter(r.row['name'].match('^' + artist + '\t*')).run()
for document in cursor:
print(document)
```
## 66. Getting the number of search results
```
cursor = r.table("artists").filter(r.row["area"] == 'Japan').run()
print(sum([1 for document in cursor]))
```
## 67. Retrieving multiple documents
```
artist = 'スマップ'
cursor = r.table("artists").run()
for document in cursor:
if artist in document['aliases.name']:
print(document)
```
## 68. Sorting
```
cursor = r.table("artists").filter(lambda artist: artist['rating.value'] > 0).run()
for doc in sorted([ doc for doc in cursor if 'dance' in doc['tags.value']],
key=lambda x: x['rating.value'], reverse=True)[:10]:
print(doc)
```
## 69. Web application
```
from flask import Flask, Markup, request
app = Flask(__name__)
HTML = '''<html>
<head>
<title>言語処理100本ノック2015 問題69</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
</head>
<body>
<form method="POST" action="/">
名前、別名:<input type="text" name="name" size="20"/><br />
タグ:<input type="text" name="tag" size="20"/><br />
<input type="submit" value="検索"/>
</form>
%s
</body>
</html>'''
def sort(cursor):
return [doc for doc in sorted([ doc for doc in cursor],
key=lambda x: x['rating.value'], reverse=True)][:10]
def search(artist, tag):
contents = ''
if artist:
cursor = r.table("artists").filter(r.row['name'].match('^' + artist + '\t*')).run()
for doc in sort(cursor):
contents += str(doc) + '<br>\n'
if tag:
cursor = r.table("artists").run()
cursor = [doc for doc in cursor if tag in doc['tags.value']]
for doc in sort(cursor):
contents += str(doc) + '<br>\n'
return contents
@app.route("/", methods=['GET', 'POST'])
def contents():
message = ''
contents = ''
artist = request.form.get('name')
tag = request.form.get('tag')
if artist or tag:
contents = search(artist, tag)
return Markup(HTML % (contents))
if __name__ == "__main__":
app.run()
```
# Introduction to Quantum Machine Learning
Original notebook: https://github.com/qiskit-community/qiskit-application-modules-demo-sessions/blob/main/qiskit-machine-learning/qiskit-machine-learning-demo.ipynb
Anton Dekusar
IBM Quantum, IBM Research Europe - Dublin
## Overview
- Overview of quantum machine learning
- Support vector machines
- Quantum support vector machines
- Quantum kernel demo
## Overview of Quantum Machine Learning
### Qiskit ML

## Support Vector Machines
### The classification problem
Supervised binary classification:
train set: $T = \{\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_M\}, \: \mathbf{x}_i \in \mathbb{R}^s$
class map: $c_T : T \rightarrow \{+1, -1\}$
test set: $S = \{\mathbf{x}_1', \mathbf{x}_2', ..., \mathbf{x}_m'\}$
<br>
The classification map is not known for all $\mathbf{x}$:
map: $c: \mathbf{x} \rightarrow \{+1, -1\}, \, \forall \mathbf{x}$
<br>
Goal: find a classification map $\widetilde{c} : T \cup S \rightarrow \{+1, -1\}$ that agrees well with the unknown map $c$ that determines all the true labels.

### SVM
The linear decision function
$$
\widetilde{c}_{\text{SVM}}(\mathbf{x}) = \mathrm{sign}(\mathbf{w}^T\mathbf{x} - b)
$$
applies only to linearly separable data. The goal is to maximize the margin:
$$
\min_{\mathbf{w} \in \mathbb{R}^s, \, b \in \mathbb{R}} ||\mathbf{w}||^2 \\
\text{s. t. } y_i(\mathbf{w}^T \mathbf{x}_i - b) \geq 1
$$

### Kernelized SVM
Introduce a nonlinear feature transformation
$$
\phi : \mathbb{R}^s \rightarrow \mathcal{V}, \text{ where } \mathcal{V} \text{ is a Hilbert space} \\
\widetilde{c}_{\text{SVM}}(\mathbf{x}) = \mathrm{sign}(\langle \mathbf{w}, \phi(\mathbf{x})\rangle_{\mathcal{V}} - b)
$$
This decision function is linear in the feature space, but nonlinear in the original space.
The kernel trick is to rewrite the SVM problem so that it depends explicitly only on the kernel
$$
k(\mathbf{x}, \mathbf{x'}) = \langle\phi(\mathbf{x}), \phi(\mathbf{x'})\rangle_{\mathcal{V}}
$$
rather than on the feature vectors $\phi(\mathbf{x})$ themselves.
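Concretely, the (hard-margin) dual problem and the resulting decision function touch the data only through kernel evaluations:
$$
\max_{\boldsymbol{\alpha}} \sum_{i} \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \, k(\mathbf{x}_i, \mathbf{x}_j)
\quad \text{s.t. } \alpha_i \geq 0, \; \sum_i \alpha_i y_i = 0
$$
$$
\widetilde{c}_{\text{SVM}}(\mathbf{x}) = \mathrm{sign}\Big(\sum_i \alpha_i y_i \, k(\mathbf{x}_i, \mathbf{x}) - b\Big)
$$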

Original data
$$
\mathbf{x} \in \mathbb{R}^2
$$
Feature map
$$
\phi(\mathbf{x}) = (x_1, x_2, x_1^2 + x_2^2) \in \mathbb{R}^3
$$
Kernel
$$
k(\mathbf{x}, \mathbf{x'}) = \phi(\mathbf{x}) \cdot \phi(\mathbf{x'}) = \mathbf{x} \cdot \mathbf{x'} + ||\mathbf{x}||^2 ||\mathbf{x'}||^2
$$

## Quantum Kernel Demo
```
import warnings
warnings.filterwarnings(action='ignore')
import matplotlib.pyplot as plt
import numpy as np
from qiskit.utils import algorithm_globals
seed = 2022
algorithm_globals.random_seed = seed
```
### Classification example
The classification example uses:
- the ad hoc dataset described in "Supervised learning with quantum-enhanced feature spaces" (https://www.nature.com/articles/s41586-019-0980-2)
- the `scikit-learn` Support Vector Machine (https://scikit-learn.org/stable/modules/svm.html) classification (`SVC`) algorithm
- the `QSVC` wrapper from Qiskit Machine Learning
Let's sample training and test datasets using the utility function provided by Qiskit Machine Learning. We choose the number of dimensions of the dataset with `adhoc_dimension = 2`.
```
from qiskit_machine_learning.datasets import ad_hoc_data
adhoc_dimension = 2
x_train, y_train, x_test, y_test, adhoc_total = ad_hoc_data(
training_size=50,
test_size=10,
n=adhoc_dimension,
gap=0.3,
plot_data=False,
one_hot=False,
include_sample_total=True
)
plt.figure(figsize=(16., 16.))
plt.ylim(0, 2 * np.pi)
plt.xlim(0, 2 * np.pi)
plt.imshow(np.asmatrix(adhoc_total).T, interpolation='nearest',
origin='lower', cmap='RdBu', extent=[0, 2 * np.pi, 0, 2 * np.pi])
plt.scatter(x_train[np.where(y_train[:] == 0), 0], x_train[np.where(y_train[:] == 0), 1],
marker='s', facecolors='w', edgecolors='b', label="A train")
plt.scatter(x_train[np.where(y_train[:] == 1), 0], x_train[np.where(y_train[:] == 1), 1],
marker='o', facecolors='w', edgecolors='r', label="B train")
plt.scatter(x_test[np.where(y_test[:] == 0), 0], x_test[np.where(y_test[:] == 0), 1],
marker='s', facecolors='b', edgecolors='w', label="A test")
plt.scatter(x_test[np.where(y_test[:] == 1), 0], x_test[np.where(y_test[:] == 1), 1],
marker='o', facecolors='r', edgecolors='w', label="B test")
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
plt.title("Ad hoc dataset for classification")
plt.show()
```
### Classic SVM
https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
kernel{‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’} or callable, default=’rbf’
```
from sklearn.svm import SVC
svc = SVC()
svc.fit(x_train, y_train)
svc.score(x_test, y_test)
```
## SVM with custom kernel
https://scikit-learn.org/stable/auto_examples/svm/plot_custom_kernel.html
```
def my_kernel(X, Y):
"""
We create a custom kernel:
(2 0)
k(X, Y) = X ( ) Y.T
(0 1)
"""
M = np.array([[2, 0], [0, 1.0]])
return np.dot(np.dot(X, M), Y.T)
svc = SVC(kernel=my_kernel)
svc.fit(x_train, y_train)
svc.score(x_test, y_test)
```
### Qiskit ML
Once the training and test datasets are ready, we set up the `QuantumKernel` class to compute the kernel matrix using the `ZFeatureMap`, and we use the Aer `qasm_simulator` with 1024 `shots`.
```
from qiskit import Aer
from qiskit.utils import QuantumInstance
quantum_instance = QuantumInstance(
Aer.get_backend('qasm_simulator'),
shots=1024,
seed_simulator=seed,
seed_transpiler=seed)
```
## ZFeatureMap
```
from qiskit.circuit.library import ZFeatureMap
from qiskit_machine_learning.kernels import QuantumKernel
z_feature_map = ZFeatureMap(feature_dimension=adhoc_dimension, reps=2)
z_kernel = QuantumKernel(feature_map=z_feature_map, quantum_instance=quantum_instance)
z_feature_map.draw(output="mpl", scale=2)
z_feature_map.decompose().draw(output="mpl", scale=3)
```
With the `scikit-learn` `SVC` algorithm, a custom kernel can be defined in two ways:
- providing the kernel as a callable function
- precomputing the kernel matrix
Either of these can be done with the QuantumKernel class in Qiskit Machine Learning. For details, see https://scikit-learn.org/stable/modules/svm.html#custom-kernels.
In this demo, we provide the kernel as a callable function.
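For completeness, the precomputed-matrix route would look roughly like this sketch, reusing the `z_kernel` defined above (`QuantumKernel.evaluate` returns the kernel matrix):
```
# Precompute the train/train and test/train kernel matrices
matrix_train = z_kernel.evaluate(x_vec=x_train)
matrix_test = z_kernel.evaluate(x_vec=x_test, y_vec=x_train)

svc_precomputed = SVC(kernel='precomputed')
svc_precomputed.fit(matrix_train, y_train)
print(svc_precomputed.score(matrix_test, y_test))
```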
## ZZFeatureMap
```
from qiskit.circuit.library import ZZFeatureMap
zz_feature_map = ZZFeatureMap(feature_dimension=adhoc_dimension, reps=2, entanglement='linear')
zz_kernel = QuantumKernel(feature_map=zz_feature_map, quantum_instance=quantum_instance)
zz_feature_map.decompose().draw(output="mpl", scale=2)
svc = SVC(kernel=zz_kernel.evaluate)
svc.fit(x_train, y_train)
svc.score(x_test, y_test)
```
### QSVC
Qiskit Machine Learning also includes the `QSVC` class, which extends the `scikit-learn` `SVC` class, and it can be used as follows:
```
from qiskit_machine_learning.algorithms import QSVC
qsvc = QSVC()
qsvc.quantum_kernel.quantum_instance = quantum_instance
qsvc.fit(x_train, y_train)
qsvc.score(x_test, y_test)
qsvc.predict(x_test)
```
# Visualize Terrascope Sentinel-2 products
The Belgian Collaborative ground segment **TERRASCOPE** systematically processes Sentinel-2 L1C products into Surface Reflectance (TOC) and several biophysical parameters (NDVI, FAPAR, FCOVER, LAI) over Belgium.
This notebook illustrates (i) how to find the paths of different Sentinel-2 products (TOC, NDVI, FAPAR, FCOVER, LAI) for a cloud-free day over Belgium, (ii) how to mosaic them using virtual raster files (VRTs), and (iii) how to visualize them. We use the 'reticulate' R library to access a Python library that supports finding locations of Sentinel-2 products on the MEP. Reusing this Python library ensures that R users can also benefit from everything that is implemented for Python users.
More information on Terrascope and the Sentinel-2 products can be found at [www.terrascope.be](https://www.terrascope.be/)
If you run this notebook in the Terrascope JupyterLab, make sure to shut down all kernels before reconnecting, as the last cell uses a lot of memory.
# 1. Load libraries
```
library(raster)
library(gdalUtils)
library(reticulate)
#load catalog client
catalogclient <- import("catalogclient")
cat=catalogclient$catalog$Catalog()
#show all producttypes
cat$get_producttypes()
```
# 2. Get product paths
```
#first we import the python datetime package
datetime <- import("datetime")
#next we query the catalog for TOC, FCOVER, FAPAR, NDVI and LAI products on 2017-05-26
date = datetime$date(2017L, 5L, 26L)
#TOC
products_TOC = cat$get_products('CGS_S2_RADIOMETRY',
fileformat="GEOTIFF",
startdate=date,
enddate=date)
#NDVI
products_NDVI= cat$get_products('CGS_S2_NDVI',
fileformat="GEOTIFF",
startdate=date,
enddate=date)
#FAPAR
products_FAPAR= cat$get_products('CGS_S2_FAPAR',
fileformat="GEOTIFF",
startdate=date,
enddate=date)
#FCOVER
products_FCOVER= cat$get_products('CGS_S2_FCOVER',
fileformat="GEOTIFF",
startdate=date,
enddate=date)
#LAI
products_LAI= cat$get_products('CGS_S2_LAI',
fileformat="GEOTIFF",
startdate=date,
enddate=date)
```
Next, we define a function to retrieve the paths to the different files which we will use for subsequent visualization. This path is stored in the *filename* attribute of the above _products_ variables.
```
get_paths <- function(products,pattern){
tmp <- sapply(products,FUN=function(x)(sapply(x$files,FUN=function(y)(y$filename))))
return(gsub("file:","",tmp[grep(pattern,tmp)]))
}
```
This function is subsequently used to retrieve the paths. For the TOC products we extract the paths for the blue, green, red and NIR band.
```
#TOC
files_B2 <- get_paths(products_TOC,"B02")
files_B3 <- get_paths(products_TOC,"B03")
files_B4 <- get_paths(products_TOC,"B04")
files_B8 <- get_paths(products_TOC,"B08")
head(files_B2)
#NDVI
files_NDVI <- get_paths(products_NDVI,"NDVI")
head(files_NDVI)
#FAPAR
files_FAPAR <- get_paths(products_FAPAR,"FAPAR")
head(files_FAPAR)
#FCOVER
files_FCOVER <- get_paths(products_FCOVER,"FCOVER")
head(files_FCOVER)
#LAI
files_LAI <- get_paths(products_LAI,"LAI")
head(files_LAI)
```
# 3. Make spatial vrt per band (combining all tiles)
Next, we use the gdalbuildvrt function from the gdalUtils library to build spatial VRTs, i.e. virtual raster files that form a spatial mosaic of all Sentinel-2 tiles for that particular day.
```
#TOC
gdalbuildvrt(files_B2,"B2.vrt")
gdalbuildvrt(files_B3,"B3.vrt")
gdalbuildvrt(files_B4,"B4.vrt")
gdalbuildvrt(files_B8,"B8.vrt")
#NDVI
gdalbuildvrt(files_NDVI,"NDVI.vrt")
#FAPAR
gdalbuildvrt(files_FAPAR,"FAPAR.vrt")
#FCOVER
gdalbuildvrt(files_FCOVER,"FCOVER.vrt")
#LAI
gdalbuildvrt(files_LAI,"LAI.vrt")
```
# 4. Make multiband vrt
The gdalbuildvrt function is again called to stack the individual Sentinel-2 bands (B2, B3, B4 and B8) into a multiband vrt.
```
gdalbuildvrt(c("B2.vrt","B3.vrt","B4.vrt","B8.vrt"),"RGBNIR.vrt",separate=T)
```
# 5. Plot
Next we use the plotting functions from the R raster package to visualize (i) RGB composites and (ii) the individual biophysical parameters. Note that these products are scaled between 0-10000 (TOC) or 0-255 (Biopars). More information on the gains and offsets used can be found at [www.terrascope.be](https://www.terrascope.be/)
```
#TOC
im_RGBNIR <- stack("RGBNIR.vrt")
names(im_RGBNIR)<- c("B","G","R","NIR")
#RGB composites
plotRGB(im_RGBNIR,3,2,1,scale=10000,stretch="lin") #True Colour Composite with R=B4, G=B3, B=B2
plotRGB(im_RGBNIR,4,3,2,scale=10000,stretch="lin") #False Colour Composite with R=B8, G=B4, B=B3
#Biopar
im_Biopar <- stack(c(raster("NDVI.vrt"),raster("FAPAR.vrt"),raster("FCOVER.vrt"),raster("LAI.vrt")))
names(im_Biopar) <- c("NDVI","FAPAR","FCOVER","LAI")
plot(im_Biopar)
```
|
github_jupyter
|
library(raster)
library(gdalUtils)
library(reticulate)
#load catalog client
catalogclient <- import("catalogclient")
cat=catalogclient$catalog$Catalog()
#show all producttypes
cat$get_producttypes()
#first we import the python datetime package
datetime <- import("datetime")
#next we query the catalog for TOC, FCOVER, FAPAR, NDVI and LAI products on 2017-05-26
date = datetime$date(2017L, 5L, 26L)
#TOC
products_TOC = cat$get_products('CGS_S2_RADIOMETRY',
fileformat="GEOTIFF",
startdate=date,
enddate=date)
#NDVI
products_NDVI= cat$get_products('CGS_S2_NDVI',
fileformat="GEOTIFF",
startdate=date,
enddate=date)
#FAPAR
products_FAPAR= cat$get_products('CGS_S2_FAPAR',
fileformat="GEOTIFF",
startdate=date,
enddate=date)
#FCOVER
products_FCOVER= cat$get_products('CGS_S2_FCOVER',
fileformat="GEOTIFF",
startdate=date,
enddate=date)
#LAI
products_LAI= cat$get_products('CGS_S2_LAI',
fileformat="GEOTIFF",
startdate=date,
enddate=date)
get_paths <- function(products,pattern){
tmp <- sapply(products,FUN=function(x)(sapply(x$files,FUN=function(y)(y$filename))))
return(gsub("file:","",tmp[grep(pattern,tmp)]))
}
#TOC
files_B2 <- get_paths(products_TOC,"B02")
files_B3 <- get_paths(products_TOC,"B03")
files_B4 <- get_paths(products_TOC,"B04")
files_B8 <- get_paths(products_TOC,"B08")
head(files_B2)
#NDVI
files_NDVI <- get_paths(products_NDVI,"NDVI")
head(files_NDVI)
#FAPAR
files_FAPAR <- get_paths(products_FAPAR,"FAPAR")
head(files_FAPAR)
#FCOVER
files_FCOVER <- get_paths(products_FCOVER,"FCOVER")
head(files_FCOVER)
#LAI
files_LAI <- get_paths(products_LAI,"LAI")
head(files_LAI)
#TOC
gdalbuildvrt(files_B2,"B2.vrt")
gdalbuildvrt(files_B3,"B3.vrt")
gdalbuildvrt(files_B4,"B4.vrt")
gdalbuildvrt(files_B8,"B8.vrt")
#NDVI
gdalbuildvrt(files_NDVI,"NDVI.vrt")
#FAPAR
gdalbuildvrt(files_FAPAR,"FAPAR.vrt")
#FCOVER
gdalbuildvrt(files_FCOVER,"FCOVER.vrt")
#LAI
gdalbuildvrt(files_LAI,"LAI.vrt")
gdalbuildvrt(c("B2.vrt","B3.vrt","B4.vrt","B8.vrt"),"RGBNIR.vrt",separate=T)
#TOC
im_RGBNIR <- stack("RGBNIR.vrt")
names(im_RGBNIR)<- c("B","G","R","NIR")
#RGB composites
plotRGB(im_RGBNIR,3,2,1,scale=10000,stretch="lin") #True Colour Composite with R=B4, G=B3, B=B2
plotRGB(im_RGBNIR,4,3,2,scale=10000,stretch="lin") #False Colour Composite with R=B8, G=B4, B=B3
#Biopar
im_Biopar <- stack(c(raster("NDVI.vrt"),raster("FAPAR.vrt"),raster("FCOVER.vrt"),raster("LAI.vrt")))
names(im_Biopar) <- c("NDVI","FAPAR","FCOVER","LAI")
plot(im_Biopar)
| 0.305179 | 0.880129 |
# Lambda School Data Science Module 133
## Introduction to Bayesian Inference
## Assignment - Code it up!
Most of the above was pure math - now write Python code to reproduce the results! This is purposefully open ended - you'll have to think about how you should represent probabilities and events. You can and should look things up, and as a stretch goal - refactor your code into helpful reusable functions!
Specific goals/targets:
1. Write a function `def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk)` that reproduces the example from lecture, and use it to calculate and visualize a range of situations
2. Explore `scipy.stats.bayes_mvs` - read its documentation, and experiment with it on data you've tested in other ways earlier this week
3. Create a visualization comparing the results of a Bayesian approach to a traditional/frequentist approach
4. In your own words, summarize the difference between Bayesian and Frequentist statistics
If you're unsure where to start, check out [this blog post of Bayes theorem with Python](https://dataconomy.com/2015/02/introduction-to-bayes-theorem-with-python/) - you could and should create something similar!
Stretch goals:
- Apply a Bayesian technique to a problem you previously worked (in an assignment or project work) on from a frequentist (standard) perspective
- Check out [PyMC3](https://docs.pymc.io/) (note this goes beyond hypothesis tests into modeling) - read the guides and work through some examples
- Take PyMC3 further - see if you can build something with it!
### Write a function `prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk)` that reproduces the example from lecture
```
#STEP 1:
# Write a function def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk) that reproduces the example from lecture, and use it to calculate and visualize a range of situations
# Breathalyzer tests
# True positive rate = 100% = [(P(Positive/Drunk))] = probab_positive_drunk
# Prior Rate of drunk driving = 1 in 1000 (0.001) = P (Drunk) = probab_drunk_prior
# False positive rate = 8% = P (Positive) = probab_positive
import numpy as np
def prob_drunk_given_positive(prob_drunk_prior, prob_positive_drunk, prob_positive):
return (prob_drunk_prior * prob_positive_drunk) / prob_positive
# scenario for breathalyzer test in the lecture
prob_drunk_given_positive(0.001, 1, 0.08)
# Another scenario with different values for true positive and false positive rate
prob_drunk_given_positive(0.001, 0.8, 0.1)
# The probability of being drunk given a positive test is still very low because of the very low prevalence of drunk driving in the general population (the prior), which is a key input in Bayesian statistics
```
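Note that the lecture example plugs the 8% false positive rate in directly as P(positive). A fuller calculation (a small sketch, not part of the original assignment) expands the denominator using the law of total probability; the posterior is still small, around 1.2%:
```
# Sketch: Bayes' rule with the full denominator P(positive) from the law of total probability
def prob_drunk_given_positive_full(prior, true_positive_rate, false_positive_rate):
    p_positive = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return (true_positive_rate * prior) / p_positive

prob_drunk_given_positive_full(0.001, 1.0, 0.08)  # ~0.0124, i.e. about 1.2%
```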
### Python program: Bayes' theorem applied to the breathalyzer test
```
# Python program: Bayes' theorem applied to the breathalyzer test
print("Enter 'x' to exit.")
num1 = input("Enter prob_drunk_prior (population incidence of drunk driving, as a fraction): ")
num2 = input("Enter prob_positive_drunk (true positive rate, as a fraction): ")
num3 = input("Enter prob_positive (false positive rate, as a fraction): ")
if num1 == 'x':
    exit()
else:
    res = float(num1) * float(num2) / float(num3)
    print("prob_drunk_given_positive =", res)
```
### STEP 2:
### Explore scipy.stats.bayes_mvs - read its documentation, and experiment with it on data you've tested in other ways earlier this week
```
# STEP 2:
# Explore scipy.stats.bayes_mvs - read its documentation, and experiment with it on data you've tested in other ways earlier this week
import pandas as pd
import numpy as np
df = pd.read_csv('https://raw.githubusercontent.com/bs3537/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/master/master.csv')
df.head()
df.tail()
df.dtypes
df2 = df[df['country'] == 'United States']
df2.head()
df2.shape
df2.isnull().sum()
df2['year'].max()
df2_1 = df2['suicides/100k pop'].describe()
print ("Summary statistics for the U.S. population suicide rate/100K pop in all age groups for 1985 to 2015", df2_1)
# isolating U.S. suicide number data across all age groups for 1985-2015.
df2_1_1 = df2['suicides/100k pop']
df2_1_1.head()
df2_1_1.shape
df3 = df2[df2['year'] == 2015]
df3.head()
df4= df3['suicides/100k pop'].describe()
print ("Summary statistics for the U.S. population suicide rate/100k pop in all age groups for 2015", df4)
# isolating U.S. suicide rate/100k pop. data across all age groups for 2015.
df5= df3['suicides/100k pop']
df5.head()
df6 = df2[df2['year'] == 1985]
df6.head()
# isolating U.S. suicide number data across all age groups for 1985.
df7 = df6['suicides/100k pop']
df7.head()
df8= df6['suicides/100k pop'].describe()
print ("Summary statistics for the U.S. population suicide rate/100k pop in all age groups for 1985", df8)
```
## t-test
```
# Null hypothesis is that the average number of suicides for the U.S. in 2015 is not different from that in 1985.
from scipy import stats
# First using t test
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel
ttest_ind(df5, df7, equal_var=False)
# Using the t-test, we fail to reject the null hypothesis: there is no significant difference in the suicide rate/100k pop. across all age groups in the U.S. in 2015 vs. 1985 (p-value > 0.05)
```
### Applying frequentist method to calculate mean, variance and std dev for GDP per capita in 1985-2015 population sample
```
df8 = df2['gdp_per_capita ($)']
df8.head()
df8.isnull().sum()
df8.describe()
mean = np.mean(df8)
mean
# standard deviation using frequentist approach
std = np.std(df8, ddof=1)
std
var = np.var(df8)
var
sample_size = 372
std_err = std/np.sqrt(sample_size)
std_err
t = 1.984 # 95% confidence
(mean, mean - t*std_err, mean + t*std_err)
# The output gives the mean and 95% C.I. by frequentist approach.
#using df.describe function to compare summary stats for 1985-2016 U.S. GDP per capita
df8.describe()
# The mean and std. deviation match that calculated using above numpy functions
```
### Applying Bayesian method to calculate mean, variance and std dev for 1985-2016 U.S. GDP per capita
```
# Bayesian method, using alpha =0.95, for measuring 95% confidence intervals
# this function also returns the confidence intervals.
stats.bayes_mvs(df8, alpha=0.95)
# first line of the output gives the mean and 95% CI using Bayesian method
```
### Conclusions
- The mean is the same and the 95% confidence intervals are almost the same using frequentist and Bayesian statistics.
- The variance is slightly higher for the Bayesian method but can be considered almost the same.
- The standard deviation is slightly higher for the Bayesian method but can be considered almost the same.
### Plotting a histogram showing the means and distributions of the 1985-2016 U.S. GDP per capita data by frequentist and Bayesian methods
```
# the frequentist and Bayesian approach means are the same = 39269.61
means = 39269.61, 39269.61
# standard deviation by frequentist method = 12334.11
# standard deviation by Bayesian method = 12359.12
stdevs = 12334.11, 12359.12
dist = pd.DataFrame(
np.random.normal(loc=means, scale=stdevs, size=(1000, 2)),
columns=['frequentist', 'Bayesian'])
dist.agg(['min', 'max', 'mean', 'std']).round(decimals=2)
import matplotlib.pyplot as plt  # needed for the plots below (not imported earlier in this notebook)
fig, ax = plt.subplots()
dist.plot.kde(ax=ax, legend=False, title='Histogram: frequentist vs. Bayesian methods')
dist.plot.hist(density=True, ax=ax)
ax.set_ylabel('Probability')
ax.grid(axis='y')
ax.set_facecolor('#d8dcd6')
```
### The histogram shows very similar distributions for the frequentist and Bayesian methods
### The main difference between the two methods is the way in which their practitioners interpret the confidence intervals, which is explained below. Other differences are also discussed below.
### In your own words, summarize the difference between Bayesian and Frequentist statistics
Frequentist statistics relies on a confidence interval (C.I.) when estimating the value of an unknown parameter from a sample, while the Bayesian approach relies on a credible region (C.R.). Frequentists treat probability as a measure of the frequency of repeated events, while Bayesians treat probability as a measure of the degree of certainty about values. Frequentists consider model parameters to be fixed and the data to be random, while Bayesians consider model parameters to be random and the data to be fixed.
In the Bayesian formula, the posterior for the mean is essentially equal to the frequentist sampling distribution for the mean (as shown in the GDP per capita example above). The intervals calculated by the two methods are also similar, but their interpretation is different.
Frequentist confidence interval interpretation: 95% of intervals constructed this way from repeated samples would contain the true mean.
Bayesian credible interval interpretation: given our observed data, there is a 95% probability that the true value of the mean falls within these two bounds.
The Bayesian interpretation is thus a statement of probability about the parameter value given fixed bounds. The frequentist solution is a probability about the bounds given a fixed parameter value.
The Bayesian approach fixes the credible region and guarantees that 95% of possible values of the mean will fall within it. The frequentist approach fixes the parameter and guarantees that 95% of possible confidence intervals will contain it.
In many scientific applications, frequentism arguably answers the wrong question when we want to know what a particular observed set of data is telling us. Still, frequentism continues to be the standard approach when submitting papers to scientific journals or running drug trials, since reviewers look for p-values calculated using the frequentist approach.
#### The Bayesian approach relies heavily on prior information, while the frequentist approach does not. The short sketch below illustrates how strongly the posterior depends on the prior.
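As a small illustration of that dependence (a sketch only, reusing the breathalyzer numbers from earlier and assuming matplotlib is available), we can plot the posterior as a function of the prior:
```
# Sketch: how the posterior P(drunk | positive) changes with the prior P(drunk),
# keeping the test characteristics fixed (100% true positive rate, 8% false positive rate)
import numpy as np
import matplotlib.pyplot as plt

priors = np.linspace(0.001, 0.5, 200)
posteriors = (1.0 * priors) / (1.0 * priors + 0.08 * (1 - priors))

plt.plot(priors, posteriors)
plt.xlabel('Prior P(drunk)')
plt.ylabel('Posterior P(drunk | positive test)')
plt.title('Posterior as a function of the prior')
plt.show()
```
The larger the prior probability of drunk driving, the higher the posterior for the same test result, which is exactly the sensitivity to the prior that the frequentist approach does not have.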
## Resources
- [Worked example of Bayes rule calculation](https://en.wikipedia.org/wiki/Bayes'_theorem#Examples) (helpful as it fully breaks out the denominator)
- [Source code for mvsdist in scipy](https://github.com/scipy/scipy/blob/90534919e139d2a81c24bf08341734ff41a3db12/scipy/stats/morestats.py#L139)
|
github_jupyter
|
#STEP 1:
# Write a function def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk) that reproduces the example from lecture, and use it to calculate and visualize a range of situations
# Breathalyzer tests
# True positive rate = 100% = [(P(Positive/Drunk))] = probab_positive_drunk
# Prior Rate of drunk driving = 1 in 1000 (0.001) = P (Drunk) = probab_drunk_prior
# False positive rate = 8% = P (Positive) = probab_positive
import numpy as np
def prob_drunk_given_positive(prob_drunk_prior, prob_positive_drunk, prob_positive):
return (prob_drunk_prior * prob_positive_drunk) / prob_positive
# scenario for breathalyzer test in the lecture
prob_drunk_given_positive(0.001, 1, 0.08)
# Another scenario with different values for true positive and false positive rate
prob_drunk_given_positive(0.001, 0.8, 0.1)
# The probability of being positive drunk is still very low due to same input for very low prevalence of found drunk in the general population i.e. the prior which is a very important input in Bayesian statistics
# ****Python Program for Bayesian Theorem applied to Breathalyzer test****
print("Enter 'x' for exit.");
print("Enter prob_drunk_prior or population incidence of drunk driving in fractions: press enter after entering number ");
num1 = input();
print("probab_positive_drunk or True positive rate in fractions: press enter after entering number");
num2 = input();
print("prob_positive or False positive rate in fractions: press enter after entering number");
num3 = input();
if num1 == 'x':
exit();
else:
res = float(num1) * float(num2) / float(num3)
print ("prob_drunk_given_positive in fractions=", res)
# STEP 2:
# Explore scipy.stats.bayes_mvs - read its documentation, and experiment with it on data you've tested in other ways earlier this week
import pandas as pd
import numpy as np
df = pd.read_csv('https://raw.githubusercontent.com/bs3537/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/master/master.csv')
df.head()
df.tail()
df.dtypes
df2 = df[df['country'] == 'United States']
df2.head()
df2.shape
df2.isnull().sum()
df2['year'].max()
df2_1 = df2['suicides/100k pop'].describe()
print ("Summary statistics for the U.S. population suicide rate/100K pop in all age groups for 1985 to 2015", df2_1)
# isolating U.S. suicide number data across all age groups for 1985-2015.
df2_1_1 = df2['suicides/100k pop']
df2_1_1.head()
df2_1_1.shape
df3 = df2[df2['year'] == 2015]
df3.head()
df4= df3['suicides/100k pop'].describe()
print ("Summary statistics for the U.S. population suicide rate/100k pop in all age groups for 2015", df4)
# isolating U.S. suicide rate/100k pop. data across all age groups for 2015.
df5= df3['suicides/100k pop']
df5.head()
df6 = df2[df2['year'] == 1985]
df6.head()
# isolating U.S. suicide number data across all age groups for 1985.
df7 = df6['suicides/100k pop']
df7.head()
df8= df6['suicides/100k pop'].describe()
print ("Summary statistics for the U.S. population suicide rate/100k pop in all age groups for 1985", df8)
# Null hypothesis is that the average number of suicides for the U.S. in 2015 is not different from that in 1985.
from scipy import stats
# First using t test
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel
ttest_ind(df5, df7, equal_var=False)
# Using t test, the null hypthesis is true and there is no difference in the number of suicide rate/100 k pop. across all age groups in the U.S. in 2015 vs. 1985 (p value is >0.05)
df8 = df2['gdp_per_capita ($)']
df8.head()
df8.isnull().sum()
df8.describe()
mean = np.mean(df8)
mean
# standard deviation using frequentist approach
std = np.std(df8, ddof=1)
std
var = np.var(df8)
var
sample_size = 372
std_err = std/np.sqrt(sample_size)
std_err
t = 1.984 # 95% confidence
(mean, mean - t*std_err, mean + t*std_err)
# The output gives the mean and 95% C.I. by frequentist approach.
#using df.describe function to compare summary stats for 1985-2016 U.S. GDP per capita
df8.describe()
# The mean and std. deviation match that calculated using above numpy functions
# Bayesian method, using alpha =0.95, for measuring 95% confidence intervals
# this function also returns the confidence intervals.
stats.bayes_mvs(df8, alpha=0.95)
# first line of the output gives the mean and 95% CI using Bayesian method
# frequentist and Bayesian approach means are same = 2453.83
means = 39269.61, 39269.61
# standard deviation by frequentist method = 12334.11
# standard deviation by Bayesian method = 12359.12
stdevs = 12334.11, 12359.12
dist = pd.DataFrame(
np.random.normal(loc=means, scale=stdevs, size=(1000, 2)),
columns=['frequentist', 'Bayesian'])
dist.agg(['min', 'max', 'mean', 'std']).round(decimals=2)
import matplotlib.pyplot as plt  # needed for the plots below (not imported earlier in this notebook)
fig, ax = plt.subplots()
dist.plot.kde(ax=ax, legend=False, title='Histogram: frequentist vs. Bayesian methods')
dist.plot.hist(density=True, ax=ax)
ax.set_ylabel('Probability')
ax.grid(axis='y')
ax.set_facecolor('#d8dcd6')
| 0.639286 | 0.968471 |
```
import param
import panel as pn
import holoviews as hv
pn.extension()
```
The [Param user guide](Param.ipynb) described how to set up classes which declare parameters and link them to some computation or visualization. In this section we will discover how to connect multiple such panels into a ``Pipeline`` to express complex workflows where the output of one stage feeds into the next stage. To start using a ``Pipeline`` let us declare an empty one by instantiating the class:
```
pipeline = pn.pipeline.Pipeline()
```
Having set up a Pipeline it is now possible to start populating it. While we have already seen how to declare a ``Parameterized`` class with parameters which are linked to some visualization or computation on a method using the ``param.depends`` decorator, ``Pipelines`` make use of another decorator and a convention for displaying the objects.
The ``param.output`` decorator provides a way to annotate the methods on a class by declaring its outputs. A ``Pipeline`` uses this information to determine what outputs are available to be fed into the next stage of the workflow. In the example below the ``Stage1`` class has two parameters of its own (``a`` and ``b``) and one output, which is named ``c``. The signature of the decorator allows a number of different ways of declaring the outputs (a short sketch of all three forms follows the list):
* ``param.output()``: Declaring an output without arguments will declare that the method returns an output which will inherit the name of the method and does not make any specific type declarations.
* ``param.output(param.Number)``: Declaring an output with a specific ``Parameter`` or Python type also declares an output with the name of the method but declares that the output will be of a specific type.
* ``param.output(c=param.Number)``: Declaring an output using a keyword argument allows overriding the method name as the name of the output and declares the type.
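A minimal sketch of the three declaration styles described above (the class and method names here are hypothetical, purely for illustration):
```
# Sketch: the three ways of declaring outputs with param.output (hypothetical example)
import param  # already imported above; repeated so the sketch is self-contained

class OutputExamples(param.Parameterized):

    @param.output()                # output named 'plain' (after the method), no type declared
    def plain(self):
        return 42

    @param.output(param.Number)    # output named 'scaled' (after the method), declared as a Number
    def scaled(self):
        return 4.2

    @param.output(c=param.Number)  # output explicitly named 'c', declared as a Number
    def compute(self):
        return 0.42
```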
In the example below the output is simply the result of multiplying the two inputs (``a`` and ``b``), which produces the output ``c``. Additionally we declare a ``view`` method which returns a ``LaTeX`` pane that renders the equation. Finally, a ``panel`` method returns a panel object rendering both the parameters and the view; this is the second convention that a ``Pipeline`` expects.
Let's start by displaying this stage on its own:
```
class Stage1(param.Parameterized):
a = param.Number(default=5, bounds=(0, 10))
b = param.Number(default=5, bounds=(0, 10))
@param.output(c=param.Number)
def output(self):
return self.a * self.b
@param.depends('a', 'b')
def view(self):
return pn.pane.LaTeX('$%s * %s = %s$' % (self.a, self.b, self.output()))
def panel(self):
return pn.Row(self.param, self.view)
stage1 = Stage1()
stage1.panel()
```
To summarize we have followed several conventions when setting up this stage of our ``Pipeline``:
1. Declare a Parameterized class with some input parameters
2. Declare one or more output methods and name them appropriately
3. Declare a ``panel`` method which returns a view of the object that the ``Pipeline`` can render
Now that the object has been instantiated we can also query it for its outputs:
```
stage1.param.outputs()
```
We can see that ``Stage1`` declares an output of name ``c`` of ``Number`` type which can be accessed by calling the ``output`` method on the object. Now let us add this stage to our ``Pipeline`` using the ``add_stage`` method:
```
pipeline.add_stage('Stage 1', stage1)
```
A ``Pipeline`` with only a single stage is not much of a ``Pipeline`` of course, so it's time to set up a second stage, which consumes the outputs of the first. Recall that ``Stage1`` declares one output named ``c``. This means that if the output from ``Stage1`` should flow to ``Stage2``, the latter should declare a ``Parameter`` named ``c`` to consume the output of the first stage. Below we therefore define parameters ``c`` and ``exp``; since ``c`` is the output of the first stage, the ``c`` parameter is declared with a negative precedence, which stops ``panel`` from generating a widget for it. Otherwise this class is very similar to the first one: it declares both a ``view`` method which depends on the parameters of the class and a ``panel`` method which returns a view of the object.
```
class Stage2(param.Parameterized):
c = param.Number(default=5, precedence=-1, bounds=(0, None))
exp = param.Number(default=0.1, bounds=(0, 3))
@param.depends('c', 'exp')
def view(self):
return pn.pane.LaTeX('${%s}^{%s}={%.3f}$' % (self.c, self.exp, self.c**self.exp))
def panel(self):
return pn.Row(self.param, self.view)
stage2 = Stage2(c=stage1.output())
stage2.panel()
```
Now that we have declared the second stage of the pipeline let us add it to the ``Pipeline`` object:
```
pipeline.add_stage('Stage 2', stage2)
```
And that's it: we have now declared a two-stage pipeline, where the output ``c`` flows from the first stage into the second. To display it we can now view the ``pipeline.layout``:
```
pipeline.layout
```
As you can see the ``Pipeline`` renders a little diagram displaying the available stages in the workflow along with previous and next buttons to move between each stage. This allows setting up complex workflows with multiple stages, where each component is a self-contained unit, with minimal declarations about its outputs (using the ``param.output`` decorator) and how to render it (by declaring a ``panel`` method).
Above we created the ``Pipeline`` as we went along, which makes some sense in a notebook. When deploying the Pipeline as a server app, or when there's no reason to instantiate each stage separately, it is also possible to declare the stages as part of the constructor:
```
stages = [
('Stage 1', Stage1),
('Stage 2', Stage2)
]
pipeline = pn.pipeline.Pipeline(stages)
pipeline.layout
```
As you will note, the Pipeline stages may be either ``Parameterized`` instances or classes; however, when working with instances you must ensure that updating the parameters of the instance is sufficient to update its current state.
|
github_jupyter
|
import param
import panel as pn
import holoviews as hv
pn.extension()
pipeline = pn.pipeline.Pipeline()
class Stage1(param.Parameterized):
a = param.Number(default=5, bounds=(0, 10))
b = param.Number(default=5, bounds=(0, 10))
@param.output(c=param.Number)
def output(self):
return self.a * self.b
@param.depends('a', 'b')
def view(self):
return pn.pane.LaTeX('$%s * %s = %s$' % (self.a, self.b, self.output()))
def panel(self):
return pn.Row(self.param, self.view)
stage1 = Stage1()
stage1.panel()
stage1.param.outputs()
pipeline.add_stage('Stage 1', stage1)
class Stage2(param.Parameterized):
c = param.Number(default=5, precedence=-1, bounds=(0, None))
exp = param.Number(default=0.1, bounds=(0, 3))
@param.depends('c', 'exp')
def view(self):
return pn.pane.LaTeX('${%s}^{%s}={%.3f}$' % (self.c, self.exp, self.c**self.exp))
def panel(self):
return pn.Row(self.param, self.view)
stage2 = Stage2(c=stage1.output())
stage2.panel()
pipeline.add_stage('Stage 2', stage2)
pipeline.layout
stages = [
('Stage 1', Stage1),
('Stage 2', Stage2)
]
pipeline = pn.pipeline.Pipeline(stages)
pipeline.layout
| 0.540681 | 0.988503 |
```
from bs4 import BeautifulSoup
from splinter import Browser
from pprint import pprint
import pymongo
import pandas as pd
import requests
from selenium import webdriver
url = 'https://mars.nasa.gov/news/'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.prettify())
# Extract title text
nasa_title = soup.find('div',class_='content_title').text
print(nasa_title)
# Extract Paragraph text
nasa_p_text = soup.find('div',class_='rollover_description').text
print(nasa_p_text)
#JPL Mars Space Images - Featured Image
url='https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')  # re-parse the JPL page (the original reused the NASA news soup)
image = soup.find("img", class_="fancybox-image")["src"]
# print(image)
# Make sure to save a complete url string for this image
# pull image link
# NOTE: 'images' was not defined in the original; this selector is an assumption about the page markup
images = soup.find_all("a", class_="fancybox")
pic_src = []
for image in images:
    pic = image['data-fancybox-href']
    pic_src.append(pic)
featured_image_url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html' + pic
featured_image_url
#Mars Facts
mars_df= pd.read_html("https://space-facts.com/mars/")[0]
print(mars_df)
mars_df.columns=["Description","Value"]
mars_df.set_index("Description", inplace=True)
mars_df
```
<!-- #Mars Hemisphere -->
```
hemisphere_image_urls = []
url = ('https://astrogeology.usgs.gov/search/map/Mars/Viking/cerberus_enhanced')
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.prettify())
cerberus_img = soup.find_all('div', class_="wide-image-wrapper")
print(cerberus_img)
for img in cerberus_img:
pic = img.find('li')
full_img = pic.find('a')['href']
print(full_img)
cerberus_title = soup.find('h2', class_='title').text
print(cerberus_title)
cerberus_hem = {"Title": cerberus_title, "url": full_img}
print(cerberus_hem)
url = ('https://astrogeology.usgs.gov/search/map/Mars/Viking/schiaparelli_enhanced')
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
shiaparelli_img = soup.find_all('div', class_="wide-image-wrapper")
print(shiaparelli_img)
for img in shiaparelli_img:
pic = img.find('li')
full_img = pic.find('a')['href']
print(full_img)
shiaparelli_title = soup.find('h2', class_='title').text
print(shiaparelli_title)
shiaparelli_hem = {"Title": shiaparelli_title, "url": full_img}
print(shiaparelli_hem)
url = ('https://astrogeology.usgs.gov/search/map/Mars/Viking/syrtis_major_enhanced')
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
syrtris_img = soup.find_all('div', class_="wide-image-wrapper")
print(syrtris_img)
for img in syrtris_img:
pic = img.find('li')
full_img = pic.find('a')['href']
print(full_img)
syrtris_title = soup.find('h2', class_='title').text
print(syrtris_title)
syrtris_hem = {"Title": syrtris_title, "url": full_img}
print(syrtris_hem)
url = ('https://astrogeology.usgs.gov/search/map/Mars/Viking/valles_marineris_enhanced')
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
valles_marineris_img = soup.find_all('div', class_="wide-image-wrapper")
print(valles_marineris_img)
for img in valles_marineris_img:
pic = img.find('li')
full_img = pic.find('a')['href']
print(full_img)
valles_marineris_title = soup.find('h2', class_='title').text
print(valles_marineris_title)
valles_marineris_hem = {"Title": valles_marineris_title, "url": full_img}
print(valles_marineris_hem)
```
|
github_jupyter
|
from bs4 import BeautifulSoup
from splinter import Browser
from pprint import pprint
import pymongo
import pandas as pd
import requests
from selenium import webdriver
url = 'https://mars.nasa.gov/news/'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.prettify())
# Extract title text
nasa_title = soup.find('div',class_='content_title').text
print(nasa_title)
# Extract Paragraph text
nasa_p_text = soup.find('div',class_='rollover_description').text
print(nasa_p_text)
#JPL Mars Space Images - Featured Image
url='https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')  # re-parse the JPL page (the original reused the NASA news soup)
image = soup.find("img", class_="fancybox-image")["src"]
# print(image)
# Make sure to save a complete url string for this image
# pull image link
# NOTE: 'images' was not defined in the original; this selector is an assumption about the page markup
images = soup.find_all("a", class_="fancybox")
pic_src = []
for image in images:
    pic = image['data-fancybox-href']
    pic_src.append(pic)
featured_image_url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html' + pic
featured_image_url
#Mars Facts
mars_df= pd.read_html("https://space-facts.com/mars/")[0]
print(mars_df)
mars_df.columns=["Description","Value"]
mars_df.set_index("Description", inplace=True)
mars_df
hemisphere_image_urls = []
url = ('https://astrogeology.usgs.gov/search/map/Mars/Viking/cerberus_enhanced')
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.prettify())
cerberus_img = soup.find_all('div', class_="wide-image-wrapper")
print(cerberus_img)
for img in cerberus_img:
pic = img.find('li')
full_img = pic.find('a')['href']
print(full_img)
cerberus_title = soup.find('h2', class_='title').text
print(cerberus_title)
cerberus_hem = {"Title": cerberus_title, "url": full_img}
print(cerberus_hem)
url = ('https://astrogeology.usgs.gov/search/map/Mars/Viking/schiaparelli_enhanced')
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
shiaparelli_img = soup.find_all('div', class_="wide-image-wrapper")
print(shiaparelli_img)
for img in shiaparelli_img:
pic = img.find('li')
full_img = pic.find('a')['href']
print(full_img)
shiaparelli_title = soup.find('h2', class_='title').text
print(shiaparelli_title)
shiaparelli_hem = {"Title": shiaparelli_title, "url": full_img}
print(shiaparelli_hem)
url = ('https://astrogeology.usgs.gov/search/map/Mars/Viking/syrtis_major_enhanced')
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
syrtris_img = soup.find_all('div', class_="wide-image-wrapper")
print(syrtris_img)
for img in syrtris_img:
pic = img.find('li')
full_img = pic.find('a')['href']
print(full_img)
syrtris_title = soup.find('h2', class_='title').text
print(syrtris_title)
syrtris_hem = {"Title": syrtris_title, "url": full_img}
print(syrtris_hem)
url = ('https://astrogeology.usgs.gov/search/map/Mars/Viking/valles_marineris_enhanced')
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
valles_marineris_img = soup.find_all('div', class_="wide-image-wrapper")
print(valles_marineris_img)
for img in valles_marineris_img:
pic = img.find('li')
full_img = pic.find('a')['href']
print(full_img)
valles_marineris_title = soup.find('h2', class_='title').text
print(valles_marineris_title)
valles_marineris_hem = {"Title": valles_marineris_title, "url": full_img}
print(valles_marineris_hem)
| 0.165323 | 0.286971 |
# Collision Avoidance - Data Collection
If you ran through the basic motion notebook, hopefully you're enjoying how easy it can be to make your JetBot move around! That's very cool! But what's even cooler is making JetBot move around all by itself!
This is a super hard task that has many different approaches, but the whole problem is usually broken down into easier sub-problems. It could be argued that one of the most
important sub-problems to solve is preventing the robot from entering dangerous situations! We're calling this *collision avoidance*.
In this set of notebooks, we're going to attempt to solve the problem using deep learning and a single, very versatile, sensor: the camera. You'll see how with a neural network, camera, and the NVIDIA Jetson Nano, we can teach the robot a very useful behavior!
The approach we take to avoiding collisions is to create a virtual "safety bubble" around the robot. Within this safety bubble, the robot is able to spin in a circle without hitting any objects (or other dangerous situations like falling off a ledge).
Of course, the robot is limited by what's in its field of vision, and we can't prevent objects from being placed behind the robot, etc. But we can prevent the robot from entering these scenarios itself.
The way we'll do this is super simple:
First, we'll manually place the robot in scenarios where its "safety bubble" is violated, and label these scenarios ``blocked``. We save a snapshot of what the robot sees along with this label.
Second, we'll manually place the robot in scenarios where it's safe to move forward a bit, and label these scenarios ``free``. Likewise, we save a snapshot along with this label.
That's all that we'll do in this notebook; data collection. Once we have lots of images and labels, we'll upload this data to a GPU enabled machine where we'll *train* a neural network to predict whether the robot's safety bubble is being violated based off of the image it sees. We'll use this to implement a simple collision avoidance behavior in the end :)
> IMPORTANT NOTE: When JetBot spins in place, it actually spins about the center between the two wheels, not the center of the robot chassis itself. This is an important detail to remember when you're trying to estimate whether the robot's safety bubble is violated or not. But don't worry, you don't have to be exact. If in doubt it's better to lean on the cautious side (a big safety bubble). We want to make sure JetBot doesn't enter a scenario that it couldn't get out of by turning in place.
### Display live camera feed
So let's get started. First, let's initialize and display our camera like we did in the *teleoperation* notebook.
> Our neural network takes a 224x224 pixel image as input. We'll set our camera to that size to minimize the filesize of our dataset (we've tested that it works for this task).
> In some scenarios it may be better to collect data in a larger image size and downscale to the desired size later.
```
import traitlets
import ipywidgets.widgets as widgets
from IPython.display import display
from jetbot import Camera, bgr8_to_jpeg
camera = Camera.instance(width=224, height=224)
image = widgets.Image(format='jpeg', width=224, height=224) # this width and height doesn't necessarily have to match the camera
camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)
display(image)
```
Awesome, next let's create a few directories where we'll store all our data. We'll create a folder ``dataset`` that will contain two sub-folders ``free`` and ``blocked``,
where we'll place the images for each scenario.
```
import os
blocked_dir = 'dataset/blocked'
free_dir = 'dataset/free'
# we have this "try/except" statement because these next functions can throw an error if the directories exist already
try:
os.makedirs(free_dir)
os.makedirs(blocked_dir)
except FileExistsError:
print('Directories not created because they already exist')
```
If you refresh the Jupyter file browser on the left, you should now see those directories appear. Next, let's create and display some buttons that we'll use to save snapshots
for each class label. We'll also add some text boxes that will display how many images of each category that we've collected so far. This is useful because we want to make
sure we collect about as many ``free`` images as ``blocked`` images. It also helps to know how many images we've collected overall.
```
button_layout = widgets.Layout(width='128px', height='64px')
free_button = widgets.Button(description='add free', button_style='success', layout=button_layout)
blocked_button = widgets.Button(description='add blocked', button_style='danger', layout=button_layout)
free_count = widgets.IntText(layout=button_layout, value=len(os.listdir(free_dir)))
blocked_count = widgets.IntText(layout=button_layout, value=len(os.listdir(blocked_dir)))
display(widgets.HBox([free_count, free_button]))
display(widgets.HBox([blocked_count, blocked_button]))
```
Right now, these buttons won't do anything. We have to attach functions that save an image for each category to the buttons' ``on_click`` event. We'll save the value
of the ``Image`` widget (rather than the camera), because it's already in compressed JPEG format!
To make sure we don't repeat any file names (even across different machines!) we'll use the ``uuid`` package in python, which defines the ``uuid1`` method to generate
a unique identifier. This unique identifier is generated from information like the current time and the machine address.
```
from uuid import uuid1
def save_snapshot(directory):
image_path = os.path.join(directory, str(uuid1()) + '.jpg')
with open(image_path, 'wb') as f:
f.write(image.value)
def save_free():
global free_dir, free_count
save_snapshot(free_dir)
free_count.value = len(os.listdir(free_dir))
def save_blocked():
global blocked_dir, blocked_count
save_snapshot(blocked_dir)
blocked_count.value = len(os.listdir(blocked_dir))
# attach the callbacks, we use a 'lambda' function to ignore the
# parameter that the on_click event would provide to our function
# because we don't need it.
free_button.on_click(lambda x: save_free())
blocked_button.on_click(lambda x: save_blocked())
```
Great! Now the buttons above should save images to the ``free`` and ``blocked`` directories. You can use the Jupyter Lab file browser to view these files!
Now go ahead and collect some data
1. Place the robot in a scenario where it's blocked and press ``add blocked``
2. Place the robot in a scenario where it's free and press ``add free``
3. Repeat 1, 2
> REMINDER: You can move the widgets to new windows by right clicking the cell and clicking ``Create New View for Output``. Or, you can just re-display them
> together as we will below
Here are some tips for labeling data
1. Try different orientations
2. Try different lighting
3. Try varied object / collision types; walls, ledges, objects
4. Try different textured floors / objects; patterned, smooth, glass, etc.
Ultimately, the more data we have of scenarios the robot will encounter in the real world, the better our collision avoidance behavior will be. It's important
to get *varied* data (as described by the above tips) and not just a lot of data, but you'll probably need at least 100 images of each class (that's not a science, just a helpful tip here). But don't worry, it goes pretty fast once you get going :)
```
display(image)
display(widgets.HBox([free_count, free_button]))
display(widgets.HBox([blocked_count, blocked_button]))
```
Again, let's close the camera connection properly so that we can use the camera in a later notebook.
```
camera.stop()
```
## Next
Once you've collected enough data, we'll need to copy that data to our GPU desktop or cloud machine for training. First, we can call the following *terminal* command to compress
our dataset folder into a single *zip* file.
> The ! prefix indicates that we want to run the cell as a *shell* (or *terminal*) command.
> The -r flag in the zip command below indicates *recursive* so that we include all nested files, the -q flag indicates *quiet* so that the zip command doesn't print any output
```
!zip -r -q dataset.zip dataset
```
You should see a file named ``dataset.zip`` in the Jupyter Lab file browser. You should download the zip file using the Jupyter Lab file browser by right clicking and selecting ``Download``.
Next, we'll need to upload this data to our GPU desktop or cloud machine (we refer to this as the *host*) to train the collision avoidance neural network. We'll assume that you've set up your training
machine as described in the JetBot WiKi. If you have, you can navigate to ``http://<host_ip_address>:8888`` to open up the Jupyter Lab environment running on the host. The notebook you'll need to open there is called ``collision_avoidance/train_model.ipynb``.
So head on over to your training machine and follow the instructions there! Once your model is trained, we'll return to the robot Jupyter Lab environment to use the model for a live demo!
|
github_jupyter
|
import traitlets
import ipywidgets.widgets as widgets
from IPython.display import display
from jetbot import Camera, bgr8_to_jpeg
camera = Camera.instance(width=224, height=224)
image = widgets.Image(format='jpeg', width=224, height=224) # this width and height doesn't necessarily have to match the camera
camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)
display(image)
import os
blocked_dir = 'dataset/blocked'
free_dir = 'dataset/free'
# we have this "try/except" statement because these next functions can throw an error if the directories exist already
try:
os.makedirs(free_dir)
os.makedirs(blocked_dir)
except FileExistsError:
print('Directories not created because they already exist')
button_layout = widgets.Layout(width='128px', height='64px')
free_button = widgets.Button(description='add free', button_style='success', layout=button_layout)
blocked_button = widgets.Button(description='add blocked', button_style='danger', layout=button_layout)
free_count = widgets.IntText(layout=button_layout, value=len(os.listdir(free_dir)))
blocked_count = widgets.IntText(layout=button_layout, value=len(os.listdir(blocked_dir)))
display(widgets.HBox([free_count, free_button]))
display(widgets.HBox([blocked_count, blocked_button]))
from uuid import uuid1
def save_snapshot(directory):
image_path = os.path.join(directory, str(uuid1()) + '.jpg')
with open(image_path, 'wb') as f:
f.write(image.value)
def save_free():
global free_dir, free_count
save_snapshot(free_dir)
free_count.value = len(os.listdir(free_dir))
def save_blocked():
global blocked_dir, blocked_count
save_snapshot(blocked_dir)
blocked_count.value = len(os.listdir(blocked_dir))
# attach the callbacks, we use a 'lambda' function to ignore the
# parameter that the on_click event would provide to our function
# because we don't need it.
free_button.on_click(lambda x: save_free())
blocked_button.on_click(lambda x: save_blocked())
display(image)
display(widgets.HBox([free_count, free_button]))
display(widgets.HBox([blocked_count, blocked_button]))
camera.stop()
!zip -r -q dataset.zip dataset
| 0.240239 | 0.942665 |
### Iterators
In the last lecture we saw that we could approach iterating over a collection using this concept of `next`.
But there were some downsides that we did not resolve (yet!):
* we cannot use a `for` loop
* once we exhaust the iteration (by repeatedly calling next), we're essentially done with the object. The only way to iterate through it again is to create a new instance of the object.
First we are going to look at making our `next` usable in a `for` loop.
This idea of using `__next__` and the `StopIteration` exception is exactly what Python does.
So, somehow we need to tell Python that the object we are dealing with can be used with `next`.
To do so, we create an `iterator` type object.
Iterators are objects that implement:
* a `__next__` method
* an `__iter__` method that simply returns the object itself
That's it - that's all there is to an iterator - two methods, `__iter__` and `__next__`.
Let's go back to our `Squares` example:
```
class Squares:
def __init__(self, length):
self.length = length
self.i = 0
def __iter__(self):
return self
def __next__(self):
if self.i >= self.length:
raise StopIteration
else:
result = self.i ** 2
self.i += 1
return result
```
Now we can still call `next`:
```
sq = Squares(5)
print(next(sq))
print(next(sq))
print(next(sq))
```
Of course, our iterator still suffers from not being able to "reset" it - we just have to create a new instance:
```
sq = Squares(5)
```
But now, we can also use a `for` loop:
```
for item in sq:
print(item)
```
Now `sq` is **exhausted**, so if we try to loop through again:
```
for item in sq:
print(item)
```
We get nothing...
All we need to do is create a new iterator:
```
sq = Squares(5)
for item in sq:
print(item)
```
Just like Python's built-in `next` function calls our `__next__` method, Python has a built-in function `iter` which calls the `__iter__` method:
```
sq = Squares(5)
id(sq)
id(sq.__iter__())
id(iter(sq))
```
And of course we can also use a list comprehension on our iterator object:
```
sq = Squares(5)
[item for item in sq if item%2==0]
```
We can even use any function that requires an iterable as an argument (iterators are iterable):
```
sq = Squares(5)
list(enumerate(sq))
```
But of course we have to be careful: our iterator was exhausted, so if we try that again:
```
list(enumerate(sq))
```
we get an empty list - instead we have to create a new iterator first:
```
sq = Squares(5)
list(enumerate(sq))
```
We can even use the `sorted` function on it:
```
sq = Squares(5)
sorted(sq, reverse=True)
```
#### Python Iterators Summary
Iterators are objects that implement the `__iter__` and `__next__` methods.
The `__iter__` method of an iterator just returns itself.
Once we fully iterate over an iterator, the iterator is **exhausted** and we can no longer use it for iteration purposes.
The way Python applies a `for` loop to an iterator object is basically what we saw with the `while` loop and the `StopIteration` exception.
```
sq = Squares(5)
while True:
try:
print(next(sq))
except StopIteration:
break
```
In fact we can easily see this by tweaking our iterator a bit:
```
class Squares:
def __init__(self, length):
self.length = length
self.i = 0
def __iter__(self):
print('calling __iter__')
return self
def __next__(self):
print('calling __next__')
if self.i >= self.length:
raise StopIteration
else:
result = self.i ** 2
self.i += 1
return result
sq = Squares(5)
for i in sq:
print(i)
```
As you can see Python calls `__next__` (and stops once a `StopIteration` exception is raised).
But you'll notice that it also called the `__iter__` method.
In fact we'll see this happening in other places too:
```
sq = Squares(5)
[item for item in sq if item%2==0]
sq = Squares(5)
list(enumerate(sq))
sq = Squares(5)
sorted(sq, reverse=True)
```
Why is `__iter__` being called? After all, it just returns itself!
That's the topic of the next lecture!
But let's see how we can mimic what Python is doing:
```
sq = Squares(5)
sq_iterator = iter(sq)
print(id(sq), id(sq_iterator))
while True:
try:
item = next(sq_iterator)
print(item)
except StopIteration:
break
```
As you can see, we first request an iterator from `sq` using the `iter` function, and then we iterate using the returned iterator. In the case of an iterator, the `iter` function just gets the iterator itself back.
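A quick way to confirm that an iterator's `__iter__` simply hands back the same object is an identity check:
```
sq = Squares(5)
print(iter(sq) is sq)  # True - calling iter() on an iterator returns the iterator itself
```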
|
github_jupyter
|
class Squares:
def __init__(self, length):
self.length = length
self.i = 0
def __iter__(self):
return self
def __next__(self):
if self.i >= self.length:
raise StopIteration
else:
result = self.i ** 2
self.i += 1
return result
sq = Squares(5)
print(next(sq))
print(next(sq))
print(next(sq))
sq = Squares(5)
for item in sq:
print(item)
for item in sq:
print(item)
sq = Squares(5)
for item in sq:
print(item)
sq = Squares(5)
id(sq)
id(sq.__iter__())
id(iter(sq))
sq = Squares(5)
[item for item in sq if item%2==0]
sq = Squares(5)
list(enumerate(sq))
list(enumerate(sq))
sq = Squares(5)
list(enumerate(sq))
sq = Squares(5)
sorted(sq, reverse=True)
sq = Squares(5)
while True:
try:
print(next(sq))
except StopIteration:
break
class Squares:
def __init__(self, length):
self.length = length
self.i = 0
def __iter__(self):
print('calling __iter__')
return self
def __next__(self):
print('calling __next__')
if self.i >= self.length:
raise StopIteration
else:
result = self.i ** 2
self.i += 1
return result
sq = Squares(5)
for i in sq:
print(i)
sq = Squares(5)
[item for item in sq if item%2==0]
sq = Squares(5)
list(enumerate(sq))
sq = Squares(5)
sorted(sq, reverse=True)
sq = Squares(5)
sq_iterator = iter(sq)
print(id(sq), id(sq_iterator))
while True:
try:
item = next(sq_iterator)
print(item)
except StopIteration:
break
| 0.362179 | 0.970521 |
```
import emapy as epy
import sys;
import pandas as pd
locationAreaBoundingBox = (41.3248770036,2.0520401001,41.4829908452,2.2813796997)
epy.divideBoundingBoxBySurfaceUnitSavedGeoJSON(
locationAreaBoundingBox,
1,
1,
'../data/processed/surfaceBarcelona')
surface = epy.getDatabase(
'surfaceBCN',
'geojson',
'../data/processed/surfaceBarcelona.geojson',
'',
True,
0,
1,
'id')
locationAreaBoundingBox = (41.3248770036,2.0520401001,41.4829908452,2.2813796997)
allData = epy.getDatabaseFromOSM(
'restaurantes',
'amenity',
False,
True,
locationAreaBoundingBox,
'bar')
numJumps = 1
T = [[boxId,
data['properties'],
data['geometry'],
epy.getLessDistanceInKmBtwnCoordAndInfoStructureWithJumps(
data["geometry"][0],
data["geometry"][1],
allData,
numJumps,
True)[0]]
for boxId in surface[1]
for data in allData
if epy.coordInsidePolygon(data["geometry"][0],
data["geometry"][1],
epy.transformArrYXToXY(surface[1][boxId]))]
df = pd.DataFrame({'id' : [], 'data': []})
allId = dict()
allValue = dict()
for data in T:
key = int(float(data[0]))
if key in allId:
allId[key] += 1
allValue[key] += data[3]
else:
allId[key] = 1
allValue[key] = data[3]
print(allId)
for boxId in surface[1]:
key = int(float(boxId))
if key in allId:
row = [key, allValue[key] * 1.0 / allId[key]]
df.loc[len(df), ['id', 'data']] = row
else:
df.loc[len(df), ['id', 'data']] = [key,0]
map = epy.mapCreation(41.388790,2.158990)
epy.mapChoropleth(map,
'../data/processed/surfaceBarcelona.geojson',
'feature.properties.id',
df,
'id',
'data',
'YlGn',
0.7,
0.3,
[],
'bars / barri')
dataNames = []
idNodes = []
color = 'blue'
for data in allData:
if data['type'].strip().lower() == 'point':
prop = data['properties']
name = ''
if 'name' in prop:
name = prop['name']
dataNames.append(name)
idNode = str(data['geometry'][0]) + str(data['geometry'][1]) + name
if idNode not in idNodes:
idNodes.append(idNode)
epy.mapAddMarker(
map,
data['geometry'][0],
data['geometry'][1],
'glyphicon-glass',
color,
name)
epy.mapSave(map, '../reports/maps/mapOfBarsDistanceUS.html')
map
```
|
github_jupyter
|
import emapy as epy
import sys;
import pandas as pd
locationAreaBoundingBox = (41.3248770036,2.0520401001,41.4829908452,2.2813796997)
epy.divideBoundingBoxBySurfaceUnitSavedGeoJSON(
locationAreaBoundingBox,
1,
1,
'../data/processed/surfaceBarcelona')
surface = epy.getDatabase(
'surfaceBCN',
'geojson',
'../data/processed/surfaceBarcelona.geojson',
'',
True,
0,
1,
'id')
locationAreaBoundingBox = (41.3248770036,2.0520401001,41.4829908452,2.2813796997)
allData = epy.getDatabaseFromOSM(
'restaurantes',
'amenity',
False,
True,
locationAreaBoundingBox,
'bar')
numJumps = 1
T = [[boxId,
data['properties'],
data['geometry'],
epy.getLessDistanceInKmBtwnCoordAndInfoStructureWithJumps(
data["geometry"][0],
data["geometry"][1],
allData,
numJumps,
True)[0]]
for boxId in surface[1]
for data in allData
if epy.coordInsidePolygon(data["geometry"][0],
data["geometry"][1],
epy.transformArrYXToXY(surface[1][boxId]))]
df = pd.DataFrame({'id' : [], 'data': []})
allId = dict()
allValue = dict()
for data in T:
key = int(float(data[0]))
if key in allId:
allId[key] += 1
allValue[key] += data[3]
else:
allId[key] = 1
allValue[key] = data[3]
print(allId)
for boxId in surface[1]:
key = int(float(boxId))
if key in allId:
row = [key, allValue[key] * 1.0 / allId[key]]
df.loc[len(df), ['id', 'data']] = row
else:
df.loc[len(df), ['id', 'data']] = [key,0]
map = epy.mapCreation(41.388790,2.158990)
epy.mapChoropleth(map,
'../data/processed/surfaceBarcelona.geojson',
'feature.properties.id',
df,
'id',
'data',
'YlGn',
0.7,
0.3,
[],
'bars / barri')
dataNames = []
idNodes = []
color = 'blue'
for data in allData:
if data['type'].strip().lower() == 'point':
prop = data['properties']
name = ''
if 'name' in prop:
name = prop['name']
dataNames.append(name)
idNode = str(data['geometry'][0]) + str(data['geometry'][1]) + name
if idNode not in idNodes:
idNodes.append(idNode)
epy.mapAddMarker(
map,
data['geometry'][0],
data['geometry'][1],
'glyphicon-glass',
color,
name)
epy.mapSave(map, '../reports/maps/mapOfBarsDistanceUS.html')
map
| 0.075412 | 0.294806 |
```
import numpy as np
import pandas as pd
import seaborn as sns
from os.path import join
from nilearn import plotting
from scipy.spatial.distance import jaccard, dice
nbs_dir = '/Users/katherine/Dropbox/Projects/physics-retrieval/data/output/nbs'
all_retr = pd.read_csv(join(nbs_dir, 'all_students-retr.csv'), index_col=0, header=0, dtype=int)
fml_retr = pd.read_csv(join(nbs_dir, 'female_students-retr.csv'), index_col=0, header=0, dtype=int)
mle_retr = pd.read_csv(join(nbs_dir, 'male_students-retr.csv'), index_col=0, header=0, dtype=int)
lec_retr = pd.read_csv(join(nbs_dir, 'lecture_students-retr.csv'), index_col=0, header=0, dtype=int)
lf_retr = pd.read_csv(join(nbs_dir, 'female_lecture_students-retr.csv'), index_col=0, header=0, dtype=int)
lm_retr = pd.read_csv(join(nbs_dir, 'male_lecture_students-retr.csv'), index_col=0, header=0, dtype=int)
mod_retr = pd.read_csv(join(nbs_dir, 'modeling_students-retr.csv'), index_col=0, header=0, dtype=int)
mf_retr = pd.read_csv(join(nbs_dir, 'female_modeling_students-retr.csv'), index_col=0, header=0, dtype=int)
mm_retr = pd.read_csv(join(nbs_dir, 'male_modeling_students-retr.csv'), index_col=0, header=0, dtype=int)
subject_groups = {'all': all_retr,
'female': fml_retr,
'male': mle_retr,
'lecture': lec_retr,
'modeling': mod_retr,
'female_lecture': lf_retr,
'female_modeling': mf_retr,
'male_lecture': lm_retr,
'male_modeling': mm_retr}
dice_df = pd.Series()
jaccard_df = pd.Series()
for group1 in subject_groups.keys():
for group2 in subject_groups.keys():
if group1 != group2:
one = subject_groups[group1]
two = subject_groups[group2]
jaccard_df['{0}-{1}'.format(group1, group2)] = jaccard(np.ravel(one.values, order='F'),
np.ravel(two.values, order='F'))
dice_df['{0}-{1}'.format(group1, group2)] = dice(np.ravel(one.values, order='F'),
np.ravel(two.values, order='F'))
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
print(jaccard_df.sort_values(ascending=False))
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
print(dice_df.sort_values(ascending=False))
all_fci = pd.read_csv(join(nbs_dir, 'all_students-fci.csv'), index_col=0, header=0, dtype=int)
fml_fci = pd.read_csv(join(nbs_dir, 'female_students-fci.csv'), index_col=0, header=0, dtype=int)
mle_fci = pd.read_csv(join(nbs_dir, 'male_students-fci.csv'), index_col=0, header=0, dtype=int)
lec_fci = pd.read_csv(join(nbs_dir, 'lecture_students-fci.csv'), index_col=0, header=0, dtype=int)
lf_fci = pd.read_csv(join(nbs_dir, 'female_lecture_students-fci.csv'), index_col=0, header=0, dtype=int)
lm_fci = pd.read_csv(join(nbs_dir, 'male_lecture_students-fci.csv'), index_col=0, header=0, dtype=int)
mod_fci = pd.read_csv(join(nbs_dir, 'modeling_students-fci.csv'), index_col=0, header=0, dtype=int)
mf_fci = pd.read_csv(join(nbs_dir, 'female_modeling_students-fci.csv'), index_col=0, header=0, dtype=int)
mm_fci = pd.read_csv(join(nbs_dir, 'male_modeling_students-fci.csv'), index_col=0, header=0, dtype=int)
subject_groups = {'all': all_fci,
'female': fml_fci,
'male': mle_fci,
'lecture': lec_fci,
'modeling': mod_fci,
'female_lecture': lf_fci,
'female_modeling': mf_fci,
'male_lecture': lm_fci,
'male_modeling': mm_fci}
dice_df = pd.Series()
jaccard_df = pd.Series()
for group1 in subject_groups.keys():
for group2 in subject_groups.keys():
if group1 != group2:
one = subject_groups[group1]
two = subject_groups[group2]
jaccard_df['{0}-{1}'.format(group1, group2)] = jaccard(np.ravel(one.values, order='F'),
np.ravel(two.values, order='F'))
dice_df['{0}-{1}'.format(group1, group2)] = dice(np.ravel(one.values, order='F'),
np.ravel(two.values, order='F'))
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
print(jaccard_df.sort_values(ascending=False))
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
display(dice_df.sort_values(ascending=False))
```
|
github_jupyter
|
import numpy as np
import pandas as pd
import seaborn as sns
from os.path import join
from nilearn import plotting
from scipy.spatial.distance import jaccard, dice
nbs_dir = '/Users/katherine/Dropbox/Projects/physics-retrieval/data/output/nbs'
all_retr = pd.read_csv(join(nbs_dir, 'all_students-retr.csv'), index_col=0, header=0, dtype=int)
fml_retr = pd.read_csv(join(nbs_dir, 'female_students-retr.csv'), index_col=0, header=0, dtype=int)
mle_retr = pd.read_csv(join(nbs_dir, 'male_students-retr.csv'), index_col=0, header=0, dtype=int)
lec_retr = pd.read_csv(join(nbs_dir, 'lecture_students-retr.csv'), index_col=0, header=0, dtype=int)
lf_retr = pd.read_csv(join(nbs_dir, 'female_lecture_students-retr.csv'), index_col=0, header=0, dtype=int)
lm_retr = pd.read_csv(join(nbs_dir, 'male_lecture_students-retr.csv'), index_col=0, header=0, dtype=int)
mod_retr = pd.read_csv(join(nbs_dir, 'modeling_students-retr.csv'), index_col=0, header=0, dtype=int)
mf_retr = pd.read_csv(join(nbs_dir, 'female_modeling_students-retr.csv'), index_col=0, header=0, dtype=int)
mm_retr = pd.read_csv(join(nbs_dir, 'male_modeling_students-retr.csv'), index_col=0, header=0, dtype=int)
subject_groups = {'all': all_retr,
'female': fml_retr,
'male': mle_retr,
'lecture': lec_retr,
'modeling': mod_retr,
'female_lecture': lf_retr,
'female_modeling': mf_retr,
'male_lecture': lm_retr,
'male_modeling': mm_retr}
dice_df = pd.Series()
jaccard_df = pd.Series()
for group1 in subject_groups.keys():
for group2 in subject_groups.keys():
if group1 != group2:
one = subject_groups[group1]
two = subject_groups[group2]
jaccard_df['{0}-{1}'.format(group1, group2)] = jaccard(np.ravel(one.values, order='F'),
np.ravel(two.values, order='F'))
dice_df['{0}-{1}'.format(group1, group2)] = dice(np.ravel(one.values, order='F'),
np.ravel(two.values, order='F'))
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
print(jaccard_df.sort_values(ascending=False))
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
print(dice_df.sort_values(ascending=False))
all_fci = pd.read_csv(join(nbs_dir, 'all_students-fci.csv'), index_col=0, header=0, dtype=int)
fml_fci = pd.read_csv(join(nbs_dir, 'female_students-fci.csv'), index_col=0, header=0, dtype=int)
mle_fci = pd.read_csv(join(nbs_dir, 'male_students-fci.csv'), index_col=0, header=0, dtype=int)
lec_fci = pd.read_csv(join(nbs_dir, 'lecture_students-fci.csv'), index_col=0, header=0, dtype=int)
lf_fci = pd.read_csv(join(nbs_dir, 'female_lecture_students-fci.csv'), index_col=0, header=0, dtype=int)
lm_fci = pd.read_csv(join(nbs_dir, 'male_lecture_students-fci.csv'), index_col=0, header=0, dtype=int)
mod_fci = pd.read_csv(join(nbs_dir, 'modeling_students-fci.csv'), index_col=0, header=0, dtype=int)
mf_fci = pd.read_csv(join(nbs_dir, 'female_modeling_students-fci.csv'), index_col=0, header=0, dtype=int)
mm_fci = pd.read_csv(join(nbs_dir, 'male_modeling_students-fci.csv'), index_col=0, header=0, dtype=int)
subject_groups = {'all': all_fci,
'female': fml_fci,
'male': mle_fci,
'lecture': lec_fci,
'modeling': mod_fci,
'female_lecture': lf_fci,
'female_modeling': mf_fci,
'male_lecture': lm_fci,
'male_modeling': mm_fci}
dice_df = pd.Series()
jaccard_df = pd.Series()
for group1 in subject_groups.keys():
for group2 in subject_groups.keys():
if group1 != group2:
one = subject_groups[group1]
two = subject_groups[group2]
jaccard_df['{0}-{1}'.format(group1, group2)] = jaccard(np.ravel(one.values, order='F'),
np.ravel(two.values, order='F'))
dice_df['{0}-{1}'.format(group1, group2)] = dice(np.ravel(one.values, order='F'),
np.ravel(two.values, order='F'))
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
print(jaccard_df.sort_values(ascending=False))
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
display(dice_df.sort_values(ascending=False))
| 0.237841 | 0.092196 |
## Store Sales - Time Series Forecasting
```
# Import Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from feature_engine.encoding import CountFrequencyEncoder
# Metrics
from sklearn.metrics import mean_squared_error as MSE
from sklearn.preprocessing import StandardScaler
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
```
### 1st Part: **EDA**
```
# Paths
train_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\train.csv"
test_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\test.csv"
stores_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\stores.csv"
oil_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\oil.csv"
transactions_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\transactions.csv"
holidays_events_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\holidays_events.csv"
```
<p style="text-align: justify"> Looking at these datasets, the date columns are not aligned across the different files. One option to solve this is to aggregate the feature values by day and merge everything on the date column. Let's see how we can do this.</p>
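Before touching the real files, here is a minimal sketch of that idea on hypothetical toy frames (`toy_sales` and `toy_transactions` are made-up names, not the Kaggle tables): aggregate an auxiliary table by day, then merge it onto the sales table through the date column. The cells below do the same thing with the actual datasets.
```
import pandas as pd

# Toy stand-ins for the real train/transactions tables (hypothetical data)
toy_sales = pd.DataFrame({'date': ['2017-01-01', '2017-01-02'],
                          'sales': [10.0, 12.5]})
toy_transactions = pd.DataFrame({'date': ['2017-01-01', '2017-01-01', '2017-01-02'],
                                 'transactions': [3, 4, 5]})

# Parse the dates, aggregate the auxiliary table by day, then merge on the date column
toy_sales['date'] = pd.to_datetime(toy_sales['date'])
toy_transactions['date'] = pd.to_datetime(toy_transactions['date'])
daily_tx = toy_transactions.groupby('date', as_index=False)['transactions'].sum()
merged = pd.merge(toy_sales, daily_tx, on='date', how='left')
print(merged)
```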
```
# Importing Datasets
train = pd.read_csv(train_pth)
test = pd.read_csv(test_pth)
stores = pd.read_csv(stores_pth)
oil = pd.read_csv(oil_pth)
events = pd.read_csv(holidays_events_pth, parse_dates=['date'])
transactions = pd.read_csv(transactions_pth)
stores.head()
train.head()
df = pd.merge(train,stores, on='store_nbr')
df['date'] = pd.to_datetime(df['date'])
transactions_store = transactions.groupby('store_nbr')['transactions'].sum()
transactions_date = transactions.groupby('date')['transactions'].sum()
# Merge df with events
events.reset_index(inplace=True)
events['date'] = pd.to_datetime(events['date'])
df = pd.merge(df, events, on = 'date', how='outer')
# Merge df with oil
oil['date'] = pd.to_datetime(oil['date'])
df = pd.merge(df, oil, on = 'date',how='outer')
# Merge df with Transaction_store and Transactions_date
df = pd.merge(df, transactions_store, on='store_nbr',how = 'outer')
transactions_date = pd.DataFrame(transactions_date).reset_index()
transactions_date['date'] = pd.to_datetime(transactions_date['date'])
df = pd.merge(df, transactions_date, on='date', how='outer')
df.dropna(subset='id', inplace=True)
df_terremoto = df[df['description'].str.contains('Terremoto', na = False)]
date_max = df_terremoto['date'].max()
date_min = df_terremoto['date'].min()
selecao = (df['date'] >= date_min) & (df['date'] <= date_max)
df_filte = df[selecao]
fig, ax = plt.subplots(1,2, figsize=(15,5))
ax[0].plot(df_filte['sales'])
ax[0].set_title('Gráfico do Período dos Terremoto - Total')
ax[1].plot(df_terremoto['sales'])
ax[1].set_title('Gráfico de Vendas nos dias com a Palavra Terremoto')
plt.show()
soma_fil = round(df_filte['sales'].sum(),0)
print(f"Total de Vendas no Período de Terremotos (TOTAL): {soma_fil:2,f}")
soma_terr = round(df_terremoto['sales'].sum(),0)
print(f"Total de Vendas no Período de Terremotos : {soma_terr:2,f}")
# It had a big impact!!!
df_cat_col = [c for c in df.columns if df[c].dtypes=='O'] # Separating Categoricals Columns
df.fillna(0, inplace=True)
df = CountFrequencyEncoder(variables=df_cat_col).fit_transform(df)
# Total Lines
lines = df.shape[0]
print(f'DataFrame has {lines:1,d} lines.')
# Separating Features and Target
X = df.drop(labels=['sales'], axis=1)
y = df['sales']
# Train - Test values
train_values = int(lines * 0.7)
X_train = X[:train_values]
X_test = X[train_values:]
y_train = y[:train_values]
y_test = y[train_values:]
dataset = tf.data.Dataset.from_tensor_slices((X_train.drop('date', axis=1).values, y_train.values))
train_dataset = dataset.shuffle(len(df)).batch(1000)
def get_compiled_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
return model
model = get_compiled_model()
model.fit(train_dataset, epochs=15)
```
|
github_jupyter
|
# Import Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from feature_engine.encoding import CountFrequencyEncoder
# Metrics
from sklearn.metrics import mean_squared_error as MSE
from sklearn.preprocessing import StandardScaler
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# Paths
train_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\train.csv"
test_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\test.csv"
stores_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\stores.csv"
oil_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\oil.csv"
transactions_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\transactions.csv"
holidays_events_pth = r"C:\Users\willi\OneDrive\Documentos\5. Estudos\Kaggle_Notebooks\Store_Sales\data\holidays_events.csv"
# Importing Datasets
train = pd.read_csv(train_pth)
test = pd.read_csv(test_pth)
stores = pd.read_csv(stores_pth)
oil = pd.read_csv(oil_pth)
events = pd.read_csv(holidays_events_pth, parse_dates=['date'])
transactions = pd.read_csv(transactions_pth)
stores.head()
train.head()
df = pd.merge(train,stores, on='store_nbr')
df['date'] = pd.to_datetime(df['date'])
transactions_store = transactions.groupby('store_nbr')['transactions'].sum()
transactions_date = transactions.groupby('date')['transactions'].sum()
# Merge df with events
events.reset_index(inplace=True)
events['date'] = pd.to_datetime(events['date'])
df = pd.merge(df, events, on = 'date', how='outer')
# Merge df with oil
oil['date'] = pd.to_datetime(oil['date'])
df = pd.merge(df, oil, on = 'date',how='outer')
# Merge df with Transaction_store and Transactions_date
df = pd.merge(df, transactions_store, on='store_nbr',how = 'outer')
transactions_date = pd.DataFrame(transactions_date).reset_index()
transactions_date['date'] = pd.to_datetime(transactions_date['date'])
df = pd.merge(df, transactions_date, on='date', how='outer')
df.dropna(subset='id', inplace=True)
df_terremoto = df[df['description'].str.contains('Terremoto', na = False)]
date_max = df_terremoto['date'].max()
date_min = df_terremoto['date'].min()
selecao = (df['date'] >= date_min) & (df['date'] <= date_max)
df_filte = df[selecao]
fig, ax = plt.subplots(1,2, figsize=(15,5))
ax[0].plot(df_filte['sales'])
ax[0].set_title('Gráfico do Período dos Terremoto - Total')
ax[1].plot(df_terremoto['sales'])
ax[1].set_title('Gráfico de Vendas nos dias com a Palavra Terremoto')
plt.show()
soma_fil = round(df_filte['sales'].sum(),0)
print(f"Total de Vendas no Período de Terremotos (TOTAL): {soma_fil:2,f}")
soma_terr = round(df_terremoto['sales'].sum(),0)
print(f"Total de Vendas no Período de Terremotos : {soma_terr:2,f}")
# It had a big impact!!!
df_cat_col = [c for c in df.columns if df[c].dtypes=='O'] # Separating Categoricals Columns
df.fillna(0, inplace=True)
df = CountFrequencyEncoder(variables=df_cat_col).fit_transform(df)
# Total Lines
lines = df.shape[0]
print(f'DataFrame has {lines:1,d} lines.')
# Separating Features and Target
X = df.drop(labels=['sales'], axis=1)
y = df['sales']
# Train - Test values
train_values = int(lines * 0.7)
X_train = X[:train_values]
X_test = X[train_values:]
y_train = y[:train_values]
y_test = y[train_values:]
dataset = tf.data.Dataset.from_tensor_slices((X_train.drop('date', axis=1).values, y_train.values))
train_dataset = dataset.shuffle(len(df)).batch(1000)
def get_compiled_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
return model
model = get_compiled_model()
model.fit(train_dataset, epochs=15)
| 0.572962 | 0.70374 |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Read in files
sad_mlsq = pd.read_csv("ProcessedData/sad_mlsq.csv",index_col=0)
mrdi_mlsq = pd.read_csv("ProcessedData/mrdi_mlsq.csv",index_col=0)
sar_mlsq = pd.read_csv("ProcessedData/sar_mlsq.csv",index_col=0)
# Indigenous v introduced
sad_mlsq_i = pd.read_csv("ProcessedData/sad_mlsq_indigenous.csv",index_col=0)
mrdi_mlsq_i = pd.read_csv("ProcessedData/mrdi_mlsq_indigenous.csv",index_col=0)
sar_mlsq_i = pd.read_csv("ProcessedData/sar_mlsq_indigenous.csv",index_col=0)
# KS
sad_ks = pd.read_csv("ProcessedData/sad_means_ksD_R.csv",index_col=0)
sad_ks = sad_ks.T # Make this the same direction as the others
mrdi_ks = pd.read_csv("ProcessedData/mrdi_means_ksD.csv",index_col=0)
# Hard code number of sites and labels
lu = sad_mlsq.columns
lu_sites = pd.Series(np.array([12,44,24,16]),index=lu)
# Make xlabels
xlabels = []
for l in lu:
# For the semi-natural pasture, there actually aren't as many sites for the SAR
if l=='Semi-natural pasture':
xlabels.append('{}\n{} sites (SAD & MRDI)\n10 sites (SAR)'.format(l,lu_sites[l]))
else:
xlabels.append('{}\n{} sites'.format(l,lu_sites[l]))
```
# Overall plot
```
fig,ax = plt.subplots(figsize=(4,3))
y2 = ax.twinx()
clist = ['tab:blue','tab:green','tab:red','tab:orange']
xlist = np.array([2,1,4,3])
for i in np.arange(len(clist)):
ax.errorbar(x=xlist[i]-0.2,y=sad_mlsq.loc['Mean'][i],yerr=sad_mlsq.loc['Standard error'][i],
fmt='o',capsize=4,c=clist[i],label='SAD')
ax.errorbar(x=xlist[i],y=mrdi_mlsq.loc['Mean'][i],yerr=mrdi_mlsq.loc['Standard error'][i],
fmt='v',capsize=4,c=clist[i],label='MRDI')
y2.errorbar(x=xlist[i]+0.2,y=sar_mlsq.loc['Mean'][i],yerr=sar_mlsq.loc['Standard error'][i],
fmt='s',capsize=4,c=clist[i],label='SAR')
if (i == len(clist)-1):
psad = ax.errorbar(x=[],y=[],yerr=[],
fmt='o',capsize=4,c='tab:gray',label='SAD')
pmrdi = ax.errorbar(x=[],y=[],yerr=[],
fmt='v',capsize=4,c='tab:gray',label='MRDI')
psar = y2.errorbar(x=[],y=[],yerr=[],
fmt='s',capsize=4,c='tab:gray',label='SAR')
# Labels
ax.set_ylabel('Mean least squared error \nSAD & MRDI')
y2.set_ylabel('Mean least squared error \nSAR',rotation=270,labelpad=25)
# Set 0 scale
ax.set_ylim(0,1.0)
y2.set_ylim(0,0.05)
# Legend
lns = [psad, pmrdi, psar]
ax.legend(handles=lns, loc='upper left')
# x-ticks for ax
ax.xaxis.set_ticks([2,1,4,3])
ax.set_xticklabels(xlabels, rotation = 45,ha='right')
# y-ticks and grid
y2.set_yticks(np.linspace(0.0, 0.05, len(ax.get_yticks())))
# Just make one grid, and just for y-axis
ax.yaxis.grid(True)
#ax.xaxis.grid(True,which='minor')
plt.savefig("Figures/means_together_grid.pdf",bbox_inches='tight')
```
## Indigenous versus introduced
```
# Need a separate xlabel without site numbers. Those will go in caption.
xlabelsi = []
for l in lu:
xlabelsi.append(l)
# With grid
# Another option with separate axis for SAR, with alt colors
# Use existing colors according to site, shape represents analysis
# Plot the mean least squares
fig,ax = plt.subplots(figsize=(8,3))
y2 = ax.twinx()
clist = ['tab:blue','tab:green','tab:red','tab:orange']
xlist = np.array([2,1,4,3])
for i in np.arange(len(clist)):
# Indigenous ones
ax.errorbar(x=xlist[i]-0.2,y=sad_mlsq_i.loc['Mean indigenous'][i],
yerr=sad_mlsq_i.loc['Standard error indigenous'][i],
fmt='o',capsize=4,c=clist[i],label='SAD')
ax.errorbar(x=xlist[i],y=mrdi_mlsq_i.loc['Mean (idg)'][i],
yerr=mrdi_mlsq_i.loc['Standard error (idg)'][i],
fmt='v',capsize=4,c=clist[i],label='MRDI')
y2.errorbar(x=xlist[i]+0.2,y=sar_mlsq_i.loc['Mean (idg)'][i],
yerr=sar_mlsq_i.loc['Standard error (idg)'][i],
fmt='s',capsize=4,c=clist[i],label='SAR')
# Introduced
al = 0.7 # Alpha value
ax.errorbar(x=xlist[i]-0.1,y=sad_mlsq_i.loc['Mean introduced'][i],
yerr=sad_mlsq_i.loc['Standard error introduced'][i],
fmt='o',capsize=4,c=clist[i],alpha=al,markerfacecolor='none',label='SAD')
ax.errorbar(x=xlist[i]+0.1,y=mrdi_mlsq_i.loc['Mean (int)'][i],
yerr=mrdi_mlsq_i.loc['Standard error (int)'][i],
fmt='v',capsize=4,c=clist[i],alpha=al,markerfacecolor='none',label='MRDI')
y2.errorbar(x=xlist[i]+0.3,y=sar_mlsq_i.loc['Mean (int)'][i],
yerr=sar_mlsq_i.loc['Standard error (int)'][i],
fmt='s',capsize=4,c=clist[i],alpha=al,markerfacecolor='none',label='SAR')
# For the legend
if (i == len(clist)-1):
psad = ax.errorbar(x=[],y=[],yerr=[],
fmt='o',capsize=4,c='tab:gray',label='SAD')
pmrdi = ax.errorbar(x=[],y=[],yerr=[],
fmt='v',capsize=4,c='tab:gray',label='MRDI')
psar = y2.errorbar(x=[],y=[],yerr=[],
fmt='s',capsize=4,c='tab:gray',label='SAR')
# Explain filled/unfilled in caption
# Labels
ax.set_ylabel('Mean least squared error \nSAD & MRDI')
y2.set_ylabel('Mean least squared error \nSAR',rotation=270,labelpad=25)
# Set 0 scale
ax.set_ylim(0,2.75)
y2.set_ylim(0,0.055)
# Legend
lns = [psad, pmrdi, psar]
ax.legend(handles=lns,loc=(0.2,0.69))
# x-ticks for ax
ax.xaxis.set_ticks([2.05,1.05,4.05,3.05])
ax.set_xticklabels(xlabelsi, rotation = 30,ha='right')
# y-ticks and grid
y2.set_yticks(np.linspace(0.0, 0.05, len(ax.get_yticks())-1))
# Just make one grid, and just for y-axis
ax.yaxis.grid(True)
#y2.yaxis.grid(True) # Just to test they overlap
#ax.xaxis.grid(True,which='minor')
plt.savefig("Figures/means_together_indigenous_grid.pdf",bbox_inches='tight')
```
# SI
## KS
```
# KS together
# Plot the mean least squares
plt.figure(figsize=(4,3))
# Make color dictionary
clist = {lu[1]:'tab:green',lu[0]:'tab:blue',lu[3]:'tab:orange',lu[2]:'tab:red'}
xlist = {lu[1]:1, lu[0]:2, lu[3]:3, lu[2]:4}
for l in lu:
plt.errorbar(x=xlist[l]-0.1,y=sad_ks.loc['means',l],yerr=sad_ks.loc['se',l],
fmt='o',c=clist[l],capsize=4,label='SAD')
plt.errorbar(x=xlist[l]+0.1,y=mrdi_ks.loc['Mean',l],yerr=mrdi_ks.loc['Standard error',l],
fmt='v',c=clist[l],capsize=4,label='MRDI')
plt.ylabel(r'KS test statistic $D_{KS}$')
plt.xticks([2,1,4,3],['{}\n{} sites'.format(l,lu_sites[l]) for l in lu],
rotation=45,ha='right')
# Legend (requires previous cell)
lns = [psad, pmrdi]
plt.legend(handles=lns, loc='upper left')
plt.savefig("Figures/SI/KSD_together.pdf",bbox_inches='tight')
```
## For combined
```
# Read in
#SAD
comm_sad = pd.read_csv('ProcessedData/sad_combined_data.csv')
comm_sad_mlsq = comm_sad['mlsq'].values
#Native forest 0.491278
#Exotic forest 0.689149
#Semi-natural pasture 1.257251
#Intensive pasture 0.331208
#MRDI
comm_mrdi = pd.read_csv('ProcessedData/mrdi_combined_data.csv')
comm_mrdi_mlsq = comm_mrdi['mlsq'].values
#Native forest 0.378824
#Exotic forest 0.260345
#Semi-natural pasture 1.253413
#Intensive pasture 1.391150
# Plot least squares
# Set up
clist = ['tab:green','tab:blue','tab:orange','tab:red']
xlist = np.arange(4)+1
# Plot
plt.figure(figsize=(4,3))
for i in np.arange(len(clist)):
plt.plot(xlist[i],comm_sad_mlsq[i],'o',c=clist[i],label='SAD')
plt.plot(xlist[i]+0.3,comm_mrdi_mlsq[i],'v',c=clist[i],label='MRDI')
plt.ylabel('Mean least squared error')
plt.xticks(xlist+0.15,['{}\n{} sites'.format(l,lu_sites[l]) for l in [lu[1],lu[0],lu[3],lu[2]]],
rotation=45,ha='right')
# Legend
sadl, = plt.plot([],[],'o',c='tab:gray',label='SAD')
mrdil, = plt.plot([],[],'v',c='tab:gray',label='MRDI')
lns = [sadl,mrdil]
plt.legend(handles=lns, loc='best')
plt.savefig('Figures/SI/means_together_community_level.pdf',bbox_inches='tight')
```
|
github_jupyter
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Read in files
sad_mlsq = pd.read_csv("ProcessedData/sad_mlsq.csv",index_col=0)
mrdi_mlsq = pd.read_csv("ProcessedData/mrdi_mlsq.csv",index_col=0)
sar_mlsq = pd.read_csv("ProcessedData/sar_mlsq.csv",index_col=0)
# Indigenous v introduced
sad_mlsq_i = pd.read_csv("ProcessedData/sad_mlsq_indigenous.csv",index_col=0)
mrdi_mlsq_i = pd.read_csv("ProcessedData/mrdi_mlsq_indigenous.csv",index_col=0)
sar_mlsq_i = pd.read_csv("ProcessedData/sar_mlsq_indigenous.csv",index_col=0)
# KS
sad_ks = pd.read_csv("ProcessedData/sad_means_ksD_R.csv",index_col=0)
sad_ks = sad_ks.T # Make this the same direction as the others
mrdi_ks = pd.read_csv("ProcessedData/mrdi_means_ksD.csv",index_col=0)
# Hard code number of sites and labels
lu = sad_mlsq.columns
lu_sites = pd.Series(np.array([12,44,24,16]),index=lu)
# Make xlabels
xlabels = []
for l in lu:
# For the semi-natural pasture, there actually aren't as many sites for the SAR
if l=='Semi-natural pasture':
xlabels.append('{}\n{} sites (SAD & MRDI)\n10 sites (SAR)'.format(l,lu_sites[l]))
else:
xlabels.append('{}\n{} sites'.format(l,lu_sites[l]))
fig,ax = plt.subplots(figsize=(4,3))
y2 = ax.twinx()
clist = ['tab:blue','tab:green','tab:red','tab:orange']
xlist = np.array([2,1,4,3])
for i in np.arange(len(clist)):
ax.errorbar(x=xlist[i]-0.2,y=sad_mlsq.loc['Mean'][i],yerr=sad_mlsq.loc['Standard error'][i],
fmt='o',capsize=4,c=clist[i],label='SAD')
ax.errorbar(x=xlist[i],y=mrdi_mlsq.loc['Mean'][i],yerr=mrdi_mlsq.loc['Standard error'][i],
fmt='v',capsize=4,c=clist[i],label='MRDI')
y2.errorbar(x=xlist[i]+0.2,y=sar_mlsq.loc['Mean'][i],yerr=sar_mlsq.loc['Standard error'][i],
fmt='s',capsize=4,c=clist[i],label='SAR')
if (i == len(clist)-1):
psad = ax.errorbar(x=[],y=[],yerr=[],
fmt='o',capsize=4,c='tab:gray',label='SAD')
pmrdi = ax.errorbar(x=[],y=[],yerr=[],
fmt='v',capsize=4,c='tab:gray',label='MRDI')
psar = y2.errorbar(x=[],y=[],yerr=[],
fmt='s',capsize=4,c='tab:gray',label='SAR')
# Labels
ax.set_ylabel('Mean least squared error \nSAD & MRDI')
y2.set_ylabel('Mean least squared error \nSAR',rotation=270,labelpad=25)
# Set 0 scale
ax.set_ylim(0,1.0)
y2.set_ylim(0,0.05)
# Legend
lns = [psad, pmrdi, psar]
ax.legend(handles=lns, loc='upper left')
# x-ticks for ax
ax.xaxis.set_ticks([2,1,4,3])
ax.set_xticklabels(xlabels, rotation = 45,ha='right')
# y-ticks and grid
y2.set_yticks(np.linspace(0.0, 0.05, len(ax.get_yticks())))
# Just make one grid, and just for y-axis
ax.yaxis.grid(True)
#ax.xaxis.grid(True,which='minor')
plt.savefig("Figures/means_together_grid.pdf",bbox_inches='tight')
# Need a separate xlabel without site numbers. Those will go in caption.
xlabelsi = []
for l in lu:
xlabelsi.append(l)
# With grid
# Another option with separate axis for SAR, with alt colors
# Use existing colors according to site, shape represents analysis
# Plot the mean least squares
fig,ax = plt.subplots(figsize=(8,3))
y2 = ax.twinx()
clist = ['tab:blue','tab:green','tab:red','tab:orange']
xlist = np.array([2,1,4,3])
for i in np.arange(len(clist)):
# Indigenous ones
ax.errorbar(x=xlist[i]-0.2,y=sad_mlsq_i.loc['Mean indigenous'][i],
yerr=sad_mlsq_i.loc['Standard error indigenous'][i],
fmt='o',capsize=4,c=clist[i],label='SAD')
ax.errorbar(x=xlist[i],y=mrdi_mlsq_i.loc['Mean (idg)'][i],
yerr=mrdi_mlsq_i.loc['Standard error (idg)'][i],
fmt='v',capsize=4,c=clist[i],label='MRDI')
y2.errorbar(x=xlist[i]+0.2,y=sar_mlsq_i.loc['Mean (idg)'][i],
yerr=sar_mlsq_i.loc['Standard error (idg)'][i],
fmt='s',capsize=4,c=clist[i],label='SAR')
# Introduced
al = 0.7 # Alpha value
ax.errorbar(x=xlist[i]-0.1,y=sad_mlsq_i.loc['Mean introduced'][i],
yerr=sad_mlsq_i.loc['Standard error introduced'][i],
fmt='o',capsize=4,c=clist[i],alpha=al,markerfacecolor='none',label='SAD')
ax.errorbar(x=xlist[i]+0.1,y=mrdi_mlsq_i.loc['Mean (int)'][i],
yerr=mrdi_mlsq_i.loc['Standard error (int)'][i],
fmt='v',capsize=4,c=clist[i],alpha=al,markerfacecolor='none',label='MRDI')
y2.errorbar(x=xlist[i]+0.3,y=sar_mlsq_i.loc['Mean (int)'][i],
yerr=sar_mlsq_i.loc['Standard error (int)'][i],
fmt='s',capsize=4,c=clist[i],alpha=al,markerfacecolor='none',label='SAR')
# For the legend
if (i == len(clist)-1):
psad = ax.errorbar(x=[],y=[],yerr=[],
fmt='o',capsize=4,c='tab:gray',label='SAD')
pmrdi = ax.errorbar(x=[],y=[],yerr=[],
fmt='v',capsize=4,c='tab:gray',label='MRDI')
psar = y2.errorbar(x=[],y=[],yerr=[],
fmt='s',capsize=4,c='tab:gray',label='SAR')
# Explain filled/unfilled in caption
# Labels
ax.set_ylabel('Mean least squared error \nSAD & MRDI')
y2.set_ylabel('Mean least squared error \nSAR',rotation=270,labelpad=25)
# Set 0 scale
ax.set_ylim(0,2.75)
y2.set_ylim(0,0.055)
# Legend
lns = [psad, pmrdi, psar]
ax.legend(handles=lns,loc=(0.2,0.69))
# x-ticks for ax
ax.xaxis.set_ticks([2.05,1.05,4.05,3.05])
ax.set_xticklabels(xlabelsi, rotation = 30,ha='right')
# y-ticks and grid
y2.set_yticks(np.linspace(0.0, 0.05, len(ax.get_yticks())-1))
# Just make one grid, and just for y-axis
ax.yaxis.grid(True)
#y2.yaxis.grid(True) # Just to test they overlap
#ax.xaxis.grid(True,which='minor')
plt.savefig("Figures/means_together_indigenous_grid.pdf",bbox_inches='tight')
# KS together
# Plot the mean least squares
plt.figure(figsize=(4,3))
# Make color dictionary
clist = {lu[1]:'tab:green',lu[0]:'tab:blue',lu[3]:'tab:orange',lu[2]:'tab:red'}
xlist = {lu[1]:1, lu[0]:2, lu[3]:3, lu[2]:4}
for l in lu:
plt.errorbar(x=xlist[l]-0.1,y=sad_ks.loc['means',l],yerr=sad_ks.loc['se',l],
fmt='o',c=clist[l],capsize=4,label='SAD')
plt.errorbar(x=xlist[l]+0.1,y=mrdi_ks.loc['Mean',l],yerr=mrdi_ks.loc['Standard error',l],
fmt='v',c=clist[l],capsize=4,label='MRDI')
plt.ylabel(r'KS test statistic $D_{KS}$')
plt.xticks([2,1,4,3],['{}\n{} sites'.format(l,lu_sites[l]) for l in lu],
rotation=45,ha='right')
# Legend (requires previous cell)
lns = [psad, pmrdi]
plt.legend(handles=lns, loc='upper left')
plt.savefig("Figures/SI/KSD_together.pdf",bbox_inches='tight')
# Read in
#SAD
comm_sad = pd.read_csv('ProcessedData/sad_combined_data.csv')
comm_sad_mlsq = comm_sad['mlsq'].values
#Native forest 0.491278
#Exotic forest 0.689149
#Semi-natural pasture 1.257251
#Intensive pasture 0.331208
#MRDI
comm_mrdi = pd.read_csv('ProcessedData/mrdi_combined_data.csv')
comm_mrdi_mlsq = comm_mrdi['mlsq'].values
#Native forest 0.378824
#Exotic forest 0.260345
#Semi-natural pasture 1.253413
#Intensive pasture 1.391150
# Plot least squares
# Set up
clist = ['tab:green','tab:blue','tab:orange','tab:red']
xlist = np.arange(4)+1
# Plot
plt.figure(figsize=(4,3))
for i in np.arange(len(clist)):
plt.plot(xlist[i],comm_sad_mlsq[i],'o',c=clist[i],label='SAD')
plt.plot(xlist[i]+0.3,comm_mrdi_mlsq[i],'v',c=clist[i],label='MRDI')
plt.ylabel('Mean least squared error')
plt.xticks(xlist+0.15,['{}\n{} sites'.format(l,lu_sites[l]) for l in [lu[1],lu[0],lu[3],lu[2]]],
rotation=45,ha='right')
# Legend
sadl, = plt.plot([],[],'o',c='tab:gray',label='SAD')
mrdil, = plt.plot([],[],'v',c='tab:gray',label='MRDI')
lns = [sadl,mrdil]
plt.legend(handles=lns, loc='best')
plt.savefig('Figures/SI/means_together_community_level.pdf',bbox_inches='tight')
| 0.464173 | 0.712869 |
```
import pandas as pd
import os
import cobra
import cobra.test
from cobra.io import read_sbml_model
import re
import copy as cp
from cobra.flux_analysis import gapfill
import gurobipy
import warnings
# Just for testing
def load_query_model_file(file, obj = None):
"""
Reads a XML SBML file, returning a model object.
Needs cobra.io.
If obj parameter is specified, changes the objective of the model.
"""
model = read_sbml_model(file)
if obj != None:
model.objective = obj
return model
return model
cobra_config = cobra.Configuration()
# solvers: glpk, glpk-exact, scipy, gurobi
cobra_config.solver = 'glpk'
# Test
x = load_query_model_file("iEC1344_C.xml")
# accession to model name
str(x)
# accession to a gene given its reaction
x.reactions[600].id
# finding the gene
x.reactions.ADPT
str(list(x.reactions.ADPT.genes)[0])
# reaction without associated gene
x.reactions[800]
x.reactions[800].genes
str(list(x.reactions[800].genes)[0])
# index error means there are no genes associated
try:
str(list(x.reactions[800].genes)[0])
except IndexError:
print("Unknown gene")
# Exchange reactions
x.reactions[0]
if x.reactions[0].id.startswith("EX_"):
print("Exchange reaction")
# Several genes
x.reactions[580]
str(list(x.reactions[580].genes)[0])
# that method doesn't work
list(x.reactions[580].genes)
[str(list(x.reactions[580].genes)[i]) for i in range(len(list(x.reactions[580].genes)))]
def load_query_model(model, obj = None):
"""
Loads a model object. Changes objective if specified.
"""
# Error with deepcopy when gurobi solver set
model = cp.deepcopy(model)
if obj != None:
model.objective = obj
return model
return model
# Tests
y = load_query_model(x)
y
z = load_query_model(x, obj = "2MAHMP")
z
# If deepcopy works, y and z objectives should differ
y
def load_template_models(template_list, obj = None):
"""
Takes a list of template models and changes objective if specified.
Objective can be either biomass or a specific reaction.
It also returns a list with the models which objective couldn't be changed.
"""
# deepcopy doesn't work when solver = gurobi
templates = cp.deepcopy(template_list)
templates = template_list
failures = []
if obj == None:
return templates, failures
if obj == "biomass":
b = re.compile("biomass", re.IGNORECASE)
bc = re.compile("(biomass){1}.*(core){1}", re.IGNORECASE)
for model in templates:
reactions = [reaction.id for reaction in model.reactions]
# Searching for a biomass_core reaction
core = list(filter(bc.match, reactions))
if core:
model.objective = core[0]
continue
# Searching for a non core biomass reaction
biomass = list(filter(b.match, reactions))
if biomass:
model.objective = biomass[0]
# If biomass reactions are not found, the model name is stored into "failures" list.
# The model won't change its objective, but it will be used anyway for gap filling
else:
failures.append(model.name)
return templates, failures
if obj != None and obj != "biomass":
for model in templates:
try:
model.objective = obj
except ValueError:
failures.append(model.name)
return templates, failures
# Function testing
templ = [x, y, z]
# The objective for x and y is biomass, as for z is the reaction 2MAHMP
A, failures = load_template_models(templ)
A[0]
A[2]
failures
B, failures = load_template_models(templ, obj = "biomass")
B[2]
C, failures = load_template_models(templ, obj = "2MAHMP")
C[0]
C[2]
D, failures = load_template_models(templ, obj = "qwerty")
D[0]
failures
# model.name doesn't work
x.optimize().objective_value
# Gapfilling test
cobra_config.solver = 'gurobi'
model = load_query_model_file("iJN746.xml")
model.optimize().objective_value
model.summary()
universal = cobra.Model("universal_reactions")
for i in [i.id for i in model.metabolites.glc__D_e.reactions]:
reaction = model.reactions.get_by_id(i)
universal.add_reaction(reaction.copy())
model.remove_reactions([reaction])
value = model.optimize().objective_value
print(value)
solution = gapfill(model, universal, demand_reactions=False)
for reaction in solution[0]:
print(reaction.id)
# solution is a list with the reactions needed to make the model work
type(solution)
for reaction in solution[0]:
model.add_reaction(reaction.copy())
model.optimize().objective_value
model.summary()
# first draft
def homology_gapfilling(model, templates, model_obj = None, template_obj = None):
"""
Performs gap filling on a model using homology models as templates
"""
model = load_query_model(model, obj = model_obj)
model.solver = 'gurobi'
templates, template_failures = load_template_models(templates, obj = template_obj)
# This dict will store used models and reactions
added_reactions = {}
# Initial flux value
value = model.optimize().objective_value
if value == None:
value = 0.0
for template in templates:
template.solver = 'gurobi'
# result will store the reactions ids
result = gapfill(model, template, demand_reactions=False)
# dict
reactions = [reaction.id for reaction in result[0]]
# template.name does not work. Must find a solution to store the name of the used templates
added_reactions[template.name] = reactions
# Adding reactions to the model
[model.add_reaction(reaction.copy()) for reaction in result[0]]
# Flux will be evaluated here
new_value = model.optimize().objective_value
if new_value != None and new_value > value:
value = new_value
elif new_value == None:
continue
elif new_value != None and new_value == value:
break
return model, added_reactions
# model and templ default solver should be glpk in order to avoid deepcopy error
X = homology_gapfilling(model, templ)
X[0].optimize().objective_value
```
|
github_jupyter
|
import pandas as pd
import os
import cobra
import cobra.test
from cobra.io import read_sbml_model
import re
import copy as cp
from cobra.flux_analysis import gapfill
import gurobipy
import warnings
# Just for testing
def load_query_model_file(file, obj = None):
"""
Reads a XML SBML file, returning a model object.
Needs cobra.io.
If obj parameter is specified, changes the objective of the model.
"""
model = read_sbml_model(file)
if obj != None:
model.objective = obj
return model
return model
cobra_config = cobra.Configuration()
# solvers: glpk, glpk-exact, scipy, gurobi
cobra_config.solver = 'glpk'
# Test
x = load_query_model_file("iEC1344_C.xml")
# accession to model name
str(x)
# accession to a gene given its reaction
x.reactions[600].id
# finding the gene
x.reactions.ADPT
str(list(x.reactions.ADPT.genes)[0])
# reaction without associated gene
x.reactions[800]
x.reactions[800].genes
str(list(x.reactions[800].genes)[0])
# index error means there are no genes associated
try:
str(list(x.reactions[800].genes)[0])
except IndexError:
print("Unknown gene")
# Exchange reactions
x.reactions[0]
if x.reactions[0].id.startswith("EX_"):
print("Exchange reaction")
# Several genes
x.reactions[580]
str(list(x.reactions[580].genes)[0])
# that method doesn't work
list(x.reactions[580].genes)
[str(list(x.reactions[580].genes)[i]) for i in range(len(list(x.reactions[580].genes)))]
def load_query_model(model, obj = None):
"""
Loads a model object. Changes objective if specified.
"""
# Error with deepcopy when gurobi solver set
model = cp.deepcopy(model)
if obj != None:
model.objective = obj
return model
return model
# Tests
y = load_query_model(x)
y
z = load_query_model(x, obj = "2MAHMP")
z
# If deepcopy works, y and z objectives should differ
y
def load_template_models(template_list, obj = None):
"""
Takes a list of template models and changes objective if specified.
Objective can be either biomass or a specific reaction.
It also returns a list with the models which objective couldn't be changed.
"""
# deepcopy doesn't work when solver = gurobi
templates = cp.deepcopy(template_list)
templates = template_list
failures = []
if obj == None:
return templates, failures
if obj == "biomass":
b = re.compile("biomass", re.IGNORECASE)
bc = re.compile("(biomass){1}.*(core){1}", re.IGNORECASE)
for model in templates:
reactions = [reaction.id for reaction in model.reactions]
# Searching for a biomass_core reaction
core = list(filter(bc.match, reactions))
if core:
model.objective = core[0]
continue
# Searching for a non core biomass reaction
biomass = list(filter(b.match, reactions))
if biomass:
model.objective = biomass[0]
# If biomass reactions are not found, the model name is stored into "failures" list.
# The model won't change its objective, but it will be used anyway for gap filling
else:
failures.append(model.name)
return templates, failures
if obj != None and obj != "biomass":
for model in templates:
try:
model.objective = obj
except ValueError:
failures.append(model.name)
return templates, failures
# Function testing
templ = [x, y, z]
# The objective for x and y is biomass, as for z is the reaction 2MAHMP
A, failures = load_template_models(templ)
A[0]
A[2]
failures
B, failures = load_template_models(templ, obj = "biomass")
B[2]
C, failures = load_template_models(templ, obj = "2MAHMP")
C[0]
C[2]
D, failures = load_template_models(templ, obj = "qwerty")
D[0]
failures
# model.name doesn't work
x.optimize().objective_value
# Gapfilling test
cobra_config.solver = 'gurobi'
model = load_query_model_file("iJN746.xml")
model.optimize().objective_value
model.summary()
universal = cobra.Model("universal_reactions")
for i in [i.id for i in model.metabolites.glc__D_e.reactions]:
reaction = model.reactions.get_by_id(i)
universal.add_reaction(reaction.copy())
model.remove_reactions([reaction])
value = model.optimize().objective_value
print(value)
solution = gapfill(model, universal, demand_reactions=False)
for reaction in solution[0]:
print(reaction.id)
# solution is a list with the reactions needed to make the model work
type(solution)
for reaction in solution[0]:
model.add_reaction(reaction.copy())
model.optimize().objective_value
model.summary()
# first draft
def homology_gapfilling(model, templates, model_obj = None, template_obj = None):
"""
Performs gap filling on a model using homology models as templates
"""
model = load_query_model(model, obj = model_obj)
model.solver = 'gurobi'
templates, template_failures = load_template_models(templates, obj = template_obj)
# This dict will store used models and reactions
added_reactions = {}
# Initial flux value
value = model.optimize().objective_value
if value == None:
value = 0.0
for template in templates:
template.solver = 'gurobi'
# result will store the reactions ids
result = gapfill(model, template, demand_reactions=False)
# dict
reactions = [reaction.id for reaction in result[0]]
# template.name does not work. Must find a solution to store the name of the used templates
added_reactions[template.name] = reactions
# Adding reactions to the model
[model.add_reaction(reaction.copy()) for reaction in result[0]]
# Flux will be evaluated here
new_value = model.optimize().objective_value
if new_value != None and new_value > value:
value = new_value
elif new_value == None:
continue
elif new_value != None and new_value == value:
break
return model, added_reactions
# model and templ default solver should be glpk in order to avoid deepcopy error
X = homology_gapfilling(model, templ)
X[0].optimize().objective_value
| 0.456652 | 0.309099 |
This notebook contains Hovmoller plots that compare the model output over many different depths to the results from the ORCA Buoy data.
```
import sys
sys.path.append('/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools')
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import netCDF4 as nc
import xarray as xr
import datetime as dt
from salishsea_tools import evaltools as et, viz_tools, places
import gsw
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.dates as mdates
import cmocean as cmo
import scipy.interpolate as sinterp
import math
from scipy import io
import pickle
import cmocean
import json
import Keegan_eval_tools as ket
from collections import OrderedDict
from matplotlib.colors import LogNorm
fs=16
mpl.rc('xtick', labelsize=fs)
mpl.rc('ytick', labelsize=fs)
mpl.rc('legend', fontsize=fs)
mpl.rc('axes', titlesize=fs)
mpl.rc('axes', labelsize=fs)
mpl.rc('figure', titlesize=fs)
mpl.rc('font', size=fs)
mpl.rc('font', family='sans-serif', weight='normal', style='normal')
import warnings
#warnings.filterwarnings('ignore')
from IPython.display import Markdown, display
%matplotlib inline
ptrcloc='/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data'
modver='HC201905' #HC202007 is the other option.
gridloc='/ocean/kflanaga/MEOPAR/savedData/201905_grid_data'
ORCAloc='/ocean/kflanaga/MEOPAR/savedData/ORCAData'
year=2019
mooring='Twanoh'
# Parameters
year = 2015
modver = "HC202007"
mooring = "Twanoh"
ptrcloc = "/ocean/kflanaga/MEOPAR/savedData/202007_ptrc_data"
gridloc = "/ocean/kflanaga/MEOPAR/savedData/202007_grid_data"
ORCAloc = "/ocean/kflanaga/MEOPAR/savedData/ORCAData"
orca_dict=io.loadmat(f'{ORCAloc}/{mooring}.mat')
def ORCA_dd_to_dt(date_list):
UTC=[]
for yd in date_list:
if np.isnan(yd) == True:
UTC.append(float("NaN"))
else:
start = dt.datetime(1999,12,31)
delta = dt.timedelta(yd)
offset = start + delta
time=offset.replace(microsecond=0)
UTC.append(time)
return UTC
obs_tt=[]
for i in range(len(orca_dict['Btime'][1])):
obs_tt.append(np.nanmean(orca_dict['Btime'][:,i]))
#I should also change this obs_tt thing I have here into datetimes
YD_rounded=[]
for yd in obs_tt:
if np.isnan(yd) == True:
YD_rounded.append(float("NaN"))
else:
YD_rounded.append(math.floor(yd))
obs_dep=[]
for i in orca_dict['Bdepth']:
obs_dep.append(np.nanmean(i))
grid=xr.open_mfdataset(gridloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(grid.time_counter)
mod_depth=np.array(grid.deptht)
mod_votemper=(grid.votemper.isel(y=0,x=0))
mod_vosaline=(grid.vosaline.isel(y=0,x=0))
mod_votemper = (np.array(mod_votemper))
mod_votemper = np.ma.masked_equal(mod_votemper,0).T
mod_vosaline = (np.array(mod_vosaline))
mod_vosaline = np.ma.masked_equal(mod_vosaline,0).T
def Process_ORCA(orca_var,depths,dates,year):
# Transpose the columns so that a yearday column can be added.
df_1=pd.DataFrame(orca_var).transpose()
df_YD=pd.DataFrame(dates,columns=['yearday'])
df_1=pd.concat((df_1,df_YD),axis=1)
#Group by yearday so that you can take the daily mean values.
dfg=df_1.groupby(by='yearday')
df_mean=dfg.mean()
df_mean=df_mean.reset_index()
# Convert the yeardays to datetime UTC
UTC=ORCA_dd_to_dt(df_mean['yearday'])
df_mean['yearday']=UTC
# Select the range of dates that you would like.
df_year=df_mean[(df_mean.yearday >= dt.datetime(year,1,1))&(df_mean.yearday <= dt.datetime(year,12,31))]
df_year=df_year.set_index('yearday')
#Add in any missing date values
idx=pd.date_range(df_year.index[0],df_year.index[-1])
df_full=df_year.reindex(idx,fill_value=-1)
#Transpose again so that you can add a depth column.
df_full=df_full.transpose()
df_full['depth']=obs_dep
# Remove any rows that have NA values for depth.
df_full=df_full.dropna(how='all',subset=['depth'])
df_full=df_full.set_index('depth')
#Mask any NA values and any negative values.
df_final=np.ma.masked_invalid(np.array(df_full))
df_final=np.ma.masked_less(df_final,0)
return df_final, df_full.index, df_full.columns
```
## Map of Buoy Location.
```
lon,lat=places.PLACES[mooring]['lon lat']
fig, ax = plt.subplots(1,1,figsize = (6,6))
with nc.Dataset('/data/vdo/MEOPAR/NEMO-forcing/grid/bathymetry_201702.nc') as bathy:
viz_tools.plot_coastline(ax, bathy, coords = 'map',isobath=.1)
color=('firebrick')
ax.plot(lon, lat,'o',color = 'firebrick', label=mooring)
ax.set_ylim(47, 49)
ax.legend(bbox_to_anchor=[1,.6,0.45,0])
ax.set_xlim(-124, -122);
ax.set_title('Buoy Location');
```
## Temperature
```
df,dep,tim= Process_ORCA(orca_dict['Btemp'],obs_dep,YD_rounded,year)
date_range=(dt.datetime(year,1,1),dt.datetime(year,12,31))
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Temperature Series',
var_title='Temperature (C$^0$)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
ax=ket.hovmoeller(mod_votemper, mod_depth, tt, (2,15),date_range, title='Modeled Temperature Series',
var_title='Temperature (C$^0$)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
```
## Salinity
```
df,dep,tim= Process_ORCA(orca_dict['Bsal'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
ax=ket.hovmoeller(mod_vosaline, mod_depth, tt, (2,15),date_range,title='Modeled Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
grid.close()
bio=xr.open_mfdataset(ptrcloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(bio.time_counter)
mod_depth=np.array(bio.deptht)
mod_flagellatets=(bio.flagellates.isel(y=0,x=0))
mod_ciliates=(bio.ciliates.isel(y=0,x=0))
mod_diatoms=(bio.diatoms.isel(y=0,x=0))
mod_Chl = np.array((mod_flagellatets+mod_ciliates+mod_diatoms)*1.8)
mod_Chl = np.ma.masked_equal(mod_Chl,0).T
df,dep,tim= Process_ORCA(orca_dict['Bfluor'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
ax=ket.hovmoeller(mod_Chl, mod_depth, tt, (2,15),date_range,title='Modeled Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
bio.close()
```
|
github_jupyter
|
import sys
sys.path.append('/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools')
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import netCDF4 as nc
import xarray as xr
import datetime as dt
from salishsea_tools import evaltools as et, viz_tools, places
import gsw
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.dates as mdates
import cmocean as cmo
import scipy.interpolate as sinterp
import math
from scipy import io
import pickle
import cmocean
import json
import Keegan_eval_tools as ket
from collections import OrderedDict
from matplotlib.colors import LogNorm
fs=16
mpl.rc('xtick', labelsize=fs)
mpl.rc('ytick', labelsize=fs)
mpl.rc('legend', fontsize=fs)
mpl.rc('axes', titlesize=fs)
mpl.rc('axes', labelsize=fs)
mpl.rc('figure', titlesize=fs)
mpl.rc('font', size=fs)
mpl.rc('font', family='sans-serif', weight='normal', style='normal')
import warnings
#warnings.filterwarnings('ignore')
from IPython.display import Markdown, display
%matplotlib inline
ptrcloc='/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data'
modver='HC201905' #HC202007 is the other option.
gridloc='/ocean/kflanaga/MEOPAR/savedData/201905_grid_data'
ORCAloc='/ocean/kflanaga/MEOPAR/savedData/ORCAData'
year=2019
mooring='Twanoh'
# Parameters
year = 2015
modver = "HC202007"
mooring = "Twanoh"
ptrcloc = "/ocean/kflanaga/MEOPAR/savedData/202007_ptrc_data"
gridloc = "/ocean/kflanaga/MEOPAR/savedData/202007_grid_data"
ORCAloc = "/ocean/kflanaga/MEOPAR/savedData/ORCAData"
orca_dict=io.loadmat(f'{ORCAloc}/{mooring}.mat')
def ORCA_dd_to_dt(date_list):
UTC=[]
for yd in date_list:
if np.isnan(yd) == True:
UTC.append(float("NaN"))
else:
start = dt.datetime(1999,12,31)
delta = dt.timedelta(yd)
offset = start + delta
time=offset.replace(microsecond=0)
UTC.append(time)
return UTC
obs_tt=[]
for i in range(len(orca_dict['Btime'][1])):
obs_tt.append(np.nanmean(orca_dict['Btime'][:,i]))
#I should also change this obs_tt thing I have here into datetimes
YD_rounded=[]
for yd in obs_tt:
if np.isnan(yd) == True:
YD_rounded.append(float("NaN"))
else:
YD_rounded.append(math.floor(yd))
obs_dep=[]
for i in orca_dict['Bdepth']:
obs_dep.append(np.nanmean(i))
grid=xr.open_mfdataset(gridloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(grid.time_counter)
mod_depth=np.array(grid.deptht)
mod_votemper=(grid.votemper.isel(y=0,x=0))
mod_vosaline=(grid.vosaline.isel(y=0,x=0))
mod_votemper = (np.array(mod_votemper))
mod_votemper = np.ma.masked_equal(mod_votemper,0).T
mod_vosaline = (np.array(mod_vosaline))
mod_vosaline = np.ma.masked_equal(mod_vosaline,0).T
def Process_ORCA(orca_var,depths,dates,year):
# Transpose the columns so that a yearday column can be added.
df_1=pd.DataFrame(orca_var).transpose()
df_YD=pd.DataFrame(dates,columns=['yearday'])
df_1=pd.concat((df_1,df_YD),axis=1)
#Group by yearday so that you can take the daily mean values.
dfg=df_1.groupby(by='yearday')
df_mean=dfg.mean()
df_mean=df_mean.reset_index()
# Convert the yeardays to datetime UTC
UTC=ORCA_dd_to_dt(df_mean['yearday'])
df_mean['yearday']=UTC
# Select the range of dates that you would like.
df_year=df_mean[(df_mean.yearday >= dt.datetime(year,1,1))&(df_mean.yearday <= dt.datetime(year,12,31))]
df_year=df_year.set_index('yearday')
#Add in any missing date values
idx=pd.date_range(df_year.index[0],df_year.index[-1])
df_full=df_year.reindex(idx,fill_value=-1)
#Transpose again so that you can add a depth column.
df_full=df_full.transpose()
df_full['depth']=obs_dep
# Remove any rows that have NA values for depth.
df_full=df_full.dropna(how='all',subset=['depth'])
df_full=df_full.set_index('depth')
#Mask any NA values and any negative values.
df_final=np.ma.masked_invalid(np.array(df_full))
df_final=np.ma.masked_less(df_final,0)
return df_final, df_full.index, df_full.columns
lon,lat=places.PLACES[mooring]['lon lat']
fig, ax = plt.subplots(1,1,figsize = (6,6))
with nc.Dataset('/data/vdo/MEOPAR/NEMO-forcing/grid/bathymetry_201702.nc') as bathy:
viz_tools.plot_coastline(ax, bathy, coords = 'map',isobath=.1)
color=('firebrick')
ax.plot(lon, lat,'o',color = 'firebrick', label=mooring)
ax.set_ylim(47, 49)
ax.legend(bbox_to_anchor=[1,.6,0.45,0])
ax.set_xlim(-124, -122);
ax.set_title('Buoy Location');
df,dep,tim= Process_ORCA(orca_dict['Btemp'],obs_dep,YD_rounded,year)
date_range=(dt.datetime(year,1,1),dt.datetime(year,12,31))
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Temperature Series',
var_title='Temperature (C$^0$)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
ax=ket.hovmoeller(mod_votemper, mod_depth, tt, (2,15),date_range, title='Modeled Temperature Series',
var_title='Temperature (C$^0$)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
df,dep,tim= Process_ORCA(orca_dict['Bsal'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
ax=ket.hovmoeller(mod_vosaline, mod_depth, tt, (2,15),date_range,title='Modeled Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
grid.close()
bio=xr.open_mfdataset(ptrcloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(bio.time_counter)
mod_depth=np.array(bio.deptht)
mod_flagellatets=(bio.flagellates.isel(y=0,x=0))
mod_ciliates=(bio.ciliates.isel(y=0,x=0))
mod_diatoms=(bio.diatoms.isel(y=0,x=0))
mod_Chl = np.array((mod_flagellatets+mod_ciliates+mod_diatoms)*1.8)
mod_Chl = np.ma.masked_equal(mod_Chl,0).T
df,dep,tim= Process_ORCA(orca_dict['Bfluor'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
ax=ket.hovmoeller(mod_Chl, mod_depth, tt, (2,15),date_range,title='Modeled Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
bio.close()
| 0.318803 | 0.773002 |
# T자 모양 단면의 도심<br>Centroid of a T shaped Section
```
# 그래프, 수학 기능 추가
# Add graph and math features
import pylab as py
import numpy as np
import numpy.linalg as nl
# 기호 연산 기능 추가
# Add symbolic operation capability
import sympy as sy
```
참고문헌 : 예제 5.2, Pytel 외 저, 이주성 외 역, 재료역학, 2판, 한티미디어, 2013.<br>Ref: Example 5.2, Pytel, Kiusalaas, Sharma, Mechanics of Materials, 2nd Ed., Cengage Learning, 2013.
다음과 같은 T자 모양 단면의 도심을 구해 보자.<br>
Let's try to find the centroid of the following T shaped section.
윗 변의 폭 $w=150mm$<br>Width of the upper side $w=150mm$
```
w_mm = 150
```
아랫변의 높이 $h=200mm$<br>Height of the lower side $h=200mm$
```
h_mm = 200
```
두께 $t=20mm$<br>Thickness $t=20mm$
```
t_mm = 20
```
## 도심의 정의<br>Definition of a Centroid
$$
C_y=\frac{\int yS_x(y) dy}{A}=\frac{\int yS_x(y) dy}{\int S_x(y) dy}
$$
ref : https://en.wikipedia.org/wiki/Centroid
여기서 $S_x(y)$는 다음과 같다. (T 단면의 아래 끝에서 $y=0$)<br>
Here, $S_x(y)$ is as follows. ($y=0$ at the lower end of T section)
$$
S_x(y) =
\begin{cases}
t, & 0 \leq y < h \\
w, & h \leq y < h + t \\
0, & otherwise
\end{cases}
$$
Python 언어로는 다음과 같이 구현할 수 있다.<br>We can implement it in Python as follows.
```
def sx(y_mm):
if 0 <= y_mm < h_mm :
result = t_mm
elif h_mm <= y_mm < (h_mm + t_mm):
result = w_mm
else:
result = 0
return result
```
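A quick sanity check of `sx` (an editorial addition, not part of the original example): the stem of width $t$ spans $0 \leq y < h$ and the flange of width $w$ spans $h \leq y < h + t$.
```
# Stem (width t) for 0 <= y < h, flange (width w) for h <= y < h + t, zero outside
assert sx(0) == t_mm and sx(h_mm - 1) == t_mm
assert sx(h_mm) == w_mm and sx(h_mm + t_mm - 1) == w_mm
assert sx(h_mm + t_mm) == 0
```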
이 함수의 그래프를 그려 보자<br>Let's plot this.
```
y_mm_array = py.arange(0, h_mm + t_mm + 0.5, 1)
sx_mm_array = py.array([sx(y_mm) for y_mm in y_mm_array])
py.plot(sx_mm_array * 0.5, y_mm_array)
py.plot(sx_mm_array * (-0.5), y_mm_array)
py.axis('equal')
py.grid(True)
py.xlabel('x(mm)')
py.ylabel('y(mm)')
```
## 정적분 계산<br>Numerical Integration
0차 적분 함수를 이용해 보자<br>Let's use a 0th-order numerical integration function.
```
def get_delta_x(xi, xe, n):
return (xe - xi) / n
def num_int_0(f, xi, xe, n, b_verbose=False):
x_array = py.linspace(xi, xe, n+1)
delta_x = x_array[1] - x_array[0]
assert 1e-3 > (abs(delta_x - get_delta_x(xi, xe, n)) / get_delta_x(xi, xe, n)), f"delta_x = {delta_x}"
integration_result = 0.0
for k in range(n):
x_k = x_array[k]
F_k = f(x_k) * delta_x
if b_verbose:
print('k = %2d, F_k = %g' % (k, F_k))
integration_result += F_k
return integration_result
```
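Before applying it to the section, a quick check of `num_int_0` against a known integral (an editorial addition): $\int_0^1 y\,dy = 0.5$, and the 0th-order rule approaches this value as $n$ grows.
```
num_int_0(lambda y: y, 0, 1, 1000)
```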
### 단면적<br>Area of the section
```
A_mm2 = num_int_0(sx, 0, h_mm + t_mm, h_mm + t_mm)
A_mm2
```
확인해 보자.<br>Let's verify.
```
h_mm * t_mm + w_mm * t_mm
```
아래와 같이 지정해 두면 T 자 단면적 결과가 맞는지 확인할 수 있다.<br>
With an assertion like the one below, we can check that the computed T-section area is correct.
```
assert 1e-6 > abs((h_mm * t_mm + w_mm * t_mm) - A_mm2)
```
### 도심<br>Centroid
```
def ySx(y_mm):
return y_mm * sx(y_mm)
numerator_mm3 = num_int_0(ySx, 0, h_mm + t_mm, h_mm + t_mm)
cy_mm = numerator_mm3 / A_mm2
cy_mm
```
역시 확인해 보자.<br>Again, let's verify.
```
cy_exact_mm = ((h_mm * t_mm) * (h_mm * 0.5) + (w_mm * t_mm) * (h_mm + 0.5 * t_mm)) / (h_mm * t_mm + w_mm * t_mm)
cy_exact_mm
cy_mm - cy_exact_mm
```
어떻게 하면 위 오차를 줄일 수 있을 것인가?<br>How can we make the error above smaller?
```
error = (cy_mm - cy_exact_mm)
try :
assert (1e-6 > abs(error)), "Error too large"
except AssertionError as e:
print(e)
```
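One straightforward remedy, added here as an illustration, is to use more and smaller subintervals; the 0th-order rule's error shrinks roughly in proportion to the interval width. (A higher-order rule such as the trapezoid rule would reduce it further.)
```
# Same 0'th order integration, but with 10x more subintervals (the factor 10 is arbitrary)
n_fine = (h_mm + t_mm) * 10
A_fine_mm2 = num_int_0(sx, 0, h_mm + t_mm, n_fine)
cy_fine_mm = num_int_0(ySx, 0, h_mm + t_mm, n_fine) / A_fine_mm2
abs(cy_fine_mm - cy_exact_mm) < abs(cy_mm - cy_exact_mm)
```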
## Final Bell<br>마지막 종
```
# stackoverflow.com/a/24634221
import os
os.system("printf '\a'");
```
|
github_jupyter
|
# 그래프, 수학 기능 추가
# Add graph and math features
import pylab as py
import numpy as np
import numpy.linalg as nl
# 기호 연산 기능 추가
# Add symbolic operation capability
import sympy as sy
w_mm = 150
h_mm = 200
t_mm = 20
def sx(y_mm):
if 0 <= y_mm < h_mm :
result = t_mm
elif h_mm <= y_mm < (h_mm + t_mm):
result = w_mm
else:
result = 0
return result
y_mm_array = py.arange(0, h_mm + t_mm + 0.5, 1)
sx_mm_array = py.array([sx(y_mm) for y_mm in y_mm_array])
py.plot(sx_mm_array * 0.5, y_mm_array)
py.plot(sx_mm_array * (-0.5), y_mm_array)
py.axis('equal')
py.grid(True)
py.xlabel('x(mm)')
py.ylabel('y(mm)')
def get_delta_x(xi, xe, n):
return (xe - xi) / n
def num_int_0(f, xi, xe, n, b_verbose=False):
x_array = py.linspace(xi, xe, n+1)
delta_x = x_array[1] - x_array[0]
assert 1e-3 > (abs(delta_x - get_delta_x(xi, xe, n)) / get_delta_x(xi, xe, n)), f"delta_x = {delta_x}"
integration_result = 0.0
for k in range(n):
x_k = x_array[k]
F_k = f(x_k) * delta_x
if b_verbose:
print('k = %2d, F_k = %g' % (k, F_k))
integration_result += F_k
return integration_result
A_mm2 = num_int_0(sx, 0, h_mm + t_mm, h_mm + t_mm)
A_mm2
h_mm * t_mm + w_mm * t_mm
assert 1e-6 > abs((h_mm * t_mm + w_mm * t_mm) - A_mm2)
def ySx(y_mm):
return y_mm * sx(y_mm)
numerator_mm3 = num_int_0(ySx, 0, h_mm + t_mm, h_mm + t_mm)
cy_mm = numerator_mm3 / A_mm2
cy_mm
cy_exact_mm = ((h_mm * t_mm) * (h_mm * 0.5) + (w_mm * t_mm) * (h_mm + 0.5 * t_mm)) / (h_mm * t_mm + w_mm * t_mm)
cy_exact_mm
cy_mm - cy_exact_mm
error = (cy_mm - cy_exact_mm)
try :
assert (1e-6 > abs(error)), "Error too large"
except AssertionError as e:
print(e)
# stackoverflow.com/a/24634221
import os
os.system("printf '\a'");
| 0.352759 | 0.964954 |
```
import tensorflow as tf
import numpy as np
import librosa
import tflearn
# needed by recurrent_net below for the bidirectional variants
from tflearn.layers.recurrent import bidirectional_rnn, BasicLSTMCell, GRUCell
seed = 'the best seed ever'
seed = sum([ord(char) for char in seed])
np.random.seed(seed)
tf.set_random_seed(seed)
class AudioGenerator():
def __init__(self, audio_path, sample_rate, audio_frame_size, data_frames_truncation=0):
data, _ = librosa.core.load(mono=True,
path=audio_path,
sr=sample_rate)
self.original_length = len(data)
if data_frames_truncation > 0:
data = data[:audio_frame_size*data_frames_truncation]
self.shortened_length = len(data)
self.audio_frames = []
skip = audio_frame_size
for start in range(0, len(data) - audio_frame_size, skip):
end = start + audio_frame_size
self.audio_frames.append(data[start:end])
self.audio_frames = np.array(self.audio_frames)
self.index = 0
self.epochs = 0
np.random.shuffle(self.audio_frames)
def print_dataset_stats(self):
percent = self.shortened_length / self.original_length * 100
print("Dataset stats:")
print(" * Audio data {}% original size".format(percent, self.shortened_length))
print(" * Audio stft frames shape", self.audio_frames.shape, "\n")
def next_batch(self, batch_size):
if self.index + batch_size >= len(self.audio_frames):
self.index = 0
self.epochs += 1
np.random.shuffle(self.audio_frames)
else:
self.index += batch_size
return self.audio_frames[self.index:self.index + batch_size], self.epochs
def recurrent_net(net, rec_type, rec_size, return_sequence):
"""
A quick if else block to build a recurrent layer, based on the type specified
by the user.
"""
if rec_type == 'lstm':
net = tflearn.layers.recurrent.lstm(net, rec_size, return_seq=return_sequence)
elif rec_type == 'gru':
net = tflearn.layers.recurrent.gru(net, rec_size, return_seq=return_sequence)
elif rec_type == 'bi_lstm':
net = bidirectional_rnn(net,
BasicLSTMCell(rec_size),
BasicLSTMCell(rec_size),
return_seq=return_sequence)
elif rec_type == 'bi_gru':
net = bidirectional_rnn(net,
GRUCell(rec_size),
GRUCell(rec_size),
return_seq=return_sequence)
else:
raise ValueError('Incorrect rnn type passed. Try lstm, gru, bi_lstm or bi_gru.')
return net
def get_audio_input_frame_size(sequence_length, window_size, hop_size):
input_frame_size = window_size
for _ in range(sequence_length):
input_frame_size += hop_size
return input_frame_size
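# Editor's note: window_size + sequence_length * hop_size samples yield exactly
# sequence_length + 1 STFT frames when frame_length = window_size,
# frame_step = hop_size and pad_end = False; the first sequence_length frames
# become the model input and the final frame is the prediction target.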
fft_size = 1024
hop_size = 256
window_size = 1024
sequence_length = 15
dataset_truncated_amount = 0
generating = True
rnn_sizes = [1024, 1024]
dense_sizes = [1024, 1024]
dense_activation = tf.nn.relu
weight_decay = 0.000
batch_norm = False
amount_epochs = 10000
batch_size = 32
learning_rate = 0.001
keep_prob = 1.0
input_frame_size = get_audio_input_frame_size(sequence_length,
window_size,
hop_size)
assert fft_size == window_size, "fft size must equal window size for maths to work"
tf.reset_default_graph()
with tf.variable_scope("inputs"):
audio = tf.placeholder(tf.float32,
shape=[None, input_frame_size])
with tf.variable_scope("stft"):
stfts = tf.contrib.signal.stft(audio,
frame_length=fft_size,
frame_step=hop_size,
fft_length=window_size,
pad_end=False)
print('stfts shape', stfts.get_shape())
stft_frames_length = stfts.get_shape()[1]
assert stft_frames_length == sequence_length + 1, "unexpected number of STFT frames: {}".format(stft_frames_length)
with tf.variable_scope("cart2polar"):
magnitudes = tf.abs(stfts)
phases = tf.angle(stfts)
with tf.variable_scope("input_target_split"):
input_magnitudes = magnitudes[:, :sequence_length]
input_phases = phases[:, :sequence_length]
target_magnitudes = magnitudes[:, -1]
target_phases = phases[:, -1]
target_features = tf.concat([target_magnitudes, target_phases], axis=1)
print(input_magnitudes.get_shape(), target_magnitudes.get_shape())
print(input_phases.get_shape(), target_phases.get_shape())
with tf.variable_scope("mag_phases_concat"):
features = tf.concat([input_magnitudes, input_phases], axis=2)
if batch_norm:
features = tf.contrib.layers.batch_norm(features)
net = features
# Recurrent
for layer, size in enumerate(rnn_sizes):
return_sequence = layer != (len(rnn_sizes) - 1)
net = recurrent_net(net, 'lstm', size, return_sequence)
net = tflearn.dropout(net, keep_prob)
# Dense + MLP Out
for size in dense_sizes:
net = tflearn.fully_connected(net,
size,
activation=dense_activation,
regularizer='L2',
weight_decay=0.001)
logits = tflearn.fully_connected(net,
features.get_shape()[2],
activation='linear')
split_size = int(features.get_shape()[2]) // 2
with tf.variable_scope("mag_phase_predict_split"):
predicted_magnitudes = logits[:, :split_size]
predicted_phases = logits[:, split_size:]
predicted_real = predicted_magnitudes * tf.cos(predicted_phases)
predicted_imag = predicted_magnitudes * tf.sin(predicted_phases)
predicted_stft = tf.complex(predicted_real, predicted_imag)
predicted_audio = tf.contrib.signal.inverse_stft(
predicted_stft,
frame_length=window_size,
frame_step=hop_size,
fft_length=fft_size,
window_fn=None,
name=None
)
loss = tf.losses.mean_squared_error(logits, target_features)
black_list = ['BatchNorm', 'batch_norm', 'LSTM', 'lstm', 'bias', '/b:']
regulisable_vars = []
for var in tf.trainable_variables():
if not any([bad in var.name for bad in black_list]):
regulisable_vars.append(tf.nn.l2_loss(var))
l2_losses = tf.add_n(regulisable_vars)
l2_loss = l2_losses * weight_decay
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
loss += l2_loss
optimiser = tf.train.AdamOptimizer(learning_rate=learning_rate)
optimise = optimiser.minimize(loss)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
audio_generator = AudioGenerator("./assets/electronic_piano/HM_120_AF_EPiano5.wav",
44100,
input_frame_size,
dataset_truncated_amount)
audio_generator.print_dataset_stats()
print('Started optimisation.')
generation_step = 20
generation_length = 400
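    # Editor's note: generation below is autoregressive -- the last
    # input_frame_size samples of the impulse are fed to the network, the
    # predicted frame is appended, and this repeats generation_length times.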
generated_audio = []
    epoch = 0
    last_epoch = 0
while epoch != amount_epochs:
audio_frames, epoch = audio_generator.next_batch(batch_size)
sess.run(optimise, feed_dict={
audio: audio_frames
})
iteration_loss = sess.run(loss, feed_dict={
audio: audio_frames
})
print(epoch, iteration_loss)
if iteration_loss == 0.0:
print(audio_frames)
if epoch != last_epoch and generating and iteration_loss < 0.001:
print('Generating.')
index = np.random.randint(len(audio_generator.audio_frames))
impulse = audio_generator.audio_frames[index]
impulse_size = len(impulse)
for _ in range(generation_length):
                predicted_audio_frames = sess.run(predicted_audio, feed_dict={
                    audio: impulse[-impulse_size:].reshape((1, -1))
                })
                impulse = np.concatenate((impulse, predicted_audio_frames.reshape(-1)))
generated_audio.append(impulse)
last_epoch = epoch
generated_audio = np.array(generated_audio).reshape((1, -1))
input_audio = tf.placeholder(tf.float32, shape=[1, None])
frames = tf.contrib.signal.frame(input_audio, frame_length=window_size, frame_step=hop_size)
reconstructed_signals = tf.contrib.signal.overlap_and_add(generated_audio,
frame_step=hop_size)
with tf.Session() as sess:
preview = sess.run(input_audio, feed_dict={
input_audio: generated_audio
})
print(generated_audio.shape)
print(preview.shape)
librosa.output.write_wav("preview.wav", preview, 44100)
import IPython.display as ipd
ipd.Audio(preview.reshape((-1))[:44100 * 5], rate=44100)
```
|
github_jupyter
|
import tensorflow as tf
import numpy as np
import librosa
import tflearn
# needed by recurrent_net below for the bidirectional variants
from tflearn.layers.recurrent import bidirectional_rnn, BasicLSTMCell, GRUCell
seed = 'the best seed ever'
seed = sum([ord(char) for char in seed])
np.random.seed(seed)
tf.set_random_seed(seed)
class AudioGenerator():
def __init__(self, audio_path, sample_rate, audio_frame_size, data_frames_truncation=0):
data, _ = librosa.core.load(mono=True,
path=audio_path,
sr=sample_rate)
self.original_length = len(data)
if data_frames_truncation > 0:
data = data[:audio_frame_size*data_frames_truncation]
self.shortened_length = len(data)
self.audio_frames = []
skip = audio_frame_size
for start in range(0, len(data) - audio_frame_size, skip):
end = start + audio_frame_size
self.audio_frames.append(data[start:end])
self.audio_frames = np.array(self.audio_frames)
self.index = 0
self.epochs = 0
np.random.shuffle(self.audio_frames)
def print_dataset_stats(self):
percent = self.shortened_length / self.original_length * 100
print("Dataset stats:")
print(" * Audio data {}% original size".format(percent, self.shortened_length))
print(" * Audio stft frames shape", self.audio_frames.shape, "\n")
def next_batch(self, batch_size):
if self.index + batch_size >= len(self.audio_frames):
self.index = 0
self.epochs += 1
np.random.shuffle(self.audio_frames)
else:
self.index += batch_size
return self.audio_frames[self.index:self.index + batch_size], self.epochs
def recurrent_net(net, rec_type, rec_size, return_sequence):
"""
A quick if else block to build a recurrent layer, based on the type specified
by the user.
"""
if rec_type == 'lstm':
net = tflearn.layers.recurrent.lstm(net, rec_size, return_seq=return_sequence)
elif rec_type == 'gru':
net = tflearn.layers.recurrent.gru(net, rec_size, return_seq=return_sequence)
elif rec_type == 'bi_lstm':
net = bidirectional_rnn(net,
BasicLSTMCell(rec_size),
BasicLSTMCell(rec_size),
return_seq=return_sequence)
elif rec_type == 'bi_gru':
net = bidirectional_rnn(net,
GRUCell(rec_size),
GRUCell(rec_size),
return_seq=return_sequence)
else:
raise ValueError('Incorrect rnn type passed. Try lstm, gru, bi_lstm or bi_gru.')
return net
def get_audio_input_frame_size(sequence_length, window_size, hop_size):
input_frame_size = window_size
for _ in range(sequence_length):
input_frame_size += hop_size
return input_frame_size
fft_size = 1024
hop_size = 256
window_size = 1024
sequence_length = 15
dataset_truncated_amount = 0
generating = True
rnn_sizes = [1024, 1024]
dense_sizes = [1024, 1024]
dense_activation = tf.nn.relu
weight_decay = 0.000
batch_norm = False
amount_epochs = 10000
batch_size = 32
learning_rate = 0.001
keep_prob = 1.0
input_frame_size = get_audio_input_frame_size(sequence_length,
window_size,
hop_size)
assert fft_size == window_size, "fft size must equal window size for maths to work"
tf.reset_default_graph()
with tf.variable_scope("inputs"):
audio = tf.placeholder(tf.float32,
shape=[None, input_frame_size])
with tf.variable_scope("stft"):
stfts = tf.contrib.signal.stft(audio,
frame_length=fft_size,
frame_step=hop_size,
fft_length=window_size,
pad_end=False)
print('stfts shape', stfts.get_shape())
stft_frames_length = stfts.get_shape()[1]
assert stft_frames_length == sequence_length + 1, "unexpected number of STFT frames: {}".format(stft_frames_length)
with tf.variable_scope("cart2polar"):
magnitudes = tf.abs(stfts)
phases = tf.angle(stfts)
with tf.variable_scope("input_target_split"):
input_magnitudes = magnitudes[:, :sequence_length]
input_phases = phases[:, :sequence_length]
target_magnitudes = magnitudes[:, -1]
target_phases = phases[:, -1]
target_features = tf.concat([target_magnitudes, target_phases], axis=1)
print(input_magnitudes.get_shape(), target_magnitudes.get_shape())
print(input_phases.get_shape(), target_phases.get_shape())
with tf.variable_scope("mag_phases_concat"):
features = tf.concat([input_magnitudes, input_phases], axis=2)
if batch_norm:
features = tf.contrib.layers.batch_norm(features)
net = features
# Recurrent
for layer, size in enumerate(rnn_sizes):
return_sequence = layer != (len(rnn_sizes) - 1)
net = recurrent_net(net, 'lstm', size, return_sequence)
net = tflearn.dropout(net, keep_prob)
# Dense + MLP Out
for size in dense_sizes:
net = tflearn.fully_connected(net,
size,
activation=dense_activation,
regularizer='L2',
weight_decay=0.001)
logits = tflearn.fully_connected(net,
features.get_shape()[2],
activation='linear')
split_size = int(features.get_shape()[2]) // 2
with tf.variable_scope("mag_phase_predict_split"):
predicted_magnitudes = logits[:, :split_size]
predicted_phases = logits[:, split_size:]
predicted_real = predicted_magnitudes * tf.cos(predicted_phases)
predicted_imag = predicted_magnitudes * tf.sin(predicted_phases)
predicted_stft = tf.complex(predicted_real, predicted_imag)
predicted_audio = tf.contrib.signal.inverse_stft(
predicted_stft,
frame_length=window_size,
frame_step=hop_size,
fft_length=fft_size,
window_fn=None,
name=None
)
loss = tf.losses.mean_squared_error(logits, target_features)
black_list = ['BatchNorm', 'batch_norm', 'LSTM', 'lstm', 'bias', '/b:']
regulisable_vars = []
for var in tf.trainable_variables():
if not any([bad in var.name for bad in black_list]):
regulisable_vars.append(tf.nn.l2_loss(var))
l2_losses = tf.add_n(regulisable_vars)
l2_loss = l2_losses * weight_decay
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
loss += l2_loss
optimiser = tf.train.AdamOptimizer(learning_rate=learning_rate)
optimise = optimiser.minimize(loss)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
audio_generator = AudioGenerator("./assets/electronic_piano/HM_120_AF_EPiano5.wav",
44100,
input_frame_size,
dataset_truncated_amount)
audio_generator.print_dataset_stats()
print('Started optimisation.')
generation_step = 20
generation_length = 400
generated_audio = []
    epoch = 0
    last_epoch = 0
while epoch != amount_epochs:
audio_frames, epoch = audio_generator.next_batch(batch_size)
sess.run(optimise, feed_dict={
audio: audio_frames
})
iteration_loss = sess.run(loss, feed_dict={
audio: audio_frames
})
print(epoch, iteration_loss)
if iteration_loss == 0.0:
print(audio_frames)
if epoch != last_epoch and generating and iteration_loss < 0.001:
print('Generating.')
index = np.random.randint(len(audio_generator.audio_frames))
impulse = audio_generator.audio_frames[index]
impulse_size = len(impulse)
for _ in range(generation_length):
                predicted_audio_frames = sess.run(predicted_audio, feed_dict={
                    audio: impulse[-impulse_size:].reshape((1, -1))
                })
                impulse = np.concatenate((impulse, predicted_audio_frames.reshape(-1)))
generated_audio.append(impulse)
last_epoch = epoch
generated_audio = np.array(generated_audio).reshape((1, -1))
input_audio = tf.placeholder(tf.float32, shape=[1, None])
frames = tf.contrib.signal.frame(input_audio, frame_length=window_size, frame_step=hop_size)
reconstructed_signals = tf.contrib.signal.overlap_and_add(generated_audio,
frame_step=hop_size)
with tf.Session() as sess:
preview = sess.run(input_audio, feed_dict={
input_audio: generated_audio
})
print(generated_audio.shape)
print(preview.shape)
librosa.output.write_wav("preview.wav", preview, 44100)
import IPython.display as ipd
ipd.Audio(preview.reshape((-1))[:44100 * 5], rate=44100)
| 0.610337 | 0.394201 |
# Семинар 13
# Методы внутренней точки
## На прошлом семинаре
- SDP
- Релаксации комбинаторных и невыпуклых задач
- Алгоритм Goemans-Williamson'a по приближению решения задачи MAXCUT
## Задача выпуклой оптимизации с ограничениями типа равенств
\begin{equation*}
\begin{split}
&\min f(x) \\
\text{s.t. } & Ax = b,
\end{split}
\end{equation*}
где $f$ - выпукла и дважды дифференцируема, $A \in \mathbb{R}^{p \times n}$ и $\mathrm{rank} \; A = p < n$
### Двойственная задача
Двойственная функция
\begin{equation*}
\begin{split}
g(\mu) & = -b^{\top}\mu + \inf_x(f(x) + \mu^{\top}Ax) \\
& = -b^{\top}\mu - \sup_x((-A^{\top}\mu)^{\top}x -f(x)) \\
& = -b^{\top}\mu - f^*(-A^{\top}\mu)
\end{split}
\end{equation*}
Двойственная задача
$$
\max_\mu -b^{\top}\mu - f^*(-A^{\top}\mu)
$$
**Подход 1**: найти сопряжённую функцию и решить безусловную задачу оптимизации
**Трудности**
- не всегда легко восстановить решение прямой задачи по решению двойственной
- сопряжённая функция $f^*$ должна быть дважды дифференцируемое для быстрого решения двойственной задачи. Это не всегда так.
### Условия оптимальности
- $Ax^* = b$
- $f'(x^*) + A^{\top}\mu^* = 0$
или
$$ \begin{bmatrix} f' & A^{\top} \\ A & 0 \end{bmatrix} \begin{bmatrix} x^{\\*} \\ \mu^{\\*} \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix} $$
**Подход 2**: решить нелинейную в общем случае систему методом Ньютона.
**Вопрос**: в каком случае система окажется линейной?
## Метод Ньютона для выпуклых задач с ограничениями типа равенств
\begin{equation*}
\begin{split}
& \min_v f(x) + f'(x)^{\top}v + \frac{1}{2}v^{\top}f''(x)v\\
\text{s.t. } & A(x + v) = b
\end{split}
\end{equation*}
Из условий оптимальности имеем
$$ \begin{bmatrix} f''(x) & A^{\top} \\ A & 0 \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix} = \begin{bmatrix} -f'(x) \\ 0 \end{bmatrix} $$
**Шаг метода Ньютона определён только для невырожденной матрицы!**
**Упражнение**. Посчитайте за сколько итераций метод Ньютона сойдётся для квадратичной функции с ограничениями типа равенств.
### Линеаризация условий оптимальности
- $A(x + v) = b \rightarrow Av = 0$
- $f'(x + v) + A^{\top}w \approx f'(x) + f''(x)v + A^{\top}w = 0$
или
- $f''(x)v + A^{\top}w = -f'(x)$
### Псевдокод
**Важно:** начальная точка должна лежать в допустимом множестве!
```python
def NewtonEqualityFeasible(f, gradf, hessf, A, b,
stop_crit, line_search, x0, tol):
x = x0
n = x.shape[0]
while True:
newton_matrix = [[hessf(x), A.T], [A, 0]]
rhs = [-gradf(x), 0]
w = solve_lin_sys(newton_matrix, rhs)
h = w[:n]
if stop_crit(x, h, gradf(x), **kwargs) < tol:
break
alpha = line_search(x, h, f, gradf(x), **kwargs)
x = x + alpha * h
return x
```
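As a concrete companion to the pseudocode (an editorial sketch with made-up data, not part of the original notes): for a quadratic objective $\frac{1}{2}x^{\top}Px + q^{\top}x$ with constraints $Ax = b$, a single Newton step solves the KKT system exactly, which also answers the exercise above.
```python
import numpy as np

# Arbitrary example data (assumptions for illustration only)
P = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite Hessian
q = np.array([1.0, -1.0])
A = np.array([[1.0, 1.0]])               # single constraint x1 + x2 = 1
b = np.array([1.0])

n, p = P.shape[0], A.shape[0]
kkt = np.block([[P, A.T], [A, np.zeros((p, p))]])   # KKT matrix
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(kkt, rhs)
x_star, mu_star = sol[:n], sol[n:]

print("x* =", x_star, "mu* =", mu_star)
print("Ax* - b =", A @ x_star - b)                        # feasibility, ~0
print("stationarity =", P @ x_star + q + A.T @ mu_star)   # gradient of Lagrangian, ~0
```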
### Критерий остановки
Получим выражение для значения
$$
f(x) - \inf_v(\hat{f}(x + v) \; | \; A(x+v) = b),
$$
где $\hat{f}$ - квадратичная аппроксимация функции $f$.
Для этого
$$
\langle h^{\top} \rvert \cdot \quad f''(x)h + A^{\top}w = -f'(x)
$$
с учётом $Ah = 0$ получаем
$$
h^{\top}f''(x)h = -f'(x)^{\top}h
$$
Тогда
$$
\inf_v(\hat{f}(x + v) \; | \; A(x+v) = b) = f(x) - \frac{1}{2}h^{\top}f''(x)h
$$
**Вывод:** величина $h^{\top}f''(x)h$ является наиболее адекватным критерием остановки метода Ньютона.
### Теорема сходимости
Сходимость метода аналогична сходимости метода Ньютона для задачи безусловной оптимизации.
**Теорема**
Пусть выполнены следующие условия
- множество уровней $S = \{ x \; | \; x \in D(f), \; f(x) \leq f(x_0), \; Ax = b \}$ замкнуто и $x_0 \in D(f), \; Ax_0 = b$
- для любых $x \in S$ и $\tilde{x} \in S$ гессиан $f''(x)$ липшицев
- на множестве $S$ $\|f''(x)\|_2 \leq M $ и норма обратной матрицы KKT системы ограничена сверху
Тогда метод Ньютона сходится к паре $(x^*, \mu^*)$ линейно, а при достижении достаточной близости к решению - квадратично.
## Случай недопустимой начальной точки
- Метод Ньютона требует чтобы начальная точка лежала в допустимом множестве
- Что делать, если поиск такой точки неочевиден: например, если область определения $f$ не совпадает с $\mathbb{R}^n$
- Пусть начальная точка не является допустимой, в этом случае условия KKT можно записать так
$$
\begin{bmatrix}
f''(x) & A^{\top}\\
A & 0
\end{bmatrix}
\begin{bmatrix}
v\\
w
\end{bmatrix}
= -
\begin{bmatrix}
f'(x)\\
{\color{red}{Ax - b}}
\end{bmatrix}
$$
- Если $x$ допустима, то система совпадает с системой для обычного метода Ньютона
### Прямо-двойственная интерпретация
- Метод *прямо-двойственный*, если на каждой итерации обновляются прямые и двойственные переменные
- Покажем, что это значит. Для этого запишем условия оптимальности в виде
$$
r(x^*, \mu^*) = (r_p(x^*, \mu^*), r_d(x^*, \mu^*)) = 0,
$$
где $r_p(x, \mu) = Ax - b$ и $r_d(x, \mu) = f'(x) + A^{\top}\mu$
- Решим систему методом Ньютона:
$$
r(y + z) \approx r(y) + Dr(y)z = 0
$$
- Прямо-двойственный шаг в методе Ньютона определим как решение линейной системы
$$
Dr(y)z = -r(y)
$$
или более подробно
$$
\begin{bmatrix}
f''(x) & A^{\top}\\
A & 0
\end{bmatrix}
\begin{bmatrix}
z_p\\
z_d
\end{bmatrix}
= -
\begin{bmatrix}
r_p(x, \mu)\\
r_d(x, \mu)
\end{bmatrix}
= -
\begin{bmatrix}
f'(x) + A^{\top}\mu\\
Ax - b
\end{bmatrix}
$$
- Заменим $z_d^+ = \mu + z_d$ и получим
$$
\begin{bmatrix}
f''(x) & A^{\top}\\
A & 0
\end{bmatrix}
\begin{bmatrix}
z_p\\
z_d^+
\end{bmatrix}
= -
\begin{bmatrix}
f'(x)\\
Ax - b
\end{bmatrix}
$$
- Система полностью эквивалентна ранее полученной в обозначениях
$$
v = z_p \qquad w = z_d^+ = \mu + z_d
$$
- Метод Ньютона даёт шаг для прямой переменной и обновлённое значение для двойственной
### Способ инициализации
- Удобный способ задания начального приближения: найти точку из области определения $f$ гораздо проще, чем из пересечения области определения и допустимого множества
- Метод Ньютона с недопустимой начальной точкой не может определить согласованность ограничений
### Псевдокод
```python
def NewtonEqualityInfeasible(f, gradf, hessf, A, b, stop_crit, line_search, x0, mu0, tol):
x = x0
mu = mu0
n = x.shape[0]
while True:
z_p, z_d = ComputeNewtonStep(hessf(x), A, b)
if stop_crit(x, z_p, z_d, gradf(x), **kwargs) < tol:
break
alpha = line_search(x, z_p, z_d, f, gradf(x), **kwargs)
x = x + alpha * z_p
mu = mu + alpha * z_d
return x
```
### Критерий остановки и линейный поиск
- Изменение $r_p$ после шага $z_p$
$$
A(x + \alpha z_p) - b = [A(x + z_p) = b] = Ax + \alpha(b - Ax) - b = (1 - \alpha)(Ax - b)
$$
- Итоговое изменение после $k$ шагов
$$
r^{(k)} = \prod_{i=0}^{k-1}(1 - \alpha^{(i)})r^{(0)}
$$
- Критерий остановки: $Ax = b$ и $\|r(x, \mu)\|_2 \leq \varepsilon$
- Линейный поиск: $c \in (0, 1/2)$, $\beta = (0, 1)$
```python
def linesearch(r, x, mu, z_p, z_d, c, beta):
alpha = 1
while np.linalg.norm(r(x + alpha * z_p, mu + alpha * z_d)) >= (1 - c * alpha) * np.linalg.norm(r(x, mu)):
alpha *= beta
return alpha
```
### Теорема сходимости
Результат аналогичен результату для допустимой начальной точки
**Теорема.** Пусть
- множество подуровней $S = \{(x, \mu) \; | \; x \in D(f), \; \| r(x, \mu) \|_2 \leq \| r(x_0, \mu_0) \|_2 \}$ замкнуто
- на множестве $S$ норма матрицы обратной к ККТ матрице ограничена
- гессиан липшицев на $S$.
Тогда сходимость метода линейна вдали от решения и квадратична при достаточном приближении к решению.
## Общая задача выпуклой оптимизации
\begin{equation*}
\begin{split}
& \min_{x \in \mathbb{R}^n} f_0(x)\\
\text{s.t. } & f_i (x) \leq 0 \qquad i=1,\ldots,m\\
& Ax = b,
\end{split}
\end{equation*}
где $f_i$ - выпуклые и дважды непрерывно дифференцируемы, $A \in \mathbb{R}^{p \times n}$ и $\mathrm{rank} \; A = p < n$.
Предполагаем, что задача строго разрешима, то есть выполняется условие Слейтера.
## Условия оптимальности
- Разрешимость прямой задачи
$$
Ax^* = b, \; f_i(x^*) \leq 0, \; i = 1,\ldots,m
$$
- Разрешимость двойственной задачи
$$
\lambda^* \geq 0
$$
- Стационарность лагранжиана
$$
f'_0(x^*) + \sum_{i=1}^m \lambda^*_if'_i(x^*) + A^{\top}\mu^* = 0
$$
- Условие дополняющей нежёсткости
$$
\lambda^*_i f_i(x^*) = 0, \qquad i = 1,\ldots, m
$$
## Идея
- Свести задачу с ограничениями типа **неравенств** к последовательности задач с ограничениями типа **равенств**
- Использовать методы для решения задачи с ограничениями типа равенств
\begin{equation*}
\begin{split}
& \min f_0(x) + \sum_{i=1}^m I_-(f_i(x))\\
\text{s.t. } & Ax = b,
\end{split}
\end{equation*}
где $I_-$ - индикаторная функция
$$
I_-(u) =
\begin{cases}
0, & u \leq 0\\
\infty, & u > 0
\end{cases}
$$
**Проблема.** Теперь целевая функция - **недифференцируема**.
## Логарифмический барьер
**Идея.** Приблизить функцию $I_-(u)$ функцией
$$
\hat{I}_-(u) = -t\log(-u),
$$
где $t > 0$ - параметр.
- Функции $I_-(u)$ и $\hat{I}_-(u)$ выпуклые и неубывающие
- Однако $\hat{I}_-(u)$ **дифференцируема** и приближается к $I_-(u)$ при $t \to 0$
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.rc("text", usetex=True)
x = np.linspace(-2, 0, 100000, endpoint=False)
plt.figure(figsize=(10, 6))
for t in [0.1, 0.5, 1, 1.5, 2]:
plt.plot(x, -t * np.log(-x), label=r"$t = " + str(t) + "$")
plt.legend(fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.xlabel("$u$", fontsize=20)
```
### "Ограниченная" задача
\begin{equation*}
\begin{split}
& \min f_0(x) + \sum_{i=1}^m -t \log(-f_i(x))\\
\text{s.t. } & Ax = b,
\end{split}
\end{equation*}
- Задача по-прежнему **выпуклая**
- Функция
$$
\phi(x) = -\sum\limits_{i=1}^m \log(-f_i(x))
$$
называется *логарифмическим барьером*. Её область определения - множество точек, для которых ограничения типа неравенств выполняются строго.
**Упражнение.** Найдите градиент и гессиан $\phi(x)$
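For a quick check of the formulas (an editorial sketch; the constraint $f_1(x) = x - 1 \leq 0$ is an arbitrary one-dimensional example), `sympy` can differentiate the barrier of a single constraint:
```python
import sympy as sp

x = sp.symbols('x')
f1 = x - 1                                   # single constraint f1(x) <= 0
phi = -sp.log(-f1)                           # log barrier term
print(sp.simplify(sp.diff(phi, x)))          # mathematically equal to -f1'/f1
print(sp.simplify(sp.diff(phi, x, 2)))       # mathematically equal to (f1'/f1)**2
```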
## Центральный путь
Для каждого $t > 0$ "ограниченная" задача имеет единственное решение $x^*(t)$.
**Определение.** Последовательность $x^*(t)$ для $t > 0$ образует *центральный путь*.
## Условия оптимальности для "ограниченной" задачи
- Разрешимость прямой задачи
$$
Ax^*(t) = b, \; f_i(x^*) < 0, \; i = 1,\ldots,m
$$
- Стационарность лагранжиана
\begin{equation*}
\begin{split}
& f'_0(x^*(t)) + \phi'(x^*(t)) + A^{\top}\hat{\mu} = \\
& = f'_0(x^*(t)) - t\sum_{i=1}^m \frac{f_i'(x^*(t))}{f_i(x^*(t))} + A^{\top}\hat{\mu} = 0
\end{split}
\end{equation*}
- Обозначим
$$
\lambda^*_i(t) = -\frac{t}{f_i(x^*(t))} \; i=1,\ldots,m \text{ и } \mu^* = \hat{\mu}
$$
- Тогда условие оптимальности можно записать как
$$
f'_0(x^*(t)) + \sum_{i=1}^m \lambda^*_i(t)f_i'(x^*(t)) + A^{\top}\mu^* = 0
$$
- Тогда $x^*(t)$ минимизирует лагранжиан
$$
L = f_0(x) + \sum_{i=1}^m \lambda_if_i(x) + \mu^{\top}(Ax - b)
$$
для $\lambda = \lambda^*(t)$ и $\mu = \mu^*$.
### Зазор двойственности
- Двойственная функция $g(\lambda^*(t), \mu^*)$ конечна и представима в виде
\begin{equation*}
\begin{split}
g(\lambda^*(t), \mu^*) & = f_0(x^*(t)) + \sum_{i=1}^m \lambda^*_i(t)f_i(x^*(t)) + (\mu^*)^{\top}(Ax^*(t) - b)\\
& = f_0(x^*(t)) - mt
\end{split}
\end{equation*}
- Зазор двойственности
$$
f_0(x^*(t)) - p^* \leq mt
$$
- При $t \to 0$ зазор двойственности равен 0 и центральный путь сходится к решению исходной задачи.
## ККТ интерпретация
Условия оптимальности для "ограниченной" задачи эквивалентны условиям оптимальности для исходной задачи если
$$
-\lambda_i f_i(x) = 0 \Rightarrow - \lambda_i f_i(x) = t \quad i = 1,\ldots, m
$$
## Физическая интерпретация
- Предположим, что ограничений типа равенства нет
- Рассмотрим неквантовую частицу в поле сил
- Каждому ограничению $f_i(x) \leq 0$ поставим в соответствие силу
$$
F_i(x) = -\nabla(-\log(-f_i(x))) = \frac{f'_i(x)}{f_i(x)}
$$
- Целевой функции также поставим в соответствие силу
$$
F_0(x) = -\frac{f'_0(x)}{t}
$$
- Каждая точка из центрального пути $x^*(t)$ - это положение частицы, в котором выполняется баланс сил ограничений и целевой функции
- С уменьшением $t$ сила для целевой функции доминирует, и частица стремится занять положение, расположенное ближе к оптимальному
- Поскольку сила ограничений стремится к бесконечности при приближении частицы к границе, частица никогда не вылетит из допустимого множества
## Барьерный метод
- $x_0$ должна быть допустимой
- $t_0 > 0$ - начальное значение параметра
- $\alpha \in (0, 1)$ - множитель для уменьшения $t_0$
```python
def BarrierMethod(f, x0, t0, tol, alpha, **kwargs):
x = x0
t = t0
while True:
x = SolveBarrierProblem(f, t, x, **kwargs)
if m * t < tol:
break
t *= alpha
return x
```
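A toy illustration of the central path and the $mt$ duality gap (an editorial sketch; the one-dimensional problem below is an arbitrary example, not from the original notes):
```python
import numpy as np

# Minimize x subject to x >= 0: f0(x) = x, one constraint f1(x) = -x <= 0,
# so m = 1 and p* = 0. The barrier problem min_x x - t*log(x) has the exact
# solution x*(t) = t, hence the gap f0(x*(t)) - p* = t <= m*t.
xs = np.linspace(1e-6, 2.0, 200001)                  # dense grid on x > 0
for t in [1.0, 0.1, 0.01]:
    x_star = xs[np.argmin(xs - t * np.log(xs))]      # grid minimizer of the barrier problem
    print(f"t = {t:5.2f}  x*(t) ~ {x_star:.4f}  (analytic: {t})  gap <= m*t = {t}")
```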
### Точность решения "ограниченной" задачи
- Точное решение "ограниченной" задачи не требуется, так как приближённый центральный путь всё равно сойдётся к решению исходной задачи
- Двойственные переменные перестают быть двойственными при неточном решении, но это поправимо введением поправочных слагаемых
- Разница в стоимости точного и неточного центрального пути - несколько итераций метода Ньютона, поэтому существенного ускорения добиться нельзя
### Выбор параметров
- Множитель $\alpha$
- При $\alpha \sim 1$, **мало** итераций нужно для решения "ограниченной" задачи, но **много** для нахождения точного решения исходной задачи
- При $\alpha \sim 10^{-5}$ **много** итераций нужно для решения "ограниченной" задачи, но **мало** для нахождения точного решения исходной задачи
- Начальный параметр $t_0$
- Аналогичная альтернатива как и для параметра $\alpha$
- Параметр $t_0$ задаёт начальную точку для центрального пути
### Почти теорема сходимости
- Как было показано выше при $t \to 0$ барьерный метод сходится к решению исходной задачи
- Скорость сходимости напрямую связана с параметрами $\alpha$ и $t_0$, как показано ранее
- Основная сложность - быстрое решение вспомогательных задач методом Ньютона
## Задача поиска допустимого начального приближения
- Барьерный метод требует допустимого начального приближения
- Метод разбивается на две фазы
- Первая фаза метода ищет допустимое начальное приближение
- Вторая фаза использует найденное начальное приближение для запуска барьерного метода
### Первая фаза метода
Простой поиск допустимой точки
\begin{equation*}
\begin{split}
& \min s\\
\text{s.t. } & f_i(x) \leq s\\
& Ax = b
\end{split}
\end{equation*}
- эта задача всегда имеет строго допустимое начальное приближение
- если $s^* < 0$, то $x^*$ строго допустима и может быть использована в барьерном методе
- если $s^* > 0$, то задача не разрешима и допустимое множество пусто
### Сумма несогласованностей
\begin{equation*}
\begin{split}
& \min s_1 + \ldots + s_m\\
\text{s.t. } & f_i(x) \leq s_i\\
& Ax = b\\
& s \geq 0
\end{split}
\end{equation*}
- оптимальное значение равно нулю и достигается тогда и только тогда, когда система ограничений совместна
- если задача неразрешима, то можно определить какие ограничения к этому приводят, то есть какие $s_i > 0$
### Вторая фаза метода
- После получения допустимой начальной точки $x_0$ выполняется обычный метод Ньютона для задачи с ограничениями равенствами
## Прямо-двойственный метод
Похож на барьерный метод, но
- нет разделения на внешние итерации и внутренние: на каждой итерации обновляются прямые и двойственные переменные
- направление определяется методом Ньютона, применённого к модифицированной системе ККТ
- последовательность точек в прямо-двойственном методе не обязательно допустимы
- работает даже когда задача не строго допустима
## Резюме
- Метод Ньютона для выпуклой задачи с ограничениями типа равенств
- Случай недопустимой начальной точки
- Прямой барьерный метод
- Прямо-двойственный метод
|
github_jupyter
|
def NewtonEqualityFeasible(f, gradf, hessf, A, b,
stop_crit, line_search, x0, tol):
x = x0
n = x.shape[0]
while True:
newton_matrix = [[hessf(x), A.T], [A, 0]]
rhs = [-gradf(x), 0]
w = solve_lin_sys(newton_matrix, rhs)
h = w[:n]
if stop_crit(x, h, gradf(x), **kwargs) < tol:
break
alpha = line_search(x, h, f, gradf(x), **kwargs)
x = x + alpha * h
return x
def NewtonEqualityInfeasible(f, gradf, hessf, A, b, stop_crit, line_search, x0, mu0, tol):
x = x0
mu = mu0
n = x.shape[0]
while True:
z_p, z_d = ComputeNewtonStep(hessf(x), A, b)
if stop_crit(x, z_p, z_d, gradf(x), **kwargs) < tol:
break
alpha = line_search(x, z_p, z_d, f, gradf(x), **kwargs)
x = x + alpha * z_p
mu = mu + alpha * z_d
return x
def linesearch(r, x, mu, z_p, z_d, c, beta):
alpha = 1
while np.linalg.norm(r(x + alpha * z_p, mu + alpha * z_d)) >= (1 - c * alpha) * np.linalg.norm(r(x, mu)):
alpha *= beta
return alpha
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.rc("text", usetex=True)
x = np.linspace(-2, 0, 100000, endpoint=False)
plt.figure(figsize=(10, 6))
for t in [0.1, 0.5, 1, 1.5, 2]:
plt.plot(x, -t * np.log(-x), label=r"$t = " + str(t) + "$")
plt.legend(fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.xlabel("$u$", fontsize=20)
def BarrierMethod(f, x0, t0, tol, alpha, **kwargs):
x = x0
t = t0
while True:
x = SolveBarrierProblem(f, t, x, **kwargs)
if m * t < tol:
break
t *= alpha
return x
| 0.564339 | 0.92157 |
# Purpose
This notebook analyses the distribution of listening events across Freebase genres, excluding the AllMusic genres, by looking at how many top genres it takes to cover a majority of listening events.
## Loading Libraries
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import itertools
import os
import swifter
```
## Preparing Data
```
allMusicGenres = pd.read_parquet('../data/LastFM1bKidArtistToAllMusicGenre.gzip.parquet')
allMusicGenres = list(allMusicGenres['Genre'].cat.categories)
allMusicGenres
artistToGenre = pd.read_parquet('../data/LastFM1bKidArtistToFreebaseGenre.gzip.parquet')
artistToGenre.head(5)
artistToGenre = artistToGenre[~artistToGenre['Genre'].isin(allMusicGenres)]
```
### Ranking Genres from most listened to least
```
genreRanking = pd.read_parquet('../data/LastFM1bKidListeningEventsWithUsers', columns = ['Artist', 'User Id'])
genreRanking = genreRanking.groupby(['Artist']).agg(ArtistCount = ('User Id', 'count')).reset_index()
genreRanking.head(5)
genreRanking = genreRanking.merge(artistToGenre, on = 'Artist')
genreRanking.drop(columns = ['Artist'], inplace = True)
genreRanking.head(5)
genreRanking = genreRanking.groupby(['Genre'], observed = True).agg(Count = ('ArtistCount', 'sum')).reset_index()
# Sort so that the 'Top X' accumulation below really starts from the most listened genres
genreRanking = genreRanking.sort_values('Count', ascending = False).reset_index(drop = True)
genreRanking.head(5)
```
### User to Artist counts
```
userToArtistCounts = pd.read_parquet('../data/LastFM1bKidListeningEventsWithUsers', columns = ['Education Level', 'Age', 'Artist'])
userToArtistCounts.drop(columns = ['Partition'], inplace = True)
userToArtistCounts.head(5)
userToArtistCounts = userToArtistCounts.groupby(['Education Level', 'Age', 'Artist'], observed = True).agg(Count = ('Artist', 'count')).reset_index()
userToArtistCounts.head(5)
```
### Top X Genres to Listening Event Counts
```
calculations = pd.DataFrame({'Genres': itertools.accumulate(map(lambda x: [x], genreRanking['Genre'].to_list()))})
calculations.head(5)
def CountListeningEventsIn(genres):
result = artistToGenre[artistToGenre['Genre'].isin(genres)][['Artist']].drop_duplicates()
result = userToArtistCounts.merge(result, on = 'Artist')
result = result.groupby(['Education Level', 'Age'], observed = True).agg(Total = ('Count', 'sum'))
result = result.rename(columns = {'Total': len(genres)})
return result
data = calculations['Genres'].swifter.allow_dask_on_strings(enable = True).apply(CountListeningEventsIn)
data = pd.concat(data.to_list(), axis = 1)
data
```
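The `itertools.accumulate(map(lambda x: [x], ...))` call above builds an expanding prefix list of genres (top 1, top 2, ...); a minimal standalone illustration of the pattern (editorial addition, toy values):
```
list(itertools.accumulate(map(lambda x: [x], ['rock', 'pop', 'jazz'])))
# -> [['rock'], ['rock', 'pop'], ['rock', 'pop', 'jazz']]
```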
## Graphs
### by Age
```
graphData = data.fillna(0).reset_index().drop(columns = 'Education Level').set_index('Age').unstack().reset_index().rename(columns = {'level_0': 'Top X', 0: 'Listening Event Count'})
temp = graphData.groupby(['Age']).agg(Total = ('Listening Event Count', 'max'))
graphData = graphData.merge(temp, on = 'Age')
graphData['% of Listening Events'] = graphData['Listening Event Count'] / graphData['Total']
graphData
sns.lineplot(data = graphData, x = 'Top X', hue = 'Age', y = '% of Listening Events')
plt.show()
sns.lineplot(data = graphData[graphData['Top X'] <= 10], x = 'Top X', hue = 'Age', y = '% of Listening Events');
```
### by Education Level
```
graphData = data.fillna(0).reset_index().drop(columns = 'Age').groupby('Education Level').sum().unstack().reset_index().rename(columns = {'level_0': 'Top X', 0: 'Listening Event Count'})
temp = graphData.groupby(['Education Level']).agg(Total = ('Listening Event Count', 'max'))
graphData = graphData.merge(temp, on = 'Education Level')
graphData['% of Listening Events'] = graphData['Listening Event Count'] / graphData['Total']
graphData
sns.lineplot(data = graphData, x = 'Top X', hue = 'Education Level', y = '% of Listening Events')
plt.show()
sns.lineplot(data = graphData[graphData['Top X'] <= 10], x = 'Top X', hue = 'Education Level', y = '% of Listening Events');
```
|
github_jupyter
|
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import itertools
import os
import swifter
allMusicGenres = pd.read_parquet('../data/LastFM1bKidArtistToAllMusicGenre.gzip.parquet')
allMusicGenres = list(allMusicGenres['Genre'].cat.categories)
allMusicGenres
artistToGenre = pd.read_parquet('../data/LastFM1bKidArtistToFreebaseGenre.gzip.parquet')
artistToGenre.head(5)
artistToGenre = artistToGenre[~artistToGenre['Genre'].isin(allMusicGenres)]
genreRanking = pd.read_parquet('../data/LastFM1bKidListeningEventsWithUsers', columns = ['Artist', 'User Id'])
genreRanking = genreRanking.groupby(['Artist']).agg(ArtistCount = ('User Id', 'count')).reset_index()
genreRanking.head(5)
genreRanking = genreRanking.merge(artistToGenre, on = 'Artist')
genreRanking.drop(columns = ['Artist'], inplace = True)
genreRanking.head(5)
genreRanking = genreRanking.groupby(['Genre'], observed = True).agg(Count = ('ArtistCount', 'sum')).reset_index()
# Sort so that the 'Top X' accumulation below really starts from the most listened genres
genreRanking = genreRanking.sort_values('Count', ascending = False).reset_index(drop = True)
genreRanking.head(5)
userToArtistCounts = pd.read_parquet('../data/LastFM1bKidListeningEventsWithUsers', columns = ['Education Level', 'Age', 'Artist'])
userToArtistCounts.drop(columns = ['Partition'], inplace = True)
userToArtistCounts.head(5)
userToArtistCounts = userToArtistCounts.groupby(['Education Level', 'Age', 'Artist'], observed = True).agg(Count = ('Artist', 'count')).reset_index()
userToArtistCounts.head(5)
calculations = pd.DataFrame({'Genres': itertools.accumulate(map(lambda x: [x], genreRanking['Genre'].to_list()))})
calculations.head(5)
def CountListeningEventsIn(genres):
result = artistToGenre[artistToGenre['Genre'].isin(genres)][['Artist']].drop_duplicates()
result = userToArtistCounts.merge(result, on = 'Artist')
result = result.groupby(['Education Level', 'Age'], observed = True).agg(Total = ('Count', 'sum'))
result = result.rename(columns = {'Total': len(genres)})
return result
data = calculations['Genres'].swifter.allow_dask_on_strings(enable = True).apply(CountListeningEventsIn)
data = pd.concat(data.to_list(), axis = 1)
data
graphData = data.fillna(0).reset_index().drop(columns = 'Education Level').set_index('Age').unstack().reset_index().rename(columns = {'level_0': 'Top X', 0: 'Listening Event Count'})
temp = graphData.groupby(['Age']).agg(Total = ('Listening Event Count', 'max'))
graphData = graphData.merge(temp, on = 'Age')
graphData['% of Listening Events'] = graphData['Listening Event Count'] / graphData['Total']
graphData
sns.lineplot(data = graphData, x = 'Top X', hue = 'Age', y = '% of Listening Events')
plt.show()
sns.lineplot(data = graphData[graphData['Top X'] <= 10], x = 'Top X', hue = 'Age', y = '% of Listening Events');
graphData = data.fillna(0).reset_index().drop(columns = 'Age').groupby('Education Level').sum().unstack().reset_index().rename(columns = {'level_0': 'Top X', 0: 'Listening Event Count'})
temp = graphData.groupby(['Education Level']).agg(Total = ('Listening Event Count', 'max'))
graphData = graphData.merge(temp, on = 'Education Level')
graphData['% of Listening Events'] = graphData['Listening Event Count'] / graphData['Total']
graphData
sns.lineplot(data = graphData, x = 'Top X', hue = 'Education Level', y = '% of Listening Events')
plt.show()
sns.lineplot(data = graphData[graphData['Top X'] <= 10], x = 'Top X', hue = 'Education Level', y = '% of Listening Events');
| 0.296247 | 0.834407 |
Task 1 - Install Spark, download datasets, create final dataframe. If you get an error regarding tar or wget, it is probably due to the Spark file being removed from the repository. Go to https://downloads.apache.org/spark/ and choose an equivalent version of Spark and Hadoop to download. So if 2.4.7 is not available, download the next version. At the time of this project creation, 2.4.7 exists.
```
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://downloads.apache.org/spark/spark-2.4.7/spark-2.4.7-bin-hadoop2.7.tgz
!tar xf spark-2.4.7-bin-hadoop2.7.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.4.7-bin-hadoop2.7"
import findspark
findspark.init()
from google.colab import files
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import isnan, when, count, col, lit, trim, avg, ceil
from pyspark.sql.types import StringType
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
!wget https://s3.amazonaws.com/drivendata/data/7/public/4910797b-ee55-40a7-8668-10efd5c1b960.csv -O features.csv
!wget https://s3.amazonaws.com/drivendata/data/7/public/0bf8bc6e-30d0-4c50-956a-603fc693d966.csv -O labels.csv
!ls
sc = SparkSession.builder.master("local[*]").getOrCreate()
features = sc.read.csv('features.csv', inferSchema=True, header=True)
label = sc.read.csv('labels.csv', inferSchema=True, header=True)
print(features.count())
print(label.count())
print(features.columns)
print(label.columns)
data = features.join(label, on=('id'))
print(data.count())
print(data.columns)
```
Task 2 - Change column type, drop duplicated rows, remove whitespacs. If you are disconnected, please run the previous cells by clicking on this cell, going to Runtime, then clicking Run before.
```
print(data.printSchema())
print(data.show(10))
data = data.withColumn('region_code', col('region_code').cast(StringType())) \
.withColumn('district_code', col('district_code').cast(StringType()))
data.printSchema()
data = data.dropDuplicates(['id'])
data.count()
str_cols = [item[0] for item in data.dtypes if item[1].startswith('string')]
for cols in str_cols:
data = data.withColumn(cols, trim(data[cols]))
```
Task 3 - Remove columns with null values more than a threshold. If you are disconnected, please run the previous cells by clicking on this cell, going to Runtime, then clicking Run before.
```
agg_row = data.select([(count(when(isnan(c) | col(c).isNull(), c))/data.count()).alias(c) for c in data.columns if c not in {'date_recorded', 'public_meeting', 'permit'}]).collect()
agg_dict_list = [row.asDict() for row in agg_row]
agg_dict = agg_dict_list[0]
col_null = list({i for i in agg_dict if agg_dict[i] > 0.4})
print(agg_dict)
print(col_null)
data = data.drop(*col_null)
```
Task 4 - Group, aggregate, create pivot table. If you are disconnected, please run the previous cells by clicking on this cell, going to Runtime, then clicking Run before.
```
data.groupBy('recorded_by').count().show()
data.groupBy('water_quality').count().orderBy('count', ascending=False).show()
data = data.drop('recorded_by')
data.groupBy('status_group').pivot('region').sum('amount_tsh').show()
```
Task 5 - Convert categories with low frequency to Others, impute missing values. If you are disconnected, please run the previous cells by clicking on this cell, going to Runtime, then clicking Run before.
```
print(str_cols)
for column in str_cols[:2]:
data.groupBy(column).count().orderBy('count', ascending=False).show()
values_cat = data.groupBy(column).count().collect()
lessthan = [x[0] for x in values_cat if x[1] < 1000]
data = data.withColumn(column, when(col(column).isin(lessthan), 'Others').otherwise(col(column)))
data.groupBy(column).count().orderBy('count', ascending=False).show()
data.groupBy('population').count().orderBy('population').show()
data = data.withColumn('population', when(col('population') < 2, lit(None)).otherwise(col('population')))
w = Window.partitionBy(data['district_code'])
data = data.withColumn('population', when(col('population').isNull(), avg(data['population']).over(w)).otherwise(col('population')))
data = data.withColumn('population', ceil(data['population']))
data.groupBy('population').count().orderBy('population').show()
```
Task 6 - Make visualizations. If you are disconnected, please run the previous cells by clicking on this cell, going to Runtime, then clicking Run before.
```
color_status = {'functional': 'green', 'non functional': 'red', 'functional needs repair': 'blue'}
cols = ['status_group', 'payment_type', 'longitude', 'latitude', 'gps_height']
df = data.select(cols).toPandas()
fig, ax = plt.subplots(figsize=(12,8))
sns.countplot(x='payment_type', hue='status_group', data=df, ax=ax, palette=color_status)
plt.xticks(rotation=45)
fig, ax = plt.subplots(figsize=(12,8))
sns.scatterplot(x='longitude', y='latitude', hue='status_group', data=df, ax=ax, palette=color_status)
plt.xticks(rotation=45)
row_functional = (df['status_group'] == 'functional')
row_non_fnctional = (df['status_group'] == 'non functional')
row_repair = (df['status_group'] == 'functional needs repair')
col = 'gps_height'
fig, ax = plt.subplots(figsize=(12,8))
sns.distplot(df[col][row_functional], color='green', label='functional', ax=ax)
sns.distplot(df[col][row_non_fnctional], color='red', label='non functional', ax=ax)
sns.distplot(df[col][row_repair], color='blue', label='functional needs repair', ax=ax)
plt.legend();
data.groupby(col).count().orderBy(col).show()
```
|
github_jupyter
|
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://downloads.apache.org/spark/spark-2.4.7/spark-2.4.7-bin-hadoop2.7.tgz
!tar xf spark-2.4.7-bin-hadoop2.7.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.4.7-bin-hadoop2.7"
import findspark
findspark.init()
from google.colab import files
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import isnan, when, count, col, lit, trim, avg, ceil
from pyspark.sql.types import StringType
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
!wget https://s3.amazonaws.com/drivendata/data/7/public/4910797b-ee55-40a7-8668-10efd5c1b960.csv -O features.csv
!wget https://s3.amazonaws.com/drivendata/data/7/public/0bf8bc6e-30d0-4c50-956a-603fc693d966.csv -O labels.csv
!ls
sc = SparkSession.builder.master("local[*]").getOrCreate()
features = sc.read.csv('features.csv', inferSchema=True, header=True)
label = sc.read.csv('labels.csv', inferSchema=True, header=True)
print(features.count())
print(label.count())
print(features.columns)
print(label.columns)
data = features.join(label, on=('id'))
print(data.count())
print(data.columns)
print(data.printSchema())
print(data.show(10))
data = data.withColumn('region_code', col('region_code').cast(StringType())) \
.withColumn('district_code', col('district_code').cast(StringType()))
data.printSchema()
data = data.dropDuplicates(['id'])
data.count()
str_cols = [item[0] for item in data.dtypes if item[1].startswith('string')]
for cols in str_cols:
data = data.withColumn(cols, trim(data[cols]))
agg_row = data.select([(count(when(isnan(c) | col(c).isNull(), c))/data.count()).alias(c) for c in data.columns if c not in {'date_recorded', 'public_meeting', 'permit'}]).collect()
agg_dict_list = [row.asDict() for row in agg_row]
agg_dict = agg_dict_list[0]
col_null = list({i for i in agg_dict if agg_dict[i] > 0.4})
print(agg_dict)
print(col_null)
data = data.drop(*col_null)
data.groupBy('recorded_by').count().show()
data.groupBy('water_quality').count().orderBy('count', ascending=False).show()
data = data.drop('recorded_by')
data.groupBy('status_group').pivot('region').sum('amount_tsh').show()
print(str_cols)
for column in str_cols[:2]:
data.groupBy(column).count().orderBy('count', ascending=False).show()
values_cat = data.groupBy(column).count().collect()
lessthan = [x[0] for x in values_cat if x[1] < 1000]
data = data.withColumn(column, when(col(column).isin(lessthan), 'Others').otherwise(col(column)))
data.groupBy(column).count().orderBy('count', ascending=False).show()
data.groupBy('population').count().orderBy('population').show()
data = data.withColumn('population', when(col('population') < 2, lit(None)).otherwise(col('population')))
w = Window.partitionBy(data['district_code'])
data = data.withColumn('population', when(col('population').isNull(), avg(data['population']).over(w)).otherwise(col('population')))
data = data.withColumn('population', ceil(data['population']))
data.groupBy('population').count().orderBy('population').show()
color_status = {'functional': 'green', 'non functional': 'red', 'functional needs repair': 'blue'}
cols = ['status_group', 'payment_type', 'longitude', 'latitude', 'gps_height']
df = data.select(cols).toPandas()
fig, ax = plt.subplots(figsize=(12,8))
sns.countplot(x='payment_type', hue='status_group', data=df, ax=ax, palette=color_status)
plt.xticks(rotation=45)
fig, ax = plt.subplots(figsize=(12,8))
sns.scatterplot(x='longitude', y='latitude', hue='status_group', data=df, ax=ax, palette=color_status)
plt.xticks(rotation=45)
row_functional = (df['status_group'] == 'functional')
row_non_fnctional = (df['status_group'] == 'non functional')
row_repair = (df['status_group'] == 'functional needs repair')
col = 'gps_height'
fig, ax = plt.subplots(figsize=(12,8))
sns.distplot(df[col][row_functional], color='green', label='functional', ax=ax)
sns.distplot(df[col][row_non_fnctional], color='red', label='non functional', ax=ax)
sns.distplot(df[col][row_repair], color='blue', label='functional needs repair', ax=ax)
plt.legend();
data.groupby(col).count().orderBy(col).show()
| 0.538983 | 0.746947 |
This course is based on the book below and the corresponding Jupyter notebooks of *Jake VanderPlas*. The course in its original format can be found [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython). As reading from and writing to a file also belongs to the basics, Kinga Sipos created the complementary Chapter *14-File Input and Output*. Chapter *18-Plotting* was also added to provide a first experience with plotting in Python. Some other small details were created and/or changed. Finally, extra exercises were added to enhance the interactivity of the course.
<img src="fig/cover-large.gif">
*A Whirlwind Tour of Python* is a fast-paced introduction to essential
components of the Python language for researchers and developers.
According to *Jake VanderPlas* the material is particularly aimed at those
who wish to use Python for data
science and/or scientific programming, and in this capacity serves as an
introduction to his next book, *The Python Data Science Handbook*.
These notebooks are adapted from lectures and workshops *Jake VanderPlas* has given on these
topics at University of Washington and at various conferences, meetings, and
workshops around the world.
## Index
0. [Introduction](00-Introduction.ipynb)
1. [How to Run Python Code](01-How-to-Run-Python-Code.ipynb)
2. [Basic Python Syntax](02-Basic-Python-Syntax.ipynb)
3. [Python Semantics: Variables](03-Semantics-Variables.ipynb)
4. [Python Semantics: Operators](04-Semantics-Operators.ipynb)
5. [Built-In Scalar Types](05-Built-in-Scalar-Types.ipynb)
6. [Built-In Data Structures](06-Built-in-Data-Structures.ipynb)
7. [Control Flow Statements](07-Control-Flow-Statements.ipynb)
8. [Defining Functions](08-Defining-Functions.ipynb)
9. [Errors and Exceptions](09-Errors-and-Exceptions.ipynb)
10. [Iterators](10-Iterators.ipynb)
11. [List Comprehensions](11-List-Comprehensions.ipynb)
12. [Generators and Generator Expressions](12-Generators.ipynb)
13. [Strings and Regular Expressions](13-Strings-and-Regular-Expressions.ipynb)
14. [File Input and Output](14-File-Input-and-Output.ipynb)
15. [Modules and Packages](15-Modules-and-Packages.ipynb)
16. [Preview of Data Science Tools](16-Preview-of-Data-Science-Tools.ipynb)
17. [Resources for Further Learning](17-Further-Resources.ipynb)
18. [Plotting](18-Plotting.ipynb)
19. [Appendix: Figure Code](19-Figures.ipynb)
20. [Exercises](20-Exercises.ipynb)
21. [Solutions](21-Solutions.ipynb)
## License
The materials from *Jake VanderPlas* were modified respecting the licence they were published with. The modified material is released under the same "No Rights Reserved" [CC0](LICENSE)
license, and thus you are free to re-use, modify, build-on, and enhance
this material for any purpose.
Respecting the wish of *Jake VanderPlas*, we include a proper attribution and/or citation of the materials constituting the basis of this course:
> *A Whirlwind Tour of Python* by Jake VanderPlas (O’Reilly). Copyright 2016 O’Reilly Media, Inc., 978-1-491-96465-1
Read more about CC0 [here](https://creativecommons.org/share-your-work/public-domain/cc0/).
|
github_jupyter
|
This course is based on the book below and the corresponding Jupyter notebooks of *Jake VanderPlas*. The course in its original format can be found [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython). As reading from and writing to a file also belongs to the basics, Kinga Sipos created the complementary Chapter *14-File Input and Output*. Chapter *18-Plotting* was also added to provide a first experience with plotting in Python. Some other small details were created and/or changed. Finally, extra exercises were added to enhance the interactivity of the course.
<img src="fig/cover-large.gif">
*A Whirlwind Tour of Python* is a fast-paced introduction to essential
components of the Python language for researchers and developers.
According to *Jake VanderPlas* the material is particularly aimed at those
who wish to use Python for data
science and/or scientific programming, and in this capacity serves as an
introduction to his next book, *The Python Data Science Handbook*.
These notebooks are adapted from lectures and workshops *Jake VanderPlas* has given on these
topics at University of Washington and at various conferences, meetings, and
workshops around the world.
## Index
0. [Introduction](00-Introduction.ipynb)
1. [How to Run Python Code](01-How-to-Run-Python-Code.ipynb)
2. [Basic Python Syntax](02-Basic-Python-Syntax.ipynb)
3. [Python Semantics: Variables](03-Semantics-Variables.ipynb)
4. [Python Semantics: Operators](04-Semantics-Operators.ipynb)
5. [Built-In Scalar Types](05-Built-in-Scalar-Types.ipynb)
6. [Built-In Data Structures](06-Built-in-Data-Structures.ipynb)
7. [Control Flow Statements](07-Control-Flow-Statements.ipynb)
8. [Defining Functions](08-Defining-Functions.ipynb)
9. [Errors and Exceptions](09-Errors-and-Exceptions.ipynb)
10. [Iterators](10-Iterators.ipynb)
11. [List Comprehensions](11-List-Comprehensions.ipynb)
12. [Generators and Generator Expressions](12-Generators.ipynb)
13. [Strings and Regular Expressions](13-Strings-and-Regular-Expressions.ipynb)
14. [File Input and Output](14-File-Input-and-Output.ipynb)
15. [Modules and Packages](15-Modules-and-Packages.ipynb)
16. [Preview of Data Science Tools](16-Preview-of-Data-Science-Tools.ipynb)
17. [Resources for Further Learning](17-Further-Resources.ipynb)
18. [Plotting](18-Plotting.ipynb)
19. [Appendix: Figure Code](19-Figures.ipynb)
20. [Exercises](20-Exercises.ipynb)
21. [Solutions](21-Solutions.ipynb)
## License
The materials from *Jake VanderPlas* were modified respecting the licence they were published with. The modified material is released under the same "No Rights Reserved" [CC0](LICENSE)
license, and thus you are free to re-use, modify, build-on, and enhance
this material for any purpose.
Respecting the wish of *Jake VanderPlast*, we include a proper attribution and/or citation of the materials consituting the basis of this course:
> *A Whirlwind Tour of Python* by Jake VanderPlas (O’Reilly). Copyright 2016 O’Reilly Media, Inc., 978-1-491-96465-1
Read more about CC0 [here](https://creativecommons.org/share-your-work/public-domain/cc0/).
| 0.617397 | 0.975202 |
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
```
# Exploratory Precipitation Analysis
```
# Find the most recent date in the data set.
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# Calculate the date one year from the last date in data set.
prev_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
results = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= prev_year).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(results, columns=['date', 'precipitation'])
df.set_index(df['date'], inplace=True)
# Sort the dataframe by date
df = df.sort_index()
# Use Pandas Plotting with Matplotlib to plot the data
df.plot(rot=90)
# Use Pandas to calculate the summary statistics for the precipitation data
df.describe()
```
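As a side note, the most recent date is hardcoded as `2017-08-23` above. A minimal sketch of how it could instead be queried from the table (assuming the same `Measurement` mapping, an open `session`, and dates stored as `'YYYY-MM-DD'` strings) might look like this:
```
# Hypothetical sketch: derive the most recent date instead of hardcoding it.
latest = session.query(func.max(Measurement.date)).scalar()    # e.g. '2017-08-23'
latest_date = dt.datetime.strptime(latest, '%Y-%m-%d').date()
prev_year = latest_date - dt.timedelta(days=365)
```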
# Exploratory Station Analysis
```
# Design a query to calculate the total number of stations in the dataset
session.query(func.count(Station.station)).all()
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).all()
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).\
filter(Measurement.station == 'USC00519281').all()
# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
import datetime as dt
from pandas.plotting import table
prev_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
results = session.query(Measurement.tobs).\
filter(Measurement.station == 'USC00519281').\
filter(Measurement.date >= prev_year).all()
df = pd.DataFrame(results, columns=['tobs'])
df.plot.hist(bins=12)
plt.tight_layout()
```
# Close session
```
# Close Session
session.close()
```
# Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were [first reported](https://arxiv.org/abs/1406.2661) in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
* [Pix2Pix](https://affinelayer.com/pixsrv/)
* [CycleGAN](https://github.com/junyanz/CycleGAN)
* [A whole list](https://github.com/wiseodd/generative-models)
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks _as close as possible_ to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.

The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
```
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
```
## Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input `inputs_real` and the generator input `inputs_z`. We'll assign them the appropriate sizes for each of the networks.
>**Exercise:** Finish the `model_inputs` function below. Create the placeholders for `inputs_real` and `inputs_z` using the input sizes `real_dim` and `z_dim` respectively.
```
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape=(None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, shape=(None, z_dim), name='input_z')
return inputs_real, inputs_z
```
## Generator network

Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
#### Variable Scope
Here we need to use `tf.variable_scope` for two reasons. Firstly, we're going to make sure all the variable names start with `generator`. Similarly, we'll prepend `discriminator` to the discriminator variables. This will help out later when we're training the separate networks.
We could just use `tf.name_scope` to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also _sample from it_ as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the `reuse` keyword for `tf.variable_scope` to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use `tf.variable_scope`, you use a `with` statement:
```python
with tf.variable_scope('scope_name', reuse=False):
# code here
```
Here's more from [the TensorFlow documentation](https://www.tensorflow.org/programmers_guide/variable_scope#the_problem) to get another look at using `tf.variable_scope`.
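As an aside, here is a minimal, hypothetical sketch of how `reuse` lets two calls share the same weights (TF1-style, not part of the exercise):
```python
# Hypothetical sketch: the second call reuses the variables created by the first.
import tensorflow as tf

def tiny_net(x, reuse=False):
    with tf.variable_scope('tiny', reuse=reuse):
        return tf.layers.dense(x, 4)   # variables live under the 'tiny' scope

a = tf.placeholder(tf.float32, (None, 3))
b = tf.placeholder(tf.float32, (None, 3))
out_a = tiny_net(a)              # creates tiny/dense/kernel and tiny/dense/bias
out_b = tiny_net(b, reuse=True)  # reuses those same variables instead of making new ones
```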
#### Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to `tf.maximum`. Typically, a parameter `alpha` sets the magnitude of the output for negative values. So, the output for negative input (`x`) values is `alpha*x`, and the output for positive `x` is `x`:
$$
f(x) = \max(\alpha x, x)
$$
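For instance, the leaky ReLU could be wrapped in a tiny helper like the sketch below (just an illustration, assuming `tensorflow` is imported as `tf`; in the exercise solutions it is written inline):
```python
# Minimal leaky ReLU built from tf.maximum: alpha*x for negative x, x otherwise.
def leaky_relu(x, alpha=0.01):
    return tf.maximum(alpha * x, x)
```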
#### Tanh Output
The generator has been found to perform best with a $\tanh$ activation for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
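Concretely, for an array of MNIST pixels in $[0, 1]$, the rescaling is a one-liner (the same trick appears later in the training loop):
```python
# Rescale images from [0, 1] to [-1, 1] so they match the tanh output range.
batch_images = batch_images * 2 - 1
```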
>**Exercise:** Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the `reuse` keyword argument from the function to `tf.variable_scope`.
```
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
```
## Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
>**Exercise:** Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the `reuse` keyword argument from the function arguments to `tf.variable_scope`.
```
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
```
## Hyperparameters
```
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
```
## Build network
Now we're building the network from the functions defined above.
First is to get our inputs, `input_real, input_z` from `model_inputs` using the sizes of the input and z.
Then, we'll create the generator, `generator(input_z, input_size)`. This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as `g_model`. So the real data discriminator is `discriminator(input_real)` while the fake discriminator is `discriminator(g_model, reuse=True)`.
>**Exercise:** Build the network from the functions you defined earlier.
```
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
```
## Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_loss_real + d_loss_fake`. The losses will be sigmoid cross-entropies, which we can get with `tf.nn.sigmoid_cross_entropy_with_logits`. We'll also wrap that in `tf.reduce_mean` to get the mean for all the images in the batch. So the losses will look something like
```python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
```
For the real image logits, we'll use `d_logits_real` which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter `smooth`. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like `labels = tf.ones_like(tensor) * (1 - smooth)`
The discriminator loss for the fake data is similar. The logits are `d_logits_fake`, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses `d_logits_fake`, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
>**Exercise:** Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
```
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1-smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
```
## Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use `tf.trainable_variables()`. This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with `generator`. So, we just need to iterate through the list from `tf.trainable_variables()` and keep variables that start with `generator`. Each variable object has an attribute `name` which holds the name of the variable as a string (`var.name == 'weights_0'` for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with `discriminator`.
Then, in the optimizer we pass the variable lists to the `var_list` keyword argument of the `minimize` method. This tells the optimizer to only update the listed variables. Something like `tf.train.AdamOptimizer().minimize(loss, var_list=var_list)` will only train the variables in `var_list`.
>**Exercise:** Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using `AdamOptimizer`, create an optimizer for each network that updates the network variables separately.
```
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
```
## Training
```
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
```
## Training loss
Here we'll check out the training losses for the generator and discriminator.
```
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
```
## Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
```
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
```
These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
```
_ = view_samples(-1, samples)
```
Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
```
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
```
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. It looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
## Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
```
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
```
# The Drought 2
This exercise is a continuation of [the drought](https://github.com/lunduniversity/schoolprog-satellite/tree/master/exercises/drought) exercise. We recommend that you do that one first.
In this exercise we will look at how to investigate more precisely whether there is a drought. We will do this partly with the help of NDVI, which you recognize from the drought exercise, but we will also introduce NDWI (Normalized Difference Water Index), which looks at water content instead of vegetation. Like NDVI, NDWI on its own does not tell you how things look compared to earlier years. That is why we will also look at VCI (Vegetation Condition Index). VCI is used to compare one year with others, to see how high or low the NDVI/NDWI is relative to the other years. Short theory sections on both VCI and NDWI are included.
## 1. Load the data from Drive
Before we can start analyzing data we need to load it. Run the following block to install and import the libraries you need:
```
!pip3 install PyDrive
import os
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
```
We will download the data from Google Drive. To do this you must log in to your Google account by clicking the link that appears when you run the code below. This gives you a verification code that you paste into the field generated by the same code.
```
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
```
The following block downloads and unpacks the data. In the panel to the left of the code you can see which files you have.
```
download = drive.CreateFile({'id': '1PsNbvr9Kch4UtQXsohtXViJo07VSg-N9'})
download.GetContentFile('sentinel2_sw_scania.tar.gz')
import tarfile
tf = tarfile.open('sentinel2_sw_scania.tar.gz')
tf.extractall()
```
## 2. Helper functions and imports for NDVI and NDWI
For all parts of the program to work we need to import the libraries in the block below. To make things easier we have also written two helper functions that reduce arrays. This means that they take an array of size $h \times w$ and reduce it to an array of size $\frac{h}{n} \times \frac{w}{n}$, where $n$ is a parameter of the function. As a result, the images we look at later take up much less space than they otherwise would, which makes our computations much faster. (A short usage demo follows after the helper functions below.)
```
from osgeo import gdal
import numpy as np
import matplotlib.pyplot as plt
import gc
def reduce_array(n, arr):
r = arr.shape[0]
c = arr.shape[1]
out = np.empty(((r + n-1)//n, (c+n-1)//n))
for i in range((r + n-1)//n):
for j in range((c+n-1)//n):
summa = 0
num = 0
for di in range(n):
for dj in range(n):
if(i*n + di >= r or j*n + dj >= c):
continue
summa += arr[i*n+di][j*n+dj]
num += 1
out[i][j] = summa/num
return out
def reduce_array_2(n, arr):
r = arr.shape[0]
c = arr.shape[1]
out = np.empty(((r + n-1)//n, (c+n-1)//n))
for i in range((r + n-1)//n):
for j in range((c+n-1)//n):
if(i*n >= r or j*n >= c):
continue
out[i][j]=arr[i*n][j*n]
return out
```
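A small, hypothetical demo of what the two helpers do, using a tiny 4×4 array and `n=2` (assumes the block above has been run):
```
# Tiny illustration of the helpers above (made-up data).
demo = np.arange(16).reshape(4, 4).astype(float)
print(reduce_array(2, demo))    # block averages:  [[ 2.5  4.5] [10.5 12.5]]
print(reduce_array_2(2, demo))  # every 2nd pixel: [[ 0.  2.] [ 8. 10.]]
```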
## 3. NDVI and its VCI
Now it is time to compute NDVI and VCI! (We assume that you already know how to compute NDVI; if you need to refresh your memory, have a look at the drought exercise.) Let us briefly explain what VCI is. VCI is used to compare the NDVI of a given year with the values measured in recent years. VCI is a percentage that shows where the current value lies relative to the largest and smallest values during the years examined. A low VCI indicates a low NDVI compared to previous years, and thereby poor vegetation and a sign of drought. A high VCI, on the other hand, suggests that vegetation is high compared to earlier years and that it is not dry. VCI is computed as
<br>
$$VCI = \frac{NDVI_{\text{current}} - NDVI_{\text{min}}}{NDVI_{\text{max}} - NDVI_{\text{min}}} \times 100$$
<br>
for each pixel. $NDVI_{\text{current}}$ is the NDVI for the year we are looking at. $NDVI_{\text{min}}$ is the smallest NDVI value for that pixel over all the years we consider. Likewise, $NDVI_{\text{max}}$ is the largest NDVI value for that pixel over all the years we consider.
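As a small worked example (with made-up numbers): if a pixel has $NDVI_{\text{current}} = 0.45$, $NDVI_{\text{min}} = 0.30$ and $NDVI_{\text{max}} = 0.80$, then
<br>
$$VCI = \frac{0.45 - 0.30}{0.80 - 0.30} \times 100 = 30,$$
<br>
i.e. the current year lies in the lower third of the observed range, hinting at drier conditions than usual.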
The function below is used to read the data into the program.
**Task:** Define the function by running the code in the block below. Then read the data into the variable `raw_ndvi_data`. What do the different steps in the function do? What type does `raw_ndvi_data` get? How do you access the red band from 2016? By what factor have the images been reduced?
```
def get_raw_ndvi_data():
ndvi_data = {}
ndvi_data[2015] = {}
ndvi_data[2015]["red"] = reduce_array_2(10, gdal.Open("sentinel2_sw_scania/2015/S2A_OPER_MSI_L1C_TL_EPA__20160721T205724_A000634_T33UUB_B04.jp2").ReadAsArray())
ndvi_data[2015]["nir"] = reduce_array_2(10, gdal.Open("sentinel2_sw_scania/2015/S2A_OPER_MSI_L1C_TL_EPA__20160721T205724_A000634_T33UUB_B08.jp2").ReadAsArray())
print("-----2015-done-----")
ndvi_data[2016] = {}
ndvi_data[2016]["red"] = reduce_array_2(10, gdal.Open("sentinel2_sw_scania/2016/S2A_OPER_MSI_L1C_TL_SGS__20160721T154816_A005639_T33UUB_B04.jp2").ReadAsArray())
ndvi_data[2016]["nir"] = reduce_array_2(10, gdal.Open("sentinel2_sw_scania/2016/S2A_OPER_MSI_L1C_TL_SGS__20160721T154816_A005639_T33UUB_B08.jp2").ReadAsArray())
print("-----2016-done-----")
ndvi_data[2017] = {}
ndvi_data[2017]["red"] = reduce_array_2(10, gdal.Open("sentinel2_sw_scania/2017/T33UUB_20170706T102021_B04.jp2").ReadAsArray())
ndvi_data[2017]["nir"] = reduce_array_2(10, gdal.Open("sentinel2_sw_scania/2017/T33UUB_20170706T102021_B08.jp2").ReadAsArray())
print("-----2017-done-----")
ndvi_data[2018] = {}
ndvi_data[2018]["red"] = reduce_array_2(10, gdal.Open("sentinel2_sw_scania/2018/T33UUB_20180726T102019_B04.jp2").ReadAsArray())
ndvi_data[2018]["nir"] = reduce_array_2(10, gdal.Open("sentinel2_sw_scania/2018/T33UUB_20180726T102019_B08.jp2").ReadAsArray())
print("-----2018-done-----")
return ndvi_data
```
<details><summary markdown="span">Lösning</summary>
<p>
<pre><code>raw_ndvi_data = get_raw_ndvi_data()
</code></pre>
</details>
<details><summary markdown="span">Svar</summary>
<p>
<code>raw_ndvi_data</code> är en <code>dict</code> med nycklar som är år. Värdet för varje år är ytterligare en <code>dict</code> med nycklarna <code>"red"</code> och <code>"nir"</code> som innehåller det röda och nära infraröda bandet. För att komma åt det röda bandet från 2016 skriver man därför <code>raw_ndvi_data[2016]["red"]</code>. Bilden har reducerats 10 gånger. Detta ser man eftersom det första agumentet till <code>reduce_array_2()</code> är 10.
</details>
Nu ska vi beräkna NDVI precis som vi gjort innan.
**Uppdrag:** Skapa en `dict` som heter `ndvi` som ska ha år som nycklar och innehålla NDVI för det året. Raden `np.seterr(divide='ignore', invalid='ignore')` gör att ni inte får problem om ni skulle råka dela med 0.
```
np.seterr(divide='ignore', invalid='ignore')
```
<details><summary markdown="span">Tips</summary>
<p>
Gör en <code>for</code>-loop för att loopa igenom åren.
</p>
</details>
<details><summary markdown="span">Lösning</summary>
<p>
<pre><code>ndvi = {}
for i in range(2015, 2019):
ndvi[i] = (raw_ndvi_data[i]["nir"] - raw_ndvi_data[i]["red"])/(raw_ndvi_data[i]["nir"] + raw_ndvi_data[i]["red"])
</code></pre>
</details>
**Task:** Plot the NDVI for some year. What place do you see in the image?
```
```
<details><summary markdown="span">Tips</summary>
<p>
Kolla hur du gjorde i förra uppgiften.
</p>
</details>
<details><summary markdown="span">Lösning</summary>
<p>
<pre><code>plt.figure(figsize=(10,10))
plt.imshow(ndvi[2018], aspect='auto', cmap='PiYG')
plt.clim(-1.0, 1.0)
plt.colorbar(label='NDVI ')
plt.show()
plt.savefig("ndvi.png")
plt.close()
</code></pre>
</details>
<details><summary markdown="span">Svar</summary>
<p>
Man kan se sydvästra Skåne och en del av Danmark.
</p>
</details>
Nu ska vi beräkna VCI.
**Uppdrag:** Skriv en funktion `calc_vci_ndvi(year)` som tar in ett år och beräknar VCI för detta året. Du ska returnera en array.
```
```
<details><summary markdown="span">Tips</summary>
<p>
Använd antingen en <code>for</code>-loop för att loopa igenom alla pixlar eller så kan du använda <code>np.maximum(a, b)</code> och <code>np.minimum(a, b)</code> som returnerar en array där varje värde är det största respektive minsta värdet av <code>a</code> och <code>b</code> på den positionen.
</p>
</details>
<details><summary markdown="span">Lösning</summary>
<p>
<pre><code>def calc_vci_ndvi(year):
max1 = np.maximum(ndvi[2015], ndvi[2016])
max2 = np.maximum(ndvi[2017], ndvi[2018])
max_total = np.maximum(max1, max2)
min1 = np.minimum(ndvi[2015], ndvi[2016])
min2 = np.minimum(ndvi[2017], ndvi[2018])
min_total = np.minimum(min1, min2)
vci = (ndvi[year] - min_total)/(max_total - min_total)
return vci * 100
</code></pre>
</details>
Now we will plot the VCI to see how severe the drought is.
**Task:** Write a function `plot_vci_ndvi(year)` that takes a year and plots the VCI for that year. Use it to plot the VCI for 2018. Does it look like a drought?
```
```
<details><summary markdown="span">Tips</summary>
<p>
Gör liknande som när du plottade NDVI.
</p>
</details>
<details><summary markdown="span">Lösning</summary>
<p>
<pre><code>def plot_vci_ndvi(year):
vci_ndvi = calc_vci_ndvi(year)
plt.figure(figsize=(10,10))
plt.imshow(vci_ndvi, aspect='auto', cmap='PiYG')
plt.clim(0, 100)
plt.colorbar(label='VCI-NDVI '+str(year))
plt.show()
plt.savefig("vci_ndvi" + str(year) + ".png")
plt.close()
plot_vci_ndvi(2018)
</code></pre>
</details>
<details><summary markdown="span">Svar</summary>
<p>
Det ser ganska lila ut, vilket betyder att 2018 har bland de lägre NDVI-värdena av de fyra åren. Detta tyder på torka.
</p>
</details>
Du kanske även vill kunna jämföra VCI för de olika åren? Kör koden nedan så hamnar VCI för alla år i samma figur.
```
def plot_all_ndvi_vci():
(fig, arr_plots) = plt.subplots(2, 2, gridspec_kw={'wspace': 0.3}, figsize=(18, 16))
plots = []
ims = []
plots.append(arr_plots[0][0])
plots.append(arr_plots[0][1])
plots.append(arr_plots[1][0])
plots.append(arr_plots[1][1])
for i in range(4):
vci_ndvi = calc_vci_ndvi(2015 + i)
ims.append(plots[i].imshow(vci_ndvi, aspect='auto', cmap='PiYG'))
ims[-1].set_clim(0, 100)
plots[i].set_title("VCI-NDVI " + str(2015+i))
fig.colorbar(ims[0], ax=arr_plots)
fig.show()
plot_all_ndvi_vci()
```
## 4. NDWI and its VCI
Now we will look at the similar index NDWI instead of NDVI. They are quite alike, but one can say that NDVI measures vegetation while NDWI rather measures moisture. Since NDWI measures moisture, it can be useful to look at when investigating drought. We will now go through how NDWI can be computed.
The calculation is quite similar to NDVI, but we use a different band. We still use NIR, but we will also use SWIR, which stands for Short-Wave Infrared. The formula is
<br>
$$NDWI = \frac{(NIR-SWIR)}{(NIR+SWIR)}.$$
<br>
NDWI, too, will always be between -1 and 1. A high NDWI value indicates high moisture, while a low one indicates low moisture. For more information about NDWI you can read [here](https://en.wikipedia.org/wiki/Normalized_difference_water_index).
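For example (made-up reflectances): with $NIR = 0.30$ and $SWIR = 0.20$ we get
<br>
$$NDWI = \frac{0.30 - 0.20}{0.30 + 0.20} = 0.2,$$
<br>
a mildly positive value, i.e. some moisture is present.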
We will also look at the VCI for NDWI. Luckily this works almost exactly the same way. The formula for the VCI of NDWI is
<br>
$$VCI = \frac{NDWI_{\text{current}} - NDWI_{\text{min}}}{NDWI_{\text{max}} - NDWI_{\text{min}}} \times 100.$$
<br>
Once again, a low VCI suggests that this year had an unusually low NDWI, while a high one suggests that it had a high NDWI.
**Task:** Run the following code to define the function `get_raw_ndwi_data()` and save the data in the variable `raw_ndwi_data`. What type is `raw_ndwi_data`? How do you access NIR from 2016? By what factor is the image reduced?
```
def get_raw_ndwi_data():
ndwi_data = {}
ndwi_data[2015] = {}
ndwi_data[2015]["swir"] = reduce_array_2(5, gdal.Open("sentinel2_sw_scania/2015/S2A_OPER_MSI_L1C_TL_EPA__20160721T205724_A000634_T33UUB_B11.jp2").ReadAsArray())
ndwi_data[2015]["nir"] = reduce_array_2(5, gdal.Open("sentinel2_sw_scania/2015/S2A_OPER_MSI_L1C_TL_EPA__20160721T205724_A000634_T33UUB_B8A.jp2").ReadAsArray())
print("-----2015-done-----")
ndwi_data[2016] = {}
ndwi_data[2016]["swir"] = reduce_array_2(5, gdal.Open("sentinel2_sw_scania/2016/S2A_OPER_MSI_L1C_TL_SGS__20160721T154816_A005639_T33UUB_B11.jp2").ReadAsArray())
ndwi_data[2016]["nir"] = reduce_array_2(5, gdal.Open("sentinel2_sw_scania/2016/S2A_OPER_MSI_L1C_TL_SGS__20160721T154816_A005639_T33UUB_B8A.jp2").ReadAsArray())
print("-----2016-done-----")
ndwi_data[2017] = {}
ndwi_data[2017]["swir"] = reduce_array_2(5, gdal.Open("sentinel2_sw_scania/2017/T33UUB_20170706T102021_B11.jp2").ReadAsArray())
ndwi_data[2017]["nir"] = reduce_array_2(5, gdal.Open("sentinel2_sw_scania/2017/T33UUB_20170706T102021_B8A.jp2").ReadAsArray())
print("-----2017-done-----")
ndwi_data[2018] = {}
ndwi_data[2018]["swir"] = reduce_array_2(5, gdal.Open("sentinel2_sw_scania/2018/T33UUB_20180726T102019_B11.jp2").ReadAsArray())
ndwi_data[2018]["nir"] = reduce_array_2(5, gdal.Open("sentinel2_sw_scania/2018/T33UUB_20180726T102019_B8A.jp2").ReadAsArray())
print("-----2018-done-----")
return ndwi_data
```
<details><summary markdown="span">Lösning</summary>
<p>
<pre><code>raw_ndwi_data = get_raw_ndwi_data()
</code></pre>
</details>
<details><summary markdown="span">Svar</summary>
<p>
<code>raw_ndwi_data</code> är en <code>dict</code> med nycklar som är år. Värdet för varje år är ytterligare en <code>dict</code> med nycklarna <code>"swir"</code> och <code>"nir"</code> som innehåller det kort våglängd infraröda bandet och nära infraröda bandet. För att komma åt det nära infraröda bandet från 2016 skriver man därför <code>raw_ndwi_data[2016]["nir"]</code>. Bilden har reducerats 5 gånger. Detta ser man eftersom det första agumentet till <code>reduce_array_2()</code> är 5.
</details>
**Uppdrag:** Skapa nu en `dict` som heter `ndwi` som ska innehålla NDWI för varje år.
```
np.seterr(divide='ignore', invalid='ignore')
```
<details><summary markdown="span">Tips</summary>
<p>
Du kan återanvända mycket av koden du skrev innan för NDVI.
</details>
<details><summary markdown="span">Lösning</summary>
<p>
<pre><code>ndwi = {}
for i in range(2015, 2019):
ndwi[i] = (raw_ndwi_data[i]["nir"] - raw_ndwi_data[i]["swir"])/(raw_ndwi_data[i]["nir"] + raw_ndwi_data[i]["swir"])
</code></pre>
</details>
**Task:** Now, similarly to what you did before, write the functions `calc_vci_ndwi(year)` and `plot_vci_ndwi(year)`, but for NDWI instead of NDVI. Use them to plot the VCI of NDWI for 2018. Does it look dry?
```
```
<details><summary markdown="span">Tips</summary>
<p>
Du kan återanvända mycket av koden du använde för NDVI.
</p>
</details>
<details><summary markdown="span">Lösning</summary>
<p>
<pre><code>def calc_vci_ndwi(year):
max1 = np.maximum(ndwi[2015], ndwi[2016])
max2 = np.maximum(ndwi[2017], ndwi[2018])
max_total = np.maximum(max1, max2)
min1 = np.minimum(ndwi[2015], ndwi[2016])
min2 = np.minimum(ndwi[2017], ndwi[2018])
min_total = np.minimum(min1, min2)
vci = (ndwi[year] - min_total)/(max_total - min_total)
return vci*100
def plot_vci_ndwi(year):
vci_ndwi = calc_vci_ndwi(year)
plt.figure(figsize=(10,10))
plt.imshow(vci_ndwi, aspect='auto', cmap='PiYG')
plt.clim(0, 100)
plt.colorbar(label='VCI-NDWI '+str(year))
plt.show()
plt.savefig("vci_ndwi" + str(year) + ".png")
plt.close()
plot_vci_ndwi(2018)
</code></pre>
</details>
<details><summary markdown="span">Svar</summary>
<p>
Det ser ganska lila ut, vilket betyder att 2018 har bland de lägre NDWI-värdena av de fyra åren. Detta tyder på torka.
</p>
</details>
Här kanske du åter vill se alla VCI i samma figur. Kör koden nedan.
```
def plot_all_ndwi_vci():
(fig, arr_plots) = plt.subplots(2, 2, gridspec_kw={'wspace': 0.3}, figsize=(18, 16))
plots = []
ims = []
plots.append(arr_plots[0][0])
plots.append(arr_plots[0][1])
plots.append(arr_plots[1][0])
plots.append(arr_plots[1][1])
for i in range(4):
vci_ndvi = calc_vci_ndwi(2015 + i)
ims.append(plots[i].imshow(vci_ndvi, aspect='auto', cmap='PiYG'))
ims[-1].set_clim(0, 100)
plots[i].set_title("VCI-NDWI " + str(2015+i))
fig.colorbar(ims[0], ax=arr_plots)
fig.show()
plot_all_ndwi_vci()
```
## Further exercises
- Study `reduce_array()` and `reduce_array_2()`. How do they work, and what do they do? Which one is faster, and why? Try writing your own function that compresses arrays (a vectorized sketch is given below as inspiration).
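For the last point, here is one possible, hypothetical way to write a much faster block-averaging reduction with NumPy reshaping. It is only a sketch: edge pixels that do not fill a whole block are simply dropped, unlike in `reduce_array()`.
```
# Hypothetical vectorized block average; partial edge blocks are discarded.
def reduce_array_fast(n, arr):
    r, c = arr.shape
    cropped = arr[:r - r % n, :c - c % n]               # crop to multiples of n
    return cropped.reshape(r // n, n, c // n, n).mean(axis=(1, 3))
```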
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/JavaScripts/Image/Polynomial.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Image/Polynomial.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Image/Polynomial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
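For example, another basemap can be layered on top of the default one (the basemap name `'HYBRID'` is assumed here; see the basemaps module linked above for the available keys):

```
Map.add_basemap('HYBRID')
```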
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Applies a non-linear contrast enhancement to a MODIS image using
# function -0.2 + 2.4x - 1.2x^2.
# Load a MODIS image and apply the scaling factor.
img = ee.Image('MODIS/006/MOD09GA/2012_03_09') \
.select(['sur_refl_b01', 'sur_refl_b04', 'sur_refl_b03']) \
.multiply(0.0001)
# Apply the polynomial enhancement.
adj = img.polynomial([-0.2, 2.4, -1.2])
Map.setCenter(-107.24304, 35.78663, 8)
Map.addLayer(img, {'min': 0, 'max': 1}, 'original')
Map.addLayer(adj, {'min': 0, 'max': 1}, 'adjusted')
```
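To see what the coefficient list means: `img.polynomial([c0, c1, c2])` evaluates c0 + c1·x + c2·x² per pixel and band. A small NumPy sketch of the same mapping, independent of Earth Engine:

```
import numpy as np

# The polynomial -0.2 + 2.4x - 1.2x^2 applied to a few reflectance values in [0, 1].
coeffs = [-0.2, 2.4, -1.2]            # c0, c1, c2 as passed to img.polynomial()
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = sum(c * x**i for i, c in enumerate(coeffs))
print(np.round(y, 3))                 # contrast-stretched values
```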
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
# default_exp fasta
```
# FASTA
> Functions related to generating spectra from FASTA files
This notebook contains all functions related to creating spectra from FASTA files. In brief, what we are doing is the following:
1. Read a FASTA file and in silico digest the proteins to generate peptides
2. For each peptide, calculate a theoretical spectrum and precursor mass
3. Save spectra
Currently, `numba` has only limited string support. A lot of the functions are therefore Python-native.
```
#hide
from nbdev.showdoc import *
```
## Cleaving
We use regular expressions to find potential cleavage sites for cleaving and write the wrapper `cleave_sequence` to use it.
```
#export
from alphapept import constants
import re
def get_missed_cleavages(sequences:list, n_missed_cleavages:int) -> list:
"""
Combine cleaved sequences to get sequences with missed cleavages
Args:
        sequences (list of str): the list of cleaved sequences (without missed cleavages).
        n_missed_cleavages (int): the number of missed cleavage sites.
Returns:
list (of str): the sequences with missed cleavages.
"""
missed = []
for k in range(len(sequences)-n_missed_cleavages):
missed.append(''.join(sequences[k-1:k+n_missed_cleavages]))
return missed
def cleave_sequence(
sequence:str="",
n_missed_cleavages:int=0,
protease:str="trypsin",
pep_length_min:int=6,
pep_length_max:int=65,
**kwargs
)->list:
"""
Cleave a sequence with a given protease. Filters to have a minimum and maximum length.
Args:
sequence (str): the given (protein) sequence.
n_missed_cleavages (int): the number of max missed cleavages.
protease (str): the protease/enzyme name, the regular expression can be found in alphapept.constants.protease_dict.
pep_length_min (int): min peptide length.
pep_length_max (int): max peptide length.
Returns:
list (of str): cleaved peptide sequences with missed cleavages.
"""
proteases = constants.protease_dict
pattern = proteases[protease]
p = re.compile(pattern)
cutpos = [m.start()+1 for m in p.finditer(sequence)]
cutpos.insert(0,0)
cutpos.append(len(sequence))
base_sequences = [sequence[cutpos[i]:cutpos[i+1]] for i in range(len(cutpos)-1)]
sequences = base_sequences.copy()
for i in range(1, n_missed_cleavages+1):
sequences.extend(get_missed_cleavages(base_sequences, i))
sequences = [_ for _ in sequences if len(_)>=pep_length_min and len(_)<=pep_length_max]
return sequences
protease = "trypsin"
n_missed_cleavages = 0
pep_length_min, pep_length_max = 6, 65
cleave_sequence('ABCDEFGHIJKLMNOPQRST', n_missed_cleavages, protease, pep_length_min, pep_length_max)
#hide
def test_cleave_sequence():
protease = "trypsin"
pep_length_min, pep_length_max = 6, 65
assert set(cleave_sequence('ABCDEFGHIJKLMNOPQRST', 0, protease, pep_length_min, pep_length_max)) == set(['ABCDEFGHIJK', 'LMNOPQR'])
assert set(cleave_sequence('ABCDEFGHIJKLMNOPQRST', 1, protease, pep_length_min, pep_length_max)) == set(['ABCDEFGHIJK', 'LMNOPQR', 'ABCDEFGHIJKLMNOPQR'])
test_cleave_sequence()
```
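To make the cut-position logic concrete, here is a small standalone illustration. The actual trypsin pattern lives in `alphapept.constants.protease_dict`; the simplified pattern `[KR](?=[^P])` (cut after K or R, but not before P) is assumed here for the example.

```
import re

# Find the positions right after each K/R that is not followed by P and cut there.
pattern = re.compile('[KR](?=[^P])')
sequence = 'ABCDEFGHIJKLMNOPQRST'

cutpos = [m.start() + 1 for m in pattern.finditer(sequence)]
cutpos = [0] + cutpos + [len(sequence)]
print([sequence[s:e] for s, e in zip(cutpos[:-1], cutpos[1:])])
# ['ABCDEFGHIJK', 'LMNOPQR', 'ST']
```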
## Counting missed and internal cleavages
The following are helper functions to retrieve the number of missed cleavages and internal cleavage sites for each sequence.
```
#export
import re
from alphapept import constants
def count_missed_cleavages(sequence:str="", protease:str="trypsin", **kwargs) -> int:
"""
Counts the number of missed cleavages for a given sequence and protease
Args:
sequence (str): the given (peptide) sequence.
protease (str): the protease/enzyme name, the regular expression can be found in alphapept.constants.protease_dict.
Returns:
        int: the number of missed cleavages.
"""
proteases = constants.protease_dict
protease = proteases[protease]
p = re.compile(protease)
n_missed = len(p.findall(sequence))
return n_missed
def count_internal_cleavages(sequence:str="", protease:str="trypsin", **kwargs) -> int:
"""
Counts the number of internal cleavage sites for a given sequence and protease
Args:
sequence (str): the given (peptide) sequence.
protease (str): the protease/enzyme name, the regular expression can be found in alphapept.constants.protease_dict.
Returns:
        int (0 or 1): 1 if the peptide C-terminus does not match the protease specificity (internal cleavage), otherwise 0.
"""
proteases = constants.protease_dict
protease = proteases[protease]
match = re.search(protease,sequence[-1]+'_')
if match:
n_internal = 0
else:
n_internal = 1
return n_internal
protease = "trypsin"
print(count_missed_cleavages('ABCDEFGHIJKLMNOPQRST', protease))
protease = "trypsin"
print(count_internal_cleavages('ABCDEFGHIJKLMNOPQRST', protease))
#hide
def test_get_missed_cleavages():
assert count_missed_cleavages('ABCDEFGHIJKLMNOPQRST', 'trypsin') == 2
assert count_missed_cleavages('ABCDEFGHIJKLMNOPQRST', 'clostripain') == 1
test_get_missed_cleavages()
def test_get_internal_cleavages():
assert count_internal_cleavages('ABCDEFGHIJKLMNOPQRST', 'trypsin') == 1
assert count_internal_cleavages('ABCDEFGHIJKLMNOPQRSTK', 'trypsin') == 0
test_get_internal_cleavages()
```
## Parsing
Peptides are composed of amino acids that are written in capital letters - `PEPTIDE`. To distinguish modifications, they are written in lowercase, such as `PEPTIoxDE`, and can be of arbitrary length. For a modified amino acid (AA), the modification precedes the letter of the amino acid. Decoys are indicated with an underscore; the `parse` function therefore splits at `_` and keeps the sequence part. When parsing, the peptide string is converted into a `numba`-compatible list, like so: `PEPoxTIDE` -> `[P, E, P, oxT, I, D, E]`. This way, we can use the `mass_dict` from `alphapept.constants` to directly look up the masses for the corresponding (modified) amino acids.
```
#export
from numba import njit
from numba.typed import List
@njit
def parse(peptide:str)->List:
"""
Parser to parse peptide strings
Args:
peptide (str): modified peptide sequence.
Return:
        List (numba.typed.List): a list of amino acids and modified amino acids
"""
if "_" in peptide:
peptide = peptide.split("_")[0]
parsed = List()
string = ""
for i in peptide:
string += i
if i.isupper():
parsed.append(string)
string = ""
return parsed
def list_to_numba(a_list) -> List:
"""
Convert Python list to numba.typed.List
Args:
a_list (list): Python list.
Return:
List (numba.typed.List): Numba typed list.
"""
numba_list = List()
for element in a_list:
numba_list.append(element)
return numba_list
print(parse('PEPTIDE'))
print(parse('PEPoxTIDE'))
#hide
def test_parse():
peptide = "PEPTIDE"
assert parse(peptide) == list_to_numba(["P", "E", "P", "T", "I", "D", "E"])
peptide = "PEPoxTIDE"
assert parse(peptide) == list_to_numba(["P", "E", "P", "oxT", "I", "D", "E"])
peptide = "PEPTIDE_decoy"
assert parse(peptide) == list_to_numba(["P", "E", "P", "T", "I", "D", "E"])
test_parse()
```
## Decoy
The decoy strategy employed is a (pseudo-)reversal of the peptide sequence; with `pseudo_reverse`, the terminal amino acid is kept in place and only the rest is reversed. Additionally, we can call the functions `swap_KR` and `swap_AL` that will swap the respective AAs. The function `swap_KR` will only swap terminal AAs. The swapping functions only work if the AA is not modified.
```
#export
@njit
def get_decoy_sequence(peptide:str, pseudo_reverse:bool=False, AL_swap:bool=False, KR_swap:bool = False)->str:
"""
    Reverses a sequence (the '_decoy' tag is added separately by add_decoy_tag).
Args:
peptide (str): modified peptide to be reversed.
        pseudo_reverse (bool): If True, reverse the peptide but keep the C-terminal amino acid; otherwise reverse the whole peptide. (Default: False)
AL_swap (bool): replace A with L, and vice versa. (Default: False)
KR_swap (bool): replace K with R at the C-terminal, and vice versa. (Default: False)
Returns:
        str: reversed peptide.
"""
pep = parse(peptide)
if pseudo_reverse:
rev_pep = pep[:-1][::-1]
rev_pep.append(pep[-1])
else:
rev_pep = pep[::-1]
if AL_swap:
rev_pep = swap_AL(rev_pep)
if KR_swap:
rev_pep = swap_KR(rev_pep)
rev_pep = "".join(rev_pep)
return rev_pep
@njit
def swap_KR(peptide:str)->str:
"""
Swaps a terminal K or R. Note: Only if AA is not modified.
Args:
peptide (str): peptide.
Returns:
str: peptide with swapped KRs.
"""
if peptide[-1] == 'K':
peptide[-1] = 'R'
elif peptide[-1] == 'R':
peptide[-1] = 'K'
return peptide
@njit
def swap_AL(peptide:str)->str:
"""
Swaps a A with L. Note: Only if AA is not modified.
Args:
peptide (str): peptide.
Returns:
str: peptide with swapped ALs.
"""
i = 0
while i < len(range(len(peptide) - 1)):
if peptide[i] == "A":
peptide[i] = peptide[i + 1]
peptide[i + 1] = "A"
i += 1
elif peptide[i] == "L":
peptide[i] = peptide[i + 1]
peptide[i + 1] = "L"
i += 1
i += 1
return peptide
def get_decoys(peptide_list, pseudo_reverse=False, AL_swap=False, KR_swap = False, **kwargs)->list:
"""
Wrapper to get decoys for lists of peptides
Args:
peptide_list (list): the list of peptides to be reversed.
        pseudo_reverse (bool): If True, reverse the peptide but keep the C-terminal amino acid; otherwise reverse the whole peptide. (Default: False)
AL_swap (bool): replace A with L, and vice versa. (Default: False)
KR_swap (bool): replace K with R at the C-terminal, and vice versa. (Default: False)
Returns:
list (of str): a list of decoy peptides
"""
decoys = []
decoys.extend([get_decoy_sequence(peptide, pseudo_reverse, AL_swap, KR_swap) for peptide in peptide_list])
return decoys
def add_decoy_tag(peptides):
"""
Adds a '_decoy' tag to a list of peptides
"""
return [peptide + "_decoy" for peptide in peptides]
print(swap_AL(parse('KKKALKKK')))
print(swap_KR(parse('AAAKRAAA')))
print(get_decoy_sequence('PEPTIDE'))
print(get_decoys(['ABC','DEF','GHI']))
#hide
def test_swap_AL():
assert swap_AL(parse("ABCDEF")) == parse("BACDEF")
assert swap_AL(parse("GHIKLM")) == parse("GHIKML")
assert swap_AL(parse("FEDCBA")) == parse("FEDCBA")
assert swap_AL(parse("GHIKL")) == parse("GHIKL")
assert swap_AL(parse("ABCDEFGHIKLM")) == parse("BACDEFGHIKML")
assert swap_AL(parse("BBAcCD")) == parse("BBcCAD")
assert swap_AL(parse("FEDCBA")) == parse("FEDCBA")
test_swap_AL()
def test_swapKR():
assert swap_KR(parse("ABCDEK")) == parse("ABCDER")
assert swap_KR(parse("ABCDER")) == parse("ABCDEK")
assert swap_KR(parse("ABCDEF")) == parse("ABCDEF")
assert swap_KR(parse("KABCDEF")) == parse("KABCDEF")
assert swap_KR(parse("KABCRDEF")) == parse("KABCRDEF")
assert swap_KR(parse("KABCKDEF")) == parse("KABCKDEF")
test_swapKR()
def test_get_decoy_sequence():
peptide = "PEPTIDER"
assert get_decoy_sequence(peptide, pseudo_reverse=True) == "EDITPEPR"
assert get_decoy_sequence(peptide) == "REDITPEP"
assert get_decoy_sequence(peptide, KR_swap=True, pseudo_reverse=True) == "EDITPEPK"
test_get_decoy_sequence()
```
## Modifications
To add modifications to the peptides, we distinguish fixed and variable modifications. Additionally, we make a distinction between whether the modification is only terminal or not.
### Fixed Modifications
Fixed modifications are implemented by passing a list with modified AAs that should be replaced. As an AA is only one letter, the remainder is the modification.
```
#export
def add_fixed_mods(seqs:list, mods_fixed:list, **kwargs)->list:
"""
Adds fixed modifications to sequences.
Args:
seqs (list of str): sequences to add fixed modifications
mods_fixed (list of str): the string list of fixed modifications. Each modification string must be in lower case, except for that the last letter must be the modified amino acid (e.g. oxidation on M should be oxM).
Returns:
list (of str): the list of the modified sequences. 'ABCDEF' with fixed mod 'cC' will be 'ABcCDEF'.
"""
if not mods_fixed:
return seqs
else:
for mod_aa in mods_fixed:
seqs = [seq.replace(mod_aa[-1], mod_aa) for seq in seqs]
return seqs
mods_fixed = ['cC','bB']
peptide_list = ['ABCDEF']
add_fixed_mods(peptide_list, mods_fixed)
mods_fixed = ['aA','cC','bB']
peptide_list = ['cABCDEF']
add_fixed_mods(peptide_list, mods_fixed)
#hide
def test_add_fixed_mods():
mods_fixed = ['cC']
peptide_list = ['ABCDEF']
peptides_new = add_fixed_mods(peptide_list, [])
assert peptides_new == peptide_list
peptides_new = add_fixed_mods(peptide_list, mods_fixed)
assert peptides_new == ['ABcCDEF']
test_add_fixed_mods()
```
### Variable Modifications
To employ variable modifications, we loop through each variable modification and each position of the peptide and add them to the peptide list. For each iteration in get_isoforms, one more variable modification will be added.
```
#export
def add_variable_mod(peps:list, mods_variable_dict:dict)->list:
"""
Function to add variable modification to a list of peptides.
Args:
peps (list): List of peptides.
        mods_variable_dict (dict): Dictionary with modifications. The key is the AA, and the value is the modified form (e.g. oxM).
Returns:
list : the list of peptide forms for the given peptide.
"""
peptides = []
for pep_ in peps:
pep, min_idx = pep_
for mod in mods_variable_dict:
for i in range(len(pep)):
if i >= min_idx:
c = pep[i]
if c == mod:
peptides.append((pep[:i]+[mods_variable_dict[c]]+pep[i+1:], i))
return peptides
def get_isoforms(mods_variable_dict:dict, peptide:str, isoforms_max:int, n_modifications_max:int=None)->list:
"""
Function to generate modified forms (with variable modifications) for a given peptide - returns a list of modified forms.
The original sequence is included in the list
Args:
        mods_variable_dict (dict): Dictionary with modifications. The key is the AA, and the value is the modified form (e.g. oxM).
peptide (str): the peptide sequence to generate modified forms.
isoforms_max (int): max number of modified forms to generate per peptide.
n_modifications_max (int, optional): max number of variable modifications per peptide.
Returns:
list (of str): the list of peptide forms for the given peptide
"""
pep = list(parse(peptide))
peptides = [pep]
new_peps = [(pep, 0)]
iteration = 0
while len(peptides) < isoforms_max:
if n_modifications_max:
if iteration >= n_modifications_max:
break
new_peps = add_variable_mod(new_peps, mods_variable_dict)
if len(new_peps) == 0:
break
if len(new_peps) > 1:
if new_peps[0][0] == new_peps[1][0]:
new_peps = new_peps[0:1]
for _ in new_peps:
if len(peptides) < isoforms_max:
peptides.append(_[0])
iteration +=1
peptides = [''.join(_) for _ in peptides]
return peptides
mods_variable_dict = {'S':'pS','P':'pP','M':'oxM'}
isoforms_max = 1024
print(get_isoforms(mods_variable_dict, 'PEPTIDE', isoforms_max))
print(get_isoforms(mods_variable_dict, 'AMAMA', isoforms_max))
print(get_isoforms(mods_variable_dict, 'AMAMA', isoforms_max, n_modifications_max=1))
```
Lastly, we define the wrapper `add_variable_mods` so that the functions can be called for lists of peptides and a list of variable modifications.
```
#hide
def test_get_isoforms():
mods_variable_dict = {'S':'pS','P':'pP'}
peptide = 'PEPTIDE'
isoforms_max = 1024
get_isoforms(mods_variable_dict, peptide, isoforms_max)
assert len(get_isoforms(mods_variable_dict, peptide, isoforms_max)) == 4
test_get_isoforms()
#export
from itertools import chain
def add_variable_mods(peptide_list:list, mods_variable:list, isoforms_max:int, n_modifications_max:int, **kwargs)->list:
"""
Add variable modifications to the peptide list
Args:
peptide_list (list of str): peptide list.
mods_variable (list of str): modification list.
isoforms_max (int): max number of modified forms per peptide sequence.
n_modifications_max (int): max number of variable modifications per peptide.
Returns:
list (of str): list of modified sequences for the given peptide list.
"""
#the peptide_list originates from one peptide already -> limit isoforms here
max_ = isoforms_max - len(peptide_list) + 1
if max_ < 0:
max_ = 0
if not mods_variable:
return peptide_list
else:
mods_variable_r = {}
for _ in mods_variable:
mods_variable_r[_[-1]] = _
peptide_list = [get_isoforms(mods_variable_r, peptide, max_, n_modifications_max) for peptide in peptide_list]
return list(chain.from_iterable(peptide_list))
peptide_list = ['AMA', 'AAC']
mods_variable = ['oxM','amC']
isoforms_max = 1024
n_modifications_max = 10
add_variable_mods(peptide_list, mods_variable, isoforms_max, n_modifications_max)
#hide
def test_add_variable_mods():
mods_variable = ['oxM']
peptide = ['AMAMA']
peptides_new = add_variable_mods(peptide, [], 1024, None)
assert peptides_new == peptide
peptides_new = add_variable_mods(peptide, mods_variable, 1024, None)
assert set(['AMAMA', 'AMAoxMA', 'AoxMAMA', 'AoxMAoxMA']) == set(peptides_new)
# Check if number of isoforms is correct
peptides_new = add_variable_mods(peptide, mods_variable, 3, None)
assert len(peptides_new) == 3
peptide_list = ['PEPTIDE']
mods_variable = ['pP','pS']
isoforms_max = 1024
peptides_new = add_variable_mods(peptide_list, mods_variable, isoforms_max, None)
assert len(peptides_new) == 4
test_add_variable_mods()
```
### Terminal Modifications - Fixed
To handle terminal modifications, we use the following convention:
* `<` for the left side (N-terminal)
* `>` for the right side (C-Terminal)
Additionally, if we want to have a terminal modification on any AA we indicate this `^`.
```
#export
def add_fixed_mod_terminal(peptides:list, mod:str)->list:
"""
Adds fixed terminal modifications
Args:
peptides (list of str): peptide list.
mod (str): n-term mod contains '<^' (e.g. a<^ for Acetyl@N-term); c-term mod contains '>^'.
Raises:
"Invalid fixed terminal modification 'mod name'" for the given mod.
Returns:
list (of str): list of peptides with modification added.
"""
# < for left side (N-Term), > for right side (C-Term)
if "<^" in mod: #Any n-term, e.g. a<^
peptides = [mod[:-2] + peptide for peptide in peptides]
elif ">^" in mod: #Any c-term, e.g. a>^
peptides = [peptide[:-1] + mod[:-2] + peptide[-1] for peptide in peptides]
elif "<" in mod: #only if specific AA, e.g. ox<C
peptides = [peptide[0].replace(mod[-1], mod[:-2]+mod[-1]) + peptide[1:] for peptide in peptides]
elif ">" in mod:
peptides = [peptide[:-1] + peptide[-1].replace(mod[-1], mod[:-2]+mod[-1]) for peptide in peptides]
else:
# This should not happen
        raise ValueError("Invalid fixed terminal modification {}.".format(mod))
return peptides
def add_fixed_mods_terminal(peptides:list, mods_fixed_terminal:list, **kwargs)->list:
"""
Wrapper to add fixed mods on sequences and lists of mods
Args:
peptides (list of str): peptide list.
mods_fixed_terminal (list of str): list of fixed terminal mods.
Raises:
"Invalid fixed terminal modification {mod}" exception for the given mod.
Returns:
list (of str): list of peptides with modification added.
"""
if mods_fixed_terminal == []:
return peptides
else:
# < for left side (N-Term), > for right side (C-Term)
for key in mods_fixed_terminal:
peptides = add_fixed_mod_terminal(peptides, key)
return peptides
peptide = ['AMAMA']
print(f'Starting with peptide {peptide}')
print('Any n-term modified with x (x<^):', add_fixed_mods_terminal(peptide, ['x<^']))
print('Any c-term modified with x (x>^):', add_fixed_mods_terminal(peptide, ['x>^']))
print('Only A on n-term modified with x (x<A):', add_fixed_mods_terminal(peptide, ['x<A']))
print('Only A on c-term modified with x (x<A):', add_fixed_mods_terminal(peptide, ['x>A']))
#hide
def test_add_fixed_mods_terminal():
peptide = ['AMAMA']
peptides_new = add_fixed_mods_terminal(peptide, [])
assert peptides_new == peptide
#Any N-term
peptides_new = add_fixed_mods_terminal(peptide, ['x<^'])
assert peptides_new == ['xAMAMA']
#Any C-term
peptides_new = add_fixed_mods_terminal(peptide, ['x>^'])
assert peptides_new == ['AMAMxA']
#Selected N-term
peptides_new = add_fixed_mods_terminal(peptide, ['x<A'])
assert peptides_new == ['xAMAMA']
peptides_new = add_fixed_mods_terminal(peptide, ['x<C'])
assert peptides_new == peptide
#Selected C-term
peptides_new = add_fixed_mods_terminal(peptide, ['x>A'])
assert peptides_new == ['AMAMxA']
peptides_new = add_fixed_mods_terminal(peptide, ['x>C'])
assert peptides_new == peptide
test_add_fixed_mods_terminal()
```
### Terminal Modifications - Variable
Lastly, to handle terminal variable modifications, we use the function `add_variable_mods_terminal`. As the modification can only be at the terminal end, this function only adds a peptide where the terminal end is modified.
```
#export
def add_variable_mods_terminal(peptides:list, mods_variable_terminal:list, **kwargs)->list:
"""
Function to add variable terminal modifications.
Args:
peptides (list of str): peptide list.
mods_variable_terminal (list of str): list of variable terminal mods.
Returns:
list (of str): list of peptides with modification added.
"""
if not mods_variable_terminal:
return peptides
else:
new_peptides_n = peptides.copy()
for key in mods_variable_terminal:
if "<" in key:
# Only allow one variable mod on one end
new_peptides_n.extend(
add_fixed_mod_terminal(peptides, key)
)
new_peptides_n = get_unique_peptides(new_peptides_n)
# N complete, let's go for c-terminal
new_peptides_c = new_peptides_n
for key in mods_variable_terminal:
if ">" in key:
# Only allow one variable mod on one end
new_peptides_c.extend(
add_fixed_mod_terminal(new_peptides_n, key)
)
return get_unique_peptides(new_peptides_c)
def get_unique_peptides(peptides:list) -> list:
"""
Function to return unique elements from list.
Args:
peptides (list of str): peptide list.
Returns:
list (of str): list of peptides (unique).
"""
return list(set(peptides))
peptide_list = ['AMAMA']
add_variable_mods_terminal(peptide_list, ['x<^'])
#hide
def test_add_variable_mods_terminal():
peptide_list = ['AMAMA']
peptides_new = add_variable_mods_terminal(peptide_list, [])
    assert peptides_new == peptide_list
#Any N-term
peptides_new = add_variable_mods_terminal(peptide_list, ['x<^'])
assert set(peptides_new) == set(['xAMAMA', 'AMAMA'])
test_add_variable_mods_terminal()
```
### Generating Peptides
Lastly, we put all the functions into a wrapper `generate_peptides`. It will accept a peptide and a dictionary with settings so that we can get all modified peptides.
```
#export
def generate_peptides(peptide:str, **kwargs)->list:
"""
Wrapper to get modified peptides (fixed and variable mods) from a peptide.
Args:
peptide (str): the given peptide sequence.
Returns:
list (of str): all modified peptides.
TODO:
There can be some edge-cases which are not defined yet.
Example:
Setting the same fixed modification - once for all peptides and once for only terminal for the protein.
The modification will then be applied twice.
"""
mod_peptide = add_fixed_mods_terminal([peptide], kwargs['mods_fixed_terminal_prot'])
mod_peptide = add_variable_mods_terminal(mod_peptide, kwargs['mods_variable_terminal_prot'])
peptides = []
[peptides.extend(cleave_sequence(_, **kwargs)) for _ in mod_peptide]
peptides = [_ for _ in peptides if check_peptide(_, constants.AAs)]
isoforms_max = kwargs['isoforms_max']
all_peptides = []
for peptide in peptides: #1 per, limit the number of isoforms
#Regular peptides
mod_peptides = add_fixed_mods([peptide], **kwargs)
mod_peptides = add_fixed_mods_terminal(mod_peptides, **kwargs)
mod_peptides = add_variable_mods_terminal(mod_peptides, **kwargs)
kwargs['isoforms_max'] = isoforms_max - len(mod_peptides)
mod_peptides = add_variable_mods(mod_peptides, **kwargs)
all_peptides.extend(mod_peptides)
#Decoys:
decoy_peptides = get_decoys([peptide], **kwargs)
mod_peptides_decoy = add_fixed_mods(decoy_peptides, **kwargs)
mod_peptides_decoy = add_fixed_mods_terminal(mod_peptides_decoy, **kwargs)
mod_peptides_decoy = add_variable_mods_terminal(mod_peptides_decoy, **kwargs)
kwargs['isoforms_max'] = isoforms_max - len(mod_peptides_decoy)
mod_peptides_decoy = add_variable_mods(mod_peptides_decoy, **kwargs)
mod_peptides_decoy = add_decoy_tag(mod_peptides_decoy)
all_peptides.extend(mod_peptides_decoy)
return all_peptides
def check_peptide(peptide:str, AAs:set)->bool:
"""
Check if the peptide contains non-AA letters.
Args:
peptide (str): peptide sequence.
AAs (set): the set of legal amino acids. See alphapept.constants.AAs
Returns:
bool: True if all letters in the peptide is the subset of AAs, otherwise False
"""
if set([_ for _ in peptide if _.isupper()]).issubset(AAs):
return True
else:
return False
kwargs = {}
kwargs["protease"] = "trypsin"
kwargs["n_missed_cleavages"] = 2
kwargs["pep_length_min"] = 6
kwargs["pep_length_max"] = 27
kwargs["mods_variable"] = ["oxM"]
kwargs["mods_variable_terminal"] = []
kwargs["mods_fixed"] = ["cC"]
kwargs["mods_fixed_terminal"] = []
kwargs["mods_fixed_terminal_prot"] = []
kwargs["mods_variable_terminal_prot"] = ['a<^']
kwargs["isoforms_max"] = 1024
kwargs["n_modifications_max"] = None
generate_peptides('PEPTIDEM', **kwargs)
#hide
def test_generate_peptides():
kwargs = {}
kwargs["protease"] = "trypsin"
kwargs["n_missed_cleavages"] = 2
kwargs["pep_length_min"] = 6
kwargs["pep_length_max"] = 27
kwargs["mods_variable"] = ["oxM"]
kwargs["mods_variable_terminal"] = []
kwargs["mods_fixed"] = ["cC"]
kwargs["mods_fixed_terminal"] = []
kwargs["mods_fixed_terminal_prot"] = []
kwargs["mods_variable_terminal_prot"] = []
kwargs["isoforms_max"] = 1024
kwargs['pseudo_reverse'] = True
kwargs["n_modifications_max"] = None
peps = generate_peptides('PEPTIDEM', **kwargs)
assert set(peps) == set(['PEPTIDEM', 'PEPTIDEoxM', 'EDITPEPM_decoy', 'EDITPEPoxM_decoy'])
test_generate_peptides()
```
## Mass Calculations
Using the `mass_dict` from `constants` and being able to parse sequences with `parse`, one can simply look up the masses for each modified or unmodified amino acid and add everything up.
### Precursor
To calculate the mass of the neutral precursor, we start with the mass of an $H_2O$ and add the masses of all amino acids of the sequence.
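In formula form, for a parsed peptide with (modified) amino acids $a_1, \dots, a_n$:

$$m_{\text{precursor}} = m_{\mathrm{H_2O}} + \sum_{i=1}^{n} m_{a_i}$$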
```
#export
from numba import njit
from numba.typed import List
import numpy as np
import numba
@njit
def get_precmass(parsed_pep:list, mass_dict:numba.typed.Dict)->float:
"""
Calculate the mass of the neutral precursor
Args:
        parsed_pep (list or numba.typed.List of str): the list of amino acids and modified amino acids.
mass_dict (numba.typed.Dict): key is the amino acid or the modified amino acid, and the value is the mass.
Returns:
float: the peptide neutral mass.
"""
tmass = mass_dict["H2O"]
for _ in parsed_pep:
tmass += mass_dict[_]
return tmass
get_precmass(parse('PEPTIDE'), constants.mass_dict)
#hide
def test_get_precmass():
precmass = get_precmass(parse('PEPTIDE'), constants.mass_dict)
assert np.allclose(precmass, 799.3599642034599)
test_get_precmass()
```
### Fragments
Likewise, we can calculate the masses of the fragment ions. We employ two functions: `get_fragmass` and `get_frag_dict`.
`get_fragmass` is a fast, `numba`-compatible function that calculates the fragment masses and returns an array indicating whether the ion-type was `b` or `y`.
`get_frag_dict` instead is not `numba`-compatible and hence a bit slower. It returns a dictionary with the respective ion and can be used for plotting theoretical spectra.
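For a peptide of length $n$, the singly protonated b- and y-ion masses computed below are

$$m_{b_k} = m_{\mathrm{H^+}} + \sum_{i=1}^{k} m_{a_i}, \qquad m_{y_k} = m_{\mathrm{H^+}} + m_{\mathrm{H_2O}} + \sum_{i=n-k+1}^{n} m_{a_i}, \qquad k = 1, \dots, n-1.$$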
```
#export
import numba
@njit
def get_fragmass(parsed_pep:list, mass_dict:numba.typed.Dict)->tuple:
"""
Calculate the masses of the fragment ions
Args:
        parsed_pep (numba.typed.List of str): the list of amino acids and modified amino acids.
mass_dict (numba.typed.Dict): key is the amino acid or the modified amino acid, and the value is the mass.
Returns:
Tuple[np.ndarray(np.float64), np.ndarray(np.int8)]: the fragment masses and the fragment types (represented as np.int8).
For a fragment type, positive value means the b-ion, the value indicates the position (b1, b2, b3...); the negative value means
the y-ion, the absolute value indicates the position (y1, y2, ...).
"""
n_frags = (len(parsed_pep) - 1) * 2
frag_masses = np.zeros(n_frags, dtype=np.float64)
frag_type = np.zeros(n_frags, dtype=np.int8)
# b-ions > 0
n_frag = 0
frag_m = mass_dict["Proton"]
for idx, _ in enumerate(parsed_pep[:-1]):
frag_m += mass_dict[_]
frag_masses[n_frag] = frag_m
frag_type[n_frag] = (idx+1)
n_frag += 1
# y-ions < 0
frag_m = mass_dict["Proton"] + mass_dict["H2O"]
for idx, _ in enumerate(parsed_pep[::-1][:-1]):
frag_m += mass_dict[_]
frag_masses[n_frag] = frag_m
frag_type[n_frag] = -(idx+1)
n_frag += 1
return frag_masses, frag_type
get_fragmass(parse('PEPTIDE'), constants.mass_dict)
#hide
def test_get_fragmass():
frag_masses, frag_type = get_fragmass(parse('PEPTIDE'), constants.mass_dict)
ref_masses = np.array([ 98.06004033, 227.10263343, 324.15539729, 425.20307579,
538.28713979, 653.31408289, 148.06043425, 263.08737735,
376.17144135, 477.21911985, 574.27188371, 703.31447681])
assert np.allclose(frag_masses, ref_masses)
test_get_fragmass()
#export
def get_frag_dict(parsed_pep:list, mass_dict:dict)->dict:
"""
Calculate the masses of the fragment ions
Args:
        parsed_pep (list or numba.typed.List of str): the list of amino acids and modified amino acids.
mass_dict (numba.typed.Dict): key is the amino acid or the modified amino acid, and the value is the mass.
Returns:
dict{str:float}: key is the fragment type (b1, b2, ..., y1, y2, ...), value is fragment mass.
"""
frag_dict = {}
frag_masses, frag_type = get_fragmass(parsed_pep, mass_dict)
for idx, _ in enumerate(frag_masses):
cnt = frag_type[idx]
if cnt > 0:
identifier = 'b'
else:
identifier = 'y'
cnt = -cnt
frag_dict[identifier+str(cnt)] = _
return frag_dict
get_frag_dict(parse('PEPTIDE'), constants.mass_dict)
#hide
def test_get_frag_dict():
refdict = {'b1': 98.06004032687,
'b2': 227.10263342686997,
'b3': 324.15539728686997,
'y1': 120.06551965033,
'y2': 217.11828351033,
'y3': 346.16087661033}
newdict = get_frag_dict(parse('PEPT'), constants.mass_dict)
for key in newdict.keys():
assert np.allclose(refdict[key], newdict[key])
test_get_frag_dict()
```
This allows us also to generate the theoretical isotopes for a fragment:
```
import matplotlib.pyplot as plt
%matplotlib inline
peptide = 'PEPTIDE'
frag_dict = get_frag_dict(parse(peptide), constants.mass_dict)
db_frag = list(frag_dict.values())
db_int = [100 for _ in db_frag]
plt.figure(figsize=(10,5))
plt.vlines(db_frag, 0, db_int, "k", label="DB", alpha=0.2)
for _ in frag_dict.keys():
plt.text(frag_dict[_], 104, _, fontsize=12, alpha = 0.8)
plt.title('Theoretical Spectrum for {}'.format(peptide))
plt.xlabel('Mass')
plt.ylabel('Intensity')
plt.ylim([0,110])
plt.show()
```
### Spectra
The function `get_spectrum` returns a tuple with the following content:
* precursor mass
* peptide sequence
* fragmasses
* fragtypes
Likewise, `get_spectra` returns a list of such tuples. We use a list of tuples here so that they can easily be sorted by precursor mass.
```
#export
@njit
def get_spectrum(peptide:str, mass_dict:numba.typed.Dict)->tuple:
"""
Get neutral peptide mass, fragment masses and fragment types for a peptide
Args:
peptide (str): the (modified) peptide.
mass_dict (numba.typed.Dict): key is the amino acid or modified amino acid, and the value is the mass.
Returns:
Tuple[float, str, np.ndarray(np.float64), np.ndarray(np.int8)]: (peptide mass, peptide, fragment masses, fragment_types), for fragment types, see get_fragmass.
"""
parsed_peptide = parse(peptide)
fragmasses, fragtypes = get_fragmass(parsed_peptide, mass_dict)
sortindex = np.argsort(fragmasses)
fragmasses = fragmasses[sortindex]
fragtypes = fragtypes[sortindex]
precmass = get_precmass(parsed_peptide, mass_dict)
return (precmass, peptide, fragmasses, fragtypes)
@njit
def get_spectra(peptides:numba.typed.List, mass_dict:numba.typed.Dict)->List:
"""
Get neutral peptide mass, fragment masses and fragment types for a list of peptides
Args:
peptides (list of str): the (modified) peptide list.
mass_dict (numba.typed.Dict): key is the amino acid or modified amino acid, and the value is the mass.
Raises:
        Exceptions raised for a single peptide are caught and that peptide is skipped.
Returns:
list of Tuple[float, str, np.ndarray(np.float64), np.ndarray(np.int8)]: See get_spectrum.
"""
spectra = List()
for i in range(len(peptides)):
try:
spectra.append(get_spectrum(peptides[i], mass_dict))
except Exception: #TODO: This is to fix edge cases when having multiple modifications on the same AA.
pass
return spectra
print(get_spectra(List(['PEPTIDE']), constants.mass_dict))
#hide
def test_get_spectra():
spectra = get_spectra(List(['PEPTIDE']), constants.mass_dict)
precmass, peptide, frags, fragtypes = spectra[0]
assert np.allclose(precmass, 799.3599642034599)
assert peptide == 'PEPTIDE'
assert np.allclose(frags, np.array([ 98.06004033, 148.06043425, 227.10263343, 263.08737735,
324.15539729, 376.17144135, 425.20307579, 477.21911985,
538.28713979, 574.27188371, 653.31408289, 703.31447681]))
test_get_spectra()
```
## Reading FASTA
To read FASTA files, we use the `SeqIO` module from the `Biopython` library. `read_fasta_file` is implemented as a generator, so we read one FASTA entry after another until `StopIteration` is reached. Additionally, we define the function `read_fasta_file_entries` that simply counts the number of FASTA entries.
FASTA entries that contain AAs which are not in the mass_dict can be detected with `check_sequence` and will be ignored.
```
#export
from Bio import SeqIO
import os
from glob import glob
import logging
def read_fasta_file(fasta_filename:str=""):
"""
Read a FASTA file line by line
Args:
fasta_filename (str): fasta.
Yields:
dict {id:str, name:str, description:str, sequence:str}: protein information.
"""
with open(fasta_filename, "rt") as handle:
iterator = SeqIO.parse(handle, "fasta")
while iterator:
try:
record = next(iterator)
parts = record.id.split("|") # pipe char
if len(parts) > 1:
id = parts[1]
else:
id = record.name
sequence = str(record.seq)
entry = {
"id": id,
"name": record.name,
"description": record.description,
"sequence": sequence,
}
yield entry
except StopIteration:
break
def read_fasta_file_entries(fasta_filename=""):
"""
Function to count entries in fasta file
Args:
fasta_filename (str): fasta.
Returns:
int: number of entries.
"""
with open(fasta_filename, "rt") as handle:
iterator = SeqIO.parse(handle, "fasta")
count = 0
while iterator:
try:
record = next(iterator)
count+=1
except StopIteration:
break
return count
def check_sequence(element:dict, AAs:set, verbose:bool = False)->bool:
"""
    Checks whether a sequence from a FASTA entry contains valid AAs
Args:
element (dict): fasta entry of the protein information.
AAs (set): a set of amino acid letters.
verbose (bool): logging the invalid amino acids.
Returns:
bool: False if the protein sequence contains non-AA letters, otherwise True.
"""
if not set(element['sequence']).issubset(AAs):
unknown = set(element['sequence']) - set(AAs)
if verbose:
logging.error(f'This FASTA entry contains unknown AAs {unknown} - Peptides with unknown AAs will be skipped: \n {element}\n')
return False
else:
return True
#load example fasta file
fasta_path = '../testfiles/test.fasta'
list(read_fasta_file(fasta_path))[0]
```
## Peptide Dictionary
In order to efficiently store peptides, we rely on the Python dictionary. The idea is to have a dictionary with peptides as keys and indices to proteins as values. This way, one can quickly look up which protein a peptide belongs to. The function `add_to_pept_dict` uses a regular Python dictionary to add peptides and stores the indices of the originating proteins as a list. If a peptide is already present in the dictionary, the list is appended to. The function returns a list of `added_peptides` which were not yet present in the dictionary. One can use the function `merge_pept_dicts` to merge multiple peptide dicts.
```
#export
def add_to_pept_dict(pept_dict:dict, new_peptides:list, i:int)->tuple:
"""
Add peptides to the peptide dictionary
Args:
pept_dict (dict): the key is peptide sequence, and the value is protein id list indicating where the peptide is from.
new_peptides (list): the list of peptides to be added to pept_dict.
i (int): the protein id where new_peptides are from.
Returns:
dict: same as the pept_dict in the arguments.
list (of str): the peptides from new_peptides that not in the pept_dict.
"""
added_peptides = List()
for peptide in new_peptides:
if peptide in pept_dict:
if i not in pept_dict[peptide]:
pept_dict[peptide].append(i)
else:
pept_dict[peptide] = [i]
added_peptides.append(peptide)
return pept_dict, added_peptides
pept_dict = {}
new_peptides = ['ABC','DEF']
pept_dict, added_peptides = add_to_pept_dict(pept_dict, new_peptides, 0)
new_peptides = ['DEF','GHI']
pept_dict, added_peptides = add_to_pept_dict(pept_dict, new_peptides, 1)
print(pept_dict)
#hide
def test_add_to_pept_dict():
pept_dict = {}
new_peptides = ['ABC','DEF']
pept_dict, added_peptides = add_to_pept_dict(pept_dict, new_peptides, 0)
new_peptides = ['DEF','GHI']
pept_dict, added_peptides = add_to_pept_dict(pept_dict, new_peptides, 1)
assert pept_dict == {'ABC': [0], 'DEF': [0, 1], 'GHI': [1]}
test_add_to_pept_dict()
#export
def merge_pept_dicts(list_of_pept_dicts:list)->dict:
"""
Merge a list of peptide dict into a single dict.
Args:
list_of_pept_dicts (list of dict): the key of the pept_dict is peptide sequence, and the value is protein id list indicating where the peptide is from.
Returns:
dict: the key is peptide sequence, and the value is protein id list indicating where the peptide is from.
"""
if len(list_of_pept_dicts) == 0:
raise ValueError('Need to pass at least 1 element.')
new_pept_dict = list_of_pept_dicts[0]
for pept_dict in list_of_pept_dicts[1:]:
for key in pept_dict.keys():
if key in new_pept_dict:
for element in pept_dict[key]:
new_pept_dict[key].append(element)
else:
new_pept_dict[key] = pept_dict[key]
return new_pept_dict
pept_dict_1 = {'ABC': [0], 'DEF': [0, 1], 'GHI': [1]}
pept_dict_2 = {'ABC': [3,4], 'JKL': [5, 6], 'MNO': [7]}
merge_pept_dicts([pept_dict_1, pept_dict_2])
#hide
def test_merge_pept_dicts():
pept_dict_1 = {'ABC': [0], 'DEF': [0, 1], 'GHI': [1]}
pept_dict_2 = {'ABC': [3,4], 'JKL': [5, 6], 'MNO': [7]}
assert merge_pept_dicts([pept_dict_1, pept_dict_2]) == {'ABC': [0, 3, 4], 'DEF': [0, 1], 'GHI': [1], 'JKL': [5, 6], 'MNO': [7]}
test_merge_pept_dicts()
```
## Generating a database
To wrap everything up, we employ two functions, `generate_database` and `generate_spectra`. The first one reads a FASTA file and generates a list of peptides, as well as the peptide dictionary and an ordered FASTA dictionary to be able to look up the protein indices later. For the `callback`, we first read the whole FASTA file to determine the total number of entries. For a typical FASTA file of 30 MB with 40k entries, this should take less than a second. The progress of the digestion is monitored as the FASTA entries are processed one by one.
The function `generate_spectra` then calculates precursor masses and fragment ions. Here, we split the total number of sequences into `1000` chunks to be able to track progress with the `callback`.
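The `callback` is simply a function that receives the current progress as a fraction of the work done; a minimal example (hypothetical, only for illustration):

```
def print_progress(fraction):
    # Called repeatedly with the fraction of work completed so far.
    print(f'{fraction*100:.1f}% done')
```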
```
#export
from collections import OrderedDict
def generate_fasta_list(fasta_paths:list, callback = None, **kwargs)->tuple:
"""
Function to generate a database from a fasta file
Args:
fasta_paths (str or list of str): fasta path or a list of fasta paths.
callback (function, optional): callback function.
Returns:
fasta_list (list of dict): list of protein entry dict {id:str, name:str, description:str, sequence:str}.
fasta_dict (dict{int:dict}): the key is the protein id, the value is the protein entry dict.
"""
fasta_list = []
fasta_dict = OrderedDict()
fasta_index = 0
if type(fasta_paths) is str:
fasta_paths = [fasta_paths]
n_fastas = 1
elif type(fasta_paths) is list:
n_fastas = len(fasta_paths)
for f_id, fasta_file in enumerate(fasta_paths):
n_entries = read_fasta_file_entries(fasta_file)
fasta_generator = read_fasta_file(fasta_file)
for element in fasta_generator:
check_sequence(element, constants.AAs)
fasta_list.append(element)
fasta_dict[fasta_index] = element
fasta_index += 1
return fasta_list, fasta_dict
#hide
def test_generate_fasta_list():
fasta_list, fasta_dict = generate_fasta_list('../testfiles/test.fasta')
assert len(fasta_list) == 17
assert fasta_dict[0]['name'] == 'sp|A0PJZ0|A20A5_HUMAN'
test_generate_fasta_list()
#export
def generate_database(mass_dict:dict, fasta_paths:list, callback = None, **kwargs)->tuple:
"""
Function to generate a database from a fasta file
Args:
mass_dict (dict): not used, will be removed in the future.
fasta_paths (str or list of str): fasta path or a list of fasta paths.
callback (function, optional): callback function.
Returns:
to_add (list of str): non-redundant (modified) peptides to be added.
pept_dict (dict{str:list of int}): the key is peptide sequence, and the value is protein id list indicating where the peptide is from.
fasta_dict (dict{int:dict}): the key is the protein id, the value is the protein entry dict {id:str, name:str, description:str, sequence:str}.
"""
to_add = List()
fasta_dict = OrderedDict()
fasta_index = 0
pept_dict = {}
if type(fasta_paths) is str:
fasta_paths = [fasta_paths]
n_fastas = 1
elif type(fasta_paths) is list:
n_fastas = len(fasta_paths)
for f_id, fasta_file in enumerate(fasta_paths):
n_entries = read_fasta_file_entries(fasta_file)
fasta_generator = read_fasta_file(fasta_file)
for element in fasta_generator:
fasta_dict[fasta_index] = element
mod_peptides = generate_peptides(element["sequence"], **kwargs)
pept_dict, added_seqs = add_to_pept_dict(pept_dict, mod_peptides, fasta_index)
if len(added_seqs) > 0:
to_add.extend(added_seqs)
fasta_index += 1
if callback:
callback(fasta_index/n_entries/n_fastas+f_id)
return to_add, pept_dict, fasta_dict
#hide
def test_generate_database():
from alphapept.constants import mass_dict
from alphapept.settings import load_settings
from alphapept.paths import DEFAULT_SETTINGS_PATH
settings = load_settings(DEFAULT_SETTINGS_PATH)
to_add, pept_dict, fasta_dict = generate_database(mass_dict, ['../testfiles/test.fasta'], **settings['fasta'])
    assert len(to_add) == 3078 #This will break if the default settings are changed
assert 'GQTVLGSIDHLYTGSGYR' in pept_dict
assert fasta_dict[0]['name'] == 'sp|A0PJZ0|A20A5_HUMAN'
test_generate_database()
#export
def generate_spectra(to_add:list, mass_dict:dict, callback = None)->list:
"""
Function to generate spectra list database from a fasta file
Args:
to_add (list):
mass_dict (dict{str:float}): amino acid mass dict.
callback (function, optional): callback function. (Default: None)
Returns:
list (of tuple): list of (peptide mass, peptide, fragment masses, fragment_types), see get_fragmass.
"""
if len(to_add) > 0:
if callback: #Chunk the spectra to get a progress_bar
spectra = []
stepsize = int(np.ceil(len(to_add)/1000))
for i in range(0, len(to_add), stepsize):
sub = to_add[i:i + stepsize]
spectra.extend(get_spectra(sub, mass_dict))
callback((i+1)/len(to_add))
else:
spectra = get_spectra(to_add, mass_dict)
else:
raise ValueError("No spectra to generate.")
return spectra
#hide
def test_generate_spectra():
from alphapept.constants import mass_dict
from numba.typed import List
to_add = List(['PEPTIDE'])
spectra = generate_spectra(to_add, mass_dict)
assert np.allclose(spectra[0][0], 799.35996420346)
assert spectra[0][1] == 'PEPTIDE'
test_generate_spectra()
```
## Parallelized version
To speed up spectrum generation, one can use the parallelized version. The function `generate_database_parallel` reads an entire FASTA file and splits it into multiple blocks. Each block will be processed, and the generated pept_dicts will be merged.
```
#export
from typing import Generator
def block_idx(len_list:int, block_size:int = 1000)->list:
"""
Helper function to split length into blocks
Args:
len_list (int): list length.
block_size (int, optional, default 1000): size per block.
Returns:
list[(int, int)]: list of (start, end) positions of blocks.
"""
blocks = []
start = 0
end = 0
while end <= len_list:
end += block_size
blocks.append((start, end))
start = end
return blocks
def blocks(l:int, n:int)->Generator[list, None, None]:
"""
Helper function to create blocks from a given list
Args:
l (list): the list
n (int): size per block
Returns:
Generator: List with splitted elements
"""
n = max(1, n)
return (l[i:i+n] for i in range(0, len(l), n))
#hide
def test_block_idx():
assert block_idx(100, 50) == [(0, 50), (50, 100), (100, 150)]
test_block_idx()
def test_blocks():
assert list(blocks([1,2,3,4], 2)) == [[1, 2], [3, 4]]
test_blocks()
#export
from multiprocessing import Pool
from alphapept import constants
mass_dict = constants.mass_dict
#This function is a wrapper function and to be tested by the integration test
def digest_fasta_block(to_process:tuple)-> (list, dict):
"""
Digest and create spectra for a whole fasta_block for multiprocessing. See generate_database_parallel.
"""
fasta_index, fasta_block, settings = to_process
to_add = List()
f_index = 0
pept_dict = {}
for element in fasta_block:
sequence = element["sequence"]
mod_peptides = generate_peptides(sequence, **settings['fasta'])
pept_dict, added_peptides = add_to_pept_dict(pept_dict, mod_peptides, fasta_index+f_index)
if len(added_peptides) > 0:
to_add.extend(added_peptides)
f_index += 1
spectra = []
if len(to_add) > 0:
for specta_block in blocks(to_add, settings['fasta']['spectra_block']):
spectra.extend(generate_spectra(specta_block, mass_dict))
return (spectra, pept_dict)
import alphapept.performance
#This function is a wrapper function and to be tested by the integration test
def generate_database_parallel(settings:dict, callback = None):
"""
Function to generate a database from a fasta file in parallel.
Args:
settings: alphapept settings.
Returns:
list: theoretical spectra. See generate_spectra()
dict: peptide dict. See add_to_pept_dict()
dict: fasta_dict. See generate_fasta_list()
"""
n_processes = alphapept.performance.set_worker_count(
worker_count=settings['general']['n_processes'],
set_global=False
)
fasta_list, fasta_dict = generate_fasta_list(fasta_paths = settings['experiment']['fasta_paths'], **settings['fasta'])
logging.info(f'FASTA contains {len(fasta_list):,} entries.')
blocks = block_idx(len(fasta_list), settings['fasta']['fasta_block'])
to_process = [(idx_start, fasta_list[idx_start:idx_end], settings) for idx_start, idx_end in blocks]
spectra = []
pept_dicts = []
with Pool(n_processes) as p:
max_ = len(to_process)
for i, _ in enumerate(p.imap_unordered(digest_fasta_block, to_process)):
if callback:
callback((i+1)/max_)
spectra.extend(_[0])
pept_dicts.append(_[1])
spectra = sorted(spectra, key=lambda x: x[1])
spectra_set = [spectra[idx] for idx in range(len(spectra)-1) if spectra[idx][1] != spectra[idx+1][1]]
spectra_set.append(spectra[-1])
pept_dict = merge_pept_dicts(pept_dicts)
return spectra_set, pept_dict, fasta_dict
```
### Parallel search on large files
In some cases (e.g., a lot of modifications or very large FASTA files), it will not be useful to save the database as it would consume too much memory. Here, we use the function `search_parallel` from the search module. It creates theoretical spectra on the fly and directly searches against them. As we cannot create a pept_dict here, we need to create one from the search results. For this, we group the identified peptides by sequence, collect their FASTA indices, and generate a lookup dictionary that can be used as a pept_dict.
> Note that we are passing the settings argument here. Search results should be stored in the corresponding path in the `*.hdf` file.
```
#export
#This function is a wrapper function and to be tested by the integration test
def pept_dict_from_search(settings:dict):
"""
Generates a peptide dict from a large search.
"""
paths = settings['experiment']['file_paths']
bases = [os.path.splitext(_)[0]+'.ms_data.hdf' for _ in paths]
all_dfs = []
for _ in bases:
try:
df = alphapept.io.MS_Data_File(_).read(dataset_name="peptide_fdr")
except KeyError:
df = pd.DataFrame()
if len(df) > 0:
all_dfs.append(df)
if sum([len(_) for _ in all_dfs]) == 0:
raise ValueError("No sequences present to concatenate.")
df = pd.concat(all_dfs)
df['fasta_index'] = df['fasta_index'].str.split(',')
lst_col = 'fasta_index'
df_ = pd.DataFrame({
col:np.repeat(df[col].values, df[lst_col].str.len())
for col in df.columns.drop(lst_col)}
).assign(**{lst_col:np.concatenate(df[lst_col].values)})[df.columns]
df_['fasta_index'] = df_['fasta_index'].astype('int')
df_grouped = df_.groupby(['sequence'])['fasta_index'].unique()
pept_dict = {}
for keys, vals in zip(df_grouped.index, df_grouped.values):
pept_dict[keys] = vals.tolist()
return pept_dict
```
## Saving
To save the generated spectra, we rely on the HDF format. For this, we create a dictionary and save all the generated elements. The container will contain the following elements:
* `precursors`: An array containing the precursor masses
* `seqs`: An array containing the peptide sequences for the precursor masses
* `pept_dict`: The peptide dictionary to look up the peptides and return their FASTA indices (stored in the `peptides` group as `sequences`, `protein_indptr` and `protein_indices`)
* `fasta_dict`: The FASTA dictionary to look up the FASTA entry for a given index (stored as the `proteins` dataset)
* `fragmasses`: A flat array containing the fragment masses of all peptides, concatenated
* `fragtypes`: A flat array containing the fragment types (positive values are b-ions, negative values are y-ions, see `get_fragmass`)
* `indices`: An integer array with the cumulative offsets into the fragment arrays. This is needed to quickly slice the data for a single peptide.
All arrays are sorted according to the precursor mass.
> Note: To access the dictionaries such as `pept_dict` or `fasta_dict`, one needs to extract them using the `.item()` method like so: `container["pept_dict"].item()`.
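A short usage sketch (the database path is hypothetical; `read_database` is defined below):

```
db_data = read_database('database.hdf')
pept_dict = db_data['pept_dict'].item()    # 0-d object array -> dict
fasta_dict = db_data['fasta_dict'].item()
protein_ids = pept_dict['PEPTIDE']         # FASTA indices for an example peptide
```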
```
#export
import alphapept.io
import pandas as pd
def save_database(spectra:list, pept_dict:dict, fasta_dict:dict, database_path:str, **kwargs):
"""
Function to save a database to the *.hdf format. Write the database into hdf.
Args:
spectra (list): list: theoretical spectra. See generate_spectra().
pept_dict (dict): peptide dict. See add_to_pept_dict().
fasta_dict (dict): fasta_dict. See generate_fasta_list().
database_path (str): Path to database.
"""
precmasses, seqs, fragmasses, fragtypes = zip(*spectra)
sortindex = np.argsort(precmasses)
fragmasses = np.array(fragmasses, dtype=object)[sortindex]
fragtypes = np.array(fragtypes, dtype=object)[sortindex]
lens = [len(_) for _ in fragmasses]
n_frags = sum(lens)
frags = np.zeros(n_frags, dtype=fragmasses[0].dtype)
frag_types = np.zeros(n_frags, dtype=fragtypes[0].dtype)
indices = np.zeros(len(lens) + 1, np.int64)
indices[1:] = lens
indices = np.cumsum(indices)
#Fill data
for _ in range(len(indices)-1):
start = indices[_]
end = indices[_+1]
frags[start:end] = fragmasses[_]
frag_types[start:end] = fragtypes[_]
to_save = {}
to_save["precursors"] = np.array(precmasses)[sortindex]
to_save["seqs"] = np.array(seqs, dtype=object)[sortindex]
to_save["proteins"] = pd.DataFrame(fasta_dict).T
to_save["fragmasses"] = frags
to_save["fragtypes"] = frag_types
to_save["indices"] = indices
db_file = alphapept.io.HDF_File(database_path, is_new_file=True)
for key, value in to_save.items():
db_file.write(value, dataset_name=key)
peps = np.array(list(pept_dict), dtype=object)
indices = np.empty(len(peps) + 1, dtype=np.int64)
indices[0] = 0
indices[1:] = np.cumsum([len(pept_dict[i]) for i in peps])
proteins = np.concatenate([pept_dict[i] for i in peps])
db_file.write("peptides")
db_file.write(
peps,
dataset_name="sequences",
group_name="peptides"
)
db_file.write(
indices,
dataset_name="protein_indptr",
group_name="peptides"
)
db_file.write(
proteins,
dataset_name="protein_indices",
group_name="peptides"
)
#export
import collections
def read_database(database_path:str, array_name:str=None)->dict:
"""
Read database from hdf file.
Args:
database_path (str): hdf database file generate by alphapept.
array_name (str): the dataset name to read
return:
dict: key is the dataset_name in hdf file, value is the python object read from the dataset_name
"""
db_file = alphapept.io.HDF_File(database_path)
if array_name is None:
db_data = {
key: db_file.read(
dataset_name=key
) for key in db_file.read() if key not in (
"proteins",
"peptides"
)
}
db_data["fasta_dict"] = np.array(
collections.OrderedDict(db_file.read(dataset_name="proteins").T)
)
peps = db_file.read(dataset_name="sequences", group_name="peptides")
protein_indptr = db_file.read(
dataset_name="protein_indptr",
group_name="peptides"
)
protein_indices = db_file.read(
dataset_name="protein_indices",
group_name="peptides"
)
db_data["pept_dict"] = np.array(
{
pep: (protein_indices[s: e]).tolist() for pep, s, e in zip(
peps,
protein_indptr[:-1],
protein_indptr[1:],
)
}
)
db_data["seqs"] = db_data["seqs"].astype(str)
else:
db_data = db_file.read(dataset_name=array_name)
return db_data
#hide
def test_database_io():
from alphapept.constants import mass_dict
from numba.typed import List
to_add = List(['PEPTIDE'])
spectra = generate_spectra(to_add, mass_dict)
fasta_list, fasta_dict = generate_fasta_list('../testfiles/test.fasta')
database_path = '../testfiles/testdb.hdf'
save_database(spectra, pept_dict, fasta_dict, database_path)
assert list(read_database(database_path, 'seqs')) == ['PEPTIDE']
assert np.allclose(list(read_database(database_path, 'precursors'))[0], spectra[0][0])
test_database_io()
#hide
from nbdev.export import *
notebook2script()
```
|
github_jupyter
|
# default_exp fasta
#hide
from nbdev.showdoc import *
#export
from alphapept import constants
import re
def get_missed_cleavages(sequences:list, n_missed_cleavages:int) -> list:
"""
Combine cleaved sequences to get sequences with missed cleavages
Args:
sequences (list of str): the list of cleaved sequences without missed cleavages.
n_missed_cleavages (int): the number of missed cleavage sites.
Returns:
list (of str): the sequences with missed cleavages.
"""
missed = []
for k in range(len(sequences)-n_missed_cleavages):
missed.append(''.join(sequences[k-1:k+n_missed_cleavages]))
return missed
def cleave_sequence(
sequence:str="",
n_missed_cleavages:int=0,
protease:str="trypsin",
pep_length_min:int=6,
pep_length_max:int=65,
**kwargs
)->list:
"""
Cleave a sequence with a given protease. Filters peptides by minimum and maximum length.
Args:
sequence (str): the given (protein) sequence.
n_missed_cleavages (int): the number of max missed cleavages.
protease (str): the protease/enzyme name, the regular expression can be found in alphapept.constants.protease_dict.
pep_length_min (int): min peptide length.
pep_length_max (int): max peptide length.
Returns:
list (of str): cleaved peptide sequences with missed cleavages.
"""
proteases = constants.protease_dict
pattern = proteases[protease]
p = re.compile(pattern)
cutpos = [m.start()+1 for m in p.finditer(sequence)]
cutpos.insert(0,0)
cutpos.append(len(sequence))
base_sequences = [sequence[cutpos[i]:cutpos[i+1]] for i in range(len(cutpos)-1)]
sequences = base_sequences.copy()
for i in range(1, n_missed_cleavages+1):
sequences.extend(get_missed_cleavages(base_sequences, i))
sequences = [_ for _ in sequences if len(_)>=pep_length_min and len(_)<=pep_length_max]
return sequences
protease = "trypsin"
n_missed_cleavages = 0
pep_length_min, pep_length_max = 6, 65
cleave_sequence('ABCDEFGHIJKLMNOPQRST', n_missed_cleavages, protease, pep_length_min, pep_length_max)
#hide
def test_cleave_sequence():
protease = "trypsin"
pep_length_min, pep_length_max = 6, 65
assert set(cleave_sequence('ABCDEFGHIJKLMNOPQRST', 0, protease, pep_length_min, pep_length_max)) == set(['ABCDEFGHIJK', 'LMNOPQR'])
assert set(cleave_sequence('ABCDEFGHIJKLMNOPQRST', 1, protease, pep_length_min, pep_length_max)) == set(['ABCDEFGHIJK', 'LMNOPQR', 'ABCDEFGHIJKLMNOPQR'])
test_cleave_sequence()
#export
import re
from alphapept import constants
def count_missed_cleavages(sequence:str="", protease:str="trypsin", **kwargs) -> int:
"""
Counts the number of missed cleavages for a given sequence and protease
Args:
sequence (str): the given (peptide) sequence.
protease (str): the protease/enzyme name, the regular expression can be found in alphapept.constants.protease_dict.
Returns:
int: the number of missed cleavages.
"""
proteases = constants.protease_dict
protease = proteases[protease]
p = re.compile(protease)
n_missed = len(p.findall(sequence))
return n_missed
def count_internal_cleavages(sequence:str="", protease:str="trypsin", **kwargs) -> int:
"""
Counts the number of internal cleavage sites for a given sequence and protease
Args:
sequence (str): the given (peptide) sequence.
protease (str): the protease/enzyme name, the regular expression can be found in alphapept.constants.protease_dict.
Returns:
int (0 or 1): 1 if the peptide's C-terminal residue does not match the protease cleavage pattern (i.e. the peptide stems from an internal cleavage), otherwise 0.
"""
proteases = constants.protease_dict
protease = proteases[protease]
match = re.search(protease,sequence[-1]+'_')
if match:
n_internal = 0
else:
n_internal = 1
return n_internal
protease = "trypsin"
print(count_missed_cleavages('ABCDEFGHIJKLMNOPQRST', protease))
protease = "trypsin"
print(count_internal_cleavages('ABCDEFGHIJKLMNOPQRST', protease))
#hide
def test_get_missed_cleavages():
assert count_missed_cleavages('ABCDEFGHIJKLMNOPQRST', 'trypsin') == 2
assert count_missed_cleavages('ABCDEFGHIJKLMNOPQRST', 'clostripain') == 1
test_get_missed_cleavages()
def test_get_internal_cleavages():
assert count_internal_cleavages('ABCDEFGHIJKLMNOPQRST', 'trypsin') == 1
assert count_internal_cleavages('ABCDEFGHIJKLMNOPQRSTK', 'trypsin') == 0
test_get_internal_cleavages()
#export
from numba import njit
from numba.typed import List
@njit
def parse(peptide:str)->List:
"""
Parser to parse peptide strings
Args:
peptide (str): modified peptide sequence.
Return:
List (numba.typed.List): a list of amino acids and modified amino acids.
"""
if "_" in peptide:
peptide = peptide.split("_")[0]
parsed = List()
string = ""
for i in peptide:
string += i
if i.isupper():
parsed.append(string)
string = ""
return parsed
def list_to_numba(a_list) -> List:
"""
Convert Python list to numba.typed.List
Args:
a_list (list): Python list.
Return:
List (numba.typed.List): Numba typed list.
"""
numba_list = List()
for element in a_list:
numba_list.append(element)
return numba_list
print(parse('PEPTIDE'))
print(parse('PEPoxTIDE'))
#hide
def test_parse():
peptide = "PEPTIDE"
assert parse(peptide) == list_to_numba(["P", "E", "P", "T", "I", "D", "E"])
peptide = "PEPoxTIDE"
assert parse(peptide) == list_to_numba(["P", "E", "P", "oxT", "I", "D", "E"])
peptide = "PEPTIDE_decoy"
assert parse(peptide) == list_to_numba(["P", "E", "P", "T", "I", "D", "E"])
test_parse()
#export
@njit
def get_decoy_sequence(peptide:str, pseudo_reverse:bool=False, AL_swap:bool=False, KR_swap:bool = False)->str:
"""
Reverses a sequence and adds the '_decoy' tag.
Args:
peptide (str): modified peptide to be reversed.
pseudo_reverse (bool): If True, reverse the peptide but keep the C-terminal amino acid; otherwise reverse the whole peptide. (Default: False)
AL_swap (bool): replace A with L, and vice versa. (Default: False)
KR_swap (bool): replace K with R at the C-terminal, and vice versa. (Default: False)
Returns:
str: reversed peptide ending with the '_decoy' tag.
"""
pep = parse(peptide)
if pseudo_reverse:
rev_pep = pep[:-1][::-1]
rev_pep.append(pep[-1])
else:
rev_pep = pep[::-1]
if AL_swap:
rev_pep = swap_AL(rev_pep)
if KR_swap:
rev_pep = swap_KR(rev_pep)
rev_pep = "".join(rev_pep)
return rev_pep
@njit
def swap_KR(peptide:str)->str:
"""
Swaps a terminal K or R. Note: Only if AA is not modified.
Args:
peptide (str): peptide.
Returns:
str: peptide with swapped KRs.
"""
if peptide[-1] == 'K':
peptide[-1] = 'R'
elif peptide[-1] == 'R':
peptide[-1] = 'K'
return peptide
@njit
def swap_AL(peptide:str)->str:
"""
Swaps an A with an L. Note: only if the AA is not modified.
Args:
peptide (str): peptide.
Returns:
str: peptide with swapped ALs.
"""
i = 0
while i < len(peptide) - 1:
if peptide[i] == "A":
peptide[i] = peptide[i + 1]
peptide[i + 1] = "A"
i += 1
elif peptide[i] == "L":
peptide[i] = peptide[i + 1]
peptide[i + 1] = "L"
i += 1
i += 1
return peptide
def get_decoys(peptide_list, pseudo_reverse=False, AL_swap=False, KR_swap = False, **kwargs)->list:
"""
Wrapper to get decoys for lists of peptides
Args:
peptide_list (list): the list of peptides to be reversed.
pseudo_reverse (bool): If True, reverse the peptide but keep the C-terminal amino acid; otherwise reverse the whole peptide. (Default: False)
AL_swap (bool): replace A with L, and vice versa. (Default: False)
KR_swap (bool): replace K with R at the C-terminal, and vice versa. (Default: False)
Returns:
list (of str): a list of decoy peptides
"""
decoys = []
decoys.extend([get_decoy_sequence(peptide, pseudo_reverse, AL_swap, KR_swap) for peptide in peptide_list])
return decoys
def add_decoy_tag(peptides):
"""
Adds a '_decoy' tag to a list of peptides
"""
return [peptide + "_decoy" for peptide in peptides]
print(swap_AL(parse('KKKALKKK')))
print(swap_KR(parse('AAAKRAAA')))
print(get_decoy_sequence('PEPTIDE'))
print(get_decoys(['ABC','DEF','GHI']))
#hide
def test_swap_AL():
assert swap_AL(parse("ABCDEF")) == parse("BACDEF")
assert swap_AL(parse("GHIKLM")) == parse("GHIKML")
assert swap_AL(parse("FEDCBA")) == parse("FEDCBA")
assert swap_AL(parse("GHIKL")) == parse("GHIKL")
assert swap_AL(parse("ABCDEFGHIKLM")) == parse("BACDEFGHIKML")
assert swap_AL(parse("BBAcCD")) == parse("BBcCAD")
assert swap_AL(parse("FEDCBA")) == parse("FEDCBA")
test_swap_AL()
def test_swapKR():
assert swap_KR(parse("ABCDEK")) == parse("ABCDER")
assert swap_KR(parse("ABCDER")) == parse("ABCDEK")
assert swap_KR(parse("ABCDEF")) == parse("ABCDEF")
assert swap_KR(parse("KABCDEF")) == parse("KABCDEF")
assert swap_KR(parse("KABCRDEF")) == parse("KABCRDEF")
assert swap_KR(parse("KABCKDEF")) == parse("KABCKDEF")
test_swapKR()
def test_get_decoy_sequence():
peptide = "PEPTIDER"
assert get_decoy_sequence(peptide, pseudo_reverse=True) == "EDITPEPR"
assert get_decoy_sequence(peptide) == "REDITPEP"
assert get_decoy_sequence(peptide, KR_swap=True, pseudo_reverse=True) == "EDITPEPK"
test_get_decoy_sequence()
#export
def add_fixed_mods(seqs:list, mods_fixed:list, **kwargs)->list:
"""
Adds fixed modifications to sequences.
Args:
seqs (list of str): sequences to add fixed modifications
mods_fixed (list of str): the list of fixed modifications. Each modification string must be lower case, except that its last letter is the amino acid to be modified (e.g. oxidation on M should be oxM).
Returns:
list (of str): the list of the modified sequences. 'ABCDEF' with fixed mod 'cC' will be 'ABcCDEF'.
"""
if not mods_fixed:
return seqs
else:
for mod_aa in mods_fixed:
seqs = [seq.replace(mod_aa[-1], mod_aa) for seq in seqs]
return seqs
mods_fixed = ['cC','bB']
peptide_list = ['ABCDEF']
add_fixed_mods(peptide_list, mods_fixed)
mods_fixed = ['aA','cC','bB']
peptide_list = ['cABCDEF']
add_fixed_mods(peptide_list, mods_fixed)
#hide
def test_add_fixed_mods():
mods_fixed = ['cC']
peptide_list = ['ABCDEF']
peptides_new = add_fixed_mods(peptide_list, [])
assert peptides_new == peptide_list
peptides_new = add_fixed_mods(peptide_list, mods_fixed)
assert peptides_new == ['ABcCDEF']
test_add_fixed_mods()
#export
def add_variable_mod(peps:list, mods_variable_dict:dict)->list:
"""
Function to add variable modification to a list of peptides.
Args:
peps (list): List of peptides.
mods_variable_dict (dict): Dictionary with modifications. The key is the AA, and the value is the modified form (e.g. oxM).
Returns:
list : the list of peptide forms for the given peptide.
"""
peptides = []
for pep_ in peps:
pep, min_idx = pep_
for mod in mods_variable_dict:
for i in range(len(pep)):
if i >= min_idx:
c = pep[i]
if c == mod:
peptides.append((pep[:i]+[mods_variable_dict[c]]+pep[i+1:], i))
return peptides
def get_isoforms(mods_variable_dict:dict, peptide:str, isoforms_max:int, n_modifications_max:int=None)->list:
"""
Function to generate modified forms (with variable modifications) for a given peptide - returns a list of modified forms.
The original sequence is included in the list
Args:
mods_variable_dict (dict): Dictionary with modifications. The key is the AA, and the value is the modified form (e.g. oxM).
peptide (str): the peptide sequence to generate modified forms.
isoforms_max (int): max number of modified forms to generate per peptide.
n_modifications_max (int, optional): max number of variable modifications per peptide.
Returns:
list (of str): the list of peptide forms for the given peptide
"""
pep = list(parse(peptide))
peptides = [pep]
new_peps = [(pep, 0)]
iteration = 0
while len(peptides) < isoforms_max:
if n_modifications_max:
if iteration >= n_modifications_max:
break
new_peps = add_variable_mod(new_peps, mods_variable_dict)
if len(new_peps) == 0:
break
if len(new_peps) > 1:
if new_peps[0][0] == new_peps[1][0]:
new_peps = new_peps[0:1]
for _ in new_peps:
if len(peptides) < isoforms_max:
peptides.append(_[0])
iteration +=1
peptides = [''.join(_) for _ in peptides]
return peptides
mods_variable_dict = {'S':'pS','P':'pP','M':'oxM'}
isoforms_max = 1024
print(get_isoforms(mods_variable_dict, 'PEPTIDE', isoforms_max))
print(get_isoforms(mods_variable_dict, 'AMAMA', isoforms_max))
print(get_isoforms(mods_variable_dict, 'AMAMA', isoforms_max, n_modifications_max=1))
#hide
def test_get_isoforms():
mods_variable_dict = {'S':'pS','P':'pP'}
peptide = 'PEPTIDE'
isoforms_max = 1024
get_isoforms(mods_variable_dict, peptide, isoforms_max)
assert len(get_isoforms(mods_variable_dict, peptide, isoforms_max)) == 4
test_get_isoforms()
#export
from itertools import chain
def add_variable_mods(peptide_list:list, mods_variable:list, isoforms_max:int, n_modifications_max:int, **kwargs)->list:
"""
Add variable modifications to the peptide list
Args:
peptide_list (list of str): peptide list.
mods_variable (list of str): modification list.
isoforms_max (int): max number of modified forms per peptide sequence.
n_modifications_max (int): max number of variable modifications per peptide.
Returns:
list (of str): list of modified sequences for the given peptide list.
"""
#the peptide_list originates from one peptide already -> limit isoforms here
max_ = isoforms_max - len(peptide_list) + 1
if max_ < 0:
max_ = 0
if not mods_variable:
return peptide_list
else:
mods_variable_r = {}
for _ in mods_variable:
mods_variable_r[_[-1]] = _
peptide_list = [get_isoforms(mods_variable_r, peptide, max_, n_modifications_max) for peptide in peptide_list]
return list(chain.from_iterable(peptide_list))
peptide_list = ['AMA', 'AAC']
mods_variable = ['oxM','amC']
isoforms_max = 1024
n_modifications_max = 10
add_variable_mods(peptide_list, mods_variable, isoforms_max, n_modifications_max)
#hide
def test_add_variable_mods():
mods_variable = ['oxM']
peptide = ['AMAMA']
peptides_new = add_variable_mods(peptide, [], 1024, None)
assert peptides_new == peptide
peptides_new = add_variable_mods(peptide, mods_variable, 1024, None)
assert set(['AMAMA', 'AMAoxMA', 'AoxMAMA', 'AoxMAoxMA']) == set(peptides_new)
# Check if number of isoforms is correct
peptides_new = add_variable_mods(peptide, mods_variable, 3, None)
assert len(peptides_new) == 3
peptide_list = ['PEPTIDE']
mods_variable = ['pP','pS']
isoforms_max = 1024
peptides_new = add_variable_mods(peptide_list, mods_variable, isoforms_max, None)
assert len(peptides_new) == 4
test_add_variable_mods()
#export
def add_fixed_mod_terminal(peptides:list, mod:str)->list:
"""
Adds fixed terminal modifications
Args:
peptides (list of str): peptide list.
mod (str): n-term mod contains '<^' (e.g. a<^ for Acetyl@N-term); c-term mod contains '>^'.
Raises:
"Invalid fixed terminal modification 'mod name'" for the given mod.
Returns:
list (of str): list of peptides with modification added.
"""
# < for left side (N-Term), > for right side (C-Term)
if "<^" in mod: #Any n-term, e.g. a<^
peptides = [mod[:-2] + peptide for peptide in peptides]
elif ">^" in mod: #Any c-term, e.g. a>^
peptides = [peptide[:-1] + mod[:-2] + peptide[-1] for peptide in peptides]
elif "<" in mod: #only if specific AA, e.g. ox<C
peptides = [peptide[0].replace(mod[-1], mod[:-2]+mod[-1]) + peptide[1:] for peptide in peptides]
elif ">" in mod:
peptides = [peptide[:-1] + peptide[-1].replace(mod[-1], mod[:-2]+mod[-1]) for peptide in peptides]
else:
# This should not happen
raise ("Invalid fixed terminal modification {}.".format(mod))
return peptides
def add_fixed_mods_terminal(peptides:list, mods_fixed_terminal:list, **kwargs)->list:
"""
Wrapper to add fixed mods on sequences and lists of mods
Args:
peptides (list of str): peptide list.
mods_fixed_terminal (list of str): list of fixed terminal mods.
Raises:
"Invalid fixed terminal modification {mod}" exception for the given mod.
Returns:
list (of str): list of peptides with modification added.
"""
if mods_fixed_terminal == []:
return peptides
else:
# < for left side (N-Term), > for right side (C-Term)
for key in mods_fixed_terminal:
peptides = add_fixed_mod_terminal(peptides, key)
return peptides
peptide = ['AMAMA']
print(f'Starting with peptide {peptide}')
print('Any n-term modified with x (x<^):', add_fixed_mods_terminal(peptide, ['x<^']))
print('Any c-term modified with x (x>^):', add_fixed_mods_terminal(peptide, ['x>^']))
print('Only A on n-term modified with x (x<A):', add_fixed_mods_terminal(peptide, ['x<A']))
print('Only A on c-term modified with x (x<A):', add_fixed_mods_terminal(peptide, ['x>A']))
#hide
def test_add_fixed_mods_terminal():
peptide = ['AMAMA']
peptides_new = add_fixed_mods_terminal(peptide, [])
assert peptides_new == peptide
#Any N-term
peptides_new = add_fixed_mods_terminal(peptide, ['x<^'])
assert peptides_new == ['xAMAMA']
#Any C-term
peptides_new = add_fixed_mods_terminal(peptide, ['x>^'])
assert peptides_new == ['AMAMxA']
#Selected N-term
peptides_new = add_fixed_mods_terminal(peptide, ['x<A'])
assert peptides_new == ['xAMAMA']
peptides_new = add_fixed_mods_terminal(peptide, ['x<C'])
assert peptides_new == peptide
#Selected C-term
peptides_new = add_fixed_mods_terminal(peptide, ['x>A'])
assert peptides_new == ['AMAMxA']
peptides_new = add_fixed_mods_terminal(peptide, ['x>C'])
assert peptides_new == peptide
test_add_fixed_mods_terminal()
#export
def add_variable_mods_terminal(peptides:list, mods_variable_terminal:list, **kwargs)->list:
"""
Function to add variable terminal modifications.
Args:
peptides (list of str): peptide list.
mods_variable_terminal (list of str): list of variable terminal mods.
Returns:
list (of str): list of peptides with modification added.
"""
if not mods_variable_terminal:
return peptides
else:
new_peptides_n = peptides.copy()
for key in mods_variable_terminal:
if "<" in key:
# Only allow one variable mod on one end
new_peptides_n.extend(
add_fixed_mod_terminal(peptides, key)
)
new_peptides_n = get_unique_peptides(new_peptides_n)
# N complete, let's go for c-terminal
new_peptides_c = new_peptides_n
for key in mods_variable_terminal:
if ">" in key:
# Only allow one variable mod on one end
new_peptides_c.extend(
add_fixed_mod_terminal(new_peptides_n, key)
)
return get_unique_peptides(new_peptides_c)
def get_unique_peptides(peptides:list) -> list:
"""
Function to return unique elements from list.
Args:
peptides (list of str): peptide list.
Returns:
list (of str): list of peptides (unique).
"""
return list(set(peptides))
peptide_list = ['AMAMA']
add_variable_mods_terminal(peptide_list, ['x<^'])
#hide
def test_add_variable_mods_terminal():
peptide_list = ['AMAMA']
peptides_new = add_variable_mods_terminal(peptide_list, [])
assert peptides_new == peptide_list
#Any N-term
peptides_new = add_variable_mods_terminal(peptide_list, ['x<^'])
assert set(peptides_new) == set(['xAMAMA', 'AMAMA'])
test_add_variable_mods_terminal()
#export
def generate_peptides(peptide:str, **kwargs)->list:
"""
Wrapper to get modified peptides (fixed and variable mods) from a peptide.
Args:
peptide (str): the given peptide sequence.
Returns:
list (of str): all modified peptides.
TODO:
There can be some edge-cases which are not defined yet.
Example:
Setting the same fixed modification - once for all peptides and once for only terminal for the protein.
The modification will then be applied twice.
"""
mod_peptide = add_fixed_mods_terminal([peptide], kwargs['mods_fixed_terminal_prot'])
mod_peptide = add_variable_mods_terminal(mod_peptide, kwargs['mods_variable_terminal_prot'])
peptides = []
[peptides.extend(cleave_sequence(_, **kwargs)) for _ in mod_peptide]
peptides = [_ for _ in peptides if check_peptide(_, constants.AAs)]
isoforms_max = kwargs['isoforms_max']
all_peptides = []
for peptide in peptides: #1 per, limit the number of isoforms
#Regular peptides
mod_peptides = add_fixed_mods([peptide], **kwargs)
mod_peptides = add_fixed_mods_terminal(mod_peptides, **kwargs)
mod_peptides = add_variable_mods_terminal(mod_peptides, **kwargs)
kwargs['isoforms_max'] = isoforms_max - len(mod_peptides)
mod_peptides = add_variable_mods(mod_peptides, **kwargs)
all_peptides.extend(mod_peptides)
#Decoys:
decoy_peptides = get_decoys([peptide], **kwargs)
mod_peptides_decoy = add_fixed_mods(decoy_peptides, **kwargs)
mod_peptides_decoy = add_fixed_mods_terminal(mod_peptides_decoy, **kwargs)
mod_peptides_decoy = add_variable_mods_terminal(mod_peptides_decoy, **kwargs)
kwargs['isoforms_max'] = isoforms_max - len(mod_peptides_decoy)
mod_peptides_decoy = add_variable_mods(mod_peptides_decoy, **kwargs)
mod_peptides_decoy = add_decoy_tag(mod_peptides_decoy)
all_peptides.extend(mod_peptides_decoy)
return all_peptides
def check_peptide(peptide:str, AAs:set)->bool:
"""
Check if the peptide contains non-AA letters.
Args:
peptide (str): peptide sequence.
AAs (set): the set of legal amino acids. See alphapept.constants.AAs
Returns:
bool: True if all letters in the peptide is the subset of AAs, otherwise False
"""
if set([_ for _ in peptide if _.isupper()]).issubset(AAs):
return True
else:
return False
kwargs = {}
kwargs["protease"] = "trypsin"
kwargs["n_missed_cleavages"] = 2
kwargs["pep_length_min"] = 6
kwargs["pep_length_max"] = 27
kwargs["mods_variable"] = ["oxM"]
kwargs["mods_variable_terminal"] = []
kwargs["mods_fixed"] = ["cC"]
kwargs["mods_fixed_terminal"] = []
kwargs["mods_fixed_terminal_prot"] = []
kwargs["mods_variable_terminal_prot"] = ['a<^']
kwargs["isoforms_max"] = 1024
kwargs["n_modifications_max"] = None
generate_peptides('PEPTIDEM', **kwargs)
#hide
def test_generate_peptides():
kwargs = {}
kwargs["protease"] = "trypsin"
kwargs["n_missed_cleavages"] = 2
kwargs["pep_length_min"] = 6
kwargs["pep_length_max"] = 27
kwargs["mods_variable"] = ["oxM"]
kwargs["mods_variable_terminal"] = []
kwargs["mods_fixed"] = ["cC"]
kwargs["mods_fixed_terminal"] = []
kwargs["mods_fixed_terminal_prot"] = []
kwargs["mods_variable_terminal_prot"] = []
kwargs["isoforms_max"] = 1024
kwargs['pseudo_reverse'] = True
kwargs["n_modifications_max"] = None
peps = generate_peptides('PEPTIDEM', **kwargs)
assert set(peps) == set(['PEPTIDEM', 'PEPTIDEoxM', 'EDITPEPM_decoy', 'EDITPEPoxM_decoy'])
test_generate_peptides()
#export
from numba import njit
from numba.typed import List
import numpy as np
import numba
@njit
def get_precmass(parsed_pep:list, mass_dict:numba.typed.Dict)->float:
"""
Calculate the mass of the neutral precursor
Args:
parsed_pep (list or numba.typed.List of str): the list of amino acids and modified amino acids.
mass_dict (numba.typed.Dict): key is the amino acid or the modified amino acid, and the value is the mass.
Returns:
float: the peptide neutral mass.
"""
tmass = mass_dict["H2O"]
for _ in parsed_pep:
tmass += mass_dict[_]
return tmass
get_precmass(parse('PEPTIDE'), constants.mass_dict)
#hide
def test_get_precmass():
precmass = get_precmass(parse('PEPTIDE'), constants.mass_dict)
assert np.allclose(precmass, 799.3599642034599)
test_get_precmass()
#export
import numba
@njit
def get_fragmass(parsed_pep:list, mass_dict:numba.typed.Dict)->tuple:
"""
Calculate the masses of the fragment ions
Args:
parsed_pep (numba.typed.List of str): the list of amino acids and modified amino acids.
mass_dict (numba.typed.Dict): key is the amino acid or the modified amino acid, and the value is the mass.
Returns:
Tuple[np.ndarray(np.float64), np.ndarray(np.int8)]: the fragment masses and the fragment types (represented as np.int8).
For a fragment type, positive value means the b-ion, the value indicates the position (b1, b2, b3...); the negative value means
the y-ion, the absolute value indicates the position (y1, y2, ...).
"""
n_frags = (len(parsed_pep) - 1) * 2
frag_masses = np.zeros(n_frags, dtype=np.float64)
frag_type = np.zeros(n_frags, dtype=np.int8)
# b-ions > 0
n_frag = 0
frag_m = mass_dict["Proton"]
for idx, _ in enumerate(parsed_pep[:-1]):
frag_m += mass_dict[_]
frag_masses[n_frag] = frag_m
frag_type[n_frag] = (idx+1)
n_frag += 1
# y-ions < 0
frag_m = mass_dict["Proton"] + mass_dict["H2O"]
for idx, _ in enumerate(parsed_pep[::-1][:-1]):
frag_m += mass_dict[_]
frag_masses[n_frag] = frag_m
frag_type[n_frag] = -(idx+1)
n_frag += 1
return frag_masses, frag_type
get_fragmass(parse('PEPTIDE'), constants.mass_dict)
#hide
def test_get_fragmass():
frag_masses, frag_type = get_fragmass(parse('PEPTIDE'), constants.mass_dict)
ref_masses = np.array([ 98.06004033, 227.10263343, 324.15539729, 425.20307579,
538.28713979, 653.31408289, 148.06043425, 263.08737735,
376.17144135, 477.21911985, 574.27188371, 703.31447681])
assert np.allclose(frag_masses, ref_masses)
test_get_fragmass()
#export
def get_frag_dict(parsed_pep:list, mass_dict:dict)->dict:
"""
Calculate the masses of the fragment ions
Args:
parsed_pep (list or numba.typed.List of str): the list of amino acids and modified amino acids.
mass_dict (numba.typed.Dict): key is the amino acid or the modified amino acid, and the value is the mass.
Returns:
dict{str:float}: key is the fragment type (b1, b2, ..., y1, y2, ...), value is fragment mass.
"""
frag_dict = {}
frag_masses, frag_type = get_fragmass(parsed_pep, mass_dict)
for idx, _ in enumerate(frag_masses):
cnt = frag_type[idx]
if cnt > 0:
identifier = 'b'
else:
identifier = 'y'
cnt = -cnt
frag_dict[identifier+str(cnt)] = _
return frag_dict
get_frag_dict(parse('PEPTIDE'), constants.mass_dict)
#hide
def test_get_frag_dict():
refdict = {'b1': 98.06004032687,
'b2': 227.10263342686997,
'b3': 324.15539728686997,
'y1': 120.06551965033,
'y2': 217.11828351033,
'y3': 346.16087661033}
newdict = get_frag_dict(parse('PEPT'), constants.mass_dict)
for key in newdict.keys():
assert np.allclose(refdict[key], newdict[key])
test_get_frag_dict()
import matplotlib.pyplot as plt
%matplotlib inline
peptide = 'PEPTIDE'
frag_dict = get_frag_dict(parse(peptide), constants.mass_dict)
db_frag = list(frag_dict.values())
db_int = [100 for _ in db_frag]
plt.figure(figsize=(10,5))
plt.vlines(db_frag, 0, db_int, "k", label="DB", alpha=0.2)
for _ in frag_dict.keys():
plt.text(frag_dict[_], 104, _, fontsize=12, alpha = 0.8)
plt.title('Theoretical Spectrum for {}'.format(peptide))
plt.xlabel('Mass')
plt.ylabel('Intensity')
plt.ylim([0,110])
plt.show()
#export
@njit
def get_spectrum(peptide:str, mass_dict:numba.typed.Dict)->tuple:
"""
Get neutral peptide mass, fragment masses and fragment types for a peptide
Args:
peptide (str): the (modified) peptide.
mass_dict (numba.typed.Dict): key is the amino acid or modified amino acid, and the value is the mass.
Returns:
Tuple[float, str, np.ndarray(np.float64), np.ndarray(np.int8)]: (peptide mass, peptide, fragment masses, fragment_types), for fragment types, see get_fragmass.
"""
parsed_peptide = parse(peptide)
fragmasses, fragtypes = get_fragmass(parsed_peptide, mass_dict)
sortindex = np.argsort(fragmasses)
fragmasses = fragmasses[sortindex]
fragtypes = fragtypes[sortindex]
precmass = get_precmass(parsed_peptide, mass_dict)
return (precmass, peptide, fragmasses, fragtypes)
@njit
def get_spectra(peptides:numba.typed.List, mass_dict:numba.typed.Dict)->List:
"""
Get neutral peptide mass, fragment masses and fragment types for a list of peptides
Args:
peptides (list of str): the (modified) peptide list.
mass_dict (numba.typed.Dict): key is the amino acid or modified amino acid, and the value is the mass.
Note:
Exceptions raised for individual peptides are caught and those peptides are skipped.
Returns:
list of Tuple[float, str, np.ndarray(np.float64), np.ndarray(np.int8)]: See get_spectrum.
"""
spectra = List()
for i in range(len(peptides)):
try:
spectra.append(get_spectrum(peptides[i], mass_dict))
except Exception: #TODO: This is to fix edge cases when having multiple modifications on the same AA.
pass
return spectra
print(get_spectra(List(['PEPTIDE']), constants.mass_dict))
#hide
def test_get_spectra():
spectra = get_spectra(List(['PEPTIDE']), constants.mass_dict)
precmass, peptide, frags, fragtypes = spectra[0]
assert np.allclose(precmass, 799.3599642034599)
assert peptide == 'PEPTIDE'
assert np.allclose(frags, np.array([ 98.06004033, 148.06043425, 227.10263343, 263.08737735,
324.15539729, 376.17144135, 425.20307579, 477.21911985,
538.28713979, 574.27188371, 653.31408289, 703.31447681]))
test_get_spectra()
#export
from Bio import SeqIO
import os
from glob import glob
import logging
def read_fasta_file(fasta_filename:str=""):
"""
Read a FASTA file entry by entry.
Args:
fasta_filename (str): fasta.
Yields:
dict {id:str, name:str, description:str, sequence:str}: protein information.
"""
with open(fasta_filename, "rt") as handle:
iterator = SeqIO.parse(handle, "fasta")
while iterator:
try:
record = next(iterator)
parts = record.id.split("|") # pipe char
if len(parts) > 1:
id = parts[1]
else:
id = record.name
sequence = str(record.seq)
entry = {
"id": id,
"name": record.name,
"description": record.description,
"sequence": sequence,
}
yield entry
except StopIteration:
break
def read_fasta_file_entries(fasta_filename=""):
"""
Function to count entries in fasta file
Args:
fasta_filename (str): fasta.
Returns:
int: number of entries.
"""
with open(fasta_filename, "rt") as handle:
iterator = SeqIO.parse(handle, "fasta")
count = 0
while iterator:
try:
record = next(iterator)
count+=1
except StopIteration:
break
return count
def check_sequence(element:dict, AAs:set, verbose:bool = False)->bool:
"""
Checks whether a sequence from a FASTA entry contains valid AAs.
Args:
element (dict): fasta entry of the protein information.
AAs (set): a set of amino acid letters.
verbose (bool): logging the invalid amino acids.
Returns:
bool: False if the protein sequence contains non-AA letters, otherwise True.
"""
if not set(element['sequence']).issubset(AAs):
unknown = set(element['sequence']) - set(AAs)
if verbose:
logging.error(f'This FASTA entry contains unknown AAs {unknown} - Peptides with unknown AAs will be skipped: \n {element}\n')
return False
else:
return True
#load example fasta file
fasta_path = '../testfiles/test.fasta'
list(read_fasta_file(fasta_path))[0]
#export
def add_to_pept_dict(pept_dict:dict, new_peptides:list, i:int)->tuple:
"""
Add peptides to the peptide dictionary
Args:
pept_dict (dict): the key is peptide sequence, and the value is protein id list indicating where the peptide is from.
new_peptides (list): the list of peptides to be added to pept_dict.
i (int): the protein id where new_peptides are from.
Returns:
dict: same as the pept_dict in the arguments.
list (of str): the peptides from new_peptides that were not already in pept_dict.
"""
added_peptides = List()
for peptide in new_peptides:
if peptide in pept_dict:
if i not in pept_dict[peptide]:
pept_dict[peptide].append(i)
else:
pept_dict[peptide] = [i]
added_peptides.append(peptide)
return pept_dict, added_peptides
pept_dict = {}
new_peptides = ['ABC','DEF']
pept_dict, added_peptides = add_to_pept_dict(pept_dict, new_peptides, 0)
new_peptides = ['DEF','GHI']
pept_dict, added_peptides = add_to_pept_dict(pept_dict, new_peptides, 1)
print(pept_dict)
#hide
def test_add_to_pept_dict():
pept_dict = {}
new_peptides = ['ABC','DEF']
pept_dict, added_peptides = add_to_pept_dict(pept_dict, new_peptides, 0)
new_peptides = ['DEF','GHI']
pept_dict, added_peptides = add_to_pept_dict(pept_dict, new_peptides, 1)
assert pept_dict == {'ABC': [0], 'DEF': [0, 1], 'GHI': [1]}
test_add_to_pept_dict()
#export
def merge_pept_dicts(list_of_pept_dicts:list)->dict:
"""
Merge a list of peptide dicts into a single dict.
Args:
list_of_pept_dicts (list of dict): the key of the pept_dict is peptide sequence, and the value is protein id list indicating where the peptide is from.
Returns:
dict: the key is peptide sequence, and the value is protein id list indicating where the peptide is from.
"""
if len(list_of_pept_dicts) == 0:
raise ValueError('Need to pass at least 1 element.')
new_pept_dict = list_of_pept_dicts[0]
for pept_dict in list_of_pept_dicts[1:]:
for key in pept_dict.keys():
if key in new_pept_dict:
for element in pept_dict[key]:
new_pept_dict[key].append(element)
else:
new_pept_dict[key] = pept_dict[key]
return new_pept_dict
pept_dict_1 = {'ABC': [0], 'DEF': [0, 1], 'GHI': [1]}
pept_dict_2 = {'ABC': [3,4], 'JKL': [5, 6], 'MNO': [7]}
merge_pept_dicts([pept_dict_1, pept_dict_2])
#hide
def test_merge_pept_dicts():
pept_dict_1 = {'ABC': [0], 'DEF': [0, 1], 'GHI': [1]}
pept_dict_2 = {'ABC': [3,4], 'JKL': [5, 6], 'MNO': [7]}
assert merge_pept_dicts([pept_dict_1, pept_dict_2]) == {'ABC': [0, 3, 4], 'DEF': [0, 1], 'GHI': [1], 'JKL': [5, 6], 'MNO': [7]}
test_merge_pept_dicts()
#export
from collections import OrderedDict
def generate_fasta_list(fasta_paths:list, callback = None, **kwargs)->tuple:
"""
Function to generate a list of fasta entries and a fasta dict from fasta file(s).
Args:
fasta_paths (str or list of str): fasta path or a list of fasta paths.
callback (function, optional): callback function.
Returns:
fasta_list (list of dict): list of protein entry dict {id:str, name:str, description:str, sequence:str}.
fasta_dict (dict{int:dict}): the key is the protein id, the value is the protein entry dict.
"""
fasta_list = []
fasta_dict = OrderedDict()
fasta_index = 0
if type(fasta_paths) is str:
fasta_paths = [fasta_paths]
n_fastas = 1
elif type(fasta_paths) is list:
n_fastas = len(fasta_paths)
for f_id, fasta_file in enumerate(fasta_paths):
n_entries = read_fasta_file_entries(fasta_file)
fasta_generator = read_fasta_file(fasta_file)
for element in fasta_generator:
check_sequence(element, constants.AAs)
fasta_list.append(element)
fasta_dict[fasta_index] = element
fasta_index += 1
return fasta_list, fasta_dict
#hide
def test_generate_fasta_list():
fasta_list, fasta_dict = generate_fasta_list('../testfiles/test.fasta')
assert len(fasta_list) == 17
assert fasta_dict[0]['name'] == 'sp|A0PJZ0|A20A5_HUMAN'
test_generate_fasta_list()
#export
def generate_database(mass_dict:dict, fasta_paths:list, callback = None, **kwargs)->tuple:
"""
Function to generate a database from a fasta file
Args:
mass_dict (dict): not used, will be removed in the future.
fasta_paths (str or list of str): fasta path or a list of fasta paths.
callback (function, optional): callback function.
Returns:
to_add (list of str): non-redundant (modified) peptides to be added.
pept_dict (dict{str:list of int}): the key is peptide sequence, and the value is protein id list indicating where the peptide is from.
fasta_dict (dict{int:dict}): the key is the protein id, the value is the protein entry dict {id:str, name:str, description:str, sequence:str}.
"""
to_add = List()
fasta_dict = OrderedDict()
fasta_index = 0
pept_dict = {}
if type(fasta_paths) is str:
fasta_paths = [fasta_paths]
n_fastas = 1
elif type(fasta_paths) is list:
n_fastas = len(fasta_paths)
for f_id, fasta_file in enumerate(fasta_paths):
n_entries = read_fasta_file_entries(fasta_file)
fasta_generator = read_fasta_file(fasta_file)
for element in fasta_generator:
fasta_dict[fasta_index] = element
mod_peptides = generate_peptides(element["sequence"], **kwargs)
pept_dict, added_seqs = add_to_pept_dict(pept_dict, mod_peptides, fasta_index)
if len(added_seqs) > 0:
to_add.extend(added_seqs)
fasta_index += 1
if callback:
callback(fasta_index/n_entries/n_fastas+f_id)
return to_add, pept_dict, fasta_dict
#hide
def test_generate_database():
from alphapept.constants import mass_dict
from alphapept.settings import load_settings
from alphapept.paths import DEFAULT_SETTINGS_PATH
settings = load_settings(DEFAULT_SETTINGS_PATH)
to_add, pept_dict, fasta_dict = generate_database(mass_dict, ['../testfiles/test.fasta'], **settings['fasta'])
assert len(to_add) == 3078 #This will break if the default settings are changed
assert 'GQTVLGSIDHLYTGSGYR' in pept_dict
assert fasta_dict[0]['name'] == 'sp|A0PJZ0|A20A5_HUMAN'
test_generate_database()
#export
def generate_spectra(to_add:list, mass_dict:dict, callback = None)->list:
"""
Function to generate spectra list database from a fasta file
Args:
to_add (list): list of (modified) peptide sequences to generate spectra for.
mass_dict (dict{str:float}): amino acid mass dict.
callback (function, optional): callback function. (Default: None)
Returns:
list (of tuple): list of (peptide mass, peptide, fragment masses, fragment_types), see get_fragmass.
"""
if len(to_add) > 0:
if callback: #Chunk the spectra to get a progress_bar
spectra = []
stepsize = int(np.ceil(len(to_add)/1000))
for i in range(0, len(to_add), stepsize):
sub = to_add[i:i + stepsize]
spectra.extend(get_spectra(sub, mass_dict))
callback((i+1)/len(to_add))
else:
spectra = get_spectra(to_add, mass_dict)
else:
raise ValueError("No spectra to generate.")
return spectra
#hide
def test_generate_spectra():
from alphapept.constants import mass_dict
from numba.typed import List
to_add = List(['PEPTIDE'])
spectra = generate_spectra(to_add, mass_dict)
assert np.allclose(spectra[0][0], 799.35996420346)
assert spectra[0][1] == 'PEPTIDE'
test_generate_spectra()
#export
from typing import Generator
def block_idx(len_list:int, block_size:int = 1000)->list:
"""
Helper function to split length into blocks
Args:
len_list (int): list length.
block_size (int, optional, default 1000): size per block.
Returns:
list[(int, int)]: list of (start, end) positions of blocks.
"""
blocks = []
start = 0
end = 0
while end <= len_list:
end += block_size
blocks.append((start, end))
start = end
return blocks
def blocks(l:list, n:int)->Generator[list, None, None]:
"""
Helper function to create blocks from a given list
Args:
l (list): the list to split.
n (int): size per block.
Returns:
Generator: yields the blocks (sublists of up to n elements).
"""
n = max(1, n)
return (l[i:i+n] for i in range(0, len(l), n))
#hide
def test_block_idx():
assert block_idx(100, 50) == [(0, 50), (50, 100), (100, 150)]
test_block_idx()
def test_blocks():
assert list(blocks([1,2,3,4], 2)) == [[1, 2], [3, 4]]
test_blocks()
#export
from multiprocessing import Pool
from alphapept import constants
mass_dict = constants.mass_dict
#This function is a wrapper function and to be tested by the integration test
def digest_fasta_block(to_process:tuple)-> (list, dict):
"""
Digest and create spectra for a whole fasta_block for multiprocessing. See generate_database_parallel.
"""
fasta_index, fasta_block, settings = to_process
to_add = List()
f_index = 0
pept_dict = {}
for element in fasta_block:
sequence = element["sequence"]
mod_peptides = generate_peptides(sequence, **settings['fasta'])
pept_dict, added_peptides = add_to_pept_dict(pept_dict, mod_peptides, fasta_index+f_index)
if len(added_peptides) > 0:
to_add.extend(added_peptides)
f_index += 1
spectra = []
if len(to_add) > 0:
for specta_block in blocks(to_add, settings['fasta']['spectra_block']):
spectra.extend(generate_spectra(specta_block, mass_dict))
return (spectra, pept_dict)
import alphapept.performance
#This function is a wrapper function and to be tested by the integration test
def generate_database_parallel(settings:dict, callback = None):
"""
Function to generate a database from a fasta file in parallel.
Args:
settings: alphapept settings.
Returns:
list: theoretical spectra. See generate_spectra()
dict: peptide dict. See add_to_pept_dict()
dict: fasta_dict. See generate_fasta_list()
"""
n_processes = alphapept.performance.set_worker_count(
worker_count=settings['general']['n_processes'],
set_global=False
)
fasta_list, fasta_dict = generate_fasta_list(fasta_paths = settings['experiment']['fasta_paths'], **settings['fasta'])
logging.info(f'FASTA contains {len(fasta_list):,} entries.')
blocks = block_idx(len(fasta_list), settings['fasta']['fasta_block'])
to_process = [(idx_start, fasta_list[idx_start:idx_end], settings) for idx_start, idx_end in blocks]
spectra = []
pept_dicts = []
with Pool(n_processes) as p:
max_ = len(to_process)
for i, _ in enumerate(p.imap_unordered(digest_fasta_block, to_process)):
if callback:
callback((i+1)/max_)
spectra.extend(_[0])
pept_dicts.append(_[1])
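# Sort the collected spectra by peptide sequence and keep one entry per unique sequence, since the same peptide can occur in several fasta blocks.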
spectra = sorted(spectra, key=lambda x: x[1])
spectra_set = [spectra[idx] for idx in range(len(spectra)-1) if spectra[idx][1] != spectra[idx+1][1]]
spectra_set.append(spectra[-1])
pept_dict = merge_pept_dicts(pept_dicts)
return spectra_set, pept_dict, fasta_dict
#export
#This function is a wrapper function and to be tested by the integration test
def pept_dict_from_search(settings:dict):
"""
Generates a peptide dict from a large search.
"""
paths = settings['experiment']['file_paths']
bases = [os.path.splitext(_)[0]+'.ms_data.hdf' for _ in paths]
all_dfs = []
for _ in bases:
try:
df = alphapept.io.MS_Data_File(_).read(dataset_name="peptide_fdr")
except KeyError:
df = pd.DataFrame()
if len(df) > 0:
all_dfs.append(df)
if sum([len(_) for _ in all_dfs]) == 0:
raise ValueError("No sequences present to concatenate.")
df = pd.concat(all_dfs)
df['fasta_index'] = df['fasta_index'].str.split(',')
lst_col = 'fasta_index'
df_ = pd.DataFrame({
col:np.repeat(df[col].values, df[lst_col].str.len())
for col in df.columns.drop(lst_col)}
).assign(**{lst_col:np.concatenate(df[lst_col].values)})[df.columns]
df_['fasta_index'] = df_['fasta_index'].astype('int')
df_grouped = df_.groupby(['sequence'])['fasta_index'].unique()
pept_dict = {}
for keys, vals in zip(df_grouped.index, df_grouped.values):
pept_dict[keys] = vals.tolist()
return pept_dict
#export
import alphapept.io
import pandas as pd
def save_database(spectra:list, pept_dict:dict, fasta_dict:dict, database_path:str, **kwargs):
"""
Function to save a database to the *.hdf format.
Args:
spectra (list): theoretical spectra. See generate_spectra().
pept_dict (dict): peptide dict. See add_to_pept_dict().
fasta_dict (dict): fasta_dict. See generate_fasta_list().
database_path (str): Path to database.
"""
precmasses, seqs, fragmasses, fragtypes = zip(*spectra)
sortindex = np.argsort(precmasses)
fragmasses = np.array(fragmasses, dtype=object)[sortindex]
fragtypes = np.array(fragtypes, dtype=object)[sortindex]
lens = [len(_) for _ in fragmasses]
n_frags = sum(lens)
frags = np.zeros(n_frags, dtype=fragmasses[0].dtype)
frag_types = np.zeros(n_frags, dtype=fragtypes[0].dtype)
indices = np.zeros(len(lens) + 1, np.int64)
indices[1:] = lens
indices = np.cumsum(indices)
#Fill data
for _ in range(len(indices)-1):
start = indices[_]
end = indices[_+1]
frags[start:end] = fragmasses[_]
frag_types[start:end] = fragtypes[_]
to_save = {}
to_save["precursors"] = np.array(precmasses)[sortindex]
to_save["seqs"] = np.array(seqs, dtype=object)[sortindex]
to_save["proteins"] = pd.DataFrame(fasta_dict).T
to_save["fragmasses"] = frags
to_save["fragtypes"] = frag_types
to_save["indices"] = indices
db_file = alphapept.io.HDF_File(database_path, is_new_file=True)
for key, value in to_save.items():
db_file.write(value, dataset_name=key)
peps = np.array(list(pept_dict), dtype=object)
indices = np.empty(len(peps) + 1, dtype=np.int64)
indices[0] = 0
indices[1:] = np.cumsum([len(pept_dict[i]) for i in peps])
proteins = np.concatenate([pept_dict[i] for i in peps])
db_file.write("peptides")
db_file.write(
peps,
dataset_name="sequences",
group_name="peptides"
)
db_file.write(
indices,
dataset_name="protein_indptr",
group_name="peptides"
)
db_file.write(
proteins,
dataset_name="protein_indices",
group_name="peptides"
)
#export
import collections
def read_database(database_path:str, array_name:str=None)->dict:
"""
Read database from hdf file.
Args:
database_path (str): hdf database file generated by alphapept.
array_name (str): the dataset name to read.
Returns:
dict: key is the dataset_name in the hdf file, value is the Python object read from that dataset.
"""
db_file = alphapept.io.HDF_File(database_path)
if array_name is None:
db_data = {
key: db_file.read(
dataset_name=key
) for key in db_file.read() if key not in (
"proteins",
"peptides"
)
}
db_data["fasta_dict"] = np.array(
collections.OrderedDict(db_file.read(dataset_name="proteins").T)
)
peps = db_file.read(dataset_name="sequences", group_name="peptides")
protein_indptr = db_file.read(
dataset_name="protein_indptr",
group_name="peptides"
)
protein_indices = db_file.read(
dataset_name="protein_indices",
group_name="peptides"
)
db_data["pept_dict"] = np.array(
{
pep: (protein_indices[s: e]).tolist() for pep, s, e in zip(
peps,
protein_indptr[:-1],
protein_indptr[1:],
)
}
)
db_data["seqs"] = db_data["seqs"].astype(str)
else:
db_data = db_file.read(dataset_name=array_name)
return db_data
#hide
def test_database_io():
from alphapept.constants import mass_dict
from numba.typed import List
to_add = List(['PEPTIDE'])
spectra = generate_spectra(to_add, mass_dict)
fasta_list, fasta_dict = generate_fasta_list('../testfiles/test.fasta')
database_path = '../testfiles/testdb.hdf'
save_database(spectra, pept_dict, fasta_dict, database_path)
assert list(read_database(database_path, 'seqs')) == ['PEPTIDE']
assert np.allclose(list(read_database(database_path, 'precursors'))[0], spectra[0][0])
test_database_io()
#hide
from nbdev.export import *
notebook2script()
| 0.709523 | 0.926901 |
[](https://www.pythonista.io)
# Strict typing with *Python*.
## Type hints.
https://www.python.org/dev/peps/pep-0483/
Type hints are syntactically valid, but the *Python* interpreter does not take them into account.
### Annotations for assigning names to objects.
```
<nombre>: <tipo>
```
```
<nombre>: <tipo> = <obj>
```
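For example, a name can be annotated on its own or annotated and bound to an object in the same statement (a minimal sketch; the names used here are arbitrary):
```
# Illustrative annotations; 'contador' and 'titulo' are arbitrary names.
contador: int
titulo: str = "Hola"
```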
### Annotations for the objects returned by functions.
```
def <func>(<params>) -> <tipo>:
...
...
return <objeto del tipo definido>
```
## The ```mypy``` package.
The ```mypy``` package includes a command that validates the types used in a *Python* script and raises a warning indicating the error whenever the typing rules are not satisfied.
http://mypy-lang.org/
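Outside of a notebook, ```mypy``` can also be run from the command line against a script; a minimal sketch (the file name is a placeholder):
```
pip install mypy
# 'mi_script.py' is a placeholder for the script to be checked.
mypy mi_script.py
```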
### The ```nb_mypy``` package.
This package is the implementation of ```mypy``` for *IPython*, and it is enabled with the magic command ```%nb_mypy On```.
https://gitlab.tue.nl/jupyter-projects/nb_mypy
```
!pip install nb_mypy
%load_ext nb_mypy
%nb_mypy On
```
* The ```suma_int()``` function is defined so that it only accepts objects of type ```int```.
```
def suma_int(a:int, b:int) -> int:
return a + b
suma_int(1, 2)
suma_int(1, 2.5)
suma_int("Hola", "Mundo")
%nb_mypy Off
suma_int("Hola", "Mundo")
%nb_mypy On
```
## The ```typing``` module.
The ```typing``` module is part of the *Python* standard library and contains several classes that define common types/classes of objects in the language.
https://docs.python.org/3/library/typing.html
### The classes contained in ```typing```.
https://docs.python.org/3/library/typing.html#module-contents
```
import typing
dir(typing)
from typing import List
```
* The ```lista_rango``` function is defined to return a list of ```int``` objects.
```
def lista_rango(n:int, m:int) -> List[int]:
return [i for i in range(n, m)]
lista_rango(2, 3)
lista_rango(2, 3)
lista_rango(12, 'Hola')
```
## Pydantic.
This package makes it possible to create classes that use type hints strictly and extend the validation schemes.
https://pydantic-docs.helpmanual.io/
```
! pip install pydantic
```
### The ```pydantic.BaseModel``` class.
The ```BaseModel``` class allows creating subclasses whose attributes are defined with strict typing.
All attributes defined without an assigned value are considered required arguments when instantiating the subclasses.
https://pydantic-docs.helpmanual.io/usage/models/
```
from pydantic import BaseModel
class Alumno(BaseModel):
cuenta: int
nombre: str
primer_apellido: str
segundo_apellido: str = ''
carrera: str
semestre: int
promedio: float
al_corriente: bool
```
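Because attributes defined without a value are required, instantiating the model while omitting them raises a ```ValidationError```; a minimal sketch using the ```Alumno``` class defined above:
```
from pydantic import ValidationError

try:
    # Only 'nombre' is provided, so the remaining required fields are missing.
    Alumno(nombre='Juan')
except ValidationError as e:
    print(e)
```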
* It is possible to use ```BaseModel``` subclasses like a ```dataclass```.
https://docs.python.org/3/library/dataclasses.html
```
datos = {'cuenta': 1234567,
'nombre': 'Juan',
'primer_apellido': 'Pérez',
'carrera': 'Medicina',
'semestre': 7,
'promedio': 6.5,
'al_corriente': True
}
alumno = Alumno(**datos)
alumno.nombre
dict(alumno)
datos = {'cuenta': 1234567,
'nombre': 'Juan',
'primer_apellido': 'Pérez',
'carrera': 'Medicina',
'semestre': 7,
'promedio': 6.5,
'al_corriente': True,
'genero': 'M'
}
alumno = Alumno(**datos)
alumno
```
#### The ```pydantic.BaseModel.schema()``` method.
```
alumno.schema()
```
#### The ```pydantic.BaseModel.schema_json()``` method.
```
alumno.schema_json()
```
### The ```pydantic.validator``` function.
https://pydantic-docs.helpmanual.io/usage/validators/
```
from pydantic import validator
class UserModel(BaseModel):
name: str
username: str
password1: str
password2: str
@validator('name')
def name_must_contain_space(cls, v):
if ' ' not in v:
raise ValueError('must contain a space')
return v.title()
@validator('password2')
def passwords_match(cls, v, values, **kwargs):
if 'password1' in values and v != values['password1']:
raise ValueError('passwords do not match')
return v
@validator('username')
def username_alphanumeric(cls, v):
assert v.isalnum(), 'must be alphanumeric'
return v
UserModel(name="Jose Luis",
username='josec',
password1='123qwe',
password2='123qwe').schema()
```
### The data types accepted by ```pydantic```.
https://pydantic-docs.helpmanual.io/usage/types/#standard-library-types
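As a minimal sketch (the ```Curso``` class and its fields are made up for illustration), a model can combine standard library types such as ```datetime.date```, ```typing.Optional``` and ```typing.List```:
```
from datetime import date
from typing import List, Optional
from pydantic import BaseModel

# 'Curso' is a hypothetical model used only to illustrate standard library types.
class Curso(BaseModel):
    nombre: str
    inicio: date
    creditos: Optional[int] = None
    alumnos: List[str] = []
```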
### The ```pydantic.Field``` function.
https://pydantic-docs.helpmanual.io/usage/schema/#field-customisation
```
from pydantic import Field, PositiveInt
from enum import Enum
class Carreras(Enum):
derecho = "Derecho"
sistemas = "Sistemas"
actuaria = "Actuaria"
administracion = "Administración"
class Alumno(BaseModel):
cuenta: int = Field(default= 5000000, ge=1000000, le=9999999)
nombre: str
primer_apellido: str
segundo_apellido: str = ''
carrera: Carreras
semestre: PositiveInt
promedio: float = Field(ge=0, le=10)
al_corriente: bool
datos = {'cuenta': 1234567,
'nombre': 'Juan',
'primer_apellido': 'Pérez',
'carrera': 'Medicina',
'semestre': 7,
'promedio': 6.5,
'al_corriente': True
}
Alumno(**datos)
datos = {'cuenta': 1234567,
'nombre': 'Juan',
'primer_apellido': 'Pérez',
'carrera': 'Administración',
'semestre': 7,
'promedio': 6.5,
'al_corriente': True
}
alumno = Alumno(**datos)
alumno.schema_json()
```
<p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Licencia Creative Commons" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />Esta obra está bajo una <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Licencia Creative Commons Atribución 4.0 Internacional</a>.</p>
<p style="text-align: center">© José Luis Chiquete Valdivieso. 2022.</p>
|
github_jupyter
|
<nombre>: <tipo>
<nombre>: <tipo> = <obj>
def <func>(<params>) -> <tipo>:
...
...
return <objeto del tipo definido>
!pip install nb_mypy
%load_ext nb_mypy
%nb_mypy On
def suma_int(a:int, b:int) -> int:
return a + b
suma_int(1, 2)
suma_int(1, 2.5)
suma_int("Hola", "Mundo")
%nb_mypy Off
suma_int("Hola", "Mundo")
%nb_mypy On
import typing
dir(typing)
from typing import List
def lista_rango(n:int, m:int) -> List[int]:
return [i for i in range(n, m)]
lista_rango(2, 3)
lista_rango(2, 3)
lista_rango(12, 'Hola')
! pip install pydantic
from pydantic import BaseModel
class Alumno(BaseModel):
cuenta: int
nombre: str
primer_apellido: str
segundo_apellido: str = ''
carrera: str
semestre: int
promedio: float
al_corriente: bool
datos = {'cuenta': 1234567,
'nombre': 'Juan',
'primer_apellido': 'Pérez',
'carrera': 'Medicina',
'semestre': 7,
'promedio': 6.5,
'al_corriente': True
}
alumno = Alumno(**datos)
alumno.nombre
dict(alumno)
datos = {'cuenta': 1234567,
'nombre': 'Juan',
'primer_apellido': 'Pérez',
'carrera': 'Medicina',
'semestre': 7,
'promedio': 6.5,
'al_corriente': True,
'genero': 'M'
}
alumno = Alumno(**datos)
alumno
alumno.schema()
alumno.schema_json()
from pydantic import validator
class UserModel(BaseModel):
name: str
username: str
password1: str
password2: str
@validator('name')
def name_must_contain_space(cls, v):
if ' ' not in v:
raise ValueError('must contain a space')
return v.title()
@validator('password2')
def passwords_match(cls, v, values, **kwargs):
if 'password1' in values and v != values['password1']:
raise ValueError('passwords do not match')
return v
@validator('username')
def username_alphanumeric(cls, v):
assert v.isalnum(), 'must be alphanumeric'
return v
UserModel(name="Jose Luis",
username='josec',
password1='123qwe',
password2='123qwe').schema()
from pydantic import Field, PositiveInt
from enum import Enum
class Carreras(Enum):
derecho = "Derecho"
sistemas = "Sistemas"
actuaria = "Actuaria"
administracion = "Administración"
class Alumno(BaseModel):
cuenta: int = Field(default= 5000000, ge=1000000, le=9999999)
nombre: str
primer_apellido: str
segundo_apellido: str = ''
carrera: Carreras
semestre: PositiveInt
promedio: float = Field(ge=0, le=10)
al_corriente: bool
datos = {'cuenta': 1234567,
'nombre': 'Juan',
'primer_apellido': 'Pérez',
'carrera': 'Medicina',
'semestre': 7,
'promedio': 6.5,
'al_corriente': True
}
Alumno(**datos)
datos = {'cuenta': 1234567,
'nombre': 'Juan',
'primer_apellido': 'Pérez',
'carrera': 'Administración',
'semestre': 7,
'promedio': 6.5,
'al_corriente': True
}
alumno = Alumno(**datos)
alumno.schema_json()
| 0.662796 | 0.933915 |
<img alt="QuantRocket logo" src="https://www.quantrocket.com/assets/img/notebook-header-logo.png">
© Copyright Quantopian Inc.<br>
© Modifications Copyright QuantRocket LLC<br>
Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode).
<a href="https://www.quantrocket.com/disclaimer/">Disclaimer</a>
# Fundamental Factor Models
By Beha Abasi, Maxwell Margenot, and Delaney Mackenzie
## What are Fundamental Factor Models?
Fundamental data refers to the metrics and ratios measuring the financial characteristics of companies derived from the public filings made by these companies, such as their income statements and balance sheets. Examples of factors drawn from these documents include market cap, net income growth, and cash flow.
This fundamental data can be used in many ways, one of which is to build a linear factor model. Given a set of $k$ fundamental factors, we can represent the returns of an asset, $R_t$, as follows:
$$R_t = \alpha_t + \beta_{t, F_1}F_1 + \beta_{t, F_2}F_2 + ... + \beta_{t, F_k}F_k + \epsilon_t$$
where each $F_j$ represents a fundamental factor return stream. These return streams are from portfolios whose value is derived from its respective factor.
Fundamental factor models try to determine characteristics that affect an asset's risk and return. The most difficult part of this is determining which factors to use. Much research has been done on determining significant factors, and what makes things even more difficult is that the discovery of a significant factor often leads to its advantage being arbitraged away! This is one of the reasons why fundamental factor models, and linear factor models in general, are so prevalent in modern finance. Once you have found significant factors, you need to calculate the exposure an asset's return stream has to each factor. This is similar to the calculation of risk premia discussed in the CAPM lecture.
In using fundamental data, we run into the problem of having factors that may not be easily compared due to their varying units and magnitudes. To resolve this, we take two different approaches to bring the data onto the same level - portfolio construction to compare return streams and normalization of factor values.
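As a rough illustration of the normalization approach, fundamental factor values can be standardized cross-sectionally so that factors with different units and magnitudes become comparable. The sketch below assumes a hypothetical DataFrame `factor_values` with one row per asset and one column per factor:
```
import pandas as pd

def zscore_factors(factor_values: pd.DataFrame) -> pd.DataFrame:
    # Subtract the cross-sectional mean and divide by the cross-sectional
    # standard deviation, column by column (factor by factor).
    return (factor_values - factor_values.mean()) / factor_values.std()
```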
## Approach One: Portfolio Construction
The first approach consists of using the fundamental data as a ranking scheme and creating a long-short equity portfolio based on each factor. We then use the return streams associated with each portfolio as our model factors.
One of the most well-known examples of this approach is the Fama-French model. The Fama-French model, and later the Carhart four factor model, adds market cap, book-to-price ratios, and momentum to the original CAPM, which only included market risk.
Historically, certain groups of stocks were seen as outperforming the market, namely those with small market caps, high book-to-price ratios, and those that had previously done well (i.e., they had momentum). Empirically, Fama & French found that the returns of these particular types of stocks tended to be better than what was predicted by the security market line of the CAPM.
In order to capture these phenomena, we will use those factors to create a ranking scheme that will be used in the creation of long short equity portfolios. The factors will be $SMB$, measuring the excess return of small market cap companies minus big, $HML$, measuring the excess return of companies with high book-to-price ratios versus low, $MOM$, measuring the excess returns of last month's winners versus last month's losers, and $EXMRKT$ which is a measure of the market risk.
In general, this approach can be used as an asset pricing model or to hedge our portfolios. The former uses Fama-Macbeth regressions to calculate risk premia, as demonstrated in the CAPM lecture. Hedging can be achieved through a linear regression of portfolio returns on the returns from the long-short factor portfolios. Below are examples of both.
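As a rough sketch of the hedging regression described here (the lecture builds the actual factor portfolios below), the factor exposures of a portfolio can be estimated by regressing its returns on the factor portfolios' return streams. In this sketch, `portfolio_returns` and `factor_returns` are assumed to be an aligned pandas Series and DataFrame of daily returns:
```
import statsmodels.api as sm

def estimate_exposures(portfolio_returns, factor_returns):
    # OLS of portfolio returns on the factor return streams; the fitted
    # slope coefficients are the betas that would be used for hedging.
    X = sm.add_constant(factor_returns)
    return sm.OLS(portfolio_returns, X).fit().params
```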
### Portfolio Construction as an Asset Pricing Model
First we import the relevant libraries.
```
import pandas as pd
import numpy as np
from zipline.pipeline import Pipeline
from zipline.pipeline.data import sharadar
from zipline.pipeline.data import master
from zipline.pipeline.data import EquityPricing
from zipline.pipeline.factors import CustomFactor, Returns, AverageDollarVolume
from zipline.pipeline.filters import AllPresent, All
from zipline.pipeline.classifiers import Classifier
from zipline.research import run_pipeline
import matplotlib.pyplot as plt
```
Use pipeline to get all of our factor data that we will use in the rest of the lecture.
```
class Momentum(CustomFactor):
    # will give us the returns from last month
    inputs = [Returns(window_length=20)]
    window_length = 20

    def compute(self, today, assets, out, lag_returns):
        out[:] = lag_returns[0]

Fundamentals = sharadar.Fundamentals.slice(dimension='ARQ', period_offset=0)

def make_pipeline():
    # define our fundamental factor pipeline
    pipe = Pipeline()
    # market cap and book-to-price data gets fed in here
    market_cap = Fundamentals.MARKETCAP.latest
    book_to_price = 1/Fundamentals.PB.latest
    # and momentum as lagged returns (1 month lag)
    momentum = Momentum()
    # we also get daily returns
    returns = Returns(window_length=2)
    TradableStocksUS = (
        # Market cap over $500M
        (sharadar.Fundamentals.slice(dimension='ARQ', period_offset=0).MARKETCAP.latest >= 500e6)
        # dollar volume over $2.5M over trailing 200 days
        & (AverageDollarVolume(window_length=200) >= 2.5e6)
        # price > $5
        & (EquityPricing.close.latest > 5)
        # no missing data for 200 days (exclude trading halts, IPOs, etc.)
        & AllPresent(inputs=[EquityPricing.close], window_length=200)
        & All([EquityPricing.volume.latest > 0], window_length=200)
        # common stocks only
        & master.SecuritiesMaster.usstock_SecurityType2.latest.eq("Common Stock")
        # primary share only
        & master.SecuritiesMaster.usstock_PrimaryShareSid.latest.isnull()
    )
    # we compute a daily rank of each factor; this is used in the next step,
    # which is computing portfolio membership
    market_cap_rank = market_cap.rank(mask=TradableStocksUS)
    book_to_price_rank = book_to_price.rank(mask=TradableStocksUS)
    momentum_rank = momentum.rank(mask=TradableStocksUS)
    # Grab the top and bottom 1000 for each factor
    biggest = market_cap_rank.top(1000)
    smallest = market_cap_rank.bottom(1000)
    highpb = book_to_price_rank.top(1000)
    lowpb = book_to_price_rank.bottom(1000)
    top = momentum_rank.top(1000)
    bottom = momentum_rank.bottom(1000)
    # Define our universe, screening out anything that isn't in the top or bottom
    universe = TradableStocksUS & (biggest | smallest | highpb | lowpb | top | bottom)
    pipe = Pipeline(
        columns = {
            'market_cap': market_cap,
            'book_to_price': book_to_price,
            'momentum': momentum,
            'Returns': returns,
            'market_cap_rank': market_cap_rank,
            'book_to_price_rank': book_to_price_rank,
            'momentum_rank': momentum_rank,
            'biggest': biggest,
            'smallest': smallest,
            'highpb': highpb,
            'lowpb': lowpb,
            'top': top,
            'bottom': bottom
        },
        screen=universe
    )
    return pipe

# Initializing the pipe
pipe = make_pipeline()

# Now let's start the pipeline
start_date, end_date = '2016-01-01', '2016-12-31'
results = run_pipeline(pipe, start_date=start_date, end_date=end_date, bundle='usstock-1d-bundle')

results.head()
```
Now we can go through the data and build the factor portfolios we want.
```
from quantrocket.master import get_securities
from quantrocket import get_prices
# group_by(level=0).mean() gives you the average return of each day for a particular group of stocks
R_biggest = results[results.biggest]['Returns'].groupby(level=0).mean()
R_smallest = results[results.smallest]['Returns'].groupby(level=0).mean()
R_highpb = results[results.highpb]['Returns'].groupby(level=0).mean()
R_lowpb = results[results.lowpb]['Returns'].groupby(level=0).mean()
R_top = results[results.top]['Returns'].groupby(level=0).mean()
R_bottom = results[results.bottom]['Returns'].groupby(level=0).mean()
# risk-free proxy
securities = get_securities(symbols=['BIL', 'SPY'], vendors='usstock')
BIL = securities[securities.Symbol=='BIL'].index[0]
SPY = securities[securities.Symbol=='SPY'].index[0]
R_F = get_prices('usstock-1d-bundle', sids=BIL, data_frequency='daily', fields='Close', start_date=start_date, end_date=end_date)
R_F = R_F.loc['Close'][BIL].pct_change()[1:]
# market return proxy (SPY)
M = get_prices('usstock-1d-bundle', sids=SPY, data_frequency='daily', fields='Close', start_date=start_date, end_date=end_date)
M = M.loc['Close'][SPY].pct_change()[1:]
# Defining our final factors
EXMRKT = M - R_F
SMB = R_smallest - R_biggest # small minus big
HML = R_highpb - R_lowpb # high minus low
MOM = R_top - R_bottom # momentum
```
Now that we've constructed our portfolios, let's look at our performance if we were to hold each one.
```
plt.plot(SMB.index, SMB.values)
plt.ylabel('Daily Percent Return')
plt.legend(['SMB Portfolio Returns']);
plt.plot(HML.index, HML.values)
plt.ylabel('Daily Percent Return')
plt.legend(['HML Portfolio Returns']);
plt.plot(MOM.index, MOM.values)
plt.ylabel('Daily Percent Return')
plt.legend(['MOM Portfolio Returns']);
```
Now, as we did in the CAPM lecture, we'll calculate the risk premia on each of these factors using the Fama-Macbeth regressions.
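Schematically, the two steps below mirror the code that follows: a time-series regression per asset to estimate the factor betas, then a cross-sectional regression of average excess returns on those betas to estimate the risk premia $\lambda$.

$$\text{Step 1 (per asset } a\text{):} \quad R_{a,t} = \alpha_a + \beta_{a,EXMRKT}EXMRKT_t + \beta_{a,SMB}SMB_t + \beta_{a,HML}HML_t + \beta_{a,MOM}MOM_t + \epsilon_{a,t}$$

$$\text{Step 2 (across assets):} \quad \bar{R}_a - \bar{R}_F = \lambda_0 + \lambda_{EXMRKT}\beta_{a,EXMRKT} + \lambda_{SMB}\beta_{a,SMB} + \lambda_{HML}\beta_{a,HML} + \lambda_{MOM}\beta_{a,MOM} + u_a$$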
```
import itertools
import statsmodels.api as sm
from statsmodels import regression,stats
import scipy
```
Our asset returns data is asset- and date-specific, whereas our factor portfolio returns are only date-specific. Therefore, we'll need to spread each day's portfolio return across all the assets for which we have data on that day.
```
data = results[['Returns']].set_index(results.index)
asset_list_sizes = [group[1].size for group in data.groupby(level=0)]
# Spreading the factor portfolio data across all assets for each day
SMB_column = [[SMB.loc[group[0]]] * size for group, size \
in zip(data.groupby(level=0), asset_list_sizes)]
data['SMB'] = list(itertools.chain(*SMB_column))
HML_column = [[HML.loc[group[0]]] * size for group, size \
in zip(data.groupby(level=0), asset_list_sizes)]
data['HML'] = list(itertools.chain(*HML_column))
MOM_column = [[MOM.loc[group[0]]] * size for group, size \
in zip(data.groupby(level=0), asset_list_sizes)]
data['MOM'] = list(itertools.chain(*MOM_column))
EXMRKT_column = [[EXMRKT.loc[group[0]]]*size if group[0] in EXMRKT.index else [None]*size \
for group, size in zip(data.groupby(level=0), asset_list_sizes)]
data['EXMRKT'] = list(itertools.chain(*EXMRKT_column))
data = sm.add_constant(data.dropna())
# Our list of assets from pipeline
assets = data.index.levels[1].unique()
# gathering our data to be asset-specific
Y = [data.xs(asset, level=1)['Returns'] for asset in assets]
X = [data.xs(asset, level=1)[['EXMRKT','SMB', 'HML', 'MOM', 'const']] for asset in assets]
# First regression step: estimating the betas
reg_results = [regression.linear_model.OLS(y, x).fit().params \
for y, x in zip(Y, X) if not(x.empty or y.empty)]
indices = [asset for y, x, asset in zip(Y, X, assets) if not(x.empty or y.empty)]
betas = pd.DataFrame(reg_results, index=indices)
betas = sm.add_constant(betas.drop('const', axis=1))
R = data['Returns'].mean(axis=0, level=1).sort_index()
# Second regression step: estimating the risk premia
risk_free_rate = np.mean(R_F)
final_results = regression.linear_model.OLS(R - risk_free_rate, betas).fit()
final_results.summary()
```
#### Returns Prediction
As discussed in the CAPM lecture, factor modeling can be used to predict future returns based on current fundamental factors. As well, it could be used to determine when an asset may be mispriced in order to arbitrage the difference, as shown in the CAPM lecture.
Modeling future returns is accomplished by offsetting the returns in the regression, so that rather than predict for current returns, you are predicting for future returns. Once you have a predictive model, the most canonical way to create a strategy is to attempt a long-short equity approach.
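A minimal sketch of that offsetting, reusing the `data` DataFrame built above (the one-day horizon and the pooled OLS are illustrative choices, not the lecture's specification):

```
# Sketch: regress next-day returns on today's factor values by shifting the
# return column forward within each asset. `data` is the (date, asset)-indexed
# DataFrame with 'Returns', 'EXMRKT', 'SMB', 'HML', 'MOM' built earlier.
import statsmodels.api as sm

predictive = data.copy()
# next-day return for each asset (the last observation per asset becomes NaN)
predictive['FutureReturns'] = predictive.groupby(level=1)['Returns'].shift(-1)
predictive = predictive.dropna()

X = sm.add_constant(predictive[['EXMRKT', 'SMB', 'HML', 'MOM']])
forward_model = sm.OLS(predictive['FutureReturns'], X).fit()
forward_model.summary()
```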
### Portfolio Construction for Hedging
Once we've determined that we are exposed to a factor, we may want to avoid depending on the performance of that factor by taking out a hedge. This is discussed in more detail in the Beta Hedging and Risk Factor Exposure lectures. The essential idea is to take the exposure your return stream has to a factor, and short the proportional value. So, if your total portfolio value was $V$, and the exposure you calculated to a certain factor return stream was $\beta$, you would short $\beta V$ amount of that return stream.
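As a quick numerical illustration of the $\beta V$ sizing rule (the portfolio value and betas below are made-up numbers, not estimates from this data):

```
# Hypothetical illustration of the hedge sizing rule: short beta * V of each
# factor-mimicking portfolio. All numbers here are invented for illustration.
V = 1_000_000  # total portfolio value in dollars
betas = {'EXMRKT': 0.9, 'SMB': 0.3, 'HML': -0.2, 'MOM': 0.1}

for factor, beta in betas.items():
    position = -beta * V  # negative means short the factor portfolio
    print(factor, 'hedge position:', round(position))
```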
The following is an example using the Fama-French factors we used before.
```
# we'll take a random sample of 500 assets from TradableStocksUS in order to build a random portfolio
random_assets = list(np.random.choice(assets, size=500, replace=False))
portfolio_data = data[data.index.isin(random_assets, level=1)]
# this is the return of our portfolio with no hedging
R_portfolio_time_series = portfolio_data['Returns'].mean(level=0)
# next, we calculate the exposure of our portfolio to each of the Fama-French factors
portfolio_exposure = regression.linear_model.OLS(portfolio_data['Returns'], \
portfolio_data[['EXMRKT', 'SMB', 'HML', 'MOM', 'const']]).fit()
print(portfolio_exposure.summary())
# our hedged return stream
hedged_portfolio = R_portfolio_time_series - \
portfolio_exposure.params[0]*EXMRKT.tz_localize('UTC') - \
portfolio_exposure.params[1]*SMB - \
portfolio_exposure.params[2]*HML - \
portfolio_exposure.params[3]*MOM
print('Mean, Std of Hedged Portfolio:', np.mean(hedged_portfolio), np.std(hedged_portfolio))
print('Mean, Std of Unhedged Portfolio:', np.mean(R_portfolio_time_series), np.std(R_portfolio_time_series))
```
Let's look at a graph of our two portfolio return streams:
```
plt.plot(hedged_portfolio)
plt.plot(R_portfolio_time_series)
plt.legend(['Hedged', 'Unhedged']);
```
We'll check for normality, homoskedasticity, and autocorrelation in this model. For more information on the tests below, check out the Violations of Regression Models lecture.
For normality, we'll run a Jarque-Bera test, which tests whether our data's skew/kurtosis matches that of a normal distribution. As is standard, we'll reject the null hypothesis that our data is normally distributed if the p-value falls below our significance level of 5%.
To test for heteroskedasticity, we'll run a Breusch-Pagan test, which tests whether the variance of the errors in a linear regression is related to the values of the independent variables. In this case, our null hypothesis is that the data is homoskedastic.
Autocorrelation is tested for using the Durbin-Watson statistic, which looks at the lagged relationship between the errors in a regression. This will give you a number between 0 and 4, with 2 meaning no autocorrelation.
```
# testing for normality: jarque-bera
_, pvalue_JB, _, _ = stats.stattools.jarque_bera(portfolio_exposure.resid)
print("Jarque-Bera p-value:", pvalue_JB)
# testing for homoskedasticity: breusch-pagan
_, pvalue_BP, _, _ = stats.diagnostic.het_breuschpagan(portfolio_exposure.resid, \
portfolio_data[['EXMRKT', 'SMB', 'HML', 'MOM', 'const']])
print("Breusch-Pagan p-value:", pvalue_BP)
# testing for autocorrelation
dw = stats.stattools.durbin_watson(portfolio_exposure.resid)
print("Durbin Watson statistic:", dw)
```
Based on the Jarque-Bera p-value, we would reject the null hypothesis that the data is normally distributed. This means there is strong evidence that our data follows some other distribution.
The test for homoskedasticity suggests that the data is heteroskedastic. However, we need to be careful about this test as we saw that our data may not be normally distributed.
Finally, the Durbin-Watson statistic can be evaluated by looking at the critical values of the statistic. At a confidence level of 95% and 4 explanatory variables, we cannot reject the null hypothesis of no autocorrelation.
## Approach Two: Factor Value Normalization
Another approach is to normalize factor values for each day and see how predictive of that day's returns they were. This is also known as cross-sectional factor analysis. We do this by computing a normalized factor value $b_{a,j}$ for each asset $a$ in the following way.
$$b_{a,j} = \frac{F_{a,j} - \mu_{F_j}}{\sigma_{F_j}}$$
$F_{a,j}$ is the value of factor $j$ for asset $a$ during this time, $\mu_{F_j}$ is the mean factor value across all assets, and $\sigma_{F_j}$ is the standard deviation of factor values over all assets. Notice that we are just computing a z-score to make asset specific factor values comparable across different factors.
The exceptions to this formula are indicator variables, which are set to 1 for true and 0 for false. One example is industry membership: the coefficient tells us whether the asset belongs to the industry or not.
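A minimal sketch of applying the z-score formula above cross-sectionally for every day at once, using the `results` DataFrame from the pipeline earlier (the helper name is mine, not part of the lecture):

```
# Cross-sectional z-score of a factor, computed separately for each day in the
# (date, asset)-indexed `results` DataFrame.
def cross_sectional_zscore(factor_series):
    grouped = factor_series.groupby(level=0)  # group by date
    return (factor_series - grouped.transform('mean')) / grouped.transform('std')

btp_zscores = cross_sectional_zscore(results['book_to_price'])
btp_zscores.head()
```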
After we calculate all of the normalized scores during time $t$, we can estimate factor $j$'s returns $F_{j,t}$, using a cross-sectional regression (i.e. at each time step, we perform a regression using the equations for all of the assets). Specifically, once we have returns for each asset $R_{a,t}$, and normalized factor coefficients $b_{a,j}$, we construct the following model and estimate the $F_j$s and $a_t$
$$R_{a,t} = a_t + b_{a,F_1}F_1 + b_{a, F_2}F_2 + \dots + b_{a, F_K}F_K$$
You can think of this as slicing through the other direction from the first analysis, as now the factor returns are unknowns to be solved for, whereas originally the coefficients were the unknowns. Another way to think about it is that you're determining how predictive of returns the factor was on that day, and therefore how much return you could have squeezed out of that factor.
Following this procedure, we'll get the cross-sectional returns on 2016-11-22, and compute the coefficients for all assets:
We can take the fundamental data we got from the pipeline call above.
```
date = '2016-11-22'
BTP = results['book_to_price'][date]
z_score = (BTP - BTP.mean()) / BTP.std()
z_score.dropna(inplace=True)
plt.hist(z_score)
plt.xlabel('Z-Score')
plt.ylabel('Frequency');
```
#### Problem: The Data is Weirdly Distributed
Notice how there are big outliers in the dataset that cause the z-scores to lose a lot of information. Basically the presence of some very large outliers causes the rest of the data to occupy a relatively small area. We can get around this issue using some data cleaning techniques, such as winsorization.
#### Winsorization
Winsorization takes the top $n\%$ of a dataset and sets it all equal to the least extreme value in the top $n\%$. For example, if your dataset ranged from 0-10, plus a few crazy outliers, those outliers would be set to 0 or 10 depending on their direction. The following is an example.
```
# Get some random data
random_data = np.random.normal(0, 1, 100)
# Put in some outliers
random_data[0] = 1000
random_data[1] = -1000
# Perform winsorization
print('Before winsorization', np.min(random_data), np.max(random_data))
scipy.stats.mstats.winsorize(random_data, inplace=True, limits=0.01)
print('After winsorization', np.min(random_data), np.max(random_data))
```
We'll apply the same technique to our data and grab the returns to all the assets in our universe. Then we'll run a linear regression to estimate $F_j$.
```
BTP = scipy.stats.mstats.winsorize(results['book_to_price'][date], limits=0.01)
BTP_z = (BTP - np.mean(BTP)) / np.std(BTP)
MC = scipy.stats.mstats.winsorize(results['market_cap'][date], limits=0.01)
MC_z = (MC - np.mean(MC)) / np.std(MC)
Lag_Ret = scipy.stats.mstats.winsorize(results['momentum'][date], limits=0.01)
Lag_Ret_z = (Lag_Ret - np.mean(Lag_Ret)) / np.std(Lag_Ret)
returns = results['Returns'][date]
df_day = pd.DataFrame({'R': returns,
'BTP_z': BTP_z,
'MC_z': MC_z,
'Lag_Ret_z': Lag_Ret_z,
'Constant': 1}).dropna()
cross_sectional_results = \
regression.linear_model.OLS(df_day['R'], df_day[['BTP_z', 'MC_z', 'Lag_Ret_z', 'Constant']]).fit()
cross_sectional_results.summary()
```
To expand this analysis, you would simply loop through days, running this every day and getting an estimated factor return.
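A minimal sketch of that loop, assuming `results` keeps the (date, asset) index produced by `run_pipeline` above; the 50-asset cutoff and helper name are illustrative choices, not part of the original lecture:

```
# Sketch: repeat the single-day cross-sectional regression for every date and
# collect the estimated factor returns into one DataFrame.
def zscore_winsorized(series):
    # winsorize, then z-score, mirroring the single-day code above
    w = scipy.stats.mstats.winsorize(series, limits=0.01)
    return (w - np.mean(w)) / np.std(w)

factor_returns = {}
for day, day_data in results.groupby(level=0):
    day_data = day_data.dropna(subset=['Returns', 'book_to_price', 'market_cap', 'momentum'])
    if len(day_data) < 50:  # skip sparsely populated days (illustrative cutoff)
        continue
    df_day = pd.DataFrame({
        'R': day_data['Returns'].values,
        'BTP_z': zscore_winsorized(day_data['book_to_price']),
        'MC_z': zscore_winsorized(day_data['market_cap']),
        'Lag_Ret_z': zscore_winsorized(day_data['momentum']),
        'Constant': 1.0,
    })
    fit = regression.linear_model.OLS(df_day['R'], df_day[['BTP_z', 'MC_z', 'Lag_Ret_z', 'Constant']]).fit()
    factor_returns[day] = fit.params

factor_returns = pd.DataFrame(factor_returns).T
factor_returns.head()
```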
---
**Next Lecture:** [Portfolio Analysis with pyfolio](Lecture33-Portfolio-Analysis-with-pyfolio.ipynb)
[Back to Introduction](Introduction.ipynb)
---
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian") or QuantRocket LLC ("QuantRocket"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, neither Quantopian nor QuantRocket has taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information believed to be reliable at the time of publication. Neither Quantopian nor QuantRocket makes any guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
---
Replication Project for Microeconometrics course | Summer 2021, M.Sc. Economics, Bonn University | [Aysu Avcı](https://github.com/aysuavci)
# Replication of Zimmermann (2020) <a class="tocSkip">
---
This notebook contains the replication of the following study:
> [Zimmermann, F. (2020). The Dynamics of Motivated Beliefs. American Economic Review, 110(2), 337–363](https://www.aeaweb.org/articles?id=10.1257/aer.20180728).
##### Notes:
* I try to remain true to the original naming of the variables; however, I renamed all of the figures and tables according to the sections in this notebook to avoid any confusion that might arise from having two replications.
* For convenience, I decided to keep some of the code in the notebook that I consider essential for understanding the steps taken in the replication.
* For the best viewing experience, the repository can be downloaded from [here](https://github.com/OpenSourceEconomics/ose-data-science-course-project-aysuavci).
## **Table of Contents**
* [Introduction](#sec1)
* [Background for the Experiment and Hypothesis](#sec2)
* [Experiment Design](#sec3)
* [Identification](#sec4)
* [Empirical Strategy](#sec5)
* [Replication of Zimmermann (2020)](#sec6)
    * [Main Results](#sec6_1)
        * [Data & Descriptive Statistics](#sec6_1_1)
        * [Belief Adjustment Distribution](#sec6_1_2)
        * [Belief Adjustment Distribution & Test performances](#sec6_1_3)
        * [Regression Outputs](#sec6_1_4)
    * [Robustness Checks](#sec6_2)
        * [Appendix A.4 - Alternative Definition of Positive/Negative Feedback](#sec6_2_1)
        * [Appendix A.6 - Updating in the Short-Run](#sec6_2_2)
        * [Appendix A.7 - No Feedback Condition](#sec6_2_3)
        * [Appendix A.8 - Figures Bayesian Posteriors](#sec6_2_4)
* [Extension](#sec7)
    * [Replication of Eil and Rao (2011)](#sec7_1)
        * [Introduction](#sec7_1_1)
        * [Experiment Design](#sec7_1_2)
        * [Empirical Strategy](#sec7_1_3)
        * [Replication](#sec7_1_4)
        * [Comparison with Zimmermann (2020)](#sec7_1_5)
    * [Further Extension](#sec7_2)
        * [Alternative Data Visualization](#sec7_2_1)
        * [Further Analysis with No Feedback Group](#sec7_2_2)
            * [Empirical Strategy](#sec7_2_2_1)
            * [Estimation](#sec7_2_2_2)
* [Conclusion](#sec8)
* [References](#sec9)
```
%matplotlib inline
!pip install stargazer
!pip install matplotlib
!pip install -U matplotlib
import numpy as np
import pandas as pd
import pandas.io.formats.style
import seaborn as sns
import statsmodels as sm
import statsmodels.formula.api as smf
import statsmodels.api as sm_api
import matplotlib.pyplot as plt
from IPython.display import HTML
from stargazer.stargazer import Stargazer, LineLocation
from statsmodels.iolib.summary2 import summary_col
from auxiliary.auxiliary_tools import *
from auxiliary.auxiliary_plots import *
from auxiliary.auxiliary_tables import *
```
---
# 1.Introduction <a class="anchor" id="sec1"></a>
---
The study of Zimmermann (2020) is a lab experiment that examines how motivated beliefs held by individuals continue to be sustained after receiving positive or negative feedback.
The study can be divided into 3 parts:
1. The first part of the study-named “**Motivated Belief Dynamics**” in the paper- is the main study, investigating the causal effect of the type of feedback received (positive or negative) on the reconstruction of belief patterns, depending on when beliefs are elicited (directly after the experiment or 1 month later).
2. The second part of the study-named “**The Role of Memory**” in the paper- investigates the asymmetry in the accuracy of feedback recall, as well as recall of the IQ test that subjects solved as part of the experiment.
3. The final part of the study-named “**The Trade-Off between Motivated and Accurate Belief**” in the paper- questions whether incentivizing recall accuracy could mitigate the motivated reasoning that participants employ.
In this project notebook, the main study-the first part-will be replicated; this is the part where Zimmermann (2020) also used **difference-in-difference (DID) models** for the estimations. In the next section (Section 2), I will give a quick literature background for the experiment and also specify the researcher’s hypothesis. In Section 3, I will explain the relevant parts of the experimental design and introduce the treatment groups and variables for convenience. The identification part (Section 4) will explain the causality studied by Zimmermann (2020), and the empirical strategy part (Section 5) will present the established models for the estimations. Section 6 is the replication part, where the figures and tables from the main study and the relevant Appendix parts are presented. The extension of this project is located in Section 7, where I replicate the study of Eil and Rao (2011) as well as include some data visualization and further analysis. Finally, Section 8 includes my concluding remarks.
---
# 2. Background for the Experiment and Hypothesis <a class="anchor" id="sec2"></a>
---
Personally, I have always been interested in the question of how individual beliefs are formed and sustained or changed over time. In the neoclassical economics framework, any change in beliefs follows Bayes' rule, since all agents are assumed to be rational and unbiased. However, many studies in the motivated beliefs literature have shown that individuals deviate from these Bayesian predictions and manipulate their beliefs in a self-serving way, and some studies, like Eil and Rao (2011) or Zimmermann (2020), have also tried to address the asymmetry in belief updating. I see Zimmermann (2020) as an influential study of motivated belief dynamics because of the experimental design it employs. The design allows comparing belief updating across the signals received (asymmetry) and also across time (short-run and long-run updating) by employing a control group and various treatment groups.
**Main Hypothesis:** *Individuals’ engagement with belief updating follows an asymmetric pattern, putting more weight on positive signals and less on the negative signals, which is more pronounced for long-term since negative signal’s effects fade over time.*
Using DID models for the analysis is suitable for this study for several reasons. First of all, it is impossible to expose one subject to two different treatments while keeping everything else constant, so individual-level effects cannot be examined directly. Secondly, since this is a randomly assigned experiment, we can assume subjects do not differ systematically between the control group and the treatment groups. Between the control and treatment groups, Zimmermann (2020) also made sure to keep everything constant and only varied the feedback condition. Moreover, the DID is easily implemented as an interaction term between the time treatment and the feedback dummy, allowing for causal inference on belief dynamics in line with the researcher's goal: the group of interest is the individuals who received a negative signal and are in the 1-month treatment, and their difference in belief updating compared to the other groups. In addition, employing DID models allowed Zimmermann (2020) to add rank fixed effects interacted with treatment, which serve as a control for possible characteristic differences across rank groups between treatments. Finally, having a no-feedback control condition for the 1-month treatment serves as a robustness check for potential belief changes due to time trends that may exist in the absence of feedback.
---
# 3. Experimental Design <a class="anchor" id="sec3"></a>
---
1. Subjects completed an IQ test where they need to solve 10 Raven matrices.
2. Subjects were randomly assigned to groups of 10.
3. Subjects were asked to estimate their belief about the likelihood of being ranked in the upper half of their group as a percentage. The full probability distribution over every possible rank is also elicited.
4. Subjects were given noisy but true information about their rank: 3 participants were randomly selected from their group, and subjects were told whether they ranked higher or lower than each of these 3 participants. If a subject is told that they ranked higher than 2 or 3 participants in their group, the subject is considered to be in the positive feedback group; otherwise, they are considered to be in the negative feedback group.
5. After receiving the feedback, subjects are randomly divided into two treatment groups: `confidence_direct` and `confidence_1monthlater`.
6. In the `confidence_1monthlater` treatment, subjects are asked to elicit their beliefs 1 month later; in the `confidence_direct` treatment, subjects are asked to elicit their beliefs right after the feedback. `confidence_direct` also has two subgroups: `confidence_direct_immediate` and `confidence_direct_15minuteslater`.
**Main Variables**
|Treatment groups|Feedback types|Outcomes|Other variables|
|---|---|---|---|
|`confidence_1monthlater`|Positive|Belief adjustment|Bayesian belief adjustment|
|`confidence_direct_immediate`|Negative||rank|
|`confidence_direct_15minuteslater`||||
|`no_feedback`||||
---
# 4. Identification <a class="anchor" id="sec4"></a>
---
The first part of the experiment is dedicated to investigating the causal effect of feedback on the belief adjustments of participants in two different treatment groups that vary with respect to time: _ConfidenceDirect_ and _Confidence1month_. Causality is established by two key components of the experiment. First of all, Zimmermann (2020) elicits people's prior beliefs about their probability of ranking in the upper half of the group, and elicits them again after participants have received the feedback. So, for each participant, there is a clear measure of belief updating that is plausibly caused by the feedback and in line with the type of feedback. The second, and most important, component that ensures causal identification is the noisy feedback. Rather than assigning a subject to a negative or positive feedback group directly, Zimmermann (2020) randomly chooses 3 other participants within the subject's group, compares their ranks, and provides the subject with these 3 comparisons. In this way, subjects receive true but noisy feedback, so it is possible for two subjects with the same rank to receive different types of feedback. By adopting such an experimental design, Zimmermann (2020) ensures that potential asymmetries in belief dynamics cannot be explained by individual differences between participants.
In addition to investigating the causal relationship between belief dynamics and feedback, another goal of the experiment is to observe the effect of time on this relationship. For that, Zimmermann (2020) assigned participants randomly to two groups: Confidence1month and ConfidenceDirect. ConfidenceDirect is further divided into two subgroups (immediate and 15-minutes-later belief elicitation) to investigate possible short-term dynamics in belief adjustment; however, that part of the study will not be covered in this replication.
<img src= "files/causal_1.gv.png">
---
# 5. Empirical Strategy <a class="anchor" id="sec5"></a>
---
In the study, the belief adjustment is defined as the difference between the elicited belief after the feedback and the belief before the feedback, formally
\begin{equation}
Pr(upperhalf)^{post}_{i} - Pr(upperhalf)^{prior}_{i}
\end{equation}
where $Pr(upperhalf)_{i}$ represents the subject ${i}$'s given probability of being in the upperhalf of the group.
To compare belief adjustments between positive and negative feedback, a normalized version of the belief adjustment is also formed as below:
$$
beliefadjustmentnorm_i = \begin{cases}
Pr(upperhalf)^{post}_{i} - Pr(upperhalf)^{prior}_{i} & \text{if feedback positive} \\
-\left(Pr(upperhalf)^{post}_{i} - Pr(upperhalf)^{prior}_{i}\right) & \text{if feedback negative}
\end{cases}
$$
The difference-in-difference models, formally in the form of
\begin{equation}
beliefadjustmentnorm_i = \alpha + \beta feedback_i + \gamma T_i + \delta I_i + \theta X_i + \epsilon_i
\end{equation}
are estimated for the purpose of establishing dynamic belief patterns. $feedback_i$ is the dummy variable for feedback and $T_i$ is the treatment dummy. $I_i$ is the interaction term, a generated dummy for the interest group, the interest group being subjects who are in the `confidence_1monthlater` treatment and received negative feedback. In other words, if a subject is in the `confidence_1monthlater` treatment and received negative information, the treatment dummy, the feedback dummy and the interaction term are all equal to 1. The parameter $\delta$ therefore captures the belief dynamics of interest.
In order to form a causal inference, Zimmermann (2020) controls for rank and IQ test scores; these variables are captured by $X_i$, the set of control variables. As another control variable, Bayesian belief adjustments are also used in some specifications. Zimmermann (2020) calculated Bayesian predictions using each individual's full prior probability distribution and the type of feedback they received. The Bayesian belief adjustment is then produced in the same way as the belief adjustment
\begin{equation}
Pr(upperhalf)^{postBayes}_{i} - Pr(upperhalf)^{prior}_{i}
\end{equation}
where $Pr(upperhalf)^{postBayes}_{i}$ is the calculated Bayesian prediction for the subject $i$.
Another specification within the study adds rank fixed effects interacted with treatment. In this way, Zimmermann (2020) was able to control for possible differences in characteristics across rank groups between treatment groups.
Zimmermann (2020) chose to perform ordinary least squares (OLS) estimations with HC1 heteroskedasticity-robust covariance. Most of the main results are produced with this linear regression, and I will use the same type of regression in the table replications.
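As a schematic illustration of this specification (on simulated data with placeholder variable names; the replication in Section 6 uses the actual data and variable names):

```
# Schematic DID sketch on simulated data: the interaction coefficient plays the
# role of delta in the model above. All numbers and names here are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
toy = pd.DataFrame({
    "treat_1month": rng.integers(0, 2, n),   # 1 = beliefs elicited 1 month later
    "neg_feedback": rng.integers(0, 2, n),   # 1 = negative feedback
})
# simulate a smaller normalized adjustment for negative feedback recalled one month later
toy["belief_adj_norm"] = 10 - 6 * toy["treat_1month"] * toy["neg_feedback"] + rng.normal(0, 5, n)

did = smf.ols(
    "belief_adj_norm ~ treat_1month + neg_feedback + treat_1month:neg_feedback",
    data=toy,
).fit(cov_type="HC1")
print(did.params)
```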
---
# 6. Replication of Zimmermann (2020) <a class="anchor" id="sec6"></a>
---
## 6.1. Main Results <a class="anchor" id="sec6_1"></a>
### 6.1.1 Data & Descriptive Statistics <a class="anchor" id="sec6_1_1"></a>
```
df = pd.read_stata('data/data.dta')
#Adjustments for better viewing
pd.set_option("display.max.columns", None)
pd.set_option("display.precision", 7)
pd.set_option("display.max.rows", 30)
#Viewing data
df.head()
#Renaming treatgroup for easier coding
tg_conditions = [
(df['treatment'] == 'belief_announcement'),
(df['treatment'] == 'confidence_1monthlater'),
(df['treatment'] == 'confidence_direct_15minuteslater'),
(df['treatment'] == 'confidence_direct_immediate'),
(df['treatment'] == 'memory'),
(df['treatment'] == 'memory_high'),
(df['treatment'] == 'nofeedback'),
(df['treatment'] == 'tournament_announcement'),
]
tg_values = [1, 2, 3, 4, 5, 6, 7, 8]
df['treatgroup'] = np.select(tg_conditions, tg_values)
```
Throughout this study, feedback groups will be addressed using the dummy variable `dummynews_goodbad`. As mentioned above, if a subject received 2 or more positive comparisons, they are considered to be in the positive feedback group; otherwise, they are considered to be in the negative feedback group.
```
#Creating dummy variable for good/bad news (if good = 0, if bad = 1)
df["dummynews_goodbad"] = np.nan
df.loc[(df['pos_comparisons'] == 2) | (df['pos_comparisons'] == 3), 'dummynews_goodbad'] = 0
df.loc[(df['pos_comparisons'] == 0) | (df['pos_comparisons'] == 1), 'dummynews_goodbad'] = 1
#Belief adjustment
df["beliefadjustment"] = df["posterior_median"] - df["prior_median"]
#Normalized belief adjustment
df["beliefadjustment_normalized"] = np.nan
df.loc[df['dummynews_goodbad'] == 0, 'beliefadjustment_normalized'] = df['beliefadjustment']
df.loc[df['dummynews_goodbad'] == 1, 'beliefadjustment_normalized'] = df['beliefadjustment']*-1
```
### 6.1.2 Belief Adjustment Distribution <a class="anchor" id="sec6_1_2"></a>
```
Main_Figure1(df)
```
Figure 1 is an easy way to view the belief adjustments of the subjects, even though it is not enough for causal inference. As is observable from Panel A, subjects adjusted their beliefs in line with the feedback they received. For instance, in the positive feedback group we see an accumulation of observations between 0 and +50, while the opposite pattern is observable for the negative feedback group.
In Panel B we see that, for the individuals who received positive feedback, the pattern is not much different from the pattern in Panel A. However, the picture changes significantly for negative feedback when subjects elicit their beliefs 1 month after the feedback.
### 6.1.3 Belief Adjustment Distribution & Test performances <a class="anchor" id="sec6_1_3"></a>
**Calculating Bayesian Predictions**
```
#Bayesian predictions
get_Bayesian_predictions(df)
#Calculating Bayesian Posterior Median Beliefs
df["posterior_median_bayes"] = (df["post_1"] + df["post_2"] + df["post_3"] + df["post_4"] + df["post_5"])*100
##Bayesian predictions for belief adjustment
df["beliefadjustment_bayes"] = df["posterior_median_bayes"] - df["prior_median"]
#Normalization of Bayesian predictions
df["beliefadjustment_bayes_norm"] = np.nan
df.loc[df['dummynews_goodbad'] == 0, 'beliefadjustment_bayes_norm'] = df['beliefadjustment_bayes']
df.loc[df['dummynews_goodbad'] == 1, 'beliefadjustment_bayes_norm'] = df['beliefadjustment_bayes']*-1
```
There are two sub-treatment groups in the direct treatment: `confidence_direct_15minuteslater` and `confidence_direct_immediate`. Zimmermann (2020) made this differentiation as a robustness check, to see whether there would be a difference between eliciting beliefs immediately after feedback or 15 minutes later. However, the main results are derived from the comparison between the 1-month treatment and the pooled direct treatment, which combines the 15-minutes-later and immediate treatments. So, a dummy is created for further analysis, with 1 indicating the 1-month treatment and 0 indicating the pooled direct treatment.
```
#Generate dummy for direct treatment and 1 month treatment
df["dummytreat_direct1month"] = np.nan
df.loc[(df['treatgroup'] == 3) | (df['treatgroup'] == 4), 'dummytreat_direct1month'] = 0
df.loc[df['treatgroup'] == 2, 'dummytreat_direct1month'] = 1
#Grouping IQ test scores
df["test_1"] = np.nan
df.loc[(df['rscore'] == 0) | (df['rscore'] == 1) | (df['rscore'] == 2) | (df['rscore'] == 3) | (df['rscore'] == 4), 'test_1'] = 1
df.loc[(df['rscore'] == 5), 'test_1'] = 2
df.loc[(df['rscore'] == 6), 'test_1'] = 3
df.loc[(df['rscore'] == 7) | (df['rscore'] == 8) | (df['rscore'] == 9) | (df['rscore'] == 10), 'test_1'] = 4
#Average prior belief
df['prior_av'] = df.groupby(['test_1', 'dummytreat_direct1month'])["prior_median"].transform('mean')
#Average posterior belief if feedback is positive
df['post_av_pos'] = df[df["dummynews_goodbad"] == 0].groupby(['test_1', 'dummytreat_direct1month'])["posterior_median"].transform('mean')
#Average posterior belief if feedback is negative
df['post_av_neg'] = df[df["dummynews_goodbad"] == 1].groupby(['test_1', 'dummytreat_direct1month'])["posterior_median"].transform('mean')
Main_Figure2(df)
```
Figure 2 plots the individuals' prior and posterior beliefs of being in the upper half of their group for different levels of test performance. Test performances are grouped as 1, 2, 3 and 4, with 1 being the lowest score group. Also, posteriors are plotted with two different lines, one for positive and one for negative feedback.
It is easy to see that in Panel A (ConfidenceDirect), subjects' posterior beliefs visibly differ from their prior beliefs in line with the feedback they received. However, in Panel B, the posterior beliefs of subjects who received negative feedback are visibly closer to their prior beliefs.
Compared to the first figure, causal inference is made possible here by grouping individuals according to their test scores.
These figures are also a great way to observe ex-ante overconfidence-average priors lie consistently above 50 percent-which can be theoretically explained by Bayesian updating (see Benoît and Dubra, 2011) and is also visible in the figures from the study of Zimmermann (2020).
### 6.1.4. Regression Outputs <a class="anchor" id="sec6_1_4"></a>
In the study, Zimmermann (2020) performs 8 OLS estimations (see Table 1) with HC1 heteroskedasticity-robust covariance:<br>
(1) Regressing $y_i$ (normalized belief adjustment) on $T_i$ (treatment type) in case of positive feedback<br>
(2) Same as (1) but controlling for $X_i$ (rank and normalized Bayesian belief adjustment)<br>
(3) Regressing $y_i$ (normalized belief adjustment) on $T_i$ (treatment type) in case of negative feedback<br>
(4) Same as (3) but controlling for $X_i$ (rank and normalized Bayesian belief adjustment)<br>
(5) Regressing $y_i$ (normalized belief adjustment) on $T_i$ (treatment type), $feedback_i$ (feedback type) and $I_i$ (interaction term indicating whether the individual is in the interest group or not)<br>
(6) Same as (5) but controlling for $X_i$ (rank and normalized Bayesian belief adjustment)<br>
(7) Same as (5) but including rank fixed effects and their interaction with treatment<br>
(8) Same as (7) but controlling for $X_i$ (normalized Bayesian belief adjustment)<br>
```
df_good = pd.DataFrame({"beliefadjustment_normalized": df[df["dummynews_goodbad"] == 0]['beliefadjustment_normalized'], "dummytreat_direct1month": df[df["dummynews_goodbad"] == 0]['dummytreat_direct1month'], "rank": df[df["dummynews_goodbad"] == 0]['rank'], "beliefadjustment_bayes_norm": df[df["dummynews_goodbad"] == 0]['beliefadjustment_bayes_norm']})
#(1)
model_ols = smf.ols(formula="beliefadjustment_normalized ~ dummytreat_direct1month", data=df_good)
reg_1 = model_ols.fit(cov_type='HC1')
#(2)
model_ols = smf.ols(formula="beliefadjustment_normalized ~ dummytreat_direct1month + rank + beliefadjustment_bayes_norm", data=df_good)
reg_2 = model_ols.fit(cov_type='HC1')
df_bad = pd.DataFrame({"beliefadjustment_normalized": df[df["dummynews_goodbad"] == 1]['beliefadjustment_normalized'], "dummytreat_direct1month": df[df["dummynews_goodbad"] == 1]['dummytreat_direct1month'], "rank": df[df["dummynews_goodbad"] == 1]['rank'], "beliefadjustment_bayes_norm": df[df["dummynews_goodbad"] == 1]['beliefadjustment_bayes_norm']})
#(3)
model_ols = smf.ols(formula="beliefadjustment_normalized ~ dummytreat_direct1month", data=df_bad)
reg_3 = model_ols.fit(cov_type='HC1')
#(4)
model_ols = smf.ols(formula="beliefadjustment_normalized ~ dummytreat_direct1month + rank + beliefadjustment_bayes_norm", data=df_bad)
reg_4 = model_ols.fit(cov_type='HC1')
```
The next regressions are DID models; therefore, the interaction term for our interest group is added.
```
#Generating interaction term
df["interact_direct1month"] = df["dummytreat_direct1month"]*df["dummynews_goodbad"]
#(5)
model_ols = smf.ols(formula= "beliefadjustment_normalized ~ dummytreat_direct1month + dummynews_goodbad + interact_direct1month", data=df)
reg_5 = model_ols.fit(cov_type='HC1')
#(6)
model_ols = smf.ols(formula= "beliefadjustment_normalized ~ dummytreat_direct1month + dummynews_goodbad + rank + interact_direct1month + beliefadjustment_bayes_norm", data=df)
reg_6 = model_ols.fit(cov_type='HC1')
```
The next specifications include rank fixed effects and their interaction with treatment. So rank dummies are generated and added to the regression, as well as their interaction terms: `rankdummy`$_i$ * `dummytreat_direct1month`.
```
#Rank dummies
get_rank_dummies(df)
#Intereaction term: Rank dummy*Treatment dummy
get_rank_interation_term(df)
#(7)
model_ols = smf.ols(formula= "beliefadjustment_normalized ~ dummytreat_direct1month + dummynews_goodbad + interact_direct1month + rankdummy1 + rankdummy2 + rankdummy3 + rankdummy4 + rankdummy5 + rankdummy6 + rankdummy7 + rankdummy8 + rankdummy9 + rankdummy1_interact + rankdummy2_interact + rankdummy3_interact + rankdummy4_interact + rankdummy5_interact + rankdummy6_interact + rankdummy7_interact + rankdummy8_interact + rankdummy9_interact", data=df)
reg_7 = model_ols.fit(cov_type='HC1')
#(8)
model_ols = smf.ols(formula= "beliefadjustment_normalized ~ dummytreat_direct1month + dummynews_goodbad + interact_direct1month + beliefadjustment_bayes_norm + rankdummy1 + rankdummy2 + rankdummy3 + rankdummy4 + rankdummy5 + rankdummy6 + rankdummy7 + rankdummy8 + rankdummy9 + rankdummy1_interact + rankdummy2_interact + rankdummy3_interact + rankdummy4_interact + rankdummy5_interact + rankdummy6_interact + rankdummy7_interact + rankdummy8_interact + rankdummy9_interact", data=df)
reg_8 = model_ols.fit(cov_type='HC1')
Main_Table_1(df)
```
From the first two regressions, it is clear that the belief adjustments of individuals in the direct treatment group and the 1-month treatment group did not differ significantly. In other words, time has no significant effect on individuals' belief adjustment if they received positive feedback. These findings are in line with the finding in Zimmermann (2020) that positive feedback has a persistent effect on belief dynamics.
Regressions (3) and (4) also agree with Zimmermann's (2020) findings, since the belief adjustments differ significantly between treatment groups at the .01 level, pointing to the short-term effect of negative feedback.
As is observable from columns 5 and 6, the interaction term for negative feedback and the 1-month treatment group is again significant at the .01 level, in line with the findings from the previous columns. The columns with control variables (rank and Bayesian belief adjustment) also give the same results in terms of significance levels, which strengthens the findings. Again, these are in line with the results of Zimmermann (2020).
The last columns are the DID models with rank fixed effects interacted with treatment and also include rank dummy controls; as is observable, the interaction term for negative feedback and the 1-month treatment group is again significant, at the .05 level in (7) and at the .01 level in (8).
Overall, all of the values are exactly the same as the ones in the study of Zimmermann (2020), confirming the researcher's results.
---
## 6.2. Robustness Checks <a class="anchor" id="sec6_2"></a>
---
Some parts of the Appendix that are relevant for the extension part of this project are added to the project notebook.
### 6.2.1 Appendix A.4 - Alternative Definition of Positive/Negative Feedback <a class="anchor" id="sec6_2_1"></a>
##### **Heterogeneous Feedback**
Under this alternative definition, a subject is only considered to have received positive or negative feedback if all of the comparisons are of the same kind: positive feedback if he/she received 3 positive comparisons and negative feedback if he/she received 3 negative comparisons.
```
#Creating the new feedback dummy accordingly
df["dummynews_goodbad_h"] = np.nan
df.loc[df['pos_comparisons'] == 3, 'dummynews_goodbad_h'] = 0
df.loc[df['pos_comparisons'] == 0, 'dummynews_goodbad_h'] = 1
Appendix_Table_1(df)
```
All of the values mimic the values in Table 1 in terms of significance, which means the definition of feedback in the main study was robust. However, for negative feedback the coefficients are larger than in the previous results, which might suggest that receiving more negative comparisons affects belief adjustment more strongly.
---
### 6.2.2 Appendix A.6 - Updating in the Short-Run <a class="anchor" id="sec6_2_2"></a>
---
Looking specifically at short-run updating is not the main aim of Zimmermann (2020); however, the researcher finds it relevant to include the results from the data. As is also mentioned in the study, previous findings on short-term belief updating suggest two phenomena: conservatism (Möbius et al., 2013) and asymmetry (Eil and Rao, 2011).
**Conservatism**
According to Bayes' rule, people should adjust their beliefs by about 20 percentage points on average after a signal. However, Zimmermann (2020) found a much lower average normalized belief adjustment of 11.80, while the correlation between Bayesian predictions and belief adjustments is 0.459.
```
df_short = df[df['treatgroup'] == 4]
df_short['beliefadjustment_normalized'].mean()
correlation = df_short[['beliefadjustment_normalized','beliefadjustment_bayes_norm']].corr(method='pearson')
correlation
```
**Asymmetry**
In order to investigate asymmetry, Zimmermann (2020) analyzes the effect of the Bayesian predictions on belief adjustments, separately for positive and negative feedback.
```
Appendix_Table_3(df)
```
As suggested by the findings in Column 3, the DID reveals no significant relationship. In other words, the effect of the Bayesian predictions does not significantly differ between feedback groups. However, the coefficient on the Bayesian prediction is slightly larger for positive than for negative feedback, which suggests some asymmetry.
---
### 6.2.3. Appendix A.7 - No Feedback Condition <a class="anchor" id="sec6_2_3"></a>
---
```
Appendix_Figure_1(df)
```
The figure clearly demonstrates that in the absence of feedback, many subjects do not change their beliefs, and a clear symmetry is observable for those who did change their beliefs.
```
#Stats for the control group; the only numbers I couldn't reproduce exactly from the study of Zimmermann (2020) are the ones below.
#The reported mean belief adjustment is 0.22, with a standard deviation of 17.83.
df[df['treatgroup'] == 7]['beliefadjustment'].describe()
```
---
### 6.2.4. Appendix A.8 - Figures Bayesian Posteriors <a class="anchor" id="sec6_2_4"></a>
---
```
#Calculating Bayesian Posterior Averages
df['prior_av_b'] = df.groupby(['test_1', 'dummytreat_direct1month'])["prior_median"].transform('mean')
df['post_av_pos_b'] = df[df["dummynews_goodbad"] == 0].groupby(['test_1', 'dummytreat_direct1month'])["posterior_median_bayes"].transform('mean')
df['post_av_neg_b'] = df[df["dummynews_goodbad"] == 1].groupby(['test_1', 'dummytreat_direct1month'])["posterior_median_bayes"].transform('mean')
Appendix_Figure_2(df)
```
This figure is the same as Figure 2, only here the Bayesian predictions of posterior beliefs are used instead of the actual posteriors.
---
# 7. Extension <a class="anchor" id="sec7"></a>
---
The extension consists of two parts.
In the first part, I replicate the first part of the study of Eil and Rao (2011), which is quite similar in terms of experimental design and empirical strategy. I think replicating an experiment that Zimmermann (2020) took as a reference is beneficial for understanding the motives and aims of Zimmermann (2020) in designing the study the way it was done. I also make use of some parts of the Eil and Rao (2011) study in the second part of the extension.
In the second part, I present the extensions I have added, such as data visualizations and further analysis of the effect of feedback.
---
## 7.1. Replication of Eil and Rao (2011) <a class="anchor" id="sec7_1"></a>
As an extension, I decided to replicate the study of Eil and Rao (2011), a closely related experiment that Zimmermann (2020) refers to frequently.
### 7.1.1. Introduction <a class="anchor" id="sec7_1_1"></a>
The experiment consists of three main parts. I decided to replicate the first part, which analyzes the posterior beliefs of subjects who received the most "extreme" feedback (all positive or all negative).
The main aim of Eil and Rao (2011) in this experiment is to observe the asymmetry in belief updating when subjects receive different types of signals (positive and negative). Unlike Zimmermann (2020), Eil and Rao (2011) did not vary the timing of belief elicitation across treatments but varied what the subjects were scored on (referred to as the image task). Subjects assigned to an image task were scored either on an IQ test or on their physical attractiveness (beauty); there was also a control task.
### 7.1.2. Experiment Design <a class="anchor" id="sec7_1_2"></a>
The design of the experiment is also quite similar to that of Zimmermann (2020).
1. Subjects either took an IQ test and were assigned to groups of 10, or went through a series of speed dates with 10 other people and rated each date's physical attractiveness on a scale from 1 to 10.
2. After the task, subjects were asked to state their belief about their ranking within their group of 10.
3. Subjects then received 3 cards, each revealing whether they were ranked higher or lower than a randomly picked groupmate. For the control task, they were told the number on the card was randomly drawn. Belief elicitation was done in three rounds, since subjects were asked to restate their beliefs about their rank within the group after each card.
**Variable Definitions**
|Variable name|Definition|Beauty|IQ|Control|
|---|---|---|---|---|
|`meanbelief_priorbayesimage`|Mean Bayesian belief prediction calculated from the prior belief|√|√||
|`meanbelief_priorbayescard`|Mean Bayesian belief prediction calculated from the prior belief|||√|
|`frac_upimage`|0 if good news, 1 if bad news|√|√||
|`frac_upcard`|Same definition as `frac_upimage`, for the control (card) task|||√|
|`mb_fracup`|Interaction term: `meanbelief_priorbayesimage` × `frac_upimage`|√|√||
|`mb_fracupcard`|Interaction term: `meanbelief_priorbayescard` × `frac_upcard`|||√|
### 7.1.3. Empirical Strategy <a class="anchor" id="sec7_1_3"></a>
Even though a model is not explicitly specified in Eil and Rao (2011), one can write down the following DID model for the estimations presented in their Table 1:
\begin{equation}
meanbelief_i = \alpha + \beta meanBayesian_i + \gamma feedback_i + \delta I_i + \epsilon_i
\end{equation}
where $feedback_i$ is the dummy variable for the feedback type, $meanBayesian_i$ is the mean Bayesian belief prediction, and $I_i$ is the interaction term $meanBayesian_i \cdot feedback_i$. Bayesian predictions are calculated from the reported priors and the type of signal received; the mean Bayesian term, `meanbelief_priorbayesimage`, is then obtained as the expected rank according to the posteriors reported across the rounds.
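The regressions in the replication below use a `cluster_fit` helper that comes from the notebook's auxiliary files and is not shown here. As a rough, hypothetical sketch of what such a helper might do (an assumption on my part, not the actual implementation), one could estimate the DID specification above with subject-level clustered standard errors in `statsmodels` like this:
```
# Hypothetical sketch of a cluster_fit-style helper (the real one lives in the
# auxiliary modules imported at the top of the notebook).
import statsmodels.formula.api as smf

def cluster_fit_sketch(formula, data, group_var):
    # Fit an OLS model with standard errors clustered on `group_var`
    model = smf.ols(formula=formula, data=data)
    return model.fit(cov_type='cluster', cov_kwds={'groups': data[group_var]})

# Example (commented out): the pooled Beauty regression of Extension Table 1
# result = cluster_fit_sketch(
#     'meanbeliefimage ~ meanbelief_priorbayesimage + mb_fracup + frac_upimage',
#     data=df_ex[df_ex['IQtask'] == 0], group_var='ID')
```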
### 7.1.4. Replication <a class="anchor" id="sec7_1_4"></a>
```
#Importing the adjusted and merged data set provided by the Eil and Rao (2011)
df_ex = pd.read_stata('data/extension.dta')
```
The first figure presented by Eil and Rao (2011) is formed by regressing each subject's observed mean belief on the mean belief predicted by Bayes' rule, grouping subjects by the type of feedback they received.
```
Extension_Figure_1(df_ex)
```
At first glance, a visible asymmetry between good and bad news can be observed in the Beauty group, while such asymmetry is far less pronounced in the IQ group. We interpret this asymmetry through the difference in the slopes of the fitted lines. As can be seen in Panel A, the bad-news condition has a flatter slope, meaning that the actual posteriors span a narrower range: even when the Bayesian posteriors call for extreme updating, actual updating remains relatively conservative. In other words, the slope of the bad-news line suggests an insensitivity to signal strength. Compared to bad news, the good-news line is steeper, suggesting an exaggerated response to signal strength. Hence, the asymmetry in responses is pronounced in the Beauty treatment. Looking at Panel C (Control group), the slopes are essentially the same across news types, which implies that such asymmetries are the result of the image treatments. The slope of the good-news line in the IQ treatment is also steeper than in the control group, although not as much as in the Beauty treatment.
Another feature to analyze in this figure is the dispersion of observations around the fitted lines. Comparing the two news types, it is clear that beliefs are noisier for bad news than for good news.
It is also worth noting that a slope of zero combined with a negative intercept would imply a general optimism in beliefs.
```
#Used regressions in Extension Figure 2 with the standard errors clustered at subject level
reg_B_b_ = cluster_fit('meanbeliefimage ~ meanbelief_priorbayesimage + mb_fracup + frac_upimage',
data=df_ex[(df_ex['frac_upimage'] == 0) & (df_ex['IQtask'] ==0)], group_var='ID')
reg_B_g_ = cluster_fit('meanbeliefimage ~ meanbelief_priorbayesimage + mb_fracup + frac_upimage',
data=df_ex[(df_ex['frac_upimage'] == 1) & (df_ex['IQtask'] ==0)], group_var='ID')
reg_IQ_b_ = cluster_fit('meanbeliefimage ~ meanbelief_priorbayesimage + mb_fracup + frac_upimage',
data=df_ex[(df_ex['frac_upimage'] == 0) & (df_ex['IQtask'] ==1)], group_var='ID')
reg_IQ_g_ = cluster_fit('meanbeliefimage ~ meanbelief_priorbayesimage + mb_fracup + frac_upimage',
data=df_ex[(df_ex['frac_upimage'] == 1) & (df_ex['IQtask'] ==1)], group_var='ID')
reg_C_b_ = cluster_fit('meanbeliefcard ~ meanbelief_priorbayescard + mb_fracupcard + frac_upcard',
data=df_ex[(df_ex['frac_upcard'] == 0)], group_var='ID')
reg_C_g_ = cluster_fit('meanbeliefcard ~ meanbelief_priorbayescard + mb_fracupcard + frac_upcard',
data=df_ex[(df_ex['frac_upcard'] == 0)], group_var='ID')
Extension_Figure_2(df_ex)
```
Panel D plots the $R^2$ values from the regressions used in Extension Table 1, but split by feedback type. Unfortunately, even though I ran the regressions with standard errors clustered at the subject level as described by Eil and Rao (2011), I could not reproduce the $R^2$ value for the IQ test / good news group, which is puzzling since I obtain exactly the same results for the pooled regressions in Extension Table 1. According to Eil and Rao (2011), the $R^2$ for the IQ / good news group should be below 0.7.
Taking the results of Eil and Rao (2011) as the reference point, the $R^2$ values differ by feedback type for the Beauty and IQ groups, while that is clearly not the case for the Control group.
**Table 1**
```
#Used regressions in Extension Table 1 with the standard errors clustered at subject level
reg_B = cluster_fit('meanbeliefimage ~ meanbelief_priorbayesimage + mb_fracup + frac_upimage', data=df_ex[(((df_ex['frac_upimage'] == 0) | (df_ex['frac_upimage'] == 1)) & (df_ex['IQtask'] ==0))], group_var='ID')
reg_IQ = cluster_fit('meanbeliefimage ~ meanbelief_priorbayesimage + mb_fracup + frac_upimage', data=df_ex[(((df_ex['frac_upimage'] == 0) | (df_ex['frac_upimage'] == 1)) & (df_ex['IQtask'] ==1))], group_var='ID')
reg_C = cluster_fit('meanbeliefcard ~ meanbelief_priorbayescard + mb_fracupcard + frac_upcard', data=df_ex[(df_ex['frac_upcard'] == 0) | (df_ex['frac_upcard'] == 1)], group_var='ID')
Extension_Table_1(df_ex)
```
---
*See below for a reader-friendly version of Extension Table 1 that I created with the same values.*

||Beauty|IQ|Control|
|---|---|---|---|
|Intercept|**2.05**|1.05|**2.29**|
||(0.52)|(0.76)|(0.76)|
|Bayesian μ|**0.64**|**0.85**|**0.59**|
||(0.08)|(0.85)|(0.10)|
|Bayesian μ × 1{All positive feedback}|**0.41**|0.07|-0.00|
||(0.10)|(0.13)|(0.19)|
|1{All positive feedback}|**-2.31**|-0.58|-0.53|
||(0.54)|(0.81)|(0.95)|
|$R^2$|0.87|0.87|0.72|
|$R^2$ Adj.|0.86|0.87|0.71|

Standard errors in parentheses. \* p<.1, \*\* p<.05, \*\*\* p<.01
The findings in Extension Table 1 support the interpretations made for Extension Figure 2. As the figure suggests, the asymmetry in belief dynamics (the difference between the positive and negative feedback conditions) is significant only in the Beauty treatment, at the 0.01 level of significance.
### 7.1.5. Comparison with Zimmermann (2020) <a class="anchor" id="sec7_1_5"></a>
It is easy to see that the two experiments share many experimental and empirical strategies. First, both studies have a longitudinal feature, since they elicit subjects' beliefs twice. Second, the two studies have the same focus: the asymmetry in belief dynamics that arises from the type of feedback. In addition, both studies make use of DID models in their estimations. Nevertheless, while Eil and Rao (2011) use a heterogeneous definition of feedback, Zimmermann (2020) chooses to allow only homogeneous comparison types within feedback groups (although a robustness check is provided; see Appendix A.4). A fundamental difference lies in the treatment groups. While Eil and Rao (2011) were interested in belief updating across different tasks (Beauty and IQ), Zimmermann (2020) was interested in belief updating when the elicitation time differs (Direct and 1 month). Therefore, all of the results of Eil and Rao (2011) are derived from short-run updating. It is thus intriguing to ask whether the difference between the tasks would remain as pronounced if Zimmermann (2020) had also included a Beauty image task measured after 1 month.
---
## 7.2. Further Extension <a class="anchor" id="sec7_2"></a>
---
### 7.2.1. Alternative Data Visualizations <a class="anchor" id="sec7_2_1"></a>
```
Extension_Figure_3(df)
```
As a small extension, I decided to create a figure that combines the figure from Appendix A.8 with Figure 2, since it provides a convenient way to view posterior beliefs alongside the Bayesian predictions.
Looking at the first row, we see that observed beliefs follow a pattern relatively similar to the Bayesian predictions, in terms of subjects' reported probability of being in the upper half of the group, whereas in the second row the graphs differ visibly.
In the short run, it is also easy to see that people are conservative in their belief updating. For instance, in the group with high test scores and positive feedback, Bayes' rule would imply a posterior slightly above 90%, while the reported posterior is only slightly above 80%, a gap of roughly 10 percentage points.
The opposite is observable in long-run updating in the negative feedback group: in every test-score group, people report posterior beliefs that are much higher than the Bayesian predictions, especially individuals with lower test scores. What I find interesting in the long-run results is the pattern of belief updating when feedback is positive: in the last figure we see that, regardless of test scores, individuals tend to report a belief near 80% when they have received positive feedback. I think this would be a good starting point for further research on dynamic beliefs.
---
Taking Extension Figure 2 of Eil and Rao (2011) as a template, I created two similar data visualizations using the data of Zimmermann (2020).
```
Extension_Figure_4(df)
```
In both panels, the negative feedback lines are flatter, and particularly flat in Panel B (1-month treatment). This means that in both treatment groups subjects weighed negative information less heavily than positive information, and even less so in the 1-month treatment.
```
Extension_Figure_5(df)
```
In this visualization, it is easier to see the asymmetry between positive and negative feedback. In Panel A (positive feedback), the slopes of the fitted lines for the direct and 1-month treatments are almost the same, while in Panel B they clearly differ.
## 7.2.2 Further Analysis with No Feedback Group <a class="anchor" id="sec7_2_2"></a>
As a control group, Zimmermann (2020) also included a group in which no feedback is given to the subjects. The main aim of adding this group was to investigate potential time trends in beliefs that may exist independently of receiving feedback. However, I could not find any further analysis of this group in the study or the appendix, except for the visualization in Appendix A.7.
As a further analysis, I therefore decided to investigate the difference between the feedback (positive or negative) and no-feedback groups within the 1-month condition, to see whether receiving feedback in itself causes a difference in belief adjustment.
### 7.2.2.1. Estimation Strategy <a class="anchor" id="sec7_2_2_1"></a>
For my analysis, I made use of a simple difference-in-difference model of the following kind:
\begin{equation}
belief_i = \alpha + \beta dummyfeedback_i + \gamma secondwave_i + \delta I_i + \epsilon_i
\end{equation}
where $dummyfeedback_i$ is the dummy variable for receiving feedback and $secondwave_i$ is the dummy for the belief being a posterior belief. $I_i$ is the interaction term $dummyfeedback_i \cdot secondwave_i$. If a subject received feedback and the belief is a posterior belief, then (i) the feedback dummy, (ii) the second-wave dummy, and (iii) the interaction term are all equal to 1, making $\delta$ the coefficient of interest.
For the estimation, I reshaped the data to have a single belief variable combining posterior and prior beliefs, together with a time variable indicating when each belief was elicited.
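For concreteness, here is a minimal sketch of how this DID specification could be estimated with `statsmodels`, assuming the reshaped long-format data (constructed in the next subsection as `df_control`) ends up with columns named `belief`, `dummyfeedback`, and `secondwave`; these column names are my assumption, as the actual variables are built by the `get_variables_for_extension` helper used below:
```
# Hypothetical DID estimation on the reshaped long-format data; the
# column names are assumed, the real ones come from the auxiliary helpers.
import statsmodels.formula.api as smf

did_model = smf.ols(
    formula="belief ~ dummyfeedback + secondwave + dummyfeedback:secondwave",
    data=df_control,
)
did_result = did_model.fit(cov_type="HC1")  # robust SEs, as elsewhere in the notebook
print(did_result.summary())
```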
### 7.2.2.2. Estimation <a class="anchor" id="sec7_2_2_2"></a>
```
#Making a copy of the original data to start the reshape
df_control = df.copy()
#Generating an ID number for each participant
df_control['ID'] = df_control.index + 1
#Filtering data to keep only the 1-month and no-feedback treatments
df_control = df_control[(df_control['treatgroup'] == 2) | (df_control['treatgroup'] == 7)]
#Keeping the relevant columns and renaming the belief variables for the reshape
df_control = df_control[['ID', 'treatment', 'posterior_median', 'prior_median']]
df_control.rename(columns = {"prior_median": "belief1", "posterior_median": "belief2" }, inplace=True)
df_control["ID"] = df_control.index
#Transforming data from a wide format to long format
df_control = pd.wide_to_long(df_control, stubnames='belief', i="ID", j="time")
#Creating a column with time index
df_control = df_control.reset_index(level=1)
#Final data
df_control
get_variables_for_extension(df_control)
Extension_Table_2(df_control)
```
The p-value of the interaction term implies that the difference is insignificant. Therefore, one can conclude that receiving feedback does not, by itself, have an effect on beliefs; in other words, the change in beliefs does not differ significantly between the feedback and no-feedback groups.
---
# 8. Conclusion <a class="anchor" id="sec8"></a>
---
In sum, all of the results I obtain using the data and Stata code provided by Zimmermann (2020) overlap with those in the study. Except for the statistics in Appendix A.7, I manage to reproduce the same numbers and figures presented in the paper. Similarly, for the extension replication, I used the merged and reshaped data provided by Eil and Rao (2011); using the Stata code provided by the researchers, I obtain the same results except for the $R^2$ values shown in Extension Figure 2. In light of this, I can conclude that even though individuals adjust their beliefs in line with Bayesian predictions in the short term, the effects of negative feedback tend to fade away over time, while positive feedback has a long-lasting effect on belief adjustment. According to my analysis with the control group, feedback by itself does not have a significant effect on beliefs. Finally, the difference-in-difference models used in both studies and in my analysis are good tools when researchers want to analyze observations drawn from a between-subjects experimental design.
---
# 9. References <a class="anchor" id="sec9"></a>
---
Benoît, Jean-Pierre, and Juan Dubra. 2011. "Apparent Overconfidence?" [*Econometrica* 79 (5): 1591–1625.](https://doi.org/10.3982/ECTA8583)

Eil, David, and Justin M. Rao. 2011. "The Good News-Bad News Effect: Asymmetric Processing of Objective Information about Yourself." [*American Economic Journal: Microeconomics* 3 (2): 114–38.](https://doi.org/10.1257/mic.3.2.114)

Möbius, Markus M., Muriel Niederle, Paul Niehaus, and Tanya S. Rosenblat. 2013. "Managing Self-Confidence: Theory and Experimental Evidence." Unpublished.

Zimmermann, Florian. 2020. "The Dynamics of Motivated Beliefs." [*American Economic Review* 110 (2): 337–363.](https://doi.org/10.1257/aer.20180728)
-------
Notebook by Aysu Avcı | Find me on GitHub at https://github.com/aysuavci.
---
# Build a queue using a linked list
By now, you may be noticing a pattern. Earlier, we had you implement a stack using an array and a linked list. Here, we're doing the same thing with queues: In the previous notebook, you implemented a queue using an array, and in this notebook we'll implement one using a linked list.
It's good to try implementing the same data structures in multiple ways. This helps you to better understand the abstract concepts behind the data structure, separate from the details of their implementation—and it also helps you develop a habit of comparing pros and cons of different implementations.
With both stacks and queues, we saw that using arrays introduced some concerns about time complexity, particularly when the initial array isn't large enough and we need to expand it in order to add more items.
With our stack implementation, we saw that linked lists provided a way around this issue—and exactly the same thing is true with queues.

<span class="graffiti-highlight graffiti-id_1gfxpqm-id_shwp6yi"><i></i><button>Walkthrough</button></span>
## 1. Define a `Node` class
Since we'll be implementing a linked list for this, we know that we'll need a `Node` class like we used earlier in this lesson.
See if you can remember how to do this, and implement it in the cell below.
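If you want to check your work, here is one possible minimal `Node` class for this exercise (a sketch; any class that stores a value and a `next` reference will do):
```
class Node:
    def __init__(self, value):
        self.value = value   # the data held by this node
        self.next = None     # reference to the next node in the linked list
```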
<span class="graffiti-highlight graffiti-id_gna1fui-id_v0zlq1c"><i></i><button>Show Solution</button></span>
## 2. Create the `Queue` class and its `__init__` method
In the cell below, see if you can write the `__init__` method for our `Queue` class. It will need three attributes:
* A `head` attribute to keep track of the first node in the linked list
* A `tail` attribute to keep track of the last node in the linked list
* A `num_elements` attribute to keep track of how many items are in the queue
<span class="graffiti-highlight graffiti-id_rf0zi6d-id_s7hiew4"><i></i><button>Show Solution</button></span>
## 3. Add the `enqueue` method
In the cell below, see if you can figure out how to write the `enqueue` method.
Remember, the purpose of this method is to add a new item to the back of the queue. Since we're using a linked list, this is equivalent to creating a new node and adding it to the tail of the list.
Some things to keep in mind:
* If the queue is empty, then both the `head` and `tail` should refer to the new node (because when there's only one node, this node is both the head and the tail)
* Otherwise (if the queue has items), add the new node to the tail (i.e., to the end of the queue)
* Be sure to shift the `tail` reference so that it refers to the new node (because it is the new tail)
```
class Queue:
def __init__(self):
self.head = None
self.tail = None
self.num_elements = 0
# TODO: Add the enqueue method
```
<span class="graffiti-highlight graffiti-id_pcfy0pd-id_3h8yswv"><i></i><button>Show Solution</button></span>
## 4. Add the `size` and `is_empty` methods
You've implemented these a couple of times now, and they'll work the same way here:
* Add a `size` method that returns the current size of the queue
* Add an `is_empty` method that returns `True` if the queue is empty and `False` otherwise
We'll make use of these methods in a moment when we write the `dequeue` method.
```
class Queue:
def __init__(self):
self.head = None
self.tail = None
self.num_elements = 0
def enqueue(self, value):
new_node = Node(value)
if self.head is None:
self.head = new_node
self.tail = self.head
else:
self.tail.next = new_node # add data to the next attribute of the tail (i.e. the end of the queue)
self.tail = self.tail.next # shift the tail (i.e., the back of the queue)
self.num_elements += 1
# TODO: Add the size method
# TODO: Add the is_empty method
```
<span class="graffiti-highlight graffiti-id_xmkf0bu-id_dv5h7su"><i></i><button>Show Solution</button></span>
## 5. Add the `dequeue` method
In the cell below, see if you can add the `dequeue` method.
Here's what it should do:
* If the queue is empty, it should simply return `None`. Otherwise...
* Get the value from the front of the queue (i.e., the head of the linked list)
* Shift the `head` over so that it refers to the next node
* Update the `num_elements` attribute
* Return the value that was dequeued
```
class Queue:
def __init__(self):
self.head = None
self.tail = None
self.num_elements = 0
def enqueue(self, value):
new_node = Node(value)
if self.head is None:
self.head = new_node
self.tail = self.head
else:
self.tail.next = new_node # add data to the next attribute of the tail (i.e. the end of the queue)
self.tail = self.tail.next # shift the tail (i.e., the back of the queue)
self.num_elements += 1
# Add the dequeue method
def size(self):
return self.num_elements
def is_empty(self):
return self.num_elements == 0
```
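In case you get stuck, here is one possible `dequeue` implementation that follows the steps listed above (a sketch to compare against your own attempt; it belongs inside the `Queue` class):
```
# One possible dequeue method to add inside the Queue class above
def dequeue(self):
    if self.is_empty():
        return None              # nothing to remove from an empty queue
    value = self.head.value      # get the value at the front of the queue
    self.head = self.head.next   # shift the head to the next node
    self.num_elements -= 1       # update the element count
    return value
```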
<span class="graffiti-highlight graffiti-id_s4lyv17-id_n15vlij"><i></i><button>Show Solution</button></span>
## Test it!
Here's some code you can use to check if your implementation works:
```
# Setup
q = Queue()
q.enqueue(1)
q.enqueue(2)
q.enqueue(3)
# Test size
print ("Pass" if (q.size() == 3) else "Fail")
# Test dequeue
print ("Pass" if (q.dequeue() == 1) else "Fail")
# Test enqueue
q.enqueue(4)
print ("Pass" if (q.dequeue() == 2) else "Fail")
print ("Pass" if (q.dequeue() == 3) else "Fail")
print ("Pass" if (q.dequeue() == 4) else "Fail")
q.enqueue(5)
print ("Pass" if (q.size() == 1) else "Fail")
```
## Time Complexity
So what's the time complexity of adding or removing things from our queue here?
Well, when we use `enqueue`, we simply create a new node and add it to the tail of the list. And when we `dequeue` an item, we simply get the value from the head of the list and then shift the `head` variable so that it refers to the next node over.
Both of these operations happen in constant time—that is, they have a time-complexity of O(1).
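A rough, informal way to convince yourself of this (assuming the `Node` class and the completed `Queue` class above are defined) is to time a single enqueue/dequeue pair on queues of very different sizes and observe that the cost does not grow with the number of stored items:
```
# Informal timing check, not a rigorous benchmark: a single enqueue + dequeue
# should take roughly the same time no matter how many items are already stored.
import time

for n in [1_000, 10_000, 100_000]:
    q = Queue()
    for i in range(n):
        q.enqueue(i)

    start = time.perf_counter()
    q.enqueue(-1)   # one enqueue on a queue that already holds n items
    q.dequeue()     # one dequeue
    elapsed = time.perf_counter() - start

    print("n = {:>7}: one enqueue + one dequeue took {:.2e} seconds".format(n, elapsed))
```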
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn import svm
import numpy as np
iris=load_iris()
pdiris=pd.DataFrame(data=iris['data'], columns=['sepal_lenght','sepal_width','petal_length','petal_width'])
pdtarget=pd.DataFrame(data=iris['target'],columns=['species']).apply(lambda x:iris['target_names'][x])
pdiris['target']=pdtarget
#Numeric target labels (0, 1, 2) used for model training
y=pd.DataFrame(data=iris['target'], columns=['target'])
y.head(100)
pdiris.head(10)
pdiris.describe()
pdiris.info()
pdiris.shape
pdiris.isnull().sum()
pdiris.corr()
sns.pairplot(pdiris)
plt.show()
sns.heatmap(pdiris.corr(), annot=True)
plt.show()
groups=pdiris.groupby(['target'])
fig, ax=plt.subplots()
ax.margins(.03)
for target, group in groups:
ax.plot(group.sepal_width, group.petal_width, marker='o', linestyle='', ms=12, label=target);
ax.legend()
plt.show()
print(pdiris.groupby(['target']).corr())
```
## train model
```
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test=train_test_split(pdiris[['sepal_lenght', 'sepal_width','petal_length', 'petal_width']],y['target'], test_size=.3, random_state=123)
model=SVC()
model.fit(x_train, y_train)
y_pred=model.predict(x_test)
accuracy=accuracy_score(y_pred, y_test)
print(accuracy)
print(model.score(x_test, y_test))
cm=confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test, y_pred))
```
## print different kernels
```
h=.02#step size in mesh
C=1.0#SVM regularization parameter
svc=svm.SVC(kernel='linear', C=C).fit(x_train[['sepal_lenght','sepal_width']], y_train)
rbf_svc=svm.SVC(kernel='rbf', gamma=0.7, C=C).fit(x_train[['sepal_lenght','sepal_width']], y_train)
poly_svc=svm.SVC(kernel='poly', degree=3, C=C).fit(x_train[['sepal_lenght','sepal_width']], y_train)
lin_svc=svm.LinearSVC(C=C).fit(x_train[['sepal_lenght','sepal_width']], y_train)
x_train.describe()
x_min, x_max=x_train['sepal_lenght'].min()-1, x_train['sepal_lenght'].max()+1
y_min, y_max=x_train['sepal_width'].min()-1, x_train['sepal_width'].max()+1
print('x_min {} , x_max {}, y_min {}, y_max {}'.format(x_min , x_max, y_min, y_max))
a=np.arange(x_min, x_max,h)
b=np.arange(y_min, y_max, h)
xx,yy=np.meshgrid(a, b)
titles=['SVC with linear kernel', 'LinearSVC (linear kernel)', 'SVC with RBF kernel', 'SVC with polynomial (degree 3) kernel']
y_train.describe()
y_train.head(10)
for i, clf in enumerate((svc, lin_svc, rbf_svc, poly_svc)):
plt.subplot(2,2, i+1)
plt.subplots_adjust(wspace=0.8, hspace=0.8)
Z=clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z=Z.reshape(xx.shape)
plt.contourf(xx,yy,Z,cmap=plt.cm.coolwarm,alpha=0.8)
plt.scatter(x_train['sepal_lenght'],x_train['sepal_width'],c=y_train, cmap=plt.cm.coolwarm)
plt.xlabel('sl')
plt.ylabel('sw')
plt.xlim(xx.min(),xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title(titles[i])
plt.show()
```
<a href="https://colab.research.google.com/github/sayakpaul/Adventures-in-TensorFlow-Lite/blob/master/TUNIT_Conversion_to_TF_Lite.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This notebook presents a demo of the TUNIT paper ([Rethinking the Truly Unsupervised Image-to-Image Translation](https://arxiv.org/abs/2006.06500)). The GitHub repo of the paper can be found [here](https://github.com/clovaai/tunit). It also demonstrates the process of converting PyTorch models to TF Lite using ONNX.

<center>Source: Original Paper</center>
Note that the predictions from the converted TF Lite models look faulty, but this notebook might still serve as a reference for the conversion workflow.
## Setup
```
from google.colab import drive
drive.mount('/content/drive')
```
Please note that we used the **animalFaces10_1_00** pre-trained checkpoints. I first copied the files from [here](https://drive.google.com/drive/folders/1rU1B9OLQjYMBzU6VQX7UwLxod2WzOfNz?usp=sharing) to my personal Drive, created a folder called animalFaces10_1_00, and copied the files to that folder.
```
!git clone https://github.com/clovaai/tunit/
%cd tunit
import torch
from models.generator import Generator
from models.guidingNet import GuidingNet
import torch.nn.functional as F
import torchvision.utils as vutils
from PIL import Image
from torchvision.transforms import ToTensor
```
## Instantiating the model classes
```
G = Generator(128, 128)
C = GuidingNet(128)
!cp -r /content/drive/My\ Drive/animalFaces10_1_00 .
!ls -lh animalFaces10_1_00
```
## Loading the checkpoints in the model classes
```
load_file = 'animalFaces10_1_00/model_4568.ckpt'
checkpoint = torch.load(load_file, map_location='cpu')
G.load_state_dict(checkpoint['G_EMA_state_dict'])
C.load_state_dict(checkpoint['C_EMA_state_dict'])
```
The reference image must be an image from a domain included in the training.
## Gather images for running inference
```
!wget -O source.jpg https://github.com/NVlabs/FUNIT/raw/master/images/input_content.jpg
!wget -O reference.jpg https://user-images.githubusercontent.com/23406491/84877309-4e7abf80-b0c3-11ea-8f2d-b18d398e9584.jpg
```
## Prepare the images and then run inference
```
G.eval()
C.eval()
source_image = Image.open('source.jpg')
reference_image = Image.open('reference.jpg')
x_src = ToTensor()(source_image).unsqueeze(0)
x_ref = ToTensor()(reference_image).unsqueeze(0)
x_src = F.interpolate(x_src, size=(128, 128))
x_ref = F.interpolate(x_ref, size=(128, 128))
x_src = (x_src - 0.5) / 0.5
x_ref = (x_ref - 0.5) / 0.5
s_ref = C.moco(x_ref)
x_res = G(x_src, s_ref)
vutils.save_image(x_res, 'test_out.jpg', normalize=True, padding=0)
```
## Visualization
```
import matplotlib.pyplot as plt
import numpy as np
def imshow(pil_image, title=None):
np_array = np.asarray(pil_image)
plt.imshow(np_array)
if title:
plt.title(title)
plt.figure(figsize=(10, 10))
plt.subplot(1, 3, 1)
imshow(source_image, 'Source Image')
plt.subplot(1, 3, 2)
imshow(reference_image, 'Reference Image')
plt.subplot(1, 3, 3)
result = Image.open('test_out.jpg')
imshow(result, 'Transformed Image')
```
## Set up `onnx-tf`
Reference: https://towardsdatascience.com/onnx-made-easy-957e60d16e94/
```
%cd /content/tunit
%tensorflow_version 2.x
import tensorflow as tf
print(tf.__version__)
!pip install -q tensorflow-addons
!git clone https://github.com/onnx/onnx-tensorflow.git
!pip install onnx
%cd onnx-tensorflow
!pip install -e .
```
## Convert to TensorFlow graph
```
import onnx
from onnx_tf.backend import prepare
# Export the generator
torch.onnx.export(G, (x_src, s_ref), 'generator.onnx', input_names=['test_input', 'style_input'], output_names=['test_output'])
# Export the encoder
torch.onnx.export(C, x_ref, 'encoder.onnx', input_names=['test_input'], output_names=['test_output'])
def generate_tf_graph(onnx_file, tf_graph_file):
# Load ONNX model and convert to TensorFlow format
onnx_module = onnx.load(onnx_file)
tf_rep = prepare(onnx_module)
# Export model as .pb file
tf_rep.export_graph(tf_graph_file)
# Generate the TF Graph of the generator onnx module
generate_tf_graph('generator.onnx', 'generator.pb')
# Generate the TF Graph of the encoder onnx module
generate_tf_graph('encoder.onnx', 'encoder.pb')
```
Ignore the warnings.
## Inspect the TF graphs
```
def load_pb(path_to_pb):
with tf.io.gfile.GFile(path_to_pb, 'rb') as f:
graph_def = tf.compat.v1.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def, name='')
return graph
def show_ops(path_to_pb):
tf_graph = load_pb(path_to_pb)
for op in tf_graph.get_operations():
print(op.values())
show_ops('generator.pb')
```
Output to note: `(<tf.Tensor 'test_output:0' shape=(1, 3, 128, 128) dtype=float32>,)`.
```
show_ops('encoder.pb')
```
Output to note: `(<tf.Tensor 'test_output:0' shape=(1, 128) dtype=float32>,)`. It also matches with the dimensions of `s_ref` which is the output we got when we ran `C.moco(x_ref)`.
## Convert to TF Lite
```
# During writing this tutorial the Flex ops were only supported via tf-nightly in the Python interpreter
!pip uninstall -q tensorflow
!pip install -q tf-nightly
```
Restart the runtime at this point.
```
import os
import tempfile
import tensorflow as tf
print(tf.__version__)
def convert_to_tflite(tf_graph, input_arrays):
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
graph_def_file=tf_graph,
input_arrays=input_arrays,
output_arrays=['test_output']
)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
tf.lite.OpsSet.SELECT_TF_OPS]
# Convert to TFLite Model
tflite_model = converter.convert()
_, tflite_path = tempfile.mkstemp('.tflite')
tflite_model_size = open(tflite_path, 'wb').write(tflite_model)
tf_model_size = os.path.getsize(tf_graph)
print('TensorFlow Model is {} bytes'.format(tf_model_size))
print('TFLite Model is {} bytes'.format(tflite_model_size))
print('Post training dynamic range quantization saves {} bytes'.format(tf_model_size-tflite_model_size))
print('Saved TF Lite model to: {}'.format(tflite_path))
convert_to_tflite('/content/tunit/onnx-tensorflow/generator.pb', ['test_input', 'style_input'])
convert_to_tflite('/content/tunit/onnx-tensorflow/encoder.pb', ['test_input'])
# Please update the TF Lite file paths from the above before running this cell
!cp /tmp/tmpbhh77i06.tflite generator.tflite
!cp /tmp/tmp6144a9n1.tflite encoder.tflite
```
## Download the TF Lite files (optional)
```
# Download the TF Lite files
from google.colab import files
files.download('generator.tflite')
files.download('encoder.tflite')
```
## Running inference with TF Lite
### Inspect the model inputs
```
def load_tflite_model(tflite_model_path):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
# Get image size - converting from BCHW to WH
input_size = input_details[0]['shape'][3], input_details[0]['shape'][2]
print('Input size of {} model: {}'.format(tflite_model_path, input_size))
if tflite_model_path == 'generator.tflite':
style_reference_size = input_details[1]['shape'][0], input_details[1]['shape'][1]
print('Style reference size of {} model: {}'.format(tflite_model_path, style_reference_size))
return (interpreter, input_size)
# Load the TF Lite models in the Python TF Lite interpreter
generator_inter, _ = load_tflite_model('generator.tflite')
encoder_inter, _ = load_tflite_model('encoder.tflite')
```
### Prepare the images for inference
```
# Copy over the sample images to perform inference
!cp /content/tunit/source.jpg .
!cp /content/tunit/reference.jpg .
# Utility to prepare the images
# We need to match the steps that were performed above
def load_img(path_to_img, reshape_size=(128, 128)):
img = tf.io.read_file(path_to_img)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
img = tf.image.resize(img, reshape_size, method='nearest')
img = (img - 0.5) / 0.5
return img
# Prepare the images
x_src = load_img('source.jpg')
x_ref = load_img('reference.jpg')
x_src.shape, x_ref.shape
# The TF Lite models have an input shape of (1, 3, 128, 128)
x_src_reshaped = tf.reshape(x_src, (1, 3, 128, 128))
x_ref_reshaped = tf.reshape(x_ref, (1, 3, 128, 128))
```
### Run inference
```
# Function to run style prediction on preprocessed style image.
# Reference: https://www.tensorflow.org/lite/models/style_transfer/overview#style_transform
def run_style_predict(reference_img, tflite_path):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=tflite_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], reference_img)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(x_ref_reshaped, 'encoder.tflite')
print('Style Bottleneck Shape:', style_bottleneck.shape)
# Run style transform on preprocessed style image
# Reference: https://www.tensorflow.org/lite/models/style_transfer/overview#style_transform
def run_style_transform(style_bottleneck, preprocessed_source_image, tflite_path):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=tflite_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_source_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Transform the content image using the style bottleneck.
resultant_image = run_style_transform(style_bottleneck, x_src_reshaped, 'generator.tflite')
print('Resultant image shape:', resultant_image.shape)
# Visualize the resultant image
import matplotlib.pyplot as plt
resultant_image = tf.reshape(resultant_image, (1, 128, 128, 3))
resultant_image = tf.squeeze(resultant_image)
plt.imshow(tf.clip_by_value(resultant_image, 0., 1.))
plt.show()
```
## Acknowledgements
Thanks to [Kyungjune Baek](https://friedronaldo.github.io/hibkj/) for his guidance in running demo inference in PyTorch.
|
github_jupyter
|
from google.colab import drive
drive.mount('/content/drive')
!git clone https://github.com/clovaai/tunit/
%cd tunit
import torch
from models.generator import Generator
from models.guidingNet import GuidingNet
import torch.nn.functional as F
import torchvision.utils as vutils
from PIL import Image
from torchvision.transforms import ToTensor
G = Generator(128, 128)
C = GuidingNet(128)
!cp -r /content/drive/My\ Drive/animalFaces10_1_00 .
!ls -lh animalFaces10_1_00
load_file = 'animalFaces10_1_00/model_4568.ckpt'
checkpoint = torch.load(load_file, map_location='cpu')
G.load_state_dict(checkpoint['G_EMA_state_dict'])
C.load_state_dict(checkpoint['C_EMA_state_dict'])
!wget -O source.jpg https://github.com/NVlabs/FUNIT/raw/master/images/input_content.jpg
!wget -O reference.jpg https://user-images.githubusercontent.com/23406491/84877309-4e7abf80-b0c3-11ea-8f2d-b18d398e9584.jpg
G.eval()
C.eval()
source_image = Image.open('source.jpg')
reference_image = Image.open('reference.jpg')
x_src = ToTensor()(source_image).unsqueeze(0)
x_ref = ToTensor()(reference_image).unsqueeze(0)
x_src = F.interpolate(x_src, size=(128, 128))
x_ref = F.interpolate(x_ref, size=(128, 128))
x_src = (x_src - 0.5) / 0.5
x_ref = (x_ref - 0.5) / 0.5
s_ref = C.moco(x_ref)
x_res = G(x_src, s_ref)
vutils.save_image(x_res, 'test_out.jpg', normalize=True, padding=0)
import matplotlib.pyplot as plt
import numpy as np
def imshow(pil_image, title=None):
np_array = np.asarray(pil_image)
plt.imshow(np_array)
if title:
plt.title(title)
plt.figure(figsize=(10, 10))
plt.subplot(1, 3, 1)
imshow(source_image, 'Source Image')
plt.subplot(1, 3, 2)
imshow(reference_image, 'Reference Image')
plt.subplot(1, 3, 3)
result = Image.open('test_out.jpg')
imshow(result, 'Transformed Image')
%cd /content/tunit
%tensorflow_version 2.x
import tensorflow as tf
print(tf.__version__)
!pip install -q tensorflow-addons
!git clone https://github.com/onnx/onnx-tensorflow.git
!pip install onnx
%cd onnx-tensorflow
pip install -e .
import onnx
from onnx_tf.backend import prepare
# Export the generator
torch.onnx.export(G, (x_src, s_ref), 'generator.onnx', input_names=['test_input', 'style_input'], output_names=['test_output'])
# Export the encoder
torch.onnx.export(C, x_ref, 'encoder.onnx', input_names=['test_input'], output_names=['test_output'])
def generate_tf_graph(onnx_file, tf_graph_file):
# Load ONNX model and convert to TensorFlow format
onnx_module = onnx.load(onnx_file)
tf_rep = prepare(onnx_module)
# Export model as .pb file
tf_rep.export_graph(tf_graph_file)
# Generate the TF Graph of the generator onnx module
generate_tf_graph('generator.onnx', 'generator.pb')
# Generate the TF Graph of the encoder onnx module
generate_tf_graph('encoder.onnx', 'encoder.pb')
def load_pb(path_to_pb):
with tf.io.gfile.GFile(path_to_pb, 'rb') as f:
graph_def = tf.compat.v1.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def, name='')
return graph
def show_ops(path_to_pb):
tf_graph = load_pb(path_to_pb)
for op in tf_graph.get_operations():
print(op.values())
show_ops('generator.pb')
show_ops('encoder.pb')
# During writing this tutorial the Flex ops were only supported via tf-nightly in the Python interpreter
!pip uninstall -q tensorflow
!pip install -q tf-nightly
import os
import tempfile
import tensorflow as tf
print(tf.__version__)
def convert_to_tflite(tf_graph, input_arrays):
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
graph_def_file=tf_graph,
input_arrays=input_arrays,
output_arrays=['test_output']
)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
tf.lite.OpsSet.SELECT_TF_OPS]
# Convert to TFLite Model
tflite_model = converter.convert()
_, tflite_path = tempfile.mkstemp('.tflite')
tflite_model_size = open(tflite_path, 'wb').write(tflite_model)
tf_model_size = os.path.getsize(tf_graph)
print('TensorFlow Model is {} bytes'.format(tf_model_size))
print('TFLite Model is {} bytes'.format(tflite_model_size))
print('Post training dynamic range quantization saves {} bytes'.format(tf_model_size-tflite_model_size))
print('Saved TF Lite model to: {}'.format(tflite_path))
convert_to_tflite('/content/tunit/onnx-tensorflow/generator.pb', ['test_input', 'style_input'])
convert_to_tflite('/content/tunit/onnx-tensorflow/encoder.pb', ['test_input'])
# Please update the TF Lite file paths from the above before running this cell
!cp /tmp/tmpbhh77i06.tflite generator.tflite
!cp /tmp/tmp6144a9n1.tflite encoder.tflite
# Download the TF Lite files
from google.colab import files
files.download('generator.tflite')
files.download('encoder.tflite')
def load_tflite_model(tflite_model_path):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
# Get image size - converting from BCHW to WH
input_size = input_details[0]['shape'][3], input_details[0]['shape'][2]
print('Input size of {} model: {}'.format(tflite_model_path, input_size))
if tflite_model_path == 'generator.tflite':
style_reference_size = input_details[1]['shape'][0], input_details[1]['shape'][1]
print('Style reference size of {} model: {}'.format(tflite_model_path, style_reference_size))
return (interpreter, input_size)
# Load the TF Lite models in the Python TF Lite interpreter
generator_inter, _ = load_tflite_model('generator.tflite')
encoder_inter, _ = load_tflite_model('encoder.tflite')
# Copy over the sample images to perform inference
!cp /content/tunit/source.jpg .
!cp /content/tunit/reference.jpg .
# Utility to prepare the images
# We need to match the steps that were performed above
def load_img(path_to_img, reshape_size=(128, 128)):
img = tf.io.read_file(path_to_img)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
img = tf.image.resize(img, reshape_size, method='nearest')
img = (img - 0.5) / 0.5
return img
# Prepare the images
x_src = load_img('source.jpg')
x_ref = load_img('reference.jpg')
x_src.shape, x_ref.shape
# The TF Lite models have an input shape of (1, 3, 128, 128)
x_src_reshaped = tf.reshape(x_src, (1, 3, 128, 128))
x_ref_reshaped = tf.reshape(x_ref, (1, 3, 128, 128))
# Function to run style prediction on preprocessed style image.
# Reference: https://www.tensorflow.org/lite/models/style_transfer/overview#style_transform
def run_style_predict(reference_img, tflite_path):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=tflite_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], reference_img)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(x_ref_reshaped, 'encoder.tflite')
print('Style Bottleneck Shape:', style_bottleneck.shape)
# Run style transform on preprocessed style image
# Reference: https://www.tensorflow.org/lite/models/style_transfer/overview#style_transform
def run_style_transform(style_bottleneck, preprocessed_source_image, tflite_path):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=tflite_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_source_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Transform the content image using the style bottleneck.
resultant_image = run_style_transform(style_bottleneck, x_src_reshaped, 'generator.tflite')
print('Resultant image shape:', resultant_image.shape)
# Visualize the resultant image
import matplotlib.pyplot as plt
resultant_image = tf.reshape(resultant_image, (1, 128, 128, 3))
resultant_image = tf.squeeze(resultant_image)
plt.imshow(tf.clip_by_value(resultant_image, 0., 1.))
plt.show()
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Azure Machine Learning Pipelines with Data Dependency
In this notebook, we will see how we can build a pipeline with implicit data dependency.
## Prerequisites and Azure Machine Learning Basics
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration Notebook](https://aka.ms/pl-config) first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc.
### Azure Machine Learning and Pipeline SDK-specific Imports
```
import azureml.core
from azureml.core import Workspace, Experiment, Datastore
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.widgets import RunDetails
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
from azureml.data.data_reference import DataReference
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import PythonScriptStep
print("Pipeline SDK-specific imports completed")
```
### Initialize Workspace
Initialize a [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class%29) object from persisted configuration.
```
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
# Default datastore (Azure blob storage)
# def_blob_store = ws.get_default_datastore()
def_blob_store = Datastore(ws, "workspaceblobstore")
print("Blobstore's name: {}".format(def_blob_store.name))
```
### Source Directory
The best practice is to use separate folders for the scripts and their dependent files for each step, and to specify that folder as the `source_directory` for the step. This helps reduce the size of the snapshot created for the step (only the specified folder is snapshotted). Since a change to any file in the `source_directory` triggers a re-upload of the snapshot, keeping the folder small also allows the step to be reused when nothing in it has changed.
```
# source directory
source_directory = 'data_dependency_run_train'
print('Sample scripts will be created in {} directory.'.format(source_directory))
```
### Required data and script files for the tutorial
Sample files required to finish this tutorial are already copied to the project folder specified above. Even though the .py files provided in the samples don't contain much "ML work," as a data scientist you will work on such scripts extensively as part of your work. The exact contents of these files are not important for completing this tutorial; the one-line files are for demonstration purposes only.
### Compute Targets
See the list of Compute Targets on the workspace.
```
cts = ws.compute_targets
for ct in cts:
print(ct)
```
#### Retrieve or create an Aml compute
Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's get the default Aml Compute in the current workspace. We will then run the training script on this compute target.
```
from azureml.core.compute_target import ComputeTargetException
aml_compute_target = "cpu-cluster"
try:
aml_compute = AmlCompute(ws, aml_compute_target)
print("found existing compute target.")
except ComputeTargetException:
print("creating new compute target")
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2",
min_nodes = 1,
max_nodes = 4)
aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config)
aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
print("Aml Compute attached")
# For a more detailed view of current Azure Machine Learning Compute status, use get_status()
# example: un-comment the following line.
# print(aml_compute.get_status().serialize())
```
**Wait for this call to finish before proceeding (you will see the asterisk turning to a number).**
Now that you have created the compute target, let's see what the workspace's `compute_targets` property returns. You should now see one entry named 'cpu-cluster' of type AmlCompute.
## Building Pipeline Steps with Inputs and Outputs
As mentioned earlier, a step in the pipeline can take data as input. This data can be a data source that lives in one of the accessible data locations, or intermediate data produced by a previous step in the pipeline.
### Datasources
A datasource is represented by a **[DataReference](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.data_reference.datareference?view=azure-ml-py)** object and points to data that lives in, or is accessible from, a Datastore. A DataReference can point to a file or a directory.
```
# Reference the data uploaded to blob storage using DataReference
# Assign the datasource to blob_input_data variable
# DataReference(datastore,
# data_reference_name=None,
# path_on_datastore=None,
# mode='mount',
# path_on_compute=None,
# overwrite=False)
blob_input_data = DataReference(
datastore=def_blob_store,
data_reference_name="test_data",
path_on_datastore="20newsgroups/20news.pkl")
print("DataReference object created")
```
### Intermediate/Output Data
Intermediate data (or output of a Step) is represented by **[PipelineData](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata?view=azure-ml-py)** object. PipelineData can be produced by one step and consumed in another step by providing the PipelineData object as an output of one step and the input of one or more steps.
#### Constructing PipelineData
- **name:** [*Required*] Name of the data item within the pipeline graph
- **datastore_name:** Name of the Datastore to write this output to
- **output_name:** Name of the output
- **output_mode:** Specifies "upload" or "mount" modes for producing output (default: mount)
- **output_path_on_compute:** For "upload" mode, the path to which the module writes this output during execution
- **output_overwrite:** Flag to overwrite pre-existing data
```
# Define intermediate data using PipelineData
# Syntax
# PipelineData(name,
# datastore=None,
# output_name=None,
# output_mode='mount',
# output_path_on_compute=None,
# output_overwrite=None,
# data_type=None,
# is_directory=None)
# Naming the intermediate data as processed_data1 and assigning it to the variable processed_data1.
processed_data1 = PipelineData("processed_data1",datastore=def_blob_store)
print("PipelineData object created")
```
### Pipelines steps using datasources and intermediate data
Machine learning pipelines can have many steps and these steps could use or reuse datasources and intermediate data. Here's how we construct such a pipeline:
#### Define a Step that consumes a datasource and produces intermediate data.
In this step, we define a step that consumes a datasource and produces intermediate data.
**Open `train.py` on your local machine and examine the arguments, inputs, and outputs of the script. That will give you a good sense of why the script argument names used below are important.**
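The actual script ships with the sample project, but a minimal, hypothetical sketch of what such a `train.py` could look like (the argument names mirror the ones passed to the step below; the file handling is illustrative only) is:
```
# Hypothetical sketch of train.py -- the real sample script may differ
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--input_data", type=str, help="path to the input data (mounted DataReference)")
parser.add_argument("--output_train", type=str, help="directory to write the intermediate output to")
args = parser.parse_args()

print("Reading input from:", args.input_data)
os.makedirs(args.output_train, exist_ok=True)
with open(os.path.join(args.output_train, "output.txt"), "w") as f:
    f.write("processed")  # placeholder for real training logic
```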
#### Specify conda dependencies and a base docker image through a RunConfiguration
This step uses a Docker image and scikit-learn. Use a [**RunConfiguration**](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.runconfiguration?view=azure-ml-py) to specify these requirements and pass it when creating the PythonScriptStep.
```
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import DEFAULT_CPU_IMAGE
# create a new runconfig object
run_config = RunConfiguration()
# enable Docker
run_config.environment.docker.enabled = True
# set Docker base image to the default CPU-based image
run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE
# use conda_dependencies.yml to create a conda environment in the Docker image for execution
run_config.environment.python.user_managed_dependencies = False
# specify CondaDependencies obj
run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])
# step4 consumes the datasource (Datareference) in the previous step
# and produces processed_data1
trainStep = PythonScriptStep(
script_name="train.py",
arguments=["--input_data", blob_input_data, "--output_train", processed_data1],
inputs=[blob_input_data],
outputs=[processed_data1],
compute_target=aml_compute,
source_directory=source_directory,
runconfig=run_config
)
print("trainStep created")
```
#### Define a Step that consumes intermediate data and produces intermediate data
In this step, we define a step that consumes an intermediate data and produces intermediate data.
**Open `extract.py` on your local machine and examine the arguments, inputs, and outputs of the script. That will give you a good sense of why the script argument names used below are important.**
```
# step5 to use the intermediate data produced by step4
# This step also produces an output processed_data2
processed_data2 = PipelineData("processed_data2", datastore=def_blob_store)
source_directory = "data_dependency_run_extract"
extractStep = PythonScriptStep(
script_name="extract.py",
arguments=["--input_extract", processed_data1, "--output_extract", processed_data2],
inputs=[processed_data1],
outputs=[processed_data2],
compute_target=aml_compute,
source_directory=source_directory)
print("extractStep created")
```
#### Define a Step that consumes intermediate data and existing data and produces intermediate data
In this step, we define a step that consumes multiple data types and produces intermediate data.
This step uses the output generated from the previous step as well as existing data on a DataStore. The location of the existing data is specified using a [**PipelineParameter**](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelineparameter?view=azure-ml-py) and a [**DataPath**](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.datapath.datapath?view=azure-ml-py). Using a PipelineParameter enables easy modification of the data location when the Pipeline is published and resubmitted.
**Open `compare.py` on your local machine and examine the arguments, inputs, and outputs of the script. That will give you a good sense of why the script argument names used below are important.**
```
# Reference the data uploaded to blob storage using a PipelineParameter and a DataPath
from azureml.pipeline.core import PipelineParameter
from azureml.data.datapath import DataPath, DataPathComputeBinding
datapath = DataPath(datastore=def_blob_store, path_on_datastore='20newsgroups/20news.pkl')
datapath_param = PipelineParameter(name="compare_data", default_value=datapath)
data_parameter1 = (datapath_param, DataPathComputeBinding(mode='mount'))
# Now define the compare step which takes two inputs and produces an output
processed_data3 = PipelineData("processed_data3", datastore=def_blob_store)
source_directory = "data_dependency_run_compare"
compareStep = PythonScriptStep(
script_name="compare.py",
arguments=["--compare_data1", data_parameter1, "--compare_data2", processed_data2, "--output_compare", processed_data3],
inputs=[data_parameter1, processed_data2],
outputs=[processed_data3],
compute_target=aml_compute,
source_directory=source_directory)
print("compareStep created")
```
#### Build the pipeline
```
pipeline1 = Pipeline(workspace=ws, steps=[compareStep])
print ("Pipeline is built")
pipeline_run1 = Experiment(ws, 'Data_dependency').submit(pipeline1)
print("Pipeline is submitted for execution")
RunDetails(pipeline_run1).show()
```
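Because the location of the comparison data was exposed through a `PipelineParameter`, the same pipeline can later be resubmitted with a different data path. A hedged sketch (reusing the `ws`, `def_blob_store`, and `pipeline1` objects defined above; the path shown is only illustrative):
```
# Hypothetical: override the "compare_data" DataPath parameter on a later submission
new_datapath = DataPath(datastore=def_blob_store,
                        path_on_datastore="20newsgroups/20news.pkl")  # point this at another file if desired
pipeline_run2 = Experiment(ws, 'Data_dependency').submit(
    pipeline1,
    pipeline_parameters={"compare_data": new_datapath})
```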
#### Wait for pipeline run to complete
```
pipeline_run1.wait_for_completion(show_output=True)
```
### See Outputs
See where outputs of each pipeline step are located on your datastore.
***Wait for pipeline run to complete, to make sure all the outputs are ready***
```
# Get Steps
for step in pipeline_run1.get_steps():
print("Outputs of step " + step.name)
# Get a dictionary of StepRunOutputs with the output name as the key
output_dict = step.get_outputs()
for name, output in output_dict.items():
output_reference = output.get_port_data_reference() # Get output port data reference
print("\tname: " + name)
print("\tdatastore: " + output_reference.datastore_name)
print("\tpath on datastore: " + output_reference.path_on_datastore)
```
### Download Outputs
We can download the output of any step to our local machine using the SDK.
```
# Retrieve the step runs by name 'train.py'
train_step = pipeline_run1.find_step_run('train.py')
if train_step:
train_step_obj = train_step[0] # since we have only one step by name 'train.py'
train_step_obj.get_output_data('processed_data1').download("./outputs") # download the output to current directory
```
# Next: Publishing the Pipeline and calling it from the REST endpoint
See this [notebook](https://aka.ms/pl-pub-rep) to understand how the pipeline is published and you can call the REST endpoint to run the pipeline.
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Plot style
sns.set()
%pylab inline
pylab.rcParams['figure.figsize'] = (4, 4)
# Avoid inaccurate floating values (for inverse matrices in dot product for instance)
# See https://stackoverflow.com/questions/24537791/numpy-matrix-inversion-rounding-errors
np.set_printoptions(suppress=True)
%%html
<style>
.pquote {
text-align: left;
margin: 40px 0 40px auto;
width: 70%;
font-size: 1.5em;
font-style: italic;
display: block;
line-height: 1.3em;
color: #5a75a7;
font-weight: 600;
border-left: 5px solid rgba(90, 117, 167, .1);
padding-left: 6px;
}
.notes {
font-style: italic;
display: block;
margin: 40px 10%;
}
img + em {
text-align: center;
display: block;
color: gray;
font-size: 0.9em;
font-weight: 600;
}
</style>
def plotVectors(vecs, cols, alpha=1):
"""
Plot set of vectors.
Parameters
----------
vecs : array-like
Coordinates of the vectors to plot. Each vectors is in an array. For
instance: [[1, 3], [2, 2]] can be used to plot 2 vectors.
cols : array-like
Colors of the vectors. For instance: ['red', 'blue'] will display the
first vector in red and the second in blue.
alpha : float
Opacity of vectors
Returns:
fig : instance of matplotlib.figure.Figure
The figure of the vectors
"""
plt.axvline(x=0, color='#A9A9A9', zorder=0)
plt.axhline(y=0, color='#A9A9A9', zorder=0)
for i in range(len(vecs)):
if (isinstance(alpha, list)):
alpha_i = alpha[i]
else:
alpha_i = alpha
x = np.concatenate([[0,0],vecs[i]])
plt.quiver([x[0]],
[x[1]],
[x[2]],
[x[3]],
angles='xy', scale_units='xy', scale=1, color=cols[i],
alpha=alpha_i)
```
$$
\newcommand\bs[1]{\boldsymbol{#1}}
\newcommand\norm[1]{\left\lVert#1\right\rVert}
$$
# Introduction
This lesson will cover some very important concepts in linear algebra. It's dense, so strap yourself in.
First, we'll introduce eigenvectors and eigenvalues. Then we will build upon the fact that a matrix can be seen as a linear transformation and that applying a matrix on its eigenvectors gives new vectors with the same direction. Then, we will learn how to express quadratic equations in matrix form. Here we'll see that the eigendecomposition of the matrix corresponding to a quadratic equation can be used to find the minimum and maximum of this function.
There's also a bonus section on visualizing linear transformations with python.
# 3.7 Eigendecomposition
The decomposition of a matrix means that we want to find a product of matrices that is equal to the initial matrix.
Eigendecomposition is one form of matrix decomposition. "Eigen" means "peculiar to" or "characteristic" in German. Remembering that provides a good mnemonic since, in the case of eigendecomposition, we decompose the initial matrix into the product of its eigenvectors and eigenvalues. But what are eigenvectors and eigenvalues?!
# Matrices as linear transformations
As we have seen with the example of the identity matrix, you can think of matrices as linear transformations. Some matrices will rotate your space while others might rescale it. The identity matrix of course does nothing to your space.
So when we apply a matrix to a vector, we end up with a transformed version of the vector. When we say that we "apply" the matrix to the vector, we mean that we calculate the dot product of the matrix with the vector. Let's begin with a basic example of this kind of transformation.
### Example 1.
```
A = np.array([[-1, 3], [2, -2]])
A
v = np.array([[2], [1]])
v
```
Let's plot this vector:
```
plotVectors([v.flatten()], cols=['#1190FF'])
plt.ylim(-1, 4)
plt.xlim(-1, 4)
```
Now, let's apply matrix $\bs{A}$ to this vector. We'll plot the original vector in light blue and the new one that's been transformed with the matrix in orange:
```
Av = A.dot(v)
print(Av)
plotVectors([v.flatten(), Av.flatten()], cols=['#1190FF', '#FF9A13'])
plt.ylim(-1, 4)
plt.xlim(-1, 4)
```
And there's the effect of applying matrix $\bs{A}$ to vector $\bs{v}$.
Now that you're comfortable thinking of matrices as linear transformation recipes, let's see the case of a very special type of vector: the eigenvector.
# Eigenvectors and eigenvalues
So you just saw an example of a vector transformed by a matrix. Now imagine that the transformation of an initial vector gives us a new vector that has the exact same direction (but not necessarily the same scale) as the initial vector. This special vector is called an eigenvector of the matrix.
In more mathematical terms, $\bs{v}$ is an eigenvector of $\bs{A}$ if $\bs{v}$ and $\bs{Av}$ have the same direction. The output vector $\bs{Av}$ is just a scaled version of the input vector $\bs{v}$. This scaling factor $\lambda$ is called the **eigenvalue** of $\bs{A}$. Thus:
$$
\bs{Av} = \lambda\bs{v}
$$
In physical terms, this means that eigenvectors are those directions that remain unchanged by the application of $\bs{A}$ to $\bs{v}$; and $\lambda$ describes how distances (scale) are changed in that direction, with negative eigenvalues indicating a reversal of that direction.
### Example 2.
Let's $\bs{A}$ be the following matrix:
$$
\bs{A}=
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}
$$
We know that one eigenvector of A is:
$$
\bs{v}=
\begin{bmatrix}
1\\\\
1
\end{bmatrix}
$$
We can check that $\bs{Av} = \lambda\bs{v}$:
$$
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}
\begin{bmatrix}
1\\\\
1
\end{bmatrix}=\begin{bmatrix}
6\\\\
6
\end{bmatrix}
$$
We can see that:
$$
6\times \begin{bmatrix}
1\\\\
1
\end{bmatrix} = \begin{bmatrix}
6\\\\
6
\end{bmatrix}
$$
which means that $\bs{v}$ is an eigenvector of $\bs{A}$. Also, the corresponding eigenvalue is $\lambda=6$.
Let's plot $\bs{v}$ and $\bs{Av}$ to check that their directions are the same:
```
A = np.array([[5, 1], [3, 3]])
A
v = np.array([[1], [1]])
v
Av = A.dot(v)
orange = '#FF9A13'
blue = '#1190FF'
plotVectors([Av.flatten(), v.flatten()], cols=[blue, orange])
plt.ylim(-1, 7)
plt.xlim(-1, 7)
```
We can see that their directions are the same and that the eigenvalue of 6 scaled our vector's distance in that direction.
Another eigenvector of $\bs{A}$ is
$$
\bs{v}=
\begin{bmatrix}
1\\\\
-3
\end{bmatrix}
$$
because
$$
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}\begin{bmatrix}
1\\\\
-3
\end{bmatrix} = \begin{bmatrix}
2\\\\
-6
\end{bmatrix}
$$
and
$$
2 \times \begin{bmatrix}
1\\\\
-3
\end{bmatrix} =
\begin{bmatrix}
2\\\\
-6
\end{bmatrix}
$$
So the corresponding eigenvalue is $\lambda=2$.
```
v = np.array([[1], [-3]])
v
Av = A.dot(v)
plotVectors([Av.flatten(), v.flatten()], cols=[blue, orange])
plt.ylim(-7, 1)
plt.xlim(-1, 3)
```
These examples show that eigenvectors $\bs{v}$ are vectors that change only in scale when we apply the matrix $\bs{A}$ to them. Here the scales were 6 for the first eigenvector and 2 for the second, meaning $\lambda$ can take any real (or even complex) value.
## Find eigenvalues and eigenvectors in Python
Numpy provides the function `np.linalg.eig()` for returning eigenvalues and eigenvectors (the first returned array contains the eigenvalues and the second the eigenvectors, concatenated as columns):
```python
(array([ 6., 2.]), array([[ 0.70710678, -0.31622777],
[ 0.70710678, 0.9486833 ]]))
```
Here's a demonstration with our preceding example.
```
A = np.array([[5, 1], [3, 3]])
A
np.linalg.eig(A)
```
We can see that the eigenvalues are the same ones we got before: $6$ and $2$.
The eigenvectors correspond to the columns of the second array. This means that the eigenvector corresponding to $\lambda=6$ is:
$$
\begin{bmatrix}
0.70710678\\\\
0.70710678
\end{bmatrix}
$$
And the eigenvector corresponding to $\lambda=2$ is:
$$
\begin{bmatrix}
-0.31622777\\\\
0.9486833
\end{bmatrix}
$$
The eigenvectors look different because they don't necessarily have the same scaling as the ones we gave in the example. But we can easily see that the first eigenvector corresponds to a scaled down version of our $\begin{bmatrix}
1\\\\
1
\end{bmatrix}$.
$$
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}
\begin{bmatrix}
0.70710678\\\\
0.70710678
\end{bmatrix}=
\begin{bmatrix}
4.24264069\\\\
4.24264069
\end{bmatrix}
$$
Thus we still have $\bs{Av} = \lambda\bs{v}$, with $0.70710678 \times 6 = 4.24264069$. This indicates that there is an infinite number of eigenvectors corresponding to the eigenvalue $6$. They are all equivalent because it is the direction, not the scaling, that matters.
For the second eigenvector, we can check that it corresponds to a scaled version of $\begin{bmatrix}
1\\\\
-3
\end{bmatrix}$. To do so, let's draw these vectors and see if they have the same direction.
```
v = np.array([[1], [-3]])
Av = A.dot(v)
v_np = [-0.31622777, 0.9486833]
plotVectors([Av.flatten(), v.flatten(), v_np], cols=[blue, orange, 'blue'])
plt.ylim(-7, 1)
plt.xlim(-1, 3)
```
We can see that the vector found with numpy (in dark blue) is just a scaled version of our preceding $\begin{bmatrix}
1\\\\
-3
\end{bmatrix}$.
## Rescaled vectors
As we just saw with numpy, if $\bs{v}$ is an eigenvector of $\bs{A}$, then any rescaled vector $s\bs{v}$ is also an eigenvector of $\bs{A}$. The eigenvalue of that rescaled vector will be the same.
Let's try to rescale $
\bs{v}=
\begin{bmatrix}
1\\\\
-3
\end{bmatrix}
$ from our preceding example.
For instance,
$$
\bs{3v}=
\begin{bmatrix}
3\\\\
-9
\end{bmatrix}
$$
$$
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}
\begin{bmatrix}
3\\\\
-9
\end{bmatrix} =
\begin{bmatrix}
6\\\\
-18
\end{bmatrix} = 2 \times
\begin{bmatrix}
3\\\\
-9
\end{bmatrix}
$$
We have $\bs{A}(3\bs{v}) = \lambda(3\bs{v})$ and the eigenvalue is still $\lambda=2$.
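We can also check this numerically with numpy (a quick sketch reusing the matrix $\bs{A}$ from above):
```
A = np.array([[5, 1], [3, 3]])
v3 = np.array([[3], [-9]])  # the rescaled eigenvector 3v
print(A.dot(v3))            # [[6], [-18]], i.e. 2 * [[3], [-9]]
```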
## Concatenating eigenvalues and eigenvectors
Now we're going to use eigenvectors and eigenvalues to decompose a matrix. All eigenvectors of a matrix $\bs{A}$ can be concatenated in a matrix $\bs{V}$, with each column corresponding to each eigenvector (like in the second array returned by `np.linalg.eig(A)`):
$$
\bs{V}=
\begin{bmatrix}
1 & 1\\\\
1 & -3
\end{bmatrix}
$$
The first column $
\begin{bmatrix}
1\\\\
1
\end{bmatrix}
$ corresponds to $\lambda=6$ and the second $
\begin{bmatrix}
1\\\\
-3
\end{bmatrix}
$ to $\lambda=2$.
The vector $\bs{\lambda}$ can be created from all eigenvalues:
$$
\bs{\lambda}=
\begin{bmatrix}
6\\\\
2
\end{bmatrix}
$$
Then the eigendecomposition is given by
$$
\bs{A}=\bs{V}\cdot diag(\bs{\lambda}) \cdot \bs{V}^{-1}
$$
> Recall from the last lesson that $diag(\bs{\lambda})$ is a diagonal matrix with the eigenvalues of $\bs{A}$ on its diagonal.
Continuing with our example we have:
$$
\bs{V}=\begin{bmatrix}
1 & 1\\\\
1 & -3
\end{bmatrix}
$$
A diagonal matrix has nonzero elements only on the diagonal running from the upper left to the lower right:
$$
diag(\bs{\lambda})=
\begin{bmatrix}
6 & 0\\\\
0 & 2
\end{bmatrix}
$$
To calculate $\bs{V}^{-1}$, we'll use numpy:
```
V = np.array([[1, 1], [1, -3]])
V
V_inv = np.linalg.inv(V)
V_inv
```
So let's plug
$$
\bs{V}^{-1}=\begin{bmatrix}
0.75 & 0.25\\\\
0.25 & -0.25
\end{bmatrix}
$$
into our equation:
$$
\begin{align*}
&\bs{V}\cdot diag(\bs{\lambda}) \cdot \bs{V}^{-1}\\\\
&=
\begin{bmatrix}
1 & 1\\\\
1 & -3
\end{bmatrix}
\begin{bmatrix}
6 & 0\\\\
0 & 2
\end{bmatrix}
\begin{bmatrix}
0.75 & 0.25\\\\
0.25 & -0.25
\end{bmatrix}
\end{align*}
$$
If we do the dot product of the first two matrices we have:
$$
\begin{bmatrix}
1 & 1\\\\
1 & -3
\end{bmatrix}
\begin{bmatrix}
6 & 0\\\\
0 & 2
\end{bmatrix} =
\begin{bmatrix}
6 & 2\\\\
6 & -6
\end{bmatrix}
$$
Substituting that result into our equation:
$$
\begin{align*}
&\begin{bmatrix}
6 & 2\\\\
6 & -6
\end{bmatrix}
\begin{bmatrix}
0.75 & 0.25\\\\
0.25 & -0.25
\end{bmatrix}\\\\
&=
\begin{bmatrix}
6\times0.75 + (2\times0.25) & 6\times0.25 + (2\times-0.25)\\\\
6\times0.75 + (-6\times0.25) & 6\times0.25 + (-6\times-0.25)
\end{bmatrix}\\\\
&=
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}=
\bs{A}
\end{align*}
$$
Let's check our result with Python:
```
lambdas = np.diag([6,2])
lambdas
V.dot(lambdas).dot(V_inv)
```
That confirms all of our previous calculations.
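The same reconstruction also works with the (differently scaled) eigenvectors that numpy returns; a quick sketch:
```
eigVals, eigVecs = np.linalg.eig(A)  # A is still [[5, 1], [3, 3]] here
# V . diag(lambda) . V^-1 gives back A, whatever scaling the eigenvectors have
eigVecs.dot(np.diag(eigVals)).dot(np.linalg.inv(eigVecs))
```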
## Real symmetric matrix
In the case of real-valued symmetric matrices (these were introduced in the last lesson), eigendecomposition can be expressed as
$$
\bs{A} = \bs{Q}\Lambda \bs{Q}^\text{T}
$$
where $\bs{Q}$ is the matrix with eigenvectors as columns and $\Lambda$ is $diag(\lambda)$.
### Example 3.
$$
\bs{A}=\begin{bmatrix}
6 & 2\\\\
2 & 3
\end{bmatrix}
$$
This matrix is symmetric because $\bs{A}=\bs{A}^\text{T}$. Its eigenvectors are:
$$
\bs{Q}=
\begin{bmatrix}
0.89442719 & -0.4472136\\\\
0.4472136 & 0.89442719
\end{bmatrix}
$$
and if we put its eigenvalues in a diagonal matrix, we get:
$$
\bs{\Lambda}=
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}
$$
So let's calculate $\bs{Q\Lambda}$:
$$
\begin{align*}
\bs{Q\Lambda}&=
\begin{bmatrix}
0.89442719 & -0.4472136\\\\
0.4472136 & 0.89442719
\end{bmatrix}
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}\\\\
&=
\begin{bmatrix}
0.89442719 \times 7 & -0.4472136\times 2\\\\
0.4472136 \times 7 & 0.89442719\times 2
\end{bmatrix}\\\\
&=
\begin{bmatrix}
6.26099033 & -0.8944272\\\\
3.1304952 & 1.78885438
\end{bmatrix}
\end{align*}
$$
with:
$$
\bs{Q}^\text{T}=
\begin{bmatrix}
0.89442719 & 0.4472136\\\\
-0.4472136 & 0.89442719
\end{bmatrix}
$$
So we have:
$$
\begin{align*}
\bs{Q\Lambda} \bs{Q}^\text{T}&=
\begin{bmatrix}
6.26099033 & -0.8944272\\\\
3.1304952 & 1.78885438
\end{bmatrix}
\begin{bmatrix}
0.89442719 & 0.4472136\\\\
-0.4472136 & 0.89442719
\end{bmatrix}\\\\
&=
\begin{bmatrix}
6 & 2\\\\
2 & 3
\end{bmatrix}
\end{align*}
$$
This is all to show that for a symmetric matrix, the transpose $\bs{Q}^\text{T}$ takes the place of the inverse $\bs{Q}^{-1}$ in the eigendecomposition, which makes symmetric matrices especially convenient to work with.
And to prove the math to ourselves, let's do the work with `linalg` from numpy:
```
A = np.array([[6, 2], [2, 3]])
A
eigVals, eigVecs = np.linalg.eig(A)
eigVecs
eigVals = np.diag(eigVals)
eigVals
eigVecs.dot(eigVals).dot(eigVecs.T)
```
As you can see, the result corresponds to our initial matrix.
# Quadratic form to matrix form
Eigendecomposition can be used to optimize [quadratic functions](https://en.wikipedia.org/wiki/Quadratic_function). We will see that when $\bs{x}$ takes the values of an eigenvector, $f(\bs{x})$ takes the value of its corresponding eigenvalue.
Say we have the following quadratic equation
$$
f(\bs{x}) = ax_1^2 +(b+c)x_1x_2 + dx_2^2
$$
Just as we saw with simpler linear systems, a quadratic can be represented in matrix form:
$$
f(\bs{x})= \begin{bmatrix}
x_1 & x_2
\end{bmatrix}\begin{bmatrix}
a & b\\\\
c & d
\end{bmatrix}\begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix} = \bs{x^\text{T}Ax}
$$
with:
$$
\bs{x} = \begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix}
$$
and
$$
\bs{A}=\begin{bmatrix}
a & b\\\\
c & d
\end{bmatrix}
$$
If you look at the relationship between these forms, you can see that $a$ gives you the coefficient of $x_1^2$, $(b + c)$ the coefficient of $x_1x_2$, and $d$ the coefficient of $x_2^2$. This means that the same quadratic form can be obtained from an infinite number of matrices $\bs{A}$ by changing $b$ and $c$ while preserving their sum.
#### But why?
Having a quadratic equation in matrix form like this will help us with [constrained optimization](https://en.wikipedia.org/wiki/Constrained_optimization) (as we'll see below).
### Example 4.
$$
\bs{x} = \begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix}
$$
and
$$
\bs{A}=\begin{bmatrix}
2 & 4\\\\
2 & 5
\end{bmatrix}
$$
gives the following quadratic form:
$$
2x_1^2 + (4+2)x_1x_2 + 5x_2^2\\\\=2x_1^2 + 6x_1x_2 + 5x_2^2
$$
but if:
$$
\bs{A}=\begin{bmatrix}
2 & -3\\\\
9 & 5
\end{bmatrix}
$$
we still have the same quadratic form:
$$
2x_1^2 + (-3+9)x_1x_2 + 5x_2^2\\\\=2x_1^2 + 6x_1x_2 + 5x_2^2
$$
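We can spot-check numerically that both matrices produce the same quadratic form (a sketch using an arbitrary vector):
```
x = np.array([[2], [3]])          # any vector
A1 = np.array([[2, 4], [2, 5]])
A2 = np.array([[2, -3], [9, 5]])
print(x.T.dot(A1).dot(x))         # [[89]]
print(x.T.dot(A2).dot(x))         # [[89]] as well
```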
### Example 5
For this example, we will go from the matrix form to the quadratic form using a symmetric matrix $\bs{A}$. Let's use the matrix from example $3$.
$$
\bs{x} = \begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix}
$$
and
$$\bs{A}=\begin{bmatrix}
6 & 2\\\\
2 & 3
\end{bmatrix}
$$
$$
\begin{align*}
\bs{x^\text{T}Ax}&=
\begin{bmatrix}
x_1 & x_2
\end{bmatrix}
\begin{bmatrix}
6 & 2\\\\
2 & 3
\end{bmatrix}
\begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix}\\\\
&=
\begin{bmatrix}
x_1 & x_2
\end{bmatrix}
\begin{bmatrix}
6 x_1 + 2 x_2\\\\
2 x_1 + 3 x_2
\end{bmatrix}\\\\
&=
x_1(6 x_1 + 2 x_2) + x_2(2 x_1 + 3 x_2)\\\\
&=
6 x_1^2 + 4 x_1x_2 + 3 x_2^2
\end{align*}
$$
Our quadratic equation is thus $6 x_1^2 + 4 x_1x_2 + 3 x_2^2$.
>### Note
>If $\bs{A}$ is a diagonal matrix (all 0 except the diagonal), the quadratic form of $\bs{x^\text{T}Ax}$ will have no cross term. Take the following matrix form:
>$$
\bs{A}=\begin{bmatrix}
a & b\\\\
c & d
\end{bmatrix}
$$
>If $\bs{A}$ is diagonal, then $b$ and $c$ are $0$. This makes $f(\bs{x}) = ax_1^2 +(b+c)x_1x_2 + dx_2^2$ become $f(\bs{x}) = ax_1^2 + dx_2^2$ as there is no cross term. A quadratic form without a cross term is called diagonal form since it comes from a diagonal matrix.
# Change of variable
A change of variable (or linear substitution) simply means that we replace a variable with another one. This process can be used to remove the cross terms in our quadratic equation. Without the cross term, it becomes easier to characterize the function and eventually optimize it (i.e finding its maximum or minimum).
## With the quadratic form
### Example 6.
Let's take again our previous quadratic form:
$$
\bs{x^\text{T}Ax} = 6 x_1^2 + 4 x_1x_2 + 3 x_2^2
$$
The change of variable will concern $x_1$ and $x_2$. The idea is that we can replace $x_1$ with some combination of $y_1$ and $y_2$, and $x_2$ with another combination of $y_1$ and $y_2$. Although this means we'll of course end up with a new equation, the upshot is that this process will help us find a specific substitution that leads to a simplification of our function. Specifically, this process can be used to get rid of the cross term (in our example: $4 x_1x_2$). We will see later why this is interesting.
Actually, the right substitution is given by the eigenvectors of the matrix used to generate the quadratic form. There's a lot of math theory behind this assertion, so just take it on faith. For now, recall that the matrix form of our equation is:
$$
\bs{x} = \begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix}
$$
and
$$\bs{A}=\begin{bmatrix}
6 & 2\\\\
2 & 3
\end{bmatrix}
$$
and that the eigenvectors of $\bs{A}$ are:
$$
\begin{bmatrix}
0.89442719 & -0.4472136\\\\
0.4472136 & 0.89442719
\end{bmatrix}
$$
For the purpose of simplification, we can replace these values with:
$$
\begin{bmatrix}
\frac{2}{\sqrt{5}} & -\frac{1}{\sqrt{5}}\\\\
\frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}}
\end{bmatrix} =
\frac{1}{\sqrt{5}}
\begin{bmatrix}
2 & -1\\\\
1 & 2
\end{bmatrix}
$$
So our first eigenvector is:
$$
\frac{1}{\sqrt{5}}
\begin{bmatrix}
2\\\\
1
\end{bmatrix}
$$
and our second eigenvector is:
$$
\frac{1}{\sqrt{5}}
\begin{bmatrix}
-1\\\\
2
\end{bmatrix}
$$
The change of variable will lead to:
$$
\begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix} =
\frac{1}{\sqrt{5}}
\begin{bmatrix}
2 & -1\\\\
1 & 2
\end{bmatrix}
\begin{bmatrix}
y_1\\\\
y_2
\end{bmatrix} =
\frac{1}{\sqrt{5}}
\begin{bmatrix}
2y_1 - y_2\\\\
y_1 + 2y_2
\end{bmatrix}
$$
so we have
$$
\begin{cases}
x_1 = \frac{1}{\sqrt{5}}(2y_1 - y_2)\\\\
x_2 = \frac{1}{\sqrt{5}}(y_1 + 2y_2)
\end{cases}
$$
So far so good! Let's replace that in our example:
$$
\begin{align*}
\bs{x^\text{T}Ax}
&=
6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\
&=
6 [\frac{1}{\sqrt{5}}(2y_1 - y_2)]^2 + 4 [\frac{1}{\sqrt{5}}(2y_1 - y_2)\frac{1}{\sqrt{5}}(y_1 + 2y_2)] + 3 [\frac{1}{\sqrt{5}}(y_1 + 2y_2)]^2\\\\
&=
\frac{1}{5}[6 (2y_1 - y_2)^2 + 4 (2y_1 - y_2)(y_1 + 2y_2) + 3 (y_1 + 2y_2)^2]\\\\
&=
\frac{1}{5}[6 (4y_1^2 - 4y_1y_2 + y_2^2) + 4 (2y_1^2 + 4y_1y_2 - y_1y_2 - 2y_2^2) + 3 (y_1^2 + 4y_1y_2 + 4y_2^2)]\\\\
&=
\frac{1}{5}(24y_1^2 - 24y_1y_2 + 6y_2^2 + 8y_1^2 + 16y_1y_2 - 4y_1y_2 - 8y_2^2 + 3y_1^2 + 12y_1y_2 + 12y_2^2)\\\\
&=
\frac{1}{5}(35y_1^2 + 10y_2^2)\\\\
&=
7y_1^2 + 2y_2^2
\end{align*}
$$
**All of that algebra goes to show that we were able to eliminate the cross term from our equation using the eigenvectors of the matrix used to generate the quadratic form.**
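If you would rather not trust the hand algebra, here is a quick symbolic check (a sketch; it assumes the sympy package, which is not used elsewhere in this lesson):
```
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
x1 = (2*y1 - y2) / sp.sqrt(5)
x2 = (y1 + 2*y2) / sp.sqrt(5)
print(sp.expand(6*x1**2 + 4*x1*x2 + 3*x2**2))  # 7*y1**2 + 2*y2**2
```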
## The Principal Axes Theorem
I know you probably have a headache right now, so let's look at a simpler way to do the change of variable. We still stay in the matrix form, starting with
$$
f(\bs{x})=\bs{x^\text{T}Ax}
$$
Our linear substitution can be written in different terms, where we replace the variables $\bs{x}$ by $\bs{y}$ given this relationship:
$$
\bs{x}=P\bs{y}
$$
We want to find $P$ such that our new equation (after the change of variable) doesn't contain the cross terms.
The first step is to make the substitution in the first equation:
$$
\begin{align*}
\bs{x^\text{T}Ax}
&=
(\bs{Py})^\text{T}\bs{A}(\bs{Py})\\\\
&=
\bs{y}^\text{T}(\bs{P}^\text{T}\bs{AP})\bs{y}
\end{align*}
$$
We can see that the substitution is done by replacing $\bs{A}$ with $\bs{P^\text{T}AP}$. Remember from example 3 that $\bs{A} = \bs{Q\Lambda} \bs{Q}^\text{T}$ ($\bs{\Lambda}$ is the matrix with the eigenvalues of $\bs{A}$ on its diagonal). If we choose $\bs{P}=\bs{Q}$, the matrix whose columns are the eigenvectors of $\bs{A}$, we get:
$$
\begin{align*}
\bs{P^\text{T}AP}
&=
\bs{Q^\text{T}Q\Lambda Q^\text{T}Q}\\\\
&=
(\bs{Q^\text{T}Q})\bs{\Lambda}(\bs{Q^\text{T}Q})\\\\
&=
\bs{\Lambda}
\end{align*}
$$
Since $\bs{Q}$ is orthogonal, $\bs{Q^\text{T}Q}=\bs{I}$, so both factors disappear and only $\bs{\Lambda}$ remains. We finally have:
$$
\bs{x^\text{T}Ax}=\bs{y^\text{T}\Lambda y}
$$
**All of this goes to show that we can use $\bs{\Lambda}$ to simplify our quadratic equation and remove the cross terms.** If you remember from example 3, the eigenvalues of $\bs{A}$ are:
$$
\bs{\Lambda}=
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}
$$
$$
\begin{align*}
\bs{x^\text{T}Ax}
&=
\bs{y^\text{T}\Lambda y}\\\\
&=
\bs{y}^\text{T}
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}
\bs{y}\\\\
&=
\begin{bmatrix}
y_1 & y_2
\end{bmatrix}
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}
\begin{bmatrix}
y_1\\\\
y_2
\end{bmatrix}\\\\
&=
\begin{bmatrix}
7y_1 +0y_2 & 0y_1 + 2y_2
\end{bmatrix}
\begin{bmatrix}
y_1\\\\
y_2
\end{bmatrix}\\\\
&=
7y_1^2 + 2y_2^2
\end{align*}
$$
And we found the same values but in fewer steps!
This form (without cross-term) is called the **principal axes form**.
### Summary
To summarize, the principal axes form can be found with
$$
\bs{x^\text{T}Ax} = \lambda_1y_1^2 + \lambda_2y_2^2
$$
where $\lambda_1$ is the eigenvalue corresponding to the first eigenvector (first column of $\bs{Q}$) and $\lambda_2$ the eigenvalue corresponding to the second eigenvector (second column of $\bs{Q}$).
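A quick numeric check of this form with numpy (a sketch: we take $\bs{P}=\bs{Q}$ from `np.linalg.eig` and an arbitrary $\bs{y}$):
```
A = np.array([[6, 2], [2, 3]])
eigVals, Q = np.linalg.eig(A)    # eigVals is [7., 2.] for this matrix
y = np.array([[0.3], [0.7]])     # any y
x = Q.dot(y)                     # change of variable x = Qy
print(x.T.dot(A).dot(x))                         # x^T A x
print(eigVals[0]*y[0]**2 + eigVals[1]*y[1]**2)   # lambda_1 y_1^2 + lambda_2 y_2^2
```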
# Finding f(x) with eigendecomposition
We will now see that there is a way to find $f(\bs{x})$ with eigenvectors and eigenvalues when $\bs{x}$ is a unit vector.
Let's start from:
$$
f(\bs{x}) =\bs{x^\text{T}Ax}
$$
We know that if $\bs{x}$ is an eigenvector of $\bs{A}$ and $\lambda$ the corresponding eigenvalue, then $
\bs{Ax}=\lambda \bs{x}
$. By replacing the term in the last equation we have:
$$
f(\bs{x}) =\bs{x^\text{T}\lambda x} = \bs{x^\text{T}x}\lambda
$$
Since $\bs{x}$ is a unit vector, $\norm{\bs{x}}_2=1$ and $\bs{x^\text{T}x}=1$. Hence we end up with:
$$
f(\bs{x}) = \lambda
$$
This is a useful property. If $\bs{x}$ is an eigenvector of $\bs{A}$, then $f(\bs{x}) =\bs{x^\text{T}Ax}$ takes the value of the corresponding eigenvalue. We can see that this works only if the Euclidean norm of $\bs{x}$ is 1 (i.e., $\bs{x}$ is a unit vector).
### Example 7
This example will show that $f(\bs{x}) = \lambda$. Let's take the last example again; the eigenvectors of $\bs{A}$ were
$$
\bs{Q}=
\begin{bmatrix}
0.89442719 & -0.4472136\\\\
0.4472136 & 0.89442719
\end{bmatrix}
$$
and the eigenvalues
$$
\bs{\Lambda}=
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}
$$
So if:
$$
\bs{x}=\begin{bmatrix}
0.89442719 & 0.4472136
\end{bmatrix}
$$
$f(\bs{x})$ should be equal to 7. Let's check that's true.
$$
\begin{align*}
f(\bs{x}) &= 6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\
&= 6\times 0.89442719^2 + 4\times 0.89442719\times 0.4472136 + 3 \times 0.4472136^2\\\\
&= 7
\end{align*}
$$
In the same way, if $\bs{x}=\begin{bmatrix}
-0.4472136 & 0.89442719
\end{bmatrix}$, $f(\bs{x})$ should be equal to 2.
$$
\begin{align*}
f(\bs{x}) &= 6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\
&= 6\times (-0.4472136)^2 + 4\times (-0.4472136)\times 0.89442719 + 3 \times 0.89442719^2\\\\
&= 2
\end{align*}
$$
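Again, a quick numpy confirmation (a sketch using the eigenvectors returned by `np.linalg.eig`, which are already unit vectors):
```
A = np.array([[6, 2], [2, 3]])
eigVals, Q = np.linalg.eig(A)
for i in range(2):
    x = Q[:, i]                          # unit eigenvector
    print(x.dot(A).dot(x), eigVals[i])   # f(x) equals the corresponding eigenvalue
```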
# Quadratic form optimization
Depending on the context, optimizing a function means finding its maximum or its minimum. This process is widely used to minimize the error of cost functions in machine learning.
Here we will see how eigendecomposition can be used to optimize quadratic functions and why this is easier without cross terms. The difficulty is that we want a constrained optimization, that is, to find the minimum or the maximum of the function subject to $\bs{x}$ being a unit vector.
### Example 8.
We want to optimize:
$$
f(\bs{x}) =\bs{x^\text{T}Ax} \textrm{ subject to }||\bs{x}||_2= 1
$$
In our last example we ended up with:
$$
f(\bs{x}) = 7y_1^2 + 2y_2^2
$$
And the constraint of $\bs{x}$ being a unit vector implies:
$$
||\bs{x}||_2 = 1 \Leftrightarrow x_1^2 + x_2^2 = 1
$$
We can also show that $\bs{y}$ has to be a unit vector if it is the case for $\bs{x}$. Recall first that $\bs{x}=\bs{Py}$:
$$
\begin{align*}
||\bs{x}||^2 &= \bs{x^\text{T}x}\\\\
&= (\bs{Py})^\text{T}(\bs{Py})\\\\
&= \bs{y^\text{T}P^\text{T}Py}\\\\
&= \bs{y^\text{T}Iy}\\\\
&= \bs{y^\text{T}y} = ||\bs{y}||^2
\end{align*}
$$
Since $\bs{P}$ is orthogonal, $\bs{P^\text{T}P}=\bs{I}$, so $\norm{\bs{x}}^2 = \norm{\bs{y}}^2 = 1$ and thus $y_1^2 + y_2^2 = 1$
Since $y_1^2$ and $y_2^2$ cannot be negative because they are squared values, we can be sure that $2y_2^2\leq7y_2^2$. Hence:
$$
\begin{align*}
f(\bs{x}) &= 7y_1^2 + 2y_2^2\\\\
&\leq
7y_1^2 + 7y_2^2\\\\
&=
7(y_1^2+y_2^2)\\\\
&=
7
\end{align*}
$$
This means that the maximum value of $f(\bs{x})$ is 7.
The same method can help us find the minimum of $f(\bs{x})$. $7y_1^2\geq2y_1^2$ and:
$$
\begin{align*}
f(\bs{x}) &= 7y_1^2 + 2y_2^2\\\\
&\geq
2y_1^2 + 2y_2^2\\\\
&=
2(y_1^2+y_2^2)\\\\
&=
2
\end{align*}
$$
And the minimum of $f(\bs{x})$ is 2.
### Summary
The minimum of $f(\bs{x})$ is the smallest eigenvalue of the corresponding matrix $\bs{A}$, and the maximum is the largest eigenvalue. Another useful fact is that each of these values is obtained when $\bs{x}$ takes the value of the corresponding eigenvector. In that way, $f(\bs{x})=7$ (the maximum) when $\bs{x}=\begin{bmatrix}0.89442719 & 0.4472136\end{bmatrix}$. This shows how useful eigenvalues and eigenvectors are when it comes to constrained optimization.
## Graphical views
We saw that the quadratic functions $f(\bs{x}) = ax_1^2 +2bx_1x_2 + cx_2^2$ can be represented by the symmetric matrix $\bs{A}$:
$$
\bs{A}=\begin{bmatrix}
a & b\\\\
b & c
\end{bmatrix}
$$
Graphically, these functions can take one of three general shapes (click on the links to go to the Surface Plotter and move the shapes):
1.[Positive-definite form](https://academo.org/demos/3d-surface-plotter/?expression=x*x%2By*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=49) | 2.[Negative-definite form](https://academo.org/demos/3d-surface-plotter/?expression=-x*x-y*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=25) | 3.[Indefinite form](https://academo.org/demos/3d-surface-plotter/?expression=x*x-y*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=49)
:-------------------------:|:-------------------------:|:-------:
<img src="images/quadratic-functions-positive-definite-form.png" alt="Quadratic function with a positive definite form" title="Quadratic function with a positive definite form"> | <img src="images/quadratic-functions-negative-definite-form.png" alt="Quadratic function with a negative definite form" title="Quadratic function with a negative definite form"> | <img src="images/quadratic-functions-indefinite-form.png" alt="Quadratic function with a indefinite form" title="Quadratic function with a indefinite form">
With the constraint that $\bs{x}$ is a unit vector, the minimum of the function $f(\bs{x})$ corresponds to the smallest eigenvalue and is obtained with its corresponding eigenvector. The maximum corresponds to the biggest eigenvalue and is obtained with its corresponding eigenvector.
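We can illustrate this constrained optimization numerically by evaluating $f(\bs{x})$ on many unit vectors (a sketch):
```
theta = np.linspace(0, 2*np.pi, 1000)
units = np.stack([np.cos(theta), np.sin(theta)])   # unit vectors as columns
A = np.array([[6, 2], [2, 3]])
f_values = np.sum(units * A.dot(units), axis=0)    # x^T A x for every column
print(f_values.min(), f_values.max())              # close to 2 and 7
```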
# Conclusion
We saw a whole bunch of important and complex stuff in this lesson, including how eigendecomposition can be used to solve a variety of mathematical problems, such as finding the minimum or maximum of a quadratic function.
Sadly, we cannot use eigendecomposition for non-square matrices. In the next lesson, we will tackle Singular Value Decomposition (SVD), which is another method for decomposing matrices. The advantage of the SVD is that you can use it with non-square matrices.
# BONUS: visualizing linear transformations
We can use python's plotting functionality to see the effect of eigenvectors and eigenvalues in some common linear transformations like projection and rotation.
>Remember, every linear transformation can be thought of as applying a matrix to an input vector.
Let's start by drawing the set of unit vectors (they are all vectors with a norm of 1). This is called a [unit circle](https://en.wikipedia.org/wiki/Unit_circle).
```
t = np.linspace(0, 2*np.pi, 100)
x = np.cos(t)
y = np.sin(t)
plt.figure()
plt.plot(x, y)
plt.xlim(-1.5, 1.5)
plt.ylim(-1.5, 1.5)
```
Let's now transform each of these points by applying a matrix $\bs{A}$. This is the goal of the function below, which takes a matrix as input then draws:
- the original set of unit vectors
- the transformed set of unit vectors
- the eigenvectors
- the eigenvectors scaled by their eigenvalues
```
def linearTransformation(transformMatrix):
orange = '#FF9A13'
blue = '#1190FF'
# Create original set of unit vectors
t = np.linspace(0, 2*np.pi, 100)
x = np.cos(t)
y = np.sin(t)
# Calculate eigenvectors and eigenvalues
eigVecs = np.linalg.eig(transformMatrix)[1]
eigVals = np.diag(np.linalg.eig(transformMatrix)[0])
# Create vectors of 0 to store new transformed values
newX = np.zeros(len(x))
newY = np.zeros(len(x))
for i in range(len(x)):
unitVector_i = np.array([x[i], y[i]])
# Apply the matrix to the vector
newXY = transformMatrix.dot(unitVector_i)
newX[i] = newXY[0]
newY[i] = newXY[1]
plotVectors([eigVecs[:,0], eigVecs[:,1]],
cols=[blue, blue])
plt.plot(x, y)
plotVectors([eigVals[0,0]*eigVecs[:,0], eigVals[1,1]*eigVecs[:,1]],
cols=[orange, orange])
plt.plot(newX, newY)
plt.xlim(-5, 5)
plt.ylim(-5, 5)
plt.show()
A = np.array([[1,-1], [-1, 4]])
linearTransformation(A)
```
We can see the unit circle in dark blue, the non-scaled eigenvectors in light blue, the transformed unit circle in green, and the scaled eigenvectors in orange.
It is worth noting that the eigenvectors are orthogonal here because the matrix is symmetric.
Let's try with a non-symmetric matrix:
```
A = np.array([[1,1], [-1, 4]])
linearTransformation(A)
```
In this case, the eigenvectors are not orthogonal!
# References
## Videos of Gilbert Strang
- [Gilbert Strang, Lec21 MIT - Eigenvalues and eigenvectors](https://www.youtube.com/watch?v=lXNXrLcoerU)
## Quadratic forms
- [David Lay, University of Colorado, Denver](http://math.ucdenver.edu/~esulliva/LinearAlgebra/SlideShows/07_02.pdf)
- [math.stackexchange QA](https://math.stackexchange.com/questions/2207111/eigendecomposition-optimization-of-quadratic-expressions)
## Eigenvectors
- [Victor Powell and Lewis Lehe - Interactive representation of eigenvectors](http://setosa.io/ev/eigenvectors-and-eigenvalues/)
## Linear transformations
- [Gilbert Strang - Linear transformation](http://ia802205.us.archive.org/18/items/MIT18.06S05_MP4/30.mp4)
- [Linear transformation - demo video](https://www.youtube.com/watch?v=wXCRcnbCsJA)
# Classification and Prediction in GenePattern Notebook
This notebook will show you how to use k-Nearest Neighbors (kNN) to build a predictor, use it to classify leukemia subtypes, and assess its accuracy in cross-validation.
### K-nearest-neighbors (KNN)
KNN classifies an unknown sample by assigning it the phenotype label most frequently represented among the k nearest known samples (Golub and Slonim et al., 1999).
Additionally, you can select a weighting factor for the 'votes' of the nearest neighbors. For example, one might weight each vote by the reciprocal of the distance to that neighbor, giving closer neighbors a greater say.
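To make the voting idea concrete, here is a toy distance-weighted kNN classifier in plain numpy. It is only a sketch of the concept (the sample data and function name are made up) and is not how the GenePattern module is implemented.
```
import numpy as np

def knn_predict(sample, train_X, train_y, k=3):
    """Toy distance-weighted kNN vote (illustration only)."""
    dists = np.linalg.norm(train_X - sample, axis=1)   # distance to every known sample
    nearest = np.argsort(dists)[:k]                    # indices of the k closest samples
    votes = {}
    for idx in nearest:
        weight = 1.0 / (dists[idx] + 1e-9)             # reciprocal-distance weighting
        votes[train_y[idx]] = votes.get(train_y[idx], 0.0) + weight
    return max(votes, key=votes.get)                   # label with the largest weighted vote

train_X = np.array([[0., 0.], [0., 1.], [5., 5.]])
train_y = np.array(['ALL', 'ALL', 'AML'])
print(knn_predict(np.array([0.2, 0.4]), train_X, train_y, k=3))  # 'ALL'
```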
## 1. Log in to GenePattern
<div class="alert alert-info">
<ul>
<li>Enter your username and password.
<li>Click Run.
<li>When you are logged in, you can click the - button in the upper right hand corner to collapse the cell.
</ul>
</div>
```
# Requires GenePattern Notebook: pip install genepattern-notebook
import gp
import genepattern
# Username and password removed for security reasons.
genepattern.GPAuthWidget(genepattern.register_session("https://genepattern.broadinstitute.org/gp", "", ""))
```
## 2. Preprocess gene expression data
- The PreprocessDataset module allows you to remove uninformative genes. These are genes whose values do not vary more than a certain amount between the two classes being compared.
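To get a feel for what such a variation filter does, here is a rough pandas sketch on a made-up expression table; the thresholds mirror the `min.fold.change` and `min.delta` parameters used below, but the exact PreprocessDataset logic may differ.
```
import pandas as pd

# Hypothetical expression table: rows are genes, columns are samples
expr = pd.DataFrame({'sample1': [20, 500, 20000],
                     'sample2': [25, 40, 300]},
                    index=['geneA', 'geneB', 'geneC'])

# Keep genes whose max/min fold change >= 3 and whose max - min difference >= 100,
# echoing the min.fold.change / min.delta parameters passed to PreprocessDataset
fold_change = expr.max(axis=1) / expr.min(axis=1)
delta = expr.max(axis=1) - expr.min(axis=1)
print(expr[(fold_change >= 3) & (delta >= 100)])  # geneA is dropped as uninformative
```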
<div class="alert alert-info"><ul>
<li>Click Run in the GenePattern cell below to launch the PreprocessDataset module.</li>
<li>When the job is complete, the status in the upper right corner of the cell will display "Complete".</li></ul>
</div>
```
preprocessdataset_task = gp.GPTask(genepattern.get_session(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00020')
preprocessdataset_job_spec = preprocessdataset_task.make_job_spec()
preprocessdataset_job_spec.set_parameter("input.filename", "https://datasets.genepattern.org/data/all_aml/all_aml_train.gct")
preprocessdataset_job_spec.set_parameter("threshold.and.filter", "1")
preprocessdataset_job_spec.set_parameter("floor", "20")
preprocessdataset_job_spec.set_parameter("ceiling", "20000")
preprocessdataset_job_spec.set_parameter("min.fold.change", "3")
preprocessdataset_job_spec.set_parameter("min.delta", "100")
preprocessdataset_job_spec.set_parameter("num.outliers.to.exclude", "0")
preprocessdataset_job_spec.set_parameter("row.normalization", "0")
preprocessdataset_job_spec.set_parameter("row.sampling.rate", "1")
preprocessdataset_job_spec.set_parameter("threshold.for.removing.rows", "")
preprocessdataset_job_spec.set_parameter("number.of.columns.above.threshold", "")
preprocessdataset_job_spec.set_parameter("log2.transform", "0")
preprocessdataset_job_spec.set_parameter("output.file.format", "3")
preprocessdataset_job_spec.set_parameter("output.file", "<input.filename_basename>.preprocessed")
genepattern.GPTaskWidget(preprocessdataset_task)
```
## 3. Run k-Nearest Neighbors Cross Validation
In the result cell for the PreprocessDataset job, you will see 2 files.
<div class="alert alert-info">Click the "i" icon next to the all_aml_train.preprocessed.gct file.</div>
You will see a dialog box with several options.
<div class="alert alert-info"><ul>
<li>Select "Send to Existing GenePattern Cell"</li>
<li>Choose "KNNXvalidation"</li>
<li>Click Run.</li>
</ul></div>
```
knnxvalidation_task = gp.GPTask(genepattern.get_session(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00013')
knnxvalidation_job_spec = knnxvalidation_task.make_job_spec()
knnxvalidation_job_spec.set_parameter("data.filename", "")
knnxvalidation_job_spec.set_parameter("class.filename", "https://datasets.genepattern.org/data/all_aml/all_aml_train.cls")
knnxvalidation_job_spec.set_parameter("num.features", "10")
knnxvalidation_job_spec.set_parameter("feature.selection.statistic", "0")
knnxvalidation_job_spec.set_parameter("min.std", "")
knnxvalidation_job_spec.set_parameter("num.neighbors", "3")
knnxvalidation_job_spec.set_parameter("weighting.type", "1")
knnxvalidation_job_spec.set_parameter("distance.measure", "1")
knnxvalidation_job_spec.set_parameter("pred.results.file", "<data.filename_basename>.pred.odf")
knnxvalidation_job_spec.set_parameter("feature.summary.file", "<data.filename_basename>.feat.odf")
genepattern.GPTaskWidget(knnxvalidation_task)
```
## 4. View prediction results
### a. Read the results into a dataframe
<div class="alert alert-info"><ul>
<li>Select the cell containing the job result by clicking anywhere in it.</li>
<li>Click on the i icon next to the `all_aml_train.preprocessed.pred.odf` file</li>
<li>Select "Send to DataFrame"</li>
<li>You will see a new cell that contains 3 lines of code starting with `from gp.data import ODF`</li>
<li>Execute this cell</li>
<li>You will see the prediction results as a table.</li>
</ul></div>
### b. View prediction results as a bar chart
- Execute the following cell.
You will see a bar graph of class predictions.
- Direction of a bar indicates which class was predicted
- Length of a bar indicates the confidence level
- Blue = correct prediction
- Red = incorrect prediction
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# _ is a reference to the result of the last executed cell
df = _
result_bars = list()
bar_colors = list()
tick_labels = list()
data_rows = df.row_count()
class_labels = list((set(df.dataframe["True Class"])))
confidence = pd.to_numeric(df.dataframe["Confidence"])
for i in range(0, data_rows):
tick_labels.append(df.dataframe["Samples"][i])
if df.dataframe["Predicted Class"][i] == class_labels[1]:
result_bars.append(confidence[i])
else:
result_bars.append(-confidence[i])
if df.dataframe["Correct?"][i]:
bar_colors.append('b')
else:
bar_colors.append('r')
ind = np.arange(data_rows) # the x locations for the groups
width = 0.8
if hasattr(plt, 'style'):
plt.style.use('ggplot')
fig = plt.figure(figsize=(16,12))
ax = fig.add_subplot(111)
# Set figure and axis titles
plt.title(class_labels[0]+" versus "+class_labels[1]+" Prediction Results")
plt.xlabel("Sample Name")
plt.ylabel("Confidence value")
plt.axis([0,data_rows,-1.25,1.25])
plt.text(0.2, -1.15, "Predicted " + class_labels[0])
plt.text(data_rows, 1.05, "Predicted " + class_labels[1], horizontalalignment='right')
plt.grid(True)
# Plot bar chart of predicted classes
rects1 = ax.bar(ind, result_bars, color=bar_colors, width=width)
tick_locs, tick_xlabels = plt.xticks()
plt.setp(tick_xlabels, rotation=50)
plt.show()
```
## References
Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. 1984. [Classification and regression trees](https://www.amazon.com/Classification-Regression-Wadsworth-Statistics-Probability/dp/0412048418?ie=UTF8&*Version*=1&*entries*=0). Wadsworth & Brooks/Cole Advanced Books & Software, Monterey, CA.
Golub, T.R., Slonim, D.K., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov, J.P., Coller, H., Loh, M., Downing, J.R., Caligiuri, M.A., Bloomfield, C.D., and Lander, E.S. 1999. Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression. [Science 286:531-537](http://science.sciencemag.org/content/286/5439/531.long).
Lu, J., Getz, G., Miska, E.A., Alvarez-Saavedra, E., Lamb, J., Peck, D., Sweet-Cordero, A., Ebert, B.L., Mak, R.H., Ferrando, A.A, Downing, J.R., Jacks, T., Horvitz, H.R., Golub, T.R. 2005. MicroRNA expression profiles classify human cancers. [Nature 435:834-838](http://www.nature.com/nature/journal/v435/n7043/full/nature03702.html).
Rifkin, R., Mukherjee, S., Tamayo, P., Ramaswamy, S., Yeang, C-H, Angelo, M., Reich, M., Poggio, T., Lander, E.S., Golub, T.R., Mesirov, J.P. 2003. An Analytical Method for Multiclass Molecular Cancer Classification. [SIAM Review 45(4):706-723](http://epubs.siam.org/doi/abs/10.1137/S0036144502411986).
Slonim, D.K., Tamayo, P., Mesirov, J.P., Golub, T.R., Lander, E.S. 2000. Class prediction and discovery using gene expression data. In [Proceedings of the Fourth Annual International Conference on Computational Molecular Biology (RECOMB)](http://dl.acm.org/citation.cfm?id=332564). ACM Press, New York. pp. 263-272.
```
import yaml
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow_tts.inference import AutoConfig
from tensorflow_tts.inference import TFAutoModel
from tensorflow_tts.inference import AutoProcessor
import IPython.display as ipd
processor = AutoProcessor.from_pretrained("../tensorflow_tts/processor/pretrained/ljspeech_mapper.json")
input_text = "i love you so much."
input_ids = processor.text_to_sequence(input_text)
config = AutoConfig.from_pretrained("../examples/tacotron2/conf/tacotron2.v1.yaml")
tacotron2 = TFAutoModel.from_pretrained(
config=config,
pretrained_path=None,
is_build=True,
name="tacotron2"
)
tacotron2.setup_window(win_front=6, win_back=6)
tacotron2.setup_maximum_iterations(3000)
```
# Save to Pb
```
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
)
tacotron2.load_weights("../examples/tacotron2/checkpoints/model-120000.h5")
# save model into pb and do inference. Note that signatures should be a tf.function with input_signatures.
tf.saved_model.save(tacotron2, "./test_saved", signatures=tacotron2.inference)
```
# Load and Inference
```
tacotron2 = tf.saved_model.load("./test_saved")
input_text = "Unless you work on a ship, it's unlikely that you use the word boatswain in everyday conversation, so it's understandably a tricky one. The word - which refers to a petty officer in charge of hull maintenance is not pronounced boats-wain Rather, it's bo-sun to reflect the salty pronunciation of sailors, as The Free Dictionary explains."
input_ids = processor.text_to_sequence(input_text)
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
tf.convert_to_tensor([len(input_ids)], tf.int32),
tf.convert_to_tensor([0], dtype=tf.int32)
)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
ax.set_title(f'Alignment steps')
im = ax.imshow(
alignment_history[0].numpy(),
aspect='auto',
origin='lower',
interpolation='none')
fig.colorbar(im, ax=ax)
xlabel = 'Decoder timestep'
plt.xlabel(xlabel)
plt.ylabel('Encoder timestep')
plt.tight_layout()
plt.show()
plt.close()
mel_outputs = tf.reshape(mel_outputs, [-1, 80]).numpy()
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title(f'Predicted Mel-after-Spectrogram')
im = ax1.imshow(np.rot90(mel_outputs), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
plt.close()
```
# Run inference on another input to check dynamic shapes
```
input_text = "The Commission further recommends that the Secret Service coordinate its planning as closely as possible with all of the Federal agencies from which it receives information."
input_ids = processor.text_to_sequence(input_text)
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
tf.convert_to_tensor([len(input_ids)], tf.int32),
tf.convert_to_tensor([0], dtype=tf.int32),
)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
ax.set_title(f'Alignment steps')
im = ax.imshow(
alignment_history[0].numpy(),
aspect='auto',
origin='lower',
interpolation='none')
fig.colorbar(im, ax=ax)
xlabel = 'Decoder timestep'
plt.xlabel(xlabel)
plt.ylabel('Encoder timestep')
plt.tight_layout()
plt.show()
plt.close()
mel_outputs = tf.reshape(mel_outputs, [-1, 80]).numpy()
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title(f'Predicted Mel-after-Spectrogram')
im = ax1.imshow(np.rot90(mel_outputs), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
plt.close()
```
# HTTP Request/Response Cycle - Codealong
## Introduction
When developing a Web application, as we saw in the previous lesson, the request/response cycle is a useful guide to see how all the components of the app fit together. The request/response cycle traces how a user's request flows through the app. Understanding the request/response cycle is helpful to figure out which files to edit when developing an app (and where to look when things aren't working). This lesson will show how this setup works using python.
## Objectives
You will be able to:
* Explain the HTTP request/response cycle
* List the status codes of responses and their meanings
* Obtain and interpret status codes from responses
* Make HTTP GET and POST requests in python using the `requests` library
## The `requests` Library in Python
Dealing with HTTP requests can be a challenging task in any programming language. Python ships with two built-in modules, `urllib` and `urllib2`, to handle these requests, but they can be confusing and their documentation is not clear, so even a simple HTTP request takes a lot of code.
To make things simpler, an easy-to-use third-party library known as `requests` is available, and most developers prefer it over urllib/urllib2. It is an Apache2-licensed HTTP library powered by urllib3 and httplib. With it, you can send HTTP requests and access content like web page headers, form data, files, and parameters via simple Python commands, and you can read the response data just as easily.

Below is how you would install and import the requests library before making any requests.
```python
# Uncomment and install requests if you don't have it already
# !pip install requests
# Import requests to working environment
import requests
```
```
# Code here
import requests
```
## The `.get()` Method
Now we have requests library ready in our working environment, we can start making some requests using the `.get()` method as shown below:
```python
### Making a request
resp = requests.get('https://www.google.com')
```
```
# Code here
resp = requests.get('https://www.google.com')
```
GET is by far the most used HTTP method. We can use a GET request to retrieve data from any destination.
## Status Codes
The request we make may not always be successful. The best way to find out is to check the status code that comes back with the response. Here is how you would do this:
```python
# Check the returned status code
resp.status_code == requests.codes.ok
```
```
# Code here
resp.status_code == requests.codes.ok
```
So this is a good check to see if our request was successful. Depending on the status of the web server, the access rights of the client, and the availability of the requested information, a web server may return any of a number of status codes within the response. Wikipedia has an exhaustive list of these codes. [Check them out here](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes).
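For example, `httpbin.org` (which we will use more below) can return any status code on demand, which makes it easy to see what an unsuccessful request looks like. A small sketch:

```python
# Ask httpbin to respond with a 404 "Not Found" status
resp_404 = requests.get('http://httpbin.org/status/404')
print(resp_404.status_code)                       # 404
print(resp_404.status_code == requests.codes.ok)  # False

# raise_for_status() turns 4xx/5xx responses into an exception you can handle
try:
    resp_404.raise_for_status()
except requests.exceptions.HTTPError as err:
    print(err)
```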
## Response Contents
Once we know that our request was successful and we have a valid response, we can check the returned information using `.text` property of the response object.
```python
print (resp.text)
```
```
# Code here
print(resp.text)
```
So this returns a lot of information which by default is not really human-understandable due to data encoding, HTML tags and other styling information that only a web browser can truly translate. In later lessons, we'll learn how we can use **_Regular Expressions_** to clean this information and extract the required bits and pieces for analysis.
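As a small preview of that, a single regular expression can already pull something readable out of the raw HTML, such as the page title (a sketch, assuming `resp` still holds the Google response from above):

```python
import re

match = re.search(r'<title>(.*?)</title>', resp.text, re.IGNORECASE | re.DOTALL)
if match:
    print(match.group(1))  # e.g. 'Google'
```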
## Response Headers
The response to an HTTP request can contain many headers, each holding a different bit of information. We can use the `.headers` property of the response object to access the header information as shown below:
```python
# Read the header of the response - convert to dictionary for displaying k:v pairs neatly
dict(resp.headers)
```
```
# Code here
dict(resp.headers)
```
The headers hold the information we are after. You can see the key-value pairs carrying various pieces of information about the resource and the request. Let's parse some of these values using the requests library:
```python
print(resp.headers['Date']) # Date the response was sent
print(resp.headers['server']) # Server type (google web service - GWS)
```
```
# Code here
print(resp.headers['Date']) # Date the response was sent
print(resp.headers['server']) # Server type (google web service - GWS)
```
## Try `httpbin`
`httpbin.org` is a popular website for testing different HTTP operations and practicing request/response cycles. Let's use `httpbin.org/get` to analyze the response to a GET request. First of all, let's inspect the response and see what it looks like.
```python
r = requests.get('http://httpbin.org/get')
response = r.json()
print(r.json())
print(response['args'])
print(response['headers'])
print(response['headers']['Accept'])
print(response['headers']['Accept-Encoding'])
print(response['headers']['Host'])
print(response['headers']['User-Agent'])
print(response['origin'])
print(response['url'])
```
```
# Code here
r = requests.get('http://httpbin.org/get')
response = r.json()
print(r.json())
print(response['args'])
print(response['headers'])
print(response['headers']['Accept'])
print(response['headers']['Accept-Encoding'])
print(response['headers']['Host'])
print(response['headers']['User-Agent'])
print(response['origin'])
print(response['url'])
```
Let's use the `requests` response object to parse the header values, as we did above.
```python
print(r.headers['Access-Control-Allow-Credentials'])
print(r.headers['Access-Control-Allow-Origin'])
print(r.headers['CONNECTION'])
print(r.headers['content-length'])
print(r.headers['Content-Type'])
print(r.headers['Date'])
print(r.headers['server'])
```
```
# Code here
print(r.headers['Access-Control-Allow-Credentials'])
print(r.headers['Access-Control-Allow-Origin'])
print(r.headers['CONNECTION'])
print(r.headers['content-length'])
print(r.headers['Content-Type'])
print(r.headers['Date'])
print(r.headers['server'])
```
## Passing Parameters in GET
In some cases, you'll need to pass parameters along with your GET requests. These extra parameters usually take the form of query strings added to the requested URL. To do this, we need to pass these values in the `params` parameter. Let's try to access information from `httpbin` with some user information.
Note: The user information is not getting authenticated at `httpbin` so any name/password will work fine. This is merely for practice.
```python
credentials = {'user_name': 'FlatironSchool', 'password': 'learnlovecode'}
r = requests.get('http://httpbin.org/get', params=credentials)
print(r.url)
print(r.text)
```
```
# Code here
credentials = {'user_name': 'FlatironSchool', 'password': 'learnlovecode'}
r = requests.get('http://httpbin.org/get', params=credentials)
print(r.url)
print(r.text)
```
## HTTP POST method
Sometimes we need to send one or more files to the server at the same time, for example when a user submits a form that includes several file-upload fields (profile picture, resume, etc.). Requests can handle multiple files in a single request. This is achieved by putting the files into a list of tuples of the form (`field_name`, `file_info`).
```python
import requests
url = 'http://httpbin.org/post'
file_list = [
('image', ('fi.png', open('images/fi.png', 'rb'), 'image/png')),
('image', ('fi2.jpeg', open('images/fi2.jpeg', 'rb'), 'image/png'))
]
r = requests.post(url, files=file_list)
print(r.text)
```
```
# Code here
url = 'http://httpbin.org/post'
file_list = [
('image', ('fi.png', open('images/fi.png', 'rb'), 'image/png')),
('image', ('fi2.jpeg', open('images/fi2.jpeg', 'rb'), 'image/png'))
]
r = requests.post(url, files=file_list)
print(r.text)
```
This was a brief introduction to how you would send requests and get responses from a web server, while totally avoiding the web browser interface. Later we'll see how we can pick up the required data elements from the contents of the web page for analytical purposes.
## Summary
In this lesson, we provided an introduction to the `requests` library in python. We saw how to use the get method to send requests to web servers, check server status, look at the header elements of a web page and how to send extra parameters like user information.
# Single Qubit Gates
In the previous section we looked at all the possible states a qubit could be in. We saw that qubits could be represented by 2D vectors, and that their states are limited to the form:
$$ |q\rangle = \cos{(\tfrac{\theta}{2})}|0\rangle + e^{i\phi}\sin{\tfrac{\theta}{2}}|1\rangle $$
Where $\theta$ and $\phi$ are real numbers. In this section we will cover _gates,_ the operations that change a qubit between these states. Due to the number of gates and the similarities between them, this chapter is at risk of becoming a list. To counter this, we have included a few digressions to introduce important ideas at appropriate places throughout the chapter.
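For reference, a state written in this form can be turned into a concrete statevector with a couple of lines of numpy; here is a minimal sketch (the function name is ours, not part of any library):

```
import numpy as np

def qubit_state(theta, phi):
    """Return cos(theta/2)|0> + e^(i*phi) sin(theta/2)|1> as a 2D vector."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

print(qubit_state(np.pi / 2, 0))  # equal superposition: [0.707..., 0.707...]
```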
## Contents
1. [The Pauli Gates](#pauli)
1.1 [The X-Gate](#xgate)
1.2 [The Y & Z-Gates](#ynzgatez)
2. [Digression: The X, Y & Z-Bases](#xyzbases)
3. [The Hadamard Gate](#hgate)
4. [Digression: Measuring in Different Bases](#measuring)
5. [The R<sub>ϕ</sub>-gate](#rzgate)
6. [The I, S and T-gates](#istgates)
6.1 [The I-Gate](#igate)
6.2 [The S-Gate](#sgate)
6.3 [The T-Gate](#tgate)
7. [The General U<sub>3</sub>-gate](#generalU3)
In _The Atoms of Computation_ we came across some gates and used them to perform a classical computation. An important feature of quantum circuits is that, between initialising the qubits and measuring them, the operations (gates) are _always_ reversible! These reversible gates can be represented as matrices, and as rotations around the Bloch sphere.
```
from qiskit import *
from math import pi
from qiskit.visualization import plot_bloch_multivector
```
## 1. The Pauli Gates <a id="pauli"></a>
You should be familiar with the Pauli matrices from the linear algebra section. If any of the maths here is new to you, you should use the linear algebra section to bring yourself up to speed. We will see here that the Pauli matrices can represent some very commonly used quantum gates.
### 1.1 The X-Gate <a id="xgate"></a>
The X-gate is represented by the Pauli-X matrix:
$$ X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = |0\rangle\langle1| + |1\rangle\langle0| $$
To see the effect a gate has on a qubit, we simply multiply the qubit’s statevector by the gate. We can see that the X-gate switches the amplitudes of the states $|0\rangle$ and $|1\rangle$:
$$ X|0\rangle = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} = |1\rangle$$
<!--- Beware editing things inside <details> tags as it easily breaks when the notebook is converted to html --->
<details>
<summary>Reminder: Multiplying Vectors by Matrices (Click here to expand)</summary>
<p>Matrix multiplication is a generalisation of the inner product we saw in the last chapter. In the specific case of multiplying a vector by a matrix (as seen above), we always get a vector back:
$$ M|v\rangle = \begin{bmatrix}a & b \\ c & d \\\end{bmatrix}\begin{bmatrix}v_0 \\ v_1 \\\end{bmatrix}
= \begin{bmatrix}a\cdot v_0 + b \cdot v_1 \\ c \cdot v_0 + d \cdot v_1\end{bmatrix} $$
In quantum computing, we can write our matrices in terms of basis vectors:
$$X = |0\rangle\langle1| + |1\rangle\langle0|$$
This can sometimes be clearer than using a box matrix as we can see what different multiplications will result in:
$$
\begin{aligned}
X|1\rangle & = (|0\rangle\langle1| + |1\rangle\langle0|)|1\rangle \\
& = |0\rangle\langle1|1\rangle + |1\rangle\langle0|1\rangle \\
& = |0\rangle \times 1 + |1\rangle \times 0 \\
& = |0\rangle
\end{aligned}
$$
In fact, when we see a ket and a bra multiplied like this:
$$ |a\rangle\langle b| $$
this is called the <i>outer product</i>, which follows the rule:
$$
|a\rangle\langle b| =
\begin{bmatrix}
a_0 b_0 & a_0 b_1 & \dots & a_0 b_n\\
a_1 b_0 & \ddots & & \vdots \\
\vdots & & \ddots & \vdots \\
a_n b_0 & \dots & \dots & a_n b_n \\
\end{bmatrix}
$$
We can see this does indeed result in the X-matrix as seen above:
$$
|0\rangle\langle1| + |1\rangle\langle0| =
\begin{bmatrix}0 & 1 \\ 0 & 0 \\\end{bmatrix} +
\begin{bmatrix}0 & 0 \\ 1 & 0 \\\end{bmatrix} =
\begin{bmatrix}0 & 1 \\ 1 & 0 \\\end{bmatrix} = X
$$
</details>
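We can also check this arithmetic directly with a couple of lines of plain numpy (a quick sketch):

```
import numpy as np

X = np.array([[0, 1],
              [1, 0]])
ket0 = np.array([[1],
                 [0]])
print(X @ ket0)  # [[0], [1]]  -> the |1> state
```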
In Qiskit, we can create a short circuit to verify this:
```
# Let's do an X-gate on a |0> qubit
qc = QuantumCircuit(1)
qc.x(0)
qc.draw('mpl')
```
Let's see the result of the above circuit. **Note:** Here we use `plot_bloch_multivector()` which takes a qubit's statevector instead of the Bloch vector.
```
# Let's see the result
backend = Aer.get_backend('statevector_simulator')
out = execute(qc,backend).result().get_statevector()
plot_bloch_multivector(out)
```
We can indeed see the state of the qubit is $|1\rangle$ as expected. We can think of this as a rotation by $\pi$ radians around the *x-axis* of the Bloch sphere. The X-gate is also often called a NOT-gate, referring to its classical analogue.
### 1.2 The Y & Z-gates <a id="ynzgatez"></a>
Similarly to the X-gate, the Y & Z Pauli matrices also act as the Y & Z-gates in our quantum circuits:
$$ Y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \quad\quad\quad\quad Z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} $$
$$ Y = -i|0\rangle\langle1| + i|1\rangle\langle0| \quad\quad Z = |0\rangle\langle0| - |1\rangle\langle1| $$
And, unsurprisingly, they also respectively perform rotations by $\pi$ around the y and z-axis of the Bloch sphere.
Below is a widget that displays a qubit’s state on the Bloch sphere, pressing one of the buttons will perform the gate on the qubit:
```
# Run the code in this cell to see the widget
from qiskit_textbook.widgets import gate_demo
gate_demo(gates='pauli')
```
In Qiskit, we can apply the Y and Z-gates to our circuit using:
```
qc.y(0) # Do Y-gate on qubit 0
qc.z(0) # Do Z-gate on qubit 0
qc.draw('mpl')
```
## 2. Digression: The X, Y & Z-Bases <a id="xyzbases"></a>
<details>
<summary>Reminder: Eigenvectors of Matrices (Click here to expand)</summary>
We have seen that multiplying a vector by a matrix results in a vector:
$$
M|v\rangle = |v'\rangle \leftarrow \text{new vector}
$$
If we chose the right vectors and matrices, we can find a case in which this matrix multiplication is the same as doing a multiplication by a scalar:
$$
M|v\rangle = \lambda|v\rangle
$$
(Above, $M$ is a matrix, and $\lambda$ is a scalar). For a matrix $M$, any vector that has this property is called an <i>eigenvector</i> of $M$. For example, the eigenvectors of the Z-matrix are the states $|0\rangle$ and $|1\rangle$:
$$
\begin{aligned}
Z|0\rangle & = |0\rangle \\
Z|1\rangle & = -|1\rangle
\end{aligned}
$$
Since we use vectors to describe the state of our qubits, we often call these vectors <i>eigenstates</i> in this context. Eigenvectors are very important in quantum computing, and it is important you have a solid grasp of them.
</details>
You may also notice that the Z-gate appears to have no effect on our qubit when it is in either of these two states. This is because the states $|0\rangle$ and $|1\rangle$ are the two _eigenstates_ of the Z-gate. In fact, the _computational basis_ (the basis formed by the states $|0\rangle$ and $|1\rangle$) is often called the Z-basis. This is not the only basis we can use, a popular basis is the X-basis, formed by the eigenstates of the X-gate. We call these two vectors $|+\rangle$ and $|-\rangle$:
$$ |+\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle) = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
$$ |-\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle) = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix} $$
Another less commonly used basis is that formed by the eigenstates of the Y-gate. These are called:
$$ |\circlearrowleft\rangle, \quad |\circlearrowright\rangle$$
We leave it as an exercise to calculate these. There are in fact an infinite number of bases; to form one, we simply need two orthogonal vectors.
### Quick Exercises
1. Verify that $|+\rangle$ and $|-\rangle$ are in fact eigenstates of the X-gate.
2. What eigenvalues do they have?
3. Why would we not see these eigenvalues appear on the Bloch sphere?
4. Find the eigenstates of the Y-gate, and their co-ordinates on the Bloch sphere.
Using only the Pauli-gates it is impossible to move our initialised qubit to any state other than $|0\rangle$ or $|1\rangle$, i.e. we cannot achieve superposition. This means we can see no behaviour different to that of a classical bit. To create more interesting states we will need more gates!
## 3. The Hadamard Gate <a id="hgate"></a>
The Hadamard gate (H-gate) is a fundamental quantum gate. It allows us to move away from the poles of the Bloch sphere and create a superposition of $|0\rangle$ and $|1\rangle$. It has the matrix:
$$ H = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $$
We can see that this performs the transformations below:
$$ H|0\rangle = |+\rangle $$
$$ H|1\rangle = |-\rangle $$
This can be thought of as a rotation around the Bloch vector `[1,0,1]` (the line between the x & z-axis), or as transforming the state of the qubit between the X and Z bases.
You can play around with these gates using the widget below:
```
# Run the code in this cell to see the widget
from qiskit_textbook.widgets import gate_demo
gate_demo(gates='pauli+h')
```
### Quick Exercise
1. Write the H-gate as the outer products of vectors $|0\rangle$, $|1\rangle$, $|+\rangle$ and $|-\rangle$.
2. Show that applying the sequence of gates: HZH, to any qubit state is equivalent to applying an X-gate.
3. Find a combination of X, Z and H-gates that is equivalent to a Y-gate (ignoring global phase).
## 4. Digression: Measuring in Different Bases <a id="measuring"></a>
We have seen that the Z-axis is not intrinsically special, and that there are infinitely many other bases. Similarly with measurement, we don’t always have to measure in the computational basis (the Z-basis), we can measure our qubits in any basis.
As an example, let’s try measuring in the X-basis. We can calculate the probability of measuring either $|+\rangle$ or $|-\rangle$:
$$ p(|+\rangle) = |\langle+|q\rangle|^2, \quad p(|-\rangle) = |\langle-|q\rangle|^2 $$
And after measurement, we are guaranteed to have a qubit in one of these two states. Since Qiskit only allows measuring in the Z-basis, we must create our own using Hadamard gates:
```
from qiskit.extensions import Initialize # Import the Inititialize function
# Create the X-measurement function:
def x_measurement(qc,qubit,cbit):
"""Measure 'qubit' in the X-basis, and store the result in 'cbit'"""
qc.h(qubit)
qc.measure(qubit, cbit)
qc.h(qubit)
return qc
# Initialise our qubit and measure it
qc = QuantumCircuit(1,1)
initial_state = [0,1]
initializer = Initialize(initial_state)
initializer.label = "init"
qc.append(initializer, [0])
x_measurement(qc, 0, 0)
qc.draw()
```
In the quick exercises above, we saw you could create an X-gate by sandwiching our Z-gate between two H-gates:
$$ X = HZH $$
Starting in the Z-basis, the H-gate switches our qubit to the X-basis, the Z-gate performs a NOT in the X-basis, and the final H-gate returns our qubit to the Z-basis.
<img src="images/bloch_HZH.svg">
We can verify this always behaves like an X-gate by multiplying the matrices:
$$
HZH =
\tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}
\tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
=
\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
=X
$$
Following the same logic, we have created an X-measurement by sandwiching our Z-measurement between two H-gates.
<img src="images/x-measurement.svg">
Let’s now see the results:
```
backend = Aer.get_backend('statevector_simulator') # Tell Qiskit how to simulate our circuit
out_state = execute(qc,backend).result().get_statevector() # Do the simulation, returning the state vector
plot_bloch_multivector(out_state) # Display the output state vector
```
We initialised our qubit in the state $|1\rangle$, but we can see that, after the measurement, we have collapsed our qubit to the states $|+\rangle$ or $|-\rangle$. If you run the cell again, you will see different results, but the final state of the qubit will always be $|+\rangle$ or $|-\rangle$.
### Quick Exercises
1. If we initialise our qubit in the state $|+\rangle$, what is the probability of measuring it in state $|-\rangle$?
2. Use Qiskit to display the probability of measuring a $|0\rangle$ qubit in the states $|+\rangle$ and $|-\rangle$ (**Hint:** you might want to use `.get_counts()` and `plot_histogram()`).
3. Try to create a function that measures in the Y-basis.
Measuring in different bases allows us to see Heisenberg’s famous uncertainty principle in action. Having certainty of measuring a state in the Z-basis removes all certainty of measuring a specific state in the X-basis, and vice versa. A common misconception is that the uncertainty is due to the limits in our equipment, but here we can see the uncertainty is actually part of the nature of the qubit.
For example, if we put our qubit in the state $|0\rangle$, our measurement in the Z-basis is certain to be $|0\rangle$, but our measurement in the X-basis is completely random! Similarly, if we put our qubit in the state $|-\rangle$, our measurement in the X-basis is certain to be $|-\rangle$, but now any measurement in the Z-basis will be completely random.
More generally: _Whatever state our quantum system is in, there is always a measurement that has a certain outcome._
The introduction of the H-gate has allowed us to explore some interesting phenomena, but we are still very limited in our quantum operations. Let us now introduce a new type of gate:
## 5. The R<sub>ϕ</sub>-gate <a id="rzgate"></a>
The R<sub>ϕ</sub>-gate is _parametrised,_ that is, it needs a number ($\phi$) to tell it exactly what to do. The R<sub>ϕ</sub>-gate performs a rotation of $\phi$ around the Z-axis (and as such is sometimes also known as the R<sub>z</sub>-gate). It has the matrix:
$$
R_\phi = \begin{bmatrix} 1 & 0 \\ 0 & e^{i\phi} \end{bmatrix}
$$
Where $\phi$ is a real number.
You can use the widget below to play around with the R<sub>ϕ</sub>-gate, specify $\phi$ using the slider:
```
# Run the code in this cell to see the widget
from qiskit_textbook.widgets import gate_demo
gate_demo(gates='pauli+h+rz')
```
In Qiskit, we specify an R<sub>ϕ</sub>-gate using `rz(phi, qubit)`:
```
qc = QuantumCircuit(1)
qc.rz(pi/4, 0)
qc.draw('mpl')
```
You may notice that the Z-gate is a special case of the R<sub>ϕ</sub>-gate, with $\phi = \pi$. In fact there are three more commonly referenced gates we will mention in this chapter, all of which are special cases of the R<sub>ϕ</sub>-gate:
## 6. The I, S and T-gates <a id="istgates"></a>
### 6.1 The I-gate <a id="igate"></a>
First comes the I-gate (aka ‘Id-gate’ or ‘Identity gate’). This is simply a gate that does nothing. Its matrix is the identity matrix:
$$
I = \begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix}
$$
Applying the identity gate anywhere in your circuit should have no effect on the qubit state, so it's interesting that this is even considered a gate. There are two main reasons behind this. One is that it is often used in calculations, for example in proving that the X-gate is its own inverse:
$$ I = XX $$
The second is that it is often useful, when considering real hardware, to specify a 'do-nothing' or 'none' operation.
#### Quick Exercise
1. What are the eigenstates of the I-gate?
### 6.2 The S-gates <a id="sgate"></a>
The next gate to mention is the S-gate (sometimes known as the $\sqrt{Z}$-gate), this is an R<sub>ϕ</sub>-gate with $\phi = \pi/2$. It does a quarter-turn around the Bloch sphere. It is important to note that unlike every gate introduced in this chapter so far, the S-gate is **not** its own inverse! As a result, you will often see the S<sup>†</sup>-gate, (also “S-dagger”, “Sdg” or $\sqrt{Z}^\dagger$-gate). The S<sup>†</sup>-gate is clearly an R<sub>ϕ</sub>-gate with $\phi = -\pi/2$:
$$ S = \begin{bmatrix} 1 & 0 \\ 0 & e^{\frac{i\pi}{2}} \end{bmatrix}, \quad S^\dagger = \begin{bmatrix} 1 & 0 \\ 0 & e^{-\frac{i\pi}{2}} \end{bmatrix}$$
The name "$\sqrt{Z}$-gate" is due to the fact that two successively applied S-gates has the same effect as one Z-gate:
$$ SS|q\rangle = Z|q\rangle $$
This notation is common throughout quantum computing.
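As a quick numerical sanity check, we can multiply the matrices with plain numpy (a sketch, separate from the Qiskit circuit below):

```
import numpy as np

S = np.array([[1, 0],
              [0, np.exp(1j * np.pi / 2)]])
Z = np.array([[1, 0],
              [0, -1]])
print(np.allclose(S @ S, Z))  # True: two S-gates act as one Z-gate
```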
To add an S-gate in Qiskit:
```
qc = QuantumCircuit(1)
qc.s(0) # Apply S-gate to qubit 0
qc.sdg(0) # Apply Sdg-gate to qubit 0
qc.draw('mpl')
```
### 6.3 The T-gate <a id="tgate"></a>
The T-gate is a very commonly used gate, it is an R<sub>ϕ</sub>-gate with $\phi = \pi/4$:
$$ T = \begin{bmatrix} 1 & 0 \\ 0 & e^{\frac{i\pi}{4}} \end{bmatrix}, \quad T^\dagger = \begin{bmatrix} 1 & 0 \\ 0 & e^{-\frac{i\pi}{4}} \end{bmatrix}$$
As with the S-gate, the T-gate is sometimes also known as the $\sqrt[4]{Z}$-gate.
In Qiskit:
```
qc = QuantumCircuit(1)
qc.t(0) # Apply T-gate to qubit 0
qc.tdg(0) # Apply Tdg-gate to qubit 0
qc.draw('mpl')
```
You can use the widget below to play around with all the gates introduced in this chapter so far:
```
# Run the code in this cell to see the widget
from qiskit_textbook.widgets import gate_demo
gate_demo()
```
## 7. General U-gates <a id="generalU3"></a>
As we saw earlier, the I, Z, S & T-gates were all special cases of the more general R<sub>ϕ</sub>-gate. In the same way, the U<sub>3</sub>-gate is the most general of all single-qubit quantum gates. It is a parametrised gate of the form:
$$
U_3(\theta, \phi, \lambda) = \begin{bmatrix} \cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\
e^{i\phi}\sin(\theta/2) & e^{i\lambda+i\phi}\cos(\theta/2)
\end{bmatrix}
$$
Every gate in this chapter could be specified as $U_3(\theta,\phi,\lambda)$, but it is unusual to see this in a circuit diagram, possibly due to the difficulty in reading this.
Qiskit provides U<sub>2</sub> and U<sub>1</sub>-gates, which are specific cases of the U<sub>3</sub> gate in which $\theta = \tfrac{\pi}{2}$, and $\theta = \phi = 0$ respectively. You will notice that the U<sub>1</sub>-gate is equivalent to the R<sub>ϕ</sub>-gate.
$$
\begin{aligned}
U_3(\tfrac{\pi}{2}, \phi, \lambda) = U_2 = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & -e^{i\lambda} \\
e^{i\phi} & e^{i\lambda+i\phi}
\end{bmatrix}
& \quad &
U_3(0, 0, \lambda) = U_1 = \begin{bmatrix} 1 & 0 \\
0 & e^{i\lambda}\\
\end{bmatrix}
\end{aligned}
$$
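For example, choosing $\theta = \tfrac{\pi}{2}$, $\phi = 0$, $\lambda = \pi$ reproduces the H-gate matrix. A short sketch (in the Qiskit version used in this notebook the method is `u3`; newer releases expose the same gate as `u`):

```
from math import pi
from qiskit import QuantumCircuit  # already imported earlier in this notebook

qc = QuantumCircuit(1)
qc.u3(pi/2, 0, pi, 0)  # U3(pi/2, 0, pi) has the same matrix as H
qc.draw('mpl')
```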
Before running on real IBM quantum hardware, all single-qubit operations are compiled down to $U_1$, $U_2$ and $U_3$. For this reason they are sometimes called the _physical gates_.
It should be obvious from this that there are an infinite number of possible gates, and that this also includes R<sub>x</sub> and R<sub>y</sub>-gates, although they are not mentioned here. It must also be noted that there is nothing special about the Z-basis, except that it has been selected as the standard computational basis. That is why we have names for the S and T-gates, but not their X and Y equivalents (e.g. $\sqrt{X}$ and $\sqrt[4]{Y}$).
```
import qiskit
qiskit.__qiskit_version__
```
```
import turicreate as tc
data = tc.SFrame('people_wiki.gl/')
data.head()
```
1. Compare top words according to word counts vs. TF-IDF: In the notebook we covered in the module, we explored two document representations: word counts and TF-IDF. Now, take a particular famous person, 'Elton John'. What are the 3 words in his article with the highest word counts? What are the 3 words in his article with the highest TF-IDF? These results illustrate why TF-IDF is useful for finding important words. Save these results to answer the quiz at the end.
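As a quick reminder of the intuition, a word's TF-IDF score grows with how often it appears in a document and shrinks when it appears in many documents. Here is a toy sketch of the idea (not the turicreate implementation, which may use a slightly different formula):

```
import math

docs = [['the', 'king', 'sang'], ['the', 'queen'], ['the', 'crowd']]
word, doc = 'king', docs[0]

tf = doc.count(word)                      # term frequency in this document
df = sum(1 for d in docs if word in d)    # number of documents containing the word
print(tf * math.log(len(docs) / df))      # 'king' scores ~1.1; 'the' would score 0
```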
```
data['word_count'] = tc.text_analytics.count_words(data['text'])
data.head()
data['tfidf'] = tc.text_analytics.tf_idf(data['word_count'])
data.head()
elton_john = data[data['name'] == 'Elton John']
elton_john.head()
elton_john[['word_count']].stack('word_count',new_column_name=['word','count']).sort('count',ascending=False)
elton_john[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
```
2. Measuring distance: Elton John is a famous singer; let’s compute the distance between his article and those of two other famous singers. In this assignment, you will use the cosine distance, which is one measure of similarity between vectors, similar to the one discussed in the lectures. You can compute this distance using the `graphlab.distances.cosine` function (`tc.distances.cosine` here, since we are using turicreate). What’s the cosine distance between the articles on ‘Elton John’ and ‘Victoria Beckham’? What’s the cosine distance between the articles on ‘Elton John’ and ‘Paul McCartney’? Which one of the two is closest to Elton John? Does this result make sense to you? Save these results to answer the quiz at the end.
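For reference, the cosine distance between two vectors is one minus the cosine of the angle between them; a minimal numpy sketch (the turicreate call below works on the sparse TF-IDF dictionaries directly):

```
import numpy as np

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_distance(np.array([1., 0.]), np.array([1., 0.])))  # 0.0 -> same direction
print(cosine_distance(np.array([1., 0.]), np.array([0., 1.])))  # 1.0 -> orthogonal
```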
```
victoria_beckham = data[data['name'] == 'Victoria Beckham']
paul_mccartney = data[data['name'] == 'Paul McCartney']
tc.distances.cosine(elton_john['tfidf'][0],victoria_beckham['tfidf'][0])
tc.distances.cosine(elton_john['tfidf'][0],paul_mccartney['tfidf'][0])
```
3. Building nearest neighbors models with different input features and setting the distance metric: In the sample notebook, we built a nearest neighbors model for retrieving articles using TF-IDF as features and the default settings. Now, you will build two nearest neighbors models:
- Using word counts as features
- Using TF-IDF as features
In both of these models, we are going to set the distance function to cosine similarity. Here is how: when you call the function `graphlab.nearest_neighbors.create` (`tc.nearest_neighbors.create` here), add the parameter `distance='cosine'`.
Now we are ready to use our models to retrieve documents. Use these two models to collect the following results:
- What’s the most similar article, other than itself, to the one on ‘Elton John’ using word count features?
- What’s the most similar article, other than itself, to the one on ‘Elton John’ using TF-IDF features?
- What’s the most similar article, other than itself, to the one on ‘Victoria Beckham’ using word count features?
- What’s the most similar article, other than itself, to the one on ‘Victoria Beckham’ using TF-IDF features?
Save these results to answer the quiz at the end.
```
knn_model1 = tc.nearest_neighbors.create(data,features=['tfidf'],label='name',distance='cosine')
knn_model2 = tc.nearest_neighbors.create(data,features=['word_count'],label='name',distance='cosine')
knn_model2.query(elton_john)
knn_model1.query(elton_john)
knn_model2.query(victoria_beckham)
knn_model1.query(victoria_beckham)
```
```
import torch
import torch.backends.cudnn as cudnn
import torch.nn as nn
import torch.utils.data as data
from PIL import Image, ImageFile
from tensorboardX import SummaryWriter
from torchvision import transforms
from tqdm import tqdm
from sampler import InfiniteSamplerWrapper
from lib.depth_net import DepthV3
from pathlib import Path
import net
content_dir = "input/faces"
style_dir = "input/arts"
vgg_path = 'models/vgg_normalised.pth'
save_dir = './experiments'
log_dir = './logs'
lr = 1e-4
lr_decay = 5e-5
max_iter = 10000
batch_size = 8
style_weight = 10.0
content_weight = 1.0
depth_weight = 100.0
n_threads = 4
save_model_interval = 1000
def train_transform():
transform_list = [
transforms.Resize(size=(300, 300)),
transforms.RandomCrop(256),
transforms.ToTensor()
]
return transforms.Compose(transform_list)
class FlatFolderDataset(data.Dataset):
def __init__(self, root, transform):
super(FlatFolderDataset, self).__init__()
self.root = root
self.paths = list(Path(self.root).glob('*.jpg'))
self.transform = transform
def __getitem__(self, index):
path = self.paths[index]
img = Image.open(str(path)).convert('RGB')
img = self.transform(img)
return img
def __len__(self):
return len(self.paths)
def name(self):
return 'FlatFolderDataset'
def adjust_learning_rate(optimizer, iteration_count, lr):
"""Imitating the original implementation"""
new_lr = lr / (1.0 + lr_decay * iteration_count)
for param_group in optimizer.param_groups:
param_group['lr'] = new_lr
return new_lr
device = torch.device('cuda')
save_dir = Path(save_dir)
save_dir.mkdir(exist_ok=True, parents=True)
log_dir = Path(log_dir)
log_dir.mkdir(exist_ok=True, parents=True)
writer = SummaryWriter(log_dir=str(log_dir))
decoder = net.decoder
decoder.load_state_dict(torch.load("experiments/decoder_depth_AdaIN.pth"))
vgg = net.vgg
vgg.load_state_dict(torch.load(vgg_path))
vgg = nn.Sequential(*list(vgg.children())[:31])
depth_net = DepthV3((100, 100))
depth_net.load_state_dict(torch.load('models/depth_net.pth'))
for param in depth_net.parameters():
param.requires_grad = False
network = net.Net(vgg, decoder, depth_net)
network.train()
network.to(device);
content_tf = train_transform()
style_tf = train_transform()
content_dataset = FlatFolderDataset(content_dir, content_tf)
style_dataset = FlatFolderDataset(style_dir, style_tf)
content_iter = iter(data.DataLoader(
content_dataset, batch_size=batch_size,
sampler=InfiniteSamplerWrapper(content_dataset),
num_workers=n_threads))
style_iter = iter(data.DataLoader(
style_dataset, batch_size=batch_size,
sampler=InfiniteSamplerWrapper(style_dataset),
num_workers=n_threads))
optimizer = torch.optim.Adam(network.decoder.parameters(), lr=lr)
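# Continue training the decoder (iterations 80001-85000) from the checkpoint loaded above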
for i in tqdm(range(80001, 85001)):
lr = adjust_learning_rate(optimizer, iteration_count=i, lr=lr)
content_images = next(content_iter).to(device)
style_images = next(style_iter).to(device)
loss_c, loss_s, loss_d, _, _ = network(content_images, style_images)
loss_c = content_weight * loss_c
loss_s = style_weight * loss_s
loss_d = depth_weight * loss_d
loss = loss_c + loss_s + loss_d
optimizer.zero_grad()
loss.backward()
optimizer.step()
writer.add_scalar('loss_content', loss_c.item(), i + 1)
writer.add_scalar('loss_style', loss_s.item(), i + 1)
if (i + 1) % save_model_interval == 0 or (i + 1) == max_iter:
state_dict = net.decoder.state_dict()
for key in state_dict.keys():
state_dict[key] = state_dict[key].to(torch.device('cpu'))
torch.save(state_dict, save_dir /
'decoder_iter_{:d}.pth'.format(i + 1))
writer.close()
```
---
```
import os
import json
import pickle
import random
from collections import defaultdict, Counter
from indra.literature.adeft_tools import universal_extract_text
from indra.databases.hgnc_client import get_hgnc_name, get_hgnc_id
from adeft.discover import AdeftMiner
from adeft.gui import ground_with_gui
from adeft.modeling.label import AdeftLabeler
from adeft.modeling.classify import AdeftClassifier
from adeft.disambiguate import AdeftDisambiguator
from adeft_indra.ground.ground import AdeftGrounder
from adeft_indra.model_building.s3 import model_to_s3
from adeft_indra.model_building.escape import escape_filename
from adeft_indra.db.content import get_pmids_for_agent_text, get_pmids_for_entity, \
get_plaintexts_for_pmids
adeft_grounder = AdeftGrounder()
shortforms = ['LPS']
model_name = ':'.join(sorted(escape_filename(shortform) for shortform in shortforms))
results_path = os.path.abspath(os.path.join('../..', 'results', model_name))
miners = dict()
all_texts = {}
for shortform in shortforms:
pmids = get_pmids_for_agent_text(shortform)
pmids = random.choices(pmids, k=10000)
text_dict = get_plaintexts_for_pmids(pmids, contains=shortforms)
text_dict = {pmid: text for pmid, text in text_dict.items() if len(text) > 5}
miners[shortform] = AdeftMiner(shortform)
miners[shortform].process_texts(text_dict.values())
all_texts.update(text_dict)
longform_dict = {}
for shortform in shortforms:
longforms = miners[shortform].get_longforms()
longforms = [(longform, count, score) for longform, count, score in longforms
if count*score > 2]
longform_dict[shortform] = longforms
combined_longforms = Counter()
for longform_rows in longform_dict.values():
combined_longforms.update({longform: count for longform, count, score
in longform_rows})
grounding_map = {}
names = {}
pos_labels = []  # initialized here so the ground_with_gui call below has a starting value
for longform in combined_longforms:
groundings = adeft_grounder.ground(longform)
if groundings:
grounding = groundings[0]['grounding']
grounding_map[longform] = grounding
names[grounding] = groundings[0]['name']
longforms, counts = zip(*combined_longforms.most_common())
list(zip(longforms, counts))
grounding_map, names, pos_labels = ground_with_gui(longforms, counts,
grounding_map=grounding_map,
names=names, pos_labels=pos_labels, no_browser=True, port=8890)
result = [grounding_map, names, pos_labels]
result
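# The literal assignment below overwrites the GUI output with a previously curated grounding map, names, and positive labels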
grounding_map, names, pos_labels = [{'cp fine': 'MESH:D052638',
'f poae': 'ungrounded',
'f prostanoid': 'CHEBI:CHEBI:26347',
'f series prostanoid': 'CHEBI:CHEBI:26347',
'fabry perot': 'ungrounded',
'fabry pérot': 'ungrounded',
'faecal protease': 'ungrounded',
'faecalibacterium prausnitzii': 'MESH:D000070037',
'false positive': 'MESH:D005189',
'false positives': 'MESH:D005189',
'family physicians': 'MESH:D010821',
'family planning': 'MESH:D005193',
'farnesyl phosphate': 'CHEBI:CHEBI:24018',
'fast pathway': 'CHEBI:CHEBI:34922',
'fat pad': 'MESH:D000273',
'fat percentage': 'ungrounded',
'fatty pancreas': 'ungrounded',
'female protein': 'ungrounded',
'fenpropimorph': 'CHEBI:CHEBI:50145',
'fenugreek powder': 'ungrounded',
'fermentation production': 'ungrounded',
'fertility preservation': 'MESH:D059247',
'fetal pancreas': 'ungrounded',
'few polyhedra': 'ungrounded',
'fgfb pacap': 'ungrounded',
'fiber protein': 'MESH:D012596',
'field potential': 'field_potential',
'field potentials': 'field_potential',
'filamentous processes': 'ungrounded',
'filter paper': 'ungrounded',
'fine particles': 'MESH:D052638',
'fipronil': 'CHEBI:CHEBI:5063',
'first progression': 'ungrounded',
'fish peptide': 'ungrounded',
'fixed point': 'ungrounded',
'flagellar pocket': 'GO:GO:0020016',
'flavonoids and phenolic acid': 'ungrounded',
'flavopiridol': 'CHEBI:CHEBI:47344',
'flavoprotein': 'CHEBI:CHEBI:5086',
'floor plate': 'floor_plate',
'flow probe': 'ungrounded',
'flow proneness': 'ungrounded',
'flowering period': 'ungrounded',
'fluid percussion': 'fluid_percussion',
'fluid pressure': 'ungrounded',
'fluorescence polarization': 'MESH:D005454',
'fluorescence protein': 'MESH:D008164',
'fluorophosphonate': 'CHEBI:CHEBI:42699',
'fluoropyrimidine': 'PUBCHEM:141643',
'fluphenazine': 'CHEBI:CHEBI:5123',
'flurbiprofen': 'CHEBI:CHEBI:5130',
'fluticasone': 'CHEBI:CHEBI:5134',
'fluticasone propionate': 'CHEBI:CHEBI:31441',
'focused pulsed': 'ungrounded',
'follicular phase': 'MESH:D005498',
'foot processes': 'NCIT:C32623',
'footpad': 'ungrounded',
'foreperiod': 'ungrounded',
'formyl peptide': 'MESH:D009240',
'fowlpox': 'MESH:D005587',
'fowlpox virus': 'MESH:D005587',
'fp dipeptides': 'ungrounded',
'fractional photothermolysis': 'ungrounded',
'frailty phenotype': 'ungrounded',
'from propolis': 'MESH:D011429',
'frontal pole': 'ungrounded',
'fronto parietal': 'ungrounded',
'frontopolar cortex': 'ungrounded',
'fructus psoraleae': 'ungrounded',
'fucan polysaccharides': 'ungrounded',
'fungiform papilla': 'ungrounded',
'fusion peptide': 'MESH:D014760',
'fusion peptides': 'MESH:D014760',
'fusion positive': 'ungrounded',
'fusion protein': 'MESH:D014760',
'fusogenic peptides': 'MESH:D014760',
'prostaglandin f': 'HGNC:9600',
'prostaglandin f2α receptor': 'HGNC:9600'},
{'MESH:D052638': 'Particulate Matter',
'CHEBI:CHEBI:26347': 'prostanoid',
'MESH:D000070037': 'Faecalibacterium prausnitzii',
'MESH:D005189': 'False Positive Reactions',
'MESH:D010821': 'Physicians, Family',
'MESH:D005193': 'Family Planning Services',
'CHEBI:CHEBI:24018': 'farnesyl phosphate',
'CHEBI:CHEBI:34922': 'picloram',
'MESH:D000273': 'Adipose Tissue',
'CHEBI:CHEBI:50145': 'fenpropimorph',
'MESH:D059247': 'Fertility Preservation',
'MESH:D012596': 'Scleroproteins',
'field_potential': 'field_potential',
'CHEBI:CHEBI:5063': 'fipronil',
'GO:GO:0020016': 'ciliary pocket',
'CHEBI:CHEBI:47344': 'alvocidib',
'CHEBI:CHEBI:5086': 'flavoprotein',
'floor_plate': 'floor_plate',
'fluid_percussion': 'fluid_percussion',
'MESH:D005454': 'Fluorescence Polarization',
'MESH:D008164': 'Luminescent Proteins',
'CHEBI:CHEBI:42699': 'fluoridophosphate',
'PUBCHEM:141643': '2-Fluoropyrimidine',
'CHEBI:CHEBI:5123': 'fluphenazine',
'CHEBI:CHEBI:5130': 'flurbiprofen',
'CHEBI:CHEBI:5134': 'fluticasone',
'CHEBI:CHEBI:31441': 'fluticasone propionate',
'MESH:D005498': 'Follicular Phase',
'NCIT:C32623': 'Foot Process',
'MESH:D009240': 'N-Formylmethionine Leucyl-Phenylalanine',
'MESH:D005587': 'Fowlpox virus',
'MESH:D011429': 'Propolis',
'MESH:D014760': 'Viral Fusion Proteins',
'HGNC:9600': 'PTGFR'},
['CHEBI:CHEBI:47344',
'CHEBI:CHEBI:5130',
'MESH:D005189',
'MESH:D005193',
'MESH:D005454',
'MESH:D008164',
'MESH:D014760',
'NCIT:C32623',
'GO:GO:0020016']]
excluded_longforms = []
grounding_dict = {shortform: {longform: grounding_map[longform]
for longform, _, _ in longforms if longform in grounding_map
and longform not in excluded_longforms}
for shortform, longforms in longform_dict.items()}
result = [grounding_dict, names, pos_labels]
if not os.path.exists(results_path):
os.mkdir(results_path)
with open(os.path.join(results_path, f'{model_name}_preliminary_grounding_info.json'), 'w') as f:
json.dump(result, f)
additional_entities = []
unambiguous_agent_texts = {}
labeler = AdeftLabeler(grounding_dict)
corpus = labeler.build_from_texts((text, pmid) for pmid, text in all_texts.items())
agent_text_pmid_map = defaultdict(list)
for text, label, id_ in corpus:
agent_text_pmid_map[label].append(id_)
entities = pos_labels + additional_entities
entity_pmid_map = {entity: set(get_pmids_for_entity(*entity.split(':', maxsplit=1),
major_topic=True))for entity in entities}
intersection1 = []
for entity1, pmids1 in entity_pmid_map.items():
for entity2, pmids2 in entity_pmid_map.items():
intersection1.append((entity1, entity2, len(pmids1 & pmids2)))
intersection2 = []
for entity1, pmids1 in agent_text_pmid_map.items():
for entity2, pmids2 in entity_pmid_map.items():
intersection2.append((entity1, entity2, len(set(pmids1) & pmids2)))
intersection1
intersection2
all_used_pmids = set()
for entity, agent_texts in unambiguous_agent_texts.items():
used_pmids = set()
for agent_text in agent_texts:
pmids = set(get_pmids_for_agent_text(agent_text))
new_pmids = list(pmids - all_texts.keys() - used_pmids)
text_dict = get_plaintexts_for_pmids(new_pmids, contains=agent_texts)
corpus.extend([(text, entity, pmid) for pmid, text in text_dict.items()])
used_pmids.update(new_pmids)
all_used_pmids.update(used_pmids)
for entity, pmids in entity_pmid_map.items():
new_pmids = list(set(pmids) - all_texts.keys() - all_used_pmids)
if len(new_pmids) > 10000:
new_pmids = random.choices(new_pmids, k=10000)
text_dict = get_plaintexts_for_pmids(new_pmids)
corpus.extend([(text, entity, pmid) for pmid, text in text_dict.items()])
%%capture
classifier = AdeftClassifier(shortforms, pos_labels=pos_labels, random_state=1729)
param_grid = {'C': [100.0], 'max_features': [10000]}
texts, labels, pmids = zip(*corpus)
classifier.cv(texts, labels, param_grid, cv=5, n_jobs=5)
classifier.stats
disamb = AdeftDisambiguator(classifier, grounding_dict, names)
disamb.dump(model_name, results_path)
print(disamb.info())
model_to_s3(disamb)
```
---
## Tutorial on QAOA Compiler
This tutorial shows how to use the QAOA compiler (https://github.com/mahabubul-alam/QAOA-Compiler) for QAOA circuit compilation and optimization.
### Inputs to the QAOA Compiler
The compiler takes three json files as inputs. They hold the following information:
* ZZ-interactions and corresponding coefficients in the problem Hamiltonian (https://github.com/mahabubul-alam/QAOA-Compiler/blob/main/examples/QAOA_circ.json)
* Target hardware supported gates and corresponding reliabilities (https://github.com/mahabubul-alam/QAOA-Compiler/blob/main/examples/QC.json)
* Configurations for compilation (e.g., target p-value, routing method, random seed, etc.) (https://github.com/mahabubul-alam/QAOA-Compiler/blob/main/examples/Config.json)

The input ZZ-interactions json file content for the above graph MaxCut problem is shown below:
```
{
"(0, 2)": "-0.5", // ZZ-interaction between the qubit pair (0, 2) with a coefficient of -0.5
"(1, 2)": "-0.5"
}
```
A script is provided under utils (https://github.com/mahabubul-alam/QAOA-Compiler/blob/main/utils/construct_qaoa_circ_json.py) to generate this input json file for an arbitrary unweighted graph; a format-only sketch is shown below.
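The repository's script is the authoritative generator; purely as an illustration of the format above (this is not the repository's code, and the edge list and coefficient are just the example values), a minimal Python sketch might look like:
```
import json

# Hypothetical unweighted MaxCut instance given as an edge list (the example graph above)
edges = [(0, 2), (1, 2)]

# In the unweighted case every ZZ term gets the same coefficient
zz_interactions = {str(edge): "-0.5" for edge in edges}

with open("QAOA_circ.json", "w") as f:
    json.dump(zz_interactions, f, indent=4)
```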
The hardware json file must have the following information:
* Supported single-qubit gates
* Supported two-qubit gate
* Reliabilities of the supported single qubit operations
* Reliabilities of the supported two-qubit operations <br>
The json file content for a hypothetical 3-qubit hardware is shown below:
```
{
"1Q": [ //native single-qubit gates of the target hardware
"u3"
],
"2Q": [ //native two-qubit gate of the target hardware
"cx"
],
"u3": {
"0": 0.991, //"qubit" : success probability of u3
"1": 0.995,
"2": 0.998
}
"cx": {
"(0,1)": 0.96, //"(qubit1,qubit2)" : success probability of cx between qubit1, qubit2 (both directions)
"(1,2)": 0.97,
"(2,0)": 0.98,
}
}
```
A script is provided under utils (https://github.com/mahabubul-alam/QAOA-Compiler/blob/main/utils/construct_qc.py) to generate this json file for IBM quantum processors; a format-only sketch follows.
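Again, the provided utility targets IBM processors; purely as a format illustration (the reliability numbers are the example values above, and no backend API is queried), the hardware description could be assembled like this:
```
import json

# Example per-qubit / per-pair success probabilities (values from the sample above)
u3_success = {"0": 0.991, "1": 0.995, "2": 0.998}
cx_success = {"(0,1)": 0.96, "(1,2)": 0.97, "(2,0)": 0.98}

hardware = {"1Q": ["u3"], "2Q": ["cx"], "u3": u3_success, "cx": cx_success}

with open("QC.json", "w") as f:
    json.dump(hardware, f, indent=4)
```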
The configuration json file should hold the following information:
* Target backend compiler (currently supports qiskit)
* Target p-value
* Packing Limit (see https://www.microarch.org/micro53/papers/738300a215.pdf)
* Target routing method (any routing method supported by the qiskit compiler, e.g., sabre, stochastic_swap, basic_swap, etc.)
* Random seed for the qiskit transpiler
* Chosen optimization level for the qiskit compiler (0~3)
The content of a sample configuration json file is shown below:
```
{
"Backend" : "qiskit",
"Target_p" : "1",
"Packing_Limit" : "10e10",
"Route_Method" : "sabre",
"Trans_Seed" : "0",
"Opt_Level" : "3"
}
```
### How to Run
```
python run.py -arg arg_val
```
* -device_json string (mandatory): Target device configuration file location. This file holds the information on basis gates, reliability, and allowed two-qubit operations. It has to be written in json format. An example can be found [here](https://github.com/mahabubul-alam/QAOA_Compiler/blob/main/examples/QC.json).
* -circuit_json string (mandatory): Problem QAOA-circuit file location. This file holds the required ZZ interactions between various qubit-pairs to encode the cost hamiltonian. It has to be written in json format. An example can be found [here](https://github.com/mahabubul-alam/QAOA_Compiler/blob/main/examples/QAOA_circ.json).
* -config_json string (mandatory): Compiler configuration file location. This file holds target p-level, and chosen packing limit, qiskit transpiler seed, optimization level, and routing method. It has to be written in json format. An example can be found [here](https://github.com/mahabubul-alam/QAOA_Compiler/blob/main/examples/Config.json).
* -policy_compilation string: Chosen compilation policy. The current version supports the following policies: Instruction Parallelization-only ('IP'), Iterative Compilation ('IterC'), Incremental Compilation ('IC'), Variation-aware Incremental Compilation ('VIC'). The default value is 'IC'.
* -target_IterC string: Minimization objective of Iterative Compilation. The current version supports the following minimization objectives: Circuit Depth ('D'), Native two-qubit gate-count ('GC_2Q'), Estimated Success Probability ('ESP'). The default value is 'GC_2Q'.
* -initial_layout_method string: The chosen initial layout method. Currently supported methods: 'vqp', 'qaim', 'random'. The default method is 'qaim'.
* -output_qasm_file_name string: File name to write the compiled parametric QAOA circuit. The output is written in qasm format. The default value is 'QAOA.qasm'. The output qasm files are written following this naming style: {Method(IP/IC/VIC/IterC)}_{output_qasm_file_name}.
```
!python run.py -device_json examples/QC.json -circuit_json examples/QAOA_circ.json -config_json examples/Config.json -policy_compilation VIC -initial_layout_method vqp
```
### Output QAOA Circuits
The tool generates 3 QASM files:
* The uncompiled circuit (https://github.com/mahabubul-alam/QAOA-Compiler/blob/main/uncompiled_QAOA.qasm)
* Compiled QAOA circuit with the conventional approach (https://github.com/mahabubul-alam/QAOA-Compiler/blob/main/naive_compiled_QAOA.qasm)
* Optimized QAOA circuit with the chosen set of optimization policies (https://github.com/mahabubul-alam/QAOA-Compiler/blob/main/VIC_QAOA.qasm)
A sample QASM file is shown below:
```
!cat VIC_QAOA.qasm
```
---
# Binomial test for a proportion
```
import numpy as np
from scipy import stats
%pylab inline
```
## Shaken, not stirred
James Bond says he prefers his martini shaken, not stirred. Let's run a blind test: $n$ times we offer him a pair of drinks and ask which of the two he prefers. We get:
* **sample:** a binary vector of length $n$, where 1 means James Bond preferred the shaken drink and 0 the stirred one;
* **hypothesis $H_0$:** James Bond cannot tell the two drinks apart and chooses at random;
* **statistic $T$:** the number of ones in the sample.
If the null hypothesis holds and James Bond really does choose at random, then each of the $2^n$ binary vectors of length $n$ is equally likely.
We could enumerate all such vectors, compute the value of the statistic $T$ on each of them, and thereby obtain its null distribution. In this case, however, that step can be skipped: we are dealing with a sample of 0s and 1s, i.e., drawn from a Bernoulli distribution $Ber(p)$. The null hypothesis of choosing at random corresponds to $p=\frac1{2}$, i.e., in each trial the probability of choosing the shaken martini is $\frac1{2}$. The sum of $n$ identically distributed Bernoulli random variables with parameter $p$ follows a binomial distribution $Bin(n, p)$. Therefore, the null distribution of the statistic $T$ is $Bin\left(n, \frac1{2}\right)$.
Let $n=16.$
```
n = 16
F_H0 = stats.binom(n, 0.5)
x = np.linspace(0,16,17)
pylab.bar(x, F_H0.pmf(x), align = 'center')
xlim(-0.5, 16.5)
pylab.show()
```
## One-sided alternative
**hypothesis $H_1$:** James Bond prefers his martini shaken.
Under this alternative, large values of the statistic are more likely; when computing the achieved significance level (the p-value), we sum the heights of the bars in the right tail of the distribution.
```
pylab.bar(x, F_H0.pmf(x), align = 'center')
pylab.bar(np.linspace(12,16,5), F_H0.pmf(np.linspace(12,16,5)), align = 'center', color='red')
xlim(-0.5, 16.5)
pylab.show()
stats.binom_test(12, 16, 0.5, alternative = 'greater')
pylab.bar(x, F_H0.pmf(x), align = 'center')
pylab.bar(np.linspace(11,16,6), F_H0.pmf(np.linspace(11,16,6)), align = 'center', color='red')
xlim(-0.5, 16.5)
pylab.show()
stats.binom_test(11, 16, 0.5, alternative = 'greater')
```
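As a quick sanity check (a minimal sketch reusing the `F_H0` object defined above), the one-sided p-value returned by `binom_test` is exactly the probability mass of the right tail:
```
# P(T >= 12) under H0 as an explicit tail sum ...
print(F_H0.pmf(np.arange(12, 17)).sum())
# ... or equivalently via the survival function P(T > 11)
print(F_H0.sf(11))
# both should match stats.binom_test(12, 16, 0.5, alternative='greater')
```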
## Two-sided alternative
**hypothesis $H_1$:** James Bond prefers one particular kind of martini.
Under this alternative, both very large and very small values of the statistic are more likely; when computing the achieved significance level, we sum the heights of the bars in both the right and the left tails of the distribution.
```
pylab.bar(x, F_H0.pmf(x), align = 'center')
pylab.bar(np.linspace(12,16,5), F_H0.pmf(np.linspace(12,16,5)), align = 'center', color='red')
pylab.bar(np.linspace(0,4,5), F_H0.pmf(np.linspace(0,4,5)), align = 'center', color='red')
xlim(-0.5, 16.5)
pylab.show()
stats.binom_test(12, 16, 0.5, alternative = 'two-sided')
pylab.bar(x, F_H0.pmf(x), align = 'center')
pylab.bar(np.linspace(13,16,4), F_H0.pmf(np.linspace(13,16,4)), align = 'center', color='red')
pylab.bar(np.linspace(0,3,4), F_H0.pmf(np.linspace(0,3,4)), align = 'center', color='red')
xlim(-0.5, 16.5)
pylab.show()
stats.binom_test(13, 16, 0.5, alternative = 'two-sided')
```
---
# Launch Turi Create
```
import turicreate
```
# Load house sales data
```
sales = turicreate.SFrame('~/data/home_data.sframe/')
sales
```
# Explore
```
sales.show()
turicreate.show(sales[1:5000]['sqft_living'],sales[1:5000]['price'])
```
# Simple regression model that predicts price from square feet
```
training_set, test_set = sales.random_split(.8,seed=0)
```
## Train simple regression model
```
sqft_model = turicreate.linear_regression.create(training_set,target='price',features=['sqft_living'])
```
# Evaluate the quality of our model
```
print (test_set['price'].mean())
print (sqft_model.evaluate(test_set))
```
# Explore model a little further
```
sqft_model.coefficients
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(test_set['sqft_living'],test_set['price'],'.',
test_set['sqft_living'],sqft_model.predict(test_set),'-')
```
# Explore other features of the data
```
my_features = ['bedrooms','bathrooms','sqft_living','sqft_lot','floors','zipcode']
sales[my_features].show()
turicreate.show(sales['zipcode'],sales['price'])
```
# Build a model with these additional features
```
my_features_model = turicreate.linear_regression.create(training_set,target='price',features=my_features)
```
# Compare simple model with more complex one
```
print (my_features)
print (sqft_model.evaluate(test_set))
print (my_features_model.evaluate(test_set))
```
# Apply learned models to make predictions
```
house1 = sales[sales['id']=='5309101200']
house1
```
<img src="http://blue.kingcounty.com/Assessor/eRealProperty/MediaHandler.aspx?Media=2916871">
```
print (house1['price'])
print (sqft_model.predict(house1))
print (my_features_model.predict(house1))
```
## Prediction for a second house, a fancier one
```
house2 = sales[sales['id']=='1925069082']
house2
```
<img src="https://ssl.cdn-redfin.com/photo/1/bigphoto/302/734302_0.jpg">
```
print (sqft_model.predict(house2))
print (my_features_model.predict(house2))
```
## Prediction for a super fancy home
```
bill_gates = {'bedrooms':[8],
'bathrooms':[25],
'sqft_living':[50000],
'sqft_lot':[225000],
'floors':[4],
'zipcode':['98039'],
'condition':[10],
'grade':[10],
'waterfront':[1],
'view':[4],
'sqft_above':[37500],
'sqft_basement':[12500],
'yr_built':[1994],
'yr_renovated':[2010],
'lat':[47.627606],
'long':[-122.242054],
'sqft_living15':[5000],
'sqft_lot15':[40000]}
```
<img src="https://upload.wikimedia.org/wikipedia/commons/2/26/Residence_of_Bill_Gates.jpg">
```
print (my_features_model.predict(turicreate.SFrame(bill_gates)))
```
---
```
import gym
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
import random
env = gym.make('CartPole-v1')
env.reset()
goal_steps = 500
score_requirement = 60
intial_games = 10000
def playgame():
for step_index in range(goal_steps):
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
print("Step {}:".format(step_index))
print("action: {}".format(action))
print("observation: {}".format(observation))
print("reward: {}".format(reward))
print("done: {}".format(done))
print("info: {}".format(info))
if done:
break
env.reset()
playgame()
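# modeldp(): play intial_games random episodes, keep only those scoring at least
# score_requirement, and record (previous observation, one-hot action) pairs as
# supervised training data for the network built below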
def modeldp():
training_data = []
accepted_scores = []
for game_index in range(intial_games):
score = 0
game_memory = []
previous_observation = []
for step_index in range(goal_steps):
action = random.randrange(0, 2)
observation, reward, done, info = env.step(action)
if len(previous_observation) > 0:
game_memory.append([previous_observation, action])
previous_observation = observation
score += reward
if done:
break
if score >= score_requirement:
accepted_scores.append(score)
for data in game_memory:
if data[1] == 1:
output = [0, 1]
elif data[1] == 0:
output = [1, 0]
training_data.append([data[0], output])
env.reset()
print(sum(accepted_scores)/len(accepted_scores))
return training_data
training_data = modeldp()
def build_model(input_size, output_size):
model = Sequential()
model.add(Dense(1024, input_dim=input_size, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(output_size, activation='linear'))
model.compile(loss='mse', optimizer=Adam())
return model
def train_model(training_data):
X = np.array([i[0] for i in training_data]).reshape(-1, len(training_data[0][0]))
y = np.array([i[1] for i in training_data]).reshape(-1, len(training_data[0][1]))
model = build_model(input_size=len(X[0]), output_size=len(y[0]))
model.fit(X, y, epochs=20)
return model
trained_model = train_model(training_data)
scores = []
choices = []
for each_game in range(100):
score = 0
prev_obs = []
for step_index in range(goal_steps):
#env.render()
if len(prev_obs)==0:
action = random.randrange(0,2)
else:
action = np.argmax(trained_model.predict(prev_obs.reshape(-1,len(prev_obs)))[0])
choices.append(action)
new_observation, reward, done, info = env.step(action)
prev_obs = new_observation
score+=reward
if done:
break
env.reset()
scores.append(score)
print(scores)
print('Average Score:',sum(scores)/len(scores))
```
---
# MACHINE LEARNING CLASSIFICATION AND COMPARISONS
In this notebook we use 6 different ML classifiers and compare them to find the one that most accurately classifies our dataset of attack and normal network traffic.
## Installing some libraries.
```
pip install smote_variants
pip install imbalanced_databases
pip install imbalanced-learn
```
## Importing libraries for our needs.
```
import smote_variants as sv
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import imbalanced_databases as imbd
from sklearn import metrics
from sklearn.datasets import load_wine
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import plot_roc_curve
from imblearn.over_sampling import SMOTE
%matplotlib inline
from sklearn.model_selection import train_test_split
```
## Reading the dataset to a dataframe.
```
from google.colab import drive
drive.mount('/content/drive')
df_train = pd.read_csv('ml_dataset.csv')
df = df_train.copy()
df.columns
df.drop([ 'Unnamed: 0', 'pkts', 'bytes', 'dur',
'total_dur', 'spkts', 'dpkts', 'sbytes', 'dbytes',
'rate', 'TnBPSrcIP', 'TnBPDstIP', 'TnP_PSrcIP',
'TnP_PDstIP', 'TnP_PerProto', 'TnP_Per_Dport', 'AR_P_Proto_P_SrcIP',
'AR_P_Proto_P_DstIP',
'AR_P_Proto_P_Sport', 'AR_P_Proto_P_Dport',
'Pkts_P_State_P_Protocol_P_DestIP', 'Pkts_P_State_P_Protocol_P_SrcIP'],axis=1,inplace=True)
```
## Getting a general idea of the balance between attack and normal packets in the dataset.
```
df.head()
pd.value_counts(df['attack']).plot.bar()
plt.title('Attack histogram')
plt.xlabel('attack')
plt.ylabel('Value')
df['attack'].value_counts()
```
#### Here we can see that the dataset is highly imbalanced. Thus we need to synthetically oversample the minority class to get a balanced dataset for training and testing.
## Defining some helper methods that are used later:
```
# Used to plot the roc curve.
def plot_roc_curve(fpr, tpr):
plt.plot(fpr, tpr, color='orange', label='ROC')
plt.plot([0, 1], [0, 1], color='darkblue', linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic (ROC) Curve')
plt.legend()
plt.show()
# Prints evaluation metrics (confusion matrix, accuracy, classification report, AUC, ROC curve) for the current y_test / y_pred.
def classif_results():
conf_mat = confusion_matrix(y_true=y_test, y_pred=y_pred)
print('Confusion matrix:\n', conf_mat)
labels = ['Class 0', 'Class 1']
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(conf_mat, cmap=plt.cm.Blues)
fig.colorbar(cax)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
plt.xlabel('Predicted')
plt.ylabel('Expected')
plt.show()
print("Accuracy", metrics.accuracy_score(y_test, y_pred))
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
auc = roc_auc_score(y_test, y_pred)
print("AUC Score: ")
print(auc)
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
plot_roc_curve(fpr, tpr)
# Used for splitting and normalizing the dataset.
# Returns the split arrays; assigning them inside the function would only create
# local variables, so callers must capture the return values.
def test_scale():
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.4, random_state = 0)
    sc_X = StandardScaler()
    X_train = sc_X.fit_transform(X_train)
    X_test = sc_X.transform(X_test)
    return X_train, X_test, y_train, y_test
```
### Here we apply SMOTE to the dataset: X holds the feature columns, y holds the 'attack' label, and SMOTE synthetically oversamples the minority class until both classes have the same number of samples.
```
X = df.iloc[:, df.columns != 'attack']
y = df.iloc[:, df.columns == 'attack']
X, y = SMOTE().fit_resample(X, y)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.4, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
```
# Logistic Regression:
```
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train,y_train)
y_pred = classifier.predict(X_test)
classif_results()
```
# Decision Trees
```
X_train, X_test, y_train, y_test = test_scale()
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
classifier.fit(X_train,y_train)
y_pred = classifier.predict(X_test)
classif_results()
```
# Random Forest:
```
X_train, X_test, y_train, y_test = test_scale()
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
classif_results()
```
# KNN
```
X_train, X_test, y_train, y_test = test_scale()
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier()
classifier.fit(X_train,y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test,y_pred)
classif_results()
```
# Support Vector Machines:
```
X_train, X_test, y_train, y_test = test_scale()
# Fitting SVM to the Training set
from sklearn.svm import SVC
classifier = SVC()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test,y_pred)
classif_results()
```
# Naive Bayes Classifier
```
X_train, X_test, y_train, y_test = test_scale()
# Fitting Naive Bayes to the Training set
from sklearn.naive_bayes import GaussianNB
#Create a Gaussian Classifier
classifier = GaussianNB()
# Train the model using the training sets
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test,y_pred)
classif_results()
```
# Neural Network
```
import keras
from keras.models import Sequential
from keras.layers import Dense
classifier = Sequential()
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = X_train.shape[1]))  # input_dim must match the number of feature columns
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size = 10, epochs = 3)
# Part 3 - Making predictions and evaluating the model
# Predicting the Test set results
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
classif_results()
```
---
# Lesson 2: Intro to Pandas - 1
### Summary
* Flexible and expressive data structures **Series** (1D) and **DataFrame** (2D)
* High-level building block for doing **practical, real world data analysis**
* **Nearly as fast as C** thanks to being built on top of NumPy with extensive use of Cython
* **Robust IO tools** for loading and parsing data from text files, excel files and databases.
### Additional Resources
* Getting Started Guide: https://pandas.pydata.org/pandas-docs/stable/getting_started/index.html
* Complete User-Guide: https://pandas.pydata.org/pandas-docs/stable/user_guide/index.html#user-guide
* Geeks for Geeks: https://www.geeksforgeeks.org/pandas-tutorial/
---
# Imports
**Importing the packages you need on the top of your script/notebook is good programming practice**
```
# Import the packages that will be useful for this lesson
import pandas as pd
import numpy as np
```
---
# pandas Objects
Typically, DataFrames and Series objects are created by loading in data from a CSV or excel file (subsequent section). But sometimes you might find it useful to create an object within your script.
## Pandas Series
* 1-D array
* Can contain data of any type (integer, string, float, python objects, etc.)
* axis labels (*i.e.* row names) are called the index
**TL;DR a Series is a column in an excel sheet**
### Creating Pandas Series
**From a List(s)**
```
Base = ('A','T','C','G','N')
Freq = (0.21, 0.24, 0.27, 0.25, 0.03)
bases = pd.Series(data=Freq, index=Base)
bases
```
**From a Dictionary**
```
d = {'A':0.21, 'T':0.24, 'C':0.27, 'G':0.25, 'N':0.03}
bases_2 = pd.Series(d)
bases_2
```
## Pandas DataFrames
* 2D tabular data
* labeled axes (rows and columns)
* size-mutable (can add/remove data)
* potentially heterogeneous data types
### Creating Pandas DataFrames
**From a pandas Series**
```
d = {'A':0.21, 'T':0.24, 'C':0.27, 'G':0.25, 'N':0.03}
bases_2 = pd.Series(d)
pd.DataFrame(bases_2, columns=["Percent"])
```
**From a Dictionary**
```
d = {'Protein':['YFP', 'GFP', 'RFP', 'BFP'],
'Ex':[514, 488, 555, 383],
'Em':[527, 510, 584, 445]}
df = pd.DataFrame(d)
df
```
---
# Loading Data
### Lesson 1 Recap
Last time, we learned about loading a text file for reading and writing.
```
f = open('mysequence.fasta', 'r')
new_file = open('thesis.txt', 'w')
```
This is useful for text-based data, such as FASTA files or text documents.
But what about **tabular data**, such as data from your flow experiment, or time-course data from the plate-reader?
### Using Pandas to Load Data
There are many methods to use pandas to load data. You can choose which method to use based on your file type and adjust the parameters accordingly.
**Load from a .csv file using `pd.read_csv()`**
```
# Comma-delimited text
data = pd.read_csv('new_data.csv')
# Tab-delimited text
data = pd.read_csv('newer_data.txt', sep='\t')
# From a URL
data = pd.read_csv('https://raw.githubusercontent.com/FBosler/you-datascientist/master/happiness_with_continent.csv')
```
**Load from a .xls/.xlsx file using `pd.read_excel()`**
```
data = pd.read_excel('thesis_data.xlsx')
```
#### Useful Parameters for Loading Data
* **sep**: separator for the columns (*e.g.* ',' or '\t')
* **header**: row that contains the column headers (*e.g.* None, or 2 for the third row, skipping everything above it)
* **index_col**: which column should be the index (row names)
```
## Example: Loading Data
infile = '../data/ecoli.txt'
data = pd.read_csv(infile, sep='\t')
data
```
---
# Inspecting & Describing Data
Once data has been created/loaded using pandas, it is useful to 'look' at the data and ensure that it has been initialized properly. Rather than look at the entire data table, you can use simple functions to take a peek at the top, bottom, or a random sample of the table.
### Functions to View Data
* `data.head()`: display first n rows
* `data.tail()`: display last n rows
* `data.sample()`: display n random rows
```
## Example: Viewing Data
# Display the first 5 rows
data.head()
# Display the last 10 rows
data.tail(10)
# Display a random sampling of 7 rows
data.sample(7)
## Example: Loading & Viewing Data #2
## After viewing the table, we realize we want to use "GeneID" as the rownames instead of numbers
new_data = pd.read_csv(infile, sep='\t', index_col='GeneID')
new_data.head()
```
## Functions to Describe Data
* `data.shape`: dimensions of data (# rows, # columns) *This is not actually a function, but a property, so do not use '()'
* `data.info()`: data types of each column
* `data.describe()`: statistical information about the numerical columns
```
## Example: Describing Data
data.shape
data.info()
data.describe()
```
---
Exercises
---------
Load a dataframe from this file hosted online: https://evocellnet.github.io/ecoref/data/phenotypic_data.tsv
**Hint:** the file extension might indicate which field separator to expect
How big is the dataframe? How many rows and columns are there?
How many non-null rows are there for each column? What is the type of each column?
What is the mean value for the `s-scores` column?
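A minimal solution sketch for these exercises, assuming the score column is literally named `s-scores` as in the question above:
```
# Hedged sketch: the column name 's-scores' is taken from the exercise text
import pandas as pd

url = 'https://evocellnet.github.io/ecoref/data/phenotypic_data.tsv'
pheno = pd.read_csv(url, sep='\t')   # .tsv hints at tab-separated values

print(pheno.shape)        # (number of rows, number of columns)
pheno.info()              # non-null counts and dtype of each column
print(pheno['s-scores'].mean())
```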
|
github_jupyter
|
# Import the packages that will be useful for this lesson
import pandas as pd
import numpy as np
Base = ('A','T','C','G','N')
Freq = (0.21, 0.24, 0.27, 0.25, 0.03)
bases = pd.Series(data=Freq, index=Base)
bases
d = {'A':0.21, 'T':0.24, 'C':0.27, 'G':0.25, 'N':0.03}
bases_2 = pd.Series(d)
bases_2
d = {'A':0.21, 'T':0.24, 'C':0.27, 'G':0.25, 'N':0.03}
bases_2 = pd.Series(d)
pd.DataFrame(bases_2, columns=["Percent"])
d = {'Protein':['YFP', 'GFP', 'RFP', 'BFP'],
'Ex':[514, 488, 555, 383],
'Em':[527, 510, 584, 445]}
df = pd.DataFrame(d)
df
f = open('mysequence.fasta', 'r')
new_file = open('thesis.txt', 'w')
# Comma-delimited text
data = pd.read_csv('new_data.csv')
# Tab-delimited text
data = pd.read_csv('newer_data.txt', sep='\t')
# From a URL
data = pd.read_csv('https://raw.githubusercontent.com/FBosler/you-datascientist/master/happiness_with_continent.csv')
data = pd.read_excel('thesis_data.xlsx')
## Example: Loading Data
infile = '../data/ecoli.txt'
data = pd.read_csv(infile, sep='\t')
data
## Example: Viewing Data
# Display the first 5 rows
data.head()
# Display the last 10 rows
data.tail(10)
# Display a random sampling of 7 rows
data.sample(7)
## Example: Loading & Viewing Data #2
## After viewing the table, we realize we want to use "GeneID" as the rownames instead of numbers
new_data = pd.read_csv(infile, sep='\t', index_col='GeneID')
new_data.head()
## Example: Describing Data
data.shape
data.info()
data.describe()
| 0.48438 | 0.967625 |
<a href="https://colab.research.google.com/github/Neuralwood-Net/face-recognizer-9000/blob/main/notebooks/Training_and_plotting_vFinal.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Transfer learn SqueezeNet to four-class face recognition
### Make sure the hardware is in order
```
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
```
### Imports
```
import time
import os
import copy
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data
from torch.optim import lr_scheduler
import torchvision
from torchvision import datasets, models, transforms
from google.cloud import storage
# Placeholder to make it run until the real WoodNet is defined
class WoodNet:
pass
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
```
### Fetch and extract the data from the storage bucket
```
# Fetch the data
from google.cloud import storage
BASE_PATH = "/content"
# Make the required directories
os.makedirs(os.path.join(BASE_PATH, "faces"), exist_ok=True)
os.makedirs(os.path.join(BASE_PATH, "checkpoints"), exist_ok=True)
os.makedirs(os.path.join(BASE_PATH, "logs"), exist_ok=True)
BLOB_NAME = "faces/balanced_sampled_224px_color_156240_images_70_15_15_split.zip"
zipfilename = os.path.join(BASE_PATH, BLOB_NAME)
with open(zipfilename, "wb") as f:
storage.Client.create_anonymous_client().download_blob_to_file(f"gs://tdt4173-datasets/{BLOB_NAME}", f)
# Extract the data
import zipfile
extract_to_dir = os.path.join(BASE_PATH, *BLOB_NAME.split(os.path.sep)[:-1])
with zipfile.ZipFile(zipfilename, 'r') as zip_ref:
zip_ref.extractall(extract_to_dir)
```
### Load the data into wrapper classes and apply normalization
```
BATCH_SIZE = 16
data_transforms = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data_dir = os.path.join(extract_to_dir, "sampled_dataset_balanced_244")
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms)
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=BATCH_SIZE,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
print(class_names)
print(image_datasets['val'].classes)
print(dataset_sizes)
```
### Create a helper function to aid in image plotting and show a random sample of the input data
```
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a bit so that plots are updated
# Get a batch of training data
inputs, classes = next(iter(dataloaders['val']))
print(inputs.shape)
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
```
### Create a function for training and validation
The following function trains the supplied model with the loss criterion and optimizer supplied, for the specified number of epochs. During training it logs the loss and accuracy for both training and validation. Whenever a better model is found on the validation set, the function saves the model parameters to a file for use for inference later.
```
def train_model(model, criterion, optimizer, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
num_img = {
"train": 0,
"val": 0,
}
datapoints_per_epoch = 100
imgs_per_datapoint = {
"train": int(float(dataset_sizes["train"] / datapoints_per_epoch)),
"val": int(float(dataset_sizes["val"] / datapoints_per_epoch)),
}
for epoch in range(num_epochs):
print(f"Epoch {epoch}/{num_epochs - 1}")
print("-" * 10)
with open(os.path.join(BASE_PATH, f"logs/{type(model).__name__}-{since}.csv"), "a") as f:
# For each epoch we want to both train and evaluate in that order
for phase in ["train", "val"]:
if phase == "train":
# Makes the network ready for training, i.e. the parameters can be tuned
# and possible Dropouts are activated
model.train()
else:
# Makes the network ready for inference, i.e. it is not tunable and will
# turn off regularization that might interfere with training
model.eval()
running_loss = 0.0
running_corrects = 0
plot_loss = 0
plot_corrects = 0
# Iterate over training or validation data
for inputs, labels in tqdm(dataloaders[phase], desc=f"Epoch: {epoch} ({phase})"):
inputs = inputs.to(device)
labels = labels.to(device)
# Reset the gradients before calculating new ones
optimizer.zero_grad()
# Ask PyTorch to generate computation graph only if in training mode
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# Only perform update steps if we're training
if phase == 'train':
loss.backward()
optimizer.step()
# Save values for statistics and logging
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
plot_loss += loss.item() * inputs.size(0)
plot_corrects += torch.sum(preds == labels.data)
num_img[phase] += BATCH_SIZE
if num_img[phase] % imgs_per_datapoint[phase] == 0:
f.write(f"{time.time()},{epoch},{phase},\
{num_img[phase]},{plot_loss / float(imgs_per_datapoint[phase])},\
{plot_corrects / float(imgs_per_datapoint[phase])}\n")
plot_loss = 0
plot_corrects = 0
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print(f"{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}")
# deep copy the model
if phase == "val" and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
torch.save(
{
"loss": epoch_loss,
"acc": epoch_acc,
"epoch": epoch,
"parameters": best_model_wts,
},
                        os.path.join(BASE_PATH, f"checkpoints/{type(model).__name__}-{since}.data"),  # BASE_PATH, not the undefined BASE_URL
)
print()
time_elapsed = time.time() - since
print(f"Training complete in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s")
print(f"Best val Acc: {best_acc:4f}")
# load best model weights
model.load_state_dict(best_model_wts)
return model
```
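The checkpoints saved above can later be restored for inference. A minimal sketch, assuming the same imports, `device`, and `class_names` are available; the checkpoint filename below is a hypothetical example of the generated `<model>-<timestamp>.data` pattern:
```
# Hypothetical checkpoint path; the real name is generated from the model class and a timestamp
checkpoint = torch.load("checkpoints/SqueezeNet-1600000000.0.data", map_location=device)
model = models.squeezenet1_1()
model.classifier[1] = nn.Conv2d(model.classifier[1].in_channels, len(class_names), 1, 1)
model.load_state_dict(checkpoint["parameters"])
model = model.to(device).eval()
print(f"Restored epoch {checkpoint['epoch']} with val acc {checkpoint['acc']:.4f}")
```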
### Prepare the home-made CNN – WoodNet
Below are two networks. The first is made by the authors and is meant to be trained from scratch on the training data. The other is fully trained on ImageNet (1000 classes) and fine-tuned on the training data.
```
class WoodNet(nn.Module):
size_after_conv = 7 * 7 * 64
def __init__(self):
        super(WoodNet, self).__init__()
self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # 3 input channels: the dataset images are RGB
nn.MaxPool2d(2),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=3, padding=1),
nn.MaxPool2d(2),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.MaxPool2d(2),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.MaxPool2d(2),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.MaxPool2d(2),
nn.ReLU(),
)
self.classify = nn.Sequential(
nn.Linear(self.size_after_conv, 2048),
nn.ReLU(),
nn.Linear(2048, 1024),
nn.ReLU(),
nn.Dropout(),
nn.Linear(1024, len(class_names)),
)
def forward(self, x):
x = self.features(x)
x = x.view(-1, self.size_after_conv)
x = self.classify(x)
return x
woodnet = WoodNet().to(device)
print(woodnet)
```
### Prepare the pretrained CNN – SqueezeNet
Below is the code for loading in the pretrained SqueezeNet. After it is loaded, the last classification layer is replaced with a one with the correct amount of output classes.
```
squeezenet = models.squeezenet1_1(pretrained=True, progress=True)
num_ftr = squeezenet.classifier[1].in_channels
squeezenet.classifier[1] = nn.Conv2d(num_ftr, len(class_names), 1, 1)
squeezenet = squeezenet.to(device)
squeezenet
```
### Train the network
Below is code that instantiates the loss function and optimization method and starts the training.
To train every parameter in SqueezeNet, set `train_full_network = True`; set it to `False` to train only the last layer.
```
network = squeezenet
train_full_network = False
if train_full_network or isinstance(network, WoodNet):
print("Training full network")
parameters = network.parameters()
else:
print("Training only last layer of SqueezeNet")
parameters = network.classifier[1].parameters()
optimizer = torch.optim.SGD(parameters, lr=0.001, momentum=0.9)
loss_function = nn.CrossEntropyLoss()
train_model(network, loss_function, optimizer, num_epochs=25)
```
### Visualize the model performance for some images
```
def visualize_model(model, num_images=6):
was_training = model.training
model.eval()
images_so_far = 0
fig = plt.figure()
with torch.no_grad():
for i, (inputs, labels) in enumerate(dataloaders['val']):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
for j in range(inputs.size()[0]):
images_so_far += 1
ax = plt.subplot(num_images//2, 2, images_so_far)
ax.axis('off')
ax.set_title('predicted: {}'.format(class_names[preds[j]]))
imshow(inputs.cpu().data[j])
if images_so_far == num_images:
model.train(mode=was_training)
return
model.train(mode=was_training)
visualize_model(squeezenet)
```
|
github_jupyter
|
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
import time
import os
import copy
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data
from torch.optim import lr_scheduler
import torchvision
from torchvision import datasets, models, transforms
from google.cloud import storage
# Placeholder to make it run until the real WoodNet is defined
class WoodNet:
pass
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
# Fetch the data
from google.cloud import storage
BASE_PATH = "/content"
# Make the required directories
os.makedirs(os.path.join(BASE_PATH, "faces"), exist_ok=True)
os.makedirs(os.path.join(BASE_PATH, "checkpoints"), exist_ok=True)
os.makedirs(os.path.join(BASE_PATH, "logs"), exist_ok=True)
BLOB_NAME = "faces/balanced_sampled_224px_color_156240_images_70_15_15_split.zip"
zipfilename = os.path.join(BASE_PATH, BLOB_NAME)
with open(zipfilename, "wb") as f:
storage.Client.create_anonymous_client().download_blob_to_file(f"gs://tdt4173-datasets/{BLOB_NAME}", f)
# Extract the data
import zipfile
extract_to_dir = os.path.join(BASE_PATH, *BLOB_NAME.split(os.path.sep)[:-1])
with zipfile.ZipFile(zipfilename, 'r') as zip_ref:
zip_ref.extractall(extract_to_dir)
BATCH_SIZE = 16
data_transforms = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data_dir = os.path.join(extract_to_dir, "sampled_dataset_balanced_244")
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms)
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=BATCH_SIZE,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
print(class_names)
print(image_datasets['val'].classes)
print(dataset_sizes)
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a bit so that plots are updated
# Get a batch of training data
inputs, classes = next(iter(dataloaders['val']))
print(inputs.shape)
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
def train_model(model, criterion, optimizer, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
num_img = {
"train": 0,
"val": 0,
}
datapoints_per_epoch = 100
imgs_per_datapoint = {
"train": int(float(dataset_sizes["train"] / datapoints_per_epoch)),
"val": int(float(dataset_sizes["val"] / datapoints_per_epoch)),
}
for epoch in range(num_epochs):
print(f"Epoch {epoch}/{num_epochs - 1}")
print("-" * 10)
with open(os.path.join(BASE_PATH, f"logs/{type(model).__name__}-{since}.csv"), "a") as f:
# For each epoch we want to both train and evaluate in that order
for phase in ["train", "val"]:
if phase == "train":
# Makes the network ready for training, i.e. the parameters can be tuned
# and possible Dropouts are activated
model.train()
else:
# Makes the network ready for inference, i.e. it is not tunable and will
# turn off regularization that might interfere with training
model.eval()
running_loss = 0.0
running_corrects = 0
plot_loss = 0
plot_corrects = 0
# Iterate over training or validation data
for inputs, labels in tqdm(dataloaders[phase], desc=f"Epoch: {epoch} ({phase})"):
inputs = inputs.to(device)
labels = labels.to(device)
# Reset the gradients before calculating new ones
optimizer.zero_grad()
# Ask PyTorch to generate computation graph only if in training mode
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# Only perform update steps if we're training
if phase == 'train':
loss.backward()
optimizer.step()
# Save values for statistics and logging
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
plot_loss += loss.item() * inputs.size(0)
plot_corrects += torch.sum(preds == labels.data)
num_img[phase] += BATCH_SIZE
if num_img[phase] % imgs_per_datapoint[phase] == 0:
f.write(f"{time.time()},{epoch},{phase},\
{num_img[phase]},{plot_loss / float(imgs_per_datapoint[phase])},\
{plot_corrects / float(imgs_per_datapoint[phase])}\n")
plot_loss = 0
plot_corrects = 0
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print(f"{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}")
# deep copy the model
if phase == "val" and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
torch.save(
{
"loss": epoch_loss,
"acc": epoch_acc,
"epoch": epoch,
"parameters": best_model_wts,
},
                        os.path.join(BASE_PATH, f"checkpoints/{type(model).__name__}-{since}.data"),  # BASE_PATH, not the undefined BASE_URL
)
print()
time_elapsed = time.time() - since
print(f"Training complete in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s")
print(f"Best val Acc: {best_acc:4f}")
# load best model weights
model.load_state_dict(best_model_wts)
return model
class WoodNet(nn.Module):
size_after_conv = 7 * 7 * 64
def __init__(self):
        super(WoodNet, self).__init__()
self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # 3 input channels: the dataset images are RGB
nn.MaxPool2d(2),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=3, padding=1),
nn.MaxPool2d(2),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.MaxPool2d(2),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.MaxPool2d(2),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.MaxPool2d(2),
nn.ReLU(),
)
self.classify = nn.Sequential(
nn.Linear(self.size_after_conv, 2048),
nn.ReLU(),
nn.Linear(2048, 1024),
nn.ReLU(),
nn.Dropout(),
nn.Linear(1024, len(class_names)),
)
def forward(self, x):
x = self.features(x)
x = x.view(-1, self.size_after_conv)
x = self.classify(x)
return x
woodnet = WoodNet().to(device)
print(woodnet)
squeezenet = models.squeezenet1_1(pretrained=True, progress=True)
num_ftr = squeezenet.classifier[1].in_channels
squeezenet.classifier[1] = nn.Conv2d(num_ftr, len(class_names), 1, 1)
squeezenet = squeezenet.to(device)
squeezenet
network = squeezenet
train_full_network = False
if train_full_network or isinstance(network, WoodNet):
print("Training full network")
parameters = network.parameters()
else:
print("Training only last layer of SqueezeNet")
parameters = network.classifier[1].parameters()
optimizer = torch.optim.SGD(parameters, lr=0.001, momentum=0.9)
loss_function = nn.CrossEntropyLoss()
train_model(network, loss_function, optimizer, num_epochs=25)
def visualize_model(model, num_images=6):
was_training = model.training
model.eval()
images_so_far = 0
fig = plt.figure()
with torch.no_grad():
for i, (inputs, labels) in enumerate(dataloaders['val']):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
for j in range(inputs.size()[0]):
images_so_far += 1
ax = plt.subplot(num_images//2, 2, images_so_far)
ax.axis('off')
ax.set_title('predicted: {}'.format(class_names[preds[j]]))
imshow(inputs.cpu().data[j])
if images_so_far == num_images:
model.train(mode=was_training)
return
model.train(mode=was_training)
visualize_model(squeezenet)
| 0.623148 | 0.912981 |
# California Bike Sharing
```
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import cm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import scale
from macest.regression import models as reg_mod
from macest.regression import plots as reg_plot
from macest.model_selection import KFoldConfidenceSplit
sns.set_style('darkgrid')
sns.set_context('notebook')
```
## Load (cleaned and pre-processed) data
```
bike_df = pd.read_csv('../../data/bike_df.csv').sample(frac=1)
bike_df.reset_index(drop=True, inplace=True)
y = bike_df['cnt'].values
X = bike_df.drop('cnt', axis =1).values
```
## Split the data into 4 groups
```
X_pp_train, X_conf_train, y_pp_train, y_conf_train = train_test_split(X,
y,
test_size=0.66,
random_state=10)
X_conf_train, X_cal, y_conf_train, y_cal = train_test_split(X_conf_train, y_conf_train,
test_size=0.5, random_state=1)
X_cal, X_test, y_cal, y_test, = train_test_split(X_cal, y_cal, test_size=0.5, random_state=1)
bike_df.shape
print(X_pp_train.shape)
print(X_conf_train.shape)
print(X_cal.shape)
print(X_test.shape)
```
## Train a point prediction model
```
rf_bike = RandomForestRegressor(random_state =1, n_estimators = 200)
rf_bike.fit(X_pp_train, y_pp_train)
rf_bike.score(X_test, y_test)
rf_preds = rf_bike.predict(X_conf_train)
test_error = abs(rf_preds - y_conf_train)
search_args = reg_mod.HnswGraphArgs(query_kwargs={'ef': 1500})
bnds = reg_mod.SearchBounds(k_bounds = (5,30))
```
## Initialise and fit MACE
```
macest_model = reg_mod.ModelWithPredictionInterval(
rf_bike,
X_conf_train,
test_error,
error_dist='laplace',
dist_func='linear',
search_method_args = search_args)
optimiser_args = dict(popsize = 20, disp= False)
macest_model.fit(X_cal,
y_cal,
param_range= bnds,
optimiser_args = optimiser_args)
```
## Calibration data
```
reg_plot.plot_calibration(macest_model, X_cal, y_cal)
reg_plot.plot_predicted_vs_true(rf_bike, macest_model, X_cal, y_cal)
reg_plot.plot_true_vs_predicted(rf_bike, macest_model, X_cal, y_cal)
```
## Unseen test data
```
reg_plot.plot_calibration(macest_model, X_test, y_test)
reg_plot.plot_predicted_vs_true(rf_bike, macest_model, X_test, y_test)
reg_plot.plot_true_vs_predicted(rf_bike, macest_model, X_test, y_test)
```
|
github_jupyter
|
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import cm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import scale
from macest.regression import models as reg_mod
from macest.regression import plots as reg_plot
from macest.model_selection import KFoldConfidenceSplit
sns.set_style('darkgrid')
sns.set_context('notebook')
bike_df = pd.read_csv('../../data/bike_df.csv').sample(frac=1)
bike_df.reset_index(drop=True, inplace=True)
y = bike_df['cnt'].values
X = bike_df.drop('cnt', axis =1).values
X_pp_train, X_conf_train, y_pp_train, y_conf_train = train_test_split(X,
y,
test_size=0.66,
random_state=10)
X_conf_train, X_cal, y_conf_train, y_cal = train_test_split(X_conf_train, y_conf_train,
test_size=0.5, random_state=1)
X_cal, X_test, y_cal, y_test, = train_test_split(X_cal, y_cal, test_size=0.5, random_state=1)
bike_df.shape
print(X_pp_train.shape)
print(X_conf_train.shape)
print(X_cal.shape)
print(X_test.shape)
rf_bike = RandomForestRegressor(random_state =1, n_estimators = 200)
rf_bike.fit(X_pp_train, y_pp_train)
rf_bike.score(X_test, y_test)
rf_preds = rf_bike.predict(X_conf_train)
test_error = abs(rf_preds - y_conf_train)
search_args = reg_mod.HnswGraphArgs(query_kwargs={'ef': 1500})
bnds = reg_mod.SearchBounds(k_bounds = (5,30))
macest_model = reg_mod.ModelWithPredictionInterval(
rf_bike,
X_conf_train,
test_error,
error_dist='laplace',
dist_func='linear',
search_method_args = search_args)
optimiser_args = dict(popsize = 20, disp= False)
macest_model.fit(X_cal,
y_cal,
param_range= bnds,
optimiser_args = optimiser_args)
reg_plot.plot_calibration(macest_model, X_cal, y_cal)
reg_plot.plot_predicted_vs_true(rf_bike, macest_model, X_cal, y_cal)
reg_plot.plot_true_vs_predicted(rf_bike, macest_model, X_cal, y_cal)
reg_plot.plot_calibration(macest_model, X_test, y_test)
reg_plot.plot_predicted_vs_true(rf_bike, macest_model, X_test, y_test)
reg_plot.plot_true_vs_predicted(rf_bike, macest_model, X_test, y_test)
| 0.569374 | 0.854521 |
# Nosecone Designs
This notebook provides descriptions, equations and renderings of several nose cone shapes commonly used in hobby rocketry. It leverages common packages such as matplotlib and numpy for data handling and 2D graphing, and some less-common packages (viewscad, solidpython and OpenSCAD) for 3D rendering.
Our first step is to import these packages.
```
# nbi:hide_in
from os.path import expanduser
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
import viewscad
from solid import *
```
`length` and `radius` define the length and base radius of the nose cone, `S` defines the length of the shoulder, and `Ri` its radius (sized to fit the mount inside the body tube).
```
# nbi:hide_in
length = 60
radius = 24.8/2
S = 10
Ri = 24.1/2
SMOOTH=100
```
This function adds origin and shoulder coordinates to the arrays passed in and plots the nose cone profile in two dimensions. The arrays are then combined into an array of points, rotated around the z-axis, and the resulting solid is output as an .stl file.
```
def render(x, y, name):
zero = np.array([0])
base = np.array([Ri])
shoulder = np.array([-S])
xplt = np.concatenate((shoulder, shoulder, zero, x, shoulder))
yplt = np.concatenate((zero, base, base, y, zero))
plt.axes().set_aspect("equal")
plt.plot(xplt, yplt)
r = viewscad.Renderer()
p = rotate_extrude(360, segments=SMOOTH)(polygon(np.vstack((yplt, xplt)).T))
r.render(p, outfile=expanduser("~") + '/' + name + '.stl')
```
We define a set of values for X from 0 to the nosecone length.
## Elliptical
The elliptical nose cone shape is one-half of an ellipse, with the major axis being the centerline and the minor axis being the base of the nose cone. A rotation of a full ellipse about its major axis is called a prolate spheroid, so an elliptical nose shape would properly be known as a prolate hemispheroid. This shape is popular in subsonic flight (such as model rocketry) due to the blunt nose and tangent base and are generally considered superior for model rocketry altitude optimisation use. This is not a shape normally found in professional rocketry, which almost always flies at much higher velocities where other designs are more suitable.
The profile is defined as $y=R{\sqrt {1-{x^{2} \over L^{2}}}}$ If $R = L$, this is a hemisphere.
We define our function and apply it to the values of x to generate an array of y values, which are used to plot the nose cone profile. The points passed to the renderer combine the x and y arrays with the addition of the origin point to ensure a closed shape.
```
x = np.linspace(0, length, int(length * 5))
f = lambda x: radius * np.sqrt(1-(x**2/length**2))
y = f(x)
render(x, y, 'elliptical')
```
The profile is then rotated around the z-axis to create a solid object, and an .stl file is output for subsequent use.
## Conical
A very common nose-cone shape is a simple cone. This shape is often chosen for its ease of manufacture, and is also often chosen for its drag and radar cross section characteristics. A lower drag cone would be more streamlined, with the most optimal shape being a Sears-Haack body. The sides of a conical profile are straight lines, so the diameter equation is simply $y={xR \over L}$
Cones are sometimes defined by their half angle, $\phi$
$\phi = \arctan \Bigl({R \over L}\Bigr)$ and $y = x \tan (\phi) $
In practical applications, a conical nose is often blunted by capping it with a segment of a sphere. The tangency point where the sphere meets the cone can be found from
$x_t = \frac{L^2}{R}\sqrt{\frac{r_n^2}{R^2+L^2}}$
$y_t = \frac{x_tR}{L}$
where
$r_n$ is the radius of the spherical nose cap.
$x_o = x_t + \sqrt{r_n^2 - y_t^2}$
$x_a = x_o - r_n$
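As a quick numeric illustration of these blunting formulas, here is a short sketch using the `length` and `radius` constants defined above; the nose-cap radius `r_n` is an assumed value, not part of the original design:
```
# Spherically blunted cone: tangency point, cap centre and apex (r_n is an assumed cap radius)
r_n = 3.0
x_t = (length**2 / radius) * np.sqrt(r_n**2 / (radius**2 + length**2))
y_t = x_t * radius / length
x_o = x_t + np.sqrt(r_n**2 - y_t**2)   # centre of the spherical cap
x_a = x_o - r_n                        # apex of the blunted cone
print(x_t, y_t, x_o, x_a)
```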
```
f = lambda x: x*radius/length
y = f(x)
render(x, y[::-1], 'conical')  # reverse y so the base (y = radius) sits at x = 0, as render() expects
```
## Parabolic
This nose shape is not the blunt shape that is envisioned when people commonly refer to a "parabolic" nose cone. The parabolic series nose shape is generated by rotating a segment of a parabola around a line parallel to its latus rectum. This construction is similar to that of the tangent ogive, except that a parabola is the defining shape rather than a circle. Just as it does on an ogive, this construction produces a nose shape with a sharp tip. For the blunt shape typically associated with a parabolic nose, see power series below. (The parabolic shape is also often confused with the elliptical shape.)
For $0 \leq K^\prime \leq 1 : y=R \Biggl({2({x \over L})-K^\prime({x \over L})^{2} \over 2-K^\prime}\Biggr)$
$K^\prime$ can vary anywhere between $0$ and $1$, but the most common values used for nose cone shapes are:
| Parabola Type | $K^\prime$ Value |
| --- | --- |
| Cone | $0$ |
| Half | $\frac {1}{2}$ |
| Three Quarter| $3 \over 4$ |
| Full | $1$ |
For the case of the full parabola $(K^\prime = 1)$ the shape is tangent to the body at its base, and the base is on the axis of the parabola. Values of $K^\prime \lt 1$ result in a slimmer shape, whose appearance is similar to that of the secant ogive. The shape is no longer tangent at the base, and the base is parallel to, but offset from, the axis of the parabola.
```
K = .75
f = lambda x: radius*(((2*(x/length))-(K*(x/length)**2))/(2-K))
y = f(x)
render(x, y[::-1], 'parabolic')  # reverse y so the base radius is at x = 0
```
## Ogive
### Tangent ogive
Next to a simple cone, the tangent ogive shape is the most familiar in model/hobby rocketry. The profile of this shape is formed by a segment of a circle such that the rocket body is tangent to the curve of the nose cone at its base, and the base is on the radius of the circle. The popularity of this shape is largely due to the ease of constructing its profile, as it is simply a circular section.
The radius of the circle that forms the ogive is called the ''ogive radius'', $\rho$, and it is related to the length and base radius of the nose cone as expressed by the formula
$\rho = {R^2 + L^2\over 2R}$
The radius $y$ at any point $x$, as $x$ varies from $0$ to $L$ is
$y = \sqrt{\rho^2 - (L - x)^2}+R - \rho$
The nose cone length, $L$, must be $\leq \rho$. If they are equal, then the shape is a hemisphere.
### Spherically blunted tangent ogive
A tangent ogive nose is often blunted by capping it with a segment of a [[sphere]]. The tangency point where the sphere meets the tangent ogive can be found from:
\begin{align}
x_o &= L - \sqrt{\left(\rho - r_n\right)^2 - (\rho - R)^2} \\
y_t &= \frac{r_n(\rho - R)}{\rho - r_n} \\
x_t &= x_o - \sqrt{r_n^2 - y_t^2}
\end{align}
where $r_n$ is the radius and $x_o$ is the center of the spherical nose cap.
Finally, the apex point can be found from:
$x_a = x_o - r_n$
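A corresponding numeric sketch for the blunted tangent ogive, again with an assumed cap radius `r_n` and the constants defined at the top of the notebook:
```
# Spherically blunted tangent ogive: cap centre, tangency point and apex (r_n is assumed)
rho = (radius**2 + length**2) / (2 * radius)
r_n = 3.0
x_o = length - np.sqrt((rho - r_n)**2 - (rho - radius)**2)
y_t = r_n * (rho - radius) / (rho - r_n)
x_t = x_o - np.sqrt(r_n**2 - y_t**2)
x_a = x_o - r_n
print(x_o, y_t, x_t, x_a)
```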
```
rho = (radius**2 + length**2) / (2 * radius)  # ogive radius; note the division by 2*radius
print(rho)
f = lambda x: np.sqrt((rho**2 - (x-length)**2)) + (radius - rho)
y = f(x)
print(x, y)
yplt = y[::-1]
plt.axes().set_aspect("auto")
plt.plot(x, yplt)
```
## Haack series
Unlike all of the nose cone shapes above, the Haack Series shapes are not constructed from geometric figures. The shapes are instead mathematically derived for the purpose of minimizing drag; see also Sears–Haack body. While the series is a continuous set of shapes determined by the value of $C$ in the equations below, two values of $C$ have particular significance: when $C = 0$, the notation $LD$ signifies minimum drag for the given length and diameter, and when $C = {1 \over 3}$, $LV$ indicates minimum drag for a given length and volume. The Haack series nose cones are not perfectly tangent to the body at their base except for the case where $C = {2 \over 3}$. However, the discontinuity is usually so slight as to be imperceptible. For $C > {2 \over 3}$, Haack nose cones bulge to a maximum diameter greater than the base diameter. Haack nose tips do not come to a sharp point, but are slightly rounded.
$\theta = \arccos \Bigl(1 - {2X \over L}\Bigr)$
$y = {R \over \sqrt{\pi}} \sqrt{\theta-{\sin({2\theta})\over2}+C \sin^3({\theta})}$
Where:
$C = {1 \over 3}$ for LV-Haack
$C = 0$ for LD-Haack
```
C = 1/3
f = lambda x: (radius/np.sqrt(np.pi))*np.sqrt((np.arccos(1 - (2*x)/length)) - (np.sin(2 * (np.arccos(1 - (2*x)/length))))/2 + C * np.sin((np.arccos(1 - (2*x)/length)))**3)
y = f(x)
render(x, y[::-1], 'haak')  # reverse y so the base radius is at x = 0
```
|
github_jupyter
|
# nbi:hide_in
from os.path import expanduser
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
import viewscad
from solid import *
# nbi:hide_in
length = 60
radius = 24.8/2
S = 10
Ri = 24.1/2
SMOOTH=100
def render(x, y, name):
zero = np.array([0])
base = np.array([Ri])
shoulder = np.array([-S])
xplt = np.concatenate((shoulder, shoulder, zero, x, shoulder))
yplt = np.concatenate((zero, base, base, y, zero))
plt.axes().set_aspect("equal")
plt.plot(xplt, yplt)
r = viewscad.Renderer()
p = rotate_extrude(360, segments=SMOOTH)(polygon(np.vstack((yplt, xplt)).T))
r.render(p, outfile=expanduser("~") + '/' + name + '.stl')
x = np.linspace(0, length, int(length * 5))
f = lambda x: radius * np.sqrt(1-(x**2/length**2))
y = f(x)
render(x, y, 'elliptical')
f = lambda x: x*radius/length
y = f(x)
render(x, y[::-1], 'conical')
K = .75
f = lambda x: radius*(((2*(x/length))-(K*(x/length)**2))/(2-K))
y = f(x)
render(x, y[::-1], 'parabolic')
rho = (radius**2 + length**2) / (2 * radius)
print(rho)
f = lambda x: np.sqrt((rho**2 - (x-length)**2)) + (radius - rho)
y = f(x)
print(x, y)
yplt = y[::-1]
plt.axes().set_aspect("auto")
plt.plot(x, yplt)
C = 1/3
f = lambda x: (radius/np.sqrt(np.pi))*np.sqrt((np.arccos(1 - (2*x)/length)) - (np.sin(2 * (np.arccos(1 - (2*x)/length))))/2 + C * np.sin((np.arccos(1 - (2*x)/length)))**3)
y = f(x)
render(x, y[::-1], 'haak')
| 0.376738 | 0.972046 |
# Stock Market Analysis Using Time Series
### IBM stock data obtained via the Alpha Vantage API
```
# Importing libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
```
### We will read the date column as the 'datetime' dtype
```
# Reading the dataset
from datetime import datetime  # pd.datetime was removed in recent pandas versions
d_parser = lambda x: datetime.strptime(x, '%Y-%m-%d')
stocks_df = pd.read_excel("stocks.xlsx", parse_dates=['date'], date_parser=d_parser)
stocks_df.head()
stocks_df.info()
stocks_df.shape
stocks_df = stocks_df.rename(columns={'1. open': "open", '2. high':"high", '3. low': "low", '4. close': "close", '5. volume': "volume"})
```
### Checking the days in each record
```
# Day name of the first record
stocks_df.loc[0, 'date'].day_name()
stocks_df["DayOfWeek"] = stocks_df["date"].dt.day_name()
stocks_df.head(10)
```
### Adding year, month and day columns to the dataframe
```
stocks_df.head(10)
years = []
months = []
days = []
for time in stocks_df["date"].astype(str):
years.append(time.split('-')[0])
months.append(time.split('-')[1])
days.append(time.split('-')[2])
stocks_df["year"] = years
stocks_df['month'] = months
stocks_df['days'] = days
```
### The dataset contains 5447 records; now let us take only the most recent records
```
stocks_df[stocks_df["date"] == "2019-07-01"]
df = stocks_df.iloc[:500, ]
print(df.shape)
print(df.head())
```
### Time-series data differs from ordinary tabular data: for stock prices we are given only the date, and every other value has to be predicted from past observations.
```
# plt.figure(figsize = (15, 5))
# plt.plot(df["open"])
fig, axs = plt.subplots(3, 2)
# axs[0][0].plot(df["open"])
# axs[0][1].plot(df["close"])
# axs[1][0].plot(df["high"])
# axs[1][1].plot(df["low"])
# axs[2][0].plot(df["volume"])
# plt.delaxes(ax = axs[2][1])
labels = [["open", "close"],
["high", "low"],
["volume"]]
for m in range(3):
for n in range(2):
if m == 2 and n == 1:
plt.delaxes(ax = axs[m][n])
else:
            axs[m][n].plot(df[labels[m][n]])  # 'labels', not the undefined 'label'
axs[m][n].set_title(labels[m][n], size = 18, y = 1.02)
plt.gcf().set_size_inches(22, 20)
```
### Visualizing monthly data
```
plt.figure(figsize = (15, 8))
plt.scatter(y = df["open"], x = df["month"])
from statsmodels.tsa.seasonal import seasonal_decompose
decompose_stocks = seasonal_decompose(df["open"],period=6)
decompose_stocks.plot()
plt.gcf().set_size_inches(15, 8)
plt.show()
```
|
github_jupyter
|
# Importing libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# Reading the dataset
from datetime import datetime  # pd.datetime was removed in recent pandas versions
d_parser = lambda x: datetime.strptime(x, '%Y-%m-%d')
stocks_df = pd.read_excel("stocks.xlsx", parse_dates=['date'], date_parser=d_parser)
stocks_df.head()
stocks_df.info()
stocks_df.shape
stocks_df = stocks_df.rename(columns={'1. open': "open", '2. high':"high", '3. low': "low", '4. close': "close", '5. volume': "volume"})
# Day name of the first record
stocks_df.loc[0, 'date'].day_name()
stocks_df["DayOfWeek"] = stocks_df["date"].dt.day_name()
stocks_df.head(10)
stocks_df.head(10)
years = []
months = []
days = []
for time in stocks_df["date"].astype(str):
years.append(time.split('-')[0])
months.append(time.split('-')[1])
days.append(time.split('-')[2])
stocks_df["year"] = years
stocks_df['month'] = months
stocks_df['days'] = days
stocks_df[stocks_df["date"] == "2019-07-01"]
df = stocks_df.iloc[:500, ]
print(df.shape)
print(df.head())
# plt.figure(figsize = (15, 5))
# plt.plot(df["open"])
fig, axs = plt.subplots(3, 2)
# axs[0][0].plot(df["open"])
# axs[0][1].plot(df["close"])
# axs[1][0].plot(df["high"])
# axs[1][1].plot(df["low"])
# axs[2][0].plot(df["volume"])
# plt.delaxes(ax = axs[2][1])
labels = [["open", "close"],
["high", "low"],
["volume"]]
for m in range(3):
for n in range(2):
if m == 2 and n == 1:
plt.delaxes(ax = axs[m][n])
else:
            axs[m][n].plot(df[labels[m][n]])  # 'labels', not the undefined 'label'
axs[m][n].set_title(labels[m][n], size = 18, y = 1.02)
plt.gcf().set_size_inches(22, 20)
plt.figure(figsize = (15, 8))
plt.scatter(y = df["open"], x = df["month"])
from statsmodels.tsa.seasonal import seasonal_decompose
decompose_stocks = seasonal_decompose(df["open"],period=6)
decompose_stocks.plot()
plt.gcf().set_size_inches(15, 8)
plt.show()
| 0.424412 | 0.873377 |
# Optimization and Deep Learning
In this section, we will discuss the relationship between optimization and deep learning as well as the challenges of using optimization in deep learning. For a deep learning problem, we will usually define a loss function first. Once we have the loss function, we can use an optimization algorithm in attempt to minimize the loss. In optimization, a loss function is often referred to as the objective function of the optimization problem. By tradition and convention most optimization algorithms are concerned with *minimization*. If we ever need to maximize an objective there's a simple solution - just flip the sign on the objective.
## Optimization and Estimation
Although optimization provides a way to minimize the loss function for deep
learning, in essence, the goals of optimization and deep learning are
fundamentally different. The former is primarily concerned with minimizing an
objective whereas the latter is concerned with finding a suitable model, given a
finite amount of data. In Section 6.4,
we discussed the difference between these two goals in detail. For instance,
training error and generalization error generally differ: since the objective
function of the optimization algorithm is usually a loss function based on the
training data set, the goal of optimization is to reduce the training error.
However, the goal of statistical inference (and thus of deep learning) is to
reduce the generalization error. To accomplish the latter we need to pay
attention to overfitting in addition to using the optimization algorithm to
reduce the training error. We begin by importing a few libraries.
```
import sys
sys.path.insert(0, '..')
%matplotlib inline
import d2l
from mpl_toolkits import mplot3d
import numpy as np
```
The graph below illustrates the issue in some more detail. Since we have only a finite amount of data the minimum of the training error may be at a different location than the minimum of the expected error (or of the test error).
```
def f(x): return x * np.cos(np.pi * x)
def g(x): return f(x) + 0.2 * np.cos(5 * np.pi * x)
d2l.set_figsize((4.5, 2.5))
x = np.arange(0.5, 1.5, 0.01)
fig, = d2l.plt.plot(x, f(x))
fig, = d2l.plt.plot(x, g(x))
fig.axes.annotate('empirical risk', xy=(1.02, -1.21), xytext=(0.5, -1.15),
arrowprops=dict(arrowstyle='->'))
fig.axes.annotate('expected risk', xy=(1.1, -1.05), xytext=(0.95, -0.5),
arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('risk');
```
## Optimization Challenges in Deep Learning
In this chapter, we are going to focus specifically on the performance of the
optimization algorithm in minimizing the objective function, rather than a
model's generalization error. In the earlier chapter on linear regression,
we distinguished between analytical solutions and numerical solutions in
optimization problems. In deep learning, most objective functions are
complicated and do not have analytical solutions. Instead, we must use numerical
optimization algorithms. The optimization algorithms below all fall into this
category.
There are many challenges in deep learning optimization. Some of the most vexing ones are local minima, saddle points and vanishing gradients. Let's have a look at a few of them.
### Local Minima
For the objective function $f(x)$, if the value of $f(x)$ at $x$ is smaller than the values of $f(x)$ at any other points in the vicinity of $x$, then $f(x)$ could be a local minimum. If the value of $f(x)$ at $x$ is the minimum of the objective function over the entire domain, then $f(x)$ is the global minimum.
For example, given the function
$$f(x) = x \cdot \text{cos}(\pi x) \text{ for } -1.0 \leq x \leq 2.0,$$
we can approximate the local minimum and global minimum of this function.
```
x = np.arange(-1.0, 2.0, 0.01)
fig, = d2l.plt.plot(x, f(x))
fig.axes.annotate('local minimum', xy=(-0.3, -0.25), xytext=(-0.77, -1.0),
arrowprops=dict(arrowstyle='->'))
fig.axes.annotate('global minimum', xy=(1.1, -0.95), xytext=(0.6, 0.8),
arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('f(x)');
```
The objective function of deep learning models usually has many local optima. When the numerical solution of an optimization problem is near the local optimum, the numerical solution obtained by the final iteration may only minimize the objective function locally, rather than globally, as the gradient of the objective function's solutions approaches or becomes zero. Only some degree of noise might knock the parameter out of the local minimum. In fact, this is one of the beneficial properties of stochastic gradient descent where the natural variation of gradients over minibatches is able to dislodge the parameters from local minima.
### Saddle Points
Besides local minima, saddle points are another reason for gradients to vanish. A [saddle point](https://en.wikipedia.org/wiki/Saddle_point) is any location where all gradients of a function vanish but which is neither a global nor a local minimum. Consider the function $f(x) = x^3$. Its first and second derivative vanish for $x=0$. Optimization might stall at the point, even though it is not a minimum.
```
x = np.arange(-2.0, 2.0, 0.01)
fig, = d2l.plt.plot(x, x**3)
fig.axes.annotate('saddle point', xy=(0, -0.2), xytext=(-0.52, -5.0),
arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('f(x)');
```
Saddle points in higher dimensions are even more insidious, as the example below shows. Consider the function $f(x, y) = x^2 - y^2$. It has its saddle point at $(0,0)$. This is a maximum with respect to $y$ and a minimum with respect to $x$. Moreover, it *looks* like a saddle, which is where this mathematical property got its name.
```
x, y = np.mgrid[-1: 1: 101j, -1: 1: 101j]
z = x**2 - y**2
ax = d2l.plt.figure().add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z, **{'rstride': 10, 'cstride': 10})
ax.plot([0], [0], [0], 'rx')
ticks = [-1, 0, 1]
d2l.plt.xticks(ticks)
d2l.plt.yticks(ticks)
ax.set_zticks(ticks)
d2l.plt.xlabel('x')
d2l.plt.ylabel('y');
```
We assume that the input of a function is a $k$-dimensional vector and its
output is a scalar, so its Hessian matrix will have $k$ eigenvalues
(refer to Section 16.2).
The solution of the
function could be a local minimum, a local maximum, or a saddle point at a
position where the function gradient is zero:
* When the eigenvalues of the function's Hessian matrix at the zero-gradient position are all positive, we have a local minimum for the function.
* When the eigenvalues of the function's Hessian matrix at the zero-gradient position are all negative, we have a local maximum for the function.
* When the eigenvalues of the function's Hessian matrix at the zero-gradient position are negative and positive, we have a saddle point for the function.
For high-dimensional problems the likelihood that at least some of the eigenvalues are negative is quite high. This makes saddle points more likely than local minima. We will discuss some exceptions to this situation in the next section when introducing convexity. In short, convex functions are those where the eigenvalues of the Hessian are never negative. Sadly, though, most deep learning problems do not fall into this category. Nonetheless it's a great tool to study optimization algorithms.
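To make the eigenvalue test concrete, here is a short check for the saddle example $f(x, y) = x^2 - y^2$ from above, whose Hessian is constant:
```
# Hessian of f(x, y) = x^2 - y^2 is [[2, 0], [0, -2]] everywhere, in particular at the
# zero-gradient point (0, 0); one positive and one negative eigenvalue => saddle point
hessian = np.array([[2.0, 0.0], [0.0, -2.0]])
print(np.linalg.eigvalsh(hessian))  # [-2.  2.]
```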
### Vanishing Gradients
Probably the most insidious problem to encounter is vanishing gradients. For instance, assume that we want to minimize the function $f(x) = \tanh(x)$ and we happen to get started at $x = 4$. As we can see, the gradient of $f$ is close to nil. More specifically $f'(x) = 1 - \tanh^2(x)$ and thus $f'(4) = 0.0013$. Consequently optimization will get stuck for a long time before we make progress. This turns out to be one of the reasons that training deep learning models was quite tricky prior to the introduction of the ReLU activation function.
```
x = np.arange(-2.0, 5.0, 0.01)
fig, = d2l.plt.plot(x, np.tanh(x))
fig.axes.annotate('vanishing gradient', xy=(4, 1), xytext=(2, 0.0),
arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('f(x)');
```
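The gradient value quoted above is easy to verify numerically:
```
# f(x) = tanh(x) has derivative f'(x) = 1 - tanh(x)**2; at x = 4 this is roughly 0.0013
print(1 - np.tanh(4.0)**2)
```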
As we saw, optimization for deep learning is full of challenges. Fortunately there exists a robust range of algorithms that perform well and that are easy to use even for beginners. Furthermore, it isn't really necessary to find *the* best solution. Local optima or even approximate solutions thereof are still very useful.
## Summary
* Minimizing the training error does *not* guarantee that we find the best set of parameters to minimize the expected error.
* The optimization problems may have many local minima.
* The problem may have even more saddle points, as generally the problems are not convex.
* Vanishing gradients can cause optimization to stall. Often a reparametrization of the problem helps. Good initialization of the parameters can be beneficial, too.
## Exercises
1. Consider a simple multilayer perceptron with a single hidden layer of, say, $d$ dimensions in the hidden layer and a single output. Show that for any local minimum there are at least $d!$ equivalent solutions that behave identically.
1. Assume that we have a symmetric random matrix $\mathbf{M}$ where the entries $M_{ij} = M_{ji}$ are each drawn from some probability distribution $p_{ij}$. Furthermore assume that $p_{ij}(x) = p_{ij}(-x)$, i.e. that the distribution is symmetric (see e.g. [1] for details).
* Prove that the distribution over eigenvalues is also symmetric. That is, for any eigenvector $\mathbf{v}$ the probability that the associated eigenvalue $\lambda$ satisfies $\Pr(\lambda > 0) = \Pr(\lambda < 0)$.
* Why does the above *not* imply $\Pr(\lambda > 0) = 0.5$?
1. What other challenges involved in deep learning optimization can you think of?
1. Assume that you want to balance a (real) ball on a (real) saddle.
* Why is this hard?
* Can you exploit this effect also for optimization algorithms?
## References
[1] Wigner, E. P. (1958). On the distribution of the roots of certain symmetric matrices. Annals of Mathematics, 325-327.
|
github_jupyter
|
import sys
sys.path.insert(0, '..')
%matplotlib inline
import d2l
from mpl_toolkits import mplot3d
import numpy as np
def f(x): return x * np.cos(np.pi * x)
def g(x): return f(x) + 0.2 * np.cos(5 * np.pi * x)
d2l.set_figsize((4.5, 2.5))
x = np.arange(0.5, 1.5, 0.01)
fig, = d2l.plt.plot(x, f(x))
fig, = d2l.plt.plot(x, g(x))
fig.axes.annotate('empirical risk', xy=(1.02, -1.21), xytext=(0.5, -1.15),
arrowprops=dict(arrowstyle='->'))
fig.axes.annotate('expected risk', xy=(1.1, -1.05), xytext=(0.95, -0.5),
arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('risk');
x = np.arange(-1.0, 2.0, 0.01)
fig, = d2l.plt.plot(x, f(x))
fig.axes.annotate('local minimum', xy=(-0.3, -0.25), xytext=(-0.77, -1.0),
arrowprops=dict(arrowstyle='->'))
fig.axes.annotate('global minimum', xy=(1.1, -0.95), xytext=(0.6, 0.8),
arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('f(x)');
x = np.arange(-2.0, 2.0, 0.01)
fig, = d2l.plt.plot(x, x**3)
fig.axes.annotate('saddle point', xy=(0, -0.2), xytext=(-0.52, -5.0),
arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('f(x)');
x, y = np.mgrid[-1: 1: 101j, -1: 1: 101j]
z = x**2 - y**2
ax = d2l.plt.figure().add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z, **{'rstride': 10, 'cstride': 10})
ax.plot([0], [0], [0], 'rx')
ticks = [-1, 0, 1]
d2l.plt.xticks(ticks)
d2l.plt.yticks(ticks)
ax.set_zticks(ticks)
d2l.plt.xlabel('x')
d2l.plt.ylabel('y');
x = np.arange(-2.0, 5.0, 0.01)
fig, = d2l.plt.plot(x, np.tanh(x))
fig.axes.annotate('vanishing gradient', xy=(4, 1), xytext=(2, 0.0),
arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('f(x)');
| 0.355999 | 0.993698 |
# Analysis Report
We report the following SageMaker analysis.
## Explanations of label0
The model has `2` input features. We computed KernelShap on the dataset `dataset` and display up to `10` features with the greatest feature attribution.
[Figure: bar chart of the top feature attributions computed with KernelShap; embedded base64 PNG omitted.]
nm0KFDznJJZvjw4W59DQsLM927d3e+fv31140ks2DBAre6aWlpxhhjfv75ZyPJTJ8+/aZzcb2stH/16lWTnJzssu333383xYoVMz179nSWnT17NtNxPfTQQ6Z69erm8uXLLu3Xr1/fVKhQwVk2cOBA43A4zJYtW5xlsbGxJjg42GUOz5w5Y7y8vEx0dLRJTU111n3//feNJPPJJ584yxo3bmwkmcmTJ7v0KTU11YSGhpqOHTu6lI8fP944HA5z8OBBt3Fk5MKFC0aSuXDhQpbqAwCAv5Zs3/PZv39/l9cDBw6UJC1dutRZ1rhxY1WtWtX5OjU1Vd98843atm2r8PBwZ3mJEiXUuXNn/fDDD4qPj3e2U7duXdWuXdtZLyQkRF26dMlulzV//nzVqFHDeWXwWg6HI9vt3kr7np6e8vLykvTHfbBxcXG6evWqHnjgAW3evPmmx4iLi9N3332nmJgYXbx4UefOndO5c+cUGxur5s2ba9++fTp+/LgkadmyZapXr55q1qzp3D84ONhtDleuXKmUlBQNHjxYHh7//5To3bu3AgMDtWTJEpf63t7eevrpp13KPDw81KVLFy1evFgXL150ls+aNUv169dX2bJlbzo2AABw58t2+KxQoYLL63LlysnDw8PlPsLrA8fZs2eVmJiY4f1/VapUUVpamo4dOyZJOnLkiNsxJOXo3sEDBw6oWrVq2d4/t9r/9NNPdd9998nHx0eFCxdWSEiIlixZogsXLtx03/3798sYo9dee00hISEuP+lfbZ85c0bSH3NYvnx5tzauLzty5Igk97n18vJSeHi4c3u6UqVKOQP0tbp166akpCQtXLhQkrR3715t2rRJTz311E3HBQAA7g7ZvufzehldOczJ/Zm5ITU1NU+Pn5GZM2eqR48eatu2rV588UUVLVpUnp6eeuutt3TgwIGb7p/+wNULL7yg5s2bZ1gno8CZmzL7e61ataoiIiI0c+ZMdevWTTNnzpSXl5diYmL+1P4AAIC/jmyHz3379rlc2dy/f7/S0tJUpkyZTPcJCQmRn5+f9u7d67Ztz5498vDwUOnSpSVJYWFh2rdvn1u9jPYtVKiQ21PZKSkpOnnypEtZuXLltGPHjhsNK0dfv2el/Xnz5ik8PFwLFixwOdb1D+Rk1o/02xXy58+vZs2a3fBYYWFh2r9/v1v59WVhYWGS/pjba2+HSElJ0aFDh256nGt169ZNzz//vE6ePKnZs2erZcuWKlSoUJb3BwAAd7Zsf+3+wQcfuLyeOHGiJOmRRx7JdB9PT09FR0dr0aJFLl/Pnz59WrNnz1ZkZKQCAwMlSY8++qg2bNign376yVnv7NmzmjVrllu75cqV0/fff+9SNnXqVLcrn0888YS2bdvm/Fr4WsYYSZK/v78kuYXZrMhK+56eni6vJWnjxo1av369S30/P78M+1G0aFFFRUVpypQpbuFa+mOO0jVv3lzr1693WTA/Li7ObQ6bNWsmLy8v/fvf/3bp17Rp03ThwgW1bNnyRsN20alTJzkcDg0aNEgHDx5U165ds7wvAAC48znMtWkjC9KX5KlevbrKlCmjFi1aaP369Zo5c6Y6d+7sDDYOh0P9+/fX+++/77L/zp07VadOHRUsWFD9+vVTvnz5NGXKFB0/ftxlqaWTJ0+qevXqSktL06BBg1yWWtq+fbsOHTrkvMo6ZcoU9e3bV+3atdPDDz+sbdu2afny5bp48aJatmzpXKczISFBderU0d69e9WzZ09FREQoLi5Oixcv1uTJk1WjRg1duXJFRYsWVbFixfTiiy/K399fderUydIDM1lpf/r06erZs6fatGmjli1b6tChQ5o8ebJKlSqlhIQEl1B+7733Ki4uTq+99pqCg4NVrVo1VatWTbt27VJkZKQ8PDzUu3dvhYeH6/Tp01q/fr1+++03bdu2TZJ07Ngx3XfffcqXL58GDhzoXGrJx8dHW7du1eHDh51XPdP/XqOjo9WmTRvt3btXkyZNUq1atVyWWoqKitK5c+dueIW3devW+vrrr1WwYEGdOnUqw6WdMhMfH6+goCBduHDB+R8RAABwB7nVx+PTl1ratWuXad++vSlQoIApVKiQGTBggElKSnLWk2T69++fYRubN282zZs3NwEBAcbPz880adLErFu3zq3e9u3bTePGjY2Pj48pVaqUGTVqlJk2bZrbUkupqalm6NChpkiRIsbPz880b97c7N+/322pJWP+WGpowIABplSpUsbLy8uEhoaa7t27m3PnzjnrLFq0yFStWtXky5fvlpdduln7aWlp5s033zRhYWHG29vb3H///ebrr7823bt3N2FhYS5trVu3zkRERBgvLy+3ZZcOHDhgunXrZooXL27y589vSpUqZVq1amXmzZvn0saWLVtMw4YNjbe3twkNDTVvvfWW+fe//20kmVOnTrnUff/9903lypVN/vz5TbFixcwzzzxjfv/9d5c6jRs3Nvfee+8N52Du3LlGkunTp0+W5y0dSy0BAHBny/aVz7Nnz6pIkSK5nYVhweDBgzVlyhQlJCQ4bwPITYsWLVLbtm31/fffOxfqzyqufAIAcGez9rvdkTeSkpJcXsfGxurzzz9XZGTknxI8Jemjjz5SeHi4IiMj/5T2AQDAX1euLbV0J0tKSrrpGpzBwcEZrn2Z1+rVq6eoqChVqVJFp0+f1rRp0xQfH6/XXnst14/1xRdfaPv27VqyZIkmTJiQKwv3AwCAOwvhMwvmzJnj9ht9rrdq1SpFRUXZ6dAtePTRRzVv3jxNnTpVDodDtWrV0rRp09SoUaNcP1anTp0UEBCgXr16qV+/frnePgAA+Ou75Xs+70YnT57Uzp07b1gnIiKC9SxzAfd8AgBwZ+PKZxaUKFFCJUqUyOtuAAAA/OXxwBEAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAAr
CF8AgAAwBrCJwAAAKwhfAIAAMCafHndAeBaxhhJUnx8fB73BAAA3KoCBQrI4XDcsA7hE7eV2NhYSVLp0qXzuCcAAOBWXbhwQYGBgTesQ/jEbSU4OFiSdPToUQUFBeVxb/JGfHy8SpcurWPHjt30H/CdijlgDu728UvMgcQcSH+9OShQoMBN6xA+cVvx8PjjNuSgoKC/xD+yP1NgYCBzwBzc9XNwt49fYg4k5kC6s+aAB44AAABgDeETAAAA1hA+cVvx9vbW8OHD5e3tndddyTPMAXMgMQd3+/gl5kBiDqQ7cw4cJn1tGwAAAOBPxpVPAAAAWEP4BAAAgDWETwAAAFhD+AQAAIA1hE/kuuTkZA0dOlQlS5aUr6+v6tSpoxUrVmRp3+PHjysmJkYFCxZUYGCgHnvsMR08eDDDutOmTVOVKlXk4+OjChUqaOLEibk5jBzJ7hwsWLBAHTt2VHh4uPz8/FSpUiX94x//0Pnz593qlilTRg6Hw+2nb9++f8KIbl1252DEiBEZjsvHxyfD+rfreZDd8Wf29+pwOFShQgWXupnVe/vtt/+sYd2ShIQEDR8+XC1atFBwcLAcDodmzJiR5f3Pnz+vPn36KCQkRP7+/mrSpIk2b96cYd3FixerVq1a8vHx0T333KPhw4fr6tWruTSS7MvJHHz77bfq2bOnKlasKD8/P4WHh+tvf/ubTp486VY3Kioqw3OhRYsWuTyiW5eTOZgxY0am5/mpU6fc6t+O50FOxp/Z36vD4VD+/Pld6t7unwnX4jccIdf16NFD8+bN0+DBg1WhQgXNmDFDjz76qFatWqXIyMhM90tISFCTJk104cIFvfLKK8qfP7/ee+89NW7cWFu3blXhwoWddadMmaK+ffvqiSee0PPPP6+1a9fq2WefVWJiooYOHWpjmDeU3Tno06ePSpYsqa5du+qee+7R//73P73//vtaunSpNm/eLF9fX5f6NWvW1D/+8Q+XsooVK/4pY7pV2Z2DdB9++KECAgKcrz09Pd3q3M7nQXbH/69//UsJCQkuZUeOHNGrr76q6Ohot/oPP/ywunXr5lJ2//33584gcujcuXN64403dM8996hGjRpavXp1lvdNS0tTy5YttW3bNr344osqUqSIJk2apKioKG3atMkliP/3v/9V27ZtFRUVpYkTJ+p///ufRo8erTNnzujDDz/8E0aWdTmZg6FDhyouLk4dOnRQhQoVdPDgQb3//vv6+uuvtXXrVhUvXtylfmhoqN566y2XspIlS+bGMHIkJ3OQ7o033lDZsmVdygoWLOjy+nY9D3Iy/mHDhulvf/ubS9mlS5fUt2/fDN8PbufPBBcGyEUbN240ksy7777rLEtKSjLlypUz9erVu+G+Y8eONZLMTz/95CzbvXu38fT0NC+//LKzLDEx0RQuXNi0bNnSZf8uXboYf39/ExcXl0ujyZ6czMGqVavcyj799FMjyXz00Ucu5WFhYW5zcLvIyRwMHz7cSDJnz569Yb3b+TzIyfgzMmrUKCPJ/Pjjjy7lkkz//v1z3N8/y+XLl83JkyeNMcb8/PPPRpKZPn16lvadM2eOkWS+/PJLZ9mZM2dMwYIFTadOnVzqVq1a1dSoUcNcuXLFWTZs2DDjcDjM7t27cz6QHMjJHKxZs8akpqa6lUkyw4YNcylv3Lixuffee3Olz7ktJ3Mwffp0I8n8/PPPN617u54HORl/Rj7//HMjycyaNcul/Hb+TLgeX7sjV82bN0+enp7q06ePs8zHx0e9evXS+vXrdezYsRvu++CDD+rBBx90llWuXFkPPfSQ5s6d6yxbtWqVYmNj1a9fP5f9+/fvr0uXLmnJkiW5OKJbl5M5iIqKcit7/PHHJUm7d+/OcJ+UlBRdunQpZ53OZTmZg3TGGMXHx8tkshTx7Xwe5Mb4rzV79myVLVtW9evXz3B7UlKSLl++nKM+/xm8vb3drs5l1bx581SsWDG1a9fOWRYSEqKYmBgtWrRIycnJkqRdu3Zp165d6tOnj/Ll+/9f5vXr10/GGM2bNy9ng8ihnMxBo0aN5OHh4VYWHByc6fvB1atX3a6c57WczMG1Ll68qNTU1Ay33c7nQW6NP93s2bPl7++vxx57LMPtt+NnwvUIn8hVW7ZsUcWKFRUYGOhSXrt2bUnS1q1bM9wvLS1N27dv1wMPPOC2rXbt2jpw4IAuXrzoPIYkt7oRERHy8PBwbs8r2Z2DzKTf11SkSBG3bd999538/PwUEBCgMmXKaMKECdnrdC7LjTkIDw9XUFCQChQooK5du+r06dNux5Buz/MgN8+BLVu2aPfu3ercuXOG22fMmCF/f3/5+vqqatWqmj17drb7fTvZsmWLatWq5Ra+ateurcTERP3666/OepL7eVCyZEmFhobm+ftBbktISFBCQkKG7we//vqr/P39VaBAARUvXlyvvfaarly5kge9zH1NmjRRYGCg/Pz81KZNG+3bt89l+91yHpw9e1YrVqxQ27Zt5e/v77b9dv1MuB73fCJXnTx5UiVKlHArTy87ceJEhvvFxcUpOTn5pvtWqlRJJ0+elKenp4oWLepSz8vLS4ULF870GLZkdw4yM3bsWHl6eqp9+/Yu5ffdd58iIyNVqVIlxcbGasaMGRo8eLBOnDihsWPHZn8AuSAnc1CoUCENGDBA9erVk7e3t9auXasPPvhAP/30k3755RdnoLudz4PcPAdmzZolSerSpYvbtvr16ysmJkZly5bViRMn9MEHH6hLly66cOGCnnnmmWz2/vZw8uRJNWrUyK382jmsXr268+GbzOY7r98Pctu//vUvpaSkqGPHji7l5cqVU5MmTVS9enVdunRJ8+bN0+jRo/Xrr79qzpw5edTbnPPz81OPHj2c4XPTpk0aP3686tevr82bN6t06dKSdNecB3PmzNHVq1czfD+4nT8Trkf4RK5KSkrK8PfPpj+pnJSUlOl+krK0b1JSkry8vDJsx8fHJ9Nj2JLdOcjI7NmzNW3aNA0ZMsTtSefFixe7vH766af1yCOPaPz48Ro4cKBCQ0Oz0fvckZM5GDRokMvrJ554QrVr11aXLl00adIkvfTSS842btfzILfOgbS0NH3xxRe6//77VaVKFbftP/74o8vrnj17KiIiQq+88op69Ojh9oDaX0lW5/Bm7x3x8fF/Yi/t+v777zVy5EjFxMSoadOmLtumTZvm8vqpp55Snz599NFHH+m5555T3bp1bXY118TExCgmJsb5um3btmrevLkaNWqkMWPGaPLkyZLunvNg9uzZCgkJ0cMPP+y27Xb+TLgeX7sjV/n6+jrvxbpW+v1omX0YppdnZV9fX1+lpKRk2M7ly5fz/AM3u3NwvbVr16pXr15q3ry5xowZc9P6DodDzz33nK5evZqtp0lzU27NQbrOnTurePHiWrlypcsxbtfzILfGv2bNGh0/fjzDqxwZ8fLy0oABA3T+/Hlt2rQp6x2+DWV1Dm/23pHX7we5Zc+ePXr88cdVrVo1ffzxx1naJ/2p52v/3dwJIiMjVadOHbf3A+nOPg8OHjyo
9evXq2PHji73tWbmdvpMuB7hE7mqRIkSGa5Bl16W2bIfwcHB8vb2ztK+JUqUUGpqqs6cOeNSLyUlRbGxsXm+tEh25+Ba27ZtU5s2bVStWjXNmzcvS280kpxfQcXFxd1Cj3NfbszB9UqXLu0yrtv5PMit8c+aNUseHh7q1KlTlo99u5wDOZXVOUz/mjWzunn9fpAbjh07pujoaAUFBWnp0qUqUKBAlva7U86FjGT0fiDd2edB+v3cWf3PqHT7ngOET+SqmjVr6tdff3X7imPjxo3O7Rnx8PBQ9erV9csvv7ht27hxo8LDw51vuOltXF/3l19+UVpaWqbHsCW7c5DuwIEDatGihYoWLaqlS5e6rHV5M+kL8oeEhNxap3NZTufgesYYHT582GVct/N5kBvjT05O1vz58xUVFXVLH5y3yzmQUzVr1tTmzZuVlpbmUr5x40b5+fk51y7M7Dw4ceKEfvvttzx/P8ip2NhYRUdHKzk5WcuXL8/wnsbM3CnnQkYOHjyYpfeDO+U8kP4In+XKlbulWyhu23MgTxd6wh1nw4YNbusbXr582ZQvX97UqVPHWXbkyBG3ddfefvttt/Xc9uzZYzw9Pc3QoUOdZYmJiSY4ONi0atXKZf+uXbsaPz8/Exsbm9vDuiU5mYOTJ0+a8PBwU7JkSXPo0KFMjxEbG2uuXr3qUpaSkmIaNGhgvLy8nGvK5ZWczMGZM2fc2vvggw+MJDN+/Hhn2e18HuRk/OkWLFhgJJlp06ZluD2jeYqPjzflypUzRYoUMcnJyTkcRe660fqGJ06cMLt37zYpKSnOsi+++MJtnc+zZ8+aggULmo4dO7rsX7lyZVOjRg2XfxOvvvqqcTgcZteuXbk/mGy61TlISEgwtWvXNgUKFDC//PJLpu1euHDBXL582aUsLS3NdOzY0UgymzZtyrUx5NStzkFG5/mSJUuMJPPss8+6lP8VzoNbHX+6zZs3G0nmtddey7Dd2/0z4XqET+S6Dh06mHz58pkXX3zRTJkyxdSvX9/ky5fPrFmzxlmncePG5vr/+6R/cBYtWtS888475r333jOlS5c2JUuWdHsDSg8j7du3Nx999JHp1q2bkWTGjBljZYw3k905qFGjhpFkhgwZYj7//HOXn2+++cZZb/r06aZcuXJm6NChZvLkyebNN9801apVM5LMm2++aW2cN5LdOfD19TU9evQw48aNMx988IHp1KmTcTgcpmbNmubSpUsudW/n8yC740/3xBNPGG9vb3P+/PkMtw8fPtzUqFHDvPrqq2bq1Klm5MiRJiwszDgcDjNz5sw/ZUzZMXHiRDNq1CjzzDPPGEmmXbt2ZtSoUWbUqFHOsXXv3t1IcvkP19WrV03dunVNQECAGTlypPnggw/MvffeawoUKGD27NnjcoyvvvrKOBwO07RpUzN16lTz7LPPGg8PD9O7d2+bQ81UdufgscceM5JMz5493d4PFi5c6Ky3atUqU7x4cfPcc8+ZDz74wPzzn/80DRo0MJJMnz59LI82Y9mdg/Lly5sOHTqYsWPHmsmTJ5s+ffqYfPnymdKlS5tTp065HON2Pg+yO/50//jHP4wkt3M/3V/hM+FahE/kuqSkJPPCCy+Y4sWLG29vb/Pggw+aZcuWudTJ7EP32LFjpn379iYwMNAEBASYVq1amX379mV4nKlTp5pKlSoZLy8vU65cOfPee++ZtLS0P2VMtyq7cyAp05/GjRs76/3yyy+mdevWplSpUsbLy8sEBASYyMhIM3fuXBvDy5LszsHf/vY3U7VqVVOgQAGTP39+U758eTN06FATHx+f4XFu1/MgJ/8OLly4YHx8fEy7du0ybf+bb74xDz/8sClevLjJnz+/KViwoImOjjbffvttro8lJ8LCwjI9p9M/ZDP70I2LizO9evUyhQsXNn5+fqZx48aZ/qabhQsXmpo1axpvb28TGhpqXn311QyvIOWF7M7BjfYLCwtz1jt48KDp0KGDKVOmjPHx8TF+fn4mIiLCTJ48+bb4t2BM9udg2LBhpmbNmiYoKMjkz5/f3HPPPeaZZ55xC57pbtfzICf/DlJTU02pUqVMrVq1Mm3/r/CZcC2HMZn8+hAAAAAgl/HAEQAAAKwhfAIAAMAawicAAACsIXwCAADAGsInAAAArCF8AgAAwBrCJwAAAKwhfAIAAMAawicAAACsIXwCQDY4HA5FRUXlqI0ePXrI4XDo8OHDudKn661evVoOh0MjRoz4U9pHxnLj3ADuZIRPAHe9rVu3qm/fvqpataoCAwPl5eWl4sWL6+GHH9a4ceN09uzZvO6iNTt27FD37t1VpkwZeXt7KygoSOXLl1e7du00YcIEXfsbmQ8fPiyHw6EWLVpk2l56AO7bt2+mdY4cOSJPT085HA69++67N23r2h8fHx+Fh4erd+/ef1qIB5C78uV1BwAgr6SlpWnIkCEaN26cPD091ahRI0VHR8vf319nzpzR+vXr9cILL2j48OHau3evSpUqlddd/lOtWLFCrVq10tWrV9WsWTM9/vjj8vHx0YEDB7RmzRotXLhQ/fv3V758ufvR8cknnygtLU0Oh0OffPKJXnzxxRvWj4iIUKtWrSRJ58+f1+rVq/Xxxx9r/vz52rhxoypUqJCr/QOQuwifAO5aw4YN07hx41SrVi3NmTNH5cuXd6uzefNmDR06VElJSXnQQ7ueeeYZpaamauXKlWrSpInLNmOMvvnmG3l6eubqMdPS0jRjxgwVKVJErVq10owZM7Ru3TrVr18/030eeOABl1sJjDHq3r27Pv/8c40ZM0YzZszI1T4CyF187Q7grvTrr7/q3XffVUhIiJYtW5Zh8JSkWrVqacWKFSpTpkyW2j137pwGDx6ssmXLytvbW0WLFlVMTIx27NiR6T5paWl65513VKFCBfn4+Khs2bJ64403dOXKFZd6KSkpmjhxopo3b67SpUs722/Xrp22bNmS5bFn5MyZMzpw4ICqVavmFjylP+5jbN68uRwOR46Oc70VK1bo6NGjevLJJ9WrVy9J0rRp026pDYfDof79+0uSfv755xvWHTVqlBwOhz777LMMty9YsEAOh0PDhg1zli1cuFCdOnVS+fLl5efnp6CgIDVs2FDz58/Pch+joqIynbsb3fu7aNEiPfTQQypUqJB8fHxUrVo1/fOf/1RqaqpLvbS0NH388ceqXbu2goOD5evrq9DQULVu3VqrV6/Ocj8BGwifAO5Kn376qVJTU/X3v/9dISEhN62fla+az549q7p162rChAkqU6aMnn/+eTVt2lQLFixQnTp19MMPP2S43+DBgzV27Fg1a9ZMAwcOlLe3t4YPH65OnTq51IuLi9PgwYOVnJysRx99VM8995yioqK0dOlS1a9f/6bB60aCgoKUL18+nTx5UpcuXcp2O7cqPWh269ZNkZGRCg8P19y5c5WQkJCt9m4Wjrt27SqHw6GZM2dmuP3zzz+XJD311FPOspdfflk7d+5UZGSkBg0apA4dOmjv3r1q3769Jk6cmK1+ZsXLL7+stm3bau/evWrXrp369es
nX19fvfjii3ryySfd6vbu3VtxcXHq3LmzBg8erKZNm2rnzp1auXLln9ZHIDv42h3AXWn9+vWSlOFVvuwaOnSoDhw4oJdffllvvvmms3zp0qVq2bKlnn76ae3du1ceHq7/79+wYYO2bdum0NBQSdKYMWP08MMPa/78+Zo/f76eeOIJSVKhQoV09OhRt3tPd+7cqbp16+qVV17RihUrstV3b29vtWnTRgsWLFC9evXUu3dv1a9fX9WrV5eXl9cN992/f3+mT9Tf6CGg2NhYLVq0SJUrV9aDDz4o6Y9w+MYbb2jOnDnOK6E3Y4zRhx9+KEmqXbv2DeuWLVtWDRo00HfffaeTJ0+qRIkSzm1xcXFaunSpHnjgAVWuXNlZvnTpUoWHh7u0k5CQoPr16+u1115Tr1695Ofnl6W+ZtWKFSv09ttvq3nz5po/f778/f0l/THWfv36afLkyS7nxscff6ySJUtq+/btbn2Ji4vL1b4BOWYA4C5UpUoVI8ns3r3bbduqVavM8OHDXX5WrVrlUkeSady4sfN1cnKy8fHxMYULFzaXLl1ya/Phhx82ksz333/vLOvevbuRZEaPHu1Wf+3atUaSadWqVZbG07p1a+Pl5WVSUlJcxiHJDB8+PEttnDt3zrRu3dpIcv54eXmZ+vXrmwkTJpjExESX+ocOHXKpe6Ofv//9727He++994wkM2bMGGfZ/v37jSRTr149t/rp44mIiHD+vQwePNjUrFnTSDLBwcFm//79Nx3nlClTjCQzbtw4l/JJkyYZSeZf//pXluZr3LhxRpJZvXq1S/n154YxxjRu3Nhk9pGbfh4cOnTIWdamTRsjyRw5csSt/vnz543D4TBPPPGEsyw4ONiUKVPGXL58OUt9B/ISVz4B4DqrV6/WyJEj3cpvtHbjnj17dPnyZTVp0iTDq2BNmjTRihUrtHXrVjVs2NBl2/WvJalevXrKly+f272cW7du1TvvvKMffvhBp06dcrsv9Ny5cy5X825F4cKFtXjxYu3bt0/Lli3TTz/9pA0bNmjdunVat26dPvroI61Zs0bBwcEu+zVv3lzLli3LsM3Vq1dnenV52rRpcjgc6tq1q7OsXLlyql+/vtatW6fdu3erSpUqbvtt2rRJmzZtkiR5eXmpVKlS6t27t4YNG6awsLCbjjMmJkbPPvusPv/8cz3//PPO8pkzZypfvnxutzucOXNGb7/9tv773//qyJEjbg+fnThx4qbHvFUbNmyQv7+/Pvnkkwy3+/r6as+ePc7XTz75pCZNmqRq1arpySefVJMmTVSvXj35+vrmet+AnCJ8ArgrFStWTLt379aJEydcvmKVpBEjRji/Rv7iiy/cwkhG4uPjne1mJD0Qpte7vi/X8/T0VOHChXXhwgVn2bp169S0aVNJUnR0tCpUqKCAgAA5HA793//9n7Zt26bk5OSb9vVmKlSo4LJc0datW9W1a1ft2LFDI0eO1IQJE3J8jI0bN2rHjh1q0qSJ7rnnHpdt3bp107p16/TJJ59kuO7n3//+d02ePDnbxy5YsKBatWql+fPna9euXapataoOHDigdevW6dFHH1XRokWddePi4vTggw/q6NGjatCggZo1a6aCBQvK09NTW7du1aJFi3Jlzq8XFxenq1evZvifoHTX3ps7YcIElS1bVtOnT9fo0aM1evRo+fj4KCYmRuPGjVORIkVyvY9AdvHAEYC7UvpSPqtWrcqV9gIDAyVJp0+fznD7qVOnXOpdK6N9UlNTFRsbq6CgIGfZmDFjlJycrJUrV2rx4sUaN26cRo4cqREjRqh48eK5MYwM1axZ0/lgzXfffZcrbaY/aLRq1Sq3hePTF6T/7LPP3K7s5pb0B4rSHzBKfwDp2geN0vt59OhRjRo1Sj/88IMmTpyoUaNGacSIEapbt26Wj5d+n+/Vq1fdtl37H4x0gYGBKly4sIwxmf4cOnTIWT9fvnx64YUXtHPnTh0/flyzZ89Ww4YN9dlnn6lLly5Z7idgA+ETwF2pe/fu8vDw0NSpU3Xu3Lkct1e5cmX5+Pjo559/VmJiotv29OVuatas6bZt7dq1bmXr16/X1atXdf/99zvLDhw4oODgYEVGRrrUTUxM1ObNm3M2gJsICAjItbYuXbqkL774Qn5+furVq1eGP/fdd5/OnDmjr7/+OteOe61HH31UhQsX1uzZs5WWlqZZs2apQIECeuyxx1zqHThwQJLcyqWM/94yU6hQIUnS8ePHXcrT0tK0bds2t/p16tRRbGys9u3bl+VjpCtZsqQ6derkXEJs5cqVd8U6tfjrIHwCuCtVrFhRQ4YM0ZkzZ/TII49o//79GdY7f/58ltrz8vJSp06ddO7cOb311lsu25YtW6bly5erfPnyatCggdu+EyZM0G+//eZ8nZKS4lxnskePHs7ysLAw/f7779q5c6ezLDU1VS+88EKOfwXopUuXNGbMmAyD+NWrV51ff18ffLPjyy+/1MWLF9W+fXt9/PHHGf6kH+9W1/zMqvz586tjx446evSo3nnnHe3bt09PPPGE2z2S6feQXr9M1uzZs7V06dIsHy/9af7rF8AfP368yxXMdM8++6wkqWfPnoqNjXXbfurUKe3evVuSlJycrHXr1rnVuXTpkhISEpQ/f363FRaAvMQ9nwDuWmPGjFFKSorGjx+vypUrq1GjRqpRo4b8/Px05swZbd++XT/99JMCAgIyvGJ5vbFjx2rNmjUaPXq01q1bpzp16ujw4cP68ssv5efnp+nTp2cYAurWrasaNWqoY8eO8vf311dffeVc2zF9KR1JGjhwoL755htFRkYqJiZGPj4+Wr16tY4fP66oqKgcLSZ+5coVvfrqqxoxYoTq1aunGjVqKDAwUKdPn9by5cv122+/qWzZsho+fHi2j5EuPVA+/fTTmdZp1qyZQkNDtWzZMp04cUIlS5bM8XGv99RTT2nSpEl6/fXXna8zqjN27FgNHDhQq1atUlhYmLZt26Zvv/1W7dq104IFC7J0rKefflrvvPOORowYoa1bt6pcuXL65ZdftGPHDjVu3Fhr1qxxqd+iRQu99tprGjVqlMqXL68WLVooLCxMsbGx2r9/v9auXavRo0erSpUqSkpKUoMGDVSxYkVFRETonnvuUUJCgr7++mudOnVKL7zwgry9vXM+YUBuycMn7QHgtrB582bTp08fU7lyZRMQEGDy589vihUrZpo2bWreffddc/r0abd9lMFyOsYYc/bsWfPss8+asLAwkz9/flOkSBHTvn1787///c+tbvoSOwcOHDBvv/22KV++vPHy8jJhYWFmxIgRJjk52W2fefPmmVq1ahk/Pz9TpEgRExMTYw4cOJDhcj23stRSamqqWbp0qRk0aJCJiIgwxYoVM/ny5TOBgYHmgQceMCNHjjTnz5932Sd9qaXmzZtn2m56H9KXWtqzZ4+RZMqWLWvS0tJu2Kdhw4a5LMV0fVu5oUKFCkaSCQ0NNampqRnW2bp1q4mOjjaFChUyBQoUMI0bNzYrV64006dPN5LM9OnTXepndm5s3brVPPTQQ8
bPz88EBgaaxx57zOzbty/Dv7t0K1asMK1btzYhISEmf/78pnjx4qZevXpm1KhR5ujRo8YYY1JSUszYsWNNdHS0CQ0NNV5eXqZYsWKmUaNGZvbs2TedZ8A2hzHG5EnqBQAAwF2Hm0AAAABgDeETAAAA1hA+AQAAYA3hEwAAANYQPgEAAGAN4RMAAADWED4BAABgDeETAAAA1hA+AQAAYA3hEwAAANYQPgEAAGAN4RMAAADW/D8ifEgQ4RekrwAAAABJRU5ErkJggg=='>
# Analysis Report
We report the following SageMaker analysis.
## Explanations of label0
The model has `2` input features. We computed Kernel SHAP values on the dataset `dataset` and display up to `10` features with the greatest feature attribution.
*(Figure: feature attribution plot for label0; embedded base64 image data omitted.)*
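This report text is generated automatically. As a rough, hypothetical illustration of the kind of computation behind such a figure, the sketch below computes Kernel SHAP attributions with the open-source `shap` package on a toy two-feature model; the model, data, and labels here are stand-ins and not the actual model or dataset analyzed in this report.
```
# Hypothetical sketch only: Kernel SHAP attributions for a toy two-feature model.
# This is not the SageMaker Clarify pipeline, just an open-source analogue.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # toy dataset with 2 input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # toy binary label ("label0")
model = LogisticRegression().fit(X, y)

# Kernel SHAP with the first 50 samples as the background set
explainer = shap.KernelExplainer(lambda a: model.predict_proba(a)[:, 1], X[:50])
shap_values = explainer.shap_values(X[:20])        # attributions for 20 samples
# The mean absolute SHAP value per feature corresponds to the bar heights
# in a feature-attribution plot like the one above.
print(np.abs(shap_values).mean(axis=0))
```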
# Classifying seismic receiver functions using logistic regression
In this lab exercise, your task is to classify seismic receiver functions into two categories: good and bad. In practice, the bad seismic traces are excluded from all further analysis, and only the good seismic traces are kept for the subsequent quantitative analysis. To perform the classification, you will implement logistic regression using the Scikit-learn package. Specifically, you are going to use the [logistic regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) module by importing it from [Scikit-Learn](https://scikit-learn.org/stable/). <br>
<br>
After finishing this exercise, you will understand: <br>
- How to implement logistic regression using Scikit-Learn; <br>
- The typical data preprocessing steps involved in many machine learning implementations; <br>
- How to evaluate a learned model; <br>
- How the learning errors of training and validation sets change as the size of training data set increases. <br>
<br>
You will also see that machine learning can significantly accelerate this classification task, from manual labeling that takes weeks to automatic labeling that takes about 20 seconds. <br>
<br>
Author: Jiajia Sun, 02/07/2019 at University of Houston.
# 1. Introduction to USArray data
The seismic data we are going to use for this lab exercise were recorded by the USArray Transportable Array (TA). The TA has traversed the continental United States and collected voluminous amounts of broadband seismic data that contain extremely rich information for mapping the structures of the Earth’s interior underneath North America. It is currently being deployed in Alaska.
Here is a picture summarizing the locations of the current USArray TA seismic receivers.
<img src="TA_AK.png">
The data we are going to classify are actually P-wave receiver functions that were computed from the raw seismic data. P-wave receiver functions are widely used in crustal studies because they provide important information about the crustal thickness. The seismological data used in our study are from earthquakes with a distance range of $30^\circ$ to $90^\circ$ and a magnitude of Mb 5 and above, recorded at 201 stations in Alaska from the TA and the Alaska Regional Network. 12,597 receiver function traces were obtained and manually labeled ‘good’ or ‘bad’.
To learn more about USArray data, please refer to the following resources: <br>
1\. http://www.usarray.org/researchers/dataas <br>
2\. http://ds.iris.edu/ds/nodes/dmc/earthscope/usarray/ <br>
3\. http://www.usarray.org/Alaska <br>
<br>
For more information on receiver functions, please refer to the following materials: <br>
1\. https://ds.iris.edu/media/workshop/2013/01/advanced-studies-institute-on-seismological-research/files/lecture_introrecf.pdf <br>
2\. http://www.diss.fu-berlin.de/diss/servlets/MCRFileNodeServlet/FUDISS_derivate_000000001205/3_Chapter3.pdf?hosts= <br>
<br>
Note that, for the purpose of this class, you do not need to know anything about seismology or receiver functions.
# 2. Import data
The following code imports the data from Traces_qc.mat.
```
import numpy as np
import h5py
# open the MATLAB v7.3 (HDF5) file in read-only mode
with h5py.File("../Traces_qc.mat", "r") as f:
    ampdata = [f[element[0]][:] for element in f["Data"]["amps"]]
    flag = [f[element[0]][:] for element in f["Data"]["Flags"]]
    ntr = [f[element[0]][:] for element in f["Data"]["ntr"]]
    time = [f[element[0]][:] for element in f["Data"]["time"]]
    staname = [f[element[0]][:] for element in f["Data"]["staname"]]

# stack the traces and labels from all 201 stations into single arrays
ampall = np.zeros((1,651))
flagall = np.zeros(1)
for i in np.arange(201):
    ampall = np.vstack((ampall, ampdata[i]))
    flagall = np.vstack((flagall, flag[i]))

# drop the all-zero row that was only used to initialize the arrays
amp_data = np.delete(ampall, 0, 0)
flag_data = np.delete(flagall, 0, 0)
```
The **amp_data** array stores the seismic amplitudes from all seismic stations. The **flag_data** array contains the label for each seismic trace. These labels are encoded as 1s and 0s, with 0 representing bad seismic traces and 1 representing good seismic traces. Recall that logistic regression is a supervised machine learning algorithm, and therefore it requires labels. Also recall that logistic regression is a binary classification algorithm, and its labels are most often expressed numerically as 0s and 1s.
Now, let us plot up a few 'good' seismic traces which are all labeled as 1s.
```
goodtraceindex = np.nonzero(flag_data)[0].reshape(-1,1)
# plot a few good traces (before scaling is applied)
import matplotlib.pyplot as plt
fig, axs = plt.subplots(1,5, figsize=(15, 6), facecolor='w', edgecolor='k')
fig.subplots_adjust(hspace = .5, wspace=.001)
fig.suptitle('a few good traces', fontsize=20)
axs = axs.ravel()
ic = 0
for icount in goodtraceindex[:5,0]:
    axs[ic].plot(amp_data[icount,:], time[0])
    axs[ic].invert_yaxis()
    axs[ic].set_xlabel('amplitude')
    axs[ic].set_ylabel('time')
    ic = ic + 1
# tight_layout() will also adjust spacing between subplots to minimize the overlaps
plt.tight_layout()
plt.show()
```
There are a few obvious features that are common to all 'good' traces. First, there is a distinct peak at time 0. Second, there is a second peak around 5 seconds (this peak actually provides information about the crustal thickness). Third, the amplitude of the peak at time 0 should be clearly higher than that of the second peak.
<br>
<font color = red>**Note**</font>: Visualizing data and getting an intuitive understanding of your data is very important. It is almost always done in practice. This is the first step to getting to know your data. Other ways of getting to know your data better include summarizing them using statistics such as max/min, mean, variance, quantiles, histograms, etc.
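As a small illustration of that advice (not part of the original lab tasks), the sketch below computes a few summary statistics and a histogram of the raw amplitudes; it assumes the `amp_data` and `flag_data` arrays and the `numpy`/`matplotlib` imports from the cells above.
```
# Illustrative sketch: quick numerical summary of the raw amplitude data.
print("shape:", amp_data.shape)                               # (number of traces, samples per trace)
print("min/max:", amp_data.min(), amp_data.max())
print("mean/std:", amp_data.mean(), amp_data.std())
print("quartiles:", np.percentile(amp_data, [25, 50, 75]))
print("good / bad traces:", int(flag_data.sum()), "/", int((flag_data == 0).sum()))

plt.hist(amp_data.ravel(), bins=100)
plt.xlabel('amplitude')
plt.ylabel('count')
plt.title('Histogram of all trace amplitudes')
plt.show()
```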
# 3. Preprocessing (i.e., preparing data for subsequent machine learning)
One common preprocessing step is to normalize your data so that they have a mean of 0 and a standard deviation of 1. The reasons for doing this are twofold. First, for practical machine learning problems, different features have different scales. For example, when it comes to predicting life satisfaction, the GDP per capita might be on the order of ~1000s, whereas the education system might be ranked on a scale of 0 to 1, with 1 representing the best education. It turns out that features with vastly different scales bias the optimization toward the ones with large values (e.g., GDP per capita instead of the education system). Secondly, for minimization, the shape of the cost function associated with features of different scales becomes elongated, making gradient descent type algorithms less efficient. <br>
<br>
Therefore, normalizing data is very commonly done in practice.
The following code shows how the scaling (or, normalizing) is typically done using Scikit-learn.
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(amp_data)
scaled_ampdata = scaler.transform(amp_data)
# To find out how many traces are labeled as 'bad'
scaled_ampdata[np.where(flag_data == 0)[0],:].shape[0]
# to find out how many traces are labeled as 'good'
scaled_ampdata[np.nonzero(flag_data)[0],:].shape[0]
# Total number of seismic traces
scaled_ampdata.shape[0]
```
Next, we need to randomly permute these data before we start doing machine learning. The reason for doing this is to avoid the situation where your training data are ordered in some specific way. For example, it might happen that all the good seismic traces are together, followed by all the bad traces. If we are not careful, our training data set might be all the good seismic traces, and our validation or test data set might be all the bad ones. This is very dangerous because your machine learning algorithm will not have any chance of learning from the bad seismic traces at the training stage, and you can expect that no matter how you train a machine learning model, it will not predict well on the validation/test data. Randomly permuting the data makes it very likely that the training set contains data from every category (good and bad), and that the validation/test set also contains data from all categories.
```
np.random.seed(42)
whole_data = np.append(scaled_ampdata,flag_data,1) # put all the seismic traces and their labels into one matrix, which contains the whole data set for subsequent machine learning.
training_data_permute = whole_data[np.random.permutation(whole_data.shape[0]),:]
```
# 4. Split the data into training and validation sets
<font color = red>**Task 1**</font>: Please write a few sentences here to explain why we do this. <font color = red>**(10 points)**</font>
```
<answer to Task 1:>
```
We have in total 12597 seismic traces. And we are going to use the first 2000 seismic traces and their corresponding labels as our training data set.
```
X_train = training_data_permute[0:2000,:-1]
y_train = training_data_permute[0:2000,-1]
```
Similarly, we are going to put aside the seismic traces with indices from 10000 to the very end as our validation (or test) data set.
```
X_validation = training_data_permute[10000:,:-1]
y_validation = training_data_permute[10000:,-1]
```
# 5. Implementing logistic regression using Scikit-learn
<font color = red>**Task 2:**</font> Please import the logistic regression module from Scikit-learn. <font color = red>**(10 points)**</font> <br>
<br>
**HINT:** If you forget how to do it, please refer back to <font color=blue>Lab3_LogisticRegression_example.ipynb</font>.
<font color = red>**Task 3**</font>: Assign the LogisticRegression method to a new variable *log_reg*. <font color = red>**(10 points)**</font>
**HINT**: Please refer back to Lab3_LogisticRegression_example.ipynb. Note that, in Lab3_LogisticRegression_example.ipynb, I set C = 10$^{10}$. However, for this exercise, please use the default value for C that comes with the logistic regression module that you just imported. That is, instead of using LogisticRegression(C=10**10, random_state=42), you should use LogisticRegression(random_state=42)
<font color = red>**Task 4:**</font> Train a logistic regression model using our training data, i.e., X_train and y_train. <font color = red> **(10 points)**</font>
**HINT:** Again, if you do not know how to do it, please take a look at Lab3_LogisticRegression_example.ipynb. Only one line of code is necessary for this task.
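For reference, a minimal sketch of what Tasks 2–4 amount to when following the hints above (try the tasks yourself first; `log_reg` is the variable name requested in Task 3):
```
# Reference sketch for Tasks 2-4, following the hints above.
from sklearn.linear_model import LogisticRegression     # Task 2: import the module

log_reg = LogisticRegression(random_state=42)            # Task 3: default C, fixed random seed
log_reg.fit(X_train, y_train)                            # Task 4: train on X_train, y_train
```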
# 6. Evaluation of the learned logistic regression model
<font color = red>**Task 5:**</font> Output the accuracy (or score) of the predictions on the **training** data set. <font color = red>**(10 points)**</font>
**HINT**: Please refer to Lab3_LogisticRegression_example.ipynb, if you are not sure what to do.
<font color = red>**Task 6:**</font> Output the accuracy (or score) of the predictions on the **validation** data set. <font color = red>**(10 points)**</font>
**HINT**: Please refer to Lab3_LogisticRegression_example.ipynb, if you are not sure what to do.
<font color = red>**Task 7:**</font> Output the error of the predictions on both the **training** and **validation** data sets. <font color = red>**(5 points)**</font>
**HINT**: error = 1 - accuracy, where accuracy is what you just obtained in Task 5 and 6.
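A minimal sketch of the evaluation steps in Tasks 5–7, assuming the fitted `log_reg` model from the sketch above:
```
# Reference sketch for Tasks 5-7: accuracy and error on both data sets.
train_accuracy = log_reg.score(X_train, y_train)                    # Task 5
validation_accuracy = log_reg.score(X_validation, y_validation)     # Task 6
print("training error:", 1 - train_accuracy)                        # Task 7
print("validation error:", 1 - validation_accuracy)
```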
# 7. Constructing error curves
<font color = red>**Task 8:**</font> So far, we have only used 2,000 seismic traces as our training data set. But remember that we can use up to 10,000 traces as our training data set (the remaining 2,597 traces were reserved for validation). For this task, create a training data set with 4000 seismic traces (do not touch the validation data set that we set previously). And
compute the errors of the predictions on both the training and validation data sets. Similarly, create training data sets with 6000, 8000 and 10000 seismic traces, and compute their respective errors on both the training and validation data sets. <font color = red>**(30 points)**</font>
**HINT:** To create a training data set with 4000 seismic traces, you can use the following codes: <br>
X_train = training_data_permute[0:4000,:-1] <br>
y_train = training_data_permute[0:4000,-1] <br>
<br>
**NOTE:** For this task, our validation data set is always the same as before, that is: <br>
X_validation = training_data_permute[10000:,:-1] <br>
y_validation = training_data_permute[10000:,-1] <br>
<br>
You do not need to do anything with the validation data set.
<font color = red>**Task 9:**</font> Store the errors of the predictions on **training** data using 2000, 4000, 6000, 8000, and 10000 seismic traces in a Numpy array **train_errors**. Similarly, store the errors of the predictions on **validation** data using 2000, 4000, 6000, 8000, and 10000 seismic traces in a Numpy array **validation_errors**. <font color = red>**(5 points)**</font> <br>
<br>
**HINT:** Your **train_errors** should look like this: train_errors = np.array([0.169, 0.17825, 0.17966667, 0.180875, 0.1827]). And your **validation_errors** should look similar (the values in the array might be different though).
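One possible way to organize Tasks 8 and 9 is a single loop over the training sizes. The sketch below assumes `training_data_permute`, `X_validation` and `y_validation` from the cells above, and it produces the `train_errors` and `validation_errors` arrays used by the plotting code that follows:
```
# Sketch for Tasks 8-9: refit the model for each training-set size and record the errors.
from sklearn.linear_model import LogisticRegression

sizes = [2000, 4000, 6000, 8000, 10000]
train_errors, validation_errors = [], []
for n in sizes:
    X_train = training_data_permute[0:n, :-1]
    y_train = training_data_permute[0:n, -1]
    model = LogisticRegression(random_state=42).fit(X_train, y_train)
    train_errors.append(1 - model.score(X_train, y_train))
    validation_errors.append(1 - model.score(X_validation, y_validation))

train_errors = np.array(train_errors)
validation_errors = np.array(validation_errors)
```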
Now let us plot up the error curves.
```
import matplotlib.pyplot as plt
trainingsize = np.array([2000,4000,6000,8000,10000])
plt.plot(trainingsize,train_errors,'-ro',label="training errors")
plt.plot(trainingsize,validation_errors,'-bo',label="validation errors")
plt.title('Learning curves',fontsize=20)
plt.legend(loc="lower right", fontsize=16)
plt.xlabel("Size of training data", fontsize=20)
plt.ylabel("Prediction error", fontsize=20, rotation=90)
plt.show()
```
**BONUS:** Summarize the change of training and validation errors as the size of the training data increases. Explain it. <font color=red>**(10 points)**</font>
# Acknowledgments
I would like to thank Ying Zhang for manually labeling all the seismic traces, and Prof. Aibing Li for making this data set available to the students in this class. Ms. Zhang also kindly explained the fundamentals of seismic P-wave receiver functions to a non-seismic person (yes, that is me!) <br>
<img src = "photo.png">
# Congratulations! You have now mastered a great skill that allows you to classify things into binary classes using logistic regression. You have also started to use Scikit-learn to do machine learning. You have achieved a lot!
# Example of polarization line fitting
In this example we demonstrate the fitting of an inter-dot transition line (also known as a polarization line) using the functions `fit_pol_all` and `polmod_all_2slopes`. This fitting is useful for determining the tunnel coupling between two quantum dots. More theoretical background about this can be found in [L. DiCarlo et al., Phys. Rev. Lett. 92, 226801 (2004)](https://doi.org/10.1103/PhysRevLett.92.226801) and [Diepen et al., Appl. Phys. Lett. 113, 033101 (2018)](https://doi.org/10.1063/1.5031034).
Sjaak van diepen - [email protected]
#### Import the modules used in this example.
```
import os
import scipy.constants
import matplotlib.pyplot as plt
%matplotlib inline
import qcodes
from qcodes.data.hdf5_format import HDF5Format
import qtt
from qtt.algorithms.tunneling import fit_pol_all, polmod_all_2slopes, plot_polarization_fit
from qtt.data import load_example_dataset
```
#### Define some physical constants.
The fitting needs some input values: Planck's constant, the Boltzmann constant and the effective electron temperature. The effective electron temperature is the temperature of the electrons in the quantum dots. A method to determine this temperature is to measure the polarization line at very low tunnel coupling and fit it with the temperature as the free parameter. Here, we estimate the electron temperature to be 75 mK.
```
h = scipy.constants.physical_constants['Planck constant in eV s'][0]*1e15 # ueV/GHz; Planck's constant in eV/Hz*1e15 -> ueV/GHz
kb = scipy.constants.physical_constants['Boltzmann constant in eV/K'][0]*1e6 # ueV/K; Boltzmann constant in eV/K*1e6 -> ueV/K
kT = 75e-3 * kb # effective electron temperature in ueV
```
#### Load example data.
Here we load an example dataset. The array 'delta' contains the difference in chemical potential between the two dots. The values for this array are in units of ueV. The fitting is not linear in the values of delta, hence to do the fitting it is easiest to convert the voltages on the gates to energies using the lever arm. The lever arm can be determined in several ways, e.g. by using photon-assisted tunneling (see example PAT), or by means of bias triangles (see example bias triangles).
The array 'signal' contains the data for the sensor signal, usually measured using RF reflectometry on a sensing dot. The units for this array are arbitrary.
```
dataset = load_example_dataset('2017-02-21/15-59-56')
detuning = dataset.delta.ndarray
signal = dataset.signal.ndarray
```
#### Fit.
The `fit_pol_all` function returns an array with the following parameters:
- fitted_parameters[0]: tunnel coupling in ueV
- fitted_parameters[1]: offset in x_data for center of transition
- fitted_parameters[2]: offset in background signal
- fitted_parameters[3]: slope of sensor signal on left side
- fitted_parameters[4]: slope of sensor signal on right side
- fitted_parameters[5]: height of transition, i.e. sensitivity for electron transition
```
fitted_parameters, _, fit_results = fit_pol_all(detuning, signal, kT)
```
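The first fitted parameter is the tunnel coupling in ueV. Since `h` was defined above in ueV/GHz, converting it to a frequency only takes a line. This is a small added sketch, assuming the parameter ordering listed above:
```
# Tunnel coupling from the fit, in ueV (first element of the fitted parameters)
t_ueV = fitted_parameters[0]
# Convert to a frequency using Planck's constant defined above in ueV/GHz
t_GHz = t_ueV / h
print('tunnel coupling: %.2f ueV = %.2f GHz' % (t_ueV, t_GHz))
```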
#### Plot the fit and the data.
```
plot_polarization_fit(detuning, signal, fit_results, fig = 100)
print(fit_results)
```
The values of the model can be calculated with the function `polmod_all_2slopes`. For example, to calculate the value of the sensor signal at zero detuning:
```
polmod_all_2slopes([0], fitted_parameters, kT)
```
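Besides the overlay produced by `plot_polarization_fit`, the residuals give a quick check of the fit quality. The sketch below assumes that `polmod_all_2slopes` also accepts an array of detuning values (as its use above suggests):
```
# Residuals between the measured signal and the fitted model
model_signal = polmod_all_2slopes(detuning, fitted_parameters, kT)
residuals = signal - model_signal
plt.figure()
plt.plot(detuning, residuals, '.')
plt.axhline(0, color='k', lw=1)
plt.xlabel('detuning (ueV)')
plt.ylabel('residual (a.u.)')
plt.title('Fit residuals')
plt.show()
```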
# Principal Component Analysis from scatch - preparations
From the [Data Science from Scratch book](https://www.oreilly.com/library/view/data-science-from/9781492041122/).
## Libraries and helper functions
```
import math as m
import random
import pandas as pd
import numpy as np
import altair as alt
from typing import List
Vector = List[float]
def add(vector1: Vector, vector2: Vector) -> Vector:
assert len(vector1) == len(vector2)
return [v1 + v2 for v1, v2 in zip(vector1, vector2)]
def subtract(vector1: Vector, vector2:Vector) -> Vector:
assert len(vector1) == len(vector2)
return [v1 - v2 for v1, v2 in zip(vector1, vector2)]
def vector_sum(vectors: List[Vector]) -> Vector:
assert vectors
vector_length = len(vectors[0])
assert all(len(v) == vector_length for v in vectors)
sums = [0] * vector_length
for vector in vectors:
sums = add(sums, vector)
return sums
def scalar_multiply(c: float, vector: Vector) -> Vector:
return [c * v for v in vector]
def vector_mean(vectors: List[Vector]) -> Vector:
    n = len(vectors)
    return scalar_multiply(1/n, vector_sum(vectors))
def dot(vector1: Vector, vector2: Vector) -> float:
assert len(vector1) == len(vector2)
return sum(v1 * v2 for v1, v2 in zip(vector1, vector2))
def sum_of_squares(v: Vector) -> float:
    return dot(v, v)
def magnitude(v: Vector) -> float:
    return m.sqrt(sum_of_squares(v))
def gradient_step(v: Vector, gradient: Vector, step_size: float) -> Vector:
"""Return vector adjusted with step. Step is gradient times step size.
"""
step = scalar_multiply(step_size, gradient)
return add(v, step)
```
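Before moving on, here is a quick added sanity check of these helpers on a couple of small vectors; the expected values are easy to verify by hand:
```
v1 = [1.0, 2.0, 3.0]
v2 = [4.0, 5.0, 6.0]
print(add(v1, v2))            # [5.0, 7.0, 9.0]
print(subtract(v2, v1))       # [3.0, 3.0, 3.0]
print(dot(v1, v2))            # 1*4 + 2*5 + 3*6 = 32.0
print(magnitude([3.0, 4.0]))  # 5.0
print(vector_mean([v1, v2]))  # [2.5, 3.5, 4.5]
```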
## Steps
```
intercept = random.randint(-30, 30)
coefficient = random.uniform(-1, 1)
n = 30
xs = np.random.randint(-50, 10 + 1, 30)
ys = np.random.randint(-20, 50 + 1, 30)
df = pd.DataFrame({'x': xs, 'y': ys})
print(intercept, coefficient)
alt.Chart(df).mark_point().encode(
alt.X('x:Q'), alt.Y('y:Q'), alt.Tooltip(['x', 'y'])
)
```
### De-meaning
```
def de_mean(data: List[Vector]) -> List[Vector]:
# mean = vector_mean(data)
return [vector - np.mean(vector) for vector in data]
xs_demean, ys_demean = de_mean([xs, ys])
df = pd.DataFrame({'x': xs_demean, 'y': ys_demean})
alt.Chart(df).mark_point().encode(
alt.X('x:Q'), alt.Y('y:Q'), alt.Tooltip(['x', 'y'])
)
```
### Direction
```
def direction(w: Vector) -> Vector:
mag = magnitude(w)
return [w_i / mag for w_i in w]
direction(xs)
xs_dir = direction(xs_demean)
ys_dir = direction(ys_demean)
df = pd.DataFrame({'x': xs_dir, 'y': ys_dir})
alt.Chart(df).mark_point().encode(
alt.X('x:Q'), alt.Y('y:Q'), alt.Tooltip(['x', 'y'])
)
```
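These preparations (de-meaning, `direction`, `magnitude`, `gradient_step`) are exactly what the book combines into the first principal component via gradient ascent on the directional variance. The rough sketch below follows the book's approach; the function names, step count and step size are choices made here rather than something defined earlier in this notebook:
```
def directional_variance(data: List[Vector], w: Vector) -> float:
    """Variance of the data in the direction of w."""
    w_dir = direction(w)
    return sum(dot(v, w_dir) ** 2 for v in data)

def directional_variance_gradient(data: List[Vector], w: Vector) -> Vector:
    """Gradient of the directional variance with respect to w."""
    w_dir = direction(w)
    return [sum(2 * dot(v, w_dir) * v[i] for v in data) for i in range(len(w))]

def first_principal_component(data: List[Vector], n_steps: int = 100, step_size: float = 0.1) -> Vector:
    guess = [1.0 for _ in data[0]]
    for _ in range(n_steps):
        gradient = directional_variance_gradient(data, guess)
        guess = gradient_step(guess, gradient, step_size)  # gradient ascent
    return direction(guess)

# The de-meaned points as a list of [x, y] vectors
demeaned = [[x, y] for x, y in zip(xs_demean, ys_demean)]
print(first_principal_component(demeaned))
```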
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
<!--NAVIGATION-->
< [Example of Using PyRosetta with GNU Parallel](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/16.03-GNU-Parallel-Via-Slurm.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Part I: Parallelized Global Ligand Docking with `pyrosetta.distributed`](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/16.05-Ligand-Docking-dask.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/16.04-dask.delayed-Via-Slurm.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# Examples Using the `dask` Module
### We can make use of the `dask` library to parallelize code
*Note:* This Jupyter notebook uses parallelization and is **not** meant to be executed within a Google Colab environment.
*Note:* This Jupyter notebook requires the PyRosetta distributed layer which is obtained by building PyRosetta with the `--serialization` flag or installing PyRosetta from the RosettaCommons conda channel
**Please see Chapter 16.00 for setup instructions**
```
import dask
import dask.array as da
import graphviz
import logging
logging.basicConfig(level=logging.INFO)
import numpy as np
import os
import pyrosetta
import pyrosetta.distributed
import pyrosetta.distributed.dask
import pyrosetta.distributed.io as io
import random
import sys
from dask.distributed import Client, LocalCluster, progress
from dask_jobqueue import SLURMCluster
from IPython.display import Image
if 'google.colab' in sys.modules:
print("This Jupyter notebook uses parallelization and is therefore not set up for the Google Colab environment.")
sys.exit(0)
```
Initialize PyRosetta within this Jupyter notebook using custom command line PyRosetta flags:
```
flags = """-out:level 100
-ignore_unrecognized_res 1
-ignore_waters 0
-detect_disulf 0 # Do not automatically detect disulfides
""" # These can be unformatted for user convenience, but no spaces in file paths!
pyrosetta.distributed.init(flags)
```
If you are running this example on a high-performance computing (HPC) cluster with SLURM scheduling, use the `SLURMCluster` class described below. For more information, visit https://jobqueue.dask.org/en/latest/generated/dask_jobqueue.SLURMCluster.html. **Note**: If you are running this example on an HPC cluster with a job scheduler other than SLURM, `dask_jobqueue` also works with other job schedulers: http://jobqueue.dask.org/en/latest/api.html
The `SLURMCluster` class in the `dask_jobqueue` module is very useful! In this case, we are requesting four workers using `cluster.scale(4)`, and specifying each worker to have:
- one thread per worker with `cores=1`
- one process per worker with `processes=1`
- one CPU per task per worker with `job_cpu=1`
- a total of 4GB memory per worker with `memory="4GB"`
- itself run on the "short" queue/partition on the SLURM scheduler with `queue="short"`
- a maximum job walltime of just under 3 hours using `walltime="02:59:00"`
- output dask files directed to `local_directory`
- output SLURM log files directed to a specific file path and file name (and any other SLURM directives) with the `job_extra` option
- pre-initialization with the same custom command line PyRosetta flags used in this Jupyter notebook, using the `extra=pyrosetta.distributed.dask.worker_extra(init_flags=flags)` option
```
if not os.getenv("DEBUG"):
scratch_dir = os.path.join("/net/scratch", os.environ["USER"])
cluster = SLURMCluster(
cores=1,
processes=1,
job_cpu=1,
memory="4GB",
queue="short",
walltime="02:59:00",
local_directory=scratch_dir,
job_extra=["-o {}".format(os.path.join(scratch_dir, "slurm-%j.out"))],
extra=pyrosetta.distributed.dask.worker_extra(init_flags=flags)
)
cluster.scale(4)
client = Client(cluster)
else:
cluster = None
client = None
```
**Note**: The actual sbatch script submitted to the Slurm scheduler under the hood can be inspected with `cluster.job_script()`:
```
if not os.getenv("DEBUG"):
print(cluster.job_script())
```
Otherwise, if you are running this example locally on your laptop, you can still spawn workers and take advantage of the `dask` module:
```
# cluster = LocalCluster(n_workers=1, threads_per_worker=1)
# client = Client(cluster)
```
Open the `dask` dashboard, which shows diagnostic information about the current state of your cluster and helps track progress, identify performance issues, and debug failures:
```
client
```
### Consider the following example that runs within this Jupyter notebook kernel just fine but could be parallelized:
```
def inc(x):
return x + 1
def double(x):
    return 2 * x
def add(x, y):
return x + y
output = []
for x in range(10):
a = inc(x)
b = double(x)
c = add(a, b)
output.append(c)
total = sum(output)
print(total)
```
With a slight modification, we can parallelize it on the HPC cluster using the `dask` module
```
output = []
for x in range(10):
a = dask.delayed(inc)(x)
b = dask.delayed(double)(x)
c = dask.delayed(add)(a, b)
output.append(c)
delayed = dask.delayed(sum)(output)
print(delayed)
```
We used the `dask.delayed` function to wrap the function calls that we want to turn into tasks. None of the `inc`, `double`, `add`, or `sum` calls have happened yet. Instead, the object `delayed` is a `Delayed` object that contains a task graph of the entire computation to be executed.
Let's visualize the task graph to see clear opportunities for parallel execution.
```
if not os.getenv("DEBUG"):
delayed.visualize()
```
We can now compute this lazy result to execute the graph in parallel:
```
if not os.getenv("DEBUG"):
total = delayed.compute()
print(total)
```
We can also use `dask.delayed` as a Python function decorator, with identical behavior:
```
@dask.delayed
def inc(x):
return x + 1
@dask.delayed
def double(x):
    return 2 * x
@dask.delayed
def add(x, y):
return x + y
output = []
for x in range(10):
a = inc(x)
b = double(x)
c = add(a, b)
output.append(c)
total = dask.delayed(sum)(output).compute()
print(total)
```
We can also use the `dask.array` library, which implements a subset of the NumPy ndarray interface using blocked algorithms, cutting up the large array into many parallelizable small arrays.
See `dask.array` documentation: http://docs.dask.org/en/latest/array.html, along with that of `dask.bag`, `dask.dataframe`, `dask.delayed`, `Futures`, etc.
```
if not os.getenv("DEBUG"):
x = da.random.random((10000, 10000, 10), chunks=(1000, 1000, 5))
y = da.random.random((10000, 10000, 10), chunks=(1000, 1000, 5))
z = (da.arcsin(x) + da.arccos(y)).sum(axis=(1, 2))
z.compute()
```
The dask dashboard allows visualizing parallel computation, including progress bars for tasks. Here is a snapshot of the dask dashboard while executing the previous cell:
```
Image(filename="inputs/dask_dashboard_example.png")
```
For more info on interpreting the dask dashboard, see: https://distributed.dask.org/en/latest/web.html
# Example Using `dask.delayed` with PyRosetta
Let's look at a simple example of sending PyRosetta jobs to the `dask-worker`, and the `dask-worker` sending the results back to this Jupyter Notebook.
We will use the crystal structure of the *de novo* mini protein gEHEE_06 from PDB ID 5JG9
```
@dask.delayed
def mutate(ppose, target, new_res):
import pyrosetta
pose = io.to_pose(ppose)
mutate = pyrosetta.rosetta.protocols.simple_moves.MutateResidue(target=target, new_res=new_res)
mutate.apply(pose)
return io.to_packed(pose)
@dask.delayed
def refine(ppose):
import pyrosetta
pose = io.to_pose(ppose)
scorefxn = pyrosetta.create_score_function("ref2015_cart")
mm = pyrosetta.rosetta.core.kinematics.MoveMap()
mm.set_bb(True)
mm.set_chi(True)
min_mover = pyrosetta.rosetta.protocols.minimization_packing.MinMover()
min_mover.set_movemap(mm)
min_mover.score_function(scorefxn)
min_mover.min_type("lbfgs_armijo_nonmonotone")
min_mover.cartesian(True)
min_mover.tolerance(0.01)
min_mover.max_iter(200)
min_mover.apply(pose)
return io.to_packed(pose)
@dask.delayed
def score(ppose):
import pyrosetta
pose = io.to_pose(ppose)
scorefxn = pyrosetta.create_score_function("ref2015")
total_score = scorefxn(pose)
return pose, total_score
if not os.getenv("DEBUG"):
pose = pyrosetta.io.pose_from_file("inputs/5JG9.clean.pdb")
keep_chA = pyrosetta.rosetta.protocols.grafting.simple_movers.KeepRegionMover(
res_start=str(pose.chain_begin(1)), res_end=str(pose.chain_end(1))
)
keep_chA.apply(pose)
#kwargs = {"extra_options": pyrosetta.distributed._normflags(flags)}
output = []
for target in random.sample(range(1, pose.size() + 1), 10):
if pose.sequence()[target - 1] != "C":
for new_res in ["ALA", "TRP"]:
a = mutate(io.to_packed(pose), target, new_res)
b = refine(a)
c = score(b)
output.append((target, new_res, c[0], c[1]))
delayed_obj = dask.delayed(np.argmin)([x[-1] for x in output])
delayed_obj.visualize()
print(output)
if not os.getenv("DEBUG"):
delayed_result = delayed_obj.persist()
progress(delayed_result)
```
The dask progress bar allows visualizing parallelization directly within the Jupyter notebook. Here is a snapshot of the dask progress bar while executing the previous cell:
```
Image(filename="inputs/dask_progress_bar_example.png")
if not os.getenv("DEBUG"):
result = delayed_result.compute()
print("The mutation with the lowest energy is residue {0} at position {1}".format(output[result][1], output[result][0]))
```
*Note*: For best practices while using `dask.delayed`, see: http://docs.dask.org/en/latest/delayed-best-practices.html
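One of those best practices is worth highlighting here: when you have several independent `Delayed` objects, pass them to a single `dask.compute` call instead of calling `.compute()` on each one, so the scheduler can share work and plan them together. A small sketch, independent of PyRosetta:
```
# Several independent delayed results...
lazy_results = [dask.delayed(sum)(range(n)) for n in (10, 100, 1000)]

# ...computed in one call, so dask can schedule them together
results = dask.compute(*lazy_results)
print(results)  # (45, 4950, 499500)
```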
<!--NAVIGATION-->
< [Example of Using PyRosetta with GNU Parallel](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/16.03-GNU-Parallel-Via-Slurm.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Part I: Parallelized Global Ligand Docking with `pyrosetta.distributed`](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/16.05-Ligand-Docking-dask.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/16.04-dask.delayed-Via-Slurm.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
! pip install citipy
! pip install requests
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import datetime
import json
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for Holding lat_lngs & Cities
lat_lngs = []
cities = []
# Create a Set of Random lat & lng Combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify Nearest City for Each lat, lng Combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the City is Unique, Then Add it to a Cities List
if city not in cities:
cities.append(city)
# Print the City Count to Confirm Sufficient Count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
# Build the query URL and preview the JSON response for a single test city (the last one generated above)
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
query_url = f"{url}appid={weather_api_key}&units={units}&q="
weather_response = requests.get(query_url + city)
weather_json = weather_response.json()
print(json.dumps(weather_json, indent=4))
print(requests.get(query_url + city))
# Set Up Lists to Hold Response Info
city_name = []
country = []
date = []
latitude = []
longitude = []
max_temperature = []
humidity = []
cloudiness = []
wind_speed = []
# Processing Record Counter Starting at 1
processing_record = 1
# Print Starting Log Statement
print(f"Beginning Data Retrieval")
print(f"-------------------------------")
# Loop Through List of Cities & Perform a Request for Data on Each
for city in cities:
# Exception Handling
try:
response = requests.get(query_url + city).json()
city_name.append(response["name"])
country.append(response["sys"]["country"])
date.append(response["dt"])
latitude.append(response["coord"]["lat"])
longitude.append(response["coord"]["lon"])
max_temperature.append(response["main"]["temp_max"])
humidity.append(response["main"]["humidity"])
cloudiness.append(response["clouds"]["all"])
wind_speed.append(response["wind"]["speed"])
city_record = response["name"]
print(f"Processing Record {processing_record} | {city_record}")
# Increase Processing Record Counter by 1 For Each Loop
processing_record += 1
except:
print("City not found. Skipping...")
continue
# Print Ending Log Statement
print(f"-------------------------------")
print(f"Data Retrieval Complete")
print(f"-------------------------------")
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
# Create a DataFrame from Cities, Latitude, Longitude, Temperature, Humidity, Cloudiness & Wind Speed
weather_dict = {
"City": city_name,
"Country": country,
"Date": date,
"Latitude": latitude,
"Longitude": longitude,
"Max Temperature": max_temperature,
"Humidity": humidity,
"Cloudiness": cloudiness,
"Wind Speed": wind_speed
}
weather_data = pd.DataFrame(weather_dict)
weather_data.count()
# Display DataFrame
weather_data.head()
# Export & Save Data Into a .csv.
weather_data.to_csv("weather_data.csv")
```
## Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
## Latitude vs. Temperature Plot
```
# Build Scatter Plot for Each Data Type
plt.scatter(weather_data["Latitude"], weather_data["Max Temperature"], facecolors="green", marker="o", edgecolor="black")
# Incorporate Other Graph Properties
plt.title("City Latitude vs. Max Temperature")
plt.ylabel("Max Temperature (°F)")
plt.xlabel("Latitude")
plt.grid(True)
# Show Plot
plt.show()
```
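The instructions above also ask for the date of analysis in the plot titles and for the figures to be saved as `.png` files; neither appears in the plotting cells. A minimal sketch for the temperature plot above, using the already-imported `datetime` module (the output file name is only an example):
```
# Add the analysis date to the title and save the figure as a .png
analysis_date = datetime.date.today().strftime("%m/%d/%Y")
plt.scatter(weather_data["Latitude"], weather_data["Max Temperature"], facecolors="green", marker="o", edgecolor="black")
plt.title(f"City Latitude vs. Max Temperature ({analysis_date})")
plt.ylabel("Max Temperature (°F)")
plt.xlabel("Latitude")
plt.grid(True)
plt.savefig("output_data/lat_vs_max_temp.png")  # save before plt.show()
plt.show()
```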
## Latitude vs. Humidity Plot
```
# Build Scatter Plot for Each Data Type
plt.scatter(weather_data["Latitude"], weather_data["Humidity"], facecolors="yellow", marker="o", edgecolor="black")
# Incorporate Other Graph Properties
plt.title("City Latitude vs. Humidity")
plt.ylabel("Humidity (%)")
plt.xlabel("Latitude")
plt.grid(True)
# Show Plot
plt.show()
```
## Latitude vs. Cloudiness Plot
```
# Build Scatter Plot for Each Data Type
plt.scatter(weather_data["Latitude"], weather_data["Cloudiness"], facecolors="blue", marker="o", edgecolor="black")
# Incorporate Other Graph Properties
plt.title("City Latitude vs. Cloudiness")
plt.ylabel("Cloudiness (%)")
plt.xlabel("Latitude")
plt.grid(True)
# Show Plot
plt.show()
```
## Latitude vs. Wind Speed Plot
```
# Build Scatter Plot for Each Data Type
plt.scatter(weather_data["Latitude"], weather_data["Wind Speed"], facecolors="orange", marker="o", edgecolor="black")
# Incorporate Other Graph Properties
plt.title("City Latitude vs. Wind Speed")
plt.ylabel("Wind Speed (mph)")
plt.xlabel("Latitude")
plt.grid(True)
# Show Plot
plt.show()
```
# Attention Basics
In this notebook, we look at how attention is implemented. We will focus on implementing attention in isolation from a larger model. That's because when implementing attention in a real-world model, a lot of the focus goes into piping the data and juggling the various vectors rather than the concepts of attention themselves.
We will implement attention scoring as well as calculating an attention context vector.
## Attention Scoring
### Inputs to the scoring function
Let's start by looking at the inputs we'll give to the scoring function. We will assume we're in the first step in the decoding phase. The first input to the scoring function is the hidden state of the decoder (assuming a toy RNN with three hidden nodes -- not usable in real life, but easier to illustrate):
```
dec_hidden_state = [5,1,20]
```
Let's visualize this vector:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Let's visualize our decoder hidden state
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(dec_hidden_state)), annot=True, cmap=sns.light_palette("purple", as_cmap=True), linewidths=1)
```
Our first scoring function will score a single annotation (encoder hidden state), which looks like this:
```
annotation = [3,12,45] #e.g. Encoder hidden state
# Let's visualize the single annotation
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(annotation)), annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
```
### IMPLEMENT: Scoring a Single Annotation
Let's calculate the dot product of the decoder hidden state with a single annotation. NumPy's [dot()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) is a good candidate for this operation.
```
def single_dot_attention_score(dec_hidden_state, enc_hidden_state):
# TODO: return the dot product of the two vectors
return np.array(dec_hidden_state).dot(np.array(enc_hidden_state))
single_dot_attention_score(dec_hidden_state, annotation)
```
### Annotations Matrix
Let's now look at scoring all the annotations at once. To do that, here's our annotation matrix:
```
annotations = np.transpose([[3,12,45], [59,2,5], [1,43,5], [4,3,45.3]])
```
And it can be visualized like this (each column is a hidden state of an encoder time step):
```
# Let's visualize our annotation (each column is an annotation)
ax = sns.heatmap(annotations, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
```
### IMPLEMENT: Scoring All Annotations at Once
Let's calculate the scores of all the annotations in one step using matrix multiplication. Let's continue to use the dot scoring method
<img src="images/scoring_functions.png" />
To do that, we'll have to transpose `dec_hidden_state` and [matrix multiply](https://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html) it with `annotations`.
```
def dot_attention_score(dec_hidden_state, annotations):
# TODO: return the product of dec_hidden_state transpose and enc_hidden_states
return np.array(dec_hidden_state).dot(annotations)
attention_weights_raw = dot_attention_score(dec_hidden_state, annotations)
attention_weights_raw
```
Looking at these scores, can you guess which of the four vectors will get the most attention from the decoder at this time step?
## Softmax
Now that we have our scores, let's apply softmax:
<img src="images/softmax.png" />
```
def softmax(x):
x = np.array(x, dtype=np.float128)
e_x = np.exp(x)
return e_x / e_x.sum(axis=0)
attention_weights = softmax(attention_weights_raw)
attention_weights
```
Even when knowing which annotation will get the most focus, it's interesting to see how drastic softmax makes the end score become. The first and last annotation had the respective scores of 927 and 929. But after softmax, the attention they'll get is 0.12 and 0.88 respectively.
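To see where those numbers come from: softmax is unchanged when the same constant is subtracted from every score, so the raw scores 927 and 929 behave exactly like 0 and 2 (the other two scores, 397 and 148, are far enough behind to contribute essentially nothing):
```
# Softmax is shift-invariant, so [927, 929] gives the same result as [0, 2]
shifted = np.array([0.0, 2.0])
print(np.exp(shifted) / np.exp(shifted).sum())  # ~[0.12, 0.88]
```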
# Applying the scores back on the annotations
Now that we have our scores, let's multiply each annotation by its score to proceed closer to the attention context vector. This is the multiplication part of this formula (we'll tackle the summation part in the later cells)
<img src="images/Context_vector.png" />
```
def apply_attention_scores(attention_weights, annotations):
# TODO: Multiply the annotations by their weights
return annotations * attention_weights
applied_attention = apply_attention_scores(attention_weights, annotations)
applied_attention
```
Let's visualize how the context vector looks now that we've applied the attention scores back on it:
```
# Let's visualize our annotations after applying attention to them
ax = sns.heatmap(applied_attention, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
```
Contrast this with the raw annotations visualized earlier in the notebook, and we can see that the second and third annotations (columns) have been nearly wiped out. The first annotation maintains some of its value, and the fourth annotation is the most pronounced.
# Calculating the Attention Context Vector
All that remains now is to sum up the four columns to produce a single attention context vector
```
def calculate_attention_vector(applied_attention):
return np.sum(applied_attention, axis=1)
attention_vector = calculate_attention_vector(applied_attention)
attention_vector
# Let's visualize the attention context vector
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(attention_vector)), annot=True, cmap=sns.light_palette("Blue", as_cmap=True), linewidths=1)
```
Now that we have the context vector, we can concatenate it with the hidden state and pass it through a hidden layer to produce the result of this decoding time step.
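That last step is easy to sketch with NumPy: concatenate the context vector with the decoder hidden state and push the result through a small dense layer. The weight matrix below is random and only illustrates the shapes involved; in a real model it would be learned:
```
# Concatenate context vector (3,) with decoder hidden state (3,) -> (6,)
combined = np.concatenate([attention_vector, dec_hidden_state])

# A toy dense layer with tanh activation; random weights, shapes only
np.random.seed(0)
W_c = np.random.randn(3, 6)
attentional_hidden_state = np.tanh(W_c @ combined)
print(attentional_hidden_state.shape)  # (3,)
```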
# Tensorflow Image Recognition Tutorial
This tutorial shows how we can use MLDB's [TensorFlow](https://www.tensorflow.org) integration to do image recognition. TensorFlow is Google's open source deep learning library.
We will load the [Inception-v3 model](http://arxiv.org/abs/1512.00567) to generate descriptive labels for an image. The *Inception* model is a deep convolutional neural network and was trained on the [ImageNet](http://image-net.org) Large Visual Recognition Challenge dataset, where the task was to classify images into 1000 classes.
To offer context and a basis for comparison, this notebook is inspired by [TensorFlow's Image Recognition tutorial](https://www.tensorflow.org/versions/r0.7/tutorials/image_recognition/index.html).
## Initializing pymldb and other imports
The notebook cells below use `pymldb`'s `Connection` class to make [REST API](../../../../doc/#builtin/WorkingWithRest.md.html) calls. You can check out the [Using `pymldb` Tutorial](../../../../doc/nblink.html#_tutorials/Using pymldb Tutorial) for more details.
```
from pymldb import Connection
mldb = Connection()
```
## Loading a TensorFlow graph
To load a pre-trained TensorFlow graph in MLDB, we use the [`tensorflow.graph` function type](../../../../doc/#/v1/plugins/tensorflow/doc/TensorflowGraph.md.html).
Below, we start by creating two functions. First, the `fetch` function (of type `fetcher`) allows us to fetch a binary blob from a remote URL. Second, the `inception` function executes the trained network; we parameterize it in the following way:
- **modelFileUrl**: Path to the Inception-v3 model file. The `archive` prefix and `#` separator allow us to load a file inside a zip archive. ([more details](../../../../doc/#builtin/Url.md.html))
- **inputs**: As input to the graph, we provide the output of the `fetch` function called with the `url` parameter. When we call it later on, the image located at the specified URL will be downloaded and passed to the graph.
- **outputs**: This specifies the layer from which to return the values. The `softmax` layer is the last layer in the network so we specify that one.
```
inceptionUrl = 'http://public.mldb.ai/models/inception_dec_2015.zip'
print mldb.put('/v1/functions/fetch', {
"type": 'fetcher',
"params": {}
})
print mldb.put('/v1/functions/inception', {
"type": 'tensorflow.graph',
"params": {
"modelFileUrl": 'archive+' + inceptionUrl + '#tensorflow_inception_graph.pb',
"inputs": 'fetch({url})[content] AS "DecodeJpeg/contents"',
"outputs": "softmax"
}
})
```
## Scoring an image
To demonstrate how to run the network on an image, we re-use the same image as in the Tensorflow tutorial, the picture of [Admiral Grace Hopper](https://en.wikipedia.org/wiki/Grace_Hopper):
<img src="https://www.tensorflow.org/versions/r0.7/images/grace_hopper.jpg" width=350>
The following query applies the `inception` function on the URL of her picture:
```
amazingGrace = "https://www.tensorflow.org/versions/r0.7/images/grace_hopper.jpg"
mldb.query("SELECT inception({url: '%s'}) as *" % amazingGrace)
```
This is great! With only 3 REST calls we were able to run a deep neural network on an arbitrary image off the internet.
## *Inception* as a real-time endpoint
Not only is this function available in SQL queries within MLDB, but as all MLDB functions, it is also available as a REST endpoint. This means that when we created the `inception` function above, we essentially created a real-time API running the *Inception* model that any external service or device can call to get predictions back.
The following REST call demonstrates how this looks:
```
result = mldb.get('/v1/functions/inception/application', input={"url": amazingGrace})
print result.url + '\n\n' + repr(result) + '\n'
import numpy as np
print "Shape:"
print np.array(result.json()["output"]["softmax"]["val"]).shape
```
## Interpreting the prediction
Running the network gives us a 1008-dimensional vector. This is because the network was originally trained on the ImageNet categories and we created the `inception` function to return the *softmax* layer, which is the output of the model.
To allow us to interpret the predictions the network makes, we can import the ImageNet labels in an MLDB dataset like this:
```
print mldb.put("/v1/procedures/imagenet_labels_importer", {
"type": "import.text",
"params": {
"dataFileUrl": 'archive+' + inceptionUrl + '#imagenet_comp_graph_label_strings.txt',
"outputDataset": {"id": "imagenet_labels", "type": "sparse.mutable"},
"headers": ["label"],
"named": "lineNumber() -1",
"offset": 1,
"runOnCreation": True
}
})
```
The contents of the dataset look like this:
```
mldb.query("SELECT * FROM imagenet_labels LIMIT 5")
```
The labels line up with the *softmax* layer that we extract from the network. By joining the output of the network with the `imagenet_labels` dataset, we can essentially label the output of the network.
The following query scores the image just as before, but then transposes the output and joins the result to the labels dataset. We then sort on the score to keep only the 10 highest values:
```
mldb.query("""
SELECT scores.pred as score
NAMED imagenet_labels.label
FROM transpose(
(
SELECT flatten(inception({url: '%s'})[softmax]) as *
NAMED 'pred'
)
) AS scores
LEFT JOIN imagenet_labels ON
imagenet_labels.rowName() = scores.rowName()
ORDER BY score DESC
LIMIT 10
""" % amazingGrace)
```
## Where to next?
You can now look at the [Transfer Learning with Tensorflow](../../../../doc/nblink.html#_demos/Transfer Learning with Tensorflow) demo.
# Counterfactual instances on MNIST
Given a test instance $X$, this method can generate counterfactual instances $X^\prime$ given a desired counterfactual class $t$, which can either be a class specified upfront or any other class that is different from the predicted class of $X$.
The loss function for finding counterfactuals is the following:
$$L(X^\prime\vert X) = (f_t(X^\prime) - p_t)^2 + \lambda L_1(X^\prime, X).$$
The first loss term guides the search towards instances $X^\prime$ for which the predicted class probability $f_t(X^\prime)$ is close to a pre-specified target class probability $p_t$ (typically $p_t=1$). The second loss term ensures that the counterfactuals are close in the feature space to the original test instance.
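As a concrete reading of this loss, here is a minimal NumPy sketch; it is not the implementation used by alibi, `predict_fn` stands for any model that returns class probabilities, and `lam` plays the role of $\lambda$:
```
import numpy as np

def counterfactual_loss(X_cf, X, t, predict_fn, p_t=1.0, lam=0.1):
    """(f_t(X') - p_t)^2 + lambda * ||X' - X||_1 for a single instance."""
    prediction_term = (predict_fn(X_cf)[t] - p_t) ** 2
    distance_term = lam * np.abs(X_cf - X).sum()
    return prediction_term + distance_term
```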
In this notebook we illustrate the usage of the basic counterfactual algorithm on the MNIST dataset.
```
import tensorflow as tf
tf.get_logger().setLevel(40) # suppress deprecation messages
tf.compat.v1.disable_v2_behavior() # disable TF2 behaviour as alibi code still relies on TF1 constructs
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Input
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.utils import to_categorical
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
from time import time
from alibi.explainers import Counterfactual
print('TF version: ', tf.__version__)
print('Eager execution enabled: ', tf.executing_eagerly()) # False
```
## Load and prepare MNIST data
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print('x_train shape:', x_train.shape, 'y_train shape:', y_train.shape)
plt.gray()
plt.imshow(x_test[1]);
```
Prepare data: scale, reshape and categorize
```
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_train = np.reshape(x_train, x_train.shape + (1,))
x_test = np.reshape(x_test, x_test.shape + (1,))
print('x_train shape:', x_train.shape, 'x_test shape:', x_test.shape)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
print('y_train shape:', y_train.shape, 'y_test shape:', y_test.shape)
xmin, xmax = -.5, .5
x_train = ((x_train - x_train.min()) / (x_train.max() - x_train.min())) * (xmax - xmin) + xmin
x_test = ((x_test - x_test.min()) / (x_test.max() - x_test.min())) * (xmax - xmin) + xmin
```
## Define and train CNN model
```
def cnn_model():
x_in = Input(shape=(28, 28, 1))
x = Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')(x_in)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Conv2D(filters=32, kernel_size=2, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x_out = Dense(10, activation='softmax')(x)
cnn = Model(inputs=x_in, outputs=x_out)
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return cnn
cnn = cnn_model()
cnn.summary()
cnn.fit(x_train, y_train, batch_size=64, epochs=3, verbose=0)
cnn.save('mnist_cnn.h5')
```
Evaluate the model on the test set
```
cnn = load_model('mnist_cnn.h5')
score = cnn.evaluate(x_test, y_test, verbose=0)
print('Test accuracy: ', score[1])
```
## Generate counterfactuals
Original instance:
```
X = x_test[0].reshape((1,) + x_test[0].shape)
plt.imshow(X.reshape(28, 28));
```
Counterfactual parameters:
```
shape = (1,) + x_train.shape[1:]
target_proba = 1.0
tol = 0.01 # want counterfactuals with p(class)>0.99
target_class = 'other' # any class other than 7 will do
max_iter = 1000
lam_init = 1e-1
max_lam_steps = 10
learning_rate_init = 0.1
feature_range = (x_train.min(),x_train.max())
```
Run counterfactual:
```
# initialize explainer
cf = Counterfactual(cnn, shape=shape, target_proba=target_proba, tol=tol,
target_class=target_class, max_iter=max_iter, lam_init=lam_init,
max_lam_steps=max_lam_steps, learning_rate_init=learning_rate_init,
feature_range=feature_range)
start_time = time()
explanation = cf.explain(X)
print('Explanation took {:.3f} sec'.format(time() - start_time))
```
Results:
```
pred_class = explanation.cf['class']
proba = explanation.cf['proba'][0][pred_class]
print(f'Counterfactual prediction: {pred_class} with probability {proba}')
plt.imshow(explanation.cf['X'].reshape(28, 28));
```
The counterfactual starting from a 7 moves towards the closest class as determined by the model and the data - in this case a 9. The evolution of the counterfactual during the iterations over $\lambda$ can be seen below (note that all of the following examples satisfy the counterfactual condition):
```
n_cfs = np.array([len(explanation.all[iter_cf]) for iter_cf in range(max_lam_steps)])
examples = {}
for ix, n in enumerate(n_cfs):
if n>0:
examples[ix] = {'ix': ix, 'lambda': explanation.all[ix][0]['lambda'],
'X': explanation.all[ix][0]['X']}
columns = len(examples) + 1
rows = 1
fig = plt.figure(figsize=(16,6))
for i, key in enumerate(examples.keys()):
ax = plt.subplot(rows, columns, i+1)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.imshow(examples[key]['X'].reshape(28,28))
plt.title(f'Iteration: {key}')
```
Typically, the first few iterations find counterfactuals that are out of distribution, while the later iterations make the counterfactual more sparse and interpretable.
Let's now try to steer the counterfactual to a specific class:
```
target_class = 1
cf = Counterfactual(cnn, shape=shape, target_proba=target_proba, tol=tol,
target_class=target_class, max_iter=max_iter, lam_init=lam_init,
max_lam_steps=max_lam_steps, learning_rate_init=learning_rate_init,
feature_range=feature_range)
start_time = time()
explanation = cf.explain(X)
print('Explanation took {:.3f} sec'.format(time() - start_time))
```
Results:
```
pred_class = explanation.cf['class']
proba = explanation.cf['proba'][0][pred_class]
print(f'Counterfactual prediction: {pred_class} with probability {proba}')
plt.imshow(explanation.cf['X'].reshape(28, 28));
```
As you can see, by specifying a class, the search process can't go towards the closest class to the test instance (in this case a 9 as we saw previously), so the resulting counterfactual might be less interpretable. We can gain more insight by looking at the difference between the counterfactual and the original instance:
```
plt.imshow((explanation.cf['X'] - X).reshape(28, 28));
```
This shows that the counterfactual is stripping out the top part of the 7 to result in a prediction of 1 - not very surprising, as the dataset has a lot of examples of diagonally slanted 1’s.
Clean up:
```
os.remove('mnist_cnn.h5')
```
<img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/horizontal-primary-light.png" alt="he-black-box" width="600"/>
# Homomorphic Encryption using Duet: Data Owner
## Tutorial 2: Encrypted image evaluation
Welcome!
This tutorial will show you how to evaluate encrypted images using Duet and TenSEAL. This notebook illustrates the Data Owner's view of the operations.
We recommend going through Tutorial 0 and 1 before trying this one.
### Setup
All modules are imported here; make sure everything is installed by running the cell below.
```
import os
import requests
import syft as sy
import tenseal as ts
from torchvision import transforms
from random import randint
import numpy as np
from PIL import Image
from matplotlib.pyplot import imshow
import torch
```
### Start Duet Data Owner instance
```
# Start Duet local instance
duet = sy.launch_duet(loopback=True)
```
### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 1 : Now STOP and run the Data Scientist notebook until the same checkpoint.
### Data Owner helpers
```
from syft.util import get_root_data_path
# Create the TenSEAL security context
def create_ctx():
"""Helper for creating the CKKS context.
CKKS params:
- Polynomial degree: 8192.
- Coefficient modulus size: [40, 21, 21, 21, 21, 21, 21, 40].
- Scale: 2 ** 21.
- The setup requires the Galois keys for evaluating the convolutions.
"""
poly_mod_degree = 8192
coeff_mod_bit_sizes = [40, 21, 21, 21, 21, 21, 21, 40]
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_mod_degree, -1, coeff_mod_bit_sizes)
ctx.global_scale = 2 ** 21
ctx.generate_galois_keys()
return ctx
def download_images():
try:
os.makedirs(f"{get_root_data_path()}/mnist-samples", exist_ok=True)
except BaseException as e:
pass
url = "https://raw.githubusercontent.com/OpenMined/TenSEAL/master/tutorials/data/mnist-samples/img_{}.jpg"
path = f"{get_root_data_path()}" + "/mnist-samples/img_{}.jpg"
for idx in range(6):
img_url = url.format(idx)
img_path = path.format(idx)
r = requests.get(img_url)
with open(img_path, 'wb') as f:
f.write(r.content)
# Sample an image
def load_input():
download_images()
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
idx = randint(1, 5)
img_name = f"{get_root_data_path()}/mnist-samples/img_{idx}.jpg"
img = Image.open(img_name)
return transform(img).view(28, 28).tolist(), img
# Helper for encoding the image
def prepare_input(ctx, plain_input):
enc_input, windows_nb = ts.im2col_encoding(ctx, plain_input, 7, 7, 3)
assert windows_nb == 64
return enc_input
```
### Prepare the context
```
context = create_ctx()
```
### Sample and encrypt an image
```
image, orig = load_input()
encrypted_image = prepare_input(context, image)
print("Encrypted image ", encrypted_image)
print("Original image ")
imshow(np.asarray(orig))
ctx_ptr = context.send(duet, pointable=True, tags=["context"])
enc_image_ptr = encrypted_image.send(duet, pointable=True, tags=["enc_image"])
duet.store.pandas
```
### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 2 : Now STOP and run the Data Scientist notebook until the same checkpoint.
### Approve the requests
```
duet.requests.pandas
duet.requests[0].accept()
duet.requests[0].accept()
duet.requests.pandas
```
### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 3 : Now STOP and run the Data Scientist notebook until the same checkpoint.
### Retrieve and decrypt the evaluation result
```
result = duet.store["result"].get(delete_obj=False)
result.link_context(context)
result = result.decrypt()
```
### Run the activation and retrieve the label
```
probs = torch.softmax(torch.tensor(result), 0)
label_max = torch.argmax(probs)
print("Maximum probability for label {}".format(label_max))
```
### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 4 : Well done!
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft and TenSEAL on GitHub
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
- [Star TenSEAL](https://github.com/OpenMined/TenSEAL)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org). #lib_tenseal and #code_tenseal are the main channels for the TenSEAL project.
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
```
import numpy as np
from bqplot import LinearScale, Bins, Axis, Figure
```
## Bins Mark
This `Mark` is essentially the same as the `Hist` `Mark` from a user point of view, but is actually a `Bars` instance that bins sample data.
The difference with `Hist` is that the binning is done in the backend, so it will work better for large data as it does not have to ship the whole data back and forth to the frontend.
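For instance, a much larger sample can be handed to `Bins` directly, since only the bin counts travel to the frontend. Below is an illustrative sketch (the names `big_sample` and `big_hist` are ours; the cells that follow walk through the same API step by step):
```
# Illustrative sketch: only the computed bin counts are shipped to the frontend,
# so even a large sample stays cheap to display.
big_sample = np.random.randn(1_000_000)
big_hist = Bins(sample=big_sample, scales={'x': LinearScale(), 'y': LinearScale()})
Figure(marks=[big_hist])
```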
```
# Create a sample of Gaussian draws
np.random.seed(0)
x_data = np.random.randn(1000)
```
Give the `Bins` mark the data you want to bin as the `sample` argument, and also give it 'x' and 'y' scales.
```
x_sc = LinearScale()
y_sc = LinearScale()
hist = Bins(sample=x_data, scales={'x': x_sc, 'y': y_sc}, padding=0,)
ax_x = Axis(scale=x_sc, tick_format='0.2f')
ax_y = Axis(scale=y_sc, orientation='vertical')
Figure(marks=[hist], axes=[ax_x, ax_y], padding_y=0)
```
The midpoints of the resulting bins and their number of elements can be recovered via the read-only traits `x` and `y`:
```
hist.x, hist.y
```
## Tuning the bins
Under the hood, the `Bins` mark is really a `Bars` mark, with some additional magic to control the binning. The data in `sample` is binned into equal-width bins. The parameters controlling the binning are the following traits:
- `bins` sets the number of bins. It is either a fixed integer (10 by default), or the name of a method to determine the number of bins in a smart way ('auto', 'fd', 'doane', 'scott', 'rice', 'sturges' or 'sqrt').
- `min` and `max` set the range of the data (`sample`) to be binned
- `density`, if set to `True`, normalizes the heights of the bars.
For more information, see the documentation of `numpy`'s `histogram`
```
x_sc = LinearScale()
y_sc = LinearScale()
hist = Bins(sample=x_data, scales={'x': x_sc, 'y': y_sc}, padding=0,)
ax_x = Axis(scale=x_sc, tick_format='0.2f')
ax_y = Axis(scale=y_sc, orientation='vertical')
Figure(marks=[hist], axes=[ax_x, ax_y], padding_y=0)
# Changing the number of bins
hist.bins = 'sqrt'
# Changing the range
hist.min = 0
```
## Histogram Styling
The styling of `Bins` is identical to that of `Bars`.
```
# Normalizing the count
x_sc = LinearScale()
y_sc = LinearScale()
hist = Bins(sample=x_data, scales={'x': x_sc, 'y': y_sc}, density=True)
ax_x = Axis(scale=x_sc, tick_format='0.2f')
ax_y = Axis(scale=y_sc, orientation='vertical')
Figure(marks=[hist], axes=[ax_x, ax_y], padding_y=0)
# changing the color
hist.colors=['orangered']
# stroke and opacity update
hist.stroke = 'orange'
hist.opacities = [0.5] * len(hist.x)
# Laying the histogram on its side
hist.orientation = 'horizontal'
ax_x.orientation = 'vertical'
ax_y.orientation = 'horizontal'
```
# Prediction
Use the {ref}`openpifpaf.predict <cli-help-predict>` tool on the command line to run
multi-person pose estimation on images.
To create predictions from other Python modules, please refer to {doc}`predict_api`.
First we present the command line tool for predictions on images,
{ref}`openpifpaf.predict <cli-help-predict>`. Then follows
a short introduction to OpenPifPaf predictions on videos with
{ref}`openpifpaf.video <cli-help-video>`.
## Images
Run {ref}`openpifpaf.predict <cli-help-predict>` on an image:
```
%%bash
python -m openpifpaf.predict coco/000000081988.jpg --image-output --json-output --image-min-dpi=200 --show-file-extension=jpeg
```
This command produced two outputs: an image and a json file.
You can provide file or folder arguments to the `--image-output` and `--json-output` flags.
Here, we used the default which created these two files:
```sh
coco/000000081988.jpg.predictions.jpeg
coco/000000081988.jpg.predictions.json
```
Here is the image:
```
import IPython
IPython.display.Image('coco/000000081988.jpg.predictions.jpeg')
```
Image credit: "[Learning to surf](https://www.flickr.com/photos/fotologic/6038911779/in/photostream/)" by fotologic which is licensed under [CC-BY-2.0].
[CC-BY-2.0]: https://creativecommons.org/licenses/by/2.0/
And below is the json output. The json data is a list where each entry in the list corresponds to one pose annotation. In this case, there are five entries corresponding to the five people in the image. Each annotation contains information on `"keypoints"`, `"bbox"`, `"score"` and `"category_id"`.
All coordinates are in pixel coordinates. The `"keypoints"` entry is in COCO format with triples of `(x, y, c)` (`c` for confidence) for every joint as listed under {ref}`coco-person-keypoints`. The pixel coordinates have sub-pixel accuracy, i.e. 10.5 means the joint is between pixel 10 and 11.
In rare cases, joints can be localized outside the field of view and then the pixel coordinates can be negative. When `c` is zero, the joint was not detected.
The `"bbox"` (bounding box) format is `(x, y, w, h)`: the $(x, y)$ coordinate of the top-left corner followed by width and height.
The `"score"` is a number between zero and one.
```
%%bash
python -m json.tool coco/000000081988.jpg.predictions.json
```
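The same JSON can, of course, be consumed from Python. Below is a minimal sketch (not part of openpifpaf itself) that loads the prediction file created above and prints a per-annotation summary using the `keypoints`, `bbox` and `score` fields described earlier:
```
import json
import numpy as np

# Sketch: inspect the predictions file written by the command above.
with open('coco/000000081988.jpg.predictions.json') as f:
    predictions = json.load(f)

for i, ann in enumerate(predictions):
    keypoints = np.array(ann['keypoints']).reshape(-1, 3)  # (x, y, c) triples, one per joint
    x, y, w, h = ann['bbox']
    detected = int((keypoints[:, 2] > 0).sum())  # c == 0 means the joint was not detected
    print(f"person {i}: score={ann['score']:.2f}, "
          f"bbox=({x:.0f}, {y:.0f}, {w:.0f}, {h:.0f}), joints detected: {detected}")
```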
Optional Arguments:
* `--show`: show interactive matplotlib output
* `--debug-indices`: enable debug messages and debug plots (see {ref}`Examples <example-debug>`)
Full list of arguments is available with `--help`: {ref}`CLI help for predict <cli-help-predict>`.
## Videos
```sh
python3 -m openpifpaf.video --source myvideotoprocess.mp4 --video-output --json-output
```
Requires OpenCV. The `--video-output` option also requires matplotlib.
Replace `myvideotoprocess.mp4` with `0` for webcam0 or other OpenCV compatible sources.
The full list of arguments is available with `--help`: {ref}`CLI help for video <cli-help-video>`.
In v0.12.6, we introduced the ability to pipe the output to a virtual camera.
This virtual camera can then be used as the source camera in Zoom and other
conferencing software. You need a virtual camera on your system, e.g.
from [OBS Studio](https://obsproject.com), and you need to
run `pip3 install pyvirtualcam`. Then you can use the
`--video-output=virtualcam` argument.
## Debug
Obtain extra information by adding `--debug` to the command line. It will
show the structure of the neural network and timing information in the decoder.
```
%%bash
python -m openpifpaf.predict coco/000000081988.jpg --image-output --json-output --debug --image-min-dpi=200 --show-file-extension=jpeg
```
You can enable debug plots with `--debug-indices`.
Please refer to {ref}`the debug outputs in the Examples <example-debug>` and
some further {ref}`debug outputs in the prediction API <predict-fields>`.
<a href="https://colab.research.google.com/github/vasantbala/vb_ai_course/blob/main/NLP/Practice/NLP_MLS.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install contractions
import re, string, unicodedata # Import Regex, string and unicodedata.
import contractions # Import contractions library.
from bs4 import BeautifulSoup # Import BeautifulSoup.
import numpy as np # Import numpy.
import pandas as pd # Import pandas.
import nltk # Import Natural Language Tool-Kit.
nltk.download('stopwords') # Download Stopwords.
nltk.download('punkt')
nltk.download('wordnet')
from nltk.corpus import stopwords # Import stopwords.
from nltk.tokenize import word_tokenize, sent_tokenize # Import Tokenizer.
from nltk.stem.wordnet import WordNetLemmatizer # Import Lemmatizer.
from google.colab import drive
drive.mount('/content/drive')
data = pd.read_csv("/content/drive/MyDrive/AIProjectData/Reviews.csv")
# shape of data
data.shape
data.head()
# Take only 50,000 entries for demonstration purposes, as the full dataset takes longer to process.
# Keep only the Score and Text columns, as these are the ones useful for our analysis.
data = data.loc[:49999, ['Score', 'Text']]
data.isnull().sum(axis=0)
pd.set_option('display.max_colwidth', None) # Display full dataframe information (non-truncated Text column).
data.head() # Check first 5 rows of data
data.shape
```
Data pre-processing steps:
* Remove HTML tags.
* Replace contractions in the text (e.g., replace I'm --> I am, and so on).
* Remove numbers.
* Tokenize the text.
* Remove stopwords.
* Lemmatize the data.
We use the NLTK library to tokenize words, remove stopwords, and lemmatize the remaining words.
```
def strip_html(text):
soup = BeautifulSoup(text, "html.parser")
return soup.get_text()
data['Text'] = data['Text'].apply(lambda x: strip_html(x))
data.head()
def replace_contractions(text):
"""Replace contractions in string of text"""
return contractions.fix(text)
data['Text'] = data['Text'].apply(lambda x: replace_contractions(x))
data.head()
def remove_numbers(text):
text = re.sub(r'\d+', '', text)
return text
data['Text'] = data['Text'].apply(lambda x: remove_numbers(x))
data.head()
data['Text'] = data.apply(lambda row: nltk.word_tokenize(row['Text']), axis=1) # Tokenization of data
data.head()
```
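The list above also calls for stopword removal and lemmatization, which the cells so far have not applied. Below is a minimal sketch of those two steps, assuming `data['Text']` holds the token lists produced by the previous cell and reusing the NLTK imports from the top of the notebook:
```
# Sketch of the remaining pre-processing steps: stopword removal and lemmatization.
# Assumes data['Text'] contains token lists, as produced by the tokenization cell above.
stop_words = set(stopwords.words('english'))
lemmatizer = WordNetLemmatizer()

def remove_stopwords(tokens):
    return [w for w in tokens if w.lower() not in stop_words]

def lemmatize_tokens(tokens):
    return [lemmatizer.lemmatize(w) for w in tokens]

data['Text'] = data['Text'].apply(remove_stopwords)
data['Text'] = data['Text'].apply(lemmatize_tokens)
data.head()
```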
We really care (and want to check) that honest, protocol-following validators, reap the highest (in-protocol) rewards. We'll get a feel for what this means here by pitting different validator behaviours against each other.
To introduce a bit of terminology, we are creating an _agent-based model_ here. We do not explicitly code a scenario that agents follow, but program simple update and interaction rules, let them unfold for a while, and then look at the tape to infer conclusions.
We'll distinguish between _strategies_ and _behaviours_. A validator can do several things: attest, propose, or do nothing. We'll call _strategies_ instantiations of these particular actions. For instance, a validator may employ the strategy of always attesting correctly, or flipping a coin and half the time attesting correctly, half the time not attesting at all (a pretty dumb strategy if you ask me).
We use the word _behaviour_ to describe the general articulation of these strategies: what to play and _when_. We explored in the [main notebook](br2050.html) the behaviour of the [ASAP validator](https://github.com/barnabemonnot/beaconrunner/blob/master/notebooks/thunderdome/validators/ASAPValidator.py), who does something as early as possible according to the specs (at the beginning of the slot for a block proposal; a third of a slot, i.e., 4 seconds in _or_ whenever the block for that slot is received for an attestation).
In this notebook, we'll look at a different behaviour too: the [Prudent validator](https://github.com/barnabemonnot/beaconrunner/blob/master/notebooks/thunderdome/validators/PrudentValidator.py). Although validators are expected to propose their block as early as possible, network delays could prevent attesters from receiving the block before a third of the slot has elapsed. ASAP validators would attest anyways, at the risk of voting on the wrong head, before receiving the block for that slot. Prudent validators hedge their bets a bit:
- If Prudent validators receive the block for their attesting slot, they publish their attestation with the head set to that block.
- If more than 8 seconds have elapsed into the slot, and they still haven't received a block, they attest for the head they know about.
In other words, although prudent validators are willing to wait to see if the block will turn up, there is a limit to how long they will wait: they need to communicate their attestations in a timely manner after all! They only differ from ASAPs by how long they are willing to wait.
### Rewards under consideration
To see how this behaviour impacts the payoff received by ASAP and prudent validators, let's dive into the rewards and penalties schedule for attesters (we won't look at rewards from block proposing here). Note first the following two things:
- Validators receive a bonus for attesting on the correct head. Attesting too early means possibly losing out on this bonus if the head specified in the attestation is incorrect.
- Validators receive a bonus for early inclusion of their attestation in a block. This means that they should attest relatively soon, and cannot wait forever to see if a missing block will turn up.
You can see [here in the specs](https://github.com/ethereum/eth2.0-specs/blob/579da6d2dc734b269dbf67aa1004b54bb9449784/specs/phase0/beacon-chain.md#get_attestation_deltas) how rewards and penalties are computed at the end of each epoch. Let's unpack the computations below.
### Attester rewards in eth2
A _base reward_ $\texttt{BR}$ is computed that depends only on the validator's effective balance (maximum 32 ETH) and the current total active balance (the total effective balance of _all_ active validators). Letting $i$ denote our validator,
$$ \texttt{BR[i]} = \texttt{balance[i]} \cdot \frac{\texttt{BRF}}{\sqrt{\texttt{total\_balance}} \cdot \texttt{BRPE}} $$
$\texttt{BRPE}$ stands for _base rewards per epoch_. A validator can be rewarded for
1. Attesting to the correct source, yielding $\texttt{BR[i]}$.
2. Attesting to the correct target, yielding $\texttt{BR[i]}$.
3. Attesting to the correct head, yielding $\texttt{BR[i]}$.
4. Early inclusion, yielding at most $\texttt{BR[i]} \Big( 1 - \frac{1}{\texttt{PRQ}} \Big)$ with $\texttt{PRQ}$ the _proposer reward quotient_.
These four items are why $\texttt{BRPE}$ is set to 4 (at least in phase 0). $\texttt{BRF}$, the _base reward factor_, is a scaling constant currently set at $2^6 = 64$.
For each of the first three items, we scale the base reward by the fraction of the active stake who correctly attested for the item. If $\rho_s$ is the fraction of the stake who attested correctly for the source, then a validator $i$ with a correct source receives $\rho_s \cdot \texttt{BR[i]}$. However, failure to attest or attesting for an incorrect source, target or head incurs the full $\texttt{BR[i]}$ as penalty.
Now, let's dig deeper into item 4, early inclusion. Attestations may be included at least one slot after the slot they are attesting for, called the _committee slot_. If they are included in the block immediately following their committee slot, they receive the full base reward, minus the _proposer reward_, endowed to the block producer (see table below). If they are included in the block two slots after their committee slot, they receive half. Three blocks later, a third, etc. Attestations must be included at the latest one full epoch after their committee slot. This means the smallest inclusion reward is 1/32 of the base reward (assuming `SLOTS_PER_EPOCH` is equal to 32).
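To make these formulas concrete, here is a rough sketch (not the spec code) of the base reward and the inclusion-delay reward described above. The integer divisions mimic the spec's Gwei arithmetic; $\texttt{BRF} = 64$ and $\texttt{BRPE} = 4$ come from the text, while $\texttt{PRQ} = 8$ is our assumption for the proposer reward quotient.
```python
from math import isqrt

# Rough sketch, not the spec code. Balances are in Gwei; integer division mirrors the spec.
def base_reward(effective_balance, total_balance, BRF=64, BRPE=4):
    return effective_balance * BRF // isqrt(total_balance) // BRPE

def inclusion_reward(br, inclusion_delay, PRQ=8):  # PRQ = 8 is our assumption here
    proposer_reward = br // PRQ
    return (br - proposer_reward) // inclusion_delay

br = base_reward(32 * 10**9, 500_000 * 10**9)  # a 32 ETH validator, ~500k ETH of total stake
print(br, inclusion_reward(br, 1), inclusion_reward(br, 2), inclusion_reward(br, 32))
```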
These are the attester rewards in the best of times. Note, however, that the penalties are not symmetric. Whenever a validator fails to attest for one of the first three items, or attests incorrectly, the penalty is equal to the whole base reward. No discount!
What about the worst of times? From the perspective of rewards and penalties, this happens when the chain is not finalising epochs anymore (a situation we explored in a [different notebook](br2049.html)). In this case, we want to "leak" the stake of inactive validators, who are preventing finalisation. A [recent modification](https://github.com/ethereum/eth2.0-specs/pull/1830) ensures that validators who perform optimally (i.e., attest correctly to all three items _and_ are included in the block immediately following their committee slot) do not suffer any losses. Meanwhile, validators who are not attesting for the correct target (a key ingredient for finalisation) suffer increasingly painful consequences, as the _finality delay_ (the gap between the current epoch and the last finalised epoch) grows.
This is all synthesised in the following table. "IL" items refer to "Inactivity Leak": the rewards and penalties during these worst of times.

Let's explore a few examples. We note at the end of the formula whether the result is positive (the validator makes a profit) or negative (the validator makes a loss).
- A validator who gets everything correct, and whose attestations are included 2 slots after their committee slot, while the chain is finalising, reaps:
$$ \Big( \rho_s + \rho_t + \rho_h + \frac{1}{2} (1 - \frac{1}{\texttt{PRQ}}) \Big) \texttt{BR[i]} > 0 $$
- A validator who gets everything correct, and whose attestations are included 2 slots after their committee slot, while the chain is _not finalising_, reaps:
$$ \Big(3 + \frac{1}{2} (1 - \frac{1}{\texttt{PRQ}}) - \texttt{BRPE} + \frac{1}{\texttt{PRQ}} \Big) \texttt{BR[i]} = \Big(-\frac{1}{2} + \frac{1}{2\texttt{PRQ}} \Big) \texttt{BR[i]} < 0 $$
- A validator who gets everything correct _except the target_, and whose attestations are included 2 slots after their committee slot, while the chain is _not finalising_ with finality delay $f$, reaps:
$$ \Big(1 - 1 + 1 + \frac{1}{2} (1 - \frac{1}{\texttt{PRQ}}) - \texttt{BRPE} + \frac{1}{\texttt{PRQ}} \Big) \texttt{BR[i]} - \texttt{balance[i]} \cdot \frac{f}{\texttt{IPQ}} = \Big(-\frac{5}{2} + \frac{1}{2\texttt{PRQ}} \Big) \texttt{BR[i]} - \texttt{balance[i]} \cdot \frac{f}{\texttt{IPQ}} < 0 $$
### Some hypotheses before simulating
In a network with perfect (or close to perfect) latency and honest proposers, we don't expect ASAP and prudent validator earnings to differ significantly. But under real-world conditions, the behaviour of prudent validators appears more _robust_ than that of ASAP validators. By robust, we mean that prudent validators perform better under slight variations of the parameters, e.g., the average network latency.
When network delays are high or block proposers are lazy (meaning they don't propagate their blocks in time, perhaps because they are being prudent in order to collect more incoming attestations), prudent validators stand to reap the "correct head" reward more often. On the other hand, when network delays are really, really high, perhaps proposing ASAP is a better strategy, as it ensures at least _something_ goes through, even if it's possibly incorrect.
## _"Two nodes enter! One node leaves!"_
Let's pit `PrudentValidator`s against `ASAP`s and observe the result. In this simulation, we let `SLOTS_PER_EPOCH` equal 4 and simulate a small number of validators. Runs are now random: sometimes the network updates, sometimes it doesn't. We'll set it up later so that on average messages propagate one step further on the network every 4 seconds.
We first load all the necessary packages. The remainder will look a lot like what we did in the [main notebook](br2050.html), check it out for more background!
```
import importlib
import types
from eth2spec.config.config_util import prepare_config
from eth2spec.utils.ssz.ssz_impl import hash_tree_root
import os, sys
sys.path.insert(1, os.path.realpath(os.path.pardir) + "/notebooks/thunderdome")
import beaconrunner as br
prepare_config(".", "fast.yaml")
br.reload_package(br)
```
We then create our _observers_, to allow us to record interesting metrics at each simulation step.
```
def current_slot(params, step, sL, s, _input):
return ("current_slot", s["network"].validators[0].data.slot)
def average_balance_prudent(params, step, sL, s, _input):
validators = s["network"].validators
validator = validators[0]
head = br.specs.get_head(validator.store)
current_state = validator.store.block_states[head]
current_epoch = br.specs.get_current_epoch(current_state)
prudent_indices = [i for i, v in enumerate(validators) if v.validator_behaviour == "prudent"]
prudent_balances = [b for i, b in enumerate(current_state.balances) if i in prudent_indices]
return ("average_balance_prudent", br.utils.eth2.gwei_to_eth(sum(prudent_balances) / float(len(prudent_indices))))
def average_balance_asap(params, step, sL, s, _input):
validators = s["network"].validators
validator = validators[0]
head = br.specs.get_head(validator.store)
current_state = validator.store.block_states[head]
current_epoch = br.specs.get_current_epoch(current_state)
asap_indices = [i for i, v in enumerate(validators) if v.validator_behaviour == "asap"]
asap_balances = [b for i, b in enumerate(current_state.balances) if i in asap_indices]
return ("average_balance_asap", br.utils.eth2.gwei_to_eth(sum(asap_balances) / float(len(asap_indices))))
observers = {
"current_slot": current_slot,
"average_balance_prudent": average_balance_prudent,
"average_balance_asap": average_balance_asap,
}
```
And define a "main" function -- in this case, `simulate_thunderdome` -- to run the simulation. The function returns a `pandas` dataframe containing the metrics recorded throughout the run.
```
from random import sample
from beaconrunner.validators.ASAPValidator import ASAPValidator
from beaconrunner.validators.PrudentValidator import PrudentValidator
def simulate_once(network_sets, num_run, num_validators, network_update_rate):
# Half our validators are prudent, the others are ASAPs
num_prudent = int(num_validators / 2)
# We sample the position on the p2p network of prudent validators randomly
prudentset = set(sample(range(num_validators), num_prudent))
validators = []
# Initiate validators
for i in range(num_validators):
if i in prudentset:
new_validator = PrudentValidator(i)
else:
new_validator = ASAPValidator(i)
validators.append(new_validator)
# Create a genesis state
genesis_state = br.simulator.get_genesis_state(validators)
# Validators load the state
[v.load_state(genesis_state.copy()) for v in validators]
br.simulator.skip_genesis_block(validators) # forward time by SECONDS_PER_SLOT
network = br.network.Network(validators = validators, sets=network_sets)
parameters = br.simulator.SimulationParameters({
"num_epochs": 20,
"num_run": num_run,
"frequency": 1,
"network_update_rate": network_update_rate,
})
return br.simulator.simulate(network, parameters, observers)
%%capture
import pandas as pd
num_validators = 12
# Create the network peers
set_a = br.network.NetworkSet(validators=list(range(0, int(num_validators * 2 / 3.0))))
set_b = br.network.NetworkSet(validators=list(range(int(num_validators / 2.0), num_validators)))
network_sets = list([set_a, set_b])
num_runs = 40
network_update_rate = 0.25
df = pd.concat([simulate_once(network_sets, num_run, num_validators, network_update_rate) for num_run in range(num_runs)])
```
To do a fair number of runs (40) simulating a good number of epochs (20), we set a low number of validators (12). Since we are more interested in comparing individual rewards between ASAP and Prudent validators than in macro-properties or the scalability of the chain, this is not a bad thing to do (and it speeds things up quite a bit).
Note here that we keep the same network topology across all our runs. However, validator types are placed randomly over the network with each new run, with always 50% of them Prudent and the other 50% ASAPs.
Since we have 4 slots per epoch, and 20 epochs, let's read the average balances at slot 81, after the 20th epoch rewards and penalties were computed.
```
df[df.current_slot == 81][['average_balance_prudent', 'average_balance_asap']].describe()
```
It doesn't seem like a big difference, yet on average prudent validators have higher earnings than ASAPs! Taking into account the [standard error](https://en.wikipedia.org/wiki/Standard_error), the difference between the means appears to be [statistically significant](https://www.investopedia.com/terms/s/statistically_significant.asp), meaning that even though the difference is small, it's not likely to be due to chance alone (while we won't go down this path here, there are statistical tests we could use to test this more precisely if we wished). To see if our intuition is correct, let's chart the ensemble mean over the 40 runs too:
```
x = df.groupby(["current_slot"]).mean().reset_index().plot("current_slot", ["average_balance_prudent", "average_balance_asap"])
```
We see that, as we extend time, prudent validators definitely overtake ASAP validators. Even though 20 epochs is not that long -- at 12 seconds per slot, and 32 slots per epoch, 20 epochs is approximately 2 hours -- over time these differences accumulate.
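As a side note, the statistical test alluded to above can be run directly on the final-slot balances. Here is a minimal sketch, assuming `scipy` is available and `df` is the dataframe produced by the runs above:
```python
# Sketch: Welch's t-test on the average balances at the last recorded slot.
from scipy import stats

final = df[df.current_slot == 81]
t_stat, p_value = stats.ttest_ind(final.average_balance_prudent,
                                  final.average_balance_asap,
                                  equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```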
## Try it out!
We specified the ASAP and prudent validators as two very simple Python classes. For instance, this is how the prudent validator's attestation behaviour is implemented:
```python
def attest(self, known_items) -> Optional[specs.Attestation]:
# Not the moment to attest
if self.data.current_attest_slot != self.data.slot:
return None
time_in_slot = (self.store.time - self.store.genesis_time) % SECONDS_PER_SLOT
# Too early in the slot / didn't receive block
if not self.data.received_block and time_in_slot < 8:
return None
# Already attested for this slot
if self.data.last_slot_attested == self.data.slot:
return None
# honest attest
return honest_attest(self, known_items)
```
You too can specify your own agent behaviours following the simple validator API documented [here](https://barnabemonnot.com/beaconrunner/build/html/validatorlib.html), and have them attest and produce blocks in a simulated beacon chain environment following the steps outlined above.
Validators consume information contained in their `data` attribute. If you find yourself requiring more inputs from the simulation to program your validators, try opening an issue in the [Beacon Runner repo](https://github.com/barnabemonnot/beaconrunner).
## (Bonus) Better network
When latency decreases (faster propagation), are the effects as strong? We set the network update rate to 0.9, meaning that messages almost always propagate one step further on the network at each step.
```
%%capture
network_update_rate = 0.9
df = pd.concat([simulate_once(network_sets, num_run, num_validators, network_update_rate) for num_run in range(num_runs)])
df.groupby(["current_slot"]).mean().reset_index().plot("current_slot", ["average_balance_prudent", "average_balance_asap"])
```
It is not as advantageous to be Prudent now: their payoffs are broadly similar to those of the ASAPs. But it doesn't hurt either.
# CS224N Assignment 1: Exploring Word Vectors (25 Points)
Welcome to CS224n!
Before you start, make sure you read the README.txt in the same directory as this notebook.
```
# All Import Statements Defined Here
# Note: Do not add to this list.
# All the dependencies you need, can be installed by running .
# ----------------
import sys
assert sys.version_info[0]==3
assert sys.version_info[1] >= 5
from gensim.models import KeyedVectors
from gensim.test.utils import datapath
import pprint
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 5]
import nltk
nltk.download('reuters')
from nltk.corpus import reuters
import numpy as np
import random
import scipy as sp
from sklearn.decomposition import TruncatedSVD
from sklearn.decomposition import PCA
START_TOKEN = '<START>'
END_TOKEN = '<END>'
np.random.seed(0)
random.seed(0)
# ----------------
```
## Please Write Your SUNet ID Here:
## Word Vectors
Word Vectors are often used as a fundamental component for downstream NLP tasks, e.g. question answering, text generation, translation, etc., so it is important to build some intuitions as to their strengths and weaknesses. Here, you will explore two types of word vectors: those derived from *co-occurrence matrices*, and those derived via *word2vec*.
**Assignment Notes:** Please make sure to save the notebook as you go along. Submission Instructions are located at the bottom of the notebook.
**Note on Terminology:** The terms "word vectors" and "word embeddings" are often used interchangeably. The term "embedding" refers to the fact that we are encoding aspects of a word's meaning in a lower dimensional space. As [Wikipedia](https://en.wikipedia.org/wiki/Word_embedding) states, "*conceptually it involves a mathematical embedding from a space with one dimension per word to a continuous vector space with a much lower dimension*".
## Part 1: Count-Based Word Vectors (10 points)
Most word vector models start from the following idea:
*You shall know a word by the company it keeps ([Firth, J. R. 1957:11](https://en.wikipedia.org/wiki/John_Rupert_Firth))*
Many word vector implementations are driven by the idea that similar words, i.e., (near) synonyms, will be used in similar contexts. As a result, similar words will often be spoken or written along with a shared subset of words, i.e., contexts. By examining these contexts, we can try to develop embeddings for our words. With this intuition in mind, many "old school" approaches to constructing word vectors relied on word counts. Here we elaborate upon one of those strategies, *co-occurrence matrices* (for more information, see [here](http://web.stanford.edu/class/cs124/lec/vectorsemantics.video.pdf) or [here](https://medium.com/data-science-group-iitr/word-embedding-2d05d270b285)).
### Co-Occurrence
A co-occurrence matrix counts how often things co-occur in some environment. Given some word $w_i$ occurring in the document, we consider the *context window* surrounding $w_i$. Supposing our fixed window size is $n$, then this is the $n$ preceding and $n$ subsequent words in that document, i.e. words $w_{i-n} \dots w_{i-1}$ and $w_{i+1} \dots w_{i+n}$. We build a *co-occurrence matrix* $M$, which is a symmetric word-by-word matrix in which $M_{ij}$ is the number of times $w_j$ appears inside $w_i$'s window.
**Example: Co-Occurrence with Fixed Window of n=1**:
Document 1: "all that glitters is not gold"
Document 2: "all is well that ends well"
| * | START | all | that | glitters | is | not | gold | well | ends | END |
|----------|-------|-----|------|----------|------|------|-------|------|------|-----|
| START | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| all | 2 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| that | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 |
| glitters | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| is | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 |
| not | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
| gold | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
| well | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 |
| ends | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| END | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
**Note:** In NLP, we often add START and END tokens to represent the beginning and end of sentences, paragraphs or documents. In this case we imagine START and END tokens encapsulating each document, e.g., "START All that glitters is not gold END", and include these tokens in our co-occurrence counts.
The rows (or columns) of this matrix provide one type of word vectors (those based on word-word co-occurrence), but the vectors will be large in general (linear in the number of distinct words in a corpus). Thus, our next step is to run *dimensionality reduction*. In particular, we will run *SVD (Singular Value Decomposition)*, which is a kind of generalized *PCA (Principal Components Analysis)* to select the top $k$ principal components. Here's a visualization of dimensionality reduction with SVD. In this picture our co-occurrence matrix is $A$ with $n$ rows corresponding to $n$ words. We obtain a full matrix decomposition, with the singular values ordered in the diagonal $S$ matrix, and our new, shorter length-$k$ word vectors in $U_k$.

This reduced-dimensionality co-occurrence representation preserves semantic relationships between words, e.g. *doctor* and *hospital* will be closer than *doctor* and *dog*.
**Notes:** If you can barely remember what an eigenvalue is, here's [a slow, friendly introduction to SVD](https://davetang.org/file/Singular_Value_Decomposition_Tutorial.pdf). If you want to learn more thoroughly about PCA or SVD, feel free to check out lectures [7](https://web.stanford.edu/class/cs168/l/l7.pdf), [8](http://theory.stanford.edu/~tim/s15/l/l8.pdf), and [9](https://web.stanford.edu/class/cs168/l/l9.pdf) of CS168. These course notes provide a great high-level treatment of these general purpose algorithms. Though, for the purpose of this class, you only need to know how to extract the k-dimensional embeddings by utilizing pre-programmed implementations of these algorithms from the numpy, scipy, or sklearn python packages. In practice, it is challenging to apply full SVD to large corpora because of the memory needed to perform PCA or SVD. However, if you only want the top $k$ vector components for relatively small $k$ — known as *[Truncated SVD](https://en.wikipedia.org/wiki/Singular_value_decomposition#Truncated_SVD)* — then there are reasonably scalable techniques to compute those iteratively.
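As a tiny illustration of the API you will use below (not part of the assignment, and with a made-up matrix), Truncated SVD reduces an $n \times n$ count matrix to an $n \times k$ embedding matrix in a single call:
```
# Tiny illustration (not part of the assignment): reduce a random 10x10 "count" matrix to k=2 dimensions.
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.RandomState(0)
toy_counts = rng.randint(0, 5, size=(10, 10)).astype(float)
toy_embeddings = TruncatedSVD(n_components=2, n_iter=10).fit_transform(toy_counts)
print(toy_embeddings.shape)  # (10, 2)
```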
### Plotting Co-Occurrence Word Embeddings
Here, we will be using the Reuters (business and financial news) corpus. If you haven't run the import cell at the top of this page, please run it now (click it and press SHIFT-RETURN). The corpus consists of 10,788 news documents totaling 1.3 million words. These documents span 90 categories and are split into train and test. For more details, please see https://www.nltk.org/book/ch02.html. We provide a `read_corpus` function below that pulls out only articles from the "crude" (i.e. news articles about oil, gas, etc.) category. The function also adds START and END tokens to each of the documents, and lowercases words. You do **not** have to perform any other kind of pre-processing.
```
def read_corpus(category="crude"):
""" Read files from the specified Reuter's category.
Params:
category (string): category name
Return:
list of lists, with words from each of the processed files
"""
files = reuters.fileids(category)
return [[START_TOKEN] + [w.lower() for w in list(reuters.words(f))] + [END_TOKEN] for f in files]
```
Let's have a look at what these documents are like…
```
reuters_corpus = read_corpus()
pprint.pprint(reuters_corpus[:3], compact=True, width=100)
```
### Question 1.1: Implement `distinct_words` [code] (2 points)
Write a method to work out the distinct words (word types) that occur in the corpus. You can do this with `for` loops, but it's more efficient to do it with Python list comprehensions. In particular, [this](https://coderwall.com/p/rcmaea/flatten-a-list-of-lists-in-one-line-in-python) may be useful to flatten a list of lists. If you're not familiar with Python list comprehensions in general, here's [more information](https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html).
You may find it useful to use [Python sets](https://www.w3schools.com/python/python_sets.asp) to remove duplicate words.
```
def distinct_words(corpus):
""" Determine a list of distinct words for the corpus.
Params:
corpus (list of list of strings): corpus of documents
Return:
corpus_words (list of strings): list of distinct words across the corpus, sorted (using python 'sorted' function)
num_corpus_words (integer): number of distinct words across the corpus
"""
corpus_words = []
num_corpus_words = -1
# ------------------
# Write your implementation here.
    corpus_words = sorted(set(word for document in corpus for word in document))
num_corpus_words = len(corpus_words)
# ------------------
return corpus_words, num_corpus_words
# ---------------------
# Run this sanity check
# Note that this is not an exhaustive check for correctness.
# ---------------------
# Define toy corpus
test_corpus = ["START All that glitters isn't gold END".split(" "), "START All's well that ends well END".split(" ")]
test_corpus_words, num_corpus_words = distinct_words(test_corpus)
# Correct answers
ans_test_corpus_words = sorted(list(set(["START", "All", "ends", "that", "gold", "All's", "glitters", "isn't", "well", "END"])))
ans_num_corpus_words = len(ans_test_corpus_words)
# Test correct number of words
assert(num_corpus_words == ans_num_corpus_words), "Incorrect number of distinct words. Correct: {}. Yours: {}".format(ans_num_corpus_words, num_corpus_words)
# Test correct words
assert (test_corpus_words == ans_test_corpus_words), "Incorrect corpus_words.\nCorrect: {}\nYours: {}".format(str(ans_test_corpus_words), str(test_corpus_words))
# Print Success
print ("-" * 80)
print("Passed All Tests!")
print ("-" * 80)
```
### Question 1.2: Implement `compute_co_occurrence_matrix` [code] (3 points)
Write a method that constructs a co-occurrence matrix for a certain window-size $n$ (with a default of 4), considering words $n$ before and $n$ after the word in the center of the window. Here, we start to use `numpy (np)` to represent vectors, matrices, and tensors. If you're not familiar with NumPy, there's a NumPy tutorial in the second half of this cs231n [Python NumPy tutorial](http://cs231n.github.io/python-numpy-tutorial/).
```
def compute_co_occurrence_matrix(corpus, window_size=4):
""" Compute co-occurrence matrix for the given corpus and window_size (default of 4).
Note: Each word in a document should be at the center of a window. Words near edges will have a smaller
number of co-occurring words.
For example, if we take the document "START All that glitters is not gold END" with window size of 4,
"All" will co-occur with "START", "that", "glitters", "is", and "not".
Params:
corpus (list of list of strings): corpus of documents
window_size (int): size of context window
Return:
M (numpy matrix of shape (number of corpus words, number of corpus words)):
Co-occurence matrix of word counts.
The ordering of the words in the rows/columns should be the same as the ordering of the words given by the distinct_words function.
word2Ind (dict): dictionary that maps word to index (i.e. row/column number) for matrix M.
"""
words, num_words = distinct_words(corpus)
M = None
word2Ind = {}
# ------------------
# Write your implementation here.
    M = np.zeros((num_words, num_words))
    word2Ind = {word: i for i, word in enumerate(words)}
    for sentence in corpus:
        for center_idx, center_word in enumerate(sentence):
            row = word2Ind[center_word]
            # Count each word within `window_size` positions to the left and right of the center word.
            for i in range(1, window_size + 1):
                if center_idx - i >= 0:
                    M[row, word2Ind[sentence[center_idx - i]]] += 1
                if center_idx + i < len(sentence):
                    M[row, word2Ind[sentence[center_idx + i]]] += 1
# ------------------
return M, word2Ind
# ---------------------
# Run this sanity check
# Note that this is not an exhaustive check for correctness.
# ---------------------
# Define toy corpus and get student's co-occurrence matrix
test_corpus = ["START All that glitters isn't gold END".split(" "), "START All's well that ends well END".split(" ")]
M_test, word2Ind_test = compute_co_occurrence_matrix(test_corpus, window_size=1)
# Correct M and word2Ind
M_test_ans = np.array(
[[0., 0., 0., 1., 0., 0., 0., 0., 1., 0.,],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 1.,],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 1.,],
[1., 1., 0., 0., 0., 0., 0., 0., 0., 0.,],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,],
[0., 0., 0., 0., 0., 0., 0., 1., 1., 0.,],
[0., 0., 1., 0., 0., 0., 0., 1., 0., 0.,],
[0., 0., 0., 0., 0., 1., 1., 0., 0., 0.,],
[1., 0., 0., 0., 1., 1., 0., 0., 0., 1.,],
[0., 1., 1., 0., 1., 0., 0., 0., 1., 0.,]]
)
word2Ind_ans = {'All': 0, "All's": 1, 'END': 2, 'START': 3, 'ends': 4, 'glitters': 5, 'gold': 6, "isn't": 7, 'that': 8, 'well': 9}
# Test correct word2Ind
assert (word2Ind_ans == word2Ind_test), "Your word2Ind is incorrect:\nCorrect: {}\nYours: {}".format(word2Ind_ans, word2Ind_test)
# Test correct M shape
assert (M_test.shape == M_test_ans.shape), "M matrix has incorrect shape.\nCorrect: {}\nYours: {}".format(M_test.shape, M_test_ans.shape)
# Test correct M values
for w1 in word2Ind_ans.keys():
idx1 = word2Ind_ans[w1]
for w2 in word2Ind_ans.keys():
idx2 = word2Ind_ans[w2]
student = M_test[idx1, idx2]
correct = M_test_ans[idx1, idx2]
if student != correct:
print("Correct M:")
print(M_test_ans)
print("Your M: ")
print(M_test)
raise AssertionError("Incorrect count at index ({}, {})=({}, {}) in matrix M. Yours has {} but should have {}.".format(idx1, idx2, w1, w2, student, correct))
# Print Success
print ("-" * 80)
print("Passed All Tests!")
print ("-" * 80)
```
### Question 1.3: Implement `reduce_to_k_dim` [code] (1 point)
Construct a method that performs dimensionality reduction on the matrix to produce k-dimensional embeddings. Use SVD to take the top k components and produce a new matrix of k-dimensional embeddings.
**Note:** All of numpy, scipy, and scikit-learn (`sklearn`) provide *some* implementation of SVD, but only scipy and sklearn provide an implementation of Truncated SVD, and only sklearn provides an efficient randomized algorithm for calculating large-scale Truncated SVD. So please use [sklearn.decomposition.TruncatedSVD](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html).
```
def reduce_to_k_dim(M, k=2):
""" Reduce a co-occurence count matrix of dimensionality (num_corpus_words, num_corpus_words)
to a matrix of dimensionality (num_corpus_words, k) using the following SVD function from Scikit-Learn:
- http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html
Params:
            M (numpy matrix of shape (number of corpus words, number of corpus words)): co-occurrence matrix of word counts
k (int): embedding size of each word after dimension reduction
Return:
            M_reduced (numpy matrix of shape (number of corpus words, k)): matrix of k-dimensional word embeddings.
In terms of the SVD from math class, this actually returns U * S
"""
n_iters = 10 # Use this parameter in your call to `TruncatedSVD`
M_reduced = None
print("Running Truncated SVD over %i words..." % (M.shape[0]))
# ------------------
# Write your implementation here.
from sklearn.decomposition import TruncatedSVD
    M_reduced = TruncatedSVD(n_components=k, n_iter=n_iters).fit_transform(M)
# ------------------
print("Done.")
return M_reduced
# ---------------------
# Run this sanity check
# Note that this is not an exhaustive check for correctness
# In fact we only check that your M_reduced has the right dimensions.
# ---------------------
# Define toy corpus and run student code
test_corpus = ["START All that glitters isn't gold END".split(" "), "START All's well that ends well END".split(" ")]
M_test, word2Ind_test = compute_co_occurrence_matrix(test_corpus, window_size=1)
M_test_reduced = reduce_to_k_dim(M_test, k=2)
# Test proper dimensions
assert (M_test_reduced.shape[0] == 10), "M_reduced has {} rows; should have {}".format(M_test_reduced.shape[0], 10)
assert (M_test_reduced.shape[1] == 2), "M_reduced has {} columns; should have {}".format(M_test_reduced.shape[1], 2)
# Print Success
print ("-" * 80)
print("Passed All Tests!")
print ("-" * 80)
```
### Question 1.4: Implement `plot_embeddings` [code] (1 point)
Here you will write a function to plot a set of 2D vectors in 2D space. For graphs, we will use Matplotlib (`plt`).
For this example, you may find it useful to adapt [this code](https://www.pythonmembers.club/2018/05/08/matplotlib-scatter-plot-annotate-set-text-at-label-each-point/). In the future, a good way to make a plot is to look at [the Matplotlib gallery](https://matplotlib.org/gallery/index.html), find a plot that looks somewhat like what you want, and adapt the code they give.
```
def plot_embeddings(M_reduced, word2Ind, words):
""" Plot in a scatterplot the embeddings of the words specified in the list "words".
NOTE: do not plot all the words listed in M_reduced / word2Ind.
Include a label next to each point.
Params:
            M_reduced (numpy matrix of shape (number of unique words in the corpus, k)): matrix of k-dimensional word embeddings
word2Ind (dict): dictionary that maps word to indices for matrix M
words (list of strings): words whose embeddings we want to visualize
"""
# ------------------
# Write your implementation here.
import matplotlib.pyplot as plt
space = 0.01
    for wd in words:
        # Look up each word's row via word2Ind rather than assuming M_reduced is ordered like `words`
        idx = word2Ind[wd]
        x = M_reduced[idx, 0]
        y = M_reduced[idx, 1]
        plt.scatter(x, y, marker='x', color='r')
        plt.text(x + space, y + space, wd, fontsize=8)
plt.show()
# ------------------
# ---------------------
# Run this sanity check
# Note that this is not an exhaustive check for correctness.
# The plot produced should look like the "test solution plot" depicted below.
# ---------------------
print ("-" * 80)
print ("Outputted Plot:")
M_reduced_plot_test = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1], [0, 0]])
word2Ind_plot_test = {'test1': 0, 'test2': 1, 'test3': 2, 'test4': 3, 'test5': 4}
words = ['test1', 'test2', 'test3', 'test4', 'test5']
plot_embeddings(M_reduced_plot_test, word2Ind_plot_test, words)
print ("-" * 80)
```
<font color=red>**Test Plot Solution**</font>
<br>
<img src="imgs/test_plot.png" width=40% style="float: left;"> </img>
### Question 1.5: Co-Occurrence Plot Analysis [written] (3 points)
Now we will put together all the parts you have written! We will compute the co-occurrence matrix with a fixed window of 4, over the Reuters "crude" corpus. Then we will use TruncatedSVD to compute 2-dimensional embeddings of each word. TruncatedSVD returns U\*S, so we normalize the returned vectors so that they all lie around the unit circle (therefore closeness is directional closeness). **Note**: The line of code below that does the normalizing uses the NumPy concept of *broadcasting*. If you don't know about broadcasting, check out
[Computation on Arrays: Broadcasting by Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html).
Run the below cell to produce the plot. It'll probably take a few seconds to run. What clusters together in 2-dimensional embedding space? What doesn't cluster together that you might think should have? **Note:** "bpd" stands for "barrels per day" and is a commonly used abbreviation in crude oil topic articles.
```
# -----------------------------
# Run This Cell to Produce Your Plot
# ------------------------------
from time import time
t0 = time()
reuters_corpus = read_corpus()
M_co_occurrence, word2Ind_co_occurrence = compute_co_occurrence_matrix(reuters_corpus)
M_reduced_co_occurrence = reduce_to_k_dim(M_co_occurrence, k=2)
# Rescale (normalize) the rows to make them each of unit-length
M_lengths = np.linalg.norm(M_reduced_co_occurrence, axis=1)
M_normalized = M_reduced_co_occurrence / M_lengths[:, np.newaxis] # broadcasting
words = ['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']
plot_embeddings(M_normalized, word2Ind_co_occurrence, words)
print(f'Time to finish: {time() - t0:.2f}s')
```
#### <font color="red">Write your answer here.</font>
_Oil_ should cluster with _petroleum_, but _industry_ should be separate. _Energy_ should be further away because it doesn't necessarily have to do with _petroleum_.
## Part 2: Prediction-Based Word Vectors (15 points)
As discussed in class, more recently prediction-based word vectors have come into fashion, e.g. word2vec. Here, we shall explore the embeddings produced by word2vec. Please revisit the class notes and lecture slides for more details on the word2vec algorithm. If you're feeling adventurous, challenge yourself and try reading the [original paper](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf).
Then run the following cells to load the word2vec vectors into memory. **Note**: This might take several minutes.
```
def load_word2vec():
""" Load Word2Vec Vectors
Return:
            wv_from_bin: All 3 million embeddings, each of length 300
"""
import gensim.downloader as api
wv_from_bin = api.load("word2vec-google-news-300")
vocab = list(wv_from_bin.vocab.keys())
print("Loaded vocab size %i" % len(vocab))
return wv_from_bin
# -----------------------------------
# Run Cell to Load Word Vectors
# Note: This may take several minutes
# -----------------------------------
wv_from_bin = load_word2vec()
```
**Note: If you are running into out-of-memory issues on your local machine, try closing other applications to free more memory on your device. You may want to restart your machine so that you can free up extra memory. Then immediately run the Jupyter notebook and see if you can load the word vectors properly. If you still have problems loading the embeddings on your local machine after this, please follow the Piazza instructions on how to run remotely on Stanford Farmshare machines.**
### Reducing dimensionality of Word2Vec Word Embeddings
Let's directly compare the word2vec embeddings to those of the co-occurrence matrix. Run the following cells to:
1. Put the 3 million word2vec vectors into a matrix M
2. Run reduce_to_k_dim (your Truncated SVD function) to reduce the vectors from 300-dimensional to 2-dimensional.
```
def get_matrix_of_vectors(wv_from_bin, required_words=['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']):
""" Put the word2vec vectors into a matrix M.
Param:
wv_from_bin: KeyedVectors object; the 3 million word2vec vectors loaded from file
Return:
M: numpy matrix shape (num words, 300) containing the vectors
word2Ind: dictionary mapping each word to its row number in M
"""
import random
words = list(wv_from_bin.vocab.keys())
print("Shuffling words ...")
random.shuffle(words)
words = words[:10000]
print("Putting %i words into word2Ind and matrix M..." % len(words))
word2Ind = {}
M = []
curInd = 0
for w in words:
try:
M.append(wv_from_bin.word_vec(w))
word2Ind[w] = curInd
curInd += 1
except KeyError:
continue
for w in required_words:
try:
M.append(wv_from_bin.word_vec(w))
word2Ind[w] = curInd
curInd += 1
except KeyError:
continue
M = np.stack(M)
print("Done.")
return M, word2Ind
# -----------------------------------------------------------------
# Run Cell to Reduce 300-Dimensional Word Embeddings to k Dimensions
# Note: This may take several minutes
# -----------------------------------------------------------------
M, word2Ind = get_matrix_of_vectors(wv_from_bin)
M_reduced = reduce_to_k_dim(M, k=2)
```
### Question 2.1: Word2Vec Plot Analysis [written] (4 points)
Run the cell below to plot the 2D word2vec embeddings for `['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']`.
What clusters together in 2-dimensional embedding space? What doesn't cluster together that you might think should have? How is the plot different from the one generated earlier from the co-occurrence matrix?
```
words = ['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']
plot_embeddings(M_reduced, word2Ind, words)
```
#### <font color="red">Write your answer here.</font>
This is actually closer to what I would expect, where there is no clear grouping except a more general grouping of everything. I would have expected _bpd_ to be closer to the main general grouping and not further away.
### Cosine Similarity
Now that we have word vectors, we need a way to quantify the similarity between individual words, according to these vectors. One such metric is cosine-similarity. We will be using this to find words that are "close" and "far" from one another.
We can think of n-dimensional vectors as points in n-dimensional space. If we take this perspective, L1 and L2 distances help quantify the amount of space "we must travel" to get between these two points. Another approach is to examine the angle between two vectors. From trigonometry we know that:
<img src="imgs/inner_product.png" width=20% style="float: center;"></img>
Instead of computing the actual angle, we can leave the similarity in terms of $similarity = cos(\Theta)$. Formally the [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity) $s$ between two vectors $p$ and $q$ is defined as:
$$s = \frac{p \cdot q}{||p|| ||q||}, \textrm{ where } s \in [-1, 1] $$
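For intuition, here is a minimal NumPy sketch of this formula. The helper name and the toy vectors are illustrative only; the rest of the assignment relies on GenSim's built-in similarity and distance methods instead.
```
import numpy as np

def cosine_similarity(p, q):
    """s = (p . q) / (||p|| * ||q||), always in [-1, 1]."""
    return np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))

p = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(p, 2 * p))   # parallel vectors   -> 1.0
print(cosine_similarity(p, -p))      # opposite direction -> -1.0
```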
### Question 2.2: Polysemous Words (2 points) [code + written]
Find a [polysemous](https://en.wikipedia.org/wiki/Polysemy) word (for example, "leaves" or "scoop") such that the top-10 most similar words (according to cosine similarity) contains related words from *both* meanings. For example, "leaves" has both "vanishes" and "stalks" in the top 10, and "scoop" has both "handed_waffle_cone" and "lowdown". You will probably need to try several polysemous words before you find one. Please state the polysemous word you discover and the multiple meanings that occur in the top 10. Why do you think many of the polysemous words you tried didn't work?
**Note**: You should use the `wv_from_bin.most_similar(word)` function to get the top 10 similar words. This function ranks all other words in the vocabulary with respect to their cosine similarity to the given word. For further assistance please check the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.FastTextKeyedVectors.most_similar)__.
```
# ------------------
# Write your polysemous word exploration code here.
from pprint import pprint
t1 = time()
words = list(wv_from_bin.vocab.keys())
random.seed(126)
random.shuffle(words)
words = words[:20]
sorted_list = []
for word in words:
sorted_list.append(sorted([(v, u, word) for u, v in wv_from_bin.most_similar(word)], reverse=True))
print(f'Time elapsed: {time() - t1:.2f}s')
pprint(sorted([ls for arr in sorted_list for ls in arr], reverse=True))
# ------------------
```
#### <font color="red">Write your answer here.</font>
Since the results differ for every seed, the words with the highest cosine similarity don't always seem related, although context could make them more relevant than they appear at first glance. High cosine similarity (>0.9) doesn't seem to be common, and most of the time the similar words tend to be similar in construction.
### Question 2.3: Synonyms & Antonyms (2 points) [code + written]
When considering Cosine Similarity, it's often more convenient to think of Cosine Distance, which is simply 1 - Cosine Similarity.
Find three words (w1,w2,w3) where w1 and w2 are synonyms and w1 and w3 are antonyms, but Cosine Distance(w1,w3) < Cosine Distance(w1,w2). For example, w1="happy" is closer to w3="sad" than to w2="cheerful".
Once you have found your example, please give a possible explanation for why this counter-intuitive result may have happened.
You should use the `wv_from_bin.distance(w1, w2)` function here in order to compute the cosine distance between two words. Please see the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.FastTextKeyedVectors.distance)__ for further assistance.
```
# ------------------
# Write your synonym & antonym exploration code here.
w1 = ""
w2 = ""
w3 = ""
w1_w2_dist = wv_from_bin.distance(w1, w2)
w1_w3_dist = wv_from_bin.distance(w1, w3)
print("Synonyms {}, {} have cosine distance: {}".format(w1, w2, w1_w2_dist))
print("Antonyms {}, {} have cosine distance: {}".format(w1, w3, w1_w3_dist))
# ------------------
```
#### <font color="red">Write your answer here.</font>
### Solving Analogies with Word Vectors
Word2Vec vectors have been shown to *sometimes* exhibit the ability to solve analogies.
As an example, for the analogy "man : king :: woman : x", what is x?
In the cell below, we show you how to use word vectors to find x. The `most_similar` function finds words that are most similar to the words in the `positive` list and most dissimilar from the words in the `negative` list. The answer to the analogy will be the word ranked most similar (largest numerical value).
**Note:** Further Documentation on the `most_similar` function can be found within the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.FastTextKeyedVectors.most_similar)__.
```
# Run this cell to answer the analogy -- man : king :: woman : x
pprint.pprint(wv_from_bin.most_similar(positive=['woman', 'king'], negative=['man']))
```
### Question 2.4: Finding Analogies [code + written] (2 Points)
Find an example of an analogy that holds according to these vectors (i.e. the intended word is ranked top). In your solution please state the full analogy in the form x:y :: a:b. If you believe the analogy is complicated, explain why the analogy holds in one or two sentences.
**Note**: You may have to try many analogies to find one that works!
```
# ------------------
# Write your analogy exploration code here.
pprint.pprint(wv_from_bin.most_similar(positive=[], negative=[]))
# ------------------
```
#### <font color="red">Write your answer here.</font>
### Question 2.5: Incorrect Analogy [code + written] (1 point)
Find an example of an analogy that does *not* hold according to these vectors. In your solution, state the intended analogy in the form x:y :: a:b, and state the (incorrect) value of b according to the word vectors.
```
# ------------------
# Write your incorrect analogy exploration code here.
pprint.pprint(wv_from_bin.most_similar(positive=[], negative=[]))
# ------------------
```
#### <font color="red">Write your answer here.</font>
### Question 2.6: Guided Analysis of Bias in Word Vectors [written] (1 point)
It's important to be cognizant of the biases (gender, race, sexual orientation, etc.) implicit in our word embeddings.
Run the cell below to examine (a) which terms are most similar to "woman" and "boss" and most dissimilar to "man", and (b) which terms are most similar to "man" and "boss" and most dissimilar to "woman". What do you find in the top 10?
```
# Run this cell
# Here `positive` indicates the list of words to be similar to and `negative` indicates the list of words to be
# most dissimilar from.
pprint.pprint(wv_from_bin.most_similar(positive=['woman', 'boss'], negative=['man']))
print()
pprint.pprint(wv_from_bin.most_similar(positive=['man', 'boss'], negative=['woman']))
```
#### <font color="red">Write your answer here.</font>
### Question 2.7: Independent Analysis of Bias in Word Vectors [code + written] (2 points)
Use the `most_similar` function to find another case where some bias is exhibited by the vectors. Please briefly explain the example of bias that you discover.
```
# ------------------
# Write your bias exploration code here.
pprint.pprint(wv_from_bin.most_similar(positive=[], negative=[]))
print()
pprint.pprint(wv_from_bin.most_similar(positive=[], negative=[]))
# ------------------
```
#### <font color="red">Write your answer here.</font>
### Question 2.8: Thinking About Bias [written] (1 point)
What might be the cause of these biases in the word vectors?
#### <font color="red">Write your answer here.</font>
# <font color="blue"> Submission Instructions</font>
1. Click the Save button at the top of the Jupyter Notebook.
2. Please make sure to have entered your SUNET ID above.
3. Select Cell -> All Output -> Clear. This will clear all the outputs from all cells (but will keep the content of all cells).
4. Select Cell -> Run All. This will run all the cells in order, and will take several minutes.
5. Once you've rerun everything, select File -> Download as -> PDF via LaTeX
6. Look at the PDF file and make sure all your solutions are there, displayed correctly. The PDF is the only thing your graders will see!
7. Submit your PDF on Gradescope.
# The Importance Of Being Scale-Invariant
The purpose of this notebook is to provide intuition behind the sample space of proportions, in addition to appropriate transformations that can aid the analysis of proportions (also referred to as compositions).
We will start by importing hand-crafted simulated data to explore what sort of insights can be gained from data of proportions.
```
import numpy as np
import matplotlib.pyplot as plt
from skbio.stats.composition import alr, alr_inv
from sim import sim1, sim1_truth, sim2
from util import default_ternary_labels, default_barplot
import ternary
np.random.seed(0)
%matplotlib inline
```
# Modeling Differential Abundance
The common goal of differential abundance analysis is to identify which features have "changed" across the experimental conditions. In my field, we are often trying to identify microbes or genes that have "changed" in abundance, to determine whether microbes have grown or declined across conditions. What we mean by "changed" is that we are
interested in determining whether the fold change across conditions is equal to one or not, in particular, $$\frac{A_i}{B_i} = 1$$ for abundances in conditions $A$ and $B$ for a given feature $i$.
We have deliberately highlighted the term "change", because this notion is no longer fully observable if we only have data on proportions. The reason is that we are missing a key variable of interest: the total number of individuals in each experimental condition.
Specifically, if we cannot directly observe $A_i$ or $B_i$, but can observe their proportions $p_{A_i}$, $p_{B_i}$, we can no longer make concrete statements about "change", because we cannot observe the total numbers of individuals $N_A$ and $N_B$. In particular, we have a bias term $\frac{N_A}{N_B}$ given by
$$\frac{A_i}{B_i} = \frac{N_A p_{A_i}}{N_B p_{B_i}} = \frac{N_A}{N_B} \times \frac{p_{A_i}}{p_{B_i}}$$
As a result, any statement of change that we make will be confounded by the change in $N$. To see an example, consider the following scenario.
```
x, y = sim1()
# Let's plot the proportions
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
t = np.arange(np.sum(y==0))
ax[0] = default_barplot(ax[0], x[y], t, ticks=False)
ax[0].set_xlabel('Group A Samples', fontsize=18)
ax[1] = default_barplot(ax[1], x[~y], t, ticks=False)
ax[1].set_xlabel('Group B Samples', fontsize=18)
plt.legend(framealpha=1, fontsize=14)
plt.tight_layout()
```
Here, we see that in Group A there are 3 parts, $x_1, x_2, x_3$, all with the same proportions. In Group B, the ratio of $x_1 : x_2 : x_3$ is now 1:1:2.
From this example, can we infer what happened? In particular
1. Did $x_3$ increase?
2. Did $x_1$ and $x_2$ both decrease?
3. Did options 1 and 2 both occur simultaneously?
In our particular example, we have access to the ground truth.
See below
```
x, y = sim1_truth()
fig, ax = plt.subplots(1, 2, figsize=(8, 4), sharey=True)
t = np.arange(np.sum(y==0))
ax[0] = default_barplot(ax[0], x[y], t, ticks=False)
ax[0].set_xlabel('Group A Samples', fontsize=18)
ax[0].set_ylabel('Abundances')
ax[1] = default_barplot(ax[1], x[~y], t, ticks=False)
ax[1].set_xlabel('Group B Samples', fontsize=18)
ax[1].set_ylabel('Abundances')
plt.legend(framealpha=1, fontsize=14)
plt.tight_layout()
```
In this particular example, we see that $x_3$ stayed constant, while $x_1$ and $x_2$ both decreased.
However, given only the information that was presented earlier, _all of those scenarios were possible_. In fact, we __cannot__ infer which features actually decreased if we only have access to the proportions.
To build an intuition for what we can and cannot say given the data available, we need a better understanding of what our sample space looks like. Our 3 proportions $x_1, x_2, x_3$ satisfy the constraint $x_1 + x_2 + x_3 = 1$; in other words, they live on a plane in the non-negative orthant of real space. If we were to visualize that plane, it would look as follows.
```
# Reload the original simulation dataset
x, y = sim1()
## Boundary and Gridlines
scale = 1
figure, tax = ternary.figure(scale=scale)
tax = default_ternary_labels(tax)
tax.scatter(x[y, :], marker='o', color='r', label="Group A")
tax.scatter(x[~y, :], marker='x', color='b', label="Group B")
plt.axis('off')
plt.tight_layout()
plt.legend(fontsize=18)
```
The diagram above is showing the plane in which all the possible values of $x_1, x_2, x_3$ can hold.
Furthermore, we have visualized the proportions of samples in Group A and Group B in this space.
As we can see, there is a clear separation between these two groups. But if we cannot determine which features have increased or decreased, how can we determine what is causing the separation?
The key here is understanding the concept of _scale invariance_. The reason we are having difficulty inferring which features are changing is that we have lost our ability to measure _scale_, which in our case is the totals $N_A$ and $N_B$. If we cannot measure _scale_, we must engineer quantities that are invariant to it.
One such scale-invariant quantity is a ratio: if we compute the ratio of two parts, the totals cancel out. Specifically, if we consider two features $i$ and $j$ and compute their ratios, the following holds
$$
\frac{p_{A_i} / p_{A_j}}{p_{B_i} / p_{B_j}} = \frac{A_i / A_j}{B_i / B_j}
$$
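A quick numeric sketch of this cancellation; the abundance values below are made up purely for illustration.
```
import numpy as np

# Hypothetical absolute abundances (unobserved in practice)
A = np.array([10., 10., 10.])   # condition A, total N_A = 30
B = np.array([5., 5., 10.])     # condition B, total N_B = 20

p_A = A / A.sum()               # what we actually observe: proportions
p_B = B / B.sum()

i, j = 2, 0                     # feature of interest i, reference feature j
print((p_A[i] / p_A[j]) / (p_B[i] / p_B[j]))   # 0.5
print((A[i] / A[j]) / (B[i] / B[j]))           # 0.5 (the totals cancel out)
```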
This approach scales to higher dimensions if we choose to use feature $j$ as a reference for all of the other variables. This is the main concept behind the additive log-ratio (ALR) transform, which is defined as follows
$$
alr(x) = \bigg[\log \frac{x_1}{x_j} \ldots \log \frac{x_D}{x_j} \bigg]
$$
Here, this transforms a $D$-dimensional vector of proportions into a $(D-1)$-dimensional vector of log-ratios.
For the chosen reference feature $j$, the corresponding log-ratio is left out (since $\log x_j / x_j = 0$). In addition to providing scale invariance, computing log-ratios also removes the constraints, since log-ratios can take any real value, including negative ones; this property turns out to be particularly useful for unconstrained optimization.
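For intuition, the transform can also be written directly in NumPy. The sketch below is illustrative only; it assumes the first component is used as the reference part, which should match the default behavior of `skbio`'s `alr` used in the next cell.
```
import numpy as np

def alr_manual(x, ref=0):
    """Additive log-ratio transform of an (n_samples, D) matrix of proportions,
    using column `ref` as the reference part. Returns an (n_samples, D-1) matrix."""
    x = np.atleast_2d(x)
    others = [d for d in range(x.shape[1]) if d != ref]
    return np.log(x[:, others] / x[:, [ref]])

# e.g. alr_manual([[0.2, 0.3, 0.5]]) -> [[log(0.3/0.2), log(0.5/0.2)]]
print(alr_manual([[0.2, 0.3, 0.5]]))
```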
We can see this transform in action with the simulation data; by default, we choose the first feature $x_1$ as the reference, and compute two log-ratios $\log(x_2 / x_1)$ and $\log(x_3 / x_1)$. The transformed data can be visualized as follows.
```
alrx = alr(x)
fig, ax = plt.subplots()
ax.scatter(alrx[y, 0], alrx[y, 1], marker='o', color='r', label="Group A")
ax.scatter(alrx[~y, 0], alrx[~y, 1], marker='x', color='b', label="Group B")
ax.legend(loc=2)
ax.set_xlabel('$log(x_2/x_1)$', fontsize=18)
ax.set_ylabel('$log(x_3/x_1)$', fontsize=18)
```
Ok, now we can start making statements about what is causing the difference between these two groups!
By eye, we can see that $\log(x_3/x_1)$ is the main differentiator between these two groups.
The good news is that, since we are back in unconstrained space, we can apply our favorite statistical methodologies to see if this is indeed true. Below we will apply a t-test to these two log-ratios.
```
from scipy.stats import ttest_ind
import seaborn as sns

# Distribution of log(x3/x1): this ratio separates the two groups
sns.histplot(alrx[y, 1], label='Group A', color='r', alpha=0.5)
sns.histplot(alrx[~y, 1], label='Group B', color='b', alpha=0.5)
plt.legend(fontsize=14)
print(ttest_ind(alrx[y, 1], alrx[~y, 1]))

# Distribution of log(x2/x1): this ratio does not separate the groups
plt.figure()
sns.histplot(alrx[y, 0], label='Group A', color='r', alpha=0.5)
sns.histplot(alrx[~y, 0], label='Group B', color='b', alpha=0.5)
plt.legend(fontsize=14)
ttest_ind(alrx[y, 0], alrx[~y, 0])
```
Indeed our intuition is correct! $\log(x_3/x_1)$ appears to be explaining the differences whereas $\log(x_2/x_1)$ doesn't.
Now that we have gained some intuition about the simplicial sample space and the ALR transform, we need to consider the next steps required to apply this in practice.
One of the major hurdles to applying the ALR transform is the fact that it cannot handle zeros (since $\log(0)$ is undefined). In the case study, we will show how to get around this by treating the zeros as missing data; the trick is to use the inverse ALR transform instead of the ALR transform within a generalized linear modeling framework.
# SVM with linear kernel
The goal of this notebook is to find the best parameters for the linear kernel. We also want to check whether the best parameters depend on the stock.
Linear kernel is a function: $\langle x, x'\rangle$.
We will use the [sklearn.svm](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) library to perform the calculations. We want to pick the best parameters for **SVC**:
* C (default 1.0)
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as md
from statsmodels.distributions.empirical_distribution import ECDF
import numpy as np
import seaborn as sns
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
from sklearn.metrics import classification_report
from sklearn import svm
import warnings
from lob_data_utils import lob
sns.set_style('whitegrid')
warnings.filterwarnings('ignore')
```
# Data
We use data from 5 stocks (from 2013-09-01 to 2013-11-16) for which logistic regression yielded the best results.
We selected 3 subsets for each stock:
* training set (60% of data)
* test set (20% of data)
* cross-validation set (20% of data)
```
stocks = ['3757', '4218', '4851', '3388', '3107']
dfs = {}
dfs_cv = {}
dfs_test = {}
for s in stocks:
df, df_cv, df_test = lob.load_prepared_data(s, cv=True)
dfs[s] = df
dfs_cv[s] = df_cv
dfs_test[s] = df_test
dfs[stocks[0]].head(5)
def svm_classification(d, kernel, gamma='auto', C=1.0, degree=3, coef0=0.0, decision_function_shape='ovr'):
clf = svm.SVC(kernel=kernel, gamma=gamma, C=C, degree=degree, coef0=coef0,
decision_function_shape=decision_function_shape)
X = d['queue_imbalance'].values.reshape(-1, 1)
y = d['mid_price_indicator'].values.reshape(-1, 1)
clf.fit(X, y)
return clf
```
# Methodology
We will first use a naive approach to grasp how each parameter influences the ROC area score and which values make sense when the other parameters are set to their defaults. For the **linear** kernel, according to the documentation, only the **C** parameter is worth checking.
### C parameter
The C parameter controls the margin picked by the SVM:
* for large values of **C**, the SVM will choose a smaller-margin hyperplane, which means that more training points will be classified correctly
* for small values of **C**, the SVM will choose a bigger-margin hyperplane, so there may be more misclassifications
At first we tried the values [0.0001, 0.001, 0.01, 0.1, 1, 10, 1000], but after the first calculations it seemed that this was not enough, so a few more values were introduced.
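For reference, an equivalent sweep over **C** could also be expressed with scikit-learn's `GridSearchCV`. The sketch below is only an illustration of that API and is not used in this notebook; note that its internal cross-validation differs from the explicit train/cross-validation/test split we use here, and the helper name is made up.
```
from sklearn import svm
from sklearn.model_selection import GridSearchCV

def grid_search_c(df, cs):
    X = df['queue_imbalance'].values.reshape(-1, 1)
    y = df['mid_price_indicator'].values
    # roc_auc scoring uses SVC's decision_function, so probability=True is not needed
    search = GridSearchCV(svm.SVC(kernel='linear'), param_grid={'C': cs},
                          scoring='roc_auc', cv=5)
    search.fit(X, y)
    return search.best_params_['C'], search.best_score_

# e.g. best_c, best_score = grid_search_c(dfs[s], cs)
```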
```
cs = [0.0001, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 1.5, 10, 100, 110, 1000]
df_css = {}
ax = plt.subplot()
ax.set_xscale("log", basex=10)
for s in stocks:
df_cs = pd.DataFrame(index=cs)
df_cs['roc'] = np.zeros(len(df_cs))
for c in cs:
reg_svm = svm_classification(dfs[s], 'linear', C=c)
pred_svm_out_of_sample = reg_svm.predict(dfs_cv[s]['queue_imbalance'].values.reshape(-1, 1))
logit_roc_auc = roc_auc_score(dfs_cv[s]['mid_price_indicator'], pred_svm_out_of_sample)
df_cs.loc[c] = logit_roc_auc
plt.plot(df_cs, linestyle='--', label=s, marker='x', alpha=0.6)
df_css[s] = df_cs
plt.legend()
plt.xlabel('C parameter')
plt.ylabel('roc_area value')
plt.title('roc_area vs C')
for s in stocks:
idx = df_css[s]['roc'].idxmax()
print('For {} the best is {}'.format(s, idx))
for s in stocks:
err_max = df_css[s]['roc'].max()
err_min = df_css[s]['roc'].min()
print('For {} the diff between best and worst {}'.format(s, err_max - err_min))
```
# Results
We compare the results of the SVM with the best choice of the **C** parameter against logistic regression and an SVM with default parameters.
```
df_results = pd.DataFrame(index=stocks)
df_results['logistic'] = np.zeros(len(stocks))
df_results['linear-default'] = np.zeros(len(stocks))
df_results['linear-tunned'] = np.zeros(len(stocks))
plt.subplot(121)
for s in stocks:
    reg_svm = svm_classification(dfs[s], 'linear', C=df_css[s]['roc'].idxmax())
score = lob.plot_roc(dfs_test[s], reg_svm, title='ROC for test set with the best C param')
df_results['linear-tunned'][s] = score
plt.subplot(122)
for s in stocks:
reg_svm = svm_classification(dfs[s], 'linear')
score = lob.plot_roc(dfs_test[s], reg_svm, title='ROC for test set with default')
df_results['linear-default'][s] = score
plt.subplots_adjust(left=0, wspace=0.1, top=1, right=2)
plt.subplot(121)
for s in stocks:
    reg_svm = svm_classification(dfs[s], 'linear', C=df_css[s]['roc'].idxmax())
score = lob.plot_roc(dfs_test[s], reg_svm, title='ROC for test set with the best C param')
df_results['linear-tunned'][s] = score
plt.subplot(122)
for s in stocks:
reg_log = lob.logistic_regression(dfs[s], 0, len(dfs[s]))
score = lob.plot_roc(dfs_test[s], reg_log, title='ROC for test set with logistic classification')
df_results['logistic'][s] = score
plt.subplots_adjust(left=0, wspace=0.1, top=1, right=2)
df_results
```
# Conclusions
### Convert BVH files to CSV format
```
import os
from os.path import join as pjoin
import glob
import matplotlib.pyplot as plt
import pandas as pd
from tqdm import tqdm as tqdm
DATA_PATH = '/datasets/extra_space2/ostap/kinematic-dataset-of-actors-expressing-emotions-2.1.0/BVH'
bvh_files = glob.glob(f'{DATA_PATH}/*/*.bvh')
print(f'Number of BVH files: {len(bvh_files)}')
signals_list = glob.glob(f'{DATA_PATH}/*/*.csv')
signals_list
# convert to csv
for filename in tqdm(bvh_files):
os.system(f'bvh-converter -r {filename}')
# Quick look at one of the converted CSV files
pd.read_csv(signals_list[3])
# bvh-converter -r writes *_worldpos.csv files; here we assume this entry is one of them
worldpos = signals_list[3]
worldpos_df = pd.read_csv(worldpos).set_index('Time')
worldpos_df.iloc[:, :3].plot(subplots=True, figsize=(20, 15))
! bvh2csv --hierarchy Example1.bvh
```
### Split
```
import re
import numpy as np
from sklearn.model_selection import train_test_split
unique_ids = set()
for name in signals_list:
unique_ids.add(name.split('/')[-1].split('_')[0])
unique_ids = list(unique_ids)
unique_ids
pattern = r'^[A-Z]\d\d([A-Z]+)'
label = [re.search(pattern, idd).group(1) for idd in unique_ids]
print(np.unique(label))
len(label)
label_encoder = {l:i for i, l in enumerate(np.unique(label))}
label_encoder
labels_encoded = [label_encoder[l] for l in label]
len(labels_encoded)
train_ids, test_ids, train_label, test_label = train_test_split(unique_ids, labels_encoded, random_state=42, test_size=0.1, stratify=labels_encoded)
train_ids, val_ids, train_label, val_label = train_test_split(train_ids, train_label, random_state=42, test_size=0.1/0.9, stratify=train_label)
len(train_ids), len(val_ids), len(test_ids)
train_df = pd.DataFrame(data={'id': train_ids, 'label': train_label})
val_df = pd.DataFrame(data={'id': val_ids, 'label': val_label})
test_df = pd.DataFrame(data={'id': test_ids, 'label': test_label})
train_df.to_csv('/datasets/extra_space2/ostap/kinematic-dataset-of-actors-expressing-emotions-2.1.0/BVH/train.csv', index=False)
val_df.to_csv('/datasets/extra_space2/ostap/kinematic-dataset-of-actors-expressing-emotions-2.1.0/BVH/val.csv', index=False)
test_df.to_csv('/datasets/extra_space2/ostap/kinematic-dataset-of-actors-expressing-emotions-2.1.0/BVH/test.csv', index=False)
```
## Distribution of signal lengths
```
lengths = []
for signal_name in tqdm(signals_list):
lengths.append(pd.read_csv(signal_name).shape[0])
plt.hist(lengths)
np.percentile(lengths, q=[50, 70, 90, 95, 99])
```
### Visualize 3D coordinates
```
print(signals_list[2])
sample = pd.read_csv(signals_list[2])
sample.head()
body_part = ['Hips', 'Spine1', 'LeftFoot', 'RightFoot', 'Head']
coordinates = ['X', 'Y', 'Z']
column_selected = [f'{b}.{c}' for b in body_part for c in coordinates]
column_selected
sample = sample.iloc[100].loc[column_selected]
sample
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(sample.iloc[0], sample.iloc[1], sample.iloc[2], c='red')
ax.scatter(sample.iloc[3], sample.iloc[4], sample.iloc[5], c='blue')
ax.scatter(sample.iloc[6], sample.iloc[7], sample.iloc[8], c='black')
ax.scatter(sample.iloc[9], sample.iloc[10], sample.iloc[11], c='black')
ax.scatter(sample.iloc[12], sample.iloc[13], sample.iloc[14], c='yellow')
```
# Adding a New Dataset
Before we start: This tutorial is rendered from a Jupyter notebook that is hosted on GitHub. If you want to run the code yourself, you can find the notebook [here](https://github.com/neuralhydrology/neuralhydrology/tree/master/examples/03-Adding-Datasets).
There exist two different options to use a different dataset within the NeuralHydrology library.
1. Preprocess your data to use the `GenericDataset` in `neuralhydrology.datasetzoo.genericdataset`.
2. Implement a new dataset class, inheriting from `BaseDataset` in `neuralhydrology.datasetzoo.basedataset`.
Using the `GenericDataset` is recommended and does not require you to add/change a single line of code, while writing a new dataset gives you more freedom to do whatever you want.
## Using the GenericDataset
With the release of version 0.9.6-beta, we added a `GenericDataset`. This class can be used with any data, as long as the data is preprocessed in the following way:
- The data directory (config argument `data_dir`) must contain a folder 'time_series' and (if static attributes are used) a folder 'attributes'.
- The folder 'time_series' contains one netcdf file (.nc or .nc4) per basin, named `<basin_id>.nc` (or `.nc4`). The netcdf file has to have one coordinate called `date`, containing the datetime index.
- The folder 'attributes' contains one or more comma-separated files (.csv) with static attributes, indexed by basin id. Attributes files can be divided into groups of basins or groups of features (but not both).
If you prepare your data set following these guidelines, you can simply set the config argument `dataset` to `generic` and set the `data_dir` to the path of your preprocessed data directory.
**Note**: Make sure to mark invalid data points as `NaN` (e.g. using NumPy's `np.nan`) instead of a sentinel value such as `-999`, which is often used (for whatever reason) in hydrology for invalid discharge. If you do so,
NeuralHydrology can correctly identify these values as `NaN`, exclude samples with `NaN` in the inputs from model training (which would otherwise lead to `NaN` losses and thus `NaN` weights), and likewise
ignore timesteps where the target value is `NaN` when computing the loss.
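As a rough illustration, the following sketch converts a hypothetical per-basin csv file into the layout described above and marks invalid values as `NaN`. The paths and column names are placeholders and not part of NeuralHydrology:
```
import numpy as np
import pandas as pd

# hypothetical raw file for one basin; path and column names are placeholders
df = pd.read_csv("raw/basin_123.csv", parse_dates=["date"]).set_index("date")

# mark invalid values as NaN instead of sentinel values such as -999
df["discharge"] = df["discharge"].replace(-999, np.nan)

# write one netcdf file per basin into <data_dir>/time_series,
# with the datetime index exposed as a coordinate named 'date'
df.to_xarray().to_netcdf("data_dir/time_series/basin_123.nc")
```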
## Adding a Dataset Class
The rest of this tutorial will show you how to add a new dataset class to the `neuralhydrology.datasetzoo`. As an example, we will use the [CAMELS-CL](https://hess.copernicus.org/articles/22/5817/2018/) dataset.
```
from pathlib import Path
from typing import List, Dict, Union
import pandas as pd
import xarray
from neuralhydrology.datasetzoo.basedataset import BaseDataset
from neuralhydrology.utils.config import Config
```
### Template
Every dataset has its own file in `neuralhydrology.datasetzoo` and follows a common template. The template can be found [here](https://github.com/neuralhydrology/neuralhydrology/blob/master/neuralhydrology/datasetzoo/template.py).
The most important points are:
- All dataset classes have to inherit from `BaseDataset` implemented in `neuralhydrology.datasetzoo.basedataset`.
- All dataset classes have to accept the same inputs upon initialization (see below)
- Within each dataset class, you have to implement two methods:
- `_load_basin_data()`: This method loads the time series data for a single basin of the dataset (e.g. meteorological forcing data and streamflow) into a time-indexed pd.DataFrame.
- `_load_attributes()`: This method loads the catchment attributes for all basins in the dataset and returns a basin-indexed pd.DataFrame with attributes as columns.
`BaseDataset` is a map-style [PyTorch Dataset](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) that implements the core logic for all data sets. It takes care of multiple temporal resolutions, data fusion, normalizations, sanity checks, etc. and also implements the required methods `__len__` (returns the number of total training samples) and `__getitem__` (returns a single training sample for a given index), which PyTorch data loaders use, e.g., to create mini-batches for training. However, all of this is not important if you just want to add another dataset to the NeuralHydrology library.
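For illustration only (this class is not part of NeuralHydrology), a map-style dataset boils down to these two methods:
```
from torch.utils.data import Dataset

class TinyMapStyleDataset(Dataset):
    """Minimal illustration of the map-style interface that BaseDataset implements."""
    def __init__(self, samples):
        self.samples = samples
    def __len__(self):
        # total number of training samples
        return len(self.samples)
    def __getitem__(self, idx):
        # a single training sample for the given index
        return self.samples[idx]
```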
### Preprocessing CAMELS-CL
Because the CAMELS-CL dataset comes in a rather unusual file structure, we added a function to create per-basin csv files with all timeseries features. You can find the function `preprocess_camels_cl_dataset` in `neuralhydrology.datasetzoo.camelscl`, which will create a subfolder called `preprocessed` containing the per-basin files. For the remainder of this tutorial, we assume that this folder and the per-basin csv files exist.
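If you have not created this folder yet, a one-time call along these lines should do it. This is only a sketch: the data path is a placeholder, and the exact signature may differ, so check the function's docstring:
```
from pathlib import Path
from neuralhydrology.datasetzoo.camelscl import preprocess_camels_cl_dataset

# creates <data_dir>/preprocessed with one csv file per basin (placeholder path)
preprocess_camels_cl_dataset(Path("/data/CAMELS_CL"))
```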
### Class skeleton
For the sake of this tutorial, we will omit doc-strings. However, when adding your dataset class we highly encourage you to add extensive doc-strings, as we did for all dataset classes in this package. We use [Python type annotations](https://docs.python.org/3/library/typing.html) everywhere, which facilitates code development with any modern IDE as well as makes it easier to understand what is happening inside a function or class.
The class skeleton looks like this:
```
class CamelsCL(BaseDataset):
def __init__(self,
cfg: Config,
is_train: bool,
period: str,
basin: str = None,
additional_features: List[Dict[str, pd.DataFrame]] = [],
id_to_int: Dict[str, int] = {},
scaler: Dict[str, Union[pd.Series, xarray.DataArray]] = {}):
# Initialize `BaseDataset` class
super(CamelsCL, self).__init__(cfg=cfg,
is_train=is_train,
period=period,
basin=basin,
additional_features=additional_features,
id_to_int=id_to_int,
scaler=scaler)
def _load_basin_data(self, basin: str) -> pd.DataFrame:
"""Load timeseries data of one specific basin"""
raise NotImplementedError
def _load_attributes(self) -> pd.DataFrame:
"""Load catchment attributes"""
raise NotImplementedError
```
### Data loading functions
For all datasets, we implemented the actual data loading (e.g., from the txt or csv files) in separate functions outside of the class so that these functions are usable everywhere. This is useful for example when you want to inspect or visualize the discharge of a particular basin or do anything else with the basin data. These functions are implemented within the same file (since they are specific to each data set) and we use those functions from within the class methods.
So let's start by implementing a function that reads a single basin file of time series data for a given basin identifier.
```
def load_camels_cl_timeseries(data_dir: Path, basin: str) -> pd.DataFrame:
preprocessed_dir = data_dir / "preprocessed"
# make sure the CAMELS-CL data was already preprocessed and per-basin files exist.
if not preprocessed_dir.is_dir():
        msg = [
            f"No preprocessed data directory found at {preprocessed_dir}. Use preprocess_camels_cl_dataset ",
"in neuralhydrology.datasetzoo.camelscl to preprocess the CAMELS CL data set once into ",
"per-basin files."
]
raise FileNotFoundError("".join(msg))
# load the data for the specific basin into a time-indexed dataframe
basin_file = preprocessed_dir / f"{basin}.csv"
df = pd.read_csv(basin_file, index_col='date', parse_dates=['date'])
return df
```
Most of this should be easy to follow. First we check that the data was already preprocessed, and if it wasn't, we raise an error with an appropriate message. Then we load the data for the specific basin into a pd.DataFrame and make sure that the index is converted to datetime format.
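With this function in place, you can already inspect a single basin outside of any training run; the data path and basin id below are placeholders:
```
from pathlib import Path
import matplotlib.pyplot as plt

data_dir = Path("/data/CAMELS_CL")  # placeholder path to the dataset root
df = load_camels_cl_timeseries(data_dir=data_dir, basin="8350001")  # placeholder basin id

# quick sanity check: plot the streamflow record of this basin
df["streamflow_mm"].plot(figsize=(12, 4))
plt.show()
```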
Next, we need a function to load the attributes, which are stored in a file called `1_CAMELScl_attributes.txt`. We assume that this file exists in the root directory of the dataset (such information is useful to add to the docstring!). The dataframe that this function returns must be basin-indexed with the attributes as columns. Furthermore, we accept an optional argument `basins`, which is a list of strings. This list can specify basins of interest; if it is passed, we only return the attributes for those basins.
```
def load_camels_cl_attributes(data_dir: Path, basins: List[str] = []) -> pd.DataFrame:
# load attributes into basin-indexed dataframe
attributes_file = data_dir / '1_CAMELScl_attributes.txt'
df = pd.read_csv(attributes_file, sep="\t", index_col="gauge_id").transpose()
# convert all columns, where possible, to numeric
df = df.apply(pd.to_numeric, errors='ignore')
# convert the two columns specifying record period start and end to datetime format
df["record_period_start"] = pd.to_datetime(df["record_period_start"])
df["record_period_end"] = pd.to_datetime(df["record_period_end"])
if basins:
if any(b not in df.index for b in basins):
raise ValueError('Some basins are missing static attributes.')
df = df.loc[basins]
return df
```
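As with the timeseries loader, this function can be used on its own, e.g. to inspect the attributes of a few basins (the path and basin ids are placeholders):
```
from pathlib import Path

data_dir = Path("/data/CAMELS_CL")  # placeholder path to the dataset root
attributes = load_camels_cl_attributes(data_dir, basins=["8350001", "9414001"])  # placeholder ids
attributes.head()
```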
### Putting everything together
Now we have all required pieces and can finish the dataset class. Notice that we have access to all class attributes of the parent class in all methods (such as the config, which is stored in `self.cfg`). In the `_load_attributes` method, we simply defer to the attribute loading function we implemented above. The `BaseDataset` will take care of removing all attributes that are not specified as input features and will also check for missing attributes, so you don't have to take care of this here.
```
class CamelsCL(BaseDataset):
def __init__(self,
cfg: Config,
is_train: bool,
period: str,
basin: str = None,
additional_features: List[Dict[str, pd.DataFrame]] = [],
id_to_int: Dict[str, int] = {},
scaler: Dict[str, Union[pd.Series, xarray.DataArray]] = {}):
# Initialize `BaseDataset` class
super(CamelsCL, self).__init__(cfg=cfg,
is_train=is_train,
period=period,
basin=basin,
additional_features=additional_features,
id_to_int=id_to_int,
scaler=scaler)
def _load_basin_data(self, basin: str) -> pd.DataFrame:
"""Load timeseries data of one specific basin"""
return load_camels_cl_timeseries(data_dir=self.cfg.data_dir, basin=basin)
def _load_attributes(self) -> pd.DataFrame:
"""Load catchment attributes"""
return load_camels_cl_attributes(self.cfg.data_dir, basins=self.basins)
```
### Integrating the dataset class into NeuralHydrology
With these few lines of code, you are ready to use a new dataset within the NeuralHydrology framework. The only thing missing is to link the new dataset in the `get_dataset()` function, implemented in `neuralhydrology.datasetzoo.__init__.py`. Again, we removed the doc-string for brevity ([here](https://neuralhydrology.readthedocs.io/en/latest/api/neuralhydrology.datasetzoo.html#neuralhydrology.datasetzoo.get_dataset) you can find the documentation), but the code of this function is as simple as this:
```
from neuralhydrology.datasetzoo.basedataset import BaseDataset
from neuralhydrology.datasetzoo.camelscl import CamelsCL
from neuralhydrology.datasetzoo.camelsgb import CamelsGB
from neuralhydrology.datasetzoo.camelsus import CamelsUS
from neuralhydrology.datasetzoo.hourlycamelsus import HourlyCamelsUS
from neuralhydrology.utils.config import Config
def get_dataset(cfg: Config,
is_train: bool,
period: str,
basin: str = None,
additional_features: list = [],
id_to_int: dict = {},
scaler: dict = {}) -> BaseDataset:
# check config argument and select appropriate data set class
if cfg.dataset == "camels_us":
Dataset = CamelsUS
elif cfg.dataset == "camels_gb":
Dataset = CamelsGB
elif cfg.dataset == "hourly_camels_us":
Dataset = HourlyCamelsUS
elif cfg.dataset == "camels_cl":
Dataset = CamelsCL
else:
raise NotImplementedError(f"No dataset class implemented for dataset {cfg.dataset}")
# initialize dataset
ds = Dataset(cfg=cfg,
is_train=is_train,
period=period,
basin=basin,
additional_features=additional_features,
id_to_int=id_to_int,
scaler=scaler)
return ds
```
Now, by setting `dataset: camels_cl` in the config file, you are able to train a model on the CAMELS-CL data set.
The available time series features are:
- tmax_cr2met
- precip_mswep
- streamflow_m3s
- tmin_cr2met
- pet_8d_modis
- precip_chirps
- pet_hargreaves
- streamflow_mm
- precip_cr2met
- swe
- tmean_cr2met
- precip_tmpa
For a list of available attributes, look at the `1_CAMELScl_attributes.txt` file or make use of the above implemented function to load the attributes into a pd.DataFrame.
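To check that everything is wired up, you can build a dataset instance the same way NeuralHydrology does internally. This is only a sketch: it assumes a config file (here called `camels_cl_example.yml`, a placeholder name) that sets `dataset: camels_cl`, a valid `data_dir`, and the usual basin and period settings:
```
from pathlib import Path

from neuralhydrology.datasetzoo import get_dataset
from neuralhydrology.utils.config import Config

# placeholder config file; it must set `dataset: camels_cl` and a valid `data_dir`
cfg = Config(Path("camels_cl_example.yml"))

# resolves to the CamelsCL class registered above and builds the training dataset
ds = get_dataset(cfg=cfg, is_train=True, period="train")
print(len(ds))  # total number of training samples across all basins
```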
<a href="https://colab.research.google.com/github/Sakha-Language-Processing/wordforms/blob/main/verbs_at_end.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Automated search for verbs as the last word of a sentence
From 541024 sentences, the last word of each was extracted; with high probability these are verbs. This yielded 20408 word forms, among which the share of verbs is high.
About 80% of the text is lost.
## Loading and cleaning
```
!wget https://github.com/Sakha-Language-Processing/parser-palament/raw/main/raw_data_parlamentsakha.zip
!unzip raw_data_parlamentsakha.zip
!wget https://github.com/Sakha-Language-Processing/parser-nvkonline/raw/main/raw_data_nvkonline.zip
!unzip raw_data_nvkonline.zip
!wget https://github.com/Sakha-Language-Processing/sakha-parser-keskil/raw/main/raw_data_keskil_fixed.zip
!unzip raw_data_keskil_fixed.zip
!wget https://github.com/Sakha-Language-Processing/parser-kyym/raw/main/kyym-df.zip
!unzip kyym-df.zip
!wget https://github.com/Sakha-Language-Processing/parser-ysia/raw/main/sakha_ysia_df.zip
!unzip sakha_ysia_df.zip
# Read the files
# You can add your own list of files
import json
import pandas as pd
articles = []
for file in ['raw_data_nvkonline.json',
'raw_data_parlamentsakha.json',
'raw_data_keskil.json']:
file_ = open(file)
articles += json.load(file_)
file_.close()
all_files_text = []
for article in articles:
all_files_text += article['raw_text']
if 'title' in article:
all_files_text += [article['title']]
aytal_text = ' '
for file in ['kyym-novosti.csv',
'sakha_ysia.csv']:
file_ = pd.read_csv(file)
aytal_text += ' '.join(list(file_['title'].fillna(' ')+file_['all_text'].fillna(' ')))
# note: ' '.join(aytal_text) joins over the characters of the string aytal_text,
# so it inserts a space between every character of that text
all_files_text = ' '.join(all_files_text) + ' '.join(aytal_text)
del aytal_text
textslist = all_files_text.split(' ')
all_files_text = ' '.join([item for item in textslist if len(item)>1])
all_files_text += ' '.join([item for item in textslist if len(item)==1])
# in some articles the letters appear separately, with spaces in between
tezt = all_files_text.replace(' ','')
# print(n)
# print(all_files_text[n-100:n+300])
# print('There are', len([item for item in all_files_text.split(' ') if len(item)>1]), 'words.')
# x1,x2 = all_files_text[:100000].count('ы'),all_files_text.count(' ы ')
# x_slov = len(all_files_text[:100000].split(' '))
# print('Approximately', int(x_slov/x1*x2), 'words lost.')
len(tezt.split(' '))
n = 14245000
all_files_text[n-300:n+300]
import re
all_files_text = re.sub(re.compile(r'https?\S+'), ' ', all_files_text)
all_files_text = re.sub(re.compile(r'<.*?>'), ' ', all_files_text)
all_files_text = re.sub(re.compile(r'@\S+'), ' ', all_files_text)
all_files_text = re.sub(re.compile(r'[\//:@&\-\'\`\"\_\n\#]'), ' ', all_files_text)
all_files_text = re.sub(re.compile(r'\s*[A-Za-z\d.]+\b'), ' ' , all_files_text)
all_files_text = all_files_text.lower()
similar_characters = {'h':'һ','c':'с','x':'х','y':'у',
'e':'е', 'T':'Т', 'o':'о', 'p':'р',
'a':'а', 'k':'к', 'b':'в', 'm':'м',
'5':'ҕ', '8':'ө',
                      # fixes (repeated letters, extra spaces, etc.)
'ааа':'аа','ттт':'тт','ыыы':'ыы',
'ууу':'уу',' ':' ',' ':' ',
'иии':'ии','эээ':'ээ','үүү':'үү',
'өөө':'өө','өү':'үө','ччч':'чч',
'ннн':'нн','ддд':'дд','ллл':'лл',
'ммм':'мм','ххх':'хх',
}
for loaned, ersatz in similar_characters.items():
all_files_text = all_files_text.replace(loaned, ersatz)
all_files_text = re.sub(re.compile(r'[.!?]'), ' ', all_files_text)
all_files_text = all_files_text.replace(' ',' ')
all_files_text = all_files_text.replace(' ',' ')
all_files_text = all_files_text.replace(' ',' ')
all_files_text = all_files_text.replace(' ',' ')
all_files_text.count(' ')
ddd = [wordform.strip() for wordform in all_files_text.split(' ') if len(wordform)>1]
ddd = sorted(ddd)
words = []
counts = []
counter = 0
for item1,item2 in zip(ddd[:-1],ddd[1:]):
counter += 1
if item1!=item2:
words.append(item1)
counts.append(counter)
counter = 0
df = pd.DataFrame({'wordform':words,'occurence':counts})
```
## Extracting the last words of sentences
```
all_files_text = all_files_text.replace('?', '.')
all_files_text = all_files_text.replace('!', '.')
regex = re.compile('[^а-яА-ЯһөҕүҥҺӨҔҮҤ. ]')
all_files_text = re.sub(regex, ' ', all_files_text)
sentence_tails = [item[-min(len(item),30):] for item in all_files_text.split('.')]
verbs = []
for item in sentence_tails:
verb = item.split(' ')[-1]
if len(verb)<1:
for i in range(30):
item = item.replace(' .','.')
verb = item.split(' ')[-1]
verbs.append(verb)
verbs = sorted(list(set(verbs)))
df = pd.DataFrame(verbs)
df.columns = ['verb']
df.to_csv('verbs.csv',index=False)
len(sentence_tails)
```
# Node2vec on Karateclub
## Imports
```
from typing import List, Callable
import networkx as nx
import numpy as np
from gensim.models.word2vec import Word2Vec
import random
from functools import partial
```
## Utils
```
def _check_value(value, name):
try:
_ = 1 / value
except ZeroDivisionError:
raise ValueError(
f"The value of {name} is too small " f"or zero to be used in 1/{name}."
)
def _undirected(node, graph) -> List[tuple]:
edges = graph.edges(node)
return edges
def _directed(node, graph) -> List[tuple]:
edges = graph.out_edges(node, data=True)
return edges
def _get_edge_fn(graph) -> Callable:
fn = _directed if nx.classes.function.is_directed(graph) else _undirected
fn = partial(fn, graph=graph)
return fn
def _unweighted(edges: List[tuple]) -> np.ndarray:
return np.ones(len(edges))
def _weighted(edges: List[tuple]) -> np.ndarray:
weights = map(lambda edge: edge[-1]["weight"], edges)
return np.array([*weights])
def _get_weight_fn(graph) -> Callable:
fn = _weighted if nx.classes.function.is_weighted(graph) else _unweighted
return fn
```
## Biased random walker
```
class BiasedRandomWalker:
"""
Class to do biased second order random walks.
Args:
        walk_length (int): Number of nodes in each truncated walk.
        walk_number (int): Number of random walks per source node.
        p (float): Return parameter (1/p transition probability) to move towards the previous node.
        q (float): In-out parameter (1/q transition probability) to move away from the previous node.
"""
walks: list
graph: nx.classes.graph.Graph
edge_fn: Callable
weight_fn: Callable
def __init__(self, walk_length: int, walk_number: int, p: float, q: float):
self.walk_length = walk_length
self.walk_number = walk_number
_check_value(p, "p")
self.p = p
_check_value(q, "q")
self.q = q
def do_walk(self, node: int) -> List[str]:
"""
Doing a single truncated second order random walk from a source node.
Arg types:
* **node** *(int)* - The source node of the random walk.
Return types:
* **walk** *(list of strings)* - A single truncated random walk.
"""
walk = [node]
previous_node = None
previous_node_neighbors = []
for _ in range(self.walk_length - 1):
current_node = walk[-1]
edges = self.edge_fn(current_node)
current_node_neighbors = np.array([edge[1] for edge in edges])
weights = self.weight_fn(edges)
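            # second-order bias: weight 1/p for returning to the previous node,
            # 1 for nodes that are also neighbors of the previous node, and 1/q otherwise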
probability = np.piecewise(
weights,
[
current_node_neighbors == previous_node,
np.isin(current_node_neighbors, previous_node_neighbors),
],
[lambda w: w / self.p, lambda w: w / 1, lambda w: w / self.q],
)
norm_probability = probability / sum(probability)
selected = np.random.choice(current_node_neighbors, 1, p=norm_probability)[
0
]
walk.append(selected)
previous_node_neighbors = current_node_neighbors
previous_node = current_node
walk = [str(w) for w in walk]
return walk
def do_walks(self, graph) -> None:
"""
        Doing a fixed number of truncated random walks from every node in the graph.
Arg types:
* **graph** *(NetworkX graph)* - The graph to run the random walks on.
"""
self.walks = []
self.graph = graph
self.edge_fn = _get_edge_fn(graph)
self.weight_fn = _get_weight_fn(graph)
for node in self.graph.nodes():
for _ in range(self.walk_number):
walk_from_node = self.do_walk(node)
self.walks.append(walk_from_node)
class Estimator(object):
"""Estimator base class with constructor and public methods."""
seed: int
def __init__(self):
"""Creating an estimator."""
pass
def fit(self):
"""Fitting a model."""
pass
def get_embedding(self):
"""Getting the embeddings (graph or node level)."""
pass
def get_memberships(self):
"""Getting the membership dictionary."""
pass
def get_cluster_centers(self):
"""Getting the cluster centers."""
pass
def _set_seed(self):
"""Creating the initial random seed."""
random.seed(self.seed)
np.random.seed(self.seed)
@staticmethod
def _ensure_integrity(graph: nx.classes.graph.Graph) -> nx.classes.graph.Graph:
"""Ensure walk traversal conditions."""
edge_list = [(index, index) for index in range(graph.number_of_nodes())]
graph.add_edges_from(edge_list)
return graph
@staticmethod
def _check_indexing(graph: nx.classes.graph.Graph):
"""Checking the consecutive numeric indexing."""
numeric_indices = [index for index in range(graph.number_of_nodes())]
node_indices = sorted([node for node in graph.nodes()])
assert numeric_indices == node_indices, "The node indexing is wrong."
def _check_graph(self, graph: nx.classes.graph.Graph) -> nx.classes.graph.Graph:
"""Check the Karate Club assumptions about the graph."""
self._check_indexing(graph)
graph = self._ensure_integrity(graph)
return graph
def _check_graphs(self, graphs: List[nx.classes.graph.Graph]):
"""Check the Karate Club assumptions for a list of graphs."""
graphs = [self._check_graph(graph) for graph in graphs]
return graphs
# Scratch cell: checking the gensim Word2Vec signature before using it in Node2Vec.fit().
# It references `self` and `walker`, which are only defined inside the class, so it is
# kept commented out here to avoid a NameError when running the notebook top to bottom.
# model = Word2Vec(
#     walker.walks,
#     hs=1,
#     alpha=self.learning_rate,
#     epochs=self.epochs,
#     size=self.dimensions,
#     window=self.window_size,
#     min_count=self.min_count,
#     workers=self.workers,
#     seed=self.seed,
# )
# Word2Vec()
```
## Node2vec
```
class Node2Vec(Estimator):
r"""An implementation of `"Node2Vec" <https://cs.stanford.edu/~jure/pubs/node2vec-kdd16.pdf>`_
from the KDD '16 paper "node2vec: Scalable Feature Learning for Networks".
The procedure uses biased second order random walks to approximate the pointwise mutual information
matrix obtained by pooling normalized adjacency matrix powers. This matrix
is decomposed by an approximate factorization technique.
Args:
walk_number (int): Number of random walks. Default is 10.
walk_length (int): Length of random walks. Default is 80.
        p (float): Return parameter (1/p transition probability) to move towards the previous node.
        q (float): In-out parameter (1/q transition probability) to move away from the previous node.
dimensions (int): Dimensionality of embedding. Default is 128.
workers (int): Number of cores. Default is 4.
window_size (int): Matrix power order. Default is 5.
epochs (int): Number of epochs. Default is 1.
learning_rate (float): HogWild! learning rate. Default is 0.05.
min_count (int): Minimal count of node occurrences. Default is 1.
seed (int): Random seed value. Default is 42.
"""
_embedding: List[np.ndarray]
def __init__(
self,
walk_number: int = 10,
walk_length: int = 80,
p: float = 1.0,
q: float = 1.0,
dimensions: int = 128,
workers: int = 4,
window_size: int = 5,
epochs: int = 1,
learning_rate: float = 0.05,
min_count: int = 1,
seed: int = 42,
):
super(Node2Vec, self).__init__()
self.walk_number = walk_number
self.walk_length = walk_length
self.p = p
self.q = q
self.dimensions = dimensions
self.workers = workers
self.window_size = window_size
self.epochs = epochs
self.learning_rate = learning_rate
self.min_count = min_count
self.seed = seed
def fit(self, graph: nx.classes.graph.Graph):
"""
        Fitting a Node2Vec model.
Arg types:
* **graph** *(NetworkX graph)* - The graph to be embedded.
"""
self._set_seed()
graph = self._check_graph(graph)
walker = BiasedRandomWalker(self.walk_length, self.walk_number, self.p, self.q)
walker.do_walks(graph)
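        # note: `size` and `iter` are gensim < 4.0 argument names
        # (gensim >= 4.0 renamed them to `vector_size` and `epochs`)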
model = Word2Vec(
walker.walks,
hs=1,
alpha=self.learning_rate,
iter=self.epochs,
size=self.dimensions,
window=self.window_size,
min_count=self.min_count,
workers=self.workers,
seed=self.seed,
)
n_nodes = graph.number_of_nodes()
self._embedding = [model.wv[str(n)] for n in range(n_nodes)]
def get_embedding(self) -> np.array:
r"""Getting the node embedding.
Return types:
* **embedding** *(Numpy array)* - The embedding of nodes.
"""
return np.array(self._embedding)
```
## Scenario
```
g = nx.newman_watts_strogatz_graph(100, 20, 0.05)
model = Node2Vec()
model.fit(g)
model.walk_length
def test_node2vec():
"""
Testing the Node2Vec class.
"""
model = Node2Vec()
graph = nx.watts_strogatz_graph(100, 10, 0.5)
model.fit(graph)
embedding = model.get_embedding()
assert embedding.shape[0] == graph.number_of_nodes()
assert embedding.shape[1] == model.dimensions
assert type(embedding) == np.ndarray
model = Node2Vec(dimensions=32)
graph = nx.watts_strogatz_graph(150, 10, 0.5)
model.fit(graph)
embedding = model.get_embedding()
assert embedding.shape[0] == graph.number_of_nodes()
assert embedding.shape[1] == model.dimensions
assert type(embedding) == np.ndarray
test_node2vec()
```