path (stringlengths 7-265) | concatenated_notebook (stringlengths 46-17M)
---|---|
notebooks/jupyter/SevenReasonsToLearnPyTorchOnDatabricks.ipynb | ###Markdown
Seven Reasons To Learn PyTorch on Databricks What expedites the process of learning new concepts, languages, or systems? Or, when learning a new task, do you look for analogues from skills you already possess? Across all learning endeavors, three favorable characteristics stand out: familiarity, clarity, and simplicity. Familiarity eases the transition because of a recognizable link between the old and new ways of doing. Clarity minimizes the cognitive burden. And simplicity reduces the friction of adopting the unknown and, as a result, makes learning a new concept, language, or system more fruitful. Keeping these three characteristics in mind, we examine in this blog several reasons why it's easy to learn PyTorch and how the [Databricks Lakehouse Platform](https://databricks.com/product/data-lakehouse) facilitates the learning process. <img src="https://raw.githubusercontent.com/dmatrix/data-assets/main/images/7_reasons_to_learn_pytorch.png" alt="7 Reasons to Learn PyTorch on Databricks" width="800" align="middle"> 1a. PyTorch is _Pythonic_ Luciano Ramalho, in Fluent Python, defines Pythonic as an idiomatic way to use Python code that makes use of language features to be concise and readable. Python object constructs follow a certain protocol, and their behaviors adhere to a consistent pattern across classes, iterators, generators, sequences, context managers, modules, coroutines, decorators, etc. Even with little familiarity with the [Python data model](https://docs.python.org/3/reference/datamodel.html), modules, and language constructs, you recognize similar constructs in [PyTorch APIs](https://pytorch.org/docs/stable/nn.html), such as `torch.tensor`, `torch.nn.Module`, `torch.utils.data.Datasets`, `torch.utils.data.DataLoaders`, etc. You see this Pythonic familiarity not only in PyTorch but also in other PyData ecosystem packages. PyTorch integrates with the PyData ecosystem, so your familiarity with [NumPy](https://numpy.org/) makes it easy to learn [Torch Tensors](https://pytorch.org/docs/stable/tensors.html). NumPy arrays and tensors have similar data structures and operations. Just as [DataFrames](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.html?highlight=dataframes) are central data structures for [Apache Spark™](https://spark.apache.org/) operations, so are tensors as inputs to PyTorch models, training operations, computations, and scoring. The mental image of a PyTorch tensor (shown in the diagram below) maps to an n-dimensional NumPy array. <img src="https://raw.githubusercontent.com/dmatrix/data-assets/main/images/tensors.png" alt="Tensors in PyTorch" width="400"> For instance, you can seamlessly create NumPy arrays and convert them into Torch tensors. That familiarity with NumPy operations transfers easily to tensor operations, too, as you can observe from our simple operations on both NumPy arrays and tensors in the code below. Both have the familiar, imperative, and intuitive operations that one would expect from Python object APIs, such as lists, tuples, dictionaries, sets, etc. All this familiarity with NumPy's equivalent array operations on Torch tensors helps. Consider these examples:
###Code
import torch
import numpy as np
# Create two 2-dimensional numpy arrays
x_np = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
y_np = np.array([[2, 4, 6], [8, 10, 12]], np.int32)
print("x_shape: {}, y_shape: {}, x and y dimensions: {}, {}". format(x_np.shape,y_np.shape, x_np.ndim, y_np.ndim))
# Convert numpy array to a 2-rank tensor
x_t = torch.from_numpy(x_np)
y_t = torch.from_numpy(y_np)
print("x tensor: {}, y tensor {}, x and y tensor ranks: {}, {}".format(x_t, y_t, x_t.ndim, y_t.ndim))
# Add two numpy arrays and two tensors. The method names are similar
xy_np = np.add(x_np, y_np)
xy_t = torch.add(x_t, y_t)
print("Addition: Numpy array xy_np: {}, Tensors xy_t: {}".format(xy_np, xy_t))
###Output
_____no_output_____
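###Markdown
The conversion also works in the other direction. As a small, hedged aside (not part of the original blog post): on the CPU, a tensor created with `torch.from_numpy` shares memory with its source array, and `Tensor.numpy()` returns a NumPy view of the same data, so an in-place tensor operation is visible through both.
###Code
# A minimal sketch of round-tripping between numpy arrays and torch tensors (CPU tensors assumed)
a_np = np.ones((2, 3), dtype=np.float32)
a_t = torch.from_numpy(a_np)   # numpy -> tensor (no copy on CPU)
back_np = a_t.numpy()          # tensor -> numpy (no copy on CPU)
a_t.add_(1)                    # in-place update is visible through both views
print("shared data after in-place add:\n{}".format(back_np))
###Output
_____no_output_____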
###Markdown
1b. Easy to Extend PyTorch _nn_ Modules The PyTorch library includes [neural network modules](https://pytorch.org/docs/stable/nn.html) to build a layered network architecture. In PyTorch parlance, these modules comprise each layer of your network. Deriving from the base class `torch.nn.Module`, you can easily create a simple or complex layered neural network. To define a PyTorch customized network module class and its methods, you follow a pattern similar to building a customized Python object class derived from its base class `object`. Let's define a simple [two-layered](https://pytorch.org/tutorials/beginner/examples_nn/two_layer_net_module.html#pytorch-custom-nn-modules) linear network example to illustrate this similarity. Notice that the custom `TwoLayerNet` below is Pythonic in its flow and structure. Classes derived from `torch.nn.Module` have initializers with parameters, define interface methods, and are callable. That is, the base class `nn.Module` implements the Python magic `__call__()` object method. Even though the two-layered model is simple, it demonstrates this familiarity with extending a class from Python's base object. Furthermore, you get an intuitive feeling that you are writing or reading Python application code while using PyTorch APIs. It does not feel like you're learning a new language: the syntax, structure, form, and behavior are all familiar; the unfamiliar bits are the PyTorch modules and APIs, which is no different from learning a new PyData package's APIs and incorporating them into your Python application code.
###Code
import torch
import torch.nn as nn
class TwoLayerNet(nn.Module):
"""
In the constructor we instantiate two nn.Linear modules and assign them as
member variables.
"""
def __init__(self, input_size, hidden_layers, output_size):
super(TwoLayerNet, self).__init__()
self.l1 = nn.Linear(input_size, hidden_layers)
self.relu = nn.ReLU()
self.l2 = nn.Linear(hidden_layers, output_size)
def forward(self, x):
"""
In the forward function we accept a Tensor of input data and we must return
a Tensor of output data. We can use Modules defined in the constructor as
well as arbitrary (differentiable) operations on Tensors.
"""
y_pred = self.l1(x)
y_pred = self.relu(y_pred)
y_pred = self.l2(y_pred)
return y_pred
###Output
_____no_output_____
###Markdown
Define some input and output dimensions for the network layers and check if `cuda` is available
###Code
dtype = torch.float
# Check if we can use cuda, i.e., if GPUs are available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# N is batch size; D_in is input dimension;
# H is hidden dimension layer; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random tensors to use as inputs and outputs on the CPU or GPU
X = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)
print("X shape: {}, Y shape: {}, X rank: {}, Y rank: {}". format(X.shape, y.shape, X.ndim, y.ndim))
###Output
_____no_output_____
###Markdown
Construct our model by instantiating the class defined above, as you would construct any Python custom class object.
###Code
# Check if CUDA is available for GPUs.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = TwoLayerNet(D_in, H, D_out).to(device)
model, device
###Output
_____no_output_____
###Markdown
Construct our loss function and an optimizer. The call to `model.parameters()` in the SGD constructor will contain the learnable parameters of the two `nn.Linear` modules, which are members of the model.
###Code
learning_rate = 1e-4
loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
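###Markdown
As a small, hedged check (not part of the original example), we can count the learnable parameters exposed by `model.parameters()` to see exactly what the SGD optimizer will update.
###Code
# Count the learnable tensors of the two nn.Linear layers (weights and biases)
n_params = sum(p.numel() for p in model.parameters())
print("learnable parameters: {}".format(n_params))  # (D_in*H + H) + (H*D_out + D_out)
###Output
_____no_output_____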
###Markdown
Now we define a simple training loop with some iterations, using familiar Python language constructs.
###Code
for t in range(350):
# Forward pass: Compute predicted y by passing x to the model.
# Invoke the model object (it is callable) to compute the predictions
y_pred = model(X)
# Compute and print loss
loss = loss_fn(y_pred, y)
if t % 50 == 0:
print("iterations: {}, loss: {:8.2f}".format(t, loss.item()))
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
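###Markdown
As a short, hedged follow-up (not in the original post), we can evaluate the trained model under `torch.no_grad()` to confirm that the loss has decreased from the values printed above.
###Code
# Inference-mode forward pass; no gradients are tracked inside no_grad()
with torch.no_grad():
    final_pred = model(X)
    print("final loss: {:8.2f}".format(loss_fn(final_pred, y).item()))
###Output
_____no_output_____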
###Markdown
What follows from above is a recognizable pattern and flow between how you define a customized Python class and a simple PyTorch neural network. Also, the code is concise and reads like Python code. Another recognizable Pythonic pattern in PyTorch is how `Dataset` and `DataLoaders` use Python protocols to build iterators. 1c. Easy to Customize PyTorch Dataset for Dataloaders At the core of the PyTorch data-loading utility is the `torch.utils.data.DataLoader` class; it is an integral part of the PyTorch iterative training process, in which we iterate over batches of input during an epoch of training. `DataLoaders` offer a Python iterable over your custom dataset by implementing the Python sequence and iterator protocols: this includes implementing the `__len__` and `__getitem__` magic methods on an object. Again, very Pythonic in behavior: as part of the implementation, we employ list comprehensions, convert NumPy arrays to tensors, and use random access to fetch the _nth_ data item, all conforming to familiar access patterns and behaviors of doing things in Python. Let's look at a simple custom Dataset of temperatures for use in training a model. More complex datasets could hold images, extensive feature datasets of tensors, etc.
###Code
import math
from torch.utils.data import Dataset, DataLoader
class FahrenheitTemperatures(Dataset):
def __init__(self, start=0, stop=212, size=5000):
super(FahrenheitTemperatures, self).__init__()
# Initialize local variables and convert them into tensors
f_temp = np.random.randint(start, high=stop, size=size)
# Use a Python list comprehension to convert Fahrenheit to Celsius
c_temp = np.array([self._f2c(f) for f in f_temp])
# Convert to Tensors from numpy
self.X = torch.from_numpy(f_temp).float()
self.y = torch.from_numpy(c_temp).float()
# Data for prediction or validation
self.X_pred = torch.from_numpy(np.arange(212, 170, -5, dtype=float))
self.n_samples = self.X.shape[0]
def __getitem__(self, index):
# Support indexing such that dataset[i] can be used to get i-th sample
# implement this python function for indexing
# return a tuple (X,y)
return self.X[index], self.y[index]
def __len__(self):
# We can call len(dataset) to return the size, so this can be used
# as an iterator
return self.n_samples
def _f2c(self, f) -> float:
return (f - 32) * 5.0/9.0
###Output
_____no_output_____
###Markdown
Using familiar Python access patterns, you can use [] to access your data for a given integer index, since we have implemented the `__getitem__` magic method.
###Code
# Let's now access our dataset using an index
dataset = FahrenheitTemperatures()
#unpack since it returns a tuple
features, labels = dataset[0]
print('Fahrenheit: {:.2f}'.format(features))
print('Celsius : {:.2f}'.format(labels))
print('Samples: {}'.format(len(dataset)))
###Output
_____no_output_____
###Markdown
A PyTorch DataLoader class takes an instance of a customized `FahrenheitTemperatures` class object as a parameter. This utility class is standard in PyTorch training loops. It offers an ability to iterate over batches of data like an iterator: again, a very _Pythonic_ and straightforward way of doing things!
###Code
# Let's try Dataloader class and make this into an iterator and access the data as above
dataloader = DataLoader(dataset=dataset, batch_size=4, shuffle=True)
dataiter = iter(dataloader)
data = next(dataiter)
# Since we specified our batch size to be 4, we'll see four features and labels
print('Fahrenheit: {}'.format(data[0]))
print('Celsius : {}'.format(data[1]))
###Output
_____no_output_____
###Markdown
Since we implemented our custom `Dataset`, let's use it in the PyTorch training loop.
###Code
# Let's do a dummy training loop
num_epochs = 2
batch_size = 4
total_samples = len(dataset)
n_iterations = math.ceil(total_samples/batch_size)
for epoch in range(num_epochs):
# iterate over our dataloader in batches
# Because we have implemented our Dataset class with __getitem__ and __len__, we
# can iterate over it
for i, (inputs, labels) in enumerate(dataloader):
# Forward and backward passes, gradient updates, and zeroing out of gradients
# would appear within this loop
# Run your training process
if (i+1) % 400 == 0:
print(f'Epoch: {epoch+1}/{num_epochs}, Step {i+1}/{n_iterations}| Inputs {inputs.shape} | Labels {labels.shape}, Tensors {inputs}')
###Output
_____no_output_____
###Markdown
Although the aforementioned Pythonic reasons are not directly related to the [Databricks Lakehouse Platform](https://databricks.com/product/data-lakehouse), they account for ideas of familiarity, clarity, simplicity, and the _Pythonic_ way of writing PyTorch code. Next, we examine what aspects of the Databricks Lakehouse Platform's runtime for machine learning facilitate learning PyTorch. 2. No need to install Python packages As part of the Databricks Lakehouse Platform, the runtime for machine learning (MLR) comes preinstalled with the latest versions of Python, PyTorch, PyData ecosystem packages, and additional standard machine learning libraries, saving you from installing or managing any packages. Out-of-the-box, ready-to-use runtime environments are conducive to learning because they reduce the friction of getting started by freeing you from having to control or install packages. If you want to install additional Python packages, it's as simple as using ``%pip install ``. This ability to support package management (see "How to Simplify Python Environment Management Using Databricks' %pip and %conda Magic Commands") on your cluster is popular among Databricks customers and widely used as part of their model development lifecycle. To inspect the list of all preinstalled packages, use `%pip list`.
###Code
%pip list
###Output
_____no_output_____
###Markdown
3. Easy to Use CPUs or GPUs Neural networks for deep learning involve numeric-intensive computations, including dot products and matrix multiplications on large and higher-ranked tensors. For compute-bound PyTorch applications that require GPUs, you can easily create an MLR cluster with GPUs and place your data on them. As such, all your training can be done on GPUs, as the simple `TwoLayerNet` example above demonstrates by using the GPU for training when `cuda` is available. Although our example code below is simple, showing matrix multiplication of two randomly generated tensors, real PyTorch applications will have much more intense computation during their forward and backward passes and [auto-grad](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html) computations.
###Code
dtype = torch.float
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Randomly initialize weights and put the tensors on a GPU if available
a = torch.randn((5, 5), device=device, dtype=dtype)
b = torch.randn((5, 5), device=device, dtype=dtype)
# Matrix multiplication, done on the GPU if one is available
c = torch.matmul(a, b)
print(" c: {}".format(c))
###Output
_____no_output_____
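###Markdown
As a small, hedged illustration (not in the original post) of the auto-grad computations mentioned above: a tensor created with `requires_grad=True` records the operations applied to it, so calling `.backward()` populates its gradient, on whichever device the tensor lives.
###Code
# Autograd sketch: gradient of a simple scalar function of a random tensor
w = torch.randn((3, 3), device=device, dtype=dtype, requires_grad=True)
out = (w ** 2).sum()   # scalar output
out.backward()         # computes d(out)/dw
print(w.grad)          # equals 2 * w
###Output
_____no_output_____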
###Markdown
4. Easy to use TensorBoard [Already announced in a blog](https://databricks.com/blog/2020/08/25/tensorboard-a-new-way-to-use-tensorboard-on-databricks.html) as part of the Databricks Runtime (DBR), the `%tensorboard` magic command displays your training metrics from [TensorBoard](https://www.tensorflow.org/tensorboard) within the same notebook. No longer do you need to leave your notebook and launch TensorBoard from another tab. This in-place visualization is a significant improvement toward simplicity and developer experience, and PyTorch developers can quickly see their metrics in TensorBoard. Let's run a sample [PyTorch FashionMNIST example](https://pytorch.org/docs/stable/tensorboard.html) with TensorBoard logging. First, define a `SummaryWriter`, then load the FashionMNIST `Dataset` into a `DataLoader` and feed a batch of its images through a `torchvision.models.resnet50` model.
###Code
from torch.utils.tensorboard import SummaryWriter
import numpy as np
writer = SummaryWriter()
for n_iter in range(100):
writer.add_scalar('Loss/train', np.random.random(), n_iter)
writer.add_scalar('Loss/test', np.random.random(), n_iter)
writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
writer.add_scalar('Accuracy/test', np.random.random(), n_iter)
import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter
from torchvision import datasets, transforms
# Writer will output to ./runs/ directory by default
writer = SummaryWriter()
# Transformation pipeline applied to the input data
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
# Create a PyTorch FashionMNIST dataset
trainset = datasets.FashionMNIST('mnist_train', train=True, download=True, transform=transform)
# Use the dataset as in the Dataloader
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
model = torchvision.models.resnet50(False)
# Have ResNet model take in grayscale rather than RGB
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
images, labels = next(iter(trainloader))
grid = torchvision.utils.make_grid(images)
writer.add_image('images', grid, 0)
writer.add_graph(model, images)
writer.close()
###Output
_____no_output_____
###Markdown
Using our Databricks notebook's magic commands, we can launch TensorBoard within a cell and examine the training metrics and model outputs.
###Code
%load_ext tensorboard
%tensorboard --logdir=./runs
###Output
_____no_output_____
###Markdown
5. PyTorch Integrated with MLflow In our steadfast effort to make Databricks simpler, we enhanced [MLflow fluent tracking APIs](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog) to autolog MLflow entities (metrics, tags, parameters, and artifacts) for supported machine learning libraries, including PyTorch Lightning. Through the MLflow UI, an integral part of the workspace, you can access all MLflow experiments via the `Experiment` icon in the upper right corner. All experiment runs during training are automatically logged to the MLflow tracking server. There is no need to explicitly use the tracking APIs to log MLflow entities, although that does not prevent you from tracking and logging additional entities such as images, dictionaries, or text artifacts. Here is a minimal example of a PyTorch Lightning FashionMNIST instance with just a training loop step (no validation, no testing). It illustrates how you can use MLflow to autolog MLflow entities, peruse the MLflow UI to inspect its runs from within this notebook, register the model, and [serve or deploy](https://docs.databricks.com/applications/mlflow/model-serving.html) it.
###Code
%pip install pytorch_lightning
import os
import pytorch_lightning as pl
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import FashionMNIST
from pytorch_lightning.metrics.functional import accuracy
import mlflow.pytorch
from mlflow.tracking import MlflowClient
class MNISTModel(pl.LightningModule):
def __init__(self):
super(MNISTModel, self).__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
logits = self(x)
loss = F.cross_entropy(logits, y)
acc = accuracy(torch.argmax(logits, dim=1), y)  # accuracy expects predictions and targets, not the loss
self.log("train_loss", loss, on_epoch=True)
self.log("acc", acc, on_epoch=True)
return loss
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
###Output
_____no_output_____
###Markdown
Create the PyTorch model as you would instantiate any custom Python class, feed the FashionMNIST `DataLoader` to a PyTorch Lightning `Trainer`, and autolog all MLflow entities during its `trainer.fit()` method.
###Code
mnist_model = MNISTModel()
# Init DataLoader from FashionMNIST Dataset
train_ds = FashionMNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
train_loader = DataLoader(train_ds, batch_size=32)
# Initialize a trainer
trainer = pl.Trainer(max_epochs=20, progress_bar_refresh_rate=20)
# Auto log all MLflow entities
mlflow.pytorch.autolog()
# Train the model
with mlflow.start_run() as run:
trainer.fit(mnist_model, train_loader)
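# A hedged follow-up (not part of the original example): the autologged
# parameters and metrics of this run can also be inspected programmatically
# with the MlflowClient imported above.
client = MlflowClient()
run_data = client.get_run(run.info.run_id).data
print("logged params:", run_data.params)
print("logged metrics:", run_data.metrics)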
###Output
_____no_output_____ |
examples/Python.ipynb | ###Markdown
Python LSP Hover action
###Code
def square(x):
"""Can you see me?"""
return x*x
###Output
_____no_output_____
###Markdown
Hover over `square` and see an underline appear; press Ctrl to display a tooltip with the docstring.
###Code
result = square(2)
###Output
_____no_output_____
###Markdown
Inspections This import is underlined as it should be placed at the top of the file; it has an orange underline as this is only a warning.
###Code
from statistics import mean
###Output
_____no_output_____
###Markdown
You can also hover over `statistics` and `mean` (while holding Ctrl) to see the documentation of those.
###Code
undefined_variable
###Output
_____no_output_____
###Markdown
You will see a red underline for an undefined variable (example above) or for invalid syntax. Also, spurious whitespaces can be highlighted (if the server supports such diagnostics):
###Code
class Dog:
def bark(self):
print('🐕 woof woof')
Dog().bark()
###Output
🐕 woof woof
###Markdown
Empty cells will cause a "too many blank lines" warning as each cell is padded with two new lines. If we remove the blank cell, everything will be perfect! Diagnostics Panel Search for "Show diagnostics panel" in the commands palette, or invoke it from the context menu to display all the diagnostics from the file in one place. The diagnostics panel allows you to sort the inspections and go to the respective locations in the code (just click on the row of interest). Autocompletion
###Code
class Cat:
def miaow(self):
print('miaow')
###Output
_____no_output_____
###Markdown
Autocompletion works without the kernel - try completing "Cat" below using Tab, without running the cell above:
###Code
Ca
###Output
_____no_output_____
###Markdown
You can see that all the double-dunder methods of the class are immediately available:
###Code
Cat.__
###Output
_____no_output_____
###Markdown
In the future, it will automatically invoke the completion suggestions after typing a dot (.):
###Code
Cat
###Output
_____no_output_____
###Markdown
Rename You can rename symbols by pressing F2 or selecting rename option from the context menu.If you rename the `test` variable below to `test2`, both occurrences (in the two following cells) will be updated:
###Code
test = 1
test
###Output
_____no_output_____
###Markdown
However, a local reference from a different scope (inside the `abc()` function) will be unaffected:
###Code
def abc():
test = 2
test
###Output
_____no_output_____
###Markdown
Python LSP Hover action
###Code
def square(x):
"""Can you see me?"""
return x*x
###Output
_____no_output_____
###Markdown
Hover over `square` and see an underline appear; press Ctrl to display a tooltip with the docstring.
###Code
result = square(2)
###Output
_____no_output_____
###Markdown
Inspections This import is underlined as it should be placed at the top of the file; it has an orange underline as this is only a warning.
###Code
from statistics import mean
###Output
_____no_output_____
###Markdown
You can also hover over `statistics` and `mean` (while holding Ctrl) to see the documentation of those.
###Code
undefined_variable
###Output
_____no_output_____
###Markdown
You will see a red underline for an undefined variable (example above) or for invalid syntax. Also, spurious whitespaces can be highlighted (if the server supports such diagnostics):
###Code
class Dog:
def bark(self):
print('🐕 woof woof')
Dog().bark()
###Output
🐕 woof woof
###Markdown
Empty cells will cause a "too many blank lines" warning as each cell is padded with two new lines. If we remove the blank cell, everything will be perfect! Diagnostics Panel Search for "Show diagnostics panel" in the commands palette, or invoke it from the context menu to display all the diagnostics from the file in one place. The diagnostics panel allows you to sort the inspections and go to the respective locations in the code (just click on the row of interest). Autocompletion
###Code
class Cat:
def miaow(self):
print('miaow')
###Output
_____no_output_____
###Markdown
Autocompletion works without the kernel - try completing "Cat" below using Tab, without running the cell above:
###Code
Ca
###Output
_____no_output_____
###Markdown
You can see that all the double-dunder methods of the class are immediately available:
###Code
Cat.__
###Output
_____no_output_____
###Markdown
It also automatically invokes the completion suggestions after typing a dot (.):
###Code
Cat
###Output
_____no_output_____
###Markdown
Rename You can rename symbols by pressing F2 or selecting rename option from the context menu.If you rename the `test` variable below to `test2`, both occurrences (in the two following cells) will be updated:
###Code
test = 1
test
###Output
_____no_output_____
###Markdown
However, a local reference from a different scope (inside the `abc()` function) will be unaffected:
###Code
def abc():
test = 2
test
###Output
_____no_output_____
###Markdown
Python LSP Hover action
###Code
def square(x):
"""Can you see me?"""
return x*x
###Output
_____no_output_____
###Markdown
Hover over `square` and see an underline appear; press `Ctrl` to display a tooltip with the docstring.
###Code
result = square(2)
###Output
_____no_output_____
###Markdown
Inspections This import is underlined as it should be placed at the top of the file; it has an orange underline as this is only a warning.
###Code
from statistics import mean
###Output
_____no_output_____
###Markdown
You can also hover over statistics and mean (while holding `Ctrl`) to see the documentation of those.
###Code
if there is invalid syntax:
pass
###Output
_____no_output_____
###Markdown
You will see a red underline ("invalid" and "syntax" above are two expressions which cannot be placed next to each other without an operator). Also, spurious whitespaces can be highlighted (if the server supports such diagnostics):
###Code
class Dog:
def bark(self):
print('🐕 woof woof')
Dog().bark()
###Output
🐕 woof woof
###Markdown
Empty cells will cause a "too many blank lines" warning as each cell is padded with two new lines. If we remove the blank cell, everything will be perfect! Autocompletion
###Code
class Cat:
def miaow(self):
print('miaow')
###Output
_____no_output_____
###Markdown
Autocompletion works without the kernel - try completing "Cat" below using Tab, without running the cell above:
###Code
Ca
###Output
_____no_output_____
###Markdown
You can see that all the double-dunder methods of the class are immediately available:
###Code
Cat.__
###Output
_____no_output_____
###Markdown
In the future, it will automatically invoke the completion suggestions after typing a dot (.):
###Code
Cat
###Output
_____no_output_____
###Markdown
Querying Portia - Data fetching with Python Making HTTP requests using Python - Checking credentials * Unsuccessful request
###Code
# Library for HTTP requests
import requests
# Portia service URL for token authorization checking
url = "http://io.portia.supe.solutions/api/v1/accesstoken/check"
# Makes the request
response = requests.get(url)
# Shows response
if response.status_code == 200:
print("Success accessing Portia Service - Status Code: {0}\n{1}".format(response.status_code, response.text))
else:
print("Couldn't access Portia service - Status Code: {0}".format(response.status_code))
###Output
Couldn't access Portia service - Status Code: 401
###Markdown
* Successful request
###Code
# Library for HTTP requests
import requests
# Portia service URL for token authorization checking
url = "http://io.portia.supe.solutions/api/v1/accesstoken/check"
# Setting the header with a token for successful authorization
header = {"Authorization": "Bearer bdb6e780b43011e7af0b67cba486057b"}
# Makes the request
response = requests.get(url, headers=header)
# Shows response
if response.status_code == 200:
print("Success accessing Portia Service - Status Code: {0}\n{1}".format(response.status_code, response.text))
else:
print("Couldn't access Portia service - Status Code: {0}".format(response.status_code))
###Output
Success accessing Portia Service - Status Code: 200
{"user":"teste","isLoggedIn":true}
###Markdown
Obtaining data from a specific time frame Now that we have learned how to authenticate with the service, let's see how to get the data.
###Code
import requests # Library for HTTP requests
import time as epoch # Library for timing functions
import json # Library for JSON usage
# Example for getting the last 5 minutes of data
fiveMinutes = 1000 * 60 * 5
toTimestamp = int(epoch.time()) * 1000 # The time lib only gives us the UTC time as seconds since January 1, 1970, so, we multiply by 1000 to get the milliseconds
fromTimestamp = toTimestamp - fiveMinutes
# Portia service URL for specific time frame
url = "http://io.portia.supe.solutions/api/v1/device/HytTDwUp-j8yrsh8e/port/2/sensor/1"
# Adding the calculated timestamps as GET parameters
url += "?from_timestamp={0}&?to_timestamp={1}".format(fromTimestamp, toTimestamp) # If no parameters, the service default response is for the last 24 hours
# Setting the header with a token for successful authorization
header = {"Authorization": "Bearer bdb6e780b43011e7af0b67cba486057b"}
# Makes the request
response = requests.get(url, headers=header)
# Shows response
if response.status_code == 200:
# Parses dimensions
dimensions = json.loads(response.text)
print("Success! For each received dimension:")
for dimension in dimensions:
print("Accessing dimension package:")
print("\tDimension Code: {0}".format(dimension["dimension_code"]))
print("\tUnity Code: {0}".format(dimension["dimension_unity_code"]))
print("\tThing Code: {0}".format(dimension["dimension_thing_code"]))
print("\tDimension Value: {0}".format(dimension["dimension_value"]))
print("\tServer Timestamp: {0}\n".format(dimension["server_timestamp"]))
else:
print("Couldn't access Portia service - Status Code: {0}".format(response.status_code))
###Output
Success! For each received dimension:
Accessing dimension package:
Dimension Code: 1
Unity Code: 1
Thing Code: 1
Dimension Value: 22.5
Server Timestamp: 1508778623757
Accessing dimension package:
Dimension Code: 1
Unity Code: 1
Thing Code: 1
Dimension Value: 22.5
Server Timestamp: 1508778562317
Accessing dimension package:
Dimension Code: 1
Unity Code: 1
Thing Code: 1
Dimension Value: 22.5
Server Timestamp: 1508778502766
Accessing dimension package:
Dimension Code: 1
Unity Code: 1
Thing Code: 1
Dimension Value: 22.4
Server Timestamp: 1508778442152
Accessing dimension package:
Dimension Code: 1
Unity Code: 1
Thing Code: 1
Dimension Value: 22.4
Server Timestamp: 1508778382528
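###Markdown
The `server_timestamp` values above are Unix epochs in milliseconds. As a small, hedged aside (not part of the original tutorial), they can be converted into human-readable datetimes like this:
###Code
from datetime import datetime, timezone   # Standard-library datetime utilities
ts_ms = 1508778623757                     # Value taken from the output above
print(datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc))
###Output
_____no_output_____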
###Markdown
Obtaining the latest data For the next example, we request only the last data sent by the equipment. * Last dimension
###Code
import requests # Library for HTTP requests
import json # Library for JSON usage
# Portia service URL for getting the latest data
url = "http://io.portia.supe.solutions/api/v1/device/HytTDwUp-j8yrsh8e/port/2/sensor/1/last"
# Setting the header with a token for successful authorization
header = {"Authorization": "Bearer bdb6e780b43011e7af0b67cba486057b"}
# Makes the request
response = requests.get(url, headers=header)
# Shows response
if response.status_code == 200:
# Parses dimension
dimension = json.loads(response.text)[0]
print("Success! Accessing dimension package:")
print("\tDimension Code: {0}".format(dimension["dimension_code"]))
print("\tUnity Code: {0}".format(dimension["dimension_unity_code"]))
print("\tThing Code: {0}".format(dimension["dimension_thing_code"]))
print("\tDimension Value: {0}".format(dimension["dimension_value"]))
print("\tServer Timestamp: {0}\n".format(dimension["server_timestamp"]))
else:
print("Couldn't access Portia service - Status Code: {0}".format(response.status_code))
###Output
Success! Accessing dimension package:
Dimension Code: 1
Unity Code: 1
Thing Code: 1
Dimension Value: 22.6
Server Timestamp: 1508779043681
###Markdown
* Last three dimensions
###Code
import requests # Library for HTTP requests
import json # Library for JSON usage
# Portia service URL for getting the latest data
url = "http://io.portia.supe.solutions/api/v1/device/HytTDwUp-j8yrsh8e/port/2/sensor/1/last"
# Adding GET parameter for specifying that we want the last 3 dimension packages
url += "?limit={0}".format(3)
# Setting the header with a token for successful authorization
header = {"Authorization": "Bearer bdb6e780b43011e7af0b67cba486057b"}
# Makes the request
response = requests.get(url, headers=header)
# Shows response
if response.status_code == 200:
# Parses dimensions
dimensions = json.loads(response.text)
print("Success! For each received dimension:")
for dimension in dimensions:
print("Accessing dimension package:")
print("\tDimension Code: {0}".format(dimension["dimension_code"]))
print("\tUnity Code: {0}".format(dimension["dimension_unity_code"]))
print("\tThing Code: {0}".format(dimension["dimension_thing_code"]))
print("\tDimension Value: {0}".format(dimension["dimension_value"]))
print("\tServer Timestamp: {0}\n".format(dimension["server_timestamp"]))
else:
print("Couldn't access Portia service - Status Code: {0}".format(response.status_code))
###Output
Success! For each received dimension:
Accessing dimension package:
Dimension Code: 1
Unity Code: 1
Thing Code: 1
Dimension Value: 22.5
Server Timestamp: 1508779284186
Accessing dimension package:
Dimension Code: 1
Unity Code: 1
Thing Code: 1
Dimension Value: 22.5
Server Timestamp: 1508779224094
Accessing dimension package:
Dimension Code: 1
Unity Code: 1
Thing Code: 1
Dimension Value: 22.6
Server Timestamp: 1508779163051
|
04-csv.ipynb | ###Markdown
CSV Files===**Juan David Velásquez Henao** [email protected] Universidad Nacional de Colombia, Sede Medellín Facultad de Minas Medellín, Colombia---Click [here](https://github.com/jdvelasq/SQL-basics) to access the repository on GitHub. Click [here](http://nbviewer.jupyter.org/github/jdvelasq/SQL-basics/tree/master/) to explore the repository using `nbviewer`. ---
###Code
## cell magic
%load_ext sql
## connect to the database
%sql mysql+pymysql://root:password@localhost
%sql DROP DATABASE IF EXISTS sqldemo;
%sql CREATE DATABASE IF NOT EXISTS sqldemo;
%sql USE sqldemo
###Output
* mysql+pymysql://root:***@localhost
0 rows affected.
* mysql+pymysql://root:***@localhost
1 rows affected.
* mysql+pymysql://root:***@localhost
0 rows affected.
###Markdown
Data preparation Three data files are generated in CSV format to import into the database. The `%%writefile filename` magic writes the contents of the cell to the file named `filename` in the current working directory.
###Code
%%writefile bancos.csv
Ally Financial,3608-2596-5394-1054,216-82-1048
BB&T,3608-1721-4951-1198,116-81-1883
BBVA Compass,3608-1395-5632-1976,224-25-1891
BBVA Compass,3608-1721-4005-1322,116-51-1291
BBVA Compass,3608-2181-5724-1476,144-25-1448
BBVA Compass,3608-2596-5634-1497,224-99-1262
BMO Harris Bank,3608-1721-4236-1828,279-81-1912
BNP Paribas,3608-2181-5030-1465,216-51-1025
Fifth Third Bank,3608-1782-5015-1001,429-83-1156
Citizens Financial Group,3608-2181-4711-1693,177-44-1054
Comerica,3608-1333-4580-1185,216-85-1367
Comerica,3608-2596-5551-1572,116-93-1394
Deutsche Bank,3608-1782-5551-1837,339-74-1545
Discover Financial,3608-1395-4951-1668,116-51-1291
Fifth Third Bank,3608-1782-4458-1383,166-82-1605
First Republic Bank,3608-1682-4160-1476,425-82-1851
First Republic Bank,3608-2596-5696-1134,429-83-1156
JPMorgan Chase,3608-1782-5890-1999,287-74-1145
JPMorgan Chase,3608-2067-5766-1056,177-23-1359
JPMorgan Chase,3608-2181-5988-1718,116-54-1259
JPMorgan Chase,3608-2751-4236-1394,320-54-1856
MUFG Union Bank,3608-2800-5459-1497,144-54-1840
New York Community Bancorp,3608-2968-5745-1804,323-51-1535
Popular. Inc.,3608-1333-4394-1935,177-44-1159
Popular. Inc.,3608-1721-5632-1589,224-55-1496
Popular. Inc.,3608-2800-5551-1351,279-81-1912
Santander Bank,3608-1395-5691-1428,301-25-1394
Signature Bank,3608-2588-5394-1381,216-82-1048
Signature Bank,3608-2751-5015-1278,224-55-1496
SunTrust Banks,3608-1682-5152-1053,339-74-1545
MUFG Union Bank,3608-1782-4038-1052,238-81-1227
U.S. Bancorp,3608-1192-5884-1614,391-55-1442
U.S. Bancorp,3608-1333-4005-1623,177-44-1054
U.S. Bancorp,3608-2067-5394-1306,216-85-1367
U.S. Bancorp,3608-2181-4288-1394,381-54-1605
USAA,3608-1782-5791-1558,368-83-1054
Wells Fargo,3608-1782-5030-1572,339-82-1442
Wells Fargo,3608-2588-5988-1551,144-25-1448
%%writefile personas.csv
216-51-1025,(09)-5580-7527,Memphis (Tennessee),Single,Marco Goodman,1988-06-30
116-51-1291,(07)-2905-7818,Buffalo (New York),Married,Roxanne Kerns,1974-11-27
177-44-1159,(09)-5062-6922,Detroit (Michigan),Single,Regina Lauritzen,1969-07-27
116-81-1883,(03)-1350-7402,Chandler (Arizona),Divorced,Howard Samsel,1989-11-26
429-83-1156,(09)-5794-9470,Scottsdale (Arizona),Married,Gabriel Kingston,1978-05-05
381-54-1605,(05)-5330-5036,Albuquerque (New Mexico),Single,Carrie Bigelow,1982-05-02
224-99-1262,(05)-3339-3262,Milwaukee (Wisconsin),Married,Nichelle Thaxton,1988-01-02
301-25-1394,(07)-4370-8507,Houston (Texas),Single,Joaquin Yap,1972-11-12
323-51-1535,(03)-5179-6500,Las Vegas (Nevada),Married,Yu Kittredge,1978-01-22
216-85-1367,(07)-2905-9114,Saint Paul (Minnesota),Divorced,Tania Raley,1973-12-16
166-82-1605,(09)-6473-4208,Irvine (California),Married,Demetrius Fry,1975-03-27
116-54-1259,(04)-3468-6535,San Bernardino (California),Divorced,Jake Vansant,1980-02-01
224-55-1496,(03)-8685-6502,Aurora (Colorado),Common-Law,Tamesha Lawlor,1983-10-02
177-44-1054,(08)-5902-5867,El Paso (Texas),Single,Millie Lasher,1976-03-29
320-54-1856,(04)-3858-1079,Houston (Texas),Divorced,Lilly Macdonald,1983-09-07
144-25-1448,(09)-5179-2725,Durham (North Carolina),Divorced,Gerald Glynn,1985-07-07
144-54-1840,(03)-7508-9910,Orlando (Florida),Single,Felipe Malpass,1982-06-08
224-25-1891,(05)-9333-5713,Las Vegas (Nevada),Divorced,Wallace Lowery,1971-12-19
216-82-1048,(04)-1199-9661,Las Vegas (Nevada),Married,Pedro Welch,1972-05-07
116-93-1394,(05)-9333-4606,Tampa (Florida),Married,Betty Fitzhugh,1973-08-12
279-81-1912,(08)-2905-8942,Winston–Salem (North Carolina),Single,Lauren Seifert,1977-10-07
425-82-1851,(03)-5794-3345,Aurora (Colorado),Married,Livia Castillo,1972-02-26
339-74-1545,(08)-4858-6766,Atlanta (Georgia),Married,Leland Scully,1975-07-25
368-83-1054,(05)-7508-4870,Omaha (Nebraska),Married,Elton Castellanos,1975-10-08
339-82-1442,(09)-5854-7191,Henderson (Nevada),Single,Sondra Pike,1980-06-25
391-55-1442,(05)-6865-1079,Baton Rouge (Louisiana),Divorced,Laquita Murrin,1984-04-21
177-23-1359,(07)-5854-6781,St. Louis (Missouri),Single,Gigi Ragland,1977-01-27
238-81-1227,(03)-9999-9910,Laredo (Texas),Common-Law,Ronald Signorelli,1977-06-13
287-74-1145,(03)-5794-9130,Fremont (California),Single,Wilson Upshaw,1976-02-21
%%writefile franquicias.csv
Capital One,3608-2181-5030-1465,2023-05-27,4538,337,1400
USAA,3608-1395-4951-1668,2024-03-11,5101,240,1900
U.S. Bank,3608-1333-4394-1935,2020-03-08,4814,231,2000
Capital One,3608-1721-4951-1198,2019-05-30,3925,366,1600
PNC,3608-1782-5015-1001,2025-06-08,4241,048,1200
Capital One,3608-2181-4288-1394,2021-12-08,4253,556,1700
Wells Fargo,3608-2596-5634-1497,2018-07-15,5205,140,1200
USAA,3608-1395-5691-1428,2023-04-20,2111,512,1400
American Express,3608-2968-5745-1804,2025-12-01,5065,993,1000
American Express,3608-1333-4580-1185,2025-01-13,2377,277,1300
Discover,3608-1782-4458-1383,2018-07-19,4623,863,1500
Chase,3608-2181-5988-1718,2024-03-23,2987,452,1000
USAA,3608-2751-5015-1278,2024-04-25,2744,831,1700
U.S. Bank,3608-1333-4005-1623,2022-11-10,2117,373,1400
MasterCard,3608-2751-4236-1394,2022-07-22,7943,109,1200
MasterCard,3608-2588-5988-1551,2021-11-11,2172,945,1900
USAA,3608-2800-5459-1497,2024-04-02,7568,458,1400
Discover,3608-1395-5632-1976,2022-12-22,5884,272,1200
BarclayCard US,3608-2588-5394-1381,2020-12-24,5280,237,1200
American Express,3608-2596-5551-1572,2024-07-28,4107,438,1500
BarclayCard US,3608-2800-5551-1351,2020-09-30,4174,318,1700
Discover,3608-1682-4160-1476,2021-05-08,2135,864,2000
U.S. Bank,3608-1682-5152-1053,2024-04-27,7022,246,1100
U.S. Bank,3608-1782-5791-1558,2023-09-24,7502,188,2000
PNC,3608-1782-5030-1572,2024-07-31,6887,951,1700
MasterCard,3608-1192-5884-1614,2018-06-20,5594,800,2000
Bank of America,3608-2067-5766-1056,2025-09-30,2338,355,1200
BarclayCard US,3608-1782-4038-1052,2022-06-25,2130,117,1500
American Express,3608-1782-5890-1999,2021-11-10,3195,732,1600
Visa,3608-1782-5551-1837,2024-11-05,5357,255,2000
MasterCard,3608-1721-4236-1828,2018-05-15,3700,561,1800
U.S. Bank,3608-2596-5394-1054,2022-06-05,6787,233,1000
Discover,3608-2181-5724-1476,2022-09-18,3027,475,1000
MasterCard,3608-2181-4711-1693,2024-03-26,2739,733,1400
Bank of America,3608-1721-5632-1589,2025-02-08,6587,337,1500
Chase,3608-2067-5394-1306,2019-02-13,2544,222,1200
U.S. Bank,3608-2596-5696-1134,2024-05-22,7442,587,1900
USAA,3608-1721-4005-1322,2018-08-16,7241,201,1300
###Output
_____no_output_____
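###Markdown
As a small, hedged aside (not part of the original notebook), we can confirm that the three CSV files were written to the current working directory before moving them:
###Code
# Check that the files created by %%writefile exist and report their sizes
import os
for fname in ['bancos.csv', 'personas.csv', 'franquicias.csv']:
    print(fname, os.path.getsize(fname), 'bytes')
###Output
_____no_output_____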
###Markdown
Copying the data to the database directory Because of a configuration change in MySQL, the generated `*.csv` files must be moved manually to the directory where the database is located. macOS users can run the following commands directly in a terminal from the directory that contains this notebook. sudo mv personas.csv /usr/local/mysql/data/sqldemo/personas.csv sudo mv bancos.csv /usr/local/mysql/data/sqldemo/bancos.csv sudo mv franquicias.csv /usr/local/mysql/data/sqldemo/franquicias.csv Creating the structure of the database tables
###Code
%%sql
CREATE TABLE personas (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
ssn VARCHAR(11),
phone VARCHAR(14),
city VARCHAR(40),
maritalstatus VARCHAR(10),
fullname VARCHAR(40),
birthdate DATE
);
CREATE TABLE franquicias (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
ccntype VARCHAR(40),
ccn VARCHAR(20),
validthru DATE,
userkey VARCHAR(6),
userpin VARCHAR(4),
quota SMALLINT
);
CREATE TABLE bancos (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
bank VARCHAR(40),
ccn VARCHAR(20),
ssn VARCHAR(15)
);
###Output
* mysql+pymysql://root:***@localhost
0 rows affected.
0 rows affected.
0 rows affected.
###Markdown
Loading the data
###Code
%%sql
LOAD DATA INFILE
'personas.csv'
INTO TABLE personas
FIELDS TERMINATED BY ',' (ssn,phone,city,maritalstatus,fullname,birthdate);
SELECT * FROM personas LIMIT 5;
%%sql
LOAD DATA INFILE
'bancos.csv'
INTO TABLE bancos
FIELDS TERMINATED BY ',' (bank,ccn,ssn);
SELECT * FROM bancos LIMIT 5;
%%sql
LOAD DATA INFILE
'franquicias.csv'
INTO TABLE franquicias
FIELDS TERMINATED BY ',' (ccntype,ccn,validthru,userkey,userpin,quota);
SELECT * FROM franquicias LIMIT 5;
## Manually delete the CSV files
# sudo rm /usr/local/mysql/data/sqldemo/personas.csv
# sudo rm /usr/local/mysql/data/sqldemo/bancos.csv
# sudo rm /usr/local/mysql/data/sqldemo/franquicias.csv
###Output
_____no_output_____
###Markdown
Exporting data
###Code
%%sql
SELECT *
INTO OUTFILE 'franquicias-1.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM franquicias ;
###Output
* mysql+pymysql://root:***@localhost
38 rows affected.
|
JOINS.ipynb | ###Markdown
Joins in SQL This notebook is part of my studies for the IBM Data Science Professional Certification. What I learned: - Inner Join - Outer Join - Left Join - Right Join - Cross Join Table of Contents 1 Joins 1.1 Syntax 1.2 Database Used in this Lab 1.3 Objectives 1.4 Problems 1.4.1 Problem 1 1.4.2 Problem 2 1.4.3 Problem 3 1.4.4 Problem 4 1.4.5 Problem 5 1.4.6 Problem 6 1.4.7 Problem 7 Joins Syntax
###Code
# How does a CROSS JOIN (also known as Cartesian Join) statement syntax look?
"""
SELECT column_name(s)
FROM table1
CROSS JOIN table2;
"""
# How does an INNER JOIN statement syntax look?
"""
SELECT column_name(s)
FROM table1
INNER JOIN table2
ON table1.column_name = table2.column_name
WHERE condition;
"""
# How does a LEFT OUTER JOIN statement syntax look?
"""
SELECT column_name(s)
FROM table1
LEFT OUTER JOIN table2
ON table1.column_name = table2.column_name
WHERE condition;
"""
# How does a RIGHT OUTER JOIN statement syntax look?
"""
SELECT column_name(s)
FROM table1
RIGHT OUTER JOIN table2
ON table1.column_name = table2.column_name
WHERE condition;
"""
# How does a FULL OUTER JOIN statement syntax look?
"""
SELECT column_name(s)
FROM table1
FULL OUTER JOIN table2
ON table1.column_name = table2.column_name
WHERE condition;
"""
###Output
_____no_output_____
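###Markdown
As a short worked illustration (not part of the original notebook; the table and column names are hypothetical), an INNER JOIN that pairs each employee with a department name could look like the query below.
###Code
# Hypothetical tables: EMPLOYEES(EMP_ID, F_NAME, DEP_ID) and
# DEPARTMENTS(DEPT_ID_DEP, DEP_NAME). Only employees whose DEP_ID matches a
# DEPT_ID_DEP appear in the INNER JOIN result.
"""
SELECT E.F_NAME, D.DEP_NAME
FROM EMPLOYEES E
INNER JOIN DEPARTMENTS D
    ON E.DEP_ID = D.DEPT_ID_DEP;
"""
###Output
_____no_output_____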
###Markdown
How does a SELF JOIN statement syntax look? SELECT column_name(s) FROM table1 T1, table1 T2 WHERE condition; Database Used in this Notebook The database used in this notebook is an internal database. You will be working on a sample HR database. This HR database schema consists of 5 tables called EMPLOYEES, JOB_HISTORY, JOBS, DEPARTMENTS and LOCATIONS. Each table has a few rows of sample data. The following diagram shows the tables for the HR database:
###Code
<img src="images\HR_Database.png" >
###Output
_____no_output_____ |
nbs/03_coco.ipynb | ###Markdown
COCO utilities> Make coco annotations from shapefiles and transform predictions to shapefiles
###Code
#hide
from nbdev.showdoc import *
#export
from drone_detector.imports import *
from drone_detector.utils import *
from drone_detector.coordinates import *
#export
from drone_detector.coordinates import *
from drone_detector.utils import *
import datetime
from skimage import measure
from PIL import Image
###Output
_____no_output_____
###Markdown
Binary masks to polygons
###Code
# export
# From https://github.com/waspinator/pycococreator/blob/master/pycococreatortools/pycococreatortools.py
def resize_binary_mask(array, new_size):
image = Image.fromarray(array.astype(np.uint8)*255)
image = image.resize(new_size)
return np.asarray(image).astype(np.bool_)
def close_contour(contour):
if not np.array_equal(contour[0], contour[-1]):
contour = np.vstack((contour, contour[0]))
return contour
def binary_mask_to_polygon(binary_mask, tolerance=0):
"""Converts a binary mask to COCO polygon representation
Args:
binary_mask: a 2D binary numpy array where '1's represent the object
tolerance: Maximum distance from original points of polygon to approximated
polygonal chain. If tolerance is 0, the original coordinate array is returned.
"""
polygons = []
# pad mask to close contours of shapes which start and end at an edge
padded_binary_mask = np.pad(binary_mask, pad_width=1, mode='constant', constant_values=0)
contours = measure.find_contours(padded_binary_mask, 0.5)
contours = np.subtract(contours, 1)
for contour in contours:
contour = close_contour(contour)
contour = measure.approximate_polygon(contour, tolerance)
if len(contour) < 3:
continue
contour = np.flip(contour, axis=1)
segmentation = contour.ravel().tolist()
# after padding and subtracting 1 we may get -0.5 points in our segmentation
segmentation = [0 if i < 0 else i for i in segmentation]
polygons.append(segmentation)
return polygons
###Output
_____no_output_____
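###Markdown
A minimal, hedged usage sketch (not part of the original notebook) of `binary_mask_to_polygon` on a small synthetic mask:
###Code
# Convert a 4x4 square of foreground pixels to COCO polygon coordinates
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1
polys = binary_mask_to_polygon(mask)
print(polys)   # a list with one flat [x0, y0, x1, y1, ...] ring
###Output
_____no_output_____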
###Markdown
COCOProcessor Utility to transform geospatial data to different COCO formats. Notes: * It is possible to pass `min_bbox_area` to the `shp_to_coco` function to exclude polygons that are too small; the default value is 16 pixels. * If a detection is a multipart polygon, only the polygon with the largest area is converted to a shapefile.
###Code
# export
from pycocotools.mask import frPyObjects
from shapely.geometry import MultiPolygon
class COCOProcessor():
"Handles Transformations from shapefiles to COCO-format and backwards"
def __init__(self, data_path:str, outpath:str, coco_info:dict, coco_licenses:list,
coco_categories:list):
store_attr()
self.raster_path = f'{self.data_path}/raster_tiles'
self.vector_path = f'{self.data_path}/vector_tiles'
self.prediction_path = f'{self.data_path}/predicted_vectors'
self.coco_dict = {
'info': coco_info,
'licenses': coco_licenses,
'images': [],
'annotations': [],
'categories': coco_categories,
'segment_info': []
}
self.categories = {c['name']:c['id'] for c in self.coco_dict['categories']}
def shp_to_coco(self, label_col:str='label', outfile:str='coco.json', min_bbox_area:int=16):
"Process shapefiles from self.vector_path to coco-format and save to self.outpath/outfile"
vector_tiles = [f for f in os.listdir(self.vector_path) if f.endswith(('.shp', '.geojson'))]
# If no annotations are in found in raster tile then there is no shapefile for that
raster_tiles = [f'{fname.split(".")[0]}.tif' for fname in vector_tiles]
ann_id = 1
for i, r in tqdm(enumerate(raster_tiles)):
tile_anns = []
gdf = gpd.read_file(f'{self.vector_path}/{vector_tiles[i]}')
tfmd_gdf = gdf_to_px(gdf, f'{self.raster_path}/{raster_tiles[i]}', precision=3)
for row in tfmd_gdf.itertuples():
category_id = self.categories[getattr(row, label_col)]
if box(*row.geometry.bounds).area < min_bbox_area: continue # if bounding box is smaller than 4² pixels then exclude it
tile_anns.append(_process_shp_to_coco(i, category_id, ann_id, row.geometry))
ann_id += 1
if len(tile_anns) > 0:
with rio.open(f'{self.raster_path}/{r}') as im:
h, w = im.shape
self.coco_dict['images'].append({'file_name': raster_tiles[i],'id': i, 'height':h, 'width':w})
self.coco_dict['annotations'].extend(tile_anns)
with open(f'{self.outpath}/{outfile}', 'w') as f: json.dump(self.coco_dict, f)
return
def coco_to_shp(self, coco_data:dict=None, outdir:str='predicted_vectors', downsample_factor:int=1):
"""Generates shapefiles from a dictionary with coco annotations.
TODO handle multipolygons better"""
if not os.path.exists(f'{self.outpath}/{outdir}'): os.makedirs(f'{self.outpath}/{outdir}')
annotations = coco_data['annotations']
images = coco_data['images']
categories = coco_data['categories']
for i in tqdm(images):
anns_in_image = [a for a in annotations if a['image_id'] == i['id']]
if len(anns_in_image) == 0: continue
cats = []
polys = []
scores = []
for a in anns_in_image:
# No segmentations, only bounding boxes
if a['segmentation'] is None:
cats.append(a['category_id'])
# Bbox has format xmin, ymin, xdelta, ydelta
polys.append(box(a['bbox'][0] / downsample_factor,
a['bbox'][1] / downsample_factor,
(a['bbox'][2] + a['bbox'][0]) / downsample_factor,
(a['bbox'][3]+a['bbox'][1]) / downsample_factor))
if 'score' in a.keys():
scores.append(a['score'])
# Single polygon
elif len(a['segmentation']) == 1:
cats.append(a['category_id'])
xy_coords = [(a['segmentation'][0][i] / downsample_factor,
a['segmentation'][0][i+1] / downsample_factor)
for i in range(0,len(a['segmentation'][0]),2)]
xy_coords.append(xy_coords[-1])
polys.append(Polygon(xy_coords))
if 'score' in a.keys():
scores.append(a['score'])
# Multipolygon
else:
temp_poly = None
max_area = 0
cats.append(a['category_id'])
for p in rangeof(a['segmentation']):
xy_coords = [(a['segmentation'][p][i] / downsample_factor,
a['segmentation'][p][i+1] / downsample_factor)
for i in range(0,len(a['segmentation'][p]),2)]
xy_coords.append(xy_coords[-1])
if Polygon(xy_coords).area > max_area:
temp_poly = Polygon(xy_coords)
max_area = temp_poly.area
polys.append(temp_poly)
if 'score' in a.keys():
scores.append(a['score'])
gdf = gpd.GeoDataFrame({'label':cats, 'geometry':polys})
if len(scores) != 0: gdf['score'] = scores
tfmd_gdf = georegister_px_df(gdf, f'{self.raster_path}/{i["file_name"]}')
tfmd_gdf.to_file(f'{self.outpath}/{outdir}/{i["file_name"][:-4]}.geojson', driver='GeoJSON')
return
def results_to_coco_res(self, label_col:str='label_id', outfile:str='coco_res.json'):
result_tiles = [f for f in os.listdir(self.prediction_path) if f.endswith(('.shp', '.geojson'))]
# If no annotations are in found in raster tile then there is no shapefile for that
raster_tiles = [f'{fname.split(".")[0]}.tif' for fname in result_tiles]
results = []
for i in tqdm(rangeof(raster_tiles)):
for im_id, im in enumerate(self.coco_dict['images']):
if im['file_name'] == raster_tiles[i]:
break
image_id = self.coco_dict['images'][im_id]['id']
h = self.coco_dict['images'][im_id]['height']
w = self.coco_dict['images'][im_id]['width']
gdf = gpd.read_file(f'{self.prediction_path}/{result_tiles[i]}')
tfmd_gdf = gdf_to_px(gdf, f'{self.raster_path}/{raster_tiles[i]}', precision=3)
for row in tfmd_gdf.itertuples():
res = {'image_id': image_id,
'category_id': getattr(row, label_col),
'segmentation': None,
'score': np.round(getattr(row, 'score'), 5)}
ann = _process_shp_to_coco(image_id, getattr(row, label_col), 0, row.geometry)
res['segmentation'] = frPyObjects(ann['segmentation'], h, w)[0]
res['segmentation']['counts'] = res['segmentation']['counts'].decode('ascii')
results.append(res)
with open(f'{self.outpath}/{outfile}', 'w') as f:
json.dump(results, f)
def icevision_mask_preds_to_coco_anns(preds:list) -> dict:
"""Process list of IceVision `samples` and `preds` to COCO-annotation polygon format.
Returns a dict with Coco-style `images` and `annotations`
TODO replace these with functions from icevision somehow"""
outdict = {}
outdict['annotations'] = []
outdict['images'] = [{'file_name': str(f'{p.ground_truth.filepath.stem}{p.ground_truth.filepath.suffix}'), 'id': p.record_id} for p in preds]
anns = []
for i, p in tqdm(enumerate(preds)):
for j in rangeof(p.pred.detection.label_ids):
anns = []
ann_dict = {
'segmentation': binary_mask_to_polygon(p.pred.detection.mask_array.to_mask(p.height,p.width).data[j]),
'area': None,
'iscrowd': 0,
'category_id': p.pred.detection.label_ids[j].item(),
'id': i,
'image_id': p.record_id,
'bbox': [p.pred.detection.bboxes[j].xmin.item(),
p.pred.detection.bboxes[j].ymin.item(),
p.pred.detection.bboxes[j].xmax.item() - p.pred.detection.bboxes[j].xmin.item(),
p.pred.detection.bboxes[j].ymax.item() - p.pred.detection.bboxes[j].ymin.item()],
'score': p.pred.detection.scores[j]
}
if len(ann_dict['segmentation']) == 0:
# Quickhack, find reason for empty annotation masks later
continue
anns.append(ann_dict)
outdict['annotations'].extend(anns)
return outdict
def icevision_bbox_preds_to_coco_anns(preds:list) -> dict:
"""Process list of IceVision `samples` and `preds` to COCO-annotation polygon format.
Returns a dict with Coco-style `images` and `annotations`"""
outdict = {}
outdict['annotations'] = []
outdict['images'] = [{'file_name': str(f'{p.ground_truth.filepath.stem}{p.ground_truth.filepath.suffix}'), 'id': p.record_id} for p in preds]
anns = []
for i, p in tqdm(enumerate(preds)):
for j in rangeof(p.pred.detection.bboxes):
anns = []
ann_dict = {
'segmentation': None,
'area': None,
'iscrowd': 0,
'category_id': p.pred.detection.label_ids[j].item(),
'id': i,
'image_id': p.record_id,
'bbox': [p.pred.detection.bboxes[j].xmin.item(),
p.pred.detection.bboxes[j].ymin.item(),
p.pred.detection.bboxes[j].xmax.item() - p.pred.detection.bboxes[j].xmin.item(),
p.pred.detection.bboxes[j].ymax.item() - p.pred.detection.bboxes[j].ymin.item()],
'score': p.pred.detection.scores[j]
}
anns.append(ann_dict)
outdict['annotations'].extend(anns)
return outdict
def detectron2_bbox_preds_to_coco_anns(images:list, preds:list):
"""Process detectron2 prediction to COCO-annotation polygon format.
Returns a dict with COCO-style `images` and `annotations`
"""
outdict = {}
outdict['annotations'] = []
outdict['images'] = images
for i in rangeof(preds):
p = preds[i]['instances']
for j in rangeof(p.pred_classes):
anns = []
ann_dict = {
'segmentation': None,
'area': None,
'iscrowd': 0,
'category_id': p.pred_classes[j].item(),
'id': i+1,
'image_id': images[i]['id'],
'bbox': [p.pred_boxes[j].tensor[0,0].item(),
p.pred_boxes[j].tensor[0,1].item(),
p.pred_boxes[j].tensor[0,2].item() - p.pred_boxes[j].tensor[0,0].item(),
p.pred_boxes[j].tensor[0,3].item() - p.pred_boxes[j].tensor[0,1].item()],
'score': p.scores[j].item()
}
# segmentation is None for bbox-only predictions, so no empty-mask check is
# needed here (calling len(None) would raise a TypeError)
anns.append(ann_dict)
outdict['annotations'].extend(anns)
return outdict
def detectron2_mask_preds_to_coco_anns(images:list, preds:list):
"""Process detectron2 prediction to COCO-annotation polygon format.
Returns a dict with COCO-style `images` and `annotations`
"""
outdict = {}
outdict['annotations'] = []
outdict['images'] = images
for i in rangeof(preds):
p = preds[i]['instances']
for j in rangeof(p.pred_classes):
anns = []
ann_dict = {
'segmentation': binary_mask_to_polygon(p.pred_masks[j].cpu().numpy()),
'area': None,
'iscrowd': 0,
'category_id': p.pred_classes[j].item(),
'id': i+1,
'image_id': images[i]['id'],
'bbox': [p.pred_boxes[j].tensor[0,0].item(),
p.pred_boxes[j].tensor[0,1].item(),
p.pred_boxes[j].tensor[0,2].item() - p.pred_boxes[j].tensor[0,0].item(),
p.pred_boxes[j].tensor[0,3].item() - p.pred_boxes[j].tensor[0,1].item()],
'score': p.scores[j].item()
}
if len(ann_dict['segmentation']) == 0:
# Quickhack, find reason for empty annotation masks later
continue
anns.append(ann_dict)
outdict['annotations'].extend(anns)
return outdict
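# Hypothetical usage sketch (the names below are invented, not from this notebook): `predictor`
# would be a detectron2 DefaultPredictor and `coco_images` the COCO-style image dicts this
# function expects; kept commented out because it needs a trained model and images on disk.
# outputs = [predictor(cv2.imread(im['file_name'])) for im in coco_images]
# coco_style = detectron2_mask_preds_to_coco_anns(coco_images, outputs)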
def _process_shp_to_coco(image_id, category_id, ann_id, poly:Polygon):
"TODO handle multipolygons"
ann_dict = {
'segmentation': [],
'area': None,
'bbox': [],
'category_id': category_id,
'id' : ann_id,
'image_id': image_id,
'iscrowd': 0,
}
if poly.type == 'Polygon':
ann_dict['segmentation'] = [list(sum(poly.exterior.coords[:-1], ()))]
ann_dict['bbox'] = [(poly.bounds[0]),
(poly.bounds[1]),
(poly.bounds[2]-poly.bounds[0]),
(poly.bounds[3]-poly.bounds[1])]
ann_dict['area'] = poly.area
elif poly.type == 'MultiPolygon':
temp_poly = None
max_area = 0
# Take only the largest polygon
for p in poly.geoms:
area = p.area
if area > max_area:
max_area = area
temp_poly = p
ann_dict['segmentation'] = [list(sum(temp_poly.exterior.coords[:-1], ()))]
ann_dict['bbox'] = [(temp_poly.bounds[0]),
(temp_poly.bounds[1]),
(temp_poly.bounds[2]-temp_poly.bounds[0]),
(temp_poly.bounds[3]-temp_poly.bounds[1])]
ann_dict['area'] = temp_poly.area
return ann_dict
###Output
_____no_output_____
###Markdown
COCO utilities> Make coco annotations from shapefiles and transform predictions to shapefiles
###Code
#hide
from nbdev.showdoc import *
#export
from drone_detector.imports import *
from drone_detector.utils import *
from drone_detector.coordinates import *
#export
from drone_detector.coordinates import *
from drone_detector.utils import *
import datetime
from skimage import measure
from PIL import Image
###Output
_____no_output_____
###Markdown
Binary masks to polygons
###Code
# export
# From https://github.com/waspinator/pycococreator/blob/master/pycococreatortools/pycococreatortools.py
def resize_binary_mask(array, new_size):
image = Image.fromarray(array.astype(np.uint8)*255)
image = image.resize(new_size)
return np.asarray(image).astype(np.bool_)
def close_contour(contour):
if not np.array_equal(contour[0], contour[-1]):
contour = np.vstack((contour, contour[0]))
return contour
def binary_mask_to_polygon(binary_mask, tolerance=0):
"""Converts a binary mask to COCO polygon representation
Args:
binary_mask: a 2D binary numpy array where '1's represent the object
tolerance: Maximum distance from original points of polygon to approximated
polygonal chain. If tolerance is 0, the original coordinate array is returned.
"""
polygons = []
# pad mask to close contours of shapes which start and end at an edge
padded_binary_mask = np.pad(binary_mask, pad_width=1, mode='constant', constant_values=0)
contours = measure.find_contours(padded_binary_mask, 0.5)
contours = np.subtract(contours, 1)
for contour in contours:
contour = close_contour(contour)
contour = measure.approximate_polygon(contour, tolerance)
if len(contour) < 3:
continue
contour = np.flip(contour, axis=1)
segmentation = contour.ravel().tolist()
# after padding and subtracting 1 we may get -0.5 points in our segmentation
segmentation = [0 if i < 0 else i for i in segmentation]
polygons.append(segmentation)
return polygons
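# Illustrative check of `binary_mask_to_polygon` (not part of the original notebook): a small
# square of ones should come back as a single flat [x0, y0, x1, y1, ...] coordinate list.
example_mask = np.zeros((5, 5), dtype=np.uint8)
example_mask[1:4, 1:4] = 1
example_polys = binary_mask_to_polygon(example_mask)
# e.g. len(example_polys) == 1, and every entry has an even number of values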
###Output
_____no_output_____
###Markdown
COCOProcessor
###Code
# export
from pycocotools.mask import frPyObjects
from shapely.geometry import MultiPolygon
class COCOProcessor():
"Handles Transformations from shapefiles to COCO-format and backwards"
def __init__(self, data_path:str, outpath:str, coco_info:dict, coco_licenses:list,
coco_categories:list):
store_attr()
self.raster_path = f'{self.data_path}/raster_tiles'
self.vector_path = f'{self.data_path}/vector_tiles'
self.prediction_path = f'{self.data_path}/predicted_vectors'
self.coco_dict = {
'info': coco_info,
'licenses': coco_licenses,
'images': [],
'annotations': [],
'categories': coco_categories,
'segment_info': []
}
self.categories = {c['name']:c['id'] for c in self.coco_dict['categories']}
def shp_to_coco(self, label_col:str='label', outfile:str='coco.json'):
"Process shapefiles from self.vector_path to coco-format and save to self.outpath/outfile"
vector_tiles = [f for f in os.listdir(self.vector_path) if f.endswith(('.shp', '.geojson'))]
        # If no annotations are found in a raster tile, then there is no shapefile for it
raster_tiles = [f'{fname.split(".")[0]}.tif' for fname in vector_tiles]
for i, r in enumerate(raster_tiles):
with rio.open(f'{self.raster_path}/{r}') as im:
h, w = im.shape
self.coco_dict['images'].append({'file_name': raster_tiles[i],'id': i, 'height':h, 'width':w})
ann_id = 1
for i in tqdm(rangeof(raster_tiles)):
gdf = gpd.read_file(f'{self.vector_path}/{vector_tiles[i]}')
tfmd_gdf = gdf_to_px(gdf, f'{self.raster_path}/{raster_tiles[i]}', precision=None)
for row in tfmd_gdf.itertuples():
category_id = self.categories[getattr(row, label_col)]
self.coco_dict['annotations'].append(_process_shp_to_coco(i, category_id, ann_id, row.geometry))
ann_id += 1
with open(f'{self.outpath}/{outfile}', 'w') as f: json.dump(self.coco_dict, f)
return
def coco_to_shp(self, coco_data:dict=None, outdir:str='predicted_vectors'):
"""Generates shapefiles from a dictionary with coco annotations.
TODO handle multipolygons better"""
if not os.path.exists(f'{self.outpath}/{outdir}'): os.makedirs(f'{self.outpath}/{outdir}')
#if coco_path is None: coco_path = f'{self.outpath}/coco.json'
#with open(coco_path) as f:
# coco_data = json.load(f)
annotations = coco_data['annotations']
images = coco_data['images']
categories = coco_data['categories']
for i in tqdm(images):
anns_in_image = [a for a in annotations if a['image_id'] == i['id']]
if len(anns_in_image) == 0: continue
cats = []
polys = []
scores = []
for a in anns_in_image:
# No segmentations, only bounding boxes
if a['segmentation'] is None:
cats.append(a['category_id'])
# Bbox has format xmin, ymin, xdelta, ydelta
polys.append(box(a['bbox'][0], a['bbox'][1], a['bbox'][2] + a['bbox'][0], a['bbox'][3]+a['bbox'][1]))
if 'score' in a.keys():
scores.append(a['score'])
# Single polygon
elif len(a['segmentation']) == 1:
cats.append(a['category_id'])
xy_coords = [(a['segmentation'][0][i], a['segmentation'][0][i+1])
for i in range(0,len(a['segmentation'][0]),2)]
xy_coords.append(xy_coords[-1])
polys.append(Polygon(xy_coords))
if 'score' in a.keys():
scores.append(a['score'])
# Multipolygon
else:
temp_poly = None
max_area = 0
cats.append(a['category_id'])
for p in rangeof(a['segmentation']):
xy_coords = [(a['segmentation'][p][i], a['segmentation'][p][i+1])
for i in range(0,len(a['segmentation'][p]),2)]
xy_coords.append(xy_coords[-1])
if Polygon(xy_coords).area > max_area:
temp_poly = Polygon(xy_coords)
max_area = temp_poly.area
polys.append(temp_poly)
if 'score' in a.keys():
scores.append(a['score'])
gdf = gpd.GeoDataFrame({'label':cats, 'geometry':polys})
if len(scores) != 0: gdf['score'] = scores
tfmd_gdf = georegister_px_df(gdf, f'{self.raster_path}/{i["file_name"]}')
tfmd_gdf.to_file(f'{self.outpath}/{outdir}/{i["file_name"][:-4]}.geojson', driver='GeoJSON')
return
def results_to_coco_res(self, label_col:str='label_id', outfile:str='coco_res.json'):
result_tiles = [f for f in os.listdir(self.prediction_path) if f.endswith(('.shp', '.geojson'))]
        # If no annotations are found in a raster tile, then there is no shapefile for it
raster_tiles = [f'{fname.split(".")[0]}.tif' for fname in result_tiles]
results = []
for i in tqdm(rangeof(raster_tiles)):
for im_id, im in enumerate(self.coco_dict['images']):
if im['file_name'] == raster_tiles[i]:
break
image_id = self.coco_dict['images'][im_id]['id']
h = self.coco_dict['images'][im_id]['height']
w = self.coco_dict['images'][im_id]['width']
gdf = gpd.read_file(f'{self.prediction_path}/{result_tiles[i]}')
tfmd_gdf = gdf_to_px(gdf, f'{self.raster_path}/{raster_tiles[i]}', precision=None)
for row in tfmd_gdf.itertuples():
res = {'image_id': image_id,
'category_id': getattr(row, label_col),
'segmentation': None,
'score': np.round(getattr(row, 'score'), 5)}
ann = _process_shp_to_coco(image_id, getattr(row, label_col), 0, row.geometry)
res['segmentation'] = frPyObjects(ann['segmentation'], h, w)[0]
res['segmentation']['counts'] = res['segmentation']['counts'].decode('ascii')
results.append(res)
with open(f'{self.outpath}/{outfile}', 'w') as f:
json.dump(results, f)
def mask_preds_to_coco_anns(preds:list) -> dict:
"""Process list of IceVision `samples` and `preds` to COCO-annotation polygon format.
Returns a dict with Coco-style `images` and `annotations`
TODO replace these with functions from icevision somehow"""
outdict = {}
outdict['annotations'] = []
outdict['images'] = [{'file_name': str(f'{p.ground_truth.filepath.stem}{p.ground_truth.filepath.suffix}'), 'id': p.record_id} for p in preds]
anns = []
for i, p in tqdm(enumerate(preds)):
for j in rangeof(p.pred.detection.label_ids):
anns = []
ann_dict = {
'segmentation': binary_mask_to_polygon(p.pred.detection.mask_array.to_mask(p.height,p.width).data[j]),
'area': None,
'iscrowd': 0,
'category_id': p.pred.detection.label_ids[j].item(),
'id': i,
'image_id': p.record_id,
'bbox': [p.pred.detection.bboxes[j].xmin.item(),
p.pred.detection.bboxes[j].ymin.item(),
p.pred.detection.bboxes[j].xmax.item() - p.pred.detection.bboxes[j].xmin.item(),
p.pred.detection.bboxes[j].ymax.item() - p.pred.detection.bboxes[j].ymin.item()],
'score': p.pred.detection.scores[j]
}
anns.append(ann_dict)
outdict['annotations'].extend(anns)
return outdict
def bbox_preds_to_coco_anns(preds:list) -> dict:
"""Process list of IceVision `samples` and `preds` to COCO-annotation polygon format.
Returns a dict with Coco-style `images` and `annotations`"""
outdict = {}
outdict['annotations'] = []
outdict['images'] = [{'file_name': str(f'{p.ground_truth.filepath.stem}{p.ground_truth.filepath.suffix}'), 'id': p.record_id} for p in preds]
anns = []
for i, p in tqdm(enumerate(preds)):
for j in rangeof(p.pred.detection.bboxes):
anns = []
ann_dict = {
'segmentation': None,
'area': None,
'iscrowd': 0,
'category_id': p.pred.detection.label_ids[j].item(),
'id': i,
'image_id': p.record_id,
'bbox': [p.pred.detection.bboxes[j].xmin.item(),
p.pred.detection.bboxes[j].ymin.item(),
p.pred.detection.bboxes[j].xmax.item() - p.pred.detection.bboxes[j].xmin.item(),
p.pred.detection.bboxes[j].ymax.item() - p.pred.detection.bboxes[j].ymin.item()],
'score': p.pred.detection.scores[j]
}
anns.append(ann_dict)
outdict['annotations'].extend(anns)
return outdict
def _process_shp_to_coco(image_id, category_id, ann_id, poly:Polygon):
"TODO handle multipolygons"
ann_dict = {
'segmentation': [],
'area': None,
'bbox': [],
'category_id': category_id,
'id' : ann_id,
'image_id': image_id,
'iscrowd': 0,
}
ann_dict['bbox'] = [(poly.bounds[0]),
(poly.bounds[1]),
(poly.bounds[2]-poly.bounds[0]),
(poly.bounds[3]-poly.bounds[1])]
ann_dict['area'] = poly.area
if poly.type == 'Polygon':
ann_dict['segmentation'] = [list(sum(poly.exterior.coords[:-1], ()))]
elif poly.type == 'MultiPolygon':
temp_poly = None
max_area = 0
# Take only the largest polygon
        for p in poly.geoms:
area = p.area
if area > max_area:
max_area = area
temp_poly = p
ann_dict['segmentation'] = [list(sum(temp_poly.exterior.coords[:-1], ()))]
return ann_dict
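# Hypothetical usage sketch of COCOProcessor (the paths and category metadata below are invented
# for illustration), left commented out because it needs tiled rasters and vectors on disk:
# processor = COCOProcessor(data_path='example_data', outpath='example_data',
#                           coco_info={}, coco_licenses=[],
#                           coco_categories=[{'supercategory': 'deadwood', 'id': 1, 'name': 'Standing'}])
# processor.shp_to_coco(label_col='label', outfile='coco.json')   # vector + raster tiles -> COCO json
# with open('example_data/coco.json') as f: processor.coco_to_shp(coco_data=json.load(f))  # back to geojson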
###Output
_____no_output_____ |
Platforms/Kaggle/Courses/Computer_Vision/2.Convolution_and_ReLU/exercise-convolution-and-relu.ipynb | ###Markdown
**This notebook is an exercise in the [Computer Vision](https://www.kaggle.com/learn/computer-vision) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/convolution-and-relu).**--- Introduction In this exercise, you'll work on building some intuition around feature extraction. First, we'll walk through the example we did in the tutorial again, but this time, with a kernel you choose yourself. We've mostly been working with images in this course, but what's behind all of the operations we're learning about is mathematics. So, we'll also take a look at how these feature maps can be represented instead as arrays of numbers and what effect convolution with a kernel will have on them.Run the cell below to get started!
###Code
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.computer_vision.ex2 import *
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
tf.config.run_functions_eagerly(True)
###Output
_____no_output_____
###Markdown
Apply Transformations The next few exercises walk through feature extraction just like the example in the tutorial. Run the following cell to load an image we'll use for the next few exercises.
###Code
image_path = '../input/computer-vision-resources/car_illus.jpg'
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image, channels=1)
image = tf.image.resize(image, size=[400, 400])
img = tf.squeeze(image).numpy()
plt.figure(figsize=(6, 6))
plt.imshow(img, cmap='gray')
plt.axis('off')
plt.show();
###Output
_____no_output_____
###Markdown
You can run this cell to see some standard kernels used in image processing.
###Code
import learntools.computer_vision.visiontools as visiontools
from learntools.computer_vision.visiontools import edge, bottom_sobel, emboss, sharpen
kernels = [edge, bottom_sobel, emboss, sharpen]
names = ["Edge Detect", "Bottom Sobel", "Emboss", "Sharpen"]
plt.figure(figsize=(12, 12))
for i, (kernel, name) in enumerate(zip(kernels, names)):
plt.subplot(1, 4, i+1)
visiontools.show_kernel(kernel)
plt.title(name)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
1) Define Kernel Use the next code cell to define a kernel. You have your choice of what kind of kernel to apply. One thing to keep in mind is that the *sum* of the numbers in the kernel determines how bright the final image is. Generally, you should try to keep the sum of the numbers between 0 and 1 (though that's not required for a correct answer).In general, a kernel can have any number of rows and columns. For this exercise, let's use a $3 \times 3$ kernel, which often gives the best results. Define a kernel with `tf.constant`.
###Code
# YOUR CODE HERE: Define a kernel with 3 rows and 3 columns.
kernel = tf.constant([
[-1, -1, -1],
[-1, 8, -1],
[-1, -1, -1],
])
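# The markdown above suggests keeping the kernel's sum roughly between 0 and 1; this
# edge-detect kernel sums to 0, which you can verify with:
# print(tf.reduce_sum(kernel).numpy())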
# Uncomment to view kernel
# visiontools.show_kernel(kernel)
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
q_1.hint()
q_1.solution()
###Output
_____no_output_____
###Markdown
Now we'll do the first step of feature extraction, the filtering step. First run this cell to do some reformatting for TensorFlow.
###Code
# Reformat for batch compatibility.
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
image = tf.expand_dims(image, axis=0)
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
kernel = tf.cast(kernel, dtype=tf.float32)
###Output
_____no_output_____
###Markdown
2) Apply Convolution Now we'll apply the kernel to the image by a convolution. The *layer* in Keras that does this is `layers.Conv2D`. What is the *backend function* in TensorFlow that performs the same operation?
###Code
# YOUR CODE HERE: Give the TensorFlow convolution function (without arguments)
conv_fn = tf.nn.conv2d
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
q_2.hint()
q_2.solution()
###Output
_____no_output_____
###Markdown
Once you've got the correct answer, run this next cell to execute the convolution and see the result!
###Code
image_filter = conv_fn(
input=image,
filters=kernel,
strides=1, # or (1, 1)
padding='SAME',
)
plt.imshow(
# Reformat for plotting
tf.squeeze(image_filter)
)
plt.axis('off')
plt.show();
###Output
_____no_output_____
###Markdown
Can you see how the kernel you chose relates to the feature map it produced? 3) Apply ReLU Now detect the feature with the ReLU function. In Keras, you'll usually use this as the activation function in a `Conv2D` layer. What is the *backend function* in TensorFlow that does the same thing?
###Code
# YOUR CODE HERE: Give the TensorFlow ReLU function (without arguments)
relu_fn = tf.nn.relu
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
q_3.hint()
q_3.solution()
###Output
_____no_output_____
###Markdown
Once you've got the solution, run this cell to detect the feature with ReLU and see the result!The image you see below is the feature map produced by the kernel you chose. If you like, experiment with some of the other suggested kernels above, or, try to invent one that will extract a certain kind of feature.
###Code
image_detect = relu_fn(image_filter)
plt.imshow(
# Reformat for plotting
tf.squeeze(image_detect)
)
plt.axis('off')
plt.show();
###Output
_____no_output_____
###Markdown
In the tutorial, our discussion of kernels and feature maps was mainly visual. We saw the effect of `Conv2D` and `ReLU` by observing how they transformed some example images.But the operations in a convolutional network (like in all neural networks) are usually defined through mathematical functions, through a computation on numbers. In the next exercise, we'll take a moment to explore this point of view.Let's start by defining a simple array to act as an image, and another array to act as the kernel. Run the following cell to see these arrays.
###Code
# Sympy is a python library for symbolic mathematics. It has a nice
# pretty printer for matrices, which is all we'll use it for.
import sympy
sympy.init_printing()
from IPython.display import display
image = np.array([
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 1, 1, 1],
[0, 1, 0, 0, 0, 0],
])
kernel = np.array([
[1, -1],
[1, -1],
])
display(sympy.Matrix(image))
display(sympy.Matrix(kernel))
# Reformat for Tensorflow
image = tf.cast(image, dtype=tf.float32)
image = tf.reshape(image, [1, *image.shape, 1])
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
kernel = tf.cast(kernel, dtype=tf.float32)
###Output
/opt/conda/lib/python3.7/site-packages/IPython/lib/latextools.py:126: MatplotlibDeprecationWarning:
The to_png function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use mathtext.math_to_image instead.
mt.to_png(f, s, fontsize=12, dpi=dpi, color=color)
/opt/conda/lib/python3.7/site-packages/IPython/lib/latextools.py:126: MatplotlibDeprecationWarning:
The to_rgba function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use mathtext.math_to_image instead.
mt.to_png(f, s, fontsize=12, dpi=dpi, color=color)
/opt/conda/lib/python3.7/site-packages/IPython/lib/latextools.py:126: MatplotlibDeprecationWarning:
The to_mask function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use mathtext.math_to_image instead.
mt.to_png(f, s, fontsize=12, dpi=dpi, color=color)
/opt/conda/lib/python3.7/site-packages/IPython/lib/latextools.py:126: MatplotlibDeprecationWarning:
The MathtextBackendBitmap class was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use mathtext.math_to_image instead.
mt.to_png(f, s, fontsize=12, dpi=dpi, color=color)
###Markdown
4) Observe Convolution on a Numerical Matrix What do you see? The image is simply a long vertical line on the left and a short horizontal line on the lower right. What about the kernel? What effect do you think it will have on this image? After you've thought about it, run the next cell for the answer.
###Code
# View the solution (Run this code cell to receive credit!)
q_4.check()
###Output
_____no_output_____
###Markdown
Now let's try it out. Run the next cell to apply convolution and ReLU to the image and display the result.
###Code
image_filter = tf.nn.conv2d(
input=image,
filters=kernel,
strides=1,
padding='VALID',
)
image_detect = tf.nn.relu(image_filter)
# The first matrix is the image after convolution, and the second is
# the image after ReLU.
display(sympy.Matrix(tf.squeeze(image_filter).numpy()))
display(sympy.Matrix(tf.squeeze(image_detect).numpy()))
###Output
/opt/conda/lib/python3.7/site-packages/IPython/lib/latextools.py:126: MatplotlibDeprecationWarning:
The to_png function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use mathtext.math_to_image instead.
mt.to_png(f, s, fontsize=12, dpi=dpi, color=color)
/opt/conda/lib/python3.7/site-packages/IPython/lib/latextools.py:126: MatplotlibDeprecationWarning:
The to_rgba function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use mathtext.math_to_image instead.
mt.to_png(f, s, fontsize=12, dpi=dpi, color=color)
/opt/conda/lib/python3.7/site-packages/IPython/lib/latextools.py:126: MatplotlibDeprecationWarning:
The to_mask function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use mathtext.math_to_image instead.
mt.to_png(f, s, fontsize=12, dpi=dpi, color=color)
/opt/conda/lib/python3.7/site-packages/IPython/lib/latextools.py:126: MatplotlibDeprecationWarning:
The MathtextBackendBitmap class was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use mathtext.math_to_image instead.
mt.to_png(f, s, fontsize=12, dpi=dpi, color=color)
|
examples/example_primary_flux_models.ipynb | ###Markdown
Example: Comparing Primary Flux ModelsThis file demonstrates how to use MUTE to do a simple calculation of true vertical intensities with four different primary flux models and plot their ratios to the GSF baseline. Import Packages
###Code
import matplotlib.pyplot as plt
import numpy as np
import mute.constants as mtc
import mute.underground as mtu
###Output
_____no_output_____
###Markdown
Set the Constants
###Code
mtc.set_verbose(2)
mtc.set_output(True)
mtc.set_lab("Example")
mtc.set_overburden("flat")
mtc.set_vertical_depth(1)
mtc.set_medium("rock")
mtc.set_density(2.65)
mtc.set_n_muon(100000)
###Output
_____no_output_____
###Markdown
Calculate the True Vertical IntensitiesThe primary cosmic ray flux model can be set with the ``primary_model`` parameter. The desired slant depths can be specified in a list or an array with the ``depths`` parameter. Leaving it blank will calculate the intensities for the default slant depths given by ``mtc.slant_depths``, which provides depths between 1 km.w.e. and 12 km.w.e. in steps of 0.5 km.w.e.At each call of ``mtu.calc_u_intensities_tr()``, the output files for the surface flux, underground flux, and underground intensities will be overwritten. In order to avoid this, the lab can be set between calls using, for example, ``mtc.set_lab("Example_GSF")``, then ``mtc.set_lab("Example_HG")``. To stop output files from being written for a certain call, the ``output`` parameter can be set to ``False``. To stop all output files from being written, the output can be set globally with ``mtc.set_output(False)``. Because ``verbose`` has been set to ``2``, MUTE will print out information about what it is doing every step along the way.
###Code
intensities_GSF = mtu.calc_u_intensities_tr(primary_model = "GSF") # GlobalSplitFitBeta
intensities_HG = mtu.calc_u_intensities_tr(primary_model = "HG") # HillasGaisser2012
intensities_GH = mtu.calc_u_intensities_tr(primary_model = "GH") # GaisserHonda
intensities_ZS = mtu.calc_u_intensities_tr(primary_model = "ZS") # ZatsepinSokolskaya
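# The parameters described above can be combined as well; a hypothetical call (commented out
# so it does not overwrite the output files written by the calls above):
# mtc.set_lab("Example_GSF")                   # give this run its own output files
# mtu.calc_u_intensities_tr(primary_model="GSF",
#                           depths=[2, 4, 6],  # custom slant depths in km.w.e.
#                           output=False)      # skip writing output files for this call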
###Output
Calculating underground fluxes.
Loading surface fluxes for USStd using SIBYLL-2.3c and GSF.
Loaded surface fluxes.
Loading survival probabilities from mute/data/survival_probabilities/rock_2.65_100000_Survival_Probabilities.txt.
Loaded survival probabilities.
Finished calculating underground fluxes.
Underground fluxes written to mute/data/underground/Example_Underground_Fluxes.txt.
Calculating true vertical underground intensities.
Finished calculating true vertical underground intensities.
True vertical underground intensities written to mute/data/underground/Example_Underground_Intensities_TR.txt.
Calculating underground fluxes.
Loading surface fluxes for USStd using SIBYLL-2.3c and HG.
Loaded surface fluxes.
Loading survival probabilities from mute/data/survival_probabilities/rock_2.65_100000_Survival_Probabilities.txt.
Loaded survival probabilities.
Finished calculating underground fluxes.
Underground fluxes written to mute/data/underground/Example_Underground_Fluxes.txt.
Calculating true vertical underground intensities.
Finished calculating true vertical underground intensities.
True vertical underground intensities written to mute/data/underground/Example_Underground_Intensities_TR.txt.
Calculating underground fluxes.
Loading surface fluxes for USStd using SIBYLL-2.3c and GH.
Loaded surface fluxes.
Loading survival probabilities from mute/data/survival_probabilities/rock_2.65_100000_Survival_Probabilities.txt.
Loaded survival probabilities.
Finished calculating underground fluxes.
Underground fluxes written to mute/data/underground/Example_Underground_Fluxes.txt.
Calculating true vertical underground intensities.
Finished calculating true vertical underground intensities.
True vertical underground intensities written to mute/data/underground/Example_Underground_Intensities_TR.txt.
Calculating underground fluxes.
Loading surface fluxes for USStd using SIBYLL-2.3c and ZS.
Loaded surface fluxes.
Loading survival probabilities from mute/data/survival_probabilities/rock_2.65_100000_Survival_Probabilities.txt.
Loaded survival probabilities.
Finished calculating underground fluxes.
Underground fluxes written to mute/data/underground/Example_Underground_Fluxes.txt.
Calculating true vertical underground intensities.
Finished calculating true vertical underground intensities.
True vertical underground intensities written to mute/data/underground/Example_Underground_Intensities_TR.txt.
###Markdown
Plot the Results
###Code
fig = plt.figure(figsize = (10, 5))
ax = fig.add_subplot(111)
ax.plot(mtc.slant_depths, intensities_HG/intensities_GSF, color = "red", lw = 3, ls = "-", label = "HillasGaisser2012")
ax.plot(mtc.slant_depths, intensities_GH/intensities_GSF, color = "blue", lw = 3, ls = "--", label = "GaisserHonda")
ax.plot(mtc.slant_depths, intensities_ZS/intensities_GSF, color = "green", lw = 3, ls = "--", label = "ZatsepinSokolskaya")
ax.set_xlabel("Slant Depth, $X$ (km.w.e.)", fontsize = 23)
ax.set_ylabel(r"$I^u_{tr}/I^u_{(tr,\ GSF)}$", fontsize = 23)
ax.tick_params(axis = "both", which = "major", labelsize = 18)
plt.legend(frameon = False, fontsize = 16)
plt.show()
###Output
_____no_output_____ |
modules/06-apache-spark-sql/05-extracting-data-from-mysql.ipynb | ###Markdown
Extracting Data from MySQL First you need to download the JDBC driver. There are different JDBC drivers for each database (Oracle, SQL Server, etc.) 1. MySQL JDBC driver download link: http://dev.mysql.com/downloads/connector/j/2. Download the **.zip** file.3. Unzip and copy the **mysql-connector-java-8.0.16.jar** file to the **/opt/spark/jars** folder (Linux and MacOS) or to the **C:\Spark\jars** folder (Windows).
###Code
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
spSession = SparkSession.builder.master('local').appName('appSparkSql').getOrCreate()
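# Alternatively, the connector jar can be registered when building the session instead of
# copying it into the jars folder (the path below is an assumption matching step 3 above):
# spSession = SparkSession.builder.master('local').appName('appSparkSql') \
#     .config('spark.jars', '/opt/spark/jars/mysql-connector-java-8.0.16.jar').getOrCreate()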
sqlContext = SQLContext(spSession.sparkContext)  # reuse the session's SparkContext; no global 'sc' is defined here
dfMySql = spSession.read.format('jdbc').options(
url = 'jdbc:mysql://localhost/db_cars',
    serverTimezone = 'UTC',
driver = 'com.mysql.jdbc.Driver',
dbtable = 'cars',
user = 'root',
password = 'root@123'
).load()
dfMySql.show(10)
dfMySql.registerTempTable('tt_cars')
sqlContext.sql("SELECT * FROM tt_cars WHERE hp > 180").show()
###Output
+-------------+---------+------+-----+-----------+--------+---------+---+----+--------+-----------+-----+
| manufacturer|fuel_type|aspire|doors| category|traction|cylinders| hp| rpm|mpg_city|mpg_highway|price|
+-------------+---------+------+-----+-----------+--------+---------+---+----+--------+-----------+-----+
| nissan| gas| turbo| two| hatchback| rwd| six|200|5200| 17| 23|19699|
| bmw| gas| std| four| sedan| rwd| six|182|5400| 16| 22|30760|
| porsche| gas| std| two| hardtop| rwd| six|207|5900| 17| 25|32528|
| porsche| gas| std| two| hardtop| rwd| six|207|5900| 17| 25|34028|
| jaguar| gas| std| two| sedan| rwd| twelve|262|5000| 13| 17|36000|
| bmw| gas| std| four| sedan| rwd| six|182|5400| 15| 20|36880|
| porsche| gas| std| two|convertible| rwd| six|207|5900| 17| 25|37028|
|mercedes-benz| gas| std| four| sedan| rwd| eight|184|4500| 14| 16|40960|
| bmw| gas| std| two| sedan| rwd| six|182|5400| 16| 22|41315|
|mercedes-benz| gas| std| two| hardtop| rwd| eight|184|4500| 14| 16|45400|
+-------------+---------+------+-----+-----------+--------+---------+---+----+--------+-----------+-----+
|
archive/LossPlots.ipynb | ###Markdown
Houston Edges
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data = pd.read_csv('results/houston-edges[3-20][6-15].csv', sep=',',header=None).values[1:,:]
data.shape
# axs.flatten()  # stray line referencing an undefined 'axs'; commented out so the cell runs
from mpl_toolkits import mplot3d
fig = plt.figure(figsize=[15,15])
col = 4
vmin, vmax = 0.5, 0.9#max(data[:,col])
for i in range(11):
ax = fig.add_subplot(4, 3, i+1, projection='3d')
d = data[1331*i:1331*(i+1)-1,:]
a = ax.scatter(d[:,1],d[:,2],d[:,3],c=d[:,col],vmin=vmin, vmax=vmax,cmap='RdYlGn')
cbar = fig.colorbar(a, ax=ax,pad=0.1,extend='both')
cbar.set_label('F1 score',fontsize=12)
ax.set_xlabel('Hyperspectral edges',fontsize=12), ax.set_ylabel('LiDAR edges',fontsize=12), ax.set_zlabel('HR edges',fontsize=12)
opt = np.argmax(d[:,col])
ax.set_title('Geo edges: {}, Max F1:{:.3f}\nOptimal - HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(i,max(d[:,col]),d[opt,1],d[opt,2], d[opt,3]))
#cbar.set_label('Accuracy',fontsize=13)
#cbar.set_label('Recall',fontsize=13)
#cbar.set_label('Cross-entropy loss',fontsize=13)
topt = np.argmax(data[:,col])
fig.suptitle('F1 score optimisation - Max F1: {:.3f}\nOptimal edges - Geo: {:.0f}, HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(max(data[:,col]),data[topt,0],data[topt,1],data[topt,2], data[topt,3]),size=16,y=0.99)
# fig.suptitle('Accuracy')
# fig.suptitle('Recall')
# fig.suptitle('Cross-entropy loss')
fig.tight_layout(pad=5, w_pad=1, h_pad=5)
fig.savefig('results/houstonedgesf1.png')
from mpl_toolkits import mplot3d
fig = plt.figure(figsize=[15,15])
col = 7
vmin, vmax = 0.55,0.7#min(data[:,col]), max(data[:,col])
for i in range(11):
ax = fig.add_subplot(4, 3, i+1, projection='3d')
d = data[1331*i:1331*(i+1)-1,:]
a = ax.scatter(d[:,1],d[:,2],d[:,3],c=d[:,col],vmin=vmin, vmax=vmax, cmap='RdYlGn_r',alpha=0.7)
cbar = fig.colorbar(a, ax=ax,pad=0.1,extend='both')
cbar.set_label('Cross-entropy loss',fontsize=12)
ax.set_xlabel('Hyperspectral edges',fontsize=12), ax.set_ylabel('LiDAR edges',fontsize=12), ax.set_zlabel('HR edges',fontsize=12)
opt = np.argmin(d[:,col])
ax.set_title('Geo edges: {}, Min CE loss:{:.3f}\nOptimal - HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(i,min(d[:,col]),d[opt,1],d[opt,2], d[opt,3]))
#cbar.set_label('Accuracy',fontsize=13)
#cbar.set_label('Recall',fontsize=13)
#cbar.set_label('Cross-entropy loss',fontsize=13)
topt = np.argmin(data[:,col])
fig.suptitle('Cross-entropy optimisation - Min CE loss: {:.3f}\nOptimal edges - Geo: {:.0f}, HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(min(data[:,col]),data[topt,0],data[topt,1],data[topt,2], data[topt,3]),size=16,y=0.99)
# fig.suptitle('Accuracy')
# fig.suptitle('Recall')
# fig.suptitle('Cross-entropy loss')
fig.tight_layout(pad=5, w_pad=1, h_pad=5)
fig.savefig('results/houstonedgesCE.png')
from mpl_toolkits import mplot3d
fig = plt.figure(figsize=[15,15])
col = 5
vmin, vmax = 0.5, 0.9#max(data[:,col])
for i in range(11):
ax = fig.add_subplot(4, 3, i+1, projection='3d')
d = data[1331*i:1331*(i+1)-1,:]
a = ax.scatter(d[:,1],d[:,2],d[:,3],c=d[:,col],vmin=vmin, vmax=vmax,cmap='RdYlGn')
cbar = fig.colorbar(a, ax=ax,pad=0.1,extend='both')
cbar.set_label('Accuracy score',fontsize=12)
ax.set_xlabel('Hyperspectral edges',fontsize=12), ax.set_ylabel('LiDAR edges',fontsize=12), ax.set_zlabel('HR edges',fontsize=12)
opt = np.argmax(d[:,col])
ax.set_title('Geo edges: {}, Max Acc:{:.3f}\nOptimal - HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(i,max(d[:,col]),d[opt,1],d[opt,2], d[opt,3]))
#cbar.set_label('Accuracy',fontsize=13)
#cbar.set_label('Recall',fontsize=13)
#cbar.set_label('Cross-entropy loss',fontsize=13)
topt = np.argmax(data[:,col])
fig.suptitle('Accuracy optimisation - Max Acc: {:.3f}\nOptimal edges - Geo: {:.0f}, HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(max(data[:,col]),data[topt,0],data[topt,1],data[topt,2], data[topt,3]),size=16,y=0.99)
# fig.suptitle('Accuracy')
# fig.suptitle('Recall')
# fig.suptitle('Cross-entropy loss')
fig.tight_layout(pad=5, w_pad=1, h_pad=5)
fig.savefig('results/houstonedgesacc.png')
from mpl_toolkits import mplot3d
fig = plt.figure(figsize=[15,15])
col = 6
vmin, vmax = 0.5, 0.9#max(data[:,col])
for i in range(11):
ax = fig.add_subplot(4, 3, i+1, projection='3d')
d = data[1331*i:1331*(i+1)-1,:]
a = ax.scatter(d[:,1],d[:,2],d[:,3],c=d[:,col],vmin=vmin, vmax=vmax,cmap='RdYlGn')
cbar = fig.colorbar(a, ax=ax,pad=0.1,extend='both')
cbar.set_label('Recall score',fontsize=12)
ax.set_xlabel('Hyperspectral edges',fontsize=12), ax.set_ylabel('LiDAR edges',fontsize=12), ax.set_zlabel('HR edges',fontsize=12)
opt = np.argmax(d[:,col])
ax.set_title('Geo edges: {}, Max Rec:{:.3f}\nOptimal - HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(i,max(d[:,col]),d[opt,1],d[opt,2], d[opt,3]))
#cbar.set_label('Accuracy',fontsize=13)
#cbar.set_label('Recall',fontsize=13)
#cbar.set_label('Cross-entropy loss',fontsize=13)
topt = np.argmax(data[:,col])
fig.suptitle('Recall optimisation - Max Rec: {:.3f}\nOptimal edges - Geo: {:.0f}, HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(max(data[:,col]),data[topt,0],data[topt,1],data[topt,2], data[topt,3]),size=16,y=0.99)
# fig.suptitle('Accuracy')
# fig.suptitle('Recall')
# fig.suptitle('Cross-entropy loss')
fig.tight_layout(pad=5, w_pad=1, h_pad=5)
fig.savefig('results/houstonedgesrec.png')
from mpl_toolkits import mplot3d
fig = plt.figure(figsize=[11,8])
col = 4
vmin, vmax = 0.5, 0.9#max(data[:,col])
ploy=1
for i in [0,2,5,10]:
ax = fig.add_subplot(2, 2, ploy, projection='3d')
d = data[1331*i:1331*(i+1)-1,:]
a = ax.scatter(d[:,1],d[:,2],d[:,3],c=d[:,col],vmin=vmin, vmax=vmax,cmap='RdYlGn')
cbar = fig.colorbar(a, ax=ax,pad=0.1,extend='both')
cbar.set_label('F1 score',fontsize=12)
ax.set_xlabel('Hyperspectral edges',fontsize=12), ax.set_ylabel('LiDAR edges',fontsize=12), ax.set_zlabel('HR edges',fontsize=12)
opt = np.argmax(d[:,col])
ax.set_title('Geo edges: {}, Max F1:{:.3f}\nOptimal - HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(i,max(d[:,col]),d[opt,1],d[opt,2], d[opt,3]))
ploy+=1
#cbar.set_label('Accuracy',fontsize=13)
#cbar.set_label('Recall',fontsize=13)
#cbar.set_label('Cross-entropy loss',fontsize=13)
topt = np.argmax(data[:,col])
fig.suptitle('F1 score optimisation - Max F1: {:.3f}\nOptimal edges - Geo: {:.0f}, HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(max(data[:,col]),data[topt,0],data[topt,1],data[topt,2], data[topt,3]),size=16,y=0.99)
# fig.suptitle('Accuracy')
# fig.suptitle('Recall')
# fig.suptitle('Cross-entropy loss')
fig.tight_layout(pad=5, w_pad=1, h_pad=5)
fig.savefig('results/houstonedgesf1-4plot.png')
# fig2, axs2 = plt.subplots(2,2,4)
# for i in range(3):
# d = data[1331*i:1331*(i+1)-1,:]
# axs[i].scatter(d[:,1],d[:,2],d[:,3],c=d[:,4])
# i =
# d = data[1331*i:1331*(i+1)-1,:]
# axs[i].scatter(d[:,1],d[:,2],d[:,3],c=d[:,4])
# c = fig.colorbar(a, ax=axs[:, 1], shrink=0.6)
# cbar.set_label('F1 score',fontsize=13)
# #cbar.set_label('Accuracy',fontsize=13)
# #cbar.set_label('Recall',fontsize=13)
# #cbar.set_label('Cross-entropy loss',fontsize=13)
# fig.suptitle('F1 score')
# # fig.suptitle('Accuracy')
# # fig.suptitle('Recall')
# # fig.suptitle('Cross-entropy loss')
from mpl_toolkits import mplot3d
fig = plt.figure(figsize=[11,8])
col = 7
vmin, vmax = 0.58,0.7#min(data[:,col]), max(data[:,col])
ploy=1
for i in [0,2,5,10]:
ax = fig.add_subplot(2, 2, ploy, projection='3d')
d = data[1331*i:1331*(i+1)-1,:]
a = ax.scatter(d[:,1],d[:,2],d[:,3],c=d[:,col],vmin=vmin, vmax=vmax, cmap='RdYlGn_r',alpha=0.7)
cbar = fig.colorbar(a, ax=ax,pad=0.1,extend='both')
cbar.set_label('Cross-entropy loss',fontsize=12)
ax.set_xlabel('Hyperspectral edges',fontsize=12), ax.set_ylabel('LiDAR edges',fontsize=12), ax.set_zlabel('HR edges',fontsize=12)
opt = np.argmin(d[:,col])
ax.set_title('Geo edges: {}, Min CE loss:{:.3f}\nOptimal - HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(i,min(d[:,col]),d[opt,1],d[opt,2], d[opt,3]))
ploy+=1
#cbar.set_label('Accuracy',fontsize=13)
#cbar.set_label('Recall',fontsize=13)
#cbar.set_label('Cross-entropy loss',fontsize=13)
topt = np.argmin(data[:,col])
fig.suptitle('Cross-entropy optimisation - Min CE loss: {:.3f}\nOptimal edges - Geo: {:.0f}, HS: {:.0f}, L: {:.0f}, HR: {:.0f}'.format(min(data[:,col]),data[topt,0],data[topt,1],data[topt,2], data[topt,3]),size=16,y=0.99)
# fig.suptitle('Accuracy')
# fig.suptitle('Recall')
# fig.suptitle('Cross-entropy loss')
fig.tight_layout(pad=5, w_pad=1, h_pad=5)
fig.savefig('results/houstonedgesCE-4plot.png')
###Output
_____no_output_____
###Markdown
Beirut Edges
###Code
import pandas as pd
data = pd.read_csv('results/beirutedges.csv', sep=',',header=None).values[1:,:]
from mpl_toolkits import mplot3d
fig = plt.figure(figsize=[11,4])
d=data.copy()
# col = 4
#vmin, vmax = 0.5, 0.9#max(data[:,col])
ax = fig.add_subplot(1, 2, 1, projection='3d')
a = ax.scatter(d[:,0],d[:,1],d[:,2],c=d[:,3],cmap='RdYlGn',vmin=0.5, vmax=0.7)
cbar = fig.colorbar(a,ax=ax,pad=0.1,extend='both')
cbar.set_label('F1 score',fontsize=12)
ax.set_xlabel('Geographic edges',fontsize=12), ax.set_ylabel('HR edges',fontsize=12), ax.set_zlabel('InSAR edges',fontsize=12)
opt = np.argmax(d[:,3])
ax.set_title('Max F1:{:.3f}\nOptimal - Geo: {:.0f}, HR: {:.0f}, InSAR: {:.0f}'.format(max(d[:,3]),d[opt,0],d[opt,1], d[opt,2]))
ax = fig.add_subplot(1, 2, 2, projection='3d')
a = ax.scatter(d[:,0],d[:,1],d[:,2],c=d[:,6],cmap='RdYlGn_r',vmin=0.65, vmax=0.68)
cbar = fig.colorbar(a,ax=ax,pad=0.1,extend='both')
cbar.set_label('Cross-entropy loss',fontsize=12)
ax.set_xlabel('Geographic edges',fontsize=12), ax.set_ylabel('HR edges',fontsize=12), ax.set_zlabel('InSAR edges',fontsize=12)
opt = np.argmin(d[:,6])
ax.set_title('Min cross-entropy:{:.3f}\nOptimal - Geo: {:.0f}, HR: {:.0f}, InSAR: {:.0f}'.format(min(d[:,6]),d[opt,0],d[opt,1], d[opt,2]))
fig.suptitle('Beirut edge optimisation',size=16,y=0.99)
fig.tight_layout(pad=2, w_pad=1, h_pad=5)
fig.savefig('results/beirutedges.png')
max(d[:,3])
from mpl_toolkits import mplot3d
fig = plt.figure(figsize=[11,4])
d=data.copy()
# col = 4
#vmin, vmax = 0.5, 0.9#max(data[:,col])
ax = fig.add_subplot(1, 2, 1, projection='3d')
a = ax.scatter(d[:,0],d[:,1],d[:,2],c=d[:,4],cmap='RdYlGn',vmin=0.5, vmax=0.7)
cbar = fig.colorbar(a,ax=ax,pad=0.1,extend='both')
cbar.set_label('Accuracy',fontsize=12)
ax.set_xlabel('Geographic edges',fontsize=12), ax.set_ylabel('HR edges',fontsize=12), ax.set_zlabel('InSAR edges',fontsize=12)
opt = np.argmax(d[:,4])
ax.set_title('Max accuracy:{:.3f}\nOptimal - Geo: {:.0f}, HR: {:.0f}, InSAR: {:.0f}'.format(max(d[:,4]),d[opt,0],d[opt,1], d[opt,2]))
ax = fig.add_subplot(1, 2, 2, projection='3d')
a = ax.scatter(d[:,0],d[:,1],d[:,2],c=d[:,5],cmap='RdYlGn',vmin=0.5, vmax=0.7)
cbar = fig.colorbar(a,ax=ax,pad=0.1,extend='both')
cbar.set_label('Recall',fontsize=12)
ax.set_xlabel('Geographic edges',fontsize=12), ax.set_ylabel('HR edges',fontsize=12), ax.set_zlabel('InSAR edges',fontsize=12)
opt = np.argmax(d[:,5])
ax.set_title('Min recall:{:.3f}\nOptimal - Geo: {:.0f}, HR: {:.0f}, InSAR: {:.0f}'.format(max(d[:,5]),d[opt,0],d[opt,1], d[opt,2]))
fig.suptitle('Beirut edge optimisation',size=16,y=0.99)
fig.tight_layout(pad=2, w_pad=1, h_pad=5)
fig.savefig('results/beirutedgesappendix.png')
###Output
_____no_output_____
###Markdown
Node computation time
###Code
nodes = [100,200,500,1000,2000,5000,10000,20000,50000,100000]
time = [0.0074,0.014,0.16,0.28,0.56,1.8,5.3,15,77,251]
fig, ax = plt.subplots(1, 1, figsize=[5,4])
a = ax.loglog(nodes, time, 'r-')
p = ax.plot(10000,5.3,'k*',label='Used')
ax.set_title('NetConf computation time', size=14)
ax.set_xlabel('Nodes',fontsize=13)
ax.set_ylabel('Time (seconds)',fontsize=13)
ax.legend(fontsize=12)
fig.tight_layout()
plt.show()
fig.savefig('results/computationTime')
###Output
_____no_output_____ |
examples/01-filter/distance-between-surfaces.ipynb | ###Markdown
Distance Between Two Surfaces=============================Compute the average thickness between two surfaces.For example, you might have two surfaces that represent the boundariesof lithological layers in a subsurface geological model and you want toknow the average thickness of a unit between those boundaries.A clarification on terminology in this example is important. A meshpoint exists on the vertex of each cell on the mesh. See`what_is_a_mesh`{.interpreted-text role="ref"}. Each cell in thisexample encompasses a 2D region of space which contains an infinitenumber of spatial points; these spatial points are not mesh points. Thedistance between two surfaces can mean different things depending oncontext and usage. Each example here explores different aspects of thedistance from the vertex points of the bottom mesh to the top mesh.First, we will demo a method where we compute the normals on the vertexpoints of the bottom surface, and then project a ray to the top surfaceto compute the distance along the surface normals. This ray will usuallyintersect the top surface at a spatial point inside a cell of the mesh.Second, we will use a KDTree to compute the distance from every vertexpoint in the bottom mesh to its closest vertex point in the top mesh.Lastly, we will use a PyVista filter,`pyvista.DataSet.find_closest_cell`{.interpreted-text role="func"} tocalculate the distance from every vertex point in the bottom mesh to theclosest spatial point inside a cell of the top mesh. This will be theshortest distance from the vertex point to the top surface, unlike thefirst two examples.
###Code
import numpy as np
import pyvista as pv
def hill(seed):
"""A helper to make a random surface."""
mesh = pv.ParametricRandomHills(randomseed=seed, u_res=50, v_res=50, hillamplitude=0.5)
mesh.rotate_y(-10, inplace=True) # give the surfaces some tilt
return mesh
h0 = hill(1).elevation()
h1 = hill(10)
# Shift one surface
h1.points[:, -1] += 5
h1 = h1.elevation()
p = pv.Plotter()
p.add_mesh(h0, smooth_shading=True)
p.add_mesh(h1, smooth_shading=True)
p.show_grid()
p.show()
###Output
_____no_output_____
###Markdown
Ray Tracing Distance====================Compute normals of lower surface at vertex points
###Code
h0n = h0.compute_normals(point_normals=True, cell_normals=False, auto_orient_normals=True)
###Output
_____no_output_____
###Markdown
Travel along normals to the other surface and compute the thickness oneach vector.
###Code
h0n["distances"] = np.empty(h0.n_points)
for i in range(h0n.n_points):
p = h0n.points[i]
vec = h0n["Normals"][i] * h0n.length
p0 = p - vec
p1 = p + vec
ip, ic = h1.ray_trace(p0, p1, first_point=True)
dist = np.sqrt(np.sum((ip - p) ** 2))
h0n["distances"][i] = dist
# Replace zeros with nans
mask = h0n["distances"] == 0
h0n["distances"][mask] = np.nan
np.nanmean(h0n["distances"])
p = pv.Plotter()
p.add_mesh(h0n, scalars="distances", smooth_shading=True)
p.add_mesh(h1, color=True, opacity=0.75, smooth_shading=True)
p.show()
###Output
_____no_output_____
###Markdown
Nearest Neighbor Distance=========================You could also use a KDTree to compare the distance between each vertexpoint of the upper surface and the nearest neighbor vertex point of thelower surface. This will be noticeably faster than a ray trace,especially for large surfaces.
###Code
from scipy.spatial import KDTree
tree = KDTree(h1.points)
d_kdtree, idx = tree.query(h0.points)
h0["distances"] = d_kdtree
np.mean(d_kdtree)
p = pv.Plotter()
p.add_mesh(h0, scalars="distances", smooth_shading=True)
p.add_mesh(h1, color=True, opacity=0.75, smooth_shading=True)
p.show()
###Output
_____no_output_____
###Markdown
Using PyVista Filter====================The `pyvista.DataSet.find_closest_cell`{.interpreted-text role="func"}filter returns the spatial points inside the cells of the top surfacethat are closest to the vertex points of the bottom surface.`closest_points` is returned when using `return_closest_point=True`.
###Code
closest_cells, closest_points = h1.find_closest_cell(h0.points, return_closest_point=True)
d_exact = np.linalg.norm(h0.points - closest_points, axis=1)
h0["distances"] = d_exact
np.mean(d_exact)
###Output
_____no_output_____
###Markdown
As expected there is only a small difference between this method and theKDTree method.
###Code
p = pv.Plotter()
p.add_mesh(h0, scalars="distances", smooth_shading=True)
p.add_mesh(h1, color=True, opacity=0.75, smooth_shading=True)
p.show()
###Output
_____no_output_____
###Markdown
Distance Between Two Surfaces=============================Compute the average thickness between two surfaces.For example, you might have two surfaces that represent the boundariesof lithological layers in a subsurface geological model and you want toknow the average thickness of a unit between those boundaries.We can compute the thickness between the two surfaces using a fewdifferent methods. First, we will demo a method where we compute thenormals of the bottom surface, and then project a ray to the top surfaceto compute the distance along the surface normals. Second, we will use aKDTree to compute the distance from every point in the bottom mesh toit\'s closest point in the top mesh.
###Code
import pyvista as pv
import numpy as np
# A helper to make a random surface
def hill(seed):
mesh = pv.ParametricRandomHills(randomseed=seed, u_res=50, v_res=50,
hillamplitude=0.5)
mesh.rotate_y(-10) # give the surfaces some tilt
return mesh
h0 = hill(1).elevation()
h1 = hill(10)
# Shift one surface
h1.points[:,-1] += 5
h1 = h1.elevation()
p = pv.Plotter()
p.add_mesh(h0, smooth_shading=True)
p.add_mesh(h1, smooth_shading=True)
p.show_grid()
p.show()
###Output
_____no_output_____
###Markdown
Ray Tracing Distance====================Compute normals of lower surface
###Code
h0n = h0.compute_normals(point_normals=True, cell_normals=False,
auto_orient_normals=True)
###Output
_____no_output_____
###Markdown
Travel along normals to the other surface and compute the thickness oneach vector.
###Code
h0n["distances"] = np.empty(h0.n_points)
for i in range(h0n.n_points):
p = h0n.points[i]
vec = h0n["Normals"][i] * h0n.length
p0 = p - vec
p1 = p + vec
ip, ic = h1.ray_trace(p0, p1, first_point=True)
dist = np.sqrt(np.sum((ip - p)**2))
h0n["distances"][i] = dist
# Replace zeros with nans
mask = h0n["distances"] == 0
h0n["distances"][mask] = np.nan
np.nanmean(h0n["distances"])
p = pv.Plotter()
p.add_mesh(h0n, scalars="distances", smooth_shading=True)
p.add_mesh(h1, color=True, opacity=0.75, smooth_shading=True)
p.show()
###Output
_____no_output_____
###Markdown
Nearest Neighbor Distance=========================You could also use a KDTree to compare the distance between each pointof the upper surface and the nearest neighbor of the lower surface. Thiswon\'t be the exact surface to surface distance, but it will benoticeably faster than a ray trace, especially for large surfaces.
###Code
from scipy.spatial import KDTree
tree = KDTree(h1.points)
d, idx = tree.query(h0.points )
h0["distances"] = d
np.mean(d)
p = pv.Plotter()
p.add_mesh(h0, scalars="distances", smooth_shading=True)
p.add_mesh(h1, color=True, opacity=0.75, smooth_shading=True)
p.show()
###Output
_____no_output_____ |
module1/w20_module1_content.ipynb | ###Markdown
Module 1: First look at text-based classificationThe first real problem I'd like to look at in the course is classifying tweets as carrying fake-news (or not). But before getting to that in later modules, we need to pick up skills in what is called data wrangling and feature engineering. We will do that in this module. I am going to use a standard tutorial-type data set for machine learning: the passenger record of the Titanic steamship. The Titanic sank on its maiden voyage. We have the record of the passengers. We will do a practice problem of predicting who survived and who perished based solely on their name. Will this be effective? Seems kind of like reading Tarot cards. But let's keep an open mind. Maybe it will work.Many text-based machine-learning problems contain their data in spreadsheet form. Python has a powerful library for dealing with spreadsheets called pandas. In this module we will use a handful of features from the *`pandas`* library. I'll go through some basic clean-up steps using pandas. Common wisdom is that the clean-up process can take up to 70% of your entire effort. Life is messy. Text data comes to us in unstructured forms. We have to deal with it. Read in spreadsheetFor the first part of the course, we will be working on a problem called classification. The data we will be using to make classifications will be in spreadsheet form (I'll also call this *table* form).We could read in the data to our own custom Python data-structure. Instead we will use the pandas library to store our data and modify it.I am going to use something called comma-separated values or csv as my raw file format. I like csv because you can use it to pass data around easily from things like Excel and google Sheets. And pandas knows how to read raw csv format and produce its own version called a Dataframe. Our week 2 goal is to read a table of tweets, in csv form, and classify them as fake-news or not.Caveat: I said we are interested in classification (e.g., fake-news or not) but I'll use the term `prediction` for the titanic. You can treat classification and prediction as interchangeable for now. I could say I am trying to `predict` who will survive or I could say I am trying to `classify` passengers into survivors and non-survivors. We will use the same methods for each.I have the titanic data stored on google sheets. I used sheets to give me a url to the csv version of the file. Once I have that url, I can hand it to pandas and suck it in. Pretty dang cool. You all have access to Google Sheets so you can do the same. If you have data in spreadsheet form, upload it to Sheets and then get the url. Now anyone can access your spreadsheet.BTW: it is convention to alias pandas as `pd`. It is also convention to use `df` as an abstract name for a Dataframe - you will see this in docs and StackOverflow. I am using `titanic_table` in place of `df` to give it more meaning.
###Code
import pandas as pd
url = 'https://docs.google.com/spreadsheets/d/1z1ycUZjJpmMWB4gXbhwRQ9B_qa42CwzAQkf82mLibxI/pub?output=csv'
titanic_table = pd.read_csv(url)
len(titanic_table)
#I am setting the option to see all the columns of our table as we build it, i.e., it has no max.
pd.set_option('display.max_columns', None)
titanic_table.head() #shows first 5 rows
###Output
_____no_output_____
###Markdown
Google ColabI will run all my notebooks through google colab. So I assume you downloaded this notebook from canvas and then uploaded it to your colab account. ExploreWe now have the 891 passengers in 891 rows of a table. We can use pandas methods to look a little more deeply at the data.* Use `head()` to get general layout. We did that above.* Find which columns have `NaN` (empties) and how many.* Use the `describe` method to see if any columns look odd, e.g., more than 2 unique values for a binary column.
###Code
titanic_table.describe(include='all')
###Output
_____no_output_____
###Markdown
There is a mixture of column types. Some have discrete values (e.g., `Pclass`, `Sex`, `Embarked`), some have continuous values (e.g., `Age`, `Fare`), and some are in between (e.g., `SibSp`, `Parch`). The `Name` column has text values. The `Ticket` and `Cabin` columns are a bit of a hodge podge and will take further wrangling to make them useful.Note that a `NaN` has several meanings. In the table above, it means "does not apply". For instance there is no std for the Name column so it shows a NaN. More typically, a NaN will appear as a value in a table to stand for "empty - no known value". One more thing to note about it. It is not a string but a special value of pandas. So an attempt to do NaN == "NaN" will be false. You will have to use special pandas functions for dealing with a NaN.Let's next see how many empties there are in each column.
###Code
titanic_table.isna().sum() #note use of isna to find the NaNs.
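# Quick illustration of the NaN point above: NaN is not the string "NaN" and does not even
# equal itself, so use pandas helpers such as isna()/notna() to test for it.
# import numpy as np
# np.nan == "NaN"    # False
# np.nan == np.nan   # False
# pd.isna(np.nan)    # True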
###Output
_____no_output_____
###Markdown
* The `Age` column is a bit worrisome. It looks like a column that can be useful in prediction but has 177 empty values.* The `Cabin` column has a lot of empties. I am dubious that the column as a whole will be useful. However, it might make sense to use an empty/non-empty question. For instance, maybe passengers with non-empty cabins were more likely to survive.* The `Embarked` column has only 2 empties and that seems like something we can fill in. Filter out unneeded columnsI am really only interested in the `Name` column and the `Survived` column. Since we are trying to predict Survived values, it is known as the target column or label column or just plain y. The other columns are called features or xi. I am saying that we will only be interested in Name so it is the sole feature (for now). My goal is to create a new table with just those 2 columns. There are 2 ways to go: (1) drop all the other columns, (2) copy over only the needed columns. I'll show you both ways. First, I'll use the columns attribute to obtain all the columns. I turn this into a list to make it print more cleanly. I am doing this in preparation for dropping most of them. I am being lazy - I just want to copy and paste the output into the drop method.Note in the drop method I am using `axis=1` to say I am dropping columns and not rows (`axis=0`).
###Code
list(titanic_table.columns)
name_table_1 = titanic_table.drop(['PassengerId',
'Pclass',
'Sex',
'Age',
'SibSp',
'Parch',
'Ticket',
'Fare',
'Cabin',
'Embarked'], axis=1)
name_table_1.head()
###Output
_____no_output_____
###Markdown
Most pandas operations make shallow copies of a table. This is true above: the drop method gives me a new table. Normally I would just reassign new table to `titanic_table`. This avoids keeping a lot of variables around like `titanic_table_1`, `titanic_table_2`, etc. I find trying to manage such a name space clumsy. It is true my way does not allow you to roll back to a prior version of the table. But you can "roll forward" by just restarting the kernel and executing all of the cells from the top of the notebook to get to a specific state.All that said, I am using a new var name above to demonstrate something. That comes next. Instead of dropping a bunch of columns, let's just add the 2 we want. Nice.
###Code
name_table_2 = titanic_table[['Name', 'Survived']]
name_table_2.head()
###Output
_____no_output_____
###Markdown
That's what I'm talkin aboutWe trimmed down to the two columns we need. But as a warm up for word-vectorization in later modules, I am going to add a new column that is based on the Name column.
###Code
#I'm going to reuse titanic_table var name to avoid proliferating names. If need to get full table back, redo steps at top of notebook.
titanic_table = name_table_2 #or name_table_1 - they are equiv
###Output
_____no_output_____
###Markdown
NumerologyI have a theory that the length of your full name gives a clue to your future. I'm going to add a new column, `Length`, so I can test this out a little later. You can see below that pandas makes this pretty easy to do.What is going on on the right hand side is that pandas `apply` is generating every row in turn and then passing that row to my lambda expression. The value returned by that lambda expression goes into the new column `Length`. If you like list comprehensions better, you can use this:titanic_table['Length'] = [len(row['Name']) for index,row in titanic_table.iterrows()]The iterrows method gives you the same functionality but also includes the row index (which we are not using).
###Code
titanic_table['Length'] = titanic_table.apply(lambda row: len(row['Name']), axis=1)
titanic_table.head()
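# The list-comprehension alternative mentioned above gives the same column via iterrows:
# titanic_table['Length'] = [len(row['Name']) for index, row in titanic_table.iterrows()]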
###Output
_____no_output_____
###Markdown
If you squint, you can almost believe that those who perished had shorter names. Write the table outLet's save the work we have done with the table. Because I am using google colab, I have to authenticate myself before I can store the file. Note that I created a folder, `class_tables`, on My Drive on google drive. You can make up your own folder name if you wish. The first time you run this, you will be given a key to fill in and a website to visit. The website gives you the key. Copy it and type it in and hit enter.
###Code
from google.colab import drive
drive.mount('/content/gdrive')
with open('/content/gdrive/My Drive/class_tables/name_table.csv', 'w') as f:
titanic_table.to_csv(f, encoding='utf-8', index=False)
###Output
_____no_output_____ |
i2i/fspmaps/country_jurisdiction_borders.ipynb | ###Markdown
Country and jurisdictions borders
###Code
import numpy as np
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
%matplotlib inline
# The shapely.ops module has a cascaded_union that finds the cumulative union of many objects
from shapely.ops import cascaded_union
###Output
_____no_output_____
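###Markdown
As a quick illustration of what `cascaded_union` does (a toy sketch with made-up geometries, not data used in this notebook): it merges a list of geometries into a single geometry.
###Code
# Toy example: three overlapping unit circles merged into one polygon
from shapely.geometry import Point
toy_shapes = [Point(i, 0).buffer(1.0) for i in range(3)]
merged = cascaded_union(toy_shapes)
print(merged.geom_type, merged.area)
###Output
_____no_output_____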
###Markdown
Read tables
###Code
Map0 = gpd.read_file('/Users/ikersanchez/Vizzuality/PROIEKTUAK/i2i/Data/gadm36_levels_shp/gadm36_0.shp')
Map1 = gpd.read_file('/Users/ikersanchez/Vizzuality/PROIEKTUAK/i2i/Data/gadm36_levels_shp/gadm36_1.shp')
###Output
_____no_output_____
###Markdown
Country borders
###Code
iso = ['BGD','IND','KEN','LSO','NGA','TZA','UGA']
Map0.rename(columns={'GID_0': 'iso', 'NAME_0':'country'}, inplace=True)
df = pd.DataFrame(columns=['geometry','country','iso'])
for i in iso:
country = Map0[Map0['iso'] == i]
country = country[['geometry','country','iso']]
df = gpd.GeoDataFrame(pd.concat([df,country]))
df.reset_index(drop=True, inplace=True)
fig, ax = plt.subplots(figsize=[8,8])
ax.set_aspect('equal')
df.plot(ax=ax, color='white', edgecolor='black')
###Output
_____no_output_____
###Markdown
Save table
###Code
df.to_csv('/Users/ikersanchez/Vizzuality/PROIEKTUAK/i2i/Data/FSP_Maps/country_borders_gadm36.csv')
###Output
_____no_output_____
###Markdown
Country jurisdictions (Admin 1)
###Code
iso = ['BGD','IND','KEN','LSO','NGA','TZA','UGA']
df = pd.DataFrame(columns=['geometry', 'iso','country','code','jurisdiction'])
for country in iso:
if country == 'IND':
jurisdiction = Map1[(Map1['NAME_1'] == 'Uttar Pradesh') | (Map1['NAME_1'] == 'Bihar')]
jurisdiction = jurisdiction[['geometry', 'GID_0','NAME_0','GID_1','NAME_1']]
jurisdiction.rename(columns={'GID_0':'iso', 'NAME_0':'country', 'GID_1':'code', 'NAME_1':'jurisdiction'}, inplace= True)
df = pd.concat([df,jurisdiction])
else:
jurisdiction = Map1[Map1['GID_0'] == country]
jurisdiction = jurisdiction[['geometry', 'GID_0','NAME_0','GID_1','NAME_1']]
jurisdiction.rename(columns={'GID_0':'iso', 'NAME_0':'country', 'GID_1':'code', 'NAME_1':'jurisdiction'}, inplace= True)
df = pd.concat([df,jurisdiction])
df.reset_index(drop=True, inplace=True)
###Output
_____no_output_____
###Markdown
Save table
###Code
df.to_csv('/Users/ikersanchez/Vizzuality/PROIEKTUAK/i2i/Data/FSP_Maps/jurisdictions.csv')
###Output
_____no_output_____ |
coding_assignments/01_Classical_and_Quantum_Probability_Distributions.ipynb | ###Markdown
Before you begin, execute this cell to import numpy and packages from the D-Wave Ocean suite, and all necessary functions for the gate-model framework you are going to use, whether that is the Forest SDK or Qiskit. In the case of Forest SDK, it also starts the qvm and quilc servers.
###Code
%run -i "assignment_helper.py"
%matplotlib inline
###Output
Available frameworks:
Qiskit
D-Wave Ocean
###Markdown
Classical probability distributions**Exercise 1** (1 point). Recall that in classical coin flipping, you get heads with probability $P(X=0) = p_0$ and tails with $P(X=1) = p_1$ for each toss of the coin, where $p_i\geq 0$ for all $i$, and the probabilities sum to one: $\sum_i p_i = 1$. Create a sample with 1000 data points using numpy, with the probability of getting tails being 0.3. This is the parameter that the `binomial` function takes. Store the outcome in an array called `x_data`.
###Code
n_samples = 1000
###
p_1 = 0.3
p_0 = 1-p_1
x_data = np.random.binomial(n=1, p=p_1, size=(n_samples,))
###
assert isinstance(x_data, np.ndarray)
assert abs(p_1-x_data.sum()/n_samples) < 0.05
###Output
_____no_output_____
###Markdown
**Exercise 2** (1 point). As you recall, we may also write the probability distribution as a stochastic vector $\vec{p} = \begin{bmatrix} p_0 \\ p_1 \end{bmatrix}$. The normalization constraint on the probability distribution says that the norm of the vector is restricted to one in the $l_1$ norm. In other words, $||\vec{p}||_1 = \sum_i |p_i| = 1$. This would be the unit circle in the $l_1$ norm, but since $p_i\geq 0$, we are restricted to a quarter of the unit circle, just as we plotted above. Write a function that checks whether a given two-dimensional vector is a stochastic vector. That is, it should return `True` if all elements are positive and the 1-norm is approximately one, and it should return `False` otherwise. The input of the function is a numpy array.
###Code
def is_stochastic_vector(p: np.array):
###
return abs(np.linalg.norm(p, ord=1) - 1) < 0.01 and (p >= 0).all()
###
assert not is_stochastic_vector(np.array([0.2, 0.3]))
assert not is_stochastic_vector(np.array([-0.2, 0.7]))
assert is_stochastic_vector(np.array([0.2, 0.8]))
###Output
_____no_output_____
###Markdown
**Exercise 3** (1 point). The probability of heads is just the first element in the $\vec{p}$ and we can use a projection to extract it. For the first element of the stochastic vector, the projection is described by the matrix $\begin{bmatrix} 1 & 0\\0 & 0\end{bmatrix}$. Write a function that performs this projection on a two-element vector described by a numpy array. Your output after the projection is also a two-element vector.
###Code
def project_to_first_basis_vector(p: np.array):
###
return np.array([[1,0],[0,0]]) @ p
###
assert np.alltrue(project_to_first_basis_vector(np.array([0.2, 0.3])) == np.array([0.2, 0.]))
assert np.alltrue(project_to_first_basis_vector(np.array([1., 0.])) == np.array([1., 0.]))
###Output
_____no_output_____
###Markdown
**Exercise 4** (1 point). The projection operators introduce some linear algebra to working with probability distributions. We can also use linear algebra to transform one probability distribution to another. A left *stochastic matrix* will map stochastic vectors to stochastic vectors when multiplied from the left: its columns add up to one. Write a function that takes a matrix and a vector as input arguments (both are numpy arrays), checks whether the vector is a stochastic vector and whether the matrix is left stochastic. If they are, return the matrix applied to the vector, otherwise raise a `ValueError`. You can call the function `is_stochastic_vector` that you defined above.
###Code
def apply_stochastic_matrix(p: np.array, M: np.array):
"""Apply the matrix M to the vector p, but only if
p is a stochastic vector and M is a left stochastic
matrix. Otherwise raise a ValueError.
"""
###
if np.apply_along_axis(is_stochastic_vector, 0, M).all():
return M @ p
else:
raise ValueError
###
p = np.array([[.5], [.5]])
M = np.array([[0.7, 0.6], [0.3, 0.4]])
assert abs(np.linalg.norm(apply_stochastic_matrix(p, M), ord=1)-1) < 0.01
M = np.array([[0.7, 0.6], [0.3, 0.5]])
try:
apply_stochastic_matrix(p, M)
except ValueError:
pass
else:
raise AssertionError("did not raise")
###Output
_____no_output_____
###Markdown
**Exercise 5** (1 point). Create a left stochastic matrix in a variable called `M` that transforms the uniform distribution $\vec{p}= \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}$ to $\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$. `M` should be a two-dimensional numpy array.
###Code
###
M = np.array([[0.6, 0.6],
[0.4, 0.4]])
###
assert np.allclose(M.dot(np.array([0.5, 0.5])), np.array([0.6, 0.4]))
###Output
_____no_output_____
###Markdown
**Exercise 6** (1 point). Calculate the entropy of this distribution $\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$ in a variable called `S`.
###Code
###
S = - (0.6*np.log2(0.6) + 0.4*np.log2(0.4))
S
###
###
### AUTOGRADER TEST - DO NOT REMOVE
###
###Output
_____no_output_____
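###Markdown
As a quick sanity check, $S = -(0.6\log_2 0.6 + 0.4\log_2 0.4) \approx 0.971$ bits, slightly less than the 1 bit of entropy of the uniform distribution.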
###Markdown
Quantum states**Exercise 7** (1 point). A quantum state is a probability distribution. A qubit state is a distribution over two values, similar to the coin flipping in the classical state. A major difference is that the entries are complex numbers and the normalization is in the $l_2$ norm. Create a function similar to `is_stochastic_vector` that checks whether a vector is a valid quantum state. The input is a numpy array and the output should be boolean.
###Code
def is_quantum_state(psi: np.array):
###
return abs(np.linalg.norm(psi) - 1) < 0.01
###
assert is_quantum_state(np.array([1/np.sqrt(2), 1/np.sqrt(2)]))
assert is_quantum_state(np.array([-1/np.sqrt(2), 1/np.sqrt(2)]))
assert is_quantum_state(np.array([-1/3, 2*np.sqrt(2)/3]))
assert is_quantum_state(np.array([-1j/3, 2*np.sqrt(2)/3]))
assert not is_quantum_state(np.array([0.2, 0.8]))
###Output
_____no_output_____
###Markdown
**Exercise 8** (1 point). While working with numpy arrays is convenient, it is better to use a framework designed for quantum computing, since it often allows us to execute a circuit directly on a quantum computer. In your preferred framework, implement a circuit of a single qubit with no operation on it. You should create it in an object called `circuit`. Do not add a measurement. The evaluation will automatically branch according to which framework you chose.
###Code
###
q = QuantumRegister(1)
c = ClassicalRegister(1)
circuit = QuantumCircuit(q,c)
###
amplitudes = get_amplitudes(circuit)
assert abs(amplitudes[0]-1.0) < 0.01
###Output
_____no_output_____
###Markdown
**Exercise 9** (1 point). In the execution branching above, you see that we use the wavefunction simulator. This allows us to use the probability amplitudes as usual numpy arrays, as you can see above. If we ran the circuit on an actual quantum device, we would not be able to inspect the wavefunction, but we would have to rely on the statistics of measurements to understand what is happening in the circuit.Create a circuit in your preferred framework that creates an equal superposition in a qubit using a Hadamard gate. Again, the name of the object should be `circuit`. The evaluation will be based on measurement statistics. In this case, you should explicitly specify the measurement on the qubit
###Code
###
q = QuantumRegister(1)
c = ClassicalRegister(1)
circuit = QuantumCircuit(q,c)
circuit.h(q[0])
circuit.measure(q,c)
###
counts = get_counts(circuit)
assert abs(counts['0']/100-.5) < 0.2
###Output
_____no_output_____
###Markdown
**Exercise 10** (1 point). If you plotted the state before measurement on the Bloch sphere, it would have been on the equator halfway between the $|0\rangle$ and $|1\rangle$ states, at the tip of the X axis. If you apply the Hadamard on the $|1\rangle$, it would have been the point at the opposite end of the X axis, since the resulting superposition would have had a negative amplitude for $|1\rangle$. The measurement statistics, however, would be identical. The negative sign plays a role in interference: for instance, applying a Hadamard again would take you back to $|1\rangle$. Create the superposition after applying the Hadamard gate on $|1\rangle$. We will verify whether it picked up the phase. Do not include a measurement, since we will inspect the wavefunction.
###Code
###
circuit = QuantumCircuit(q,c)
circuit.x(q[0])
circuit.h(q[0])
###
amplitudes = get_amplitudes(circuit)
assert abs(amplitudes[1]+np.sqrt(2)/2) < 0.01
###Output
_____no_output_____
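###Markdown
Written out, the circuit above prepares $HX|0\rangle = H|1\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$, so the amplitude of $|1\rangle$ is $-\frac{1}{\sqrt{2}}$, which is what the assertion checks. Applying $H$ once more would give $H\left(\frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\right) = |1\rangle$, the interference effect mentioned above.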
###Markdown
More qubits and entanglement**Exercise 11** (1 point). To get a sense of multiqubit states, it is important to be confident with the tensor product operation. Create a function that returns the four basis vectors, $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$, of the tensor product space $\mathbb{C}^2\otimes\mathbb{C}^2$. The order in which they appear does not matter. The return value should be a list of four numpy arrays.
###Code
def create_canonical_basis():
###
basis = [np.array([1,0]).T, np.array([0,1]).T]
canonical_basis = []
for b0 in basis:
for b1 in basis:
canonical_basis.append(np.kron(b0,b1))
return canonical_basis
###
basis = create_canonical_basis()
assert len(basis) == 4
if basis[0].shape != (4, ):
basis = [basis_vector.reshape((4, )) for basis_vector in basis]
###
### AUTOGRADER TEST - DO NOT REMOVE
###
###Output
_____no_output_____
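###Markdown
For example, $|0\rangle\otimes|1\rangle$ is `np.kron(np.array([1, 0]), np.array([0, 1]))`, which evaluates to $[0, 1, 0, 0]$, i.e. the basis vector $|01\rangle$.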
###Markdown
**Exercise 12** (1 point). A generic product state has the form $\begin{bmatrix}a_0b_0\\ a_0b_1\\ a_1b_0\\ a_1b_1\end{bmatrix}=a_0b_0|00\rangle + a_0b_1|01\rangle + a_1b_0|10\rangle + a_1b_1|11\rangle$ on $\mathbb{C}^2\otimes\mathbb{C}^2$, but not every vector in this space has this form. We can use the basis vectors to form vectors in the space that do not have a product structure. These are entangled states that show strong correlations. Entanglement is an important resource in quantum computing and being able to create a circuit that generates an entangled state is critical. Implement a circuit in your preferred framework to create the $|\phi^-\rangle = \frac{1}{\sqrt{2}}(|00\rangle-|11\rangle)$ state, that is, almost the same as the $|\phi^+\rangle$ state, but with the opposite sign of the probability amplitude of $|11\rangle$. Do not include a measurement, as we will verify the state with the wavefunction simulator.
###Code
###
q = QuantumRegister(2)
c = ClassicalRegister(2)
circuit = QuantumCircuit(q, c)
circuit.x(q[0])
circuit.h(q[0])
circuit.cx(q[0], q[1])
###
from qiskit.tools.visualization import circuit_drawer
circuit_drawer(circuit)
amplitudes = get_amplitudes(circuit)
assert np.allclose(np.array([np.sqrt(2)/2, 0, 0, -np.sqrt(2)/2]), amplitudes)
###Output
_____no_output_____
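###Markdown
Tracing the circuit: the $X$ gate takes the control qubit to $|1\rangle$, the Hadamard turns it into $\frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$, and the CNOT flips the target only on the $|1\rangle$ branch, giving $\frac{1}{\sqrt{2}}(|00\rangle - |11\rangle) = |\phi^-\rangle$, in agreement with the asserted amplitudes.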
###Markdown
Before you begin, execute this cell to import numpy and packages from the D-Wave Ocean suite, and all necessary functions for the gate-model framework you are going to use, whether that is the Forest SDK or Qiskit. In the case of Forest SDK, it also starts the qvm and quilc servers.
###Code
%run -i "assignment_helper.py"
###Output
_____no_output_____
###Markdown
Classical probability distributions**Exercise 1** (1 point). Recall that in classical coin flipping, you get heads with probability $P(X=0) = p_0$ and tails with $P(X=1) = p_1$ for each toss of the coin, where $p_i\geq 0$ for all $i$, and the probabilities sum to one: $\sum_i p_i = 1$. Create a sample with 1000 data points using numpy, with the probability of getting tails being 0.3. This is the parameter that the `binomial` function takes. Store the outcome in an array called `x_data`.
###Code
n_samples = 1000
#
# YOUR CODE HERE
#
assert type(x_data) is np.ndarray
assert abs(p_1-x_data.sum()/n_samples) < 0.05
###Output
_____no_output_____
###Markdown
**Exercise 2** (1 point). As you recall, we may also write the probability distribution as a stochastic vector $\vec{p} = \begin{bmatrix} p_0 \\ p_1 \end{bmatrix}$. The normalization constraint on the probability distribution says that the norm of the vector is restricted to one in the $l_1$ norm. In other words, $||\vec{p}||_1 = \sum_i |p_i| = 1$. This would be the unit circle in the $l_1$ norm, but since $p_i\geq 0$, we are restricted to a quarter of the unit circle, just as we plotted above. Write a function that checks whether a given two-dimensional vector is a stochastic vector. That is, it should return `True` if all elements are positive and the 1-norm is approximately one, and it should return `False` otherwise. The input of the function is a numpy array.
###Code
def is_stochastic_vector(p: np.array):
#
# YOUR CODE HERE
#
assert is_stochastic_vector(np.array([0.2, 0.3])) is False
assert is_stochastic_vector(np.array([-0.2, 0.7])) is False
assert is_stochastic_vector(np.array([0.2, 0.8])) is True
###Output
_____no_output_____
###Markdown
**Exercise 3** (1 point). The probability of heads is just the first element in the $\vec{p}$ and we can use a projection to extract it. For the first element of the stochastic vector, the projection is described by the matrix $\begin{bmatrix} 1 & 0\\0 & 0\end{bmatrix}$. Write a function that performs this projection on a two-element vector described by a numpy array. Your output after the projection is also a two-element vector.
###Code
def project_to_first_basis_vector(p: np.array):
#
# YOUR CODE HERE
#
assert np.alltrue(project_to_first_basis_vector(np.array([0.2, 0.3])) == np.array([0.2, 0.])) == True
assert np.alltrue(project_to_first_basis_vector(np.array([1., 0.])) == np.array([1., 0.])) == True
###Output
_____no_output_____
###Markdown
**Exercise 4** (1 point). The projection operators introduce some linear algebra to working with probability distributions. We can also use linear algebra to transform one probability distribution to another. A left *stochastic matrix* will map stochastic vectors to stochastic vectors when multiplied from the left: its columns add up to one. Write a function that takes a matrix and a vector as input arguments (both are numpy arrays), checks whether the vector is a stochastic vector and whether the matrix is left stochastic. If they are, return the matrix applied to the vector, otherwise raise a `ValueError`. You can call the function `is_stochastic_vector` that you defined above.
###Code
def apply_stochastic_matrix(p: np.array, M: np.array):
"""Apply the matrix M to the vector p, but only if
p is a stochastic vector and M is a left stochastic
matrix. Otherwise raise a ValueError.
"""
#
# YOUR CODE HERE
#
p = np.array([[.5], [.5]])
M = np.array([[0.7, 0.6], [0.3, 0.4]])
assert abs(np.linalg.norm(apply_stochastic_matrix(p, M), ord=1)-1) < 0.01
M = np.array([[0.7, 0.6], [0.3, 0.5]])
try:
apply_stochastic_matrix(p, M)
except ValueError:
pass
else:
raise AssertionError("did not raise")
###Output
_____no_output_____
###Markdown
**Exercise 5** (1 point). Create a left stochastic matrix in a variable called `M` that transforms the uniform distribution $\vec{p}= \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}$ to $\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$. `M` should be a two-dimensional numpy array.
###Code
#
# YOUR CODE HERE
#
assert np.alltrue(M.dot(np.array([0.5, 0.5])) == np.array([0.6, 0.4])) == True
###Output
_____no_output_____
###Markdown
**Exercise 6** (1 point). Calculate the entropy of this distribution $\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$ in a variable called `H`.
###Code
#
# YOUR CODE HERE
#
#
# AUTOGRADER TEST - DO NOT REMOVE
#
###Output
_____no_output_____
###Markdown
Quantum states**Exercise 7** (1 point). A quantum state is a probability distribution. A qubit state is a distribution over two values, similar to the coin flipping in the classical state. A major difference is that the entries are complex numbers and the normalization is in the $l_2$ norm. Create a function similar to `is_stochastic_vector` that checks whether a vector is a valid quantum state. The input is a numpy array and the output should be boolean.
###Code
def is_quantum_state(psi: np.array):
#
# YOUR CODE HERE
#
assert is_quantum_state(np.array([1/np.sqrt(2), 1/np.sqrt(2)]))
assert is_quantum_state(np.array([-1/np.sqrt(2), 1/np.sqrt(2)]))
assert is_quantum_state(np.array([-1/3, 2*np.sqrt(2)/3]))
assert is_quantum_state(np.array([0.2, 0.8])) is False
###Output
_____no_output_____
###Markdown
**Exercise 8** (1 point). While working with numpy arrays is convenient, it is better to use a framework designed for quantum computing, since it often allows us to execute a circuit directly on a quantum computer. In your preferred framework, implement a circuit of a single qubit with no operation on it. You should create it in an object called `circuit`. Do not add a measurement. The evaluation will automatically branch according to which framework you chose.
###Code
#
# YOUR CODE HERE
#
amplitudes = get_amplitudes(circuit)
assert abs(amplitudes[0]-1.0) < 0.01
###Output
_____no_output_____
###Markdown
**Exercise 9** (1 point). In the execution branching above, you see that we use the wavefunction simulator. This allows us to use the probability amplitudes as usual numpy arrays, as you can see above. If we ran the circuit on an actual quantum device, we would not be able to inspect the wavefunction, but we would have to rely on the statistics of measurements to understand what is happening in the circuit.Create a circuit in your preferred framework that creates an equal superposition in a qubit using a Hadamard gate. Again, the name of the object should be `circuit`. The evaluation will be based on measurement statistics. In this case, you should explicitly specify the measurement on the qubit
###Code
#
# YOUR CODE HERE
#
counts = get_counts(circuit)
assert abs(counts['0']/100-.5) < 0.2
###Output
_____no_output_____
###Markdown
**Exercise 10** (1 point). If you plotted the state before measurement on the Bloch sphere, it would have been on the equator halfway between the $|0\rangle$ and $|1\rangle$ states, at the tip of the X axis. If you apply the Hadamard on the $|1\rangle$, it would have been the point at the opposite end of the X axis, since the resulting superposition would have had a negative amplitude for $|1\rangle$. The measurement statistics, however, would be identical. The negative sign plays a role in interference: for instance, applying a Hadamard again would take you back to $|1\rangle$. Create the superposition after applying the Hadamard gate on $|1\rangle$. We will verify whether it picked up the phase. Do not include a measurement, since we will inspect the wavefunction.
###Code
#
# YOUR CODE HERE
#
amplitudes = get_amplitudes(circuit)
assert abs(amplitudes[1]+np.sqrt(2)/2) < 0.01
###Output
_____no_output_____
###Markdown
More qubits and entanglement**Exercise 11** (1 point). To get a sense of multiqubit states, it is important to be confident with the tensor product operation. Create a function that returns the four basis vectors, $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$, of the tensor product space $\mathbb{C}^2\otimes\mathbb{C}^2$. The order in which they appear does not matter. The return value should be a list of four numpy arrays.
###Code
def create_canonical_basis():
#
# YOUR CODE HERE
#
basis = create_canonical_basis()
assert len(basis) == 4
#
# AUTOGRADER TEST - DO NOT REMOVE
#
###Output
_____no_output_____
###Markdown
**Exercise 12** (1 point). A generic product state has the form $\begin{bmatrix}a_0b_0\\ a_0b_1\\ a_1b_0\\ a_1b_1\end{bmatrix}=a_0b_0|00\rangle + a_0b_1|01\rangle + a_1b_0|10\rangle + a_1b_1|11\rangle$ on $\mathbb{C}^2\otimes\mathbb{C}^2$, but not every vector in this space has this form. We can use the basis vectors to form vectors in the space that do not have a product structure. These are entangled states that show strong correlations. Entanglement is an important resource in quantum computing and being able to create a circuit that generates an entangled state is critical. Implement a circuit in your preferred framework to create the $|\phi^-\rangle = \frac{1}{\sqrt{2}}(|00\rangle-|11\rangle)$ state, that is, almost the same as the $|\phi^+\rangle$ state, but with the opposite sign of the probability amplitude of $|11\rangle$. Do not include a measurement, as we will verify the state with the wavefunction simulator.
###Code
#
# YOUR CODE HERE
#
amplitudes = get_amplitudes(circuit)
assert all(np.isclose(np.array([np.sqrt(2)/2, 0, 0, -np.sqrt(2)/2]), amplitudes))
###Output
_____no_output_____
###Markdown
Before you begin, execute this cell to import numpy and packages from the D-Wave Ocean suite, and all necessary functions for the gate-model framework you are going to use, whether that is the Forest SDK or Qiskit. In the case of Forest SDK, it also starts the qvm and quilc servers.
###Code
%run -i "assignment_helper.py"
%matplotlib inline
###Output
Available frameworks:
Qiskit
D-Wave Ocean
###Markdown
Classical probability distributions**Exercise 1** (1 point). Recall that in classical coin flipping, you get heads with probability $P(X=0) = p_0$ and tails with $P(X=1) = p_1$ for each toss of the coin, where $p_i\geq 0$ for all $i$, and the probabilities sum to one: $\sum_i p_i = 1$. Create a sample with 1000 data points using numpy, with the probability of getting tails being 0.3. This is the parameter that the `binomial` function takes. Store the outcome in an array called `x_data`.
###Code
n_samples = 1000
###
### YOUR CODE HERE
###
## My code here
x_data = np.random.binomial(1, p=0.3, size=n_samples)
p_1 = 0.3
assert isinstance(x_data, np.ndarray)
assert abs(p_1-x_data.sum()/n_samples) < 0.05
###Output
_____no_output_____
###Markdown
**Exercise 2** (1 point). As you recall, we may also write the probability distribution as a stochastic vector $\vec{p} = \begin{bmatrix} p_0 \\ p_1 \end{bmatrix}$. The normalization constraint on the probability distribution says that the norm of the vector is restricted to one in the $l_1$ norm. In other words, $||\vec{p}||_1 = \sum_i |p_i| = 1$. This would be the unit circle in the $l_1$ norm, but since $p_i\geq 0$, we are restricted to a quarter of the unit circle, just as we plotted above. Write a function that checks whether a given two-dimensional vector is a stochastic vector. That is, it should return `True` if all elements are positive and the 1-norm is approximately one, and it should return `False` otherwise. The input of the function is a numpy array.
###Code
def is_stochastic_vector(p: np.array):
###
### YOUR CODE HERE
###
# my code here
    # all entries non-negative and the 1-norm approximately one
    return bool(np.all(p >= 0) and abs(p.sum() - 1) < 0.01)
assert not is_stochastic_vector(np.array([0.2, 0.3]))
assert not is_stochastic_vector(np.array([-0.2, 0.7]))
assert is_stochastic_vector(np.array([0.2, 0.8]))
###Output
_____no_output_____
###Markdown
**Exercise 3** (1 point). The probability of heads is just the first element in the $\vec{p}$ and we can use a projection to extract it. For the first element of the stochastic vector, the projection is described by the matrix $\begin{bmatrix} 1 & 0\\0 & 0\end{bmatrix}$. Write a function that performs this projection on a two-element vector described by a numpy array. Your output after the projection is also a two-element vector.
###Code
def project_to_first_basis_vector(p: np.array):
###
### YOUR CODE HERE
###
proj = np.array([[1,0], [0,0]])
return np.matmul(proj, p)
assert np.alltrue(project_to_first_basis_vector(np.array([0.2, 0.3])) == np.array([0.2, 0.]))
assert np.alltrue(project_to_first_basis_vector(np.array([1., 0.])) == np.array([1., 0.]))
###Output
_____no_output_____
###Markdown
**Exercise 4** (1 point). The projection operators introduce some linear algebra to working with probability distributions. We can also use linear algebra to transform one probability distribution to another. A left *stochastic matrix* will map stochastic vectors to stochastic vectors when multiplied from the left: its columns add up to one. Write a function that takes a matrix and a vector as input arguments (both are numpy arrays), checks whether the vector is a stochastic vector and whether the matrix is left stochastic. If they are, return the matrix applied to the vector, otherwise raise a `ValueError`. You can call the function `is_stochastic_vector` that you defined above.
###Code
def apply_stochastic_matrix(p: np.array, M: np.array):
"""Apply the matrix M to the vector p, but only if
p is a stochastic vector and M is a left stochastic
matrix. Otherwise raise a ValueError.
"""
###
### YOUR CODE HERE
###
# my code here
if is_stochastic_vector(p) & is_stochastic_vector(M[:,0]) & is_stochastic_vector(M[:, 1]):
return np.matmul(M, p)
raise ValueError
p = np.array([[.5], [.5]])
M = np.array([[0.7, 0.6], [0.3, 0.4]])
assert abs(np.linalg.norm(apply_stochastic_matrix(p, M), ord=1)-1) < 0.01
M = np.array([[0.7, 0.6], [0.3, 0.5]])
try:
apply_stochastic_matrix(p, M)
except ValueError:
pass
else:
raise AssertionError("did not raise")
###Output
_____no_output_____
###Markdown
**Exercise 5** (1 point). Create a left stochastic matrix in a variable called `M` that transforms the uniform distribution $\vec{p}= \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}$ to $\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$. `M` should be a two-dimensional numpy array.
###Code
###
### YOUR CODE HERE
###
assert np.allclose(M.dot(np.array([0.5, 0.5])), np.array([0.6, 0.4]))
###Output
_____no_output_____
###Markdown
**Exercise 6** (1 point). Calculate the entropy of this distribution $\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$ in a variable called `S`.
###Code
###
### YOUR CODE HERE
###
p = np.array([0.6, 0.4])
S = -np.dot(p, np.log(p))  # the exercise asks for the entropy in a variable called S
###
### AUTOGRADER TEST - DO NOT REMOVE
###
###Output
_____no_output_____
###Markdown
Quantum states**Exercise 7** (1 point). A quantum state is a probability distribution. A qubit state is a distribution over two values, similar to the coin flipping in the classical state. A major difference is that the entries are complex numbers and the normalization is in the $l_2$ norm. Create a function similar to `is_stochastic_vector` that checks whether a vector is a valid quantum state. The input is a numpy array and the output should be boolean.
###Code
def is_quantum_state(psi: np.array):
###
### YOUR CODE HERE
###
# my code here
return abs(np.linalg.norm(psi, 2)-1) < 1e-8
assert is_quantum_state(np.array([1/np.sqrt(2), 1/np.sqrt(2)]))
assert is_quantum_state(np.array([-1/np.sqrt(2), 1/np.sqrt(2)]))
assert is_quantum_state(np.array([-1/3, 2*np.sqrt(2)/3]))
assert is_quantum_state(np.array([-1j/3, 2*np.sqrt(2)/3]))
assert not is_quantum_state(np.array([0.2, 0.8]))
###Output
_____no_output_____
###Markdown
**Exercise 8** (1 point). While working with numpy arrays is convenient, it is better to use a framework designed for quantum computing, since it often allows us to execute a circuit directly on a quantum computer. In your preferred framework, implement a circuit of a single qubit with no operation on it. You should create it in an object called `circuit`. Do not add a measurement. The evaluation will automatically branch according to which framework you chose.
###Code
###
### YOUR CODE HERE
###
# my code here
from qiskit import QuantumCircuit
circuit = QuantumCircuit(1)
amplitudes = get_amplitudes(circuit)
assert abs(amplitudes[0]-1.0) < 0.01
###Output
_____no_output_____
###Markdown
**Exercise 9** (1 point). In the execution branching above, you see that we use the wavefunction simulator. This allows us to use the probability amplitudes as usual numpy arrays, as you can see above. If we ran the circuit on an actual quantum device, we would not be able to inspect the wavefunction, but we would have to rely on the statistics of measurements to understand what is happening in the circuit.Create a circuit in your preferred framework that creates an equal superposition in a qubit using a Hadamard gate. Again, the name of the object should be `circuit`. The evaluation will be based on measurement statistics. In this case, you should explicitly specify the measurement on the qubit
###Code
###
### YOUR CODE HERE
###
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
qbit = QuantumRegister(1)
cbit = ClassicalRegister(1)
circuit = QuantumCircuit(qbit, cbit)
circuit.h(0)
circuit.measure(qbit, cbit)
circuit.draw()
counts = get_counts(circuit)
assert abs(counts['0']/100-.5) < 0.2
###Output
_____no_output_____
###Markdown
**Exercise 10** (1 point). If you plotted the state before measurement on the Bloch sphere, it would have been on the equator halfway between the $|0\rangle$ and $|1\rangle$ states, at the tip of the X axis. If you apply the Hadamard on the $|1\rangle$, it would have been the point at the opposite end of the X axis, since the resulting superposition would have had a negative amplitude for $|1\rangle$. The measurement statistics, however, would be identical. The negative sign plays a role in interference: for instance, applying a Hadamard again would take you back to $|1\rangle$. Create the superposition after applying the Hadamard gate on $|1\rangle$. We will verify whether it picked up the phase. Do not include a measurement, since we will inspect the wavefunction.
###Code
###
### YOUR CODE HERE
###
circuit = QuantumCircuit(1)
circuit.x(0)
circuit.h(0)
from qiskit import BasicAer
from qiskit.tools.visualization import plot_histogram, plot_bloch_multivector
backend_statevector = BasicAer.get_backend('statevector_simulator')
job = execute(circuit, backend_statevector)
plot_bloch_multivector(job.result().get_statevector(circuit))
amplitudes = get_amplitudes(circuit)
assert abs(amplitudes[1]+np.sqrt(2)/2) < 0.01
###Output
_____no_output_____
###Markdown
More qubits and entanglement**Exercise 11** (1 point). To get a sense of multiqubit states, it is important to be confident with the tensor product operation. Create a function that returns the four basis vectors, $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$, of the tensor product space $\mathbb{C}^2\otimes\mathbb{C}^2$. The order in which they appear does not matter. The return value should be a list of four numpy arrays.
###Code
def create_canonical_basis():
###
### YOUR CODE HERE
###
return np.array([[1,0,0,0],
[0,1,0,0],
[0,0,1,0],
[0,0,0,1]])
basis = create_canonical_basis()
assert len(basis) == 4
if basis[0].shape != (4, ):
basis = [basis_vector.reshape((4, )) for basis_vector in basis]
###
### AUTOGRADER TEST - DO NOT REMOVE
###
###Output
_____no_output_____
###Markdown
**Exercise 12** (1 point). A generic product state has the form $\begin{bmatrix}a_0b_0\\ a_0b_1\\ a_1b_0\\ a_1b_1\end{bmatrix}=a_0b_0|00\rangle + a_0b_1|01\rangle + a_1b_0|10\rangle + a_1b_1|11\rangle$ on $\mathbb{C}^2\otimes\mathbb{C}^2$, but not every vector in this space has this form. We can use the basis vectors to form vectors in the space that do not have a product structure. These are entangled states that show strong correlations. Entanglement is an important resource in quantum computing and being able to create a circuit that generates an entangled state is critical. Implement a circuit in your preferred framework to create the $|\phi^-\rangle = \frac{1}{\sqrt{2}}(|00\rangle-|11\rangle)$ state, that is, almost the same as the $|\phi^+\rangle$ state, but with the opposite sign of the probability amplitude of $|11\rangle$. Do not include a measurement, as we will verify the state with the wavefunction simulator.
###Code
###
### YOUR CODE HERE
###
# my code here
q = QuantumRegister(2)
circuit = QuantumCircuit(q)
circuit.x(q[0])
circuit.h(q[0])
circuit.cx(q[0], q[1])
circuit.draw()
from qiskit import BasicAer
from qiskit.tools.visualization import plot_histogram, plot_bloch_multivector
backend_statevector = BasicAer.get_backend('statevector_simulator')
job = execute(circuit, backend_statevector)
plot_bloch_multivector(job.result().get_statevector(circuit))
amplitudes = get_amplitudes(circuit)
assert np.allclose(np.array([np.sqrt(2)/2, 0, 0, -np.sqrt(2)/2]), amplitudes)
###Output
_____no_output_____
###Markdown
Before you begin, execute this cell to import numpy and packages from the D-Wave Ocean suite, and all necessary functions for the gate-model framework you are going to use, whether that is the Forest SDK or Qiskit. In the case of Forest SDK, it also starts the qvm and quilc servers.
###Code
%run -i "assignment_helper.py"
%matplotlib inline
###Output
_____no_output_____
###Markdown
Classical probability distributions**Exercise 1** (1 point). Recall that in classical coin flipping, you get heads with probability $P(X=0) = p_0$ and tails with $P(X=1) = p_1$ for each toss of the coin, where $p_i\geq 0$ for all $i$, and the probabilities sum to one: $\sum_i p_i = 1$. Create a sample with 1000 data points using numpy, with the probability of getting tails being 0.3. This is the parameter that the `binomial` function takes. Store the outcome in an array called `x_data`.
###Code
n_samples = 1000
###
### YOUR CODE HERE
###
assert isinstance(x_data, np.ndarray)
assert abs(p_1-x_data.sum()/n_samples) < 0.05
###Output
_____no_output_____
###Markdown
**Exercise 2** (1 point). As you recall, we may also write the probability distribution as a stochastic vector $\vec{p} = \begin{bmatrix} p_0 \\ p_1 \end{bmatrix}$. The normalization constraint on the probability distribution says that the norm of the vector is restricted to one in the $l_1$ norm. In other words, $||\vec{p}||_1 = \sum_i |p_i| = 1$. This would be the unit circle in the $l_1$ norm, but since $p_i\geq 0$, we are restricted to a quarter of the unit circle, just as we plotted above. Write a function that checks whether a given two-dimensional vector is a stochastic vector. That is, it should return `True` if all elements are positive and the 1-norm is approximately one, and it should return `False` otherwise. The input of the function is a numpy array.
###Code
def is_stochastic_vector(p: np.array):
###
### YOUR CODE HERE
###
assert not is_stochastic_vector(np.array([0.2, 0.3]))
assert not is_stochastic_vector(np.array([-0.2, 0.7]))
assert is_stochastic_vector(np.array([0.2, 0.8]))
###Output
_____no_output_____
###Markdown
**Exercise 3** (1 point). The probability of heads is just the first element in the $\vec{p}$ and we can use a projection to extract it. For the first element of the stochastic vector, the projection is described by the matrix $\begin{bmatrix} 1 & 0\\0 & 0\end{bmatrix}$. Write a function that performs this projection on a two-element vector described by a numpy array. Your output after the projection is also a two-element vector.
###Code
def project_to_first_basis_vector(p: np.array):
###
### YOUR CODE HERE
###
assert np.alltrue(project_to_first_basis_vector(np.array([0.2, 0.3])) == np.array([0.2, 0.]))
assert np.alltrue(project_to_first_basis_vector(np.array([1., 0.])) == np.array([1., 0.]))
###Output
_____no_output_____
###Markdown
**Exercise 4** (1 point). The projection operators introduce some linear algebra to working with probability distributions. We can also use linear algebra to transform one probability distribution to another. A left *stochastic matrix* will map stochastic vectors to stochastic vectors when multiplied from the left: its columns add up to one. Write a function that takes a matrix and a vector as input arguments (both are numpy arrays), checks whether the vector is a stochastic vector and whether the matrix is left stochastic. If they are, return the matrix applied to the vector, otherwise raise a `ValueError`. You can call the function `is_stochastic_vector` that you defined above.
###Code
def apply_stochastic_matrix(p: np.array, M: np.array):
"""Apply the matrix M to the vector p, but only if
p is a stochastic vector and M is a left stochastic
matrix. Otherwise raise a ValueError.
"""
###
### YOUR CODE HERE
###
p = np.array([[.5], [.5]])
M = np.array([[0.7, 0.6], [0.3, 0.4]])
assert abs(np.linalg.norm(apply_stochastic_matrix(p, M), ord=1)-1) < 0.01
M = np.array([[0.7, 0.6], [0.3, 0.5]])
try:
apply_stochastic_matrix(p, M)
except ValueError:
pass
else:
raise AssertionError("did not raise")
###Output
_____no_output_____
###Markdown
**Exercise 5** (1 point). Create a left stochastic matrix in a variable called `M` that transforms the uniform distribution $\vec{p}= \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}$ to $\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$. `M` should be a two-dimensional numpy array.
###Code
###
### YOUR CODE HERE
###
assert np.allclose(M.dot(np.array([0.5, 0.5])), np.array([0.6, 0.4]))
###Output
_____no_output_____
###Markdown
**Exercise 6** (1 point). Calculate the entropy of this distribution $\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$ in a variable called `S`.
###Code
###
### YOUR CODE HERE
###
###
### AUTOGRADER TEST - DO NOT REMOVE
###
###Output
_____no_output_____
###Markdown
Quantum states**Exercise 7** (1 point). A quantum state is a probability distribution. A qubit state is a distribution over two values, similar to the coin flipping in the classical state. A major difference is that the entries are complex numbers and the normalization is in the $l_2$ norm. Create a function similar to `is_stochastic_vector` that checks whether a vector is a valid quantum state. The input is a numpy array and the output should be boolean.
###Code
def is_quantum_state(psi: np.array):
###
### YOUR CODE HERE
###
assert is_quantum_state(np.array([1/np.sqrt(2), 1/np.sqrt(2)]))
assert is_quantum_state(np.array([-1/np.sqrt(2), 1/np.sqrt(2)]))
assert is_quantum_state(np.array([-1/3, 2*np.sqrt(2)/3]))
assert is_quantum_state(np.array([-1j/3, 2*np.sqrt(2)/3]))
assert not is_quantum_state(np.array([0.2, 0.8]))
###Output
_____no_output_____
###Markdown
**Exercise 8** (1 point). While working with numpy arrays is convenient, it is better to use a framework designed for quantum computing, since it often allows us to execute a circuit directly on a quantum computer. In your preferred framework, implement a circuit of a single qubit with no operation on it. You should create it in an object called `circuit`. Do not add a measurement. The evaluation will automatically branch according to which framework you chose.
###Code
###
### YOUR CODE HERE
###
amplitudes = get_amplitudes(circuit)
assert abs(amplitudes[0]-1.0) < 0.01
###Output
_____no_output_____
###Markdown
**Exercise 9** (1 point). In the execution branching above, you see that we use the wavefunction simulator. This allows us to use the probability amplitudes as usual numpy arrays, as you can see above. If we ran the circuit on an actual quantum device, we would not be able to inspect the wavefunction, but we would have to rely on the statistics of measurements to understand what is happening in the circuit.Create a circuit in your preferred framework that creates an equal superposition in a qubit using a Hadamard gate. Again, the name of the object should be `circuit`. The evaluation will be based on measurement statistics. In this case, you should explicitly specify the measurement on the qubit
###Code
###
### YOUR CODE HERE
###
counts = get_counts(circuit)
assert abs(counts['0']/100-.5) < 0.2
###Output
_____no_output_____
###Markdown
**Exercise 10** (1 point). If you plotted the state before measurement on the Bloch sphere, it would have been on the equator halfway between the $|0\rangle$ and $|1\rangle$ states, at the tip of the X axis. If you apply the Hadamard on the $|1\rangle$, it would have been the point at the opposite end of the X axis, since the resulting superposition would have had a negative amplitude for $|1\rangle$. The measurement statistics, however, would be identical. The negative sign plays a role in interference: for instance, applying a Hadamard again would take you back to $|1\rangle$. Create the superposition after applying the Hadamard gate on $|1\rangle$. We will verify whether it picked up the phase. Do not include a measurement, since we will inspect the wavefunction.
###Code
###
### YOUR CODE HERE
###
amplitudes = get_amplitudes(circuit)
assert abs(amplitudes[1]+np.sqrt(2)/2) < 0.01
###Output
_____no_output_____
###Markdown
More qubits and entanglement**Exercise 11** (1 point). To get a sense of multiqubit states, it is important to be confident with the tensor product operation. Create a function that returns the four basis vectors, $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$, of the tensor product space $\mathbb{C}^2\otimes\mathbb{C}^2$. The order in which they appear does not matter. The return value should be a list of four numpy arrays.
###Code
def create_canonical_basis():
###
### YOUR CODE HERE
###
basis = create_canonical_basis()
assert len(basis) == 4
if basis[0].shape != (4, ):
basis = [basis_vector.reshape((4, )) for basis_vector in basis]
###
### AUTOGRADER TEST - DO NOT REMOVE
###
###Output
_____no_output_____
###Markdown
**Exercise 12** (1 point). A generic product state has the form $\begin{bmatrix}a_0b_0\\ a_0b_1\\ a_1b_0\\ a_1b_1\end{bmatrix}=a_0b_0|00\rangle + a_0b_1|01\rangle + a_1b_0|10\rangle + a_1b_1|11\rangle$ on $\mathbb{C}^2\otimes\mathbb{C}^2$, but not every vector in this space has this form. We can use the basis vectors to form vectors in the space that do not have a product structure. These are entangled states that show strong correlations. Entanglement is an important resource in quantum computing and being able to create a circuit that generates an entangled state is critical. Implement a circuit in your preferred framework to create the $|\phi^-\rangle = \frac{1}{\sqrt{2}}(|00\rangle-|11\rangle)$ state, that is, almost the same as the $|\phi^+\rangle$ state, but with the opposite sign of the probability amplitude of $|11\rangle$. Do not include a measurement, as we will verify the state with the wavefunction simulator.
###Code
###
### YOUR CODE HERE
###
amplitudes = get_amplitudes(circuit)
assert np.allclose(np.array([np.sqrt(2)/2, 0, 0, -np.sqrt(2)/2]), amplitudes)
###Output
_____no_output_____
###Markdown
Before you begin, execute this cell to import numpy and packages from the D-Wave Ocean suite, and all necessary functions for the gate-model framework you are going to use, whether that is the Forest SDK or Qiskit. In the case of Forest SDK, it also starts the qvm and quilc servers.
###Code
%run -i "assignment_helper.py"
%matplotlib inline
###Output
/home/aditya/anaconda3/envs/qiskit-env/lib/python3.9/site-packages/qiskit/aqua/__init__.py:86: DeprecationWarning: The package qiskit.aqua is deprecated. It was moved/refactored to qiskit-terra For more information see <https://github.com/Qiskit/qiskit-aqua/blob/main/README.md#migration-guide>
warn_package('aqua', 'qiskit-terra')
###Markdown
Classical probability distributions**Exercise 1** (1 point). Recall that in classical coin flipping, you get heads with probability $P(X=0) = p_0$ and tails with $P(X=1) = p_1$ for each toss of the coin, where $p_i\geq 0$ for all $i$, and the probabilities sum to one: $\sum_i p_i = 1$. Create a sample with 1000 data points using numpy, with the probability of getting tails being 0.3. This is the parameter that the `binomial` function takes. Store the outcome in an array called `x_data`.
###Code
n_samples = 1000
###
### YOUR CODE HERE
###
p_1 = 0.3
x_data = np.random.binomial(n=1, p=p_1, size=(n_samples,))
assert isinstance(x_data, np.ndarray)
assert abs(p_1-x_data.sum()/n_samples) < 0.05
###Output
_____no_output_____
###Markdown
**Exercise 2** (1 point). As you recall, we may also write the probability distribution as a stochastic vector $\vec{p} = \begin{bmatrix} p_0 \\ p_1 \end{bmatrix}$. The normalization constraint on the probability distribution says that the norm of the vector is restricted to one in the $l_1$ norm. In other words, $||\vec{p}||_1 = \sum_i |p_i| = 1$. This would be the unit circle in the $l_1$ norm, but since $p_i\geq 0$, we are restricted to a quarter of the unit circle, just as we plotted above. Write a function that checks whether a given two-dimensional vector is a stochastic vector. That is, it should return `True` if all elements are positive and the 1-norm is approximately one, and it should return `False` otherwise. The input of the function is a numpy array.
###Code
def is_stochastic_vector(p: np.array):
###
### YOUR CODE HERE
###
return np.all(p > 0) and np.abs(np.sum(p) - 1) < 0.01
assert not is_stochastic_vector(np.array([0.2, 0.3]))
assert not is_stochastic_vector(np.array([-0.2, 0.7]))
assert is_stochastic_vector(np.array([0.2, 0.8]))
###Output
_____no_output_____
###Markdown
**Exercise 3** (1 point). The probability of heads is just the first element in the $\vec{p}$ and we can use a projection to extract it. For the first element of the stochastic vector, the projection is described by the matrix $\begin{bmatrix} 1 & 0\\0 & 0\end{bmatrix}$. Write a function that performs this projection on a two-element vector described by a numpy array. Your output after the projection is also a two-element vector.
###Code
def project_to_first_basis_vector(p: np.array):
###
### YOUR CODE HERE
###
head_projection = np.array([[1, 0],
[0, 0]])
return head_projection @ p
assert np.alltrue(project_to_first_basis_vector(np.array([0.2, 0.3])) == np.array([0.2, 0.]))
assert np.alltrue(project_to_first_basis_vector(np.array([1., 0.])) == np.array([1., 0.]))
###Output
_____no_output_____
###Markdown
**Exercise 4** (1 point). The projection operators introduce some linear algebra to working with probability distributions. We can also use linear algebra to transform one probability distribution to another. A left *stochastic matrix* will map stochastic vectors to stochastic vectors when multiplied from the left: its columns add up to one. Write a function that takes a matrix and a vector as input arguments (both are numpy arrays), checks whether the vector is a stochastic vector and whether the matrix is left stochastic. If they are, return the matrix applied to the vector, otherwise raise a `ValueError`. You can call the function `is_stochastic_vector` that you defined above.
###Code
def apply_stochastic_matrix(p: np.array, M: np.array):
"""Apply the matrix M to the vector p, but only if
p is a stochastic vector and M is a left stochastic
matrix. Otherwise raise a ValueError.
"""
###
### YOUR CODE HERE
###
    if is_stochastic_vector(p) and np.allclose(np.sum(M, axis=0), 1):
return M @ p
else:
raise ValueError
p = np.array([[.5], [.5]])
M = np.array([[0.7, 0.6], [0.3, 0.4]])
assert abs(np.linalg.norm(apply_stochastic_matrix(p, M), ord=1)-1) < 0.01
M = np.array([[0.7, 0.6], [0.3, 0.5]])
try:
apply_stochastic_matrix(p, M)
except ValueError:
pass
else:
raise AssertionError("did not raise")
###Output
_____no_output_____
###Markdown
**Exercise 5** (1 point). Create a left stochastic matrix in a variable called `M` that transforms the uniform distribution $\vec{p}= \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}$ to $\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$. `M` should be a two-dimensional numpy array.
###Code
###
### YOUR CODE HERE
###
M = np.array([[1, 0.2],
[0, 0.8]])
assert np.allclose(M.dot(np.array([0.5, 0.5])), np.array([0.6, 0.4]))
###Output
_____no_output_____
###Markdown
**Exercise 6** (1 point). Calculate the entropy of this distribution $\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$ in a variable called `S`.
###Code
###
### YOUR CODE HERE
###
p = np.array([0.6, 0.4])
S = np.sum(-p * np.log(p))
###
### AUTOGRADER TEST - DO NOT REMOVE
###
###Output
_____no_output_____
###Markdown
Quantum states**Exercise 7** (1 point). A quantum state is a probability distribution. A qubit state is a distribution over two values, similar to the coin flipping in the classical state. A major difference is that the entries are complex numbers and the normalization is in the $l_2$ norm. Create a function similar to `is_stochastic_vector` that checks whether a vector is a valid quantum state. The input is a numpy array and the output should be boolean.
###Code
def is_quantum_state(psi: np.array):
###
### YOUR CODE HERE
###
return np.allclose(np.linalg.norm(psi), 1)
assert is_quantum_state(np.array([1/np.sqrt(2), 1/np.sqrt(2)]))
assert is_quantum_state(np.array([-1/np.sqrt(2), 1/np.sqrt(2)]))
assert is_quantum_state(np.array([-1/3, 2*np.sqrt(2)/3]))
assert is_quantum_state(np.array([-1j/3, 2*np.sqrt(2)/3]))
assert not is_quantum_state(np.array([0.2, 0.8]))
###Output
_____no_output_____
###Markdown
**Exercise 8** (1 point). While working with numpy arrays is convenient, it is better to use a framework designed for quantum computing, since it often allows us to execute a circuit directly on a quantum computer. In your preferred framework, implement a circuit of a single qubit with no operation on it. You should create it in an object called `circuit`. Do not add a measurement. The evaluation will automatically branch according to which framework you chose.
###Code
###
### YOUR CODE HERE
###
circuit = QuantumCircuit(1)
circuit.id(0)
circuit.draw(output='mpl')
amplitudes = get_amplitudes(circuit)
assert abs(amplitudes[0]-1.0) < 0.01
###Output
_____no_output_____
###Markdown
**Exercise 9** (1 point). In the execution branching above, you see that we use the wavefunction simulator. This allows us to use the probability amplitudes as usual numpy arrays, as you can see above. If we ran the circuit on an actual quantum device, we would not be able to inspect the wavefunction, but we would have to rely on the statistics of measurements to understand what is happening in the circuit.Create a circuit in your preferred framework that creates an equal superposition in a qubit using a Hadamard gate. Again, the name of the object should be `circuit`. The evaluation will be based on measurement statistics. In this case, you should explicitly specify the measurement on the qubit
###Code
###
### YOUR CODE HERE
###
circuit = QuantumCircuit(1)
circuit.h(0)
circuit.measure_all()
circuit.draw(output='mpl')
counts = get_counts(circuit)
assert abs(counts['0']/100-.5) < 0.2
###Output
_____no_output_____
###Markdown
**Exercise 10** (1 point). If you plotted the state before measurement on the Bloch sphere, it would have been on the equator halfway between the $|0\rangle$ and $|1\rangle$ states, at the tip of the X axis. If you apply the Hadamard on the $|1\rangle$, it would have been the point at the opposite end of the X axis, since the resulting superposition would have had a negative amplitude for $|1\rangle$. The measurement statistics, however, would be identical. The negative sign plays a role in interference: for instance, applying a Hadamard again would take you back to $|1\rangle$. Create the superposition after applying the Hadamard gate on $|1\rangle$. We will verify whether it picked up the phase. Do not include a measurement, since we will inspect the wavefunction.
###Code
###
### YOUR CODE HERE
###
circuit = QuantumCircuit(1)
circuit.x(0)
circuit.h(0)
circuit.draw(output='mpl')
amplitudes = get_amplitudes(circuit)
assert abs(amplitudes[1]+np.sqrt(2)/2) < 0.01
###Output
_____no_output_____
###Markdown
More qubits and entanglement**Exercise 11** (1 point). To get a sense of multiqubit states, it is important to be confident with the tensor product operation. Create a function that returns the four basis vectors, $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$, of the tensor product space $\mathbb{C}^2\otimes\mathbb{C}^2$. The order in which they appear does not matter. The return value should be a list of four numpy arrays.
###Code
def create_canonical_basis():
###
### YOUR CODE HERE
###
qubit_basis = [np.array([1, 0]), np.array([0, 1])]
canonical_basis = []
for i in qubit_basis:
for j in qubit_basis:
canonical_basis.append(np.kron(i, j))
return canonical_basis
basis = create_canonical_basis()
assert len(basis) == 4
if basis[0].shape != (4, ):
basis = [basis_vector.reshape((4, )) for basis_vector in basis]
###
### AUTOGRADER TEST - DO NOT REMOVE
###
###Output
_____no_output_____
###Markdown
**Exercise 12** (1 point). A generic product state has the form $\begin{bmatrix}a_0b_0\\ a_0b_1\\ a_1b_0\\ a_1b_1\end{bmatrix}=a_0b_0|00\rangle + a_0b_1|01\rangle + a_1b_0|10\rangle + a_1b_1|11\rangle$ on $\mathbb{C}^2\otimes\mathbb{C}^2$, but not every vector in this space has this form. We can use the basis vectors to form vectors in the space that do not have a product structure. These are entangled states that show strong correlations. Entanglement is an important resource in quantum computing and being able to create a circuit that generates an entangled state is critical. Implement a circuit in your preferred framework to create the $|\phi^-\rangle = \frac{1}{\sqrt{2}}(|00\rangle-|11\rangle)$ state, that is, almost the same as the $|\phi^+\rangle$ state, but with the opposite sign of the probability amplitude of $|11\rangle$. Do not include a measurement, as we will verify the state with the wavefunction simulator.
###Code
###
### YOUR CODE HERE
###
circuit = QuantumCircuit(2)
circuit.x(0)
circuit.h(0)
circuit.cnot(0, 1)
circuit.draw(output='mpl')
amplitudes = get_amplitudes(circuit)
assert np.allclose(np.array([np.sqrt(2)/2, 0, 0, -np.sqrt(2)/2]), amplitudes)
###Output
_____no_output_____ |
notebooks/003_terms_and_formulas.ipynb | ###Markdown
Terms, Formulas and Interpretations Now we have all the elements to formally define ```Tarski``` languages: **Definition** (Many-Sorted First-Order Language). A _many-sorted_ _first-order_ language ${\cal L}$ is made up of: - A non-empty set $T$ of _sorts_ - An _infinite number_ of _variables_ $x_{1}^{\tau}, x_{2}^{\tau}, \ldots$ for each sort $\tau \in T$ - For each $n \geq 0$ and each tuple $(\tau_1, \ldots, \tau_{n+1}) \in T^{n+1}$ of sorts, a (possibly empty) set of _function_ symbols, each of which is said to have _arity_ and _type_ $(\tau_1, \ldots, \tau_{n+1})$ - For each $n \geq 0$ and each tuple $(\tau_1, \ldots, \tau_{n}) \in T^{n}$ of sorts, a (possibly empty) set of _relation_ symbols (predicates), each of which is said to have _arity_ and _type_ $(\tau_1, \ldots, \tau_{n})$ Continuing with our ```Blocks World```-themed example
###Code
import tarski
from tarski.symbols import *
from tarski.theories import Theory
# 1. Create language used to describe world states and transitions
bw = tarski.language(theories=[Theory.EQUALITY, Theory.ARITHMETIC])
# 2. Define sorts
place = bw.sort('place')
block = bw.sort('block', place)
# 3. Define functions
loc = bw.function( 'loc', block, place )
looking_at = bw.function( 'looking_at', block )
# 4. Define predicates
clear = bw.predicate( 'clear', block)
###Output
_____no_output_____
###Markdown
We introduce the function $width(b)$ for blocks $b$; this will allow us to specify Towers of Hanoi-like tasks.
###Code
width = bw.function('width', block, bw.Real)
###Output
_____no_output_____
###Markdown
_Constants_ are 0-arity functions whose sort $\tau$ is a set with a single element. Hence, we handle them separately, as we specialise their representation.
###Code
# 5. Define constants
b1, b2, b3, b4 = [ bw.constant('b_{}'.format(k), block) for k in (1,2,3,4) ]
table = bw.constant('table', place)
###Output
_____no_output_____
###Markdown
(First-Order) Terms Combinations of variables, functions and constants are called _terms_, and the rules for constructing them are given inductively: **Definition** (First-Order Terms). A term $t$ can be: - Any variable $x^{\tau}$ of the language can be a term $t$ with type $\tau$ - Any constant symbol of the language with type $\tau$ is a term with the same type - If $t_1, \ldots, t_n$ are terms with respective types $\tau_1, \ldots, \tau_n$ and $f$ is a _function_ symbol with type $(\tau_1, \ldots, \tau_n, \tau_{n+1})$ then $f(t_1,\ldots,t_n)$ is a term with type $\tau_{n+1}$. Terms are implemented as Python objects. Every constant symbol is an instance of ```Term```
###Code
from tarski import Term
isinstance(b1,Term)
###Output
_____no_output_____
###Markdown
Function symbols allow us to nest terms, thus
###Code
t1 = loc(b1)
isinstance(t1,Term)
x = bw.variable('x', block)
t2 = loc(x)
isinstance(t2,Term)
t3 = loc(looking_at())
isinstance(t3,Term)
###Output
_____no_output_____
###Markdown
are all terms. ```Tarski```'s textual representation of variables is a bit different
###Code
print('{}, type: {}'.format(t1, t1.sort))
print('{}, type: {}'.format(t2, t2.sort))
print('{}, type: {}'.format(t3, t3.sort))
###Output
loc(b_1), type: Sort(place)
loc(x/block), type: Sort(place)
loc(looking_at()), type: Sort(place)
###Markdown
in order to distinguish variables from constants, the former are printed with the prefix ```?```. Formulas Formulas (statements that can be either ```True``` or ```False```) are also defined inductively, as follows: **Definition** (First-Order Formulas). - If $t_1$ and $t_2$ are two terms with the same type, then $t_1 = t_2$ is an _atomic formula_. - If $t_1,\ldots,t_n$ are terms with respective types $\tau_1,\ldots,\tau_n$, and $R$ is a relation symbol with type $(\tau_1,\ldots,\tau_n)$, then $R(t_1,\ldots,t_n)$ is an atomic formula too. - If $\phi_1$ and $\phi_2$ are formulas then $\neg \phi_1$, $\phi_1 \lor \phi_2$ and $\phi_1 \land \phi_2$ are also formulas. - If $\phi$ is a formula, then $\exists_{\tau} x^{\tau}\, \phi$ and $\forall_{\tau} x^{\tau}\, \phi$ are also formulas. Quantification happens over a certain sort, i.e. for each sort $\tau \in T$ there are universal and existential quantifier symbols $\forall_{\tau}$ and $\exists_{\tau}$, which may be applied to variables of the same sort. Formulas without existential ($\exists$) or universal ($\forall$) quantifiers are called _quantifier free_. Examples We can define the formula $t_1 = t_3$ - terms $t_1$ and $t_3$ are equal - with the following statement
###Code
tau = t1 == t3
###Output
_____no_output_____
###Markdown
The ```str()``` method is overridden for every term and formula class, returning a string representation of the expression, which gives insight into how Tarski represents formulas and expressions internally
###Code
str(tau)
###Output
_____no_output_____
###Markdown
We need a new variable so we can make general statements about more than one block
###Code
y = bw.variable('y', block)
###Output
_____no_output_____
###Markdown
Now we can state properties of states like _for every block x, x cannot be wider than the place below_$$\forall x,y\, loc(x) = y \supset width(x) < width(y)$$which can be written as
###Code
phi = forall( x, y, implies( loc(x) == y, width(x) < width(y) ) )
###Output
_____no_output_____
###Markdown
which is represented internally
###Code
str(phi)
###Output
_____no_output_____
###Markdown
It's worth noting that Tarski will always try to simplify formulas. For instance, the sub-formula $$ loc(x) = y \supset width(x) < width(y)$$was transformed into the disjunction$$loc(x) \neq y \lor width(x) < width(y)$$using the transformation $$p \supset q \equiv \neg p \lor q$$ We can use the operator ```>``` instead of the function ```implies()```, if a more concise syntax is preferred.
###Code
phi = forall( x, y, (loc(x) == y) > (width(x) < width(y)) )
###Output
_____no_output_____
###Markdown
We can write the conjunctive formula$$loc(b1) \neq loc(b2) \land loc(b1) \neq loc(b3)$$in several ways. One is using the ```land()``` function
###Code
phi = land( loc(b1) != loc(b2), loc(b1) != loc(b3))
###Output
_____no_output_____
###Markdown
or the operator ```&```
###Code
phi = (loc(b1) != loc(b2)) & (loc(b1) != loc(b3))
###Output
_____no_output_____
###Markdown
Another state invariant like $$loc(b1) = b2 \lor loc(b1) = b3$$can be written as
###Code
phi = lor( loc(b1) == b2, loc(b1) == b3 )
###Output
_____no_output_____
###Markdown
or
###Code
phi = (loc(b1)==b2) | (loc(b1)==b3)
###Output
_____no_output_____
###Markdown
Finally, the formula $$loc(b1) = b2 \supset \neg clear(b2)$$ can be written as
###Code
phi=implies( loc(b1) == b2, neg(clear(b2)))
str(phi)
###Output
_____no_output_____
###Markdown
or, alternatively the ```~``` unary operator can be used instead of ```neg(...)```
###Code
phi = implies( loc(b1) == b2, ~clear(b2))
str(phi)
###Output
_____no_output_____ |
Notebooks/Final_Model.ipynb | ###Markdown
importing needed modules
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
reading the dataset with the pandas library using the **read_csv** function and seeing what our dataset looks like.
###Code
cars = pd.read_csv("cleaned_data.csv")
cars.head()
###Output
_____no_output_____
###Markdown
exporting column names
###Code
cars.columns
###Output
_____no_output_____
###Markdown
now we are going to create our X and Y data in order to perform the processes described in the first part.
###Code
X = cars[['Name', 'style', 'Exterior color', 'interior color', 'Engine',
'drive type', 'Fuel Type', 'Transmission', 'Mileage', 'mpg city',
'mpg highway', 'Year', 'Engine V', 'Brand']]
Y = cars["price"].values
###Output
_____no_output_____
###Markdown
Encode categorical features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The features are encoded using a one-hot (aka ‘one-of-K’ or ‘dummy’) encoding scheme. This creates a binary column for each category and returns a sparse matrix or dense array (depending on the sparse parameter). By default, the encoder derives the categories based on the unique values in each feature. Alternatively, you can also specify the categories manually. Read the full documentation for more details.
###Code
from sklearn.preprocessing import OneHotEncoder
onehot = OneHotEncoder(categories="auto", handle_unknown="ignore")
categorical_features = onehot.fit_transform(X.iloc[:, [1,4,5,6,7,13]]).toarray()
print(categorical_features)
print(categorical_features.shape)
###Output
(6532, 91)
###Markdown
in this part we are going to delete unnecessary features and the categorical features that we have encoded in the previous part.
###Code
X = np.delete(X.values, [0,1,2,3,4,5,6,7,13], 1)
print(X.shape)
###Output
(6532, 5)
###Markdown
now we combine remaining features with the encoded array:
###Code
X = np.concatenate((X,categorical_features), axis=1)
X.shape
###Output
_____no_output_____
###Markdown
Split arrays or matrices into train and test subsets. Quick utility that wraps input validation and next(ShuffleSplit().split(X, y)) and application to input data into a single call for splitting (and optionally subsampling) data in a one-liner. Read the full documentation for more details.
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
X, Y,
test_size=0.1,
random_state=82,
shuffle=True
)
###Output
_____no_output_____
###Markdown
Random Forest Regressor: A random forest is a meta estimator that fits a number of classifying decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is controlled with the max_samples parameter if bootstrap=True (default), otherwise the whole dataset is used to build each tree. Read the full documentation for more details. GridSearch: Exhaustive search over specified parameter values for an estimator. GridSearchCV implements a “fit” and a “score” method. It also implements “score_samples”, “predict”, “predict_proba”, “decision_function”, “transform” and “inverse_transform” if they are implemented in the estimator used. The parameters of the estimator used to apply these methods are optimized by cross-validated grid-search over a parameter grid. Read the full documentation for more details.
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
rfr_pip = Pipeline([
#("standardizer", StandardScaler()),
("rfr", RandomForestRegressor())
])
param_range = [i for i in range(50,101)]
grid_params = [{"rfr__n_estimators" : param_range}]
grid = GridSearchCV(
rfr_pip,
grid_params,
n_jobs=-1,
cv=5
)
grid.fit(x_train, y_train)
print(grid.best_score_)
print(grid.best_params_)
###Output
0.9046922991577464
{'rfr__n_estimators': 96}
###Markdown
as you can see we used a pipeline for simplicity and used RandomForestRegressor as the estimator. it's not necessary to bring our features onto the same scale while we are using decision trees, but we have commented the code and you can use it if you want (the result will not change). we used GridSearch to find the optimal value of n_estimators in RandomForestRegressor. as you can see we got a slightly better result than our previously trained linear regression model by using 96 separate estimators. let's check the goodness of our model's fit on the test data:
###Code
from sklearn.metrics import r2_score
# obtaining the best estimator from grid
rfr = grid.best_estimator_
rfr.fit(x_train, y_train)
y_pred = rfr.predict(x_test)
print("Test Accuracy : {:.3f}".format(r2_score(y_test, y_pred)))
###Output
Test Accuracy : 0.914
|
Coding Task 01.ipynb | ###Markdown
Given, nx = 2, nh = 4, ny = 1 So, W1.shape == (nh, nx) == (4,2) b1.shape == (nh, 1) == (4,1) W2.shape == (ny, nh) == (1, 4) b2.shape == (ny, 1) == (1, 1) Param initialization
###Code
def init_params(nx, nh, ny):
W1 = np.random.randn(nh, nx)*0.01
b1 = np.zeros((nh, 1))
W2 = np.random.randn(ny, nh) * 0.01
b2 = np.zeros((ny, 1))
assert(W1.shape == (nh, nx))
assert(b1.shape == (nh, 1))
assert(W2.shape == (ny, nh))
assert(b2.shape == (ny,1))
params = {'W1':W1, 'b1':b1, 'W2':W2, 'b2':b2}
return params
###Output
_____no_output_____
###Markdown
Forward Prop
###Code
def sigmoid(Z):
A = 1 / (1+np.exp(-Z))
cache = Z
return A, cache
def relu(Z):
A = np.maximum(0, Z)
cache = Z
    assert(A.shape == Z.shape)
return A, cache
def linear_forward(A, W, b):
Z = np.dot(W, A) + b
    assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
def linear_forward_activation(A_prev, W, b, activation):
if activation == 'sigmoid':
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
elif activation == 'relu':
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
assert(A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
###Output
_____no_output_____
###Markdown
Back Prop
###Code
def sigmoid_back(dA, cache):
Z = cache
s = 1 / (1+np.exp(-Z))
dZ = dA * s * (1-s)
assert(dZ.shape == Z.shape)
return dZ
def relu_back(dA, cache):
Z = cache
dZ = np.array(dA, copy=True)
dZ[Z<=0] = 0
assert(dZ.shape == Z.shape)
return dZ
def linear_backward(dZ, cache):
A_prev, W, b = cache;
m = A_prev.shape[1];
dW = (1/m) * np.dot(dZ ,A_prev.T)
db = 1 / m * np.sum(dZ, axis = 1, keepdims = True)
dA_prev = np.dot(W.T, dZ)
assert(dA_prev.shape == A_prev.shape)
assert(dW.shape == W.shape)
assert(db.shape == b.shape)
return dA_prev, dW, db
def linear_backward_activation(dA, cache, activation):
linear_cache, activation_cache = cache
    if activation == 'sigmoid':
        dZ = sigmoid_back(dA, activation_cache)
    elif activation == 'relu':
        dZ = relu_back(dA, activation_cache)
    dA_prev, dW, db = linear_backward(dZ, linear_cache)
    return dA_prev, dW, db
###Output
_____no_output_____
###Markdown
Cost function
###Code
def compute_cost(AL, Y):
m = Y.shape[1];
cost = -1 / m * (np.dot(Y, np.log(AL).T) + np.dot(1 - Y, np.log(1 - AL).T))
cost = np.squeeze(cost)
assert(cost.shape == ())
return cost
###Output
_____no_output_____
###Markdown
Update Params
###Code
def update_params(params, grads, lr):
L = len(params) // 2
for l in range(L):
params['W'+str(l+1)] -= lr*grads['dW'+str(l+1)]
params['b'+str(l+1)] -= lr*grads['db'+str(l+1)]
return params
###Output
_____no_output_____
###Markdown
Model
###Code
def model(X, Y, layer_dims, num_iter=500, lr=0.01):
grads={}
cost=[]
m = X.shape[1]
(nx, nh, ny) = layer_dims
params = init_params(nx, nh, ny)
W1, b1, W2, b2 = params['W1'], params['b1'], params['W2'], params['b2']
for i in range(0, num_iter):
A1, cache1 = linear_forward_activation(X, W1, b1, activation='relu')
A2, cache2 = linear_forward_activation(A1, W2, b2, activation = 'sigmoid')
cost = compute_cost(A2, Y)
dA2 = - (np.divide(Y, A2) - np.divide(1-Y, 1-A2))
dA1, dW2, db2 = linear_backward_activation(dA2, cache2, activation='sigmoid')
dA0, dW1, db1 = linear_backward_activation(dA1, cache1, activation='relu')
grads['dW1'], grads['db1'], grads['dW2'], grads['db2'] = dW1, db1, dW2, db2
params = update_params(params, grads, lr)
W1, b1, W2, b2 = params['W1'], params['b1'], params['W2'], params['b2']
print(cost)
return params
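# (Illustrative usage sketch added for clarity, not part of the original task.)
# Exercise the 2-4-1 network end-to-end on a tiny random binary-classification
# problem; the data below is synthetic and only meant to check that the pieces fit.
np.random.seed(1)
X_demo = np.random.randn(2, 20)                              # nx=2 features, 20 examples
Y_demo = (X_demo[0:1, :] + X_demo[1:2, :] > 0).astype(int)   # labels of shape (1, 20)
demo_params = model(X_demo, Y_demo, layer_dims=(2, 4, 1), num_iter=200, lr=0.1)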
###Output
_____no_output_____ |
leo/Stage1Phase2-NN.ipynb | ###Markdown
Take Training Data
###Code
folder = 'data/10fold_stacking/' # results of the models' joint predictions on the stacking (verified) data
acc_df = pd.read_csv('data/ens_unverified/validation_ACC_P1S1.csv') #accuracy csv
acc_df.columns = ['model','csv_name','acc']
acc_df = acc_df.filter(['csv_name','acc'])
acc_df['csv_name'] = acc_df['csv_name'].str.replace('_unverified_','_')
files = os.listdir(folder)
ratio_all=0
df_dict = {}
for i,csv in enumerate(files):
df_name = csv[:csv.rfind('_')]
# print(df_name)
if csv.startswith('validation_ACC'):
continue
ratio = acc_df[acc_df['csv_name'] == csv]['acc'].values[0]
ratio_all += ratio
if not (df_name in df_dict.keys()):
df_dict[df_name] = pd.read_csv(os.path.join(folder,csv))
df1_ = df_dict[df_name].drop('fname',axis=1)
df1_ = np.array(df1_) * ratio
df1_ = pd.DataFrame(df1_)
df1_['acc'] = ratio
# print(len(df1_))
df_dict[df_name] = df_dict[df_name].filter(['fname'])
df_dict[df_name] = pd.merge(df_dict[df_name],df1_,how='inner',right_index=True,left_index=True)
else:
df1_name = pd.read_csv(os.path.join(folder,csv))
df1_ = df1_name.drop('fname',axis=1)
df1_ = np.array(df1_) * ratio
df1_ = pd.DataFrame(df1_)
df1_['acc'] = ratio
# print(len(df1_))
df1_name = df1_name.filter(['fname'])
df1_ = pd.merge(df1_name, df1_,how='inner',right_index=True,left_index=True)
df_dict[df_name] = df_dict[df_name].append(df1_, ignore_index=True)
# print(len(df_dict[df_name]))
# elif csv.startswith('mike_resnet'):
# ratio = acc_df[acc_df['csv_name'] == csv]['acc'].values[0]
# if a==0:
# df1 = pd.read_csv(os.path.join(folder,csv))
# df1_ = df1.drop('fname',axis=1)
# df1_ = np.array(df1_) * ratio
# df1_ = pd.DataFrame(df1_)
# df1 = df1.filter(['fname'])
# df1 = pd.merge(df1,df1_,how='inner',right_index=True,left_index=True)
# a+=1
# else:
# df1_name = pd.read_csv(os.path.join(folder,csv))
# df1_ = df1_name.drop('fname',axis=1)
# df1_ = np.array(df1_) * ratio
# df1_ = pd.DataFrame(df1_)
# df1_name = df1_name.filter(['fname'])
# df1_ = pd.merge(df1_name, df1_,how='inner',right_index=True,left_index=True)
# df1.append(df1_, ignore_index=True)
# elif csv.startswith('mike_cnn2d'):
# ratio = acc_df[acc_df['csv_name'] == csv]['acc'].values[0]
# if b==0:
# df2 = pd.read_csv(os.path.join(folder,csv))
# df2_ = df2.drop('fname',axis=1)
# df2_ = np.array(df1_) * ratio
# df2_ = pd.DataFrame(df1_)
# df2 = df2.filter(['fname'])
# df2 = pd.merge(df2,df1_,how='inner',right_index=True,left_index=True)
# b+=1
# else:
# df2.append(pd.read_csv(os.path.join(folder,csv)), ignore_index=True)
# elif csv.startswith('mow_cnn2d'):
# ratio = acc_df[acc_df['csv_name'] == csv]['acc'].values[0]
# if c==0:
# df3 = pd.read_csv(os.path.join(folder,csv))
# c+=1
# else:
# df3.append(pd.read_csv(os.path.join(folder,csv)), ignore_index=True)
# else:
# print('unknown csv')
# break
# ratio = acc_df[acc_df['csv_name'] == csv]['acc'].values[0]
# print(ratio)
# ratio_all += ratio
# df = pd.read_csv(os.path.join(folder,csv),header=None)
for k,v in df_dict.items():
df_dict[k] = v.sort_values('fname')
_ = list(df_dict.keys())
sum_ = np.zeros((len(df_dict[_[0]]), 42))
for k,v in df_dict.items():
sum_ += v[v.columns[1:]].values
ratio_ = np.tile(sum_[:, -1], (41, 1)).T
train_X = sum_[:, :-1] / ratio_
# df_final = pd.DataFrame(sum_)
# df_final
# sum_ /= ratio_all
# df = pd.merge(df1,df2,on='fname',how='inner')
# df = pd.merge(df,df3,on='fname',how='inner')
# # if df.iloc[0,0] == 'fname':
# df = df.drop('fname',axis=1)
# # df = df.drop(0,axis=1)
# df
# if i==0:
# train_X = df.values*ratio
# else:
# train_X += df.values*ratio
train_X.shape
###Output
_____no_output_____
###Markdown
Take Label
###Code
label = pd.read_csv('data/train_label.csv',names=['fname','label','verified'],header=0)
dicts_ = pickle.load(open('data/map.pkl','rb'))
label['trans']=label['label'].map(dicts_)
# label = label.drop(['ID','fname','verified','label'],axis=1)
label = pd.merge(label,df_dict[k],on='fname',how='inner')
label = label['trans'].values
print('label like:',label,'data#:', len(label))
###Output
label like: [ 1 3 4 ... 14 9 17] data#: 3710
###Markdown
Split Data to eval
###Code
def split_valid_set(X_all, Y_all, percentage):
all_data_size = len(X_all)
valid_data_size = int(floor(all_data_size * percentage))
X_all, Y_all = _shuffle(X_all, Y_all)
X_train, Y_train = X_all[0:valid_data_size], Y_all[0:valid_data_size]
X_valid, Y_valid = X_all[valid_data_size:], Y_all[valid_data_size:]
return X_train, Y_train, X_valid, Y_valid
def _shuffle(X, Y):
randomize = np.arange(len(X))
np.random.shuffle(randomize)
# print(X.shape, Y.shape)
return (X[randomize], Y[randomize])
# check the pure ensemble baseline
sum(np.argmax(train_X,axis=1) == label) / len(label)
## if using PCA, skip this cell
label = to_categorical(label,num_classes=41)
X_train, Y_train, X_valid, Y_valid = split_valid_set(train_X, label, 0.95)
print(X_train.shape , Y_train.shape)
###Output
(3524, 41) (3524, 41)
###Markdown
PCA / Autoencoder dimensionality reduction PCA
###Code
pca = PCA(n_components=32,iterated_power='auto', whiten=True,svd_solver="full",random_state=725035) #n_components='mle',395
train_X_PCA = pca.fit_transform(train_X)
label = to_categorical(label,num_classes=41)
X_train, Y_train, X_valid, Y_valid = split_valid_set(train_X_PCA, label, 0.95)
###Output
_____no_output_____
###Markdown
Autoencoder
###Code
#DNN autoencoder
input_img = Input(shape=(41,))
x = Dense(32,activation='relu')(input_img) #one_norm #activity
x = Dense(32,activation='relu')(x)
x = Dense(32,activation='relu')(x)
x = Dense(16,activation='relu')(x)
x = Dense(16,activation='relu')(x)
x = Dense(16,activation='relu')(x)
x = Dense(8,activation='relu')(x) #code # 改成256?
encoder = Model(inputs=input_img,outputs=x)
d = Dense(16)(x)
d = Dense(32)(d)
d = Dense(41,activation='sigmoid')(d)
autoencoder = Model(inputs=input_img,outputs=d)
autoencoder.summary()
encoder.summary()
batchSize=128
patien=15
epoch=300
saveP = 'model/strong_S1P2_autoencoder.h5'
logD = './logs/'+saveP.split('/')[-1].split('.')[0]
opt = Nadam()
autoencoder.compile(optimizer=opt,loss='mse')
history = History()
callback=[
EarlyStopping(patience=patien,monitor='val_loss',verbose=1),
ModelCheckpoint(saveP,monitor='val_loss',verbose=1,save_best_only=True, save_weights_only=False),
TensorBoard(log_dir=logD+'events.epochs'+str(epoch)),
history,
]
autoencoder.fit(X_train, X_train,
epochs=epoch,
batch_size=batchSize,
shuffle=True,
validation_data=(X_valid,X_valid),
callbacks=callback,
class_weight='auto'
)
# model.save(saveP+"_all.h5")
# encoder.save(saveP+'_enc_all.h5')
# encoder.save_weights(saveP+'_enc.h5')
model_test = load_model('model/strong_S1P2_autoencoder.h5')
model_test = Model(inputs=model_test.layers[0].input, outputs=model_test.layers[7].output)
model_test.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) (None, 41) 0
_________________________________________________________________
dense_41 (Dense) (None, 32) 1344
_________________________________________________________________
dense_42 (Dense) (None, 32) 1056
_________________________________________________________________
dense_43 (Dense) (None, 32) 1056
_________________________________________________________________
dense_44 (Dense) (None, 16) 528
_________________________________________________________________
dense_45 (Dense) (None, 16) 272
_________________________________________________________________
dense_46 (Dense) (None, 16) 272
_________________________________________________________________
dense_47 (Dense) (None, 8) 136
=================================================================
Total params: 4,664
Trainable params: 4,664
Non-trainable params: 0
_________________________________________________________________
###Markdown
get latent dim
###Code
feature_train = model_test.predict(X_train)
feature_valid = model_test.predict(X_valid)
feature_train.shape
###Output
_____no_output_____
###Markdown
Model
###Code
## autoencoder
input_ = Input(shape=(8,))
bn = BatchNormalization()(input_)
dense = Dense(8,activation='selu',kernel_initializer='lecun_normal',kernel_regularizer=l2(0.00005))(bn)
# dense = Dense(8,kernel_initializer='uniform',kernel_regularizer=l2(0.000008))(bn)
# bn = BatchNormalization()(dense)
# dense = LeakyReLU()(dense)
dropout = Dropout(0.005)(dense)
# dense = Dense(164,kernel_initializer='uniform',kernel_regularizer=l2(0.008))(dropout)
bn = BatchNormalization()(dropout)
dense = Dense(41,activation='selu',kernel_initializer='lecun_normal',kernel_regularizer=l2(0.00005))(bn)
# dense = Dense(41,kernel_initializer='uniform',kernel_regularizer=l2(0.000008))(bn)
# bn = BatchNormalization()(dense)
# dense = LeakyReLU()(dense)
dropout = Dropout(0.005)(dense)
bn = BatchNormalization()(dropout)
dense = Dense(41,activation='softmax',kernel_regularizer=l2(0.00005),kernel_initializer='lecun_normal')(bn)
# dense = Dense(41,activation='softmax',kernel_initializer='uniform',kernel_regularizer=l2(0.000008))(dropout)
# dense = LeakyReLU()(dense)
model = Model(inputs=input_, outputs=dense)
# model = Sequential()
# i
# model.add(BatchNormalization())
# model.add(Dense(41,activation='linear',input_shape=(41,)))
model.summary()
## Autoencoder
batchSize=64
patien=30
epoch=300
saveP = 'model/strong_S1P2_NNclf.h5'
logD = './logs/'+saveP.split('/')[-1].split('.')[0]
opt = Nadam() #Adam(decay=1e-20)#
model.compile(loss='categorical_crossentropy',optimizer=opt,metrics=['acc'])
history = History()
callback=[
EarlyStopping(patience=patien,monitor='val_loss',verbose=1),
ModelCheckpoint(saveP,monitor='val_acc',verbose=1,save_best_only=True, save_weights_only=False),
TensorBoard(log_dir=logD+'events.epochs'+str(epoch)),
history,
]
model.fit(feature_train, Y_train,
epochs=epoch,
batch_size=batchSize,
shuffle=True,
validation_data=(feature_valid,Y_valid),
callbacks=callback,
class_weight='auto'
)
# model.save(saveP+"_all.h5")
# encoder.save(saveP+'_enc_all.h5')
# encoder.save_weights(saveP+'_enc.h5')
## PCA
input_ = Input(shape=(32,))
bn = BatchNormalization()(input_)
dense = Dense(64,activation='selu',kernel_initializer='lecun_normal')(bn)
# dense = Dense(8,kernel_initializer='uniform',kernel_regularizer=l2(0.000008))(bn)
# bn = BatchNormalization()(dense)
# dense = LeakyReLU()(dense)
dropout = Dropout(0.45)(dense)
# dense = Dense(164,kernel_initializer='uniform',kernel_regularizer=l2(0.008))(dropout)
bn = BatchNormalization()(dropout)
dense = Dense(64,activation='selu',kernel_initializer='lecun_normal')(bn)
# dense = Dense(41,kernel_initializer='uniform',kernel_regularizer=l2(0.000008))(bn)
# bn = BatchNormalization()(dense)
# dense = LeakyReLU()(dense)
dropout = Dropout(0.45)(dense)
bn = BatchNormalization()(dropout)
dense = Dense(41,activation='softmax',kernel_initializer='lecun_normal')(bn)
# dense = Dense(41,activation='softmax',kernel_initializer='uniform',kernel_regularizer=l2(0.000008))(dropout)
# dense = LeakyReLU()(dense)
model = Model(inputs=input_, outputs=dense)
# model = Sequential()
# i
# model.add(BatchNormalization())
# model.add(Dense(41,activation='linear',input_shape=(41,)))
model.summary()
## PCA
batchSize=128
patien=100
epoch=500
saveP = 'model/strong_S1P2_NNclf_PCA.h5'
logD = './logs/'+saveP.split('/')[-1].split('.')[0]
opt = Nadam()#Adamax()#Adam(decay=1e-20)#
model.compile(loss='categorical_crossentropy',optimizer=opt,metrics=['acc'])
history = History()
callback=[
EarlyStopping(patience=patien,monitor='val_loss',verbose=1),
ModelCheckpoint(saveP,monitor='val_acc',verbose=1,save_best_only=True, save_weights_only=False),
TensorBoard(log_dir=logD+'events.epochs'+str(epoch)),
history,
]
model.fit(X_train, Y_train,
epochs=epoch,
batch_size=batchSize,
shuffle=True,
validation_data=(X_valid,Y_valid),
callbacks=callback,
class_weight='auto'
)
# 0.87062= 32+Nadan/1024 batch
###Output
Train on 3524 samples, validate on 186 samples
Epoch 1/500
3524/3524 [==============================] - 1s 262us/step - loss: 0.6867 - acc: 0.8156 - val_loss: 0.6287 - val_acc: 0.8495
Epoch 00001: val_acc improved from -inf to 0.84946, saving model to model/strong_S1P2_NNclf_PCA.h5
Epoch 2/500
3524/3524 [==============================] - 0s 25us/step - loss: 0.6758 - acc: 0.8147 - val_loss: 0.6369 - val_acc: 0.8602
Epoch 00002: val_acc improved from 0.84946 to 0.86022, saving model to model/strong_S1P2_NNclf_PCA.h5
Epoch 3/500
3524/3524 [==============================] - 0s 25us/step - loss: 0.6915 - acc: 0.8170 - val_loss: 0.6290 - val_acc: 0.8495
Epoch 00003: val_acc did not improve from 0.86022
Epoch 4/500
3524/3524 [==============================] - 0s 25us/step - loss: 0.6860 - acc: 0.8138 - val_loss: 0.6302 - val_acc: 0.8441
Epoch 00004: val_acc did not improve from 0.86022
Epoch 5/500
3524/3524 [==============================] - 0s 28us/step - loss: 0.6872 - acc: 0.8116 - val_loss: 0.6327 - val_acc: 0.8441
Epoch 00005: val_acc did not improve from 0.86022
Epoch 6/500
3524/3524 [==============================] - 0s 29us/step - loss: 0.6763 - acc: 0.8209 - val_loss: 0.6212 - val_acc: 0.8548
Epoch 00006: val_acc did not improve from 0.86022
Epoch 7/500
3524/3524 [==============================] - 0s 30us/step - loss: 0.6831 - acc: 0.8173 - val_loss: 0.6309 - val_acc: 0.8441
Epoch 00007: val_acc did not improve from 0.86022
Epoch 8/500
3524/3524 [==============================] - 0s 32us/step - loss: 0.6781 - acc: 0.8156 - val_loss: 0.6364 - val_acc: 0.8548
Epoch 00008: val_acc did not improve from 0.86022
Epoch 9/500
3524/3524 [==============================] - 0s 26us/step - loss: 0.6854 - acc: 0.8090 - val_loss: 0.6315 - val_acc: 0.8441
Epoch 00009: val_acc did not improve from 0.86022
Epoch 10/500
3524/3524 [==============================] - 0s 35us/step - loss: 0.6696 - acc: 0.8204 - val_loss: 0.6358 - val_acc: 0.8495
Epoch 00010: val_acc did not improve from 0.86022
Epoch 11/500
3524/3524 [==============================] - 0s 30us/step - loss: 0.6858 - acc: 0.8164 - val_loss: 0.6223 - val_acc: 0.8602
Epoch 00011: val_acc did not improve from 0.86022
Epoch 12/500
3524/3524 [==============================] - 0s 24us/step - loss: 0.6669 - acc: 0.8232 - val_loss: 0.6272 - val_acc: 0.8548
Epoch 00012: val_acc did not improve from 0.86022
Epoch 13/500
3524/3524 [==============================] - 0s 31us/step - loss: 0.6810 - acc: 0.8192 - val_loss: 0.6311 - val_acc: 0.8495
Epoch 00013: val_acc did not improve from 0.86022
Epoch 14/500
3524/3524 [==============================] - 0s 28us/step - loss: 0.6718 - acc: 0.8113 - val_loss: 0.6386 - val_acc: 0.8602
Epoch 00014: val_acc did not improve from 0.86022
Epoch 15/500
3524/3524 [==============================] - 0s 34us/step - loss: 0.6686 - acc: 0.8192 - val_loss: 0.6344 - val_acc: 0.8548
Epoch 00015: val_acc did not improve from 0.86022
Epoch 16/500
3524/3524 [==============================] - 0s 28us/step - loss: 0.6741 - acc: 0.8178 - val_loss: 0.6353 - val_acc: 0.8656
Epoch 00016: val_acc improved from 0.86022 to 0.86559, saving model to model/strong_S1P2_NNclf_PCA.h5
Epoch 17/500
3524/3524 [==============================] - 0s 28us/step - loss: 0.6756 - acc: 0.8167 - val_loss: 0.6285 - val_acc: 0.8548
Epoch 00017: val_acc did not improve from 0.86559
Epoch 18/500
3524/3524 [==============================] - 0s 35us/step - loss: 0.6722 - acc: 0.8175 - val_loss: 0.6270 - val_acc: 0.8548
Epoch 00018: val_acc did not improve from 0.86559
Epoch 19/500
3524/3524 [==============================] - 0s 38us/step - loss: 0.6826 - acc: 0.8150 - val_loss: 0.6395 - val_acc: 0.8441
Epoch 00019: val_acc did not improve from 0.86559
Epoch 20/500
3524/3524 [==============================] - 0s 28us/step - loss: 0.6769 - acc: 0.8181 - val_loss: 0.6482 - val_acc: 0.8441
Epoch 00020: val_acc did not improve from 0.86559
Epoch 21/500
3524/3524 [==============================] - 0s 36us/step - loss: 0.6778 - acc: 0.8181 - val_loss: 0.6388 - val_acc: 0.8441
Epoch 00021: val_acc did not improve from 0.86559
Epoch 22/500
3524/3524 [==============================] - 0s 27us/step - loss: 0.6774 - acc: 0.8181 - val_loss: 0.6360 - val_acc: 0.8495
Epoch 00022: val_acc did not improve from 0.86559
Epoch 23/500
3524/3524 [==============================] - 0s 29us/step - loss: 0.6733 - acc: 0.8209 - val_loss: 0.6461 - val_acc: 0.8495
Epoch 00023: val_acc did not improve from 0.86559
Epoch 24/500
3524/3524 [==============================] - 0s 31us/step - loss: 0.6664 - acc: 0.8243 - val_loss: 0.6417 - val_acc: 0.8387
Epoch 00024: val_acc did not improve from 0.86559
Epoch 25/500
3524/3524 [==============================] - 0s 31us/step - loss: 0.6713 - acc: 0.8204 - val_loss: 0.6318 - val_acc: 0.8495
Epoch 00025: val_acc did not improve from 0.86559
Epoch 26/500
3524/3524 [==============================] - 0s 26us/step - loss: 0.6657 - acc: 0.8178 - val_loss: 0.6303 - val_acc: 0.8548
Epoch 00026: val_acc did not improve from 0.86559
Epoch 27/500
3524/3524 [==============================] - 0s 31us/step - loss: 0.6639 - acc: 0.8178 - val_loss: 0.6323 - val_acc: 0.8441
Epoch 00027: val_acc did not improve from 0.86559
Epoch 28/500
3524/3524 [==============================] - 0s 26us/step - loss: 0.6815 - acc: 0.8178 - val_loss: 0.6415 - val_acc: 0.8387
Epoch 00028: val_acc did not improve from 0.86559
Epoch 29/500
3524/3524 [==============================] - 0s 42us/step - loss: 0.6794 - acc: 0.8187 - val_loss: 0.6391 - val_acc: 0.8387
Epoch 00029: val_acc did not improve from 0.86559
Epoch 30/500
3524/3524 [==============================] - 0s 24us/step - loss: 0.6750 - acc: 0.8158 - val_loss: 0.6392 - val_acc: 0.8441
Epoch 00030: val_acc did not improve from 0.86559
Epoch 31/500
3524/3524 [==============================] - 0s 32us/step - loss: 0.6492 - acc: 0.8280 - val_loss: 0.6357 - val_acc: 0.8548
Epoch 00031: val_acc did not improve from 0.86559
Epoch 32/500
3524/3524 [==============================] - 0s 29us/step - loss: 0.6665 - acc: 0.8198 - val_loss: 0.6403 - val_acc: 0.8441
Epoch 00032: val_acc did not improve from 0.86559
Epoch 33/500
3524/3524 [==============================] - 0s 27us/step - loss: 0.6544 - acc: 0.8190 - val_loss: 0.6467 - val_acc: 0.8602
Epoch 00033: val_acc did not improve from 0.86559
Epoch 34/500
3524/3524 [==============================] - 0s 29us/step - loss: 0.6629 - acc: 0.8204 - val_loss: 0.6416 - val_acc: 0.8602
Epoch 00034: val_acc did not improve from 0.86559
Epoch 35/500
3524/3524 [==============================] - 0s 29us/step - loss: 0.6744 - acc: 0.8130 - val_loss: 0.6377 - val_acc: 0.8495
Epoch 00035: val_acc did not improve from 0.86559
Epoch 36/500
3524/3524 [==============================] - 0s 26us/step - loss: 0.6576 - acc: 0.8215 - val_loss: 0.6362 - val_acc: 0.8602
Epoch 00036: val_acc did not improve from 0.86559
Epoch 37/500
3524/3524 [==============================] - 0s 31us/step - loss: 0.6697 - acc: 0.8187 - val_loss: 0.6387 - val_acc: 0.8548
Epoch 00037: val_acc did not improve from 0.86559
Epoch 38/500
3524/3524 [==============================] - 0s 35us/step - loss: 0.6727 - acc: 0.8209 - val_loss: 0.6337 - val_acc: 0.8441
Epoch 00038: val_acc did not improve from 0.86559
Epoch 39/500
3524/3524 [==============================] - 0s 26us/step - loss: 0.6627 - acc: 0.8198 - val_loss: 0.6397 - val_acc: 0.8387
Epoch 00039: val_acc did not improve from 0.86559
Epoch 40/500
3524/3524 [==============================] - 0s 38us/step - loss: 0.6624 - acc: 0.8218 - val_loss: 0.6383 - val_acc: 0.8548
Epoch 00040: val_acc did not improve from 0.86559
Epoch 41/500
3524/3524 [==============================] - 0s 41us/step - loss: 0.6612 - acc: 0.8150 - val_loss: 0.6400 - val_acc: 0.8441
Epoch 00041: val_acc did not improve from 0.86559
Epoch 42/500
3524/3524 [==============================] - 0s 33us/step - loss: 0.6516 - acc: 0.8275 - val_loss: 0.6365 - val_acc: 0.8441
Epoch 00042: val_acc did not improve from 0.86559
Epoch 43/500
3524/3524 [==============================] - 0s 29us/step - loss: 0.6704 - acc: 0.8167 - val_loss: 0.6333 - val_acc: 0.8548
###Markdown
Original 41-dim
###Code
input_ = Input(shape=(41,))
bn = BatchNormalization()(input_)
dense = Dense(82,activation='selu',kernel_initializer='lecun_normal')(bn)
# dense = Dense(8,kernel_initializer='uniform',kernel_regularizer=l2(0.000008))(bn)
# bn = BatchNormalization()(dense)
# dense = LeakyReLU()(dense)
dropout = Dropout(0.43)(dense)
# dense = Dense(164,kernel_initializer='uniform',kernel_regularizer=l2(0.008))(dropout)
bn = BatchNormalization()(dropout)
dense = Dense(82,activation='selu',kernel_initializer='lecun_normal')(bn)
# dense = Dense(41,kernel_initializer='uniform',kernel_regularizer=l2(0.000008))(bn)
# bn = BatchNormalization()(dense)
# dense = LeakyReLU()(dense)
dropout = Dropout(0.43)(dense)
bn = BatchNormalization()(dropout)
dense = Dense(41,activation='softmax',kernel_initializer='lecun_normal')(bn)
# dense = Dense(41,activation='softmax',kernel_initializer='uniform',kernel_regularizer=l2(0.000008))(dropout)
# dense = LeakyReLU()(dense)
model = Model(inputs=input_, outputs=dense)
# model = Sequential()
# i
# model.add(BatchNormalization())
# model.add(Dense(41,activation='linear',input_shape=(41,)))
model.summary()
batchSize=128
patien=100
epoch=500
saveP = 'model/strong_S1P2_NNclf_41dim_softmax.h5'
logD = './logs/'+saveP.split('/')[-1].split('.')[0]
opt = Adam(decay=1e-20)#
model.compile(loss='categorical_crossentropy',optimizer=opt,metrics=['acc'])
history = History()
callback=[
EarlyStopping(patience=patien,monitor='val_loss',verbose=1),
ModelCheckpoint(saveP,monitor='val_acc',verbose=1,save_best_only=True, save_weights_only=False),
TensorBoard(log_dir=logD+'events.epochs'+str(epoch)),
history,
]
model.fit(X_train, Y_train,
epochs=epoch,
batch_size=batchSize,
shuffle=True,
validation_data=(X_valid,Y_valid),
callbacks=callback,
class_weight='auto'
)
# 0.87062= 32+Nadan/1024 batch
model = load_model('model/strong_S1P2_NNclf_PCA.h5')
ans = model.predict(train_X)
ans.shape
np.argmax(ans,axis=1)
label
sum(np.argmax(ans,axis=1) == label) / len(label)
###Output
_____no_output_____ |
L12 Basic Convolutional Networks/L12_2_LeNet.ipynb | ###Markdown
LeNet is the first convolutional neural network. This notebook reproduces LeNet using the Fashion-MNIST dataset.
###Code
%matplotlib inline
import torch
import torch.nn as nn
from matplotlib import pyplot as plt
import numpy as np
import torchvision
import torchvision.datasets as datasets
from torchvision import transforms
import torch.optim as optim
import time
batch_size = 256
num_epochs = 20
transform = transforms.Compose([transforms.ToTensor()])
mnist_trainset = datasets.FashionMNIST(root='./data', train=True, download=True, transform=transform)
mnist_testset = datasets.FashionMNIST(root='./data', train=False, download=True, transform=transform)
###Output
_____no_output_____
###Markdown
LeNet uses two convolutional layers to extract features from the input, and three fully connected layers as the classifier.
###Code
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)
self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
self.fc1 = nn.Linear(16*5*5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.sigmoid = nn.Sigmoid()
self.avgpool = nn.AvgPool2d(kernel_size=2, stride=2)
def forward(self, x):
        # Feature extraction using two convolutional layers.
x = self.avgpool(self.sigmoid(self.conv1(x)))
x = self.avgpool(self.sigmoid(self.conv2(x)))
#reshape the tensor to 1-d to fit the FC layer input
x = x.view(x.shape[0], -1)
# Classifier using three fully connected layers.
x = self.sigmoid(self.fc1(x))
x = self.sigmoid(self.fc2(x))
x = self.fc3(x)
return x
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
torch.nn.init.xavier_uniform_(m.weight)
m.bias.data.normal_(0.0, 0.01)
elif classname.find('Linear') != -1:
torch.nn.init.xavier_uniform_(m.weight)
m.bias.data.normal_(0.0, 0.01)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.01)
m.bias.data.fill_(0)
def evaluate_accuracy(data_iter, net):
"""Evaluate accuracy of a model on the given data set."""
acc_sum,n = 0,0
for (imgs, labels) in data_iter:
# send data to the GPU if cuda is availabel
if torch.cuda.is_available():
imgs = imgs.cuda()
labels = labels.cuda()
net.eval()
with torch.no_grad():
labels = labels.long()
acc_sum += torch.sum((torch.argmax(net(imgs), dim=1) == labels)).float()
n += labels.shape[0]
return acc_sum.item()/n
# Loading training set and test set using DataLoader.
train_loader = torch.utils.data.DataLoader(mnist_trainset, batch_size=batch_size,
shuffle=True, num_workers=0)
test_loader = torch.utils.data.DataLoader(mnist_testset, batch_size=batch_size,
shuffle=True, num_workers=0)
if torch.cuda.is_available():
print('Training using GPU.')
net = LeNet().cuda()
else:
print('Training using CPU.')
net = LeNet()
#Initialize network parameters.
net.apply(weights_init)
#Loss function
if torch.cuda.is_available():
loss = nn.CrossEntropyLoss().cuda()
else:
loss = nn.CrossEntropyLoss()
# Train using SGD optimizer
lr= 0.3 # This learning rate was not fine-tuned.
opt_n = optim.SGD(net.parameters(), lr=lr)
# Training stage
for epoch in range(1, num_epochs+1):
train_loader_iter = iter(train_loader)
train_l_sum, train_acc_sum, n, start = 0.0, 0.0, 0, time.time()
for (imgs, labels) in train_loader_iter:
net.train()
opt_n.zero_grad()
if torch.cuda.is_available():
imgs = imgs.cuda()
labels = labels.cuda()
# Label prediction from LeNet
y_hat = net(imgs)
l = loss(y_hat, labels)
# Backprobagation
l.backward()
opt_n.step()
        # Calculate training error
with torch.no_grad():
labels = labels.long()
train_l_sum += l.item()
train_acc_sum += (torch.sum(torch.argmax(y_hat, dim=1) == labels)).float().item()
n += labels.shape[0]
# calculate testing error every epoch.
test_acc = evaluate_accuracy(iter(test_loader), net)
print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, time %.1f sec'
% (epoch, train_l_sum/n, train_acc_sum/n, test_acc,
time.time() - start))
###Output
Training using GPU.
epoch 1, loss 0.0091, train acc 0.099, test acc 0.100, time 4.4 sec
epoch 2, loss 0.0090, train acc 0.110, test acc 0.100, time 4.4 sec
epoch 3, loss 0.0066, train acc 0.410, test acc 0.568, time 4.4 sec
epoch 4, loss 0.0040, train acc 0.608, test acc 0.608, time 4.4 sec
epoch 5, loss 0.0034, train acc 0.673, test acc 0.701, time 4.5 sec
epoch 6, loss 0.0031, train acc 0.698, test acc 0.676, time 4.5 sec
epoch 7, loss 0.0029, train acc 0.720, test acc 0.692, time 4.5 sec
epoch 8, loss 0.0027, train acc 0.735, test acc 0.714, time 4.5 sec
epoch 9, loss 0.0026, train acc 0.744, test acc 0.738, time 4.5 sec
epoch 10, loss 0.0025, train acc 0.751, test acc 0.683, time 4.4 sec
epoch 11, loss 0.0024, train acc 0.762, test acc 0.762, time 4.5 sec
epoch 12, loss 0.0023, train acc 0.770, test acc 0.754, time 4.5 sec
epoch 13, loss 0.0023, train acc 0.775, test acc 0.766, time 4.5 sec
epoch 14, loss 0.0022, train acc 0.784, test acc 0.764, time 4.6 sec
epoch 15, loss 0.0021, train acc 0.789, test acc 0.773, time 4.5 sec
epoch 16, loss 0.0021, train acc 0.796, test acc 0.756, time 4.5 sec
epoch 17, loss 0.0020, train acc 0.803, test acc 0.761, time 4.4 sec
epoch 18, loss 0.0020, train acc 0.807, test acc 0.788, time 4.5 sec
epoch 19, loss 0.0019, train acc 0.814, test acc 0.774, time 4.4 sec
epoch 20, loss 0.0019, train acc 0.817, test acc 0.815, time 4.5 sec
|
doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/CM_Notebook8_Answers.ipynb | ###Markdown
Classical Mechanics - Week 8 Last Week:- Introduced the SymPy package- Visualized Potential Energy surfaces- Explored packages in Python This Week:We will (mostly) take a break from learning new concepts in scientific computing, and instead just apply some of our current knowledge to complete Problem Set 8.
###Code
# As usual, we will need packages
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
3. Taylor problem 4.29.(a) Here we plot the potential energy $U(x)=kx^4$, with $k>0$. Note that the constant $k$ is not given. But it is just an overall scale factor for the potential, so the plot will basically look the same no matter what value you choose for $k$. For the purposes of scaling and keeping things simple, we recommend you set $k=1$.For 2D plots, we learned two different methods:- 1) Our original plotting method, using `pyplot`. (Refer back to ***Notebook 1*** for this method.) **Note:** use `plt.plot()` rather than `plt.scatter()` in order to make connected curves in the plot.- 2) Using `SymPy`. (Refer back to ***Notebook 7*** for this method.) You will have to import the sympy package if you use this method.It is up to you to decide which method you prefer for making the plot. Use the cell below to define the potential energy function and to make the plot.
###Code
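# (One possible sketch for this answer cell, added for illustration; as suggested
# above, we simply set k = 1.)
k = 1.0
x = np.linspace(-2, 2, 400)
plt.plot(x, k * x**4)
plt.xlabel('x')
plt.ylabel('U(x) = k x^4')
plt.title('Quartic potential, k = 1')
plt.show()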
###Output
_____no_output_____
###Markdown
Q1.) Qualitatively describe the motion if the mass in this potential is initially stationary at $x=0$ and is given a sharp kick to the right at $t=0$? &9989; Double click this cell, erase its content, and put your answer to the above question here. (d) After performing the change of variable in part (c) of the problem, you should find that the period of oscillation for the mass is given by$$\tau=\dfrac{1}{A}\sqrt{\dfrac{m}{k}}I\,,$$where $$I=\dfrac{4}{\sqrt{2}}\int_0^1\dfrac{dy}{\sqrt{1-y^4}}$$ is the integral to be evaluated. (Where did the factor of 4 come from?) Note that the integral $I$ is dimensionless. Changing variables to obtain a dimensionless integral is almost always useful, especially when you need to evaluate it numerically. Also, even if we don't know what the value of $I$ is, from this expression we can see explicitly how $\tau$ depends on the parameters$A$, $m$, and $k$. Now how to do the integral?(Review back to ***Notebook 5*** for ***numerical integration***.)If we tried to use our Trapezoidal Rule routine, we would immediately run into problems.What happens to the integrand in the limit as $y$ goes to 1?Although there are ways to change variables again, so that the Trapezoidal Rule routine would work, it is here where more general integration packages are useful, since they can often handle these integrable singularities without any extra effort. So instead, let's use the `integrate.quad()` function from `SciPy` to do it.In the cell below, define the function to be integrated and then integrate it using `integrate.quad()`. Don't forget to import the numerical integration routines from `SciPy` first. (Again, refer back to Notebook 5 if you need guidance.)
###Code
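# (One possible sketch for this answer cell, added for illustration.)
from scipy import integrate
def integrand(y):
    return 1.0 / np.sqrt(1.0 - y**4)
I_val, I_err = integrate.quad(integrand, 0, 1)
I_val *= 4.0 / np.sqrt(2.0)
print('I =', I_val)   # dimensionless factor in tau = (1/A) * sqrt(m/k) * I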
###Output
_____no_output_____
###Markdown
Q2.) What is the period of oscillation of the mass, as a function of the parameters, and including the calculated numerical factor? &9989; Double click this cell, erase its content, and put your answer to the above question here. 5. Taylor problem 4.37.(c) It should be possible to write the potential energy function in this problem as$$U(\phi)=MgR\,f(\phi,m/M)\,,$$so that the factor $MgR$ is just an overall scale factor. Might as well just set $M=g=R=1$. But the shape of the potential energy does depend on the value of $m/M$.In the cell below, plot two different $\phi$ vs $U(\phi)$ potential energy lines onto the same graph, one line where $m/M=0.7$ and a second line where $m/M=0.8$. Use your preferred method to make the plot of $\phi$ vs $U(\phi)$.
###Code
###Output
_____no_output_____
###Markdown
In this problem, you are asked to consider the motion of the system starting from rest at $\phi=0$. What is the total energy in this case, and why is it important for determining the motion of the system? Try varying the ratio $m/M$ in your plot in order to determine the critical value $r_\mathrm{crit}$. This is defined so that if $m/M > r_\mathrm{crit}$ the wheel keeps spinning and the mass $m$ keeps falling (if released from rest at $\phi=0$). Use the cell below.
###Code
###Output
_____no_output_____
###Markdown
Q3.) What feature of the plot did you use to determine the critical value of $m/M$? What value did you obtain? ✅ Double click this cell, erase its content, and put your answer to the above question here.
###Code
###Output
_____no_output_____
###Markdown
6. Taylor problem 4.38.(b) After doing the substitution in part (a), you obtained the EXACT period for the pendulum as$$\tau=\tau_0\dfrac{2}{\pi}K(A^2)\,,$$where $$K(A^2)=\int_0^1\dfrac{du}{\sqrt{1-u^2}\sqrt{1-A^2u^2}}\,.$$ The integral is dimensionless, so that the period is proportional to $\tau_0$ (the period for small oscillations),but now the proportionality factor depends on the amplitude $\Phi$ of the oscillations, through the dependence on $A=\sin(\Phi/2)$.As in problem 3, the Trapezoidal Rule method will struggle with this integral, due to the singularity in the integrand as $u$ goes to 1. However, `integrate.quad()` from the `SciPy` package should have no problem with it.In the cell below, define the function to be integrated and then integrate it using `integrate.quad()` to obtain $K(A^2)$ for the values of $\Phi=\pi/4$, $\Phi=\pi/2$, and $\Phi=3\pi/4$.
###Code
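# (One possible sketch for this answer cell, added for illustration.)
from scipy import integrate
def K_of(A2):
    integrand = lambda u: 1.0 / (np.sqrt(1.0 - u**2) * np.sqrt(1.0 - A2 * u**2))
    value, error = integrate.quad(integrand, 0, 1)
    return value
for Phi in (np.pi/4, np.pi/2, 3*np.pi/4):
    A = np.sin(Phi/2)
    print('Phi = {:.4f}: K(A^2) = {:.6f}'.format(Phi, K_of(A**2)))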
###Output
_____no_output_____
###Markdown
Special FunctionsThe function $K(A^2)$ cannot be written in terms of elementary functions, such as cosine, sine, logarithm, exponential, etc. However, it does pop up enough in mathematics and physics problems, that it is given its own name: the ***complete elliptic integral of the first kind***.This is one of many so-called "special functions" that arise frequently in physics problems. Others are Bessel functions, Legendre functions, etc., etc. They aren't really more complicated than the elementary functions; it's just that we are not as familiar with them. It turns out that `SciPy` has many of these "special functions" already coded up. Try running the following cell. (Use the same values of $\Phi$ as before.)
###Code
# The following line imports the special functions from SciPy:
from scipy import special
A = np.sin(np.array([np.pi/4, np.pi/2, 3*np.pi/4])/2)  # A = sin(Phi/2), for the same values of Phi that you used previously
special.ellipk(A**2)
###Output
_____no_output_____
###Markdown
You should have gotten the same answers as before.Now, use the special function to make a plot of $\tau/\tau_0$ as a function of $\Phi$ for $0\le\Phi\le3$ (in radians) in the following cell.As a check, what should $\tau/\tau_0$ be in the limit as $\Phi\rightarrow0$?
###Code
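# (One possible sketch for this answer cell, added for illustration; it reuses
# scipy.special.ellipk imported in the previous cell.)
Phi = np.linspace(0.0, 3.0, 300)
plt.plot(Phi, (2.0/np.pi) * special.ellipk(np.sin(Phi/2)**2))
plt.xlabel(r'$\Phi$ (radians)')
plt.ylabel(r'$\tau/\tau_0$')
plt.title('Exact pendulum period relative to the small-angle result')
plt.show()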
###Output
_____no_output_____
###Markdown
Q4.) How well does the small angle approximation work for the period of a pendulum with amplitude $\Phi=\pi/4$? What happens to $\tau$ as $\Phi$ approaches $\pi$? Explain. &9989; Double click this cell, erase its content, and put your answer to the above question here. 7. The last problem.As we have seen, it is often useful to write things in terms of dimensionless combinations of parameters.Your result for the potential in this problem can be written$$U(x)=k\alpha^2\,f(y)\,,$$where $y=x/\alpha$ is dimensionless. Verify this. Thus, the natural distance scale is $\alpha$ and the natural energy scale is$k\alpha^2$. In the cell below, make a plot of $U(x)/(k\alpha^2)$ as a function of $x/\alpha$. (Note that this is equivalent tosetting $k=\alpha=1$ in U(x). Why?)
###Code
###Output
_____no_output_____
###Markdown
Q5.) From your plot, what is special about the energy $E=k\alpha^2/4$? ✅ Double click this cell, erase its content, and put your answer to the above question here. Notebook Wrap-up. Run the cell below and copy-paste your answers into their corresponding cells.
###Code
from IPython.display import HTML
HTML(
"""
<iframe
src="https://forms.gle/o2JbpvJeUFYvWQni7"
width="100%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
###Output
_____no_output_____ |
semi-supervised/.ipynb_checkpoints/semi-supervised_learning_2-checkpoint.ipynb | ###Markdown
In this notebook, we'll learn how to use GANs to do semi-supervised learning.In supervised learning, we have a training set of inputs $x$ and class labels $y$. We train a model that takes $x$ as input and gives $y$ as output.In semi-supervised learning, our goal is still to train a model that takes $x$ as input and generates $y$ as output. However, not all of our training examples have a label $y$. We need to develop an algorithm that is able to get better at classification by studying both labeled $(x, y)$ pairs and unlabeled $x$ examples.To do this for the SVHN dataset, we'll turn the GAN discriminator into an 11 class discriminator. It will recognize the 10 different classes of real SVHN digits, as well as an 11th class of fake images that come from the generator. The discriminator will get to train on real labeled images, real unlabeled images, and fake images. By drawing on three sources of data instead of just one, it will generalize to the test set much better than a traditional classifier trained on only one source of data.
###Code
%matplotlib inline
import pickle as pkl
import time
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=True, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
# The SVHN dataset comes with lots of labels, but for the purpose of this exercise,
# we will pretend that there are only 1000.
# We use this mask to say which labels we will allow ourselves to use.
self.label_mask = np.zeros_like(self.train_y)
self.label_mask[0:1000] = 1
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.train_x = self.scaler(self.train_x)
self.valid_x = self.scaler(self.valid_x)
self.test_x = self.scaler(self.test_x)
self.shuffle = shuffle
def batches(self, batch_size, which_set="train"):
x_name = which_set + "_x"
y_name = which_set + "_y"
num_examples = len(getattr(dataset, y_name))
if self.shuffle:
idx = np.arange(num_examples)
np.random.shuffle(idx)
setattr(dataset, x_name, getattr(dataset, x_name)[idx])
setattr(dataset, y_name, getattr(dataset, y_name)[idx])
if which_set == "train":
dataset.label_mask = dataset.label_mask[idx]
dataset_x = getattr(dataset, x_name)
dataset_y = getattr(dataset, y_name)
for ii in range(0, num_examples, batch_size):
x = dataset_x[ii:ii+batch_size]
y = dataset_y[ii:ii+batch_size]
if which_set == "train":
# When we use the data for training, we need to include
# the label mask, so we can pretend we don't have access
# to some of the labels, as an exercise of our semi-supervised
# learning ability
yield x, y, self.label_mask[ii:ii+batch_size]
else:
yield x, y
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
y = tf.placeholder(tf.int32, (None), name='y')
label_mask = tf.placeholder(tf.int32, (None), name='label_mask')
return inputs_real, inputs_z, y, label_mask
def generator(z, output_dim, reuse=False, alpha=0.2, training=True, size_mult=128):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4 * 4 * size_mult * 4)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, size_mult * 4))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
x2 = tf.layers.conv2d_transpose(x1, size_mult * 2, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
x3 = tf.layers.conv2d_transpose(x2, size_mult, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
out = tf.tanh(logits)
return out
def discriminator(x, reuse=False, alpha=0.2, drop_rate=0., num_classes=10, size_mult=64):
with tf.variable_scope('discriminator', reuse=reuse):
x = tf.layers.dropout(x, rate=drop_rate/2.5)
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, size_mult, 3, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
relu1 = tf.layers.dropout(relu1, rate=drop_rate)
x2 = tf.layers.conv2d(relu1, size_mult, 3, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
        relu2 = tf.maximum(alpha * bn2, bn2)
x3 = tf.layers.conv2d(relu2, size_mult, 3, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
relu3 = tf.layers.dropout(relu3, rate=drop_rate)
x4 = tf.layers.conv2d(relu3, 2 * size_mult, 3, strides=1, padding='same')
bn4 = tf.layers.batch_normalization(x4, training=True)
relu4 = tf.maximum(alpha * bn4, bn4)
x5 = tf.layers.conv2d(relu4, 2 * size_mult, 3, strides=1, padding='same')
bn5 = tf.layers.batch_normalization(x5, training=True)
relu5 = tf.maximum(alpha * bn5, bn5)
x6 = tf.layers.conv2d(relu5, 2 * size_mult, 3, strides=2, padding='same')
bn6 = tf.layers.batch_normalization(x6, training=True)
relu6 = tf.maximum(alpha * bn6, bn6)
relu6 = tf.layers.dropout(relu6, rate=drop_rate)
x7 = tf.layers.conv2d(relu5, 2 * size_mult, 3, strides=1, padding='valid')
# Don't use bn on this layer, because bn would set the mean of each feature
# to the bn mu parameter.
# This layer is used for the feature matching loss, which only works if
# the means can be different when the discriminator is run on the data than
# when the discriminator is run on the generator samples.
relu7 = tf.maximum(alpha * x7, x7)
# Flatten it by global average pooling
features = raise NotImplementedError()
# Set class_logits to be the inputs to a softmax distribution over the different classes
raise NotImplementedError()
# Set gan_logits such that P(input is real | input) = sigmoid(gan_logits).
# Keep in mind that class_logits gives you the probability distribution over all the real
# classes and the fake class. You need to work out how to transform this multiclass softmax
# distribution into a binary real-vs-fake decision that can be described with a sigmoid.
# Numerical stability is very important.
# You'll probably need to use this numerical stability trick:
# log sum_i exp a_i = m + log sum_i exp(a_i - m).
# This is numerically stable when m = max_i a_i.
# (It helps to think about what goes wrong when...
# 1. One value of a_i is very large
# 2. All the values of a_i are very negative
# This trick and this value of m fix both those cases, but the naive implementation and
# other values of m encounter various problems)
raise NotImplementedError()
return out, class_logits, gan_logits, features
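# ---------------------------------------------------------------------------
# NOTE: the block below is NOT part of the original exercise. It is a hedged
# sketch of one possible way to fill in the `raise NotImplementedError()`
# placeholders in `discriminator` above, following the hints in its comments:
# global average pooling for `features`, a dense layer over the 10 real
# classes plus one extra "fake" class for `class_logits`, and the log-sum-exp
# trick for `gan_logits`. The helper name and the `num_classes + 1` layout are
# illustrative assumptions; `tf` is the TensorFlow 1.x module imported earlier
# in this notebook.
def discriminator_head_sketch(relu7, num_classes=10):
    # Flatten by global average pooling over the spatial dimensions
    features = tf.reduce_mean(relu7, (1, 2))
    # Inputs to a softmax over the real classes plus the fake class
    class_logits = tf.layers.dense(features, num_classes + 1)
    real_class_logits, fake_class_logits = tf.split(class_logits, [num_classes, 1], 1)
    fake_class_logits = tf.squeeze(fake_class_logits)
    # log sum_i exp a_i = m + log sum_i exp(a_i - m), with m = max_i a_i,
    # so that P(input is real | input) = sigmoid(gan_logits) stays numerically stable
    mx = tf.reduce_max(real_class_logits, 1, keep_dims=True)
    stable_real_class_logits = real_class_logits - mx
    gan_logits = tf.log(tf.reduce_sum(tf.exp(stable_real_class_logits), 1)) + tf.squeeze(mx) - fake_class_logits
    return features, class_logits, gan_logits
# ---------------------------------------------------------------------------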
def model_loss(input_real, input_z, output_dim, y, num_classes, label_mask, alpha=0.2, drop_rate=0.):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param output_dim: The number of channels in the output image
:param y: Integer class labels
:param num_classes: The number of classes
:param alpha: The slope of the left half of leaky ReLU activation
:param drop_rate: The probability of dropping a hidden unit
:return: A tuple of (discriminator loss, generator loss)
"""
# These numbers multiply the size of each layer of the generator and the discriminator,
# respectively. You can reduce them to run your code faster for debugging purposes.
g_size_mult = 32
d_size_mult = 64
# Here we run the generator and the discriminator
g_model = generator(input_z, output_dim, alpha=alpha, size_mult=g_size_mult)
d_on_data = discriminator(input_real, alpha=alpha, drop_rate=drop_rate, size_mult=d_size_mult)
d_model_real, class_logits_on_data, gan_logits_on_data, data_features = d_on_data
d_on_samples = discriminator(g_model, reuse=True, alpha=alpha, drop_rate=drop_rate, size_mult=d_size_mult)
d_model_fake, class_logits_on_samples, gan_logits_on_samples, sample_features = d_on_samples
# Here we compute `d_loss`, the loss for the discriminator.
# This should combine two different losses:
# 1. The loss for the GAN problem, where we minimize the cross-entropy for the binary
# real-vs-fake classification problem.
# 2. The loss for the SVHN digit classification problem, where we minimize the cross-entropy
# for the multi-class softmax. For this one we use the labels. Don't forget to
# use `label_mask` to ignore the examples that we are pretending are unlabeled for the
# semi-supervised learning problem.
raise NotImplementedError()
# Here we set `g_loss` to the "feature matching" loss invented by Tim Salimans at OpenAI.
# This loss consists of minimizing the absolute difference between the expected features
# on the data and the expected features on the generated samples.
# This loss works better for semi-supervised learning than the traditional GAN losses.
raise NotImplementedError()
pred_class = tf.cast(tf.argmax(class_logits_on_data, 1), tf.int32)
eq = tf.equal(tf.squeeze(y), pred_class)
correct = tf.reduce_sum(tf.to_float(eq))
masked_correct = tf.reduce_sum(label_mask * tf.to_float(eq))
return d_loss, g_loss, correct, masked_correct, g_model
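# ---------------------------------------------------------------------------
# NOTE: the block below is NOT part of the original exercise. It is a hedged
# sketch of one way to build the two losses that the comments in `model_loss`
# describe, assuming the discriminator returns the tensors named above. The
# discriminator loss combines the binary real-vs-fake cross-entropy with the
# label-masked multi-class cross-entropy; the generator loss is the
# "feature matching" loss (absolute difference of expected features).
def gan_losses_sketch(gan_logits_on_data, gan_logits_on_samples,
                      class_logits_on_data, y, num_classes, label_mask,
                      data_features, sample_features):
    # Real images should score as real (1), generated samples as fake (0)
    d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=gan_logits_on_data, labels=tf.ones_like(gan_logits_on_data)))
    d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=gan_logits_on_samples, labels=tf.zeros_like(gan_logits_on_samples)))
    # Supervised SVHN loss, masked so only the "labeled" examples contribute
    class_cross_entropy = tf.squeeze(tf.nn.softmax_cross_entropy_with_logits(
        logits=class_logits_on_data,
        labels=tf.squeeze(tf.one_hot(y, num_classes + 1, dtype=tf.float32))))
    mask = tf.squeeze(tf.to_float(label_mask))
    d_loss_class = tf.reduce_sum(mask * class_cross_entropy) / tf.maximum(1., tf.reduce_sum(mask))
    d_loss = d_loss_real + d_loss_fake + d_loss_class
    # Feature matching: match the mean features of real data and generated samples
    g_loss = tf.reduce_mean(tf.abs(tf.reduce_mean(data_features, 0) - tf.reduce_mean(sample_features, 0)))
    return d_loss, g_loss
# ---------------------------------------------------------------------------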
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and biases to update. Get them separately for the discriminator and the generator
raise NotImplementedError()
# Minimize both players' costs simultaneously
raise NotImplementedError()
shrink_lr = tf.assign(learning_rate, learning_rate * 0.9)
return d_train_opt, g_train_opt, shrink_lr
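# ---------------------------------------------------------------------------
# NOTE: the block below is NOT part of the original exercise. It is a hedged
# sketch of one way to implement `model_opt`: split the trainable variables by
# the variable-scope prefixes used above and give each player its own Adam
# step. The `tf.control_dependencies` on UPDATE_OPS keeps the batch-norm
# statistics updating during training; the helper name is illustrative.
def model_opt_sketch(d_loss, g_loss, learning_rate, beta1):
    t_vars = tf.trainable_variables()
    d_vars = [v for v in t_vars if v.name.startswith('discriminator')]
    g_vars = [v for v in t_vars if v.name.startswith('generator')]
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
    shrink_lr = tf.assign(learning_rate, learning_rate * 0.9)
    return d_train_opt, g_train_opt, shrink_lr
# ---------------------------------------------------------------------------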
class GAN:
"""
A GAN model.
:param real_size: The shape of the real data.
:param z_size: The number of entries in the z code vector.
:param learning_rate: The learning rate to use for Adam.
:param num_classes: The number of classes to recognize.
:param alpha: The slope of the left half of the leaky ReLU activation
:param beta1: The beta1 parameter for Adam.
"""
def __init__(self, real_size, z_size, learning_rate, num_classes=10, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.learning_rate = tf.Variable(learning_rate, trainable=False)
inputs = model_inputs(real_size, z_size)
self.input_real, self.input_z, self.y, self.label_mask = inputs
self.drop_rate = tf.placeholder_with_default(.5, (), "drop_rate")
loss_results = model_loss(self.input_real, self.input_z,
real_size[2], self.y, num_classes,
label_mask=self.label_mask,
alpha=0.2,
drop_rate=self.drop_rate)
self.d_loss, self.g_loss, self.correct, self.masked_correct, self.samples = loss_results
self.d_opt, self.g_opt, self.shrink_lr = model_opt(self.d_loss, self.g_loss, self.learning_rate, beta1)
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img)
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
def train(net, dataset, epochs, batch_size, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.normal(0, 1, size=(50, z_size))
samples, train_accuracies, test_accuracies = [], [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
print("Epoch",e)
t1e = time.time()
num_examples = 0
num_correct = 0
for x, y, label_mask in dataset.batches(batch_size):
assert 'int' in str(y.dtype)
steps += 1
num_examples += label_mask.sum()
# Sample random noise for G
batch_z = np.random.normal(0, 1, size=(batch_size, z_size))
# Run optimizers
t1 = time.time()
_, _, correct = sess.run([net.d_opt, net.g_opt, net.masked_correct],
feed_dict={net.input_real: x, net.input_z: batch_z,
net.y : y, net.label_mask : label_mask})
t2 = time.time()
num_correct += correct
sess.run([net.shrink_lr])
train_accuracy = num_correct / float(num_examples)
print("\t\tClassifier train accuracy: ", train_accuracy)
num_examples = 0
num_correct = 0
for x, y in dataset.batches(batch_size, which_set="test"):
assert 'int' in str(y.dtype)
num_examples += x.shape[0]
correct, = sess.run([net.correct], feed_dict={net.input_real: x,
net.y : y,
net.drop_rate: 0.})
num_correct += correct
test_accuracy = num_correct / float(num_examples)
print("\t\tClassifier test accuracy", test_accuracy)
print("\t\tStep time: ", t2 - t1)
t2e = time.time()
print("\t\tEpoch time: ", t2e - t1e)
gen_samples = sess.run(
net.samples,
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 5, 10, figsize=figsize)
plt.show()
# Save history of accuracies to view after training
train_accuracies.append(train_accuracy)
test_accuracies.append(test_accuracy)
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return train_accuracies, test_accuracies, samples
!mkdir checkpoints
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0003
net = GAN(real_size, z_size, learning_rate)
dataset = Dataset(trainset, testset)
batch_size = 128
epochs = 25
train_accuracies, test_accuracies, samples = train(net,
dataset,
epochs,
batch_size,
figsize=(10,5))
fig, ax = plt.subplots()
plt.plot(train_accuracies, label='Train', alpha=0.5)
plt.plot(test_accuracies, label='Test', alpha=0.5)
plt.title("Accuracy")
plt.legend()
###Output
_____no_output_____
###Markdown
When you run the fully implemented semi-supervised GAN, you should usually find that the test accuracy peaks at 69-71%. It should definitely stay above 68% fairly consistently throughout the last several epochs of training.This is a little bit better than a [NIPS 2014 paper](https://arxiv.org/pdf/1406.5298.pdf) that got 64% accuracy on 1000-label SVHN with variational methods. However, we still have lost something by not using all the labels. If you re-run with all the labels included, you should obtain over 80% accuracy using this architecture (and other architectures that take longer to run can do much better).
###Code
_ = view_samples(-1, samples, 5, 10, figsize=(10,5))
!mkdir images
for ii in range(len(samples)):
fig, ax = view_samples(ii, samples, 5, 10, figsize=(10,5))
fig.savefig('images/samples_{:03d}.png'.format(ii))
plt.close()
###Output
_____no_output_____ |
1-Python/Fede_Ruiz_Examen_ramp_up-Aula.ipynb | ###Markdown
Ramp-up Test Before you begin, rename the notebook file by putting your first name and first surname at the front:* nombre_apellido_test_pract_nov_2021.ipynb
###Code
# Escribe tu nombre en la variable:
nombre_y_apellidos = "Federico Ruiz Ruiz"
# Ejemplo de como queremos ver el código para más de un intento:
# intento 1
#your code
# intento 2
#your code
# crea todas las celdas que necesites
###Output
_____no_output_____
###Markdown
Notes and supporting tests to answer the questions. Any code, even if it is not correct, will tell us more about you than writing nothing. __You must upload your answers before 13:30 to the link we will provide__ Exercise 1. In this exercise you will create a list with the following elements. First, we will create the variables that will later be added to the list. The variables are the following: 1. Create a variable called `codigo_postal` of type Integer that represents your postal code. 2. Create a variable called `color` of type String that represents your favorite color. 3. Create a variable called `polo` of type Boolean that indicates whether you have been to the North Pole. 4. Create a variable called `nada` of type None whose value is None. 5. Create a variable called `lista_poética` of type List containing the five words you like most in Spanish. Each element must be of type String and the size of the list must be 5. 6. Create a variable called `tupla_contrasena` of type Tuple with two fictitious passwords. 7. Create a variable called `dict_viajes` with two keys and two values. One key is 'destino', whose value is where you would like to travel. The other key is 'tren', whose value is a boolean indicating whether you like travelling by train. 8. Create a variable called `telefono` containing your phone number, separating the prefix from the rest of the number with a hyphen and without blank spaces. Example: if the number is 678123123, the value of the variable should be 0034-678123123. Add all these elements to a list called `lista_examen` in order of appearance.
###Code
codigo_postal = 28018
color = "Verde"
polo = False
nada = None
lista_poetica = ["axarquía", "mancuerna", "parraque", "enebrar", "vahído"]
tupla_contrasena = ("Almu3rz0_at_tw0", "D1nn3r_at_0ch0")
dict_viajes = {'destino':'Moscú',
'tren':False}
telefono = '0034-657527942'
lista_examen = []
lista_examen.append(codigo_postal)
lista_examen.append(color)
lista_examen.append(polo)
lista_examen.append(nada)
lista_examen.append(lista_poetica)
lista_examen.append(tupla_contrasena)
lista_examen.append(dict_viajes)
lista_examen.append(telefono)
###Output
_____no_output_____
###Markdown
Exercise 2. To solve this exercise you will need the list from the previous exercise (`lista_examen`). It is recommended that you read each point to the end before solving the exercise. It is repeated so that it is even clearer: **ONLY USE THE VARIABLE** `lista_examen`. Using the other variables will be graded as an error. 1. Display on screen, without using functions (you may use print) or loops: - The second-to-last element of `lista_poetica` from `lista_examen`, counting from the back. - The first poetic word. - Your favorite color. - The phone prefix without the hyphen concatenated with the first password, that is, in a single string and without blank spaces. - If you do not like travelling by train and your postal code is even, display the destination. Otherwise, display "¿casualidad?, no lo creo" 2. Display on screen, using a `for` loop: - All the elements of `lista_examen`. - All the elements of `lista_poetica` from `lista_examen` shown in reverse order __without using reverse__. 3. All the elements that occupy a position between 2 and 5, both inclusive, counting positions from zero, through `lista_examen` (using slicing). 4. Display on screen, using a while loop: - All the elements of `tupla_contrasena` followed by the string `"-->"` and the `position` (int) it occupies in the tuple, accessed from `lista_examen`. - Each key of `dict_viajes` followed by the string ":" followed by its value, from `lista_examen`. Use items(), keys() and values()
###Code
print(f'1)')
print(lista_examen[4][1])
print(lista_examen[4][0])
print(lista_examen[1])
print(lista_examen[7][:4] + lista_examen[5][0])
print(lista_examen[6]['destino'] if (lista_examen[6]['tren'] == False and lista_examen[0] % 2 == 0) else "¿casualidad?, no lo creo" )
print(f'2)')
for e in lista_examen:
print(e)
print("\n\n")
for b in range(5, 0 , -1):
print(lista_examen[4][b-1])
print(f'3)')
print(lista_examen[:6])
print(f'4)')
index_tupla = 0
while index_tupla < len(lista_examen[5]):
print(f'{lista_examen[5][index_tupla]}-->{index_tupla}')
index_tupla += 1
index_dict = 0
while index_dict < len(lista_examen[6].items()):
claves = list(lista_examen[6].keys())
valores = list(lista_examen[6].values())
print(f'{claves[index_dict]}:{valores[index_dict]}')
index_dict += 1
###Output
4)
Almu3rz0_at_tw0-->0
D1nn3r_at_0ch0-->1
destino:Moscú
tren:False
###Markdown
Exercise 3. Write a program that tells you whether a letter is a consonant or a vowel
###Code
import re
vocales = ['a','e','i','o','u','á','é','í','ó','ú','ä','ë','ï','ö','ü']
letra = input("Introduce una letra")
if letra.lower() in vocales or letra.upper() in vocales:
print(f'{letra} es una vocal')
elif re.match(r'[a-zA-Z]$', letra):
print(f'{letra} es una consonante')
###Output
Introduce una letra ü
###Markdown
Exercise 4. Write a program that calculates the human age of a dog. You have to ask the user for the dog's age in years. If the user enters a negative number, print that the figure provided makes no sense. If the dog is one year old, that is 14 human years. If it is 2 years old, that is 22 human years. From 2 years on, the calculation is as follows: subtract 2 years from the dog's age, multiply the result by 5 and, finally, add 22. Use a function with no input or output arguments (show all the information with print())
###Code
def calcula_edad():
edad = float(input("¿Edad del perro?"))
if edad < 0:
print(f'Una edad de {edad} no tiene sentido!')
elif edad < 1:
edad_humana = edad
elif edad < 2:
edad_humana = 14
elif edad < 3:
edad_humana = 22
else:
edad_humana = ((edad - 2) * 5) + 22
print(f'Edad humana {edad_humana}')
calcula_edad()
###Output
¿Edad del perro? 1.5
###Markdown
Exercise 5. Write a program that calculates the sum of all the elements of each tuple stored inside a list of tuples. Input: [(1, 2), (2, 3), (3, 4)] Result: [3, 5, 7] Input: [(1, 2, 6), (2, 3, -6), (3, 4), (2, 2, 2, 2)] Result: [9, -1, 7, 8]
###Code
entrada = [(1, 2, 6), (2, 3, -6), (3, 4), (2, 2, 2, 2)]
resultado = [sum(t) for t in entrada]
print(f'Entrada:\n{entrada}\n\nResultado:\n{resultado}')
###Output
Entrada:
[(1, 2, 6), (2, 3, -6), (3, 4), (2, 2, 2, 2)]
Resultado:
[9, -1, 7, 8]
###Markdown
Exercise 6. Write a program that tells you whether a sentence is a pangram. A pangram is a sentence that contains all the letters of the alphabet. You may have blank spaces, but you will NOT have punctuation marks, with the exception of accents. Example of a pangram: Extraño pan de col y kiwi se quemó bajo fugaz vaho
###Code
pangrama = 'Extraño pan de col y kiwi se quemó bajo fugaz vaho'
check_alfabeto = {'a':0, 'b':0, 'c':0, 'd':0, 'e':0, 'f':0, 'g':0, 'h':0, 'i':0, 'j':0, 'k':0, 'l':0, 'm':0, 'n':0, 'ñ':0, 'o':0, 'p':0, 'q':0, 'r':0, 's':0, 't':0, 'u':0, 'v':0, 'w':0, 'x':0, 'y':0, 'z':0}
check_especiales = {'á':0, 'é':0, 'í':0, 'ó':0, 'ú':0, 'ä':0, 'ë':0, 'ï':0, 'ö':0, 'ü':0}
correspondencias = {'á':'a', 'é':'e', 'í':'i', 'ó':'o', 'ú':'u', 'ä':'a', 'ë':'e', 'ï':'i', 'ö':'o', 'ü':'u'}
for l in pangrama:
l = l.replace(' ','')
if len(l) > 0:
if l.lower() in check_alfabeto.keys():
check_alfabeto[l.lower()] += 1
elif l.lower() in check_especiales.keys():
check_especiales[l.lower()] += 1
for correspondencia in correspondencias.items():
if check_especiales[correspondencia[0]] > 0:
check_alfabeto[correspondencia[1]] += 1
values_alfabeto = set(check_alfabeto.values())
if 0 in values_alfabeto:
print(f'{pangrama}\n\nNo es un pangrama')
else:
print(f'\n{pangrama}\n\nEs un pangrama')
###Output
Extraño pan de col y kiwi se quemó bajo fugaz vaho
Es un pangrama
###Markdown
Exercise 7. Write a program that uses dictionaries to translate words between several languages. You must ask the user for a word via the keyboard and check whether it is in your dictionary. If it is, you must show the translation of that word. Example dictionary:```python{"file": "Fichier", "new": "Nouveau", "open": "Ouvrir", "save": "Enregistrer", "save as": "Enregistrer sous", "print preview": "Apercu avant impressioner", "print": "Imprimer", "close": "Fermer", "exit": "Quitter"}```
###Code
diccionarios = {'fr':{"file": "Fichier", "new": "Nouveau", "open": "Ouvrir", "save": "Enregistrer", "save as": "Enregistrer sous", "print preview": "Apercu avant impressioner", "print": "Imprimer", "close": "Fermer", "exit": "Quitter"},
'es':{"file": "Archivo", "new": "Nuevo", "open": "Abierto", "save": "Guardar", "save as": "Guardar como", "print preview": "Imprimir vista previa", "print": "Imprimir", "close": "Cerrar", "exit": "Salir"}}
diccionario = input(f'¿Que idioma? [fr/es]: ')
if diccionario in diccionarios.keys():
palabra = input(f'¿Que palabra?')
if palabra in diccionarios[diccionario]:
print(f'{palabra} --> {diccionarios[diccionario][palabra]}')
else:
print(f'No existe la palabra {palabra} en el diccionario {diccionario}')
else:
print(f'No existe el diccionario {diccionario}.')
###Output
¿Que idioma? [fr/es]: fr
¿Que palabra? patata
###Markdown
Exercise 8. Using objects, model a theremin (it is a musical instrument that is played by modifying the electromagnetic field, in case you were curious). Its attributes will be the brand, the model, the weight, the power and the battery. All instruments are created with a default weight of 500, a power of 50 and a battery of 200. When the battery is recharged it is set to 200. When it is played, it is shown on screen that it is playing and the battery decreases by 10 (never going negative). If there is no battery, it cannot play. Create an instrument, play it while there is battery left, charge it and do not play it again.
###Code
class Theremin:
potencia = 50
peso = 500
bateria = 200
def __init__(self,marca, modelo):
self.marca = marca
self.modelo = modelo
def recargar(self):
self.bateria = 200
print(f'El theremin {self.marca} {self.modelo} está cargado.')
def tocar(self):
if self.bateria >= 10:
print(f'El theremin {self.marca} {self.modelo} está tocando.')
self.bateria -= 10
else:
print(f'El theremin {self.marca} {self.modelo} no tiene batería.')
mi_theremin = Theremin("Seldon", "Cooper")
for i in range(30):
mi_theremin.tocar()
mi_theremin.recargar()
###Output
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper está tocando.
El theremin Seldon Cooper no tiene batería.
El theremin Seldon Cooper no tiene batería.
El theremin Seldon Cooper no tiene batería.
El theremin Seldon Cooper no tiene batería.
El theremin Seldon Cooper no tiene batería.
El theremin Seldon Cooper no tiene batería.
El theremin Seldon Cooper no tiene batería.
El theremin Seldon Cooper no tiene batería.
El theremin Seldon Cooper no tiene batería.
El theremin Seldon Cooper no tiene batería.
El theremin Seldon Cooper está cargado.
###Markdown
Exercise 9. A company uses two lists to store the information of its employees. One list stores the employee's name and the other stores their salary. Write a program that creates those two lists, initially empty, and that through a menu can do the following: Enter 1 to add a new employee and their salary Enter 2 to print the names and salaries of all the employees Enter 3 to show the number of employees Enter 4 to print the names of the employees with salaries above 400000 Enter 5 to raise salaries below 10000 by 5% Enter 6 to show the total of all the salaries Enter 7 to exit the program
###Code
op1msg = 'Inserta 1 para añadir un nuevo empleado y su salario'
op2msg = 'Inserta 2 para imprimir los nombres y salarios de todos los empleados'
op3msg = 'Inserta 3 para mostrar el número de empleados'
op4msg = 'Inserta 4 para imprimir los nombres de los empleados que superen un sueldo'
op5msg = 'Inserta 5 para subir un 5% los sueldos por debajo de 10000'
op6msg = 'Inserta 6 para mostrar el total de todos los salarios'
op7msg = 'Inserta 7 para salir del programa'
mensaje_menu = f'\n\n{op1msg}\n{op2msg}\n{op3msg}\n{op4msg}\n{op5msg}\n{op6msg}\n{op7msg}\n\n\n'
empleados = []
sueldos = []
print('¡Bienvenido!')
def addEmpleados_1():
empleado = input("\n\nNombre del empleado:")
sueldo = int(input("Sueldo"))
empleados.append(empleado)
sueldos.append(sueldo)
def printEmpleados_2():
print(f'\n\nEmpleado\tSueldo')
print(f'--------\t------')
for i, empleado in enumerate(empleados):
print(f'{empleado}\t\t{sueldos[i]}')
def countEmpleados_3():
print(f'Número de empleados: {len(empleados)}')
def filterSalaries_4():
filtro = int(input("Introducir salario filtro: "))
print(f'Empleados con sueldos mayores de {filtro}')
for i, sueldo in enumerate(sueldos):
if sueldo > filtro:
print(f'{empleados[i]}')
def increaseSalaries_5():
for i, sueldo in enumerate(sueldos):
if sueldo < 10000:
sueldos[i] *= 1.05
print(f'Se han subido los sueldos bajos')
def printTotalSalaries_6():
print(f'El gasto total en sueldos es {sum(sueldos)}€')
while True:
print(mensaje_menu)
option = int(input("\nInserta option:"))
if option == 1:
addEmpleados_1()
if option == 2:
printEmpleados_2()
if option == 3:
countEmpleados_3()
if option == 4:
filterSalaries_4()
if option == 5:
increaseSalaries_5()
if option == 6:
printTotalSalaries_6()
if option == 7:
break
###Output
¡Bienvenido!
Inserta 1 para añadir un nuevo empleado y su salario
Inserta 2 para imprimir los nombres y salarios de todos los empleados
Inserta 3 para mostrar el número de empleados
Inserta 4 para imprimir los nombres de los empleados que superen un sueldo
Inserta 5 para subir un 5% los sueldos por debajo de 10000
Inserta 6 para mostrar el total de todos los salarios
Inserta 7 para salir del programa
###Markdown
(EXTRA) Exercise 10. Write a program that __USES RECURSION__ to decide which is a user's favorite Pixar movie. The program will ask the user which of 2 movies is their favorite, and with that favorite it will ask again which is their favorite between that one and another movie... and so on until the user has decided which one of all of them is their favorite. Hint: if you do not want to change a list, use copy() and you will have a copy that is a different object
###Code
lista_pelis_original = ["Toy Story", "Soul", "Up", "Coco", "Luca", "Monsters", "Inside Out", "WallE", "Buscando a Nemo", "Los increíbles"]
lista_pelis_original = ["Toy Story", "Soul", "Up", "Coco", "Luca", "Monsters", "Inside Out", "WallE", "Buscando a Nemo", "Los increíbles"]
def selecciona_peli(lista_pelis):
if len(lista_pelis) == 1:
print(f'Tu peli favorita es {lista_pelis[0]}')
else:
choices = {1:1,2:0}
while True:
choice = int(input(f'Si prefieres {lista_pelis[0]} marca 1, si prefieres {lista_pelis[1]} marca 2: '))
if choice == 1 or choice == 2:
lista_pelis.pop(choices[choice])
break
selecciona_peli(lista_pelis)
selecciona_peli(lista_pelis_original)
###Output
Si prefieres Toy Story marca 1, si prefieres Soul marca 2: 2
Si prefieres Soul marca 1, si prefieres Up marca 2: 2
Si prefieres Up marca 1, si prefieres Coco marca 2: 2
Si prefieres Coco marca 1, si prefieres Luca marca 2: 1
Si prefieres Coco marca 1, si prefieres Monsters marca 2: 1
Si prefieres Coco marca 1, si prefieres Inside Out marca 2: 1
Si prefieres Coco marca 1, si prefieres WallE marca 2: 1
Si prefieres Coco marca 1, si prefieres Buscando a Nemo marca 2: 1
Si prefieres Coco marca 1, si prefieres Los increíbles marca 2: 1
|
code/temp_analysis_bonus_1_starter.ipynb | ###Markdown
Bonus: Temperature Analysis I
###Code
import pandas as pd
from datetime import datetime as dt
# "tobs" is "temperature observations"
df = pd.read_csv('../Resources/hawaii_measurements.csv')
df.head()
# Convert the date column format from string to datetime
df['date'] = pd.to_datetime(df['date'])
df.head()
# Set the date column as the DataFrame index
df=df.set_index('date')
df.head()
# Drop the date column
# SEE ABOVE
###Output
_____no_output_____
###Markdown
Compare June and December data across all years
###Code
from scipy import stats
# Filter data for desired months
jun_tobs = df[df.index.month.isin([6])]
#print (df)
#jun_tobs.head()
# Identify the average temperature for June
avg_jun_temp = round(jun_tobs['tobs'].mean(),2)
print(f"The average temperature in June is {avg_jun_temp}F.")
# Identify the average temperature for December
dec_tobs = df[df.index.month.isin([12])]
#print (df)
#dec_tobs.head()
avg_dec_temp = round(dec_tobs['tobs'].mean(),2)
print(f"The average temperature in December is {avg_dec_temp}F.")
# Create collections of temperature data
# DONE IN ABOVE STEPS
# Run paired t-test
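# (Note: the call below performs an unpaired, independent two-sample t-test via stats.ttest_ind, since the June and December series are not matched pairs.)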
t, p = stats.ttest_ind(jun_tobs['tobs'],dec_tobs['tobs'])
print("June temps v December temps ttest_ind: t = %g p = %g" % (t, p))
###Output
June temps v December temps ttest_ind: t = 31.6037 p = 3.90251e-191
|
tutorials/EfficientTDNN/SubnetEvaluation.ipynb | ###Markdown
Subnet Evaluation
###Code
import os, sys, random, warnings, time
warnings.filterwarnings("ignore")
import torch
import pandas
sys.path.append('/workspace/projects')
from torch.utils.data import DataLoader
from sugar.transforms import LogMelFbanks
from sugar.models.dynamictdnn import tdnn8m2g
from sugar.models import SpeakerModel, WrappedModel, veri_validate, batch_forward
from sugar.database import Utterance, AugmentedUtterance
from sugar.data.voxceleb1 import veriset
from sugar.data.voxceleb2 import veritrain
from sugar.data.augmentation import augset
from sugar.scores import score_cohorts, asnorm
from sugar.vectors import extract_vectors
from sugar.metrics import calculate_mindcf, calculate_eer
from sugar.utils.utility import bn_state_dict, load_bn_state_dict
def eval_veri(test_loader, network, p_target=0.01, device="cpu", vectors=None):
eer, dcf, vec, scs = veri_validate(test_loader, network, p_target=p_target, device=device, ret_info=True, vectors=vectors)
scs = pandas.DataFrame({'score': scs, 'enroll': test_loader.dataset.enrolls, 'test': test_loader.dataset.tests})
labs = test_loader.dataset.labels
eer = eer[0] * 100
dcf = dcf[0]
return eer, dcf, vec, scs
def eval_asnorm(labs, vec, scs, cohorts, p_target=0.01):
cohorts_o = score_cohorts(cohorts, vec)
asso = asnorm(scs, cohorts_o)
eer_o_asnorm = calculate_eer(labs, asso)[0] * 100
dcf_o_asnorm = calculate_mindcf(labs, asso, p_target=p_target)[0]
return eer_o_asnorm, dcf_o_asnorm
device = 'cuda:1'
###Output
################################################################################
### WARNING, path does not exist: KALDI_ROOT=/mnt/matylda5/iveselyk/Tools/kaldi-trunk
### (please add 'export KALDI_ROOT=<your_path>' in your $HOME/.profile)
### (or run as: KALDI_ROOT=<your_path> python <your_script>.py)
################################################################################
###Markdown
Load Dataset- Train set- Test set
###Code
# vox1_root = "/path/to/voxceleb1/"
# vox2_root = "/path/to/voxceleb2/"
vox1_root = "/workspace/datasets/voxceleb/voxceleb1/"
vox2_root = "/workspace/datasets/voxceleb/voxceleb2/"
# vox2_train = '/path/to/train_list.txt'
vox2_train = '/workspace/datasets/voxceleb/Vox2/train_list.txt'
train, spks = veritrain(vox2_train, rootdir=vox2_root, num_samples=64000)
random.shuffle(train.datalst)
train.datalst = train.datalst[:6000]
aug_wav = augset(num_samples=64000)
trainset = AugmentedUtterance(train, spks, augment=aug_wav, mode='v2+')
train_loader = DataLoader(trainset, batch_size=32, shuffle=True, num_workers=5, drop_last=True)
veritesto = "veri_test2.txt"
veri_testo, veri_teste, veri_testh, wav_files = veriset(
test2=veritesto, all2=None, hard2=None, rootdir=vox1_root, num_samples=64000, num_eval=2)
testo_loader = DataLoader(veri_testo, batch_size=1, shuffle=False, num_workers=0)
###Output
_____no_output_____
###Markdown
Evaluate different subnets- $a_\text{max}$: (4, [512, 512, 512, 512, 512], [5, 5, 5, 5, 5], 1536)- $a_\text{Kmin}$: (4, [512, 512, 512, 512, 512], [1, 1, 1, 1, 1], 1536)- $a_\text{Dmin}$: (2, [512, 512, 512], [1, 1, 1], 1536)- $a_\text{C1min}$: (2, [256, 256, 256], [1, 1, 1], 768)- $a_\text{C2min}$: (2, [128, 128, 128], [1, 1, 1], 384)
###Code
transform = LogMelFbanks(80)
modelarch = tdnn8m2g(80, 192)
model = SpeakerModel(modelarch, transform=transform)
model = WrappedModel(model)
# supernet_path = '/path/to/supernet_checkpoint'
# supernet_path = '/workspace/projects/sugar/examples/nas/exps/exp3/supernet_kernel_width1_width2_depth/checkpoint000064.pth.tar'
supernet_path = '/workspace/projects/sugar/examples/nas/exps/exp3/supernet_depth_kernel_width1_width2/checkpoint000064.pth.tar'
state_dict = torch.load(supernet_path, map_location='cpu')
print(model.load_state_dict(state_dict['state_dict'], strict=False))
model = model.to(device)
model.eval()
import copy
model_bak = copy.deepcopy(model)
configs = [
(4, [512, 512, 512, 512, 512], [5, 5, 5, 5, 5], 1536),
(4, [512, 512, 512, 512, 512], [1, 1, 1, 1, 1], 1536),
(2, [512, 512, 512], [1, 1, 1], 1536),
(2, [256, 256, 256], [1, 1, 1], 768),
(2, [128, 128, 128], [1, 1, 1], 384),
]
for config in configs[1:2]:
model.module.__S__ = model_bak.module.__S__.clone(config)
bn_path = os.path.join(os.path.dirname(supernet_path), f"{config}.bn.pth")
if os.path.exists(bn_path):
load_bn_state_dict(model.module.__S__, torch.load(bn_path, map_location="cpu"))
print(f"loaded state dict from saved batch norm {bn_path}")
time.sleep(1)
else:
batch_forward(train_loader, model, device=device)
torch.save(bn_state_dict(model.module.__S__), bn_path)
print(f"saved batch norm state dict {bn_path}")
time.sleep(1)
eero, dcfo, veco, scso = eval_veri(testo_loader, model, device=device)
print(f'subnet: {config}\nEvaluate on Vox1-O: * EER / DCF {eero:.2f}% / {dcfo:.3f}')
###Output
Forward Model: 100%|██████████| 187/187 [00:08<00:00, 21.22it/s]
|
ucaip-notebooks/model-monitoring/[Prediction_Drift_Detection]_ai_platform_model_monitoring.ipynb | ###Markdown
This is an Experimental release. Experiments are focused on validating a prototype. They are not guaranteed to be released and might be subject to backward-incompatible changes. They are not intended for production use or covered by any SLA, support obligation, or deprecation policy. They are covered by the [Pre-GA Offerings Terms](https://cloud.google.com/terms/service-terms1) of the Google Cloud Platform Terms of Services. Note that this only feature is **only available in [Unified Cloud AI Platform](https://cloud.google.com/ai-platform-unified/docs/start/introduction-unified-platform)**, it is not supported in legacy AI Platform.Please fill out this [form](https://docs.google.com/forms/d/1tniFkxb2BDtpPEatV3hXczPLPqofgpCJCvGZDpQlPFg/edit) to get allowlisted.Google internal users please subscribe [ai-platform-unified-model-monitoring-trusted-tester@googlegroups.com](https://groups.google.com/g/ai-platform-unified-model-monitoring-trusted-tester) for updates (All external customers will be added to this group after filling out above form).If you have any questions or feedback, please send it to [email protected] Tutorial: Model Monitoring in Unified AI Platform (Preview)This tutorial describes the steps to create a model deployment monitoring job for your Endpoint on the next generation of Google’s Cloud AI Platform.The code in this tutorial is tested in Notebook. Before you begin Download and install libraries
###Code
try:
import colab
!pip install --upgrade pip
except:
pass
!pip install tensorflow==2.4.1
import sys
# Confirm that we're using Python 3
assert sys.version_info.major == 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
import tensorflow as tf
print('Installing TensorFlow Data Validation')
!pip install -q tensorflow_data_validation[visualization]
!pip install --upgrade google-cloud-storage google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
###Output
Requirement already satisfied: tensorflow==2.4.1 in /opt/conda/lib/python3.7/site-packages (2.4.1)
Requirement already satisfied: flatbuffers~=1.12.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (1.12)
Requirement already satisfied: typing-extensions~=3.7.4 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (3.7.4.3)
Requirement already satisfied: absl-py~=0.10 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (0.10.0)
Requirement already satisfied: numpy~=1.19.2 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (1.19.5)
Requirement already satisfied: termcolor~=1.1.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (1.1.0)
Requirement already satisfied: wheel~=0.35 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (0.36.2)
Requirement already satisfied: tensorflow-estimator<2.5.0,>=2.4.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (2.4.0)
Collecting grpcio~=1.32.0
Downloading grpcio-1.32.0-cp37-cp37m-manylinux2014_x86_64.whl (3.8 MB)
[K |████████████████████████████████| 3.8 MB 4.8 MB/s eta 0:00:01
[?25hRequirement already satisfied: six~=1.15.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (1.15.0)
Requirement already satisfied: google-pasta~=0.2 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (0.2.0)
Requirement already satisfied: astunparse~=1.6.3 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (1.6.3)
Requirement already satisfied: tensorboard~=2.4 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (2.4.0)
Requirement already satisfied: keras-preprocessing~=1.1.2 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (1.1.2)
Requirement already satisfied: protobuf>=3.9.2 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (3.15.8)
Requirement already satisfied: wrapt~=1.12.1 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (1.12.1)
Requirement already satisfied: gast==0.3.3 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (0.3.3)
Requirement already satisfied: h5py~=2.10.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (2.10.0)
Requirement already satisfied: opt-einsum~=3.3.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow==2.4.1) (3.3.0)
Requirement already satisfied: markdown>=2.6.8 in /opt/conda/lib/python3.7/site-packages (from tensorboard~=2.4->tensorflow==2.4.1) (3.3.4)
Requirement already satisfied: werkzeug>=0.11.15 in /opt/conda/lib/python3.7/site-packages (from tensorboard~=2.4->tensorflow==2.4.1) (1.0.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /opt/conda/lib/python3.7/site-packages (from tensorboard~=2.4->tensorflow==2.4.1) (0.4.3)
Requirement already satisfied: requests<3,>=2.21.0 in /opt/conda/lib/python3.7/site-packages (from tensorboard~=2.4->tensorflow==2.4.1) (2.25.1)
Requirement already satisfied: setuptools>=41.0.0 in /opt/conda/lib/python3.7/site-packages (from tensorboard~=2.4->tensorflow==2.4.1) (49.6.0.post20210108)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from tensorboard~=2.4->tensorflow==2.4.1) (1.8.0)
Installing collected packages: grpcio
  Attempting uninstall: grpcio
    Found existing installation: grpcio 1.37.0
    Uninstalling grpcio-1.37.0:
      Successfully uninstalled grpcio-1.37.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tfx 0.28.0 requires docker<5,>=4.1, but you have docker 5.0.0 which is incompatible.
tfx 0.28.0 requires kubernetes<12,>=10.0.1, but you have kubernetes 12.0.1 which is incompatible.
tfx 0.28.0 requires pyarrow<3,>=1, but you have pyarrow 3.0.0 which is incompatible.
tensorflow-transform 0.28.0 requires pyarrow<3,>=1, but you have pyarrow 3.0.0 which is incompatible.
tensorflow-model-analysis 0.28.0 requires pyarrow<3,>=1, but you have pyarrow 3.0.0 which is incompatible.
tensorflow-data-validation 0.28.0 requires joblib<0.15,>=0.12, but you have joblib 1.0.1 which is incompatible.
tensorflow-data-validation 0.28.0 requires pyarrow<3,>=1, but you have pyarrow 3.0.0 which is incompatible.
explainable-ai-sdk 1.2.1 requires numpy<1.19.0, but you have numpy 1.19.5 which is incompatible.
apache-beam 2.28.0 requires httplib2<0.18.0,>=0.8, but you have httplib2 0.19.1 which is incompatible.
apache-beam 2.28.0 requires pyarrow<3.0.0,>=0.15.1, but you have pyarrow 3.0.0 which is incompatible.
Successfully installed grpcio-1.32.0
Installing TensorFlow Data Validation
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tfx 0.28.0 requires docker<5,>=4.1, but you have docker 5.0.0 which is incompatible.
tfx 0.28.0 requires kubernetes<12,>=10.0.1, but you have kubernetes 12.0.1 which is incompatible.
cloud-tpu-client 0.10 requires google-api-python-client==1.8.0, but you have google-api-python-client 1.12.8 which is incompatible.
Installing collected packages: google-cloud-storage, google-auth-oauthlib, google-api-python-client
  Attempting uninstall: google-cloud-storage
    Found existing installation: google-cloud-storage 1.37.1
    Uninstalling google-cloud-storage-1.37.1:
      Successfully uninstalled google-cloud-storage-1.37.1
  Attempting uninstall: google-auth-oauthlib
    Found existing installation: google-auth-oauthlib 0.4.3
    Uninstalling google-auth-oauthlib-0.4.3:
      Successfully uninstalled google-auth-oauthlib-0.4.3
  Attempting uninstall: google-api-python-client
    Found existing installation: google-api-python-client 1.12.8
    Uninstalling google-api-python-client-1.12.8:
      Successfully uninstalled google-api-python-client-1.12.8
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
aiplatform-pipelines-client 0.1.0.caip20210428 requires google-api-python-client<2,>=1.7.8, but you have google-api-python-client 2.3.0 which is incompatible.
tfx 0.28.0 requires docker<5,>=4.1, but you have docker 5.0.0 which is incompatible.
tfx 0.28.0 requires google-api-python-client<2,>=1.7.8, but you have google-api-python-client 2.3.0 which is incompatible.
tfx 0.28.0 requires kubernetes<12,>=10.0.1, but you have kubernetes 12.0.1 which is incompatible.
tfx-bsl 0.28.1 requires google-api-python-client<2,>=1.7.11, but you have google-api-python-client 2.3.0 which is incompatible.
cloud-tpu-client 0.10 requires google-api-python-client==1.8.0, but you have google-api-python-client 2.3.0 which is incompatible.
Successfully installed google-api-python-client-2.3.0 google-auth-oauthlib-0.4.4 google-cloud-storage-1.38.0
###Markdown
[Colab users only] Restart your Colab runtime and authenticate. If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.
###Code
! gcloud auth login
###Output
_____no_output_____
###Markdown
API client lib preparation
###Code
PYTHON_CLIENT_LIBRARY = 'gs://model_monitoring_python_client_library/aiplatform-v1alpha1-py-02-28.tar.gz'
LOCAL_PYTHON_CLIENT_LIBRARY = './content/python_client_library/aiplatform-v1alpha1-py.tar.gz'
! gsutil cp $PYTHON_CLIENT_LIBRARY $LOCAL_PYTHON_CLIENT_LIBRARY
! pip3 install $LOCAL_PYTHON_CLIENT_LIBRARY
PROJECT_ID = "" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
[Colab users only] Restart your Colab runtime and authenticate. If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages. After restarting the runtime, please authenticate.
###Code
! gcloud auth application-default login
LOCATION = 'us-central1'
API_ENDPOINT = 'us-central1-aiplatform.googleapis.com'
PREDICT_API_ENDPOINT = 'us-central1-prediction-aiplatform.googleapis.com'
###Output
_____no_output_____
###Markdown
Set configuration and Create common libraries
###Code
USER_EMAIL = "" #@param {type:"string"}
JOB_DISPLAY_NAME = "test-job" #@param {type:"string"}
ENDPOINT_RESOURCE_ID = "" #@param {type:"string"}
# We will log your prediction request and response to BigQuery tables.
# Please config the logging sampling rate.
LOG_SAMPLE_RATE = 0.8#@param {type:"number"}
# The Monitoring Interval in seconds,
# for how often we should analyze your data and report anomalies.
MONITOR_INTERVAL_IN_SECONDS = 3600#@param {type:"number"}
# Commonly used create job function.
from google.protobuf.struct_pb2 import Value
from google.protobuf.duration_pb2 import Duration
from google.cloud.aiplatform_v1alpha1.services.job_service import JobServiceClient
from google.cloud.aiplatform_v1alpha1.types.model_monitoring import ThresholdConfig
from google.cloud.aiplatform_v1alpha1.types.model_monitoring import SamplingStrategy
from google.cloud.aiplatform_v1alpha1.types.model_monitoring import ModelMonitoringAlertConfig
from google.cloud.aiplatform_v1alpha1.types.model_monitoring import ModelMonitoringObjectiveConfig
from google.cloud.aiplatform_v1alpha1.types.model_deployment_monitoring_job import ModelDeploymentMonitoringJob
from google.cloud.aiplatform_v1alpha1.types.model_deployment_monitoring_job import ModelDeploymentMonitoringObjectiveConfig
from google.cloud.aiplatform_v1alpha1.types.model_deployment_monitoring_job import ModelDeploymentMonitoringScheduleConfig
import os
def create_model_deployment_monitoring_job(monitoring_objective_configs):
client_options = dict(
api_endpoint = API_ENDPOINT
)
client = JobServiceClient(client_options=client_options)
parent = "projects/{project}/locations/{location}".format(
project=PROJECT_ID, location=LOCATION
)
model_deployment_monitoring_job = ModelDeploymentMonitoringJob(
display_name = JOB_DISPLAY_NAME,
endpoint = 'projects/{}/locations/us-central1/endpoints/{}'.format(PROJECT_ID, ENDPOINT_RESOURCE_ID),
# ModelDeploymentMonitoringObjectiveConfig.
model_deployment_monitoring_objective_configs = monitoring_objective_configs,
# LoggingSamplingStrategy for Serving data.
logging_sampling_strategy = SamplingStrategy(
random_sample_config = SamplingStrategy.RandomSampleConfig(
sample_rate = LOG_SAMPLE_RATE
)
),
# ModelDeploymentMonitoringScheduleConfig
model_deployment_monitoring_schedule_config = ModelDeploymentMonitoringScheduleConfig(
monitor_interval = Duration(
seconds = MONITOR_INTERVAL_IN_SECONDS
)
),
# ModelMonitoringAlertConfig
model_monitoring_alert_config = ModelMonitoringAlertConfig(
email_alert_config = ModelMonitoringAlertConfig.EmailAlertConfig(
user_emails = [USER_EMAIL]
)
),
predict_instance_schema_uri = '',
analysis_instance_schema_uri = ''
)
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=model_deployment_monitoring_job
)
print('Created ModelDeploymentMonitoring Job:')
print(response)
return response
# Define util function to parse threshold strings
DEFAULT_THRESHOLD_VALUE = 0.001
def get_thresholds(default_threshold_str, customized_threshold_str):
    # Features that require drift detection, keyed by feature name.
    thresholds_dict = {}
    default_threshold = ThresholdConfig(value = DEFAULT_THRESHOLD_VALUE)
    # Default values (empty entries are skipped so the field can be left blank).
    for feature_name in default_threshold_str.split(','):
        feature_name = feature_name.strip()
        if not feature_name:
            continue
        thresholds_dict[feature_name] = default_threshold
    # Custom values (empty entries are skipped so the field can be left blank).
    for feature_threshold_pair in customized_threshold_str.split(','):
        feature_threshold_pair = feature_threshold_pair.strip()
        if not feature_threshold_pair:
            continue
        split_pair = feature_threshold_pair.split(':')
        if len(split_pair) != 2:
            print('Invalid custom skew threshold: ' + feature_threshold_pair)
            return
        feature_name = split_pair[0].strip()
        threshold_value = float(split_pair[1])
        threshold = ThresholdConfig(value = threshold_value)
        thresholds_dict[feature_name] = threshold
    return thresholds_dict
# Define util for listing all deployed models.
from google.cloud.aiplatform_v1alpha1.services.endpoint_service import EndpointServiceClient
def get_deployed_model_ids():
client_options = dict(
api_endpoint = API_ENDPOINT
)
client = EndpointServiceClient(client_options=client_options)
parent = "projects/{project}/locations/{location}".format(
project=PROJECT_ID, location=LOCATION
)
response = client.get_endpoint(
name="projects/{}/locations/us-central1/endpoints/{}".format(PROJECT_ID, ENDPOINT_RESOURCE_ID)
)
deployed_model_ids = []
for deployed_model in response.deployed_models:
deployed_model_ids.append(deployed_model.id)
return deployed_model_ids
###Output
_____no_output_____
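###Markdown
As a quick, purely illustrative sanity check of the `get_thresholds` helper defined above, the cell below parses the same kind of threshold strings used later in this notebook and prints the resulting per-feature threshold values.
###Code
# Illustrative only: parse a default-threshold list and a custom-threshold list
# with the helper defined above, then print the parsed threshold values.
example_thresholds = get_thresholds("age,cigsPerDay", "totChol:0.2,sysBP:0.2")
for feature_name, threshold_config in example_thresholds.items():
    print(feature_name, threshold_config.value)
###Output
_____no_output_____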
###Markdown
The model monitoring API monitors each deployed model's skew/drift. For simplicity, we apply the same monitoring config to all models. Below is a util function that returns a list of monitoring configs given a config template.
###Code
import copy
deployed_model_ids = get_deployed_model_ids()
print('Here are the deployed model ids:\n {}'.format(deployed_model_ids))
def copy_monitoring_objective_for_each_model(monitoring_objective_template):
# Use the same objective config for all models.
monitoring_objective_configs = []
for deployed_model_id in deployed_model_ids:
monitoring_objective_config = copy.deepcopy(monitoring_objective_template)
monitoring_objective_config.deployed_model_id = deployed_model_id
monitoring_objective_configs.append(monitoring_objective_config)
return monitoring_objective_configs
###Output
_____no_output_____
###Markdown
Create a Model Monitoring Job. You need to specify the data drift threshold for the features you want to monitor. The whole idea behind the alerting is to check whether a feature's data distribution distance is above the threshold you set. If it is, we will send email alerts to the $USER_EMAIL you specified above. How do we calculate the feature distribution distance? We use the [L-infinity distance](https://en.wikipedia.org/wiki/Chebyshev_distance) for categorical features and the [Jensen-Shannon divergence](https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence) for numerical features. More details are [here](https://www.tensorflow.org/tfx/guide/tfdv#drift_detection). A minimal illustration of these two distances follows the next cell. Below, you just need to specify the features that should use the default threshold (0.001) and the features with customized thresholds. If you don't want to monitor a feature, feel free to leave either of these fields empty.
###Code
FEATURES_ENABLE_DRIFT_DETECTION_WITH_DEFAULT_THRESHOLD = "age,cigsPerDay" #@param {type:"string"}
# If a feature appear in both FEATURES_ENABLE_DRIFT_DETECTION_WITH_DEFAULT_THRESHOLD
# and FEATURES_ENABLE_DRIFT_DETECTION_WITH_CUSTOM_THRESHOLD, we rely on the custom value.
FEATURES_ENABLE_DRIFT_DETECTION_WITH_CUSTOM_THRESHOLD = "totChol:0.2,sysBP:0.2" #@param {type:"string"}
###Output
_____no_output_____
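###Markdown
To build intuition for the distances mentioned above, the cell below is a minimal, self-contained sketch (not part of the monitoring API) that computes the L-infinity distance and the Jensen-Shannon divergence between two made-up feature distributions using NumPy and SciPy.
###Code
import numpy as np
from scipy.spatial import distance
# Two normalized histograms, e.g. a feature's training vs. serving distribution.
p = np.array([0.6, 0.3, 0.1])
q = np.array([0.5, 0.3, 0.2])
# L-infinity distance (used for categorical features): max absolute difference.
l_inf = np.max(np.abs(p - q))
# Jensen-Shannon divergence (used for numerical features). SciPy returns the
# JS *distance*, which is the square root of the divergence, so we square it.
js_divergence = distance.jensenshannon(p, q, base=2) ** 2
print('L-infinity:', l_inf, 'JS divergence:', js_divergence)
###Output
_____no_output_____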
###Markdown
Start the job with prediction drift as the only monitoring objective. Note: if you want to enable monitoring of the feature attribution score (based on the Sampled Shapley method, more details are [here](https://cloud.google.com/ai-platform-unified/docs/explainable-ai)), just change "enable_feature_attributes" to True in monitoring_objective_config_template. Make sure your model is configured with the explanations [requirement](https://cloud.google.com/ai-platform-unified/docs/explainable-ai/configuring-explanations).
###Code
drift_detection_thresholds = get_thresholds(
FEATURES_ENABLE_DRIFT_DETECTION_WITH_DEFAULT_THRESHOLD,
FEATURES_ENABLE_DRIFT_DETECTION_WITH_CUSTOM_THRESHOLD)
monitoring_objective_config_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config = ModelMonitoringObjectiveConfig(
prediction_drift_detection_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds = drift_detection_thresholds
),
explanation_config = ModelMonitoringObjectiveConfig.ExplanationConfig(
enable_feature_attributes = False
)
)
)
monitoring_objective_configs = copy_monitoring_objective_for_each_model(
monitoring_objective_config_template)
monitoring_job = create_model_deployment_monitoring_job(
monitoring_objective_configs)
###Output
_____no_output_____
###Markdown
Prepare fake prediction traffic for testing. In this section, we will help you generate some fake data to send to your Endpoint for prediction. This will generate some logs for us to analyze the feature distributions. Note: each instance needs to follow a key-value pair format, where the key is the feature name and the value is the feature value.
###Code
ENDPOINT = 'projects/{}/locations/us-central1/endpoints/{}'.format(PROJECT_ID, ENDPOINT_RESOURCE_ID)
MONITORING_JOB = monitoring_job.name
print("Endpoint: " + ENDPOINT)
print("Monitoring Job: " + MONITORING_JOB)
from google.cloud.aiplatform_v1alpha1.services.prediction_service import PredictionServiceClient
from google.cloud.aiplatform_v1alpha1.types.prediction_service import PredictRequest
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
import time
def send_predict_request(totChol):
client_options = {
"api_endpoint": PREDICT_API_ENDPOINT
}
client = PredictionServiceClient(client_options=client_options)
instance_dict = {
"age": "30",
"education": "4",
"currentSmoker": "0",
"cigsPerDay": "0",
"BPMeds": "0",
"prevalentStroke": "0",
"prevalentHyp": "0",
"diabetes": "0",
"totChol": totChol,
"sysBP": "106",
"diaBP": "70",
"BMI": "18",
"heartRate": "1",
"glucose": "5",
"TenYearCHD": "0"
}
instance = json_format.ParseDict(instance_dict, Value())
parameters_dict = {}
parameters = json_format.ParseDict(parameters_dict, Value())
instances = [instance]
request = PredictRequest(
endpoint=ENDPOINT,
parameters=parameters
)
request.instances.extend(instances)
    response = client.predict(request)
    return response
# Keep sending the request every one minute. It will last for 2 hours. You can
# cancel it by interrupting the cell.
for i in range(0, 60):
try:
send_predict_request(str(i+100))
except:
print('predict request failed')
time.sleep(60)
for i in range(0, 60):
try:
send_predict_request("100")
except:
print('predict request failed')
time.sleep(60)
###Output
_____no_output_____
###Markdown
While the 2 hours of fake traffic are running, you should receive an email like this: This email has the following key information: 1. Basic information, including the endpoint resource name, the monitoring job resource name, and the statistics/anomalies file path. We will try to visualize the stats using TFDV to diagnose. 2. The feature attribution score CSV path (if you have enabled feature attribution monitoring). The first red box above shows both the training and prediction data feature attribution CSV file paths. This will be used in the following visualization. 3. The exact anomaly information, indicating whether this is training-serving skew or prediction drift. The detailed anomaly information is also shown in the email. More API samples: Update the monitoring config. You can try to update the model monitoring configs with our update API. For example, if you want to update the email alerting config to include more engineers:
###Code
UPDATED_USER_EMAIL_1 = "[email protected]" #@param {type:"string"}
UPDATED_USER_EMAIL_2 = "[email protected]" #@param {type:"string"}
from google.cloud.aiplatform_v1alpha1.services.job_service import JobServiceClient
from google.cloud.aiplatform_v1alpha1.types.model_deployment_monitoring_job import ModelDeploymentMonitoringJob
from google.cloud.aiplatform_v1alpha1.types.model_monitoring import ModelMonitoringAlertConfig
from google.protobuf.field_mask_pb2 import FieldMask
def update_model_deployment_monitoring_job():
client_options = dict(
api_endpoint = API_ENDPOINT
)
client = JobServiceClient(client_options=client_options)
model_deployment_monitoring_job = ModelDeploymentMonitoringJob(
name = MONITORING_JOB,
model_monitoring_alert_config = ModelMonitoringAlertConfig(
email_alert_config = ModelMonitoringAlertConfig.EmailAlertConfig(
user_emails = [
UPDATED_USER_EMAIL_1, # This can be any email domains
UPDATED_USER_EMAIL_2,
]
)
)
)
update_mask = FieldMask(paths = ["model_monitoring_alert_config"])
response = client.update_model_deployment_monitoring_job(
model_deployment_monitoring_job=model_deployment_monitoring_job, update_mask=update_mask
)
print(response)
update_model_deployment_monitoring_job()
###Output
_____no_output_____
###Markdown
Or update the alerting thresholds by setting a higher skew threshold for the age feature and adding the education feature for monitoring with a low threshold:
###Code
from google.cloud.aiplatform_v1alpha1.services.job_service import JobServiceClient
from google.cloud.aiplatform_v1alpha1.types.model_deployment_monitoring_job import ModelDeploymentMonitoringJob
from google.cloud.aiplatform_v1alpha1.types.model_deployment_monitoring_job import ModelDeploymentMonitoringObjectiveConfig
from google.cloud.aiplatform_v1alpha1.types.model_monitoring import ThresholdConfig
from google.cloud.aiplatform_v1alpha1.types.model_monitoring import ModelMonitoringObjectiveConfig
from google.protobuf.field_mask_pb2 import FieldMask
def update_model_deployment_monitoring_job():
client_options = dict(
api_endpoint = API_ENDPOINT
)
client = JobServiceClient(client_options=client_options)
threshold = ThresholdConfig(value = 1e-05)
monitoring_objective_config_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config = ModelMonitoringObjectiveConfig(
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(
dataset = "projects/677687165274/locations/us-central1/datasets/2508579759236055040",
target_field = "male"
),
training_prediction_skew_detection_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds = {
"age": threshold,
"education": threshold
}
),
prediction_drift_detection_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds = {
"age": threshold
}
)
)
)
monitoring_objective_configs = copy_monitoring_objective_for_each_model(
monitoring_objective_config_template)
model_deployment_monitoring_job = ModelDeploymentMonitoringJob(
name = MONITORING_JOB,
model_deployment_monitoring_objective_configs = monitoring_objective_configs
)
update_mask = FieldMask(paths = ["model_deployment_monitoring_objective_configs"])
response = client.update_model_deployment_monitoring_job(
model_deployment_monitoring_job=model_deployment_monitoring_job, update_mask=update_mask
)
print(response)
update_model_deployment_monitoring_job()
###Output
_____no_output_____
###Markdown
Pause a monitoring job. If you do not want to monitor the endpoint and get alerts, you can call the pause API to pause the job. Note: 1. If a scheduled analysis is already running, you may still get an email notification for that analysis, as the pause function will not interrupt the running schedule. After the schedule is finished, there will be no more analysis schedules. 2. Even after the job is paused, logging continues in the prediction service.
###Code
def pause_model_deployment_monitoring_job():
client_options = dict(
api_endpoint = API_ENDPOINT
)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=MONITORING_JOB)
print(response)
pause_model_deployment_monitoring_job()
###Output
_____no_output_____
###Markdown
Delete a paused job
###Code
def delete_model_deployment_monitoring_job():
client_options = dict(
api_endpoint = API_ENDPOINT
)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=MONITORING_JOB)
print(response)
delete_model_deployment_monitoring_job()
###Output
_____no_output_____
###Markdown
Resume a paused job
###Code
def resume_model_deployment_monitoring_job():
client_options = dict(
api_endpoint = API_ENDPOINT
)
client = JobServiceClient(client_options=client_options)
response = client.resume_model_deployment_monitoring_job(name=MONITORING_JOB)
print(response)
resume_model_deployment_monitoring_job()
###Output
_____no_output_____
###Markdown
Get a monitoring job
###Code
def get_model_deployment_monitoring_job():
client_options = dict(
api_endpoint = API_ENDPOINT
)
client = JobServiceClient(client_options=client_options)
response = client.get_model_deployment_monitoring_job(name=MONITORING_JOB)
print(response)
get_model_deployment_monitoring_job()
###Output
_____no_output_____
###Markdown
List monitoring jobs
###Code
def list_model_deployment_monitoring_jobs():
client_options = dict(
api_endpoint = API_ENDPOINT
)
parent = 'projects/{}/locations/us-central1'.format(PROJECT_ID)
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
list_model_deployment_monitoring_jobs()
###Output
_____no_output_____ |
00-PythonLearning/01-Tutorials/python_examples/generator_coroutine.ipynb | ###Markdown
Basics of Python generators and coroutines
###Code
>>> def generator():
... while True:
... yield 1
a = generator()
next(a)
next(a)
next(a)
>>> def round_robin():
... while True:
... yield from [1, 2, 3, 4]
>>> a = round_robin()
>>> a
next(a)
next(a)
next(a)
next(a)
next(a)
next(a)
next(a)
next(a)
next(a)
next(a)
###Output
_____no_output_____
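###Markdown
Rather than calling `next()` by hand, you can pull a finite slice out of an infinite generator with `itertools.islice`, as the short example below shows.
###Code
from itertools import islice
# Take the first ten values from the infinite round_robin generator.
list(islice(round_robin(), 10))
###Output
_____no_output_____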
###Markdown
**Coroutine**
###Code
>>> def coroutine():
... while True:
... val = (yield) # <-----
... print(val)
a = coroutine()
a
next(a)
a.send(1)
a.send('Hello')
a.close()
a
a.send(1)
b = coroutine()
b.send(None)
b.send('None sent')
###Output
None sent
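###Markdown
A common idiom, sketched below, is a small decorator that primes the coroutine for you, so you never forget the initial `next()`/`send(None)` call.
###Code
from functools import wraps
def primed(func):
    """Decorator that advances a coroutine to its first yield automatically."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        gen = func(*args, **kwargs)
        next(gen)  # prime the coroutine so it is ready to receive values
        return gen
    return wrapper
@primed
def printer():
    while True:
        val = (yield)
        print(val)
c = printer()
c.send('no manual priming needed')
c.close()
###Output
_____no_output_____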
###Markdown
Don't forget to `close()`
###Code
b.close()
b
b.send(1)
###Output
_____no_output_____ |
matplotlib/gallery_jupyter/pyplots/annotation_basic.ipynb | ###Markdown
Annotating a plot. This example shows how to annotate a plot with an arrow pointing to provided coordinates. We modify the defaults of the arrow to "shrink" it. For a complete overview of the annotation capabilities, also see the annotation tutorial.
###Code
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
line, = ax.plot(t, s, lw=2)
ax.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
arrowprops=dict(facecolor='black', shrink=0.05),
)
ax.set_ylim(-2, 2)
plt.show()
###Output
_____no_output_____
###Markdown
References. The use of the following functions, methods, classes and modules is shown in this example:
###Code
import matplotlib
matplotlib.axes.Axes.annotate
matplotlib.pyplot.annotate
###Output
_____no_output_____ |
tutorials/4.2 Nice Magics.ipynb | ###Markdown
Nice Magics This section is dedicated to magics you technically could live without, but that you're better off having. Let's jump right in. `%pastebin` for sharing code This is a really lovely feature for quickly sharing code cleanly/easily. Without `%pastebin`, you basically have two options, a screenshot (preserves prettiness, can't be copy/pasted) or copy/pasting code in a chat window (loses prettiness, can be copy/pasted). With `%pastebin` we get the best of both.To use, just type `%pastebin n` where n is the cell you want to share. Say I wished to share my SignalShifting code in a way that could be read, and then copied if necessary  Here `%pastebin 17` is copying the code from cell 17, and loading it to a website where it will last for 7 days, all I need to do is copy and paste that link in whatever chat client I want, and my friend will see this. `%env` to view/set environment variables Nobody likes dealing with environment variables, but every once in a while it's necessary. The documentation on this magic is very well-written and concise.  Note that any changes you make will only persist for the current session, for how to setup permanent environment variables in Jupyter, see [this StackOverflow post](https://stackoverflow.com/a/53595397/5042053) I've set up a series of commands to demonstrate how to use all of the above features
###Code
# show the documentation for %env (a bare %env lists all variables/values)
%env?
# try to look up a non-existent variable
%env newval
# create and initialize a non-existent variable
%env newval = 42
# now we can look it up
%env newval
# let's try to set it to be equal to a python variable `x`
x = 7
%env newval=x
# Oops, looks like it literally set it to the string x
%env newval
# We need to use a dollar sign to trigger interpolation
%env newval=$x
#Looks like that worked!
%env newval
###Output
_____no_output_____
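###Markdown
Under the hood, `%env` reads and writes `os.environ`, so anything you set with the magic is also visible to plain Python code. A quick check:
###Code
import os
# The variable set via %env above is available here as an ordinary string.
os.environ['newval']
###Output
_____no_output_____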
###Markdown
`%store` to pass values between notebooks If you are developing a library using Jupyter Notebooks, you should be using [nbdev by fastai](https://nbdev.fast.ai/), but most of the time, I'm just hacking something together. I know better than to do it all in one giant notebook, but the problem then becomes, I have an output from one notebook, say a notebook for preprocessing, that I want to use as an input in another notebook. Sure, I could manually export it, or pickle it, but with `%store` there's a better way. All you do is call `%store varname` for the variable you want to export, and then `%store -r varname` to bring it to life in another notebook. To use `%store` effectively, finish each notebook by storing the data and values you would like to make available to future notebooks. I'll demonstrate below, but instead of making you open another notebook, we'll just delete the variable, demonstrate it's gone, and then bring it back to life. I encourage you to try it yourself across different notebooks.
###Code
# make a dict {1:'a', 2:'b'...26:'z'} as a fake dataset
def make_dataset():
alpha = 'abcdefghijklmnopqrstuvwxyz'
return {i+1:alpha[i] for i in range(26)}
important_data = make_dataset()
%store important_data
del important_data
important_data[13]
%store -r important_data
important_data[13]
###Output
_____no_output_____
###Markdown
How it works: Behind the scenes, `%store` is using pickle (a built-in library for storing Python objects on disk) to save and load objects. That means you can use `%store` for any pickleable objects. Here's the minimum you need to know about pickle: [Source 1](https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled) [Source 2](https://docs.python.org/3.8/library/pickle.html) Sharing functions across notebooks: Unfortunately pickling notebook functions isn't as easy. There are a number of libraries to help do this like `nbimporter`, but even the creator of that library [Gregor Sturm](https://github.com/grst) now [recommends against its usage.](https://github.com/grst/nbimporter#update-2019-06-i-do-not-recommend-any-more-to-use-nbimporter) His solution, and one I use as well, is to make a `utils.py` file and refactor any functions used in multiple notebooks to be defined there. This is a solution that I only recommend for small projects like a kaggle competition, or a new idea you're experimenting with. If you have so many functions that copying them to a single file seems like a bad solution, use [nbdev by fastai](https://nbdev.fast.ai/). Autoload everything you store: NOTE NEEDS WORK. Note that this tip has potential for misuse. Dumping everything you've ever stored into the state of each notebook you create seems like a bad idea in general, but may be appropriate for a few unique situations. `%config StoreMagics.autorestore=True` Note: this setting doesn't persist; we need to adjust it in the config once I figure that part out.
###Code
%config StoreMagics.autorestore=True
###Output
_____no_output_____
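###Markdown
Since `%store` pickles objects behind the scenes, here is a minimal sketch of the same round trip done with `pickle` directly on the `important_data` dict from earlier, serializing to bytes in memory rather than to IPython's storage directory.
###Code
import pickle
# Serialize the dict to bytes and load it back -- essentially what
# %store does for us, except it also writes the bytes to disk.
payload = pickle.dumps(important_data)
restored = pickle.loads(payload)
restored[13]
###Output
_____no_output_____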
###Markdown
`%hist` for seeing execution history: `%hist` is an alias for `%history` that will show you all commands that have been executed since the kernel last restarted. It is properly indented and without line numbers, so as to be reproducible by copy/pasting to a file. Optional arguments: a range of numbers e.g. `2-5` to see the 2nd to 5th most recent commands, `-n` to see line numbers, `-o` to include outputs, `-t` to translate everything to valid python source code (e.g. `%hist` will appear as `get_ipython().run_line_magic('hist', '')`). The last option is extremely cool as it allows us to demystify all the stuff Jupyter is doing for us under the hood. Let's take a look at how `?` and `??` actually work.
###Code
%hist?
%hist??
%hist -t 1-2
###Output
get_ipython().run_line_magic('pinfo', '%hist')
get_ipython().run_line_magic('pinfo2', '%hist')
###Markdown
This shows us that `?` and `??` are actually just convenient shortcuts to the magics `%pinfo` and `%pinfo2`, which we don't ever need to use because `?` and `??` are so much better! `%%javascript` and other language changing script magics The focus of this book is using Jupyter for Python, but occasionally it is nice to have the ability to run code in other languages. For instance in Jupyter, much of the underlying functionality is written in Javascript, and being able to run javascript from the console allows us to do things like play around with settings or shortcuts to see if they work before we go add them to custom.js to make the change permanent.I first learned about this from [Stas Bekman](https://forums.fast.ai/u/stas) in the excellent [Jupyter Notebook Tips and Tricks Thread](https://forums.fast.ai/t/jupyter-notebook-enhancements-tips-and-tricks/17064/2) on fastai forums. Below is a short example of some javascript that will manipulate Jupyter by executing the next cell, waiting 3 seconds, and inserting another cell below
###Code
%%javascript
Jupyter.notebook.execute_cell()
setTimeout(function(){
Jupyter.notebook.insert_cell_below()
}, 3000)
print("Executed")
###Output
Executed
###Markdown
`%%javascript` is one of many language changing options. Others are- `%%perl`- `%%ruby`- `%%js` (shortcut for javascript)- `%%python` to run in the default python interpreter, or `%%python2`/`%%python3` for specific versions.- `%%bash` - to run a cell in bash as a subprocess.\*\* I'm not actually sure of the functional difference between `%%bash` and `!`, if you do, message me and I'll credit you here. Using `?` with all of these gives no new info, because they rely on the `%%script` magic to run, so if you're having trouble getting one of these language magics to work, that's the best place to look for documentation. Finally, if you look at the list of cell magics using `%lsmagic`, you'll notice there are a number of other examples I left out here, including `%%latex`, `%%markdown`, and `%%html`, that's because Jupyter now will automatically recognize and render all 3 of those, and the magics are no longer needed. `%more` for quickly viewing files in the pager REPLACE WITH NON .PY FILE Do you have a file you want to quickly examine without having to go back to the Jupyter navigation page and open it in a new tab? Use `%more filename`
###Code
%more error.py
###Output
_____no_output_____
###Markdown
`%tb` to see your last error/stack-trace tb is short for traceback. Sometimes you delete cells, or maybe you're in another part of the notebook and you want to see that error again. Bring it back to life with `%tb`
###Code
%tb
###Output
_____no_output_____
###Markdown
`%whos` to see details about local variables Note: If you often import libraries with from foo import *, this won't be useful as the namespace will be too cluttered by those imports, which will be included when using `%whos` `%whos` will print a nice little table to show you what variables are populating the current notebook's namespace. You can also pass in a type (e.g. list) as a filter, and it will only show you variables of that type.There are two alternatives `%who`, which just prints the variable names, and `%who_ls` which gives you back a list of the variable names. I prefer `%whos` for the extra info it provides but I'd recommend trying all three to see what suites you best.
###Code
%whos
%who
%who_ls
###Output
_____no_output_____
###Markdown
Non-magic alternatives: Variable Explorer Extension If you prefer a GUI, you can use the Variable Explorer Extension discussed in Chapter 2. While it won't be very useful if you use `from foo import *` it is a cool alternative to using `%whos` Non-magic alternatives: `locals()` and `dir()` Python offers us `locals()` and `dir()` but these will show variables Jupyter is using internally to track stuff like command history. Use that if you're interested in exploring, but if you just want to see what you've personally set, use `%whos`
###Code
len(locals())
len(dir())
x = %who_ls
len(x)
###Output
_____no_output_____ |
docs/ipynb/structured_data_classification.ipynb | ###Markdown
A Simple Example. The first step is to prepare your data. Here we use the [Titanic dataset](https://www.kaggle.com/c/titanic) as an example.
###Code
import tensorflow as tf
import autokeras as ak
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
###Output
_____no_output_____
###Markdown
The second step is to run the[StructuredDataClassifier](/structured_data_classifier).
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3) # It tries 3 different models.
# Feed the structured data classifier with training data.
clf.fit(
# The path to the train.csv file.
train_file_path,
# The name of the label column.
'survived',
epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_file_path)
# Evaluate the best model with testing data.
print(clf.evaluate(test_file_path, 'survived'))
###Output
_____no_output_____
###Markdown
Data Format. The AutoKeras StructuredDataClassifier is quite flexible for the data format. The example above shows how to use the CSV files directly. Besides CSV files, it also supports numpy.ndarray, pandas.DataFrame or [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable). The data should be two-dimensional with numerical or categorical values. For the classification labels, AutoKeras accepts both plain labels, i.e. strings or integers, and one-hot encoded labels, i.e. vectors of 0s and 1s. The labels can be numpy.ndarray, pandas.DataFrame, or pandas.Series. The following examples show how the data can be prepared with numpy.ndarray, pandas.DataFrame, and tensorflow.data.Dataset.
###Code
import pandas as pd
import numpy as np
# x_train as pandas.DataFrame, y_train as pandas.Series
x_train = pd.read_csv(train_file_path)
print(type(x_train)) # pandas.DataFrame
y_train = x_train.pop('survived')
print(type(y_train)) # pandas.Series
# You can also use pandas.DataFrame for y_train.
y_train = pd.DataFrame(y_train)
print(type(y_train)) # pandas.DataFrame
# You can also use numpy.ndarray for x_train and y_train.
x_train = x_train.to_numpy().astype(np.unicode)
y_train = y_train.to_numpy()
print(type(x_train)) # numpy.ndarray
print(type(y_train)) # numpy.ndarray
# Preparing testing data.
x_test = pd.read_csv(test_file_path)
y_test = x_test.pop('survived')
# It tries 10 different models.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the structured data classifier with training data.
clf.fit(x_train, y_train, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
###Output
_____no_output_____
###Markdown
The following code shows how to convert numpy.ndarray to tf.data.Dataset. Notably, the labels have to be one-hot encoded for multi-class classification to be wrapped into a tensorflow Dataset. Since the Titanic dataset is binary classification, it should not be one-hot encoded.
###Code
train_set = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_set = tf.data.Dataset.from_tensor_slices((x_test.to_numpy().astype(np.unicode), y_test))
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_set, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_set)
# Evaluate the best model with testing data.
print(clf.evaluate(test_set))
###Output
_____no_output_____
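###Markdown
For a multi-class problem, the labels would need to be one-hot encoded before being wrapped into a Dataset. A minimal sketch (the three-class labels below are made up purely for illustration):
###Code
# Hypothetical multi-class labels, one-hot encoded with a Keras utility.
multi_class_labels = np.array([0, 2, 1, 2])
one_hot_labels = tf.keras.utils.to_categorical(multi_class_labels, num_classes=3)
print(one_hot_labels)
###Output
_____no_output_____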
###Markdown
You can also specify the column names and types for the data as follows.The `column_names` is optional if the training data already have the column names, e.g.pandas.DataFrame, CSV file.Any column, whose type is not specified will be inferred from the training data.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
column_names=[
'sex',
'age',
'n_siblings_spouses',
'parch',
'fare',
'class',
'deck',
'embark_town',
'alone'],
column_types={'sex': 'categorical', 'fare': 'numerical'},
max_trials=10, # It tries 10 different models.
overwrite=True,
)
###Output
_____no_output_____
###Markdown
Validation Data. By default, AutoKeras uses the last 20% of the training data as validation data. As shown in the example below, you can use `validation_split` to specify the percentage.
###Code
clf.fit(x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15,
epochs=10)
###Output
_____no_output_____
###Markdown
You can also use your own validation set instead of splitting it from the training data with `validation_data`.
###Code
split = 500
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(x_train,
y_train,
# Use your own validation set.
validation_data=(x_val, y_val),
epochs=10)
###Output
_____no_output_____
###Markdown
Customized Search Space. For advanced users, you may customize your search space by using [AutoModel](/auto_model/automodel-class) instead of [StructuredDataClassifier](/structured_data_classifier). You can configure the [StructuredDataBlock](/block/structureddatablock-class) for some high-level configurations, e.g., `categorical_encoding` for whether to use the [CategoricalToNumerical](/block/categoricaltonumerical-class). You can also leave these arguments unspecified, which would leave the different choices to be tuned automatically. See the following example for detail.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.StructuredDataBlock(categorical_encoding=True)(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=3)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
The usage of [AutoModel](/auto_model/automodel-class) is similar to the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. Basically, you are building a graph whose edges are blocks and whose nodes are intermediate outputs of blocks. Add an edge from `input_node` to `output_node` with `output_node = ak.[some_block]([block_args])(input_node)`. You can also use more fine-grained blocks to customize the search space even further. See the following example.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.DenseBlock()(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=3)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
A Simple Example. The first step is to prepare your data. Here we use the [Titanic dataset](https://www.kaggle.com/c/titanic) as an example. You can download the CSV files [here](https://github.com/keras-team/autokeras/tree/master/tests/fixtures/titanic). The second step is to run the [StructuredDataClassifier](/structured_data_classifier). Replace all the `/path/to` with the path to the csv files.
###Code
import autokeras as ak
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(max_trials=10) # It tries 10 different models.
# Feed the structured data classifier with training data.
clf.fit(
# The path to the train.csv file.
'/path/to/train.csv',
# The name of the label column.
'survived')
# Predict with the best model.
predicted_y = clf.predict('/path/to/eval.csv')
# Evaluate the best model with testing data.
print(clf.evaluate('/path/to/eval.csv', 'survived'))
###Output
_____no_output_____
###Markdown
Data Format. The AutoKeras StructuredDataClassifier is quite flexible for the data format. The example above shows how to use the CSV files directly. Besides CSV files, it also supports numpy.ndarray, pandas.DataFrame or [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable). The data should be two-dimensional with numerical or categorical values. For the classification labels, AutoKeras accepts both plain labels, i.e. strings or integers, and one-hot encoded labels, i.e. vectors of 0s and 1s. The labels can be numpy.ndarray, pandas.DataFrame, or pandas.Series. The following examples show how the data can be prepared with numpy.ndarray, pandas.DataFrame, and tensorflow.data.Dataset.
###Code
import pandas as pd
# x_train as pandas.DataFrame, y_train as pandas.Series
x_train = pd.read_csv('train.csv')
print(type(x_train)) # pandas.DataFrame
y_train = x_train.pop('survived')
print(type(y_train)) # pandas.Series
# You can also use pandas.DataFrame for y_train.
y_train = pd.DataFrame(y_train)
print(type(y_train)) # pandas.DataFrame
# You can also use numpy.ndarray for x_train and y_train.
x_train = x_train.to_numpy()
y_train = y_train.to_numpy()
print(type(x_train)) # numpy.ndarray
print(type(y_train)) # numpy.ndarray
# Preparing testing data.
x_test = pd.read_csv('eval.csv')
y_test = x_test.pop('survived')
# It tries 10 different models.
clf = ak.StructuredDataClassifier(max_trials=10)
# Feed the structured data classifier with training data.
clf.fit(x_train, y_train)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
###Output
_____no_output_____
###Markdown
The following code shows how to convert numpy.ndarray to tf.data.Dataset. Notably, the labels have to be one-hot encoded for multi-class classification to be wrapped into a tensorflow Dataset. Since the Titanic dataset is binary classification, it should not be one-hot encoded.
###Code
import tensorflow as tf
train_set = tf.data.Dataset.from_tensor_slices(((x_train, ), (y_train, )))
test_set = tf.data.Dataset.from_tensor_slices(((x_test, ), (y_test, )))
clf = ak.StructuredDataClassifier(max_trials=10)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_set)
# Predict with the best model.
predicted_y = clf.predict(test_set)
# Evaluate the best model with testing data.
print(clf.evaluate(test_set))
###Output
_____no_output_____
###Markdown
You can also specify the column names and types for the data as follows.The `column_names` is optional if the training data already have the column names, e.g.pandas.DataFrame, CSV file.Any column, whose type is not specified will be inferred from the training data.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
column_names=[
'sex',
'age',
'n_siblings_spouses',
'parch',
'fare',
'class',
'deck',
'embark_town',
'alone'],
column_types={'sex': 'categorical', 'fare': 'numerical'},
max_trials=10, # It tries 10 different models.
)
###Output
_____no_output_____
###Markdown
Validation Data. By default, AutoKeras uses the last 20% of the training data as validation data. As shown in the example below, you can use `validation_split` to specify the percentage.
###Code
clf.fit(x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15)
###Output
_____no_output_____
###Markdown
You can also use your own validation set instead of splitting it from the training data with `validation_data`.
###Code
split = 500
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(x_train,
y_train,
# Use your own validation set.
validation_data=(x_val, y_val))
###Output
_____no_output_____
###Markdown
Customized Search Space. For advanced users, you may customize your search space by using [AutoModel](/auto_model/automodel-class) instead of [StructuredDataClassifier](/structured_data_classifier). You can configure the [StructuredDataBlock](/block/structureddatablock-class) for some high-level configurations, e.g., `categorical_encoding` for whether to use the [CategoricalToNumerical](/preprocessor/categoricaltonumerical-class). You can also leave these arguments unspecified, which would leave the different choices to be tuned automatically. See the following example for detail.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.StructuredDataBlock(
categorical_encoding=True,
block_type='dense')(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(inputs=input_node, outputs=output_node, max_trials=10)
clf.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
The usage of [AutoModel](/auto_model/automodel-class) is similar to the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. Basically, you are building a graph whose edges are blocks and whose nodes are intermediate outputs of blocks. Add an edge from `input_node` to `output_node` with `output_node = ak.[some_block]([block_args])(input_node)`. You can also use more fine-grained blocks to customize the search space even further. See the following example.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.DenseBlock()(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(inputs=input_node, outputs=output_node, max_trials=10)
clf.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
A Simple Example. The first step is to prepare your data. Here we use the [Titanic dataset](https://www.kaggle.com/c/titanic) as an example.
###Code
import tensorflow as tf
import autokeras as ak
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
###Output
_____no_output_____
###Markdown
The second step is to run the[StructuredDataClassifier](/structured_data_classifier).As a quick demo, we set epochs to 10.You can also leave the epochs unspecified for an adaptive number of epochs.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3) # It tries 3 different models.
# Feed the structured data classifier with training data.
clf.fit(
# The path to the train.csv file.
train_file_path,
# The name of the label column.
'survived',
epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_file_path)
# Evaluate the best model with testing data.
print(clf.evaluate(test_file_path, 'survived'))
###Output
_____no_output_____
###Markdown
Data Format. The AutoKeras StructuredDataClassifier is quite flexible for the data format. The example above shows how to use the CSV files directly. Besides CSV files, it also supports numpy.ndarray, pandas.DataFrame or [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable). The data should be two-dimensional with numerical or categorical values. For the classification labels, AutoKeras accepts both plain labels, i.e. strings or integers, and one-hot encoded labels, i.e. vectors of 0s and 1s. The labels can be numpy.ndarray, pandas.DataFrame, or pandas.Series. The following examples show how the data can be prepared with numpy.ndarray, pandas.DataFrame, and tensorflow.data.Dataset.
###Code
import pandas as pd
import numpy as np
# x_train as pandas.DataFrame, y_train as pandas.Series
x_train = pd.read_csv(train_file_path)
print(type(x_train)) # pandas.DataFrame
y_train = x_train.pop('survived')
print(type(y_train)) # pandas.Series
# You can also use pandas.DataFrame for y_train.
y_train = pd.DataFrame(y_train)
print(type(y_train)) # pandas.DataFrame
# You can also use numpy.ndarray for x_train and y_train.
x_train = x_train.to_numpy()
y_train = y_train.to_numpy()
print(type(x_train)) # numpy.ndarray
print(type(y_train)) # numpy.ndarray
# Preparing testing data.
x_test = pd.read_csv(test_file_path)
y_test = x_test.pop('survived')
# It tries 10 different models.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the structured data classifier with training data.
clf.fit(x_train, y_train, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
###Output
_____no_output_____
###Markdown
The following code shows how to convert numpy.ndarray to tf.data.Dataset.
###Code
train_set = tf.data.Dataset.from_tensor_slices((x_train.astype(np.unicode), y_train))
test_set = tf.data.Dataset.from_tensor_slices((x_test.to_numpy().astype(np.unicode), y_test))
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_set, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_set)
# Evaluate the best model with testing data.
print(clf.evaluate(test_set))
###Output
_____no_output_____
###Markdown
You can also specify the column names and types for the data as follows.The `column_names` is optional if the training data already have the column names, e.g.pandas.DataFrame, CSV file.Any column, whose type is not specified will be inferred from the training data.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
column_names=[
'sex',
'age',
'n_siblings_spouses',
'parch',
'fare',
'class',
'deck',
'embark_town',
'alone'],
column_types={'sex': 'categorical', 'fare': 'numerical'},
max_trials=10, # It tries 10 different models.
overwrite=True,
)
###Output
_____no_output_____
###Markdown
Validation Data. By default, AutoKeras uses the last 20% of the training data as validation data. As shown in the example below, you can use `validation_split` to specify the percentage.
###Code
clf.fit(x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15,
epochs=10)
###Output
_____no_output_____
###Markdown
You can also use your own validation set instead of splitting it from the training data with `validation_data`.
###Code
split = 500
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(x_train,
y_train,
# Use your own validation set.
validation_data=(x_val, y_val),
epochs=10)
###Output
_____no_output_____
###Markdown
Customized Search Space. For advanced users, you may customize your search space by using [AutoModel](/auto_model/automodel-class) instead of [StructuredDataClassifier](/structured_data_classifier). You can configure the [StructuredDataBlock](/block/structureddatablock-class) for some high-level configurations, e.g., `categorical_encoding` for whether to use the [CategoricalToNumerical](/block/categoricaltonumerical-class). You can also leave these arguments unspecified, which would leave the different choices to be tuned automatically. See the following example for detail.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.StructuredDataBlock(categorical_encoding=True)(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=3)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
The usage of [AutoModel](/auto_model/automodel-class) is similar to the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. Basically, you are building a graph whose edges are blocks and whose nodes are intermediate outputs of blocks. Add an edge from `input_node` to `output_node` with `output_node = ak.[some_block]([block_args])(input_node)`. You can also use more fine-grained blocks to customize the search space even further. See the following example.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.DenseBlock()(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=1)
clf.fit(x_train, y_train, epochs=1)
clf.predict(x_train)
###Output
_____no_output_____
###Markdown
You can also export the best model found by AutoKeras as a Keras Model.
###Code
model = clf.export_model()
model.summary()
print(x_train.dtype)
# numpy array in object (mixed type) is not supported.
# convert it to unicode.
model.predict(x_train.astype(np.unicode))
###Output
_____no_output_____
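###Markdown
If you want to persist the exported model, it can be saved and reloaded like any Keras model; at the time of writing, AutoKeras ships custom layers, so reloading needs `ak.CUSTOM_OBJECTS`. A minimal sketch (the file name here is an arbitrary choice):
###Code
from tensorflow.keras.models import load_model
# Save the exported pipeline in TensorFlow SavedModel format and load it back.
model.save("model_autokeras", save_format="tf")
loaded_model = load_model("model_autokeras", custom_objects=ak.CUSTOM_OBJECTS)
# str is what the np.unicode alias used above resolves to.
print(loaded_model.predict(x_train.astype(str)))
###Output
_____no_output_____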
###Markdown
A Simple ExampleThe first step is to prepare your data. Here we use the [Titanic dataset](https://www.kaggle.com/c/titanic) as an example.
###Code
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
###Output
_____no_output_____
###Markdown
The second step is to run the [StructuredDataClassifier](/structured_data_classifier). As a quick demo, we set epochs to 10. You can also leave the epochs unspecified for an adaptive number of epochs.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
overwrite=True, max_trials=3
) # It tries 3 different models.
# Feed the structured data classifier with training data.
clf.fit(
# The path to the train.csv file.
train_file_path,
# The name of the label column.
"survived",
epochs=10,
)
# Predict with the best model.
predicted_y = clf.predict(test_file_path)
# Evaluate the best model with testing data.
print(clf.evaluate(test_file_path, "survived"))
###Output
_____no_output_____
###Markdown
Data FormatThe AutoKeras StructuredDataClassifier is quite flexible for the data format. The example above shows how to use the CSV files directly. Besides CSV files, it also supports numpy.ndarray, pandas.DataFrame or [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable). The data should be two-dimensional with numerical or categorical values. For the classification labels, AutoKeras accepts both plain labels, i.e. strings or integers, and one-hot encoded labels, i.e. vectors of 0s and 1s. The labels can be numpy.ndarray, pandas.DataFrame, or pandas.Series. The following examples show how the data can be prepared with numpy.ndarray, pandas.DataFrame, and tensorflow.data.Dataset.
###Code
# x_train as pandas.DataFrame, y_train as pandas.Series
x_train = pd.read_csv(train_file_path)
print(type(x_train)) # pandas.DataFrame
y_train = x_train.pop("survived")
print(type(y_train)) # pandas.Series
# You can also use pandas.DataFrame for y_train.
y_train = pd.DataFrame(y_train)
print(type(y_train)) # pandas.DataFrame
# You can also use numpy.ndarray for x_train and y_train.
x_train = x_train.to_numpy()
y_train = y_train.to_numpy()
print(type(x_train)) # numpy.ndarray
print(type(y_train)) # numpy.ndarray
# Preparing testing data.
x_test = pd.read_csv(test_file_path)
y_test = x_test.pop("survived")
# It tries 10 different models.
clf = ak.StructuredDataClassifier(overwrite=True, max_trials=3)
# Feed the structured data classifier with training data.
clf.fit(x_train, y_train, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
###Output
_____no_output_____
###Markdown
The following code shows how to convert numpy.ndarray to tf.data.Dataset.
###Code
train_set = tf.data.Dataset.from_tensor_slices((x_train.astype(np.unicode), y_train))
test_set = tf.data.Dataset.from_tensor_slices(
(x_test.to_numpy().astype(np.unicode), y_test)
)
clf = ak.StructuredDataClassifier(overwrite=True, max_trials=3)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_set, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_set)
# Evaluate the best model with testing data.
print(clf.evaluate(test_set))
###Output
_____no_output_____
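###Markdown
If you want to sanity-check the datasets before handing them to AutoKeras, the usual tf.data inspection tools apply (an optional check, not required by the classifier):
###Code
# Inspect the feature/label structure and peek at a single example.
print(train_set.element_spec)
for features, label in train_set.take(1):
    print(features.numpy(), label.numpy())
###Output
_____no_output_____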
###Markdown
You can also specify the column names and types for the data as follows. The `column_names` argument is optional if the training data already has the column names, e.g. a pandas.DataFrame or CSV file. Any column whose type is not specified will be inferred from the training data.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
column_names=[
"sex",
"age",
"n_siblings_spouses",
"parch",
"fare",
"class",
"deck",
"embark_town",
"alone",
],
column_types={"sex": "categorical", "fare": "numerical"},
max_trials=10, # It tries 10 different models.
overwrite=True,
)
###Output
_____no_output_____
###Markdown
Validation DataBy default, AutoKeras uses the last 20% of the training data as validation data. As shown in the example below, you can use `validation_split` to specify the percentage.
###Code
clf.fit(
x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15,
epochs=10,
)
###Output
_____no_output_____
###Markdown
You can also use your own validation set instead of splitting it from the training data, by passing it with `validation_data`.
###Code
split = 500
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(
x_train,
y_train,
# Use your own validation set.
validation_data=(x_val, y_val),
epochs=10,
)
###Output
_____no_output_____
###Markdown
Customized Search SpaceFor advanced users, you may customize your search space by using [AutoModel](/auto_model/automodel-class) instead of [StructuredDataClassifier](/structured_data_classifier). You can configure the [StructuredDataBlock](/block/structureddatablock-class) for some high-level configurations, e.g., `categorical_encoding` for whether to use the [CategoricalToNumerical](/block/categoricaltonumerical-class) block. You can also leave these arguments unspecified, which lets the different choices be tuned automatically. See the following example for details.
###Code
input_node = ak.StructuredDataInput()
output_node = ak.StructuredDataBlock(categorical_encoding=True)(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node, outputs=output_node, overwrite=True, max_trials=3
)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
The usage of [AutoModel](/auto_model/automodel-class) is similar to the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. Basically, you are building a graph whose edges are blocks and whose nodes are intermediate outputs of blocks. You add an edge from `input_node` to `output_node` with `output_node = ak.[some_block]([block_args])(input_node)`. You can also use more fine-grained blocks to customize the search space even further. See the following example.
###Code
input_node = ak.StructuredDataInput()
output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.DenseBlock()(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node, outputs=output_node, overwrite=True, max_trials=1
)
clf.fit(x_train, y_train, epochs=1)
clf.predict(x_train)
###Output
_____no_output_____
###Markdown
You can also export the best model found by AutoKeras as a Keras Model.
###Code
model = clf.export_model()
model.summary()
print(x_train.dtype)
# numpy array in object (mixed type) is not supported.
# convert it to unicode.
model.predict(x_train.astype(np.unicode))
###Output
_____no_output_____
###Markdown
A Simple ExampleThe first step is to prepare your data. Here we use the [Titanic dataset](https://www.kaggle.com/c/titanic) as an example.
###Code
import tensorflow as tf
import autokeras as ak
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
###Output
_____no_output_____
###Markdown
The second step is to run the [StructuredDataClassifier](/structured_data_classifier). As a quick demo, we set epochs to 10. You can also leave the epochs unspecified for an adaptive number of epochs.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3) # It tries 3 different models.
# Feed the structured data classifier with training data.
clf.fit(
# The path to the train.csv file.
train_file_path,
# The name of the label column.
'survived',
epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_file_path)
# Evaluate the best model with testing data.
print(clf.evaluate(test_file_path, 'survived'))
###Output
_____no_output_____
###Markdown
Data FormatThe AutoKeras StructuredDataClassifier is quite flexible for the data format. The example above shows how to use the CSV files directly. Besides CSV files, it also supports numpy.ndarray, pandas.DataFrame or [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable). The data should be two-dimensional with numerical or categorical values. For the classification labels, AutoKeras accepts both plain labels, i.e. strings or integers, and one-hot encoded labels, i.e. vectors of 0s and 1s. The labels can be numpy.ndarray, pandas.DataFrame, or pandas.Series. The following examples show how the data can be prepared with numpy.ndarray, pandas.DataFrame, and tensorflow.data.Dataset.
###Code
import pandas as pd
import numpy as np
# x_train as pandas.DataFrame, y_train as pandas.Series
x_train = pd.read_csv(train_file_path)
print(type(x_train)) # pandas.DataFrame
y_train = x_train.pop('survived')
print(type(y_train)) # pandas.Series
# You can also use pandas.DataFrame for y_train.
y_train = pd.DataFrame(y_train)
print(type(y_train)) # pandas.DataFrame
# You can also use numpy.ndarray for x_train and y_train.
x_train = x_train.to_numpy()
y_train = y_train.to_numpy()
print(type(x_train)) # numpy.ndarray
print(type(y_train)) # numpy.ndarray
# Preparing testing data.
x_test = pd.read_csv(test_file_path)
y_test = x_test.pop('survived')
# It tries 10 different models.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the structured data classifier with training data.
clf.fit(x_train, y_train, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
###Output
_____no_output_____
###Markdown
The following code shows how to convert numpy.ndarray to tf.data.Dataset.
###Code
train_set = tf.data.Dataset.from_tensor_slices((x_train.astype(np.unicode), y_train))
test_set = tf.data.Dataset.from_tensor_slices((x_test.to_numpy().astype(np.unicode), y_test))
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_set, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_set)
# Evaluate the best model with testing data.
print(clf.evaluate(test_set))
###Output
_____no_output_____
###Markdown
You can also specify the column names and types for the data as follows. The `column_names` argument is optional if the training data already has the column names, e.g. a pandas.DataFrame or CSV file. Any column whose type is not specified will be inferred from the training data.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
column_names=[
'sex',
'age',
'n_siblings_spouses',
'parch',
'fare',
'class',
'deck',
'embark_town',
'alone'],
column_types={'sex': 'categorical', 'fare': 'numerical'},
max_trials=10, # It tries 10 different models.
overwrite=True,
)
###Output
_____no_output_____
###Markdown
Validation DataBy default, AutoKeras uses the last 20% of the training data as validation data. As shown in the example below, you can use `validation_split` to specify the percentage.
###Code
clf.fit(x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15,
epochs=10)
###Output
_____no_output_____
###Markdown
You can also use your own validation set instead of splitting it from the training data, by passing it with `validation_data`.
###Code
split = 500
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(x_train,
y_train,
# Use your own validation set.
validation_data=(x_val, y_val),
epochs=10)
###Output
_____no_output_____
###Markdown
Customized Search SpaceFor advanced users, you may customize your search space by using [AutoModel](/auto_model/automodel-class) instead of [StructuredDataClassifier](/structured_data_classifier). You can configure the [StructuredDataBlock](/block/structureddatablock-class) for some high-level configurations, e.g., `categorical_encoding` for whether to use the [CategoricalToNumerical](/block/categoricaltonumerical-class) block. You can also leave these arguments unspecified, which lets the different choices be tuned automatically. See the following example for details.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.StructuredDataBlock(categorical_encoding=True)(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=3)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
The usage of [AutoModel](/auto_model/automodel-class) is similar to the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. Basically, you are building a graph whose edges are blocks and whose nodes are intermediate outputs of blocks. You add an edge from `input_node` to `output_node` with `output_node = ak.[some_block]([block_args])(input_node)`. You can also use more fine-grained blocks to customize the search space even further. See the following example.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.DenseBlock()(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=1)
clf.fit(x_train, y_train, epochs=1)
clf.predict(x_train)
###Output
_____no_output_____
###Markdown
You can also export the best model found by AutoKeras as a Keras Model.
###Code
model = clf.export_model()
model.summary()
print(x_train.dtype)
# numpy array in object (mixed type) is not supported.
# convert it to unicode.
model.predict(x_train.astype(np.unicode))
###Output
_____no_output_____
###Markdown
A Simple ExampleThe first step is to prepare your data. Here we use the [Titanic dataset](https://www.kaggle.com/c/titanic) as an example.
###Code
import tensorflow as tf
import autokeras as ak
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
###Output
_____no_output_____
###Markdown
The second step is to run the [StructuredDataClassifier](/structured_data_classifier). As a quick demo, we set epochs to 10. You can also leave the epochs unspecified for an adaptive number of epochs.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3) # It tries 3 different models.
# Feed the structured data classifier with training data.
clf.fit(
# The path to the train.csv file.
train_file_path,
# The name of the label column.
'survived',
epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_file_path)
# Evaluate the best model with testing data.
print(clf.evaluate(test_file_path, 'survived'))
###Output
_____no_output_____
###Markdown
Data FormatThe AutoKeras StructuredDataClassifier is quite flexible for the data format. The example above shows how to use the CSV files directly. Besides CSV files, it also supports numpy.ndarray, pandas.DataFrame or [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable). The data should be two-dimensional with numerical or categorical values. For the classification labels, AutoKeras accepts both plain labels, i.e. strings or integers, and one-hot encoded labels, i.e. vectors of 0s and 1s. The labels can be numpy.ndarray, pandas.DataFrame, or pandas.Series. The following examples show how the data can be prepared with numpy.ndarray, pandas.DataFrame, and tensorflow.data.Dataset.
###Code
import pandas as pd
import numpy as np
# x_train as pandas.DataFrame, y_train as pandas.Series
x_train = pd.read_csv(train_file_path)
print(type(x_train)) # pandas.DataFrame
y_train = x_train.pop('survived')
print(type(y_train)) # pandas.Series
# You can also use pandas.DataFrame for y_train.
y_train = pd.DataFrame(y_train)
print(type(y_train)) # pandas.DataFrame
# You can also use numpy.ndarray for x_train and y_train.
x_train = x_train.to_numpy()
y_train = y_train.to_numpy()
print(type(x_train)) # numpy.ndarray
print(type(y_train)) # numpy.ndarray
# Preparing testing data.
x_test = pd.read_csv(test_file_path)
y_test = x_test.pop('survived')
# It tries 10 different models.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the structured data classifier with training data.
clf.fit(x_train, y_train, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
###Output
_____no_output_____
###Markdown
The following code shows how to convert numpy.ndarray to tf.data.Dataset.
###Code
train_set = tf.data.Dataset.from_tensor_slices((x_train.astype(np.unicode), y_train))
test_set = tf.data.Dataset.from_tensor_slices((x_test.to_numpy().astype(np.unicode), y_test))
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_set, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_set)
# Evaluate the best model with testing data.
print(clf.evaluate(test_set))
###Output
_____no_output_____
###Markdown
You can also specify the column names and types for the data as follows. The `column_names` argument is optional if the training data already has the column names, e.g. a pandas.DataFrame or CSV file. Any column whose type is not specified will be inferred from the training data.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
column_names=[
'sex',
'age',
'n_siblings_spouses',
'parch',
'fare',
'class',
'deck',
'embark_town',
'alone'],
column_types={'sex': 'categorical', 'fare': 'numerical'},
max_trials=10, # It tries 10 different models.
overwrite=True,
)
###Output
_____no_output_____
###Markdown
Validation DataBy default, AutoKeras uses the last 20% of the training data as validation data. As shown in the example below, you can use `validation_split` to specify the percentage.
###Code
clf.fit(x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15,
epochs=10)
###Output
_____no_output_____
###Markdown
You can also use your own validation set instead of splitting it from the training data, by passing it with `validation_data`.
###Code
split = 500
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(x_train,
y_train,
# Use your own validation set.
validation_data=(x_val, y_val),
epochs=10)
###Output
_____no_output_____
###Markdown
Customized Search SpaceFor advanced users, you may customize your search space by using [AutoModel](/auto_model/automodel-class) instead of [StructuredDataClassifier](/structured_data_classifier). You can configure the [StructuredDataBlock](/block/structureddatablock-class) for some high-level configurations, e.g., `categorical_encoding` for whether to use the [CategoricalToNumerical](/block/categoricaltonumerical-class) block. You can also leave these arguments unspecified, which lets the different choices be tuned automatically. See the following example for details.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.StructuredDataBlock(categorical_encoding=True)(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=3)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
The usage of [AutoModel](/auto_model/automodel-class) is similar to the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. Basically, you are building a graph whose edges are blocks and whose nodes are intermediate outputs of blocks. You add an edge from `input_node` to `output_node` with `output_node = ak.[some_block]([block_args])(input_node)`. You can also use more fine-grained blocks to customize the search space even further. See the following example.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.DenseBlock()(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=3)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
A Simple ExampleThe first step is to prepare your data. Here we use the [Titanic dataset](https://www.kaggle.com/c/titanic) as an example.
###Code
import tensorflow as tf
import autokeras as ak
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
###Output
_____no_output_____
###Markdown
The second step is to run the [StructuredDataClassifier](/structured_data_classifier).
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3) # It tries 3 different models.
# Feed the structured data classifier with training data.
clf.fit(
# The path to the train.csv file.
train_file_path,
# The name of the label column.
'survived',
epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_file_path)
# Evaluate the best model with testing data.
print(clf.evaluate(test_file_path, 'survived'))
###Output
_____no_output_____
###Markdown
Data FormatThe AutoKeras StructuredDataClassifier is quite flexible for the data format. The example above shows how to use the CSV files directly. Besides CSV files, it also supports numpy.ndarray, pandas.DataFrame or [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable). The data should be two-dimensional with numerical or categorical values. For the classification labels, AutoKeras accepts both plain labels, i.e. strings or integers, and one-hot encoded labels, i.e. vectors of 0s and 1s. The labels can be numpy.ndarray, pandas.DataFrame, or pandas.Series. The following examples show how the data can be prepared with numpy.ndarray, pandas.DataFrame, and tensorflow.data.Dataset.
###Code
import pandas as pd
import numpy as np
# x_train as pandas.DataFrame, y_train as pandas.Series
x_train = pd.read_csv(train_file_path)
print(type(x_train)) # pandas.DataFrame
y_train = x_train.pop('survived')
print(type(y_train)) # pandas.Series
# You can also use pandas.DataFrame for y_train.
y_train = pd.DataFrame(y_train)
print(type(y_train)) # pandas.DataFrame
# You can also use numpy.ndarray for x_train and y_train.
x_train = x_train.to_numpy().astype(np.unicode)
y_train = y_train.to_numpy()
print(type(x_train)) # numpy.ndarray
print(type(y_train)) # numpy.ndarray
# Preparing testing data.
x_test = pd.read_csv(test_file_path)
y_test = x_test.pop('survived')
# It tries 10 different models.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the structured data classifier with training data.
clf.fit(x_train, y_train, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
###Output
_____no_output_____
###Markdown
The following code shows how to convert numpy.ndarray to tf.data.Dataset. Notably, the labels have to be one-hot encoded for multi-class classification to be wrapped into a tensorflow Dataset. Since the Titanic dataset is binary classification, the labels should not be one-hot encoded.
###Code
train_set = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_set = tf.data.Dataset.from_tensor_slices((x_test.to_numpy().astype(np.unicode), y_test))
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_set, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_set)
# Evaluate the best model with testing data.
print(clf.evaluate(test_set))
###Output
_____no_output_____
###Markdown
You can also specify the column names and types for the data as follows. The `column_names` argument is optional if the training data already has the column names, e.g. a pandas.DataFrame or CSV file. Any column whose type is not specified will be inferred from the training data.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
column_names=[
'sex',
'age',
'n_siblings_spouses',
'parch',
'fare',
'class',
'deck',
'embark_town',
'alone'],
column_types={'sex': 'categorical', 'fare': 'numerical'},
max_trials=10, # It tries 10 different models.
overwrite=True,
)
###Output
_____no_output_____
###Markdown
Validation DataBy default, AutoKeras uses the last 20% of the training data as validation data. As shown in the example below, you can use `validation_split` to specify the percentage.
###Code
clf.fit(x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15,
epochs=10)
###Output
_____no_output_____
###Markdown
You can also use your own validation set instead of splitting it from the training data, by passing it with `validation_data`.
###Code
split = 500
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(x_train,
y_train,
# Use your own validation set.
validation_data=(x_val, y_val),
epochs=10)
###Output
_____no_output_____
###Markdown
Customized Search SpaceFor advanced users, you may customize your search space by using [AutoModel](/auto_model/automodel-class) instead of [StructuredDataClassifier](/structured_data_classifier). You can configure the [StructuredDataBlock](/block/structureddatablock-class) for some high-level configurations, e.g., `categorical_encoding` for whether to use the [CategoricalToNumerical](/block/categoricaltonumerical-class) block. You can also leave these arguments unspecified, which lets the different choices be tuned automatically. See the following example for details.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.StructuredDataBlock(categorical_encoding=True)(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=3)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
The usage of [AutoModel](/auto_model/automodel-class) is similar to the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. Basically, you are building a graph whose edges are blocks and whose nodes are intermediate outputs of blocks. You add an edge from `input_node` to `output_node` with `output_node = ak.[some_block]([block_args])(input_node)`. You can also use more fine-grained blocks to customize the search space even further. See the following example.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.DenseBlock()(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=3)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
A Simple ExampleThe first step is to prepare your data. Here we use the [Titanic dataset](https://www.kaggle.com/c/titanic) as an example.
###Code
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
###Output
_____no_output_____
###Markdown
The second step is to run the [StructuredDataClassifier](/structured_data_classifier). As a quick demo, we set epochs to 10. You can also leave the epochs unspecified for an adaptive number of epochs.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
overwrite=True, max_trials=3
) # It tries 3 different models.
# Feed the structured data classifier with training data.
clf.fit(
# The path to the train.csv file.
train_file_path,
# The name of the label column.
"survived",
epochs=10,
)
# Predict with the best model.
predicted_y = clf.predict(test_file_path)
# Evaluate the best model with testing data.
print(clf.evaluate(test_file_path, "survived"))
###Output
_____no_output_____
###Markdown
Data FormatThe AutoKeras StructuredDataClassifier is quite flexible for the data format. The example above shows how to use the CSV files directly. Besides CSV files, it also supports numpy.ndarray, pandas.DataFrame or [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable). The data should be two-dimensional with numerical or categorical values. For the classification labels, AutoKeras accepts both plain labels, i.e. strings or integers, and one-hot encoded labels, i.e. vectors of 0s and 1s. The labels can be numpy.ndarray, pandas.DataFrame, or pandas.Series. The following examples show how the data can be prepared with numpy.ndarray, pandas.DataFrame, and tensorflow.data.Dataset.
###Code
# x_train as pandas.DataFrame, y_train as pandas.Series
x_train = pd.read_csv(train_file_path)
print(type(x_train)) # pandas.DataFrame
y_train = x_train.pop("survived")
print(type(y_train)) # pandas.Series
# You can also use pandas.DataFrame for y_train.
y_train = pd.DataFrame(y_train)
print(type(y_train)) # pandas.DataFrame
# You can also use numpy.ndarray for x_train and y_train.
x_train = x_train.to_numpy()
y_train = y_train.to_numpy()
print(type(x_train)) # numpy.ndarray
print(type(y_train)) # numpy.ndarray
# Preparing testing data.
x_test = pd.read_csv(test_file_path)
y_test = x_test.pop("survived")
# It tries 10 different models.
clf = ak.StructuredDataClassifier(overwrite=True, max_trials=3)
# Feed the structured data classifier with training data.
clf.fit(x_train, y_train, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
###Output
_____no_output_____
###Markdown
The following code shows how to convert numpy.ndarray to tf.data.Dataset.
###Code
train_set = tf.data.Dataset.from_tensor_slices((x_train.astype(np.unicode), y_train))
test_set = tf.data.Dataset.from_tensor_slices(
(x_test.to_numpy().astype(np.unicode), y_test)
)
clf = ak.StructuredDataClassifier(overwrite=True, max_trials=3)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_set, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_set)
# Evaluate the best model with testing data.
print(clf.evaluate(test_set))
###Output
_____no_output_____
###Markdown
You can also specify the column names and types for the data as follows. The `column_names` argument is optional if the training data already has the column names, e.g. a pandas.DataFrame or CSV file. Any column whose type is not specified will be inferred from the training data.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
column_names=[
"sex",
"age",
"n_siblings_spouses",
"parch",
"fare",
"class",
"deck",
"embark_town",
"alone",
],
column_types={"sex": "categorical", "fare": "numerical"},
max_trials=10, # It tries 10 different models.
overwrite=True,
)
###Output
_____no_output_____
###Markdown
Validation DataBy default, AutoKeras uses the last 20% of the training data as validation data. As shown in the example below, you can use `validation_split` to specify the percentage.
###Code
clf.fit(
x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15,
epochs=10,
)
###Output
_____no_output_____
###Markdown
You can also use your own validation set instead of splitting it from the training data, by passing it with `validation_data`.
###Code
split = 500
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(
x_train,
y_train,
# Use your own validation set.
validation_data=(x_val, y_val),
epochs=10,
)
###Output
_____no_output_____
###Markdown
Customized Search SpaceFor advanced users, you may customize your search space by using [AutoModel](/auto_model/automodel-class) instead of [StructuredDataClassifier](/structured_data_classifier). You can configure the [StructuredDataBlock](/block/structureddatablock-class) for some high-level configurations, e.g., `categorical_encoding` for whether to use the [CategoricalToNumerical](/block/categoricaltonumerical-class) block. You can also leave these arguments unspecified, which lets the different choices be tuned automatically. See the following example for details.
###Code
input_node = ak.StructuredDataInput()
output_node = ak.StructuredDataBlock(categorical_encoding=True)(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node, outputs=output_node, overwrite=True, max_trials=3
)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
The usage of [AutoModel](/auto_model/automodel-class) is similar to the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. Basically, you are building a graph whose edges are blocks and whose nodes are intermediate outputs of blocks. You add an edge from `input_node` to `output_node` with `output_node = ak.[some_block]([block_args])(input_node)`. You can also use more fine-grained blocks to customize the search space even further. See the following example.
###Code
input_node = ak.StructuredDataInput()
output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.DenseBlock()(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node, outputs=output_node, overwrite=True, max_trials=1
)
clf.fit(x_train, y_train, epochs=1)
clf.predict(x_train)
###Output
_____no_output_____
###Markdown
You can also export the best model found by AutoKeras as a Keras Model.
###Code
model = clf.export_model()
model.summary()
print(x_train.dtype)
# numpy array in object (mixed type) is not supported.
# convert it to unicode.
model.predict(x_train.astype(np.unicode))
###Output
_____no_output_____
###Markdown
A Simple ExampleThe first step is to prepare your data. Here we use the [Titanic dataset](https://www.kaggle.com/c/titanic) as an example.
###Code
import tensorflow as tf
import autokeras as ak
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
###Output
_____no_output_____
###Markdown
The second step is to run the [StructuredDataClassifier](/structured_data_classifier). As a quick demo, we set epochs to 10. You can also leave the epochs unspecified for an adaptive number of epochs.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3) # It tries 3 different models.
# Feed the structured data classifier with training data.
clf.fit(
# The path to the train.csv file.
train_file_path,
# The name of the label column.
'survived',
epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_file_path)
# Evaluate the best model with testing data.
print(clf.evaluate(test_file_path, 'survived'))
###Output
_____no_output_____
###Markdown
Data FormatThe AutoKeras StructuredDataClassifier is quite flexible for the data format. The example above shows how to use the CSV files directly. Besides CSV files, it also supports numpy.ndarray, pandas.DataFrame or [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable). The data should be two-dimensional with numerical or categorical values. For the classification labels, AutoKeras accepts both plain labels, i.e. strings or integers, and one-hot encoded labels, i.e. vectors of 0s and 1s. The labels can be numpy.ndarray, pandas.DataFrame, or pandas.Series. The following examples show how the data can be prepared with numpy.ndarray, pandas.DataFrame, and tensorflow.data.Dataset.
###Code
import pandas as pd
import numpy as np
# x_train as pandas.DataFrame, y_train as pandas.Series
x_train = pd.read_csv(train_file_path)
print(type(x_train)) # pandas.DataFrame
y_train = x_train.pop('survived')
print(type(y_train)) # pandas.Series
# You can also use pandas.DataFrame for y_train.
y_train = pd.DataFrame(y_train)
print(type(y_train)) # pandas.DataFrame
# You can also use numpy.ndarray for x_train and y_train.
x_train = x_train.to_numpy()
y_train = y_train.to_numpy()
print(type(x_train)) # numpy.ndarray
print(type(y_train)) # numpy.ndarray
# Preparing testing data.
x_test = pd.read_csv(test_file_path)
y_test = x_test.pop('survived')
# It tries 10 different models.
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the structured data classifier with training data.
clf.fit(x_train, y_train, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
###Output
_____no_output_____
###Markdown
The following code shows how to convert numpy.ndarray to tf.data.Dataset.
###Code
train_set = tf.data.Dataset.from_tensor_slices((x_train.astype(np.unicode), y_train))
test_set = tf.data.Dataset.from_tensor_slices((x_test.to_numpy().astype(np.unicode), y_test))
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_set, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(test_set)
# Evaluate the best model with testing data.
print(clf.evaluate(test_set))
###Output
_____no_output_____
###Markdown
You can also specify the column names and types for the data as follows. The `column_names` argument is optional if the training data already has the column names, e.g. a pandas.DataFrame or CSV file. Any column whose type is not specified will be inferred from the training data.
###Code
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
column_names=[
'sex',
'age',
'n_siblings_spouses',
'parch',
'fare',
'class',
'deck',
'embark_town',
'alone'],
column_types={'sex': 'categorical', 'fare': 'numerical'},
max_trials=10, # It tries 10 different models.
overwrite=True,
)
###Output
_____no_output_____
###Markdown
Validation DataBy default, AutoKeras uses the last 20% of the training data as validation data. As shown in the example below, you can use `validation_split` to specify the percentage.
###Code
clf.fit(x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15,
epochs=10)
###Output
_____no_output_____
###Markdown
You can also use your own validation set instead of splitting it from the training data, by passing it with `validation_data`.
###Code
split = 500
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(x_train,
y_train,
# Use your own validation set.
validation_data=(x_val, y_val),
epochs=10)
###Output
_____no_output_____
###Markdown
Customized Search SpaceFor advanced users, you may customize your search space by using [AutoModel](/auto_model/automodel-class) instead of [StructuredDataClassifier](/structured_data_classifier). You can configure the [StructuredDataBlock](/block/structureddatablock-class) for some high-level configurations, e.g., `categorical_encoding` for whether to use the [CategoricalToNumerical](/block/categoricaltonumerical-class) block. You can also leave these arguments unspecified, which lets the different choices be tuned automatically. See the following example for details.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.StructuredDataBlock(categorical_encoding=True)(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=3)
clf.fit(x_train, y_train, epochs=10)
###Output
_____no_output_____
###Markdown
The usage of [AutoModel](/auto_model/automodel-class) is similar to the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. Basically, you are building a graph whose edges are blocks and whose nodes are intermediate outputs of blocks. You add an edge from `input_node` to `output_node` with `output_node = ak.[some_block]([block_args])(input_node)`. You can also use more fine-grained blocks to customize the search space even further. See the following example.
###Code
import autokeras as ak
input_node = ak.StructuredDataInput()
output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.DenseBlock()(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node,
outputs=output_node,
overwrite=True,
max_trials=1)
clf.fit(x_train, y_train, epochs=1)
clf.predict(x_train)
###Output
_____no_output_____
###Markdown
You can also export the best model found by AutoKeras as a Keras Model.
###Code
model = clf.export_model()
model.summary()
print(x_train.dtype)
# numpy array in object (mixed type) is not supported.
# convert it to unicode.
model.predict(x_train.astype(np.unicode))
###Output
_____no_output_____ |
Capstone_Project.ipynb | ###Markdown
Coursera Capstone Project This notebook is intended for completing the capstone project of the Coursera course.
###Code
import pandas as pd
import numpy as np
print ("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
###Markdown
Capstone Project Image classifier for the SVHN dataset InstructionsIn this notebook, you will create a neural network that classifies real-world images of digits. You will use concepts from throughout this course in building, training, testing, validating and saving your TensorFlow classifier model. This project is peer-assessed. Within this notebook you will find instructions in each section for how to complete the project. Pay close attention to the instructions as the peer review will be carried out according to a grading rubric that checks key parts of the project instructions. Feel free to add extra cells into the notebook as required. How to submitWhen you have completed the Capstone project notebook, you will submit a pdf of the notebook for peer review. First ensure that the notebook has been fully executed from beginning to end, and all of the cell outputs are visible. This is important, as the grading rubric depends on the reviewer being able to view the outputs of your notebook. Save the notebook as a pdf (you could download the notebook with File -> Download .ipynb, open the notebook locally, and then File -> Download as -> PDF via LaTeX), and then submit this pdf for review. Let's get started!We'll start by running some imports, and loading the dataset. For this project you are free to make further imports throughout the notebook as you wish.
###Code
import tensorflow as tf
from scipy.io import loadmat
import numpy as np
###Output
_____no_output_____
###Markdown
For the capstone project, you will use the [SVHN dataset](http://ufldl.stanford.edu/housenumbers/). This is an image dataset of over 600,000 digit images in all, and is a harder dataset than MNIST as the numbers appear in the context of natural scene images. SVHN is obtained from house numbers in Google Street View images. * Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu and A. Y. Ng. "Reading Digits in Natural Images with Unsupervised Feature Learning". NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. The train and test datasets required for this project can be downloaded from [here](http://ufldl.stanford.edu/housenumbers/train.tar.gz) and [here](http://ufldl.stanford.edu/housenumbers/test.tar.gz). Once unzipped, you will have two files: `train_32x32.mat` and `test_32x32.mat`. You should store these files in Drive for use in this Colab notebook. Your goal is to develop an end-to-end workflow for building, training, validating, evaluating and saving a neural network that classifies a real-world image into one of ten classes.
###Code
# Run this cell to connect to your Drive folder
from google.colab import drive
drive.mount('/content/gdrive')
# Load the dataset from your Drive folder
train = loadmat('/content/gdrive/My Drive/SVNH/train/train_32x32.mat')
test = loadmat('/content/gdrive/My Drive/SVNH/test/test_32x32.mat')
###Output
_____no_output_____
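###Markdown
Before preprocessing, it is worth confirming the array layout returned by `loadmat`: the images are stored as (height, width, channels, num_samples), with the sample index on the last axis. A quick check, assuming the cells above ran successfully:
###Code
# Later cells use np.moveaxis to bring the sample axis to the front.
print(train['X'].shape, train['y'].shape)
print(test['X'].shape, test['y'].shape)
###Output
_____no_output_____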
###Markdown
Both `train` and `test` are dictionaries with keys `X` and `y` for the input images and labels respectively. 1. Inspect and preprocess the dataset* Extract the training and testing images and labels separately from the train and test dictionaries loaded for you.* Select a random sample of images and corresponding labels from the dataset (at least 10), and display them in a figure.* Convert the training and test images to grayscale by taking the average across all colour channels for each pixel. _Hint: retain the channel dimension, which will now have size 1._* Select a random sample of the grayscale images and corresponding labels from the dataset (at least 10), and display them in a figure.
###Code
import matplotlib.pyplot as plt
import random
for i in range(0, 10):
    # random.randint is inclusive on both ends, so subtract 1 to stay within the sample axis.
    r = random.randint(0, train['X'].shape[-1] - 1)
    image = train['X'][:, :, :, r]
    print(train['X'][0, 0, 0, r])
    print(train['X'][0, 0, 1, r])
    print(train['X'][0, 0, 2, r])
    # Average across the colour channels to get a single grayscale value per pixel.
    image_mean = np.average(image, axis=2)
    print(image_mean[0][0])
    imgplot = plt.imshow(image_mean)
    plt.show()
    print('labels %s' % train['y'][r])
###Output
165
171
183
173.0
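###Markdown
The loop above averages one image at a time; a vectorised grayscale conversion that retains the channel dimension (as the instructions require) could look like the sketch below, assuming `train` and `test` still hold the raw arrays:
###Code
# Average over the colour-channel axis (axis 2 of the (32, 32, 3, N) arrays),
# keep a singleton channel dimension, and move the sample axis to the front.
x_train_gray = np.moveaxis(np.mean(train['X'], axis=2, keepdims=True), -1, 0)
x_test_gray = np.moveaxis(np.mean(test['X'], axis=2, keepdims=True), -1, 0)
print(x_train_gray.shape, x_test_gray.shape)  # (N, 32, 32, 1)
# Display one grayscale sample to verify the conversion.
plt.imshow(x_train_gray[0, :, :, 0], cmap='gray')
plt.show()
###Output
_____no_output_____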
###Markdown
2. MLP neural network classifier* Build an MLP classifier model using the Sequential API. Your model should use only Flatten and Dense layers, with the final layer having a 10-way softmax output. * You should design and build the model yourself. Feel free to experiment with different MLP architectures. _Hint: to achieve a reasonable accuracy you won't need to use more than 4 or 5 layers._* Print out the model summary (using the summary() method)* Compile and train the model (we recommend a maximum of 30 epochs), making use of both training and validation sets during the training run. * Your model should track at least one appropriate metric, and use at least two callbacks during training, one of which should be a ModelCheckpoint callback.* As a guide, you should aim to achieve a final categorical cross entropy training loss of less than 1.0 (the validation loss might be higher).* Plot the learning curves for loss vs epoch and accuracy vs epoch for both training and validation sets.* Compute and display the loss and accuracy of the trained model on the test set.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
nn_model = Sequential([Flatten(input_shape=(32,32,3)), Dense( 32, activation='relu'), Dense( 20, activation='relu'),Dense( 14, activation='relu'), Dense( 16, activation='relu'), Dense( 20, activation='relu'),Dense( 10, activation='softmax')])
nn_model.summary()
nn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
train['y'] = train['y'].reshape(len(train['y']))
train['y'] = train['y'] -1
test['y'] = test['y'].reshape(len(test['y']))
test['y'] = test['y'] -1
train['y'].shape
test['y'].shape
nn_history = nn_model.fit(np.moveaxis(train['X'], -1,0), train['y'], epochs=10, batch_size=256, callbacks=[EarlyStopping(monitor='accuracy')], validation_data=(np.moveaxis(test['X'],-1,0), test['y']))
###Output
Epoch 1/10
287/287 [==============================] - 3s 9ms/step - loss: 1.7939 - accuracy: 0.3820 - val_loss: 2.0934 - val_accuracy: 0.3077
Epoch 2/10
287/287 [==============================] - 3s 9ms/step - loss: 1.7684 - accuracy: 0.3984 - val_loss: 1.9307 - val_accuracy: 0.3550
Epoch 3/10
287/287 [==============================] - 3s 9ms/step - loss: 1.7260 - accuracy: 0.4136 - val_loss: 1.9412 - val_accuracy: 0.3576
Epoch 4/10
287/287 [==============================] - 3s 9ms/step - loss: 1.7116 - accuracy: 0.4155 - val_loss: 1.7796 - val_accuracy: 0.4005
Epoch 5/10
287/287 [==============================] - 3s 9ms/step - loss: 1.6923 - accuracy: 0.4260 - val_loss: 1.9028 - val_accuracy: 0.3816
Epoch 6/10
287/287 [==============================] - 4s 13ms/step - loss: 1.6669 - accuracy: 0.4314 - val_loss: 1.8079 - val_accuracy: 0.4078
Epoch 7/10
287/287 [==============================] - 4s 15ms/step - loss: 1.6591 - accuracy: 0.4361 - val_loss: 1.7143 - val_accuracy: 0.4324
Epoch 8/10
287/287 [==============================] - 3s 10ms/step - loss: 1.6519 - accuracy: 0.4384 - val_loss: 1.8924 - val_accuracy: 0.3850
Epoch 9/10
287/287 [==============================] - 3s 9ms/step - loss: 1.6712 - accuracy: 0.4314 - val_loss: 1.7600 - val_accuracy: 0.4087
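###Markdown
The MLP above trains on raw 0-255 pixel values; rescaling the inputs to [0, 1] before fitting is a common tweak that may help convergence. A hedged sketch, not part of the original training run:
###Code
# Scale the images to [0, 1]; the same model and compile settings could then be reused.
x_train_scaled = np.moveaxis(train['X'], -1, 0).astype(np.float32) / 255.0
x_test_scaled = np.moveaxis(test['X'], -1, 0).astype(np.float32) / 255.0
# e.g. nn_model.fit(x_train_scaled, train['y'], epochs=10, batch_size=256,
#                   validation_data=(x_test_scaled, test['y']))
###Output
_____no_output_____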
###Markdown
3. CNN neural network classifier* Build a CNN classifier model using the Sequential API. Your model should use the Conv2D, MaxPool2D, BatchNormalization, Flatten, Dense and Dropout layers. The final layer should again have a 10-way softmax output. * You should design and build the model yourself. Feel free to experiment with different CNN architectures. _Hint: to achieve a reasonable accuracy you won't need to use more than 2 or 3 convolutional layers and 2 fully connected layers.)_* The CNN model should use fewer trainable parameters than your MLP model.* Compile and train the model (we recommend a maximum of 30 epochs), making use of both training and validation sets during the training run.* Your model should track at least one appropriate metric, and use at least two callbacks during training, one of which should be a ModelCheckpoint callback.* You should aim to beat the MLP model performance with fewer parameters!* Plot the learning curves for loss vs epoch and accuracy vs epoch for both training and validation sets.* Compute and display the loss and accuracy of the trained model on the test set.
###Code
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, BatchNormalization
cnn_model = Sequential()
cnn_model.add(Conv2D(16,(3,3),activation='relu',input_shape=(32,32,3) ))
cnn_model.add(MaxPooling2D(2,2))
cnn_model.add(BatchNormalization())
cnn_model.add(Conv2D(10,(3,3),activation='relu', ))
cnn_model.add(MaxPooling2D(3,3))
cnn_model.add(Flatten())
cnn_model.add(Dense(30))
cnn_model.add(Dropout(0.3))
cnn_model.add(Dense(20))
cnn_model.add(Dense(10, activation='softmax'))
cnn_model.summary()
cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = cnn_model.fit(np.moveaxis(train['X'],-1,0), train['y'], validation_data=(np.moveaxis(test['X'],-1,0), test['y']), epochs=10, batch_size=256,callbacks=[ModelCheckpoint(filepath='.'), EarlyStopping(monitor='accuracy')])
import pandas as pd
frame = pd.DataFrame(history.history)
nn_frame=pd.DataFrame(nn_history.history)
epochs = np.arange(len(frame))
nn_epochs = np.arange(len(nn_frame))
fig = plt.figure(figsize=(14,6))
ax = fig.add_subplot(121)
ax.plot(epochs, frame['loss'], label='Loss')
ax.plot(epochs, frame['val_loss'], label='Validation Loss')
ax.plot(nn_epochs, nn_frame['loss'], label='NN Loss')
ax.plot(nn_epochs, nn_frame['val_loss'], label='NN Validation Loss')
ax.set_xlabel('Epochs')
ax.set_ylabel('Loss')
ax.set_title('Loss vs Epochs')
ax.legend()
ay = fig.add_subplot(122)
ay.plot(epochs, frame['accuracy'], label='Accuracy')
ay.plot(epochs, frame['val_accuracy'], label='Val Accuracy')
ay.plot(nn_epochs, nn_frame['accuracy'], label='NN Accuracy')
ay.plot(nn_epochs, nn_frame['val_accuracy'], label='NN Val Accuracy')
ay.set_xlabel('Epochs')
ay.set_ylabel('Accuracy')
ay.set_title('Accuracy vs Epochs')
ay.legend()
###Output
_____no_output_____
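###Markdown
The rubric asks for a ModelCheckpoint callback that saves the best weights. `ModelCheckpoint(filepath='.')` above writes the full model into the working directory; a variant that keeps only the best weights (the file name below is a hypothetical choice) might look like this:
###Code
# Save only the best weights, judged by validation accuracy.
best_cnn_checkpoint = ModelCheckpoint(
    filepath='best_cnn.weights.h5',  # hypothetical path; reuse it when loading the weights later
    save_weights_only=True,
    save_best_only=True,
    monitor='val_accuracy')
# Pass it in the callbacks list of cnn_model.fit(...) alongside EarlyStopping.
###Output
_____no_output_____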
###Markdown
4. Get model predictions* Load the best weights for the MLP and CNN models that you saved during the training run.* Randomly select 5 images and corresponding labels from the test set and display the images with their labels.* Alongside the image and label, show each model’s predictive distribution as a bar chart, and the final model prediction given by the label with maximum probability.
###Code
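# A minimal sketch for this section. Assumption: best weights were saved during training,
# e.g. via ModelCheckpoint callbacks; adjust these hypothetical paths to your own files.
# nn_model.load_weights('best_mlp.weights.h5')
# cnn_model.load_weights('best_cnn.weights.h5')
num_samples = 5
idx = np.random.choice(test['X'].shape[-1], num_samples, replace=False)
images = np.moveaxis(test['X'], -1, 0)[idx]
labels = test['y'][idx]
# Softmax outputs give each model's predictive distribution over the 10 classes.
mlp_probs = nn_model.predict(images)
cnn_probs = cnn_model.predict(images)
fig, axes = plt.subplots(num_samples, 3, figsize=(12, 3 * num_samples))
for i in range(num_samples):
    axes[i, 0].imshow(images[i])
    axes[i, 0].set_title('Label: %d' % labels[i])
    axes[i, 0].axis('off')
    axes[i, 1].bar(np.arange(10), mlp_probs[i])
    axes[i, 1].set_title('MLP prediction: %d' % np.argmax(mlp_probs[i]))
    axes[i, 2].bar(np.arange(10), cnn_probs[i])
    axes[i, 2].set_title('CNN prediction: %d' % np.argmax(cnn_probs[i]))
plt.tight_layout()
plt.show()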
###Output
_____no_output_____
###Markdown
IBM DATA SCIENCE PROFESSIONAL CERTIFICATE CAPSTONE PROJECT NOTEBOOK: This notebook will be mainly used for the capstone project.- _All the problem definitions and the models for the solutions will be uploaded in this notebook_
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
###Markdown
This notebook will be mainly used for the capstone project.
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
###Markdown
1. Business Understanding I decided to explore the Obesity Rate Among Adults Dataset from 1975 - 2016 collected by the World Health Organization. With the ongoing Coronavirus pandemic, the CDC lists obesity as a risk factor for the virus. Obesity is a medical condition that is characterized by excess body fat and a BMI of 30+. Obesity is the leading preventable cause of death in the world. Obesity is a worldwide disease that is prevalent in children and adults and I wanted to see the correlation between year, location, and sex. I downloaded the dataset from Kaggle. I will answer five questions from this dataset.They are the following:1. Which country has the highest obesity rate in the world and in what year for males? For females? For both sexes? Which country has the lowest obesity rate in the world and in what year for males? For females? For both sexes?2. Which five different countries have the highest obesity rates in the world and in what year for males? For females? For both sexes? Which five different countries have the lowest obesity rates in the world and in what year for males? For females? For both sexes?3. What are the top five years in obesity rates in South America for males? For females? For both sexes? What are the bottom five years in obesity rates in South America for males? For females? For both sexes?4. What are the percentage breakdowns in average obesity rates for each continent for males in 2016? For females? For both sexes?5. From 1975 - 2016, what are the worldwide trends in obesity rates for males, females, and both sexes? 2. Data Understanding
###Code
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import re
import warnings
warnings.simplefilter(action = 'ignore')
# Load datasets
obesity_raw = pd.read_csv('data.csv')
obesity = pd.read_csv('obesity-cleaned.csv')
obesity_raw.head(10)
obesity_raw.tail(10)
obesity_raw.info()
obesity.head(10)
obesity.tail(10)
obesity.info()
obesity['Country'].value_counts()
obesity['Year'].value_counts()
obesity['Obesity (%)'].describe()
obesity['Sex'].value_counts()
obesity_raw.isnull().sum()
obesity.isnull().sum()
###Output
_____no_output_____
###Markdown
3. Data Preparation
###Code
# Change the first column in obesity_raw to Country
obesity_raw = obesity_raw.rename(columns = {'Unnamed: 0' : 'Country'})
# Change the first column in obesity to id
obesity = obesity.rename(columns = {'Unnamed: 0' : 'id'})
# Drop the countries in obesity_raw that have no data, since they are not useful
for index, row in obesity_raw.iterrows():
if row[1] == 'No data':
obesity_raw = obesity_raw.drop(index = index)
# Drop the countries in obesity that have no data, since they are not useful
for index, row in obesity.iterrows():
if obesity['Obesity (%)'][index] == 'No data':
obesity = obesity.drop(index = index)
# Fix the naming of some countries in the datasets
def fix_naming_countries(df):
'''
INPUT:
df - (pandas dataframe) the obesity datasets
'''
for index, row in df.iterrows():
if df['Country'][index] == 'Bolivia (Plurinational State of)':
df['Country'][index] = 'Bolivia'
if df['Country'][index] == 'Brunei Darussalam':
df['Country'][index] = 'Brunei'
if df['Country'][index] == 'Cabo Verde':
df['Country'][index] = 'Cape Verde'
if df['Country'][index] == "Côte d'Ivoire":
df['Country'][index] = 'Ivory Coast'
if df['Country'][index] == 'Czechia':
df['Country'][index] = 'Czech Republic'
if df['Country'][index] == "Democratic People's Republic of Korea":
df['Country'][index] = 'North Korea'
if df['Country'][index] == 'Eswatini':
df['Country'][index] = 'Swaziland'
if df['Country'][index] == 'Iran (Islamic Republic of)':
df['Country'][index] = 'Iran'
if df['Country'][index] == "Lao People's Democratic Republic":
df['Country'][index] = 'Laos'
if df['Country'][index] == 'Micronesia (Federated States of)':
df['Country'][index] = 'Micronesia'
if df['Country'][index] == 'Republic of Korea':
df['Country'][index] = 'South Korea'
if df['Country'][index] == 'Republic of Moldova':
df['Country'][index] = 'Moldova'
if df['Country'][index] == 'Republic of North Macedonia':
df['Country'][index] = 'Macedonia'
if df['Country'][index] == 'Russian Federation':
df['Country'][index] = 'Russia'
if df['Country'][index] == 'Sudan (former)':
df['Country'][index] = 'Sudan'
if df['Country'][index] == 'Syrian Arab Republic':
df['Country'][index] = 'Syria'
if df['Country'][index] == 'United Kingdom of Great Britain and Northern Ireland':
df['Country'][index] = 'United Kingdom'
if df['Country'][index] == 'United Republic of Tanzania':
df['Country'][index] = 'Tanzania'
if df['Country'][index] == 'Venezuela (Bolivarian Republic of)':
df['Country'][index] = 'Venezuela'
if df['Country'][index] == 'Viet Nam':
df['Country'][index] = 'Vietnam'
# Call the fix_naming_countries function for correcting country names
fix_naming_countries(obesity_raw)
fix_naming_countries(obesity)
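# (Hypothetical alternative, shown only as a sketch and not executed here): the same
# standardization could be expressed with a mapping dict and pandas' Series.replace,
# which avoids the long if-chain above, e.g.
#   name_map = {'Viet Nam': 'Vietnam', 'Russian Federation': 'Russia'}
#   obesity['Country'] = obesity['Country'].replace(name_map)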
# Prepare the columns in obesity_raw to change the variable types
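# (Assumption, noted for clarity): each obesity value appears to be stored as a string such as
# "22.5 [18.6-26.5]" (a point estimate followed by a bracketed range), so splitting on '[' below
# keeps only the point estimate before the columns are converted to numeric.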
for index, row in obesity_raw.iterrows():
if '.' in obesity_raw['2016'][index]:
for col in obesity_raw.columns:
if col == 'Country':
continue
string = re.split('[[]', obesity_raw[col][index])
obesity_raw[col][index] = string[0]
# Change the column values in obesity_raw to float variables
for col in obesity_raw.columns:
if '.' in obesity_raw[col][index]:
if col == 'Country':
continue
obesity_raw[col] = pd.to_numeric(obesity_raw[col], errors = 'coerce')
# Prepare the Obesity (%) column to change the variable type
for index, row in obesity.iterrows():
string = re.split('[[]', obesity['Obesity (%)'][index])
obesity['Obesity (%)'][index] = string[0]
# Change the Obesity (%) column values to a float variable
obesity['Obesity (%)'] = obesity['Obesity (%)'].astype('float')
# Sort the datasets by the Country column after performing name changes
obesity_raw = obesity_raw.sort_values(by = 'Country')
obesity = obesity.sort_values(by = 'Country')
# Pick out the countries with the highest or lowest obesity rates for a given sex, up to the desired number of countries
def high_or_low(df, sex, direction, number):
'''
INPUT:
df - (pandas dataframe) the obesity dataset
sex - Male, Female, or Both sexes
direction - highest or lowest
number - number of desired countries
OUTPUT:
countries - list of countries
values - list of obesity rates
years - list of years
'''
dataframe = df[df['Sex'] == sex]
countries = []
values = []
years = []
if direction == 'low':
dataframe = dataframe.sort_values(by = 'Obesity (%)', ascending = True)
for index, row in dataframe.iterrows():
if df['Country'][index] in countries:
continue
countries.append(df['Country'][index])
values.append(df['Obesity (%)'][index])
years.append(df['Year'][index])
else:
dataframe = dataframe.sort_values(by = 'Obesity (%)', ascending = False)
for index, row in dataframe.iterrows():
if df['Country'][index] in countries:
continue
countries.append(df['Country'][index])
values.append(df['Obesity (%)'][index])
years.append(df['Year'][index])
countries = countries[0:number]
values = values[0:number]
years = years[0:number]
return countries, values, years
# The solutions for the highest obesity percentages in the world for the first question
[highest_male, highest_male_values, highest_male_years] = high_or_low(obesity, 'Male', 'high', 1)
[highest_female, highest_female_values, highest_female_years] = high_or_low(obesity, 'Female', 'high', 1)
[highest_both, highest_both_values, highest_both_years] = high_or_low(obesity, 'Both sexes', 'high', 1)
# The solutions for the lowest obesity percentages in the world for the first question
[lowest_male, lowest_male_values, lowest_male_years] = high_or_low(obesity, 'Male', 'low', 1)
[lowest_female, lowest_female_values, lowest_female_years] = high_or_low(obesity, 'Female', 'low', 1)
[lowest_both, lowest_both_values, lowest_both_years] = high_or_low(obesity, 'Both sexes', 'low', 1)
# The solutions for the five different countries with the highest obesity percentages in the world for the second question
[high_male, high_male_values, high_male_years] = high_or_low(obesity, 'Male', 'high', 5)
[high_female, high_female_values, high_female_years] = high_or_low(obesity, 'Female', 'high', 5)
[high_both, high_both_values, high_both_years] = high_or_low(obesity, 'Both sexes', 'high', 5)
# The solutions for the five different countries with the lowest obesity percentages in the world for the second question
[low_male, low_male_values, low_male_years] = high_or_low(obesity, 'Male', 'low', 5)
[low_female, low_female_values, low_female_years] = high_or_low(obesity, 'Female', 'low', 5)
[low_both, low_both_values, low_both_years] = high_or_low(obesity, 'Both sexes', 'low', 5)
# Calculates average obesity percentages for each year in South America for males, for females, and for both sexes
years_male = []
years_female = []
years_both = []
for col in obesity_raw.columns:
string = str(col)
num = 0
for index, row in obesity_raw.iterrows():
if string != 'Country':
if obesity_raw['Country'][index] == 'Argentina':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Bolivia':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Brazil':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Chile':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Colombia':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Ecuador':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Guyana':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Paraguay':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Peru':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Suriname':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Uruguay':
num = num + obesity_raw[string][index]
if obesity_raw['Country'][index] == 'Venezuela':
num = num + obesity_raw[string][index]
if '.1' in string:
num = num / 12
x = [num, current_year]
years_male.append(x)
elif '.2' in string:
num = num / 12
x = [num, current_year]
years_female.append(x)
else:
if string != 'Country':
current_year = []
current_year = col
num = num / 12
x = [num, current_year]
years_both.append(x)
# Creates a dataframe with the obesity percentages for the top five years in South America for males, for females, and for both sexes
x = {'Year': [''], 'Sex': [''], 'Obesity (%)': [0.0]}
df_high = pd.DataFrame(data = x)
countries = ['Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador', 'Guyana', 'Paraguay', 'Peru', 'Suriname', 'Uruguay', 'Venezuela']
years = ['2016', '2016.1', '2016.2', '2015', '2015.1', '2015.2', '2014', '2014.1', '2014.2', '2013', '2013.1', '2013.2', '2012', '2012.1', '2012.2']
for col in obesity_raw.columns:
string = str(col)
num = 0
for index, row in obesity_raw.iterrows():
if string != 'Country':
if obesity_raw['Country'][index] in countries and string in years:
if '.1' in string:
new_row = {'Year': current_year, 'Sex': 'Male', 'Obesity (%)': obesity_raw[string][index]}
df_high = df_high.append(new_row, ignore_index=True)
elif '.2' in string:
new_row = {'Year': current_year, 'Sex': 'Female', 'Obesity (%)': obesity_raw[string][index]}
df_high = df_high.append(new_row, ignore_index=True)
else:
current_year = string
new_row = {'Year': current_year, 'Sex': 'Both Sexes', 'Obesity (%)': obesity_raw[string][index]}
df_high = df_high.append(new_row, ignore_index=True)
df_high = df_high.drop(0)
# Creates a dataframe with the obesity percentages for the bottom five years in South America for males, for females, and for both sexes
x = {'Year': [''], 'Sex': [''], 'Obesity (%)': [0.0]}
df_low = pd.DataFrame(data = x)
countries = ['Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador', 'Guyana', 'Paraguay', 'Peru', 'Suriname', 'Uruguay', 'Venezuela']
years = ['1979', '1979.1', '1979.2', '1978', '1978.1', '1978.2', '1977', '1977.1', '1977.2', '1976', '1976.1', '1976.2', '1975', '1975.1', '1975.2']
for col in obesity_raw.columns:
string = str(col)
num = 0
for index, row in obesity_raw.iterrows():
if string != 'Country':
if obesity_raw['Country'][index] in countries and string in years:
if '.1' in string:
new_row = {'Year': current_year, 'Sex': 'Male', 'Obesity (%)': obesity_raw[string][index]}
df_low = df_low.append(new_row, ignore_index=True)
elif '.2' in string:
new_row = {'Year': current_year, 'Sex': 'Female', 'Obesity (%)': obesity_raw[string][index]}
df_low = df_low.append(new_row, ignore_index=True)
else:
current_year = string
new_row = {'Year': current_year, 'Sex': 'Both Sexes', 'Obesity (%)': obesity_raw[string][index]}
df_low = df_low.append(new_row, ignore_index=True)
df_low = df_low.drop(0)
# Calculates average obesity rate by year and continent for all sexes
def average_obesity_rate_by_continent(df, year, continent):
'''
INPUT:
df - (pandas dataframe) the obesity dataset
year - (int) the year for which to compute the averages
continent - (string) the name of a continent
OUTPUT:
male_values - average obesity percentage of males in the given year and continent
female_values - average obesity percentage of females in the given year and continent
both_values - average obesity percentage of both sexes in the given year and continent
'''
male_count = 0
female_count = 0
both_count = 0
Asia = ['Afghanistan', 'Armenia', 'Azerbaijan', 'Bahrain', 'Bangladesh', 'Bhutan', 'Brunei', 'Cambodia', 'China', 'Cyprus', 'Georgia', 'India', 'Indonesia', 'Iran', 'Iraq', 'Israel', 'Japan', 'Jordan', 'Kazakhstan', 'Kuwait', 'Kyrgyzstan', 'Laos', 'Lebanon', 'Malaysia', 'Maldives', 'Mongolia', 'Myanmar', 'Nepal', 'North Korea', 'Oman', 'Pakistan', 'Philippines', 'Qatar', 'Russia', 'Saudi Arabia', 'Singapore', 'South Korea', 'Sri Lanka', 'Syria', 'Tajikistan', 'Thailand', 'Timor-Leste', 'Turkey', 'Turkmenistan', 'United Arab Emirates', 'Uzbekistan', 'Vietnam', 'Yemen']
Africa = ['Algeria', 'Angola', 'Benin', 'Botswana', 'Burkina Faso', 'Burundi', 'Cape Verde', 'Cameroon', 'Central African Republic', 'Chad', 'Comoros', 'Congo', 'Ivory Coast', 'Democratic Republic of the Congo', 'Djibouti', 'Egypt', 'Equatorial Guinea', 'Eritrea', 'Swaziland', 'Ethiopia', 'Gabon', 'Gambia', 'Ghana', 'Guinea', 'Guinea-Bissau', 'Kenya', 'Lesotho', 'Liberia', 'Libya', 'Madagascar', 'Malawi', 'Mali', 'Mauritania', 'Mauritius', 'Morocco', 'Mozambique', 'Namibia', 'Niger', 'Nigeria', 'Rwanda', 'Sao Tome and Principe', 'Senegal', 'Seychelles', 'Sierra Leone', 'Somalia', 'South Africa', 'Sudan', 'Togo', 'Tunisia', 'Uganda', 'Tanzania', 'Zambia', 'Zimbabwe']
Europe = ['Albania', 'Andorra', 'Austria', 'Belarus', 'Belgium', 'Bosnia and Herzegovina', 'Bulgaria', 'Croatia', 'Czech Republic', 'Denmark', 'Estonia', 'Finland', 'France', 'Germany', 'Greece', 'Hungary', 'Iceland', 'Ireland', 'Italy', 'Latvia', 'Lithuania', 'Luxembourg', 'Malta', 'Montenegro', 'Netherlands', 'Norway', 'Poland', 'Portugal', 'Macedonia', 'Moldova', 'Romania', 'Serbia', 'Slovakia', 'Slovenia', 'Spain', 'Sweden', 'Switzerland', 'Ukraine', 'United Kingdom']
North_America = ['Antigua and Barbuda', 'Bahamas', 'Barbados', 'Belize', 'Canada', 'Costa Rica', 'Cuba', 'Dominica', 'Dominican Republic', 'El Salvador', 'Grenada', 'Guatemala', 'Haiti', 'Honduras', 'Jamaica', 'Mexico', 'Nicaragua', 'Panama', 'Saint Kitts and Nevis', 'Saint Lucia', 'Saint Vincent and the Grenadines', 'Trinidad and Tobago', 'United States of America']
South_America = ['Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador', 'Guyana', 'Paraguay', 'Peru', 'Suriname', 'Uruguay', 'Venezuela']
Oceania = ['Australia', 'Cook Islands', 'Fiji', 'Kiribati', 'Marshall Islands', 'Micronesia', 'Nauru', 'New Zealand', 'Niue', 'Palau', 'Papua New Guinea', 'Samoa', 'Solomon Islands', 'Tonga', 'Tuvalu', 'Vanuatu']
male_values = []
female_values = []
both_values = []
countries = []
if continent == 'Asia':
countries = Asia
if continent == 'Africa':
countries = Africa
if continent == 'Europe':
countries = Europe
if continent == 'North America':
countries = North_America
if continent == 'South America':
countries = South_America
if continent == 'Oceania':
countries = Oceania
for index, row in df.iterrows():
if df['Country'][index] in countries and df['Sex'][index] == 'Male' and df['Year'][index] == year:
male_values.append(df['Obesity (%)'][index])
male_count += 1
if df['Country'][index] in countries and df['Sex'][index] == 'Female' and df['Year'][index] == year:
female_values.append(df['Obesity (%)'][index])
female_count += 1
if df['Country'][index] in countries and df['Sex'][index] == 'Both sexes' and df['Year'][index] == year:
both_values.append(df['Obesity (%)'][index])
both_count += 1
male_values = sum(male_values) / male_count
female_values = sum(female_values) / female_count
both_values = sum(both_values) / both_count
return male_values, female_values, both_values
# Use the average_obesity_rate_by_continent function for the fourth question
south_america_male, south_america_female, south_america_both = average_obesity_rate_by_continent(obesity, 2016, 'South America')
north_america_male, north_america_female, north_america_both = average_obesity_rate_by_continent(obesity, 2016, 'North America')
africa_male, africa_female, africa_both = average_obesity_rate_by_continent(obesity, 2016, 'Africa')
asia_male, asia_female, asia_both = average_obesity_rate_by_continent(obesity, 2016, 'Asia')
oceania_male, oceania_female, oceania_both = average_obesity_rate_by_continent(obesity, 2016, 'Oceania')
europe_male, europe_female, europe_both = average_obesity_rate_by_continent(obesity, 2016, 'Europe')
# Combining values for the overall pie chart for the fourth question
north_america = north_america_male + north_america_female + north_america_both
south_america = south_america_male + south_america_female + south_america_both
africa = africa_male + africa_female + africa_both
europe = europe_male + europe_female + europe_both
asia = asia_male + asia_female + asia_both
oceania = oceania_male + oceania_female + oceania_both
# Calculates average obesity percentages for each sex in every year
world_male = []
world_female = []
world_both = []
for col in obesity_raw.columns:
string = str(col)
if string != 'Country':
if '.1' in string:
world_male.append(obesity_raw[col].mean())
elif '.2' in string:
world_female.append(obesity_raw[col].mean())
else:
world_both.append(obesity_raw[col].mean())
world_male.reverse()
world_female.reverse()
world_both.reverse()
###Output
_____no_output_____
###Markdown
4. Analysis and Solutions Question 1. Which country has the highest obesity rate in the world and in what year for males? For females? For both sexes? Which country has the lowest obesity rate in the world and in what year for males? For females? For both sexes? Answer 1. Nauru has the highest obesity percentages in the world for males with a percentage of 58.7%, for females with a percentage of 63.3%, and for both sexes with a percentage of 61.0%. All of these percentages are from the year 2016. Vietnam has the lowest obesity percentages in the world for males with a percentage of 0.1% from the year 1981, for females with a percentage of 0.2% from the year 1976, and for both sexes with a percentage of 0.1% from the year 1976.
###Code
highest_male
highest_male_values
highest_male_years
highest_female
highest_female_values
highest_female_years
highest_both
highest_both_values
highest_both_years
# This is to create the bar chart for the country with the highest obesity percentages in the world
countries = ['Male', 'Female', 'Both Sexes']
values = np.array([highest_male_values[0], highest_female_values[0], highest_both_values[0]])
plt.bar(countries, values, color = ['blue', 'red', 'yellow'])
plt.title('Obesity Percentages for Nauru in 2016')
plt.xlabel('Sexes')
plt.ylabel('Obesity Percentages (%)')
plt.grid(axis = 'both', alpha = 0.5)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The above bar chart shows that the country of Nauru has the highest obesity percentages in the world for males with a percentage of 58.7%, for females with a percentage of 63.3%, and for both sexes with a percentage of 61.0%. All of these percentages are from the year 2016. Nauru is a small island in the Pacific Ocean that is northeast of Australia. It is considered to be the least visited country in the world.
###Code
lowest_male
lowest_male_values
lowest_male_years
lowest_female
lowest_female_values
lowest_female_years
lowest_both
lowest_both_values
lowest_both_years
# This is to create the bar chart for the country with the lowest obesity percentages in the world
countries = ['Male - 1981', 'Female - 1976', 'Both Sexes - 1976']
values = np.array([lowest_male_values[0], lowest_female_values[0], lowest_both_values[0]])
plt.bar(countries, values, color = ['orange', 'gray', 'brown'])
plt.title('Obesity Percentages for Vietnam')
plt.xlabel('Sexes')
plt.ylabel('Obesity Percentages (%)')
plt.grid(axis = 'both', alpha = 0.5)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The above bar chart shows that the country of Vietnam has the lowest obesity percentages in the world for males with a percentage of 0.1% from the year 1981, for females with a percentage of 0.2% from the year 1976, and for both sexes with a percentage of 0.1% from the year 1976. Vietnam is located in southeast Asia bordered by Cambodia, Laos, and the South China Sea. Question 2. Which five different countries have the highest obesity rates in the world and in what year for males? For females? For both sexes? Which five different countries have the lowest obesity rates in the world and in what year for males? For females? For both sexes? Answer 2. Nauru has the highest obesity percentages in the world for males with a percentage of 58.7%, for females with a percentage of 63.3%, and for both sexes with a percentage of 61.0%. The Cook Islands have the second highest obesity percentages in the world for males with a percentage of 52.6%, for females with a percentage of 59.2%, and for both sexes with a percentage of 55.9%. Palau has the third highest obesity percentages in the world for males with a percentage of 51.8%, for females with a percentage of 58.8%, and for both sexes with a percentage of 55.3%. The Marshall Islands have the fourth highest obesity percentages in the world for males with a percentage of 48.4%, for females with a percentage of 57.3%, and for both sexes with a percentage of 52.9%. Tuvalu has the fifth highest obesity percentages in the world for males with a percentage of 47.0%, for females with a percentage of 56.2%, and for both sexes with a percentage of 51.6%. All of these percentages are from the year 2016. Vietnam has the lowest obesity percentages in the world for males with a percentage of 0.1% from the year 1981, for females with a percentage of 0.2% from the year 1976, and for both sexes with a percentage of 0.1% from the year 1976. Timor-Leste has the second lowest obesity percentages in the world for males with a percentage of 0.1% from the year 1977, for females with a percentage of 0.4% from the year 1975, and for both sexes with a percentage of 0.2% from the year 1975. Rwanda has the third lowest obesity percentage in the world for males with a percentage of 0.1% from the year 1979. Bangladesh has the third lowest obesity percentages in the world for females with a percentage of 0.4% from the year 1977 and for both sexes with a percentage of 0.2% from the year 1975. Cambodia has the fourth lowest obesity percentages in the world for males with a percentage of 0.1% from the year 1978 and for females with a percentage of 0.4% from the year 1975. India has the fourth lowest obesity percentage in the world for both sexes with a percentage of 0.3% from the year 1975. Indonesia has the fifth lowest obesity percentage in the world for males with a percentage of 0.1% from the year 1975. Nepal has the fifth lowest obesity percentage in the world for females with a percentage of 0.4% from the year 1975. Cambodia has the fifth lowest obesity percentage in the world for both sexes with a percentage of 0.3% from the year 1976.
###Code
high_male
high_male_values
high_male_years
high_female
high_female_values
high_female_years
high_both
high_both_values
high_both_years
# These are the values for the stacked bar chart
male_values = np.array([high_male_values[0], high_male_values[1], high_male_values[2], high_male_values[3], high_male_values[4]])
female_values = np.array([high_female_values[0], high_female_values[1], high_female_values[2], high_female_values[3], high_female_values[4]])
both_values = np.array([high_both_values[0], high_both_values[1], high_both_values[2], high_both_values[3], high_both_values[4]])
countries = np.array(['Nauru', 'Cook Islands', 'Palau', 'Marshall Islands', 'Tuvalu'])
bars = np.add(male_values, female_values.tolist())
# This is to create the stacked bar chart for the top five different countries in obesity percentages in the world
p1 = plt.bar(countries, male_values, color = 'blue', width = 0.5)
p2 = plt.bar(countries, female_values, bottom = male_values, color = 'red', width = 0.5)
p3 = plt.bar(countries, both_values, bottom = bars, color = 'green', width = 0.5)
plt.ylabel('Obesity Percentages (%)')
plt.xlabel('Countries')
plt.legend((p1[0], p2[0], p3[0]), ('Male', 'Female', 'Both Sexes'), prop = {'size': 8})
plt.title('Top Five Countries in Obesity Percentages in 2016')
plt.grid(axis = 'both', alpha = 0.5)
plt.show()
###Output
_____no_output_____
###Markdown
The above bar chart shows the top five countries in obesity percentages in the world for all sexes. All of these values are from 2016. Nauru has a percentage of 58.7% for males, percentage of 63.3% for females, and a percentage of 61.0% for both sexes. The Cook Islands have a percentage of 52.6% for males, percentage of 59.2% for females, and a percentage of 55.9% for both sexes. Palau has a percentage of 51.8% for males, a percentage of 58.8% for females, and a percentage of 55.3% for both sexes. The Marshall Islands have a percentage of 48.4% for males, a percentage of 57.3% for females, and a percentage of 52.9% for both sexes. Tuvalu has a percentage of 47.0% for males, a percentage of 56.2% for females, and a percentage of 51.6% for both sexes.
###Code
low_male
low_male_values
low_male_years
# This is to create the bar chart for the bottom five different countries in obesity percentages in the world for males
countries = ['Vietnam - 1981', 'Timor-Leste - 1977', 'Rwanda - 1979', 'Cambodia - 1978', 'Indonesia - 1975']
values = np.array([low_male_values[0], low_male_values[1], low_male_values[2], low_male_values[3], low_male_values[4]])
plt.bar(countries, values, color = ['brown', 'black', 'magenta', 'orange', 'pink'])
plt.title('Bottom Five Countries in Obesity Percentages For Males')
plt.xlabel('Countries')
plt.ylabel('Obesity Percentages (%)')
plt.grid(axis = 'both', alpha = 0.5)
plt.tick_params(axis = 'x', which = 'major', labelsize = 8)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The above bar chart shows the bottom five countries in obesity percentages in the world for males. Vietnam in 1981 has a percentage of 0.1%, Timor-Leste in 1977 has a percentage of 0.1%, Rwanda in 1979 has a percentage of 0.1%, Cambodia in 1978 has a percentage of 0.1%, and Indonesia in 1975 has a percentage of 0.1%.
###Code
low_female
low_female_values
low_female_years
# This is to create the bar chart for the bottom five different countries in obesity percentages in the world for females
countries = ['Vietnam - 1976', 'Timor-Leste - 1975', 'Bangladesh - 1977', 'Cambodia - 1975', 'Nepal - 1975']
values = np.array([low_female_values[0], low_female_values[1], low_female_values[2], low_female_values[3], low_female_values[4]])
plt.bar(countries, values, color = ['red', 'cyan', 'blue', 'green', 'brown'])
plt.title('Bottom Five Countries in Obesity Percentages For Females')
plt.xlabel('Countries')
plt.ylabel('Obesity Percentages (%)')
plt.grid(axis = 'both', alpha = 0.5)
plt.tick_params(axis = 'x', which = 'major', labelsize = 7)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The above bar chart shows the bottom five countries in obesity percentages in the world for females. Vietnam in 1976 has a percentage of 0.2%, Timor-Leste in 1975 has a percentage of 0.4%, Bangladesh in 1977 has a percentage of 0.4%, Cambodia in 1975 has a percentage of 0.4%, and Nepal in 1975 has a percentage of 0.4%.
###Code
low_both
low_both_values
low_both_years
# This is to create the bar chart for the bottom five different countries in obesity percentages in the world for both sexes
countries = ['Vietnam - 1976', 'Timor-Leste - 1975', 'Bangladesh - 1975', 'India - 1975', 'Cambodia - 1976']
values = np.array([low_both_values[0], low_both_values[1], low_both_values[2], low_both_values[3], low_both_values[4]])
plt.bar(countries, values, color = ['orange', 'yellow', 'red', 'blue', 'green'])
plt.title('Bottom Five Countries in Obesity Percentages For Both Sexes')
plt.xlabel('Countries')
plt.ylabel('Obesity Percentages (%)')
plt.grid(axis = 'both', alpha = 0.5)
plt.tick_params(axis = 'x', which = 'major', labelsize = 7)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The above bar chart shows the bottom five countries in obesity percentages in the world for both sexes. Vietnam in 1976 has a percentage of 0.1%, Timor-Leste in 1975 has a percentage of 0.2%, Bangladesh in 1975 has a percentage of 0.2%, India in 1975 has a percentage of 0.3%, and Cambodia in 1976 has a percentage of 0.3%. Question 3. What are the top five years in obesity rates in South America for males? For females? For both sexes? What are the bottom five years in obesity rates in South America for males? For females? For both sexes? Answer 3. The year 2012 has an average obesity percentage of 17.13% for males, 25.54% for females, and 21.49% for both sexes. The year 2013 has an average obesity percentage of 17.6% for males, 26.04% for females, and 21.96% for both sexes. The year 2014 has an average obesity percentage of 18.08% for males, 26.5% for females, and 22.45% for both sexes. The year 2015 has an average obesity percentage of 18.58% for males, 26.98% for females, and 22.93% for both sexes. The year 2016 has an average obesity percentage of 19.08% for males, 27.49% for females, and 23.41% for both sexes. The year 1975 has an average obesity percentage of 4.37% for males, 10.14% for females, and 7.34% for both sexes. The year 1976 has an average obesity percentage of 4.57% for males, 10.48% for females, and 7.62% for both sexes. The year 1977 has an average obesity percentage of 4.78% for males, 10.83% for females, and 7.88% for both sexes. The year 1978 has an average obesity percentage of 5.02% for males, 11.2% for females, and 8.19% for both sexes. The year 1979 has an average obesity percentage of 5.26% for males, 11.57% for females, and 8.48% for both sexes.
###Code
years_male[0:5]
years_male[-5:len(years_male)]
years_female[0:5]
years_female[-5:len(years_female)]
years_both[0:5]
years_both[-5:len(years_both)]
# Create box-and-whisker-plot for the top five years in obesity percentages in South America
sb.set_style('white')
g = sb.boxplot(x = 'Year', y = 'Obesity (%)', hue = 'Sex', data = df_high, linewidth = 2).set_title('Top Five Years in Obesity Percentages in South America')
plt.legend(bbox_to_anchor = (1.05, 1), loc = 2, borderaxespad = 0, title = 'Sex')
plt.show()
###Output
_____no_output_____
###Markdown
The above box-and-whisker plot shows the averages for the top five years in obesity percentages in South America. The year 2012 has an average obesity percentage of 17.13% for males, 25.54% for females, and 21.49% for both sexes. The year 2013 has an average obesity percentage of 17.6% for males, 26.04% for females, and 21.96% for both sexes. The year 2014 has an average obesity percentage of 18.08% for males, 26.5% for females, and 22.45% for both sexes. The year 2015 has an average obesity percentage of 18.58% for males, 26.98% for females, and 22.93% for both sexes. The year 2016 has an average obesity percentage of 19.08% for males, 27.49% for females, and 23.41% for both sexes.
###Code
# Create box-and-whisker-plot for the bottom five years in obesity percentages in South America
sb.set_style('white')
g = sb.boxplot(x = 'Year', y = 'Obesity (%)', hue = 'Sex', data = df_low, linewidth = 2).set_title('Bottom Five Years in Obesity Percentages in South America')
plt.legend(bbox_to_anchor = (1.05, 1), loc = 2, borderaxespad = 0, title = 'Sex')
plt.show()
###Output
_____no_output_____
###Markdown
The above box-and-whisker plot shows the averages for the bottom five years in obesity percentages in South America. The year 1975 has an average obesity percentage of 4.37% for males, 10.14% for females, and 7.34% for both sexes. The year 1976 has an average obesity percentage of 4.57% for males, 10.48% for females, and 7.62% for both sexes. The year 1977 has an average obesity percentage of 4.78% for males, 10.83% for females, and 7.88% for both sexes. The year 1978 has an average obesity percentage of 5.02% for males, 11.2% for females, and 8.19% for both sexes. The year 1979 has an average obesity percentage of 5.26% for males, 11.57% for females, and 8.48% for both sexes. Question 4. What are the percentage breakdowns in average obesity rates for each continent for males in 2016? For females? For both sexes? Answer 4. The average obesity percentages for Asia in 2016 are 27.5% for males, 39.4% for females, and 33.1% for both sexes. The average obesity percentages for Africa in 2016 are 18.1% for males, 48.2% for females, and 33.7% for both sexes. The average obesity percentages for Oceania in 2016 are 30.0% for males, 33.6% for females, and 33.3% for both sexes. The average obesity percentages for North America in 2016 are 25.1% for males, 41.4% for females, and 33.5% for both sexes. The average obesity percentages for South America in 2016 are 27.3% for males, 39.3% for females, and 33.5% for both sexes. The average obesity percentages for Europe in 2016 are 33.8% for males, 32.8% for females, and 33.4% for both sexes.
###Code
north_america_male
north_america_female
north_america_both
south_america_male
south_america_female
south_america_both
asia_male
asia_female
asia_both
africa_male
africa_female
africa_both
oceania_male
oceania_female
oceania_both
europe_male
europe_female
europe_both
# Pie charts showing average obesity percentages by continent and by sex in 2016
plt.figure(0)
plt.pie([asia_male, asia_female, asia_both], explode = [0, 0.1, 0], colors = ['blue', 'red', 'green'], autopct = '%1.1f%%', shadow = True, labels = ['Male', 'Female', 'Both Sexes'])
plt.axis('equal')
plt.title('Average Obesity Percentages in Asia in 2016')
plt.figure(1)
plt.pie([africa_male, africa_female, africa_both], explode = [0, 0.1, 0], colors = ['blue', 'red', 'green'], autopct = '%1.1f%%', shadow = True, labels = ['Male', 'Female', 'Both Sexes'])
plt.axis('equal')
plt.title('Average Obesity Percentages in Africa in 2016')
plt.figure(2)
plt.pie([oceania_male, oceania_female, oceania_both], explode = [0, 0.1, 0], colors = ['blue', 'red', 'green'], autopct = '%1.1f%%', shadow = True, labels = ['Male', 'Female', 'Both Sexes'])
plt.axis('equal')
plt.title('Average Obesity Percentages in Oceania in 2016')
plt.figure(3)
plt.pie([north_america_male, north_america_female, north_america_both], explode = [0, 0.1, 0], colors = ['blue', 'red', 'green'], autopct = '%1.1f%%', shadow = True, labels = ['Male', 'Female', 'Both Sexes'])
plt.axis('equal')
plt.title('Average Obesity Percentages in North America in 2016')
plt.figure(4)
plt.pie([south_america_male, south_america_female, south_america_both], explode = [0, 0.1, 0], colors = ['blue', 'red', 'green'], autopct = '%1.1f%%', shadow = True, labels = ['Male', 'Female', 'Both Sexes'])
plt.axis('equal')
plt.title('Average Obesity Percentages in South America in 2016')
plt.figure(5)
plt.pie([europe_male, europe_female, europe_both], explode = [0.1, 0, 0], colors = ['blue', 'red', 'green'], autopct = '%1.1f%%', shadow = True, labels = ['Male', 'Female', 'Both Sexes'])
plt.axis('equal')
plt.title('Average Obesity Percentages in Europe in 2016')
plt.show()
###Output
_____no_output_____
###Markdown
The above pie charts show the average obesity percentages for each continent and for each sex in 2016. The average obesity percentages for Asia in 2016 are 27.5% for males, 39.4% for females, and 33.1% for both sexes. The average obesity percentages for Africa in 2016 are 18.1% for males, 48.2% for females, and 33.7% for both sexes. The average obesity percentages for Oceania in 2016 are 30.0% for males, 36.6% for females, and 33.3% for both sexes. The average obesity percentages for North America in 2016 are 25.1% for males, 41.4% for females, and 33.5% for both sexes. The average obesity percentages for South America in 2016 are 27.3% for males, 39.3% for females, and 33.5% for both sexes. The average obesity percentages for Europe in 2016 are 33.8% for males, 32.8% for females, and 33.4% for both sexes.
###Code
# Pie chart showing average obesity percentages by continent in 2016
plt.pie([oceania, asia, africa, north_america, south_america, europe], explode = [0.1, 0, 0, 0.1, 0, 0], colors = ['orange', 'yellow', 'purple', 'blue', 'red', 'green'], autopct = '%1.1f%%', shadow = True, labels = ['Oceania', 'Asia', 'Africa', 'North America', 'South America', 'Europe'])
plt.axis('equal')
plt.title('Average Obesity Percentages By Continent in 2016')
plt.show()
###Output
_____no_output_____
###Markdown
The above pie chart shows the average obesity percentages in the world by continent in 2016. Oceania has the highest average obesity percentage in 2016 with 29.9% and Africa has the lowest average obesity percentage in 2016 with 8.2%. Asia has an average obesity percentage of 11.7% in 2016. Europe has an average obesity percentage of 16.1% in 2016. North America has an average obesity percentage of 17.4% in 2016. South America has an average obesity percentage of 16.6% in 2016. Question 5. From 1975 - 2016, what are the worldwide trends in obesity rates for males, females, and both sexes? Answer 5. The obesity percentages for males, for females, and for both sexes all increase from 1975 to 2016. The percentage starts at 4.08% and continues to increase until 16.44% for males. The percentage starts at 8.85% and continues to increase until 23.35% for females. The percentage starts at 6.51% and continues to increase until 19.96% for both sexes.
###Code
# This is to label the years from 1975 to 2016 for males
world_male_first = world_male[0:14]
world_male_second = world_male[14:28]
world_male_third = world_male[28:42]
# This is to label the years from 1975 to 2016 for females
world_female_first = world_female[0:14]
world_female_second = world_female[14:28]
world_female_third = world_female[28:42]
# This is to label the years from 1975 to 2016 for both sexes
world_both_first = world_both[0:14]
world_both_second = world_both[14:28]
world_both_third = world_both[28:42]
# This is to create and label the line chart
years = ['1975', '1976', '1977', '1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986', '1987', '1988']
plt.plot(years, world_male_first, marker = 'o', markerfacecolor = 'blue', label = 'Male')
plt.plot(years, world_female_first, marker = '', markerfacecolor = 'red', linestyle = 'dashed', label = 'Female')
plt.plot(years, world_both_first, marker = '*', markerfacecolor = 'green', label = 'Both Sexes')
plt.gca().spines["top"].set_alpha(0.0)
plt.gca().spines["bottom"].set_alpha(0.3)
plt.gca().spines["right"].set_alpha(0.0)
plt.gca().spines["left"].set_alpha(0.3)
plt.grid(axis = 'both', alpha = 0.3)
plt.ylim(3.5, 13.0)
plt.ylabel('Obesity Percentages (%)')
plt.xlabel('Years')
plt.title('Average Obesity Percentage For Each Sex From 1975 To 1988')
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The above line chart shows the average obesity percentages for each sex from 1975 to 1988. The line starts at 4.08% and continues to increase until 6.82% for males. The line starts at 8.85% and continues to increase until 12.74% for females. The line starts at 6.51% and continues to increase until 9.83% for both sexes.
###Code
# This is to create and label the line chart
years = ['1989', '1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000', '2001', '2002']
plt.plot(years, world_male_second, marker = 'o', markerfacecolor = 'blue', label = 'Male')
plt.plot(years, world_female_second, marker = '', markerfacecolor = 'red', linestyle = 'dashed', label = 'Female')
plt.plot(years, world_both_second, marker = '*', markerfacecolor = 'green', label = 'Both Sexes')
plt.gca().spines["top"].set_alpha(0.0)
plt.gca().spines["bottom"].set_alpha(0.3)
plt.gca().spines["right"].set_alpha(0.0)
plt.gca().spines["left"].set_alpha(0.3)
plt.grid(axis = 'both', alpha = 0.3)
plt.ylim(6.5, 18.0)
plt.ylabel('Obesity Percentages (%)')
plt.xlabel('Years')
plt.title('Average Obesity Percentage For Each Sex From 1989 To 2002')
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The above line chart shows the average obesity percentages for each sex from 1989 to 2002. The line starts at 7.08% and continues to increase until 10.89% for males. The line starts at 13.07% and continues to increase until 17.58% for females. The line starts at 10.11% and continues to increase until 14.28% for both sexes.
###Code
# This is to create and label the line chart
years = ['2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016']
plt.plot(years, world_male_third, marker = 'o', markerfacecolor = 'blue', label = 'Male')
plt.plot(years, world_female_third, marker = '', markerfacecolor = 'red', linestyle = 'dashed', label = 'Female')
plt.plot(years, world_both_third, marker = '*', markerfacecolor = 'green', label = 'Both Sexes')
plt.gca().spines["top"].set_alpha(0.0)
plt.gca().spines["bottom"].set_alpha(0.3)
plt.gca().spines["right"].set_alpha(0.0)
plt.gca().spines["left"].set_alpha(0.3)
plt.grid(axis = 'both', alpha = 0.3)
plt.ylim(11.0, 24.0)
plt.ylabel('Obesity Percentages (%)')
plt.xlabel('Years')
plt.title('Average Obesity Percentage For Each Sex From 2003 To 2016')
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
A Recommender System for Groceries Contractor
###Code
# importing libraries
import numpy as np # library to handle data in a vectorized manner
import pandas as pd # library for data analysis
!conda install -c conda-forge BeautifulSoup4 --yes
from bs4 import BeautifulSoup
import requests # library to handle requests
import json # library to handle JSON files
from pandas.io.json import json_normalize # transform JSON file into a pandas dataframe
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
!conda install -c conda-forge geopy --yes # uncomment this line if you haven't completed the Foursquare API lab
import geopy.geocoders # convert an address into latitude and longitude values
!conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't completed the Foursquare API lab
import folium # map rendering library
print('Libraries are imported.')
###Output
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.5.11
latest version: 4.7.12
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /home/jupyterlab/conda/envs/python
added / updated specs:
- beautifulsoup4
The following packages will be downloaded:
package | build
---------------------------|-----------------
soupsieve-1.9.4 | py36_0 58 KB conda-forge
beautifulsoup4-4.8.1 | py36_0 149 KB conda-forge
------------------------------------------------------------
Total: 207 KB
The following NEW packages will be INSTALLED:
soupsieve: 1.9.4-py36_0 conda-forge
The following packages will be UPDATED:
beautifulsoup4: 4.6.3-py37_0 --> 4.8.1-py36_0 conda-forge
Downloading and Extracting Packages
soupsieve-1.9.4 | 58 KB | ##################################### | 100%
beautifulsoup4-4.8.1 | 149 KB | ##################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.5.11
latest version: 4.7.12
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /home/jupyterlab/conda/envs/python
added / updated specs:
- geopy
The following packages will be downloaded:
package | build
---------------------------|-----------------
geopy-1.20.0 | py_0 57 KB conda-forge
geographiclib-1.50 | py_0 34 KB conda-forge
------------------------------------------------------------
Total: 91 KB
The following NEW packages will be INSTALLED:
geographiclib: 1.50-py_0 conda-forge
geopy: 1.20.0-py_0 conda-forge
Downloading and Extracting Packages
geopy-1.20.0 | 57 KB | ##################################### | 100%
geographiclib-1.50 | 34 KB | ##################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.5.11
latest version: 4.7.12
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /home/jupyterlab/conda/envs/python
added / updated specs:
- folium=0.5.0
The following packages will be downloaded:
package | build
---------------------------|-----------------
pandas-0.25.3 | py36hb3f55d8_0 11.4 MB conda-forge
tbb4py-2019.9 | py36hc9558a2_0 245 KB conda-forge
------------------------------------------------------------
Total: 11.7 MB
The following NEW packages will be INSTALLED:
tbb: 2019.9-hc9558a2_0 conda-forge
tbb4py: 2019.9-py36hc9558a2_0 conda-forge
The following packages will be UPDATED:
pandas: 0.25.2-py36hb3f55d8_0 conda-forge --> 0.25.3-py36hb3f55d8_0 conda-forge
Downloading and Extracting Packages
pandas-0.25.3 | 11.4 MB | ##################################### | 100%
tbb4py-2019.9 | 245 KB | ##################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Libraries are imported.
###Markdown
Postal Codes in Toronto
###Code
# Loading the dataset which is about postal codes in Toronto
# This dataset was created in week 3.
df_toronto = pd.read_csv('toronto_base.csv')
df_toronto.head()
###Output
_____no_output_____
###Markdown
Create a Map of Toronto City (with its Postal Codes' Regions)
###Code
# for the city of Toronto, latitude and longitude are manually extracted via google search
toronto_latitude = 43.6932; toronto_longitude = -79.3832
map_toronto = folium.Map(location = [toronto_latitude, toronto_longitude], zoom_start = 10.7)
# add markers to map
for lat, lng, borough, neighborhood in zip(df_toronto['Latitude'], df_toronto['Longitude'], df_toronto['Borough'], df_toronto['Neighborhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
###Markdown
Focusing on the "Scarborough" Borough in Toronto (its neighborhoods)
###Code
# selecting only the neighborhoods that belong to the "Scarborough" borough.
scarborough_data = df_toronto[df_toronto['Borough'] == 'Scarborough']
scarborough_data = scarborough_data.reset_index(drop=True).drop(columns = 'Unnamed: 0')
scarborough_data.head()
###Output
_____no_output_____
###Markdown
Create a Map of Scarborough and Its Neighbourhoods
###Code
address_scar = 'Scarborough, Toronto'
latitude_scar = 43.773077
longitude_scar = -79.257774
print('The geographical coordinates of "Scarborough" are: {}, {}.'.format(latitude_scar, longitude_scar))
map_Scarborough = folium.Map(location=[latitude_scar, longitude_scar], zoom_start=11.5)
# add markers to map
for lat, lng, label in zip(scarborough_data['Latitude'], scarborough_data['Longitude'], scarborough_data['Neighborhood']):
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius = 10,
popup = label,
color ='blue',
fill = True,
fill_color = '#3186cc',
fill_opacity = 0.7).add_to(map_Scarborough)
map_Scarborough
def foursquare_crawler (postal_code_list, neighborhood_list, lat_list, lng_list, LIMIT = 500, radius = 1000):
result_ds = []
counter = 0
for postal_code, neighborhood, lat, lng in zip(postal_code_list, neighborhood_list, lat_list, lng_list):
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID, CLIENT_SECRET, VERSION,
lat, lng, radius, LIMIT)
# make the GET request
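# (note) the 'explore' endpoint nests its recommended venues under
# response -> groups -> [0] -> items, which is what the next line unpacks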
results = requests.get(url).json()["response"]['groups'][0]['items']
tmp_dict = {}
tmp_dict['Postal Code'] = postal_code; tmp_dict['Neighborhood(s)'] = neighborhood;
tmp_dict['Latitude'] = lat; tmp_dict['Longitude'] = lng;
tmp_dict['Crawling_result'] = results;
result_ds.append(tmp_dict)
counter += 1
print('{}.'.format(counter))
print('Data is Obtained, for the Postal Code {} (and Neighborhoods {}) SUCCESSFULLY.'.format(postal_code, neighborhood))
return result_ds;
# @hidden_cell
CLIENT_ID = '0MJA3NYYG3U2ZY1LTZN2OYEHS3Y3WVSON2GBSO3IL4EDYVIR' # Foursquare ID
CLIENT_SECRET = 'WGWSAF2TKVUQPE3PD0N3EOITFVBY5EYP1VCZI3BMUG0ROUS5' # Foursquare Secret
VERSION = '20180605' # Foursquare API version
###Output
_____no_output_____
###Markdown
Crawling the Internet (in fact, only the Foursquare database) for Venues in the Neighborhoods inside "Scarborough"
###Code
print('Crawling different neighborhoods inside "Scarborough"')
Scarborough_foursquare_dataset = foursquare_crawler(list(scarborough_data['Post Code']),
list(scarborough_data['Neighborhood']),
list(scarborough_data['Latitude']),
list(scarborough_data['Longitude']),)
###Output
Crawling different neighborhoods inside "Scarborough"
1.
Data is Obtained, for the Postal Code M1B (and Neighborhoods Rouge, Malvern) SUCCESSFULLY.
2.
Data is Obtained, for the Postal Code M1C (and Neighborhoods Highland Creek, Rouge Hill, Port Union) SUCCESSFULLY.
3.
Data is Obtained, for the Postal Code M1E (and Neighborhoods Guildwood, Morningside, West Hill) SUCCESSFULLY.
4.
Data is Obtained, for the Postal Code M1G (and Neighborhoods Woburn) SUCCESSFULLY.
5.
Data is Obtained, for the Postal Code M1H (and Neighborhoods Cedarbrae) SUCCESSFULLY.
6.
Data is Obtained, for the Postal Code M1J (and Neighborhoods Scarborough Village) SUCCESSFULLY.
7.
Data is Obtained, for the Postal Code M1K (and Neighborhoods East Birchmount Park, Ionview, Kennedy Park) SUCCESSFULLY.
8.
Data is Obtained, for the Postal Code M1L (and Neighborhoods Clairlea, Golden Mile, Oakridge) SUCCESSFULLY.
9.
Data is Obtained, for the Postal Code M1M (and Neighborhoods Cliffcrest, Cliffside, Scarborough Village West) SUCCESSFULLY.
10.
Data is Obtained, for the Postal Code M1N (and Neighborhoods Birch Cliff, Cliffside West) SUCCESSFULLY.
11.
Data is Obtained, for the Postal Code M1P (and Neighborhoods Dorset Park, Scarborough Town Centre, Wexford Heights) SUCCESSFULLY.
12.
Data is Obtained, for the Postal Code M1R (and Neighborhoods Maryvale, Wexford) SUCCESSFULLY.
13.
Data is Obtained, for the Postal Code M1S (and Neighborhoods Agincourt) SUCCESSFULLY.
14.
Data is Obtained, for the Postal Code M1T (and Neighborhoods Clarks Corners, Sullivan, Tam O'Shanter) SUCCESSFULLY.
15.
Data is Obtained, for the Postal Code M1V (and Neighborhoods Agincourt North, L'Amoreaux East, Milliken, Steeles East) SUCCESSFULLY.
16.
Data is Obtained, for the Postal Code M1W (and Neighborhoods L'Amoreaux West, Steeles West) SUCCESSFULLY.
17.
Data is Obtained, for the Postal Code M1X (and Neighborhoods Upper Rouge) SUCCESSFULLY.
###Markdown
Breakpoint: Saving the Foursquare results, so that we would not need to connect to Foursquare (and use up our request quota) every time.
###Code
import pickle
with open("Scarborough_foursquare_dataset.txt", "wb") as fp: #Pickling
pickle.dump(Scarborough_foursquare_dataset, fp)
print('Received Data from Internet is Saved to Computer.')
with open("Scarborough_foursquare_dataset.txt", "rb") as fp: # Unpickling
Scarborough_foursquare_dataset = pickle.load(fp)
###Output
_____no_output_____
###Markdown
Cleaning the RAW Data Received from Foursquare Database
###Code
# This function walks through the saved list of raw Foursquare responses and extracts each venue
# for every neighborhood inside the dataset
def get_venue_dataset(foursquare_dataset):
result_df = pd.DataFrame(columns = ['Postal Code', 'Neighborhood',
'Neighborhood Latitude', 'Neighborhood Longitude',
'Venue', 'Venue Summary', 'Venue Category', 'Distance'])
# print(result_df)
for neigh_dict in foursquare_dataset:
postal_code = neigh_dict['Postal Code']; neigh = neigh_dict['Neighborhood(s)']
lat = neigh_dict['Latitude']; lng = neigh_dict['Longitude']
print('Number of venues for Postal Code "{}" and Neighborhood(s) "{}" is:'.format(postal_code, neigh))
print(len(neigh_dict['Crawling_result']))
for venue_dict in neigh_dict['Crawling_result']:
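# each crawled item is expected to carry a one-line summary, the venue name, its distance
# from the neighborhood centre, and a category, all nested as accessed below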
summary = venue_dict['reasons']['items'][0]['summary']
name = venue_dict['venue']['name']
dist = venue_dict['venue']['location']['distance']
cat = venue_dict['venue']['categories'][0]['name']
# print({'Postal Code': postal_code, 'Neighborhood': neigh,
# 'Neighborhood Latitude': lat, 'Neighborhood Longitude':lng,
# 'Venue': name, 'Venue Summary': summary,
# 'Venue Category': cat, 'Distance': dist})
result_df = result_df.append({'Postal Code': postal_code, 'Neighborhood': neigh,
'Neighborhood Latitude': lat, 'Neighborhood Longitude':lng,
'Venue': name, 'Venue Summary': summary,
'Venue Category': cat, 'Distance': dist}, ignore_index = True)
# print(result_df)
return(result_df)
scarborough_venues = get_venue_dataset(Scarborough_foursquare_dataset)
###Output
Number of venues for Postal Code "M1B" and Neighborhood(s) "Rouge, Malvern" is:
17
Number of venues for Postal Code "M1C" and Neighborhood(s) "Highland Creek, Rouge Hill, Port Union" is:
5
Number of venues for Postal Code "M1E" and Neighborhood(s) "Guildwood, Morningside, West Hill" is:
23
Number of venues for Postal Code "M1G" and Neighborhood(s) "Woburn" is:
8
Number of venues for Postal Code "M1H" and Neighborhood(s) "Cedarbrae" is:
27
Number of venues for Postal Code "M1J" and Neighborhood(s) "Scarborough Village" is:
12
Number of venues for Postal Code "M1K" and Neighborhood(s) "East Birchmount Park, Ionview, Kennedy Park" is:
26
Number of venues for Postal Code "M1L" and Neighborhood(s) "Clairlea, Golden Mile, Oakridge" is:
31
Number of venues for Postal Code "M1M" and Neighborhood(s) "Cliffcrest, Cliffside, Scarborough Village West" is:
13
Number of venues for Postal Code "M1N" and Neighborhood(s) "Birch Cliff, Cliffside West" is:
13
Number of venues for Postal Code "M1P" and Neighborhood(s) "Dorset Park, Scarborough Town Centre, Wexford Heights" is:
47
Number of venues for Postal Code "M1R" and Neighborhood(s) "Maryvale, Wexford" is:
27
Number of venues for Postal Code "M1S" and Neighborhood(s) "Agincourt" is:
44
Number of venues for Postal Code "M1T" and Neighborhood(s) "Clarks Corners, Sullivan, Tam O'Shanter" is:
33
Number of venues for Postal Code "M1V" and Neighborhood(s) "Agincourt North, L'Amoreaux East, Milliken, Steeles East" is:
27
Number of venues for Postal Code "M1W" and Neighborhood(s) "L'Amoreaux West, Steeles West" is:
25
Number of venues for Postal Code "M1X" and Neighborhood(s) "Upper Rouge" is:
0
###Markdown
Showing Venues for Each Neighborhood in Scarborough
###Code
scarborough_venues.head()
scarborough_venues.tail()
###Output
_____no_output_____
###Markdown
Breakpoint: End of Processing the Retrieved Information from Foursquare. Saving a Cleaned Version of the DataFrame as the Results from Foursquare
###Code
scarborough_venues.to_csv('scarborough_venues.csv')
###Output
_____no_output_____
###Markdown
Loading Data from File (Saved "Foursquare" DataFrame for Venues)
###Code
scarborough_venues = pd.read_csv('scarborough_venues.csv')
###Output
_____no_output_____
###Markdown
Some Summary Information about Neighborhoods inside "Scarborough"
###Code
neigh_list = list(scarborough_venues['Neighborhood'].unique())
print('Number of Neighborhoods inside Scarborough:')
print(len(neigh_list))
print('List of Neighborhoods inside Scarborough:')
neigh_list
###Output
Number of Neighborhoods inside Scarborough:
16
List of Neighborhoods inside Scarborough:
###Markdown
Some Summary Information about Neighborhoods inside "Scarborough" Cont'd
###Code
neigh_venue_summary = scarborough_venues.groupby('Neighborhood').count()
neigh_venue_summary.drop(columns = ['Unnamed: 0']).head()
print('There are {} uniques categories.'.format(len(scarborough_venues['Venue Category'].unique())))
print('Here is the list of different categories:')
list(scarborough_venues['Venue Category'].unique())
# Just for fun and deeper understanding
print(type(scarborough_venues[['Venue Category']]))
print(type(scarborough_venues['Venue Category']))
###Output
<class 'pandas.core.frame.DataFrame'>
<class 'pandas.core.series.Series'>
###Markdown
One-hot Encoding the "categories" Column into Every Unique Categorical Feature.
###Code
# one hot encoding
scarborough_onehot = pd.get_dummies(data = scarborough_venues, drop_first = False,
prefix = "", prefix_sep = "", columns = ['Venue Category'])
scarborough_onehot.head()
###Output
_____no_output_____
###Markdown
Manually Selecting (Subsetting) Related Features for the Groceries Contractor
###Code
# This list is created manually
important_list_of_features = [
'Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'African Restaurant',
'American Restaurant',
'Asian Restaurant',
'BBQ Joint',
'Bakery',
'Breakfast Spot',
'Burger Joint',
'Cajun / Creole Restaurant',
'Cantonese Restaurant',
'Caribbean Restaurant',
'Chinese Restaurant',
'Diner',
'Fast Food Restaurant',
'Fish Market',
'Food & Drink Shop',
'Fried Chicken Joint',
'Fruit & Vegetable Store',
'Greek Restaurant',
'Grocery Store',
'Hakka Restaurant',
'Hong Kong Restaurant',
'Hotpot Restaurant',
'Indian Restaurant',
'Italian Restaurant',
'Japanese Restaurant',
'Korean Restaurant',
'Latin American Restaurant',
'Malay Restaurant',
'Mediterranean Restaurant',
'Mexican Restaurant',
'Middle Eastern Restaurant',
'Noodle House',
'Pizza Place',
'Restaurant',
'Sandwich Place',
'Seafood Restaurant',
'Sushi Restaurant',
'Taiwanese Restaurant',
'Thai Restaurant',
'Vegetarian / Vegan Restaurant',
'Vietnamese Restaurant',
'Wings Joint']
###Output
_____no_output_____
###Markdown
Updating the One-hot Encoded DataFrame and Grouping the Data by Neighborhoods
###Code
scarborough_onehot = scarborough_onehot[important_list_of_features].drop(
columns = ['Neighborhood Latitude', 'Neighborhood Longitude']).groupby(
'Neighborhood').sum()
scarborough_onehot.head()
###Output
_____no_output_____
###Markdown
Integrating Different Restaurants and Different Joints (Assuming Different Restaurants Use the Same Raw Groceries). This assumption is made for simplicity and because we do not have a very large dataset about the neighborhoods.
###Code
feat_name_list = list(scarborough_onehot.columns)
restaurant_list = []
for counter, value in enumerate(feat_name_list):
if value.find('Restaurant') != (-1):
restaurant_list.append(value)
scarborough_onehot['Total Restaurants'] = scarborough_onehot[restaurant_list].sum(axis = 1)
scarborough_onehot = scarborough_onehot.drop(columns = restaurant_list)
feat_name_list = list(scarborough_onehot.columns)
joint_list = []
for counter, value in enumerate(feat_name_list):
if value.find('Joint') != (-1):
joint_list.append(value)
scarborough_onehot['Total Joints'] = scarborough_onehot[joint_list].sum(axis = 1)
scarborough_onehot = scarborough_onehot.drop(columns = joint_list)
###Output
_____no_output_____
###Markdown
Showing the Fully-Processed DataFrame about Neighborhoods inside Scarborough. This Dataset is Ready for any Machine Learning Algorithm.
###Code
scarborough_onehot
###Output
_____no_output_____
###Markdown
Run k-means to Cluster Neighborhoods into 5 Clusters
###Code
# import k-means from clustering stage
from sklearn.cluster import KMeans
# run k-means clustering
kmeans = KMeans(n_clusters = 5, random_state = 0).fit(scarborough_onehot)
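# Optional sketch (not part of the original analysis): attach each neighborhood's
# cluster label so the groups can be inspected; assumes the fitted `kmeans` object
# and the `scarborough_onehot` dataframe from the cells above.
cluster_labels = pd.DataFrame({'Cluster': kmeans.labels_}, index=scarborough_onehot.index)
cluster_labels.sort_values(by='Cluster')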
###Output
_____no_output_____
###Markdown
Showing Centers of Each Cluster
###Code
means_df = pd.DataFrame(kmeans.cluster_centers_)
means_df.columns = scarborough_onehot.columns
means_df.index = ['G1','G2','G3','G4','G5']
means_df['Total Sum'] = means_df.sum(axis = 1)
means_df.sort_values(axis = 0, by = ['Total Sum'], ascending=False)
###Output
_____no_output_____
###Markdown
Capstone Project Neural translation model InstructionsIn this notebook, you will create a neural network that translates from English to German. You will use concepts from throughout this course, including building more flexible model architectures, freezing layers, data processing pipeline and sequence modelling.This project is peer-assessed. Within this notebook you will find instructions in each section for how to complete the project. Pay close attention to the instructions as the peer review will be carried out according to a grading rubric that checks key parts of the project instructions. Feel free to add extra cells into the notebook as required. How to submitWhen you have completed the Capstone project notebook, you will submit a pdf of the notebook for peer review. First ensure that the notebook has been fully executed from beginning to end, and all of the cell outputs are visible. This is important, as the grading rubric depends on the reviewer being able to view the outputs of your notebook. Save the notebook as a pdf (you could download the notebook with File -> Download .ipynb, open the notebook locally, and then File -> Download as -> PDF via LaTeX), and then submit this pdf for review. Let's get started!We'll start by running some imports, and loading the dataset. For this project you are free to make further imports throughout the notebook as you wish.
###Code
import tensorflow as tf
import tensorflow_hub as hub
import unicodedata
import re
from IPython.display import Image
import csv
import random
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Layer, Dense, Dropout, Softmax, concatenate, Embedding, LSTM
from tensorflow.keras.models import load_model, Model
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
For the capstone project, you will use a language dataset from http://www.manythings.org/anki/ to build a neural translation model. This dataset consists of over 200,000 pairs of sentences in English and German. In order to make the training quicker, we will restrict our dataset to 20,000 pairs. Feel free to change this if you wish - the size of the dataset used is not part of the grading rubric.Your goal is to develop a neural translation model from English to German, making use of a pre-trained English word embedding module. Import the dataThe dataset is available for download as a zip file at the following link:https://drive.google.com/open?id=1KczOciG7sYY7SB9UlBeRP1T9659b121QYou should store the unzipped folder in Drive for use in this Colab notebook.
###Code
# Run this cell to connect to your Drive folder
from google.colab import drive
drive.mount('/content/gdrive')
# Run this cell to load the dataset
NUM_EXAMPLES = 20000
data_examples = []
with open('deu.txt', 'r', encoding='utf8') as f:
for line in f.readlines():
if len(data_examples) < NUM_EXAMPLES:
data_examples.append(line)
else:
break
# These functions preprocess English and German sentences
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn')
def preprocess_sentence(sentence):
sentence = sentence.lower().strip()
sentence = re.sub(r"ü", 'ue', sentence)
sentence = re.sub(r"ä", 'ae', sentence)
sentence = re.sub(r"ö", 'oe', sentence)
sentence = re.sub(r'ß', 'ss', sentence)
sentence = unicode_to_ascii(sentence)
sentence = re.sub(r"([?.!,])", r" \1 ", sentence)
sentence = re.sub(r"[^a-z?.!,']+", " ", sentence)
sentence = re.sub(r'[" "]+', " ", sentence)
return sentence.strip()
###Output
_____no_output_____
###Markdown
The custom translation modelThe following is a schematic of the custom translation model architecture you will develop in this project.
###Code
# Run this cell to download and view a schematic diagram for the neural translation model
!wget -q -O neural_translation_model.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1XsS1VlXoaEo-RbYNilJ9jcscNZvsSPmd"
Image("neural_translation_model.png")
###Output
_____no_output_____
###Markdown
The custom model consists of an encoder RNN and a decoder RNN. The encoder takes words of an English sentence as input, and uses a pre-trained word embedding to embed the words into a 128-dimensional space. To indicate the end of the input sentence, a special end token (in the same 128-dimensional space) is passed in as an input. This token is a TensorFlow Variable that is learned in the training phase (unlike the pre-trained word embedding, which is frozen).The decoder RNN takes the internal state of the encoder network as its initial state. A start token is passed in as the first input, which is embedded using a learned German word embedding. The decoder RNN then makes a prediction for the next German word, which during inference is then passed in as the following input, and this process is repeated until the special `<end>` token is emitted from the decoder. 1. Text preprocessing* Create separate lists of English and German sentences, and preprocess them using the `preprocess_sentence` function provided for you above.* Add a special `"<start>"` and `"<end>"` token to the beginning and end of every German sentence.* Use the Tokenizer class from the `tf.keras.preprocessing.text` module to tokenize the German sentences, ensuring that no character filters are applied. _Hint: use the Tokenizer's "filters" keyword argument._* Print out at least 5 randomly chosen examples of (preprocessed) English and German sentence pairs. For the German sentence, print out the text (with start and end tokens) as well as the tokenized sequence.* Pad the end of the tokenized German sequences with zeros, and batch the complete set of sequences into one numpy array.
###Code
# Instantiate reader object for looping through tab separated files
reader = csv.reader(data_examples, delimiter='\t')
# Empty lists for english and german sentences
data_examples_en = []
data_examples_ge = []
# Loop through all sentences and add line[0] to en and list[1] to ge list
for line in reader:
data_examples_en.append(line[0])
data_examples_ge.append(line[1])
preproc_data_en = [preprocess_sentence(data) for data in data_examples_en]
preproc_data_ge = ["<start> "+preprocess_sentence(data)+" <end>" for data in data_examples_ge]
# Define tokenizer object
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=None, filters='')
# Apply fit_on_texts
tokenizer.fit_on_texts(preproc_data_ge)
ger_tokens = len(tokenizer.word_index)
# Transform text into sequences of integers
preproc_data_ge_seq = tokenizer.texts_to_sequences(preproc_data_ge)
# 5 random indices
rand_inx = np.random.choice(len(preproc_data_ge), 5)
for i in rand_inx:
print(f"ENG: {preproc_data_en[i]} -- GER: {preproc_data_ge[i]} -- TOK_GER: {preproc_data_ge_seq[i]}")
# Pad the end of the tokenized German sequences with zeros, and batch the complete set of sequences into one numpy array.
padded_sequences = tf.keras.preprocessing.sequence.pad_sequences(preproc_data_ge_seq, maxlen=14, padding='post')
print(f"\n 'padded_sequences' is of {type(padded_sequences)} type and has shape {padded_sequences.shape}")
###Output
ENG: please be honest . -- GER: <start> seien sie bitte ehrlich ! <end> -- TOK_GER: [1, 264, 8, 67, 569, 9, 2]
ENG: it isn't a fish . -- GER: <start> das ist kein fisch . <end> -- TOK_GER: [1, 11, 6, 71, 368, 3, 2]
ENG: do you have a son ? -- GER: <start> haben sie einen sohn ? <end> -- TOK_GER: [1, 35, 8, 40, 355, 7, 2]
ENG: you're moody . -- GER: <start> du bist launisch . <end> -- TOK_GER: [1, 13, 32, 1336, 3, 2]
ENG: can i talk to you ? -- GER: <start> kann ich dich sprechen ? <end> -- TOK_GER: [1, 30, 4, 28, 470, 7, 2]
'padded_sequences' is of <class 'numpy.ndarray'> type and has shape (20000, 14)
###Markdown
2. Prepare the data Load the embedding layerAs part of the dataset preprocessing for this project, you will use a pre-trained English word embedding module from TensorFlow Hub. The URL for the module is https://tfhub.dev/google/tf2-preview/nnlm-en-dim128-with-normalization/1.This embedding takes a batch of text tokens in a 1-D tensor of strings as input. It then embeds the separate tokens into a 128-dimensional space. The code to load and test the embedding layer is provided for you below.**NB:** this model can also be used as a sentence embedding module. The module will process each token by removing punctuation and splitting on spaces. It then averages the word embeddings over a sentence to give a single embedding vector. However, we will use it only as a word embedding module, and will pass each word in the input sentence as a separate token.
###Code
# Load embedding module from Tensorflow Hub
embedding_layer = hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1",
output_shape=[128], input_shape=[], dtype=tf.string)
# Test the layer
print(tf.constant(["these", "aren't", "the", "droids", "you're", "looking", "for"]).shape)
embedding_layer(tf.constant(["these", "aren't", "the", "droids", "you're", "looking", "for"])).shape
###Output
(7,)
###Markdown
You should now prepare the training and validation Datasets.* Create a random training and validation set split of the data, reserving e.g. 20% of the data for validation (NB: each English dataset example is a single sentence string, and each German dataset example is a sequence of padded integer tokens).* Load the training and validation sets into a tf.data.Dataset object, passing in a tuple of English and German data for both training and validation sets.* Create a function to map over the datasets that splits each English sentence at spaces. Apply this function to both Dataset objects using the map method. _Hint: look at the tf.strings.split function._* Create a function to map over the datasets that embeds each sequence of English words using the loaded embedding layer/model. Apply this function to both Dataset objects using the map method.* Create a function to filter out dataset examples where the English sentence is more than 13 (embedded) tokens in length. Apply this function to both Dataset objects using the filter method.* Create a function to map over the datasets that pads each English sequence of embeddings with some distinct padding value before the sequence, so that each sequence is length 13. Apply this function to both Dataset objects using the map method. _Hint: look at the tf.pad function. You can extract a Tensor shape using tf.shape; you might also find the tf.math.maximum function useful._* Batch both training and validation Datasets with a batch size of 16.* Print the `element_spec` property for the training and validation Datasets. * Using the Dataset `.take(1)` method, print the shape of the English data example from the training Dataset.* Using the Dataset `.take(1)` method, print the German data example Tensor from the validation Dataset.
###Code
# Create a random training and validation set split of the data, reserving e.g. 20% of the data for validation
en_train, en_test, ger_train, ger_test = train_test_split(preproc_data_en, padded_sequences, test_size = 0.2)
# Load the training and validation sets into a tf.data.Dataset object, passing in a tuple of English and German data for both training and validation sets
train_dataset = tf.data.Dataset.from_tensor_slices((en_train, ger_train))
print(train_dataset.element_spec)
test_dataset = tf.data.Dataset.from_tensor_slices((en_test, ger_test))
print(test_dataset.element_spec, "\n")
# Create a function to map over the datasets that splits each English sentence at spaces
def map_split(english, german):
return tf.strings.split(english), german
train_dataset=train_dataset.map(map_split)
print(train_dataset.element_spec)
test_dataset=test_dataset.map(map_split)
print(test_dataset.element_spec, "\n")
# Create a function to map over the datasets that embeds each sequence of English words using the loaded embedding layer/model
def map_emb(english, german):
return embedding_layer(english), german
train_dataset=train_dataset.map(map_emb)
print(train_dataset.element_spec)
test_dataset=test_dataset.map(map_emb)
print(test_dataset.element_spec, "\n")
# Create a function to filter out dataset examples where the English sentence is more than 13 (embedded) tokens in length
train_dataset=train_dataset.filter(lambda x, y : len(x) <= 13)
print(train_dataset.element_spec)
test_dataset=test_dataset.filter(lambda x, y : len(x) <= 13)
print(test_dataset.element_spec, "\n")
def pad_seq(english, german):
n = 13 - english.get_shape()[0]
paddings = tf.concat(([[n,0]], [[0,0]]), axis=0)
return tf.pad(english, paddings), german
# Create a function to map over the datasets that pads each English sequence of embeddings
def pad_eng_embeddings(english, german):
return tf.pad(english, [tf.math.maximum([13-tf.shape(english)[0] ,0], tf.constant([0,0])), tf.constant([0,0])], "CONSTANT", constant_values=0), german
train_dataset=train_dataset.map(pad_eng_embeddings)
print(train_dataset.element_spec)
test_dataset=test_dataset.map(pad_eng_embeddings)
print(test_dataset.element_spec, "\n")
# Batch both training and validation Datasets with a batch size of 16.
train_dataset=train_dataset.batch(16)
print(train_dataset.element_spec)
test_dataset=test_dataset.batch(16)
print(test_dataset.element_spec, "\n")
###Output
(TensorSpec(shape=(), dtype=tf.string, name=None), TensorSpec(shape=(14,), dtype=tf.int32, name=None))
(TensorSpec(shape=(), dtype=tf.string, name=None), TensorSpec(shape=(14,), dtype=tf.int32, name=None))
(TensorSpec(shape=(None,), dtype=tf.string, name=None), TensorSpec(shape=(14,), dtype=tf.int32, name=None))
(TensorSpec(shape=(None,), dtype=tf.string, name=None), TensorSpec(shape=(14,), dtype=tf.int32, name=None))
(TensorSpec(shape=(None, 128), dtype=tf.float32, name=None), TensorSpec(shape=(14,), dtype=tf.int32, name=None))
(TensorSpec(shape=(None, 128), dtype=tf.float32, name=None), TensorSpec(shape=(14,), dtype=tf.int32, name=None))
(TensorSpec(shape=(None, 128), dtype=tf.float32, name=None), TensorSpec(shape=(14,), dtype=tf.int32, name=None))
(TensorSpec(shape=(None, 128), dtype=tf.float32, name=None), TensorSpec(shape=(14,), dtype=tf.int32, name=None))
(TensorSpec(shape=(None, 128), dtype=tf.float32, name=None), TensorSpec(shape=(14,), dtype=tf.int32, name=None))
(TensorSpec(shape=(None, 128), dtype=tf.float32, name=None), TensorSpec(shape=(14,), dtype=tf.int32, name=None))
(TensorSpec(shape=(None, None, 128), dtype=tf.float32, name=None), TensorSpec(shape=(None, 14), dtype=tf.int32, name=None))
(TensorSpec(shape=(None, None, 128), dtype=tf.float32, name=None), TensorSpec(shape=(None, 14), dtype=tf.int32, name=None))
###Markdown
3. Create the custom layerYou will now create a custom layer to add the learned end token embedding to the encoder model:
###Code
# Run this cell to download and view a schematic diagram for the encoder model
!wget -q -O neural_translation_model.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1JrtNOzUJDaOWrK4C-xv-4wUuZaI12sQI"
Image("neural_translation_model.png")
###Output
_____no_output_____
###Markdown
You should now build the custom layer.* Using layer subclassing, create a custom layer that takes a batch of English data examples from one of the Datasets, and adds a learned embedded ‘end’ token to the end of each sequence. * This layer should create a TensorFlow Variable (that will be learned during training) that is 128-dimensional (the size of the embedding space). _Hint: you may find it helpful in the call method to use the tf.tile function to replicate the end token embedding across every element in the batch._* Using the Dataset `.take(1)` method, extract a batch of English data examples from the training Dataset and print the shape. Test the custom layer by calling the layer on the English data batch Tensor and print the resulting Tensor shape (the layer should increase the sequence length by one).
###Code
class CustomLayer(Layer):
def __init__(self, embedding_dim=128, **kwargs):
super(CustomLayer, self).__init__(**kwargs)
self.end_token_emb = tf.Variable(initial_value=tf.random.uniform(shape=(1,1,embedding_dim)),
dtype=tf.float32,
trainable=True)
def call(self, inputs):
end_token = self.end_token_emb
end_token = tf.tile(self.end_token_emb, [tf.shape(inputs)[0],1,1])
return tf.keras.layers.concatenate([inputs, end_token], axis=1)
examples = []
for sample in train_dataset.take(1):
examples.append(sample)
break
print(examples[0][0].shape)
custom_layer = CustomLayer(input_shape=examples[0][0].shape)
output = custom_layer(examples[0][0])
print(output.shape)
###Output
(16, 13, 128)
(16, 14, 128)
###Markdown
4. Build the encoder networkThe encoder network follows the schematic diagram above. You should now build the RNN encoder model.* Using the functional API, build the encoder network according to the following spec: * The model will take a batch of sequences of embedded English words as input, as given by the Dataset objects. * The next layer in the encoder will be the custom layer you created previously, to add a learned end token embedding to the end of the English sequence. * This is followed by a Masking layer, with the `mask_value` set to the distinct padding value you used when you padded the English sequences with the Dataset preprocessing above. * The final layer is an LSTM layer with 512 units, which also returns the hidden and cell states. * The encoder is a multi-output model. There should be two output Tensors of this model: the hidden state and cell states of the LSTM layer. The output of the LSTM layer is unused.* Using the Dataset `.take(1)` method, extract a batch of English data examples from the training Dataset and test the encoder model by calling it on the English data Tensor, and print the shape of the resulting Tensor outputs.* Print the model summary for the encoder network.
###Code
input_layer = tf.keras.layers.Input(shape=(13,128))
x = CustomLayer()(input_layer)
x = tf.keras.layers.Masking(mask_value=0.0)(x)
lstm_output, hidden_state, cell_state = tf.keras.layers.LSTM(units=512, return_state=True, return_sequences=True)(x)
model = tf.keras.models.Model(inputs=input_layer, outputs=[hidden_state, cell_state])
# Take an example from the batch as input
state_h, state_c = model(examples[0][0])
# Print the shapes of the outputs
print("hidden_state shape:", state_h.shape)
print("cell_state shape:", state_c.shape, "\n")
# Print the model summary
model.summary()
###Output
hidden_state shape: (16, 512)
cell_state shape: (16, 512)
Model: "model_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) [(None, 13, 128)] 0
_________________________________________________________________
custom_layer_7 (CustomLayer) (None, 14, 128) 128
_________________________________________________________________
masking_4 (Masking) (None, 14, 128) 0
_________________________________________________________________
lstm_7 (LSTM) [(None, 14, 512), (None, 1312768
=================================================================
Total params: 1,312,896
Trainable params: 1,312,896
Non-trainable params: 0
_________________________________________________________________
###Markdown
5. Build the decoder networkThe decoder network follows the schematic diagram below.
###Code
# Run this cell to download and view a schematic diagram for the decoder model
!wget -q -O neural_translation_model.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1DTeaXD8tA8RjkpVrB2mr9csSBOY4LQiW"
Image("neural_translation_model.png")
###Output
_____no_output_____
###Markdown
You should now build the RNN decoder model.* Using Model subclassing, build the decoder network according to the following spec: * The initializer should create the following layers: * An Embedding layer with vocabulary size set to the number of unique German tokens, embedding dimension 128, and set to mask zero values in the input. * An LSTM layer with 512 units, that returns its hidden and cell states, and also returns sequences. * A Dense layer with number of units equal to the number of unique German tokens, and no activation function. * The call method should include the usual `inputs` argument, as well as the additional keyword arguments `hidden_state` and `cell_state`. The default value for these keyword arguments should be `None`. * The call method should pass the inputs through the Embedding layer, and then through the LSTM layer. If the `hidden_state` and `cell_state` arguments are provided, these should be used for the initial state of the LSTM layer. _Hint: use the_ `initial_state` _keyword argument when calling the LSTM layer on its input._ * The call method should pass the LSTM output sequence through the Dense layer, and return the resulting Tensor, along with the hidden and cell states of the LSTM layer.* Using the Dataset `.take(1)` method, extract a batch of English and German data examples from the training Dataset. Test the decoder model by first calling the encoder model on the English data Tensor to get the hidden and cell states, and then call the decoder model on the German data Tensor and hidden and cell states, and print the shape of the resulting decoder Tensor outputs.* Print the model summary for the decoder network.
###Code
class Decoder(Model):
def __init__(self, ger_tokens, **kwargs):
super(Decoder, self).__init__(**kwargs)
self.emb_layer = Embedding(input_dim=ger_tokens+1 , output_dim=128, mask_zero=True)
self.lstm_layer = LSTM(units=512, return_state=True, return_sequences=True)
self.dense_layer = Dense(units=ger_tokens+1, activation=None)
def call(self, inputs, h_s=None, c_s=None):
h = self.emb_layer(inputs)
if (h_s is None) or (c_s is None):
h, h_s, c_s = self.lstm_layer(h)
else:
h, h_s, c_s = self.lstm_layer(h, initial_state=(h_s, c_s))
out = self.dense_layer(h)
return out, h_s, c_s
hidden_state, cell_state = model(examples[0][0])
decoder_model = Decoder(ger_tokens)
output, decoder_hs, decoder_cs= decoder_model(examples[0][1])
print("output shape:", output.shape)
decoder_model.summary()
###Output
Model: "decoder_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_3 (Embedding) multiple 735232
_________________________________________________________________
lstm_8 (LSTM) multiple 1312768
_________________________________________________________________
dense_3 (Dense) multiple 2946672
=================================================================
Total params: 4,994,672
Trainable params: 4,994,672
Non-trainable params: 0
_________________________________________________________________
###Markdown
6. Make a custom training loopYou should now write a custom training loop to train your custom neural translation model.* Define a function that takes a Tensor batch of German data (as extracted from the training Dataset), and returns a tuple containing German inputs and outputs for the decoder model (refer to schematic diagram above).* Define a function that computes the forward and backward pass for your translation model. This function should take an English input, German input and German output as arguments, and should do the following: * Pass the English input into the encoder, to get the hidden and cell states of the encoder LSTM. * These hidden and cell states are then passed into the decoder, along with the German inputs, which returns a sequence of outputs (the hidden and cell state outputs of the decoder LSTM are unused in this function). * The loss should then be computed between the decoder outputs and the German output function argument. * The function returns the loss and gradients with respect to the encoder and decoder’s trainable variables. * Decorate the function with `@tf.function`* Define and run a custom training loop for a number of epochs (for you to choose) that does the following: * Iterates through the training dataset, and creates decoder inputs and outputs from the German sequences. * Updates the parameters of the translation model using the gradients of the function above and an optimizer object. * Every epoch, compute the validation loss on a number of batches from the validation set and save the epoch training and validation losses.* Plot the learning curves for loss vs epoch for both training and validation sets._Hint: This model is computationally demanding to train. The quality of the model or length of training is not a factor in the grading rubric. However, to obtain a better model we recommend using the GPU accelerator hardware on Colab._
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
import time
start_time = time.time()
epochs=4
def german_output(data):
return (tf.cast(data[:,0:-1], tf.float32), tf.cast(data[:, 1:], tf.float32))
@tf.function
def grad(eng_input, ger_input, ger_output):
global model, decoder_model
with tf.GradientTape() as tape:
encoder_hs, encoder_cs = model(eng_input)
decoder_output, decoder_hs, decoder_cs = decoder_model(ger_input, encoder_hs, encoder_cs)
loss_value = loss_object(y_true=ger_output, y_pred=decoder_output)
return loss_value, tape.gradient (loss_value, model.trainable_variables+decoder_model.trainable_variables)
train_loss_results=[]
val_loss_results=[]
for epoch in range(epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_val_loss_avg = tf.keras.metrics.Mean()
for x, y in train_dataset:
dec_inp, dec_out = german_output(y)
loss_value, grads = grad(x, dec_inp, dec_out)
optimizer.apply_gradients(zip(grads, model.trainable_variables+decoder_model.trainable_variables))
epoch_loss_avg(loss_value)
train_loss_results.append(epoch_loss_avg.result())
for x, y in test_dataset:
dec_inp, dec_out = german_output(y)
loss_value, grads = grad(x, dec_inp, dec_out)
epoch_val_loss_avg(loss_value)
val_loss_results.append(epoch_val_loss_avg.result())
print ("Epoch {:03d}: Loss: {:.3f} Val Loss: {:.3f} --- Total Duration: {:.3f}".format(epoch, epoch_loss_avg.result(), epoch_val_loss_avg.result(), time.time()-start_time))
# Plot the training training loss and validation loss
plt.figure(figsize=(11,5))
plt.plot(train_loss_results)
plt.plot(val_loss_results)
plt.title('Loss vs. Val Loss')
plt.ylabel('Loss')
plt.xlabel('Epochs')
plt.legend(['Training', 'Validation'], loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
7. Use the model to translateNow it's time to put your model into practice! You should run your translation for five randomly sampled English sentences from the dataset. For each sentence, the process is as follows:* Preprocess and embed the English sentence according to the model requirements.* Pass the embedded sentence through the encoder to get the encoder hidden and cell states.* Starting with the special `"<start>"` token, use this token and the final encoder hidden and cell states to get the one-step prediction from the decoder, as well as the decoder’s updated hidden and cell states.* Create a loop to get the next step prediction and updated hidden and cell states from the decoder, using the most recent hidden and cell states. Terminate the loop when the `"<end>"` token is emitted, or when the sentence has reached a maximum length.* Decode the output token sequence into German text and print the English text and the model's German translation.
###Code
def translate(sentence):
def preprocess_eng(eng, maxlen=13):
eng = tf.strings.split(eng)
eng_em = embedding_layer(eng)
padding = [[tf.math.maximum(maxlen-tf.shape(eng_em)[0],0), 0], [0,0]]
return tf.pad(eng_em, padding)
eng_inp = preprocess_eng(sentence)
hidden, cell = model(tf.expand_dims(eng_inp, 0))
german_word = tf.Variable([[tokenizer.word_index['<start>']]])
german_sent = []
while len(german_sent) < 15:
decoder_outputs, hidden, cell = decoder_model(german_word, hidden, cell)
decoder_outputs = tf.squeeze(tf.argmax(decoder_outputs, axis=2)).numpy()
if decoder_outputs == tokenizer.word_index['<end>']:
break
german_sent.append(tokenizer.index_word[decoder_outputs])
german_word = tf.Variable([[decoder_outputs]])
print(f'English sentence:\t{sentence}')
print(f'German translation:\t{german_sent}')
sample_index = np.random.choice(len(preproc_data_ge), size=5, replace=False)
de_sentences_example = [preprocess_sentence(data.split("\t")[1]) for data in data_examples]
for i in sample_index:
sentence = preproc_data_en[i]
translate(sentence)
print(f'True German sentence:\t{de_sentences_example[i]}\n')
###Output
_____no_output_____
###Markdown
USA-Immigration Data Warehouse Project SummaryMany people travel to the USA for different purposes, and the TSA (Transportation Security Administration) wants to understand immigration patterns in depth, on a monthly basis and by airport, based on different factors such as immigration data, temperature, US Demographics and Airport Codes. This project provides a data warehouse that allows different TSA members to access curated data that can be used for reports and deeper analytical insights related to traveler patterns.
###Code
from datetime import datetime
from pyspark.sql import SparkSession
from pyspark.sql.types import DateType
from pyspark.sql.functions import udf, rand
from pyspark.sql.functions import isnan, when, count, col
import pyspark.sql.functions as F
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Scope the Project and Datasets Scope In order to build the data warehouse, I developed a data pipeline that extracts, transforms and loads (ETL) data into the warehouse. This will allow data analysts to consume the data and provide deeper data insights. The project involves different cloud technologies, such as Redshift (data warehouse), PySpark (to read some datasets) and Apache AirFlow for data pipeline orchestration. Datasets* I94 Immigration Data: This data comes from the US National Tourism and Trade Office and we will use the 2016 data. Each report contains international visitor arrival statistics by world regions and select countries, such as type of visa, mode of transportation, age groups, states visited, the top ports of entry, etc. [Source](https://travel.trade.gov/research/reports/i94/historical/2016.html)* World Temperature Data: This dataset comes from Kaggle and contains a compilation of global temperatures since 1750. In this case we will focus on the dataset **GlobalLandTemperatureByCity.csv**, which contains: AverageTemperature, AverageTemperatureUncertainty, City, Country, Latitude, Longitude. [Source](https://www.kaggle.com/berkeleyearth/climate-change-earth-surface-temperature-data).* U.S. City Demographic Data: This dataset contains information about the demographics of all US cities and census-designated places with a population greater than or equal to 65,000. This data comes from the US Census Bureau's 2015 American Community Survey. [Source](https://public.opendatasoft.com/explore/dataset/us-cities-demographics/export/)* Airport Code Table: The airport codes may refer to either the IATA airport code, a three-letter code used in passenger reservation, ticketing and baggage-handling systems, or the ICAO airport code, a four-letter code used by ATC systems and for airports that do not have an IATA airport code. Airport codes from around the world. Downloaded from the public domain source http://ourairports.com/data/, which compiled this data from multiple different sources. [Source](https://datahub.io/core/airport-codesdata) Datasets Gathering, Exploration & Analysis I-94 Immigration Dataset
###Code
spark = SparkSession.builder.\
config("spark.jars.packages","saurfang:spark-sas7bdat:2.0.0-s_2.11")\
.enableHiveSupport().getOrCreate()
imm_data = spark.read.parquet("dags/datasets/sas_data")
print("Size of the dataset", imm_data.count())
imm_data.limit(15).toPandas()
###Output
Size of the dataset 3096313
###Markdown
Looking at this SAS data, each row represents a person record with the main details of entry into and exit from the USA; this is very insightful and it will become the main fact table for our project. I also extract the schema of the SAS file to understand the data types and whether NULL values are allowed in certain columns
###Code
imm_data.printSchema()
###Output
root
|-- cicid: double (nullable = true)
|-- i94yr: double (nullable = true)
|-- i94mon: double (nullable = true)
|-- i94cit: double (nullable = true)
|-- i94res: double (nullable = true)
|-- i94port: string (nullable = true)
|-- arrdate: double (nullable = true)
|-- i94mode: double (nullable = true)
|-- i94addr: string (nullable = true)
|-- depdate: double (nullable = true)
|-- i94bir: double (nullable = true)
|-- i94visa: double (nullable = true)
|-- count: double (nullable = true)
|-- dtadfile: string (nullable = true)
|-- visapost: string (nullable = true)
|-- occup: string (nullable = true)
|-- entdepa: string (nullable = true)
|-- entdepd: string (nullable = true)
|-- entdepu: string (nullable = true)
|-- matflag: string (nullable = true)
|-- biryear: double (nullable = true)
|-- dtaddto: string (nullable = true)
|-- gender: string (nullable = true)
|-- insnum: string (nullable = true)
|-- airline: string (nullable = true)
|-- admnum: double (nullable = true)
|-- fltno: string (nullable = true)
|-- visatype: string (nullable = true)
###Markdown
It is important to understand the number of columns and the value types so they can later be reflected in the SQL tables
###Code
df_imm = imm_data.limit(15).toPandas()
df_imm.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 15 entries, 0 to 14
Data columns (total 28 columns):
cicid 15 non-null float64
i94yr 15 non-null float64
i94mon 15 non-null float64
i94cit 15 non-null float64
i94res 15 non-null float64
i94port 15 non-null object
arrdate 15 non-null float64
i94mode 15 non-null float64
i94addr 14 non-null object
depdate 15 non-null float64
i94bir 15 non-null float64
i94visa 15 non-null float64
count 15 non-null float64
dtadfile 15 non-null object
visapost 15 non-null object
occup 0 non-null object
entdepa 15 non-null object
entdepd 15 non-null object
entdepu 0 non-null object
matflag 15 non-null object
biryear 15 non-null float64
dtaddto 15 non-null object
gender 15 non-null object
insnum 0 non-null object
airline 15 non-null object
admnum 15 non-null float64
fltno 15 non-null object
visatype 15 non-null object
dtypes: float64(13), object(15)
memory usage: 3.4+ KB
###Markdown
Given the size of this dataset and the limited amount of memory in the workspace, I just wanted to get an idea of which columns have potential null values, expressed as a percentage over a sample of 10,000 records
###Code
df_imm_all = imm_data.limit(10000).toPandas()
df_imm_all.isnull().sum()/df_imm_all.shape[0]
###Output
_____no_output_____
###Markdown
We can notice that some columns are incomplete, such as the depdate, occup and gender records. However, it is not critical to have those values empty in this particular dataset I94_SAS_Labels_Descriptions.SASThe previous SAS dataset comes with additional labels and descriptions, which can be used as dimensions. However, the SAS file needs to be parsed in order to extract the codes. In particular for the project, I am interested in the following codes:- Country- Port- Mode- Addr- Type
###Code
def sas_program_file_value_parser(sas_source_file, value, columns):
"""Parses SAS Program file to return value as pandas dataframe
Args:
sas_source_file (str): SAS source code file.
value (str): sas value to extract.
columns (list): list of 2 containing column names.
Return:
pd.DataFrame: two columns containing the extracted codes and values.
"""
file_string = ''
with open(sas_source_file) as f:
file_string = f.read()
file_string = file_string[file_string.index(value):]
file_string = file_string[:file_string.index(';')]
line_list = file_string.split('\n')[1:]
codes = []
values = []
for line in line_list:
if '=' in line:
code, val = line.split('=')
code = code.strip()
val = val.strip()
if code[0] == "'":
code = code[1:-1]
if val[0] == "'":
val = val[1:-1]
codes.append(code)
values.append(val)
return pd.DataFrame(zip(codes,values), columns=columns)
i94cit_res = sas_program_file_value_parser('dags/datasets/I94_SAS_Labels_Descriptions.SAS', 'i94cntyl', ['code', 'country'])
i94port = sas_program_file_value_parser('dags/datasets/I94_SAS_Labels_Descriptions.SAS', 'i94prtl', ['code', 'port'])
i94mode = sas_program_file_value_parser('dags/datasets/I94_SAS_Labels_Descriptions.SAS', 'i94model', ['code', 'mode'])
i94addr = sas_program_file_value_parser('dags/datasets/I94_SAS_Labels_Descriptions.SAS', 'i94addrl', ['code', 'addr'])
i94visa = sas_program_file_value_parser('dags/datasets/I94_SAS_Labels_Descriptions.SAS', 'I94VISA', ['code', 'type'])
###Output
_____no_output_____
###Markdown
World Temperature Dataset
###Code
df_temp_data = pd.read_csv('dags/datasets/GlobalLandTemperaturesByCity.csv')
print("Size of the dataset: ", len(df_temp_data))
df_temp_data.head(10)
###Output
Size of the dataset: 8599212
###Markdown
For this dataset, I am interested in knowing which columns contain only unique values
###Code
for col in df_temp_data:
print(col, df_temp_data[col].is_unique)
###Output
dt False
AverageTemperature False
AverageTemperatureUncertainty False
City False
Country False
Latitude False
Longitude False
###Markdown
The dt column is neither unique nor stored as a datetime type, which makes sense since different countries and cities share the same dates. As a rule of thumb, since this dataset is about temperatures, if both AverageTemperature and AverageTemperatureUncertainty are NaN we can delete those entries and keep only rows with actual temperature measurements.
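A minimal sketch of that clean-up step (optional, and assuming the `df_temp_data` dataframe loaded above):

```python
# Keep only rows that actually contain a temperature measurement (sketch only).
df_temp_clean = df_temp_data.dropna(subset=['AverageTemperature',
                                            'AverageTemperatureUncertainty'])
print("Rows before: {:,} - after dropping empty temperatures: {:,}".format(
    len(df_temp_data), len(df_temp_clean)))
```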
###Code
df_temp_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8599212 entries, 0 to 8599211
Data columns (total 7 columns):
dt object
AverageTemperature float64
AverageTemperatureUncertainty float64
City object
Country object
Latitude object
Longitude object
dtypes: float64(2), object(5)
memory usage: 459.2+ MB
###Markdown
This just gives us an idea of the types of data in the dataset; most of the object columns appear to be text
###Code
df_temp_data.isnull().sum()/df_temp_data.shape[0]
###Output
_____no_output_____
###Markdown
The proportion of NULL values is quite small compared with the size of the dataset; as noted above, we can remove the NULL temperature values, since those rows do not bring much value. USA City Demographics Dataset
###Code
df_city_dem_data = pd.read_csv('dags/datasets/us-cities-demographics.csv', sep=';')
print("Size of dataset: ", len(df_city_dem_data))
df_city_dem_data.head(10)
df_city_dem_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2891 entries, 0 to 2890
Data columns (total 12 columns):
City 2891 non-null object
State 2891 non-null object
Median Age 2891 non-null float64
Male Population 2888 non-null float64
Female Population 2888 non-null float64
Total Population 2891 non-null int64
Number of Veterans 2878 non-null float64
Foreign-born 2878 non-null float64
Average Household Size 2875 non-null float64
State Code 2891 non-null object
Race 2891 non-null object
Count 2891 non-null int64
dtypes: float64(6), int64(2), object(4)
memory usage: 271.1+ KB
###Markdown
I explore the value types again in order to define the database table columns later
###Code
df_city_dem_data.isnull().sum()/df_city_dem_data.shape[0]
###Output
_____no_output_____
###Markdown
We get the proportion of potential NULL values in the dataset, in order to understand which columns may need a data quality test. In this dataset, I would not remove the incomplete rows, since they offer relevant information in other columns Airport Code Dataset
###Code
df_airport_code_data = pd.read_csv('dags/datasets/airport-codes_csv.csv')
df_airport_code_data.head(10)
for col in df_airport_code_data:
print(col, df_airport_code_data[col].is_unique)
###Output
ident True
type False
name False
elevation_ft False
continent False
iso_country False
iso_region False
municipality False
gps_code False
iata_code False
local_code False
coordinates False
###Markdown
Here the ident column is unique, so it serves as the primary key
###Code
df_airport_code_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 55075 entries, 0 to 55074
Data columns (total 12 columns):
ident 55075 non-null object
type 55075 non-null object
name 55075 non-null object
elevation_ft 48069 non-null float64
continent 27356 non-null object
iso_country 54828 non-null object
iso_region 55075 non-null object
municipality 49399 non-null object
gps_code 41030 non-null object
iata_code 9189 non-null object
local_code 28686 non-null object
coordinates 55075 non-null object
dtypes: float64(1), object(11)
memory usage: 5.0+ MB
###Markdown
Here the object columns appear to be dominated by text values
###Code
df_airport_code_data.isnull().sum()/df_airport_code_data.shape[0]
###Output
_____no_output_____
###Markdown
The proportion of potential NULL values in the dataset is high in some cases, like iata_code. However, this is information related to airports, and every row dropped just eliminates the possibility of matching that airport with other tables, so I would not get rid of null values in these rows Data AssessmentAfter exploring the datasets above, by identifying duplicates, unique and NaN values, I have the following diagnostic per dataset: I-94 Immigration Dataset- Each row is a visitor (in/out) at a USA airport; here I would not remove any rows, even if some rows have null values- This will work as a fact table in our solution- The following label values will give meaning to our dimensional tables: Country(i94cntyl), Port(i94prtl), Mode(i94model), Addr(i94addrl), Type(I94VISA) Airport Code Dataset- This dataset contains all the airports and we will not drop any row from here, to keep all the airport data; we are also sure that the ident column is unique and it will serve as primary key- This will be a fact table USA City Demographics Dataset- This dataset will help us understand the demographics of our travelers and it will be a fact table World Temperature Dataset- Here we can eliminate the rows that do not offer a temperature, in order to reduce the dataset size. However, the proportion is low, so I left this elimination as optional- This will be another fact table Data ModelThe data model consists of the following tables: Fact Tables- immigration- us_cities_demographics- airport_codes- world_temperature Dimensional Tables- i94cit_res- i94port- i94mode- i94addr- i94visa Considerations & Notes- The following tables are distributed across all nodes (DISTSTYLE ALL): i94cit_res, i94port, i94mode, i94addr, i94visa, us_cities_demographics- Redundancy -> DISTSTYLE ALL will copy the data of your table to all nodes - to mitigate data transfer requirements across nodes. You can find out the size of your table and the available size of your Redshift nodes, to see if you can afford to copy the table multiple times per node.
Conceptual Data Model Diagram

Table Definitions (Details)
```sql
create_table_immigration = """CREATE TABLE IF NOT EXISTS public.immigration (
    cicid FLOAT PRIMARY KEY, i94yr FLOAT SORTKEY, i94mon FLOAT DISTKEY,
    i94cit FLOAT REFERENCES i94cit_res(code), i94res FLOAT REFERENCES i94cit_res(code),
    i94port CHAR(3) REFERENCES i94port(code), arrdate FLOAT,
    i94mode FLOAT REFERENCES i94mode(code), i94addr VARCHAR REFERENCES i94addr(code),
    depdate FLOAT, i94bir FLOAT, i94visa FLOAT REFERENCES i94visa(code), count FLOAT,
    dtadfile VARCHAR, visapost CHAR(3), occup CHAR(3), entdepa CHAR(1), entdepd CHAR(1),
    entdepu CHAR(1), matflag CHAR(1), biryear FLOAT, dtaddto VARCHAR, gender CHAR(1),
    insnum VARCHAR, airline VARCHAR, admnum FLOAT, fltno VARCHAR, visatype VARCHAR);"""

create_us_cities_demographics = """CREATE TABLE IF NOT EXISTS public.us_cities_demographics (
    city VARCHAR, state VARCHAR, median_age FLOAT, male_population INT, female_population INT,
    total_population INT, number_of_veterans INT, foreign_born INT, average_household_size FLOAT,
    state_code CHAR(2) REFERENCES i94addr(code), race VARCHAR, count INT)
DISTSTYLE ALL"""

create_airport_codes = """CREATE TABLE IF NOT EXISTS public.airport_codes (
    ident VARCHAR, type VARCHAR, name VARCHAR, elevation_ft FLOAT, continent VARCHAR,
    iso_country VARCHAR, iso_region VARCHAR, municipality VARCHAR, gps_code VARCHAR,
    iata_code VARCHAR, local_code VARCHAR, coordinates VARCHAR);"""

create_world_temperature = """CREATE TABLE IF NOT EXISTS public.world_temperature (
    dt DATE, AverageTemperature FLOAT, AverageTemperatureUncertainty FLOAT,
    City VARCHAR, Country VARCHAR, Latitude VARCHAR, Longitude VARCHAR);"""

create_i94cit_res = """CREATE TABLE IF NOT EXISTS public.i94cit_res (
    code FLOAT PRIMARY KEY, country VARCHAR)
DISTSTYLE ALL"""

create_i94port = """CREATE TABLE IF NOT EXISTS public.i94port (
    code CHAR(3) PRIMARY KEY, port VARCHAR)
DISTSTYLE ALL"""

create_i94mode = """CREATE TABLE IF NOT EXISTS public.i94mode (
    code FLOAT PRIMARY KEY, mode VARCHAR)
DISTSTYLE ALL"""

create_i94addr = """CREATE TABLE IF NOT EXISTS public.i94addr (
    code CHAR(2) PRIMARY KEY, addr VARCHAR)
DISTSTYLE ALL"""

create_i94visa = """CREATE TABLE IF NOT EXISTS public.i94visa (
    code FLOAT PRIMARY KEY, type VARCHAR)
DISTSTYLE ALL"""
```
Mapping Out Data Pipelines and Data Quality checksThe DAG shown in the graph shows the nodes and how the tables are being loaded with data. In our particular case every table (fact or dimension) has a data check of the count type. Here are some quality check examples for the dimensional tables.
```
{'name': 'i94cit_res', 'value': 'i94cntyl', 'columns': ['code', 'country'],
 'dq_checks': [{'check_sql': "SELECT COUNT(*) FROM i94cit_res WHERE code is null", 'expected_result': 0}]},
{'name': 'i94visa', 'value': 'I94VISA', 'columns': ['code', 'type'],
 'dq_checks': [{'check_sql': "SELECT COUNT(*) FROM i94visa WHERE code is null", 'expected_result': 0}]},
{'name': 'i94port', 'value': 'i94prtl', 'columns': ['code', 'port'],
 'dq_checks': [{'check_sql': "SELECT COUNT(*) FROM i94port WHERE code is null", 'expected_result': 0}]},
{'name': 'i94addr', 'value': 'i94addrl', 'columns': ['code', 'addr'],
 'dq_checks': [{'check_sql': "SELECT COUNT(*) FROM i94addr WHERE code is null", 'expected_result': 0}]},
{'name': 'i94mode', 'value': 'i94model', 'columns': ['code', 'mode'],
 'dq_checks': [{'check_sql': "SELECT COUNT(*) FROM i94mode WHERE code is null", 'expected_result': 0}]}
```
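As a rough illustration of how such checks could be executed (a sketch only, not the project's actual Airflow operator; it assumes an open psycopg2 connection to the Redshift cluster and a `dq_checks` list shaped like the examples above):

```python
# Sketch of a generic data-quality check runner (illustrative only).
def run_dq_checks(conn, dq_checks):
    failures = []
    with conn.cursor() as cur:
        for check in dq_checks:
            cur.execute(check['check_sql'])
            result = cur.fetchone()[0]
            if result != check['expected_result']:
                failures.append((check['check_sql'], result))
    if failures:
        raise ValueError("Data quality checks failed: {}".format(failures))
    print("All {} data quality checks passed".format(len(dq_checks)))
```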
How to run the ETL to model the data
1. Clone the repository and fill in the credential information in tables/dwh.cfg and dags/dw.cfg
2. Read the file dags/datasets/README.md (it will tell you about the datasets needed)
3. Upload those datasets to an S3 bucket
4. Follow the instructions to run Apache Airflow in a Docker container [Instructions](https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html)
5. **Apache AirFlow 2.0.1 has an error in the official documentation, and I created a video to fix it and share it with the world (sharing is caring) [FIX THAT BUG](https://youtu.be/RVKRtgDIh8A)**
6. Go to the main project folder and, if Apache AirFlow is not already running, do:
```
docker compose up    (start the services)
docker compose down  (stop the services)
```
7. Configure the connector in Apache Airflow so it can reach Amazon Redshift [Detailed Steps](https://www.progress.com/tutorials/jdbc/connect-to-redshift-salesforce-and-others-from-apache-airflow)
8. You should be able to see the DAG -> immigration_etl_dag in Apache Airflow
9. **Run the script tables/create_tables.py**
10. Finally, in Apache AirFlow you can execute the DAG and wait for the results.

Project Write Up, Questions & Assumptions
* Clearly state the rationale for the choice of tools and technologies for the project. In terms of technologies, I also wanted to bring in some technologies not touched by the certification, like Docker containers. Amazon RDS and Redshift are ideal tools for loading data into databases, and S3 buckets are quite convenient for storing large datasets in the cloud. Finally, an orchestrator like Apache AirFlow plays a main role in not only executing all the ETLs but also monitoring performance and keeping logs of executions.
* Propose how often the data should be updated and why. If the data sources are updated very often, a short time window should be specified, like every day at 5 am. However, given the way I have designed this ETL, based on the data freshness and on datasets that are not updated in real time, it would be ideal to have monthly reports or updates; in some cases weekly reports may be convenient. I would not go beyond a month, since that could bring data inaccuracy if data consumers are building prediction models or any kind of analysis of immigration behaviour.

Write a description of how you would approach the problem differently under the following scenarios:
* The data was increased by 100x. If the data is stored in the cloud, then using Spark on an Amazon EMR cluster can help cope with the load. This is a modular and scalable approach. We can also split the DAG by using the partitioning functionality in AirFlow (divide and conquer).
* The data populates a dashboard that must be updated on a daily basis by 7am every day. This is a very common case in the industry; fortunately, in Airflow we can use the internal scheduler and a cron expression in the DAG so that it runs at 7am every day (a minimal sketch follows below).
* The database needed to be accessed by 100+ people. Fortunately Amazon Redshift, as a data warehouse solution, is designed to serve different data consumers (data analysts, marketers, etc.), and not everybody should have access to everything. It is a great feature to be able to create different roles in Redshift and grant access to particular fact tables or allow only certain operations.
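A minimal sketch of that daily 7am scheduling idea (assuming Airflow 2.x; the DAG id, task and callable below are illustrative, not the project's actual `immigration_etl_dag`):

```python
# Sketch: run a refresh task every day at 7:00 am using a cron schedule.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def refresh_dashboard_data():
    # Placeholder for the real refresh/ETL logic.
    pass

with DAG(
    dag_id="daily_dashboard_refresh",      # hypothetical DAG id
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 7 * * *",         # cron expression: every day at 7:00 am
    catchup=False,
) as dag:
    PythonOperator(
        task_id="refresh_dashboard_data",
        python_callable=refresh_dashboard_data,
    )
```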
###Code
from subprocess import call
call(['python', '-m', 'nbconvert', 'Capstone_Project.ipynb'])
###Output
_____no_output_____
###Markdown
###Code
from google.colab import drive
drive.mount('/content/gdrive')
%cd gdrive/My Drive/Project/LSTM-in-Trade
! git pull
import numpy as np
import pandas as pd
import datetime as dt
import os
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
from subprocess import check_output
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM
from keras.models import Sequential
from sklearn.model_selection import train_test_split
import time
import matplotlib.pyplot as plt
from numpy import newaxis
df = pd.read_csv('DataSet/TATASTEEL.csv', usecols=['Date', 'Symbol', 'Open', 'High', 'Low', 'Close', 'Volume'])
df.count()
df.head()
data = df.sort_index(ascending=True, axis=0)
new_data = pd.DataFrame(index=range(0,len(df)),columns=['Date', 'Close'])
for i in range(0,len(data)):
new_data['Date'][i] = data['Date'][i]
new_data['Close'][i] = data['Close'][i]
new_data.index = new_data.Date
new_data.drop('Date', axis=1, inplace=True)
#creating train and test sets
dataset = new_data.values
train = dataset[0:4238,:]
valid = dataset[4238:,:]
print(train)
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(dataset)
x_train, y_train = [], []
for i in range(60,len(train)):
x_train.append(scaled_data[i-60:i,0])
y_train.append(scaled_data[i,0])
x_train, y_train = np.array(x_train), np.array(y_train)
x_train = np.reshape(x_train, (x_train.shape[0],x_train.shape[1],1))
print(x_train)
model = Sequential()
model.add(LSTM(180, return_sequences=True, input_shape = (x_train.shape[1],1)))
model.add(Dropout(0.2))
model.add(LSTM(180, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(150, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(200, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(units=1))
model.add(Activation('relu'))
start = time.time()
model.compile(loss='mean_squared_error', optimizer='adam')
print ('compilation time : ', time.time() - start)
model.fit(x_train, y_train, batch_size=1, epochs=5)
inputs = new_data[len(new_data) - len(valid) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = scaler.transform(inputs)
X_test = []
for i in range(60,inputs.shape[0]):
X_test.append(inputs[i-60:i,0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0],X_test.shape[1],1))
closing_price = model.predict(X_test)
closing_price = scaler.inverse_transform(closing_price)
rms=np.sqrt(np.mean(np.power((valid-closing_price),2)))
print(rms)
%matplotlib inline
train = new_data[:4238]
valid = new_data[4238:]
valid['Predictions'] = closing_price
plt.plot(train['Close'])
plt.plot(valid[['Close','Predictions']])
from sklearn.metrics import r2_score
# Compute the R^2 score between the actual and predicted closing prices on the validation set
coefficient_of_determination = r2_score(valid['Close'], valid['Predictions'])
print(coefficient_of_determination)
###Output
_____no_output_____
###Markdown
Capstone Project---The focus of this project is to find the best neighbourhood in west Toronto. It will utilize location data from Foursquare and machine learning to predict the most suitable location.
###Code
# import of standard libraries
import pandas as pd
import numpy as np
# libraries for displaying images
from IPython.display import Image
from IPython.core.display import HTML
# tranforming json file into a pandas dataframe library
from pandas.io.json import json_normalize
# !conda install -c conda-forge folium=0.5.0 --yes
import folium # plotting library
print('Hello Capstone Project Course!')
import os
import json, requests
from geopy.geocoders import Nominatim
from dotenv import load_dotenv
load_dotenv()
address = '102 North End Ave, New York, NY'
geolocator = Nominatim(user_agent="capstone-project")  # newer geopy versions require a user_agent
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print(latitude, longitude)
client_id = os.getenv("client_id")
client_secret = os.getenv("client_secret")
url = 'https://api.foursquare.com/v2/venues/search'
params = dict(
client_id = client_id,
client_secret = client_secret,
v='20180604',
ll='{},{}'.format(latitude,longitude),
query='Italian',
radius=500,
limit=30
)
resp = requests.get(url=url, params=params)
data = json.loads(resp.text)
print(data)
# assign relevant part of JSON to venues
venues = data['response']['venues']
# tranform venues into a dataframe
dataframe = json_normalize(venues)
dataframe.head()
# keep only columns that include venue name, and anything that is associated with location
filtered_columns = ['name', 'categories'] + [col for col in dataframe.columns if col.startswith('location.')] + ['id']
dataframe_filtered = dataframe.loc[:, filtered_columns]
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
# filter the category for each row
dataframe_filtered['categories'] = dataframe_filtered.apply(get_category_type, axis=1)
# clean column names by keeping only last term
dataframe_filtered.columns = [column.split('.')[-1] for column in dataframe_filtered.columns]
dataframe_filtered
dataframe_filtered.name
venues_map = folium.Map(location=[latitude, longitude], zoom_start=13) # generate map centred around the Conrad Hotel
# add a red circle marker to represent the Conrad Hotel
folium.CircleMarker(
[latitude, longitude],
radius=10,
color='red',
popup='Conrad Hotel',
fill = True,
fill_color = 'red',
fill_opacity = 0.6
).add_to(venues_map)
# add the Italian restaurants as blue circle markers
for lat, lng, label in zip(dataframe_filtered.lat, dataframe_filtered.lng, dataframe_filtered.categories):
folium.CircleMarker(
[lat, lng],
radius=5,
color='blue',
popup=label,
fill = True,
fill_color='blue',
fill_opacity=0.6
).add_to(venues_map)
# display map
venues_map
###Output
_____no_output_____
###Markdown
Capstone Project Notebook This notebook will be mainly used for the capstone project.
###Code
import pandas as pd
import numpy as np
print ('Hello Capstone Project Course!')
###Output
Hello Capstone Project Course!
###Markdown
Capstone Project : Little Pizza Store **Part 1 : The Idea** Problem **Description** In Buenos Aires, the capital city of Argentina, there are people from many different cultures living and working. It is a big city with a huge collection of different companies, ranging from small startups to big multinational ones.Each day thousands of workers need to get lunch, and every night thousands of families need dinner. Pizza is a very popular meal because it can be delivered easily and is easy to share among several people. It is also not as expensive as other options.My idea is to launch a new pizza store, and for this I want to identify the most promising neighborhoods as those that are similar to others where the current pizza business is going well and that do not already have many pizza stores.For this, I will get the neighborhood data of the city, including latitude and longitude, from a government website (https://data.buenosaires.gob.ar/dataset/barrios). This data does not provide the latitude and longitude for each neighborhood but the entire polygon with corner coordinates, so some preprocessing will be needed in order to get the center of each of those polygons.Then I will use the Foursquare API to retrieve the main venues in each neighborhood and use this data to run a clustering model that groups the neighborhoods by similarity, using the venue category and the frequency of venues of each category in each particular neighborhood.Using the clusters and the main venues in each one, I will be able to prioritize the neighborhoods as explained above. **Data description** The neighborhood data is downloaded from a [public government site](https://data.buenosaires.gob.ar/api/files/barrios.csv/download/csv). This dataset does not contain the coordinates of each neighborhood; instead, it has all the coordinates that define the geographic polygon.The process to get the approximate center of each neighborhood was to parse the polygon coordinates, identify the maximum and minimum latitude and longitude, and then apply```center_lat = min_lat + (max_lat - min_lat)/2center_lng = min_lng + (max_lng - min_lng)/2```Using the coordinates of the center, I am able to call the Foursquare API to retrieve the main venues in each neighborhood and then analyze them using a clustering model. **Part 2 : The Work** ***Imports***
###Code
# Required imports
import pandas as pd
import numpy as np
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
from bs4 import BeautifulSoup
import requests
import folium # map rendering library
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# import k-means from clustering stage
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Uploading and formatting the dataThe dataset is downloaded from the site of the "Ciudad Autonoma de Buenos Aires".
###Code
# Read the source file and apply basic cleaning
url = "https://data.buenosaires.gob.ar/api/files/barrios.csv/download/csv"
df_bsas = pd.read_csv(url, sep=',', low_memory=False, encoding="latin1")
df_bsas["polygon"] = df_bsas["WKT"].str.replace("POLYGON", "")
df_bsas["polygon"] = df_bsas["polygon"].apply(lambda x: str(x).replace("(", "").replace(")", ""))
df_bsas.head()
"""
Parses the polygon field to determine the center. This is a simple formula that does not consider
the earth's curvature. Given the small areas involved, this has no significant impact.
"""
def get_data(row):
min_lat = 0
min_lng = 0
max_lat = -999
max_lng = -999
coords = row["polygon"].split(",")
for coord in coords:
c = coord.split(" ")
if len(c)==2:
if float(c[0])>max_lng: max_lng = float(c[0])
if float(c[0])<min_lng: min_lng = float(c[0])
if float(c[1])>max_lat: max_lat = float(c[1])
if float(c[1])<min_lat: min_lat = float(c[1])
row["min_lat"] = min_lat
row["max_lat"] = max_lat
row["min_lng"] = min_lng
row["max_lng"] = max_lng
row["center_lat"] = min_lat + (max_lat - min_lat)/2
row["center_lng"] = min_lng + (max_lng - min_lng)/2
return row
# Get the center of each neighborhood polygon
df_bsas = df_bsas.apply(get_data, axis=1)
df_bsas.head()
###Output
_____no_output_____
###Markdown
**Drawing the map with the Neighborhoods**
###Code
# Get Lat & Lng of Buenos Aires City
address = 'Buenos Aires, AR'
geolocator = Nominatim(user_agent="capstone_geocoder")  # recent geopy versions require a user_agent string (the name used here is arbitrary)
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Buenos Aires City are {}, {}.'.format(latitude, longitude))
# create map of Buenos Aires using latitude and longitude values
map_bsas = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(df_bsas['center_lat'], df_bsas['center_lng'], df_bsas['comuna'], df_bsas['barrio']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill_color='#3186cc').add_to(map_bsas)
map_bsas
###Output
_____no_output_____
###Markdown
**Using the Foursquare API to get venues by neighborhood**
###Code
CLIENT_ID = "CHDQNPMFLQDIRR2U2QOBIWKIVGMI1CU0FDR13YX3PSW5JC0X"
CLIENT_SECRET = "JM3SUNCY0PQEABL3EQJNORNWHJ2MPJBFMGCMBG0ZWB4N5N2B"
VERSION = '20180605' # Foursquare API version
LIMIT = 100
RADIUS = 500
###Output
_____no_output_____
###Markdown
Get venue data from Foursquare using lat and lng
###Code
def get_venues(name, lat, lng):
venues_list = []
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
RADIUS,
LIMIT)
#try:
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
return venues_list
# Get all Venues from FourSquare
all_venues = []
for index, row in df_bsas.iterrows():
print("Processing {}".format(row["barrio"]))
venues = get_venues(row["barrio"], row["center_lat"], row["center_lng"])
all_venues.extend(venues)
# Create a DataFrame
df_venues = pd.DataFrame([item for venue_list in all_venues for item in venue_list])
df_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
###Output
Processing CHACARITA
Processing PATERNAL
Processing VILLA CRESPO
Processing VILLA DEL PARQUE
Processing ALMAGRO
Processing CABALLITO
Processing VILLA SANTA RITA
Processing MONTE CASTRO
Processing VILLA REAL
Processing FLORES
Processing FLORESTA
Processing CONSTITUCION
Processing SAN CRISTOBAL
Processing BOEDO
Processing VELEZ SARSFIELD
Processing VILLA LURO
Processing PARQUE PATRICIOS
Processing MATADEROS
Processing VILLA LUGANO
Processing SAN TELMO
Processing SAAVEDRA
Processing COGHLAN
Processing VILLA URQUIZA
Processing COLEGIALES
Processing BALVANERA
Processing VILLA GRAL. MITRE
Processing PARQUE CHAS
Processing AGRONOMIA
Processing VILLA ORTUZAR
Processing BARRACAS
Processing PARQUE AVELLANEDA
Processing PARQUE CHACABUCO
Processing NUEVA POMPEYA
Processing PALERMO
Processing VILLA RIACHUELO
Processing VILLA SOLDATI
Processing VILLA PUEYRREDON
Processing VILLA DEVOTO
Processing LINIERS
Processing VERSALLES
Processing PUERTO MADERO
Processing MONSERRAT
Processing SAN NICOLAS
Processing BELGRANO
Processing RECOLETA
Processing RETIRO
Processing NUÑEZ
Processing BOCA
###Markdown
###Code
df_venues.head()
df_venues.shape
###Output
_____no_output_____
###Markdown
**Formatting Venue data**
###Code
# one hot encoding
df_bsas_onehot = pd.get_dummies(df_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
df_bsas_onehot['Neighborhood'] = df_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = ["Neighborhood"] + [col for col in df_bsas_onehot.columns.tolist() if col not in ["Neighborhood"]]
df_bsas_onehot = df_bsas_onehot[fixed_columns]
# Group by Neighborhood and calculate the mean frequency of each venue category
df_bsas_grouped = df_bsas_onehot.groupby(["Neighborhood"]).mean().reset_index()
df_bsas_grouped.head()
###Output
_____no_output_____
###Markdown
**Clustering Neighborhoods**
###Code
# set number of clusters
kclusters = 9
df_bsas_grouped_clustering = df_bsas_grouped.drop('Neighborhood', axis=1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(df_bsas_grouped_clustering)
# add clustering labels
df_bsas_grouped['Cluster Labels'] = kmeans.labels_
# merge df_bsas_grouped with df_bsas to add latitude/longitude for each neighborhood
df_bsas_merged = df_bsas.merge(df_bsas_grouped, how="left", left_on='barrio', right_on="Neighborhood")
df_bsas_merged['Cluster Labels'].value_counts()
df_bsas_merged.shape
###Output
_____no_output_____
###Markdown
**Drawing a map identifying the neighborhood cluster**
###Code
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i+x+(i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(df_bsas_merged['center_lat'], df_bsas_merged['center_lng'], df_bsas_merged['barrio'], df_bsas_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster))
folium.CircleMarker(
[lat, lon],
radius=10,
popup=label,
color=rainbow[cluster-1],
fill_color=rainbow[cluster-1],
fill_opacity=0.0).add_to(map_clusters)
map_clusters
df_bsas_merged[df_bsas_merged["Cluster Labels"]==3]
df_bsas_grouped.head()
num_top_venues = 5
for hood in df_bsas_grouped['Neighborhood']:
print("----"+hood+"----")
temp = df_bsas_grouped[df_bsas_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp[temp["venue"]!="Cluster Labels"]
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
###Output
_____no_output_____
###Markdown
First Method using pandas value counts
###Code
data = data.dropna(subset=["complaint_type"])
data['complaint_type'].value_counts().idxmax()
###Output
_____no_output_____
###Markdown
Second Method using pandas mode
###Code
data = data["complaint_type"].dropna()
data.mode()
###Output
_____no_output_____ |
figures/Discussion-H_0 effect.ipynb | ###Markdown
Calculating the effect of the HR-PC$_1$ relation on the H$_0$ measurementThis will be done by looking at the difference in mean/distribution (in PC$_1$ space) between the calibration and Hubble flow samples. We will look at this through both the local and global age PCA methods.
###Code
from glob import glob
import datetime
import numpy as np
from astropy.table import Table
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.stats import spearmanr
from scipy.stats import ks_2samp
from scipy.stats import mannwhitneyu
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
sns.set(context='talk', style='ticks', font='serif', color_codes=True)
###Output
_____no_output_____
###Markdown
Gather Data Hubble Flow
###Code
HR = pd.read_csv('../data/campbell_local.tsv', sep='\t', usecols=['SNID', 'redshift', 'hr', 'err_mu'], index_col='SNID')
HR.rename(columns={'err_mu': 'hr uncert'}, inplace=True)
HR = HR[HR['redshift']<0.2]
HR = HR[HR['hr']<0.7]
HR.describe()
t = Table.read('../data/SDSS_Photometric_SNe_Ia.fits')
salt = t['CID','Z','X1','X1_ERR','COLOR','COLOR_ERR'].to_pandas()
salt.columns = salt.columns.str.lower()
salt.rename(columns={'cid': 'SNID', 'z': 'Campbell redshift'}, inplace=True)
salt.set_index('SNID', inplace=True)
salt.describe()
galaxy = pd.read_csv('../resources/kcorrect_stellarmass.csv', usecols=['GAL', 'redshift', 'stellarmass'], index_col='GAL')
galaxy.rename(columns={'redshift': 'gal redshift', 'stellarmass': 'stellar mass'}, inplace=True)
galaxy.describe()
# local Hubble flow age
age = pd.read_csv('../resources/ages_campbell.tsv', sep='\t', skiprows=[1],
usecols=['# sn id', 'age'], dtype={'age': np.float64, '# sn id': np.int})
age.rename(columns={'# sn id': 'SNID'}, inplace=True)
age.set_index('SNID', inplace=True)
age.describe()
# global Hubble flow age
age_global = pd.read_csv('../resources/ages_campbellG.tsv', sep='\t', skiprows=[1],
usecols=['# sn id', 'age'], dtype={'age': np.float64, '# sn id': np.int})
age_global.rename(columns={'# sn id': 'SNID'}, inplace=True)
age_global.set_index('SNID', inplace=True)
age_global.describe()
data = pd.concat([HR, salt, galaxy, age], axis=1)
data.dropna(inplace=True)
data.describe()
data['stellar mass'] = np.log10(data['stellar mass'])
data.describe()
data_global = pd.concat([HR, salt, galaxy, age_global], axis=1)
data_global.dropna(inplace=True)
data_global['stellar mass'] = np.log10(data_global['stellar mass'])
data_global.describe()
###Output
_____no_output_____
###Markdown
Now we have `data` and `data_global` as two data frames for our Hubble flow samples. Calibration SampleNote that I use the host's name as the number, rather than the SN number. This is a legacy convention from back when I did the Messier object tests.
###Code
salt_cal = pd.read_csv('../data/calibaration_sample_salt2.4_params.csv', usecols=['SN','Host', 'x_1', 'c'],
comment='#')
salt_cal.rename(columns={'Host': 'SNID'}, inplace=True)
salt_cal.set_index('SNID', inplace=True)
salt_cal
galaxy_cal = pd.read_csv('../resources/kcorrect_stellarmass_Riess.csv', usecols=['GAL', 'redshift', 'stellarmass'], index_col='GAL')
galaxy_cal.rename(columns={'redshift': 'gal redshift', 'stellarmass': 'stellar mass'}, inplace=True)
galaxy_cal.describe()
redshift_cal = pd.read_csv('../data/riess_local.tsv', sep='\t',
usecols=['SNID', 'redshift'], index_col='SNID')
redshift_cal
#Local ages
age_cal = pd.read_csv('../resources/ages_riess.tsv', sep='\t', skiprows=[1],
usecols=['# sn id', 'age'], dtype={'age': np.float64, '# sn id': np.int})
age_cal.rename(columns={'# sn id': 'SNID'}, inplace=True)
age_cal.set_index('SNID', inplace=True)
age_cal.describe()
age_cal
#Global ages
age_global_cal = pd.read_csv('../resources/ages_riessG.tsv', sep='\t', skiprows=[1],
usecols=['# sn id', 'age'], dtype={'age': np.float64, '# sn id': np.int})
age_global_cal.rename(columns={'# sn id': 'SNID'}, inplace=True)
age_global_cal.set_index('SNID', inplace=True)
age_global_cal.describe()
age_global_cal
calibration = pd.concat([salt_cal, galaxy_cal, age_cal, redshift_cal], axis=1)
calibration.dropna(inplace=True)
calibration['stellar mass'] = np.log10(calibration['stellar mass'])
# calibration.describe()
calibration
calibration_global = pd.concat([salt_cal, galaxy_cal, age_global_cal, redshift_cal], axis=1)
calibration_global.dropna(inplace=True)
calibration_global['stellar mass'] = np.log10(calibration_global['stellar mass'])
# calibration.describe()
calibration_global
###Output
_____no_output_____
###Markdown
Now we have `calibration` and `calibration_global` as our two calibration data sets.Since age is so important, let's check out the basic stats real quick.
###Code
print("LOCAL AGE:")
print(calibration['age'].describe(), '\n')
print(data['age'].describe(), '\n')
print("\n GLOBAL AGE:")
print(calibration_global['age'].describe(), '\n')
print(data_global['age'].describe(), '\n')
###Output
LOCAL AGE:
count 14.000000
mean 5.425320
std 1.510021
min 3.101574
25% 4.508886
50% 5.300659
75% 6.460297
max 8.751980
Name: age, dtype: float64
count 103.000000
mean 5.221552
std 2.121162
min 1.628953
25% 3.447774
50% 5.071533
75% 6.625620
max 9.740748
Name: age, dtype: float64
GLOBAL AGE:
count 14.000000
mean 4.938967
std 1.340875
min 2.832870
25% 4.209599
50% 4.864468
75% 5.823210
max 7.949020
Name: age, dtype: float64
count 103.000000
mean 5.489302
std 2.256922
min 1.206989
25% 4.082504
50% 5.312133
75% 7.369338
max 10.738602
Name: age, dtype: float64
###Markdown
Make the 4-panel corner plot.Show that a difference in PC$_1$ is not just a mass step (because Riess already accounted for that).
###Code
### First 5 parameters with both global and local ages
# font size is not working
sns.set(context='talk', style='ticks', font='serif', color_codes=True, font_scale=0.75)
# f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
# f.tight_layout() # improve spacing
#https://stackoverflow.com/questions/26767281/position-5-subplots-in-matplotlib
ax1 = plt.subplot2grid(shape=(2,18), loc=(0,3), colspan=6)
ax2 = plt.subplot2grid((2,18), (0,9), colspan=6)
ax3 = plt.subplot2grid((2,18), (1,0), colspan=6)
ax4 = plt.subplot2grid((2,18), (1,6), colspan=6)
ax5 = plt.subplot2grid((2,18), (1,12), colspan=6)
f = plt.gcf()
f.tight_layout() # improve spacing
#x_1
sns.distplot(calibration['x_1'], label='calibration', ax=ax1,
hist_kws={'linewidth': 0}) #linewidth to match older design
sns.distplot(data['x1'], label='Hubble\nflow', ax=ax1, color='g',
hist_kws={'linewidth': 0}) #linewidth to match older design
#c
sns.distplot(calibration['c'], label='calibration', ax=ax2,
hist_kws={'linewidth': 0}) #linewidth to match older design
sns.distplot(data['color'], label='Hubble flow', ax=ax2, color='g',
hist_kws={'linewidth': 0}) #linewidth to match older design
#mass
sns.distplot(calibration['stellar mass'], label='calibration', ax=ax3,
hist_kws={'linewidth': 0}) #linewidth to match older design
sns.distplot(data['stellar mass'], label='Hubble flow', ax=ax3, color='g',
hist_kws={'linewidth': 0}) #linewidth to match older design
#global age
sns.distplot(calibration_global['age'], label='calibration', ax=ax4,
hist_kws={'linewidth': 0}) #linewidth to match older design
sns.distplot(data_global['age'], label='Hubble flow', ax=ax4, color='g',
hist_kws={'linewidth': 0}) #linewidth to match older design
ax4.set_xlim([0,13.8])
#local age
sns.distplot(calibration['age'], label='calibration', ax=ax5,
hist_kws={'linewidth': 0}) #linewidth to match older design
sns.distplot(data['age'], label='Hubble flow', ax=ax5, color='g',
hist_kws={'linewidth': 0}) #linewidth to match older design
ax5.set_xlim([0,13.8])
# Change tick locations
# ax1.tick_params(axis='both', top='on', right='on', direction='in')
# ax2.tick_params(axis='both', top='on', right='on', direction='in')
# ax3.tick_params(axis='both', top='on', right='on', direction='in')
# ax4.tick_params(axis='both', top='on', right='on', direction='in')
# ax5.tick_params(axis='both', top='on', right='on', direction='in')
ax1.tick_params(axis='both', direction='in')
ax2.tick_params(axis='both', direction='in')
ax3.tick_params(axis='both', direction='in')
ax4.tick_params(axis='both', direction='in')
ax5.tick_params(axis='both', direction='in')
# remove y values
ax1.get_yaxis().set_ticks([])
ax2.get_yaxis().set_ticks([])
ax3.get_yaxis().set_ticks([])
ax4.get_yaxis().set_ticks([])
ax5.get_yaxis().set_ticks([])
#despine
sns.despine(left=True)
# add better labels
ax1.set_xlabel(r'$x_1$')
ax2.set_xlabel(r'$c$')
ax3.set_xlabel(r'mass [log(M/M$_{\odot}$)]')
ax4.set_xlabel(r'global age [Gyr]')
ax5.set_xlabel(r'local age [Gyr]')
# plt.legend()
# plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
# ncol=2, mode="expand", borderaxespad=0.)
# ax1.legend(bbox_to_anchor=(0.78, 1), loc=2, borderaxespad=0.) # in middle
# ax2.legend(bbox_to_anchor=(0.65, 1), loc=2, borderaxespad=0.) # better top right
# ax2.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) # ok on top right
# ax1.legend(bbox_to_anchor=(0.1,1.02,1.,0.102), loc=3,
# ncol=2, mode='expand', borderaxespad=0) #on top?
#Set up legend
legend_ax = f.add_axes([0.1, 0.95, 0.8, 0.1]) # set up a figure to put the legend on
# remove all the junk
sns.despine(left=True, bottom=True, ax=legend_ax)
legend_ax.get_xaxis().set_ticks([])
legend_ax.get_yaxis().set_ticks([])
# get the legend details
handles, labels = ax2.get_legend_handles_labels()
#plot legend
legend_ax.legend(handles, labels, ncol=2, loc=[0.18,0], frameon=False)
plt.savefig('H0_components_5components.pdf', bbox_inches='tight')
plt.show()
### WITH LOCAL AGES
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
f.tight_layout() # improve spacing
#x_1
sns.distplot(calibration['x_1'], label='calibration', ax=ax1)
sns.distplot(data['x1'], label='Hubble\nflow', ax=ax1)
#c
sns.distplot(calibration['c'], label='calibration', ax=ax2)
sns.distplot(data['color'], label='Hubble flow', ax=ax2)
#mass
sns.distplot(calibration['stellar mass'], label='calibration', ax=ax3)
sns.distplot(data['stellar mass'], label='Hubble flow', ax=ax3)
#age
sns.distplot(calibration['age'], label='calibration', ax=ax4)
sns.distplot(data['age'], label='Hubble flow', ax=ax4)
ax4.set_xlim([0,13.8])
# Change tick locations
ax1.tick_params(axis='both', top='on', right='on', direction='in')
ax2.tick_params(axis='both', top='on', right='on', direction='in')
ax3.tick_params(axis='both', top='on', right='on', direction='in')
ax4.tick_params(axis='both', top='on', right='on', direction='in')
# remove y values
ax1.get_yaxis().set_ticks([])
ax2.get_yaxis().set_ticks([])
ax3.get_yaxis().set_ticks([])
ax4.get_yaxis().set_ticks([])
#despine
sns.despine(left=True)
# add better labels
ax1.set_xlabel(r'$x_1$')
ax2.set_xlabel(r'$c$')
ax3.set_xlabel(r'mass [log(M/M$_{\odot}$)]')
ax4.set_xlabel(r'age [Gyr]')
# plt.legend()
# plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
# ncol=2, mode="expand", borderaxespad=0.)
# ax1.legend(bbox_to_anchor=(0.78, 1), loc=2, borderaxespad=0.) # in middle
# ax2.legend(bbox_to_anchor=(0.65, 1), loc=2, borderaxespad=0.) # better top right
# ax2.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) # ok on top right
# ax1.legend(bbox_to_anchor=(0.1,1.02,1.,0.102), loc=3,
# ncol=2, mode='expand', borderaxespad=0) #on top?
#Set up legend
legend_ax = f.add_axes([0.1, 0.95, 0.8, 0.1]) # set up a figure to put the legend on
# remove all the junk
sns.despine(left=True, bottom=True, ax=legend_ax)
legend_ax.get_xaxis().set_ticks([])
legend_ax.get_yaxis().set_ticks([])
# get the legend details
handles, labels = ax2.get_legend_handles_labels()
#plot legend
legend_ax.legend(handles, labels, ncol=2, loc="upper center")
# plt.savefig('H0_components.pdf', bbox_inches='tight')
plt.show()
### GLOBAL AGES
# TODO: data should be data_global
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
f.tight_layout() # improve spacing
#x_1
sns.distplot(calibration_global['x_1'], label='calibration', ax=ax1)
sns.distplot(data_global['x1'], label='Hubble\nflow', ax=ax1)
#c
sns.distplot(calibration_global['c'], label='calibration', ax=ax2)
sns.distplot(data_global['color'], label='Hubble flow', ax=ax2)
#mass
sns.distplot(calibration_global['stellar mass'], label='calibration', ax=ax3)
sns.distplot(data_global['stellar mass'], label='Hubble flow', ax=ax3)
#age
sns.distplot(calibration_global['age'], label='calibration', ax=ax4)
sns.distplot(data_global['age'], label='Hubble flow', ax=ax4)
ax4.set_xlim([0,13.8])
# Change tick locations
ax1.tick_params(axis='both', top='on', right='on', direction='in')
ax2.tick_params(axis='both', top='on', right='on', direction='in')
ax3.tick_params(axis='both', top='on', right='on', direction='in')
ax4.tick_params(axis='both', top='on', right='on', direction='in')
# remove y values
ax1.get_yaxis().set_ticks([])
ax2.get_yaxis().set_ticks([])
ax3.get_yaxis().set_ticks([])
ax4.get_yaxis().set_ticks([])
#despine
sns.despine(left=True)
# add better labels
ax1.set_xlabel(r'$x_1$')
ax2.set_xlabel(r'$c$')
ax3.set_xlabel(r'mass [log(M/M$_{\odot}$)]')
ax4.set_xlabel(r'age [Gyr]')
# plt.legend()
# plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
# ncol=2, mode="expand", borderaxespad=0.)
# ax1.legend(bbox_to_anchor=(0.78, 1), loc=2, borderaxespad=0.) # in middle
# ax2.legend(bbox_to_anchor=(0.65, 1), loc=2, borderaxespad=0.) # better top right
# ax2.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) # ok on top right
# ax1.legend(bbox_to_anchor=(0.1,1.02,1.,0.102), loc=3,
# ncol=2, mode='expand', borderaxespad=0) #on top?
#Set up legend
legend_ax = f.add_axes([0.1, 0.95, 0.8, 0.1]) # set up a figure to put the legend on
# remove all the junk
sns.despine(left=True, bottom=True, ax=legend_ax)
legend_ax.get_xaxis().set_ticks([])
legend_ax.get_yaxis().set_ticks([])
# get the legend details
handles, labels = ax2.get_legend_handles_labels()
#plot legend
legend_ax.legend(handles, labels, ncol=2, loc="upper center")
# plt.savefig('H0_components_global.pdf', bbox_inches='tight')
plt.show()
###Output
/usr/local/lib/python3.7/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
/usr/local/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead.
warnings.warn(message, mplDeprecation, stacklevel=1)
###Markdown
Make PC$_1$ distribution plotSee if, when combined, there is a systematic shift. Standardize the calibration data sets like the Hubble flow data setsFirst with local age.
###Code
features = ['x1', 'color', 'stellar mass', 'age']
# z and data are the Hubble flow sample
z = data[data['hr']<0.7].loc[:, features].values
print(z.shape)
print(np.mean(z, axis=0))
scaler = StandardScaler()
scaler.fit(z) # get the needed transformation off of z
z = scaler.transform(z) #this returns the scaled form of z
print(np.mean(z, axis=0))
###Output
[ 1.07788643e-17 -2.26356151e-17 -8.66620691e-16 -1.27190599e-16]
###Markdown
Now with global age.
###Code
# Get data to fit and transform
z_global = data_global[data_global['hr']<0.7].loc[:, features].values
print(z.shape)
print(np.mean(z_global, axis=0))
# Fit data in scaler
scaler_global = StandardScaler()
scaler_global.fit(z_global) # get the needed transformation off of z
# transform data
z_global = scaler_global.transform(z_global) #this returns the scaled form of z
print(np.mean(z_global, axis=0))
# [ 1.07788643e-17 -2.26356151e-17 -8.66620691e-16 1.26845512e-01]
np.mean((z_global - np.mean(z_global, axis=0))/np.std(z_global)**2, axis=0)
print(scaler.mean_)
print(scaler.var_)
print()
print(scaler_global.mean_)
print(scaler_global.var_)
# scale the LOCAL AGE calibration sample
features = ['x_1', 'c', 'stellar mass', 'age'] # Reset to calibration sample header info
cal_scaled = scaler.transform(calibration.loc[:, features].values) #get the same scale on new dataset
cal_scaled
# scale the GLOBAL AGE calibration sample
features = ['x_1', 'c', 'stellar mass', 'age'] # Reset to calibration sample header info
cal_scaled_global = scaler_global.transform(calibration_global.loc[:, features].values) #get the same scale on new dataset
cal_scaled_global
def to_pc1_local(data):
"""need input to be a Nx4 numpy array
"""
x, c, m, a = data[:,0], data[:,1], data[:,2], data[:,3]
return 0.557*x-0.103*c-0.535*m-0.627*a
def to_pc1_global(data):
"""need input to be a Nx4 numpy array
"""
x, c, m, a = data[:,0], data[:,1], data[:,2], data[:,3]
return 0.465*x-0.134*c-0.596*m-0.641*a
pc1_hubble = to_pc1_local(z)
pc1_hubble_global = to_pc1_global(z_global)
pc1_cal = to_pc1_local(cal_scaled)
pc1_cal_global = to_pc1_global(cal_scaled_global)
# data needed for table in paper
print(calibration.index)
print(pc1_cal)
print(pc1_cal_global)
###Output
Int64Index([ 101, 1015, 1309, 3021, 3370, 3447, 3972, 3982, 4424, 4536, 4639,
5584, 7250, 9391],
dtype='int64', name='SNID')
[ 0.87925233 0.01043889 -0.06030527 0.05544665 0.45896624 0.62692111
0.60079135 0.10088561 0.88093434 -0.49219851 0.05621446 -0.25552501
1.32840977 1.50436636]
[ 0.45861848 0.19698835 0.43824092 0.15923086 0.62368168 0.786891
0.78177485 0.01212553 1.06997496 0.61456382 0.55150297 -0.12277555
1.51015296 1.72969495]
###Markdown
Basic & Statistical Differences
###Code
print('mean: ', np.mean(pc1_hubble), np.mean(pc1_cal), np.mean(pc1_hubble_global), np.mean(pc1_cal_global))
print('median: ', np.median(pc1_hubble), np.median(pc1_cal), np.median(pc1_hubble_global), np.median(pc1_cal_global))
print('std: ', np.std(pc1_hubble), np.std(pc1_cal), np.std(pc1_hubble_global), np.std(pc1_cal_global))
## Differences in mean
print('Local Age:')
delta_mean = abs(np.mean(pc1_hubble) - np.mean(pc1_cal))
uncert = abs(np.std(pc1_cal))/np.sqrt(pc1_cal.size) #only account for uncertainty in Calibration. Hubble flow is scaled.
print(f'{delta_mean:.5f} +- {uncert:.5f}')
print(f'{delta_mean/uncert:.4f} sigma')
print()
print('Global Age:')
delta_mean = abs(np.mean(pc1_hubble_global) - np.mean(pc1_cal_global))
uncert = abs(np.std(pc1_cal_global))/np.sqrt(pc1_cal_global.size) #only account for uncertainty in Calibration. Hubble flow is scaled.
print(f'{delta_mean:.5f} +- {uncert:.5f}')
print(f'{delta_mean/uncert:.4f} sigma')
### LOCAL AGES
print(ks_2samp(pc1_cal,pc1_hubble))
print(mannwhitneyu(pc1_cal,pc1_hubble, alternative='greater'))
### GLOBAL AGES
print(ks_2samp(pc1_cal_global,pc1_hubble_global))
print(mannwhitneyu(pc1_cal_global,pc1_hubble_global, alternative='greater'))
###Output
Ks_2sampResult(statistic=0.4563106796116505, pvalue=0.00751088839612654)
MannwhitneyuResult(statistic=996.0, pvalue=0.010577579422801426)
###Markdown
Mean PC$_1$ is zero (by definition) for the two Hubble flows. The mean using local is -0.037 and using global is -0.340. The medians are not that much different, so it is not the result of a single outlier. The local-PCA results have very similar distributions (KS-test p-value 0.408), but there is a slight difference between the two data sets in the global analysis (KS-test p-value 0.067, or 1.5 $\sigma$; MWU p-value 0.0577 two-tailed, or 1.9 $\sigma$). So basically, there is no clear effect, but possibly a hint of one. Make PC$_1$ Comparison Figures
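For reference, the quoted significance levels can be reproduced from the p-values with the normal inverse survival function. The snippet below is a minimal sketch added for illustration only (it is not part of the original analysis, and the one- versus two-tailed choices are assumptions made to match the numbers quoted above):
```python
from scipy.stats import norm

def p_to_sigma(p, two_tailed=False):
    """Convert a p-value into an equivalent Gaussian 'sigma' level."""
    return norm.isf(p / 2) if two_tailed else norm.isf(p)

print(p_to_sigma(0.067))                    # ~1.5 sigma, one-tailed
print(p_to_sigma(0.0577, two_tailed=True))  # ~1.9 sigma, two-tailed
```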
###Code
### LOCAL AGE PC_1
fig, ax = plt.subplots(1,1)
sns.distplot(pc1_cal, label='calibration')
sns.distplot(pc1_hubble, label='Hubble flow')
plt.xlabel(r'PC$_1$')
ax.tick_params(axis='both', top='on', right='on', direction='in')
ax.get_yaxis().set_ticks([])
sns.despine(left=True)
plt.legend()
# plt.savefig('H0_pc1.pdf', bbox_inches='tight')
plt.show()
### GLOBAL AGE PC_1
# TODO: This is the PCA-local distribution
fig, ax = plt.subplots(1,1)
sns.distplot(pc1_cal_global, label='calibration')
sns.distplot(pc1_hubble_global, label='Hubble flow')
plt.xlabel(r'PC$_1$')
ax.tick_params(axis='both', top='on', right='on', direction='in')
ax.get_yaxis().set_ticks([])
sns.despine(left=True)
plt.legend()
# plt.savefig('H0_pc1_global.pdf', bbox_inches='tight')
plt.show()
len(data['redshift'].tolist()), len(pc1_hubble)
## Local
# combine data into a list
redshift = redshift_cal['redshift'].tolist()+data['redshift'].tolist() # was two DataFrames
pc1 = np.append(pc1_cal, pc1_hubble).tolist() # was two numpy arrays
df = pd.DataFrame({'redshift': redshift, 'pc1': pc1})
# documentation says [0,100] but it looks like it should be [0,1]
# cmap = sns.diverging_palette(240, 20, s=99, l=45, sep=20, n=15, center='light', as_cmap=True)
# was 220, 20
# Set up JointGrid
g = sns.JointGrid(x='redshift', y='pc1', data=df, space=0, ratio=2)
# Add data to joint plot
normalize = matplotlib.colors.Normalize(vmin=-3.0, vmax=3.0)
s = g.ax_joint.scatter(redshift_cal['redshift'], pc1_cal, marker='*',
c=calibration['x_1'], norm=normalize, vmin=-3.0, vmax=3.0,
cmap='RdBu', edgecolor='k',
label='calibration')
s = g.ax_joint.scatter(data['redshift'], pc1_hubble, marker='<',
c=data['x1'], norm=normalize, vmin=-3.0, vmax=3.0,
cmap="RdBu", edgecolor='k',
label='Hubble flow')
# cbaxes = plt.gcf().add_axes([0.1, 0.1, 0.03, 0.8])
# cb = plt.colorbar(s, label=r"$x_1$", cax=cbaxes)
# cb.ax.yaxis.set_ticks_position('left')
cbaxes = plt.gcf().add_axes([0.185, 0.7, 0.5, 0.02])
cb = plt.colorbar(s, label=r"$x_1$", cax=cbaxes, orientation='horizontal')
cbaxes.set_axisbelow(False) # need this to get the in tick marks to show up
cbaxes.xaxis.set_ticks_position('top')
cbaxes.xaxis.set_label_position('top')
cbaxes.tick_params(axis='both', direction='in', pad=4) # it appears to just remove the ticks, but I am ok with that.
cbaxes.xaxis.labelpad = 10 # moves the x_1 label
# Fix joint plot settings
g.ax_joint.set_xscale('log')
g.ax_joint.set_ylim(-3.5,4.5)
g.ax_joint.set_xlim(0.0005, 0.3)
g.ax_joint.tick_params(axis='both', which='both', top='on', right='on', direction='in')
g.ax_joint.grid(which='major', axis='both', color='0.90', linestyle='-')
g.ax_joint.yaxis.set_ticks([-3,-2,-1,0,1,2,3,4])
sns.despine(ax=g.ax_joint, top=False) # Use despine to add a top
plt.figtext(0.24, 0.6, "Calibration", fontsize=12)
plt.figtext(0.49, 0.6, "Hubble flow", fontsize=12)
# Add data to y-axis
sns.distplot(pc1_cal, label='calibration', ax=g.ax_marg_y, vertical=True, color='b',# color='tab:orange')
hist_kws={'linewidth': 0}) #linewidth to match older design
sns.distplot(pc1_hubble, label='Hubble flow', ax=g.ax_marg_y, vertical=True, color='g',
hist_kws={'linewidth': 0}) #linewidth to match older design
g.ax_marg_y.set_ylim(-3.5,4.5)
# Add legend to y-axis margin area with joint shapes and margin colors
# plt.legend()
# Fix data to top x-axis
# General Improvements
g = g.set_axis_labels('log(redshift)', r'PC$_{1, {\rm local}}$')
plt.savefig('H0_pc1_redshift.pdf', bbox_inches='tight')
plt.show()
len(data_global['redshift'].tolist()), len(pc1_hubble_global)
## Global
# combine data into a list
redshift = redshift_cal['redshift'].tolist()+data_global['redshift'].tolist() # was two DataFrames
pc1 = np.append(pc1_cal_global, pc1_hubble_global).tolist() # was two numpy arrays
df = pd.DataFrame({'redshift': redshift, 'pc1': pc1})
# documentation says [0,100] but it looks like it should be [0,1]
# cmap = sns.diverging_palette(240, 20, s=99, l=45, sep=20, n=15, center='light', as_cmap=True)
# was 220, 20
# Set up JointGrid
g = sns.JointGrid(x='redshift', y='pc1', data=df, space=0, ratio=2)
# Add data to joint plot
normalize = matplotlib.colors.Normalize(vmin=-3.0, vmax=3.0)
s = g.ax_joint.scatter(redshift_cal['redshift'], pc1_cal_global, marker='*',
c=calibration['x_1'], norm=normalize, vmin=-3.0, vmax=3.0,
cmap='RdBu', edgecolor='k',
label='calibration')
s = g.ax_joint.scatter(data_global['redshift'], pc1_hubble_global, marker='<',
c=data['x1'], norm=normalize, vmin=-3.0, vmax=3.0,
cmap="RdBu", edgecolor='k',
label='Hubble flow')
# cbaxes = plt.gcf().add_axes([0.1, 0.1, 0.03, 0.8])
# cb = plt.colorbar(s, label=r"$x_1$", cax=cbaxes)
# cb.ax.yaxis.set_ticks_position('left')
cbaxes = plt.gcf().add_axes([0.185, 0.7, 0.5, 0.02])
cb = plt.colorbar(s, label=r"$x_1$", cax=cbaxes, orientation='horizontal')
cbaxes.set_axisbelow(False) # need this to get the in tick marks to show up
cbaxes.xaxis.set_ticks_position('top')
cbaxes.xaxis.set_label_position('top')
cbaxes.tick_params(axis='both', direction='in', pad=4) # it appears to just remove the ticks, but I am ok with that.
cbaxes.xaxis.labelpad = 10 # moves the x_1 label
# Fix joint plot settings
g.ax_joint.set_xscale('log')
g.ax_joint.set_ylim(-3.5,4.5)
g.ax_joint.set_xlim(0.0005, 0.3)
g.ax_joint.tick_params(axis='both', which='both', top='on', right='on', direction='in')
g.ax_joint.grid(which='major', axis='both', color='0.90', linestyle='-')
g.ax_joint.yaxis.set_ticks([-3,-2,-1,0,1,2,3,4])
sns.despine(ax=g.ax_joint, top=False) # Use despine to add a top
plt.figtext(0.24, 0.59, "Calibration", fontsize=12)
plt.figtext(0.49, 0.59, "Hubble flow", fontsize=12)
# Add data to y-axis
sns.distplot(pc1_cal_global, label='calibration', ax=g.ax_marg_y, vertical=True, color='b',# color='tab:orange')
hist_kws={'linewidth': 0}) #linewidth to match older design
sns.distplot(pc1_hubble_global, label='Hubble flow', ax=g.ax_marg_y, vertical=True, color='g',
hist_kws={'linewidth': 0}) #linewidth to match older design
g.ax_marg_y.set_ylim(-3.5,4.5)
# Add legend to y-axis margin area with joint shapes and margin colors
# plt.legend()
# Fix data to top x-axis
# General Improvements
g = g.set_axis_labels('log(redshift)', r'PC$_{1, {\rm global}}$')
plt.savefig('H0_pc1_redshift_global.pdf', bbox_inches='tight')
plt.show()
###Output
/usr/local/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead.
warnings.warn(message, mplDeprecation, stacklevel=1)
/usr/local/lib/python3.7/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
There appears to be no trend in PC$_1$ with redshift, other than the known bias in the calibration sample (mostly negative values). The two data sets don't appear to have much of a difference in PC$_1$ distribution, but this looks very sample dependent.Why does the point at PC$_1 \approx 2$ in the calibration sample have an $x_1$ value of $\sim 2$? That means it is very, very old and very, very odd. Large PC$_1$ values are expected to have negative stretch values. Extra stuff
###Code
# fig = plt.figure('PC1-v-redshift2')
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
f.subplots_adjust(hspace=0)
# ax = fig.gca()
ax1.set_xscale('log')
normalize = matplotlib.colors.Normalize(vmin=-3.0, vmax=3.0)
cmap = sns.diverging_palette(220, 20, sep=20, n=15, center='light', as_cmap=True)
s = ax1.scatter(redshift_cal['redshift'], pc1_cal_global, marker='*', edgecolor='k',
c=calibration['x_1'], norm=normalize, vmin=-3.0, vmax=3.0, cmap=cmap,
label='calibration')
ax1.scatter(data['redshift'], pc1_hubble, marker='<', edgecolor='k',
c=data['x1'], norm=normalize, vmin=-3.0, vmax=3.0, cmap=cmap,
label='Hubble flow')
# f.colorbar(s, ax=ax1, label=r"$x_1$", orientation='horizontal', pad=0.25, aspect=40)
# aspect makes it twice as thing
#set axes ticks and gridlines
ax1.tick_params(axis='both', which='both', top='on', right='on', direction='in')
# ax.tick_params(axis='x', )
ax1.grid(which='major', axis='both', color='0.90', linestyle='-')
ax1.set_axisbelow(True)
ax1.set_ylim(-3.6,3.6)
ax1.set_xlabel('log(redshift)')
ax1.set_ylabel(r'PC$_1$')
plt.legend()
#Add colorbar
##["{:>4.1f}".format(y) for y in yticks] as possible color bar formating.
# cax = fig.add_axes([0.95, 0.217, 0.02, 0.691]) # fig.set_tight_layout({'pad': 1.5})
# cax = fig.add_axes([0.15, 0.96, 0.691, 0.04])
# cax.tick_params(axis='x', direction='in', top='off', bottom='on', right='on', pad=-24)
# cax.set_axisbelow(False) # bring tick marks above coloring
# cax.xaxis.set_label_position('bottom')
# cax.xaxis.set_label_coords(0.4955, 1.1)
# plt.colorbar(label=r"$x_1$", cax=cax, orientation='horizontal')
cax = f.add_axes([0.15, 0.76, 0.691, 0.04])
cax.xaxis.set_ticks_position('top')
cax.tick_params(axis='x', top='on', bottom='off', direction='in')
cax.set_axisbelow(False)
# cax = f.add_axes([0.15, 0.16, 0.691, 0.04])
# plt.colorbar(s, ax=cax, label=r"$x_1$", orientation='horizontal', pad=0.0, aspect=40) # pad put tick marks on the top of joint figure
plt.colorbar(s, ax=cax, label=r"$x_1$", orientation='horizontal')
# plt.savefig('PC1-v-redshift.pdf', bbox_inches='tight')
plt.show()
###Output
/usr/local/lib/python3.6/site-packages/matplotlib/axes/_axes.py:545: UserWarning: No labelled objects found. Use label='...' kwarg on individual plots.
warnings.warn("No labelled objects found. "
###Markdown
TODO: histogram is shifted by half a bin? - Yes, they have a range of -3 to 3, not -3 to 4. Enlarge 'calibration' and 'Hubble flow'.
###Code
fig = plt.figure('test')
xyc = range(20)
plt.subplot(121)
plt.scatter(xyc[:13], xyc[:13], c=xyc[:13], s=35, vmin=0, vmax=20)
# cax = fig.add_axes([0.15, 0.96, 0.691, 0.04])
# plt.colorbar()
plt.colorbar(label=r"$x_1$", orientation='horizontal', ticklocation='top')
plt.xlim(0, 20)
plt.ylim(0, 20)
plt.show()
current_palette = sns.color_palette()
sns.palplot(current_palette)
sns.set_palette(current_palette)
sns.set_palette(sns.color_palette("RdBu_r", 7))
###Output
_____no_output_____ |
Homework/Econ126_Winter2020_Homework_02.ipynb | ###Markdown
Homework 2**Instructions:** Complete the notebook below. Download the completed notebook in HTML format. Upload assignment using Canvas.**Due:** Jan. 23 at **11am**. Exercise: NumPy ArraysFollow the instructions in the following cells.
###Code
# Import numpy
import numpy as np
# Use the 'np.arange()' function to create a variable called 'numbers1' that stores the integers
# 1 through (and including) 10
numbers1 = np.arange(1,11)
# Print the value of 'numbers1'
print(numbers1)
# Use the 'np.arange()' function to create a variable called 'numbers2' that stores the numbers
# 0 through (and including) 1 with a step increment of 0.01
step =0.01
numbers2 = np.arange(0,1+step,step)
# Print the value of 'numbers2'
print(numbers2)
# Print the 5th value of 'numbers2'. (Remember that the index starts counting at 0)
print(numbers2[4])
# Print the last value of 'numbers2'.
print(numbers2[-1])
# Print the first 12 values of 'numbers2'.
print(numbers2[:12])
# Print the last 12 values of 'numbers2'.
print(numbers2[-12:])
# Use the 'np.zeros()' function to create a variable called 'zeros' that is an array of 20 zeros
zeros = np.zeros(20)
# Print the value of 'zeros'
print(zeros)
# Change the second value of 'zeros' to 1 and print
zeros[1] = 1
# Print the value of 'zeros'
print(zeros)
###Output
[0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
Exercise: Random NumbersFollow the instructions in the following cells.
###Code
# Set the seed of NumPy's random number generator to 126
np.random.seed(126)
# Create a variable called 'epsilon' that is an array containing 25 draws from
# a normal distribution with mean 4 and standard deviation 2
epsilon = np.random.normal(loc=4,scale=2,size=25)
# Print the value of epsilon
print(epsilon)
# Print the mean of 'epsilon'
print(np.mean(epsilon))
# Print the standard deviation of 'epsilon'
print(np.std(epsilon))
###Output
2.182193135563433
###Markdown
Exercise: The Cobb-Douglas Production FunctionThe Cobb-Douglas production function can be written in per-worker terms as: \begin{align} y & = A k^{\alpha}, \end{align}where $y$ denotes output per worker, $k$ denotes capital per worker, and $A$ denotes total factor productivity or technology. Part (a)On a single axis: plot the Cobb-Douglas production function for $A$ = 0.8, 1, 1.2, and 1.4 with $\alpha$ = 0.35 and $k$ ranging from 0 to 10. Each line should have a different color. Your plot must have a title and axis labels. The plot should also contain a legend that clearly indicates which line is associated with which value of $A$ and does not cover the plotted lines.
###Code
# Import the pyplot module from Matplotlib as plt
import matplotlib.pyplot as plt
# Select the Matlplotlib style sheet to use (Optional)
plt.style.use('classic')
# Use the '%matplotlib inline' magic command to ensure that Matplotlib plots are displayed in the Notebook
%matplotlib inline
# Set capital share (alpha)
alpha = 0.35
# Create an array of capital values
k = np.arange(0,10,0.001)
# Plot production function for each of the given values for A
def cobbDouglas(A,k,alpha):
'''Returns the value of the production function A*k**alpha
Args:
A (float): TFP or productivity value
k (NumPy ndarray): Array of capital values
alpha (float): Capital share in production
Returns:
NumPy series
'''
return A*k**alpha
for A in [0.8,1,1.2,1.4]:
plt.plot(k,cobbDouglas(A,k,alpha),lw=3,alpha = 0.65,label='$A='+str(A)+'$')
# Add x- and y-axis labels
plt.xlabel('capital')
plt.ylabel('output')
# Add a title to the plot
plt.title('Cobb-Douglas production function')
# Create a legend
plt.legend(loc='lower right')
# Add a grid
plt.grid(True)
###Output
_____no_output_____
###Markdown
**Question**1. *Briefly* explain in words how increasing $A$ affects the shape of the production function. **Answer**1. Increasing $A$ increases the height of the production function at every value of capital except 0, so increasing $A$ also increases the steepness of the production function. Part (b)On a single axis: plot the Cobb-Douglas production function for $\alpha$ = 0.1, 0.2, 0.3, 0.4, and 0.5 with $A$ = 1 and $k$ ranging from 0 to 10. Each line should have a different color. Your plot must have a title and axis labels. The plot should also contain a legend that clearly indicates which line is associated with which value of $\alpha$ and does not cover the plotted lines.
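A one-line check of the answer above (added for clarity; it is not part of the original solution): the slope of the production function is \begin{align} \frac{\partial y}{\partial k} & = \alpha A k^{\alpha - 1}, \end{align} which is increasing in $A$ for every $k > 0$, so a larger $A$ raises both the level and the slope of the curve.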
###Code
# Set TFP (A)
A = 1
# Plot production function for each of the given values for alpha
for alpha in [0.1,0.2,0.3,0.4,0.5]:
plt.plot(k,cobbDouglas(A,k,alpha),lw=3,alpha = 0.65,label='$\\alpha='+str(alpha)+'$')
# Add x- and y-axis labels
plt.xlabel('capital')
plt.ylabel('output')
# Add a title to the plot
plt.title('Cobb-Douglas production function')
# Create a legend
plt.legend(ncol=3,loc='lower right')
# Add a grid
plt.grid(True)
###Output
_____no_output_____
###Markdown
**Question**1. *Briefly* explain in words how increasing $\alpha$ affects the shape of the production function. **Answer**1. Increasing $\alpha$ reduces the curvature of the production function for capital between 0 and 1 and increases the steepness of the production function for capital greater than 1. Exercise: The CardioidThe cardioid is a shape described by the parametric equations: \begin{align} x & = a(2\cos \theta - \cos 2\theta), \\ y & = a(2\sin \theta - \sin 2\theta). \end{align} Construct a well-labeled graph of the cardioid for $a=4$ and $\theta$ in $[0,2\pi]$. Your plot must have a title and axis labels.
###Code
# Construct data for x and y
a = 4
theta = np.arange(0,2*np.pi,0.001)
x = a*(2*np.cos(theta) - np.cos(2*theta))
y = a*(2*np.sin(theta) - np.sin(2*theta))
# Plot y against x
plt.plot(x,y,lw=3,alpha=0.65)
# Create x-axis label
plt.xlabel('x')
# Create y-axis label
plt.ylabel('y')
# Create title for plot
plt.title('Cardioid')
# Add a grid to the plot
plt.grid()
###Output
_____no_output_____
###Markdown
Exercise: Unconstrained optimizationConsider the quadratic function: \begin{align}f(x) & = -7x^2 + 930x + 30\end{align} You will use analytic (i.e., pencil and paper) and numerical methods to find the value of $x$ that maximizes $f(x)$. Another name for the $x$ that maximizes $f(x)$ is the *argument of the maximum* of $f(x)$. Part (a): Analytic solutionUse standard calculus methods to solve for the value of $x$ that maximizes $f(x)$ to **five decimal places**. Use your answer to complete the sentence in the next cell. The value of $x$ that maximizes $f(x)$ is: $x^* = 930/14 \approx 66.42857$. Part (b): Numerical solutionIn the cells below, you will use NumPy to try to compute the argument of the maximum of $f(x)$.
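For reference, the calculus behind the Part (a) answer above (a short derivation added for clarity): \begin{align} f'(x) & = -14x + 930 = 0 \quad \Rightarrow \quad x^* = \frac{930}{14} \approx 66.42857, \end{align} and since $f''(x) = -14 < 0$, this critical point is indeed a maximum.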
###Code
# Use np.arange to create a variable called 'x' that is equal to the numbers 0 through 100
# with a spacing between numbers of 0.1
x = np.arange(0,100,0.1)
# Create a variable called 'f' that equals f(x) at each value of the array 'x' just defined
f = -7*x**2 + 930*x + 30
# Use np.argmax to create a variable called xstar equal to the value of 'x' that maximizes the function f(x).
xstar = x[np.argmax(f)]
# Print the value of xstar
print(xstar)
# Use np.arange to create a variable called 'x' that is equal to the numbers 0 through 100
# with a spacing between numbers of 0.001
x = np.arange(0,100,0.001)
# Create a variable called 'f' that equals f(x) at each value of the array 'x' just defined
f = -7*x**2 + 930*x + 30
# Use np.argmax to create a variable called xstar equal to the value of 'x' that maximizes the function f(x).
xstar = x[np.argmax(f)]
# Print the value of xstar
print(xstar)
# Use np.arange to create a variable called 'x' that is equal to the numbers 0 through *50*
# with a spacing between numbers of 0.001
x = np.arange(0,50,0.001)
# Create a variable called 'f' that equals f(x) at each value of the array 'x' just defined
f = -7*x**2 + 930*x + 30
# Use np.argmax to create a variable called xstar equal to the value of 'x' that maximizes the function f(x).
xstar = x[np.argmax(f)]
# Print the value of xstar
print(xstar)
###Output
49.999
###Markdown
Part (c): EvaluationProvide answers to the following questions in the next cell.**Questions**1. How did the choice of step size in the array `x` affect the accuracy of the computed results in the first two cells of Part (b)?2. What do you think is the drawback to decreasing the step size in `x`?3. In the previous cell, why did NumPy return a value for `xstar` that is so different from the solution you derived in Part (a)? **Answers**1. Choosing a smaller step size creates a finer grid of points at which the function $f$ is calculated, and therefore the accuracy improves with a smaller step size. 2. A smaller step size means that the variable `x` will have more elements, so storing the variable and using it in computations will use more memory and may slow and/or crash the program. 3. The range of values in the variable `x` doesn't include $x^*$. Exercise: Utility MaximizationRecall the two-good utility maximization problem from microeconomics. Let $x$ and $y$ denote the amounts of two goods that a person consumes. The person receives utility from consumption given by: \begin{align} u(x,y) & = x^{\alpha}y^{\beta} \end{align}The person has income $M$ to spend on the two goods, and the prices of the goods are $p_x$ and $p_y$. The consumer's budget constraint is: \begin{align} M & = p_x x + p_y y \end{align}Suppose that $M = 100$, $\alpha=0.25$, $\beta=0.75$, $p_x = 1$, and $p_y = 0.5$. The consumer's problem is to maximize their utility subject to the budget constraint. While this problem can easily be solved by hand, we're going to use a computational approach. You can also solve the problem by hand to verify your solution. Part (a)Use the budget constraint to solve for $y$ in terms of $x$, $p_x$, $p_y$, and $M$. Use the result to write the consumer's utility as a function of $x$ only. Create a variable called `x` equal to an array of values from 0 to 80 with step size equal to 0.001 and a variable called `utility` equal to the consumer's utility. Plot the consumer's utility against $x$.
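The substitution asked for in Part (a) can be written out explicitly (a worked equation added for clarity; it matches the `utility` expression computed in the code cell below): \begin{align} y(x) & = \frac{M - p_x x}{p_y}, \qquad u(x) = x^{\alpha}\left(\frac{M - p_x x}{p_y}\right)^{\beta}. \end{align}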
###Code
# Assign values to the constants alpha, beta, M, px, py
alpha = 0.25
beta = 0.75
M = 100
px=1
py=0.5
# Create an array of x values
x = np.arange(0,80,0.001)
# Create an array of utility values
utility = x**alpha*((M-px*x)/py)**beta
# Plot utility against x.
plt.plot(x,utility,lw=3,alpha=0.65)
# x- and y-axis labels
plt.xlabel('x')
plt.ylabel('utility')
# Title
plt.title('u[x,y(x)]')
# Add grid
plt.grid()
###Output
_____no_output_____
###Markdown
Part (b)The NumPy function `np.max()` returns the highest value in an array and `np.argmax()` returns the index of the highest value. Print the highest value and index of the highest value of `utility`.
###Code
print('highest utility value :',np.max(utility))
print('index of highest utility value:',np.argmax(utility))
###Output
highest utility value : 95.84146563694088
index of highest utility value: 25000
###Markdown
Part (c)Use the index of the highest value of utility to find the value in `x` with the same index and store that value in a new variable called `xstar`. Print the value of `xstar`.
###Code
# Create variable 'xstar' equal to value in 'x' that maximizes utility
xstar = x[np.argmax(utility)]
# Print value of 'xstar'
print('xstar:',xstar)
###Output
xstar: 25.0
###Markdown
Part (d)Use the budget constraint to find the implied utility-maximizing value of $y$ and store this in a variable called `ystar`. Print `ystar`.
###Code
# Create variable 'ystar' equal to value in 'y' that maximizes utility
ystar = M/py - px*xstar/py
# Print value of 'ystar'
print('ystar:',ystar)
###Output
ystar: 150.0
|
KaggleWorkflows/Titanic.ipynb | ###Markdown
grid = sns.FacetGrid(train_df, row='Embarked')
###Code
grid = sns.FacetGrid(train_df, row='Embarked', size=3, aspect=3)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend();
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=3, aspect=2)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend();
print('After ', train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
for dataset in combine:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',
'Don', 'Dr', 'Major', 'Rev', 'Sir',
'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
train_df[['Title', 'Survived']].groupby('Title', as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df['Title']
title_mapping = {'Mr':1, 'Miss':2, 'Mrs':3, 'Master':4, 'Rare':5}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train_df.head()
train_df = train_df.drop(['PassengerId', 'Name'], axis=1)
test_df = test_df.drop('Name', axis=1)
combine = [train_df, test_df]
print(train_df.shape, test_df.shape)
gender = {'female':1, 'male':0}
for dataset in combine:
dataset['Sex'] = dataset['Sex'].map(gender).astype(int)
train_df.head()
###Output
_____no_output_____ |
Bronze/quantum-with-qiskit/Q36_Superposition_and_Measurement_Solutions.ipynb | ###Markdown
$ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $ Solutions for Superposition and Measurement _prepared by Abuzer Yakaryilmaz_ Task 3Repeat the second experiment with the following modifications.Start in state $ \ket{1} $.Apply a Hadamard gate.Make a measurement. If the measurement outcome is 0, stop.Otherwise, apply a second Hadamard, and then make a measurement.Execute your circuit 1000 times.Calculate the expected values of observing '0' and '1', and then compare your result with the simulator result. Solution
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# define a quantum register with a single qubit
q = QuantumRegister(1)
# define a classical register with a single bit
c = ClassicalRegister(1,"c")
# define a quantum circuit
qc = QuantumCircuit(q,c)
# start in state |1>
qc.x(q[0])
# apply the first Hadamard
qc.h(q[0])
# the first measurement
qc.measure(q,c)
# apply the second Hadamard if the measurement outcome is 1
qc.h(q[0]).c_if(c,1)
# the second measurement
qc.measure(q[0],c)
# draw the circuit
display(qc.draw(output="mpl"))
###Output
_____no_output_____
###Markdown
We expect to see outcomes '0' and '1' with frequencies 75% and 25%, respectively.
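As a quick check of these numbers: the first Hadamard applied to $ \ket{1} $ yields outcomes 0 and 1 with probability $ \frac{1}{2} $ each, and only the '1' branch receives a second Hadamard, which again splits evenly. Hence $ Pr(0) = \frac{1}{2} + \frac{1}{2} \cdot \frac{1}{2} = \frac{3}{4} $ and $ Pr(1) = \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4} $, i.e. roughly 750 counts of '0' and 250 counts of '1' out of 1000 shots.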
###Code
# execute the circuit 1000 times in the local simulator
job = execute(qc,Aer.get_backend('qasm_simulator'),shots=1000)
counts = job.result().get_counts(qc)
print(counts)
###Output
_____no_output_____
###Markdown
Task 4 Design the following quantum circuit. Start in state $ \ket{0} $. Repeat 3 times: if the classical bit is 0: apply a Hadamard operator; make a measurement. Execute your circuit 1000 times. Calculate the expected values of observing '0' and '1', and then compare your result with the simulator result. Solution
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# define a quantum register with a single qubit
q = QuantumRegister(1)
# define a classical register with a single bit
c = ClassicalRegister(1,"c")
# define a quantum circuit
qc = QuantumCircuit(q,c)
for i in range(3):
qc.h(q[0]).c_if(c,0)
qc.measure(q,c)
# draw the circuit
qc.draw(output="mpl")
###Output
_____no_output_____
###Markdown
We start in state $ \ket{0} $. Thus, the first Hadamard and measurement are implemented. Out of 1000, we expect to observe 500 '0' and 500 '1'.If the classical bit is 1, then there will be no further Hadamard operator, and so the quantum register will always be in state $ \ket{1} $ and so all measurement results will be 1.If the classical bit is 0, then another Hadamard is applied, followed by a measurement.Thus, out of 1000, we expect to observe 250 '0' and 750 '1'.Similarly, after the third control, we expect to observe 125 '0' and 875 '1'.
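A minimal sketch of that bookkeeping in plain Python (the 1000-shot budget and the dictionary layout are illustrative only, not part of the solution circuit):

```python
# expected counts per outcome, starting from 1000 shots with the register in |0>
expected = {'0': 1000, '1': 0}
for _ in range(3):
    # only the '0' branch gets a Hadamard, which splits it evenly;
    # the '1' branch stays in |1> and keeps measuring 1
    half = expected['0'] // 2
    expected = {'0': half, '1': expected['1'] + half}
    print(expected)
# -> {'0': 500, '1': 500}, {'0': 250, '1': 750}, {'0': 125, '1': 875}
```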
###Code
# execute the circuit 1000 times in the local simulator
job = execute(qc,Aer.get_backend('qasm_simulator'),shots=1000)
counts = job.result().get_counts(qc)
print(counts)
###Output
_____no_output_____
###Markdown
Task 5 Design the following randomly created quantum circuit. Start in state $ \ket{0} $. Apply a Hadamard operator and make a measurement. REPEAT 4 times: randomly pick x in {0,1}; if the classical bit is x: apply a Hadamard operator; make a measurement. Draw your circuit, and guess the expected frequencies of observing '0' and '1' if the circuit is executed 10000 times. Then, execute your circuit 10000 times, and compare your result with the simulator result. Repeat execution a few more times. Solution We can calculate the expected frequencies iteratively in Python.
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# import randrange for random choices
from random import randrange
# define a quantum register with a single qubit
q = QuantumRegister(1)
# define a classical register with a single bit
c = ClassicalRegister(1,"c")
# define a quantum circuit
qc = QuantumCircuit(q,c)
shot = 10000
observe = [0,0]
qc.h(q[0])
qc.measure(q,c)
observe = [shot/2,shot/2]
for i in range(4):
x = randrange(2)
if x==0:
observe[0] = observe[0] / 2
observe[1] = observe[1] + observe[0]
else:
observe[1] = observe[1] / 2
observe[0] = observe[0] + observe[1]
qc.h(q[0]).c_if(c,x)
qc.measure(q,c)
# draw the circuit
display(qc.draw(output="mpl"))
print('0:',round(observe[0]),'1:',round(observe[1]))
# execute the circuit 10000 times in the local simulator
job = execute(qc,Aer.get_backend('qasm_simulator'),shots=shot)
counts = job.result().get_counts(qc)
print(counts)
###Output
_____no_output_____ |
introduction-to-numpy.ipynb | ###Markdown
DataTypes and Attributes
###Code
# NumPy's main datatype is ndarray
a1 = np.array([1,2,3])
a1
type(a1)
a2 = np.array([
[1,3,5.5],
[4.5,7,8]
])
# 3-dimensional array, also referred to as a matrix
a3 = np.array([[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]],
[[10, 11, 12],
[13, 14, 15],
[16, 17, 18]]])
a2
a3
a1.shape
a3.shape
a1.dtype, a2.dtype, a3.dtype
a1.size, a2.size, a3.size
type(a1), type(a2), type(a3)
a1
a2
a3
# create a dataframe from numpy array
import pandas as pd
df = pd.DataFrame(a2)
df
###Output
_____no_output_____
###Markdown
2. Create Numpy arrays
###Code
sample_array = np.array([1,2,3])
sample_array
sample_array.dtype
ones = np.ones((2, 3))
ones
ones.dtype
type(ones)
zeros = np.zeros((2, 3))
zeros
range_array = np.arange(0, 10,2)
range_array
random_array = np.random.randint(0,10,size=(3,5))
random_array
random_array.size
random_array.shape
random_array_2 = np.random.random(size=(5, 3))
random_array_2
random_array_2.shape
random_array_2.size
random_array_3 = np.random.rand(5, 3)
random_array_3
#pseudo random arrays|
np.random.seed(seed=0)
random_array_4 = np.random.randint(10, size=(5, 3))
random_array_4
np.random.seed(7)
random_array_5 = np.random.random(size=((5, 3)))
random_array_5
random_array_4.shape
###Output
_____no_output_____
###Markdown
3. Viewing arrays and matrices
###Code
np.unique(random_array_4)
a1
a2
a3
a1[0]
a2[0]
a3.shape
a3[0]
a2
a2[1]
a3
a3[:2, :2, :2]
a4 = np.random.randint(10, size=(2,3,4,5))
a4
a4.shape, a4.ndim
# get the first 4 numbers of the inner most array
a4[:, :, :, :4]
###Output
_____no_output_____
###Markdown
4. Manipulating and comparing arrarys Arithmetic
###Code
a1
ones = np.ones(3)
a1 + ones
a1 - ones
a1 * ones
a2
a1 * a2
a3
a1 / ones
a2 ** 2
np.square(a2)
a1 % 2
###Output
_____no_output_____
###Markdown
AggregationAggregation = performing the same operation on a number of things
###Code
listy_list = [1,2,3]
type(listy_list)
sum(listy_list)
a1
type(a1)
sum(a1)
np.sum(a1)
###Output
_____no_output_____
###Markdown
Use Python's methods (`sum()`) on Python datatypes and use NumPy's methods (`np.sum()`) on NumPy arrays
###Code
# create massive array
massiva_array = np.random.random(100000)
massiva_array.size
massiva_array[:10]
%timeit sum(massiva_array)
%timeit np.sum(massiva_array)
a2
np.mean(a2)
np.max(a2)
np.min(a2)
###Output
_____no_output_____
###Markdown
Std and Variance [https://www.mathsisfun.com/data/standard-deviation.html]
###Code
# Standard Deviation : a measure of how spread out a group of numbers is from the mean
np.std(a2)
# Variance : measure of the average degree to which each number differs from the mean
# Higher Variance : wider range of numbers
# Lower Variance : lower range of numbers
np.var(a2)
# std = square root of variance
np.sqrt(np.var(a2))
# var and std
high_var_array = np.array([1,100,200,300,4000,5000])
low_var_array = np.array([2,4,6,8,10])
np.var(high_var_array), np.var(low_var_array)
np.std(high_var_array), np.std(low_var_array)
np.mean(high_var_array), np.mean(low_var_array)
import matplotlib.pyplot as plt
plt.hist(high_var_array)
plt.show()
plt.hist(low_var_array)
###Output
_____no_output_____
###Markdown
Reshaping and Transposing
###Code
a2
a2.shape
a3
a2 * a3
a2.reshape(2,3,1).shape
a3.shape
a2_reshape = a2.reshape(2,3,1)
a2_reshape
a2_reshape * a3
a2
# Transpose
a2.T
a2.T.shape
a3.T
a3.T.shape
###Output
_____no_output_____
###Markdown
Dot Product
###Code
np.random.seed(0)
mat1 = np.random.randint(10, size=(5, 3))
mat2 = np.random.randint(10, size=(5, 3))
mat1
mat2
mat1.shape
mat2.shape
mat1 * mat2
mat2_trans = mat2.T
mat2_trans.shape
np.dot(mat1, mat2_trans)
###Output
_____no_output_____
###Markdown
Dot Product Example
###Code
np.random.seed(0)
sales_amount = np.random.randint(20, size=(5, 3))
sales_amount
import pandas as pd
weekly_sales = pd.DataFrame( sales_amount, index=['Mon','Tue','Wed','Thu','Fri'], columns=['Almond Butter', 'Peanut Butter', 'Cashew Butter'] )
weekly_sales
prices = np.array([10,8,12])
prices
butter_prices = pd.DataFrame(prices.reshape(1, 3), index=['Price'], columns=['Almond Butter', 'Peanut Butter', 'Cashew Butter'])
butter_prices
weekly_sales
butter_prices
weekly_sales.shape, butter_prices.shape
weekly_sales_t = weekly_sales.T
weekly_sales_t.shape
weekly_sales_t.shape, butter_prices.shape
total_sales = np.dot(butter_prices,weekly_sales_t)
total_sales
daily_sales = butter_prices.dot(weekly_sales_t)
daily_sales
weekly_sales
weekly_sales['Total $'] = daily_sales.T
weekly_sales
###Output
_____no_output_____
###Markdown
Comparison Operators
###Code
a1
a2
a1 > a2
a1 == a2
a1 < a2
bool_array = a1 >= a2
bool_array.dtype, type(bool_array)
###Output
_____no_output_____
###Markdown
5. Sorting Array
###Code
random_array
random_array.shape
np.sort(random_array)
np.argsort(random_array)
a1
np.argsort(a1)
np.argmax(a1)
np.argmin(a1)
np.argmax(random_array)
###Output
_____no_output_____
###Markdown
6. Practical Example
###Code
# turn image into numpy
from matplotlib.image import imread
panda = imread('numpy-panda.png')
panda
panda.dtype
type(panda)
panda.size, panda.shape, panda.ndim
panda[:5]
###Output
_____no_output_____
###Markdown
DataTypes & Attributes
###Code
# Numpy's main datatype is ndarray
a1 = np.array([1, 2, 3])
a1
type(a1)
a2 = np.array([[1, 2.0, 3.3],
[4, 5, 6.5]])
a3 = np.array([[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]],
[[10, 11, 12],
[13, 14, 15],
[16, 17, 18]]])
a2
a3
a1.shape, a2.shape, a3.shape
a1.ndim, a2.ndim, a3.ndim
a2.shape
a1.dtype, a2.dtype, a3.dtype
a1.size, a2.size, a3.size
type(a1), type(a2), type(a3)
# Create a DataFrame from a Numpy array
import pandas as pd
df = pd.DataFrame(a2)
df
###Output
_____no_output_____
###Markdown
2. Creating Arrays
###Code
sample_array = np.array([1, 2, 3])
sample_array
sample_array.dtype
ones = np.ones((2, 3))
ones
zeros = np.zeros((2, 3))
zeros
range_array = np.arange(0, 10, 2)
range_array
random_array = np.random.randint(0, 10, size=(3,5))
random_array
random_array_2 = np.random.random((5, 3))
random_array_2
random_array_2.shape
random_array_3 = np.random.rand(5,3)
random_array_3
# Pseudo-random numbers
np.random.seed(seed=0)
random_array_4 = np.random.randint(10, size=(5, 3))
random_array_4
np.random.seed(7)
random_array_5 = np.random.random((5, 3))
random_array_5
random_array_5 = np.random.random((5, 3))
random_array_5
###Output
_____no_output_____
###Markdown
3. Viewing arrays and matrices
###Code
np.unique(random_array_4)
a1
a2
a3
a1[0]
a2[0]
a3[0]
a2[1]
a3[:2, :2, :2]
a4 = np.random.randint(10, size=(2, 3, 4, 5))
a4
a4.shape, a4.ndim
# Get the first 4 numbers of the inner most arrays
a4[:, :, :, :4]
###Output
_____no_output_____
###Markdown
4. Manipulating & Comparing Arrays Arithmetic
###Code
a1
ones = np.ones((3))
ones
a1 + ones
a1 - ones
a1 * ones
a2
a1 * a2
a3
a1 / ones
# Floor division removes the decimals (rounds down)
a2 // a1
a2
a2 ** 2
np.square(a2)
np.add(a1, ones)
a1
a1 % 2
a2 % 2
np.exp(a1)
np.log(a1)
###Output
_____no_output_____
###Markdown
AggregationAggregation = performing the same operations on a number of things
###Code
listy_list = [1, 2, 3]
type(listy_list)
sum(listy_list)
a1
type(a1)
sum(a1)
np.sum(a1)
###Output
_____no_output_____
###Markdown
Use Python's methods (`sum()`) on Python datatypes and use NumPy's methods on NumPy arrays (`np.sum()`)
###Code
# Creating a massive NumPy array
massive_array = np.random.random(100000)
massive_array.size
massive_array[:10]
%timeit sum(massive_array) # Python's sum()
%timeit np.sum(massive_array) # NumPy's np.sum()
a2
np.mean(a2)
np.max(a2)
np.min(a2)
# Standard deviation = a measure of how spread out a group of numbers is from the mean
np.std(a2)
# Variance = measure of the average degree to which each number is different from the mean
# Higher variance = wider range of numbers
# Lower variance = lower range of numbers
np.var(a2)
# Standard deviation = square root of variance
np.sqrt(np.var(a2))
# Demo of std and var
high_var_array = np.array([1, 100, 200, 300, 4000, 5000])
low_var_array = np.array([2, 4, 6, 8, 10])
np.var(high_var_array), np.var(low_var_array)
np.std(high_var_array), np.std(low_var_array)
np.mean(high_var_array), np.mean(low_var_array)
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(high_var_array)
plt.show
plt.hist(low_var_array)
plt.show
###Output
_____no_output_____
###Markdown
Reshaping & Transposing
###Code
a2
a2.shape
a3
a3.shape
a2.shape
a2 * a3
a2.reshape(2, 3, 1)
a2_reshape = a2.reshape(2, 3, 1)
a2_reshape * a3
a2
# Transpose = switches the axis'
a2.T
a2.shape, a2.T.shape
a3
a3.T
a3.shape, a3.T.shape
###Output
_____no_output_____
###Markdown
Dot Product
###Code
np.random.seed(0)
mat1 = np.random.randint(10, size=(5, 3))
mat2 = np.random.randint(10, size=(5, 3))
mat1, mat2
mat1.shape, mat2.shape
# Element-wise multiplication (Hadamard product)
mat1 * mat2
# Dot product (inner dimensions must be the same, reshaping mat2 from a 5x3 to a 3x5)
np.dot(mat1, mat2.reshape(3,5))
# Transpose mat1
mat1.T
mat1.T.shape, mat2.shape
np.dot(mat1.T, mat2)
# Transpose mat2
mat3 = np.dot(mat1, mat2.T)
mat3, mat3.shape
###Output
_____no_output_____
###Markdown
Dot product example (nut butter sales)
###Code
np.random.seed(0)
# Number of jars sold
sales_amounts = np.random.randint(30, size=(5,3))
sales_amounts
# Create weekly_sales DataFrame
weekly_sales = pd.DataFrame(sales_amounts,
index=["Mon", "Tues", "Wed", "Thurs", "Fri"],
columns=["Almond butter", "Peanut butter", "Cashew butter"])
weekly_sales
# Create prices array
prices = np.array([10, 8, 12])
prices
prices.shape
# Create butter_prices DataFrame
butter_prices = pd.DataFrame(prices.reshape(1, 3),
index=["Price"],
columns=["Almond butter", "Peanut butter", "Cashew butter"])
butter_prices
# Shapes need to be aligned so transpose
total_sales = prices.dot(sales_amounts.T)
total_sales
# Create daily_sales
butter_prices
daily_sales = butter_prices.dot(weekly_sales.T)
daily_sales
weekly_sales
weekly_sales["Total ($)"] = daily_sales.T
weekly_sales
###Output
_____no_output_____
###Markdown
1. DataTypes & Attributes
###Code
# NumPy's main datatype is ndarray
a1 = np.array([1, 2, 3])
a1
type(a1)
a2 = np.array([[1, 2.0, 3.3],
[4, 5, 6.5]])
a3 = np.array([
[
[1,2,3],
[4,5,6],
[7,8,9]
],
[
[10,11,12],
[13,14,15],
[16,17,18]
]
])
a2
a3
a1.shape
a2.shape
a1.ndim, a2.ndim, a3.ndim
a1.dtype, a2.dtype, a3.dtype
a1.size, a2.size, a3.size
# Create a DataFrame from a NumPy array
import pandas as pd
df = pd.DataFrame(a2)
df
###Output
_____no_output_____
###Markdown
2. Creating Arrays
###Code
sample_array = np.array([1, 2, 3])
sample_array
sample_array.dtype
ones = np.ones((2, 3))
ones
ones.dtype
zeros = np.zeros((2, 3))
zeros
range_array = np.arange(0, 10, 2)
range_array
random_array = np.random.randint(0, 10, size=(3, 5))
random_array
random_array1 = np.random.randint(10, size=(5,3))
random_array1
###Output
_____no_output_____
###Markdown
3. Viewing arrays and matrices
###Code
np.unique(random_array1)
random_array3 = np.random.randint(50, size=(2, 3, 4, 5))
random_array3
random_array3[:, :, :, :2]
###Output
_____no_output_____
###Markdown
4. Manipulating and comparing arrays Arithmetic
###Code
a1 = np.array([1,2,3])
a1
ones = np.ones((3))
ones
a1 + ones
a1 - ones
a1 * ones
###Output
_____no_output_____
###Markdown
Aggregation
###Code
a1
sum(a1)
np.sum(a1)
# Create massive NumPy array
massive_array = np.random.random(100000)
massive_array.size
massive_array[:10]
# %timeit sum(massive_array) # Python's Sum
# %timeit np.sum(massive_array) #NumPy's Sum
a2
np.mean(a2)
np.max(a2)
np.min(a2)
np.var(a2)
np.std(a2)
# Demo of std and var
high_var_array = np.array([1, 100, 200, 300, 4000, 5000])
low_var_array = np.array([2, 4, 6, 8, 10])
np.var(high_var_array), np.var(low_var_array)
np.std(high_var_array), np.std(low_var_array)
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(high_var_array)
plt.show()
plt.hist(low_var_array)
plt.show()
###Output
_____no_output_____
###Markdown
Reshaping and Transposing
###Code
a3
a2
a2_reshape = a2.reshape(2, 3, 1)
a2_reshape
a2_reshape * a3
a2.T
###Output
_____no_output_____
###Markdown
Dot product
###Code
np.random.seed(0)
mat1 = np.random.randint(10, size=(5,3))
mat2 = np.random.randint(10, size=(5,3))
mat1
mat2
# Element-wise multiplication
mat1 * mat2
np.dot(mat1, mat2.T)
np.dot(mat1.T, mat2)
###Output
_____no_output_____
###Markdown
Dot product example (nut butter sales)
###Code
# Number of jars sold
sales_amount = np.random.randint(20, size=(5,3))
sales_amount
weekly_sales = pd.DataFrame(sales_amount, index = ["Mon", "Tue", "Wed", "Thu", "Fri"], columns = ["Almond butter", "Peanut Butter", "Cashew Butter"])
weekly_sales
# Create prices array
prices = np.array([10, 8, 12])
prices
# Create butter prices data frame
butter_prices = pd.DataFrame(prices.reshape(1,3), index=["Price"], columns = ["Almond butter", "Peanut Butter", "Cashew Butter"])
butter_prices
total_sales = prices.dot(sales_amount.T)
total_sales
# Create daily_sales
butter_prices.shape, weekly_sales.shape
daily_sales = butter_prices.dot(weekly_sales.T)
daily_sales
weekly_sales["Total"] = daily_sales.T
weekly_sales
weekly_sales = weekly_sales.rename(columns={'Total':'Total ($)'})
weekly_sales
###Output
_____no_output_____
###Markdown
Comparison Operators
###Code
a1
a2
a1 > a2
a1>5
a1==2
###Output
_____no_output_____
###Markdown
5. Sorting arrays
###Code
random_array
np.sort(random_array)
np.argsort(random_array)
###Output
_____no_output_____
###Markdown
###Code
# Turn an image into NumPy array
from matplotlib.image import imread
panda = imread('data/panda.png')
print(type(panda))
panda
panda.size, panda.shape, panda.ndim
panda[:5]
###Output
_____no_output_____ |
ipynb/devlib/examples/cgroups.ipynb | ###Markdown
Target connection
###Code
import os
os.environ['ANDROID_HOME'] = '/ext/android-sdk-linux/'
from env import TestEnv
my_conf = {
# # JUNO Linux
# "platform" : "linux",
# "board" : "juno",
# "host" : "192.168.0.1",
# "username" : "root",
# "password" : "",
# "exclude_modules" : ['hwmon'],
# JUNO Android
"platform" : "android",
"board" : "juno",
"host" : "192.168.0.1",
"exclude_modules" : ['hwmon'],
# RT-App calibration values
"rtapp-calib" : {
'0': 363, '1': 138, '2': 139, '3': 352, '4': 353, '5': 361
},
# List of additional devlib modules to install
"modules" : ['cgroups', 'bl', 'cpufreq'],
# List of additional binary tools to install
"tools" : ['rt-app', 'trace-cmd'],
# FTrace events to collect
"ftrace" : {
"events" : [
"sched_switch"
],
"buffsize" : 10240
}
}
te = TestEnv(my_conf)
target = te.target
# Report target connection
logging.info('Connected to %s target', target.abi)
print "DONE"
###Output
05:21:13 INFO : Target - Using base path: /home/derkling/Code/lisa
05:21:13 INFO : Target - Loading custom (inline) target configuration
05:21:13 INFO : Target - Devlib modules to load: ['bl', 'cpufreq', 'cgroups']
05:21:13 INFO : Target - Connecting Android target [192.168.0.1:5555]
05:21:16 INFO : Target - Initializing target workdir:
05:21:16 INFO : Target - /data/local/tmp/devlib-target
05:21:21 INFO : Target - Topology:
05:21:21 INFO : Target - [[0, 3, 4, 5], [1, 2]]
05:21:21 INFO : Platform - Loading default EM:
05:21:21 INFO : Platform - /home/derkling/Code/lisa/libs/utils/platforms/juno.json
05:21:23 INFO : FTrace - Enabled tracepoints:
05:21:23 INFO : FTrace - sched_switch
05:21:23 INFO : EnergyMeter - HWMON module not enabled
05:21:23 WARNING : EnergyMeter - Energy sampling disabled by configuration
05:21:23 WARNING : Target - Using configuration provided RTApp calibration
05:21:23 INFO : Target - Using RT-App calibration values:
05:21:23 INFO : Target - {"0": 363, "1": 138, "2": 139, "3": 352, "4": 353, "5": 361}
05:21:23 INFO : TestEnv - Set results folder to:
05:21:23 INFO : TestEnv - /home/derkling/Code/lisa/results/20160428_172123
05:21:23 INFO : TestEnv - Experiment results available also in:
05:21:23 INFO : TestEnv - /home/derkling/Code/lisa/results_latest
05:21:23 INFO : Connected to arm64 target
###Markdown
List available Controllers
###Code
logging.info('%14s - Available controllers:', 'CGroup')
ssys = target.cgroups.list_subsystems()
for (n,h,g,e) in ssys:
logging.info('%14s - %10s (hierarchy id: %d) has %d cgroups',
'CGroup', n, h, g)
###Output
05:21:25 INFO : CGroup - Available controllers:
05:21:25 INFO : CGroup - cpuset (hierarchy id: 3) has 6 cgroups
05:21:25 INFO : CGroup - cpu (hierarchy id: 2) has 3 cgroups
05:21:25 INFO : CGroup - cpuacct (hierarchy id: 1) has 37 cgroups
05:21:25 INFO : CGroup - schedtune (hierarchy id: 4) has 1 cgroups
05:21:25 INFO : CGroup - freezer (hierarchy id: 5) has 1 cgroups
05:21:25 INFO : CGroup - debug (hierarchy id: 6) has 1 cgroups
###Markdown
Example of CPUSET controller usage
###Code
# Get a reference to the CPUSet controller
cpuset = target.cgroups.controller('cpuset')
# Get the list of current configured CGroups for that controller
cgroups = cpuset.list_all()
logging.info('Existing CGroups:')
for cg in cgroups:
logging.info(' %s', cg)
# Dump the configuration of each controller
for cgname in cgroups:
#print cgname
cgroup = cpuset.cgroup(cgname)
attrs = cgroup.get()
#print attrs
cpus = attrs['cpus']
logging.info('%s:%-15s cpus: %s', cpuset.kind, cgroup.name, cpus)
# Create a LITTLE partition
cpuset_littles = cpuset.cgroup('/LITTLE')
# Check the attributes available for this control group
print "LITTLE:\n", json.dumps(cpuset_littles.get(), indent=4)
# Tune CPUs and MEMs attributes
# they must be initialized for the group to be usable
cpuset_littles.set(cpus=target.bl.littles, mems=0)
print "LITTLE:\n", json.dumps(cpuset_littles.get(), indent=4)
# Define a periodic big (80%) task
task = Periodic(
period_ms=100,
duty_cycle_pct=80,
duration_s=5).get()
# Create one task per each CPU in the target
tasks={}
for tid in enumerate(target.core_names):
tasks['task{}'.format(tid[0])] = task
# Configure RTA to run all these tasks
rtapp = RTA(target, 'simple', calibration=te.calibration())
rtapp.conf(kind='profile', params=tasks, run_dir=target.working_directory);
# Test execution of all these tasks into the LITTLE cluster
trace = rtapp.run(ftrace=te.ftrace, cgroup=cpuset_littles.name, out_dir=te.res_dir)
# Check tasks residency on the little cluster
trappy.plotter.plot_trace(trace)
# Compute and visualize tasks residencies on LITTLE cluster CPUs
s = SchedMultiAssert(trappy.Run(trace), te.topology, execnames="task")
residencies = s.getResidency('cluster', target.bl.littles, percent=True)
print json.dumps(residencies, indent=4)
# Assert that ALL tasks have always executed only on LITTLE cluster
s.assertResidency('cluster', target.bl.littles,
99.9, operator.ge, percent=True, rank=len(residencies))
###Output
_____no_output_____
###Markdown
Example of CPU controller usage
###Code
# Get a reference to the CPU controller
cpu = target.cgroups.controller('cpu')
# Create a LITTLE partition for the CPU controller
cpu_littles = cpu.cgroup('/LITTLE')
# Check the attributes available for this control group
print "LITTLE:\n", json.dumps(cpu_littles.get(), indent=4)
# Set a 1CPU equivalent bandwidth for that CGroup
# cpu_littles.set(cfs_period_us=100000, cfs_quota_us=50000)
cpu_littles.set(shares=512)
print "LITTLE:\n", json.dumps(cpu_littles.get(), indent=4)
# Test execution of all these tasks into the LITTLE cluster
trace = rtapp.run(ftrace=te.ftrace, cgroup=cpu_littles.name)
# Check tasks residency on the little cluster
trappy.plotter.plot_trace(trace)
###Output
_____no_output_____
###Markdown
Global configuration
###Code
# Host side results folder
RESULTS_DIR = '/tmp/schedtest'
# Taerget side temporary folder
TARGET_DIR = '/root/schedtest'
# List of tools to install on the target system
TOOLS = ["rt-app", "trace-cmd", "taskset", "cgroup_run_into.sh"]
# List of modules to enable
MODULES = ['cgroups', 'bl']
###Output
_____no_output_____
###Markdown
Target connection
###Code
from env import TestEnv
my_target_conf = {
"platform" : "linux",
"board" : "juno",
"host" : "192.168.0.1",
"username" : "root",
"password" : "",
"rtapp-calib" : {
'0': 363, '1': 138, '2': 139, '3': 352, '4': 353, '5': 361
},
}
# Setup the required Test Environment supports
my_tests_conf = {
# list of additional devlib modules to install
"modules" : ['cgroups', 'bl', 'cpufreq'],
# list of additional binary tools to install
"tools" : ['rt-app', 'trace-cmd', 'cgroup_run_into.sh'],
"ftrace" : {
"events" : [
"sched_switch"
],
"buffsize" : 10240
}
}
te = TestEnv(target_conf=my_target_conf, test_conf=my_tests_conf)
target = te.target
# Report target connection
logging.info('Connected to %s target', target.abi)
###Output
06:38:54 INFO : Target - Using base path: /home/derkling/Code/lisa
06:38:54 INFO : Target - Loading custom (inline) target configuration
06:38:54 INFO : Target - Loading custom (inline) test configuration
06:38:54 INFO : Target - Devlib modules to load: ['bl', 'cpufreq', 'cgroups', 'hwmon']
06:38:54 INFO : Target - Connecting linux target:
06:38:54 INFO : Target - username : root
06:38:54 INFO : Target - host : 192.168.0.1
06:38:54 INFO : Target - password :
06:39:44 INFO : Target - Initializing target workdir:
06:39:44 INFO : Target - /root/devlib-target
06:39:53 INFO : Target - Topology:
06:39:53 INFO : Target - [[0, 3, 4, 5], [1, 2]]
06:39:55 INFO : Platform - Loading default EM:
06:39:55 INFO : Platform - /home/derkling/Code/lisa/libs/utils/platforms/juno.json
06:39:57 INFO : FTrace - Enabled tracepoints:
06:39:57 INFO : FTrace - sched_switch
06:39:57 INFO : EnergyMeter - Scanning for HWMON channels, may take some time...
06:39:57 INFO : EnergyMeter - Channels selected for energy sampling:
06:39:57 INFO : EnergyMeter - a57_energy
06:39:57 INFO : EnergyMeter - a53_energy
06:39:57 WARNING : Target - Using configuration provided RTApp calibration
06:39:57 INFO : Target - Using RT-App calibration values:
06:39:57 INFO : Target - {"0": 363, "1": 138, "2": 139, "3": 352, "4": 353, "5": 361}
06:39:57 INFO : TestEnv - Set results folder to:
06:39:57 INFO : TestEnv - /home/derkling/Code/lisa/results/20160225_183957
06:39:57 INFO : TestEnv - Experiment results available also in:
06:39:57 INFO : TestEnv - /home/derkling/Code/lisa/results_latest
06:39:57 INFO : Connected to arm64 target
###Markdown
List available Controllers
###Code
logging.info('%14s - Available controllers:', 'CGroup')
ssys = target.cgroups.list_subsystems()
for (n,h,g,e) in ssys:
logging.info('%14s - %10s (hierarchy id: %d) has %d cgroups',
'CGroup', n, h, g)
###Output
06:39:57 INFO : CGroup - Available controllers:
06:39:57 INFO : CGroup - cpuset (hierarchy id: 1) has 2 cgroups
06:39:57 INFO : CGroup - cpu (hierarchy id: 2) has 2 cgroups
06:39:57 INFO : CGroup - schedtune (hierarchy id: 3) has 1 cgroups
06:39:57 INFO : CGroup - memory (hierarchy id: 4) has 1 cgroups
06:39:57 INFO : CGroup - devices (hierarchy id: 5) has 1 cgroups
06:39:57 INFO : CGroup - freezer (hierarchy id: 6) has 1 cgroups
06:39:57 INFO : CGroup - perf_event (hierarchy id: 7) has 1 cgroups
06:39:57 INFO : CGroup - hugetlb (hierarchy id: 8) has 1 cgroups
06:39:57 INFO : CGroup - pids (hierarchy id: 9) has 1 cgroups
###Markdown
Example of CPUSET controller usage
###Code
# Get a reference to the CPUSet controller
cpuset = target.cgroups.controller('cpuset')
# Get the list of current configured CGroups for that controller
cgroups = cpuset.list_all()
logging.info('Existing CGroups:')
for cg in cgroups:
logging.info(' %s', cg)
# Dump the configuration of each controller
for cgname in cgroups:
cgroup = cpuset.cgroup(cgname)
attrs = cgroup.get()
cpus = attrs['cpus']
logging.info('%s:%-15s cpus: %s', cpuset.kind, cgroup.name, cpus)
# Create a LITTLE partition
cpuset_littles = cpuset.cgroup('/LITTLE')
# Check the attributes available for this control group
print "LITTLE:\n", json.dumps(cpuset_littles.get(), indent=4)
# Tune CPUs and MEMs attributes
# they must be initialized for the group to be usable
cpuset_littles.set(cpus=target.bl.littles, mems=0)
print "LITTLE:\n", json.dumps(cpuset_littles.get(), indent=4)
# Define a periodic big (80%) task
task = Periodic(
period_ms=100,
duty_cycle_pct=80,
duration_s=5).get()
# Create one task per each CPU in the target
tasks={}
for tid in enumerate(target.core_names):
tasks['task{}'.format(tid[0])] = task
# Configure RTA to run all these tasks
rtapp = RTA(target, 'simple', calibration=te.calibration())
rtapp.conf(kind='profile', params=tasks, run_dir=TARGET_DIR);
# Test execution of all these tasks into the LITTLE cluster
trace = rtapp.run(ftrace=te.ftrace, cgroup=cpuset_littles.name, out_dir=te.res_dir)
# Check tasks residency on the little cluster
trappy.plotter.plot_trace(trace)
# Compute and visualize tasks residencies on LITTLE cluster CPUs
s = SchedMultiAssert(trappy.Run(trace), te.topology, execnames="task")
residencies = s.getResidency('cluster', target.bl.littles, percent=True)
print json.dumps(residencies, indent=4)
# Assert that ALL tasks have always executed only on LITTLE cluster
s.assertResidency('cluster', target.bl.littles,
99.9, operator.ge, percent=True, rank=len(residencies))
###Output
_____no_output_____
###Markdown
Example of CPU controller usage
###Code
# Get a reference to the CPU controller
cpu = target.cgroups.controller('cpu')
# Create a LITTLE partition for the CPU controller
cpu_littles = cpu.cgroup('/LITTLE')
# Check the attributes available for this control group
print "LITTLE:\n", json.dumps(cpu_littles.get(), indent=4)
# Set a 1CPU equivalent bandwidth for that CGroup
cpu_littles.set(cfs_period_us=100000, cfs_quota_us=50000)
print "LITTLE:\n", json.dumps(cpu_littles.get(), indent=4)
# Test execution of all these tasks into the LITTLE cluster
trace = rtapp.run(ftrace=te.ftrace, cgroup=cpu_littles.name)
# Check tasks residency on the little cluster
trappy.plotter.plot_trace(trace)
###Output
_____no_output_____ |
Spreadsheets/GSheets/archive/google_spreadsheet.ipynb | ###Markdown
Connect to gsheet 1. Install pip package
###Code
!pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib --user
###Output
_____no_output_____
###Markdown
2. Import and initialize GoogleSpreadsheet class
###Code
from google_spreadsheet import GoogleSpreadsheet
# Arguments: spreadsheet id, sheet name, Google Drive API credentials JSON file path
instance = GoogleSpreadsheet(spreadsheet_id='1KOIw9H_FdN81iJw8ENeXm3aFt_R77N2DnJBwx_lWUiU',sheet_name='companylist',credentials_json_path='credentials.json')
###Output
Invalid path to credentials JSON file
|
Algorithms/.ipynb_checkpoints/AlgorithmsEx02-checkpoint.ipynb | ###Markdown
Algorithms Exercise 2 Imports
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
###Output
_____no_output_____
###Markdown
Peak finding Write a function `find_peaks` that finds and returns the indices of the local maxima in a sequence. Your function should:* Properly handle local maxima at the endpoints of the input array.* Return a Numpy array of integer indices.* Handle any Python iterable as input.
###Code
def find_peaks(a):
"""Find the indices of the local maxima in a sequence."""
peaks = np.array([],np.dtype('int'))
search = np.array([entry for entry in a])
if search[0] > search[1]:
peaks = np.append(peaks,np.array(0))
for i in range(1,len(search)-1):
if search[i] > search[i+1] and search[i] > search[i-1]:
peaks = np.append(peaks,i)
if search[-1] > search[-2]:
peaks = np.append(peaks,np.array(len(search)-1))
return peaks
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
###Output
_____no_output_____
###Markdown
Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:* Convert that string to a Numpy array of integers.* Find the indices of the local maxima in the digits of $\pi$.* Use `np.diff` to find the distances between consecutive local maxima.* Visualize that distribution using an appropriately customized histogram.
###Code
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
ints = [int(a) for a in pi_digits_str]
diff = np.diff(find_peaks(ints))
plt.hist(diff,np.arange(0,15));
plt.xlim(2,15);
plt.xlabel('Number of digits between maxima');
plt.ylabel('Occurrence');
plt.title('Occurrences of Maxima spacing for 10,000 digits of Pi');
assert True # use this for grading the pi digits histogram
###Output
_____no_output_____ |
jax/expressibility_jax.ipynb | ###Markdown
Expressibility of the quantum circuit
###Code
import jax
import jax.numpy as jnp
import jax.experimental.optimizers as optimizers
import qutip
import qutip.qip.operations as gates
from gate_jax import *
from circuit_ansatz_jax import alternating_layer_ansatz
from jupyterplot import ProgressPlot
###Output
_____no_output_____
###Markdown
Generate a random Haar ket-state
###Code
def target(n_qubit, num_samples):
target_array = []
for i in range(num_samples):
target_array.append(qutip.rand_ket_haar(N=2 ** n_qubit).data.A.T)
return jnp.vstack(target_array)
target(3, 2)
###Output
_____no_output_____
###Markdown
Define the circuit ansatz
###Code
def init_state(rng, n_qubit, n_layer):
rng, sub_rng = jax.random.split(rng)
params = jax.random.uniform(sub_rng, (n_qubit * n_layer,)) * 2 * jnp.pi
# init_state = jnp.array([0] * (2 ** n_qubit - 1) + [1], dtype=jnp.complex64)
return rng, params
rng = jax.random.PRNGKey(1)
rng, params = init_state(rng, 3, 2)
###Output
_____no_output_____
###Markdown
Define the loss function
###Code
def state_norm(state, target_state):
return jnp.real(jnp.sum((state - target_state) * (state - target_state).conj()))
def loss(params, n_qubit, s_block, n_layer, rot_axis, target_state):
ansatz_state = alternating_layer_ansatz(params, n_qubit, s_block, n_layer, rot_axis)
return state_norm(ansatz_state, target_state) / 2 ** n_qubit
def fidelity(params, n_qubit, s_block, n_layer, rot_axis, target_state):
ansatz_state = alternating_layer_ansatz(params, n_qubit, s_block, n_layer, rot_axis)
return jnp.abs(ansatz_state.T @ target_state), ansatz_state
target_states = target(3, 10)
rng = jax.random.PRNGKey(42)
rng, params = init_state(rng, 3, 3)
print(loss(params, 3, 3, 3, 'X', target_states[0]))
print(jax.grad(loss)(params, 3, 3, 3, 'X', target_states[0]))
###Output
0.21365044
[-0.04691796 0.01734416 0.00876972 -0.01540954 0.00547478 -0.02595082
0.0468399 -0.00293924 0.03392183]
###Markdown
Training loop
###Code
# Create the optimizer once at the top level so that step() can see
# opt_update and get_params (previously they were local to main()).
opt_init, opt_update, get_params = optimizers.adam(0.01)

def step(step_num, opt_state, **kwargs):
    params = get_params(opt_state)
    loss_v, grad_v = jax.value_and_grad(loss)(params, **kwargs)
    return loss_v, opt_update(step_num, grad_v, opt_state)

def main(n_qubit, s_block, n_layer, rot_axis, target_state):
    rng = jax.random.PRNGKey(42)
    rng, params = init_state(rng, n_qubit, n_layer)
    opt_state = opt_init(params)
    loss_history = []
    pp = ProgressPlot() # JupyterPlot
    for train_step in range(10000):
        # use the loop counter and the function arguments instead of hard-coded values
        loss_v, opt_state = step(train_step, opt_state,
            n_qubit=n_qubit, s_block=s_block, n_layer=n_layer,
            rot_axis=rot_axis, target_state=target_state)
        loss_history.append(loss_v.item())
        pp.update(loss_v.item()) # Real-time update of the plot
        # Stopping condition: the loss has stopped decreasing on average
        if train_step > 101:
            if jnp.mean(jnp.array(loss_history[-101:-1]) -
                        jnp.array(loss_history[-100:])) < 1e-9:
                break
    pp.finalize()
    return loss_history, opt_state, train_step
###Output
_____no_output_____
###Markdown
Learning graph
###Code
pylab.plot(loss_history)
pylab.show()
print(f"Mean loss: {jnp.mean(jnp.array(loss_history[-10:]))}")
print(f"{train_step} steps")
res = fidelity(get_params(opt_state), n_qubit=4, s_block=4, n_layer=10, rot_axis='Y', target_state=target_states_4[0])
print(res)
print(f"{6.476889211626258e-07:.010f}")
###Output
0.0000006477
###Markdown
Target values
###Code
target_states_4 = target(4, 10)
target_states_4[0]
###Output
_____no_output_____
###Markdown
Using $\ell_{2}$ norm * $(n_q, n_l, s_b) = (4, 1, 4)$ : $0.060760121792554855$ after $2798$ steps. Fidelity = $0.46468654$* $(n_q, n_l, s_b) = (4, 2, 4)$ : $0.029530316591262817$ after $1341$ steps. Fidelity = $0.2427987$* $(n_q, n_l, s_b) = (4, 3, 4)$ : $0.02216183952987194$ after $2690$ steps. Fidelity = $0.12433163$* $(n_q, n_l, s_b) = (4, 4, 4)$ : $0.00848543830215931$ after $3117$ steps. Fidelity = $0.1737785$* $(n_q, n_l, s_b) = (4, 5, 4)$ : $0.004162853118032217$ after $2812$ steps. Fidelity = $0.22233875$* $(n_q, n_l, s_b) = (4, 6, 4)$ : $0.004301536362618208$ after $1777$ steps. Fidelity = $0.25357744$* $(n_q, n_l, s_b) = (4, 7, 4)$ : $0.000041222141589969397$ after $4671$ steps. Fidelity = $0.1790334$* $(n_q, n_l, s_b) = (4, 8, 4)$ : $6.476889211626258 \times 10^{-7}$ after $5155$ steps. Fidelity = $0.18366201$* $(n_q, n_l, s_b) = (4, 9, 4)$ : $1.613693569879615 \times 10^{-7}$ after $1949$ steps. Fidelity = $0.18410131$* $(n_q, n_l, s_b) = (4, 10, 4)$ : $4.2994898308279517 \times 10^{-8}$ after $1026$ steps. Fidelity = $0.18437873$
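To see the trend at a glance, the final losses quoted above can be plotted against the number of layers. A minimal sketch (the values below are re-typed by hand from the list above, so treat them as approximate):

```python
import matplotlib.pylab as pylab  # same plotting interface already used in this notebook

n_layers = list(range(1, 11))
final_loss = [6.076e-2, 2.953e-2, 2.216e-2, 8.485e-3, 4.163e-3,
              4.302e-3, 4.122e-5, 6.477e-7, 1.614e-7, 4.299e-8]

# a log scale makes the sharp drop between 6 and 7 layers easy to see
pylab.semilogy(n_layers, final_loss, marker='o')
pylab.xlabel('number of layers')
pylab.ylabel('final $\\ell_2$ loss')
pylab.show()
```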
###Code
res[1] - target_states_4[0]
###Output
_____no_output_____ |
FeatureCollection/add_random_value_column.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
# Map.centerObject(roi, 10)
# Map.addLayer(ee.Image().paint(roi, 0, 3), {}, 'HUC08')
# # select polygons intersecting the roi
roi2 = HUC10.filter(ee.Filter.contains(**{'leftValue': roi.geometry(), 'rightField': '.geo'}))
# Map.addLayer(ee.Image().paint(roi2, 0, 2), {'palette': 'blue'}, 'HUC10')
# roi = HUC10.filter(ee.Filter.stringContains(**{'leftField': 'huc10', 'rightValue': '10160002'}))
roi3 = roi2.randomColumn('random')
# # print(roi3)
# Map.addLayer(roi3)
print("Random value: ", roi3.first().get('random').getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
# Map.centerObject(roi, 10)
# Map.addLayer(ee.Image().paint(roi, 0, 3), {}, 'HUC08')
# # select polygons intersecting the roi
roi2 = HUC10.filter(ee.Filter.contains(**{'leftValue': roi.geometry(), 'rightField': '.geo'}))
# Map.addLayer(ee.Image().paint(roi2, 0, 2), {'palette': 'blue'}, 'HUC10')
# roi = HUC10.filter(ee.Filter.stringContains(**{'leftField': 'huc10', 'rightValue': '10160002'}))
roi3 = roi2.randomColumn('random')
# # print(roi3)
# Map.addLayer(roi3)
print("Random value: ", roi3.first().get('random').getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for this first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
# Map.centerObject(roi, 10)
# Map.addLayer(ee.Image().paint(roi, 0, 3), {}, 'HUC08')
# # select polygons intersecting the roi
roi2 = HUC10.filter(ee.Filter.contains(**{'leftValue': roi.geometry(), 'rightField': '.geo'}))
# Map.addLayer(ee.Image().paint(roi2, 0, 2), {'palette': 'blue'}, 'HUC10')
# roi = HUC10.filter(ee.Filter.stringContains(**{'leftField': 'huc10', 'rightValue': '10160002'}))
roi3 = roi2.randomColumn('random')
# # print(roi3)
# Map.addLayer(roi3)
print("Random value: ", roi3.first().get('random').getInfo())
###Output
Random value: 0.17058982912507292
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
# Map.centerObject(roi, 10)
# Map.addLayer(ee.Image().paint(roi, 0, 3), {}, 'HUC08')
# # select polygons intersecting the roi
roi2 = HUC10.filter(ee.Filter.contains(**{'leftValue': roi.geometry(), 'rightField': '.geo'}))
# Map.addLayer(ee.Image().paint(roi2, 0, 2), {'palette': 'blue'}, 'HUC10')
# roi = HUC10.filter(ee.Filter.stringContains(**{'leftField': 'huc10', 'rightValue': '10160002'}))
roi3 = roi2.randomColumn('random')
# # print(roi3)
# Map.addLayer(roi3)
print("Random value: ", roi3.first().get('random').getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
# Map.centerObject(roi, 10)
# Map.addLayer(ee.Image().paint(roi, 0, 3), {}, 'HUC08')
# # select polygons intersecting the roi
roi2 = HUC10.filter(ee.Filter.contains(**{'leftValue': roi.geometry(), 'rightField': '.geo'}))
# Map.addLayer(ee.Image().paint(roi2, 0, 2), {'palette': 'blue'}, 'HUC10')
# roi = HUC10.filter(ee.Filter.stringContains(**{'leftField': 'huc10', 'rightValue': '10160002'}))
roi3 = roi2.randomColumn('random')
# # print(roi3)
# Map.addLayer(roi3)
print("Random value: ", roi3.first().get('random').getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
# Map.centerObject(roi, 10)
# Map.addLayer(ee.Image().paint(roi, 0, 3), {}, 'HUC08')
# # select polygons intersecting the roi
roi2 = HUC10.filter(ee.Filter.contains(**{'leftValue': roi.geometry(), 'rightField': '.geo'}))
# Map.addLayer(ee.Image().paint(roi2, 0, 2), {'palette': 'blue'}, 'HUC10')
# roi = HUC10.filter(ee.Filter.stringContains(**{'leftField': 'huc10', 'rightValue': '10160002'}))
roi3 = roi2.randomColumn('random')
# # print(roi3)
# Map.addLayer(roi3)
print("Random value: ", roi3.first().get('random').getInfo())
###Output
Random value: 0.17058982912507292
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
# Map.centerObject(roi, 10)
# Map.addLayer(ee.Image().paint(roi, 0, 3), {}, 'HUC08')
# # select polygons intersecting the roi
roi2 = HUC10.filter(ee.Filter.contains(**{'leftValue': roi.geometry(), 'rightField': '.geo'}))
# Map.addLayer(ee.Image().paint(roi2, 0, 2), {'palette': 'blue'}, 'HUC10')
# roi = HUC10.filter(ee.Filter.stringContains(**{'leftField': 'huc10', 'rightValue': '10160002'}))
roi3 = roi2.randomColumn('random')
# # print(roi3)
# Map.addLayer(roi3)
print("Random value: ", roi3.first().get('random').getInfo())
###Output
Random value: 0.17058982912507292
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine Introduction
This is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks.
If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:
```
conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -y
source activate pydeck-ee
jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck
jupyter nbextension enable --sys-prefix --py pydeck
```
then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
Authentication
Using Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/.
If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.
Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create Map
Next it's time to create a map. Here we create an `ee.Image` object
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
# Map.centerObject(roi, 10)
# Map.addLayer(ee.Image().paint(roi, 0, 3), {}, 'HUC08')
# # select polygons intersecting the roi
roi2 = HUC10.filter(ee.Filter.contains(**{'leftValue': roi.geometry(), 'rightField': '.geo'}))
# Map.addLayer(ee.Image().paint(roi2, 0, 2), {'palette': 'blue'}, 'HUC10')
# roi = HUC10.filter(ee.Filter.stringContains(**{'leftField': 'huc10', 'rightValue': '10160002'}))
roi3 = roi2.randomColumn('random')
# # print(roi3)
# Map.addLayer(roi3)
print("Random value: ", roi3.first().get('random').getInfo())
###Output
_____no_output_____
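###Markdown
Note that `ee_layers` is still empty at this point, so the deck below will render without any Earth Engine layers. A hypothetical sketch (assuming the `EarthEngineLayer(ee_object, vis_params)` usage from `pydeck_earthengine_layers`, as imported above) of how the painted HUC10 outlines could be added as a layer:

```python
# paint the HUC10 outlines into an image and wrap it as a deck.gl layer
outline_img = ee.Image().paint(roi2, 0, 2)
ee_layers.append(EarthEngineLayer(outline_img, {'palette': 'blue'}))
```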
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____ |
others/knn/.ipynb_checkpoints/knn-checkpoint.ipynb | ###Markdown
Assignment "Assignment" System for DCT Academy's Code Platform
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sqlalchemy import create_engine
from sklearn.neighbors import NearestNeighbors
from scipy.sparse import csr_matrix
import pickle
engine = create_engine('postgresql+psycopg2://postgres:sudhanvasud@localhost/postgres')
print(engine.table_names())
###Output
['answers', 'ar_internal_metadata', 'assignment_groups', 'assignments', 'batch_students', 'batches', 'chat_rooms', 'code_play_backs', 'comments', 'courses', 'editor_settings', 'favourites', 'forks', 'friendly_id_slugs', 'list_assignments', 'lists', 'messages', 'notification_types', 'notifications', 'permissions', 'practice_students', 'practices', 'questions', 'read_questions', 'roles', 'schema_migrations', 'sections', 'solutions', 'student_courses', 'student_logs', 'students', 'submission_timers', 'submissions', 'taggings', 'tags', 'test_cases', 'users', 'videos']
###Markdown
Creating a list of dataframes for all tables, and a dictionary mapping each table name to its corresponding dataframe
###Code
# Dictionary of all the tables and their columns
table_columns = {}
# Dictionary of all dataframes mapped with table names
df_all = {}
# List of all dataframes of all tables
df_list = []
for table in engine.table_names():
df = pd.read_sql(table, engine)
df_all[table] = df
df_list.append(df)
table_columns[table] = list(df.columns)
###Output
_____no_output_____
###Markdown
Get all student/user assignments
Merge submissions, assignments, taggings, tags
###Code
user_submissions = df_all['submissions'] \
.merge(df_all['assignments'], left_on='assignment_id', right_on='id', suffixes=('_submissions', '_assignments')) \
.merge(df_all['taggings'], left_on='assignment_id', right_on='taggable_id', suffixes=('_sub_ass', '_taggings')) \
.merge(df_all['tags'], left_on='tag_id', right_on='id', suffixes=('_sub_ass_tag', '_tags'))
user_submissions.drop(['statement', 'output', 'language', 'created_at_submissions', 'updated_at_submissions', 'is_checked', 'body', 'url',
'created_at_assignments', 'updated_at_assignments', 'pass', 'fail', 'tagger_type', 'created_at', 'total', 'practice_id',
'assignment_id', 'user_id_assignments', 'code', 'points_assignments', 'tagger_id', 'tag_id', 'source', 'input_size',
'taggable_type', 'approved', 'function_name', 'context', 'id_sub_ass_tag', 'taggings_count', 'is_allowed'], axis=1, inplace=True)
user_submissions.columns
user_submissions.head()
user_submissions['name'] = user_submissions['name'].str.replace('/',',')
plt.figure(figsize=(20, 10))
user_submissions.groupby(['name']).count()['id_tags'].plot(kind='bar')
plt.xticks(rotation='30')
plt.title('All assignments submitted by all users by tags')
plt.xlabel('Name of tags')
plt.ylabel('No of Assignments')
plt.show()
user_submissions_dummy = pd.concat([user_submissions, user_submissions['name'].str.get_dummies()], axis=1)
user_submissions_dummy.to_html('user_sub_dummy.html')
user_submissions_dummy.head()
user_submissions_dummy_pivot = user_submissions_dummy.pivot_table(values='time_in_seconds', index='title', columns='user_id_submissions', fill_value=0)
user_submissions_dummy_matrix = csr_matrix(user_submissions_dummy_pivot.values)
user_submissions_dummy_pivot.to_csv('user_sub_pivot.csv')
model_knn = NearestNeighbors(metric = 'cosine', algorithm = 'brute')
model_knn.fit(user_submissions_dummy_matrix)
filename = 'finalized_knn_model.dat'
pickle.dump(model_knn, open(filename, 'wb'))
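# Sketch (not part of the original notebook): the persisted model and pivot table written
# above could later be reloaded to serve recommendations elsewhere, e.g.
#   model_knn = pickle.load(open('finalized_knn_model.dat', 'rb'))
#   pivot = pd.read_csv('user_sub_pivot.csv', index_col='title')
#   distances, indices = model_knn.kneighbors(pivot.iloc[0, :].values.reshape(1, -1), n_neighbors=6)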
query_index = np.random.choice(user_submissions_dummy_pivot.shape[0])
distances, indices = model_knn.kneighbors(user_submissions_dummy_pivot.iloc[query_index, :].values.reshape(1, -1), n_neighbors = 6)
query_index
distances
indices
for i in range(0, len(distances.flatten())):
if i == 0:
print('Recommendations for: \n\n\033[1m{0}:\n'.format(user_submissions_dummy_pivot.index[query_index]))
else:
print('\033[0m{0}. {1}, ---------- with correlation of {2}'.format(i, user_submissions_dummy_pivot.index[indices.flatten()[i]], distances.flatten()[i]))
###Output
Recommendations for:
[1mString or Not:
[0m1. Filter Products based on price range , ---------- with correlation of 0.16816905143513017
[0m2. string or not, ---------- with correlation of 0.2957079165846416
[0m3. Chop a string, ---------- with correlation of 0.37955213091706785
[0m4. Century From Year, ---------- with correlation of 0.4462464101081678
[0m5. Pet Name Generator, ---------- with correlation of 0.4696795042425018
|
Implementación/warehouse.ipynb | ###Markdown
Considerations for the implementation:
* Constant speed is assumed for all pickers
* The time to pick a product is assumed to be the same for all pickers and for all products
###Code
cant_pickeadores=4 # number of pickers in the distribution center.
tiempo_pickeo=20 # picking time per product (assumed constant).
velocidad=30 # speed at which pickers move through the center.
###Output
_____no_output_____
###Markdown
Data extraction:
###Code
ots = pd.read_csv("data/ot.csv",sep=',')
ots=ots.sort_values('Pedido')
xlsx_file = "data/layout.xlsx"
layout = pd.read_excel(xlsx_file, sheet_name="layout")
adyacencia=pd.read_excel(xlsx_file, sheet_name="adyacencia")
# orders:
ots.head()
# data about each product/aisle
layout.head()
# adjacency of each aisle:
adyacencia.head()
# number of orders
cant_ordenes=len(ots['Pedido'].unique())
cant_ordenes
# array with all the aisles:
pasillos=layout['pasillo'].unique()
pasillos
# list with the orders sorted
ordenes_enum=ots["Pedido"].unique()
ordenes_enum=list(ordenes_enum)
ordenes_enum
# array with each product requested per order
lista=[]
for x in range(cant_ordenes):
obj=list(ots.loc[ots["Pedido"]==x+1]["Cod.Prod"])
lista.append(obj)
ordenes=np.array(lista)
ordenes
# build a list where each element represents the order assigned to each picker
# (initially 0 because they have no orders assigned)
pickeadores=[]
for x in range(cant_pickeadores):
pickeadores.append(0)
pickeadores
# initialize a dataframe that indicates whether an aisle is occupied and which picker is in it (picker 0 when nobody is there).
l=[]
for pasillo in pasillos:
l.append([pasillo,False, 0])
pasillo_bool=pd.DataFrame(l,
columns=["pasillo","ocupado","pickeador"])
pasillo_bool.head()
# create a list where the element at position i represents the current aisle of picker i+1.
pasillo_act=[]
for x in range(cant_pickeadores):
pasillo_act.append(0)
pasillo_act
###Output
_____no_output_____
###Markdown
Functions:
###Code
# Function to assign orders to the pickers; receives a list such as the "pickeadores" list created above
def asignar_ordenes(pickeadores):
for x in range(cant_pickeadores):
if pickeadores[x]==0:
if len(ordenes_enum)>0:
pickeadores[x]=ordenes_enum[0]
ordenes_enum.remove(ordenes_enum[0])
return pickeadores
# Function that returns True if a route exists from aisle p2 to aisle p1, and False otherwise
def existe_ruta(p1,p2):
ady=adyacencia.loc[adyacencia["pasillo"]==p2]
if p1==p2:
return True
for x in range(ady.shape[0]):
if ady.iloc[x,1]==p1:
return True
else:
if ady.iloc[x,1]==-1:
return False
else:
return existe_ruta(p1,ady.iloc[x,1])
# function that returns the next aisle to visit on the route from one aisle to another.
def next_move(pasillo_a_ir,pasillo_act):
ady=adyacencia.loc[adyacencia["pasillo"]==pasillo_act]
for x in range(ady.shape[0]):
if existe_ruta(pasillo_a_ir,ady.iloc[x,1]):
return ady.iloc[x,1]
# Function that returns a list with the route to follow to reach an aisle from the aisle where a picker
# currently is.
def ruta(pasillo_a_ir,pasillo_act):
x=pasillo_act
l=[]
while x!=pasillo_a_ir:
x=next_move(pasillo_a_ir,x)
l.append(x)
return l
# Function used to sort the items within an order by aisle.
def ordenar(orden):
lista_ordenada=[]
for pasillo in pasillos:
for x in range(len(orden)):
if pasillo==objeto_en_pasillo(orden[x]):
lista_ordenada.append(orden[x])
return lista_ordenada
# Function that receives a product code and returns the aisle where it is located
def objeto_en_pasillo(cod_prod):
l=layout.loc[layout["Producto"]==cod_prod]
pasillo=l.iloc[0,2]
return pasillo
# Function that receives an aisle and returns True if it is occupied, or False otherwise.
def ocupado(pasillo):
p=pasillo_bool.loc[pasillo_bool["pasillo"]==pasillo]
return p.iloc[0,1]
###Output
_____no_output_____
###Markdown
Final function:
###Code
def tiempo(ordenes,pickeadores):
print("cant pickeadores:", cant_pickeadores)
for x in range(len(ordenes)):
ordenes[x]=ordenar(ordenes[x])
t_recorrido=0
t_wait=0
t_pick=0
contador=[]
for y in range(cant_pickeadores):
contador.append(0)
while len(ordenes_enum)>0:
asignar_ordenes(pickeadores)
for x in range(cant_pickeadores):
if pickeadores[x] !=0:
if contador[x]==-1:
next_move=-1
else:
next_move=objeto_en_pasillo(ordenes[pickeadores[x]-1][contador[x]])
if len(ruta(next_move,pasillo_act[x]))==0:
if pasillo_act[x]==-1:
contador[x]=0
pickeadores[x]=0
pasillo_act[x]=0
else:
t_pick+=tiempo_pickeo
if contador[x]+1<len(ordenes[pickeadores[x]-1]):
contador[x]+=1
else:
contador[x]=-1
else:
route=ruta(next_move,pasillo_act[x])
if route[0]==-1:
pasillo_bool.iloc[pasillo_bool.loc[pasillo_bool["pasillo"]==pasillo_act[x]].index,1]=False
pasillo_bool.iloc[pasillo_bool.loc[pasillo_bool["pasillo"]==pasillo_act[x]].index,2]=0
t_recorrido+=1/velocidad
pasillo_act[x]=route[0]
pasillo_bool.iloc[pasillo_bool.loc[pasillo_bool["pasillo"]==route[0]].index,1]=True
pasillo_bool.iloc[pasillo_bool.loc[pasillo_bool["pasillo"]==route[0]].index,2]=x+1
else:
if ocupado(route[0]):
t_wait+=tiempo_pickeo
else:
if pasillo_act[x]>0:
largo_pasillo=layout.loc[layout["pasillo"]==pasillo_act[x]].iloc[0,9]
t_recorrido+=largo_pasillo/velocidad
distancia_df=adyacencia.loc[(adyacencia["pasillo"]==pasillo_act[x]) & (adyacencia["adyacente"]==route[0])]
distancia=distancia_df.iloc[0,2]
pasillo_bool.iloc[pasillo_bool.loc[pasillo_bool["pasillo"]==pasillo_act[x]].index,1]=False
pasillo_bool.iloc[pasillo_bool.loc[pasillo_bool["pasillo"]==pasillo_act[x]].index,2]=0
t_recorrido+=distancia/velocidad
pasillo_act[x]=route[0]
pasillo_bool.iloc[pasillo_bool.loc[pasillo_bool["pasillo"]==route[0]].index,1]=True
pasillo_bool.iloc[pasillo_bool.loc[pasillo_bool["pasillo"]==route[0]].index,2]=x+1
print ("tiempo de espera:", t_wait)
print("tiempo en hacer recorridos:", t_recorrido)
print("tiempo de pickeo:", t_pick)
return t_recorrido+t_wait+t_pick
tiempo(ordenes,pickeadores)
###Output
cant pickeadores: 4
tiempo de espera: 500
tiempo en hacer recorridos: 66001.2333333335
tiempo de pickeo: 1360
###Markdown
Problems:
* When the number of pickers chosen is greater than or equal to the total number of orders (20 in the test case), an error arises in which the product picking time is 0, which tells us that the pickers would not pick any products.
* The `ruta` function does not return the shortest route between 2 aisles; for example, it returns routes of 6 aisles when the target aisle is 2 moves away (a possible fix is sketched after the cell below).
###Code
# Note that aisle 51 is adjacent to aisle 0, so a single move is enough to get there; instead, the function returns:
ruta(51,0)
###Output
_____no_output_____ |
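###Markdown
A possible fix for the second problem above: a breadth-first search over the adjacency table returns a route with the minimum number of moves. This is only a sketch and is not part of the original implementation; it assumes the `adyacencia` DataFrame columns `pasillo` and `adyacente` used throughout, with `-1` as the "no neighbour" sentinel:

```python
from collections import deque

def ruta_bfs(pasillo_a_ir, pasillo_act, adyacencia):
    # build an adjacency list, skipping the -1 sentinel
    vecinos = {}
    for _, fila in adyacencia.iterrows():
        if fila['adyacente'] != -1:
            vecinos.setdefault(fila['pasillo'], []).append(fila['adyacente'])
    # breadth-first search, remembering each aisle's predecessor
    previo = {pasillo_act: None}
    cola = deque([pasillo_act])
    while cola:
        actual = cola.popleft()
        if actual == pasillo_a_ir:
            break
        for v in vecinos.get(actual, []):
            if v not in previo:
                previo[v] = actual
                cola.append(v)
    if pasillo_a_ir not in previo:
        return []  # no route found
    # rebuild the path, excluding the starting aisle (same convention as ruta())
    camino = []
    paso = pasillo_a_ir
    while paso != pasillo_act:
        camino.append(paso)
        paso = previo[paso]
    return camino[::-1]

# e.g. ruta_bfs(51, 0, adyacencia) should return the single-move route [51]
```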
resources/Day4/notebooks/Unsupervised_Learning_Example.ipynb | ###Markdown
Importing required libraries
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN
###Output
_____no_output_____
###Markdown
Reading in the data
###Code
path = "data/preprocessed_data/csv_without_empty_cols_and_merged/"
path = "data/preprocessed_data/csv_without_empty_cols_and_merged/"
df_tosa = pd.read_csv(path+"cof_tosa_aligner.csv", index_col=0, na_values=[-9999, -99999])
df_dctest = pd.read_csv(path+"cof_dctest.csv", index_col=0, na_values=[-9999, -99999], low_memory=False)
df_final = pd.read_csv(path+"final.csv", index_col=0, na_values=[-9999, -99999])
###Output
_____no_output_____
###Markdown
Taking care of OsaSerialNum duplicates (DCTest)
###Code
# df_dctest.sort_values(by=["OsaSerialNum", "TestTimeStamp"], inplace=True)
# df_dctest.drop_duplicates(subset="OsaSerialNum", keep="last", inplace=True)
###Output
_____no_output_____
###Markdown
Taking care of Containername duplicates (Tosa)
###Code
# df_tosa.sort_values(by=["Containername", "TestTimeStamp"], inplace=True)
# df_tosa.drop_duplicates(subset="Containername", keep="last", inplace=True)
###Output
_____no_output_____
###Markdown
Dropping duplicates and almost empty rows from the output data
###Code
df_final.drop_duplicates(inplace=True)
# Instead of removing all rows that do not reach a certain threshold of non-missing values, like we
# did in the other excercise before, I just dropped all rows that miss the value for "ModuleTxCalPower_dBm".
# This will remove the columns where all the output data are missing (but possibly also some other rows, if we
# are unlucky).
df_final.dropna(subset=["ModuleTxCalPower_dBm"], inplace=True)
###Output
_____no_output_____
###Markdown
Merging the DataFrames
###Code
df_input = df_tosa.merge(df_dctest,
left_on="Containername",
right_on="OsaSerialNum",
suffixes=("_tosa", "_dctest"))
print(df_tosa.shape)
print(df_dctest.shape)
print(df_input.shape)
df_traceability = pd.read_excel("data/original_data/Linkage Map and Traceability.xlsx", "Traceability")
df_input = df_input.merge(df_traceability,
on="Component")
df_input.shape
df = df_input.merge(df_final,
left_on="ToContainer",
right_on="ModuleSerialNum",
suffixes=("","_final"))
###Output
_____no_output_____
###Markdown
Removing ID columns
###Code
ids = list(df.filter(like="Id").columns)
ids += list(df.filter(like="HistoryID").columns)
print(ids)
df.drop(columns=ids, inplace=True)
###Output
['CDOTypeId_tosa', 'DataCollectionDefId_tosa', 'dc_COF_TOSA_AlignerHistoryId', 'HistoryId_tosa', 'HistoryMainlineId_tosa', 'TxnId_tosa', 'CDOTypeId_dctest', 'DataCollectionDefId_dctest', 'dc_COF_DCTestHistoryId', 'HistoryId_dctest', 'HistoryMainlineId_dctest', 'TxnId_dctest', 'dce_HistoryID_tosa', 'ParentHistoryID_tosa', 'dce_HistoryID_dctest', 'ParentHistoryID_dctest']
###Markdown
Removing columns without variance
###Code
# Dropping columns without variance
dropped_columns = []
for c in df.columns:
count_of_unique_values = len(df[c].dropna().unique())
if count_of_unique_values == 1:
df.drop(columns=c, inplace=True)
dropped_columns.append(c)
print("Columns without variance: %s" % dropped_columns)
df.shape
df.to_csv("data_for_modelling.csv")
###Output
_____no_output_____
###Markdown
Preparing input
###Code
# Getting all values of ErrAbbr that occurred at least 100 times
df = df[df.groupby("ErrAbbr")["ErrAbbr"].transform(len) > 100]
# Filtering for all rows that do not contain "PASS"
df = df[(df["ErrAbbr"] != "PASS")]
df["ErrAbbr"].value_counts()
X = df.drop(columns=df_final.columns)
###Output
_____no_output_____
###Markdown
Estimating missing input data
###Code
# Handling missing values
X.dropna(axis=1, how="all", inplace=True)
X.fillna(X.median(), inplace=True)
###Output
_____no_output_____
###Markdown
Transforming the categorical variables
###Code
categorical_variables = ["TestStation_tosa", "TestStation_dctest", "Site", "Att_bins", "SlotNum"]
# Transform categorical_variables to binary dummy variables
X = pd.get_dummies(X, columns=categorical_variables)
# Remove all remaining categorical columns
X = X.select_dtypes(exclude=object)
with pd.option_context('display.max_columns', 150):
display(X.head())
###Output
_____no_output_____
###Markdown
t-SNE (Dimensionality reduction)
###Code
from sklearn.manifold import TSNE
X_2d = TSNE(perplexity=25, verbose=10).fit_transform(X)
df_tsne = pd.DataFrame(X_2d, columns=["x", "y"])
df_tsne.head()
###Output
_____no_output_____
###Markdown
__Our new two-dimensional data has the same number of rows as our original data!__
###Code
df_tsne.shape
X.shape
df_tsne["ErrAbbr"] = df["ErrAbbr"].values
df_tsne.head()
%matplotlib qt
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure()
sns.scatterplot(data=df_tsne, x="x", y="y", hue="ErrAbbr")
#plt.scatter(df_tsne["x"], df_tsne["y"])
plt.show()
###Output
_____no_output_____
###Markdown
Clustering
###Code
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Clustering with KMeans
###Code
kmeans = KMeans()
# fitting the data to the clustering algorithm
kmeans.fit(X)
# getting the cluster labels
labels = kmeans.labels_
pd.Series(labels).value_counts()
###Output
_____no_output_____
###Markdown
Calculating value counts of the ErrAbbr per cluster
###Code
df_cluster_results = X
df_cluster_results["cluster"] = labels
df_cluster_results["ErrAbbr"] = df["ErrAbbr"]
df_cluster_results.groupby("cluster")["ErrAbbr"].value_counts()
###Output
_____no_output_____ |
1. Linear Regression_advertisement.ipynb | ###Markdown
Simple linear regression
###Code
X = data['TV'].values.reshape(-1,1)
y = data['sales'].values.reshape(-1,1)
reg = LinearRegression()
reg.fit(X, y)
print(reg.coef_[0][0])
print(reg.intercept_[0])
print("The linear model is: Y = {:.5} + {:.5}X".format(reg.intercept_[0], reg.coef_[0][0]))
reg.score(X,y)
predictions = reg.predict(X)
#print(predictions)
plt.figure(figsize=(16, 8))
plt.scatter(
data['TV'],
data['sales'],
c='black'
)
plt.plot(
data['TV'],
predictions,
c='blue',
linewidth=2
)
plt.xlabel("Money spent on TV ads ($)")
plt.ylabel("Sales ($)")
plt.show()
X = data['TV']
y = data['sales']
X2 = sm.add_constant(X)
est = sm.OLS(y, X2)
est2 = est.fit()
print(est2.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: sales R-squared: 0.612
Model: OLS Adj. R-squared: 0.610
Method: Least Squares F-statistic: 312.1
Date: Sat, 29 Jun 2019 Prob (F-statistic): 1.47e-42
Time: 16:21:50 Log-Likelihood: -519.05
No. Observations: 200 AIC: 1042.
Df Residuals: 198 BIC: 1049.
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 7.0326 0.458 15.360 0.000 6.130 7.935
TV 0.0475 0.003 17.668 0.000 0.042 0.053
==============================================================================
Omnibus: 0.531 Durbin-Watson: 1.935
Prob(Omnibus): 0.767 Jarque-Bera (JB): 0.669
Skew: -0.089 Prob(JB): 0.716
Kurtosis: 2.779 Cond. No. 338.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Multiple linear regression
###Code
Xs = data.drop(['sales', 'Unnamed: 0'], axis=1)
y = data['sales']
reg = LinearRegression()
reg.fit(Xs, y)
print(reg.coef_)
print(reg.intercept_)
reg.score(Xs, y)
X = np.column_stack((data['TV'], data['radio'], data['newspaper']))
y = data['sales']
X2 = sm.add_constant(X)
est = sm.OLS(y, X2)
est2 = est.fit()
print(est2.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: sales R-squared: 0.897
Model: OLS Adj. R-squared: 0.896
Method: Least Squares F-statistic: 570.3
Date: Sat, 29 Jun 2019 Prob (F-statistic): 1.58e-96
Time: 16:21:55 Log-Likelihood: -386.18
No. Observations: 200 AIC: 780.4
Df Residuals: 196 BIC: 793.6
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 2.9389 0.312 9.422 0.000 2.324 3.554
x1 0.0458 0.001 32.809 0.000 0.043 0.049
x2 0.1885 0.009 21.893 0.000 0.172 0.206
x3 -0.0010 0.006 -0.177 0.860 -0.013 0.011
==============================================================================
Omnibus: 60.414 Durbin-Watson: 2.084
Prob(Omnibus): 0.000 Jarque-Bera (JB): 151.241
Skew: -1.327 Prob(JB): 1.44e-33
Kurtosis: 6.332 Cond. No. 454.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
|
examples/PlotPowerSpectra.ipynb | ###Markdown
Plot Power Spectra
Power spectra are used to analyze the average frequency content across signals in an RF image such as that produced by a transducer. This example relies on `scipy` and `matplotlib` to generate the power spectral density plot for sample RF data.
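As background (a standard definition, not something specific to this dataset), for a signal $x_n$ of length $N$ sampled at $f_s$, the periodogram estimate of the power spectral density is

$$\hat{P}_{xx}(f) = \frac{1}{f_s N}\left|\sum_{n=0}^{N-1} x_n\, e^{-j 2\pi f n / f_s}\right|^{2},$$

with `scipy.signal.periodogram` additionally applying the chosen window ('hamming' here), detrending, and the normalization needed so the result is a one-sided density in V**2/Hz.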
###Code
import sys
!{sys.executable} -m pip install itk matplotlib scipy numpy
import os
import itk
import matplotlib.pyplot as plt
from scipy import signal
import numpy as np
###Output
_____no_output_____
###Markdown
Load Data
###Code
RF_IMAGE_PATH = './MouseLiverRF.mha'
SAMPLING_FREQUENCY = 40e6 # Hz
assert os.path.exists(RF_IMAGE_PATH)
rf_image = itk.imread(RF_IMAGE_PATH)
rf_array = itk.array_view_from_image(rf_image)
print(rf_array.shape)
###Output
(4, 128, 1536)
###Markdown
Plot Power Spectral Density
###Code
plt.figure(1, figsize=(10,8))
for frame_idx in range(rf_image.shape[0]):
arr = rf_array[frame_idx,:,:]
freq, Pxx = signal.periodogram(arr,
SAMPLING_FREQUENCY,
window='hamming',
detrend='linear',
axis=1)
# Take mean spectra across lateral dimension
Pxx = np.mean(Pxx,0)
plt.semilogy(freq, Pxx, label=frame_idx)
plt.title('RF Image Power Spectra')
plt.xlabel('Frequency [Hz]')
plt.ylabel('Power spectral density [V**2/Hz]')
plt.legend(loc='upper right')
os.makedirs('./Output',exist_ok=True)
plt.savefig('./Output/PowerSpectralDensity.png',dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Plot Power Spectra
Power spectra are used to analyze the average frequency content across signals in an RF image such as that produced by a transducer. This example relies on `scipy` and `matplotlib` to generate the power spectral density plot for sample RF data.
###Code
import sys
!{sys.executable} -m pip install itk matplotlib scipy numpy
import os
import itk
import matplotlib.pyplot as plt
from scipy import signal
import numpy as np
###Output
_____no_output_____
###Markdown
Load Data
###Code
RF_IMAGE_PATH = './MouseLiverRF.mha'
SAMPLING_FREQUENCY = 60e6 # Hz
assert os.path.exists(RF_IMAGE_PATH)
rf_image = itk.imread(RF_IMAGE_PATH)
rf_array = itk.array_view_from_image(rf_image)
print(rf_array.shape)
###Output
(4, 128, 1536)
###Markdown
Plot Power Spectral Density
###Code
plt.figure(1, figsize=(10,8))
for frame_idx in range(rf_image.shape[0]):
arr = rf_array[frame_idx,:,:]
freq, Pxx = signal.periodogram(arr,
SAMPLING_FREQUENCY,
window='hamming',
detrend='linear',
axis=1)
# Take mean spectra across lateral dimension
Pxx = np.mean(Pxx,0)
plt.semilogy([f / 1e6 for f in freq], Pxx, label=frame_idx)
plt.title('RF Image Power Spectra')
plt.xlabel('Frequency [MHz]')
plt.ylabel('Power spectral density [V**2/Hz]')
plt.legend([f'Frame {idx}' for idx in range(rf_image.shape[0])],loc='upper right')
os.makedirs('./Output',exist_ok=True)
plt.savefig('./Output/PowerSpectralDensity.png',dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Plot Power Spectra
Power spectra are used to analyze the average frequency content across signals in an RF image such as that produced by a transducer. This example relies on `scipy` and `matplotlib` to generate the power spectral density plot for sample RF data.
###Code
import sys
!"{sys.executable}" -m pip install itk matplotlib scipy numpy
import os
import itk
import matplotlib.pyplot as plt
from scipy import signal
import numpy as np
###Output
_____no_output_____
###Markdown
Load Data
###Code
RF_IMAGE_PATH = './MouseLiverRF.mha'
SAMPLING_FREQUENCY = 60e6 # Hz
assert os.path.exists(RF_IMAGE_PATH)
rf_image = itk.imread(RF_IMAGE_PATH)
rf_array = itk.array_view_from_image(rf_image)
print(rf_array.shape)
###Output
(4, 128, 1536)
###Markdown
Plot Power Spectral Density
###Code
plt.figure(1, figsize=(10,8))
for frame_idx in range(rf_image.shape[0]):
arr = rf_array[frame_idx,:,:]
freq, Pxx = signal.periodogram(arr,
SAMPLING_FREQUENCY,
window='hamming',
detrend='linear',
axis=1)
# Take mean spectra across lateral dimension
Pxx = np.mean(Pxx,0)
plt.semilogy([f / 1e6 for f in freq], Pxx, label=frame_idx)
plt.title('RF Image Power Spectra')
plt.xlabel('Frequency [MHz]')
plt.ylabel('Power spectral density [V**2/Hz]')
plt.legend([f'Frame {idx}' for idx in range(rf_image.shape[0])],loc='upper right')
os.makedirs('./Output',exist_ok=True)
plt.savefig('./Output/PowerSpectralDensity.png',dpi=300)
plt.show()
###Output
_____no_output_____ |
azure_control_eval.ipynb | ###Markdown
Control Script
RUN TRAINING SCRIPT
###Code
from azureml.core import Workspace
from azureml.core import Experiment
from azureml.core import Environment
from azureml.core import ScriptRunConfig
from azureml.core import Dataset
# set up dataset
ws = Workspace.from_config()
datastore = ws.get_default_datastore()
dataset = Dataset.File.from_files(path=(datastore, 'datasets/rgbd_dataset'))
# embedding_model = Dataset.File.from_files(path=(datastore, 'embedding_models/bit_m-r50x1_1'))
# set up experiment
experiment_name = 'evaluation'
experiment = Experiment(workspace=ws, name=experiment_name)
model_type = 'rgb-depth'
model_name = 'rgbd_model_run29'
model_weights_name = 'rgbd_model_run29_epoch3_weights.h5'
compute_name = 'gpu-compute-3dcv'
config = ScriptRunConfig(
source_directory='./src',
script='eval.py',
compute_target=compute_name,
arguments=[
'--data_path', dataset.as_named_input('input').as_mount(),
'--model_type', model_type,
'--model_name', model_name,
'--model_weights_name', model_weights_name
#'--model_weights_path', models_azure.as_named_input('models').as_mount,
],
)
# # set up 3dcv environment
# env_path = '.azureml/3dcv-env.yml'
# env = Environment.from_conda_specification(
# name='3dcv',
# file_path=env_path
# )
# env.register(workspace=ws);
# load env
env = ws.environments['3dcv']
env.docker.enabled = True
env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04'
# # curated tensorflow environment
# curated_env_name = 'AzureML-TensorFlow-2.2-GPU'
# env = Environment.get(workspace=ws, name=curated_env_name)
# env.docker.enabled = True
# env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04'
config.run_config.environment = env
run = experiment.submit(config)
aml_url = run.get_portal_url()
print("Submitted to compute cluster:\n")
print(aml_url)
run
# specify show_output to True for a verbose log
run.wait_for_completion (show_output=True)
run.get_file_names()
# register model
model = run.register_model(model_name='rgbd_model', model_path='outputs/checkpoints/epoch3_weights.h5')
print(model.name, model.id, model.version, sep='\t')
###Output
_____no_output_____ |
Getting-higher-res-images.ipynb | ###Markdown
Getting higher resolution versions of photos from the State Library of South Australia online collection interface
The State Library of South Australia makes a fabulous [collection of out of copyright photographs](https://www.slsa.sa.gov.au/photographs) available online. However, while you can zoom in using their collection interface to examine the details of many of these images, the download option seems to provide copies at a much lower resolution. This limits their usefulness for many types of research.
This notebook simply takes the tiled versions of the images which are displayed in the collection interface and stitches them together to create higher resolution versions.
For example, the version of [this photograph of Clement Wragge](https://collections.slsa.sa.gov.au/resource/B+43122) provided by the 'Download' button is 1024 x 787 pixels. The version created by this notebook is 5785 × 4337 pixels.
Note that images available for download from the SLSA's [digital collections](https://digital.collections.slsa.sa.gov.au/) seem to be at a much higher resolution so don't need any special tricks to use.
Setting things up
Run these cells using **Shift+Enter** to get the code ready to use.
###Code
import requests
from PIL import Image
from io import BytesIO
from slugify import slugify
import re
from IPython.display import display, HTML, FileLink
def get_json(url):
'''
Get the JSON file that includes information about the zoom levels and tiles.
'''
json_url = '{}/{}'.format(url.rstrip('/'), 'tiles.json')
response = requests.get(json_url)
data = response.json()
return data
def get_highest_level(data):
'''
Find the highest level of zoom -- ie the biggest version of the image -- in the JSON data.
'''
for level in data['levels']:
if level['name'] == 'z0':
highest_zoom = level
break
return highest_zoom
def download_image(url):
'''
Provide a url of a digitised photos, and get back the largest possible version for download.
Gets information about available zoom levels and tiles, then stitches the tiles together.
'''
# Get data about levels
data = get_json(url)
# Get the highest zoom level
level = get_highest_level(data)
# Dimensions of the biggest image
w = level['width']
h = level['height']
# Create an empty image to paste the tiles into
img = Image.new('RGB', (w, h))
# Loop through all the tiles
for index, tile in enumerate(level['tiles']):
# Get a tile and open as an image
response = requests.get(tile['url'])
tile_img = Image.open(BytesIO(response.content))
# When we've got the first tile, grab the height and width
if index == 0:
tile_w, tile_h = tile_img.size
# The tile data includes an x and y index value indicating the position of the tile
# To calculate it's coordinates, just multiply the index by the width/height
x = tile['x'] * tile_w
y = tile['y'] * tile_h
# Paste the tile into the big image using the x/y coords to define the top left corner
img.paste(tile_img, box=(x, y))
id = re.search(r'resource\/(.*)', url).group(1)
# Create file name that includes the image ID info
image_name = 'slsa-{}.jpg'.format(slugify(id))
# Save and display the image
img.save(image_name)
display(FileLink(image_name))
display(HTML('<img src="{}">'.format(image_name)))
###Output
_____no_output_____
###Markdown
Supply the URL of the photo
Just paste the URL of the photo you want to download between the quotes in the cell below and run the cell using **Shift+Enter**. Once it has been created, the final image will be displayed below with a link for easy download.
###Code
download_image('https://collections.slsa.sa.gov.au/resource/B+43122')
###Output
_____no_output_____ |
battery-state-estimation/results/dataset_a/soc/lstm_soc_performance.ipynb | ###Markdown
Main notebook for battery state estimation
###Code
import numpy as np
import pandas as pd
import scipy.io
import math
import os
import ntpath
import sys
import logging
import time
import sys
from importlib import reload
import plotly.graph_objects as go
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam
from keras.utils import np_utils
from keras.layers import LSTM, Embedding, RepeatVector, TimeDistributed, Masking
from keras.callbacks import EarlyStopping, ModelCheckpoint, LambdaCallback
IS_COLAB = False
if IS_COLAB:
from google.colab import drive
drive.mount('/content/drive')
data_path = "/content/drive/My Drive/battery-state-estimation/battery-state-estimation/"
else:
data_path = "../"
sys.path.append(data_path)
from data_processing.dataset_a import DatasetA, CycleCols
from data_processing.model_data_handler import ModelDataHandler
###Output
Using TensorFlow backend.
###Markdown
Config logging
###Code
reload(logging)
logging.basicConfig(format='%(asctime)s [%(levelname)s]: %(message)s', level=logging.DEBUG, datefmt='%Y/%m/%d %H:%M:%S')
###Output
_____no_output_____
###Markdown
Load Data
Initialize the data object
Load the cycle and capacity data into memory based on the specified chunk size
###Code
dataset = DatasetA(
test_types=['S'],
chunk_size=1000000,
lines=[37, 40],
charge_line=37,
discharge_line=40,
base_path=data_path
)
###Output
2020/12/02 17:47:14 [DEBUG]: Start loading data with lines: [37, 40], types: ['S'] and chunksize: 1000000...
2020/12/02 17:47:40 [DEBUG]: Finish loading data.
2020/12/02 17:47:40 [INFO]: Loaded raw dataset A data with cycle row count: 6181604 and capacity row count: 16548
2020/12/02 17:47:40 [DEBUG]: Start cleaning cycle raw data...
2020/12/02 17:47:45 [DEBUG]: Finish cleaning cycle raw data.
2020/12/02 17:47:45 [INFO]: Removed 5 rows of abnormal cycle raw data.
2020/12/02 17:47:45 [DEBUG]: Start cleaning capacity raw data...
2020/12/02 17:47:45 [DEBUG]: Finish cleaning capacity raw data.
2020/12/02 17:47:45 [INFO]: Removed 1 rows of abnormal capacity raw data.
2020/12/02 17:47:45 [DEBUG]: Start assigning charging raw data...
2020/12/02 17:47:46 [DEBUG]: Finish assigning charging raw data.
2020/12/02 17:47:46 [INFO]: [Charging] cycle raw count: 4773746, capacity raw count: 8278
2020/12/02 17:47:46 [DEBUG]: Start assigning discharging raw data...
2020/12/02 17:47:46 [DEBUG]: Finish assigning discharging raw data.
2020/12/02 17:47:46 [INFO]: [Discharging] cycle raw count: 1407853, capacity raw count: 8269
###Markdown
Determine the training and testing names
Prepare the training and testing data for the model data handler to load the model input and output data.
###Code
train_data_test_names = [
'000-DM-3.0-4019-S',
'001-DM-3.0-4019-S',
'002-DM-3.0-4019-S',
'006-EE-2.85-0820-S',
'007-EE-2.85-0820-S',
'018-DP-2.00-1320-S',
'019-DP-2.00-1320-S',
'036-DP-2.00-1720-S',
'037-DP-2.00-1720-S',
'038-DP-2.00-2420-S',
'040-DM-4.00-2320-S',
'042-EE-2.85-0820-S',
'045-BE-2.75-2019-S'
]
test_data_test_names = [
'003-DM-3.0-4019-S',
'008-EE-2.85-0820-S',
'039-DP-2.00-2420-S',
'041-DM-4.00-2320-S',
]
dataset.prepare_data(train_data_test_names, test_data_test_names)
###Output
2020/12/02 17:47:46 [DEBUG]: Start preparing data for training: ['000-DM-3.0-4019-S', '001-DM-3.0-4019-S', '002-DM-3.0-4019-S', '006-EE-2.85-0820-S', '007-EE-2.85-0820-S', '018-DP-2.00-1320-S', '019-DP-2.00-1320-S', '036-DP-2.00-1720-S', '037-DP-2.00-1720-S', '038-DP-2.00-2420-S', '040-DM-4.00-2320-S', '042-EE-2.85-0820-S', '045-BE-2.75-2019-S'] and testing: ['003-DM-3.0-4019-S', '008-EE-2.85-0820-S', '039-DP-2.00-2420-S', '041-DM-4.00-2320-S']...
2020/12/02 17:47:57 [DEBUG]: Finish getting training and testing charge data.
2020/12/02 17:48:06 [DEBUG]: Finish getting training and testing discharge data.
2020/12/02 17:48:06 [DEBUG]: Finish cleaning training and testing charge data.
2020/12/02 17:48:06 [DEBUG]: Finish cleaning training and testing discharge data.
2020/12/02 17:48:07 [DEBUG]: Finish adding training and testing discharge SOC parameters.
2020/12/02 17:48:12 [DEBUG]: Finish adding training and testing discharge SOH parameters.
2020/12/02 17:48:12 [DEBUG]: Finish preparing data.
2020/12/02 17:48:12 [INFO]: Prepared training charge cycle data: (6536,), capacity data: (6536, 15)
2020/12/02 17:48:12 [INFO]: Prepared testing charge cycle data: (1728,), capacity data: (1728, 15)
2020/12/02 17:48:12 [INFO]: Prepared training discharge cycle data: (6536,), capacity data: (6536, 20)
2020/12/02 17:48:12 [INFO]: Prepared testing discharge cycle data: (1728,), capacity data: (1728, 20)
###Markdown
Initialize the model data handler
The model data handler will be used to get the model input and output data for further training purposes.
###Code
mdh = ModelDataHandler(dataset, [
CycleCols.VOLTAGE,
CycleCols.CURRENT,
CycleCols.TEMPERATURE
])
###Output
_____no_output_____
###Markdown
Data loading
###Code
train_x, train_y, test_x, test_y = mdh.get_discharge_whole_cycle(soh = False, output_capacity = True)
train_y = mdh.keep_only_capacity(train_y, is_multiple_output = True)
test_y = mdh.keep_only_capacity(test_y, is_multiple_output = True)
experiment_name = '2020-12-02-12-28-17_lstm_soc'
history = pd.read_csv(data_path + 'results/trained_model/%s_history.csv' % experiment_name)
model = keras.models.load_model(data_path + 'results/trained_model/%s.h5' % experiment_name)
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm (LSTM) (None, 287, 256) 266240
_________________________________________________________________
lstm_1 (LSTM) (None, 287, 256) 525312
_________________________________________________________________
lstm_2 (LSTM) (None, 287, 128) 197120
_________________________________________________________________
dense (Dense) (None, 287, 64) 8256
_________________________________________________________________
dense_1 (Dense) (None, 287, 1) 65
=================================================================
Total params: 996,993
Trainable params: 996,993
Non-trainable params: 0
_________________________________________________________________
###Markdown
Prediction time
###Code
prediction_time = []
for x in test_x:
start = time.time()
model.predict(x.reshape(1, x.shape[0], x.shape[1]))
end = time.time()
prediction_time.append(end - start)
prediction_time = np.array(prediction_time)
print('(Prediction) Average time: {} s, std: {} s, max: {} s, min: {} s'.format(
prediction_time.mean(), prediction_time.std(), prediction_time.max(), prediction_time.min()))
###Output
(Prediction) Average time: 0.07088805155621634 s, std: 0.009244483929890438 s, max: 0.27985405921936035 s, min: 0.06615209579467773 s
|
lab4/lab4_image_analysis.ipynb | ###Markdown
Lab 4. Image Analysis
Images are provided. Use the thermal and multispectral data shared with you to develop/report the following:
a. Boxplot showing the temperature data with respect to varieties and treatment, with clear figure title/caption and x- and y-axis labelled, with proper tick marks. (3)
b. Boxplot showing the GNDVI data with respect to varieties and treatment, with clear figure title/caption and x- and y-axis labelled, with proper tick marks. (3)
c. Add discussion of 125-200 words discussing the data alongside some background information from literature and reference citations. (5 + 2)
###Code
%load_ext blackcellmagic
%matplotlib inline
import cv2 as cv
import os
import pathlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from src.imageprocessing import ImageProcessing
from src.roi import PlotCoordinates
import itertools
import string
import matplotlib.lines as lines
from sklearn import linear_model
from matplotlib import patches
import seaborn as sns
def stylize_axes(ax, title="", xlabel="", ylabel="", xticks=[], yticks=[], xticklabels=[], yticklabels=[]):
"""Customize axes spines, title, labels, ticks, and ticklabels."""
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.xaxis.set_tick_params(top='off', direction='out', width=1)
ax.yaxis.set_tick_params(right='off', direction='out', width=1)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xticks(xticks)
ax.set_yticks(yticks)
ax.set_xticklabels(xticklabels)
ax.set_yticklabels(yticklabels)
def create_imshow_subplots(nrows: int, ncols: int, figsize: tuple, img_list: list, exportname: str):
# Create figure
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=figsize, facecolor="white")
# Show the image data in subplots
ax = ax.flatten()
for i, a in enumerate(ax):
stylize_axes(a)
a.text(60, 300, string.ascii_lowercase[i], size=15, weight="bold", color="white")
masked_img = img_list[i] * masks[i]
a.imshow(masked_img, interpolation='none', cmap="gray")
plt.savefig(f"{exportname}.png")
plt.tight_layout()
plt.show()
improc = ImageProcessing()
pc = PlotCoordinates()
# import the images
# WT, ABAL, and OST22D Drought and Control
# ['data\\raw\\ABAL_Control.JPG', 'data\\raw\\ABAL_Drought.JPG', 'data\\raw\\OST22D_Control.JPG', 'data\\raw\\OST22D_Drought.JPG', 'data\\raw\\WT-Control.JPG', 'data\\raw\\WT_Drought.JPG']
# y, x, z
imagepaths = list(pathlib.Path('./data/raw/').glob('*.JPG'))
images = [cv.imread(str(imagepath), cv.IMREAD_UNCHANGED)[900:2800, 1250:4500, :] for imagepath in imagepaths]
print(images[0].shape)
# (NIR, G, B) order
images_ngb = [cv.cvtColor(image, cv.COLOR_BGR2RGB) for image in images]
# # hsv images
# images_hsv = [cv.cvtColor(image, cv.COLOR_BGR2HSV) for image in images]
# grayscale images
images_grayscale = [cv.cvtColor(image, cv.COLOR_BGR2GRAY) for image in images]
print(imagepaths)
type_dict = {0:{"gt": "ABAL", "treatment":"control"},
1:{"gt": "ABAL", "treatment":"drought"},
2:{"gt": "OST22D", "treatment":"control"},
3:{"gt": "OST22D", "treatment":"drought"},
4:{"gt": "WT", "treatment":"control"},
5:{"gt": "WT", "treatment":"drought"},
}
def calibration(
img_list: list, coord_list: list, figsize: tuple, panel_size: tuple = (50, 50)
):
""" calculates the correction factor given the region of the reflectance panel """
# Create figure
fig, ax = plt.subplots(nrows=3, ncols=2, figsize=figsize, facecolor="white")
# Show the image data in subplots
ax = ax.flatten()
for i, a in enumerate(ax):
y, x = coord_list[i]
height, width = panel_size
img = img_list[i]
stylize_axes(a)
plot_boundary = patches.Rectangle(
xy=(x, y),
width=width,
height=height,
edgecolor="r",
lw=3,
facecolor="r",
alpha=0.5,
)
a.add_patch(plot_boundary)
a.text(
60, 300, string.ascii_lowercase[i], size=16, weight="bold", color="black"
)
a.imshow(img, interpolation="none", cmap="gray")
# calculate band values
print(img.shape)
nir_mean = np.mean(img[y:y+height, x:x+width, 0])
g_mean = np.mean(img[y:y+height, x:x+width, 1])
b_mean = np.mean(img[y:y+height, x:x+width, 2])
print(nir_mean, g_mean, b_mean)
plt.savefig("calibration.png")
plt.tight_layout()
plt.show()
coord_list = [
(950, 100),
(900, 300),
(800, 200),
(850, 350),
(750, 325),
(925, 325),
]
calibration(images_ngb, coord_list, figsize=(5, 5), panel_size=(100, 100))
# segment image from background using hsv colorspace
# explore hsv space
def create_mask(img):
""" takes an rgb image, converts it to hsv, creates and returns mask"""
image = cv.cvtColor(img, cv.COLOR_RGB2HSV)
sat = (50, 255)
val = (100, 255)
# NIR mask upper hue range
lower_nir1 = np.array([9, sat[0], val[0]])
higher_nir1 = np.array([22, sat[1], val[1]])
mask1 = cv.inRange(image, lower_nir1, higher_nir1)
# NIR mask lower hue range
lower_nir2 = np.array([180, sat[0], val[0]])
higher_nir2 = np.array([180, sat[1], val[1]])
mask2 = cv.inRange(image, lower_nir2, higher_nir2)
# segment out the background, close enough
# seg_img = cv.bitwise_and(image, image, mask=(mask1 + mask2))
# seg_img_nbg = cv.cvtColor(cv.cvtColor(seg_img, cv.COLOR_HSV2BGR), cv.COLOR_BGR2RGB)
mask = mask1 + mask2
kernel = np.ones((5, 5), np.uint8)
mask_morph = mask.copy()
mask_morph = cv.erode(mask_morph, kernel, iterations=2)
mask_morph = cv.dilate(mask_morph, kernel, iterations=2)
# mask_morph = cv.erode(mask_morph, kernel, iterations=1)
# mask_morph = cv.morphologyEx(mask_morph, cv.MORPH_CLOSE, kernel)
# mask_morph = cv.morphologyEx(mask_morph, cv.MORPH_OPEN, kernel)
mask_bool = np.where(mask_morph > 0, True, False)
return mask_bool
masks = [create_mask(image) for image in images_ngb]
plt.imshow(masks[3], cmap='gray')
# plt.imshow(images_ngb[1][:,:,0])
# calculate gndvi
gndvi_imgs = [improc.calc_spec_idx(combo=(0, 2), bands=img) for img in images_ngb]
plt.imshow(gndvi_imgs[0], cmap="gray")
# calculate mean GNDVI for each plant
df = pd.DataFrame({"gt": [], "treatment": [], "mean_GNDVI": []})
half_y = int(.5 * (gndvi_imgs[0].shape[0]))
roi_coords = list(itertools.product([0, half_y], [700, 1500, 2300]))
roi_shape = (half_y, 800)
plot_id_list = pc.plot_boundaries(
img=gndvi_imgs[5],
plot_coords=roi_coords,
roi_coords=roi_coords,
plot_shape=roi_shape,
roi_shape=roi_shape,
)
gndvi_means = np.stack([
[improc.ndsi_mean(arr=gndvi_img, origin=origin, shape=roi_shape, mask=mask) for origin in roi_coords] for gndvi_img, mask in zip(gndvi_imgs, masks)
])
# thermal images
#Boxplot showing the GNDVI data with respect to varieties and treatment,
# with clear figure title/caption and x-and y-axis labelled, with proper tick marks. (3)
df = pd.DataFrame({"gt": ["aba1-6", "aba1-6", "ost2-2D", "ost2-2D", "WT", "WT"], "treatment": ["control", "drought","control", "drought","control", "drought"]})
df2 = pd.DataFrame(gndvi_means)
gndvi_df = pd.concat([df, df2], axis=1)
gndvi_df = gndvi_df.melt(id_vars=["gt", "treatment"],var_name="pos_id", value_name="GNDVI_mean")
gndvi_df.to_csv("gndvi_mean.csv")
print(gndvi_df.head())
#create_imshow_subplots(nrows=3, ncols=2, figsize=(8, 8), img_list=gndvi_imgs, exportname="gndvi_fig2")
###Output
gt treatment pos_id GNDVI_mean
0 aba1-6 control 0 0.529112
1 aba1-6 drought 0 0.472980
2 ost2-2D control 0 0.459869
3 ost2-2D drought 0 0.332478
4 WT control 0 0.517260
###Markdown
a. Boxplot showing the temperature data with respect to varieties and treatment, with clear figure title/caption and x-and y-axis labelled, with proper tick marks. (3 )
###Code
# boxplot for thermal readings
tdf = pd.read_csv("thermal.csv")
#WT, aba1-6, and ost2-2D
sns.boxplot(x = tdf['gt'], y=tdf['temp_C'], hue=tdf['treatment'], palette=["lightblue", "orange"])
plt.ylabel("Temperature (°C)", size=16)
plt.xlabel("Plant genotype", size=16)
plt.xticks(size=14)
plt.yticks(size=14)
plt.legend(prop={"size":14})
plt.savefig("boxplot_thermal.png")
###Output
_____no_output_____
###Markdown
b. Boxplot showing the GNDVI data with respect to varieties and treatment, with clear figure title/caption and x- and y-axis labelled, with proper tick marks. (3)
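For reference, GNDVI here is the Green Normalized Difference Vegetation Index computed from the near-infrared and green bands,

$$\mathrm{GNDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{G}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{G}}},$$

with higher values generally indicating greener, less stressed canopy.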
###Code
# boxplot for GNDVI
sns.boxplot(x = gndvi_df['gt'], y=gndvi_df['GNDVI_mean'], hue=gndvi_df['treatment'], palette=["lightblue", "orange"])
plt.ylabel("GNDVI", size=16)
plt.xlabel("Plant genotype", size=16)
plt.xticks(size=14)
plt.yticks(size=14)
plt.legend(prop={"size":14})
plt.savefig("boxplot_gndvi.png")
###Output
_____no_output_____
###Markdown
c. Add discussion of 125-200 words discussing the data alongside some background information from literature and reference citations. (5 + 2)
c. Steps showing the computation of the parameters listed above. (9)
d. Add discussion of 125-200 words discussing the data alongside some background information from literature and reference citations. (5 + 2)
Blackbody calibration data
1. Use the blackbody calibration data to estimate the following as discussed in Topic 1, assuming that the overall range of the system is 15ºC to 40ºC:
- accuracy limit,
- non-linearity, and
- repeatability errors
a. Include tables with clear table title/caption and column titles of raw and deviation data. (4, 2 each)
repeatability error is +-0.02%
*** accuracy = +-2.4%
*** nonlinearity: -0.1, or 1% of span
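As a sketch of the definitions used in the computations below (these mirror the code later in the notebook, with span = 40 − 15 = 25 °C, and are stated as working assumptions rather than authoritative definitions):

$$\text{accuracy} \approx \pm\,\frac{d_{\max} - d_{\min}}{2\,\text{span}} \times 100\%, \qquad \text{repeatability} \approx \pm\,\frac{\max_T \lvert r_{T,\,\text{cycle 1}} - r_{T,\,\text{cycle 2}} \rvert}{2\,\text{span}} \times 100\%,$$

where $d$ are the deviations from the best-fit line and $r_T$ the repeated readings at each true temperature $T$; non-linearity is read from the end-point deviations of the same best-fit line.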
###Code
def create_dev_df(df):
up_df = df.query('direction == "up"').groupby("temp_C").mean().reset_index().rename(columns={'deviation': 'avg_up'})
# # average of down readings for each value
down_df = df.query('direction == "down"').groupby("temp_C").mean().reset_index().rename(columns={'deviation': 'avg_down'})
# # average of up/down readings for each value
updown_df = df.groupby("temp_C").mean().reset_index().rename(columns={'deviation': 'avg_updown'})
# merge the df together
df["avg_up"] = up_df.avg_up
df["avg_down"] = down_df.avg_down
df["avg_updown"] = updown_df.avg_updown
return df
# average of up readings for each value
def dev_stats(df, dev_df):
# min/max deviation
min_dev = round(dev_df.iloc[:,1:].min().min(), 2)
max_dev = round(dev_df.iloc[:,1:].max().max(), 2)
range = round(max_dev - min_dev, 2)
span = 40 - 15
plus_acc = (max_dev / span)
min_acc = (round(min_dev / span, 3))
print("deviation stats:")
# accuracy limit
print(f"% output span: (+{max_dev} °C or {min_dev}; wrt span of measure)")
print(f"range: {range}")
print(f"span: {span}")
print(f"*** accuracy = +{plus_acc}% and {min_acc}% of output span")
# non-linearity
nonlin_up = df.query('direction == "up"').query('temp_C == 40').deviation
nonlin_down = df.query('direction == "down"').query('temp_C == 15').deviation
nonlin_y = (nonlin_down, nonlin_up)
nonlin_x = (15, 40)
print(f"*** nonlinearity: {nonlin_y}")
wtf = (max_dev + min_dev) * 100 / span
print(wtf)
# repeatability
#maximum variation of successive measurements for the same input value from same direction
# % of output span
# random error
# maximum variation = 0.15 at 1 lb, 0.17 * 100 / 6.45 = 2.6% (+-1.3%)
return {"nonlin": (nonlin_y, nonlin_x), "span": span, "range": range, "dev": (min_dev, max_dev), "accuracy": (plus_acc, min_acc)}
def create_deviation_plot(df_list, stats, y_limits:list, figsize:tuple, label_loc:tuple):
# create the deviation plot
fig, ax = plt.subplots(nrows=1, ncols=len(df_list), figsize=figsize, facecolor="white")
fig_letter_x, fig_letter_y = label_loc
for i, df in enumerate(df_list):
nonlin_y, nonlin_x = stats[i].get("nonlin")
min_dev, max_dev = stats[i].get("dev")
ax[i].set_ylim(y_limits)
ax[i].scatter(df["temp_C"], df["avg_up"], color= "orange", facecolors='none', label="mean up", marker='s')
ax[i].scatter(df["temp_C"], df["avg_updown"], color = 'black', label="mean up/down")
ax[i].scatter(df["temp_C"], df["avg_down"], color= "blue", facecolors='none', label="mean down", marker='s')
ax[i].plot([14,41], [0,0], color="black", linewidth=0.75)
ax[i].plot(nonlin_x, nonlin_y, color="black", linewidth=2, label="non-linearity")
ax[i].plot([14,41], [min_dev,min_dev], '-.', color="black", linewidth=0.5, label="accuracy limits")
ax[i].plot([14,41], [max_dev,max_dev], '-.', color="black", linewidth=0.5)
ax[i].set_ylabel('Average deviation (°C)', size=16)
ax[i].set_xlabel('True temperature (°C)', size=16)
ax[i].legend(bbox_to_anchor=(.9, 0.3), loc='upper right', ncol=1)
ax[i].text(fig_letter_x, fig_letter_y, string.ascii_lowercase[i], size=20, weight="bold", color="black")
fig.tight_layout(pad=3.0)
plt.show()
# import data
df_base = pd.read_csv("101821_lab4_FLIRcalibration.csv", skiprows=3)
df_base = df_base[["direction", "temp_C", "cycle_1", "cycle_2"]]
df_base = df_base.rename(columns={'cycle_1': '1', 'cycle_2':'2'})
df_base = pd.melt(df_base, id_vars = ['temp_C', 'direction'], value_vars=list(df_base.columns)[2:],
var_name="cycle", value_name='reading')
print(df_base.shape)
print(df_base.head())
#### Ridge regression model
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedKFold
from sklearn.linear_model import Ridge
# pandas to a numpy array
data = df_base.values
# reshape, add a dimension so it works in the model
X, y = data[:, 0].reshape(-1, 1), data[:,-1].reshape(-1, 1)
# define Ridge regression linear model
lm = Ridge(alpha=1.0)
# define model evaluation method
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
#evaluate model
scores = np.absolute(cross_val_score(lm, X, y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=1))
print(f"Mean MAE: {np.mean(scores)} ({np.std(scores)})")
# fit model
lm.fit(X, y)
# predictions
y_pred = lm.predict(X)
print(f"coeff: {lm.coef_}, intercept: {lm.intercept_}")
# add to dataframe
best_fit = pd.DataFrame({"temp_C":X.flatten(), "best_fit": y_pred.flatten()})
# join column from best_fit into df
df = df_base
df["best_fit"] = best_fit.best_fit
print(f"df_base + base_fit = {df.shape}\n{df.head()}")
# create deviation
# calculate deviation by subtracting the line value from the reading values for ALL data
df["deviation"] = df.reading - df.best_fit
#dev_df = create_dev_df(df)
print(df.head())
temp1 = df.query('direction == "up"').groupby("temp_C").mean().reset_index().rename(columns={'deviation': 'avg_up'})
temp2 = df.query('direction == "down"').groupby("temp_C").mean().reset_index().rename(columns={'deviation': 'avg_down'})
temp3 = df.groupby("temp_C").mean().reset_index().rename(columns={'deviation': 'avg_updown'})
dev_df = df.copy()
dev_df.head()
dev_df["avg_up"] = temp1.avg_up
dev_df["avg_down"] = temp2.avg_down
dev_df["avg_updown"] = temp3.avg_updown
# # # average of down readings for each value
# down_df = df.query('direction == "down"').groupby("temp_C").mean().reset_index().rename(columns={'deviation': 'avg_down'})
# # # average of up/down readings for each value
# updown_df = df.groupby("temp_C").mean().reset_index().rename(columns={'deviation': 'avg_updown'})
# # merge the df together
# df["avg_up"] = up_df.avg_up
# df["avg_down"] = down_df.avg_down
# df["avg_updown"] = updown_df.avg_updown
# return df
print(dev_df)
print(-0.229273 + 0.075127)
print((40 + 0.87) -(15 + -0.33))
# Approach 1: ONE deviation plot alongside a calibration plot
# calculate deviation plot stats
# # min/max deviation
min_dev = round(dev_df.deviation.min().min(), 2)
max_dev = round(dev_df.deviation.max().max(), 2)
range = round(max_dev - min_dev, 2)
span = 40 - 15
plus_acc = (max_dev / span)
min_acc = (round(min_dev / span, 3))
print("deviation stats:")
# accuracy limit
print(f"% output span: (+{max_dev} °C or {min_dev}; wrt span of measure)")
print(f"range: {range}")
print(f"span: {span}")
accuracy = ((max_dev - min_dev) / span * 100)
print(f"*** accuracy = +-{accuracy/2}%")
# non-linearity
nonlin_up = dev_df.query('direction == "up"').query('temp_C == 40').query('avg_up > 0')
nonlin_down = dev_df.query('direction == "down"').query('temp_C == 15')["avg_down"]
nonlin_y = (nonlin_down, nonlin_up)
nonlin_x = (15, 40)
nonlinearity = round(((nonlin_up + nonlin_down) / (40 - 15)), 2)
print(nonlin_y)
print(f"*** nonlinearity:{nonlinearity}, or {nonlinearity * 100}% of span")
# repeatability
#maximum variation of successive measurements for the same input value from same direction
# % of output span
# random error
# maximum variation = 0.15 at 1 lb, 0.17 * 100 / 6.45 = 2.6% (+-1.3%)
print(dev_df)
print(-0.15/(40-15)*100)
# plot calibration and deviation plots in one figure
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(14, 6), facecolor="white")
fig_letter_x, fig_letter_y = (13, .85)
# plot calibration curve with best fit line
ax[0].scatter(dev_df.temp_C, dev_df.reading, marker='8', color='red', facecolors='none')
ax[0].set_ylabel('Camera reading (°C)', size=16)
ax[0].set_xlabel('True temperature (°C)', size=16)
ax[0].plot(dev_df.temp_C, dev_df.best_fit, color = "black")
ax[0].annotate(text=(f"best-fit line:\ncoeff: {round(float(lm.coef_),2)}, \nintercept: {round(float(lm.intercept_),2)}"), xy=(32,17), size=12)
ax[0].annotate(text=("a"), xy=(15,38), size=20, weight="bold")
# ax[1].text(fig_letter_x, fig_letter_y, "b", size=20, weight="bold", color="black")
# and now for the deviation plot
# ax[1].set_ylim(-1, 1)
# ax[1].scatter(dev_df["temp_C"], dev_df["avg_up"], color= "orange", facecolors='none', label="mean up", marker='s')
# ax[1].scatter(dev_df["temp_C"], dev_df["avg_updown"], color = 'black', label="mean up/down")
# ax[1].scatter(dev_df["temp_C"], dev_df["avg_down"], color= "blue", facecolors='none', label="mean down", marker='s')
ax[1].scatter(dev_df["temp_C"], dev_df["deviation"], color= "red", facecolors='none', label="deviation", marker='s')
ax[1].plot([14,41], [0,0], color="black", linewidth=0.75)
# ax[1].plot(nonlin_x, nonlin_y, color="black", linewidth=2, label="non-linearity")
ax[1].plot([14,41], [min_dev,min_dev], '-.', color="black", linewidth=0.5, label="accuracy limits")
ax[1].plot([14,41], [max_dev,max_dev], '-.', color="black", linewidth=0.5)
ax[1].set_ylabel('Deviation (°C)', size=16)
ax[1].set_xlabel('True temperature (°C)', size=16)
ax[1].legend(bbox_to_anchor=(1, .9), loc='upper right', ncol=1)
ax[1].text(fig_letter_x, fig_letter_y, "", size=20, weight="bold", color="black")
fig.tight_layout(pad=3.0)
plt.show()
# TWO deviation plots
repeatability = dev_df.query('cycle == "1"')["reading"].values - dev_df.query('cycle == "2"')["reading"].values
temp_C = dev_df.query('cycle == "1"')["temp_C"].values
print(temp_C, repeatability)
t1 = [15, 20, 25, 30, 35, 40]
r1 = [0.9, 0.9, 1, 0.2, 0.1, 0.1]
max_r = np.max(r1)
r_pct = np.max(r1) / (40 - 15) * 100
print(f"repeatability error % is +-{r_pct/2}%")
rdf = pd.DataFrame({"temp_C": t1, "repeat_err": r1})
plt.scatter(rdf.temp_C, rdf.repeat_err)
plt.show()
# df1.to_csv("cycle_1.csv")
# df2.to_csv("cycle_2.csv")
# dev_list = [create_dev_df(df1), create_dev_df(df2)]
# stats = [dev_stats(df1, dev_list[0]), dev_stats(df2, dev_list[1])]
# create_deviation_plot(dev_list, stats, y_limits=[-1, 1], figsize=(14,6), label_loc=(13, .85))
# # create figure with deviation plot and calibration plot
# def create_deviation_plot(df_list, stats, y_limits:list, figsize:tuple, label_loc:tuple):
# # create the deviation plot
# fig, ax = plt.subplots(nrows=1, ncols=2, figsize=figsize, facecolor="white")
# fig_letter_x, fig_letter_y = label_loc
# for i, df in enumerate(df_list):
# nonlin_y, nonlin_x = stats[i].get("nonlin")
# min_dev, max_dev = stats[i].get("dev")
# ax[i].set_ylim(y_limits)
# ax[i].scatter(df["temp_C"], df["avg_up"], color= "orange", facecolors='none', label="mean up", marker='s')
# ax[i].scatter(df["temp_C"], df["avg_updown"], color = 'black', label="mean up/down")
# ax[i].scatter(df["temp_C"], df["avg_down"], color= "blue", facecolors='none', label="mean down", marker='s')
# ax[i].plot([14,41], [0,0], color="black", linewidth=0.75)
# ax[i].plot(nonlin_x, nonlin_y, color="black", linewidth=2, label="non-linearity")
# ax[i].plot([14,41], [min_dev,min_dev], '-.', color="black", linewidth=0.5, label="accuracy limits")
# ax[i].plot([14,41], [max_dev,max_dev], '-.', color="black", linewidth=0.5)
# ax[i].set_ylabel('Average deviation (°C)', size=16)
# ax[i].set_xlabel('True temperature (°C)', size=16)
# ax[i].legend(bbox_to_anchor=(.9, 0.3), loc='upper right', ncol=1)
# ax[i].text(fig_letter_x, fig_letter_y, string.ascii_lowercase[i], size=20, weight="bold", color="black")
# fig.tight_layout(pad=3.0)
# plt.show()
###Output
_____no_output_____ |
House.ipynb | ###Markdown
House price predictions Ask a home buyer to describe their dream house, and they probably won't begin with the height of the basement ceiling or the proximity to an east-west railroad. But this playground competition's dataset proves that much more influences price negotiations than the number of bedrooms or a white-picket fence. Predict the sales price for each house. For each Id in the test set, you must predict the value of the SalePrice variable. Step 1 - Load the Dataset
###Code
#Import necessary libraries
import pandas as pd
import numpy as np
#Load the data and have a look at the sample
house_data = pd.read_csv("House_price_data.csv")
house_data.head()
###Output
_____no_output_____
###Markdown
Step 2 - Data Cleaning
###Code
# get the number of missing data points per column
house_data.isnull().sum()
#Drop NA values
house_data_nadrop = house_data.dropna(axis=1)
house_data_nadrop.head()
# how much data did we lose?
print("Columns in original dataset: %d \n" % house_data.shape[1])
print("Columns with na's dropped: %d" % house_data_nadrop.shape[1])
###Output
Columns in original dataset: 81
Columns with na's dropped: 62
###Markdown
We need to check whether we have any object-type values in the dataset. You will get an error if you try to plug these variables into most machine learning models in Python without "encoding" them first.
###Code
#Check object type value
house_data_nadrop.dtypes
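# A quick illustrative sketch of the "encoding" mentioned above (an alternative to dropping
# the object-type columns): one-hot encode every object column with pandas.
# Purely an illustration; the next cells choose to drop those columns instead.
encoded_example = pd.get_dummies(house_data_nadrop, drop_first=True)
print(encoded_example.shape)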
###Output
_____no_output_____
###Markdown
Looks like we have got quite a few object-type features. Looking at the data description, it doesn't seem they are worth keeping in the file for predictions.
###Code
house_data_nadrop = house_data_nadrop.select_dtypes(exclude=['object'])
#A lot object type values, let encode these categorial data
#house_data_nadrop = pd.get_dummies(house_data_nadrop)
#house_data_nadrop.head(10)
house_data_nadrop.columns
###Output
_____no_output_____
###Markdown
Step 3 - Prepare data for modelling. Separate features and target values.
###Code
#Select SalePrice and store into target variable
target = house_data_nadrop['SalePrice']
#drop Saleprice and store rest of the data to features
features = house_data_nadrop.drop('SalePrice', axis = 1)
#Drop Id as we do not need it for modelling purpose
features = features.drop(['Id'], axis = 1)
#Split the data into train and validation
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=10)
###Output
_____no_output_____
###Markdown
Step 4 - Data modelling. Select a model according to your requirement or choice and fit the training data. Once fit, predict new values for the house price.
###Code
from sklearn.ensemble import RandomForestRegressor
#create the model
regressor_model = RandomForestRegressor(random_state=0)
#fit the model
regressor_model.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regressor_model.predict(X_test)
###Output
_____no_output_____
###Markdown
Step 5 - Check the accuracy of the model
###Code
from sklearn.metrics import r2_score
r2_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Is the model good enough? Let's check the data distribution.
###Code
import seaborn as sns
import matplotlib.pyplot as plt  # needed for plt.subplots below
colors = np.array('b g r c m y k'.split()) #Different colors for plotting
fig,axes = plt.subplots(nrows =17,ncols=2, sharey=True,figsize = (15,50))
plt.tight_layout()
row = 0
iteration = 0
for j in range(0,len(house_data_nadrop.columns[:-1])):
iteration+=1
if(j%2==0):
k = 0
else:
k = 1
sns.distplot(house_data_nadrop[house_data_nadrop.columns[j]],kde=False,hist_kws=dict(edgecolor="w", linewidth=2),
color = np.random.choice(colors) ,ax=axes[row][k])
if(iteration%2==0):
row+=1
plt.ylim(0,200)
###Output
_____no_output_____
###Markdown
Looks like data is not normally distributed. Should we try and normalize the data? or standardize it? Is there any correlation between features?
###Code
plt.figure(figsize= (10,10), dpi=100)
sns.heatmap(house_data_nadrop.corr())
###Output
_____no_output_____
###Markdown
From the above correlation plot it can be seen that SalePrice has a strong relationship with GarageArea, GarageCars, GrLivArea, TotalBsmtSF, 1stFlrSF, YearBuilt and OverallQual. Should we try and use only these features to predict the prices?
###Code
#Select SalePrice and store into target variable
target = house_data_nadrop['SalePrice']
#drop Saleprice and store rest of the data to features
features = house_data_nadrop[['GrLivArea', '1stFlrSF', 'YearBuilt', 'OverallQual']].copy()
features
#Split the data into train and validation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=10)
from sklearn.ensemble import RandomForestRegressor
#create the model
regressor_model = RandomForestRegressor(random_state=0)
#fit the model
regressor_model.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regressor_model.predict(X_test)
from sklearn.metrics import r2_score
r2_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
No change in the r2 score, so we might just stick with the previous data. Now let's make predictions for the test data and prepare the submission CSV file.
###Code
#Load the data and have a look at the sample
test_data = pd.read_csv("test.csv")
test_data.head()
# get the number of missing data points per column
#test_data.isnull().sum()
#Drop NA values
test_data_nadrop = test_data.dropna(axis=1)
test_data_nadrop.head()
test_data_nadrop.columns
features = test_data_nadrop[['GrLivArea', '1stFlrSF', 'YearBuilt', 'OverallQual']].copy()
features
test_pred = regressor_model.predict(features)
test_pred
columns = ['Id', 'SalePrice']
dataframe = pd.DataFrame(columns=columns)
dataframe["Id"] = test_data_nadrop["Id"]
dataframe["SalePrice"] = test_pred
dataframe.head()
dataframe.to_csv('submission.csv',index=False)
###Output
_____no_output_____
###Markdown
Submitted this file to Kaggle - didn't get a very good rank. Maybe we need to normalize or standardize our data.
###Code
#Import necessary libraries
import pandas as pd
import numpy as np
#Load the train data and have a look at the sample
house_data_train = pd.read_csv("House_price_data.csv")
#Load the test data and have a look at the sample
house_data_test = pd.read_csv("House_price_test_data.csv")
print(house_data_test.head())
print(house_data_train.head())
# get the number of missing data points per column
print(house_data_train.isnull().sum())
print(house_data_test.isnull().sum())
#Drop NA values
#house_data_train = house_data_train.dropna(axis=1)
#house_data_test = house_data_test.dropna(axis=1)
#house_data_train = pd.get_dummies(house_data_train)
#house_data_test = pd.get_dummies(house_data_test)
target = house_data_train['SalePrice']
#drop Saleprice and store rest of the data to features
features = house_data_train[['GrLivArea', '1stFlrSF', 'YearBuilt', 'OverallQual']].copy()
#features = house_data_train.drop('SalePrice', axis = 1)
# Imputer was removed from scikit-learn; SimpleImputer (column-wise mean imputation) is the modern replacement
from sklearn.impute import SimpleImputer
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
#features = pd.get_dummies(features)
features = imp.fit_transform(features)
# keep the Id column for the submission file and select the same four features used for training,
# otherwise the scaler/model fitted on four columns cannot transform the test set
test_ids = house_data_test["Id"]
house_data_test = house_data_test[['GrLivArea', '1stFlrSF', 'YearBuilt', 'OverallQual']].copy()
house_data_test = imp.transform(house_data_test)
#print(house_data_train.isnull().sum())
#house_data_nadrop.head(10)
#Select SalePrice and store into target variable
#target = house_data_train.iloc[:-1]
#drop Saleprice and store rest of the data to features
#features = house_data_train[['GrLivArea', '1stFlrSF', 'YearBuilt', 'OverallQual']].copy()
#features = house_data_train.drop('SalePrice', axis = 1)
#house_data_test_f = house_data_test[['GrLivArea', '1stFlrSF', 'YearBuilt', 'OverallQual']].copy()
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(features)
scaler
scaler.mean_
scaler.scale_
features = scaler.transform(features)
features
test_f = scaler.transform(house_data_test)
#Split the data into train and validation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=10)
from sklearn.ensemble import RandomForestRegressor
#create the model
regressor_model = RandomForestRegressor()
#fit the model
regressor_model.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regressor_model.predict(X_test)
from sklearn.metrics import r2_score
print(r2_score(y_test, y_pred))
test_pred = regressor_model.predict(test_f)
columns = ['Id', 'SalePrice']
dataframe = pd.DataFrame(columns=columns)
dataframe["Id"] = house_data_test["Id"]
dataframe["SalePrice"] = test_pred
dataframe.to_csv('submission.csv',index=False)
###Output
Id MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape \
0 1461 20 RH 80.0 11622 Pave NaN Reg
1 1462 20 RL 81.0 14267 Pave NaN IR1
2 1463 60 RL 74.0 13830 Pave NaN IR1
3 1464 60 RL 78.0 9978 Pave NaN IR1
4 1465 120 RL 43.0 5005 Pave NaN IR1
LandContour Utilities ... ScreenPorch PoolArea PoolQC Fence \
0 Lvl AllPub ... 120 0 NaN MnPrv
1 Lvl AllPub ... 0 0 NaN NaN
2 Lvl AllPub ... 0 0 NaN MnPrv
3 Lvl AllPub ... 0 0 NaN NaN
4 HLS AllPub ... 144 0 NaN NaN
MiscFeature MiscVal MoSold YrSold SaleType SaleCondition
0 NaN 0 6 2010 WD Normal
1 Gar2 12500 6 2010 WD Normal
2 NaN 0 3 2010 WD Normal
3 NaN 0 6 2010 WD Normal
4 NaN 0 1 2010 WD Normal
[5 rows x 80 columns]
Id MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape \
0 1 60 RL 65.0 8450 Pave NaN Reg
1 2 20 RL 80.0 9600 Pave NaN Reg
2 3 60 RL 68.0 11250 Pave NaN IR1
3 4 70 RL 60.0 9550 Pave NaN IR1
4 5 60 RL 84.0 14260 Pave NaN IR1
LandContour Utilities ... PoolArea PoolQC Fence MiscFeature MiscVal \
0 Lvl AllPub ... 0 NaN NaN NaN 0
1 Lvl AllPub ... 0 NaN NaN NaN 0
2 Lvl AllPub ... 0 NaN NaN NaN 0
3 Lvl AllPub ... 0 NaN NaN NaN 0
4 Lvl AllPub ... 0 NaN NaN NaN 0
MoSold YrSold SaleType SaleCondition SalePrice
0 2 2008 WD Normal 208500
1 5 2007 WD Normal 181500
2 9 2008 WD Normal 223500
3 2 2006 WD Abnorml 140000
4 12 2008 WD Normal 250000
[5 rows x 81 columns]
Id 0
MSSubClass 0
MSZoning 0
LotFrontage 259
LotArea 0
Street 0
Alley 1369
LotShape 0
LandContour 0
Utilities 0
LotConfig 0
LandSlope 0
Neighborhood 0
Condition1 0
Condition2 0
BldgType 0
HouseStyle 0
OverallQual 0
OverallCond 0
YearBuilt 0
YearRemodAdd 0
RoofStyle 0
RoofMatl 0
Exterior1st 0
Exterior2nd 0
MasVnrType 8
MasVnrArea 8
ExterQual 0
ExterCond 0
Foundation 0
...
BedroomAbvGr 0
KitchenAbvGr 0
KitchenQual 0
TotRmsAbvGrd 0
Functional 0
Fireplaces 0
FireplaceQu 690
GarageType 81
GarageYrBlt 81
GarageFinish 81
GarageCars 0
GarageArea 0
GarageQual 81
GarageCond 81
PavedDrive 0
WoodDeckSF 0
OpenPorchSF 0
EnclosedPorch 0
3SsnPorch 0
ScreenPorch 0
PoolArea 0
PoolQC 1453
Fence 1179
MiscFeature 1406
MiscVal 0
MoSold 0
YrSold 0
SaleType 0
SaleCondition 0
SalePrice 0
Length: 81, dtype: int64
Id 0
MSSubClass 0
MSZoning 4
LotFrontage 227
LotArea 0
Street 0
Alley 1352
LotShape 0
LandContour 0
Utilities 2
LotConfig 0
LandSlope 0
Neighborhood 0
Condition1 0
Condition2 0
BldgType 0
HouseStyle 0
OverallQual 0
OverallCond 0
YearBuilt 0
YearRemodAdd 0
RoofStyle 0
RoofMatl 0
Exterior1st 1
Exterior2nd 1
MasVnrType 16
MasVnrArea 15
ExterQual 0
ExterCond 0
Foundation 0
...
HalfBath 0
BedroomAbvGr 0
KitchenAbvGr 0
KitchenQual 1
TotRmsAbvGrd 0
Functional 2
Fireplaces 0
FireplaceQu 730
GarageType 76
GarageYrBlt 78
GarageFinish 78
GarageCars 1
GarageArea 1
GarageQual 78
GarageCond 78
PavedDrive 0
WoodDeckSF 0
OpenPorchSF 0
EnclosedPorch 0
3SsnPorch 0
ScreenPorch 0
PoolArea 0
PoolQC 1456
Fence 1169
MiscFeature 1408
MiscVal 0
MoSold 0
YrSold 0
SaleType 1
SaleCondition 0
Length: 80, dtype: int64
0.8693711126462146
###Markdown
Solution planning: * Final product (what will I deliver? Spreadsheet, ML model) * Email + 3 attachments * .csv file * Map with the requested filters * Dashboard with the requested filters * Tools (which tool will I use?) * Process (how will I do it?)
###Code
import pandas as pd
from geopy.geocoders import Nominatim
import plotly.express as px
import ipywidgets as widgets
from ipywidgets import fixed
from matplotlib import gridspec
from matplotlib import pyplot as plt
df1 = pd.read_csv(r'C:\Users\06564176686\repos\House_Rocket_EDA\data')
geolocator = Nominatim( user_agent='geoapiExercises' )
response = geolocator.reverse('45.5112, -122.2566651')
print( response.raw['address']['road'])
print( response.raw['address']['house_number'])
print( response.raw['address']['neighbourhood'])
print( response.raw['address']['city'])
print( response.raw['address']['county'])
print( response.raw['address']['state'])
df1 = pd.read_csv(r'C:\Users\06564176686\repos\House_Rocket_EDA\data.csv')
df1['road'] = 'NA'
df1['house_number'] = 'NA'
df1['city'] = 'NA'
df1['county'] = 'NA'
df1['state'] = 'NA'
# API request
geolocator = Nominatim( user_agent='geoapiExercises' )
response = geolocator.reverse('45.5112, -122.2566651')
df1.loc[0, 'house_number'] = response.raw['address']['house_number']
df1.loc[0, 'road'] = response.raw['address']['road']
df1.loc[0, 'neighbourhood'] = response.raw['address']['neighbourhood']
df1.loc[0, 'city'] = response.raw['address']['city']
df1.loc[0, 'county'] = response.raw['address']['county']
df1.loc[0, 'state'] = response.raw['address']['state']
###Output
_____no_output_____
###Markdown
Interactive filters on the map
###Code
df1 = pd.read_csv(r'C:\Users\06564176686\repos\House_Rocket_EDA\data')
houses = df1[['id', 'lat', 'long', 'price']].copy()
# populate houses with a 'level' column (0, 1, 2, 3) according to price
# build the map
fig = px.scatter_mapbox(houses,
lat='lat',
lon = 'long',
color='level',
size = 'price',
color_continuous_scale = px.colors.cyclical.IceFire,
size_max=15,
zoom=10)
fig.update_layout( mapbox_style='open-street-map')
fig.update_layout( height=600, margin={'r':0, 't':0, 'l':0, 'b':0})
fig.show()
# Adding interactive filters
style = {'description_width': 'initial'}
price_limit = widgets.IntSlider(
value = 540000,
min = 75000,
max = 77000000,
step = 1,
description= 'Maximum Price',
    disabled = False,
style = style)
waterfront_bar = widgets.Dropdown(
    options = df1['is_waterfront'].unique().tolist(),
    value='yes',
    description='Water View',
    disabled=False)
# Interactivity with the dashboard
# change date format
df = df1.copy()  # the date widgets and plots below operate on df
df['year'] = pd.to_datetime( df['date']).dt.strftime('%Y')
df['date'] = pd.to_datetime( df['date']).dt.strftime('%Y-%m-%d')
df['year_week'] = pd.to_datetime( df['date']).dt.strftime('%Y-%U')
# widgets to control data
data_limit = widgets.SelectionSlider(
    options = df['date'].sort_values().unique().tolist(),
value = '2014-12-01',
description = 'Disponivel',
continuous_update = False,
orientation= 'horizontal',
readout=True
)
def update_map(data, limit):
#filter data
df = data[data['date'] >= limit].copy()
fig = plt.figure(figsize= (21,12))
specs = gridspec.GridSpec(ncols=2, nrows=2, figure = fig)
ax1 = fig.add_subplot(specs[0,:])
ax2 = fig.add_subplot(specs[1,0])
ax3 = fig.add_subplot(specs[1,1])
    by_year = df[['id', 'year']].groupby('year').sum().reset_index()
ax1.bar( by_year['year'], by_year['id'])
    by_day = df[['id', 'date']].groupby('date').mean().reset_index()
ax2.plot( by_day['date'], by_day['id'])
ax2.set_title('title: Avg Price by Day')
by_week = df[['id', 'year_week']].groupby('year_week').sum().reset_index()
ax3.bar( by_week['year_week'], by_week['id'])
ax3.set_title('title: Avg Price by Week of Year')
plt.xticks( rotation=60)
widgets.interactive(update_map, data=fixed(df), limit = data_limit)
###Output
_____no_output_____ |
Lab06/Lab02/Python Pipeline.ipynb | ###Markdown
Creating a PipelineIn this exercise, you will implement a pipeline that includes multiple stages of *transformers* and *estimators* to prepare features and train a classification model. The resulting trained *PipelineModel* can then be used as a transformer to predict whether or not a flight will be late. Import Spark SQL and Spark ML LibrariesFirst, import the libraries you will need:
###Code
from pyspark.sql.types import *
from pyspark.sql.functions import *
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import VectorAssembler, StringIndexer, VectorIndexer, MinMaxScaler
###Output
_____no_output_____
###Markdown
Load Source DataThe data for this exercise is provided as a CSV file containing details of flights. The data includes specific characteristics (or *features*) for each flight, as well as a column indicating how many minutes late or early the flight arrived.You will load this data into a DataFrame and display it.
###Code
csv = spark.read.csv('wasb:///data/flights.csv', inferSchema=True, header=True)
csv.show()
###Output
_____no_output_____
###Markdown
Prepare the DataMost modeling begins with exhaustive exploration and preparation of the data. In this example, the data has been cleaned for you. You will simply select a subset of columns to use as *features* and create a Boolean *label* field named **label** with the value **1** for flights that arrived 15 minutes or more after the scheduled arrival time, or **0** if the flight was early or on-time.
###Code
data = csv.select("DayofMonth", "DayOfWeek", "Carrier", "OriginAirportID", "DestAirportID", "DepDelay", ((col("ArrDelay") > 15).cast("Double").alias("label")))
data.show()
###Output
_____no_output_____
###Markdown
Split the DataIt is common practice when building supervised machine learning models to split the source data, using some of it to train the model and reserving some to test the trained model. In this exercise, you will use 70% of the data for training, and reserve 30% for testing. In the testing data, the **label** column is renamed to **trueLabel** so you can use it later to compare predicted labels with known actual values.
###Code
splits = data.randomSplit([0.7, 0.3])
train = splits[0]
test = splits[1].withColumnRenamed("label", "trueLabel")
train_rows = train.count()
test_rows = test.count()
print "Training Rows:", train_rows, " Testing Rows:", test_rows
###Output
_____no_output_____
###Markdown
Define the Pipeline
A predictive model often requires multiple stages of feature preparation. For example, it is common when using some algorithms to distinguish between continuous features (which have a calculable numeric value) and categorical features (which are numeric representations of discrete categories). It is also common to *normalize* continuous numeric features to use a common scale (for example, by scaling all numbers to a proportional decimal value between 0 and 1).
A pipeline consists of a series of *transformer* and *estimator* stages that typically prepare a DataFrame for modeling and then train a predictive model. In this case, you will create a pipeline with seven stages:
- A **StringIndexer** estimator that converts string values to indexes for categorical features
- A **VectorAssembler** that combines categorical features into a single vector
- A **VectorIndexer** that creates indexes for a vector of categorical features
- A **VectorAssembler** that creates a vector of continuous numeric features
- A **MinMaxScaler** that normalizes continuous numeric features
- A **VectorAssembler** that creates a vector of categorical and continuous features
- A **DecisionTreeClassifier** that trains a classification model.
###Code
strIdx = StringIndexer(inputCol = "Carrier", outputCol = "CarrierIdx")
catVect = VectorAssembler(inputCols = ["CarrierIdx", "DayofMonth", "DayOfWeek", "OriginAirportID", "DestAirportID"], outputCol="catFeatures")
catIdx = VectorIndexer(inputCol = catVect.getOutputCol(), outputCol = "idxCatFeatures")
numVect = VectorAssembler(inputCols = ["DepDelay"], outputCol="numFeatures")
minMax = MinMaxScaler(inputCol = numVect.getOutputCol(), outputCol="normFeatures")
featVect = VectorAssembler(inputCols=["idxCatFeatures", "normFeatures"], outputCol="features")
dt = DecisionTreeClassifier(labelCol="label", featuresCol="features")
pipeline = Pipeline(stages=[strIdx, catVect, catIdx, numVect, minMax, featVect, dt])
###Output
_____no_output_____
###Markdown
Run the Pipeline as an EstimatorThe pipeline itself is an estimator, and so it has a **fit** method that you can call to run the pipeline on a specified DataFrame. In this case, you will run the pipeline on the training data to train a model.
###Code
pipelineModel = pipeline.fit(train)
print "Pipeline complete!"
###Output
_____no_output_____
###Markdown
Test the Pipeline ModelThe model produced by the pipeline is a transformer that will apply all of the stages in the pipeline to a specified DataFrame and apply the trained model to generate predictions. In this case, you will transform the **test** DataFrame using the pipeline to generate label predictions.
###Code
prediction = pipelineModel.transform(test)
predicted = prediction.select("features", "prediction", "trueLabel")
predicted.show(100, truncate=False)
###Output
_____no_output_____ |
.ipynb_checkpoints/NumPy Advanced-checkpoint.ipynb | ###Markdown
NumPy Mathematical Functionshttps://www.youtube.com/watch?v=PhEsZGkTx5s&ab_channel=VinothRathinam
###Code
import numpy as np
x = np.arange(1,10)
x
np.sin(x)
np.sin(8)
np.cos(x)
np.tan(x)
np.exp(x)
np.log(x)
np.log10(x)
np.sqrt(x)
np.square(x)
b = np.linspace(1,10,20)
b
np.around(b,decimals=2)
np.around(b,decimals=-2)
np.floor(b)
np.ceil(b)
np.round(b)
np.round(2.5)
###Output
_____no_output_____
###Markdown
Numpy statistical functionshttps://www.youtube.com/watch?v=9s-WRIg24AQ&ab_channel=jyostnabodapati**in 2D array**- axis 0 - column- axis 1 - row
###Code
a = np.array([[1,2,3],[4,5,6],[7,8,9]])
a
np.sum(a)
np.sum(a,axis=0)
np.sum(a,axis=1)
# max value in array
np.amax(a)
np.amax(a,axis=0)
np.amax(a,axis=1)
# mean value
np.mean(a)
np.mean(a,axis=0)
np.mean(a,axis=1)
# std deviation of array
np.std(a)
np.std(a,axis=0) #coulmn
np.std(a,axis=1)
# varience in array
np.var(a)
np.var(a,axis=0) #coulmn
np.var(a,axis=1)
np.average(a)
wt = [[1,2,3],[4,5,6],[7,8,9]]
np.average(a, weights=wt)
#30th percentile
np.percentile(a,30)
np.percentile(a,30,axis=0)
np.percentile(a,30,axis=1)
a
## difference b/w peak to peak values (np.ptp)
np.ptp(a)
np.ptp(a,axis=0)
np.ptp(a,axis=1)
###Output
_____no_output_____
###Markdown
NumPy Sorting and Searching
###Code
a = np.random.randint(1,9,8).reshape(2,4)
a
np.sort(a)
b = np.random.randint(1,9,8)
b
np.sort(b)
np.sort(a,axis=0)
np.sort(a,axis=1)
np.sort(a,axis=None)
b
sort_idx=np.argsort(b)
sort_idx
b[sort_idx]
b[sort_idx[::-1]]
a = np.array(['a','b','c','d','e'])
b = np.array([12, 90, 380, 12, 211])
(a,b)
idx = np.lexsort((a,b))
idx
for id in idx:
print(a[id],b[id])
# This function is used to find the location of the non-zero elements from the array.
b = np.array([12, 0, 380, 12, 211])
idx = np.nonzero(b)
idx
b[idx]
np.where(b>50)
b>50
###Output
_____no_output_____
###Markdown
NumPy Copies and Views
###Code
b
bb = b.view()
b.view(dtype=np.int8)
bb[1] = 77
bb
b
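# note: a view created with .view() shares the same underlying data buffer,
# which is why assigning bb[1] = 77 above also changed b.
# A .copy() (below) allocates its own buffer, so changes to bc leave b untouched.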
bc = b.copy()
bc[1] = 99
bc
b
b = a.view()
b.shape = 4,3;
b[0][2] = 777
print("\nOriginal array \n",a)
print("\nview\n",b)
###Output
Original array
[[ 1 2 777 4]
[ 9 0 2 3]
[ 1 2 3 19]]
view
[[ 1 2 777]
[ 4 9 0]
[ 2 3 1]
[ 2 3 19]]
###Markdown
NumPy Matrix Library
###Code
import numpy.matlib  # np.matlib must be imported explicitly before use
mat = np.matlib.empty((3,3)) #Return a new matrix of given shape and type, without initializing entries.
mat
mat = np.matlib.zeros((3,3)) #Return a matrix of given shape and type, filled with zeros.
mat
mat = np.matlib.ones((3,3)) #Return a matrix of given shape and type, filled with ones.
mat
mat = np.matlib.ones((3,3),dtype=int) #Return a matrix of ones with integer dtype.
mat
mat = np.matlib.eye(n=3,M=3) #Return a matrix with ones on the diagonal and zeros elsewhere.
mat
mat = np.matlib.eye(n=3,M=3,k=0) # Index of the diagonal
mat
mat = np.matlib.eye(n=3,M=3,k=-1) # Index of the diagonal
mat
mat = np.matlib.identity(5,dtype=int) # Returns the square identity matrix of given size.
mat
np.matlib.rand((3,4)) # Return a matrix of random values with given shape.
###Output
_____no_output_____
###Markdown
NumPy Linear Algebra
###Code
a = np.array([[100,200],[23,12]])
b = np.array([[10,20],[12,21]])
print(a)
print(b)
a.dot(b)
np.dot(a,b)
np.vdot(a,b)
a @ b
np.matmul(a,b)
I = np.identity(3)
I
a = np.arange(1,10).reshape(3,3)
a
I @ a
a @ I # multiplying with identity matrix
a * I # element to element multiplication
I * a
np.multiply(I,a)
a
np.linalg.det(a)
np.linalg.inv(a)
# Solve the system of equations ``3 * x0 + x1 = 9`` and ``x0 + 2 * x1 = 8``:
a = np.array([[3,1], [1,2]])
b = np.array([9,8])
np.linalg.solve(a, b)
# Inner product of two arrays.
np.inner(a,b)
###Output
_____no_output_____ |
mushroom.ipynb | ###Markdown
VAE Model
###Code
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
from matplotlib import pyplot as plt
import matplotlib.gridspec as gridspec
import os
import numpy as np
import pandas as pd  # used later when post-processing the classification reports
mb_size = 32
z_dim = 3
X_dim = x_train.shape[1]
y_dim = len(np.unique(y_train))
h_dim = 3
lr = 1e-3
def xavier_init(size):
in_dim = size[0]
xavier_stddev = 1. / tf.sqrt(in_dim / 2.)
return tf.random.normal(shape=size, stddev=xavier_stddev)
X = tf.keras.Input(shape=(X_dim,))
c = tf.keras.Input(shape=(y_dim,))
z = tf.keras.Input(shape=(z_dim,))
Q_W1 = tf.Variable(xavier_init([X_dim + y_dim, h_dim]))
Q_b1 = tf.Variable(tf.zeros(shape=[h_dim]))
Q_W2_mu = tf.Variable(xavier_init([h_dim, z_dim]))
Q_b2_mu = tf.Variable(tf.zeros(shape=[z_dim]))
Q_W2_sigma = tf.Variable(xavier_init([h_dim, z_dim]))
Q_b2_sigma = tf.Variable(tf.zeros(shape=[z_dim]))
def Q(X, c):
inputs = tf.concat(axis=1, values=[X, c])
h = tf.nn.relu(tf.matmul(inputs, Q_W1) + Q_b1)
z_mu = tf.matmul(h, Q_W2_mu) + Q_b2_mu
z_logvar = tf.matmul(h, Q_W2_sigma) + Q_b2_sigma
return z_mu, z_logvar
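# reparameterization trick: sample z = mu + sigma * eps with eps ~ N(0, I),
# so the stochastic node stays differentiable w.r.t. mu and log_var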
def sample_z(mu, log_var):
eps = tf.random.normal(shape=tf.shape(mu))
return mu + tf.exp(log_var / 2) * eps
P_W1 = tf.Variable(xavier_init([z_dim + y_dim, h_dim]))
P_b1 = tf.Variable(tf.zeros(shape=[h_dim]))
P_W2 = tf.Variable(xavier_init([h_dim, X_dim]))
P_b2 = tf.Variable(tf.zeros(shape=[X_dim]))
def P(z, c):
inputs = tf.concat(axis=1, values=[z, c])
h = tf.nn.relu(tf.matmul(inputs, P_W1) + P_b1)
logits = tf.matmul(h, P_W2) + P_b2
prob = tf.nn.sigmoid(logits)
return prob, logits
z_mu, z_logvar = Q(X, c)
z_sample = sample_z(z_mu, z_logvar)
_, logits = P(z_sample, c)
X_samples, _ = P(z, c)
recon_loss = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=X), 1)
kl_loss = 0.5 * tf.reduce_sum(tf.exp(z_logvar) + z_mu**2 - 1. - z_logvar, 1)
vae_loss = tf.reduce_mean(recon_loss + kl_loss)
def generate_sample():
samples = []
gen_labels =[]
for r in range(100):
for index in range(y_dim):
gen_labels = gen_labels + [index]*mb_size
y = np.zeros([mb_size, y_dim])
y[range(mb_size), index] = 1
samples.extend(sess.run(X_samples,
feed_dict={z: np.random.randn(mb_size, z_dim), c: y}))
gen_samples = np.array(samples).round(decimals=2)
gen_labels = np.array(gen_labels)
print(gen_samples.shape)
print(gen_labels.shape)
return gen_samples, gen_labels
###Output
_____no_output_____
###Markdown
MLP Model
###Code
from tensorflow.keras.layers import Dense, BatchNormalization, Dropout, Input, Flatten
from tensorflow.keras.models import Sequential
def build_model(input_shape=(12,), num_classes=2):
"""
:param input_shape: shape of input_data
:param num_classes: number of classes
:return: keras.model.sequential compiled with categorical cross-entropy loss
"""
model = Sequential([
Input(shape=input_shape),
Dense(32, activation="relu"),
BatchNormalization(),
Dense(64, activation="relu"),
BatchNormalization(),
Flatten(),
Dropout(0.5),
Dense(num_classes, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.summary()
return model
###Output
_____no_output_____
###Markdown
Baseline
###Code
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
baseline_list =[]
for i in range(10):
baseline_model = build_model(input_shape=(x_train.shape[1],),num_classes=y_dim)
batch_size=32
epochs=2
X_train, X_test,y_train1,y_test = train_test_split(x_train,y_train, test_size = 0.2)
y_train_oh = np.array(tf.keras.utils.to_categorical(y_train1, num_classes=y_dim, dtype='float32'))
test_y = np.array(tf.keras.utils.to_categorical(y_test, num_classes=y_dim, dtype='float32'))
history_baseline = baseline_model.fit(X_train, y_train_oh, batch_size=batch_size,
epochs=epochs, validation_data=(X_test, test_y))
score_baseline = baseline_model.evaluate(X_test, test_y, verbose=0)
print('baseline test loss: ',score_baseline[0])
print('baseline test accuracy: ', score_baseline[1] )
y_pred_baseline_oh = baseline_model.predict(X_test)
y_pred_baseline = y_pred_baseline_oh.argmax(axis=-1)
baseline_list.append(classification_report(y_test, y_pred_baseline, output_dict=True))
def post_process_results(b_list, filename='default.csv'):
total_df = pd.DataFrame(b_list[0]).transpose()
print('number of runs: {}'.format(len(b_list)))
for r_dict in b_list[1:]:
temp = pd.DataFrame(r_dict).transpose()
total_df = total_df.add(temp)
average_pd = total_df/10.0
average_pd.to_csv(filename, sep=',')
return average_pd
post_process_results(baseline_list, 'results_csv/mushroom_baseline_cnn.csv')
###Output
number of runs: 10
###Markdown
UnderSampling
###Code
from imblearn.under_sampling import RandomUnderSampler  # sampler used in the loop below
undersampling_list = []
for i in range(10):
baseline_model = build_model(input_shape=(x_train.shape[1],),num_classes=y_dim)
batch_size=32
epochs=2
X_train, X_test,y_train1,y_test = train_test_split(x_train,y_train, test_size = 0.2)
rus = RandomUnderSampler(random_state=42)
X_train, y_train1 = rus.fit_resample(X_train, y_train1)
y_train_oh = np.array(tf.keras.utils.to_categorical(y_train1, num_classes=y_dim, dtype='float32'))
test_y = np.array(tf.keras.utils.to_categorical(y_test, num_classes=y_dim, dtype='float32'))
history_baseline = baseline_model.fit(X_train, y_train_oh, batch_size=batch_size,
epochs=epochs, validation_data=(X_test, test_y))
score_baseline = baseline_model.evaluate(X_test, test_y, verbose=0)
print('undersampling test loss: ',score_baseline[0])
print('undersampling test accuracy: ', score_baseline[1] )
y_pred_baseline_oh = baseline_model.predict(X_test)
y_pred_baseline = y_pred_baseline_oh.argmax(axis=-1)
undersampling_list.append(classification_report(y_test, y_pred_baseline, output_dict=True))
post_process_results(undersampling_list, 'results_csv/mushroom_undersamling.csv')
###Output
number of runs: 10
###Markdown
Random Oversampling
###Code
from imblearn.over_sampling import RandomOverSampler  # sampler used in the loop below
oversampling_list = []
for i in range(10):
baseline_model = build_model(input_shape=(x_train.shape[1],),num_classes=y_dim)
batch_size=32
epochs=2
X_train, X_test,y_train1,y_test = train_test_split(x_train,y_train, test_size = 0.2)
ros = RandomOverSampler(random_state=42)
X_train, y_train1 = ros.fit_resample(X_train, y_train1)
y_train_oh = np.array(tf.keras.utils.to_categorical(y_train1, num_classes=y_dim, dtype='float32'))
test_y = np.array(tf.keras.utils.to_categorical(y_test, num_classes=y_dim, dtype='float32'))
history_baseline = baseline_model.fit(X_train, y_train_oh, batch_size=batch_size,
epochs=epochs, validation_data=(X_test, test_y))
score_baseline = baseline_model.evaluate(X_test, test_y, verbose=0)
print('oversampling test loss: ',score_baseline[0])
print('oversampling test accuracy: ', score_baseline[1] )
y_pred_baseline_oh = baseline_model.predict(X_test)
y_pred_baseline = y_pred_baseline_oh.argmax(axis=-1)
oversampling_list.append(classification_report(y_test, y_pred_baseline, output_dict=True))
post_process_results(oversampling_list, 'results_csv/mushroom_oversampling.csv')
###Output
number of runs: 10
###Markdown
Augmentation experiment
###Code
from tqdm import tqdm  # progress bar used in the VAE training loop below
augment_list = []
for i in range(10):
X_train, X_test, y_train1, y_test = train_test_split(x_train, y_train, test_size=0.2, random_state=40)
y_train_oh = np.array(tf.keras.utils.to_categorical(y_train1, num_classes=y_dim, dtype='float32'))
test_y = np.array(tf.keras.utils.to_categorical(y_test, num_classes=y_dim, dtype='float32'))
solver = tf.compat.v1.train.AdamOptimizer().minimize(vae_loss)
sess = tf.compat.v1.Session ()
sess.run(
tf.compat.v1.global_variables_initializer())
X_train = np.array(X_train)
i = 0
for it in tqdm(range(50000)):
ind = np.random.choice(X_train.shape[0], mb_size)
X_mb = np.array(X_train[ind])
y_mb = np.array(y_train_oh[ind])
_, loss = sess.run([solver, vae_loss], feed_dict={X: X_mb, c: y_mb})
gen_samples, gen_labels = generate_sample()
x = np.concatenate([X_train, gen_samples])
y = np.concatenate([y_train1, gen_labels])
x = np.array(x)
y_oh = np.array(tf.keras.utils.to_categorical(y, num_classes=y_dim, dtype='float32'))
aug_model = build_model(input_shape=(x_train.shape[1],),num_classes=y_dim)
batch_size=32
epochs=2
hist = aug_model.fit(x, y_oh, batch_size=batch_size, epochs=epochs,
validation_data=(X_test, test_y))
y_pred_aug_oh = aug_model.predict(X_test)
y_pred_aug = y_pred_aug_oh.argmax(axis=-1)
augment_list.append(classification_report(y_test, y_pred_aug, output_dict=True) )
post_process_results(augment_list, 'results_csv/mushroom_VAE.csv')
###Output
number of runs: 10
###Markdown
###Code
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.optimizers import SGD
from keras.callbacks import EarlyStopping
from sklearn.model_selection import cross_val_score
from keras import backend as K
from keras.layers import BatchNormalization
import seaborn as sns
import matplotlib.pyplot as plt
import math
data = pd.read_csv("/content/drive/My Drive/mushrooms.csv") #Reading dataset.
data.head()
# FEATURES ARE STRING VALUES
data.info()
# CHECKING MISSING VALUES.
for i in data.columns:
a = data[i].value_counts()
b = pd.DataFrame({"name":a.name,'feature':a.index, 'count':a.values})
print(b)
# STALK-ROOT HAS 2480 MISSING VALUES, WE SHOULD DROP THIS COLUMN.
data = data.drop('stalk-root', axis=1)
# CONVERT FEATURES TO BINARY VALUES.
Y = pd.get_dummies(data.iloc[:,0], drop_first=False)
X = pd.DataFrame()
for i in data.iloc[:,1:].columns:
Q = pd.get_dummies(data[i], prefix=i, drop_first=False)
X = pd.concat([X, Q], axis=1)
# CREATING MODEL.
def model():
model = Sequential()
model.add(Dense(250, input_dim=X.shape[1], kernel_initializer='uniform', activation='sigmoid'))
model.add(BatchNormalization())
model.add(Dropout(0.7))
model.add(Dense(300, input_dim=250, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.8))
model.add(Dense(2, activation='softmax'))
model.compile(loss='binary_crossentropy' , optimizer='adamax', metrics=["accuracy"])
return model
# TRAINING.
model = model()
history = model.fit(X.values, Y.values, validation_split=0.50, epochs=300, batch_size=50, verbose=0)
print(history.history.keys())
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
print("Training accuracy: %.2f%% / Validation accuracy: %.2f%%" %
(100*history.history['accuracy'][-1], 100*history.history['val_accuracy'][-1]))
###Output
_____no_output_____ |
docs/source/notebooks/examples/building-boolean-models.ipynb | ###Markdown
Building a Boolean-Based Model This simple example demonstrates how to create a boolean-based model, with its components, regulators, conditions and sub-conditions, in [Cell Collective](https://cellcollective.org). We'll attempt to reconstruct the [Cortical Area Development](https://research.cellcollective.org/?dashboard=true2035:1/cortical-area-development/1) model authored by CE Giacomantonio. Begin by importing the ccapi module into your workspace.
###Code
import ccapi
###Output
_____no_output_____
###Markdown
Now, let’s try creating a client object in order to interact with services provided by [Cell Collective](https://cellcollective.org).
###Code
client = ccapi.Client()
###Output
_____no_output_____
###Markdown
Authenticate your client using a ***password flow type authentication*** scheme.**NOTE**: *Before you can authenticate using ccapi, you must first register an application of the appropriate type on [Cell Collective](https://cellcollective.org). If you do not require a user context, it is read only.*
###Code
def authenticate():
client.auth(email = "[email protected]", password = "test")
try:
authenticate()
except ccapi.exception.AuthenticationError:
client.sign_up(
email = "[email protected]",
password = "test",
first_name = "Test",
last_name = "Test",
institution = "Test"
)
authenticate()
###Output
_____no_output_____
###Markdown
Creating a Base Model Create a Base Model using ccapi and instantiate it with an authenticated client.
###Code
model = ccapi.Model("Cortical Area Development", client = client)
model.save()
###Output
_____no_output_____
###Markdown
A `ccapi.Model` consists of various `ccapi.ModelVersion` objects that help you build various versions of a model. By default, a `ccapi.Model` provides you with a default model version of a boolean type.
###Code
# get the default model version
boolean = model.versions[0]
boolean.name = "Version 1"
boolean
###Output
_____no_output_____
###Markdown
Adding Components to a Boolean-Based Model First, we need to create a list of component objects for this model.
###Code
# create components
COUP_TFI = ccapi.InternalComponent("COUP-TFI")
EMX2 = ccapi.InternalComponent("EMX2")
FGF8 = ccapi.InternalComponent("FGF8")
PAX6 = ccapi.InternalComponent("PAX6")
Sp8 = ccapi.InternalComponent("Sp8")
###Output
_____no_output_____
###Markdown
Now let us add a list of components to our Boolean Model.
###Code
# add components to model
boolean.add_components(COUP_TFI, EMX2, FGF8, PAX6, Sp8)
###Output
_____no_output_____
###Markdown
Saving a Model Ensure you save your model in order to commit your work.
###Code
model.save()
###Output
_____no_output_____
###Markdown
Adding Regulators, Conditions and Sub-Conditions Let's add a list of regulators and conditions to our components. A list of regulators and conditions as well as sub-conditions can all be added at once to a component.
###Code
# add regulators to components
COUP_TFI.add_regulators(
ccapi.NegativeRegulator(Sp8),
ccapi.NegativeRegulator(FGF8)
)
EMX2.add_regulators(
ccapi.PositiveRegulator(COUP_TFI),
ccapi.NegativeRegulator(FGF8),
ccapi.NegativeRegulator(PAX6),
ccapi.NegativeRegulator(Sp8)
)
Sp8.add_regulators(
ccapi.PositiveRegulator(FGF8),
ccapi.NegativeRegulator(EMX2)
)
FGF8.add_regulators(
# add conditions to regulators
ccapi.PositiveRegulator(FGF8, conditions = [
ccapi.Condition(Sp8)
])
)
PAX6.add_regulators(
ccapi.PositiveRegulator(Sp8),
ccapi.NegativeRegulator(COUP_TFI)
)
model.save()
###Output
_____no_output_____
###Markdown
We've now got components and regulators within our Boolean Model.
###Code
boolean.components
FGF8.positive_regulators
###Output
_____no_output_____
###Markdown
Model Summary You can view a detailed summary of your model using the `summary` function provided.
###Code
boolean.summary()
###Output
Internal Components (+, -) External Components
-------------------------- -------------------
COUP-TFI (0,2)
EMX2 (1,3)
FGF8 (1,0)
PAX6 (1,1)
Sp8 (1,1)
###Markdown
...or view detailed information within your jupyter notebook.
###Code
boolean
###Output
_____no_output_____
###Markdown
Model Rendering You can also attempt to visualize a Boolean Model using the `draw` function provided.
###Code
# boolean.draw()
###Output
_____no_output_____
###Markdown
Building a Boolean-Based Model This simple example demonstrates how to create a boolean-based model, with its components, regulators, conditions and sub-conditions, in [Cell Collective](https://cellcollective.org). We'll attempt to reconstruct the [Cortical Area Development](https://research.cellcollective.org/?dashboard=true2035:1/cortical-area-development/1) model authored by CE Giacomantonio. Begin by importing the ccapi module into your workspace.
###Code
import ccapi
###Output
_____no_output_____
###Markdown
Now, let’s try creating a client object in order to interact with services provided by [Cell Collective](https://cellcollective.org).
###Code
client = ccapi.Client()
###Output
_____no_output_____
###Markdown
Authenticate your client using a ***password flow type authentication*** scheme.**NOTE**: *Before you can authenticate using ccapi, you must first register an application of the appropriate type on [Cell Collective](https://cellcollective.org). If you do not require a user context, it is read only.*
###Code
client.auth(email = "[email protected]", password = "test")
###Output
_____no_output_____
###Markdown
Creating a Base Model Create a Base Model using ccapi and instantiate it with an authenticated client.
###Code
model = ccapi.Model("Cortical Area Development", client = client)
model.save()
###Output
_____no_output_____
###Markdown
A `ccapi.Model` consists of various `ccapi.ModelVersion` objects that help you build various versions of a model. By default, a `ccapi.Model` provides you with a default model version of a boolean type.
###Code
# get the default model version
boolean = model.versions[0]
boolean.name = "Version 1"
boolean
###Output
_____no_output_____
###Markdown
Adding Components to a Boolean-Based Model First, we need to create a list of component objects for this model.
###Code
# create components
COUP_TFI = ccapi.InternalComponent("COUP-TFI")
EMX2 = ccapi.InternalComponent("EMX2")
FGF8 = ccapi.InternalComponent("FGF8")
PAX6 = ccapi.InternalComponent("PAX6")
Sp8 = ccapi.InternalComponent("Sp8")
###Output
_____no_output_____
###Markdown
Now let us add a list of components to our Boolean Model.
###Code
# add components to model
boolean.add_components(COUP_TFI, EMX2, FGF8, PAX6, Sp8)
###Output
_____no_output_____
###Markdown
Saving a Model Ensure you save your model in order to commit your work.
###Code
model.save()
###Output
_____no_output_____
###Markdown
Adding Regulators, Conditions and Sub-Conditions Let's add a list of regulators and conditions to our components. A list of regulators and conditions as well as sub-conditions can all be added at once to a component.
###Code
# add regulators to components
COUP_TFI.add_regulators(
ccapi.NegativeRegulator(Sp8),
ccapi.NegativeRegulator(FGF8)
)
EMX2.add_regulators(
ccapi.PositiveRegulator(COUP_TFI),
ccapi.NegativeRegulator(FGF8),
ccapi.NegativeRegulator(PAX6),
ccapi.NegativeRegulator(Sp8)
)
Sp8.add_regulators(
ccapi.PositiveRegulator(FGF8),
ccapi.NegativeRegulator(EMX2)
)
FGF8.add_regulators(
# add conditions to regulators
ccapi.PositiveRegulator(FGF8, conditions = [
ccapi.Condition(components = Sp8)
])
)
PAX6.add_regulators(
ccapi.PositiveRegulator(Sp8),
ccapi.NegativeRegulator(COUP_TFI)
)
model.save()
###Output
_____no_output_____
###Markdown
We've now got components and regulators within our Boolean Model.
###Code
boolean.components
FGF8.positive_regulators
###Output
_____no_output_____
###Markdown
Model Summary You can view a detailed summary of your model using the `summary` function provided.
###Code
boolean.summary()
###Output
Internal Components (+, -) External Components
-------------------------- -------------------
COUP-TFI (0,2)
EMX2 (1,3)
FGF8 (1,0)
PAX6 (1,1)
Sp8 (1,1)
###Markdown
...or view detailed information within your jupyter notebook.
###Code
boolean
###Output
_____no_output_____
###Markdown
Model Rendering You can also attempt to visualize a Boolean Model using the `draw` function provided.
###Code
# boolean.draw()
###Output
Exception in thread Thread-4:
Traceback (most recent call last):
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/Users/achillesrasquinha/.venv/lib/python3.7/site-packages/pygraphviz/agraph.py", line 44, in run
chunk = self.pipe.read()
File "/Users/achillesrasquinha/.venv/lib/python3.7/site-packages/gevent/_fileobjectposix.py", line 164, in readall
data = self.__read(DEFAULT_BUFFER_SIZE)
File "/Users/achillesrasquinha/.venv/lib/python3.7/site-packages/gevent/_fileobjectposix.py", line 158, in __read
wait_on_watcher(self._read_watcher, None, None, self.hub)
File "src/gevent/_hub_primitives.py", line 326, in gevent._gevent_c_hub_primitives.wait_on_watcher
File "src/gevent/_hub_primitives.py", line 350, in gevent._gevent_c_hub_primitives.wait_on_watcher
File "src/gevent/_hub_primitives.py", line 304, in gevent._gevent_c_hub_primitives._primitive_wait
File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_hub_primitives.py", line 55, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_waiter.py", line 151, in gevent._gevent_c_waiter.Waiter.get
File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 65, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_gevent_c_greenlet_primitives.pxd", line 35, in gevent._gevent_c_greenlet_primitives._greenlet_switch
greenlet.error: cannot switch to a different thread
Exception in thread Thread-5:
Traceback (most recent call last):
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/Users/achillesrasquinha/.venv/lib/python3.7/site-packages/pygraphviz/agraph.py", line 44, in run
chunk = self.pipe.read()
File "/Users/achillesrasquinha/.venv/lib/python3.7/site-packages/gevent/_fileobjectposix.py", line 164, in readall
data = self.__read(DEFAULT_BUFFER_SIZE)
File "/Users/achillesrasquinha/.venv/lib/python3.7/site-packages/gevent/_fileobjectposix.py", line 158, in __read
wait_on_watcher(self._read_watcher, None, None, self.hub)
File "src/gevent/_hub_primitives.py", line 326, in gevent._gevent_c_hub_primitives.wait_on_watcher
File "src/gevent/_hub_primitives.py", line 350, in gevent._gevent_c_hub_primitives.wait_on_watcher
File "src/gevent/_hub_primitives.py", line 304, in gevent._gevent_c_hub_primitives._primitive_wait
File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_hub_primitives.py", line 55, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_waiter.py", line 151, in gevent._gevent_c_waiter.Waiter.get
File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 65, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_gevent_c_greenlet_primitives.pxd", line 35, in gevent._gevent_c_greenlet_primitives._greenlet_switch
greenlet.error: cannot switch to a different thread
|
Notebooks Examples/.ipynb_checkpoints/Yelp Stream Source-checkpoint.ipynb | ###Markdown
Importing the Yelp API
###Code
import numpy as np
import pandas as pd
import requests
from YelpAPIConfig import get_my_key
'''
Yelp has multiple different APIs.
For this notebook we are going to focus on working with Yelp Fusion.
'''
# The following will only run if you have the YelpAPIConfig.py file with
# your associated API key for using Yelp Fusion
print(get_my_key)
# Define the API key, the endpoint and the header required to use the Yelp Fusion API
API_KEY = get_my_key
ENDPOINT = 'https://api.yelp.com/v3/businesses/search'
HEADERS = {'Authorization': 'bearer %s' %API_KEY}
# Define the parameters
PARAMETERS = {
'term':'restaurant',
'limit':50,
'location':'Boston',
'radius': 10000
}
# Make a request to the yelp API
response = requests.get(url = ENDPOINT, params = PARAMETERS, headers = HEADERS)
# convert JSON String to Dictionary
business_data = response.json()
print(business_data.keys())
#print(business_data)
for biz in business_data['businesses']:
print(biz['name'])
'''
Find recent reviews and store them
'''
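# A minimal sketch of one way to pull recent reviews; it assumes the Yelp Fusion
# reviews endpoint https://api.yelp.com/v3/businesses/{id}/reviews (check the
# Fusion documentation before relying on it).
recent_reviews = {}
for biz in business_data['businesses']:
    review_url = 'https://api.yelp.com/v3/businesses/{}/reviews'.format(biz['id'])
    review_response = requests.get(url=review_url, headers=HEADERS)
    # keep the parsed list of reviews keyed by business name
    recent_reviews[biz['name']] = review_response.json().get('reviews', [])
print('Fetched reviews for', len(recent_reviews), 'businesses')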
###Output
_____no_output_____ |
TareaClase1.ipynb | ###Markdown
Constants
###Code
pi=3.1416
iva=0.19
print(pi)
print(iva)
###Output
3.1416
0.19
###Markdown
Variables
###Code
edad=8
ciudad="Cali"
print(edad)
print(ciudad)
###Output
4
Cali
###Markdown
**Data types**---*Integers*
###Code
pasaje=int(2200)
print(pasaje)
deuda=int(-256)
print(deuda,type(deuda))
###Output
2200
-256 <class 'int'>
###Markdown
*Float*
###Code
costo=float(158895.25)
print(costo)
presupuesto=float(78952114.415)
print(presupuesto, type(presupuesto))
###Output
158895.25
78952114.415 <class 'float'>
###Markdown
*String type*
###Code
planta="suculentas"
especie="Graptopetalum Paraguayense"
print("Me gustan las",planta, "en especial la",especie)
deporte="natación"
jornada="mañana"
print("Las clases de "+ deporte+ " estan en el horario de la "+jornada)
###Output
Me gustan las suculentas en especial la Graptopetalum Paraguayense
Las clases de natación estan en el horario de la mañana
###Markdown
*Boolean type*
###Code
x=True
y=False
print(x,type(x))
print(y,type(y))
###Output
True <class 'bool'>
False <class 'bool'>
###Markdown
*Sets*
###Code
tamaño= "grande","mediano","pequeño"
print(tamaño)
electrodomesticos="estufa","televisor","ventilador","nevera"
print(electrodomesticos)
print(tamaño,electrodomesticos)
###Output
('grande', 'mediano', 'pequeño')
('estufa', 'televisor', 'ventilador', 'nevera')
('grande', 'mediano', 'pequeño') ('estufa', 'televisor', 'ventilador', 'nevera')
###Markdown
*Lists*
###Code
info=["1","cactus","suculenta","aloe"]
print(info)
cosa=["mesa","carro","casa","perro"]
cosa1=cosa[3]
print(cosa1)
###Output
['1', 'cactus', 'suculenta', 'aloe']
perro
###Markdown
*Tuples*
###Code
pa="arroz",4,"papa"
print(pa)
ma=pa,("verde","amarillo")
print(ma)
###Output
('arroz', 4, 'papa')
(('arroz', 4, 'papa'), ('verde', 'amarillo'))
###Markdown
*Dictionaries*
###Code
plan_vet={
"codigo":"1",
"nombre":"Bienestar",
"precioMensual":50000,
"Descripcion":"salud básica"
}
print("Nombre de los campos",plan_vet.keys())
print("Datos registrados",plan_vet.values())
print("Items",plan_vet.items())
print("\nEl precio del plan Bienestar es de", plan_vet["precioMensual"])
mascota1={}
mascota1["nombre"]="Nicol"
mascota1["raza"]="Criolla"
mascota1["edad"]="14 años"
mascota2={}
mascota2["nombre"]="Venus"
mascota2["raza"]="Husky Siberiano"
mascota2["edad"]="3 años"
print(mascota2.keys())
print("\nLos datos de la mascota 2 son:",mascota2.values())
###Output
Nombre de los campos dict_keys(['codigo', 'nombre', 'precioMensual', 'Descripcion'])
Datos registrados dict_values(['1', 'Bienestar', 50000, 'salud básica'])
Items dict_items([('codigo', '1'), ('nombre', 'Bienestar'), ('precioMensual', 50000), ('Descripcion', 'salud básica')])
El precio del plan Bienestar es de 50000
dict_keys(['nombre', 'raza', 'edad'])
Los datos de la mascota 2 son: dict_values(['Venus', 'Husky Siberiano', '3 años'])
###Markdown
Do two exercises for each variable type.---**Integer type**
###Code
a=int(44)
b=int(55)
p=(a+b)
print(p, type (p))
h=int(16000)
e=int(2800)
l=int(1200)
e=int(3600)
n=int(6500)
n=int(1000)
print(h+e+l+e+n+n)
###Output
26400
###Markdown
**Float type**
###Code
x=float(5.99)
z=float(14.2)
p=(x*z)
print(p, type(p))
arroz=1.35
huevos=2.36
cocacola=2.10
sum=(arroz + huevos + cocacola)
print("El almuerzo sencillo le sale por: ",sum," dólares")
###Output
El almuerzo sencillo le sale por: 5.8100000000000005 dólares
###Markdown
**List type**
###Code
lista1=['huevo','pan','dos mil de salchichón cervecero','leche','5']
lista2=['arroz','carne molida con paprika','limonada de coco','plátano asado']
print(lista1, lista2)
reggaeton=['Balvin','mike Towers','farruko','plan B','conejo malo']
rap=['rapbangclub','alkolyricos','crudoMeansRaw','buhodermia']
print("Todo esto es música, y buena: ",reggaeton,rap)
###Output
Todo esto es música, y buena: ['Balvin', 'mike Towers', 'farruko', 'plan B', 'conejo malo'] ['rapbangclub', 'alkolyricos', 'crudoMeansRaw', 'buhodermia']
###Markdown
**Tuple type**
###Code
t1=50,12.5,'pollo','3 pájaros'
print(t1, 'gato y ratón','música')
arte='van goh','Dirk Valkenburg','francisco oller','Edmund Blair Leighton'
cine='Pulp fiction','taxi driver','Matrix','The professional'
print("esto es arte: ",arte," y esto también:"),(arte,cine, 'música','teatro')
###Output
esto es arte: ('van goh', 'Dirk Valkenburg', 'francisco oller', 'Edmund Blair Leighton') y esto también:
###Markdown
**Dictionary type**
###Code
eng={
"love":"amor",
"honey1":"Miel",
"honey2":"Cariño",
"play":"Jugar",
"gorgeuos":"Hermoso" ,
"precio curso":150,
}
print(eng)
print("What does mean honey in spanish?")
print("Esta palabra tiene muchos significados, entre los más comunes: ")
print(eng["honey1"])
print(eng["honey2"])
cats={}
cats[1]='Tomás'
cats[2]='Alfredo'
cats[3]='Poseidon'
cats[4]='Afrodita'
print(cats)
print("El más inquieto es: ",cats[1])
###Output
{1: 'Tomás', 2: 'Alfredo', 3: 'Poseidon', 4: 'Afrodita'}
El más inquieto es: Tomás
|
examples/pumping_history_identification/nonlinear_inverse_problem_pumping_history_identification.ipynb | ###Markdown
pyPCGA tutorial example - pumping history identification (2) nonlinear (quasi-linear) inversion example (from Stanford 362G course)
Please read the problem description in [(1) linear inversion example](./linear_inverse_problem_pumping_history_identification.ipynb)
+ Now we know that the unknown pumping rate cannot be **negative**: the owner of the well at (0,0) extracts groundwater when needed, with no injection.
+ This prior information/constraint can be incorporated into the inversion by various methods.
+ Our implementation here uses a log-transformation to enforce the non-negativity constraint.
+ In other words, instead of the unknown pumping rate $q$, we work with $s = \ln(q)$.
---
$$ y = H \exp(s) $$
where $y$ is the n by 1 vector of drawdowns at the monitoring well, $H$ is an n by m matrix/linear operator, and $s$ is the m by 1 vector of log-transformed pumping rates.
---
 Test environment information
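Before the environment check, a minimal illustration of why the log-transform enforces the constraint (a standalone sketch with made-up sizes, not part of the pyPCGA run; `n`, `m` and `H` here are toy values):
```python
import numpy as np

n, m = 3, 5                  # toy sizes; the real problem uses n = 100, m = 10001
H = np.random.rand(n, m)     # stand-in linear forward operator
s = np.random.randn(m)       # unconstrained log-pumping rates (may be negative)
q = np.exp(s)                # back-transformed rates are strictly positive
y = H @ q                    # forward model y = H exp(s)
print(q.min() > 0)           # True: non-negativity holds by construction
```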
###Code
import sys
print(sys.version)
import platform
print(platform.platform())
import numpy as np
print(np.__version__)
import matplotlib.pyplot as plt
import drawdown as dd # our forward model interface
from pyPCGA import PCGA # PCGA ver0.1
import math
###Output
_____no_output_____
###Markdown
pyPCGA parameters (see [PCGA_parameters.docx](https://github.com/jonghyunharrylee/pyPCGA/blob/master/PCGA_parameters_06212018.docx) for details)
Note that the prior parameters (prior_std and prior_cov_scale) and the measurement/model uncertainty (R) were estimated using a model validation approach [Kitanidis, 2007].

| Parameter | Description | Value |
|---|---|---|
| *Geostatistical parameters* | | |
| prior_std | (Bayesian) prior uncertainty/standard deviation | 1.0 [$m^3$/min] |
| prior_cov_scale | Prior scale/correlation length | 100 m |
| kernel | Covariance kernel q(x,x') | q(x,x') = $1.0^2 \exp\left(-\lvert x-x'\rvert/100\right)$ |
| m | Number of unknowns (unknown pumping history over time) | 10,001 |
| matvec | Fast covariance matrix-vector multiplication method | FFT (will support Hmatrix/FMM in ver 0.2) |
| N | Array of grid numbers in each direction ([nx,ny,nz], for structured grid/FFT) | [10,001] (1D) |
| xmin, xmax | Arrays of min and max coordinates in each direction (for structured grid/FFT) | [0], [1000] (1D) |
| pts | Coordinates for unknowns (optional, used for Hmatrix and FMM) | [0, 0.1, 0.2, ..., 999.9, 1000] |
| post_cov | Posterior uncertainty (variance/covariance, currently supports posterior variance) | diag |
| *Measurement parameters* | | |
| nobs | Number of measurements | 100 |
| R | Variance of measurement error (and model uncertainty) (m) | 0.02^2 |
| *Inversion parameters* | | |
| maxiter | Maximum iteration number | 10 |
| restol | Tolerance (relative norm difference) for the stopping criterion | 0.01 |
| parallel | Use parallelization for inversion | True |
| ncores | Number of cores if parallel == True | None (if not defined, use all the physical cores you have) |
| precond | Use preconditioner | True |
| LM | Use Levenberg-Marquardt | True |
| linesearch | Use linesearch | True |
| verbose | Inversion messages | False |
| forward_model_verbose | User-defined forward model messages | False |
| iter_save | Save intermediate solutions | True |

+ Model domain and discretization
###Code
# model domain and discretization
m = 10001 # number of unknowns (unknown pumping rates over the time)
N = np.array([m]) # discretization grids in each direction [nx,ny,nz] m = nx*ny*nz; for this 1D case, N = nx = [m]
xmin = np.array([0]) # min x, y, z; for this 1D case, min(x) = min(t)
xmax = np.array([1000]) # max x, y, z; for this 1D case, max(x) = max(t)
###Output
_____no_output_____
###Markdown
+ Covariance kernel and scale parameters
###Code
# covairance kernel and scale parameters
# now that we have a different estimation variable (the log-transformed extraction rate), these prior parameters should be redefined
prior_std = 1.0
prior_cov_scale = np.array([100.0])
def kernel(r): return (prior_std ** 2) * np.exp(-r)
###Output
_____no_output_____
###Markdown
+ Load "true" pumping history
###Code
s_true = np.loadtxt('true.txt')
# for plotting
x = np.linspace(xmin, xmax, m)
pts = np.copy(x)
plt.plot(x,s_true,'r-',label='true')
plt.title('True pumping history at (0,0)')
plt.xlabel('time (min)')
plt.ylabel(r's ($m^3$/min)')
###Output
_____no_output_____
###Markdown
+ Load 100 noisy observations recorded every 10 mins (see drawdown.py)
```python
obs = np.dot(H, s_true) + 0.01*np.random.randn(m, 1)
```
###Code
obs = np.loadtxt('obs.txt')
###Output
_____no_output_____
###Markdown
+ Define a wrapper for a black-box forward model input in pyPCGA
 1. Note that one should follow this wrapper format to work with pyPCGA
 2. If parallelization == True, it should take multiple columns of s and run the forward problems independently in parallel
 3. Please see drawdown.py as a template for the implementation
###Code
# forward model wrapper for pyPCGA
def forward_model(s, parallelization, ncores=None):
params = {'log':True} # y = np.dot(H,np.exp(s)); see drawdown.py
model = dd.Model(params)
if parallelization:
simul_obs = model.run(s, parallelization, ncores)
else:
simul_obs = model.run(s, parallelization)
return simul_obs
###Output
_____no_output_____
###Markdown
+ Inversion parameters
###Code
params = {'R': (0.02) ** 2, 'n_pc': 100,
'maxiter': 10, 'restol': 0.01,
'matvec': 'FFT', 'xmin': xmin, 'xmax': xmax, 'N': N,
'prior_std': prior_std, 'prior_cov_scale': prior_cov_scale,
'kernel': kernel, 'post_cov': "diag",
'precond': True, 'LM': True,
'parallel': True, 'linesearch': True,
'forward_model_verbose': False, 'verbose': False,
'iter_save': True}
# params['objeval'] = False, if true, it will compute accurate objective function
# params['ncores'] = 4, with parallell True, it will determine maximum physcial core unless specified
###Output
_____no_output_____
###Markdown
+ Initial guess
###Code
s_init = -1. * np.ones((m, 1)) # or any initial guess you want
###Output
_____no_output_____
###Markdown
Inversion + Initialize pyPCGA
###Code
prob = PCGA(forward_model, s_init, pts, params, s_true, obs)
###Output
##### PCGA Inversion #####
##### 1. Initialize forward and inversion parameters
------------ Inversion Parameters -------------------------
Number of unknowns : 10001
Number of observations : 100
Number of principal components (n_pc) : 100
Prior model : def kernel(r): return (prior_std ** 2) * np.exp(-r)
Prior variance : 1.000000e+00
Prior scale (correlation) parameter : [100.]
Posterior cov computation : diag
Posterior variance computation : Direct
Number of CPU cores (n_core) : 4
Maximum GN iterations : 10
machine precision (delta = sqrt(precision)) : 1.000000e-08
Tol for iterations (norm(sol_diff)/norm(sol)) : 1.000000e-02
Levenberg-Marquardt (LM) : True
LM solution range constraints (LM_smin, LM_smax) : None, None
Line search : True
-----------------------------------------------------------
###Markdown
+ Run pyPCGA
###Code
# run inversion
s_hat, simul_obs, post_diagv, iter_best = prob.Run()
###Output
##### 2. Construct Prior Covariance Matrix
- time for covariance matrix construction (m = 10001) is 0 sec
##### 3. Eigendecomposition of Prior Covariance
- time for eigendecomposition with k = 100 is 1 sec
- 1st eigv : 1870.85, 100-th eigv : 2.06516, ratio: 0.00110386
##### 4. Start PCGA Inversion #####
-- evaluate initial solution
obs. RMSE (norm(obs. diff.)/sqrt(nobs)): 6.96206, normalized obs. RMSE (norm(obs. diff./sqrtR)/sqrt(nobs)): 348.103
***** Iteration 1 ******
computed Jacobian-Matrix products in 0.907606 secs
solve saddle point (co-kriging) systems with Levenberg-Marquardt
evaluate LM solutions
LM solution evaluted
- Geostat. inversion at iteration 1 is 2 sec
== iteration 1 summary ==
= objective function is 7.429391e+05, relative L2-norm diff btw sol 0 and sol 1 is 0.920778
= L2-norm error (w.r.t truth) is 3169.54, obs. RMSE is 2.43793, obs. normalized RMSE is 121.897
- save results in text at iteration 1
***** Iteration 2 ******
computed Jacobian-Matrix products in 1.278379 secs
solve saddle point (co-kriging) systems with Levenberg-Marquardt
evaluate LM solutions
LM solution evaluted
- Geostat. inversion at iteration 2 is 2 sec
== iteration 2 summary ==
= objective function is 8.636817e+04, relative L2-norm diff btw sol 1 and sol 2 is 0.449779
= L2-norm error (w.r.t truth) is 4540.57, obs. RMSE is 0.831226, obs. normalized RMSE is 41.5613
- save results in text at iteration 2
***** Iteration 3 ******
computed Jacobian-Matrix products in 0.874665 secs
solve saddle point (co-kriging) systems with Levenberg-Marquardt
evaluate LM solutions
LM solution evaluted
- Geostat. inversion at iteration 3 is 2 sec
== iteration 3 summary ==
= objective function is 1.071176e+04, relative L2-norm diff btw sol 2 and sol 3 is 0.300593
= L2-norm error (w.r.t truth) is 5821.28, obs. RMSE is 0.292664, obs. normalized RMSE is 14.6332
- save results in text at iteration 3
***** Iteration 4 ******
computed Jacobian-Matrix products in 0.935491 secs
solve saddle point (co-kriging) systems with Levenberg-Marquardt
evaluate LM solutions
LM solution evaluted
- Geostat. inversion at iteration 4 is 2 sec
== iteration 4 summary ==
= objective function is 1.523626e+03, relative L2-norm diff btw sol 3 and sol 4 is 0.23329
= L2-norm error (w.r.t truth) is 7094.83, obs. RMSE is 0.109762, obs. normalized RMSE is 5.48808
- save results in text at iteration 4
***** Iteration 5 ******
computed Jacobian-Matrix products in 1.031276 secs
solve saddle point (co-kriging) systems with Levenberg-Marquardt
evaluate LM solutions
LM solution evaluted
- Geostat. inversion at iteration 5 is 2 sec
== iteration 5 summary ==
= objective function is 2.774496e+02, relative L2-norm diff btw sol 4 and sol 5 is 0.193365
= L2-norm error (w.r.t truth) is 8401.71, obs. RMSE is 0.0428489, obs. normalized RMSE is 2.14244
- save results in text at iteration 5
***** Iteration 6 ******
computed Jacobian-Matrix products in 1.243791 secs
solve saddle point (co-kriging) systems with Levenberg-Marquardt
evaluate LM solutions
LM solution evaluted
- Geostat. inversion at iteration 6 is 2 sec
== iteration 6 summary ==
= objective function is 1.086289e+02, relative L2-norm diff btw sol 5 and sol 6 is 0.158996
= L2-norm error (w.r.t truth) is 9675.47, obs. RMSE is 0.0200997, obs. normalized RMSE is 1.00499
- save results in text at iteration 6
***** Iteration 7 ******
computed Jacobian-Matrix products in 0.878650 secs
solve saddle point (co-kriging) systems with Levenberg-Marquardt
evaluate LM solutions
LM solution evaluted
- Geostat. inversion at iteration 7 is 2 sec
== iteration 7 summary ==
= objective function is 9.614678e+01, relative L2-norm diff btw sol 6 and sol 7 is 0.113018
= L2-norm error (w.r.t truth) is 10658.3, obs. RMSE is 0.0164702, obs. normalized RMSE is 0.823511
- save results in text at iteration 7
***** Iteration 8 ******
computed Jacobian-Matrix products in 0.831777 secs
solve saddle point (co-kriging) systems with Levenberg-Marquardt
evaluate LM solutions
LM solution evaluted
- Geostat. inversion at iteration 8 is 2 sec
perform simple linesearch due to no progress in obj value
evaluate linesearch solutions
no progress in obj value
start direct posterior variance computation - this option works for O(nobs) ~ 100
0-th element evaluated
1000-th element evaluated
2000-th element evaluated
3000-th element evaluated
4000-th element evaluated
5000-th element evaluated
6000-th element evaluated
7000-th element evaluated
8000-th element evaluated
9000-th element evaluated
10000-th element evaluated
posterior diag. computed in 1.767275 secs
------------ Inversion Summary ---------------------------
** Found solution at iteration 7
** Solution obs. RMSE 0.0164702 , initial obs. RMSE 6.96206, where RMSE = (norm(obs. diff.)/sqrt(nobs)), Solution obs. nRMSE 0.823511, init. obs. nRMSE 348.103
** Final objective function value is 9.614678e+01
** Final predictive model checking Q2, cR is 1.848851e+00, 3.328451e-03
** Total elapsed time is 22.825062 secs
----------------------------------------------------------
###Markdown
Yes, this is so simple! You only need 2 lines of code (initialization, run) for the inversion.
Log-transformation of the unknown pumping rates makes the problem nonlinear, and this requires several iterations. But we don't need to worry about negative extraction (injection) in our solution, and the non-negativity constraint, reflecting our confident prior knowledge, results in a better fit (observation matching), as shown in Figure 2 (compared to [the linear inversion result](./linear_inverse_problem_pumping_history_identification.ipynbResults)). However, this kind of transformation may lead to a highly non-symmetric pdf in the original/untransformed space. For example, even though we would expect the uncertainty of the estimated pumping rates over time to be of the same magnitude as observed in [the linear problem](./linear_inverse_problem_pumping_history_identification.ipynbResults), the Bayesian credible interval becomes narrow for low values and wide for high values due to the log-transformation, as shown in Figure 1.
 Results
- Plot the best (mean) estimate and its uncertainty interval
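The back-transformed interval plotted below is $[\exp(\hat{s} - 1.96\,\mathrm{std}),\ \exp(\hat{s} + 1.96\,\mathrm{std})]$. If the posterior of $s$ at a given time is approximately $N(\mu, \sigma^2)$, the width of the 95% credible interval for $q = \exp(s)$ is
$$e^{\mu}\left(e^{1.96\sigma} - e^{-1.96\sigma}\right),$$
so it scales with $e^{\mu}$: narrow where the estimated pumping rate is small and wide where it is large.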
###Code
# plot posterior uncertainty/std
# back tansform
s_hat_real = np.exp(s_hat)
post_std = np.sqrt(post_diagv)
s_hat_upper = np.exp(s_hat + 1.96*post_std)
s_hat_lower = np.exp(s_hat - 1.96*post_std)
fig = plt.figure()
plt.plot(x,s_hat_real,'k-',label='estimated')
plt.plot(x,s_hat_upper,'k--',label='95%')
plt.plot(x,s_hat_lower,'k--',label='')
plt.plot(x,s_true,'r-',label='true')
plt.legend()
plt.title('Figure 1: Estimate with Bayesian credible interval')
###Output
_____no_output_____
###Markdown
+ Plot observation mismatch
###Code
plt.title('Figure 2: obs. vs simul.')
plt.plot(prob.obs, simul_obs, '.')
plt.xlabel('observation')
plt.ylabel('simulation')
minobs = np.vstack((prob.obs, simul_obs)).min(0)
maxobs = np.vstack((prob.obs, simul_obs)).max(0)
plt.plot(np.linspace(minobs, maxobs, 20), np.linspace(minobs, maxobs, 20), 'k-')
plt.axis('equal')
axes = plt.gca()
axes.set_xlim([math.floor(minobs), math.ceil(maxobs)])
axes.set_ylim([math.floor(minobs), math.ceil(maxobs)])
###Output
_____no_output_____ |
06-Gradient-Descent/02-Gradient-Descent-Simulations/02-Gradient-Descent-Simulations.ipynb | ###Markdown
Simulating gradient descent
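All of the simulations below iterate the same update rule for $J(\theta) = (\theta - 2.5)^2 - 1$:
$$\theta \leftarrow \theta - \eta \frac{dJ}{d\theta} = \theta - 2\eta(\theta - 2.5),$$
stopping once $|J(\theta_t) - J(\theta_{t-1})| < \epsilon$. The learning rate $\eta$ controls both the speed and the stability of convergence, as the experiments with $\eta = 0.01, 0.1, 0.8$ and $1.1$ show.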
###Code
import numpy as np
import matplotlib.pyplot as plt
plot_x = np.linspace(-1., 6., 141)
plot_x
plot_y = (plot_x-2.5)**2 - 1.
plt.plot(plot_x, plot_y)
plt.show()
epsilon = 1e-8
eta = 0.1
# objective (loss) function J(theta) = (theta - 2.5)^2 - 1
def J(theta):
return (theta-2.5)**2 - 1.
# derivative dJ/dtheta used for the gradient step
def dJ(theta):
return 2*(theta-2.5)
theta = 0.0
while True:
gradient = dJ(theta)
last_theta = theta
theta = theta - eta * gradient
if(abs(J(theta) - J(last_theta)) < epsilon):
break
print(theta)
print(J(theta))
theta = 0.0
theta_history = [theta]
while True:
gradient = dJ(theta)
last_theta = theta
theta = theta - eta * gradient
theta_history.append(theta)
if(abs(J(theta) - J(last_theta)) < epsilon):
break
plt.plot(plot_x, J(plot_x))
plt.plot(np.array(theta_history), J(np.array(theta_history)), color="r", marker='+')
plt.show()
len(theta_history)
theta_history = []
def gradient_descent(initial_theta, eta, epsilon=1e-8):
theta = initial_theta
theta_history.append(initial_theta)
while True:
gradient = dJ(theta)
last_theta = theta
theta = theta - eta * gradient
theta_history.append(theta)
if(abs(J(theta) - J(last_theta)) < epsilon):
break
def plot_theta_history():
plt.plot(plot_x, J(plot_x))
plt.plot(np.array(theta_history), J(np.array(theta_history)), color="r", marker='+')
plt.show()
eta = 0.01
theta_history = []
gradient_descent(0, eta)
plot_theta_history()
len(theta_history)
eta = 0.001
theta_history = []
gradient_descent(0, eta)
plot_theta_history()
len(theta_history)
eta = 0.8
theta_history = []
gradient_descent(0, eta)
plot_theta_history()
eta = 1.1
theta_history = []
gradient_descent(0, eta)
# redefine J to guard against overflow when a too-large learning rate makes theta diverge
def J(theta):
try:
return (theta-2.5)**2 - 1.
except:
return float('inf')
def gradient_descent(initial_theta, eta, n_iters = 1e4, epsilon=1e-8):
theta = initial_theta
i_iter = 0
theta_history.append(initial_theta)
while i_iter < n_iters:
gradient = dJ(theta)
last_theta = theta
theta = theta - eta * gradient
theta_history.append(theta)
if(abs(J(theta) - J(last_theta)) < epsilon):
break
i_iter += 1
return
eta = 1.1
theta_history = []
gradient_descent(0, eta)
len(theta_history)
eta = 1.1
theta_history = []
gradient_descent(0, eta, n_iters=10)
plot_theta_history()
###Output
_____no_output_____ |
predict-future-sales.ipynb | ###Markdown
Parsed dates
When we work with time data, enrich the time component as much as possible.
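A minimal sketch of the idea on throw-away data (not the competition file): once a column is parsed as datetime, the `.dt` accessor exposes the components we add below.
```python
import pandas as pd

toy = pd.DataFrame({"date": ["02.01.2013", "15.03.2014"]})
toy["date"] = pd.to_datetime(toy["date"], format="%d.%m.%Y")
toy["year"] = toy["date"].dt.year
toy["month"] = toy["date"].dt.month
toy["dayofweek"] = toy["date"].dt.dayofweek  # Monday=0 ... Sunday=6
print(toy)
```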
###Code
# Import data again but this time parse dates
df = pd.read_csv("data/sales_train.csv", low_memory=False, parse_dates = ["date"])
df.date.dtype
df.date[:1000]
fig, ax = plt.subplots()
ax.scatter(df["date"][:1000], df["item_price"][:1000])
df.head()
df.head().T
df.date.head(20)
###Output
_____no_output_____
###Markdown
When working with dates, it's a good idea to sort them into order.
###Code
# Sort DataFrame in date order
df.sort_values(by=["date"], inplace = True, ascending = True)
df.date.head(20)
###Output
_____no_output_____
###Markdown
Make a copy of the original dataframe, so if something goes wrong we still have it
###Code
#Make a copy
df_tmp = df.copy()
###Output
_____no_output_____
###Markdown
Add datetime parameters as sale-date features
###Code
df_tmp["saleYear"] = df_tmp.date.dt.year
df_tmp["saleMonth"] = df_tmp.date.dt.month
df_tmp["saleDay"] = df_tmp.date.dt.day
df_tmp["saleDayOfWeek"] = df_tmp.date.dt.dayofweek
df_tmp["saleDayOfYear"] = df_tmp.date.dt.dayofyear
df_tmp.head()
# Now we've enriched our DataFrame with datetime features, we can remove the original date column
df_tmp.drop("date", axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Modelling the data
###Code
%%time
# Let's build a machine learning model
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1, random_state=42)
model.fit(df_tmp.drop("item_price", axis=1), df_tmp["item_price"])
model.score(df_tmp.drop("item_price", axis=1), df_tmp.item_price)
###Output
_____no_output_____
###Markdown
Splitting data into training and validation sets
###Code
df_tmp.head()
df_tmp.saleYear.value_counts()
# Split data into training and validation
df_val = df_tmp[df_tmp.saleYear == 2015]
df_train = df_tmp[df_tmp.saleYear != 2015]
len(df_val), len(df_train)
# Split data into X & y
X_train, y_train = df_train.drop("item_price", axis=1), df_train.item_price
X_valid, y_valid = df_val.drop("item_price", axis=1), df_val.item_price
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
###Output
_____no_output_____
###Markdown
Building an evaluation function
Kaggle uses RMSLE, so we'll use that to evaluate this project. We'll also calculate MAE and R^2.
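For reference, the metric implemented in the next cell is
$$\mathrm{RMSLE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log(1 + \hat{y}_i) - \log(1 + y_i)\right)^2},$$
which is `sklearn.metrics.mean_squared_log_error` followed by a square root.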
###Code
# Create evaluation function (the competition uses Root Mean Square Log Error)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error
def rmsle(y_test, y_preds):
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate our model
def show_scores(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2": model.score(X_train, y_train),
"Valid R^2": model.score(X_valid, y_valid)}
return scores
###Output
_____no_output_____
###Markdown
Testing our model on a subset (to tune hyperparameters)
Retraining an entire model would take far too long, so we'll take a sample of the training set and tune the hyperparameters on that.
###Code
len(X_train)
# Change max samples in RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1,
max_samples=100000)
%%time
# Cutting down the max number of samples each tree can see improves training time
model.fit(X_train, y_train)
show_scores(model)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning with RandomizedSearchCV
###Code
%%time
from sklearn.model_selection import RandomizedSearchCV
# Different RandomForestClassifier hyperparameters
rf_grid = {"n_estimators": np.arange(10, 100, 10),
"max_depth": [None, 3, 5, 10],
"min_samples_split": np.arange(2, 20, 2),
"min_samples_leaf": np.arange(1, 20, 2),
"max_features": [0.5, 1, "sqrt", "auto"],
"max_samples": [10000]}
rs_model = RandomizedSearchCV(RandomForestRegressor(),
param_distributions=rf_grid,
n_iter=20,
cv=5,
verbose=True)
rs_model.fit(X_train, y_train)
# Find the best parameters from the RandomizedSearch
rs_model.best_params_
# Evaluate the RandomizedSearch model
show_scores(rs_model)
###Output
_____no_output_____
###Markdown
Train model with best parameters
###Code
%%time
# Most ideal hyperparameters
ideal_model = RandomForestRegressor(n_estimators=90,
min_samples_leaf=1,
min_samples_split=14,
max_features=0.5,
n_jobs=-1,
max_samples=None)
ideal_model.fit(X_train, y_train)
show_scores(ideal_model)
%%time
# Faster model
fast_model = RandomForestRegressor(n_estimators=40,
min_samples_leaf=3,
max_features=0.5,
n_jobs=-1)
fast_model.fit(X_train, y_train)
show_scores(fast_model)
###Output
_____no_output_____
###Markdown
Make predictions on test data
Our model is trained on data prior to 2015. What we're doing is taking similar data and predicting the price of future products.
###Code
df_test = pd.read_csv("data/test.csv")
df_test.head()
# We can find how the columns differ using sets
set(X_train.columns) - set(df_test.columns)
# Match test dataset columns to training dataset
df_test["date_block_num"] = False
df_test.head()
# Match test dataset columns to training dataset
df_test["saleDay"] = False
df_test["saleDayOfWeek"] = False
df_test["saleDayOfYear"] = False
df_test["saleMonth"] = False
df_test["saleYear"] = False
df_test.head()
# Make predictions on the test dataset using the best model
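# A possible next step (sketch only, not executed here), assuming df_test now
# carries every column the model was trained on:
#     test_preds = ideal_model.predict(df_test[X_train.columns])
# Indexing by X_train.columns keeps the column order identical to training.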
# Find feature importance of our best model
ideal_model.feature_importances_
# Install Seaborn package in current environment (if you don't have it)
import sys
!conda install --yes --prefix {sys.prefix} seaborn
import seaborn as sns
# Helper function for plotting feature importance
def plot_features(columns, importances, n=20):
df = (pd.DataFrame({"features": columns,
"feature_importance": importances})
.sort_values("feature_importance", ascending=False)
.reset_index(drop=True))
sns.barplot(x="feature_importance",
y="features",
data=df[:n],
orient="h")
plot_features(X_train.columns, ideal_model.feature_importances_)
sum(ideal_model.feature_importances_)
# Import data again but this time parse dates
df = pd.read_csv("data/sales_train.csv", low_memory=False, parse_dates = ["date"])
#Make a copy
df_tmp = df.copy()
df_tmp["saleYear"] = df_tmp.date.dt.year
df_tmp["saleMonth"] = df_tmp.date.dt.month
df_tmp["saleDay"] = df_tmp.date.dt.day
df_tmp["saleDayOfWeek"] = df_tmp.date.dt.dayofweek
df_tmp["saleDayOfYear"] = df_tmp.date.dt.dayofyear
df_tmp.drop("date", axis=1, inplace=True)
%%time
# Let's build a machine learning model
from sklearn.linear_model import LinearRegression
model = RandomForestRegressor(n_jobs=-1, random_state=42)
model.fit(df_tmp.drop("item_price", axis=1), df_tmp["item_price"])
model.score(df_tmp.drop("item_price", axis=1), df_tmp.item_price)
df_tmp.head()
df_tmp.saleYear.value_counts()
# Split data into training and validation
df_val = df_tmp[df_tmp.saleYear == 2015]
df_train = df_tmp[df_tmp.saleYear != 2015]
len(df_val), len(df_train)
# Split data into X & y
X_train, y_train = df_train.drop("item_price", axis=1), df_train.item_price
X_valid, y_valid = df_val.drop("item_price", axis=1), df_val.item_price
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
# Create evaluation function (the competition uses Root Mean Square Log Error)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error
def rmsle(y_test, y_preds):
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate our model
def show_scores(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2": model.score(X_train, y_train),
"Valid R^2": model.score(X_valid, y_valid)}
return scores
# Try a plain LinearRegression for comparison
model = LinearRegression(n_jobs=-1)
%%time
# Fit the linear model (no trees here, so max_samples does not apply)
model.fit(X_train, y_train)
show_scores(model)
show_scores(ideal_model)
# Find feature importance of our best model
ideal_model.feature_names_in_
import seaborn as sns
# Helper function for plotting feature importance
def plot_features(columns, importances, n=20):
df = (pd.DataFrame({"features": columns,
"feature_importance": importances})
.sort_values("feature_importance", ascending=False)
.reset_index(drop=True))
sns.barplot(x="feature_importance",
y="features",
data=df[:n],
orient="h")
# Import data again but this time parse dates
df = pd.read_csv("data/sales_train.csv", low_memory=False, parse_dates = ["date"])
#Make a copy
df_tmp = df.copy()
df_tmp["saleYear"] = df_tmp.date.dt.year
df_tmp["saleMonth"] = df_tmp.date.dt.month
df_tmp["saleDay"] = df_tmp.date.dt.day
df_tmp["saleDayOfWeek"] = df_tmp.date.dt.dayofweek
df_tmp["saleDayOfYear"] = df_tmp.date.dt.dayofyear
df_tmp.drop("date", axis=1, inplace=True)
%%time
# Let's build a machine learning model
from sklearn import linear_model
model = linear_model.Ridge(alpha=.5)
model.fit(df_tmp.drop("item_price", axis=1), df_tmp["item_price"])
model.score(df_tmp.drop("item_price", axis=1), df_tmp.item_price)
# Split data into training and validation
df_val = df_tmp[df_tmp.saleYear == 2015]
df_train = df_tmp[df_tmp.saleYear != 2015]
len(df_val), len(df_train)
# Split data into X & y
X_train, y_train = df_train.drop("item_price", axis=1), df_train.item_price
X_valid, y_valid = df_val.drop("item_price", axis=1), df_val.item_price
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
# Create evaluation function (the competition uses Root Mean Square Log Error)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error
def rmsle(y_test, y_preds):
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate our model
def show_scores(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2": model.score(X_train, y_train),
"Valid R^2": model.score(X_valid, y_valid)}
return scores
%%time
# Fit the ridge regression model
model.fit(X_train, y_train)
###Output
Wall time: 11min 33s
|
notebooks/06.a.introduction_to_pandas.ipynb | ###Markdown
**Note:**
This only creates a view of the data!
 Pandas IO
Pandas comes with a wide array of input/output modules, see
https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html
**NOTE:** reading xlsx is _much_ slower than csv
 Your request: Scraping websites!
Today with Pandas we scrape Wikipedia, in particular the list of the oldest universities!
Alternatively, use Beautiful Soup https://www.crummy.com/software/BeautifulSoup/bs4/doc/ or Scrapy https://scrapy.org/
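Before the scraping example, a tiny csv round trip (a sketch with throw-away data) shows the usual `pd.read_*` / `.to_*` pattern:
```python
import pandas as pd

tiny = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})
tiny.to_csv("tiny.csv", index=False)    # writers are DataFrame.to_<format> methods
round_trip = pd.read_csv("tiny.csv")    # readers are pd.read_<format> functions
print(round_trip.equals(tiny))          # True
```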
###Code
url = "https://en.wikipedia.org/wiki/List_of_oldest_universities_in_continuous_operation"
###Output
_____no_output_____
###Markdown
Let's bail out of the SSL context for the sake of this class :)
###Code
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
dfs = pd.read_html(url) # do you get SSL: CERTIFICATE_VERIFY_FAILED ?
len(dfs)
dfs[0].head()
udf = dfs[0]
udf.columns
###Output
_____no_output_____
###Markdown
Multi-indices make pandas very powerful, but they take time to get used to; see more below.
For now let's get rid of them...
###Code
udf.columns = [ e[0] for e in udf.columns ]
udf.head()
udf.columns = ['Year', 'University', 'H-Location', 'G-Location', 'Notes' ]
udf.head()
###Output
_____no_output_____
###Markdown
Most of the time, such data needs cleanup, e.g. Year should ideally be a date or at least a year.
 Gather some basic information about the dataframe
###Code
udf.describe()
udf.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 35 entries, 0 to 34
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Year 35 non-null object
1 University 35 non-null object
2 H-Location 35 non-null object
3 G-Location 35 non-null object
4 Notes 24 non-null object
dtypes: object(5)
memory usage: 1.5+ KB
###Markdown
Cleaning up data takes a lot of time and needs to be done diligently! Let's clean up the Year column.
 Accessing the str properties!
###Code
udf['Year'].str.match(r'^(?P<year>[0-9]{4})')
udf.loc[15]
udf['year'] = udf.Year.str.extract(r'(?P<year>[0-9]{4})')
udf.head()
udf.loc[15]
udf.shape
# (rows, columns)
###Output
_____no_output_____
###Markdown
One cannot visualize all columns straight away in jupyter :( However redefining some options helps!
###Code
pd.set_option("max_columns", 2000)
###Output
_____no_output_____
###Markdown
Sorting
###Code
udf.head()
udf.sort_values(['year'])
###Output
_____no_output_____
###Markdown
`sort_values` has kwargs like `ascending=True|False`, and the sort keys are given as a list, i.e. sort first by ..., then by ...
###Code
udf.sort_values(['H-Location','year'])
###Output
_____no_output_____
###Markdown
Let's split the G-Location into city and country!
###Code
tmp_df = udf['G-Location'].str.split(",")
display(tmp_df.head()) # not quite what we want .. we want two columns!
###Output
_____no_output_____
###Markdown
How to get two columns?
###Code
tmp_df = udf['G-Location'].str.split(",", expand=True)
tmp_df.columns = ['G-City', 'G-Country']
tmp_df
udf = udf.join(tmp_df)
# there are many options to join frames
udf.head()
###Output
_____no_output_____
###Markdown
Deleting things
###Code
udf.head()
udf.drop(1)
udf.head(3)
udf.drop(columns=['G-Location', 'Year'])
###Output
_____no_output_____
###Markdown
Dataframes or series are not automatically "adjusted" unless you use `inplace=True`
###Code
udf
udf.drop(columns=['G-Location', 'Year'], inplace=True)
udf
###Output
_____no_output_____
###Markdown
slicing and dicing
###Code
udf[:3] # df[:'r3'] works as well
# selecting one column!
udf['G-Country']
udf.describe()
# selecting one row
udf.loc[1]
udf.info()
# mask also work on df!
mask = udf['year'] < 1400
mask.head(10)
# casting columns into data types
udf.year = udf.year.astype(int)
_udf = udf.convert_dtypes()
_udf.info()
# mask also work on df!
mask = udf.year < 1400
mask.head(10)
udf[mask]
udf[udf['year'] < 1300] # reduces the data frame, again note! that is just a view, not a copy!
udf[udf['year'] < 1300].loc[1]
udf[udf['year'] > 1300].loc[1]
udf[udf['year'] > 1300].head(3)
# How would I know which index is the first one in my masked selection ?
# Answer: you don't need to if you use iloc! :)
udf[udf['year'] > 1300].iloc[0]
###Output
_____no_output_____
###Markdown
more natural query - or isn't it?
###Code
udf.query("year > 1300").head(5)
udf.query("1349 > year > 1320")
# Using local variables in queries
upper_limit = 1400
udf.query("@upper_limit > year > 1320")
###Output
_____no_output_____
###Markdown
Find the maximum for a given series or dataframe
###Code
udf['year'].idxmax()
###Output
_____no_output_____
###Markdown
Unique values and their count
###Code
udf['G-Country'].unique()
udf['G-Country'].nunique()
udf['G-Country'].value_counts()
_udf = udf.set_index('University')
# Grab some ramdom rows
_udf.sample(5)
_udf.loc['Ruprecht Karl University of Heidelberg', ['Notes', 'year']]
_udf.loc['Ruprecht Karl University of Heidelberg', :]
###Output
_____no_output_____
###Markdown
Done with Basics!
Take a look at the cheat sheet for a summary:
https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf
 Hierarchical indexing
###Code
s = pd.Series(
np.random.randn(5),
index = [
['p1','p1','p2','p2','p3'],
['a','b','a','d','a']
]
)
s
s.index
s.index.names = ['probability', 'type']
s
s['p1']
s[:, 'a'] # lower level
s2 = s.unstack()
print(type(s2))
s2
s3 = s2.stack()
print(type(s3))
s3
###Output
<class 'pandas.core.series.Series'>
###Markdown
MultiIndex with DataFrames
###Code
df = pd.DataFrame(
[
c,
c * 20,
d,
np.exp(d),
pd.Series(np.random.randn(4), index=['r2', 'r3', 'r4', 'r5'])
],
index = [
['p1','p1','p2','p2','p3'],
['a','b','a','d','a']
]
)
df.index.names = ['probability', 'type']
df
df = df.fillna(0)
df
###Output
_____no_output_____
###Markdown
**Note**: You can create multi-indices from a regular dataframe!
###Code
df2 = df.reset_index()
df2
df2.set_index(['probability', 'type'])
df2 = df.swaplevel('probability', 'type')
df2
df2.sort_index(axis=0, level=0, inplace=True)
df2
###Output
_____no_output_____
###Markdown
Natural slicing using `pandas.IndexSlice` objects
###Code
idx = pd.IndexSlice
df2.loc[
idx[:, ["p1", "p2"]],
:
]
###Output
_____no_output_____
###Markdown
Long and wide formats
Long format - easy for computers to read and handle - each row holds a single observation (one variable-value pair)
Wide format - easy for humans to read - each variable has its own column
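A tiny illustration with throw-away data (the same idea as the `melt`/`pivot_table` calls below):
```python
import pandas as pd

wide = pd.DataFrame({"sample": ["s1", "s2"], "r2": [1.0, 2.0], "r3": [3.0, 4.0]})
long = wide.melt(id_vars="sample", var_name="r_stage", value_name="score")
print(long)    # long: one row per (sample, r_stage) observation
back = long.pivot_table(index="sample", columns="r_stage", values="score")
print(back)    # wide again: one column per variable
```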
###Code
df3 = df2.reset_index()
df3.sort_values(["probability", "type"], inplace=True)
df3
df4 = df3.melt(
id_vars=['type','probability'],
var_name='r_stage',
value_name='score'
)
print(df4.shape)
df4.sort_values(["type", "probability"], inplace=True)
df4.head(7)
###Output
(25, 4)
###Markdown
Think of selecting data, for example for plotting, that should meet the following criteria:
* probability == p1
* r_stage in [r2, r3]
This is much easier in long format.
###Code
# going back to the more human friendlier version ! :)
df5 = df4.pivot_table(index=['type', 'probability'], columns='r_stage', values="score")
df5
###Output
_____no_output_____
###Markdown
Pandas level 1
Data wrangling 101
I'd like to say Pandas is numpy on steroids, but it is actually much more.
Pandas is the data science solution for Python and it builds on top of the powerful numpy module.
However, Pandas offers elements that are much more intuitive or go beyond what numpy has ever provided.
Nevertheless, numpy is more performant in some cases (by a lot, yet remember when to optimize!)
 The perfect is the death of the good. -- M. Gunner
Pandas was created by [Wes McKinney](https://wesmckinney.com/pages/about.html) in early 2008 at AQR Capital Management, and I can recommend "Python for Data Analysis" by Wes, published via O'Reilly, and "Pandas for Everyone" by Daniel Y. Chen. The following Pandas chapters are inspired by these books.
Pandas offers two basic data structures:
* Series
* Dataframes
###Code
import pandas as pd
c = pd.Series(
[12, 13, 14, 121],
index=['r1', 'r2', 'r3', 'r4']
)
c
###Output
_____no_output_____
###Markdown
Selecting from Series works like a dict :)
###Code
c['r2']
mask = c >= 13
mask
c[mask]
###Output
_____no_output_____
###Markdown
Masks can be additive!
###Code
mask2 = c < 20
c[mask & mask2]
c * 10
# works also with vectorized math operations
# it is vectorized so it runs faster ---> on the C side
import numpy as np
np.exp(c)
###Output
_____no_output_____
###Markdown
Remember to use numpy functions as much as possible so data remains on the "C side". More below!
 Operations conserve index!
Series are like ordered Dicts!
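A quick check of the first two claims on a small copy of the series above:
```python
import numpy as np
import pandas as pd

s = pd.Series([12, 13, 14, 121], index=["r1", "r2", "r3", "r4"])
logged = np.log(s)      # the ufunc runs vectorized on the C side
print(type(logged))     # still a pandas Series
print(logged.index)     # the original index is preserved
```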
###Code
'r1' in c
###Output
_____no_output_____
###Markdown
np.nan is the missing value indicator
###Code
d = pd.Series(
{
'r1': np.nan,
'r2': 0.2,
'r3': 0.2,
'r4': 0.4
}
)
d
###Output
_____no_output_____
###Markdown
Which values are nan?
###Code
d.isna() # returns a mask!
# inverting with ~!
~d.isna()
d.notnull()
###Output
_____no_output_____
###Markdown
indices are aligned automatically!
###Code
c
d = pd.Series(
[10,20,30,40],
index=['r2', 'r3', 'r4', 'r5']
)
d
c + d
###Output
_____no_output_____
###Markdown
Renaming index
###Code
d.index = ['r1', 'r2', 'r3', 'r4'] # now the indices are the same in c and d!
c + d
###Output
_____no_output_____
###Markdown
Naming things will help you to get your data organised better. Explicit is better than implicit! And remember to choose your variable names wisely - you will read code more often than you write it.
###Code
d.index.name = "variable"
d.name = "counts"
d
d.reset_index()
###Output
_____no_output_____
###Markdown
Resetting the index turns the index into a series, so now we have a DataFrame with two series!
###Code
type(d.reset_index())
###Output
_____no_output_____
###Markdown
Data frames
Data frames are the pandas 2d data containers (if there is only one index dimension). In principle, a data frame is a list of Series, where each row is a series.
###Code
df = pd.DataFrame(
[
c,
d, # this one we named :)
pd.Series([100,102,103,104], index=['r2', 'r3', 'r4', 'r5'])
]
)
df
# accessing a value
df.loc['counts', 'r2']
###Output
_____no_output_____
###Markdown
Note how pandas aligns your data automatically. If you want each series to be treated as a column, just transpose.
 DataFrames can be constructed in many different ways, see the documentation for more details:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html?highlight=dataframepandas.DataFrame
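For example, two other common constructors (a sketch, not from the lecture data): from a dict of columns and from a 2d numpy array.
```python
import numpy as np
import pandas as pd

# from a dict: keys become column names
df_from_dict = pd.DataFrame({"count1": [12, 10], "count2": [13, 20]}, index=["r1", "r2"])

# from a 2d array: supply index and columns explicitly
df_from_array = pd.DataFrame(np.arange(6).reshape(2, 3),
                             index=["r1", "r2"],
                             columns=["c1", "c2", "c3"])
print(df_from_dict)
print(df_from_array)
```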
###Code
df = df.T
df
###Output
_____no_output_____
###Markdown
Renaming columns in a data frame
###Code
df.columns = ['count1', 'count2', 'count3']
df
###Output
_____no_output_____
###Markdown
Dataframes can equally be named, for your sanity, name them :)
###Code
df.columns.name = "Counts"
df.index.name = "variable"
df
###Output
_____no_output_____
###Markdown
Now that you feel happy in the pandas world, some modules/functions require numpy arrays, how do you convert them ?
###Code
np_df = df.values # pandas df in numpy array
np_df
type(np_df)
###Output
_____no_output_____
###Markdown
If you need to work "longer" on the numpy side, I suggest transforming the pandas dataframe into a numpy recarray, as the names are preserved:
###Code
# np_df = df.values #
np_df = df.to_records()
np_df
np_df['variable']
np_df[0]
np_df[0][2]
###Output
_____no_output_____
###Markdown
C-side and Python side
**Note**: Regular Python floats live in the Python world - Numpy and Pandas live in the "C world", hence their fast vectorized operations. If you can avoid it, don't cast between the worlds!
###Code
long_series = pd.Series(
np.random.randn(1_000_000),
)
%%timeit -n 1
a = long_series.to_list() # to python list!
print(f"a is a {type(a)} now!")
pd.Series(a)
%%timeit -n 1
a = long_series.to_numpy()
print(f"a is a {type(a)} now!")
pd.Series(a)
###Output
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
The slowest run took 4.47 times longer than the fastest. This could mean that an intermediate result is being cached.
90.2 µs ± 65.3 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Operations between DataFrame and Series
###Code
df_small = pd.DataFrame([c, d])
df_small
c
df_small - c
###Output
_____no_output_____
###Markdown
Next time you want to normalize each row of a data frame, you can define the correction factors as a series and just, e.g., subtract it.
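A minimal sketch of that normalization idea (made-up correction factors):
```python
import pandas as pd

counts = pd.DataFrame({"r1": [12.0, 10.0], "r2": [13.0, 20.0]}, index=["sample_a", "sample_b"])
correction = pd.Series({"r1": 2.0, "r2": 3.0})   # one correction factor per column
normalized = counts - correction                 # the series aligns to the columns and is subtracted from every row
print(normalized)
```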
###Code
df
df.rename(
columns={'count1':'count_reference'},
inplace=True
)
df
# subselecting a set of columns!
df[["count2", 'count3']]
###Output
_____no_output_____
###Markdown
**Note:**
This only creates a view of the data!
 Pandas IO
Pandas comes with a wide array of input/output modules, see
https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html
**NOTE:** reading xlsx is _much_ slower than csv
 Your request: Scraping websites!
Today with Pandas we scrape Wikipedia, in particular the list of the oldest universities!
Alternatively, use Beautiful Soup https://www.crummy.com/software/BeautifulSoup/bs4/doc/ or Scrapy https://scrapy.org/
###Code
url = "https://en.wikipedia.org/wiki/List_of_oldest_universities_in_continuous_operation"
###Output
_____no_output_____
###Markdown
Let's bail out of the SSL context for the sake of this class :)
###Code
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
!pip install lxml
dfs = pd.read_html(url) # do you get SSL: CERTIFICATE_VERIFY_FAILED ?
len(dfs)
dfs[0].head()
udf = dfs[0]
udf.columns
###Output
_____no_output_____
###Markdown
Multi-indices make pandas very powerful, but they take time to get used to; see more below.
For now let's get rid of them...
###Code
udf.columns = [ e[0] for e in udf.columns ]
udf.head()
udf.columns = ['Year', 'University', 'H-Location', 'G-Location', 'Notes' ]
udf.head()
###Output
_____no_output_____
###Markdown
Most of the time, such data needs cleanup, e.g. Year should ideally be a date or at least a year.
 Gather some basic information about the dataframe
###Code
udf.describe()
udf.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 35 entries, 0 to 34
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Year 35 non-null object
1 University 35 non-null object
2 H-Location 35 non-null object
3 G-Location 35 non-null object
4 Notes 24 non-null object
dtypes: object(5)
memory usage: 1.5+ KB
###Markdown
Cleaning up data takes a lot of time and needs to be done diligently! Let's clean up the Year column.
 Accessing the str properties!
###Code
udf['Year'].str.match(r'^(?P<year>[0-9]{4})') # regex, see pythex.org
udf.loc[15]
udf['year'] = udf.Year.str.extract(r'(?P<year>[0-9]{4})')
udf.head()
udf.loc[15]
udf.shape
# (rows, columns)
###Output
_____no_output_____
###Markdown
One cannot visualize all columns straight away in jupyter :( However redefining some options helps!
###Code
pd.set_option("max_columns", 2000)
###Output
_____no_output_____
###Markdown
Sorting
###Code
udf.head()
udf.sort_values(['year'])
###Output
_____no_output_____
###Markdown
`sort_values` has kwargs like `ascending=True|False`, and the sort keys are given as a list, i.e. sort first by ..., then by ...
###Code
udf.sort_values(['H-Location','year'])
###Output
_____no_output_____
###Markdown
Let's split the G-Location into city and country!
###Code
tmp_df = udf['G-Location'].str.split(",")
display(tmp_df.head()) # not quite what we want .. we want two columns!
###Output
_____no_output_____
###Markdown
How to get two columns?
###Code
tmp_df = udf['G-Location'].str.split(",", expand=True)
tmp_df.columns = ['G-City', 'G-Country']
tmp_df
udf = udf.join(tmp_df)
# there are many options to join frames
udf.head()
###Output
_____no_output_____
###Markdown
Deleting things
###Code
udf.head()
udf.drop(1)
udf.head(3)
udf.drop(columns=['G-Location', 'Year'])
###Output
_____no_output_____
###Markdown
Dataframes or series are not automatically "adjusted" unless you use `inplace=True`
###Code
udf
udf.drop(columns=['G-Location', 'Year'], inplace=True)
udf
###Output
_____no_output_____
###Markdown
slicing and dicing
###Code
udf[:3] # df[:'r3'] works as well
# selecting one column!
udf['G-Country']
udf.describe()
# selecting one row
udf.loc[1]
udf.info()
# mask also work on df!
mask = udf['year'] < 1400
mask.head(10)
# casting columns into data types
udf.year = udf.year.astype(int)
_udf = udf.convert_dtypes() # tries to interpret columns as numbers
_udf.info()
# mask also work on df!
mask = udf.year < 1400
mask.head(10)
udf[mask]
udf[udf['year'] < 1300] # reduces the data frame, again note! that is just a view, not a copy!
udf[udf['year'] < 1300].loc[1]
udf[udf['year'] > 1300].loc[1] # but we don't know whether index 1 is in the selection
udf[udf['year'] > 1300].head(3)
# How would I know which index is the first one in my masked selection ?
# Answer: you don't need to if you use iloc! :)
udf[udf['year'] > 1300].iloc[0]
###Output
_____no_output_____
###Markdown
more natural query - or isn't it?
###Code
udf.query("year > 1300").head(5)
udf.query("1349 > year > 1320")
# Using local variables in queries
upper_limit = 1400
udf.query("@upper_limit > year > 1320")
###Output
_____no_output_____
###Markdown
Find the maximum for a given series or dataframe
###Code
udf['year'].idxmax()
udf['year'].max()
###Output
_____no_output_____
###Markdown
Unique values and their count
###Code
udf['G-Country'].unique()
udf['G-Country'].nunique() # how many unique values
udf['G-Country'].value_counts()
_udf = udf.set_index('University')
# Grab some ramdom rows
_udf.sample(5)
_udf.loc['Ruprecht Karl University of Heidelberg', ['Notes', 'year']]
_udf.loc['Ruprecht Karl University of Heidelberg', :]
###Output
_____no_output_____
###Markdown
Done with Basics!
Take a look at the cheat sheet for a summary:
https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf
 Hierarchical indexing
###Code
s = pd.Series(
np.random.randn(5),
index = [
['p1','p1','p2','p2','p3'],
['a','b','a','d','a']
]
)
s
s.index
s.index.names = ['probability', 'type']
s
s['p1']
s[:, 'a'] # lower level
s2 = s.unstack()
print(type(s2))
s2
s3 = s2.stack()
print(type(s3))
s3
###Output
<class 'pandas.core.series.Series'>
###Markdown
MultiIndex with DataFrames
###Code
df = pd.DataFrame(
[
c,
c * 20,
d,
np.exp(d),
pd.Series(np.random.randn(4), index=['r2', 'r3', 'r4', 'r5'])
],
index = [
['p1','p1','p2','p2','p3'],
['a','b','a','d','a']
]
)
df.index.names = ['probability', 'type']
df
df = df.fillna(0)
df
###Output
_____no_output_____
###Markdown
**Note**: You can create multi-indices from a regular dataframe!
###Code
df2 = df.reset_index()
df2
df2.set_index(['probability', 'type'])
df2 = df.swaplevel('probability', 'type')
df2
df2.sort_index(axis=0, level=0, inplace=True)
df2
###Output
_____no_output_____
###Markdown
Natural slicing using `pandas.IndexSlice` objects
###Code
idx = pd.IndexSlice
df2.loc[
idx[:, ["p1", "p2"]],
:
]
###Output
_____no_output_____
###Markdown
Long and wide formats
Long format - easy for computers to read and handle - each row holds a single observation (one variable-value pair)
Wide format - easy for humans to read - each variable has its own column
###Code
df3 = df2.reset_index()
df3.sort_values(["probability", "type"], inplace=True)
df3
df4 = df3.melt(
id_vars=['type','probability'],
var_name='r_stage',
value_name='score'
)
print(df4.shape)
df4.sort_values(["type", "probability"], inplace=True)
df4.head(7)
###Output
(25, 4)
###Markdown
Think of selecting data, for example for plotting, that should meet the following criteria:
* probability == p1
* r_stage in [r2, r3]
This is much easier in long format.
###Code
# going back to the more human friendlier version ! :)
df5 = df4.pivot_table(index=['type', 'probability'], columns='r_stage', values="score")
df5
###Output
_____no_output_____
###Markdown
Pandas level 1
Data wrangling 101
I'd like to say Pandas is numpy on steroids, but it is actually much more.
Pandas is the data science solution for Python and it builds on top of the powerful numpy module.
However, Pandas offers elements that are much more intuitive or go beyond what numpy has ever provided.
Nevertheless, numpy is more performant in some cases (by a lot, yet remember when to optimize!)
 The perfect is the death of the good. -- M. Gunner
Pandas was created by [Wes McKinney](https://wesmckinney.com/pages/about.html) in early 2008 at AQR Capital Management, and I can recommend "Python for Data Analysis" by Wes, published via O'Reilly, and "Pandas for Everyone" by Daniel Y. Chen. The following Pandas chapters are inspired by these books.
Pandas offers two basic data structures:
* Series
* Dataframes
###Code
import pandas as pd
c = pd.Series(
[12, 13, 14, 121],
index=['r1', 'r2', 'r3', 'r4']
)
c
###Output
_____no_output_____
###Markdown
Selecting from Series works like a dict :)
###Code
c['r2']
mask = c >= 13
mask
c[mask]
###Output
_____no_output_____
###Markdown
Masks can be additive!
###Code
mask2 = c < 20
c[mask & mask2]
c * 10
# works also with vectorized math operations
import numpy as np
np.exp(c)
###Output
_____no_output_____
###Markdown
Remember to use numpy functions as much as possible so data remains on the "C side". More below!
 Operations conserve index!
Series are like ordered Dicts!
###Code
'r1' in c
###Output
_____no_output_____
###Markdown
np.nan is the missing value indicator
###Code
d = pd.Series(
{
'r1': np.nan,
'r2': 0.2,
'r3': 0.2,
'r4': 0.4
}
)
d
###Output
_____no_output_____
###Markdown
Which values are nan?
###Code
d.isna() # returns a mask!
# inverting with ~!
~d.isna()
d.notnull()
###Output
_____no_output_____
###Markdown
indices are aligned automatically!
###Code
c
d = pd.Series(
[10,20,30,40],
index=['r2', 'r3', 'r4', 'r5']
)
d
c + d
###Output
_____no_output_____
###Markdown
Renaming index
###Code
d.index = ['r1', 'r2', 'r3', 'r4'] # now the indices are the same in c and d!
c + d
###Output
_____no_output_____
###Markdown
Naming things will help you to get your data organised better. Explicit is better than implicit! And remember to choose your variable names wisely - you will read code more often than you write it.
###Code
d.index.name = "variable"
d.name = "counts"
d
d.reset_index()
###Output
_____no_output_____
###Markdown
Resetting the index turns the index into a series, so now we have a DataFrame with two series!
###Code
type(d.reset_index())
###Output
_____no_output_____
###Markdown
Data frames
Data frames are the pandas 2d data containers (if there is only one index dimension). In principle, a data frame is a list of Series, where each row is a series.
###Code
df = pd.DataFrame(
[
c,
d, # this one we named :)
pd.Series([100,102,103,104], index=['r2', 'r3', 'r4', 'r5'])
]
)
df
# accessing a value
df.loc['counts', 'r2']
###Output
_____no_output_____
###Markdown
Note how pandas aligns your data automatically. If you want each series to be treated as a column, just transpose.
 DataFrames can be constructed in many different ways, see the documentation for more details:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html?highlight=dataframepandas.DataFrame
###Code
df = df.T
df
###Output
_____no_output_____
###Markdown
Renaming columns in a data frame
###Code
df.columns = ['count1', 'count2', 'count3']
df
###Output
_____no_output_____
###Markdown
Dataframes can equally be named, for your sanity, name them :)
###Code
df.columns.name = "Counts"
df.index.name = "variable"
df
###Output
_____no_output_____
###Markdown
Now that you feel happy in the pandas world, some modules/functions require numpy arrays, how do you convert them ?
###Code
np_df = df.values
np_df
type(np_df)
###Output
_____no_output_____
###Markdown
If you need to work "longer" on the numpy side, I suggest transforming the pandas dataframe into a numpy recarray, as the names are preserved:
###Code
# np_df = df.values #
np_df = df.to_records()
np_df
np_df['variable']
np_df[0]
np_df[0][2]
###Output
_____no_output_____
###Markdown
C-side and Python side
**Note**: Regular Python floats live in the Python world - Numpy and Pandas live in the "C world", hence their fast vectorized operations. If you can avoid it, don't cast between the worlds!
###Code
long_series = pd.Series(
np.random.randn(1000000),
)
%%timeit -n 1
a = long_series.to_list() # to python list!
print(f"a is a {type(a)} now!")
pd.Series(a)
%%timeit -n 1
a = long_series.to_numpy()
print(f"a is a {type(a)} now!")
pd.Series(a)
###Output
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
The slowest run took 5.39 times longer than the fastest. This could mean that an intermediate result is being cached.
62.8 µs ± 49.3 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Operations between DataFrame and Series
###Code
df_small = pd.DataFrame([c, d])
df_small
c
df_small - c
###Output
_____no_output_____
###Markdown
Next time you want to normalize each row of a data frame, you can define the correction factors as a series and just, e.g., subtract it.
###Code
df
df.rename(
columns={'count1':'count_reference'},
inplace=True
)
df
# subselecting a set of columns!
df[["count2", 'count3']]
###Output
_____no_output_____
###Markdown
**Note:**
This only creates a view of the data!
 Pandas IO
Pandas comes with a wide array of input/output modules, see
https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html
**NOTE:** reading xlsx is _much_ slower than csv
 Your request: Scraping websites!
Today with Pandas we scrape Wikipedia, in particular the list of the oldest universities!
Alternatively, use Beautiful Soup https://www.crummy.com/software/BeautifulSoup/bs4/doc/ or Scrapy https://scrapy.org/
###Code
url = "https://en.wikipedia.org/wiki/List_of_oldest_universities_in_continuous_operation"
###Output
_____no_output_____
###Markdown
Let's bail out of the SSL context for the sake of this class :)
###Code
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
!pip install lxml
dfs = pd.read_html(url) # do you get SSL: CERTIFICATE_VERIFY_FAILED ?
len(dfs)
dfs[0].head()
udf = dfs[0]
udf.columns
###Output
_____no_output_____
###Markdown
Multi-indices make pandas very powerful, but they take time to get used to; see more below.
For now let's get rid of them...
###Code
udf.columns = [ e[0] for e in udf.columns ]
udf.head()
udf.columns = ['Year', 'University', 'H-Location', 'G-Location', 'Notes' ]
udf.head()
###Output
_____no_output_____
###Markdown
Most of the time, such data needs cleanup, e.g. Year should ideally be a date or at least a year.
 Gather some basic information about the dataframe
###Code
udf.describe()
udf.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 35 entries, 0 to 34
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Year 35 non-null object
1 University 35 non-null object
2 H-Location 35 non-null object
3 G-Location 35 non-null object
4 Notes 24 non-null object
dtypes: object(5)
memory usage: 1.5+ KB
###Markdown
Cleaning up data takes a lot of time and needs to be done diligently! Let's clean up the Year column.
 Accessing the str properties!
###Code
udf['Year'].str.match(r'^(?P<year>[0-9]{4})')
udf.loc[15]
udf['year'] = udf.Year.str.extract(r'(?P<year>[0-9]{4})')
udf.head()
udf.loc[15]
udf.shape
# (rows, columns)
###Output
_____no_output_____
###Markdown
One cannot visualize all columns straight away in jupyter :( However redefining some options helps!
###Code
pd.set_option("max_columns", 2000)
###Output
_____no_output_____
###Markdown
Sorting
###Code
udf.head()
udf.sort_values(['year'])
###Output
_____no_output_____
###Markdown
`sort_values` has kwargs like `ascending=True|False`, and the sort keys are given as a list, i.e. sort first by ..., then by ...
###Code
udf.sort_values(['H-Location','year'])
###Output
_____no_output_____
###Markdown
Let's split the G-Location into city and country!
###Code
tmp_df = udf['G-Location'].str.split(",")
display(tmp_df.head()) # not quite what we want .. we want two columns!
###Output
_____no_output_____
###Markdown
How to get two columns?
###Code
tmp_df = udf['G-Location'].str.split(",", expand=True)
tmp_df.columns = ['G-City', 'G-Country']
tmp_df
udf = udf.join(tmp_df)
# there are many options to join frames
udf.head()
###Output
_____no_output_____
###Markdown
Deleting things
###Code
udf.head()
udf.drop(1)
udf.head(3)
udf.drop(columns=['G-Location', 'Year'])
###Output
_____no_output_____
###Markdown
Dataframes or series are not automatically "adjusted" unless you use `inplace=True`
###Code
udf
udf.drop(columns=['G-Location', 'Year'], inplace=True)
udf
###Output
_____no_output_____
###Markdown
slicing and dicing
###Code
udf[:3] # df[:'r3'] works as well
# selecting one column!
udf['G-Country']
udf.describe()
# selecting one row
udf.loc[1]
udf.info()
# mask also work on df!
mask = udf['year'] < 1400
mask.head(10)
# casting columns into data types
udf.year = udf.year.astype(int)
_udf = udf.convert_dtypes()
_udf.info()
# mask also work on df!
mask = udf.year < 1400
mask.head(10)
udf[mask]
udf[udf['year'] < 1300] # reduces the data frame, again note! that is just a view, not a copy!
udf[udf['year'] < 1300].loc[1]
udf[udf['year'] > 1300].loc[1]
udf[udf['year'] > 1300].head(3)
# How would I know which index is the first one in my masked selection ?
# Answer: you don't need to if you use iloc! :)
udf[udf['year'] > 1300].iloc[0]
###Output
_____no_output_____
###Markdown
more natural query - or isn't it?
###Code
udf.query("year > 1300").head(5)
udf.query("1349 > year > 1320")
# Using local variables in queries
upper_limit = 1400
udf.query("@upper_limit > year > 1320")
###Output
_____no_output_____
###Markdown
Find the maximum for a given series or dataframe
###Code
udf['year'].idxmax()
###Output
_____no_output_____
###Markdown
Unique values and their count
###Code
udf['G-Country'].unique()
udf['G-Country'].nunique()
udf['G-Country'].value_counts()
_udf = udf.set_index('University')
# Grab some random rows
_udf.sample(5)
_udf.loc['Ruprecht Karl University of Heidelberg', ['Notes', 'year']]
_udf.loc['Ruprecht Karl University of Heidelberg', :]
###Output
_____no_output_____
###Markdown
Done with Basics! Take a look at the cheat sheet for a summary: https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf Hierarchical indexing
###Code
s = pd.Series(
np.random.randn(5),
index = [
['p1','p1','p2','p2','p3'],
['a','b','a','d','a']
]
)
s
s.index
s.index.names = ['probability', 'type']
s
s['p1']
s[:, 'a'] # lower level
s2 = s.unstack()
print(type(s2))
s2
s3 = s2.stack()
print(type(s3))
s3
###Output
<class 'pandas.core.series.Series'>
###Markdown
MultiIndex with DataFrames
###Code
df = pd.DataFrame(
[
c,
c * 20,
d,
np.exp(d),
pd.Series(np.random.randn(4), index=['r2', 'r3', 'r4', 'r5'])
],
index = [
['p1','p1','p2','p2','p3'],
['a','b','a','d','a']
]
)
df.index.names = ['probability', 'type']
df
df = df.fillna(0)
df
###Output
_____no_output_____
###Markdown
**Note**: You can create multi-indices from a regular dataframe!
###Code
df2 = df.reset_index()
df2
df2.set_index(['probability', 'type'])
df2 = df.swaplevel('probability', 'type')
df2
df2.sort_index(axis=0, level=0, inplace=True)
df2
###Output
_____no_output_____
###Markdown
Natural slicing using `pandas.IndexSlice` objects
###Code
idx = pd.IndexSlice
df2.loc[
idx[:, ["p1", "p2"]],
:
]
###Output
_____no_output_____
###Markdown
long and wide formats Long format - easy to read and to handle for computers - each variable has its own column. Wide format - easy to read for humans - each observation has its own row.
###Code
df3 = df2.reset_index()
df3.sort_values(["probability", "type"], inplace=True)
df3
df4 = df3.melt(
id_vars=['type','probability'],
var_name='r_stage',
value_name='score'
)
print(df4.shape)
df4.sort_values(["type", "probability"], inplace=True)
df4.head(7)
###Output
(25, 4)
###Markdown
Think of selecting data, for example for plotting, that should meet the following criteria: * probability == p1 * r_stage in [r2, r3] This is much easier in long format, as sketched below.
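A minimal sketch of that selection on the long-format frame `df4` built above:

```python
subset = df4[(df4['probability'] == 'p1') & (df4['r_stage'].isin(['r2', 'r3']))]
subset
```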
###Code
# going back to the more human friendlier version ! :)
df5 = df4.pivot_table(index=['type', 'probability'], columns='r_stage', values="score")
df5
###Output
_____no_output_____
###Markdown
Pandas level 1 Data wrangling 101 I'd like to say Pandas is numpy on steroids, but it is actually much more. Pandas is the data science solution for Python and it builds on top of the powerful numpy module. However, Pandas offers elements that are much more intuitive or go beyond what numpy has ever provided. Nevertheless, numpy is more performant in some cases (by a lot, yet remember when to optimize!) The perfect is the enemy of the good. -- M. Gunner Pandas was created by [Wes McKinney](https://wesmckinney.com/pages/about.html) in early 2008 at AQR Capital Management, and I can recommend "Python for Data Analysis" by Wes, published by O'Reilly, and "Pandas for Everyone" by Daniel Y. Chen. The following Pandas chapters are inspired by these books. Pandas offers two basic data structures: * Series * Dataframes
###Code
import pandas as pd
c = pd.Series(
[12, 13, 14, 121],
index=['r1', 'r2', 'r3', 'r4']
)
c
###Output
_____no_output_____
###Markdown
Selecting from Series works like a dict :)
###Code
c['r2']
mask = c >= 13
mask
c[mask]
###Output
_____no_output_____
###Markdown
Masks can be combined!
###Code
mask2 = c < 20
c[mask & mask2]
c * 10
# works also with vectorized math operations
import numpy as np
np.exp(c)
###Output
_____no_output_____
###Markdown
Remember to use numpy functions as much as possible so data remains on the "C side"; more below! Operations conserve the index! Series are like ordered dicts!
###Code
'r1' in c
###Output
_____no_output_____
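Staying on the "C side", as mentioned above: a small sketch contrasting a vectorized ufunc with an element-wise Python call on the same Series (both give the same numbers; the ufunc never leaves C):

```python
import math

c.apply(math.exp)  # calls a Python function once per element
np.exp(c)          # one vectorized call, data stays on the C side
```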
###Markdown
np.nan is the missing value indicator
###Code
d = pd.Series(
{
'r1': np.nan,
'r2': 0.2,
'r3': 0.2,
'r4': 0.4
}
)
d
###Output
_____no_output_____
###Markdown
Which values are nan?
###Code
d.isna() # returns a mask!
# inverting with ~!
~d.isna()
d.notnull()
###Output
_____no_output_____
###Markdown
indices are aligned automatically!
###Code
c
d = pd.Series(
[10,20,30,40],
index=['r2', 'r3', 'r4', 'r5']
)
d
c + d
###Output
_____no_output_____
###Markdown
Renaming index
###Code
d.index = ['r1', 'r2', 'r3', 'r4'] # now the indices are the same in c and d!
c + d
###Output
_____no_output_____
###Markdown
Naming things will help you to get your data organised better. Explicit is better than implicit! And remember to choose your variable names wisely - you will read code more often than you write it.
###Code
d.index.name = "variable"
d.name = "counts"
d
d.reset_index()
###Output
_____no_output_____
###Markdown
Resetting the index turns the index into a series, so now we have a DataFrame with two series!
###Code
type(d.reset_index())
###Output
_____no_output_____
###Markdown
Data frames Data frames are the pandas 2d data containers (if there is only one index dimension). In principle, a data frame is a list of Series; here, each row is a Series.
###Code
df = pd.DataFrame(
[
c,
d, # this one we named :)
pd.Series([100,102,103,104], index=['r2', 'r3', 'r4', 'r5'])
]
)
df
# accessing a value
df.loc['counts', 'r2']
###Output
_____no_output_____
###Markdown
Note how pandas aligns your data automatically. If you want each series to be treated as a column, just transpose. DataFrames can be constructed in many different ways, see the docs for more details: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html?highlight=dataframe (a small construction sketch follows after the next cell).
###Code
df = df.T
df
###Output
_____no_output_____
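As mentioned above, DataFrames can be constructed in many ways; one common sketch is a dict of columns (values here are arbitrary):

```python
pd.DataFrame(
    {
        "count1": [12, 13, 14, 121],
        "count2": [10, 20, 30, 40],
    },
    index=["r1", "r2", "r3", "r4"],
)
```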
###Markdown
Renaming columns in a data frame
###Code
df.columns = ['count1', 'count2', 'count3']
df
###Output
_____no_output_____
###Markdown
DataFrames can equally be named; for your sanity, name them :)
###Code
df.columns.name = "Counts"
df.index.name = "variable"
df
###Output
_____no_output_____
###Markdown
Now that you feel happy in the pandas world, some modules/functions require numpy arrays, how do you convert them ?
###Code
np_df = df.values
np_df
type(np_df)
###Output
_____no_output_____
###Markdown
If you need to work "longer" on the numpy side, I suggest transforming the pandas dataframe into a numpy recarray, as the names are preserved:
###Code
# np_df = df.values #
np_df = df.to_records()
np_df
np_df['variable']
np_df[0]
np_df[0][2]
###Output
_____no_output_____
###Markdown
C-side and Python side **Note**: Regular Python floats live in the Python world - Numpy and Pandas live in the "C world", hence their fast vectorized operations. If you can avoid it, don't cast between the worlds!
###Code
long_series = pd.Series(
np.random.randn(1000000),
)
%%timeit -n 1
a = long_series.to_list() # to python list!
print(f"a is a {type(a)} now!")
pd.Series(a)
%%timeit -n 1
a = long_series.to_numpy()
print(f"a is a {type(a)} now!")
pd.Series(a)
###Output
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
The slowest run took 5.04 times longer than the fastest. This could mean that an intermediate result is being cached.
227 µs ± 172 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Operations between DataFrame and Series
###Code
df_small = pd.DataFrame([c, d])
df_small
c
df_small - c
###Output
_____no_output_____
###Markdown
Next time you want to normalize each row of a data frame, you can define the correction factors as a series and simply subtract it, as sketched below.
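A minimal sketch with made-up correction factors (the column names follow the small frame above):

```python
correction = pd.Series({"count1": 1.0, "count2": 2.0, "count3": 3.0})  # made-up factors
df - correction  # the Series aligns with the columns and is subtracted from every row
```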
###Code
df
df.rename(
columns={'count1':'count_reference'},
inplace=True
)
df
# subselecting a set of columns!
df[["count2", 'count3']]
###Output
_____no_output_____
###Markdown
Pandas level 1 Data wrangling 101 I'd like to say Pandas is numpy on steroids, but it is actually much more. Pandas is the data science solution for Python and it builds on top of the powerful numpy module. However, Pandas offers elements that are much more intuitive or go beyond what numpy has ever provided. Nevertheless, numpy is more performant in some cases (by a lot, yet remember when to optimize!) The perfect is the enemy of the good. -- M. Gunner Pandas was created by [Wes McKinney](https://wesmckinney.com/pages/about.html) in early 2008 at AQR Capital Management, and I can recommend "Python for Data Analysis" by Wes, published by O'Reilly, and "Pandas for Everyone" by Daniel Y. Chen. The following Pandas chapters are inspired by these books. Pandas offers two basic data structures: * Series * Dataframes
###Code
import pandas as pd
c = pd.Series(
[12, 13, 14, 121],
index=['r1', 'r2', 'r3', 'r4']
)
c
###Output
_____no_output_____
###Markdown
Selecting from Series works like a dict :)
###Code
c['r2']
mask = c >= 13
mask
c[mask]
###Output
_____no_output_____
###Markdown
Masks can be combined!
###Code
mask2 = c < 20
c[mask & mask2]
c * 10
# works also with vectorized math operations
import numpy as np
np.exp(c)
###Output
_____no_output_____
###Markdown
Remember to use numpy functions as much as possible so data remains on the "C side"; more below! Operations conserve the index! Series are like ordered dicts!
###Code
'r1' in c
###Output
_____no_output_____
###Markdown
np.nan is the missing value indicator
###Code
d = pd.Series(
{
'r1': np.nan,
'r2': 0.2,
'r3': 0.2,
'r4': 0.4
}
)
d
###Output
_____no_output_____
###Markdown
Which values are nan?
###Code
d.isna() # returns a mask!
# inverting with ~!
~d.isna()
d.notnull()
###Output
_____no_output_____
###Markdown
indices are aligned automatically!
###Code
c
d = pd.Series(
[10,20,30,40],
index=['r2', 'r3', 'r4', 'r5']
)
d
c + d
###Output
_____no_output_____
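The non-overlapping labels come out as NaN; if that is unwanted, `Series.add` with a `fill_value` is a small alternative sketch:

```python
c.add(d, fill_value=0)  # treat missing labels as 0 instead of producing NaN
```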
###Markdown
Renaming index
###Code
d.index = ['r1', 'r2', 'r3', 'r4'] # now the indices are the same in c and d!
c + d
###Output
_____no_output_____
###Markdown
Naming things will help you to get your data organised better. Explicit is better than implicit! And remember to choose your variable names wisely - you will read code more often than you write it.
###Code
d.index.name = "variable"
d.name = "counts"
d
d.reset_index()
###Output
_____no_output_____
###Markdown
Resetting the index turns the index into a series, so now we have a DataFrame with two series!
###Code
type(d.reset_index())
###Output
_____no_output_____
###Markdown
Data frames Data frames are the pandas 2d data containers (if there is only one index dimension). In principle, a data frame is a list of Series; here, each row is a Series.
###Code
df = pd.DataFrame(
[
c,
d, # this one we named :)
pd.Series([100,102,103,104], index=['r2', 'r3', 'r4', 'r5'])
]
)
df
# accessing a value
df.loc['counts', 'r2']
###Output
_____no_output_____
###Markdown
Note how pandas aligns your data automatically. If you want each series to be treated as a column, just transpose. DataFrames can be constructed in many different ways, see the docs for more details: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html?highlight=dataframe
###Code
df = df.T
df
###Output
_____no_output_____
###Markdown
Renaming columns in a data frame
###Code
df.columns = ['count1', 'count2', 'count3']
df
###Output
_____no_output_____
###Markdown
DataFrames can equally be named; for your sanity, name them :)
###Code
df.columns.name = "Counts"
df.index.name = "variable"
df
###Output
_____no_output_____
###Markdown
Now that you feel happy in the pandas world, some modules/functions require numpy arrays, how do you convert them ?
###Code
np_df = df.values
np_df
type(np_df)
###Output
_____no_output_____
###Markdown
If you need to work "longer" on the numpy side, I suggest transforming the pandas dataframe into a numpy recarray, as the names are preserved:
###Code
# np_df = df.values #
np_df = df.to_records()
np_df
np_df['variable']
np_df[0]
np_df[0][2]
###Output
_____no_output_____
###Markdown
C-side and Python side **Note**: Regular Python floats live in the Python world - Numpy and Pandas live in the "C world", hence their fast vectorized operations. If you can avoid it, don't cast between the worlds!
###Code
long_series = pd.Series(
np.random.randn(1000000),
)
%%timeit -n 1
a = long_series.to_list() # to python list!
print(f"a is a {type(a)} now!")
pd.Series(a)
%%timeit -n 1
a = long_series.to_numpy()
print(f"a is a {type(a)} now!")
pd.Series(a)
###Output
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
a is a <class 'numpy.ndarray'> now!
981 µs ± 323 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Operations between DataFrame and Series
###Code
df_small = pd.DataFrame([c, d])
df_small
c
df_small - c
###Output
_____no_output_____
###Markdown
Next time you want to normalize each row of a data frame, you can define the correction factors as a series and simply subtract it.
###Code
df
df.rename(
columns={'count1':'count_reference'},
inplace=True
)
df
# subselecting a set of columns!
df[["count2", 'count3']]
###Output
_____no_output_____
###Markdown
**Note:** This only creates a view of the data! Pandas IO Pandas comes with a wide array of input/output modules, see https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html **NOTE:** reading xlsx is _much_ slower than csv. Your request: Scraping websites! Today we scrape Wikipedia with Pandas, in particular the list of the oldest universities! Alternatively, use Beautiful Soup https://www.crummy.com/software/BeautifulSoup/bs4/doc/ or Scrapy https://scrapy.org/
###Code
url = "https://en.wikipedia.org/wiki/List_of_oldest_universities_in_continuous_operation"
###Output
_____no_output_____
###Markdown
Let's bail out of the SSL context for the sake of this class :)
###Code
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
dfs = pd.read_html(url) # do you get SSL: CERTIFICATE_VERIFY_FAILED ?
len(dfs)
dfs[0].head()
udf = dfs[0]
udf.columns
###Output
_____no_output_____
###Markdown
Multi-indexes make pandas very powerful, but they take time to get used to; see more below. For now let's get rid of them...
###Code
udf.columns = [ e[0] for e in udf.columns ]
udf.head()
udf.columns = ['Year', 'University', 'H-Location', 'G-Location', 'Notes' ]
udf.head()
###Output
_____no_output_____
###Markdown
Most of the time, such data needs cleanup; e.g. Year should ideally be a date, or at least an integer year. Gather some basic information about the dataframe
###Code
udf.describe()
udf.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 35 entries, 0 to 34
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Year 35 non-null object
1 University 35 non-null object
2 H-Location 35 non-null object
3 G-Location 35 non-null object
4 Notes 24 non-null object
dtypes: object(5)
memory usage: 1.5+ KB
###Markdown
Cleaning up data takes a lot of time and needs to be done diligently! Let's clean up the Year column. Accessing the str properties!
###Code
udf['Year'].str.match(r'^(?P<year>[0-9]{4})')
udf.loc[15]
udf['year'] = udf.Year.str.extract(r'(?P<year>[0-9]{4})') # regex
udf.head()
udf.loc[15]
udf.shape
# (rows, columns)
###Output
_____no_output_____
###Markdown
One cannot visualize all columns straight away in jupyter :( However redefining some options helps!
###Code
pd.set_option("max_columns", 2000)
###Output
_____no_output_____
###Markdown
Sorting
###Code
udf.head()
udf.sort_values(['year'])
###Output
_____no_output_____
###Markdown
`sort_values` has kwargs like `ascending=True|False`, and the sort keys are given as a list, i.e. sort first by ..., then by ...
###Code
udf.sort_values(['H-Location','year'])
###Output
_____no_output_____
###Markdown
Let's split the G-Location into city and country!
###Code
tmp_df = udf['G-Location'].str.split(",")
display(tmp_df.head()) # not quite what we want .. we want two columns!
###Output
_____no_output_____
###Markdown
How to get two columns?
###Code
tmp_df = udf['G-Location'].str.split(",", expand=True)
tmp_df.columns = ['G-City', 'G-Country']
tmp_df
udf = udf.join(tmp_df)
# there are many options to join frames
udf.head()
###Output
_____no_output_____
###Markdown
Deleting things
###Code
udf.head()
udf.drop(1)
udf.head(3)
udf.drop(columns=['G-Location', 'Year'])
###Output
_____no_output_____
###Markdown
A DataFrame or Series is not automatically "adjusted" unless you use `inplace=True`
###Code
udf
udf.drop(columns=['G-Location', 'Year'], inplace=True)
udf
###Output
_____no_output_____
###Markdown
slicing and dicing
###Code
udf[:3] # df[:'r3'] works as well
# selecting one column!
udf['G-Country']
udf.describe()
# selecting one row
udf.loc[1]
udf.info()
# mask also work on df!
mask = udf['year'] < 1400
mask.head(10)
# casting columns into data types
udf.year = udf.year.astype(int)
_udf = udf.convert_dtypes()
_udf.info()
# mask also work on df!
mask = udf.year < 1400
mask.head(10)
udf[mask]
udf[udf['year'] < 1300] # reduces the data frame, again note! that is just a view, not a copy!
udf[udf['year'] < 1300].loc[1]
udf[udf['year'] > 1300].loc[1]
udf[udf['year'] > 1300].head(3)
# How would I know which index is the first one in my masked selection ?
# Answer: you don't need to if you use iloc! :)
udf[udf['year'] > 1300].iloc[0]
###Output
_____no_output_____
###Markdown
more natural query - or isn't it?
###Code
udf.query("year > 1300").head(5)
udf.query("1349 > year > 1320")
# Using local variables in queries
upper_limit = 1400
udf.query("@upper_limit > year > 1320")
###Output
_____no_output_____
###Markdown
Find the maximum for a given series or dataframe
###Code
udf['year'].idxmax()
###Output
_____no_output_____
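Note the distinction (a quick sketch): `idxmax` returns the row label of the maximum, while `max` returns the value itself.

```python
udf['year'].max()     # the largest year value
udf['year'].idxmax()  # the index label of the row holding it
```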
###Markdown
Unique values and their count
###Code
udf['G-Country'].unique()
udf['G-Country'].nunique()
udf['G-Country'].value_counts()
_udf = udf.set_index('University')
# Grab some random rows
_udf.sample(5)
_udf.loc['Ruprecht Karl University of Heidelberg', ['Notes', 'year']]
_udf.loc['Ruprecht Karl University of Heidelberg', :]
###Output
_____no_output_____
###Markdown
Done with Basics! Take a look at the cheat sheet for a summary: https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf Hierarchical indexing
###Code
s = pd.Series(
np.random.randn(5),
index = [
['p1','p1','p2','p2','p3'],
['a','b','a','d','a']
]
)
s
s.index
s.index.names = ['probability', 'type']
s
s['p1']
s[:, 'a'] # lower level
s2 = s.unstack()
print(type(s2))
s2
s3 = s2.stack()
print(type(s3))
s3
###Output
<class 'pandas.core.series.Series'>
###Markdown
MultiIndex with DataFrames
###Code
df = pd.DataFrame(
[
c,
c * 20,
d,
np.exp(d),
pd.Series(np.random.randn(4), index=['r2', 'r3', 'r4', 'r5'])
],
index = [
['p1','p1','p2','p2','p3'],
['a','b','a','d','a']
]
)
df.index.names = ['probability', 'type']
df
df = df.fillna(0)
df
###Output
_____no_output_____
###Markdown
**Note**: You can create multi-indices from a regular dataframe!
###Code
df2 = df.reset_index()
df2
df2.set_index(['probability', 'type'])
df2 = df.swaplevel('probability', 'type')
df2
df2.sort_index(axis=0, level=0, inplace=True)
df2
###Output
_____no_output_____
###Markdown
Natural slicing using `pandas.IndexSlice` objects
###Code
idx = pd.IndexSlice
df2.loc[
idx[:, ["p1", "p2"]],
:
]
###Output
_____no_output_____
###Markdown
long and wide formats Long format - easy to read and to handle for computers - each variable has its own column. Wide format - easy to read for humans - each observation has its own row.
###Code
df3 = df2.reset_index()
df3.sort_values(["probability", "type"], inplace=True)
df3
df4 = df3.melt(
id_vars=['type','probability'],
var_name='r_stage',
value_name='score'
)
print(df4.shape)
df4.sort_values(["type", "probability"], inplace=True)
df4.head(7)
###Output
(25, 4)
###Markdown
Think of selecting data, for example for plotting, that should meet the following criteria: * probability == p1 * r_stage in [r2, r3] This is much easier in long format.
###Code
# going back to the more human friendlier version ! :)
df5 = df4.pivot_table(index=['type', 'probability'], columns='r_stage', values="score")
df5
###Output
_____no_output_____ |
python/example01.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
scores = [[15, 16, 17],
[25, 26, 27],
[35, 36, 37]]
type(scores)
np_scores = np.array(scores, dtype=np.int32)
type(np_scores)
df = pd.DataFrame(np_scores, index=['A_class','B_class','C_class'], columns=['kor','eng','math'])
type(df)
df.info()
df.describe()
df
df['kor'].mean()
df['kor'].hist()
np_scores00 = df.values
type(np_scores00)
list_scores = np_scores00.tolist()
type(list_scores)
list_scores
###Output
_____no_output_____ |
GMM, SVM, MLP/GMM, SVM, MLP_MFCC Bird sounds.ipynb | ###Markdown
In sound processing, the [mel-frequency cepstrum](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum#:~:text=Mel%2Dfrequency%20cepstral%20coefficients%20(MFCCs,%2Da%2Dspectrum%22).) (MFC) is a representation of the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. **Mel-frequency cepstral coefficients** (MFCCs) are coefficients that collectively make up an MFC. MFCCs are commonly used as features in speech recognition systems, such as systems which can automatically recognize numbers spoken into a telephone. MFCCs are commonly derived as follows: 1. Take the [Fourier transform](https://en.wikipedia.org/wiki/Fourier_transform) of (a windowed excerpt of) a signal. 2. Map the powers of the spectrum obtained above onto the mel scale, using [triangular overlapping windows](https://en.wikipedia.org/wiki/Window_function#Triangular_window). 3. Take the logs of the powers at each of the mel frequencies. 4. Take the [discrete cosine transform](https://en.wikipedia.org/wiki/Discrete_cosine_transform) of the list of mel log powers, as if it were a signal. 5. The MFCCs are the amplitudes of the resulting spectrum. Sounds scary and tedious? No worries. We will help you go through a simple process using Python to do the `feature extraction` for sound (music, speech, etc.) and then `classify` the audio signal into different clusters. Feature Extraction **Extraction of features is a very important part of analyzing and finding relations between different things**. The raw audio data cannot be understood by the models directly, so feature extraction is used to convert it into an understandable format. It is a process that explains most of the data, but in an understandable way. Feature extraction is required for classification, prediction and recommendation algorithms. In P1, we will first extract features of animal sound files that will help us to classify the sound into different clusters. Let's get familiar with the audio signal first. The audio signal is a 3-dimensional signal in which the three axes represent time, amplitude and frequency. We will be using [librosa](https://librosa.github.io/librosa/) for analyzing and extracting features of an audio signal. For playing audio, we will use [pyAudio](https://people.csail.mit.edu/hubert/pyaudio/docs/) so that we can play music directly on Colab. Download the three audio files (`Bluejay.mp3`, `Dove.mp3` and `Ducks.wav`) provided on the Canvas webpage. Upload your files by clicking `Files -> Upload to your session storage`.
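The five derivation steps above can be sketched directly with NumPy/SciPy. This is only a rough, self-contained illustration of the idea (window, mel filterbank, log, DCT); the graded questions below use the HTK-style helper based on librosa instead.

```python
import numpy as np
import scipy.fftpack

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_sketch(frame, sr, n_mels=20, n_mfcc=13):
    # 1. Fourier transform of a windowed excerpt -> power spectrum
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    # 2. map the powers onto the mel scale with triangular overlapping filters
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    hz_pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2))
    fbank = np.zeros((n_mels, len(freqs)))
    for i in range(n_mels):
        left, center, right = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        rising = (freqs - left) / (center - left)
        falling = (right - freqs) / (right - center)
        fbank[i] = np.maximum(0.0, np.minimum(rising, falling))
    mel_energies = fbank @ spectrum
    # 3. log of the mel-filterbank energies
    log_mel = np.log(mel_energies + 1e-10)
    # 4./5. DCT of the log energies; the first coefficients are the MFCCs
    return scipy.fftpack.dct(log_mel, norm='ortho')[:n_mfcc]

# e.g. one 512-sample frame of noise at 16 kHz -> 13 coefficients
print(mfcc_sketch(np.random.randn(512), 16000).shape)
```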
###Code
# let's install librosa and pyAudio first!
!pip install librosa
!pip install numba==0.48
!apt install libasound2-dev portaudio19-dev libportaudio2 libportaudiocpp0 ffmpeg
!pip install pyaudio
from google.colab import drive
drive.mount('/content/drive')
# Loading an audio
# let's take bluejay.mp3 as an example
import librosa
audio_path = '/content/drive/My Drive/Bluejay.mp3'
x , sr = librosa.load(audio_path)
print(type(x), type(sr))
###Output
/usr/local/lib/python3.7/dist-packages/librosa/core/audio.py:165: UserWarning: PySoundFile failed. Trying audioread instead.
warnings.warn("PySoundFile failed. Trying audioread instead.")
###Markdown
`.load` loads an audio file and decodes it into a 1-dimensional array which is a time series `x`, and `sr` is the sampling rate of `x`. The default `sr` is 22.05 kHz. We can override the `sr` by
###Code
librosa.load(audio_path, sr=44100)
###Output
/usr/local/lib/python3.7/dist-packages/librosa/core/audio.py:165: UserWarning: PySoundFile failed. Trying audioread instead.
warnings.warn("PySoundFile failed. Trying audioread instead.")
###Markdown
We can also disable sampling by:
###Code
librosa.load(audio_path, sr=None)
# Playing an audio
import IPython.display as ipd
ipd.Audio(audio_path)
###Output
_____no_output_____
###Markdown
`IPython.display` allows us to play audio in the Jupyter notebook directly. It has a very simple interface with some basic buttons.
###Code
#display waveform
%matplotlib inline
import matplotlib.pyplot as plt
import librosa.display
plt.figure(figsize=(14, 5))
librosa.display.waveplot(x, sr=sr)
###Output
_____no_output_____
###Markdown
`librosa.display` is used to display the audio files in different formats such as wave plot, spectrogram, or colormap. Waveplots let us know the loudness of the audio at a given time. A spectrogram shows the different frequencies playing at a particular time along with their amplitudes. Amplitude and frequency are important parameters of a sound and are unique for each audio clip. `librosa.display.waveplot` is used to plot the waveform of amplitude vs. time, where the first axis is amplitude and the second axis is time. **MFCC - Mel-Frequency Cepstral Coefficients** This feature is one of the most important methods for extracting features of an audio signal and is widely used when working on audio signals. The MFCCs of a signal are a small set of features (usually about 10–20) which concisely describe the overall shape of a spectral envelope.
###Code
# MFCC — Mel-Frequency Cepstral Coefficients
mfccs = librosa.feature.mfcc(x, sr=sr)
print(mfccs.shape)
# Displaying the MFCCs:
librosa.display.specshow(mfccs, sr=sr, x_axis='time')
###Output
(20, 213)
###Markdown
`.mfcc` is used to calculate the MFCCs of a signal. By printing the shape of `mfccs` you see how many MFCCs were calculated over how many frames: the first value is the number of MFCCs and the second value is the number of frames. Questions * **Q1**: Do the framing of the bird sounds using A=20ms windows and B=10ms frame shift, i.e. A-B=10ms of overlap. 4 seconds of bird sound will generate 399 frames. * **Q2**: Generate 13 MFCC coefficients for every frame. Every 4 sec of bird sound will then have a 399x13 MFCC coefficient matrix as a result. * **Q3**: Plot the 399x13 MFCC coefficients for all three bird sounds in Python. Here I provide some helpful functions to use.
###Code
# This is a fixed SampleRateSetting
_SAMPLING_FREQ = 12000
def _get_audio(audio_path):
# sr=None disables dynamic resampling
x, sr = librosa.load(audio_path, sr=_SAMPLING_FREQ)
print(f'Loaded {audio_path} (sampling rate {sr})')
return x, sr
def _display_audio(x, sr):
# Show audio to play in Jupyter
ipd.display(ipd.Audio(x, rate=sr))
def _compute_mfcc(x, sr, N_frames=399, Tw=20, Ts=10, alpha=0.97, R=(300, 3700), M=20, C=13, L=22):
"""
Compute MFCCs
This is a rough re-implementation of HTK MFCC MATLAB using librosa:
https://www.mathworks.com/matlabcentral/fileexchange/32849-htk-mfcc-matlab?focused=5199998&tab=function
N_frames: Number of frames
Tw: Analysis frame duration (ms)
Ts: Analysis frame shift (ms)
alpha: Preemphasis coefficient
R: Frequency range to consider (Hz)
M: Number of filterbank channels
C: Number of cepstral coefficients
L: Cepstral sine lifter parameter
"""
# Preemphasis filtering, per implementation
x = scipy.signal.lfilter([1-alpha], 1, x)
# Frame duration (samples)
Nw = round((Tw*10**-3)*sr)
# Frame shift (samples)
Ns = round((Ts*10**-3)*sr)
# Length of FFT analysis
nfft = int(2**np.ceil(np.log2(np.abs(Nw))))
# compute melspectogram separately to modify more params
S = librosa.feature.melspectrogram(
# librosa.feature.melspectrogram
y=x, sr=sr,
n_fft=nfft,
hop_length=Ns,
win_length=Nw,
window=scipy.signal.hamming,
power=1.0,
center=False, # Disable padding, per vec2frames() call in HTK MFCC MATLAB
# librosa.filters.mel
fmin=R[0],
fmax=R[1],
n_mels=M,
htk=True, # Use HTK instead of Slaney formula
norm=None,
)
mfccs = librosa.feature.mfcc(
# librosa.feature.mfcc
S=librosa.power_to_db(S),
n_mfcc=C,
dct_type=3, # DCT Type-III
lifter=L,
norm='ortho',
)
assert len(mfccs.shape) == 2
assert mfccs.shape[0] == 13
mfccs = mfccs[:,:N_frames]
if mfccs.shape[1] < N_frames:
warnings.warn(f'Got too few samples {mfccs.shape[1]} < {N_frames}. Appending last value to compensate')
for i in range(mfccs.shape[1], N_frames):
mfccs = np.append(mfccs, mfccs[:,-1:], axis=1)
return mfccs, Ns
def _plot_mfcc(mfccs, sr, hop_length):
#librosa.display.specshow(mfccs)
librosa.display.specshow(mfccs, sr=sr, hop_length=hop_length, x_axis='time')
plt.ylabel('MFCC')
plt.colorbar()
plt.show()
# TODO: Your code here. (You may use multiple code and text segments to display your solutions.)
# Q1
#...
# Q2
# ...
# Q3
# ...
import scipy as scipy
import numpy as np
audio_path = '/content/drive/My Drive/Bluejay.mp3'
x , sr = librosa.load(audio_path)
mfccs_bj, hop_length_bj = _compute_mfcc(x, sr, N_frames=399, Tw=20, Ts=10, alpha=0.97, R=(300, 3700), M=20, C=13, L=22)
_plot_mfcc(mfccs_bj, sr, hop_length_bj)
print(mfccs_bj.shape)
audio_path = '/content/drive/My Drive/Dove.mp3'
x , sr = librosa.load(audio_path)
_get_audio(audio_path)
mfccs_d, hop_length_d = _compute_mfcc(x, sr, N_frames=399, Tw=20, Ts=10, alpha=0.97, R=(300, 3700), M=20, C=13, L=22)
_plot_mfcc(mfccs_d, sr, hop_length_d)
print(mfccs_d.shape)
audio_path = '/content/drive/My Drive/Duck.wav'
x , sr = librosa.load(audio_path)
_get_audio(audio_path)
mfccs_dk, hop_length_dk = _compute_mfcc(x, sr, N_frames=399, Tw=20, Ts=10, alpha=0.97, R=(300, 3700), M=20, C=13, L=22)
_plot_mfcc(mfccs_dk, sr, hop_length_dk)
print(mfccs_dk.shape)
###Output
Loaded /content/drive/My Drive/Duck.wav (sampling rate 12000)
###Markdown
Now we have extracted the features of three bird signals (BlueJay, Duck and Dove). We can use these extracted features in various use cases, such as `classification` into different clusters. Training GMM using MFCC features A Gaussian Mixture Model (GMM) helps to cluster the features. `sklearn.mixture` is a package which enables one to learn Gaussian Mixture Models (diagonal, spherical, tied and full covariance matrices supported), sample them, and estimate them from data. Facilities to help determine the appropriate number of components are also provided. For usage and more details, please refer to [scikit-learn GMM](https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html).
###Code
import sklearn
import sklearn.mixture
###Output
_____no_output_____
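A tiny sketch of the `GaussianMixture` API on made-up 2-D data (all values are arbitrary), before applying it to the MFCC features below:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# toy data: two well-separated blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
gmm.fit(X)
labels = gmm.predict(X)           # hard cluster assignments
probs = gmm.predict_proba(X[:3])  # soft responsibilities for the first 3 points
print(gmm.means_)                 # estimated component means
```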
###Markdown
Questions * **Q4**: Find the GMM parameters of the features found in 1. Since there are three bird sounds, there will be three clusters. You can use an existing GMM function provided by Python.
###Code
# Hints:
# (1) Concatenate the MFCCs from all three sound files and become the feature matrix,
# leaving the last 50 samples for X_test. In the rest sets,
# define your training/validation set X_train, X_val using train_test_split()
# and create the labels for each class y_train, y_val.
# (2) Instantiate a Scikit-Learn GMM model by using:
# model = sklearn.mixture.GaussianMixture(n_components, covariance_type, reg_covar, verbose, etc.)
# (3) Train a model using model.fit(X_train).
# (4) Predict the model: y_val_predict = model.predict(X_val)
# (5) Calculate the classification accuracy using accuracy_score(y_val_predict, y_val)
# (6) Report the y_test_predict = model.predict(X_test) and save your prediction results as
# a HW1_P4Q5_results.mat file and submit it to canvas
# TODO: Your code here. (You may use multiple code and text segments to display your solutions.)
# Q4
# ...
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
total_mfcc = np.concatenate((mfccs_bj , mfccs_d , mfccs_dk), axis=1)
total_mfcc = total_mfcc.T
X_train = total_mfcc[:1147,:]
X_test = total_mfcc[1147:,:]
y = []
for i in range((len(X_train[:399,:]))):
y.append(1)
for i in range(len(X_train[399:798,:])):
y.append(2)
for i in range(len(X_train[798:,:])):
y.append(3)
X_train_shuf, y_shuf = sklearn.utils.shuffle(X_train , y)
X_train, X_val, y_train, y_val = train_test_split(X_train_shuf,y_shuf,test_size=0.043,random_state=50)
model = sklearn.mixture.GaussianMixture(n_components=3, random_state=0)
model.fit(X_train)
y_val_predict = model.predict(X_val)
y_test_predict = model.predict(X_test)
accuracy_score(y_val_predict, y_val)
###Output
_____no_output_____
###Markdown
Training SVM for Bird Sound Classification A Support Vector Machine (SVM) helps with the classification of the data. The advantages of support vector machines are: * Effective in high dimensional spaces. * Still effective in cases where the number of dimensions is greater than the number of samples. * Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient. * Versatile: different kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels. For usage and more details, please refer to [scikit-learn SVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) `SVC` takes as input two arrays: an array `X` of shape (`n_samples`, `n_features`) holding the training samples, and an array `y` of class labels (strings or integers), of shape (`n_samples`):
###Code
# A simple example
from sklearn import svm
X = [[0, 0], [1, 1]]
y = [0, 1]
clf = svm.SVC()
clf.fit(X, y)
# After being fitted, the model can then be used to predict new values:
print(clf.predict([[2., 2.]]))
# You can also get support vectors from the trained model
print(clf.support_vectors_)
from sklearn.metrics import accuracy_score
# model outputs
outputs = clf.predict([[2., 2.]])
# label
y = [1]
# We use accurarcy_score to get the model accuracy, which here is 1.0 (100%)
print("The accuracy is {}".format(accuracy_score(outputs, y)))
###Output
The accuracy is 1.0
###Markdown
Questions * **Q5**: Train the SVM model using the features found in 1. You can use an existing SVM function provided by Python.
###Code
# Hints:
# (1) Concatenate the MFCCs from all three sound files and become the feature matrix,
# leaving the last 50 samples for X_test. In the rest sets,
# define your training/validation set X_train, X_val using train_test_split()
# and create the labels for each class y_train, y_val.
# (2) Instantiate a Scikit-Learn SVM model by using:
# model = sklearn.svm.SVC()
# (3) Train a model using model.fit(X_train).
# (4) Predict the model: y_val_predict = model.predict(X_val)
# (5) Calculate the classification accuracy using accuracy_score(y_val_predict, y_val)
# (6) Report the y_test_predict = model.predict(X_test) and save your prediction results as
# a HW1_P4Q5_results.mat file and submit it to canvas
# TODO: Your code here. (You may use multiple code and text segments to display your solutions.)
# Q5
# ...
model = sklearn.svm.SVC()
model.fit(X_train,y_train)
y_val_predict = model.predict(X_val)
y_test_predict = model.predict(X_test)
accuracy_score(y_val_predict, y_val)
###Output
_____no_output_____
###Markdown
Training MLP for Bird Sound Classification Questions * **Q6**: Train an MLP model for bird sound classification following the Pytorch_NN.ipynb tutorial. Calculate and report the classification accuracy.
###Code
# TODO: Your answers here. (You may use multiple code and text segments to display your solutions.)
# Q6
# ...
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 3x3 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 3)
self.conv2 = nn.Conv2d(6, 16, 3)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
net.zero_grad()
out.backward(torch.randn(1, 10))
output = net(input)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
print(loss.grad_fn) # MSELoss
print(loss.grad_fn.next_functions[0][0]) # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
###Output
_____no_output_____ |
spark-training/spark-python/jupyter-advanced-scheduler/Parallel Query Execution.ipynb | ###Markdown
Parallel Query Execution This notebook shows how to fire up concurrent queries in Spark. This may be useful to get a better overall throughput in cases where multiple outputs need to be generated. We will simply reuse the weather example and fire up two concurrent queries. Although they will generate the very same result, it is still interesting to see that even the intermediate cache will only be built once. 0. Prerequisites Running multiple queries in parallel requires some configuration of Spark. Spark can always accept multiple queries, but per default it will process those in a *FIFO* fashion. This means that one query is processed after the other. But Spark also supports real parallel query execution using a different task scheduler. You need to configure the following values: 1. Create a scheduler configuration file `fairscheduler.xml` (contents see below) 2. Set Spark config `spark.scheduler.mode` to `FAIR` 3. Set Spark config `spark.scheduler.allocation.file` to the location of the `fairscheduler.xml` file. Unfortunately these values need to be configured __before the Spark session is created__. The `fairscheduler.xml` should look as follows: ```xml <?xml version="1.0"?> <allocations> <pool name="fair"> <schedulingMode>FAIR</schedulingMode> <weight>1</weight> <minShare>2</minShare> </pool> </allocations> ``` 1. Read in all years Now we read in all years by creating a union. We also add the year as a logical partition column; this will be used later.
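Before reading the data, a minimal sketch of creating a session with the scheduler settings from the prerequisites (the app name and allocation-file path are placeholders):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("parallel-queries")  # placeholder name
    .config("spark.scheduler.mode", "FAIR")
    .config("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")
    .getOrCreate()
)
```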
###Code
from pyspark.sql.functions import *
storageLocation = "s3://dimajix-training/data/weather"
from functools import reduce
# Read in all years, store them in an Python array
raw_weather_per_year = [spark.read.text(storageLocation + "/" + str(i)).withColumn("year", lit(i)) for i in range(2003,2015)]
# Union all years together
raw_weather = reduce(lambda l,r: l.union(r), raw_weather_per_year)
###Output
_____no_output_____
###Markdown
2. Extract Information The raw data is not exactly nice to work with, so we need to extract the relevant information by using appropriate substr operations.
###Code
weather = raw_weather.select(
col("year"),
substring(col("value"),5,6).alias("usaf"),
substring(col("value"),11,5).alias("wban"),
substring(col("value"),16,8).alias("date"),
substring(col("value"),24,4).alias("time"),
substring(col("value"),42,5).alias("report_type"),
substring(col("value"),61,3).alias("wind_direction"),
substring(col("value"),64,1).alias("wind_direction_qual"),
substring(col("value"),65,1).alias("wind_observation"),
(substring(col("value"),66,4).cast("float") / lit(10.0)).alias("wind_speed"),
substring(col("value"),70,1).alias("wind_speed_qual"),
(substring(col("value"),88,5).cast("float") / lit(10.0)).alias("air_temperature"),
substring(col("value"),93,1).alias("air_temperature_qual")
)
###Output
_____no_output_____
###Markdown
3. Read in Station Metadata Fortunately station metadata is stored as CSV, so we can directly read it using Spark's `spark.read.csv` mechanism. The data can be found at `storageLocation + '/isd-history'`.
###Code
stations = spark.read \
.option("header", True) \
.csv(storageLocation + "/isd-history")
###Output
_____no_output_____
###Markdown
4. Join and cache data Now we need to join the metadata with the measurements. We will cache the result.
###Code
joined_data = weather.join(stations, (weather.usaf == stations.USAF) & (weather.wban == stations.WBAN))
joined_data.cache()
###Output
_____no_output_____
###Markdown
5. Perform Queries Now comes the interesting part: we will perform multiple queries in parallel. We will make use of the Python `threading` module in order to start two concurrent queries. One query will aggregate min/max of temperature, while the other query will aggregate min/max of wind speed. We will save both results to corresponding CSV files in HDFS. 5.1 Define Queries First we create two Python functions which contain the two queries to be executed.
###Code
def calc_temperature():
df = joined_data
result = df.groupBy(df.CTRY, df.year).agg(
min(when(df.air_temperature_qual == lit(1), df.air_temperature)).alias('min_temp'),
max(when(df.air_temperature_qual == lit(1), df.air_temperature)).alias('max_temp'),
)
result.write\
.option("header", True)\
.mode("overwrite")\
.csv("/user/hadoop/weather_min_max_temp")
def calc_wind_speed():
df = joined_data
result = df.groupBy(df.CTRY, df.year).agg(
min(when(df.wind_speed_qual == lit(1), df.wind_speed)).alias('min_wind'),
max(when(df.wind_speed_qual == lit(1), df.wind_speed)).alias('max_wind')
)
result.write\
.option("header", True)\
.mode("overwrite")\
.csv("/user/hadoop/weather_min_max_wind")
###Output
_____no_output_____
###Markdown
5.2 Run QueriesNow since we have the two functions, we import and use the Python `threading` module to run both queries in parallel. We also need to configure Spark to use the correct scheduler pool (in our case it is the `fair` pool).
###Code
# We need to set the thread local property "spark.scheduler.pool" to the correct pool defined in fairscheduler.xml
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "fair")
import threading
# First create threads
t1 = threading.Thread(target=calc_temperature)
t2 = threading.Thread(target=calc_wind_speed)
# Then start both threads (in the background)
t1.start()
t2.start()
# Finally wait until both threads have finished
t1.join()
t2.join()
###Output
_____no_output_____
###Markdown
5.3 Watch Query executionNow you should open the Spark web interface and watch both queries being processed in parallel. 5.4 Inspect resultFinally you can inspect the results using for example Spark again (or HDFS command line tools).
###Code
temp_df = spark.read\
.option("header", True)\
.csv("/user/hadoop/weather_min_max_temp")
temp_df.show()
wind_df = spark.read\
.option("header", True)\
.csv("/user/hadoop/weather_min_max_wind")
wind_df.show()
###Output
+----+----+--------+--------+
|CTRY|year|min_wind|max_wind|
+----+----+--------+--------+
| FI|2006| 0.0| 12.0|
| PO|2006| 0.0| 16.5|
| GM|2012| 0.0| 13.9|
| GM|2010| 0.0| 17.0|
| RS|2014| 0.0| 11.0|
| FI|2003| 0.0| 14.4|
| NO|2007| 0.0| 26.0|
| NL|2012| 0.0| 28.8|
| GM|2005| 0.0| 14.4|
| PO|2010| 0.0| 21.6|
| FR|2010| 0.0| 17.5|
| PL|2012| 0.0| 13.4|
| GK|2012| 0.0| 33.4|
| US|2013| 0.0| 24.7|
| GM|2013| 0.0| 14.4|
| IT|2010| 0.0| 20.6|
| US|2007| 0.0| 36.0|
| AU|2007| 0.0| 13.4|
| EZ|2007| 0.0| 26.2|
| EZ|2004| 0.0| 17.0|
+----+----+--------+--------+
only showing top 20 rows
###Markdown
Parallel Query Execution This notebook shows how to fire up concurrent queries in Spark. This may be useful to get a better overall throughput in cases where multiple outputs need to be generated. We will simply reuse the weather example and fire up two concurrent queries. Although they will generate the very same result, it is still interesting to see that even the intermediate cache will only be built once. 0. Prerequisites Running multiple queries in parallel requires some configuration of Spark. Spark can always accept multiple queries, but per default it will process those in a *FIFO* fashion. This means that one query is processed after the other. But Spark also supports real parallel query execution using a different task scheduler. You need to configure the following values: 1. Create a scheduler configuration file `fairscheduler.xml` (contents see below) 2. Set Spark config `spark.scheduler.mode` to `FAIR` 3. Set Spark config `spark.scheduler.allocation.file` to the location of the `fairscheduler.xml` file. Unfortunately these values need to be configured __before the Spark session is created__. The `fairscheduler.xml` should look as follows: ```xml <?xml version="1.0"?> <allocations> <pool name="fair"> <schedulingMode>FAIR</schedulingMode> <weight>1</weight> <minShare>2</minShare> </pool> </allocations> ``` 1. Read in all years Now we read in all years by creating a union. We also add the year as a logical partition column; this will be used later.
###Code
from pyspark.sql.functions import *
storageLocation = "s3://dimajix-training/data/weather"
from functools import reduce
# Read in all years, store them in an Python array
raw_weather_per_year = [
spark.read.text(storageLocation + "/" + str(i)).withColumn("year", lit(i))
for i in range(2003, 2015)
]
# Union all years together
raw_weather = reduce(lambda l, r: l.union(r), raw_weather_per_year)
###Output
_____no_output_____
###Markdown
2. Extract Information The raw data is not exactly nice to work with, so we need to extract the relevant information by using appropriate substr operations.
###Code
weather = raw_weather.select(
col("year"),
substring(col("value"), 5, 6).alias("usaf"),
substring(col("value"), 11, 5).alias("wban"),
substring(col("value"), 16, 8).alias("date"),
substring(col("value"), 24, 4).alias("time"),
substring(col("value"), 42, 5).alias("report_type"),
substring(col("value"), 61, 3).alias("wind_direction"),
substring(col("value"), 64, 1).alias("wind_direction_qual"),
substring(col("value"), 65, 1).alias("wind_observation"),
(substring(col("value"), 66, 4).cast("float") / lit(10.0)).alias("wind_speed"),
substring(col("value"), 70, 1).alias("wind_speed_qual"),
(substring(col("value"), 88, 5).cast("float") / lit(10.0)).alias("air_temperature"),
substring(col("value"), 93, 1).alias("air_temperature_qual"),
)
###Output
_____no_output_____
###Markdown
3. Read in Station Metadata Fortunately station metadata is stored as CSV, so we can directly read it using Spark's `spark.read.csv` mechanism. The data can be found at `storageLocation + '/isd-history'`.
###Code
stations = spark.read.option("header", True).csv(storageLocation + "/isd-history")
###Output
_____no_output_____
###Markdown
4. Join and cache data Now we need to join the metadata with the measurements. We will cache the result.
###Code
joined_data = weather.join(
stations, (weather.usaf == stations.USAF) & (weather.wban == stations.WBAN)
)
joined_data.cache()
###Output
_____no_output_____
###Markdown
5. Perform Queries Now comes the interesting part: we will perform multiple queries in parallel. We will make use of the Python `threading` module in order to start two concurrent queries. One query will aggregate min/max of temperature, while the other query will aggregate min/max of wind speed. We will save both results to corresponding CSV files in HDFS. 5.1 Define Queries First we create two Python functions which contain the two queries to be executed.
###Code
def calc_temperature():
df = joined_data
result = df.groupBy(df.CTRY, df.year).agg(
min(when(df.air_temperature_qual == lit(1), df.air_temperature)).alias(
'min_temp'
),
max(when(df.air_temperature_qual == lit(1), df.air_temperature)).alias(
'max_temp'
),
)
result.write.option("header", True).mode("overwrite").csv(
"/user/hadoop/weather_min_max_temp"
)
def calc_wind_speed():
df = joined_data
result = df.groupBy(df.CTRY, df.year).agg(
min(when(df.wind_speed_qual == lit(1), df.wind_speed)).alias('min_wind'),
max(when(df.wind_speed_qual == lit(1), df.wind_speed)).alias('max_wind'),
)
result.write.option("header", True).mode("overwrite").csv(
"/user/hadoop/weather_min_max_wind"
)
###Output
_____no_output_____
###Markdown
5.2 Run Queries Now that we have the two functions, we import and use the Python `threading` module to run both queries in parallel. We also need to configure Spark to use the correct scheduler pool (in our case the `fair` pool).
###Code
# We need to set the thread local property "spark.scheduler.pool" to the correct pool defined in fairscheduler.xml
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "fair")
import threading
# First create threads
t1 = threading.Thread(target=calc_temperature)
t2 = threading.Thread(target=calc_wind_speed)
# Then start both threads (in the background)
t1.start()
t2.start()
# Finally wait until both threads have finished
t1.join()
t2.join()
###Output
_____no_output_____
###Markdown
5.3 Watch Query execution Now you should open the Spark web interface and watch both queries being processed in parallel. 5.4 Inspect result Finally you can inspect the results, using for example Spark again (or the HDFS command line tools).
###Code
temp_df = spark.read.option("header", True).csv("/user/hadoop/weather_min_max_temp")
temp_df.show()
wind_df = spark.read.option("header", True).csv("/user/hadoop/weather_min_max_wind")
wind_df.show()
###Output
+----+----+--------+--------+
|CTRY|year|min_wind|max_wind|
+----+----+--------+--------+
| FI|2006| 0.0| 12.0|
| PO|2006| 0.0| 16.5|
| GM|2012| 0.0| 13.9|
| GM|2010| 0.0| 17.0|
| RS|2014| 0.0| 11.0|
| FI|2003| 0.0| 14.4|
| NO|2007| 0.0| 26.0|
| NL|2012| 0.0| 28.8|
| GM|2005| 0.0| 14.4|
| PO|2010| 0.0| 21.6|
| FR|2010| 0.0| 17.5|
| PL|2012| 0.0| 13.4|
| GK|2012| 0.0| 33.4|
| US|2013| 0.0| 24.7|
| GM|2013| 0.0| 14.4|
| IT|2010| 0.0| 20.6|
| US|2007| 0.0| 36.0|
| AU|2007| 0.0| 13.4|
| EZ|2007| 0.0| 26.2|
| EZ|2004| 0.0| 17.0|
+----+----+--------+--------+
only showing top 20 rows
|
Section 8/8.6-finding_repeats.ipynb | ###Markdown
Finding repeated words
###Code
import re
s = "I went down to to the the crossroads 1 1 time"
pattern = r'(\b\w+\b)\W\1'
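# (\b\w+\b) captures one whole word, \W matches the separator, and \1 requires the same word again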
re.sub(pattern, '', s)
def replacements(string):
pattern = r'(\b\w+\b)\W\1'
repeats = re.findall(pattern, string)
string = string.split()
result = []
for word in string:
if word not in result:
result.append(word)
result = ' '.join(result)
print(repeats)
return(result)
replacements(s)
###Output
['to', 'the', '1']
|
Notebooks/ipo_translate_examples.ipynb | ###Markdown
###Code
def load_data(file, ids):
with open(file, 'r') as f:
data = f.read().split('\n')
if len(ids) == 2 and ids[0] < ids[1]:
return data[ids[0]:ids[1]]
else:
#return [data.split('\n')[i] for i in ids]
return [e for i, e in enumerate(data) if i in ids]
width = 80
begin = 100
num_sent = 50
ids = [begin, begin+num_sent]
source = load_data('./data/test.src', ids)
target = load_data('./data/test.trg', ids)
decode = load_data('./data/large_test_120000.de', ids)
#for i in range(num_sent-10):
# print(source[i].replace('@@', ''))
for i in range(len(source)):
print('Source {}: {}'.format(begin+i, source[i].replace('@@ ', '')))
print('Translate: {}'.format(decode[i].replace('@@ ', '').replace(' ', '')))
print('Reference: {}'.format(target[i].replace('@@ ', '').replace(' ', '')))
print('-'*width)
###Output
Source 100: requiring the directors and senior management to correct their behaviors which are harmful to the interests of the company .
Translate: 要求董事及高級管理層糾正損害公司利益的行為。
Reference: 要求董事及高級管理層糾正其有損本公司利益的行為。
--------------------------------------------------------------------------------
Source 101: the group will apply this standard for the financial reporting period commencing on 1 january 2013 .
Translate: 貴集團將於二零一三年一月一日開始之財務呈報期間應用該準則。
Reference: 本集團將自2013年1月1日起的財務報告期間採納該準則。
--------------------------------------------------------------------------------
Source 102: the notice must specify the time and place of the meeting and , in the case of special business , the general nature of that business .
Translate: 通告須註明舉行會議的時間及地點,倘有特別事項,則須註明有關事項的一般性質。
Reference: 通告須註明舉行會議之時間及地點,倘有特別事項,則須註明有關事項之一般性質。
--------------------------------------------------------------------------------
Source 103: in term of plywood quality , many downstream purchasers prefer to buy imported plywood for its high quality .
Translate: 於膠合板品質方面,眾多下游買家偏好為進口優質膠合板的進口膠合板。
Reference: 就膠合板品質而言,很多下游購買者偏好購買質素高的進口膠合板。
--------------------------------------------------------------------------------
Source 104: our chief executive officer is responsible for formulating and implementing the marketing strategies of our group .
Translate: 我們的行政總裁負責制定及實施本集團的營銷策略。
Reference: 我們的行政總裁負責制定及實施本集團的市場推廣策略。
--------------------------------------------------------------------------------
Source 105: 03 of the gem listing rules in respect of our financial results for the first full year commencing after the listing date .
Translate: 03條有關上市日期後開始的首個完整年度的財務業績當日止。
Reference: 03條就上市日期後開始之首個完整年度的財務業績的日期止。
--------------------------------------------------------------------------------
Source 106: the subscription of the hong kong offer shares by giving electronic application instructions to hkscc is only a facility provided to ccass participants .
Translate: 透過向香港結算發出電子認購指示認購香港發售股份僅為一項提供予中央結算系統參與者的服務。
Reference: 透過向香港結算發出電子認購指示認購香港發售股份僅為一項提供予中央結算系統參與者的服務。
--------------------------------------------------------------------------------
Source 107: our directors confirmed that we are not required to pay for any fee for the preferential promotion / display arrangement under the relevant consignment agreements with distributors a and b .
Translate: 董事確認,根據與分銷商A及B。
Reference: 董事確認我們毋須就與分銷商A及B訂立相關寄售協議項下優先宣傳╱展示安排支付任何費用。
--------------------------------------------------------------------------------
Source 108: the company was incorporated in the cayman islands as an exempted company with limited liability on 4 january 2011 under the cayman companies law .
Translate: 本公司於二零一一年一月四日根據開曼群島公司法在開曼群島註冊成立為獲豁免有限公司。
Reference: 本公司於2011年1月4日根據開曼群島公司法在開曼群島註冊成立為獲豁免有限公司。
--------------------------------------------------------------------------------
Source 109: the indemnity referred to above shall be extended to cover .
Translate: 上述彌償保證將延長至涵蓋。
Reference: 上述彌償保證將擴至涵蓋。
--------------------------------------------------------------------------------
Source 110: it has developed a number of well-known brands such as " smiffys " and " fever " over the years .
Translate: 其已在多年來開發一系列知名品牌「SMFEs」及「ENEver」等知名品牌。
Reference: 多年來已發展多個知名品牌,如「Smiffys」及「Fever」。
--------------------------------------------------------------------------------
Source 111: a considerable amount of estimation is required in assessing the ultimate realisation of these receivables , including the current creditworthiness and the past collection history of each debtor .
Translate: 於評估該等應收款項的最終變現時,須作出大量估計。
Reference: 於評估該等應收款項的最終變現數額時須作出大量估計,包括各債務人現時的信譽及過往收回歷史。
--------------------------------------------------------------------------------
Source 112: the directors confirm that since 30 june 2009 , there has been no material adverse change in the financial or trading position or prospects of the group .
Translate: 董事確認,自二零零九年六月三十日以來,本集團的財務或經營狀況或前景並無重大不利變動。
Reference: 董事確認,自二零零九年六月三十日起,本集團的財政或貿易狀況或前景並無重大不利變動。
--------------------------------------------------------------------------------
Source 113: the board may issue debentures , debenture stock , bonds and other securities , whether outright or as collateral security for any debt , liability or obligation of the company or of any third party .
Translate: 董事會可發行公司債權證、債券、債券及其他證券,以及作為本公司或任何第三方的任何債項、負債或責任的擔保。
Reference: 董事會可發行債權證、債股、債券及其他證券。
--------------------------------------------------------------------------------
Source 114: the option-holder may exercise all his options within a period of three months of the date of the notification by the board .
Translate: 購股權持有人可於董事會發出通知日期後三個月期間內悉數行使其購股權。
Reference: 購股權持有人可於董事會發出通知日期後三個月內悉數行使其購股權。
--------------------------------------------------------------------------------
Source 115: black & veatch primarily serves the energy , water , environmental , and information technology sectors .
Translate: 博威主要為能源、水、環境及資訊科技行業。
Reference: 博威主要服務能源、水務、環境及資訊科技行業。
--------------------------------------------------------------------------------
Source 116: the principal business of that company is dealing in securities .
Translate: 該公司主要從事證券買賣業務。
Reference: 該公司的主要業務為證券買賣。
--------------------------------------------------------------------------------
Source 117: 3 % , while dicos chain has grown from 246 in 2002 to 600 in 2006 with a cagr of approximately 25 % .
Translate: 3%增長,而博德連鎖自二零零二年至600間增長,複合年增長率約為25%。
Reference: 3%,而德克士連鎖店已從二零零二年的246家增加至二零零六年的600家,複合年增長率約為25%。
--------------------------------------------------------------------------------
Source 118: under the prc company law , the board of directors exercises the following powers .
Translate: 根據中國公司法,董事會行使下列職權。
Reference: 根據《中國公司法》,董事會行使下列職權。
--------------------------------------------------------------------------------
Source 119: 1 million for the year ended december 31 , 2009 to rmb50 .
Translate: 8%至截至二零一零年十二月三十一日止年度的人民幣50。
Reference: 2百萬元至截至二零一零年十二月三十一日止年度的人民幣50。
--------------------------------------------------------------------------------
Source 120: the continuing compliance of any such terms and conditions that may be attached to the grant of the option , failing which the option will lapse unless otherwise resolved to the contrary by the board .
Translate: 持續遵守授出購股權可能附帶的任何有關條款及條件,倘未能符合授出購股權,則購股權將告失效。
Reference: 持續遵守授出購股權可能附帶之任何有關條款及條件,倘未能持續遵守該等條款及條件,除非董事會議決授出豁免,否則購股權將告失效。
--------------------------------------------------------------------------------
Source 121: none of the directors nor any of the parties listed in paragraph 21 below is materially interested in any contract or arrangement subsisting at the date of this prospectus which is significant in relation to business of our group taken as a whole .
Translate: 各董事或下文第21段所列的任何人士概無於本招股章程日期仍然有效且對本集團整體業務而言屬重大的任何合約或安排中擁有重大權益。
Reference: 各董事或下文第21段所列的任何各方概無於本招股章程日期仍然存續,且就本集團整體業務而言屬重大的任何合約或安排中擁有重大權益。
--------------------------------------------------------------------------------
Source 122: goodwill was allocated to this business based on its fair value relative to the estimated fair value of our domestic hog production reporting unit .
Translate: 商譽乃根據其與我們的國內生豬養殖呈報單位的估計公允價值有關的公允價值分配至該業務。
Reference: 商譽乃根據其與我們的國內生豬養殖呈報單位的估計公允價值有關的公允價值分配至該業務。
--------------------------------------------------------------------------------
Source 123: regulations , in the event that such laws , rules and regulations become more stringent or wide in scope , we may fail to comply .
Translate: 倘有關法律、規則及法規變得更為嚴格或廣泛,我們可能無法遵守。
Reference: 規,惟若此等法律、規則及法規變得更為嚴格或涵蓋的範圍更廣,我們可能未能遵守。
--------------------------------------------------------------------------------
Source 124: a party may terminate the agreement if it or the other party becomes incapable of performing its or their obligations for a consecutive period of more than 30 days due to bankruptcy or other material deterioration in its business operations .
Translate: 倘一方或另一方連續於其業務經營中不能履行其於30天內的破產或其他重大事務,則其可終止協議。
Reference: 倘協議一方因其或另一方破產或業務營運的其他嚴重惡化情況令其連續30天以上不能履行義務,則可終止協議。
--------------------------------------------------------------------------------
Source 125: investors who trade shares on the basis of publicly available allocation details prior to the receipt of share certificates or prior to the share certificates becoming valid certificates of title do so entirely at their own risk .
Translate: 收到股票前或於股票成為有效所有權憑證前按公開可得分配詳情買賣股份的投資者,須自行承擔全部風險。
Reference: 收到股票或股票成為所有權有效憑證前基於公佈分配詳情而買賣股份之投資者須承擔一切風險。
--------------------------------------------------------------------------------
Source 126: 47 , and the annual review requirements set out in rules 14a .
Translate: 47條所載申報及公告的規定、第14A。
Reference: 47條所載申報及公告的規定、第14A。
--------------------------------------------------------------------------------
Source 127: the facility agreement provided mfw investment with a maximum amount of mop103 million to draw down in multiple tranches , according to its needs .
Translate: 根據其需要,融資協議向澳門漁人碼頭投資提供最高金額103百萬澳門幣的澳門漁人碼頭投資。
Reference: 該融資協議提供澳門漁人碼頭投資一筆最多為103,000,000澳門幣的款項,並根據其需要分多批提取。
--------------------------------------------------------------------------------
Source 128: yuzhou cement is a wholly-owned subsidiary of the company .
Translate: 禹州水泥為貴公司的全資附屬公司。
Reference: 禹州水泥為貴公司的全資附屬公司。
--------------------------------------------------------------------------------
Source 129: in such eventuality , all application monies will be returned , without interest , on the terms set out in the section headed " further terms and conditions of the hong kong public offering - 8 .
Translate: 在此情況下,所有申請股款將按「香港公開發售的其他條款及條件-8。
Reference: 在此情況下,所有申請款項將根據本招股章程「香港公開發售之其他條款和條件-8。
--------------------------------------------------------------------------------
Source 130: we will continue our efforts to reduce our raw material and coal costs through bulk purchases and leveraging our economies of scale to increase our bargaining power over suppliers .
Translate: 我們將繼續透過批量採購及充分利用我們的規模經濟來降低原材料及煤炭成本。
Reference: 本集團將通過大批量採購和充分利用本集團的規模經濟效益來增強本集團與供應商的議價能力,繼續努力降低本集團的原材料和煤炭成本。
--------------------------------------------------------------------------------
Source 131: the predecessor group also competes against independent owners or operators of local 5-star hotels .
Translate: 前身集團亦對當地五星級酒店的獨立擁有人或營運商競爭。
Reference: 前身集團亦面對地方五星級酒店獨立擁有人或運營商的競爭。
--------------------------------------------------------------------------------
Source 132: record period were sensitive to the fluctuation of the group ' s average daily tce which year-to-year fluctuation during each of the two years ended 31 march 2010 were of about 20 .
Translate: 記錄期間,本集團截至二零一零年三月三十一日止兩個年度各年及截至二零一零年三月三十一日止兩個年度各年的平均日均TCE分別約為20。
Reference: 所影響,於截至2010年3月31日止兩個年度,本集團的平均日均TCE按年波動分別約為20。
--------------------------------------------------------------------------------
Source 133: 33 per h share and assuming the over-allotment option is not exercised , we estimate that we will receive net proceeds of approximately hk $ 1,470 .
Translate: 33港元並假設超額配股權未獲行使,我們估計將從全球發售獲得所得款項淨額約1,470。
Reference: 33港元及假設超額配股權未獲行使,我們估計,我們將於扣除包銷佣金及其他估計開支後自全球發售獲取所得款項淨額約1,470。
--------------------------------------------------------------------------------
Source 134: cggc is therefore required to comply with the laws , regulations and listing rules applicable to public companies listed on the shanghai stock exchange , including such measures designed to protect the interests of public and minority shareholders .
Translate: 因此,葛洲壩股份公司須遵守適用於上交所上市的上市公司的法律、法規及上市規則的規定。
Reference: 因此,葛洲壩股份公司須遵守在上海證券交易所上市的公眾公司所適用的法律、規例及上市規則,包括為保障公眾及少數股東權益而設計的該等措施。
--------------------------------------------------------------------------------
Source 135: the aged analysis of the group ' s trade receivables based on certification / invoice dates at the end of each reporting period , which approximated the respective revenue recognition dates are as follows .
Translate: 貴集團於各報告期末基於核證╱發票日期的貿易應收款項的賬齡分析如下。
Reference: 於各報告期末,貴集團按驗收╱發票日期作出的應收貿易款項之賬齡分析如下。
--------------------------------------------------------------------------------
Source 136: we have established a number of pharmaceutical joint ventures with leading international pharmaceutical companies and other joint venture partners .
Translate: 我們已與國際領先的醫藥公司及其他合資夥伴成立多個合資企業。
Reference: 我們與一些國際領先的製藥公司及其他合資夥伴建立了多家醫藥合資企業。
--------------------------------------------------------------------------------
Source 137: no stamp duty is payable in the cayman islands on transfers of shares of cayman islands companies except those which hold interests in land in the cayman islands .
Translate: 開曼群島對開曼群島公司股份轉讓並不徵收印花稅,惟轉讓在開曼群島擁有土地權益的公司的股份除外。
Reference: 開曼群島對開曼群島公司股份轉讓並不徵收印花稅,惟轉讓在開曼群島擁有土地權益的公司的股份除外。
--------------------------------------------------------------------------------
Source 138: close family members of an individual are those family members who may be expected to influence , or be influenced by , that individual in their dealings with the entity .
Translate: 與個人關係密切的家庭成員是指與實體交易時預期可能會影響該名個人或受其影響的家庭成員。
Reference: 個別人士的近親指預期會於與實體的交易中影響該人士或受其影響的家庭成員。
--------------------------------------------------------------------------------
Source 139: we strive to provide our patients with the best healthcare available , while adhering to strict ethical standards of medical practice , and treat them with respect , compassion and confidentiality .
Translate: 我們致力為病人提供最佳的醫療服務,而堅持嚴格遵守醫療道德標準的道德標準,並嚴緊保密及保密。
Reference: 我們致力於向病人提供最佳醫療服務,並秉持醫療實踐的嚴格道德標準,尊重、同情病人並對其病情保密。
--------------------------------------------------------------------------------
Source 140: and the loan agreements stated specific situations that the banks can demand for repayment , as a general and standard term of the loan agreements with these major commercial banks , such loan agreements contain a general term entitling the banks to demand for repayment at their discretion .
Translate: 及貸款協議列明銀行可要求還款的特定情況,例如與該等主要商業銀行訂立的貸款協議的一般及標準條款,該等貸款協議載有一般條款,規定銀行可酌情要求還款。
Reference: 及貸款協議訂明銀行可要求我們還款的特別情況,但作為與該等商業銀行之貸款協議的常見標準條款,該等貸款協議亦載有一般條款,訂明銀行可酌情要求我們還款。
--------------------------------------------------------------------------------
Source 141: if the beneficiary is a hong kong resident enterprise , which directly holds less than 25 % equity interests of the aforesaid enterprise , the tax levied shall be 10 % of the distributed dividends .
Translate: 倘受益人為香港居民企業,直接持有上述企業少於25%的股權,則應按所派股息10%的稅率徵收稅項。
Reference: 倘受益人為香港居民企業且直接持有上述企業少於25%的股權,則應按所派股息10%的稅率徵收所得稅。
--------------------------------------------------------------------------------
Source 142: for remaining balances not covered by social insurance scheme , the management assessed the collectability based on historical patterns and data .
Translate: 對於不在社會保險計劃的餘下餘額,管理層根據過往的模式及數據評估可收回的可能性。
Reference: 關於社保計劃未涵蓋的餘下餘額,管理層乃基於歷史規律及數據就可回收程度進行評估。
--------------------------------------------------------------------------------
Source 143: the regulation on work safety licenses 《 安全生產許可證條例 》 was promulgated and became effective on january 13 , 2004 .
Translate: 《安全生產許可證條例》於二零零四年一月十三日頒佈並施行。
Reference: 《安全生產許可證條例》於二零零四年一月十三日頒佈並正式實施。
--------------------------------------------------------------------------------
Source 144: utility patent is granted and registered upon application unless there are reasons for the patent administrative authority to reject the application after its preliminary review .
Translate: 實用新型專利於申請後獲授及登記,除非專利行政部門於初步審查後拒絕受理申請。
Reference: 倘實用新型專利申請經初步審查後並無發現駁回理由,則由專利行政部門授予專利並註冊。
--------------------------------------------------------------------------------
Source 145: the aggregate benefit of incentives is recognised as a reduction of rental expense on a straight-line basis over the lease term .
Translate: 優惠總利益以直線法於租賃期間確認為租金開支減少。
Reference: 獎勵利益總額於有關租賃期內以直線法確認為租金開支減少。
--------------------------------------------------------------------------------
Source 146: both the useful life of an asset and its residual value , if any , are reviewed at each balance sheet date .
Translate: 資產的可使用年期及其剩餘價值均於各結算日審閱。
Reference: 於各結算日檢討資產的可用年期及其殘值。
--------------------------------------------------------------------------------
Source 147: investors should seek the advice of their stockbroker or other professional adviser for details of the settlement arrangements as such arrangements may affect their rights and interests .
Translate: 投資者應就交收安排的詳情諮詢其股票經紀或其他專業顧問的意見,因為該等安排或會影響到其權利及權益。
Reference: 投資者應向股票經紀或其他專業顧問諮詢交收安排的詳情,因上述安排可能影響他們的權利及權益。
--------------------------------------------------------------------------------
Source 148: our long-term objective is to become a leading international apm products and services provider .
Translate: 我們的長期目標是成為領先的國際應用性能管理產品及服務供應商。
Reference: 我們的長期目標是成為領先的國際應用性能管理產品及服務供應商。
--------------------------------------------------------------------------------
Source 149: these plans provide equity based or equity related awards to employees of aig and its subsidiaries .
Translate: 該等計劃為AIG及其附屬公司的僱員提供股權或股權相關獎勵。
Reference: 該等計劃向AIG及其附屬公司僱員提供股本或股本相關獎勵。
--------------------------------------------------------------------------------
|
codes/labs_lecture13/lab02_lstm/lstm_exercise.ipynb | ###Markdown
Lab 02: LSTM - exercise
###Code
import torch
import torch.nn.functional as F
import torch.nn as nn
import math
import time
import utils
###Output
_____no_output_____
###Markdown
With or without GPU? It is recommended to run this code on GPU: * Time for 1 epoch on CPU: 274 sec (4.56 min) * Time for 1 epoch on GPU: 10.1 sec w/ GeForce GTX 1080 Ti
###Code
device= torch.device("cuda")
#device= torch.device("cpu")
print(device)
###Output
cuda
###Markdown
Download Penn Tree Bank (the tensor train_data should consist of 20 columns of ~50,000 words)
###Code
from utils import check_ptb_dataset_exists
data_path=check_ptb_dataset_exists()
train_data = torch.load(data_path+'ptb/train_data.pt')
test_data = torch.load(data_path+'ptb/test_data.pt')
print( train_data.size() )
print( test_data.size() )
###Output
torch.Size([46479, 20])
torch.Size([4121, 20])
###Markdown
Some constants associated with the data set
###Code
bs = 20
vocab_size = 10000
###Output
_____no_output_____
###Markdown
Make a recurrent net class
###Code
class three_layer_recurrent_net(nn.Module):
def __init__(self, hidden_size):
super(three_layer_recurrent_net, self).__init__()
self.layer1 = nn.Embedding(vocab_size, hidden_size) # COMPLETE HERE
self.layer2 = nn.LSTM(hidden_size, hidden_size) # COMPLETE HERE
self.layer3 = nn.Linear(hidden_size, vocab_size) # COMPLETE HERE
def forward(self, word_seq, h_init, c_init ):
g_seq = self.layer1(word_seq) # COMPLETE HERE
h_seq , (h_final,c_final) = self.layer2(g_seq, (h_init, c_init)) # COMPLETE HERE (don't forget the extra parenthesis around h_init and c_init)
score_seq = self.layer3(h_seq) # COMPLETE HERE
return score_seq, h_final , c_final
###Output
_____no_output_____
###Markdown
Build the net. Choose the hidden size to be 300. How many parameters in total?
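As a quick check (assuming PyTorch's LSTM parameterisation with separate input-hidden and hidden-hidden bias vectors): embedding 10000×300 = 3,000,000, LSTM 4×(300×300 + 300×300 + 300 + 300) = 722,400, linear 300×10000 + 10000 = 3,010,000, i.e. 6,732,400 parameters in total.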
###Code
hidden_size = 300 # COMPLETE HERE
net = three_layer_recurrent_net( hidden_size )
print(net)
utils.display_num_param(net)
###Output
three_layer_recurrent_net(
(layer1): Embedding(10000, 300)
(layer2): LSTM(300, 300)
(layer3): Linear(in_features=300, out_features=10000, bias=True)
)
There are 6732400 (6.73 million) parameters in this neural network
###Markdown
Send the weights of the networks to the GPU
###Code
net = net.to(device)
###Output
_____no_output_____
###Markdown
Manually set up the weights of the embedding module and the Linear module
###Code
net.layer1.weight.data.uniform_(-0.1, 0.1)
net.layer3.weight.data.uniform_(-0.1, 0.1)
print('')
###Output
###Markdown
Choose the criterion, as well as the following important hyperparameters: * initial learning rate = 5 * sequence length = 35
###Code
criterion = nn.CrossEntropyLoss()
my_lr = 5 # COMPLETE HERE
seq_length = 35 # COMPLETE HERE
###Output
_____no_output_____
###Markdown
Function to evaluate the network on the test set
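It reports exp(average cross-entropy loss), i.e. the per-word perplexity of the model on the test set (lower is better).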
###Code
def eval_on_test_set():
running_loss=0
num_batches=0
h = torch.zeros(1, bs, hidden_size)
c = torch.zeros(1, bs, hidden_size)
h=h.to(device)
c=c.to(device)
for count in range( 0 , 4120-seq_length , seq_length) :
minibatch_data = test_data[ count : count+seq_length ]
minibatch_label = test_data[ count+1 : count+seq_length+1 ]
minibatch_data=minibatch_data.to(device)
minibatch_label=minibatch_label.to(device)
scores, h, c = net( minibatch_data, h , c)
minibatch_label = minibatch_label.view( bs*seq_length )
scores = scores.view( bs*seq_length , vocab_size)
loss = criterion( scores , minibatch_label )
h=h.detach()
c=c.detach()
running_loss += loss.item()
num_batches += 1
total_loss = running_loss/num_batches
print('test: exp(loss) = ', math.exp(total_loss) )
###Output
_____no_output_____
###Markdown
Do 8 passes through the training set.
###Code
start=time.time()
for epoch in range(8):
    # keep the initial learning rate during the first 2 epochs, then divide it by 3 at every epoch
if epoch >= 2:
my_lr = my_lr / 3 # COMPLETE HERE
# create a new optimizer at the beginning of each epoch: give the current learning rate.
optimizer=torch.optim.SGD( net.parameters() , lr=my_lr )
    # set the running quantities to zero at the beginning of the epoch
running_loss=0
num_batches=0
# set the initial h and c to be the zero vector
h = torch.zeros(1, bs, hidden_size)
c = torch.zeros(1, bs, hidden_size)
# send them to the gpu
h=h.to(device)
c=c.to(device)
for count in range( 0 , 46478-seq_length , seq_length):
# Set the gradients to zeros
optimizer.zero_grad()
# create a minibatch
minibatch_data = train_data[count:count+seq_length] # COMPLETE HERE
minibatch_label = train_data[count+1:count+seq_length+1] # COMPLETE HERE
# send them to the gpu
minibatch_data=minibatch_data.to(device)
minibatch_label=minibatch_label.to(device)
# Detach to prevent from backpropagating all the way to the beginning
# Then tell Pytorch to start tracking all operations that will be done on h and c
h=h.detach() # COMPLETE HERE
c=c.detach() # COMPLETE HERE
h=h.requires_grad_() # COMPLETE HERE
c=c.requires_grad_() # COMPLETE HERE
# forward the minibatch through the net
scores, h, c = net(minibatch_data, h, c) # COMPLETE HERE
# reshape the scores and labels to huge batch of size bs*seq_length
scores = scores.view(bs*seq_length, vocab_size) # COMPLETE HERE
minibatch_label = minibatch_label.view(bs*seq_length) # COMPLETE HERE
# Compute the average of the losses of the data points in this huge batch
loss = criterion(scores, minibatch_label) # COMPLETE HERE
# backward pass to compute dL/dR, dL/dV and dL/dW
loss.backward() # COMPLETE HERE
# do one step of stochastic gradient descent: R=R-lr(dL/dR), V=V-lr(dL/dV), ...
utils.normalize_gradient(net)
optimizer.step() # COMPLETE HERE
# update the running loss
running_loss += loss.item()
num_batches += 1
# compute stats for the full training set
total_loss = running_loss/num_batches
elapsed = time.time()-start
print('')
print('epoch=',epoch, '\t time=', elapsed,'\t lr=', my_lr, '\t exp(loss)=', math.exp(total_loss))
eval_on_test_set()
###Output
epoch= 0 time= 8.211996078491211 lr= 5 exp(loss)= 279.4402583954013
test: exp(loss) = 176.30351187373043
epoch= 1 time= 16.67081379890442 lr= 5 exp(loss)= 126.98750670975554
test: exp(loss) = 133.53573246141025
epoch= 2 time= 25.035041570663452 lr= 1.6666666666666667 exp(loss)= 81.04688879221932
test: exp(loss) = 114.37226256739952
epoch= 3 time= 33.40575385093689 lr= 0.5555555555555556 exp(loss)= 66.8743564567243
test: exp(loss) = 110.30347768024261
epoch= 4 time= 41.72743630409241 lr= 0.1851851851851852 exp(loss)= 62.0367803181859
test: exp(loss) = 109.05039854876053
epoch= 5 time= 50.10138535499573 lr= 0.0617283950617284 exp(loss)= 60.293079923380624
test: exp(loss) = 108.4714515010681
epoch= 6 time= 58.443538665771484 lr= 0.0205761316872428 exp(loss)= 59.663322203579135
test: exp(loss) = 108.07592236509308
epoch= 7 time= 66.77264189720154 lr= 0.006858710562414266 exp(loss)= 59.437707806218725
test: exp(loss) = 107.80424795800211
###Markdown
Choose one sentence (taken from the test set)
###Code
sentence1 = "some analysts expect oil prices to remain relatively"
sentence2 = "over the next days and weeks they say investors should look for stocks to"
sentence3 = "prices averaging roughly $ N a barrel higher in the third"
sentence4 = "i think my line has been very consistent mrs. hills said at a news"
sentence5 = "this appears particularly true at gm which had strong sales in"
# or make your own sentence. No capital letter or punctuation allowed. Each word must be in the allowed vocabulary.
sentence6= "he was very"
# SELECT THE SENTENCE HERE
mysentence = sentence2
###Output
_____no_output_____
###Markdown
Convert the sentence into a vector, then send to GPU
###Code
minibatch_data=utils.sentence2vector(mysentence)
minibatch_data=minibatch_data.to(device)
print(minibatch_data)
###Output
tensor([[ 301],
[ 32],
[ 528],
[ 363],
[ 48],
[1193],
[ 374],
[ 674],
[ 410],
[ 238],
[2460],
[ 181],
[1709],
[ 64]], device='cuda:0')
###Markdown
Set the initial hidden state to zero, then run the LSTM.
###Code
h = torch.zeros(1, 1, hidden_size)
c = torch.zeros(1, 1, hidden_size)
h=h.to(device)
c=c.to(device)
scores , h, c = net(minibatch_data , h, c)
###Output
_____no_output_____
###Markdown
Display the network prediction for the next word
###Code
print(mysentence, '... \n')
utils.show_next_word(scores)
###Output
over the next days and weeks they say investors should look for stocks to ...
7.2% buy
4.2% be
3.8% sell
3.2% get
2.6% pay
2.3% <unk>
2.2% do
2.2% continue
2.0% make
1.8% keep
1.6% take
1.5% raise
1.4% market
1.0% focus
1.0% boost
0.9% come
0.9% meet
0.9% help
0.8% put
0.8% finance
0.8% give
0.8% stay
0.8% see
0.7% provide
0.7% consider
0.7% say
0.7% reduce
0.7% try
0.6% go
0.6% avoid
###Markdown
Lab 02: LSTM - exercise
###Code
import torch
import torch.nn.functional as F
import torch.nn as nn
import math
import time
import utils
###Output
_____no_output_____
###Markdown
With or without GPU? It is recommended to run this code on GPU: * Time for 1 epoch on CPU: 274 sec (4.56 min) * Time for 1 epoch on GPU: 10.1 sec w/ GeForce GTX 1080 Ti
###Code
device= torch.device("cuda")
#device= torch.device("cpu")
print(device)
###Output
_____no_output_____
###Markdown
Download Penn Tree Bank (the tensor train_data should consist of 20 columns of ~50,000 words)
###Code
from utils import check_ptb_dataset_exists
data_path=check_ptb_dataset_exists()
train_data = torch.load(data_path+'ptb/train_data.pt')
test_data = torch.load(data_path+'ptb/test_data.pt')
print( train_data.size() )
print( test_data.size() )
###Output
_____no_output_____
###Markdown
Some constants associated with the data set
###Code
bs = 20
vocab_size = 10000
###Output
_____no_output_____
###Markdown
Make a recurrent net class
###Code
class three_layer_recurrent_net(nn.Module):
def __init__(self, hidden_size):
super(three_layer_recurrent_net, self).__init__()
self.layer1 = # COMPLETE HERE
self.layer2 = # COMPLETE HERE
self.layer3 = # COMPLETE HERE
def forward(self, word_seq, h_init, c_init ):
g_seq = # COMPLETE HERE
h_seq , (h_final,c_final) = # COMPLETE HERE (don't forget the extra parenthesis around h_init and c_init)
score_seq = # COMPLETE HERE
return score_seq, h_final , c_final
###Output
_____no_output_____
###Markdown
Build the net. Choose the hidden size to be 300. How many parameters in total?
###Code
hidden_size= # COMPLETE HERE
net = three_layer_recurrent_net( hidden_size )
print(net)
utils.display_num_param(net)
###Output
_____no_output_____
###Markdown
Send the weights of the networks to the GPU
###Code
net = net.to(device)
###Output
_____no_output_____
###Markdown
Manually set up the weights of the embedding module and the Linear module
###Code
net.layer1.weight.data.uniform_(-0.1, 0.1)
net.layer3.weight.data.uniform_(-0.1, 0.1)
print('')
###Output
_____no_output_____
###Markdown
Choose the criterion, as well as the following important hyperparameters: * initial learning rate = 5* sequence length = 35
###Code
criterion = nn.CrossEntropyLoss()
my_lr = # COMPLETE HERE
seq_length = # COMPLETE HERE
###Output
_____no_output_____
###Markdown
Function to evaluate the network on the test set
###Code
def eval_on_test_set():
running_loss=0
num_batches=0
h = torch.zeros(1, bs, hidden_size)
c = torch.zeros(1, bs, hidden_size)
h=h.to(device)
c=c.to(device)
for count in range( 0 , 4120-seq_length , seq_length) :
minibatch_data = test_data[ count : count+seq_length ]
minibatch_label = test_data[ count+1 : count+seq_length+1 ]
minibatch_data=minibatch_data.to(device)
minibatch_label=minibatch_label.to(device)
scores, h, c = net( minibatch_data, h , c)
minibatch_label = minibatch_label.view( bs*seq_length )
scores = scores.view( bs*seq_length , vocab_size)
loss = criterion( scores , minibatch_label )
h=h.detach()
c=c.detach()
running_loss += loss.item()
num_batches += 1
total_loss = running_loss/num_batches
print('test: exp(loss) = ', math.exp(total_loss) )
###Output
_____no_output_____
###Markdown
Do 8 passes through the training set.
###Code
start=time.time()
for epoch in range(8):
    # keep the initial learning rate during the first 2 epochs, then divide it by 3 at every epoch
if epoch >= 2:
# COMPLETE HERE
# create a new optimizer at the beginning of each epoch: give the current learning rate.
optimizer=torch.optim.SGD( net.parameters() , lr=my_lr )
    # set the running quantities to zero at the beginning of the epoch
running_loss=0
num_batches=0
# set the initial h and c to be the zero vector
h = torch.zeros(1, bs, hidden_size)
c = torch.zeros(1, bs, hidden_size)
# send them to the gpu
h=h.to(device)
c=c.to(device)
for count in range( 0 , 46478-seq_length , seq_length):
# Set the gradients to zeros
optimizer.zero_grad()
# create a minibatch
minibatch_data = # COMPLETE HERE
minibatch_label = # COMPLETE HERE
# send them to the gpu
minibatch_data=minibatch_data.to(device)
minibatch_label=minibatch_label.to(device)
# Detach to prevent from backpropagating all the way to the beginning
# Then tell Pytorch to start tracking all operations that will be done on h and c
h= # COMPLETE HERE
c= # COMPLETE HERE
h= # COMPLETE HERE
c= # COMPLETE HERE
# forward the minibatch through the net
scores, h, c = # COMPLETE HERE
# reshape the scores and labels to huge batch of size bs*seq_length
scores = # COMPLETE HERE
minibatch_label = # COMPLETE HERE
# Compute the average of the losses of the data points in this huge batch
loss = # COMPLETE HERE
# backward pass to compute dL/dR, dL/dV and dL/dW
# COMPLETE HERE
# do one step of stochastic gradient descent: R=R-lr(dL/dR), V=V-lr(dL/dV), ...
utils.normalize_gradient(net)
# COMPLETE HERE
# update the running loss
running_loss += loss.item()
num_batches += 1
# compute stats for the full training set
total_loss = running_loss/num_batches
elapsed = time.time()-start
print('')
print('epoch=',epoch, '\t time=', elapsed,'\t lr=', my_lr, '\t exp(loss)=', math.exp(total_loss))
eval_on_test_set()
###Output
_____no_output_____
###Markdown
Choose one sentence (taken from the test set)
###Code
sentence1 = "some analysts expect oil prices to remain relatively"
sentence2 = "over the next days and weeks they say investors should look for stocks to"
sentence3 = "prices averaging roughly $ N a barrel higher in the third"
sentence4 = "i think my line has been very consistent mrs. hills said at a news"
sentence5 = "this appears particularly true at gm which had strong sales in"
# or make your own sentence. No capital letter or punctuation allowed. Each word must be in the allowed vocabulary.
sentence6= "he was very"
# SELECT THE SENTENCE HERE
mysentence = sentence1
###Output
_____no_output_____
###Markdown
Convert the sentence into a vector, then send to GPU
###Code
minibatch_data=utils.sentence2vector(mysentence)
minibatch_data=minibatch_data.to(device)
print(minibatch_data)
###Output
_____no_output_____
###Markdown
Set the initial hidden state to zero, then run the LSTM.
###Code
h = torch.zeros(1, 1, hidden_size)
c = torch.zeros(1, 1, hidden_size)
h=h.to(device)
c=c.to(device)
scores , h, c = net(minibatch_data , h, c)
###Output
_____no_output_____
###Markdown
Display the network prediction for the next word
###Code
print(mysentence, '... \n')
utils.show_next_word(scores)
###Output
_____no_output_____ |
wp/notebooks/model/mc_dropout/mc_dropout.ipynb | ###Markdown
General functionality of MC Dropout
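In MC (Monte Carlo) dropout, dropout is kept active at prediction time, so the same input can be passed through the network several times (the `sample_size` argument used below) and the spread of the sampled predictions serves as an estimate of the model's uncertainty. In this notebook, the last of the loaded MNIST classes is additionally held out as out-of-distribution data.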
###Code
%load_ext autoreload
import os, sys, importlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import tensorflow as tf
BASE_PATH = os.path.join(os.getcwd(), "..", "..", "..")
MODULE_PATH = os.path.join(BASE_PATH, "modules")
DATASET_PATH = os.path.join(BASE_PATH, "datasets")
sys.path.append(MODULE_PATH)
from bayesian import McDropout
from data import BenchmarkData, DataSetType
from models import setup_growth, FcholletCNN
setup_growth()
num_classes = 4
benchmark_data = BenchmarkData(DataSetType.MNIST, os.path.join(DATASET_PATH, "mnist"), classes=num_classes, dtype=np.float32)
benchmark_data.inputs.shape
inputs = benchmark_data.inputs
targets = benchmark_data.targets
# keep all but the last class as in-distribution data
selector = np.isin(targets, np.unique(targets)[:-1])
new_targets = targets[selector]
new_inputs = inputs[selector]
# the held-out last class serves as out-of-distribution (OOD) data
ood_selector = np.logical_not(selector)
ood_targets = targets[ood_selector]
ood_inputs = inputs[ood_selector]
x_train, x_test, y_train, y_test = train_test_split(new_inputs, new_targets)
%autoreload 2
model = FcholletCNN(output=num_classes)
model.build(input_shape=inputs.shape)
model.summary()
mc_dropout = McDropout(model)
mc_dropout.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
num_samples = 10
pred = mc_dropout(x_test[:2], sample_size=num_samples)
indices = np.stack([np.array(range(num_classes))]*num_samples, axis=0)
plt.scatter(indices, pred[0], alpha=.1)
plt.show()
mc_dropout.evaluate(x_test, y_test)
mc_dropout.fit(x_train[:10], y_train[:10], epochs=1, batch_size=50)
mc_dropout.evaluate(x_test, y_test)
y_test[0]
num_samples = 10
pred = mc_dropout(x_test[:2], sample_size=num_samples)
indices = np.stack([np.array(range(num_classes))]*num_samples, axis=0)
plt.scatter(indices, pred[0], alpha=.1)
plt.show()
mc_dropout.evaluate(x_test, y_test, sample_size=100, batch_size=100)
num_samples = 100
predictions = mc_dropout(x_test, sample_size=num_samples, batch_size=100)
def plot_class(datapoint_index=0, class_index=0):
    # plot the distribution of sampled probabilities for one datapoint and class
    # (uses the `predictions` array of shape (n_points, n_samples, n_classes) computed above)
    sns.kdeplot(predictions[datapoint_index, :, class_index], shade=True)
    # mark the mean predicted probability for this class
    plt.axvline(predictions[datapoint_index, :, class_index].mean(), color="red")
    plt.show()
predictions.shape
pred_label = np.argmax(predictions, axis=-1)
pred_label.shape
true_labels = np.stack([y_test]*num_samples, axis=1)
true_labels.shape
net_accuracies = np.mean(pred_label == true_labels, axis=0)
sns.displot(predictions[:, 0], bins=10, log_scale=True, kde=True)
def plot_separated(predictions, targets):
mean = np.mean(predictions, axis=-1)
u_targets = np.unique(targets)
fig, axes = plt.subplots(len(u_targets), len(u_targets), figsize=(20, 5))
for target in u_targets:
selector = u_targets == target
for o_target in u_targets:
sns.kdeplot(ax=axes[target, o_target], x=mean[o_target], shade=True)
predictions = mc_dropout(x_test, sample_size=5, batch_size=900)
predictions = mc_dropout(x_test, sample_size=100, batch_size=900)
#sns.kdeplot(predictions[0][:, 1].T, shade=True, color="red")
sns.kdeplot(predictions[0][:, 0].T, shade=True, color="black")
sns.kdeplot(predictions[0][:, 2].T, shade=True, color="green")
sns.kdeplot(predictions[0][:, 3], shade=True, color="blue")
sns.kdeplot(preds[0], shade=True)
sns.kdeplot(net_accuracies, shade=True)
plt.axvline()
plt.hist(net_accuracies)
plt.show()
sns.relplot(x="timepoint", y="signal", kind="line", ci="sd", data=fmri);
###Output
_____no_output_____ |
Chapter06/Examples/Examples.ipynb | ###Markdown
Examples In this examples document we will look at the plots and techniques we'll be using in the following exercises and activities. Loading our dataset
###Code
# importing the necessary dependencies
import pandas as pd
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
output_notebook()
# loading the Dataset with pandas
dataset = pd.read_csv('../../Datasets/computer_hardware.csv')
# looking at the dataset
dataset.head()
###Output
_____no_output_____
###Markdown
Our example dataset contains:- `vendor_name`: Name of the vendor- `model`: model id- `myct`: machine cycle time in nanoseconds- `mmin`: minimum main memory in kilobytes- `mmax`: maximum main memory in kilobytes- `cach`: cache memory in kilobytes- `chmin`: minimum channels in units- `chmax`: maximum channels in units- `prp`: published relative performance- `erp`: estimated relative performance from the original article --- Simple Figure with a line Basic plotting with lines, circles, etc. is a simple task when using Bokeh. By calling the `line` method of a given figure, we can add a line based on x and y coordinates in our visualization. **Note**: If we simply want to plot our entries with the given index, we can create a new column and assign the dataset index in order to nicely access it.
###Code
# adding an index column to use it for the x-axis
dataset['index'] = dataset.index
# plotting the cache memory levels as line
plot = figure(title='Cache per Hardware', x_axis_label='Hardware index', y_axis_label='Cache Memory')
plot.line(dataset['index'], dataset['cach'], line_width=5)
show(plot)
###Output
_____no_output_____
###Markdown
Scatter plot Scatter plots can be used in the same way as line plots. Which one to use always depends on the scenario and the data we are looking at. Please refer back to the second chapter about different plots and data for this information.
###Code
# plotting the hardware cache as dots
plot = figure(title='Cache per Hardware', x_axis_label='Hardware', y_axis_label='Cache Memory')
plot.scatter(dataset['index'], dataset['cach'], size=5, color='red')
show(plot)
###Output
_____no_output_____
###Markdown
**Note**: We can provide a `color` argument with one of the pre-defined colors to quickly assign different colors to our plotted glyphs. Adding a legend Legends display a mapping between e.g. lines in the plot and the according information like e.g. the hardware cache memory.By adding a `legend_label` argument to the plot calls like `plot.line()`, we get a small box containing the information at the, by default, upper right corner.
###Code
# plotting cache memory and cycle time with legend
plot = figure(title='Attributes per Hardware', x_axis_label='Hardware index', y_axis_label='Attribute Value')
plot.line(dataset['index'], dataset['cach'], line_width=5, legend_label='Cache Memory')
plot.line(dataset['index'], dataset['myct'], line_width=5, color='red', legend_label='Cycle time in ns')
show(plot)
###Output
_____no_output_____
###Markdown
Mutable Legend items When looking at the example above, we can see that once we have several lines, the visualization can get cluttered. We can give the user the ability to **"mute"**, meaning de-emphasize, the clicked element in the legend. Adding a `muted_alpha` argument to the line plotting calls and setting the legend's `click_policy` to `mute` are the only two steps needed.
###Code
# adding mutability to the legend
plot = figure(title='Attributes per Hardware', x_axis_label='Hardware index', y_axis_label='Attribute Value')
plot.line(dataset['index'], dataset['cach'], line_width=5, legend_label='Cache Memory', muted_alpha=0.2)
plot.line(dataset['index'], dataset['myct'], line_width=5, color='red', legend_label='Cycle time in ns', muted_alpha=0.2)
plot.legend.click_policy="mute"
show(plot)
###Output
_____no_output_____
###Markdown
Color Mappers Color mappers can map specific values to a given color in the selected spectrum. By providing the minimum and maximum value for a variable, we define the range in which colors are returned.
###Code
# adding color based on the mean price to our elements
from bokeh.models import LinearColorMapper
color_mapper = LinearColorMapper(palette='Magma256', low=min(dataset['cach']), high=max(dataset['cach']))
plot = figure(title='Cache per Hardware', x_axis_label='Hardware', y_axis_label='Cache Memory')
plot.scatter(dataset['index'], dataset['cach'], color={'field': 'y', 'transform': color_mapper}, size=10)
show(plot)
###Output
_____no_output_____
###Markdown
DataSources DataSources can be helpful in several cases, e.g. for displaying a tooltip when hovering over data points. In most cases we can use pandas DataFrames to feed data into our plot, but for certain features like tooltips, we have to use a ColumnDataSource.
###Code
# using a ColumnDataSource to display a tooltip on hovering
from bokeh.models.sources import ColumnDataSource
data_source = ColumnDataSource(data=dict(
vendor_name=dataset['vendor_name'],
model=dataset['model'],
cach=dataset['cach'],
x=dataset['index'],
y=dataset['cach']
))
TOOLTIPS=[
('Vendor', '@vendor_name'),
('Model', '@model'),
('Cache', '@cach')
]
plot = figure(title='Cache per Hardware', x_axis_label='Hardware'
, y_axis_label='Cache Memory'
, tooltips=TOOLTIPS)
plot.scatter('x', 'y', size=10, color='teal', source=data_source)
show(plot)
###Output
_____no_output_____
###Markdown
Interactive Widget In the example below, we first import the interact element from the ipywidgets library. This allows us to define a new method and annotate it with the @interact decorator. The provided Value attribute tells the interact element which widget to use based on the data type of the argument. In our example we provide a string, which gives us a TextBox widget. We can refer to the table above to determine which Value data type will return which widget. The print statement in the code simply prints whatever has been entered in the textbox below the widget.
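For instance (assuming standard ipywidgets behaviour), passing `Value=(0, 10)` would render an integer slider, `Value=True` a checkbox, and `Value=['a', 'b']` a dropdown.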
###Code
# importing the widgets
from ipywidgets import interact, interact_manual
# creating an input text
@interact(Value='Input Text')
def text_input(Value):
print(Value)
###Output
_____no_output_____ |
notebooks/infrastructure-data-sourcing.ipynb | ###Markdown
Pre-processing
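Each helper below loads one CSV export of the tracker listed in its docstring and tags the rows with an `asset_type` column.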
###Code
def get_steel_plants():
"""
source: https://globalenergymonitor.org/projects/global-steel-plant-tracker/
"""
df_steel = (
pd
.read_csv("../datasets/steel-plant-infrastructure-dataset.csv")
.assign(asset_type="steel-plant")
)
return df_steel
get_steel_plants().head(2)
def get_coal_mines():
"""
source: https://globalenergymonitor.org/projects/global-coal-mine-tracker/
"""
df_coal = (
pd
.read_csv("../datasets/coal-mine-infrastructure-dataset.csv")
.assign(asset_type="coal_mine")
)
return df_coal
get_coal_mines().head(2)
def get_fossil_pipelines():
"""
source: https://globalenergymonitor.org/projects/global-fossil-infrastructure-tracker/tracker-map/
No lat/lng for pipelines
No route for end point of pipelines
"""
df_fossil = (
pd
.read_csv("../datasets/fossil-pipelines-infrastructure-dataset.csv")
.assign(asset_type="fossil")
)
return df_fossil
get_fossil_pipelines().query("lat==lat").head(2)
get_fossil_pipelines().query("route==route").head(2)
def get_power_plant():
"""
source: https://wiki.openstreetmap.org/wiki/Tag:power%3Dplant
Plant sources available:
['hydro', 'waste', 'gas', nan, 'wind', 'oil', 'coal', 'biofuel',
'solar;diesel', 'solar', 'oil;gas', 'biomass;oil', 'biomass',
'gas;oil', 'biogas', 'nuclear', 'battery',
'abandoned_mine_methane;oil'
"""
DIRTY_POWER_PLANTS = ['gas', 'oil', 'coal', 'biofuel', 'oil;gas', 'biomass;oil', 'biomass',
'gas;oil', 'biogas', 'abandoned_mine_methane;oil']
df_power_plant = (
pd
.read_csv("../datasets/power-plant-infrastructure-dataset.csv")
.dropna(axis=1, how="all")
.rename(columns={'plant:source': 'plant_source'})
.assign(asset_type="power_plant")
.query("plant_source in @DIRTY_POWER_PLANTS")
# matches coal mine dataset
.assign(type= lambda s: np.where(s.underground == 'yes', 'underground', "surface"))
)
return df_power_plant
get_power_plant().head(2)
###Output
_____no_output_____ |
Modelos_com_reducao/Local/CNN/AutoEncoder/CNNWebXssSQLBruteForceIDS(23-02-2018).ipynb | ###Markdown
AutoEncoder
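The cell below trains a fully connected 78 → 64 → 36 → 18 → 36 → 64 → 78 autoencoder on the flow features with a mean-squared-error reconstruction loss; the 18-dimensional bottleneck (the `encoder` model) is later used as the reduced input for the 1-D CNN classifier in the cross-validation section.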
###Code
inp_train,inp_test,out_train,out_test = train_test_split(input_label.reshape(len(input_label), 78), input_label.reshape(len(input_label), 78), test_size=0.2)
input_model = layers.Input(shape = (78,))
enc = layers.Dense(units = 64, activation = "relu", use_bias = True)(input_model)
enc = layers.Dense(units = 36, activation = "relu", use_bias = True)(enc)
enc = layers.Dense(units = 18, activation = "relu")(enc)
dec = layers.Dense(units = 36, activation = "relu", use_bias = True)(enc)
dec = layers.Dense(units = 64, activation = "relu", use_bias = True)(dec)
dec = layers.Dense(units = 78, activation = "relu", use_bias = True)(dec)
auto_encoder = keras.Model(input_model, dec)
encoder = keras.Model(input_model, enc)
decoder_input = layers.Input(shape = (18,))
decoder_layer = auto_encoder.layers[-3](decoder_input)
decoder_layer = auto_encoder.layers[-2](decoder_layer)
decoder_layer = auto_encoder.layers[-1](decoder_layer)
decoder = keras.Model(decoder_input, decoder_layer)
auto_encoder.compile(optimizer=keras.optimizers.Adam(learning_rate=0.00025), loss = "mean_squared_error", metrics = ['accuracy'])
train = auto_encoder.fit(x = inp_train, y = out_train,validation_split= 0.1, epochs = 10, verbose = 1, shuffle = True)
predict = auto_encoder.predict(inp_test)
losses = keras.losses.mean_squared_error(out_test, predict).numpy()
# average reconstruction error (MSE) over the test set
total = 0
for loss in losses:
    total += loss
print(total / len(losses))
###Output
0.0024433109043952026
###Markdown
cross validation
###Code
confusion_matrixs = []
roc_curvs = []
input_label = encoder.predict(input_label).reshape(len(input_label), 18, 1)
for i in range(10):
mini = int(len(input_label) * 0.10) * i
maxi = int((len(input_label) * 0.10) * (i + 1))
inp_train = np.array([*input_label[0: mini],*input_label[maxi:len(input_label)]])
inp_test = np.array(input_label[mini: maxi])
out_train = np.array([*output_label[0: mini],*output_label[maxi:len(output_label)]])
out_test = np.array(output_label[mini:maxi])
model = keras.Sequential([
layers.Input(shape = (18,1)),
layers.Conv1D(filters = 16, kernel_size = 3, padding = "same", activation = "relu", use_bias = True),
layers.MaxPool1D(pool_size = 3),
layers.Conv1D(filters = 8, kernel_size = 3, padding = "same", activation = "relu", use_bias = True),
layers.MaxPool1D(pool_size = 3),
layers.Flatten(),
layers.Dense(units = 2, activation = "softmax")
])
model.compile(optimizer= keras.optimizers.Adam(learning_rate= 0.00025), loss="sparse_categorical_crossentropy", metrics=['accuracy'])
treino = model.fit(x = inp_train, y = out_train, validation_split= 0.1, epochs = 10, shuffle = True,verbose = 0)
res = np.array([np.argmax(resu) for resu in model.predict(inp_test)])
confusion_matrixs.append(confusion_matrix(out_test, res))
fpr, tpr, _ = roc_curve(out_test, res)
auc = roc_auc_score(out_test, res)
roc_curvs.append([fpr, tpr, auc])
print(i)
###Output
0
1
2
3
4
5
6
7
8
9
###Markdown
Roc Curves
###Code
cores = ["blue", "orange", "green", "red", "purple", "brown", "pink", "gray", "olive", "cyan"]
for i in range(10):
plt.plot(roc_curvs[i][0],roc_curvs[i][1],label="curva " + str(i) + ", auc=" + str(roc_curvs[i][2]), c = cores[i])
plt.legend(loc=4)
plt.show()
total_conv_matrix = [[0,0],[0,0]]
for cov in confusion_matrixs:
total_conv_matrix[0][0] += cov[0][0]
total_conv_matrix[0][1] += cov[0][1]
total_conv_matrix[1][0] += cov[1][0]
total_conv_matrix[1][1] += cov[1][1]
def plot_confusion_matrix(cm, classes, normaliza = False, title = "Confusion matrix", cmap = plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normaliza:
cm = cm.astype('float') / cm.sum(axis = 1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print("Confusion matrix, without normalization")
print(cm)
thresh = cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i,j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
labels = ["Benign", "WebXssSQLBruteForce"]
plot_confusion_matrix(cm = np.array(total_conv_matrix), classes = labels, title = "WebXssSQLBruteForce IDS")
###Output
Confusion matrix, without normalization
[[1042327 1]
[ 426 140]]
|
tflite-personal/tflite_c03_exercise_convert_model_to_tflite (1).ipynb | ###Markdown
Copyright 2018 Shivansh Gour. Based on a work by The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Train Your Own Model and Convert It to TFLite This notebook uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here: <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing we'll use here.This uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow. Import and load the Fashion MNIST data directly from TensorFlow: Setup
###Code
try:
%tensorflow_version 2.x
except:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import pathlib
print(tf.__version__)
###Output
/home/chronos/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/chronos/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
###Markdown
Download Fashion MNIST Dataset
###Code
splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))
splits, info = tfds.load('fashion_mnist', with_info=True, as_supervised=True, split=splits)
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
class_names = ['T-shirt_top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
with open('labels.txt', 'w') as f:
f.write('\n'.join(class_names))
IMG_SIZE = 28,28
###Output
_____no_output_____
###Markdown
Preprocessing data Preprocess
###Code
# Write a function to normalize and resize the images
def format_example(image, label):
  # Resize and normalize pixel values to the [0, 1] range
  image = tf.image.resize(image, IMG_SIZE) / 255.0
return image, label
BATCH_SIZE = 32
###Output
_____no_output_____
###Markdown
Create a Dataset from images and labels
###Code
# Prepare the examples by preprocessing them and then batching them (and optionally prefetching them)
# If you wish you can shuffle train set here
train_batches = train_examples.shuffle(num_examples // 4).map(format_example).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_example).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_example).batch(1)
###Output
_____no_output_____
###Markdown
Building the model
###Code
"""
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 16) 160
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 16) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 32) 4640
_________________________________________________________________
flatten (Flatten) (None, 3872) 0
_________________________________________________________________
dense (Dense) (None, 64) 247872
_________________________________________________________________
dense_1 (Dense) (None, 10) 650
=================================================================
Total params: 253,322
Trainable params: 253,322
Non-trainable params: 0
"""
# Build the model shown in the previous cell
model = tf.keras.Sequential([
# Set the input shape to (28, 28, 1), kernel size=3, filters=16 and use ReLU activation,
tf.keras.layers.Conv2D(kernel_size=3, filters=16, activation='relu', input_shape=(28,28,1)),
# model.add(Conv2D (kernel_size = (20,30), filters = 400, activation='relu'))
tf.keras.layers.MaxPooling2D(),
# Set the number of filters to 32, kernel size to 3 and use ReLU activation
tf.keras.layers.Conv2D(kernel_size=3, filters=32, activation='relu'),
# Flatten the output layer to 1 dimension
tf.keras.layers.Flatten(),
# Add a fully connected layer with 64 hidden units and ReLU activation
tf.keras.layers.Dense(units=64, activation='relu'),
# Attach a final softmax classification head
      tf.keras.layers.Dense(units=10, activation='softmax')
])
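# How the parameter counts in the summary above arise (sanity check):
#   conv2d:   3*3*1*16   + 16 =     160
#   conv2d_1: 3*3*16*32  + 32 =   4,640
#   dense:    3872*64    + 64 = 247,872
#   dense_1:  64*10      + 10 =     650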
# Set the loss and accuracy metrics
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
###Output
_____no_output_____
###Markdown
Train
###Code
model.fit(train_batches,
epochs=10,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Exporting to TFLite
###Code
export_dir = 'saved_model/1'
# Use the tf.saved_model API to export the SavedModel
tf.saved_model.save(model, export_dir)
# Your Code Here
# saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
# loaded = tf.saved_model.load(export_dir)
# print(list(loaded.signatures.keys()))
# infer = loaded.signatures["serving_default"]
# print(infer.structured_input_signature)
# print(infer.structured_outputs)
#@title Select mode of optimization
mode = "Speed" #@param ["Default", "Storage", "Speed"]
if mode == 'Storage':
optimization = tf.lite.Optimize.OPTIMIZE_FOR_SIZE
elif mode == 'Speed':
optimization = tf.lite.Optimize.OPTIMIZE_FOR_LATENCY
else:
optimization = tf.lite.Optimize.DEFAULT
optimization
# Use the TFLiteConverter SavedModel API to initialize the converter
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
# converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
# Set the optimizations using the mode selected above
converter.optimizations = [optimization]
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Invoke the converter to finally generate the TFLite model
tflite_model = converter.convert()
# tflite_model = converter.convert()
tflite_model_file = 'model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
Test if your model is working
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Gather results for the randomly sampled test images
predictions = []
test_labels = []
test_images = []
for img, label in test_batches.take(50):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label[0])
test_images.append(np.array(img))
#@title Utility functions for plotting
# Utilities for plotting
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label.numpy():
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks(list(range(10)), class_names, rotation='vertical')
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array[0], color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array[0])
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('green')
#@title Visualize the outputs { run: "auto" }
index = 1 #@param {type:"slider", min:1, max:10, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_images)
plt.show()
#plot_value_array(index, predictions, test_labels)
#plt.show()
###Output
_____no_output_____
###Markdown
Download TFLite model and assets**NOTE: You might have to run the cell below twice**
###Code
try:
from google.colab import files
files.download(tflite_model_file)
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Deploying TFLite model Now that you have downloaded the trained TFLite model, you can go ahead and deploy it in an Android/iOS application by placing the model assets in the appropriate location. Prepare the test images for download (Optional)
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]].lower(), index))
!ls test_images
!zip -qq fmnist_test_images.zip -r test_images/
try:
files.download('fmnist_test_images.zip')
except:
pass
###Output
_____no_output_____ |
02_interactive_pixel_classification_sklearn.ipynb | ###Markdown
Interactive pixel classification in napari using scikit-learnPixel classification is a technique for assigning pixels to multiple classes. If there are two classes (object and background), we are talking about binarization. In this example we use a [random forest classifier](https://en.wikipedia.org/wiki/Random_forest) for pixel classification.See also* [Scikit-learn RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)* [Classification of land cover by Chris Holden](https://ceholden.github.io/open-geo-tutorial/python/chapter_5_classification.html)As usual, we start by loading an example image.
###Code
from skimage.io import imread, imshow
image = imread('https://samples.fiji.sc/blobs.png')
imshow(image)
###Output
_____no_output_____
###Markdown
Generating a feature stackPixel classifiers such as the random forest classifier take multiple images as input. We typically call these images a feature stack because every pixel now has multiple values (features). In the following example we create a feature stack containing three features:* The original pixel value* The pixel value after a Gaussian blur* The pixel value of the Gaussian blurred image processed through a Sobel operator.Thus, we denoise the image and detect edges. All three images help the pixel classifier differentiate positive and negative pixels.
###Code
from skimage import filters
import numpy as np
def generate_feature_stack(image):
# determine features
blurred = filters.gaussian(image, sigma=2)
edges = filters.sobel(blurred)
# collect features in a stack
# The ravel() function turns a nD image into a 1-D image.
# We need to use it because scikit-learn expects values in a 1-D format here.
feature_stack = [
image.ravel(),
blurred.ravel(),
edges.ravel()
]
# return stack as numpy-array
return np.asarray(feature_stack)
feature_stack = generate_feature_stack(image)
# show feature images
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 3, figsize=(10,10))
# reshape(image.shape) is the opposite of ravel() here. We just need it for visualization.
axes[0].imshow(feature_stack[0].reshape(image.shape), cmap=plt.cm.gray)
axes[1].imshow(feature_stack[1].reshape(image.shape), cmap=plt.cm.gray)
axes[2].imshow(feature_stack[2].reshape(image.shape), cmap=plt.cm.gray)
###Output
_____no_output_____
###Markdown
Formatting dataWe need to format the input data so that it fits what scikit-learn expects. Scikit-learn asks for an array of shape (n, m) as input data and (n) annotations. n corresponds to the number of pixels and m to the number of features. In our case m = 3.
###Code
def format_data(feature_stack, annotation):
# reformat the data to match what scikit-learn expects
# transpose the feature stack
X = feature_stack.T
# make the annotation 1-dimensional
y = annotation.ravel()
# remove all pixels from the feature and annotations which have not been annotated
mask = y > 0
X = X[mask]
y = y[mask]
return X, y
###Output
_____no_output_____
###Markdown
Interactive segmentationWe can also use napari to annotate some regions as negative (label = 1) and positive (label = 2).
###Code
import napari
# start napari
viewer = napari.Viewer()
# add image
viewer.add_image(image)
# add an empty labels layer and keep it in a variable
labels = viewer.add_labels(np.zeros(image.shape).astype(int))
###Output
_____no_output_____
###Markdown
Manual annotationThe user (that might be YOU!) should annotate two kinds of regions: inside and outside the objects of interest. First, use `Paint mode` to annotate the background. Then, increase the `label` by clicking the `+` button and also annotate some objects of interest.Your annotation should approximately look like this:
###Code
napari.utils.nbscreenshot(viewer)
###Output
_____no_output_____
###Markdown
Training the random forest classifierWe now train the [random forest classifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) by providing the feature stack X and the annotations y. Therefore, we retrieve the annotations from the napari layer:
###Code
manual_annotations = labels.data
from skimage.io import imshow
imshow(manual_annotations, vmin=0, vmax=2)
# for training, we need to generate features
feature_stack = generate_feature_stack(image)
X, y = format_data(feature_stack, manual_annotations)
# train classifier
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(max_depth=2, random_state=0)
classifier.fit(X, y)
###Output
_____no_output_____
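###Markdown
As an optional sanity check (not a proper validation, since these are the very pixels the classifier was fitted on), we can compute the mean accuracy on the annotated pixels.
###Code
# mean accuracy on the annotated (training) pixels; a low value would hint at too few annotations or features
print(classifier.score(X, y))
###Output
_____no_output_____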
###Markdown
Predicting pixel classesAfter the classifier has been trained, we can use it to predict pixel classes for whole images. Note that in the following code we provide `feature_stack.T`, which contains more pixels than the X used in the commands above, because it also includes the pixels which were not annotated before.
###Code
# process the whole image and show result
result_1d = classifier.predict(feature_stack.T)  # the background label is subtracted a few cells further below
result_2d = result_1d.reshape(image.shape)
###Output
_____no_output_____
###Markdown
We also add the result to napari.
###Code
result_layer = viewer.add_labels(result_2d)
napari.utils.nbscreenshot(viewer)
result_layer.visible = False
# remove background label
# only works if background was annotated first
result_2d = result_2d - 1
viewer.add_labels(result_2d)
napari.utils.nbscreenshot(viewer)
###Output
_____no_output_____ |
composable_pipeline/notebooks/custom_pipeline/02_first_custom_pipeline.ipynb | ###Markdown
First Custom Pipeline----Please use JupyterLab http://<board_ip_address>/lab for this notebook.This notebook shows you how to create your first custom pipeline Aims* Create ComposableOverlay object* Start the HDMI path* Compose pipeline* Play with the pipeline Table of Contents* [Download Composable Overlay](download)* [Start HDMI Video](start_hdmi)* [Let us Compose](compose)* [Visualize the Pipeline](visualize)* [Play with the LUT IP](play)* [Stop HDMI Video](stop_hdmi)* [Conclusion](conclusion)---- Revision History* v1.0 | 30 March 2021 | First notebook revision.---- Download Composable Overlay Import the pynq video libraries as well as the ComposableOverlay class and the drivers for the IP.Download the Composable Overlay using the `ComposableOverlay` class and grab a handler to the `composable` hierarchy
###Code
from pynq.lib.video import *
from composable_pipeline import ComposableOverlay
from composable_pipeline.libs import *
ol = ComposableOverlay("../overlay/cv_dfx_4_pr.bit")
cpipe = ol.composable
###Output
_____no_output_____
###Markdown
Start HDMI Video Get `HDMIVideo` object and start video Warning:Failure to connect HDMI cables to a valid video source and screen may cause the notebook to hang
###Code
video = HDMIVideo(ol)
video.start()
###Output
_____no_output_____
###Markdown
Let us Compose First we need to grab handlers to the IP objects to simplify the notebook
###Code
video_in_in = cpipe.video.hdmi_in.color_convert
video_in_out = cpipe.video.hdmi_in.pixel_pack
lut = cpipe.video.composable.lut_accel
###Output
_____no_output_____
###Markdown
Let us read the documentation on the method `.compose`
###Code
cpipe.compose?
###Output
_____no_output_____
###Markdown
This method expects a list of IP objects; based on this list the pipeline will be configured on our FPGA. After you run the next cell, the video stream on your monitor should change.
###Code
video_pipeline = [video_in_in, lut, video_in_out]
cpipe.compose(video_pipeline)
###Output
_____no_output_____
###Markdown
Visualize the Pipeline We can visualize the implemented pipeline with the `.graph` attribute. This allows us to quickly verify the pipeline.
###Code
cpipe.graph
###Output
_____no_output_____
###Markdown
Play with the LUT IP The LUT is one of the IPs available in the static region of the composable overlay. This IP allows further runtime configuration with predefined kernels.
###Code
lut.kernel_list
###Output
_____no_output_____
###Markdown
The next cell will change the kernel type of the LUT IP every second; you should see the change in the output video.
###Code
import time
for i in lut.kernel_list:
lut.kernel_type = i
time.sleep(1)
###Output
_____no_output_____
###Markdown
Stop HDMI Video Finally stop the HDMI video pipeline Warning:Failure to stop the HDMI Video may hang the board when trying to download another bitstream onto the FPGA
###Code
video.stop()
###Output
_____no_output_____ |
app/facial-keypoint-detection/1. Load and Visualize Data.ipynb | ###Markdown
Facial Keypoint Detection This project will be all about defining and training a convolutional neural network to perform facial keypoint detection, and using computer vision techniques to transform images of faces. The first step in any challenge like this will be to load and visualize the data you'll be working with. Let's take a look at some examples of images and corresponding facial keypoints.Facial keypoints (also called facial landmarks) are the small magenta dots shown on each of the faces in the image above. In each training and test image, there is a single face and **68 keypoints, with coordinates (x, y), for that face**. These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. These keypoints are relevant for a variety of tasks, such as face filters, emotion recognition, pose recognition, and so on. Here they are, numbered, and you can see that specific ranges of points match different portions of the face.---
###Code
#!zip -r workspace.zip ../workspace/
###Output
adding: ../workspace/ (stored 0%)
adding: ../workspace/detector_architectures/ (stored 0%)
adding: ../workspace/detector_architectures/haarcascade_eye.xml (deflated 85%)
adding: ../workspace/detector_architectures/haarcascade_smile.xml (deflated 84%)
adding: ../workspace/detector_architectures/haarcascade_mcs_nose.xml (deflated 89%)
adding: ../workspace/detector_architectures/haarcascade_frontalface_default.xml (deflated 85%)
adding: ../workspace/5. Zip Your Project Files and Submit.ipynb (deflated 60%)
adding: ../workspace/workspace_utils.py (deflated 58%)
adding: ../workspace/.ipynb_checkpoints/ (stored 0%)
adding: ../workspace/.ipynb_checkpoints/1. Load and Visualize Data-zh-checkpoint.ipynb (deflated 70%)
adding: ../workspace/.ipynb_checkpoints/3. Facial Keypoint Detection, Complete Pipeline-zh-checkpoint.ipynb (deflated 60%)
adding: ../workspace/.ipynb_checkpoints/3. Facial Keypoint Detection, Complete Pipeline-checkpoint.ipynb (deflated 67%)
adding: ../workspace/.ipynb_checkpoints/4. Fun with Keypoints-zh-checkpoint.ipynb (deflated 61%)
adding: ../workspace/.ipynb_checkpoints/5. Zip Your Project Files and Submit-zh-checkpoint.ipynb (deflated 49%)
adding: ../workspace/.ipynb_checkpoints/5. Zip Your Project Files and Submit-checkpoint.ipynb (deflated 60%)
adding: ../workspace/.ipynb_checkpoints/2. Define the Network Architecture-checkpoint.ipynb (deflated 72%)
adding: ../workspace/.ipynb_checkpoints/1. Load and Visualize Data-checkpoint.ipynb (deflated 73%)
adding: ../workspace/.ipynb_checkpoints/4. Fun with Keypoints-checkpoint.ipynb (deflated 67%)
adding: ../workspace/.ipynb_checkpoints/2. Define the Network Architecture-zh-checkpoint.ipynb (deflated 68%)
adding: ../workspace/3. Facial Keypoint Detection, Complete Pipeline-zh.ipynb (deflated 60%)
adding: ../workspace/2. Define the Network Architecture-zh.ipynb (deflated 68%)
adding: ../workspace/4. Fun with Keypoints-zh.ipynb (deflated 61%)
adding: ../workspace/saved_models/ (stored 0%)
adding: ../workspace/3. Facial Keypoint Detection, Complete Pipeline.ipynb (deflated 67%)
adding: ../workspace/images/ (stored 0%)
adding: ../workspace/images/michelle_detected.png (deflated 2%)
adding: ../workspace/images/obamas.jpg (deflated 6%)
adding: ../workspace/images/key_pts_example.png (deflated 1%)
adding: ../workspace/images/landmarks_numbered.jpg (deflated 38%)
adding: ../workspace/images/haar_cascade_ex.png (deflated 1%)
adding: ../workspace/images/sunglasses.png (deflated 4%)
adding: ../workspace/images/moustache.png (deflated 7%)
adding: ../workspace/images/straw_hat.png (deflated 0%)
adding: ../workspace/images/mona_lisa.jpg (deflated 1%)
adding: ../workspace/images/feature_map_ex.png (deflated 2%)
adding: ../workspace/images/face_filter_ex.png (deflated 1%)
adding: ../workspace/images/the_beatles.jpg (deflated 0%)
adding: ../workspace/images/download_ex.png (deflated 11%)
adding: ../workspace/data_load.py (deflated 70%)
adding: ../workspace/filelist.txt (deflated 16%)
adding: ../workspace/1. Load and Visualize Data.ipynb (deflated 73%)
adding: ../workspace/__pycache__/ (stored 0%)
adding: ../workspace/__pycache__/workspace_utils.cpython-36.pyc (deflated 40%)
adding: ../workspace/1. Load and Visualize Data-zh.ipynb (deflated 70%)
adding: ../workspace/models.py (deflated 52%)
adding: ../workspace/2. Define the Network Architecture.ipynb (deflated 72%)
adding: ../workspace/5. Zip Your Project Files and Submit-zh.ipynb (deflated 49%)
adding: ../workspace/4. Fun with Keypoints.ipynb (deflated 67%)
###Markdown
Load and Visualize DataThe first step in working with any dataset is to become familiar with your data; you'll need to load in the images of faces and their keypoints and visualize them! This set of image data has been extracted from the [YouTube Faces Dataset](https://www.cs.tau.ac.il/~wolf/ytfaces/), which includes videos of people in YouTube videos. These videos have been fed through some processing steps and turned into sets of image frames containing one face and the associated keypoints. Training and Testing DataThis facial keypoints dataset consists of 5770 color images. All of these images are separated into either a training or a test set of data.* 3462 of these images are training images, for you to use as you create a model to predict keypoints.* 2308 are test images, which will be used to test the accuracy of your model.The information about the images and keypoints in this dataset is summarized in CSV files, which we can read in using `pandas`. Let's read the training CSV and get the annotations in an (N, 2) array where N is the number of keypoints and 2 is the dimension of the keypoint coordinates (x, y).--- First, before we do anything, we have to load in our image data. This data is stored in a zip file and in the below cell, we access it by its URL and unzip the data in a `/data/` directory that is separate from the workspace home directory.
###Code
# -- DO NOT CHANGE THIS CELL -- #
!mkdir /data
!wget -P /data/ https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5aea1b91_train-test-data/train-test-data.zip
!unzip -n /data/train-test-data.zip -d /data
# import the required libraries
import glob
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
###Output
_____no_output_____
###Markdown
Then, let's load in our training data and display some stats about that data to make sure it's been loaded in correctly!
###Code
key_pts_frame = pd.read_csv('/data/training_frames_keypoints.csv')
n = 0
# iloc Pandas method data.iloc[<row selection>, <column selection>]
# selects rows and columns by number,
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].as_matrix()
# Numpy method that gives a new shape to an array without
# changing its data.
key_pts = key_pts.astype('float').reshape(-1, 2)
print('Image name: ', image_name)
print('Landmarks shape: ', key_pts.shape)
print('First 4 key pts: {}'.format(key_pts[:4]))
# print out some stats about the data
print('Number of images: ', key_pts_frame.shape[0])
###Output
Number of images: 3462
###Markdown
Look at some imagesBelow is a function `show_keypoints` that takes in an image and keypoints and displays them. As you look at this data, **note that these images are not all of the same size**, and neither are the faces! To eventually train a neural network on these images, we'll need to standardize their shape.
###Code
def show_keypoints(image, key_pts):
print(key_pts)
"""Show image with keypoints"""
plt.imshow(image)
# changed marker to * for fun :P
plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='*', c='m')
# Display a few different types of images by changing the index n
# select an image by index in our data frame
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].as_matrix()
key_pts = key_pts.astype('float').reshape(-1, 2)
plt.figure(figsize=(5, 5))
show_keypoints(mpimg.imread(os.path.join('/data/training/', image_name)), key_pts)
plt.show()
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:7: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
import sys
###Markdown
Dataset class and TransformationsTo prepare our data for training, we'll be using PyTorch's Dataset class. Much of this code is a modified version of what can be found in the [PyTorch data loading tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html). Dataset class``torch.utils.data.Dataset`` is an abstract class representing a dataset. This class will allow us to load batches of image/keypoint data, and uniformly apply transformations to our data, such as rescaling and normalizing images for training a neural network.Your custom dataset should inherit ``Dataset`` and override the following methods:- ``__len__`` so that ``len(dataset)`` returns the size of the dataset.- ``__getitem__`` to support the indexing such that ``dataset[i]`` can be used to get the i-th sample of image/keypoint data.Let's create a dataset class for our face keypoints dataset. We will read the CSV file in ``__init__`` but leave the reading of images to ``__getitem__``. This is memory efficient because all the images are not stored in memory at once but read as required.A sample of our dataset will be a dictionary ``{'image': image, 'keypoints': key_pts}``. Our dataset will take an optional argument ``transform`` so that any required processing can be applied on the sample. We will see the usefulness of ``transform`` in the next section.
###Code
from torch.utils.data import Dataset, DataLoader
class FacialKeypointsDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.key_pts_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.key_pts_frame)
def __getitem__(self, idx):
image_name = os.path.join(self.root_dir,
self.key_pts_frame.iloc[idx, 0])
image = mpimg.imread(image_name)
# if image has an alpha color channel, get rid of it
if(image.shape[2] == 4):
image = image[:,:,0:3]
key_pts = self.key_pts_frame.iloc[idx, 1:].as_matrix()
key_pts = key_pts.astype('float').reshape(-1, 2)
sample = {'image': image, 'keypoints': key_pts}
if self.transform:
sample = self.transform(sample)
return sample
###Output
_____no_output_____
###Markdown
Now that we've defined this class, let's instantiate the dataset and display some images.
###Code
# Construct the dataset
face_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv',
root_dir='/data/training/')
# print some stats about the dataset
print('Length of dataset: ', len(face_dataset))
# Display a few of the images from the dataset
num_to_display = 3
for i in range(num_to_display):
# define the size of images
fig = plt.figure(figsize=(20,10))
# randomly select a sample
rand_i = np.random.randint(0, len(face_dataset))
sample = face_dataset[rand_i]
# print the shape of the image and keypoints
print(i, sample['image'].shape, sample['keypoints'].shape)
ax = plt.subplot(1, num_to_display, i + 1)
ax.set_title('Sample #{}'.format(i))
# Using the same display function, defined earlier
show_keypoints(sample['image'], sample['keypoints'])
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
###Markdown
TransformsNow, the images above are not of the same size, and neural networks often expect images that are standardized; a fixed size, with a normalized range for color ranges and coordinates, and (for PyTorch) converted from numpy lists and arrays to Tensors.Therefore, we will need to write some pre-processing code.Let's create four transforms:- ``Normalize``: to convert a color image to grayscale values with a range of [0,1] and normalize the keypoints to be in a range of about [-1, 1]- ``Rescale``: to rescale an image to a desired size.- ``RandomCrop``: to crop an image randomly.- ``ToTensor``: to convert numpy images to torch images.We will write them as callable classes instead of simple functions so that parameters of the transform need not be passed every time it's called. For this, we just need to implement the ``__call__`` method and (if we require parameters to be passed in), the ``__init__`` method. We can then use a transform like this: tx = Transform(params) transformed_sample = tx(sample)Observe below how these transforms are generally applied to both the image and its keypoints.
###Code
import torch
from torchvision import transforms, utils
# tranforms
class Normalize(object):
"""Convert a color image to grayscale and normalize the color range to [0,1]."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
image_copy = np.copy(image)
key_pts_copy = np.copy(key_pts)
# convert image to grayscale
image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# scale color range from [0, 255] to [0, 1]
image_copy= image_copy/255.0
# scale keypoints to be centered around 0 with a range of [-1, 1]
        # approximate mean = 100, std = 50, so pts should be (pts - 100)/50
key_pts_copy = (key_pts_copy - 100)/50.0
return {'image': image_copy, 'keypoints': key_pts_copy}
class Rescale(object):
"""Rescale the image in a sample to a given size.
Args:
output_size (tuple or int): Desired output size. If tuple, output is
matched to output_size. If int, smaller of image edges is matched
to output_size keeping aspect ratio the same.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = self.output_size * h / w, self.output_size
else:
new_h, new_w = self.output_size, self.output_size * w / h
else:
new_h, new_w = self.output_size
new_h, new_w = int(new_h), int(new_w)
img = cv2.resize(image, (new_w, new_h))
# scale the pts, too
key_pts = key_pts * [new_w / w, new_h / h]
return {'image': img, 'keypoints': key_pts}
class RandomCrop(object):
"""Crop randomly the image in a sample.
Args:
output_size (tuple or int): Desired output size. If int, square crop
is made.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
if isinstance(output_size, int):
self.output_size = (output_size, output_size)
else:
assert len(output_size) == 2
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
new_h, new_w = self.output_size
top = np.random.randint(0, h - new_h)
left = np.random.randint(0, w - new_w)
image = image[top: top + new_h,
left: left + new_w]
key_pts = key_pts - [left, top]
return {'image': image, 'keypoints': key_pts}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
# if image has no grayscale color channel, add one
if(len(image.shape) == 2):
# add that third color dim
image = image.reshape(image.shape[0], image.shape[1], 1)
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
image = image.transpose((2, 0, 1))
return {'image': torch.from_numpy(image),
'keypoints': torch.from_numpy(key_pts)}
###Output
_____no_output_____
###Markdown
Test out the transformsLet's test these transforms out to make sure they behave as expected. As you look at each transform, note that, in this case, **order does matter**. For example, you cannot crop an image to a size larger than the original image (and the original images vary in size!), but if you first rescale the original image, you can then crop it to any size smaller than the rescaled size.
###Code
# test out some of these transforms
rescale = Rescale(100)
crop = RandomCrop(50)
composed = transforms.Compose([Rescale(250),
RandomCrop(224)])
# apply the transforms to a sample image
test_num = 500
sample = face_dataset[test_num]
fig = plt.figure()
for i, tx in enumerate([rescale, crop, composed]):
transformed_sample = tx(sample)
ax = plt.subplot(1, 3, i + 1)
plt.tight_layout()
ax.set_title(type(tx).__name__)
show_keypoints(transformed_sample['image'], transformed_sample['keypoints'])
plt.show()
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
###Markdown
Create the transformed datasetApply the transforms in order to get grayscale images of the same shape. Verify that your transform works by printing out the shape of the resulting data (printing out a few examples should show you a consistent tensor size).
###Code
# define the data tranform
# order matters! i.e. rescaling should come before a smaller crop
data_transform = transforms.Compose([Rescale(250),
RandomCrop(224),
Normalize(),
ToTensor()])
# create the transformed dataset
transformed_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv',
root_dir='/data/training/',
transform=data_transform)
# print some stats about the transformed data
print('Number of images: ', len(transformed_dataset))
# make sure the sample tensors are the expected size
for i in range(5):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['keypoints'].size())
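# Optional next step (a minimal sketch): wrap the transformed dataset in the DataLoader
# imported earlier to get shuffled mini-batches; the batch size here is an illustrative choice.
train_loader = DataLoader(transformed_dataset, batch_size=10, shuffle=True, num_workers=0)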
###Output
_____no_output_____ |
pcaChemOptTrTestSpl2.ipynb | ###Markdown
As mentioned earlier, PCA performs best with a normalized feature set. We will use scikit-learn's StandardScaler to standardize our feature set. To do this, execute the following code:
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Applying PCAIt is only a matter of three lines of code to perform PCA using Python's Scikit-Learn library. The PCA class is used for this purpose. PCA depends only upon the feature set and not the label data. Therefore, PCA can be considered as an unsupervised machine learning technique.Performing PCA using Scikit-Learn is a two-step process: Initialize the PCA class by passing the number of components to the constructor. Call the fit and then transform methods by passing the feature set to these methods. The transform method returns the specified number of principal components.Take a look at the following code:
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
###Output
_____no_output_____
###Markdown
In the code above, we create a PCA object named pca. We did not specify the number of components in the constructor. Hence, all four of the features in the feature set will be returned for both the training and test sets.The PCA class contains explained_variance_ratio_ which returns the variance caused by each of the principal components. Execute the following line of code to find the "explained variance ratio".
###Code
explained_variance = pca.explained_variance_ratio_
###Output
_____no_output_____
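###Markdown
To inspect the individual variance ratios and their running total directly, a small sketch (numpy is imported here just in case it is not already available as `np`):
###Code
import numpy as np
print(explained_variance)
print(np.cumsum(explained_variance))
###Output
_____no_output_____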
###Markdown
The explained_variance variable is now a float type array which contains the variance ratio for each principal component. The values for the explained_variance variable look like this: 0.722265, 0.239748, 0.0333812, 0.0046056.It can be seen that the first principal component is responsible for 72.23% of the variance. Similarly, the second principal component causes 23.97% of the variance in the dataset. Collectively, the first two principal components capture roughly 96.2% (72.23 + 23.97) of the classification information contained in the feature set.Let's first try to use 1 principal component to train our algorithm. To do so, execute the following code:
###Code
from sklearn.decomposition import PCA
pca = PCA(n_components=1)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
###Output
_____no_output_____
###Markdown
The rest of the process is straightforward.Training and Making PredictionsIn this case we'll use random forest classification for making the predictions.
###Code
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(max_depth=2, random_state=0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Performance Evaluation
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
print('Accuracy: ' + str(accuracy_score(y_test, y_pred)))
y_pred
y_test
###Output
_____no_output_____
###Markdown
It can be seen from the output that with only one feature, the random forest algorithm is able to correctly predict 28 out of 30 instances, resulting in 93.33% accuracy.Results with 2 and 3 Principal ComponentsNow let's try to evaluate classification performance of the random forest algorithm with 2 principal components. Update this piece of code:
###Code
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
###Output
_____no_output_____ |
notebooks/arrays.ipynb | ###Markdown
Arrays, Dataframes, PlottingThe aim of this tutorial is to give some of the basics needed to get started with machine learning (ML) in python, namely how to work with arrays, dataframes, and plotting using the relevant python libraries. It is generally assumed that you're familiar with the basics of python if you're going through this.When doing ML, the arrays we usually work with are large multidimensional arrays containing mostly numerical data. It turns out that the base python way of dealing with arrays, i.e. the `list`, is inefficient for manipulating large numerical arrays. To get around this, we use the much faster and more complete _Numpy_ package. Numpy is specifically built for doing numerical array processing in python. It is essentially a wrapper written on top of fast _C_ libraries that were written years ago for doing linear algebra. While there is some overhead in translating the python code over to C to perform these operations, it's usually worth it. Moreover, numpy gives access to all kinds of neat functions for doing operations on numerical arrays, including a whole suite of linear algebra routines. Using lists, not only would your code be slower, but you'd have to tediously do a lot of these numerical operations yourself. Numpy BasicsThe usual convention for using numpy is to first call `import numpy as np`. To use numpy functions, we need to prefix each function call with `np.`. You'll see more examples of this below. Let's start by importing numpy, and creating an empty numpy array that I'll call `A`. You can see that the `np.array` function is what we call to create a numpy array. The easiest way is to pass in a list containing whatever you want in the array. Note numpy arrays follow the same row, column, and indexing conventions as python lists.
###Code
import numpy as np
A = np.array([])
print(f'Elements of A: {A}')
###Output
Elements of A: []
###Markdown
Like python lists, we can interact with an array by calling methods on it. One example might be if we wanted to know what the shape of a given array is. To get this on our new array `A`, we call `A.shape`, which prints a tuple of the form `(n_rows, n_columns, n_depth, ...)` where the ith element of the tuple gives the number of elements in the ith dimension of the array. Since `A` is an empty array it has no elements along any dimension, so we just get `(0,)` by convention.To give a better example, I create a 2-dimensional array `X` again and print its shape, which prints as `(3,2)`, meaning there are 3 rows and 2 columns in `X`, which you can clearly see printed out.Note: The base python `len` function works on numpy arrays as well. However, it only returns the length of the first dimension of the array, i.e. the number of rows.
###Code
print(f'Shape of A: {A.shape}')
print()
X = np.array([[1, 2], [3, 4], [5, 6]])
print(f'Elements of X:\n {X}')
print()
print(f'Shape of X: {X.shape}')
print(f'Length of X: {len(X)}')
###Output
Shape of A: (0,)
Elements of X:
[[1 2]
[3 4]
[5 6]]
Shape of X: (3, 2)
Length of X: 3
###Markdown
We can index into an array similarly to lists. Numpy, like python, is also zero-indexed, so the first element of 1-D array `x` would be `x[0]` and the last would be `x[-1]` or `x[len(x) - 1]`.For multi-dimensional arrays like matrices or tensors or whatever, the index convention is slightly different from lists.- For a 1D array `x`, to index into element `i`, call `x[i]`.- For a 2D array `A`, to index the `(i,j)` element, call `A[i,j]`.- For a 3D array `T`, to index into the `(i,j,k)` element, call `T[i,j,k]`.
###Code
A = np.array([[1, 2], [3, 4]])
print(f'A = \n{A}')
print()
n_rows, n_cols = A.shape
for i in range(n_rows):
for j in range(n_cols):
print(f'A[{i},{j}] = {A[i,j]}')
###Output
A =
[[1 2]
[3 4]]
A[0,0] = 1
A[0,1] = 2
A[1,0] = 3
A[1,1] = 4
###Markdown
It's often very useful to be able to create simple, large arrays instantly. Here are 4 common ways one might want to initialize a new array:- `np.zeros(size)`: Initialize an array with zeros and fill them in later.- `np.ones(size)`: Initialize an array with a non-zero value, usually 1.- `np.eye(n_rows)`: Initialize a square 2D identity array, with diagonal elements 1 and off-diagonal elements 0.- `np.random.rand(d0, d1, ...)`: Initialize an array with random values between 0 and 1, passing each dimension as a separate argument.Pretty simple to do. Here are some examples.
###Code
print(f'length 10 vector of zeros: \n {np.zeros((10,))}')
print()
print(f'size (5, 5, 5) array of 5s: \n {5 * np.ones((5, 5, 5))}')
print()
print(f'size (3,3) identity matrix: \n {np.eye(3)}')
print()
print(f'size (2, 3) random matrix: \n {np.random.rand(2, 3)}')
###Output
length 10 vector of zeros:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
size (5, 5, 5) array of 5s:
[[[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]]
[[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]]
[[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]]
[[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]]
[[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]
[5. 5. 5. 5. 5.]]]
size (3,3) identity matrix:
[[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]]
size (2, 3) random matrix:
[[0.65133474 0.03270469 0.04785048]
[0.43401002 0.02818242 0.64025992]]
###Markdown
Array Algebra By default, numpy operations are **element-wise**. For example, if I add the matrices (i.e. 2D arrays) `A` and `B` together as `A + B`, the operation is performed element-by-element: `(A + B)[i,j] = A[i,j] + B[i,j]`.While this agrees with what you learn in math class, this is *not* true for other operations. In particular, numpy multiplication is element-by-element, not matrix multiplication!`(A * B)[i,j] = A[i,j] * B[i,j]`To get standard math matrix multiplication in numpy you can either use `np.matmul(A, B)`, `np.dot(A, B)`, or the shortcut `@` operator as `A @ B`. I'll use the `@` operator mostly.`(A @ B)[i,j] = A[i,0] * B[0,j] + A[i,1] * B[1,j] + ... + A[i,m] * B[m,j]`Similarly, division is *not* matrix inversion, but element-by-element division:`(A / B)[i,j] = A[i,j] / B[i,j]`And exponentiation is also element-by-element:`(A ** 2)[i,j] = A[i,j] ** 2`.Below I show some examples of this on two matrices `A` and `B`.
###Code
A = np.array([[1, 1], [1, 1]])
B = np.array([[2, 2], [2, 2]])
print(f'A = \n{A}')
print(f'B = \n{B}')
print(f'A + B = \n{A + B}')
print(f'A - B = \n{A - B}')
print(f'A * B = \n{A * B}')
print(f'A / B = \n{A / B}')
print(f'A**2 = \n{A ** 2}')
###Output
A =
[[1 1]
[1 1]]
B =
[[2 2]
[2 2]]
A + B =
[[3 3]
[3 3]]
A - B =
[[-1 -1]
[-1 -1]]
A * B =
[[2 2]
[2 2]]
A / B =
[[0.5 0.5]
[0.5 0.5]]
A**2 =
[[1 1]
[1 1]]
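###Markdown
For contrast with the element-wise `*` above, here is a minimal sketch of true matrix multiplication using the `@` operator described earlier, on the same `A` and `B`.
###Code
# matrix product: (A @ B)[i,j] = sum over k of A[i,k] * B[k,j]
print(f'A @ B = \n{A @ B}')
###Output
_____no_output_____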
###Markdown
BroadcastingOne very useful feature that numpy supports is array *broadcasting*. Broadcasting is a set of conventions for doing operations on arrays of different sizes.Here's an example. Suppose you have 1D array `x` and would like to add `5` to each element of `x`. Rather than do something tedious like this,```for i in range(len(x)): x[i] = x[i] + 5```we can simply agree that the expression `x + 5` means exactly that. In math class you probably learned that you can't add 2 vectors or arrays of different sizes, but obviously you can so long as we can agree on a consistent convention for doing so, which is what broadcasting does. The same convention holds for any arithmetic operator. Here are some examples of broadcasting a scalar with `x`.
###Code
x = np.array([1, 2, 3, 4, 5])
print(f'x = {x}')
print(f'x + 5 = {x + 5}') # returns each element of x plus 5
print(f'x - 5 = {x - 5}') # returns each element of x minus 5
print(f'5 * x = {5 * x}') # returns 5 times element of x
print(f'x / 5 = {x / 5}') # returns 1/5 of each element of x
print(f'1 / x = {1 / x}') # returns reciprocal of each element of x
###Output
x = [1 2 3 4 5]
x + 5 = [ 6 7 8 9 10]
x - 5 = [-4 -3 -2 -1 0]
5 * x = [ 5 10 15 20 25]
x / 5 = [0.2 0.4 0.6 0.8 1. ]
1 / x = [1. 0.5 0.33333333 0.25 0.2 ]
###Markdown
Broadcasting also works just as well with higher dimensional arrays like matrices. For example, adding `1` to each element of a size `(2,2)` array `A` can be done with `A + 1` or even `A += 1`. The new wrinkle with higher dimensional arrays though is that we can also broadcast arrays other than scalars too. For example, we can add a matrix and a vector. Suppose we wanted to add `1` to all elements in the first row of `A` and `2` to all elements in the second row. We could do that by creating a column vector `x`, i.e. a shape `(len(x), 1)` array, and then calling `A + x`. Note that broadcasting rules require that `x` be a column vector for this operation to do what we just specified. Row vectors `(1, len(x))` or "flattened" vectors `(len(x),)` do something different. They add these elements to the *columns* instead of the *rows*, so do be careful!We can see an example of this below. I create a column vector `x = [[1], [2]]` and perform `A + x` to verify that it indeed adds elements of `x` to the *rows* of `A`. I then use a quick trick to get the row vector of `x`, which is taking its *transpose* `x.T = [[1, 2]]`, and confirm that `A + x.T` adds `x` to the *columns* instead.Broadcasting can be quite complicated as you operate on arrays of arbitrary different sizes. You can see the rules and more examples in the numpy documentation [here](https://numpy.org/doc/stable/user/basics.broadcasting.html).
###Code
A = np.array([[0, 0], [0, 0]])
x = np.array([1, 2]).reshape(-1, 1)
print(f'A = \n{A}')
print(f'x = \n{x}')
print(f'A + x = \n{A + x}')
print(f'x.T = \n{x.T}')
print(f'A + x.T = \n{A + x.T}')
###Output
A =
[[0 0]
[0 0]]
x =
[[1]
[2]]
A + x =
[[1 1]
[2 2]]
x.T =
[[1 2]]
A + x.T =
[[1 2]
[1 2]]
###Markdown
Useful Array OperationsBelow is a list of some of the most useful operations one might do on arrays in practice. It's by no means exhaustive. I define a vector `x` and perform these operations on it, and allow Jupyter to show you the outputs. Note that, where ambiguous, each function called that returns an array is performed element-wise.
###Code
# this piece is useful to automatically print the output of every line without calling print
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
x = np.array([1, 2, 3, 4, 5])
###Output
_____no_output_____
###Markdown
**Statistical Functions**
###Code
np.mean(x) # mean of x
np.std(x) # std deviation of x
np.median(x) # median of x
np.min(x) # min of x
np.max(x) # max of x
np.percentile(x, 50) # 50th percentile of x
###Output
_____no_output_____
###Markdown
**Mathematical Functions**
###Code
np.exp(x) # exponential function
np.log(x) # natural log
np.log10(x) # base-10 log
np.log2(x) # base-2 log
np.sin(x) # similar for other trig and hyperbolic trig functions
###Output
_____no_output_____
###Markdown
**Other Useful Functions**
###Code
np.isnan(x) # generate a binary mask that finds NaNs
np.isinf(x) # generate a binary mask that finds infinities
np.allclose(x, 0) # returns boolean indicating if all elements of x are close to 0
x.reshape(-1, 1) # reshape x to be a column vector of size (len(x), 1)
x.reshape(1, -1) # reshape x to be a row vector of size (1, len(x))
x.flatten() # reshape x to be a flattened vector of size (len(x),)
np.transpose(x.reshape(1, -1)) # transpose row vector of x (i.e. convert to a column vector)
###Output
_____no_output_____
###Markdown
**Random Numbers and Seeds**Random numbers come up literally all over the place when doing ML. We can generate (uniform) random numbers from 0 to 1 in python using the `np.random.rand()` function. This (pseudo) random function will generate numbers that appear to be uncorrelated to each other. Random numbers are often useful for initializing parameters for models before training. Here's an example that generates a length 10 vector of random numbers using this function.
###Code
np.random.rand(10)
###Output
_____no_output_____
###Markdown
When working with random numbers (or doing any ML or statistics programming) it's useful to set a *seed*. A seed is a way of ensuring that random code will generate consistent outputs, so that you can go back and verify that something is working correctly or reproduce somebody else's results (e.g. somebody's shiny new ML model in a paper). If you just always remember to set a seed at the top of your notebooks and scripts you won't have to worry about this. Rarely do you need to worry about changing how you call the seed (but when you do it can be annoying). Numpy will set the seed based on a number you pass to `np.random.seed`. Each number you pass in will generate a different seed, which will be used to generate its own unique ordered sequence of pseudorandom numbers as you call them later on. Usually you just pick a number for that script and stick with it. Here's how you set the seed in numpy using an initial value of `123`. Note that seeds won't return values. They just alter the random state on the back end.
###Code
np.random.seed(123)
###Output
_____no_output_____
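###Markdown
A quick way to convince yourself the seed works: re-seeding with the same value reproduces exactly the same draws. A minimal sketch:
###Code
np.random.seed(123)
first = np.random.rand(3)
np.random.seed(123)
second = np.random.rand(3)
print(np.allclose(first, second))  # True: identical draws after re-seeding with the same value
###Output
_____no_output_____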
###Markdown
**Linear Algebra**As numpy was in large part written to port much of the optimized linear algebra routines from C over to python, it's useful to at least mention how you might go about doing some common linear algebra operations in python. Most of the linear algebra specific functions in numpy are in the `np.linalg` submodule.I do these operations on a random matrix `A` of size (3,3). Note many of these operations will only work on square matrices (i.e. 2D arrays with `n_rows = n_cols`).
###Code
A = np.random.rand(3, 3)
print(A)
np.linalg.det(A) # determinant of A
np.linalg.norm(A) # norm of A
np.linalg.eigvals(A) # eigenvalues of A
np.linalg.cond(A) # condition number of A
np.dot(A[:, 0], A[:, 1]) # dot product of first 2 column vectors of A
A.T # transpose of A
np.linalg.inv(A) # inverse of A
np.linalg.eig(A) # eigendecomposition of A = P D P^-1 returning (diag(D), P)
np.linalg.svd(A) # singular value decomposition of A = U S V^T returning (U, diag(S), V)
A = np.random.rand(3, 3)
b = np.random.rand(3)
np.linalg.solve(A, b) # solve Ax = b for x
###Output
_____no_output_____
###Markdown
Since linear regression is basically linear algebra, just for fun, here's an example of solving exactly for the learned parameters `beta` of a linear fit of a data matrix `X` with regression outputs `y`. Note that the optimal learned parameters are given exactly by solving $X^T X \beta = X^T y$, i.e. `X.T @ X @ beta = X.T @ y`.
###Code
n_samples = 100
n_features = 10
X = np.random.rand(n_samples, n_features + 1) # initialize data matrix X
X[:, 0] = np.ones(n_samples) # set first column to 1s (for the bias term)
y = np.random.normal(size=n_samples) # initialize target vector y with std gaussian noise
beta = np.random.rand(n_features + 1) # randomly initialize parameters to be learned (bias + weights)
beta = np.linalg.solve(X.T @ X, X.T @ y) # solve (X^T X) beta = X^T y
print(f'bias = {beta[0]}')
print(f'weights = \n{beta[1:]}')
###Output
bias = -0.414327350801323
weights =
[ 0.16820157 -0.04016259 -0.02762474 0.00394075 0.23982223 0.05215829
-0.34087291 0.80669608 0.52804099 -0.50217788]
###Markdown
**Operations on Multidimensional Arrays**When arrays are multidimensional, many of these numpy functions can be called along particular dimensions of the array (called an "axis" in numpy). For example, suppose you have a 2D array `X`, where each row of `X` represents some data point and each column of `X` represents some input variable (i.e. feature) in the data. You may want to normalize your data to do ML, but need to treat each column differently because each represents different things. One way to do this is to take each column, subtract off its mean, and divide by its standard deviation:`X_normalized[:, j] = (X[:, j] - np.mean(X[:, j])) / np.std(X[:, j])`But you only want the mean and standard deviation of *that particular column*, not the mean and standard deviation of *all values in the dataset*. Rather than doing something tedious like iterating through all the columns of `X` and doing this one by one on column vectors, numpy allows us to simultaneously take these column statistics across all columns at once by passing in the `axis=0` command to `np.mean` and `np.std`. Passing `axis=0` means "do these operations along the rows, but for each column separately". Passing instead `axis=1` would mean the opposite: "do these operations along the columns, but for each row separately".Here's an example containing 5 examples each with 3 features. We can normalize this whole matrix in one line of code. Notice that the values in the normalized matrix are much closer to each other, and consistently sized across columns. That's the point, as it makes many ML models easier to train.
###Code
X = np.array([[1, 10, 100], [0, 0, 0], [1, 10, 100], [0, 0, 0], [1, 10, 100]])
X_normalized = (X - np.mean(X, axis=0)) / np.std(X, axis=0)
print(f'X = \n{X}')
print()
print(f'X_normalized = \n{X_normalized}')
###Output
X =
[[ 1 10 100]
[ 0 0 0]
[ 1 10 100]
[ 0 0 0]
[ 1 10 100]]
X_normalized =
[[ 0.81649658 0.81649658 0.81649658]
[-1.22474487 -1.22474487 -1.22474487]
[ 0.81649658 0.81649658 0.81649658]
[-1.22474487 -1.22474487 -1.22474487]
[ 0.81649658 0.81649658 0.81649658]]
###Markdown
Pandas BasicsWe now turn to dataframes, which can be thought of as arrays with tags attached to them. While numpy arrays are meant for doing stuff like statistics and linear algebra, dataframes are meant for inspecting and cleaning data. In ML, often we might start by loading our data from a file into a dataframe, clean and transform our data, and then convert it to a numpy array for doing the actual machine learning.In python, the standard dataframe library is Pandas. Pandas is a very powerful library capable of doing everything from loading data, cleaning data, visualizing data, and exporting or saving data to new files. The conventional way to load pandas in python is by calling `import pandas as pd`. This means any pandas function or data structure will be prefixed by `pd.`.While it is possible (and often taught first) how to create a dataframe from scratch in pandas, e.g. by passing in a dictionary, I will instead initialize a dataframe by loading in data from a csv file, which in my experience is the most common way data scientists actually get data into a dataframe (along with importing from SQL or an Excel spreadsheet). I will load the Titanic dataset from this link (pandas lets you load data directly from a URL just as easily as from a local file): https://gist.githubusercontent.com/michhar/2dfd2de0d4f8727f873422c5d959fff5/raw/fa71405126017e6a37bea592440b4bee94bf7b9e/titanic.csvI save the URL as a string `url`, then call the `pd.read_csv` function passing in that string. This will cause pandas to fetch that data and load it into a dataframe that by convention is called `df`. To show everything worked fine, I'll print out the top 5 data examples using `df.head()`, and then inspect the data using `df.info()`. Looks like everything loaded fine.
###Code
import pandas as pd
url = 'https://gist.githubusercontent.com/michhar/2dfd2de0d4f8727f873422c5d959fff5/raw/fa71405126017e6a37bea592440b4bee94bf7b9e/titanic.csv'
df = pd.read_csv(url)
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 891 non-null int64
1 Survived 891 non-null int64
2 Pclass 891 non-null int64
3 Name 891 non-null object
4 Sex 891 non-null object
5 Age 714 non-null float64
6 SibSp 891 non-null int64
7 Parch 891 non-null int64
8 Ticket 891 non-null object
9 Fare 891 non-null float64
10 Cabin 204 non-null object
11 Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB
###Markdown
Since this tutorial isn't focused on data cleaning, I'm going to ignore how messy this data is for now and just focus on some of the essential pandas operations you can do on this dataframe.First, observe the `df.head()` output. We can clearly see that each example of data is a row (in this case a passenger aboard the Titanic), and each column is some sort of attribute about that person (ID, name, age, cost of fare, whether they survived the wreck, etc). Also observe the left-hand bolded column. This is the *index*. By default, when you load a new dataframe, pandas will give it a new index starting from 0. The index is how you manipulate the rows of the dataframe. You're free to reset the index if you wish, or even set another column (e.g. the ID column) as the index.Second, observe the `df.info()` printout. We can see that the dataframe contains 12 columns of data, each with 891 examples (possibly with missing values in some columns). It looks like columns 0, 1, 2, 6, 7 are integer-valued data (they appear to be categorical data labeled with integer values); columns 3, 4, 8, 10, 11 are objects (usually meaning they're encoded by pandas as strings); and columns 5, 9 are float-valued data (numerical values like age and cost of ticket). We can see that the dataframe takes up almost 84 kB of RAM, which isn't bad. We can also see that columns 5, 10, 11 contain some missing values which will need to be dealt with in the data cleaning process.Moving on, one useful operation we might want to do with a dataframe is to inspect the dataframe for missing values, and possibly drop the rows with any missing values. We can do this using `df.dropna()`. This will literally throw out examples with *any* missing values in them at all (which is often more than you'd want to do in real data cleaning since you're throwing away data from the other columns). Notice that dropping missing values cuts the dataframe size down to only 183 examples. Also, note that like most pandas operations, this doesn't alter the dataframe in place. So if you want the new dataframe you'd need to save it using `df = df.dropna()` or something similar.If we don't want to drop missing values (which we rarely do), we can instead choose to impute them with some other value. For example, we can set all missing values to 0. We can achieve this with `df.fillna(0)`. This will keep all the rows, but fill every single missing value in the dataframe with 0.
###Code
df.isna()
df.dropna()
df.fillna(0)
###Output
_____no_output_____
###Markdown
Similar to missing values, we might also be interested in looking for and perhaps dropping duplicate values. We can do this by calling `df.drop_duplicates()`. By default, this will drop rows where every single entry is identical. Evidently this data contains no duplicate rows.
###Code
df.drop_duplicates()
###Output
_____no_output_____
###Markdown
We can index into a particular column of the dataframe by passing in the name of the column. Suppose we just want the names of people on the Titanic. Then we could call `df['Name']` to get a one-column dataframe (technically called a series in pandas) with just those names.We can also fetch multiple columns into a new dataframe. Suppose we want to keep name, sex, and age but drop everything else. We can do this by passing in a list of column names that we want to keep, namely `df[['Name', 'Sex', 'Age']]`. We can also explicitly drop the columns we don't want by calling `df.drop(columns=['PassengerId', 'Survived', 'Pclass'])` etc.
###Code
df['Name']
df[['Name', 'Sex', 'Age']]
df.drop(columns=['PassengerId', 'Survived', 'Pclass', 'SibSp', 'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked'])
###Output
_____no_output_____
###Markdown
Using column indexing, we can start cleaning up the data. Suppose we wanted age to be cast as an integer rather than a float. We can do that by indexing into that column, changing the datatype to an integer using `df['Age'].astype(int)`, and resaving this new dataframe to `df`. Since some people evidently have missing ages, we first need to impute those with a numerical value. I'll impute them with 0s. Inspecting the new dataframe we can now see that the ages are integer-valued.
###Code
ages = df['Age']
ages = ages.fillna(0)
ages = ages.astype(int)
df['Age'] = ages
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 891 non-null int64
1 Survived 891 non-null int64
2 Pclass 891 non-null int64
3 Name 891 non-null object
4 Sex 891 non-null object
5 Age 891 non-null int64
6 SibSp 891 non-null int64
7 Parch 891 non-null int64
8 Ticket 891 non-null object
9 Fare 891 non-null float64
10 Cabin 204 non-null object
11 Embarked 889 non-null object
dtypes: float64(1), int64(6), object(5)
memory usage: 83.7+ KB
###Markdown
For some columns we may want to replace what appear to be boolean values with 0 and 1 or similar. Similarly with other categorical values, where we may want to replace strings with integers representing each category.Let's look at the sex column. We can see that the values shown are `male` or `female`. For doing data science work we probably want those to be integers, and given that there are only 2 values it's a natural choice to map them to 0 and 1, say `male=0` and `female=1`. We can do this by calling `df.replace` and mapping each string to its respective integer value.
###Code
df = df.replace(to_replace='male', value=0)
df = df.replace(to_replace='female', value=1)
df.head()
###Output
_____no_output_____
###Markdown
Pandas can also do smart indexing. Suppose we want to look at the subset of the data where passengers did not survive. We can do this by indexing the dataframe with the boolean `df['Survived'] == 0`. Just about any boolean indexing you can think of you can do with dataframes, making it very efficient to slice, subset, and manipulate them once you get used to it. To get the data with only the non-survivors, we just pass this mask into the dataframe itself. Doing so, we can see that 549 of the original 891 people did not survive.
###Code
df['Survived'] == 0
df[df['Survived'] == 0]
###Output
_____no_output_____
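###Markdown
Boolean masks can also be combined with `&` (and) and `|` (or), wrapping each condition in parentheses. As a sketch, the female passengers who did not survive (recall that Sex was encoded above as male=0, female=1):
###Code
# Sketch: combine two boolean conditions; note the parentheses around each one.
df[(df['Survived'] == 0) & (df['Sex'] == 1)]
###Output
_____no_output_____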
###Markdown
It's useful to note that pandas supports a lot of the array arithmetic operations that numpy does. Suppose for example you wanted to add `5` to the passenger ID column. You can do that the same way you would in numpy, where you treat `df['PassengerId']` as a vector and add `5` to it via broadcasting.
###Code
df['PassengerId'] + 5
###Output
_____no_output_____
###Markdown
The statistical operations from numpy often work too (mean, std, max, min, etc). The difference is that in pandas these are methods rather than functions (note you can also call them as array methods in numpy too, I just didn't want to confuse you above by doing both). If we wanted to normalize the age column of the dataframe, similar to numpy, we might do something like this:
###Code
age = df['Age']
age.head()
age_normalized = (age - age.mean()) / age.std()
age_normalized.head()
###Output
_____no_output_____
###Markdown
Once you're done inspecting and cleaning your data, you can either convert the data you want over to an array to start doing ML or analytics work, or save the cleaned output to a file, e.g. a csv. To convert a dataframe to a numpy array, we just call `df.to_numpy()`. To save the dataframe to a new csv file we just call `df.to_csv(file_name)`.In the example below, I convert the first 3 columns in the dataframe to a numpy array and verify that it's a (891, 3) array as intended. You can see the first 10 rows of this array below to verify.
###Code
X = df[['PassengerId', 'Survived', 'Pclass']].to_numpy()
X.shape
X[:10, :]
###Output
_____no_output_____
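###Markdown
The text above also mentions `df.to_csv(file_name)` without showing it; a minimal sketch of saving the cleaned dataframe (the file name is just an illustrative choice):
###Code
# Sketch: write the cleaned dataframe to disk; index=False drops the row index column.
df.to_csv('titanic_clean.csv', index=False)
###Output
_____no_output_____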
###Markdown
Matplotlib BasicsIt's very important when doing data science to look at your data. That sometimes means inspecting the values in the dataframes and arrays, but it most often means making plots of your data to get an idea what's going on and what might be wrong. The standard python library for plotting is Matplotlib. I won't go into great depth here on how to make pretty plots. Rather, I'll just show how to throw together a few simple plots.The most common part of matplotlib people use is the pyplot submodule that does the actual plotting. It's conventional to import this via `import matplotlib.pyplot as plt` or `from matplotlib import pyplot as plt`. Thus, all plotting functions are prefixed with `plt.`.Let's start by loading matplotlib and generating some data to plot. To keep the plots simple, I'll focus on 1D and 2D arrays. Note that matplotlib supports both numpy arrays and pandas dataframes as inputs, so long as the values are numerical.To do some simple plotting like you might do in a math class, I'll create a grid of inputs, pass them into a function, and plot the inputs vs the outputs. Generating a grid of points in numpy is done using `np.linspace(start_point, stop_point, n_points)`. Below, I generate an input grid `x` with 100 points from 0 to 5. I then generate my outputs `y` on this grid, and plot them using `plt.plot(x, y)`. Plotted we can see a graph of x vs y from the given range of 0 to 5.
###Code
import matplotlib.pyplot as plt
x = np.linspace(0, 5, 100)
y = np.sin(10 * np.pi * x) * np.exp(-x)
plt.plot(x, y);
###Output
_____no_output_____
###Markdown
If we wish, we can decorate the above plot by giving it a title, x axis label, and y axis label. We might also choose to overlay the decaying exponential envelope of the damped sinusoid above. Notice that we can pass (a limited subset of) latex into plot titles to nicely render math, which is sometimes useful. Also notice that when overlaying multiple plots on top of each other, we need to end the plot by calling `plt.show()`, which tells matplotlib to stop and render everything above it as one single plot.
###Code
z = np.exp(-x)
plt.plot(x, y)
plt.plot(x, z)
plt.plot(x, -z)
plt.xlabel('x')
plt.ylabel('y')
plt.title(r'$y = e^{-x}\sin(10\pi x)$')
plt.show();
###Output
_____no_output_____
###Markdown
Let's now look at a couple of other useful plots a data scientist might frequently use. One is the scatter plot, which is probably the most useful way of visualizing real-world data inputs vs outputs (if the outputs are continuous). We can create a scatter plot simply by using `plt.scatter`. Here's an example with some fake data, where `x` is generated using `rand` and `y` is just a linear function of `x` with some small noise added to it to look real-ish. We can see from the plot that `x` and `y` have what appears to be a linear relationship with positive slope, suggesting that something like linear regression might work well for the problem of predicting `y` given `x`.
###Code
x = np.random.rand(50)
y = x + 0.3 * np.random.rand(50)
plt.scatter(x, y);
###Output
_____no_output_____
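###Markdown
To make the "linear relationship" observation concrete, one could fit a degree-1 polynomial to the same fake data and overlay it on the scatter. A quick sketch using `np.polyfit`:
###Code
# Sketch: least-squares line fit on the synthetic data above, overlaid on the scatter.
slope, intercept = np.polyfit(x, y, 1)
plt.scatter(x, y)
plt.plot(x, slope * x + intercept, color='red')
plt.show()
###Output
_____no_output_____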
###Markdown
If the input data is continuous but the outputs are discrete labels, instead of a scatter plot, we might wish to make a histogram showing how often each label appears in the dataset. Here we do that using `plt.hist`. I generate `x` the same as in the above cell, but `y` by randomly selecting labels from the set `['a', 'b', 'c']`. You can see that the label `b` shows up the most, with `c` second and `a` last. If you look at the weights I passed into `p=` below, you can see how it ended up that way.
###Code
x = np.random.rand(50)
y = np.random.choice(['a', 'b', 'c'], size=len(x), p=[0.2, 0.5, 0.3])
plt.hist(y);
###Output
_____no_output_____
###Markdown
As a final example for now, let's consider the boxplot, which is often used to plot continuous sets of data from different distributions against each other to see how they vary. We can do this in matplotlib using `plt.boxplot`. I create the data by sampling 3 different vectors from gaussians of differing means and variances, join them together into a single array `X`, and then call `plt.boxplot(X)`. From the plot, you can see that `x2` has both the highest median value (the orange lines are the medians) and the largest spread (the lengths of the boxes are the spreads, technically the "interquartile range"). The points outside of the "whiskers" are the outliers from each distribution.
###Code
x1 = np.random.normal(0, 1, size=20)
x2 = np.random.normal(0.5, 1.5, size=20)
x3 = np.random.normal(-0.5, 0.5, size=20)
X = np.array([x1, x2, x3]).T
plt.boxplot(X);
###Output
_____no_output_____ |
Clases/Semana5_MATPLOTLIB/matplotlib_pandas_apuntes.ipynb | ###Markdown
Matplotlib is a very powerful plotting library. Let's first look at how a plot is structured in general and the vocabulary associated with its elements. More details on the basics of matplotlib can be found in the [usage guide](https://matplotlib.org/tutorials/introductory/usage.htmlsphx-glr-tutorials-introductory-usage-py). The [tutorials](https://matplotlib.org/tutorials/index.html) and [examples](https://matplotlib.org/gallery/index.html) pages are also worth a look. Lines. Let's start with a simple example. We want to take a look at the time series of temperature data in a simple line plot:
###Code
import pandas as pd
import matplotlib.pyplot as plt
# Download the data (raw format) from pastebin
_ = !wget -O frankfurt_weather.csv https://pastebin.com/raw/GnLS1WR8
# Read the data from the file
data = pd.read_csv('frankfurt_weather.csv', parse_dates=['time'], index_col='time', sep=',', na_values='')
data.head()
###Output
_____no_output_____
###Markdown
This gives us a nice little line plot with the temperature time series drawn as a blue line. However, the plot is a bit small, the axes are not labeled and there is no legend. Let's fix that ...
###Code
plt.rcParams['font.size'] = 18
plt.figure(figsize=(20,5))
plt.plot(data.air_temperature, label='Air temperature at Frankfurt Int. Airport in 2015')
plt.xlim(('2015-01-01', '2015-12-31'))
# The 15th of each month is relabeled with the month name
plt.xticks(["2015-{:02d}-15".format(x) for x in range(1,13,1)], ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"])
plt.legend()
plt.ylabel("Temperature (°C)")
plt.show()
###Output
_____no_output_____
###Markdown
This looks better. We can see that we don't have to worry about the figure and axes details of the plot, since pyplot takes care of that. Every time we call a pyplot function, it simply reuses the axes created by the first plot call. However, let's say we are not interested in the daily temperature fluctuations. Also, although building the xtick labels manually works fine, it is a bit annoying, especially because the index of the data dataframe (which is used for the x axis) already contains all the information we need and only its format is wrong. Fortunately, matplotlib has some convenience functions available in its dates submodule that we can use if we switch to the "manual axes" approach. So let's resample the data to daily means, format the dates and plot them again.
###Code
import matplotlib.dates as mdates  # we need this to format the dates properly
temp_resampled = data.air_temperature.resample('1d').mean()
# Assign the figure to a variable so we can manipulate it later
fig = plt.figure(figsize=(20,5))
# Add new axes to the figure manually.
# The argument of add_subplot is used to position the plot:
# the first two numbers are the number of rows / columns in the figure,
# the third number is the position of the plot to create
ax = fig.add_subplot(111)
# Plot the data
ax.plot(temp_resampled, label="Air temperature at Frankfurt Int. Airport, 2015")
# Draw the legend
ax.legend()
# Set the Y axis label
ax.set_ylabel("Temperature (°C)")
# Set the limits (range) of the X axis
ax.set_xlim(("2015-01-01", "2015-12-31"))
# instead of building the month labels manually (as we did before) we use matplotlib's built-in date locators and formatters
# configure the locator to find the 15th day of each month (not all months have the same length, but this is close enough)
days = mdates.DayLocator(bymonthday=15)
# set the date format to the abbreviated month name
# Ref: https://matplotlib.org/3.1.0/api/dates_api.html
monthFmt = mdates.DateFormatter("%b")
# apply locator and formatter
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(monthFmt)
###Output
_____no_output_____
###Markdown
You can also put several lines in one `plot`:
###Code
data_resampled = data.loc[:, ["air_temperature", "dewpoint"]].resample("1d").mean()
fig = plt.figure(figsize=(20,5))
ax = fig.add_subplot(111)
ax.plot(data_resampled.loc[:, "air_temperature"],label="Air temperature")
ax.plot(data_resampled.loc[:, "dewpoint"],label="Dewpoint")
ax.legend()
ax.set_ylabel("Temperature (°C)")
ax.set_xlim(("2015-01-01","2015-12-31"))
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(monthFmt)
plt.show()
###Output
_____no_output_____
###Markdown
And if we don't like the colors or the line types, we can choose different ones by passing the `c` and `linestyle` arguments to the plotting function (`plot`):
###Code
fig = plt.figure(figsize=(20,5))
plt.rcParams["font.size"] = 18
ax = fig.add_subplot(111)
# Ref. https://matplotlib.org/2.1.1/api/_as_gen/matplotlib.pyplot.plot.html
ax.plot(data_resampled.loc[:, "air_temperature"], label="Air temperature", c="k", linestyle="-.")
ax.plot(data_resampled.loc[:, "dewpoint"], label="Dewpoint", c="r", linestyle=":")
ax.legend()
ax.set_ylabel("Temperature (°C)")
ax.set_xlim(("2015-01-01","2015-12-31"))
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(monthFmt)
plt.show()
###Output
_____no_output_____
###Markdown
What if we want to plot 2 variables with different units in the same diagram, for example temperature and air pressure values? For this we can use the twinx() function. It creates a second set of axes that shares the x axis with the first one:
###Code
data_resampled = data.loc[:, ["air_temperature", "air_pressure"]].resample("1d").mean()
fig = plt.figure(figsize=(20,5))
ax2 = fig.add_subplot(111)
ax1 = ax2.twinx()
ax1.plot(data_resampled.loc[:, "air_temperature"], c="r", label="Temperature")
ax2.plot(data_resampled.loc[:, "air_pressure"], c="b", label="Air pressure")
ax1.set_ylabel("Temperature (°C)")
ax2.set_ylabel("Air pressure (hPa)")
ax1.legend()
ax2.legend()
ax1.set_xlim(("2015-01-01","2015-12-31"))
ax1.xaxis.set_major_locator(days)
ax1.xaxis.set_major_formatter(monthFmt)
plt.show()
###Output
_____no_output_____
###Markdown
Normally, however, subplots are used to draw several figures next to each other. For example, we can create 2 subplots, "ax1" and "ax2", showing temperature and wind speed side by side:
###Code
data_resampled = data.loc[:, ["air_temperature", "wind_speed"]].resample("1d").mean()
fig = plt.figure(figsize=(20,5))
plt.rcParams["font.size"] = 18
ax1 = fig.add_subplot(1,2,1) # add a subplot at index 1 into a plot that has 1 row and 2 columns
ax2 = fig.add_subplot(1,2,2) # add a subplot at index 2 into a plot that has 1 row and 2 columns
ax1.plot(data_resampled.loc[:, "air_temperature"], c="r", label="Temperature")
ax2.plot(data_resampled.loc[:, "wind_speed"], c="b", label="Wind speed")
ax1.legend()
ax2.legend()
ax1.set_xlim(("2015-01-01","2015-12-31"))
ax2.set_xlim(("2015-01-01","2015-12-31"))
ax1.xaxis.set_major_locator(days)
ax1.xaxis.set_major_formatter(monthFmt)
ax2.xaxis.set_major_locator(days)
ax2.xaxis.set_major_formatter(monthFmt)
plt.show()
###Output
_____no_output_____
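###Markdown
The same side-by-side layout is often written more compactly with `plt.subplots`, which creates the figure and both axes in a single call. A small sketch of the equivalent layout:
###Code
# Sketch: the same two panels via plt.subplots (1 row, 2 columns).
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 5))
ax1.plot(data_resampled.loc[:, "air_temperature"], c="r", label="Temperature")
ax2.plot(data_resampled.loc[:, "wind_speed"], c="b", label="Wind speed")
ax1.legend()
ax2.legend()
plt.show()
###Output
_____no_output_____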
###Markdown
If you want to save your figure, you can simply call the savefig() function instead of show():
###Code
fig = plt.figure(figsize=(20,5))
plt.plot(data.air_temperature.resample("1d").mean(), c="r", label="Temperature")
plt.savefig("beispiel_output.png")
###Output
_____no_output_____
###Markdown
Histograms. To get an overview of the distribution of the temperature values, we can also plot a histogram of the data:
###Code
plt.hist(data.air_temperature.dropna())
plt.show()
###Output
_____no_output_____
###Markdown
Once again, this plot is nice and simple, but we can do better ...
###Code
temperatur = data.air_temperature.dropna()
temperatur.unique()
temperatur = data.air_temperature.dropna()
plt.figure(figsize=(20,7))
plt.hist(temperatur,bins=sorted(temperatur.unique()-0.5), color="#1aa8a8")
plt.xlabel("Temperature (°C)")
plt.ylabel("Count")
plt.show()
###Output
_____no_output_____
###Markdown
We can also draw several histograms within one figure, for example the temperature values separately for January and August:
###Code
temperatur_jan = data.air_temperature["2015-01"].dropna()
temperatur_aug = data.air_temperature["2015-08"].dropna()
plt.figure(figsize=(20,7))
# bins, the bin width
# Ref: https://stackoverflow.com/questions/33458566/how-to-choose-bins-in-matplotlib-histogram/33459231
plt.hist(temperatur_jan,bins=sorted(data.air_temperature.dropna().unique()-0.5), color="#00b1ff", alpha=0.5, label="January")
plt.hist(temperatur_aug,bins=sorted(data.air_temperature.dropna().unique()-0.5), color="#ff0000", alpha=0.5, label="August")
plt.xlabel("Temperature (°C)")
plt.ylabel("Count")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Scatter plot. Another frequently used plot is the scatter plot. Let's use it to plot the temperature values against the air pressure:
###Code
plt.figure(figsize=(9,9))
plt.rcParams["font.size"] = 14
plt.scatter(data.air_temperature, data.air_pressure)
plt.show()
###Output
_____no_output_____
###Markdown
We can also draw multiple scatter plots on top of each other to represent more information within the same diagram. If we do this with the temperature/pressure measurements separately for January and August, we can see that air pressures below 1000 hPa only occurred during January but not during August, and that the range of air pressure values was much larger during January than during August:
###Code
plt.figure(figsize=(9,9))
plt.scatter(data.air_temperature, data.air_pressure, marker=".", c="grey", alpha=0.1, label="")
plt.scatter(data.air_temperature["2015-01"], data.air_pressure["2015-01"], marker=".", c="#35A3FF", alpha=1, label="January")
plt.scatter(data.air_temperature["2015-06"], data.air_pressure["2015-06"], marker=".", c="#FF824C", alpha=1, label="August")
plt.xlabel("Temperature (°C)")
plt.ylabel("Pressure (hPa)")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
There are many more options you can tweak to get a nice, print-ready look for the plot. And of course there are many more plot types (e.g. bar charts, box plots, 3D plots, ...). We won't cover them all here, but you can check the [matplotlib](https://matplotlib.org/contents.html) documentation. In addition, there are many more plotting libraries that cover all kinds of useful plotting routines, for example for plotting wind data ... Windroses (wind roses). With the windrose extension, you can easily plot wind data in a typical wind rose chart:
###Code
from windrose import WindroseAxes
ax = WindroseAxes.from_ax()
ax.bar(data.wind_direction, data.wind_speed, normed=True, opening=0.8, edgecolor="white")
ax.set_legend()
plt.show()
###Output
_____no_output_____ |
My_Udacity_Machine_Learning_Foundation_Course/project_four_boston_housing/Boston_housing(Advance).ipynb | ###Markdown
From the plots above, RM and LSTAT are the features most strongly correlated with MEDV. From the pair plot we can see that LSTAT has a nonlinear relationship with MEDV, while RM has a linear relationship with MEDV.
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression, RANSACRegressor
# introduce regularization to reduce overfitting
# multivariate regression models
from sklearn.linear_model import Lasso, Ridge, ElasticNet
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
#X = df[['RM']].values
#y = df['MEDV'].values
#X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=33)
# multivariate regression
X = df.iloc[:,:-1].values
y = df['MEDV'].values
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=33)
pipe_lr = Pipeline([('stds',StandardScaler()),('lr',LinearRegression())])
# introduce regularization to reduce overfitting
pipe_lasso = Pipeline([('stds',StandardScaler()),('lasso',Lasso(alpha=1.0))])
#pipe_ransac = Pipeline([('stds',StandardScaler()),('ransac',RANSACRegressor())])
pipe_lr.fit(X_train,y_train)
train_pred = pipe_lr.predict(X_train)
test_pred = pipe_lr.predict(X_test)
#print(pipe_lr.steps[1][1].coef_)
#print(pipe_lr.steps[1][1].intercept_)
# residual plot
plt.scatter(train_pred,train_pred-y_train,c='blue',label='train_pred')
plt.scatter(test_pred,test_pred-y_test,c='red',label='test_pred')
plt.xlabel('predict value')
plt.ylabel('Residuals')
plt.axhline(y=0)
plt.legend()
plt.show()
'''
pipe_ransac.fit(X,y)
pred_ransac = pipe_ransac.predict(X)
inlier_mask = pipe_ransac.steps[1][1].inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
plt.scatter(X[inlier_mask],y[inlier_mask],c='lightgreen',marker='s',label='inlier')
plt.scatter(X[outlier_mask],y[outlier_mask],c='lightblue',marker='o',label='outlier')
plt.plot(X,pred_ransac,c='red')
plt.xlabel('RM(std)')
plt.ylabel('MEDV(std)')
plt.legend()
plt.show()
'''
print('MSE of train_pred:', mean_squared_error(y_train, train_pred))
print('MSE of test_pred:', mean_squared_error(y_test, test_pred))
print('R2 of train_pred:', r2_score(y_train, train_pred))
print('R2 of test_pred:', r2_score(y_test, test_pred))
# introduce regularization to reduce overfitting
pipe_lasso = Pipeline([('stds',StandardScaler()),('lasso',Lasso())])
###Output
_____no_output_____
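###Markdown
The cell above imports Ridge and ElasticNet but never fits them. As a sketch, a Ridge pipeline can be evaluated in the same way as the plain LinearRegression (alpha=1.0 is just an untuned starting value):
###Code
# Sketch: same train/test split, but with an L2-regularized linear model.
pipe_ridge = Pipeline([('stds', StandardScaler()), ('ridge', Ridge(alpha=1.0))])
pipe_ridge.fit(X_train, y_train)
print('R2 of train_pred:', r2_score(y_train, pipe_ridge.predict(X_train)))
print('R2 of test_pred:', r2_score(y_test, pipe_ridge.predict(X_test)))
###Output
_____no_output_____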
###Markdown
Polynomial regression. For problems that do not satisfy the linearity assumption -> use polynomial regression.
###Code
from sklearn.preprocessing import PolynomialFeatures
reg = LinearRegression()
quad = PolynomialFeatures(degree=2)
cubic = PolynomialFeatures(degree=3)
#LSTAT v.s. MEDV
X = df[['LSTAT']].values
y = df['MEDV'].values
#linear fit
X_fit = np.arange(X.min(),X.max(),1)[:,np.newaxis]
reg.fit(X,y)
y_lin_fit = reg.predict(X_fit)
regr = reg.predict(X)
#quad fit
X_quad = quad.fit_transform(X)
reg.fit(X_quad,y)
y_quad_fit = reg.predict(quad.fit_transform(X_fit))
reg_quad = reg.predict(X_quad)
r2_qual = r2_score(y,reg_quad)
#cubic fit
X_cubic = cubic.fit_transform(X)
reg.fit(X_cubic,y)
y_cubic_fit = reg.predict(cubic.fit_transform(X_fit))
reg_cubic = reg.predict(X_cubic)
r2_cubic = r2_score(y,reg_cubic)
#plot results
plt.scatter(X,y,color='lightgray')
plt.plot(X_fit,y_lin_fit,c='red',linestyle='-',label='linear')
plt.plot(X_fit,y_quad_fit,c='blue',linestyle=':',label='quad')
plt.plot(X_fit,y_cubic_fit,c='green',linestyle='--',label='cubic')
plt.legend()
plt.show()
###Output
_____no_output_____ |
EDA-dataset90s.ipynb | ###Markdown
checking basic integrity
###Code
data.shape
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5520 entries, 0 to 5519
Data columns (total 19 columns):
track 5520 non-null object
artist 5520 non-null object
uri 5520 non-null object
danceability 5520 non-null float64
energy 5520 non-null float64
key 5520 non-null int64
loudness 5520 non-null float64
mode 5520 non-null int64
speechiness 5520 non-null float64
acousticness 5520 non-null float64
instrumentalness 5520 non-null float64
liveness 5520 non-null float64
valence 5520 non-null float64
tempo 5520 non-null float64
duration_ms 5520 non-null int64
time_signature 5520 non-null int64
chorus_hit 5520 non-null float64
sections 5520 non-null int64
target 5520 non-null int64
dtypes: float64(10), int64(6), object(3)
memory usage: 819.5+ KB
###Markdown
The number of rows equals the non-null count for every column -> there are no null values.
###Code
data.head()
###Output
_____no_output_____
###Markdown
checking unique records using uri
###Code
# extracting the exact id
def extract(x):
    split_list = x.split(':')  # splitting the text at colons
    return split_list[2]  # returning the third element
data['uri'] = data['uri'].apply(extract)
data.head() #successfully extracted the id
###Output
_____no_output_____
###Markdown
checking for duplicate rows
###Code
data['uri'].nunique(),
data['uri'].value_counts()
data['uri'].value_counts().unique()
dupe_mask = data['uri'].value_counts()==2
dupe_ids = dupe_mask[dupe_mask]
dupe_ids.value_counts(), dupe_ids.shape
#converting duplicate ids into a list
dupe_ids = dupe_ids.index
dupe_ids = dupe_ids.tolist()
dupe_ids
duplicate_index = data.loc[data['uri'].isin(dupe_ids),:].index # all the duplicted records
duplicate_index = duplicate_index.tolist()
###Output
_____no_output_____
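###Markdown
An alternative, more direct way to flag repeated ids is `DataFrame.duplicated`. A small sketch using the same `uri` column (keep=False marks every copy of a duplicated id):
###Code
# Sketch: boolean mask of all rows whose uri appears more than once.
dupe_rows = data[data.duplicated(subset=['uri'], keep=False)]
dupe_rows.shape
###Output
_____no_output_____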
###Markdown
We will remove all the duplicated records, since there are only a few of them compared to the size of the dataset.
###Code
data.drop(duplicate_index,axis=0,inplace=True)
data.shape
data.info()
print("shape of data",data.shape )
print("no. of unique rows",data['uri'].nunique()) # no duplicates
data.head()
###Output
_____no_output_____
###Markdown
Now we will drop all the unnecessary columns that contain strings which can't be efficiently converted into numeric values.
###Code
data.drop(['track','artist','uri'],axis=1,inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Univariate analysis
###Code
#analysing class imbalance
sns.countplot(data=data,x='target')
data.columns
# checking appropriate data type
data[['danceability', 'energy', 'key', 'loudness']].info() # every feature has an appropriate datatype
# checking range of first 4 features
data[['danceability', 'energy', 'key', 'loudness']].describe()
plt.figure(figsize=(10,10))
plt.subplot(2,2,1)
data['danceability'].plot()
plt.subplot(2,2,2)
plt.plot(data['energy'],color='red')
plt.subplot(2,2,3)
plt.plot(data[['key','loudness']])
###Output
_____no_output_____
###Markdown
Danceability is well inside the range (0, 1). Energy is well inside the range (0, 1). There's no -1 for keys -> every track has been assigned its key. Some loudness values are outside the range (-60, 0) dB.
###Code
loudness_error_index = data[data['loudness']>0].index
loudness_error_index
# removing rows with out-of-range values in the loudness column
data.drop(loudness_error_index, axis=0, inplace=True)
data.shape # record is removed
# checking appropriate datatype for next 5 columns
data[['mode', 'speechiness',
'acousticness', 'instrumentalness', 'liveness',]].info() # datatypes are in accordance with provided info
data[['mode', 'speechiness',
'acousticness', 'instrumentalness', 'liveness',]].describe() # every feautre is within range
sns.countplot(x=data['mode']) # have only two possible values 0 and 1, no noise in the feature
data[['valence', 'tempo',
'duration_ms', 'time_signature', 'chorus_hit', 'sections']].info() # data type is in accordance with provided info
data[['valence', 'tempo',
'duration_ms', 'time_signature', 'chorus_hit', 'sections']].describe() # all the data are in specified range
###Output
_____no_output_____
###Markdown
Performing an F-test to assess the relationship between each feature and the target.
###Code
data.head()
x = data.iloc[:,:-1].values
y = data.iloc[:,-1].values
x.shape,y.shape
from sklearn.feature_selection import f_classif
f_stat,p_value = f_classif(x,y)
feat_list = data.iloc[:,:-1].columns.tolist()
# making a dataframe
feat_dict = {'Features': feat_list, 'f_statistics': f_stat, 'p_value': p_value}  # avoid shadowing the built-in dict
relation = pd.DataFrame(feat_dict)
relation.sort_values(by='p_value')
###Output
_____no_output_____
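###Markdown
If we wanted to act on these scores rather than just inspect them, scikit-learn's `SelectKBest` wraps the same F-test. A sketch that keeps the 10 highest-scoring features (the choice of k=10 is arbitrary):
###Code
from sklearn.feature_selection import SelectKBest
# Sketch: keep the k features with the best ANOVA F-scores.
selector = SelectKBest(score_func=f_classif, k=10)
x_selected = selector.fit_transform(x, y)
x_selected.shape
###Output
_____no_output_____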
###Markdown
Multivariate analysis
###Code
correlation = data.corr()
plt.figure(figsize=(15,12))
sns.heatmap(correlation, annot=True)
plt.tight_layout
###Output
_____no_output_____ |
dissociation-curve.ipynb | ###Markdown
First we need to prepare the molecular hamiltonian by calculating the electronic and nuclear integrals using pyscf. Then we dump it into a file so as to not recalculate it every time.
###Code
npoint = 461
dist = np.linspace(0.2, 2.5, npoint) #0.735 optimal distance between the two hydrogen atoms.
h1_no_spin = np.zeros((npoint,2,2))
h2_no_spin = np.zeros((npoint,2,2,2,2))
energy_nuc = np.zeros(npoint)
for i,d in enumerate(dist):
mol = gto.M(
atom = [['H', (0,0,-d/2)], ['H', (0,0,d/2)]],
basis = 'sto-3g')
mol_hamilt_no_spin = MolecularFermionicHamiltonian.from_pyscf_mol(mol)
h1_no_spin[i], h2_no_spin[i] = mol_hamilt_no_spin.get_integrals()
energy_nuc[i] = mol.energy_nuc()
with open('Integrals_sto-3g_H2_d_0.2-2.5_no_spin.npz','wb') as f:
np.savez(f,dist=dist,nuc=energy_nuc,h1 = h1_no_spin,h2 = h2_no_spin)
###Output
_____no_output_____
###Markdown
Now we can just load this data and use it for the calculation
###Code
with open('../molecule-solution/Integrals_sto-3g_H2_d_0.2-2.5_no_spin.npz','rb') as f:
out = np.load(f)
dist = out["dist"]
energy_nuc = out["nuc"]
h1_no_spin = out['h1']
h2_no_spin = out['h2']
###Output
_____no_output_____
###Markdown
Next we proceed to doing the actual calculation. First we need to define the varforms for the 4-qubit states. We follow the same steps as in activity 3.2.
###Code
from qiskit.circuit import QuantumCircuit, Parameter
###Output
_____no_output_____
###Markdown
First the 1 parameter basis spanned by $|0101\rangle$ and $|1010\rangle$ states.
###Code
varform_4qubits_1param = QuantumCircuit(4)
a = Parameter('a')
varform_4qubits_1param.ry(a,1)
varform_4qubits_1param.x(0)
varform_4qubits_1param.cx(1,0)
varform_4qubits_1param.cx(0,2)
varform_4qubits_1param.cx(1,3)
varform_4qubits_1param.draw('mpl')
###Output
_____no_output_____
###Markdown
And the 3 parameter basis spanned by $|0101\rangle$, $|0110\rangle$, $|1001\rangle$ and $|1010\rangle$.
###Code
varform_4qubits_3params = QuantumCircuit(4)
a = Parameter('a')
b = Parameter('b')
c = Parameter('c')
varform_4qubits_3params.x(0)
varform_4qubits_3params.x(2)
varform_4qubits_3params.ry(a,1)
varform_4qubits_3params.cx(1,3)
varform_4qubits_3params.ry(b,1)
varform_4qubits_3params.ry(c,3)
varform_4qubits_3params.cx(1,0)
varform_4qubits_3params.cx(3,2)
varform_4qubits_3params.draw('mpl')
###Output
_____no_output_____
###Markdown
Now we need to map the Hamiltonian to a quantum circuit. We will use the Jordan-Wigner mapping and the basic evaluator. We will run it on the qasm simulator for testing. Start by initializing all the elements necessary for the computation.
###Code
from mapping import JordanWigner, Parity
from evaluator import BasicEvaluator, BitwiseCommutingCliqueEvaluator
from solver import VQESolver
from qiskit import Aer, execute
from scipy.optimize import minimize
import time
mapping = JordanWigner()
varform = varform_4qubits_1param
backend = Aer.get_backend('qasm_simulator')
execute_opts = {'shots' : 2048}
minimizer = lambda fct, start_param_values : minimize(
fct,
start_param_values,
method = 'SLSQP',
options = {'maxiter' : 5, 'eps' : 1e-1, 'ftol' : 1e-4, 'disp' : True, 'iprint' : 0})
evaluator = BasicEvaluator(varform,backend,execute_opts = execute_opts)
solver = VQESolver(evaluator,minimizer,[0],name = 'jw_basic')
#Test for the optimal distance
t0 = time.time()
molecular_hamiltonian = MolecularFermionicHamiltonian.from_integrals(h1_no_spin[106],h2_no_spin[106]).include_spin()
lcps = mapping.fermionic_hamiltonian_to_linear_combination_pauli_string(molecular_hamiltonian).combine().apply_threshold().sort()
t2 = time.time()
print(t2-t0)
en, par = solver.lowest_eig_value(lcps)
print(en,par)
###Output
0.08099330501863733
[-1.86194606] [-0.17793741]
###Markdown
Now we run the calculation using all the machinery, looping over the distances between the atoms. We monitor the run time so we can compare it with the other methods below.
###Code
en_jw1_basic = np.zeros(len(dist))
par_jw1_basic = np.zeros(len(dist))
t0 = time.time()
for i,(h1,h2) in enumerate(zip(h1_no_spin,h2_no_spin)):
molecular_hamiltonian = MolecularFermionicHamiltonian.from_integrals(h1,h2).include_spin()
lcps = mapping.fermionic_hamiltonian_to_linear_combination_pauli_string(molecular_hamiltonian).combine().apply_threshold().sort()
en_jw1_basic[i], par_jw1_basic[i] = solver.lowest_eig_value(lcps)
t_jw1_basic = time.time()-t0
print(t_jw1_basic)
plt.title("Dissociation curve BasicEvaluator",fontsize=14)
plt.plot(dist,en_jw1_basic+energy_nuc)
plt.text(1.7,0,"t={m:.0f} min {s:.0f} s".format(m=t_jw1_basic//60, s=t_jw1_basic%60),fontsize=16)
plt.xlabel('Distance $[\\AA]$', fontsize=16)
plt.tick_params(axis='x', labelsize=16)
plt.ylabel('Energy [Ha]', color="tab:blue", fontsize=16)
plt.tick_params(axis='y', labelcolor="C0",labelsize=16)
plt.twinx()
plt.plot(dist,par_jw1_basic,color="C3")
plt.ylim(-1.6,0)
plt.ylabel('Mixing angle [rad]', color="tab:red", fontsize=16)
plt.tick_params(axis='y', labelcolor="C3",labelsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Now we compare the above to the BitwiseCommutingCliqueEvaluator to see if there's any performance improvement
###Code
evaluator = BitwiseCommutingCliqueEvaluator(varform,backend,execute_opts = execute_opts)
solver = VQESolver(evaluator,minimizer,[0],name = 'jw_bitwise')
en_jw1_bitwise = np.zeros(len(dist))
par_jw1_bitwise = np.zeros(len(dist))
t0 = time.time()
for i,(h1,h2) in enumerate(zip(h1_no_spin,h2_no_spin)):
molecular_hamiltonian = MolecularFermionicHamiltonian.from_integrals(h1,h2).include_spin()
lcps = mapping.fermionic_hamiltonian_to_linear_combination_pauli_string(molecular_hamiltonian).combine().apply_threshold().sort()
en_jw1_bitwise[i], par_jw1_bitwise[i] = solver.lowest_eig_value(lcps)
t_jw1_bitwise = time.time()-t0
print(t_jw1_bitwise)
plt.title("Dissociation curve BitwiseCommuting",fontsize=14)
plt.plot(dist,en_jw1_bitwise+energy_nuc)
plt.text(1.7,0,"t={m:.0f} min {s:.0f} s".format(m=t_jw1_bitwise//60, s=t_jw1_bitwise%60),fontsize=16)
plt.xlabel('Distance $[\\AA]$', fontsize=16)
plt.tick_params(axis='x', labelsize=16)
plt.ylabel('Energy [Ha]', color="tab:blue", fontsize=16)
plt.tick_params(axis='y', labelcolor="C0",labelsize=16)
plt.twinx()
plt.plot(dist,par_jw1_bitwise,color="C3")
plt.ylim(-1.6,0)
plt.ylabel('Mixing angle [rad]', color="tab:red", fontsize=16)
plt.tick_params(axis='y', labelcolor="C3",labelsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Here we try to get the Parity mapping to work.
###Code
varform_2qubits_1param = QuantumCircuit(4)
a = Parameter('a')
varform_2qubits_1param.x(0)
varform_2qubits_1param.x(1)
varform_2qubits_1param.ry(a,2)
varform_2qubits_1param.cx(2,0)
varform_2qubits_1param.draw('mpl')
%autoreload
mapping = Parity()
varform = varform_2qubits_1param
evaluator = BitwiseCommutingCliqueEvaluator(varform,backend,execute_opts = execute_opts)
solver = VQESolver(evaluator,minimizer,[0],name = 'parity_basic')
#Test for the optimal distance
t0=time.time()
molecular_hamiltonian = MolecularFermionicHamiltonian.from_integrals(h1_no_spin[106],h2_no_spin[106]).include_spin()
lcps = mapping.fermionic_hamiltonian_to_linear_combination_pauli_string(molecular_hamiltonian).combine().apply_threshold().sort()
t2=time.time()
print(t2-t0)
en, par = solver.lowest_eig_value(lcps)
print(en,par)
en_parity_bitwise = np.zeros(len(dist))
par_parity_bitwise = np.zeros(len(dist))
t0 = time.time()
for i,(h1,h2) in enumerate(zip(h1_no_spin,h2_no_spin)):
molecular_hamiltonian = MolecularFermionicHamiltonian.from_integrals(h1,h2).include_spin()
lcps = mapping.fermionic_hamiltonian_to_linear_combination_pauli_string(molecular_hamiltonian).combine().apply_threshold().sort()
en_parity_bitwise[i], par_parity_bitwise[i] = solver.lowest_eig_value(lcps)
t_parity_bitwise = time.time()-t0
print(t_parity_bitwise)
plt.title("Dissociation curve Parity+BitwiseCommuting",fontsize=14)
plt.plot(dist,en_parity_bitwise+energy_nuc)
plt.text(1.7,0,"t={m:.0f} min {s:.0f} s".format(m=t_parity_bitwise//60, s=t_parity_bitwise%60),fontsize=16)
plt.xlabel('Distance $[\\AA]$', fontsize=16)
plt.tick_params(axis='x', labelsize=16)
plt.ylabel('Energy [Ha]', color="tab:blue", fontsize=16)
plt.tick_params(axis='y', labelcolor="C0",labelsize=16)
plt.twinx()
plt.plot(dist,par_parity_bitwise,color="C3")
plt.ylim(-1.6,0)
plt.ylabel('Mixing angle [rad]', color="tab:red", fontsize=16)
plt.tick_params(axis='y', labelcolor="C3",labelsize=16)
plt.show()
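# Sketch (optional): overlay the three dissociation curves computed above to make the
# comparison between the evaluator/mapping combinations explicit.
plt.figure(figsize=(8, 5))
plt.plot(dist, en_jw1_basic + energy_nuc, label='JW + Basic')
plt.plot(dist, en_jw1_bitwise + energy_nuc, label='JW + BitwiseCommuting')
plt.plot(dist, en_parity_bitwise + energy_nuc, label='Parity + BitwiseCommuting')
plt.xlabel('Distance $[\\AA]$')
plt.ylabel('Energy [Ha]')
plt.legend()
plt.show()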
###Output
_____no_output_____ |
courses/ai-for-finance/practice/momentum_using_hurst.ipynb | ###Markdown
Improve Momentum Trading strategies using Hurst. In this notebook we'll implement a strategy to trade on momentum. You'll be using the training wheels version of Auquan's toolbox to abstract out the details, since the full version of the toolbox is a bit comprehensive to get started with. We're providing you with a bare-bones version that shows you how to use 30 day momentum to trade on Apple, sometime between 2015 and 2017. This naive strategy loses money and that's to be expected. Your goal is to make use of the Hurst exponent that you learnt in previous lessons to create a better strategy. This is an analytical approach to momentum trading, but it is also possible to turn this into a machine learning problem. This is discussed at the end of the notebook, with a link to an extension exercise on the quant-quest website from Auquan for you to attempt this approach.**Goals**:1. Understand how the barebones momentum version works and make yourself comfortable with it2. Use the Hurst exponent to create a money-making strategy
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Let's import everything we need to run our backtesting algorithm
###Code
!pip install qq-training-wheels auquan_toolbox --upgrade
from qq_training_wheels.momentum_trading import MomentumTradingParams
from backtester.trading_system import TradingSystem
from backtester.features.feature import Feature
import numpy as np
###Output
_____no_output_____
###Markdown
The class below implements all the logic you need to run the momentum backtester. Go through it and make sure you understand each part. You can run it first and make changes later to see if you made any improvements over the naive strategy. There are 6 functions within the class:- \_\_init\_\_- getSymbolsToTrade- getInstrumentFeatureConfigDicts- getPrediction- hurst_f- updateCount**__init__**Initializes the class**getSymbolsToTrade**This is where we can select which stocks we want to test our strategy on. Here we're using just AAPL as it is the only ticker returned**getInstrumentFeatureConfigDicts** This is the way that the toolbox creates features that we want to use in our logic. It's really important for resource optimisation at scale but can look a little daunting at first. We've created the features you'll need. If you're interested in learning more, you can here: https://blog.quant-quest.com/toolbox-breakdown-getfeatureconfigdicts-function/**getPrediction**This again is fairly straightforward. We've included a few notes here, but for more detail: https://blog.quant-quest.com/toolbox-breakdown-getprediction-function/Once you've calculated the hurst exponent, this should contain the logic to use it and make profitable trades.**hurst_f**This is your time to shine! This is where you will need to implement the hurst exponent as shown in the previous lecture. There are several different ways of calculating the hurst exponent, so we recommend you use the method shown in the lecture to allow other people to easily help you - if needed!**updateCount**A counter
###Code
class MyTradingFunctions():
def __init__(self):
self.count = 0
# When to start trading
self.start_date = '2015/01/02'
# When to end trading
self.end_date = '2017/08/31'
self.params = {}
def getSymbolsToTrade(self):
'''
Specify the stock names that you want to trade.
'''
return ['AAPL']
def getInstrumentFeatureConfigDicts(self):
'''
Specify all Features you want to use by creating config dictionaries.
Create one dictionary per feature and return them in an array.
Feature config Dictionary have the following keys:
featureId: a str for the type of feature you want to use
featureKey: {optional} a str for the key you will use to call this feature
If not present, will just use featureId
params: {optional} A dictionary with which contains other optional params if needed by the feature
msDict = {
'featureKey': 'ms_5',
'featureId': 'moving_sum',
'params': {
'period': 5,
'featureName': 'basis'
}
}
return [msDict]
You can now use this feature by in getPRediction() calling it's featureKey, 'ms_5'
'''
ma1Dict = {
'featureKey': 'ma_90',
'featureId': 'moving_average',
'params': {
'period': 90,
'featureName': 'adjClose'
}
}
mom30Dict = {
'featureKey': 'mom_30',
'featureId': 'momentum',
'params': {
'period': 30,
'featureName': 'adjClose'
}
}
mom10Dict = {
'featureKey': 'mom_10',
'featureId': 'momentum',
'params': {
'period': 10,
'featureName': 'adjClose'
}
}
return [ma1Dict, mom10Dict, mom30Dict]
def getPrediction(self, time, updateNum, instrumentManager, predictions):
'''
Combine all the features to create the desired predictions for each stock.
'predictions' is Pandas Series with stock as index and predictions as values
We first call the holder for all the instrument features for all stocks as
lookbackInstrumentFeatures = instrumentManager.getLookbackInstrumentFeatures()
Then call the dataframe for a feature using its feature_key as
ms5Data = lookbackInstrumentFeatures.getFeatureDf('ms_5')
This returns a dataFrame for that feature for ALL stocks for all times upto lookback time
Now you can call just the last data point for ALL stocks as
ms5 = ms5Data.iloc[-1]
You can call last datapoint for one stock 'ABC' as
value_for_abs = ms5['ABC']
Output of the prediction function is used by the toolbox to make further trading decisions and evaluate your score.
'''
self.updateCount() # uncomment if you want a counter
# holder for all the instrument features for all instruments
lookbackInstrumentFeatures = instrumentManager.getLookbackInstrumentFeatures()
#############################################################################################
### TODO : FILL THIS FUNCTION TO RETURN A BUY (1), SELL (0) or LEAVE POSITION (0.5) prediction
### for each stock
### USE TEMPLATE BELOW AS EXAMPLE
###
### HINT: Use the Hurst Exponent
### http://analytics-magazine.org/the-hurst-exponent-predictability-of-time-series/
#############################################################################################
# TODO: Fill in the logic for the Hurst Exponent
def hurst_f(input_ts, lags_to_test=20):
# interpretation of return value
# hurst < 0.5 - input_ts is mean reverting
# hurst = 0.5 - input_ts is effectively random/geometric brownian motion
# hurst > 0.5 - input_ts is trending
hurst = 0.5
return hurst
# dataframe for a historical instrument feature (ma_90 in this case). The index is the timestamps
# of upto lookback data points. The columns of this dataframe are the stock symbols/instrumentIds.
mom30Data = lookbackInstrumentFeatures.getFeatureDf('mom_30')
ma90Data = lookbackInstrumentFeatures.getFeatureDf('ma_90')
# TODO: We're trading on the 30 day momentum here and losing money, try trading on the basis of Hurst
# exponent and see if you're able to make money
if len(ma90Data.index) > 20:
mom30 = mom30Data.iloc[-1]
# Go long if momentum is positive
predictions[mom30 > 0] = 1
# Go short if momentum is negative
predictions[mom30 <= 0] = 0
else:
# If no sufficient data then leave all positions
predictions.values[:] = 0.5
return predictions
def updateCount(self):
self.count = self.count + 1
###Output
_____no_output_____
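###Markdown
For reference only (and explicitly not the graded solution), one common way to estimate the Hurst exponent is to measure how the spread of lagged differences scales with the lag, std(x[t+tau] - x[t]) ~ tau**H. A minimal sketch of that idea:
###Code
import numpy as np
# Sketch: the slope of log(std of lagged differences) vs log(lag) estimates H.
def hurst_sketch(input_ts, lags_to_test=20):
    ts = np.asarray(input_ts, dtype=float)
    lags = range(2, lags_to_test)
    tau = [np.std(ts[lag:] - ts[:-lag]) for lag in lags]
    return np.polyfit(np.log(list(lags)), np.log(tau), 1)[0]
# sanity check on a random walk: the estimate should come out close to 0.5
np.random.seed(0)
print(hurst_sketch(np.cumsum(np.random.randn(1000))))
###Output
_____no_output_____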
###Markdown
Initialize everything we've created so far
###Code
tf = MyTradingFunctions()
tsParams = MomentumTradingParams(tf)
tradingSystem = TradingSystem(tsParams)
###Output
_____no_output_____
###Markdown
Start Trading ...You'll see your PnL as the backtest runs. If you want more detailed results, two folders, `runLogs` and `tb_logs`, are generated in the same directory as this script. You'll find the result CSV files inside `runLogs` and the tensorboard log inside `tb_logs`
###Code
results = tradingSystem.startTrading()
###Output
_____no_output_____
###Markdown
Improve Momentum Trading strategies using HurstIn this notebook we'll implementing a strategy to trade on momentum. You'll be using the training wheels version of Auquan's toolbox to abstract out the details since the full version of toolbox is a bit comprehensive to get started with. We're providing you with a bare-bones version that shows you how to use 30 day momentum to trade on Apple, sometime between 2015 and 2017. This naive strategy loses money and that's to be expected. Your goal is to make use of Hurst exponent that you learnt in previous lessons to create a better strategy. This is an analytical method of momentum trading, but it is also to turn this into a machine learning problem. This is discussed at the end of the notebook, with a link to an extension exercise on the quant-quest website from Auquan for you to attempt this approach.**Goals**:1. Understand how the barebones momentum version is working and make yourself comfortable with it2. Use the Hurst exponent to create a money making strategy
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Let's import everything we need to run our backtesting algorithm
###Code
!pip install qq-training-wheels auquan_toolbox --upgrade
from qq_training_wheels.momentum_trading import MomentumTradingParams
from backtester.trading_system import TradingSystem
from backtester.features.feature import Feature
import numpy as np
###Output
_____no_output_____
###Markdown
The class below implements all the logic you need to run the momentum backtester. Go through it and make sure you understand each part. You can run it first and make changes later to see if you made any improvements over the naive strategy.There are 6 functions within the class:- \_\_init\_\_- getSymbolsToTrade- getInstrumentFeatureConfigDicts- getPredictions- hurst_f- updateCount**__init__**Initializes the class**getSymbolsToTrade**This is where we can select which stocks we want to test our strategy on. Here we're using just AAPL is it is the only ticker returned**getInstrumentConfigDicts** This is the way that the toolbox creates features that we want to use in our logic. It's really important for resource optimisation at scale but can look a little daunting at first. We've created the features you'll need for you. If you're interested in learning more you can here: https://blog.quant-quest.com/toolbox-breakdown-getfeatureconfigdicts-function/**getPrediction**This again is fairly straight forward. We've included a few notes here, but for more detail: https://blog.quant-quest.com/toolbox-breakdown-getprediction-function/Once you've calculated the hurst exponent, this should contain the logic to use it and make profitable trades.**hurst_f**This is your time to shine! This is where you will need to implement the hurst exponent as shown in the previous lecture. There are several different ways of calculating the hurst exponent, so we recommend you use the method shown in the lecture to allow other people to easily help you - if needed!**updateCount**A counter
###Code
class MyTradingFunctions():
def __init__(self):
self.count = 0
# When to start trading
self.start_date = '2015/01/02'
# When to end trading
self.end_date = '2017/08/31'
self.params = {}
def getSymbolsToTrade(self):
'''
Specify the stock names that you want to trade.
'''
return ['AAPL']
def getInstrumentFeatureConfigDicts(self):
'''
Specify all Features you want to use by creating config dictionaries.
Create one dictionary per feature and return them in an array.
Feature config Dictionary have the following keys:
featureId: a str for the type of feature you want to use
featureKey: {optional} a str for the key you will use to call this feature
If not present, will just use featureId
params: {optional} A dictionary with which contains other optional params if needed by the feature
msDict = {
'featureKey': 'ms_5',
'featureId': 'moving_sum',
'params': {
'period': 5,
'featureName': 'basis'
}
}
return [msDict]
You can now use this feature by in getPRediction() calling it's featureKey, 'ms_5'
'''
ma1Dict = {
'featureKey': 'ma_90',
'featureId': 'moving_average',
'params': {
'period': 90,
'featureName': 'adjClose'
}
}
mom30Dict = {
'featureKey': 'mom_30',
'featureId': 'momentum',
'params': {
'period': 30,
'featureName': 'adjClose'
}
}
mom10Dict = {
'featureKey': 'mom_10',
'featureId': 'momentum',
'params': {
'period': 10,
'featureName': 'adjClose'
}
}
return [ma1Dict, mom10Dict, mom30Dict]
def getPrediction(self, time, updateNum, instrumentManager, predictions):
'''
Combine all the features to create the desired predictions for each stock.
'predictions' is Pandas Series with stock as index and predictions as values
We first call the holder for all the instrument features for all stocks as
lookbackInstrumentFeatures = instrumentManager.getLookbackInstrumentFeatures()
Then call the dataframe for a feature using its feature_key as
ms5Data = lookbackInstrumentFeatures.getFeatureDf('ms_5')
This returns a dataFrame for that feature for ALL stocks for all times upto lookback time
Now you can call just the last data point for ALL stocks as
ms5 = ms5Data.iloc[-1]
You can call last datapoint for one stock 'ABC' as
value_for_abs = ms5['ABC']
Output of the prediction function is used by the toolbox to make further trading decisions and evaluate your score.
'''
self.updateCount() # uncomment if you want a counter
# holder for all the instrument features for all instruments
lookbackInstrumentFeatures = instrumentManager.getLookbackInstrumentFeatures()
#############################################################################################
### TODO : FILL THIS FUNCTION TO RETURN A BUY (1), SELL (0) or LEAVE POSITION (0.5) prediction
### for each stock
### USE TEMPLATE BELOW AS EXAMPLE
###
### HINT: Use the Hurst Exponent
### http://analytics-magazine.org/the-hurst-exponent-predictability-of-time-series/
#############################################################################################
# TODO: Fill in the logic for the Hurst Exponent
def hurst_f(input_ts, lags_to_test=20):
# interpretation of return value
# hurst < 0.5 - input_ts is mean reverting
# hurst = 0.5 - input_ts is effectively random/geometric brownian motion
# hurst > 0.5 - input_ts is trending
hurst = 0.5
return hurst
# dataframe for a historical instrument feature (ma_90 in this case). The index is the timestamps
# of upto lookback data points. The columns of this dataframe are the stock symbols/instrumentIds.
mom30Data = lookbackInstrumentFeatures.getFeatureDf('mom_30')
ma90Data = lookbackInstrumentFeatures.getFeatureDf('ma_90')
# TODO: We're trading on the 30 day momentum here and losing money, try trading on the basis of Hurst
# exponent and see if you're able to make money
if len(ma90Data.index) > 20:
mom30 = mom30Data.iloc[-1]
# Go long if momentum is positive
predictions[mom30 > 0] = 1
# Go short if momentum is negative
predictions[mom30 <= 0] = 0
else:
# If no sufficient data then leave all positions
predictions.values[:] = 0.5
return predictions
def updateCount(self):
self.count = self.count + 1
###Output
_____no_output_____
###Markdown
Initialize everything we've created so far
###Code
tf = MyTradingFunctions()
tsParams = MomentumTradingParams(tf)
tradingSystem = TradingSystem(tsParams)
###Output
_____no_output_____
###Markdown
Start Trading ...You'll see your pnl as the backtesting runs. If you want more detailed results, two folders: `runLogs` and `tb_logs` are generated in the same directory as this script. You'll find the csv's for results inside `runLogs` and tensorboard log inside `tb_logs`
###Code
results = tradingSystem.startTrading()
###Output
_____no_output_____
###Markdown
Improve Momentum Trading strategies using HurstIn this notebook we'll implementing a strategy to trade on momentum. You'll be using the training wheels version of Auquan's toolbox to abstract out the details since the full version of toolbox is a bit comprehensive to get started with. We're providing you with a bare-bones version that shows you how to use 30 day momentum to trade on Apple, sometime between 2015 and 2017. This naive strategy loses money and that's to be expected. Your goal is to make use of Hurst exponent that you learnt in previous lessons to create a better strategy. This is an analytical method of momentum trading, but it is also to turn this into a machine learning problem. This is discussed at the end of the notebook, with a link to an extension exercise on the quant-quest website from Auquan for you to attempt this approach.**Goals**:1. Understand how the barebones momentum version is working and make yourself comfortable with it2. Use the Hurst exponent to create a money making strategy
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Let's import everything we need to run our backtesting algorithm
###Code
!pip install qq-training-wheels auquan_toolbox --upgrade
from qq_training_wheels.momentum_trading import MomentumTradingParams
from backtester.trading_system import TradingSystem
from backtester.features.feature import Feature
import numpy as np
###Output
_____no_output_____
###Markdown
The class below implements all the logic you need to run the momentum backtester. Go through it and make sure you understand each part. You can run it first and make changes later to see if you made any improvements over the naive strategy.There are 6 functions within the class:- \_\_init\_\_- getSymbolsToTrade- getInstrumentFeatureConfigDicts- getPredictions- hurst_f- updateCount**__init__**Initializes the class**getSymbolsToTrade**This is where we can select which stocks we want to test our strategy on. Here we're using just AAPL is it is the only ticker returned**getInstrumentConfigDicts** This is the way that the toolbox creates features that we want to use in our logic. It's really important for resource optimisation at scale but can look a little daunting at first. We've created the features you'll need for you. If you're interested in learning more you can here: https://blog.quant-quest.com/toolbox-breakdown-getfeatureconfigdicts-function/**getPrediction**This again is fairly straight forward. We've included a few notes here, but for more detail: https://blog.quant-quest.com/toolbox-breakdown-getprediction-function/Once you've calculated the hurst exponent, this should contain the logic to use it and make profitable trades.**hurst_f**This is your time to shine! This is where you will need to implement the hurst exponent as shown in the previous lecture. There are several different ways of calculating the hurst exponent, so we recommend you use the method shown in the lecture to allow other people to easily help you - if needed!**updateCount**A counter
###Code
class MyTradingFunctions():
def __init__(self):
self.count = 0
# When to start trading
self.start_date = '2015/01/02'
# When to end trading
self.end_date = '2017/08/31'
self.params = {}
def getSymbolsToTrade(self):
'''
Specify the stock names that you want to trade.
'''
return ['AAPL']
def getInstrumentFeatureConfigDicts(self):
'''
Specify all Features you want to use by creating config dictionaries.
Create one dictionary per feature and return them in an array.
Feature config Dictionary have the following keys:
featureId: a str for the type of feature you want to use
featureKey: {optional} a str for the key you will use to call this feature
If not present, will just use featureId
params: {optional} A dictionary with which contains other optional params if needed by the feature
msDict = {
'featureKey': 'ms_5',
'featureId': 'moving_sum',
'params': {
'period': 5,
'featureName': 'basis'
}
}
return [msDict]
You can now use this feature by in getPRediction() calling it's featureKey, 'ms_5'
'''
ma1Dict = {
'featureKey': 'ma_90',
'featureId': 'moving_average',
'params': {
'period': 90,
'featureName': 'adjClose'
}
}
mom30Dict = {
'featureKey': 'mom_30',
'featureId': 'momentum',
'params': {
'period': 30,
'featureName': 'adjClose'
}
}
mom10Dict = {
'featureKey': 'mom_10',
'featureId': 'momentum',
'params': {
'period': 10,
'featureName': 'adjClose'
}
}
return [ma1Dict, mom10Dict, mom30Dict]
def getPrediction(self, time, updateNum, instrumentManager, predictions):
'''
Combine all the features to create the desired predictions for each stock.
'predictions' is Pandas Series with stock as index and predictions as values
We first call the holder for all the instrument features for all stocks as
lookbackInstrumentFeatures = instrumentManager.getLookbackInstrumentFeatures()
Then call the dataframe for a feature using its feature_key as
ms5Data = lookbackInstrumentFeatures.getFeatureDf('ms_5')
This returns a dataFrame for that feature for ALL stocks for all times upto lookback time
Now you can call just the last data point for ALL stocks as
ms5 = ms5Data.iloc[-1]
You can call last datapoint for one stock 'ABC' as
value_for_abs = ms5['ABC']
Output of the prediction function is used by the toolbox to make further trading decisions and evaluate your score.
'''
self.updateCount() # uncomment if you want a counter
# holder for all the instrument features for all instruments
lookbackInstrumentFeatures = instrumentManager.getLookbackInstrumentFeatures()
#############################################################################################
### TODO : FILL THIS FUNCTION TO RETURN A BUY (1), SELL (0) or LEAVE POSITION (0.5) prediction
### for each stock
### USE TEMPLATE BELOW AS EXAMPLE
###
### HINT: Use the Hurst Exponent
### http://analytics-magazine.org/the-hurst-exponent-predictability-of-time-series/
#############################################################################################
# TODO: Fill in the logic for the Hurst Exponent
def hurst_f(input_ts, lags_to_test=20):
# interpretation of return value
# hurst < 0.5 - input_ts is mean reverting
# hurst = 0.5 - input_ts is effectively random/geometric brownian motion
# hurst > 0.5 - input_ts is trending
hurst = 0.5
return hurst
# dataframe for a historical instrument feature (ma_90 in this case). The index is the timestamps
# of upto lookback data points. The columns of this dataframe are the stock symbols/instrumentIds.
mom30Data = lookbackInstrumentFeatures.getFeatureDf('mom_30')
ma90Data = lookbackInstrumentFeatures.getFeatureDf('ma_90')
# TODO: We're trading on the 30 day momentum here and losing money, try trading on the basis of Hurst
# exponent and see if you're able to make money
if len(ma90Data.index) > 20:
mom30 = mom30Data.iloc[-1]
# Go long if momentum is positive
predictions[mom30 > 0] = 1
# Go short if momentum is negative
predictions[mom30 <= 0] = 0
else:
# If no sufficient data then leave all positions
predictions.values[:] = 0.5
return predictions
def updateCount(self):
self.count = self.count + 1
###Output
_____no_output_____
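###Markdown
For reference only: the cell below is a minimal sketch of one common way to estimate the Hurst exponent (the variance-of-lagged-differences method). The function name `hurst_estimate`, the lag range and the random-walk sanity check are illustrative choices rather than part of the original template, and the sketch is not meant to replace your own `hurst_f` implementation.
###Code
import numpy as np

def hurst_estimate(input_ts, lags_to_test=20):
    # For a series whose increments scale as lag**H, the standard deviation of the
    # lag-differenced series grows roughly like lag**H, so the slope of
    # log(std) versus log(lag) approximates the Hurst exponent H.
    ts = np.asarray(input_ts, dtype=float)
    lags = np.arange(2, lags_to_test)
    tau = [np.std(ts[lag:] - ts[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return slope

# Sanity check: a random walk should give a value near 0.5
print(hurst_estimate(np.cumsum(np.random.randn(5000))))
###Output
_____no_output_____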
###Markdown
Initialize everything we've created so far
###Code
tf = MyTradingFunctions()
tsParams = MomentumTradingParams(tf)
tradingSystem = TradingSystem(tsParams)
###Output
_____no_output_____
###Markdown
Start Trading ...

You'll see your PnL as the backtesting runs. If you want more detailed results, two folders, `runLogs` and `tb_logs`, are generated in the same directory as this script. You'll find the CSVs with the results inside `runLogs` and the TensorBoard logs inside `tb_logs`.
###Code
results = tradingSystem.startTrading()
###Output
_____no_output_____ |
Gardenkiak/Programazioa/Balio literalak eta datu motak.ipynb | ###Markdown
Literal values and data types

In any algorithm, values written *"by hand"* (literally) by the programmer and known in advance appear frequently: texts (`"kaixo"`, `"bai"`, `"ez"`), constants that appear in an arithmetic expression (`2`, `10`, `3.14159265359`), and so on. When we write these literal values, the Python interpreter encodes them. The syntax we use to express a literal value implicitly indicates the data type (and therefore the encoding) of each piece of information.
###Code
10/3
10//3
10.0//3
###Output
_____no_output_____
###Markdown
The `type()` function

The `type()` function tells us the data type of the result of any expression:
###Code
type(10)
type(10.0)
type('kaixo')
###Output
_____no_output_____
###Markdown
* int → integers (64 bits and beyond, no arithmetic error)
* float → real numbers (64 bits, arithmetic rounding errors)
* str → character strings
###Code
print(1000000000000000000.0 + 1.0 - 1000000000000000000.0)
print(1.2 - 0.1)
###Output
0.0
1.0999999999999999
###Markdown
The `print()` function

The `print()` function displays the values of the given expressions on the screen:
###Code
print(10.0/3)
###Output
3.3333333333333335
###Markdown
In the previous examples we did not use the `print()` function, yet we still saw the result on the screen... When a code block is executed in IPython, the value returned by the last statement is shown as the *Out [x]* output.
###Code
1+2
1+3
1+4
###Output
_____no_output_____
###Markdown
So, from now on, whenever we want to show something on the screen, we will use the `print()` function:
###Code
print(1+2)
print(1+4)
print(1+3)
###Output
3
5
4
###Markdown
Why is no *Out [x]* output shown when we use the `print()` function? Because the `print()` function returns the special value `None`. Restating what was said before: when a code block is executed in IPython, the value returned by the last statement is shown as the *Out [x]* output only when it is different from `None`.
###Code
print("kaixo")
print("kaixo") == None
###Output
kaixo
###Markdown
Something similar happens in the IDLE console (the *shell*), in that case line by line: the result of each executed line/statement is displayed, and we can likewise write messages to the screen with the `print()` function. Both kinds of information are displayed in the same way, so it is impossible to tell them apart visually.

Each data type has a function of the same name for creating values of that type, namely `int()`, `float()` and `str()`. These functions can be used to convert between data types:
###Code
print(int("10") + 4)
print(float("10.3") + 4)
print(str(10/3) + "aupa")
int(3.9)
###Output
14
14.3
3.3333333333333335aupa
###Markdown
Booleans

Logical or boolean data can take only two possible values: `True` or `False`. The data type corresponding to booleans is `bool`:
###Code
type(True)
type(False)
###Output
_____no_output_____
###Markdown
In Python, anything can be turned into a boolean value with the `bool()` function:

* Numbers: `True` when the value is different from `0`.
* Sequences: `True` when they are not empty.
* `None`: `False`.

As we will see later, this ability to turn any object into a boolean can be useful (and also a source of errors...)
###Code
print('bool(1):', bool(1))
print('bool(0):', bool(0))
print('bool(0.0):', bool(0.0))
print('bool("eta hau?"):', bool("eta hau?"))
print("bool('True'):", bool("True"))
print('bool("False"):', bool("False"))
print('bool(""):', bool(""))
print('bool(None):', bool(None))
###Output
bool(1): True
bool(0): False
bool(0.0): False
bool("eta hau?"): True
bool('True'): True
bool("False"): True
bool(""): False
bool(None): False
|
notebooks/2C_mg_hichip_group_diff_analysis.ipynb | ###Markdown
5/6/2020, updated 06/25/2021: removed GDSD0 and GDSD3
###Code
import os, glob
import pandas as pd
normal_tissues = ['Airway','Astrocytes','Bladder','Colon','Esophageal',
# 'GDSD0',
# 'GDSD3',
'GDSD6',
'GM12878',
'HMEC',
'Melanocytes',
'Ovarian',
'Pancreas',
'Prostate',
'Renal',
'Thyroid',
'Uterine']
def get_alpha(row):
    # build an order-independent loop identifier: sorted anchor pair joined by '::'
    return '::'.join(sorted([row['source'], row['target']]))
tissue_loop_dict = {}
for file in sorted(glob.glob('../data/interim/merged/loops/*csv')):
filename = os.path.basename(file)
tissue = filename.split('.')[0]
if tissue in normal_tissues:
print(file)
df = pd.read_csv(file, index_col=0)
df['loop_name'] = df[['source','target']].apply(get_alpha,axis=1)
per_tissue_loop_dict = pd.Series(df['count'].values, index=df.loop_name.values).to_dict()
tissue_loop_dict[tissue] = per_tissue_loop_dict
tissue_loop_df = pd.DataFrame(tissue_loop_dict).fillna(0)
save_dir = '../data/processed/fig1/hichip'
if not os.path.exists(save_dir):
os.makedirs(save_dir)
tissue_loop_df.to_csv(os.path.join(save_dir,'tissue_loop_df.csv'))
###Output
_____no_output_____
###Markdown
edited 08/20/2020
###Code
import os, glob
import pandas as pd
normal_tissues = ['Astrocytes','SL_D0','SL_D2','SLC_D0', 'SLC_D2','H9_D0','H9_D2','H9_D10','H9_D28']
def get_alpha(row):
return '::'.join(sorted([row['source'], row['target']]))
%%time
# takes a couple min
tissue_loop_dict = {}
for file in sorted(glob.glob('../data/interim/merged/loops/*csv')):
filename = os.path.basename(file)
tissue = filename.split('.')[0]
if tissue in normal_tissues:
print(file)
df = pd.read_csv(file, index_col=0)
df['loop_name'] = df[['source','target']].apply(get_alpha,axis=1)
per_tissue_loop_dict = pd.Series(df['count'].values, index=df.loop_name.values).to_dict()
tissue_loop_dict[tissue] = per_tissue_loop_dict
tissue_loop_df = pd.DataFrame(tissue_loop_dict).fillna(0)
save_dir = '../data/processed/fig1/hichip'
if not os.path.exists(save_dir):
os.makedirs(save_dir)
tissue_loop_df.to_csv(os.path.join(save_dir,'tissue_loop_df.csv'))
###Output
_____no_output_____ |
notebooks/datasets/3. Datasets-Journal.ipynb | ###Markdown
Loading custom Journal datasets

Used for doing some one-shot execution.
###Code
import datasets
from datasets import load_dataset, Dataset, concatenate_datasets
def save_jsonl(json_list, file_path):
import json
# just open as usual
output_json = open(file_path, "w")
with output_json as output:
for json_line in json_list:
json.dump(json_line, output)
output.write('\n')
def load_list(file_path, verbose=True):
import json
import logging
# just open as usual
input_json = open(file_path, "r")
if verbose:
logging.info("You choose to only use unzipped files")
with input_json:
json_lines = input_json.read()
json_list = json.loads(json_lines)
return json_list
def load_jsonl(file_path, verbose=True):
import json
import logging
# just open as usual
input_json = open(file_path, "r")
if verbose:
logging.info("You choose to only use unzipped files")
json_list = []
with input_json:
for json_line in input_json.readlines():
json_list.append(json.loads(json_line))
return json_list
PATH = '/home/vivoli/Thesis/data/s2orc-journal/'
files = !ls $PATH
print(files)
import pandas as pd
datasets = dict()
for file in files:
dataset_name, extention = file.split('.')
json_list = load_jsonl(f'/home/vivoli/Thesis/data/s2orc-journal/{file}')
json_dict = pd.DataFrame(json_list)
dataset = Dataset.from_pandas(json_dict)
datasets[dataset_name] = dataset
datasets
###Output
_____no_output_____ |
LogisticRegression.ipynb | ###Markdown
Apply StandardScaler without changing dtypes.
###Code
scalers = []
for col in X_train.columns:
x = X_train[col]
if 'uint8' not in str(x.dtypes):
scaler = StandardScaler()
x = x.to_numpy()
x = scaler.fit_transform(x.reshape((-1, 1)))
X_train[col] = x.reshape((-1, ))
scalers.append(scaler)
else:
scalers.append(None)
for scaler, col in zip(scalers, X_test.columns):
if scaler is None:
continue
x = X_test[col].to_numpy()
x = scaler.transform(x.reshape((-1, 1)))
X_test[col] = x.reshape((-1, ))
###Output
_____no_output_____
###Markdown
One vs Rest. Weighted classes. Stochastic Average Gradient Solver. 1000 iterations.
###Code
clf = LogisticRegression(random_state=random_state, solver='sag',
class_weight='balanced', max_iter=1000,
multi_class='ovr', n_jobs=AUTO)
start_training_time = perf_counter()
_ = clf.fit(X_train, y_train)
end_training_time = perf_counter()
print(f'Time taken to fit the model {end_training_time - start_training_time} seconds')
clf_1 = LogisticRegression(random_state=random_state, solver='sag',
class_weight='balanced', max_iter=1000,
multi_class='multinomial', n_jobs=AUTO)
start_training_time = perf_counter()
_ = clf_1.fit(X_train, y_train)
end_training_time = perf_counter()
print(f'Time taken to fit the model {end_training_time - start_training_time} seconds')
y_pred = clf.predict(X_test)
acc = balanced_accuracy_score(y_test, y_pred)
acc
acc_unbalanced = accuracy_score(y_test, y_pred)
acc_unbalanced
unbalanaced_f1 = f1_score(y_test, y_pred, average='macro')
unbalanaced_f1
weighted_f1 = f1_score(y_test, y_pred, average='weighted')
weighted_f1
y_pred_1 = clf_1.predict(X_test)
acc_1 = balanced_accuracy_score(y_test, y_pred_1)
acc_1
acc_unbalanced_1 = accuracy_score(y_test, y_pred_1)
acc_unbalanced_1
weighted_f1_1 = f1_score(y_test, y_pred_1, average='weighted')
weighted_f1_1
unbalanced_f1_1 = f1_score(y_test, y_pred_1, average='macro')
unbalanced_f1_1
###Output
_____no_output_____
###Markdown
Because the Amount column has a much larger scale and variance than the other columns of the dataset, I will scale the amount values to lie between 0 and 1 so that the model can perform better.
###Code
x = data[['Amount']].values.astype(float)
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
data['Amount_scaled'] = x_scaled
data.head()
data = data.drop('Amount',axis=1)
data.head()
df_train, df_test = train_test_split(data, test_size=0.20, random_state=0)
X_train = df_train.drop('Class', axis=1)
y_train = df_train['Class']
X_test = df_test.drop('Class', axis=1)
y_test = df_test['Class']
model = LogisticRegression(max_iter = 5000, class_weight = 'balanced').fit(X_train, y_train)
predictions = model.predict(X_test)
features = ['V1','V2','V3','V4','V5','V6','V7','V8','V9','V10','V11','V12','V13','V14','V15','V16','V17','V18','V19','V20','V21','V22','V23','V24','V25','V26','V27','V28','Amount_scaled']
for feature, coef in zip(features, model.coef_.ravel()):
print(feature, coef)
df_train['proba'] = model.predict_proba(X_train)[:, 1]
df_train['proba_decile'] = pd.qcut(df_train['proba'], q=10, labels=False)
A, bins = pd.qcut(df_train['proba'], q=10, labels=False, retbins = True)
bins
df_test['proba'] = model.predict_proba(X_test)[:, 1]
df_test['proba_decile'] = pd.cut(df_test['proba'], bins, labels=False, right=True)
plot2 = df_test[['proba', 'proba_decile']].groupby("proba_decile").mean().plot(legend = None)
plot2.set_ylabel("Probability")
plot2.set_xlabel("Decile")
df_test.head()
import matplotlib.ticker as ticker
plotdata = df_test[['Class', 'proba_decile']].groupby("proba_decile").mean().reset_index()
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(plotdata.proba_decile, plotdata.Class, marker = 'o')
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_ylabel("% fraudulent transactions", labelpad = 15)
ax.yaxis.set_major_formatter(ticker.PercentFormatter(xmax = 1.0, decimals = 0))
ax.set_xlabel("Predicted Probability Decile", labelpad = 15)
ax.set_title("Percentage of fraudulent transactions by Probability Decile", pad = 20)
for x,y in zip(plotdata.proba_decile, plotdata.Class):
label = "{:.1f}%" .format(y*100)
ax.annotate(label, # this is the text
(x,y), # this is the point to label
textcoords="offset points", # how to position the text
xytext=(-19,7), # distance from text to points (x,y)
ha='center') # horizon
accuracy = accuracy_score(y_test, predictions)
print(accuracy)
precision = precision_score(y_test, predictions)
print(precision)
f1 = f1_score(y_test, predictions)
print(f1)
rocauc = roc_auc_score(y_test, predictions)
print(rocauc)
recall = recall_score(y_test, predictions)
print(recall)
cm = confusion_matrix(y_test, model.predict(X_test))
cm
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(cm)
ax.grid(False)
ax.xaxis.set(ticks=(0, 1), ticklabels=('Predicted 0s', 'Predicted 1s'))
ax.yaxis.set(ticks=(0, 1), ticklabels=('Actual 0s', 'Actual 1s'))
ax.set_ylim(1.5, -0.5)
for i in range(2):
for j in range(2):
ax.text(j, i, cm[i, j], ha='center', va='center', color='red')
plt.show()
print(classification_report(y_test, model.predict(X_test)))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 56861
1 0.83 0.53 0.65 101
accuracy 1.00 56962
macro avg 0.91 0.77 0.83 56962
weighted avg 1.00 1.00 1.00 56962
###Markdown
Logistic Regression with Simple Kernels

In this tutorial, we explain how to use our feature maps with logistic regression on binary and multi-class classification problems. We start by importing the standard libraries and our feature maps.
###Code
import numpy as np
from numpy import linalg
import FeatureMaps as maps
import DataReader as DR
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
Here, we use the Splice binary classification dataset and apply standard scaling.
###Code
#suppress the convergence warning for LogisticRegression
import warnings
warnings.filterwarnings("ignore")
X_train, X_test, y_train, y_test = DR.Splice()
scaler=StandardScaler()
X_train=scaler.fit_transform(X_train)
X_test=scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
In the following code block, we apply the logistic regression with $\phi_{p,1}(\mathbf{x}~|~\mathbf{a})$ and print the test prediction score.
###Code
clf= LogisticRegression(C=0.01, solver='lbfgs', penalty='l2', dual=False).fit(maps.phi_p_1(X_train, p=1), y_train)
print("Test Score: ",round(100 * accuracy_score(y_test, clf.predict(maps.phi_p_1(X_test, p=1))), 2))
###Output
Test Score: 85.56
###Markdown
Here, we also reproduce the results given in Figure 5 of our paper with the logistic regression. The implementations can be found in these modules.
###Code
import Figure_5_Phoneme_LR
import Figure_5_FourClass_LR
###Output
_____no_output_____
###Markdown
`Scikit-learn` also allows using a multinomial loss for multi-class classification problems. Next, we use the Digits dataset with the feature map $\phi_{p,M}(\mathbf{x}_i~|~\mathbf{a}_i^{(1)}, \dots, \mathbf{a}_i^{(M)})$, choosing the set of anchor points as the centers of each class.
###Code
from sklearn.model_selection import train_test_split
from sklearn import datasets
digits = datasets.load_digits(n_class=10)
n_samples = len(digits.data)
data = digits.data / 16.
data -= data.mean(axis=0)
def map_min_M_1(X, anchorsets):
X1=np.copy(X)
for anchors in anchorsets:
temp = []
for a in anchors:
temp.append(linalg.norm(X - a, axis=1, ord=1))
temp = np.array(temp)
X1 = np.hstack((X1, (np.min(temp, axis=0).reshape((len(X), 1)))))
return X1
X_train,X_test,y_train,y_test=train_test_split(data, digits.target,test_size=0.3, random_state=42, stratify= digits.target, shuffle=True)
y_unique=np.unique(y_train)
anchor_set=[np.mean(X_train[y_train==i],axis=0) for i in y_unique]
clf= LogisticRegression(C=10, solver='newton-cg', penalty='l2', dual=False, multi_class='multinomial').fit(map_min_M_1(X_train,anchorsets=anchor_set), y_train)
print("Test Score: ",round(100 * accuracy_score(y_test, clf.predict(map_min_M_1(X_test,anchorsets=anchor_set))), 2))
clf= LogisticRegression(C=10, solver='newton-cg', penalty='l2', dual=False, multi_class='multinomial').fit(X_train, y_train)
print("Test Score: ",round(100 * accuracy_score(y_test, clf.predict(X_test)), 2))
###Output
Test Score: 96.48
###Markdown
We also reproduce the results of Table 2 and Table 3 in our paper but this time with logistic regression.
###Code
import LR_Table_2_3_Run
###Output
***** Table 2: Test Accuracies
***Splice
LIN 85.66 $\phi_{1,1}$ 86.3 $\phi_{2,1}$ 86.16 $\phi_{1,d}$ 91.45 $\phi_{2,d}$ 89.52
***Wilt
LIN 71.4 $\phi_{1,1}$ 83.2 $\phi_{2,1}$ 80.6 $\phi_{1,d}$ 85.0 $\phi_{2,d}$ 83.6
***Guide 1
LIN 95.6 $\phi_{1,1}$ 96.12 $\phi_{2,1}$ 96.08 $\phi_{1,d}$ 96.6 $\phi_{2,d}$ 96.18
***Spambase
LIN 92.69 $\phi_{1,1}$ 92.25 $\phi_{2,1}$ 93.34 $\phi_{1,d}$ 95.0 $\phi_{2,d}$ 93.12
***Phoneme
LIN 73.3 $\phi_{1,1}$ 73.37 $\phi_{2,1}$ 73.49 $\phi_{1,d}$ 77.74 $\phi_{2,d}$ 77.19
***Magic
LIN 79.51 $\phi_{1,1}$ 81.02 $\phi_{2,1}$ 80.46 $\phi_{1,d}$ 85.37 $\phi_{2,d}$ 85.05
***Adult
LIN 84.99 $\phi_{1,1}$ 84.98 $\phi_{2,1}$ 85.04 $\phi_{1,d}$ 84.99 $\phi_{2,d}$ 84.96
***** Table 3: Training Times in Seconds
***Splice
LIN 0.0033 $\phi_{1,1}$ 0.025 $\phi_{2,1}$ 0.0169 $\phi_{1,d}$ 0.0122 $\phi_{2,d}$ 0.0096
***Wilt
LIN 0.0094 $\phi_{1,1}$ 0.014 $\phi_{2,1}$ 0.012 $\phi_{1,d}$ 0.0272 $\phi_{2,d}$ 0.0194
***Guide 1
LIN 0.0055 $\phi_{1,1}$ 0.0109 $\phi_{2,1}$ 0.0084 $\phi_{1,d}$ 0.0107 $\phi_{2,d}$ 0.0081
***Spambase
LIN 0.0334 $\phi_{1,1}$ 0.0396 $\phi_{2,1}$ 0.0399 $\phi_{1,d}$ 0.0379 $\phi_{2,d}$ 0.044
***Phoneme
LIN 0.0047 $\phi_{1,1}$ 0.0092 $\phi_{2,1}$ 0.0092 $\phi_{1,d}$ 0.0104 $\phi_{2,d}$ 0.01
***Magic
LIN 0.0183 $\phi_{1,1}$ 0.0403 $\phi_{2,1}$ 0.0866 $\phi_{1,d}$ 0.0564 $\phi_{2,d}$ 0.0676
***Adult
LIN 0.3306 $\phi_{1,1}$ 0.6741 $\phi_{2,1}$ 0.6861 $\phi_{1,d}$ 1.0598 $\phi_{2,d}$ 1.0775
###Markdown
###Code
import numpy as np
x1 = np.array([1,3]) # First data point
y1 = 0
x1[1]
w1 = 0.01
w2 = -0.03
b = 0
lr = 0.001
def f(x):
"""Sinir ağını tanımla"""
z = w1 * x[0] + w2*x[1] + b
ye = 1/(1 + np.exp(-z))
return ye
f(x1)
def l(ye,y):
if y == 1:
return -ye
else :
return -(1-ye)
l(0.4,1)
def dl(y):
if y== 1:
return -1
else :
return 1
dl(1)
def dye(ye):
    # derivative of the prediction ye with respect to z (sigmoid derivative)
    return ye*(1-ye)
def dlw1(x):
    return x[0]  # derivative of z with respect to w1
def dlw2(x):
    return x[1]  # derivative of z with respect to w2
def dlb():
    return 1     # derivative of z with respect to b
# gradient descent via the chain rule: dL/dw = dL/dye * dye/dz * dz/dw
# (the number of iterations below is an arbitrary choice for this sketch)
for i in range(100):
    ye = f(x1)
    grad = dl(y1) * dye(ye)
    w1 = w1 - lr * grad * dlw1(x1)
    w2 = w2 - lr * grad * dlw2(x1)
    b = b - lr * grad * dlb()
###Output
_____no_output_____
###Markdown
Logistic Regression

Jose Alberto Gonzalez Arteaga, A01038061
###Code
#imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from scipy.spatial import distance
from sklearn.linear_model import LogisticRegression
#Batch Gradient Descent Algorithm
def sigmoid(x):
return 1.0 / (1 + np.exp(-x))
def lr_hypothesis(x, theta):
return np.dot(x,theta)
#return hb_opt =>
def batchGradientDescent(X, y, b0 = 0.5, ALPHA = 0.25, max_it=5000,
threshold = 1 * pow(10,-4)):
#prepare data
X = X.values
y = y.values
zm, zn = X.shape
z = np.ones(zm)
z = z.reshape(zm, 1)
X = np.append(z,X,axis=1)
m, n = X.shape
theta = np.zeros(n)
theta = theta.reshape(n,1)
y = y.reshape(-1,1)
diff = 1
j = 0
while j < max_it and diff > threshold:
last_t = theta
infunc1 = sigmoid(lr_hypothesis(X, theta)) - y
gradient = np.dot(X.T, infunc1) / m
theta = theta - (ALPHA / m) * gradient
diff = np.linalg.norm(last_t-theta)
j+=1
return theta, j
#Testing functions
#return if classify in 1 or 0.
def classify(x):
return int(x > 0.5)
#compare data
def compare(y_hat, y):
return np.where(y_hat == y, 0, 1)
#return error
def error(y_hat, y, T):
return 1 / T * sum(compare(y_hat, y))
#Apply model with values to predict probability of 1.
def predict(model, X):
X = X.values
X = np.insert(X, 0, 1.0)
return sigmoid(np.dot(model.T, X))
###Output
_____no_output_____
###Markdown
Gender: the Gender dataset

Classify whether a person is Male or Female from the following attributes:
* Weight
* Height

using Batch Gradient Descent.
###Code
#read data
name_file = input('Give name of the gender case file with (.txt):')
gender_data = pd.read_csv(name_file)
###Output
Give name of the gender case file with (.txt):Gender.txt
###Markdown
Exploratory data analysis (gender)
###Code
#inspect randomly data
gender_data.sample(n=5)
#Describe variables
gender_data.describe()
#Any missed values
gender_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Preprocessing data (gender)
###Code
#Change categorical variable to numerical.
gender_data = pd.get_dummies(gender_data, columns=['Gender'])
del gender_data['Gender_Male']
gender_data.head()
# Split in train / test data
# X => (Height, Weight)
# y => Gender_Female => (Male=0, Female=1)
X = gender_data[['Height','Weight']]
y = gender_data['Gender_Female']
t_size = float(input('Give value of split test size (ex. 0.2): '))
rand_num = int(input('Give value of initial random generator: '))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=t_size, random_state=rand_num)
###Output
Give value of split test size (ex. 0.2): 0.2
Give value of initial random generator: 32
###Markdown
Logistic Regression Gender
###Code
#train model
#parms:
# X => X_train model data
# y => y_train model data
print('Gender dataset: batch Gradient Descent Algorithm')
b_init = float(input('Give value of beta init (0,1): '))
ALPHA = float(input('Give value of ALPHA (0,1): '))
max_it = int(input('Give value of the limit of iterations: '))
threshold = float(input('Give value of threshold (ex. 0.0001): '))
model, j = batchGradientDescent(X_train, y_train, b_init, ALPHA, max_it, threshold)
print('B vector: ')
print(model)
print('iterations: ', j)
#Test model
#predict values
y_predicted_value = X_test.apply(lambda x: predict(model, x), axis=1)
#Classify values
y_predicted = y_predicted_value.apply(classify)
#error
print('error: ', error(y_predicted, y_test, y_predicted.size))
#plot height
plt.clf()
plt.title("Height vs preddicted classification (1=Female, 0=Male)")
plt.scatter(X_test['Height'], y_predicted, color='blue', zorder=5, alpha=0.3)
plt.scatter(X_test['Height'], y_test, color='red', zorder=2, alpha=0.3)
plt.ylabel('y')
plt.xlabel('Height')
plt.axhline(.5, color='.5')
plt.legend(('Classification', 'predicted', 'real'), fontsize='small')
plt.tight_layout()
plt.show()
#plot weight
plt.clf()
plt.title("Weight vs preddicted classification (1=Female, 0=Male)")
plt.scatter(X_test['Weight'], y_predicted, color='blue', zorder=5, alpha=0.3)
plt.scatter(X_test['Weight'], y_test, color='red', zorder=2, alpha=0.3)
plt.ylabel('y')
plt.xlabel('Weight')
plt.axhline(.5, color='.5')
plt.legend(('Classification', 'predicted', 'real'), fontsize='small')
plt.tight_layout()
plt.show()
#Logistic Regression with SciKit-Learn
model = LogisticRegression()
model.fit(X_train, y_train)
print("coef with SciKit-Learn model:", model.coef_)
print("bias with SciKit-Learn model:", model.intercept_)
#error with SciKit-Learn
y_hat = model.predict(X_test)
print('error with SciKit-Learn model:', error(y_hat, y_test, y_hat.size))
#Create dateframe and save in file
y_predicted=y_predicted.rename('y_predicted')
y_test = y_test.rename('y_real')
y_predicted_value = y_predicted_value.rename("success rate")
df = pd.concat([X_test, y_predicted, y_test, y_predicted_value], axis=1)
df.to_csv('results'+name_file[:-4]+".csv")
###Output
_____no_output_____
###Markdown
Credit Card Default: the Credit Card Default dataset

Identify whether the customer will default on their credit card payment, using the following attributes:
* ID: Customer id.
* Default: Yes/No, whether the customer defaulted.
* Student: Yes/No, whether the customer is a student.
* Balance: Average balance remaining on the credit card after the monthly payment.
* Income: Income of the customer.

Using logistic regression.
###Code
#Import dataset (Need to be in the project directory)
#read data
name_file = input('Give name of the credit case file with (.txt):')
credit_data = pd.read_csv(name_file, sep='\t')
###Output
Give name of the credit case file with (.txt):Default.txt
###Markdown
Exploratory data analysis
###Code
#Inspect data
credit_data.sample(n=5)
#Describe data
credit_data.describe()
#Any missed values
credit_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Pre-processing data
###Code
#Change categorical variable to numerical.
credit_data = pd.get_dummies(credit_data, columns=['student', 'default'])
del credit_data['student_No']
del credit_data['default_No']
credit_data.head()
# Split in train / test data
# X => (student_Yes, balance, income)
# y => default_Yes
X = credit_data[['student_Yes','balance', 'income']]
y = credit_data['default_Yes']
t_size = float(input('Give value of split test size (ex. 0.2): '))
rand_num = int(input('Give value of initial random generator: '))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=t_size, random_state=rand_num)
###Output
Give value of split test size (ex. 0.2): 0.2
Give value of initial random generator: 42
###Markdown
Logistic Regression: Credit data
###Code
#train model
#parms:
# X => X_train model data
# y => y_train model data
print('Credit dataset: batch Gradient Descent Algorithm')
b_init = float(input('Give value of beta init (0,1): '))
ALPHA = float(input('Give value of ALPHA (0,1): '))
max_it = int(input('Give value of the limit of iterations: '))
threshold = float(input('Give value of threshold (ex. 0.0001): '))
model, j = batchGradientDescent(X_train, y_train, b_init, ALPHA, max_it, threshold)
print('B vector: ')
print(model)
print("iterations: ", j)
#Test model
#predict values
y_predicted_value = X_test.apply(lambda x: predict(model, x), axis=1)
#Classify values
y_predicted = y_predicted_value.apply(classify)
#error
print('Error: ', error(y_predicted, y_test, y_predicted.size))
#Create dateframe and save in file
y_predicted=y_predicted.rename('y_predicted')
y_test = y_test.rename('y_real')
y_predicted_value = y_predicted_value.rename("success rate")
df = pd.concat([X_test, y_predicted, y_test, y_predicted_value], axis=1)
df.to_csv('results'+name_file[:-4]+".csv")
###Output
_____no_output_____
###Markdown
LogisticRegression
###Code
from __future__ import division
from IPython.display import display
from matplotlib import pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import random, sys, os, re
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import StratifiedKFold
from sklearn.grid_search import RandomizedSearchCV, GridSearchCV
from sklearn.cross_validation import cross_val_predict, permutation_test_score
SEED = 1091
scale = False
minmax = False
normd = False
nointercept = True
engineering = True
N_CLASSES = 2
submission_filename = "../submissions/submission_LogisticRegressionSEED1091.csv"
###Output
_____no_output_____
###Markdown
Load the training data
###Code
from load_blood_data import load_blood_data
y_train, X_train = load_blood_data(train=True, SEED = SEED,
scale = scale,
minmax = minmax,
norm = normd,
nointercept = nointercept,
engineering = engineering)
###Output
_____no_output_____
###Markdown
Fit the model
###Code
StatifiedCV = StratifiedKFold(y = y_train,
n_folds = 10,
shuffle = True,
random_state = SEED)
%%time
random.seed(SEED)
clf = LogisticRegression(penalty = 'l2', # 'l1', 'l2'
dual = False,
C = 1.0,
fit_intercept = True,
solver = 'liblinear', # 'newton-cg', 'lbfgs', 'liblinear', 'sag'
max_iter = 100,
intercept_scaling = 1,
tol = 0.0001,
class_weight = None,
random_state = SEED,
multi_class = 'ovr',
verbose = 0,
warm_start = False,
n_jobs = -1)
# param_grid = dict(C = [0.0001, 0.001, 0.01, 0.1],
# fit_intercept = [True, False],
# penalty = ['l1', 'l2'],
# #solver = ['newton-cg', 'lbfgs', 'liblinear', 'sag'],
# max_iter = [50, 100, 250])
# grid_clf = GridSearchCV(estimator = clf,
# param_grid = param_grid,
# n_jobs = 1,
# cv = StatifiedCV).fit(X_train, y_train)
# print("clf_params = {}".format(grid_clf.best_params_))
# print("score: {}".format(grid_clf.best_score_))
# clf = grid_clf.best_estimator_
clf_params = {'penalty': 'l2', 'C': 0.001, 'max_iter': 50, 'fit_intercept': True}
clf.set_params(**clf_params)
clf.fit(X_train, y_train)
# from sklearn_utilities import GridSearchHeatmap
# GridSearchHeatmap(grid_clf, y_key='learning_rate', x_key='n_estimators')
# from sklearn_utilities import plot_validation_curves
# plot_validation_curves(grid_clf, param_grid, X_train, y_train, ylim = (0.0, 1.05))
%%time
try:
from sklearn_utilities import plot_learning_curve
except:
import imp, os
util = imp.load_source('sklearn_utilities', os.path.expanduser('~/Dropbox/Python/sklearn_utilities.py'))
from sklearn_utilities import plot_learning_curve
plot_learning_curve(estimator = clf,
title = None,
X = X_train,
y = y_train,
ylim = (0.0, 1.10),
cv = 10,
train_sizes = np.linspace(.1, 1.0, 5),
n_jobs = -1)
plt.show()
###Output
_____no_output_____
###Markdown
Training set predictions
###Code
%%time
train_preds = cross_val_predict(estimator = clf,
X = X_train,
y = y_train,
cv = StatifiedCV,
n_jobs = -1,
verbose = 0,
fit_params = None,
pre_dispatch = '2*n_jobs')
y_true, y_pred = y_train, train_preds
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred, labels=None)
print cm
try:
from sklearn_utilities import plot_confusion_matrix
except:
import imp, os
util = imp.load_source('sklearn_utilities', os.path.expanduser('~/Dropbox/Python/sklearn_utilities.py'))
from sklearn_utilities import plot_confusion_matrix
plot_confusion_matrix(cm, ['Did not Donate','Donated'])
accuracy = round(np.trace(cm)/float(np.sum(cm)),4)
misclass = 1 - accuracy
print("Accuracy {}, mis-class rate {}".format(accuracy,misclass))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import log_loss
from sklearn.metrics import f1_score
fpr, tpr, thresholds = roc_curve(y_true, y_pred, pos_label=None)
plt.figure(figsize=(10,6))
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr, tpr)
AUC = roc_auc_score(y_true, y_pred, average='macro')
plt.text(x=0.6,y=0.4,s="AUC {:.4f}"\
.format(AUC),
fontsize=16)
plt.text(x=0.6,y=0.3,s="accuracy {:.2f}%"\
.format(accuracy*100),
fontsize=16)
logloss = log_loss(y_true, y_pred)
plt.text(x=0.6,y=0.2,s="LogLoss {:.4f}"\
.format(logloss),
fontsize=16)
f1 = f1_score(y_true, y_pred)
plt.text(x=0.6,y=0.1,s="f1 {:.4f}"\
.format(f1),
fontsize=16)
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.show()
%%time
score, permutation_scores, pvalue = permutation_test_score(estimator = clf,
X = X_train.values.astype(np.float32),
y = y_train,
cv = StatifiedCV,
labels = None,
random_state = SEED,
verbose = 0,
n_permutations = 100,
scoring = None,
n_jobs = -1)
plt.figure(figsize=(20,8))
plt.hist(permutation_scores, 20, label='Permutation scores')
ylim = plt.ylim()
plt.plot(2 * [score], ylim, '--g', linewidth=3,
label='Classification Score (pvalue {:.4f})'.format(pvalue))
plt.plot(2 * [1. / N_CLASSES], ylim, 'r', linewidth=7, label='Luck')
plt.ylim(ylim)
plt.legend(loc='center',fontsize=16)
plt.xlabel('Score')
plt.show()
# find mean and stdev of the scores
from scipy.stats import norm
mu, std = norm.fit(permutation_scores)
# format for scores.csv file
import re
algo = re.search(r"submission_(.*?)\.csv", submission_filename).group(1)
print("{: <26} , , {:.4f} , {:.4f} , {:.4f} , {:.4f} , {:.4f} , {:.4f}"\
.format(algo,accuracy,logloss,AUC,f1,mu,std))
###Output
LogisticRegressionSEED1091 , , 0.7743 , 7.7952 , 0.5637 , 0.2529 , 0.7600 , 0.0013
###Markdown
Predict leaderboard score with linear regression
###Code
# load the R extension
%load_ext rpy2.ipython
# see http://ipython.readthedocs.org/en/stable/config/extensions/index.html?highlight=rmagic
# see http://rpy.sourceforge.net/rpy2/doc-2.4/html/interactive.html#module-rpy2.ipython.rmagic
# Import python variables into R
%R -i accuracy,logloss,AUC,f1,mu,std
%%R
# read in the scores.csv file and perform a linear regression with it using this process's variables
score_data = read.csv('../input/scores.csv')
lm.fit = lm(leaderboard_score ~ accuracy + logloss + AUC + f1 + mu + std,
data = score_data,
na.action = na.omit)
slm.fit = step(lm.fit, direction = "both", trace=0)
predicted_leaderboard_score = predict(object = slm.fit,
newdata = data.frame(accuracy,logloss,AUC,f1,mu,std),
interval = "prediction", level = 0.99)
print(round(predicted_leaderboard_score,4))
###Output
_____no_output_____
###Markdown
--------------------------------------------------------------------------------------------

Test Set Predictions

Re-fit with the full training set
###Code
clf.set_params(**clf_params)
clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Load the test data
###Code
from load_blood_data import load_blood_data
X_test, IDs = load_blood_data(train=False, SEED = SEED,
scale = scale,
minmax = minmax,
norm = normd,
nointercept = nointercept,
engineering = engineering)
###Output
_____no_output_____
###Markdown
Predict the test set with the fitted model
###Code
y_pred = clf.predict(X_test)
print(y_pred[:10])
try:
y_pred_probs = clf.predict_proba(X_test)
print(y_pred_probs[:10])
donate_probs = [prob[1] for prob in y_pred_probs]
except Exception,e:
print(e)
donate_probs = [0.65 if x>0 else 1-0.65 for x in y_pred]
print(donate_probs[:10])
###Output
[0 0 0 0 1 1 0 0 0 0]
[[ 0.52391319 0.47608681]
[ 0.87801333 0.12198667]
[ 0.75376222 0.24623778]
[ 0.6585946 0.3414054 ]
[ 0.46953723 0.53046277]
[ 0.26572181 0.73427819]
[ 0.71737399 0.28262601]
[ 0.85620701 0.14379299]
[ 0.99785974 0.00214026]
[ 0.94865091 0.05134909]]
[0.47608681478511899, 0.12198667376092551, 0.24623778111483186, 0.34140540251126056, 0.53046276534857573, 0.7342781930436989, 0.28262601292104306, 0.14379298665066198, 0.0021402576206707878, 0.051349092332859211]
###Markdown
Create the submission file
###Code
assert len(IDs)==len(donate_probs)
f = open(submission_filename, "w")
f.write(",Made Donation in March 2007\n")
for ID, prob in zip(IDs, donate_probs):
f.write("{},{}\n".format(ID,prob))
f.close()
###Output
_____no_output_____
###Markdown
Encoding Categorical columns
###Code
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
data_train["saddr_enc"]= le.fit_transform(data_train.saddr)
data_train["daddr_enc"]= le.fit_transform(data_train.daddr)
data_train["proto_enc"]= le.fit_transform(data_train.proto)
data_train["target_enc"]= le.fit_transform(data_train.target)
data_train.head()
# Dropping Redundant Columns
data_train.drop(['saddr','daddr','proto','target'], axis=1, inplace=True)
data_test["saddr_enc"]= le.fit_transform(data_test.saddr)
data_test["daddr_enc"]= le.fit_transform(data_test.daddr)
data_test["proto_enc"]= le.fit_transform(data_test.proto)
data_test["target_enc"]= le.fit_transform(data_test.target)
data_test.drop(['saddr','daddr','proto','target'], axis=1, inplace=True)
data_train['target_enc'].value_counts()
###Output
_____no_output_____
###Markdown
5 - DoS_UDP
1 - DDoS_TCP
2 - DDoS_UDP
4 - DoS_TCP
8 - Reconnaissance_Service_Scan
7 - Reconnaissance_OS_Fingerprint
3 - DoS_HTTP
0 - DDoS_HTTP
6 - Normal_Normal
###Code
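# Added for clarity (not part of the original analysis): the code-to-name mapping listed
# above can also be recovered programmatically, assuming `le` is still the LabelEncoder
# that was last fitted on the target column.
label_map = dict(enumerate(le.classes_))
print(label_map)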
###Output
_____no_output_____
###Markdown
Scaling
###Code
y = data_train['target_enc']
from sklearn.preprocessing import StandardScaler
scaler=StandardScaler()
features = data_train.iloc[:,:-1]
cols=features.columns
scaled_features= scaler.fit_transform(features)
data_train= pd.DataFrame(scaled_features,columns=cols)
data_train.head()
ytest = data_test['target_enc']
features = data_test.iloc[:,:-1]
cols=features.columns
scaled_features= scaler.fit_transform(features)
data_test= pd.DataFrame(scaled_features,columns=cols)
data_test.head()
y.value_counts()
x = data_train
xtest = data_test
###Output
_____no_output_____
###Markdown
Sampling
###Code
import imblearn
from imblearn.under_sampling import RandomUnderSampler
samp_strat= {5:70000, 1:70000, 2:70000, 4:65000, 8:58592, 7:14267, 3:1179, 0:786, 6:332}
random_under= RandomUnderSampler(sampling_strategy=samp_strat,random_state=1)
X_rus,y_rus = random_under.fit_resample(x,y)
pd.Series(y_rus).value_counts()
from imblearn.over_sampling import RandomOverSampler
samp_strat= {5:70000, 1:70000, 2:70000, 4:65000, 8:58592, 7:30000, 3:20000, 0:15000, 6:8000}
random_under= RandomOverSampler(sampling_strategy=samp_strat,random_state=1)
Xres,yres = random_under.fit_resample(X_rus,y_rus)
plt.figure(figsize=(10,5))
sns.countplot(yres,palette='magma')
from sklearn import model_selection
X_train, X_test, y_train, y_test = model_selection.train_test_split(Xres,yres, test_size=0.20, random_state=42, stratify=yres)
import time
start = time.time()
from sklearn.linear_model import LogisticRegression
model_1 = LogisticRegression(solver='liblinear')
model_1.fit(X_train, y_train)
pred_1= model_1.predict(X_test)
score1 = model_1.score(X_test, y_test)
end = time.time()
print(end - start, "seconds\n")
print("Accuracy of base model: ",score1)
pred_2 = model_1.predict(xtest)
score2 = model_1.score(xtest,ytest)
print("Accuracy of test model: ",score2)
from sklearn.metrics import multilabel_confusion_matrix
multilabel_confusion_matrix(y_test,pred_1)
from sklearn.model_selection import KFold,StratifiedKFold,cross_val_score
model_1 = LogisticRegression(solver='liblinear')
model_1.fit(X_train,y_train)
score3 = cross_val_score(model_1, X_train, y_train)
score3
score3.mean()
from sklearn.linear_model import LogisticRegression
logModel=LogisticRegression()
param_grid= [
{'solver' : ['lbfgs'],'penalty' : ['l1','l2'],'C':[0.0001,.009,0.01,1,5,10,25], 'max_iter' : [1000], 'n_jobs' : [100] } ]
from sklearn.model_selection import GridSearchCV
clf = GridSearchCV(logModel, param_grid = param_grid, cv=3, verbose=3)
best_clf = clf.fit(X_train,y_train)
best_clf.best_score_
best_clf.best_params_
import time
start = time.time()
from sklearn.linear_model import LogisticRegression
model_2 = LogisticRegression(C= 25, max_iter= 1000, n_jobs= 100, penalty='l2', solver= 'lbfgs')
model_2.fit(X_train, y_train)
pred_3= model_2.predict(X_test)
score5 = model_2.score(X_test, y_test)
end = time.time()
print(end - start, "seconds\n")
print("Accuracy of model with best parameters: ",score5)
pred_4 = model_2.predict(xtest)
score6 = model_2.score(xtest,ytest)
print("Accuracy of test model with best parameters: ",score6)
from sklearn.metrics import multilabel_confusion_matrix
multilabel_confusion_matrix(y_test,pred_3)
###Output
_____no_output_____
###Markdown
**Objective**

In this exercise we will use logistic regression to predict whether someone will buy a product after seeing an advertisement for it.
###Code
import pandas as pd
# Read the dataset -> convert it into a dataframe
data = pd.read_csv('Social_Network_Ads.csv')
import pandas as pd
df = pd.read_csv('Social_Network_Ads.csv')
df.head()
df.info()
# drop the columns that are not needed
data = df.drop(columns=['User ID'])
# run one-hot encoding with pd.get_dummies()
data = pd.get_dummies(data)
data
# separate the attributes and the label
predictions = ['Age' , 'EstimatedSalary' , 'Gender_Female' , 'Gender_Male']
X = data[predictions]
y = data['Purchased']
# normalize the data we have
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X)
scaled_data = scaler.transform(X)
scaled_data = pd.DataFrame(scaled_data, columns= X.columns)
scaled_data.head()
from sklearn.model_selection import train_test_split
# split the attributes and label into train and test sets
X_train, X_test, y_train, y_test = train_test_split(scaled_data, y, test_size=0.2, random_state=1)
from sklearn import linear_model
# train the model with the fit function
model = linear_model.LogisticRegression()
model.fit(X_train, y_train)
from sklearn.preprocessing import StandardScaler
# evaluate the model accuracy
model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Logistic Regression

Logistic regression is one of the most popular Machine Learning algorithms and comes under the Supervised Learning technique. It is used for predicting a categorical dependent variable from a given set of independent variables. Because logistic regression predicts a categorical dependent variable, the outcome must be a categorical or discrete value: Yes or No, 0 or 1, True or False, etc. However, instead of giving the exact value 0 or 1, it gives probabilistic values that lie between 0 and 1. Logistic Regression is very similar to Linear Regression except in how they are used: Linear Regression is used for solving regression problems, whereas Logistic Regression is used for solving classification problems. In logistic regression, instead of fitting a regression line, we fit an "S"-shaped logistic function, whose output saturates at the two extreme values (0 and 1).

A binary example with two possible species of a plant: consider Species 0 and Species 1. Logistic regression estimates the likelihood that a new observation belongs to Species 1 rather than Species 0. If, after fitting the logistic function with petal length on the x-axis and the species on the y-axis, the estimated likelihood for a new observation is 0.7, then the species of the plant is predicted to be 1. When there are more than two classes, Logistic Regression is typically extended using the One-vs-All method.
###Code
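# Illustrative sketch (added for clarity, not part of the original notebook): the logistic
# (sigmoid) function squashes a linear score z into a probability in (0, 1); thresholding
# that probability at 0.5 gives the predicted class. The score below is a made-up example.
import numpy as np
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))
z_example = 0.85                             # hypothetical linear score
p_species_1 = sigmoid(z_example)             # roughly 0.70, as in the text above
print(p_species_1, int(p_species_1 >= 0.5))  # probability ~0.7 -> predict species 1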
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
from sklearn.datasets import load_iris
iris = load_iris()
print(iris.data)
x = iris.data
y = iris.target
print(x)
print(y)
###Output
[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]
[5.4 3.9 1.7 0.4]
[4.6 3.4 1.4 0.3]
[5. 3.4 1.5 0.2]
[4.4 2.9 1.4 0.2]
[4.9 3.1 1.5 0.1]
[5.4 3.7 1.5 0.2]
[4.8 3.4 1.6 0.2]
[4.8 3. 1.4 0.1]
[4.3 3. 1.1 0.1]
[5.8 4. 1.2 0.2]
[5.7 4.4 1.5 0.4]
[5.4 3.9 1.3 0.4]
[5.1 3.5 1.4 0.3]
[5.7 3.8 1.7 0.3]
[5.1 3.8 1.5 0.3]
[5.4 3.4 1.7 0.2]
[5.1 3.7 1.5 0.4]
[4.6 3.6 1. 0.2]
[5.1 3.3 1.7 0.5]
[4.8 3.4 1.9 0.2]
[5. 3. 1.6 0.2]
[5. 3.4 1.6 0.4]
[5.2 3.5 1.5 0.2]
[5.2 3.4 1.4 0.2]
[4.7 3.2 1.6 0.2]
[4.8 3.1 1.6 0.2]
[5.4 3.4 1.5 0.4]
[5.2 4.1 1.5 0.1]
[5.5 4.2 1.4 0.2]
[4.9 3.1 1.5 0.2]
[5. 3.2 1.2 0.2]
[5.5 3.5 1.3 0.2]
[4.9 3.6 1.4 0.1]
[4.4 3. 1.3 0.2]
[5.1 3.4 1.5 0.2]
[5. 3.5 1.3 0.3]
[4.5 2.3 1.3 0.3]
[4.4 3.2 1.3 0.2]
[5. 3.5 1.6 0.6]
[5.1 3.8 1.9 0.4]
[4.8 3. 1.4 0.3]
[5.1 3.8 1.6 0.2]
[4.6 3.2 1.4 0.2]
[5.3 3.7 1.5 0.2]
[5. 3.3 1.4 0.2]
[7. 3.2 4.7 1.4]
[6.4 3.2 4.5 1.5]
[6.9 3.1 4.9 1.5]
[5.5 2.3 4. 1.3]
[6.5 2.8 4.6 1.5]
[5.7 2.8 4.5 1.3]
[6.3 3.3 4.7 1.6]
[4.9 2.4 3.3 1. ]
[6.6 2.9 4.6 1.3]
[5.2 2.7 3.9 1.4]
[5. 2. 3.5 1. ]
[5.9 3. 4.2 1.5]
[6. 2.2 4. 1. ]
[6.1 2.9 4.7 1.4]
[5.6 2.9 3.6 1.3]
[6.7 3.1 4.4 1.4]
[5.6 3. 4.5 1.5]
[5.8 2.7 4.1 1. ]
[6.2 2.2 4.5 1.5]
[5.6 2.5 3.9 1.1]
[5.9 3.2 4.8 1.8]
[6.1 2.8 4. 1.3]
[6.3 2.5 4.9 1.5]
[6.1 2.8 4.7 1.2]
[6.4 2.9 4.3 1.3]
[6.6 3. 4.4 1.4]
[6.8 2.8 4.8 1.4]
[6.7 3. 5. 1.7]
[6. 2.9 4.5 1.5]
[5.7 2.6 3.5 1. ]
[5.5 2.4 3.8 1.1]
[5.5 2.4 3.7 1. ]
[5.8 2.7 3.9 1.2]
[6. 2.7 5.1 1.6]
[5.4 3. 4.5 1.5]
[6. 3.4 4.5 1.6]
[6.7 3.1 4.7 1.5]
[6.3 2.3 4.4 1.3]
[5.6 3. 4.1 1.3]
[5.5 2.5 4. 1.3]
[5.5 2.6 4.4 1.2]
[6.1 3. 4.6 1.4]
[5.8 2.6 4. 1.2]
[5. 2.3 3.3 1. ]
[5.6 2.7 4.2 1.3]
[5.7 3. 4.2 1.2]
[5.7 2.9 4.2 1.3]
[6.2 2.9 4.3 1.3]
[5.1 2.5 3. 1.1]
[5.7 2.8 4.1 1.3]
[6.3 3.3 6. 2.5]
[5.8 2.7 5.1 1.9]
[7.1 3. 5.9 2.1]
[6.3 2.9 5.6 1.8]
[6.5 3. 5.8 2.2]
[7.6 3. 6.6 2.1]
[4.9 2.5 4.5 1.7]
[7.3 2.9 6.3 1.8]
[6.7 2.5 5.8 1.8]
[7.2 3.6 6.1 2.5]
[6.5 3.2 5.1 2. ]
[6.4 2.7 5.3 1.9]
[6.8 3. 5.5 2.1]
[5.7 2.5 5. 2. ]
[5.8 2.8 5.1 2.4]
[6.4 3.2 5.3 2.3]
[6.5 3. 5.5 1.8]
[7.7 3.8 6.7 2.2]
[7.7 2.6 6.9 2.3]
[6. 2.2 5. 1.5]
[6.9 3.2 5.7 2.3]
[5.6 2.8 4.9 2. ]
[7.7 2.8 6.7 2. ]
[6.3 2.7 4.9 1.8]
[6.7 3.3 5.7 2.1]
[7.2 3.2 6. 1.8]
[6.2 2.8 4.8 1.8]
[6.1 3. 4.9 1.8]
[6.4 2.8 5.6 2.1]
[7.2 3. 5.8 1.6]
[7.4 2.8 6.1 1.9]
[7.9 3.8 6.4 2. ]
[6.4 2.8 5.6 2.2]
[6.3 2.8 5.1 1.5]
[6.1 2.6 5.6 1.4]
[7.7 3. 6.1 2.3]
[6.3 3.4 5.6 2.4]
[6.4 3.1 5.5 1.8]
[6. 3. 4.8 1.8]
[6.9 3.1 5.4 2.1]
[6.7 3.1 5.6 2.4]
[6.9 3.1 5.1 2.3]
[5.8 2.7 5.1 1.9]
[6.8 3.2 5.9 2.3]
[6.7 3.3 5.7 2.5]
[6.7 3. 5.2 2.3]
[6.3 2.5 5. 1.9]
[6.5 3. 5.2 2. ]
[6.2 3.4 5.4 2.3]
[5.9 3. 5.1 1.8]]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
###Markdown
Separating the data into Train and Test groups
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size=0.25)
print(x_test.shape)
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(x_train,y_train)
print(logreg.predict([[6.7, 3.3, 5.7, 2.5]]))
print(iris.target_names)
print(logreg.predict_proba([[6.7,3.3,5.7,2.5]]))
predictions_logreg = logreg.predict(x_test)
from sklearn import metrics
performance_logreg = metrics.accuracy_score(y_test, predictions_logreg)
print(performance_logreg)
###Output
0.9473684210526315
###Markdown
Logistic Regression
###Code
class LogisticsRegression:
def sigmoid(self, x):
# shape(samples, 1)
z = ((np.dot(x, self.weight)) + self.bias)
# shape(samples, 1)
return (1 / (1 + np.exp(-z)))
def forward(self, x):
# shape(samples, 1)
return self.sigmoid(x)
def binary_crossEntropy(self, y, y_hat):
# shape(samples, 1)
return ((-1) * y * (np.log(y_hat))) - ((1 - y) * (np.log(1 - y_hat)))
def cost(self, y, y_hat):
# scalar
return np.mean(self.binary_crossEntropy(y, y_hat))
def train(self, x, y, alpha, epoch, random_state=-1):
# x : shape(#samples, #features)
# y : shape(#samples, 1)
m, n = x.shape[0], x.shape[1]
if random_state != -1:
np.random.seed(random_state)
# shape(#features, 1)
self.weight = np.random.randn(n,1)
# shape(1,1)
self.bias = np.zeros((1,1))
self.epoch = epoch
self.cost_list = []
for i in range(self.epoch):
# shape(#samples, 1)
y_hat = self.forward(x)
# scalar
loss = self.cost(y, y_hat)
self.cost_list.append(loss)
# Gradient
# dL_dw : dLoss/dweight (#features, 1)
dL_dw = (np.dot(x.T, (y_hat - y)))/m
# dL_db : dLoss/dbias (1, 1)
dL_db = np.sum((y_hat - y)/m)
# shape(#features, 1)
self.weight = self.weight - (alpha * dL_dw)
# shape(1, 1)
self.bias = self.bias - (alpha * dL_db)
def plot_convergence(self):
plt.plot([i for i in range(self.epoch)], self.cost_list)
plt.xlabel('Epochs'); plt.ylabel('Binary Cross Entropy')
def predict(self, x_test):
# shape(samples, 1)
y_hat = self.forward(x_test)
return np.where(y_hat>=0.5, 1, 0)
###Output
_____no_output_____
###Markdown
Utils
###Code
def train_test_split(x, y, size=0.2, random_state=-1):
    # simple holdout split: the first `size` fraction is used for validation (assumes pre-shuffled data)
if random_state != -1:
np.random.seed(random_state)
x_val = x[:int(len(x)*size)]
y_val = y[:int(len(x)*size)]
x_train = x[int(len(x)*size):]
y_train = y[int(len(x)*size):]
return x_train, y_train, x_val, y_val
###Output
_____no_output_____
###Markdown
Train
###Code
df = pd.read_csv('data/Iris_binary.csv')
df.head(2)
###Output
_____no_output_____
###Markdown
Data preparation
###Code
df.Species.unique()
###Output
_____no_output_____
###Markdown
Convert to numerical
###Code
df.Species.replace(('Iris-setosa', 'Iris-versicolor'), (0, 1), inplace=True)
###Output
_____no_output_____
###Markdown
Shuffle data
###Code
df = df.sample(frac=1, random_state=0)
###Output
_____no_output_____
###Markdown
Convert dataframe to numpy array
###Code
X, Y = df.drop(['Species'], axis=1).values, df.Species.values
Y = Y.reshape(-1, 1)
###Output
_____no_output_____
###Markdown
Split
###Code
X_train, Y_train, X_val, Y_val = train_test_split(X, Y, size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Train
###Code
l = LogisticsRegression()
l.train(X_train, Y_train, 0.01, 100, random_state=0)
l.plot_convergence()
###Output
_____no_output_____
###Markdown
Evaluate on validation data
###Code
Y_hat = l.predict(X_val)
confusion_matrix(Y_val, Y_hat)
print(classification_report(Y_val, Y_hat))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 10
1 1.00 1.00 1.00 10
accuracy 1.00 20
macro avg 1.00 1.00 1.00 20
weighted avg 1.00 1.00 1.00 20
###Markdown
Cross check with sklearn

Train
###Code
lr = LogisticRegression(max_iter=100, random_state=0)
lr.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
Evaluate on validation data
###Code
Y_hat = lr.predict(X_val)
confusion_matrix(Y_val, Y_hat)
print(classification_report(Y_val, Y_hat))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 10
1 1.00 1.00 1.00 10
accuracy 1.00 20
macro avg 1.00 1.00 1.00 20
weighted avg 1.00 1.00 1.00 20
###Markdown
Data preprocessing
###Code
import numpy as np
import pandas as pd
import statsmodels.api as sm
import sklearn as sk
import matplotlib as mpl
import matplotlib.pylab as plt
import seaborn as sns
sns.set()
sns.set_style("whitegrid")
sns.set_color_codes()
from imblearn.combine import *
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report, roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
df = pd.read_csv("noshow.csv")
df.tail()
region_weather = pd.read_csv("region_weather.csv")
region_weather.tail()
import time
region_weather.time = region_weather.time.apply(lambda x: time.strftime("%Y-%m-%d", time.localtime(int(x))))
###Output
_____no_output_____
###Markdown
Convert Age into a categorical (binned) variable
###Code
df["Age_bin"] = "0"
df["Age_bin"][df.Age < 10] = "0s"
df["Age_bin"][(df.Age >= 10) & (df.Age < 20)] = "10s"
df["Age_bin"][(df.Age >= 20) & (df.Age < 30)] = "20s"
df["Age_bin"][(df.Age >= 30) & (df.Age < 40)] = "30s"
df["Age_bin"][(df.Age >= 40) & (df.Age < 50)] = "40s"
df["Age_bin"][(df.Age >= 50) & (df.Age < 60)] = "50s"
df["Age_bin"][(df.Age >= 60) & (df.Age < 70)] = "60s"
df["Age_bin"][(df.Age >= 70) & (df.Age < 80)] = "70s"
df["Age_bin"][(df.Age >= 80) & (df.Age < 90)] = "80s"
df["Age_bin"][df.Age >= 90] = "90s"
###Output
_____no_output_____
###Markdown
Convert the date-format columns into categorical variables
###Code
df["Scheduled_date"] = df["ScheduledDay"].apply(lambda x: x[:10])
df['Scheduled_date'] = pd.to_datetime(df['Scheduled_date'])
df['Scheduled_time'] = df['ScheduledDay'].apply(lambda x: x[11:-1])
df['Scheduled_time'] = pd.to_timedelta(df['Scheduled_time'])
df['Appointment_date'] = df['AppointmentDay'].apply(lambda x: x[:10])
df['Appointment_date'] = pd.to_datetime(df['Appointment_date'])
df['Appointment_time'] = df['AppointmentDay'].apply(lambda x: x[11:-1])
df['Appointment_time'] = pd.to_timedelta(df['Appointment_time'])
df = df.drop(columns = "Appointment_time")
df["date_diff"] = df.Appointment_date - df.Scheduled_date
df.date_diff = df.date_diff.apply(lambda x: int(str(x).split("days")[0]))
###Output
_____no_output_____
###Markdown
Convert Handcap into a categorical variable
###Code
df.Handcap = pd.Categorical(df.Handcap)
###Output
_____no_output_____
###Markdown
Remove rows where Age is negative or date_diff is negative
###Code
df = df[df.Age >= 0]
df = df[df.date_diff >= 0]
df.tail()
region_weather.tail()
region_weather.time = pd.to_datetime(region_weather.time)
df_sum = pd.merge(df, region_weather, how="inner", left_on=["Neighbourhood", "Appointment_date"], right_on=["region", "time"])
df_sum.tail()
df_sum.groupby("No-show").mean()
plt.xticks(rotation=90)
ax = sns.countplot(df_sum.weather, hue=df_sum["No-show"])
###Output
_____no_output_____
###Markdown
Get the bar heights from the countplot to see the No-show ratio by weather
###Code
ls1 = [l.get_text() for l in ax.get_xticklabels()]
ls2 = [p.get_height() for p in ax.patches]
for i in range(13):
print(ls1[i],": ",ls2[i+13] / (ls2[i] + ls2[i+13]))
sns.distplot(df_sum.temperature)
###Output
_____no_output_____
###Markdown
Split into dfx and dfy
###Code
dfx = df_sum.drop(columns=["PatientId", "AppointmentID", "ScheduledDay", "AppointmentDay", "Age", "No-show", "Scheduled_date", "Scheduled_time", "Appointment_date", "time", "region"], axis=1)
dfy = df_sum["No-show"]
dfx.tail()
dfy.tail()
###Output
_____no_output_____
###Markdown
Apply one-hot encoding
###Code
dfx = pd.get_dummies(dfx, drop_first=True)
dfx.head()
###Output
_____no_output_____
###Markdown
Split into train and test data
###Code
X_train, X_test, y_train, y_test = train_test_split(dfx, dfy, test_size=0.3, random_state=0)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
X_samp, y_samp = SMOTEENN(random_state=0).fit_sample(X_train, y_train)
model_sk = LogisticRegression().fit(X_samp, y_samp)
# y_pred = model_sk.predict(X_test)
# set thresholds 0.3
y_pred = ["Yes" if x else "No" for x in (model_sk.predict_proba(X_test)[:,1] >= 0.6)]
confusion_matrix(y_test, y_pred, labels=["Yes", "No"])
recall = 4673 / (4673 + 2078)
fallout = 10674 / (10674 + 15731)
print("recall =", recall)
print("fallout =", fallout)
print(classification_report(y_test, y_pred ))
accuracy_score(y_test, y_pred)
model_sk.decision_function(X_test)
fpr, tpr, thresholds = roc_curve(y_test, model_sk.decision_function(X_test), pos_label="Yes")
fpr, tpr, thresholds
y_hat = model_sk.predict(X_test)
f_value = model_sk.decision_function(X_test)
df1 = pd.DataFrame(np.vstack([f_value, y_hat, y_test]).T,
columns=["f", "y_hat", "y"])
df1.sort_values("f", ascending=False).reset_index(drop=True)
plt.plot(fpr, tpr, 'o-', label="Logistic Regression")
plt.plot([0, 1], [0, 1], 'k--', label="random guess")
plt.plot([fallout], [recall], 'ro', ms=10)
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('Receiver operating characteristic example')
plt.show()
auc(fpr, tpr)
from sklearn.model_selection import validation_curve
model_sk.get_params()
param_range = np.logspace(-1, 0, 10)
param_range
%%time
train_scores, test_scores = \
validation_curve(LogisticRegression(), X_samp, y_samp,
param_name="C", param_range=param_range,
cv=10, scoring="accuracy", n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.semilogx(param_range, train_scores_mean, label="Training score", color="r")
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2, color="r")
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score", color="g")
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2, color="g")
plt.legend(loc="best")
plt.title("Validation Curve with LogisticRegression")
plt.xlabel("C")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
plt.show()
###Output
_____no_output_____
###Markdown
Variable descriptions
###Code
sns.distplot(df.Age)
df_sum.dtypes
age_se = df_sum.groupby(["Age_bin", "No-show"]).size()
age_se
age_dic = {}
age_ls = [age_se[i+1] / (age_se[i] + age_se[i+1]) for i in range(0, 20, 2)]
for i in range(0, 20, 2):
print(age_se.index.levels[0][i//2], ": ", age_se[i+1] / (age_se[i] + age_se[i+1]))
plt.figure(figsize=(16,4))
plt.xticks(rotation=90)
ax = sns.countplot(df.Neighbourhood)
ax.set_title("Neighbourbood")
plt.show()
sns.countplot(df.Handcap)
sns.countplot(df.Alcoholism)
sns.countplot(df.Diabetes)
sns.countplot(df.Hipertension)
sns.countplot(df.Scholarship)
a = df[["Gender", "No-show", "AppointmentID"]].groupby(["Gender", "No-show"]).agg('count')
p1 = plt.bar(x=["Man", "Woman"], height=[7725, 14594], color='r')
p2 = plt.bar(x=["Man", "Woman"], height=[30962, 57246], bottom=[7725, 14594], color='b')
plt.legend((p1, p2), ("Yes", "No"))
sns.countplot(df["No-show"], palette=['g','y'])
sns.distplot(df["Age"])
sns.distplot(df["Age"][df["No-show"] == "Yes"])
sns.distplot(df["Age"])
plt.legend(["Yes", "Total"])
sns.distplot(df.date_diff)
len(X_train), len(y_train)
###Output
_____no_output_____
###Markdown
OLS
###Code
df_st = pd.concat([dfx, dfy], axis=1)
df_st = df_st.rename(columns={"No-show": "No_show"})
df_st.tail()
df_st["No_show"][df_st["No_show"] == "Yes"] = 1
df_st["No_show"][df_st["No_show"] == "No"] = 0
df_st.No_show = df_st.No_show.astype('int')
df_st = pd.get_dummies(df_st)
df_st_train, df_st_test = train_test_split(df_st, test_size=0.3, random_state=0)
df_st_train.shape, df_st_test.shape
X_st, y_st = SMOTEENN(random_state=0).fit_sample(df_st_train.drop(columns="No_show"), df_st_train["No_show"])
X_st.shape
df_st_train.tail()
df_st_s = pd.DataFrame(X_st, columns=df_st_train.columns.drop("No_show"))
y_st.shape
df_st_s.shape
df_st_sy = pd.DataFrame(y_st)
df_f = pd.concat([df_st_s.drop(columns="temperature").astype('int'), df_st_s.temperature], axis=1)
model_med = sm.Logit(df_st_sy, df_f)
result_med = model_med.fit(method="ncg")
print(result_med.summary())
train_ypred = result_med.predict(df_st_test.drop(columns="No_show"))
accuracy_score(train_ypred, df_st_test["No_show"])
model_med = sm.Logit.from_formula("No_show ~ Scholarship + Diabetes + Alcoholism +\
SMS_received + C(Age_bin) + date_diff + temperature + C(weather)", data=df_st_train)
result_med = model_med.fit()
print(result_med.summary())
train_ypred = result_med.predict(df_st_test)
train_ypred_r = np.where(train_ypred > 0.2, 1, 0)
confusion_matrix(df_st_test["No_show"], train_ypred_r, labels=[1, 0])
print(classification_report(df_st_test["No_show"], train_ypred_r))
accuracy_score(df_st_test["No_show"], train_ypred_r)
test_ypred = result_med.predict(X_test)
test_ypred_r = np.where(test_ypred > 0.2 ,1 ,0)
a = confusion_matrix(y_test, test_ypred_r, labels=[1,0])
a
print(classification_report(y_test, test_ypred_r))
accuracy_score(y_test, test_ypred_r)
result_med.pred_table()
def test_score():
df1 = pd.DataFrame(columns=["thresholds", "precision", "recall", "accuracy"])
for i in np.linspace(0.1,0.9,9):
test_ypred_r = np.where(test_ypred > i ,1 ,0)
a = confusion_matrix(y_test, test_ypred_r, labels=[1,0])
accur = accuracy_score(y_test, test_ypred_r)
df1.loc[len(df1)] = [i,
a[0][0] / (a[0][0] + a[1][0]),
a[0][0] / (a[0][0] + a[0][1]),
accur]
return df1
test_score()
###Output
_____no_output_____
###Markdown
Handling the imbalanced-data problem. The OLS formula interface does not play well with other packages, so rather than relying on the formula interface, it is better to do the preprocessing by hand, step by step.
###Code
sns.distplot(df.PatientId)
df[df.Age >= 90].groupby("No_show").size()
X_train.tail()
pd.get_dummies(df[["Gender"]])
y_train.tail()
###Output
_____no_output_____
###Markdown
Logistic Regression with Python

Estimated time needed: **25** minutes

Objectives

After completing this lab you will be able to:

* Use scikit-learn Logistic Regression to classify
* Understand the confusion matrix

In this notebook, you will learn Logistic Regression, and then you'll create a model for a telecommunication company to predict when its customers will leave for a competitor, so that they can take some action to retain the customers.

Table of contents: About the dataset · Data pre-processing and selection · Modeling (Logistic Regression with Scikit-learn) · Evaluation · Practice

What is the difference between Linear and Logistic Regression?

While Linear Regression is suited for estimating continuous values (e.g. estimating house price), it is not the best tool for predicting the class of an observed data point. In order to estimate the class of a data point, we need some sort of guidance on what would be the most probable class for that data point. For this, we use Logistic Regression.

Recall linear regression: As you know, Linear Regression finds a function that relates a continuous dependent variable, y, to some predictors (independent variables $x_1$, $x_2$, etc.). For example, simple linear regression assumes a function of the form:

$$y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots$$

and finds the values of parameters $\theta_0, \theta_1, \theta_2$, etc., where the term $\theta_0$ is the "intercept". It can be generally shown as:

$$h_\theta(x) = \theta^T X$$

Logistic Regression is a variation of Linear Regression, useful when the observed dependent variable, y, is categorical. It produces a formula that predicts the probability of the class label as a function of the independent variables.

Logistic regression fits a special s-shaped curve by taking the linear regression function and transforming the numeric estimate into a probability with the following function, which is called the sigmoid function $\sigma$:

$$h_\theta(x) = \sigma(\theta^T X) = \frac{e^{\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots}}{1 + e^{\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots}}$$

Or:

$$\text{ProbabilityOfaClass}_1 = P(Y=1|X) = \sigma(\theta^T X) = \frac{e^{\theta^T X}}{1 + e^{\theta^T X}}$$

In this equation, $\theta^T X$ is the regression result (the sum of the variables weighted by the coefficients), `exp` is the exponential function and $\sigma(\theta^T X)$ is the sigmoid or [logistic function](http://en.wikipedia.org/wiki/Logistic_function), also called the logistic curve. It is a common "S" shape (sigmoid curve).

So, briefly, Logistic Regression passes the input through the logistic/sigmoid function and then treats the result as a probability:

<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/images/mod_ID_24_final.png" width="400" align="center">

The objective of the **Logistic Regression** algorithm is to find the best parameters $\theta$ for $h_\theta(x) = \sigma(\theta^T X)$, in such a way that the model best predicts the class of each case.

Customer churn with Logistic Regression

A telecommunications company is concerned about the number of customers leaving their land-line business for cable competitors. They need to understand who is leaving. Imagine that you are an analyst at this company and you have to find out who is leaving and why.
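As a quick numerical illustration of the sigmoid mapping above (a minimal sketch that is not part of the original lab; the coefficient and feature values are made up), the linear score $\theta^T x$ gets squashed into a probability between 0 and 1:

```python
import numpy as np

def sigmoid(z):
    # Maps any real number into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

theta = np.array([0.5, -1.2, 2.0])   # hypothetical coefficients (first entry acts as the intercept)
x = np.array([1.0, 0.3, 0.8])        # hypothetical feature vector (first entry is the constant 1)
z = theta @ x                        # linear score theta^T x = 1.74
print(z, sigmoid(z))                 # ~0.85, i.e. an 85% probability of class 1
```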
###Code
!pip install scikit-learn==0.23.1
###Output
Requirement already satisfied: scikit-learn==0.23.1 in c:\users\91760\anaconda3\lib\site-packages (0.23.1)
Requirement already satisfied: scipy>=0.19.1 in c:\users\91760\anaconda3\lib\site-packages (from scikit-learn==0.23.1) (1.4.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\91760\anaconda3\lib\site-packages (from scikit-learn==0.23.1) (2.1.0)
Requirement already satisfied: joblib>=0.11 in c:\users\91760\anaconda3\lib\site-packages (from scikit-learn==0.23.1) (0.16.0)
Requirement already satisfied: numpy>=1.13.3 in c:\users\91760\anaconda3\lib\site-packages (from scikit-learn==0.23.1) (1.18.5)
###Markdown
Let's first import required libraries:
###Code
import pandas as pd
import pylab as pl
import numpy as np
import scipy.optimize as opt
from sklearn import preprocessing
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
About the datasetWe will use a telecommunications dataset for predicting customer churn. This is a historical customer dataset where each row represents one customer. The data is relatively easy to understand, and you may uncover insights you can use immediately. Typically it is less expensive to keep customers than acquire new ones, so the focus of this analysis is to predict the customers who will stay with the company. This data set provides information to help you predict what behavior will help you to retain customers. You can analyze all relevant customer data and develop focused customer retention programs.The dataset includes information about:* Customers who left within the last month – the column is called Churn* Services that each customer has signed up for – phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies* Customer account information – how long they had been a customer, contract, payment method, paperless billing, monthly charges, and total charges* Demographic info about customers – gender, age range, and if they have partners and dependents Load the Telco Churn dataTelco Churn is a hypothetical data file that concerns a telecommunications company's efforts to reduce turnover in its customer base. Each case corresponds to a separate customer and it records various demographic and service usage information. Before you can work with the data, you must use the URL to get the ChurnData.csv.To download the data, we will use `!wget` to download it from IBM Object Storage.
###Code
#Click here and press Shift+Enter
!wget -O ChurnData.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/data/ChurnData.csv
###Output
--2021-06-11 12:17:52-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/data/ChurnData.csv
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.63.118.104
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.63.118.104|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 35943 (35K) [text/csv]
Saving to: 'ChurnData.csv'
0K .......... .......... .......... ..... 100% 157K=0.2s
2021-06-11 12:17:54 (157 KB/s) - 'ChurnData.csv' saved [35943/35943]
###Markdown
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) Load Data From CSV File
###Code
churn_df = pd.read_csv("ChurnData.csv")
churn_df.head()
###Output
_____no_output_____
###Markdown
Data pre-processing and selection

Let's select some features for the modeling. Also, we change the target data type to be an integer, as it is a requirement of the scikit-learn algorithm:
###Code
churn_df = churn_df[['tenure', 'age', 'address', 'income', 'ed', 'employ', 'equip', 'callcard', 'wireless','churn']]
churn_df['churn'] = churn_df['churn'].astype('int')
churn_df.head()
###Output
_____no_output_____
###Markdown
Practice

How many rows and columns are in this dataset in total? What are the names of the columns?
###Code
# write your code here
churn_df.shape
###Output
_____no_output_____
###Markdown
Click here for the solution

```python
churn_df.shape
```

Let's define X and y for our dataset:
###Code
X = np.asarray(churn_df[['tenure', 'age', 'address', 'income', 'ed', 'employ', 'equip']])
X[0:5]
y = np.asarray(churn_df['churn'])
y [0:5]
###Output
_____no_output_____
###Markdown
Also, we normalize the dataset:
###Code
from sklearn import preprocessing
X = preprocessing.StandardScaler().fit(X).transform(X)
X[0:5]
###Output
_____no_output_____
###Markdown
Train/Test dataset We split our dataset into train and test set:
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
###Output
Train set: (160, 7) (160,)
Test set: (40, 7) (40,)
###Markdown
Modeling (Logistic Regression with Scikit-learn)

Let's build our model using **LogisticRegression** from the Scikit-learn package. This class implements logistic regression and can use different numerical optimizers to find the parameters, including the ‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’ and ‘saga’ solvers. You can find extensive information about the pros and cons of these optimizers if you search for them on the internet.

The version of Logistic Regression in Scikit-learn supports regularization. Regularization is a technique used to solve the overfitting problem of machine learning models. The **C** parameter indicates the **inverse of regularization strength**, which must be a positive float; smaller values specify stronger regularization.

Now let's fit our model with the train set:
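To see the effect of C directly, a small sketch (assuming the X_train and y_train arrays created above) could fit the model with a few values of C and compare the size of the learned coefficients; smaller C should shrink them more:

```python
# Sketch: effect of the inverse regularization strength C (assumes X_train, y_train from above)
import numpy as np
from sklearn.linear_model import LogisticRegression

for C in [0.01, 1, 100]:
    lr = LogisticRegression(C=C, solver='liblinear').fit(X_train, y_train)
    print(f"C={C:6}: ||coef||_2 = {np.linalg.norm(lr.coef_):.3f}")
```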
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
LR = LogisticRegression(C=0.01, solver='liblinear').fit(X_train,y_train)
LR
###Output
_____no_output_____
###Markdown
Now we can predict using our test set:
###Code
yhat = LR.predict(X_test)
yhat
###Output
_____no_output_____
###Markdown
**predict_proba** returns estimates for all classes, ordered by the class labels. So, the first column is the probability of class 0, P(Y=0|X), and the second column is the probability of class 1, P(Y=1|X):
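A quick sanity check (a sketch re-using the fitted LR model and X_test from above) confirms that the columns follow LR.classes_ and that each row sums to one:

```python
# Sketch: the column order of predict_proba follows LR.classes_
print(LR.classes_)                # e.g. [0 1]
probs = LR.predict_proba(X_test[:3])
print(probs)
print(probs.sum(axis=1))          # every row sums to 1.0
```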
###Code
yhat_prob = LR.predict_proba(X_test)
yhat_prob
###Output
_____no_output_____
###Markdown
Evaluation

Jaccard index

Let's try the Jaccard index for accuracy evaluation. We can define the Jaccard index as the size of the intersection divided by the size of the union of the two label sets. If the entire set of predicted labels for a sample strictly matches the true set of labels, then the subset accuracy is 1.0; otherwise it is 0.0.
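For a binary problem, the Jaccard index of a class can also be written as TP / (TP + FP + FN) for that class. The small sketch below (re-using y_test and yhat from above) computes it by hand and compares with sklearn:

```python
# Sketch: Jaccard index for class 0 computed by hand, compared with sklearn
import numpy as np
from sklearn.metrics import jaccard_score

tp = np.sum((y_test == 0) & (yhat == 0))   # correctly predicted class 0
fp = np.sum((y_test != 0) & (yhat == 0))   # predicted 0 but actually 1
fn = np.sum((y_test == 0) & (yhat != 0))   # actual 0 predicted as 1
print(tp / (tp + fp + fn))
print(jaccard_score(y_test, yhat, pos_label=0))   # should match the manual value
```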
###Code
from sklearn.metrics import jaccard_score
jaccard_score(y_test, yhat,pos_label=0)
###Output
_____no_output_____
###Markdown
Confusion matrix

Another way of looking at the accuracy of the classifier is to look at the **confusion matrix**.
###Code
from sklearn.metrics import classification_report, confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
print(confusion_matrix(y_test, yhat, labels=[1,0]))
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, yhat, labels=[1,0])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['churn=1','churn=0'],normalize= False, title='Confusion matrix')
###Output
Confusion matrix, without normalization
[[ 6 9]
[ 1 24]]
###Markdown
Look at the first row. The first row is for customers whose actual churn value in the test set is 1. As you can calculate, out of 40 customers, the churn value of 15 of them is 1. Out of these 15 cases, the classifier correctly predicted 6 of them as 1, and 9 of them as 0.

This means that for 6 customers the actual churn value was 1 in the test set and the classifier also correctly predicted those as 1. However, while the actual label of 9 customers was 1, the classifier predicted those as 0, which is not very good. We can consider it as the error of the model for the first row.

What about the customers with churn value 0? Let's look at the second row. It looks like there were 25 customers whose churn value was 0. The classifier correctly predicted 24 of them as 0, and one of them wrongly as 1. So, it has done a good job in predicting the customers with churn value 0.

A good thing about the confusion matrix is that it shows the model's ability to correctly predict or separate the classes. In the specific case of a binary classifier, such as this example, we can interpret these numbers as the count of true positives, false positives, true negatives, and false negatives.
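The counts described above can also be read off programmatically; a small sketch re-using cnf_matrix, whose rows and columns are ordered by labels=[1, 0]:

```python
# Sketch: unpack the confusion matrix computed above (rows/cols ordered as labels=[1, 0])
TP, FN = cnf_matrix[0]     # actual churn=1: predicted 1, predicted 0
FP, TN = cnf_matrix[1]     # actual churn=0: predicted 1, predicted 0
print("TP:", TP, "FN:", FN, "FP:", FP, "TN:", TN)
print("recall for churn=1:", TP / (TP + FN))      # 6 / 15 = 0.4
```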
###Code
print (classification_report(y_test, yhat))
###Output
_____no_output_____
###Markdown
Based on the count of each section, we can calculate the precision and recall of each label:

* **Precision** is a measure of the accuracy provided that a class label has been predicted. It is defined by: precision = TP / (TP + FP)
* **Recall** is the true positive rate. It is defined as: recall = TP / (TP + FN)

So, we can calculate the precision and recall of each class.

**F1 score:** Now we are in a position to calculate the F1 score for each label, based on the precision and recall of that label. The F1 score is the harmonic average of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0. It is a good way to show that a classifier has a good value for both recall and precision. Finally, we can tell that the average accuracy for this classifier is the average of the F1-score for both labels, which is 0.72 in our case.

Log loss

Now, let's try **log loss** for evaluation. In logistic regression, the output is the probability that customer churn is yes (or equals 1). This probability is a value between 0 and 1. Log loss (logarithmic loss) measures the performance of a classifier where the predicted output is a probability value between 0 and 1.
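As a worked example (a sketch using the counts from the confusion matrix above, with churn=1 as the positive class), precision, recall, F1 and log loss can be computed directly:

```python
# Sketch: precision / recall / F1 for churn=1 from the counts above, plus the log loss formula
import numpy as np

TP, FP, FN = 6, 1, 9
precision = TP / (TP + FP)               # 6/7  ~ 0.86
recall    = TP / (TP + FN)               # 6/15 = 0.40
f1        = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)

# Log loss is the average negative log probability assigned to the true class
def log_loss_manual(y_true, p1):
    p1 = np.clip(p1, 1e-15, 1 - 1e-15)
    return -np.mean(y_true * np.log(p1) + (1 - y_true) * np.log(1 - p1))

print(log_loss_manual(y_test, yhat_prob[:, 1]))   # should agree with sklearn's log_loss below
```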
###Code
from sklearn.metrics import log_loss
log_loss(y_test, yhat_prob)
###Output
_____no_output_____
###Markdown
Practice

Try to build the Logistic Regression model again for the same dataset, but this time use different __solver__ and __regularization__ values. What is the new __logLoss__ value?
###Code
# write your code here
LR2=LogisticRegression(C=0.01, solver='sag').fit(X_train,y_train)
y_hat=LR2.predict_proba(X_test)
print("LogLoss :%.2f"%log_loss(y_test,yhat_prob))
###Output
LogLoss :0.60
###Markdown
Logistic Regression example for the Titanic survival problem
###Code
# Import the usual libraries
import pandas as pd
import numpy as np
import graphviz as gv
import matplotlib.pyplot as plt
%matplotlib inline
print(pd.__version__, np.__version__, gv.__version__)
###Output
0.23.0 1.13.3 0.8.3
###Markdown
We will load both train and test data (actually evaluation data), and concat them to work on both at the same time. Just notice that the test data has the _Survived_ feature missing.
###Code
train_df = pd.read_csv('../input/train.csv', index_col='PassengerId')
test_df = pd.read_csv('../input/test.csv', index_col='PassengerId')
df = pd.concat([train_df, test_df], sort=True)
###Output
_____no_output_____
###Markdown
Let's see 10 random examples (if Survived is NaN, it's a one from the test/evaluation data)
###Code
df.sample(10)
###Output
_____no_output_____
###Markdown
First let's see if the dataset has missing values.
###Code
df[['Age', 'Sex']].isnull().sum()
###Output
_____no_output_____
###Markdown
So we do need to fill in the missing Age values for 263 examples, but there is no need to do this for the Sex feature. Using the pandas __.describe()__ method we can see general statistics for each feature.
###Code
df['Age'].describe()
# Quantity of people by given age
max_age = df['Age'].max()
df['Age'].hist(bins=int(max_age))
# Survival ratio per decade, ignoring NaN with dropna()
df['decade'] = df['Age'].dropna().apply(lambda x: int(x/10))
df[['decade', 'Survived']].groupby('decade').mean().plot()
###Output
_____no_output_____
###Markdown
The younger the passenger, the better the chances of survival. There is an outlier at age 80, however.

We need to fill in the missing values of Age. Let's do this using the mean value.
###Code
mean_age = df['Age'].mean()
df['Age'] = df['Age'].fillna(mean_age)
###Output
_____no_output_____
###Markdown
Sex is stored as "male" or "female", but an ML algorithm needs numerical values as input. So let's create a new feature "male".
###Code
df['male'] = df['Sex'].map({'male': 1, 'female': 0})
df.sample(5)
df[['male','Survived']].groupby('male').mean()
###Output
_____no_output_____
###Markdown
So 74% of females survived, while men had a survival rate of just 18.9%. First we will prepare the train examples for training the algorithm.
###Code
train = df[df['Survived'].notnull()]
features = ['Age', 'male']
train_X = train[features]
train_y = train['Survived']
###Output
_____no_output_____
###Markdown
Using Logistic Regression model
###Code
from sklearn.linear_model import LogisticRegression
test = df[df['Survived'].isnull()]
test_X = test[features]
logreg = LogisticRegression()
logreg.fit(train_X, train_y)
test_y = logreg.predict(test_X)
acc_log = round(logreg.score(train_X, train_y) * 100, 2)
acc_log
###Output
_____no_output_____
###Markdown
Printing results
###Code
submit = pd.DataFrame(test_y.astype(int),
index=test_X.index,
columns=['Survived'])
submit.head()
###Output
_____no_output_____
###Markdown
Let's save these predictions in a file that Kaggle will use to evaluate them.
###Code
submit.to_csv('LogisticRegresion_Titanic_smuni_submit.csv')
###Output
_____no_output_____
###Markdown
Logistic Regression

Logistic regression is a classification method. Its main goal is learning a function that __returns a yes or no answer__ when presented as input a so-called __feature__ vector. As an example, suppose we are given a dataset, such as the one below:

| Class | Feature1 | Feature2 |
|---|---|---|
| 0 | 5.7 | 3.1 |
| 1 | -0.3 | 2 |
| ... | ... | ... |
| $y_i$ | $x_{i,1}$ | $x_{i,2}$ |
| ... | ... | ... |
| 1 | 0.4 | 5 |

The goal is learning to predict the labels of a future dataset, where we are given only the features but not the labels:

| Class | Feature1 | Feature2 |
|---|---|---|
| ? | 4.8 | 3.2 |
| ? | -0.7 | 2.4 |
| ... | ... | ... |

More formally, the dataset consists of $N$ feature vectors $x_i$ and the associated labels $y_i$ for each example $i=1\dots N$. The entries of $y$ are typically referred to as class labels -- but in reality $y$ could model any answer to a true-false question, such as 'is object $i$ a flower?' or 'will customer $i$ buy product $j$ during the next month?'. We can arrange the features in a matrix $X$ and the labels in a vector $y$:

\begin{eqnarray}
X & = & \begin{pmatrix} x_{1,1} & x_{1,2} & \dots & x_{1,D} \\ x_{2,1} & x_{2,2} & \dots & x_{2,D} \\ \vdots & \vdots & \vdots & \vdots \\ x_{i,1} & x_{i,2} & \dots & x_{i,D} \\ \vdots & \vdots & \vdots & \vdots \\ x_{N,1} & x_{N,2} & \dots & x_{N,D} \end{pmatrix} = \begin{pmatrix} x_1^\top \\ x_2^\top \\ \vdots \\ x_i^\top \\ \vdots \\ x_N^\top \end{pmatrix} \\
{y} & = & \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_i \\ \vdots \\ y_N \end{pmatrix}
\end{eqnarray}

where $x_{i,j}$ denotes the $j$'th feature of the $i$'th data point. It is common to set a column of $X$ entirely to $1$'s, for example we take $x_{i,D}=1$ for all $i$. This 'feature' is artificially added to the dataset to allow a slightly more flexible model -- even if we don't measure any feature, the relative numbers of ones and zeros in a dataset can provide a crude estimate of the probability of a true or false answer.

Logistic Regression is a method that can be used to solve binary classification problems, like the one above. We will encode the two classes as $y_i \in \{0,1\}$. The key idea is learning a mapping from a feature vector $x$ to a probability, a number between $0$ and $1$. The generative model is

$$\Pr\{y_i = 1\} = \pi_i = \sigma(x_i^\top w)$$

Here, $\sigma(x)$ is the sigmoid function defined as

\begin{eqnarray}
\sigma(x) & = & \frac{1}{1+e^{-x}}
\end{eqnarray}

To understand logistic regression as a generative model, consider the following metaphor: assume that for each data instance $x_i$, we select a biased coin with probability $p(y_i = 1| w, x_i) = \pi_i = \sigma(x_i^\top w)$, throw the coin and label the data item with class $y_i$ accordingly. Mathematically, we assume that each label $y_i$, or more precisely the answer to our yes-no question regarding the object $i$ with feature vector $x_i$, is drawn from a Bernoulli distribution. That is:

\begin{eqnarray}
\pi_i & = & \sigma(x_i^\top w) \\
y_i & \sim & \mathcal{BE}(\pi_i)
\end{eqnarray}

Here, we think of a biased coin with two sides denoted as $H$ (head) and $T$ (tail), with the probability of side $H$ being $\pi$, and consequently the probability of side $T$ being $1-\pi$. We denote the outcome of the coin toss with the random variable $y \in \{0, 1\}$. For each throw $i$, $y_i$ is the answer to the question 'Is the outcome heads?'. We write the probability of heads as $p(y = 1) = \pi$ and the probability of tails as $p(y = 0) = 1-\pi$.
More compactly, the probability of the outcome of a toss, provided we know $\pi$, is written as\begin{eqnarray}p(y|\pi) = \pi^y(1-\pi)^{1-y}\end{eqnarray} Maximum LikelihoodMaximum likelihood (ML) is a method for choosing the unknown parameters of a probability distribution, given some data that is assumed to be drawn from this distribution. The distribution itself is referred as the probability model, or often just the model. ExampleSuppose we are given only $5$ outcomes when a coin is thrown:$$H, T, H, T, T$$What is the probabilty that the outcome is, say heads $H$ if we know that the coin is biased ?.One reasonable answer may be the frequency of heads, $2/5$.The ML solution coincides with this answer. For a derivation, we define $y_i$ for $i = 1,2,\dots, 5$ as$$y_i = \left\{ \begin{array}{cc} 1 & \text{coin $i$ is H} \\ 0 & \text{coin $i$ is T} \end{array} \right. $$hence $$y = [1,0,1,0,0]^\top$$If we assume that the outcomes were independent, the probability of observing the above sequence as a function of the parameter $\pi$ is the product of each individual probability$$\Pr\{y = [1,0,1,0,0]^\top\} = \pi \cdot (1-\pi) \cdot \pi \cdot (1-\pi) \cdot(1-\pi) $$We could try finding the $\pi$ value that maximizes this function. We will call the corresponding value as the maximum likelhood solution, and denote it as $\pi^*$. It is often more convenient to work with the logarithm of this function, known as the loglikelihood function.$$\mathcal{L}(\pi) = 2 \log \pi + 3 \log (1-\pi)$$For finding the maximum, we take the derivative with respect to $\pi$ and set to zero.$$\frac{d \mathcal{L}(\pi)}{d \pi} = \frac{2}{\pi^*} - \frac{3}{1-\pi^*} = 0 $$When we solve we obtain $$ \pi^* = \frac{2}{5} $$ More generally, when we observe $y_i$ for $i=1 \dots N$, the loglikelihood is\begin{eqnarray}\mathcal{L}(\pi)& = & \log \left(\prod_{i : y_i=1} \pi \right) \left(\prod_{i : y_i=0}(1- \pi) \right) \\& = & \log \prod_{i = 1}^N \pi^{y_i} (1- \pi)^{1-y_i} \\& = & \log \pi^{ \sum_i y_i} (1- \pi)^{\sum_i (1-y_i) } \\& = & \left(\sum_i y_i\right) \log \pi + \left(\sum_i (1-y_i) \right) \log (1- \pi) \end{eqnarray}If we define the number of observed $0$'s and $1$'s by $c_0$ and $c_1$ respectively, we have \begin{eqnarray}\mathcal{L}(\pi)& = & c_1 \log \pi + c_0 \log (1- \pi) \end{eqnarray}Taking the derivative and setting to $0$ results in$$\pi^* = \frac{c_1}{c_0+c_1} = \frac{c_1}{N} $$
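The closed-form ML estimate $\pi^* = c_1/N$ can be verified numerically; a minimal sketch for the five-coin example above, comparing a grid search over the log-likelihood with the empirical frequency:

```python
# Sketch: the ML estimate of the Bernoulli parameter is the empirical frequency of ones
import numpy as np

y = np.array([1, 0, 1, 0, 0])          # H, T, H, T, T

def loglik(pi, y):
    return np.sum(y * np.log(pi) + (1 - y) * np.log(1 - pi))

grid = np.linspace(0.01, 0.99, 99)
pi_star = grid[np.argmax([loglik(p, y) for p in grid])]
print(pi_star, y.mean())               # both equal 2/5 = 0.4
```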
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
from IPython.display import clear_output, display, HTML
from matplotlib import rc
import scipy as sc
import scipy.optimize as opt
mpl.rc('font',**{'size': 20, 'family':'sans-serif','sans-serif':['Helvetica']})
mpl.rc('text', usetex=True)
def sigmoid(x):
return 1/(1+np.exp(-x))
def dsigmoid(x):
s = sigmoid(x)
return s*(1-s)
def inv_sigmoid(p=0.5):
xs = opt.bisect(lambda x: sigmoid(x)-p, a=-100, b=100)
return xs
def inv_sigmoid1D(w, b, p=0.5):
xs = opt.bisect(lambda x: sigmoid(w*x+b)-p, a=-100, b=100)
return xs
###Output
_____no_output_____
###Markdown
Plotting the Sigmoid
###Code
fig = plt.figure(figsize=(10,6))
ax = fig.gca()
ax.set_ylim([-0.1,1.1])
x = np.linspace(-10,10,100)
ax.set_xlim([-10,10])
ln = plt.Line2D(x, sigmoid(x))
ln2 = plt.axvline([0], ls= ':', color='k')
ln_left = plt.axvline([0], ls= ':', color='b')
ln_right = plt.axvline([0], ls= ':', color='r')
ax.add_line(ln)
plt.close(fig)
ax.set_xlabel('$x$')
ax.set_ylabel('$\sigma(wx + b)$')
def plot_fun(w=1, b=0):
ln.set_ydata(sigmoid(w*x+b))
if np.abs(w)>0.00001:
ln2.set_xdata(inv_sigmoid1D(w,b,0.5))
ln_left.set_xdata(inv_sigmoid1D(w,b,0.25))
ln_right.set_xdata(inv_sigmoid1D(w,b,0.75))
display(fig)
res = interact(plot_fun, w=(-5, 5, 0.1), b=(-10.0,10.0,0.1))
def LR_loglikelhood(X, y, w):
tmp = X.dot(w)
return y.T.dot(tmp) - np.sum(np.log(np.exp(tmp)+1))
w = np.array([0.5, 2, 3])
D = 3
N = 20
# Some random features
X = 2*np.random.randn(N,D)
X[:,0] = 1
# Generate class labels
pi = sigmoid(np.dot(X, w))
y = np.array([1 if u else 0 for u in np.random.rand(N) < pi]).reshape((N))
xl = -5.
xr = 5.
yl = -5.
yr = 5.
fig = plt.figure(figsize=(5,5))
plt.plot(X[y==1,1],X[y==1,2],'xr')
plt.plot(X[y==0,1],X[y==0,2],'ob')
ax = fig.gca()
ax.set_ylim([yl, yr])
ax.set_xlim([xl, xr])
ln = plt.Line2D([],[],color='k')
ln_left = plt.Line2D([],[],ls= ':', color='b')
ln_right = plt.Line2D([],[],ls= ':', color='r')
ax.add_line(ln)
ax.add_line(ln_left)
ax.add_line(ln_right)
plt.close(fig)
ax.set_xlabel('$x_1$')
#ax.grid(xdata=np.linspace(xl,xr,0.1))
#ax.grid(ydata=np.linspace(yl,yr,0.1))
ax.set_ylabel('$x_2$')
ax.set_xticks(np.arange(xl,xr))
ax.set_yticks(np.arange(yl,yr))
ax.grid(True)
def plot_boundry(w0,w1,w2):
if w1 != 0:
xa = -(w0+w2*yl)/w1
xb = -(w0+w2*yr)/w1
ln.set_xdata([xa, xb])
ln.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.25) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.25) + w0+w2*yr)/w1
ln_left.set_xdata([xa, xb])
ln_left.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.75) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.75) + w0+w2*yr)/w1
ln_right.set_xdata([xa, xb])
ln_right.set_ydata([yl, yr])
elif w2!=0:
ya = -(w0+w1*xl)/w2
yb = -(w0+w1*xr)/w2
ln.set_xdata([xl, xr])
ln.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.25) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.25) + w0+w1*xr)/w2
ln_left.set_xdata([xl, xr])
ln_left.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.75) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.75) + w0+w1*xr)/w2
ln_right.set_xdata([xl, xr])
ln_right.set_ydata([ya, yb])
else:
ln.set_xdata([])
ln.set_ydata([])
ax.set_title('$\mathcal{L}(w) = '+str(LR_loglikelhood(X, y, np.array([w0, w1, w2])))+'$')
display(fig)
res = interact(plot_boundry, w0=(-3.5, 3, 0.1), w1=(-3.,4,0.1), w2=(-3.,4,0.1))
###Output
_____no_output_____
###Markdown
Logistic Regression: Learning the parametersThe logistic regression model is very similar to the coin model. The main difference is that for each example $i$, we use a specific coin with a probability $\sigma(x_i^\top w)$ that depends on the specific feature vector $x_i$ and the parameter vector $w$ that is shared by all examples. The likelihood of the observations, that is the probability of observing the class sequence is$\begin{eqnarray}p(y_1, y_2, \dots, y_N|w, X ) &=& \left(\prod_{i : y_i=1} \sigma(x_i^\top w) \right) \left(\prod_{i : y_i=0}(1- \sigma(x_i^\top w)) \right)\end{eqnarray}$Here, the left product is the expression for examples from class $1$ and the right product is for examples from class $0$.We will look for the particular setting of the weight vector, the maximum likelihood solution, denoted by $w^*$.$\begin{eqnarray}w^* & = & \arg\max_{w} {\cal L}(w)\end{eqnarray}$where the loglikelihood function$\begin{eqnarray}{\cal L}(w) & = & \log p(y_1, y_2, \dots, y_N|w, x_1, x_2, \dots, x_N ) \\& = & \sum_{i : y_i=1} \log \sigma(x_i^\top w) + \sum_{i : y_i=0} \log (1- \sigma(x_i^\top w)) \\& = & \sum_{i : y_i=1} x_i^\top w - \sum_{i : y_i=1} \log(1+e^{x_i^\top w}) - \sum_{i : y_i=0}\log({1+e^{x_i^\top w}}) \\& = & \sum_i y_i x_i^\top w - \sum_{i} \log(1+e^{x_i^\top w}) \\& = & y^\top X w - \mathbf{1}^\top \text{logsumexp}(0, X w)\end{eqnarray}$$\mathbf{1}$ is a vector of ones; note that when we premultiply a vector $v$ by $\mathbf{1}^T$ we get the sum of the entries of $v$, i.e. $\mathbf{1}^T v = \sum_i v_i$.We define the function $\text{logsumexp}(a, b)$ as follows: When $a$ and $b$ are scalars, $$f = \text{logsumexp}(a, b) \equiv \log(e^a + e^b)$$When $a$ and $b$ are vectors of the same size, $f$ is the same size as $a$ and $b$ where each entry of $f$ is$$f_i = \text{logsumexp}(a_i, b_i) \equiv \log(e^{a_i} + e^{b_i})$$Unlike the least-squares problem, an expression for direct evaluation of $w^*$ is not known so we need to resort to numerical optimization. Before we proceed, it is informative to look at the shape of $f(x) = \text{logsumexp}(0, x)$.When $x$ is negative and far smaller than zero, $f = 0$ and for large values of $x$, $f(x) = x$. Hence it looks like a so-called hinge function $h$$$h(x) = \left\{ \begin{array}{cc} 0 & x < 0 \\x & x \geq 0 \end{array} \right.$$We define$$f_\alpha(x) = \frac{1}{\alpha}\text{logsumexp}(0, \alpha x)$$When $\alpha = 1$, we have the original logsumexp function. For larger $\alpha$, it becomes closer to the hinge loss.
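A direct, numerically stable implementation of this log-likelihood is easy to write (a sketch; np.logaddexp(0, z) computes logsumexp(0, z) elementwise):

```python
# Sketch: L(w) = y^T X w - 1^T logsumexp(0, X w), using the numerically stable np.logaddexp
import numpy as np

def log_likelihood(w, X, y):
    z = X @ w
    return y @ z - np.sum(np.logaddexp(0.0, z))

# Tiny example with random data
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3)); X[:, 0] = 1.0
y = np.array([1, 0, 1, 1, 0])
print(log_likelihood(np.zeros(3), X, y))   # equals N * log(1/2) = 5 * (-0.693...) when w = 0
```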
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
def logsumexp(a,b):
m = np.max([a,b])
return m + np.log(np.exp(a-m) + np.exp(b-m))
def hinge(x):
return x if x>0 else 0
xx = np.arange(-5,3,0.1)
plt.figure(figsize=(12,10))
for i,alpha in enumerate([1,2,5,10]):
f = [logsumexp(0, alpha*z)/alpha for z in xx]
h = [hinge(z) for z in xx]
plt.subplot(2,2,i+1)
plt.plot(xx, f, 'r')
plt.plot(xx, h, 'k:')
plt.xlabel('z')
#plt.title('a = '+ str(alpha))
if alpha==1:
plt.legend([ 'logsumexp(0,z)','hinge(z)' ], loc=2 )
else:
plt.legend([ 'logsumexp(0,{a} z)/{a}'.format(a=alpha),'hinge(z)' ], loc=2 )
plt.show()
###Output
_____no_output_____
###Markdown
The resemblance of the logsumexp function to an hinge function provides a nice interpretation of the log likelihood. Consider the negative log likelihood written in terms of the contributions of each single item:$$- \mathcal{L}(\pi) = - \sum_i l_i(w) $$We denote the inner product of the features of item $i$ and the parameters as $z_i = x_i^\top w$.Then define the 'error' made on a single item as the minus likelihood$$E_i(w) \equiv -l_i(w) = - y_i x_i^\top w + \text{logsumexp}(0, x_i^\top w) = - y_i z_i + \text{logsumexp}(0, z_i)$$Suppose, the target class $y_i = 1$. When $z_i \gg 0$, the item $i$ will be classified correctly and won't contribute to the total error as $-l_i(w) \approx 0$. However, when $z_i \ll 0$, the $\text{logsumexp}$ term will be zero and this will incur an error of $-z_i$. If instead the true target would have been $y_i = 0$ the error reduces to$E_i(w) \approx \text{logsumexp}(0, z_i)$, incurring no error when $z_i \ll 0$ and incuring an error of approximately $z_i$ when $z_i \gg 0$. Below, we show the error for a range of outputs $z_i = x_i^\top w$ when the target is $1$ or $0$. When the target is $y=1$, we penalize each negative output, if the target is $y =0$ positive outputs are penalized.
###Code
xx = np.arange(-10,10,0.1)
y = 1
f = [-y*z + logsumexp(0, z) for z in xx]
f0 = [logsumexp(0, z) for z in xx]
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.plot(xx, f, 'r')
plt.xlabel('$z_i$')
plt.ylabel('$-l_i$')
plt.title('Cost for examples with $y = $'+str(y))
plt.subplot(1,2,2)
plt.plot(xx, f0, 'r')
plt.xlabel('$z_i$')
plt.ylabel('$-l_i$')
plt.title('Cost for examples with $y = 0$')
plt.show()
###Output
_____no_output_____
###Markdown
Properties of the logsumexp functionIf $$f(z) = \text{logsumexp}(0, z) = \log(1 + \exp(z))$$The derivative is$$\frac{df(z)}{dz} = \frac{\exp(z)}{1 + \exp(z)} = \sigma(z)$$When $z$ is a vector, $f(z)$ is a vector. The derivative of$$\sum_i f(z_i) = \mathbf{1}^\top f(z)$$$$\frac{d \mathbf{1}^\top f(z)}{dz} = \left(\begin{array}{c} \sigma(z_1) \\ \vdots \\ \sigma(z_N) \end{array} \right) \equiv \sigma(z)$$where the sigmoid function $\sigma$ is applied elementwise to $z$. Properties of the sigmoid functionNote that\begin{eqnarray}\sigma(x) & = & \frac{e^x}{(1+e^{-x})e^x} = \frac{e^x}{1+e^{x}} \\1 - \sigma(x) & = & 1 - \frac{e^x}{1+e^{x}} = \frac{1+e^{x} - e^x}{1+e^{x}} = \frac{1}{1+e^{x}}\end{eqnarray}\begin{eqnarray}\sigma'(x) & = & \frac{e^x(1+e^{x}) - e^{x} e^x}{(1+e^{x})^2} = \frac{e^x}{1+e^{x}}\frac{1}{1+e^{x}} = \sigma(x) (1-\sigma(x))\end{eqnarray}\begin{eqnarray}\log \sigma(x) & = & -\log(1+e^{-x}) = x - \log(1+e^{x}) \\\log(1 - \sigma(x)) & = & -\log({1+e^{x}})\end{eqnarray}Exercise: Plot the sigmoid function and its derivative. Exercise: Show that $\tanh(z) = 2\sigma(2z) - 1$ Solve $$\text{maximize}\; \mathcal{L}(w)$$ Optimization via gradient ascentOne way foroptimization is gradient ascent\begin{eqnarray}w^{(\tau)} & \leftarrow & w^{(\tau-1)} + \eta \nabla_w {\cal L}\end{eqnarray}where\begin{eqnarray}\nabla_w {\cal L} & = &\begin{pmatrix}{\partial {\cal L}}/{\partial w_1} \\{\partial {\cal L}}/{\partial w_2} \\\vdots \\{\partial {\cal L}}/{\partial w_{D}}\end{pmatrix}\end{eqnarray}is the gradient vector and $\eta$ is a learning rate. Evaluating the gradient (Short Derivation)$$\mathcal{L}(w) = y^\top X w - \mathbf{1}^\top \text{logsumexp}(0, X w)$$$$\frac{d\mathcal{L}(w)}{dw} = X^\top y - X^\top \sigma(X w) = X^\top (y -\sigma(X w))$$ Evaluating the gradient (Long Derivation)The partial derivative of the loglikelihood with respect to the $k$'th entry of the weight vector is given by the chain rule as\begin{eqnarray}\frac{\partial{\cal L}}{\partial w_k} & = & \frac{\partial{\cal L}}{\partial \sigma(u)} \frac{\partial \sigma(u)}{\partial u} \frac{\partial u}{\partial w_k}\end{eqnarray}\begin{eqnarray}{\cal L}(w) & = & \sum_{i : y_i=1} \log \sigma(w^\top x_i) + \sum_{i : y_i=0} \log (1- \sigma(w^\top x_i))\end{eqnarray}\begin{eqnarray}\frac{\partial{\cal L}(\sigma)}{\partial \sigma} & = & \sum_{i : y_i=1} \frac{1}{\sigma(w^\top x_i)} - \sum_{i : y_i=0} \frac{1}{1- \sigma(w^\top x_i)}\end{eqnarray}\begin{eqnarray}\frac{\partial \sigma(u)}{\partial u} & = & \sigma(w^\top x_i) (1-\sigma(w^\top x_i))\end{eqnarray}\begin{eqnarray}\frac{\partial w^\top x_i }{\partial w_k} & = & x_{i,k}\end{eqnarray}So the gradient is\begin{eqnarray}\frac{\partial{\cal L}}{\partial w_k} & = & \sum_{i : y_i=1} \frac{\sigma(w^\top x_i) (1-\sigma(w^\top x_i))}{\sigma(w^\top x_i)} x_{i,k} - \sum_{i : y_i=0} \frac{\sigma(w^\top x_i) (1-\sigma(w^\top x_i))}{1- \sigma(w^\top x_i)} x_{i,k} \\& = & \sum_{i : y_i=1} {(1-\sigma(w^\top x_i))} x_{i,k} - \sum_{i : y_i=0} {\sigma(w^\top x_i)} x_{i,k}\end{eqnarray}We can write this expression more compactly by noting\begin{eqnarray}\frac{\partial{\cal L}}{\partial w_k} & = & \sum_{i : y_i=1} {(\underbrace{1}_{y_i}-\sigma(w^\top x_i))} x_{i,k} + \sum_{i : y_i=0} {(\underbrace{0}_{y_i} - \sigma(w^\top x_i))} x_{i,k} \\& = & \sum_i (y_i - \sigma(w^\top x_i)) x_{i,k}\end{eqnarray}$\newcommand{\diag}{\text{diag}}$ Test on a synthetic problemWe generate a random dataset and than try to learn to classify this dataset
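The compact gradient expression $X^\top(y - \sigma(Xw))$ can be sanity-checked against a finite-difference approximation of the log-likelihood; a small sketch re-using the log-likelihood form above:

```python
# Sketch: verify the analytic gradient X^T (y - sigmoid(X w)) with finite differences
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loglik(w, X, y):
    z = X @ w
    return y @ z - np.sum(np.logaddexp(0.0, z))

def grad(w, X, y):
    return X.T @ (y - sigmoid(X @ w))

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3)); X[:, 0] = 1.0
y = (rng.random(30) < 0.5).astype(float)
w = rng.normal(size=3)

eps = 1e-6
num = np.array([(loglik(w + eps*e, X, y) - loglik(w - eps*e, X, y)) / (2*eps)
                for e in np.eye(3)])
print(np.allclose(num, grad(w, X, y), atol=1e-5))   # expected: True
```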
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
# Generate a random logistic regression problem
def sigmoid(t):
return np.exp(t)/(1+np.exp(t))
def generate_toy_dataset(number_of_features=3, number_of_datapoints=20, styles = ['ob', 'xr']):
D = number_of_features
N = number_of_datapoints
# Some random features
X = 2*np.random.rand(N,D)-1
X[:,0] = 1
# Generate a random paramater vector
w_true = np.random.randn(D,1)
# Generate class labels
pi = sigmoid(np.dot(X, w_true))
y = np.array([1 if u else 0 for u in np.random.rand(N,1) < pi]).reshape((N))
return X, y, w_true, D, N
styles = ['ob', 'xr']
X, y, w_true, D, N = generate_toy_dataset(number_of_features=3, number_of_datapoints=20, styles=styles)
xl = -1.5; xr = 1.5; yl = -1.5; yr = 1.5
fig = plt.figure(figsize=(5,5))
plt.plot(X[y==1,1],X[y==1,2],styles[1])
plt.plot(X[y==0,1],X[y==0,2],styles[0])
ax = fig.gca()
ax.set_ylim([yl, yr])
ax.set_xlim([xl, xr])
plt.show()
# Implement Gradient Descent
w = np.random.randn(D)
# Learnig rate
eta = 0.05
W = []
MAX_ITER = 200
for epoch in range(MAX_ITER):
W.append(w)
dL = np.dot(X.T, y-sigmoid(np.dot(X,w)))
w = w + eta*dL
# Implement Gradient Descent
w = np.random.randn(D)
# Learnig rate
eta = 0.05
MAX_ITER = 200
for epoch in range(MAX_ITER):
dL = 0
for i in range(X.shape[0]):
dL = dL + X[i,:].T*(y[i]-sigmoid(X[i,:].dot(w)))
w = w + eta*dL
xl = -1.5
xr = 1.5
yl = -1.5
yr = 1.5
fig = plt.figure(figsize=(5,5))
ax = fig.gca()
ax.set_ylim([yl, yr])
ax.set_xlim([xl, xr])
plt.plot(X[y==1,1],X[y==1,2],styles[1])
plt.plot(X[y==0,1],X[y==0,2],styles[0])
ln = plt.Line2D([],[],color='k')
ln_left = plt.Line2D([],[],ls= ':', color=styles[0][1])
ln_right = plt.Line2D([],[],ls= ':', color=styles[1][1])
ax.add_line(ln)
ax.add_line(ln_left)
ax.add_line(ln_right)
plt.close(fig)
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_xticks(np.arange(xl,xr))
ax.set_yticks(np.arange(yl,yr))
ax.grid(True)
def plot_boundry(w0,w1,w2):
if w1 != 0:
xa = -(w0+w2*yl)/w1
xb = -(w0+w2*yr)/w1
ln.set_xdata([xa, xb])
ln.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.25) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.25) + w0+w2*yr)/w1
ln_left.set_xdata([xa, xb])
ln_left.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.75) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.75) + w0+w2*yr)/w1
ln_right.set_xdata([xa, xb])
ln_right.set_ydata([yl, yr])
elif w2!=0:
ya = -(w0+w1*xl)/w2
yb = -(w0+w1*xr)/w2
ln.set_xdata([xl, xr])
ln.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.25) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.25) + w0+w1*xr)/w2
ln_left.set_xdata([xl, xr])
ln_left.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.75) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.75) + w0+w1*xr)/w2
ln_right.set_xdata([xl, xr])
ln_right.set_ydata([ya, yb])
else:
ln.set_xdata([])
ln.set_ydata([])
display(fig)
def plot_boundry_of_weight(iteration=0):
i = iteration
w = W[i]
plot_boundry(w[0],w[1],w[2])
interact(plot_boundry_of_weight, iteration=(0,len(W)-1))
plot_boundry_of_weight(-1)
###Output
_____no_output_____
###Markdown
Second order optimizationNewton's method Evaluating the HessianThe Hessian is \begin{eqnarray}\frac{\partial^2{\cal L}}{\partial w_k \partial w_r} & = & - \sum_i (1-\sigma(w^\top x_i)) \sigma(w^\top x_i) x_{i,k} x_{i,r} \\\pi & \equiv & \sigma(X w) \\\nabla \nabla^\top \mathcal{L}& = & -X^\top \diag(\pi(1 - \pi)) X \end{eqnarray}The update rule is\begin{eqnarray}w^{(\tau)} = w^{(\tau-1)} + \eta X^\top (y-\sigma(X w))\end{eqnarray}
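A minimal sketch of a single Newton (IRLS) step, following the Hessian expression above (the design matrix X, labels y and current weights w below are synthetic numpy arrays, not the notebook's data):

```python
# Sketch: one Newton / IRLS update  w <- w + (X^T S X)^{-1} X^T (y - pi),  S = diag(pi (1 - pi))
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_step(w, X, y):
    pi = sigmoid(X @ w)
    S = np.diag(pi * (1.0 - pi))
    H = X.T @ S @ X                       # negative Hessian of the log-likelihood
    return w + np.linalg.solve(H, X.T @ (y - pi))

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3)); X[:, 0] = 1.0
y = (rng.random(50) < sigmoid(X @ np.array([0.3, -1.0, 2.0]))).astype(float)

w = np.zeros(3)
for _ in range(6):                        # Newton typically converges in a handful of steps
    w = newton_step(w, X, y)
print(w)
```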
###Code
#x = np.matrix('[-2,1; -1,2; 1,5; -1,1; -3,-2; 1,1] ')
x = np.matrix('[-0.5,0.5;2,-1;-1,-1;1,1;1.5,0.5]')
#y = np.matrix('[0,0,1,0,0,1]').T
y = np.matrix('[0,0,1,1,1]').T
N = x.shape[0]
#A = np.hstack((np.power(x,0), np.power(x,1), np.power(x,2)))
#X = np.hstack((x, np.ones((N,1)) ))
X = x
def sigmoid(x):
return 1/(1+np.exp(-x))
idx = np.nonzero(y)[0]
idxc = np.nonzero(1-y)[0]
fig = plt.figure(figsize=(8,4))
plt.plot(x[idx,0], x[idx,1], 'rx')
plt.plot(x[idxc,0], x[idxc,1], 'bo')
fig.gca().set_xlim([-1.1,2.1])
fig.gca().set_ylim([-1.1,1.1])
print(idxc)
print(idx)
plt.show()
from itertools import product
def ellipse_line(A, mu, col='b'):
'''
Creates an ellipse from short line segments y = A x + \mu
where x is on the unit circle.
'''
N = 18
th = np.arange(0, 2*np.pi+np.pi/N, np.pi/N)
X = np.mat(np.vstack((np.cos(th),np.sin(th))))
Y = A*X
ln = plt.Line2D(mu[0]+Y[0,:],mu[1]+Y[1,:],markeredgecolor='w', linewidth=1, color=col)
return ln
left = -5
right = 3
bottom = -5
top = 7
step = 0.1
W0 = np.arange(left,right, step)
W1 = np.arange(bottom,top, step)
LLSurf = np.zeros((len(W1),len(W0)))
# y^\top X w - \mathbf{1}^\top \text{logsumexp}(0, X w)
vmax = -np.inf
vmin = np.inf
for i,j in product(range(len(W1)), range(len(W0))):
w = np.matrix([W0[j], W1[i]]).T
p = X*w
ll = y.T*p - np.sum(np.log(1+np.exp(p)))
vmax = np.max((vmax, ll))
vmin = np.min((vmin, ll))
LLSurf[i,j] = ll
fig = plt.figure(figsize=(10,10))
plt.imshow(LLSurf, interpolation='nearest',
vmin=vmin, vmax=vmax,origin='lower',
extent=(left,right,bottom,top),cmap=plt.cm.jet)
plt.xlabel('w0')
plt.ylabel('w1')
plt.colorbar()
W0 = np.arange(left+2,right-5, 12*step)
W1 = np.arange(bottom+1,top-10, 12*step)
for i,j in product(range(len(W1)), range(len(W0))):
w = np.matrix([W0[j], W1[i]]).T
#w = np.mat([-1,1]).T
p = sigmoid(X*w)
dw = 0.2*X.T*(y-p)
#print(p)
S = np.mat(np.diag(np.asarray(np.multiply(p,1-p)).flatten()))
H = X.T*S*X
dw_nwt = 0.08*H.I*X.T*(y-p)
C = np.linalg.cholesky(H.I)
# plt.hold(True)
ln = ellipse_line(C/3., w, 'w')
ax = fig.gca()
ax.add_line(ln)
ln2 = plt.Line2D((float(w[0]), float(w[0]+dw[0])), (float(w[1]), float(w[1]+dw[1])),color='y')
ax.add_line(ln2)
ln3 = plt.Line2D((float(w[0]), float(w[0]+dw_nwt[0])), (float(w[1]), float(w[1]+dw_nwt[1])),color='w')
ax.add_line(ln3)
plt.plot(w[0,0],w[1,0],'.w')
#print(C)
#print(S)
ax.set_xlim((left,right))
ax.set_ylim((bottom,top))
plt.show()
print(y)
print(X)
#w = np.random.randn(3,1)
w = np.mat('[1;2]')
print(w)
print(sigmoid(X*w))
eta = 0.1
for i in range(10000):
pr = sigmoid(X*w)
w = w + eta*X.T*(y-pr)
print(np.hstack((y,pr)))
print(w)
###Output
[[0]
[0]
[1]
[1]
[1]]
[[-0.5 0.5]
[ 2. -1. ]
[-1. -1. ]
[ 1. 1. ]
[ 2. 1. ]]
[[1]
[2]]
[[ 0.62245933]
[ 0.5 ]
[ 0.04742587]
[ 0.95257413]
[ 0.98201379]]
[[ 0. 0.59561717]
[ 0. 0.30966921]
[ 1. 0.32737446]
[ 1. 0.67262554]
[ 1. 0.66660954]]
[[-0.02719403]
[ 0.74727817]]
###Markdown
---------------------------
Optimization Frameworks
---------------------------

CVX -- Convex Optimization

CVX is a framework that can be used for solving convex optimization problems. Convex optimization includes many problems of interest; for example, the minimization of the negative loglikelihood of logistic regression is a convex problem. Unfortunately, many important and interesting problems are not convex.
###Code
%matplotlib inline
from cvxpy import *
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
###Output
_____no_output_____
###Markdown
Selecting relevant features with regularization

Below we generate a dataset with some irrelevant features that are not informative for classification, and maximize the penalized log-likelihood

$$\mathcal{L}(w) - \lambda \|w\|_p$$

(equivalently, the code below minimizes $\lambda \|w\|_\infty - \mathcal{L}(w)$).
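The cell below was written against an older cvxpy API (sum_entries, positional hstack). For reference, here is a hedged sketch of the same kind of penalized problem written against more recent cvxpy releases, using the built-in logistic atom (logistic(z) = log(1 + e^z)); treat the exact atom names as assumptions about your installed version, and note the data here is freshly generated rather than the notebook's:

```python
# Sketch (newer cvxpy API, names assumed): maximize  y^T X w - sum(logistic(X w)) - lam * ||w||_inf
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
N, D = 200, 10
X = rng.normal(size=(N, D))
y = (rng.random(N) < 0.5).astype(float)
lam = 1.0

w = cp.Variable(D)
loglik = cp.sum(cp.multiply(y, X @ w)) - cp.sum(cp.logistic(X @ w))
prob = cp.Problem(cp.Maximize(loglik - lam * cp.norm(w, "inf")))
prob.solve()
print(w.value)
```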
###Code
def sigmoid(x):
return 1/(1+np.exp(-x))
# Number of data points
N = 1000
# Number of relevant features
K = 10
# Number of irrelevant features
Ke = 30
# Generate random features
X = np.matrix(np.random.randn(N, K + Ke))
# Generate parameters and set the irrelevant ones to zero
w_true = np.random.randn(K + Ke,1)
w_true[K:] = 0
p = sigmoid(X*w_true)
u = np.random.rand(N,1)
y = (u < p)
y = y.astype(np.float64)
# Regularization coefficient
lam = 100.
zero_vector = np.zeros((N,1))
# Construct the problem.
w = Variable(K+Ke)
objective = Minimize(lam*norm(w, np.inf ) -y.T*X*w + sum_entries(log_sum_exp(hstack(zero_vector, X*w),axis=1)))
prob = Problem(objective)
# The optimal objective is returned by prob.solve().
result = prob.solve()
# The optimal value for x is stored in x.value.
#print(w.value)
plt.figure(figsize=(10,4))
plt.stem(w.value, markerfmt='ob')
plt.stem(w_true, markerfmt='xr')
plt.gca().set_xlim((-1, K+Ke))
plt.legend(['Estimated', 'True'])
plt.show()
###Output
_____no_output_____
###Markdown
Optimization with pytorch
###Code
X_np, y_np, w_true_np, M, N = generate_toy_dataset(number_of_features=3, number_of_datapoints=20)
###Output
_____no_output_____
###Markdown
Gradient Descent for Logistic Regression: Reference implementation in numpy
###Code
# Initialization
w_np = np.ones(M)
# Learnig rate
eta = 0.01
MAX_ITER = 100
for epoch in range(MAX_ITER):
sig = sigmoid(np.dot(X_np,w_np))
# Gradient dLL/dw -- symbolically derived and hard coded
w_grad = np.dot(X_np.T, y_np-sig)
# Gradient ascent step
w_np = w_np + eta*w_grad
print(w_np)
###Output
[-0.96195283 -0.21886467 0.83477378]
###Markdown
Gradient Descent for Logistic Regression: First implementation in pytorch
###Code
import torch
import torch.autograd
from torch.autograd import Variable
#sigmoid_f = torch.nn.Sigmoid()
def sigmoid_f(x):
return 1./(1. + torch.exp(-x))
X = Variable(torch.from_numpy(X_np).double())
y = Variable(torch.from_numpy(y_np.reshape(N,1)).double())
# Implementation
w = Variable(torch.ones(M,1).double(), requires_grad=True)
eta = 0.01
MAX_ITER = 100
for epoch in range(MAX_ITER):
sig = sigmoid_f(torch.matmul(X, w))
# Compute the loglikelihood
LL = torch.sum(y*torch.log(sig) + (1-y)*torch.log(1-sig))
# Compute the gradients by automated differentiation
LL.backward()
# The gradient ascent step
w.data.add_(eta*w.grad.data)
# Reset the gradients, as otherwise they are accumulated in w.grad
w.grad.zero_()
print(w.data.numpy())
###Output
_____no_output_____
###Markdown
1. Gaussian Naive Bayes example (lecture notes, p. 5)
###Code
import sklearn.datasets as ds
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
iris = ds.load_iris()
print("데이터의 형태 : ", iris.data.shape)
print("특성 이름 :\n", iris.feature_names)
print("데이터 설명 :\n", iris.DESCR)
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state=42)
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnbfit = gnb.fit(X_train, y_train)
y_pred = gnbfit.predict(X_test)
print("테스트 세트에 대한 예측값 : \n{}".format(y_pred))
print("테스트 세트의 정확도 : {:.2f}".format(np.mean(y_pred == y_test)))
print("테스트 세트의 정확도 : {:.2f}".format(gnbfit.score(X_test, y_test)))
###Output
데이터의 형태 : (150, 4)
특성 이름 :
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
데이터 설명 :
.. _iris_dataset:
Iris plants dataset
--------------------
**Data Set Characteristics:**
:Number of Instances: 150 (50 in each of three classes)
:Number of Attributes: 4 numeric, predictive attributes and the class
:Attribute Information:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- class:
- Iris-Setosa
- Iris-Versicolour
- Iris-Virginica
:Summary Statistics:
============== ==== ==== ======= ===== ====================
Min Max Mean SD Class Correlation
============== ==== ==== ======= ===== ====================
sepal length: 4.3 7.9 5.84 0.83 0.7826
sepal width: 2.0 4.4 3.05 0.43 -0.4194
petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)
petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)
============== ==== ==== ======= ===== ====================
:Missing Attribute Values: None
:Class Distribution: 33.3% for each of 3 classes.
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%[email protected])
:Date: July, 1988
The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken
from Fisher's paper. Note that it's the same as in R, but not as in the UCI
Machine Learning Repository, which has two wrong data points.
This is perhaps the best known database to be found in the
pattern recognition literature. Fisher's paper is a classic in the field and
is referenced frequently to this day. (See Duda & Hart, for example.) The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant. One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.
.. topic:: References
- Fisher, R.A. "The use of multiple measurements in taxonomic problems"
Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
Mathematical Statistics" (John Wiley, NY, 1950).
- Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.
(Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
- Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
Structure and Classification Rule for Recognition in Partially Exposed
Environments". IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. PAMI-2, No. 1, 67-71.
- Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions
on Information Theory, May 1972, 431-433.
- See also: 1988 MLC Proceedings, 54-64. Cheeseman et al"s AUTOCLASS II
conceptual clustering system finds 3 classes in the data.
- Many, many more ...
테스트 세트에 대한 예측값 :
[1 0 2 1 1 0 1 2 1 1 2 0 0 0 0 2 2 1 1 2 0 2 0 2 2 2 2 2 0 0 0 0 1 0 0 2 1
0 0 0 2 1 1 0 0 1 1 2 1 2]
테스트 세트의 정확도 : 0.96
테스트 세트의 정확도 : 0.96
###Markdown
2. Logistic Regression example (lecture notes, p. 7)
###Code
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
# Generate data with one feature (x) and two classes (y = 0, 1)
X0, y = make_classification(n_samples=100, n_features=1, n_redundant=0, n_informative=1, n_clusters_per_class=1, random_state=4)
# Return value X0: array of shape [n_samples, n_features], the independent variables (features)
# y: array of shape [n_samples], the dependent variable
display("X0=", X0)
display("y=", y)
# Fit a logistic regression model to the generated data
model = LogisticRegression().fit(X0, y)
# Generate 100 test values xx in the range -3 to 3
xx = np.linspace(-3, 3, 100)
# Reshape the 1-D xx into a 2-D array of shape [n_samples, n_features]
XX = xx[:, np.newaxis]  # change of shape, equivalent to xx.reshape(100, 1)
# Using the fitted logistic regression model,
# compute the probability of each class (0, 1) for the test data xx
# prob = 1.0 / (1 + np.exp(-model.coef_[0][0]*xx - model.intercept_[0]))
prob = model.predict_proba(XX)
prob1 = prob[:,1]  # extract class 1 (y=1) only
display(prob)
display(prob1)
# A single test value
x_test = [[-0.2]]
# Draw two subplots arranged in 2 rows and 1 column
# (1) Plot over xx
plt.subplot(211)
# Probability of class 1 (y=1) for the test data xx
plt.plot(xx, prob1)
# Plot the training data
plt.scatter(X0, y, marker='o', c=y, s=100, edgecolor='k', linewidth=2)
# Mark the result (probability) for the single test value with an X
plt.scatter(x_test[0], model.predict_proba(x_test)[0][1:], marker='x', s=500, c='r', lw=5)
plt.xlim(-3, 1)
plt.ylim(-2, 1, 2)
plt.legend(["$P(y=1|x_{test})$"])
# (2) Bar chart of the probabilities for x_test
plt.subplot(212)
# Bar chart of the probability of x_test for each class
plt.bar(model.classes_, model.predict_proba(x_test)[0])
plt.xlim(-1, 2)
plt.gca().xaxis.grid(False)
plt.xticks(model.classes_, ["$P(y=0|x_{test})$", "$P(y=1|x_{test})$"])
plt.title("Conditional probability distribution")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
3. Breast cancer classification problem (lecture notes, p. 9)
###Code
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)
logreg = LogisticRegression().fit(X_train, y_train)
print("C=1, 훈련 세트 점수 : {:.3f}".format(logreg.score(X_train, y_train)))
print("C=1, 테스트 세트 점수 : {:.3f}".format(logreg.score(X_test, y_test)))
logreg100 = LogisticRegression(C=100).fit(X_train, y_train)
print("C=100, 훈련 세트 점수 : {:.3f}".format(logreg100.score(X_train, y_train)))
print("C=100, 테스트 세트 점수 : {:.3f}".format(logreg100.score(X_test, y_test)))
logreg001 = LogisticRegression(C=0.01).fit(X_train, y_train)
print("C=0.01, 훈련 세트 점수 : {:.3f}".format(logreg001.score(X_train, y_train)))
print("C=0.01, 테스트 세트 점수 : {:.3f}".format(logreg001.score(X_test, y_test)))
plt.plot(logreg100.coef_.T, '^', label="C=100")
plt.plot(logreg.coef_.T, '^', label="C=1")
plt.plot(logreg001.coef_.T, '^', label="C=0.01")
plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)
xlims = plt.xlim()
plt.hlines(0, xlims[0], xlims[1])
plt.xlim(xlims)
plt.ylim(-5, 5)
plt.xlabel("features")
plt.ylabel("coef")
plt.legend()
###Output
_____no_output_____
###Markdown
4. Decision boundary (2-class, 3-class) examples (lecture notes, pp. 11-12)
###Code
# Dataset for a 2-class classification algorithm
# 2-dimensional dataset
from sklearn.datasets import make_blobs
from sklearn.datasets import make_classification
# Generate the data
X, y = make_classification(n_features=2, n_redundant=0, n_informative=1, n_clusters_per_class=1, random_state=4)
# Plot the data
fig, ax = plt.subplots(figsize=(6,4))
ax.scatter(X[:, 0], X[:,1], c=y, cmap='tab10')
plt.title('2-class Decision boundary', fontsize=14)
plt.xlabel('feature1')
plt.ylabel('feature2')
#modeling
logreg = LogisticRegression().fit(X, y)
# Decision boundary plot
line = np.linspace(-3, 3)
colors = ['red']
for coef, intercept, color in zip(logreg.coef_, logreg.intercept_, colors):
plt.plot(line, -(line*coef[0]+intercept)/coef[1], c=color)
# Dataset for a 3-class classification algorithm
# 2-dimensional dataset
# Generate the data
X, y = make_blobs(n_samples=70, centers=3, random_state=0, cluster_std=0.60)
# Plot the data
fig, ax = plt.subplots(figsize=(6,4))
ax.scatter(X[:,0], X[:,1], c=y, cmap='Paired')
plt.title('3 class Decision boundary', fontsize=14)
plt.xlabel('feature1')
plt.ylabel('feature2')
#modeling
logreg = LogisticRegression().fit(X, y)
# Decision boundary plot
line = np.linspace(-3, 3)
colors = ['red', 'blue']
for coef, intercept, color in zip(logreg.coef_, logreg.intercept_, colors):
plt.plot(line, -(line*coef[0]+intercept)/coef[1], c=color)
###Output
c:\users\wlgh3\venv\tensorflow\lib\site-packages\sklearn\linear_model\logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
c:\users\wlgh3\venv\tensorflow\lib\site-packages\sklearn\linear_model\logistic.py:469: FutureWarning: Default multi_class will be changed to 'auto' in 0.22. Specify the multi_class option to silence this warning.
"this warning.", FutureWarning)
###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import sklearn
import sklearn.datasets
import sklearn.linear_model
from sklearn.datasets import make_classification
np.random.seed(1)
X, Y = make_classification(n_samples=400, n_features=2, n_informative=2, n_redundant=0, n_repeated=0, n_classes=2, n_clusters_per_class=2, weights=None, flip_y=0.01, class_sep=1.0, hypercube=True, shift=0.0, scale=1.0, shuffle=True, random_state=None)
plt.figure(figsize=(8, 8))
plt.scatter(X[:, 0], X[:, 1], marker='o', c=Y,
s=25, edgecolor='k')
plt.show()
def initialize_network(n_x, n_y):
W = np.random.randn(n_y,n_x)
b = np.zeros((n_y,1))
assert (W.shape == (n_y, n_x))
assert (b.shape == (n_y, 1))
network = {"W": W,
"b": b}
return network
def sigmoid(raw):
s = 1 / (1+np.exp(-raw))
return s
def forward_propagation(X, network):
product = np.dot(network['W'],X)
linA = np.add(product, network['b'])
A = sigmoid(linA)
assert(A.shape == (1, X.shape[1]))
activations = {"linA": linA,
"A": A}
return A, activations
def evaluate(A, Y):
##−∑(Y*log(A)+(1−Y)*log(1−A))
cost = -np.sum(np.add(np.dot(Y,np.log(A.T)),np.dot(1-Y,np.log(1-A.T))))
cost = np.squeeze(cost)
assert(isinstance(cost, float))
return cost
UT_network = initialize_network(2,1)
UT_A, UT_activations = forward_propagation(X.T, UT_network)
UT_Y = np.reshape(Y, (400,1))
UT_cost = evaluate(UT_A, UT_Y.T)
print("Cost value " + str(UT_cost))
def backward_propagation(activations, X, Y):
A = activations['A']
n_samples = X.shape[1]
ddlinA = A - Y
ddW = np.dot(ddlinA,X.T) / n_samples
ddb = np.sum(ddlinA,axis=1,keepdims=True) / n_samples
assert(ddlinA.shape == (1, X.shape[1]))
assert(ddW.shape == (Y.shape[0], X.shape[0]))
assert(ddb.shape == (Y.shape[0], 1))
gradients = {"ddW": ddW,
"ddb": ddb}
return gradients
UT_gradients = backward_propagation(UT_activations, X.T, UT_Y.T)
print("Gradient values " + str(UT_gradients))
###Output
Gradient values {'ddW': array([[ 0.19979757, -0.54888321]]), 'ddb': array([[0.00235057]])}
###Markdown
Updates network using the gradient descent update rule given above Arguments: * network -> python dictionary containing your network * gradients -> python dictionary containing your gradients Returns: * network -> python dictionary containing your updated network
###Code
def learn(network, gradients, learning_rate):
W = network['W']
b = network['b']
ddW = gradients['ddW']
ddb = gradients['ddb']
W -= np.dot(ddW,learning_rate)
b -= np.dot(ddb,learning_rate)
assert(W.shape == ddW.shape)
assert(b.shape == ddb.shape)
network = {"W": W,
"b": b}
return network
print("Old Network values " + str(UT_network))
UT_network = learn(UT_network, UT_gradients, learning_rate = 1.2)
print("New Network values " + str(UT_network))
###Output
Old Network values {'W': array([[ 0.4773024 , -0.24006957]]), 'b': array([[0.]])}
New Network values {'W': array([[0.23754532, 0.41859028]]), 'b': array([[-0.00282069]])}
###Markdown
Arguments:* X -- dataset of shape (2, number of examples)* Y -- labels of shape (1, number of examples)* num_iterations -- Number of iterations in gradient descent loop* print_cost -- if True, print the cost every 1000 iterations Returns:* network -- network learnt by the model. They can then be used to predict.
###Code
def model(X, Y, num_iterations = 100, learning_rate=1.1, print_cost=False):
n_x = X.shape[0]
n_y = Y.shape[0]
costs = []
network = initialize_network(n_x,n_y)
W = network['W']
b = network['b']
for i in range(0, num_iterations):
A,activations = forward_propagation(X,network)
cost = evaluate(A,Y)
gradients = backward_propagation(activations,X,Y)
network = learn(network,gradients,learning_rate)
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
plt.clf()
return network
Y = np.reshape(Y, (400,1))
network = model(X.T, Y.T, num_iterations = 10000, learning_rate = 1.1, print_cost=True)
###Output
Cost after iteration 0: 148.788941
Cost after iteration 1000: 60.123992
Cost after iteration 2000: 60.123992
Cost after iteration 3000: 60.123992
Cost after iteration 4000: 60.123992
Cost after iteration 5000: 60.123992
Cost after iteration 6000: 60.123992
Cost after iteration 7000: 60.123992
Cost after iteration 8000: 60.123992
Cost after iteration 9000: 60.123992
###Markdown
Using the learned network, predicts a class for each example in XArguments:* network -- python dictionary containing your network * X -- input data of size (n_x, n_samples) Returns* predictions -- vector of predictions of our model (red: 0 / blue: 1)
###Code
def predict(network, X):
A, activations = forward_propagation(X,network)
predictions = np.where(A > 0.5,1,0)
return predictions
predictions = predict(network, X.T)
print("Predictions " + str(predictions))
print("\n\n##########")
print ('Accuracy: %d' % float((np.dot(Y.T,predictions.T) + np.dot(1-Y.T,1-predictions.T))/float(Y.T.size)*100) + '%')
plt.figure(figsize=(8, 8))
Y = np.reshape(Y, (400,))
plt.scatter(X[:, 0], X[:, 1], marker='o', c=Y,
s=25, edgecolor='k')
h = .02
x_min, x_max = X[:, 0].min(), X[:, 0].max()
y_min, y_max = X[:, 1].min(), X[:, 1].max()
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
predictions = predict(network,(np.c_[xx.ravel(), yy.ravel()].T))
predictions = predictions.reshape(xx.shape)
plt.contour(xx, yy, predictions, cmap=plt.cm.Paired)
plt.show()
###Output
_____no_output_____
###Markdown
Regression Modeling in Practice Assignment: Test a Logistic Regression Model Following is the Python program I wrote to fulfill the fourth assignment of the [Regression Modeling in Practice online course](https://www.coursera.org/learn/regression-modeling-practice/home/welcome). I decided to use [Jupyter Notebook](http://nbviewer.jupyter.org/github/ipython/ipython/blob/3.x/examples/Notebook/Index.ipynb) as it is a pretty way to write code and present results. Research question for this assignment For this assignment, I decided to use the NESARC database with the following question : *Are people from white ethnicity more likely to have ever used cannabis?*The potential other explanatory variables will be:- Age- Sex- Family income Data management The data will be managed so that cannabis usage is recoded to 0 (never used cannabis) and 1 (used cannabis). The non-answering recordings (reported as 9) will be discarded. As the response variable has only 2 categories, no category grouping is needed. The other categorical variable (sex) will be recoded such that 0 means female and 1 means male. And the two quantitative explanatory variables (age and family income) will be centered.
###Code
# Magic command to insert the graph directly in the notebook
%matplotlib inline
# Load a useful Python libraries for handling data
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import Markdown, display
nesarc = pd.read_csv('nesarc_pds.csv')
canabis_usage = {1 : 1, 2 : 0, 9 : 9}
sex_shift = {1 : 1, 2 : 0}
white_race = {1 : 1, 2 : 0}
subnesarc = (nesarc[['AGE', 'SEX', 'S1Q1D5', 'S1Q7D', 'S3BQ1A5', 'S1Q11A']]
.assign(sex=lambda x: pd.to_numeric(x['SEX'].map(sex_shift)),
white_ethnicity=lambda x: pd.to_numeric(x['S1Q1D5'].map(white_race)),
used_canabis=lambda x: (pd.to_numeric(x['S3BQ1A5'], errors='coerce')
.map(canabis_usage)
.replace(9, np.nan)),
family_income=lambda x: (pd.to_numeric(x['S1Q11A'], errors='coerce')))
.dropna())
centered_nesarc = subnesarc.assign(age_c=subnesarc['AGE']-subnesarc['AGE'].mean(),
family_income_c=subnesarc['family_income']-subnesarc['family_income'].mean())
display(Markdown("Mean age : {:.0f}".format(centered_nesarc['AGE'].mean())))
display(Markdown("Mean family income last year: {:.0f}$".format(centered_nesarc['family_income'].mean())))
###Output
_____no_output_____
###Markdown
Let's check that the quantitative variables are effectively centered.
###Code
print("Centered age")
print(centered_nesarc['age_c'].describe())
print("\nCentered family income")
print(centered_nesarc['family_income_c'].describe())
###Output
Centered age
count 4.272500e+04
mean -2.667486e-13
std 1.819181e+01
min -2.841439e+01
25% -1.441439e+01
50% -2.414394e+00
75% 1.258561e+01
max 5.158561e+01
Name: age_c, dtype: float64
Centered family income
count 4.272500e+04
mean -5.710829e-10
std 5.777221e+04
min -4.560694e+04
25% -2.863094e+04
50% -1.263094e+04
75% 1.436906e+04
max 2.954369e+06
Name: family_income_c, dtype: float64
###Markdown
The means are both very close to 0, confirming the centering. Distributions visualization The following plots show the distribution of all 3 explanatory variables against the response variable.
###Code
g = sns.factorplot(x='white_ethnicity', y='used_canabis', data=centered_nesarc,
kind="bar", ci=None)
g.set_xticklabels(['Non White', 'White'])
plt.xlabel('White ethnicity')
plt.ylabel('Ever used cannabis')
plt.title('Ever used cannabis dependence on white ethnicity');
g = sns.factorplot(x='sex', y='used_canabis', data=centered_nesarc,
kind="bar", ci=None)
g.set_xticklabels(['Female', 'Male'])
plt.ylabel('Ever used cannabis')
plt.title('Ever used cannabis dependence on sex');
g = sns.boxplot(x='used_canabis', y='family_income', data=centered_nesarc)
g.set_yscale('log')
g.set_xticklabels(('No', 'Yes'))
plt.xlabel('Ever used cannabis')
plt.ylabel('Family income ($)');
g = sns.boxplot(x='used_canabis', y='AGE', data=centered_nesarc)
g.set_xticklabels(('No', 'Yes'))
plt.xlabel('Ever used cannabis')
plt.ylabel('Age');
###Output
_____no_output_____
###Markdown
The four plots above show the following trends:- White people try cannabis more than non-white people- Males try cannabis more than females- Younger people try cannabis more than older ones- People from richer families try cannabis more than those from poorer families Logistic regression model The plots showed the direction of a potential relationship. But a rigorous statistical test has to be carried out to confirm the four previous hypotheses. The following code fits a logistic regression model to test them.
###Code
model = smf.logit(formula='used_canabis ~ family_income_c + age_c + sex + white_ethnicity', data=centered_nesarc).fit()
model.summary()
params = model.params
conf = model.conf_int()
conf['Odds Ratios'] = params
conf.columns = ['Lower Conf. Int.', 'Upper Conf. Int.', 'Odds Ratios']
np.exp(conf)
###Output
_____no_output_____
###Markdown
Logistic Regression [Keywords] logistic function, maximum likelihood estimation, gradient descent 1. How logistic regression works The main idea of classification with logistic regression is to fit a regression formula for the decision boundary from the available data and use it to classify. The word "regression" here comes from best fitting: it means finding the best-fitting set of parameters, and training the classifier amounts to searching for those parameters with an optimization algorithm. Below is the mathematical principle of this binary-output classifier. Logistic Regression and Linear Regression follow a similar recipe, which can be roughly described as: (1) Find a suitable prediction function, usually denoted h; this is the classification function used to predict the label of an input. This step is critical and requires some understanding or analysis of the data, in order to know or guess the rough form of the prediction function, e.g. linear or non-linear. (2) Construct a cost (loss) function that measures the deviation between the predicted output (h) and the training label (y); it can be the difference (h-y) or some other form. Summing or averaging this loss over all training data gives the function J(θ), which measures the overall deviation between predictions and true labels. (3) Clearly, the smaller J(θ), the more accurate the prediction function h, so this step is about finding the minimum of J(θ). There are different ways to minimize a function; this implementation of Logistic Regression uses gradient descent. 1) Constructing the prediction function Although its name contains "regression", Logistic Regression is actually a classification method for two-class problems (the output takes only two values). We first need a prediction function h whose output represents the two classes, so we use the *logistic function (also called the sigmoid function)* $$g(z) = \frac{1}{1+e^{-z}},$$ an S-shaped curve between 0 and 1; the prediction function can then be written as $$h_\theta(x) = g(\theta^T x) = \frac{1}{1+e^{-\theta^T x}}.$$ 2) Constructing the loss function The cost and J(θ) are derived from *maximum likelihood estimation*. The probability that a single sample takes its true label, i.e. the likelihood, can be written as $$P(y\mid x;\theta) = h_\theta(x)^{y}\,(1-h_\theta(x))^{1-y},$$ the probability that all samples take their true labels is $$L(\theta) = \prod_{i=1}^{m} h_\theta(x^{(i)})^{y^{(i)}}\,(1-h_\theta(x^{(i)}))^{1-y^{(i)}},$$ and the log-likelihood is $$l(\theta) = \sum_{i=1}^{m}\Big[y^{(i)}\log h_\theta(x^{(i)}) + (1-y^{(i)})\log\big(1-h_\theta(x^{(i)})\big)\Big].$$ Maximum likelihood estimation finds the θ that maximizes l(θ); this can be solved with gradient ascent, and the resulting θ is the best parameter vector. Equivalently, taking $J(\theta) = -\frac{1}{m}\,l(\theta)$ turns it into a minimization problem. 3) Minimizing J(θ) with gradient descent Gradient descent updates θ as $$\theta_j := \theta_j - \alpha\,\frac{\partial}{\partial \theta_j} J(\theta),$$ where α is the learning step size. Taking the partial derivative (using the identity $g'(z) = g(z)\,(1-g(z))$) gives $$\frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)})-y^{(i)}\big)\,x_j^{(i)},$$ so the update can be written as $$\theta_j := \theta_j - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)})-y^{(i)}\big)\,x_j^{(i)}.$$ Since α is a constant anyway, the factor 1/m is usually absorbed into it, and the final update becomes $$\theta_j := \theta_j - \alpha\sum_{i=1}^{m}\big(h_\theta(x^{(i)})-y^{(i)}\big)\,x_j^{(i)}$$ (a minimal sketch of this update rule is given right after this cell). 2. Practice `sklearn.linear_model.LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='liblinear', max_iter=100, multi_class='ovr', verbose=0, warm_start=False, n_jobs=1)` Choosing the solver parameter:- "liblinear": small datasets- "lbfgs", "sag" or "newton-cg": large datasets and multi-class problems- "sag": extremely large datasets 1) Classifying the handwritten-digit dataset with both KNN and logistic regression
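A minimal NumPy sketch of the batch gradient-descent update derived above (added for illustration only; the helper names `sigmoid` and `fit_logistic_gd` are not part of the original notes, and the sketch keeps the 1/m factor that is usually absorbed into the learning rate):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_gd(X, y, lr=0.1, n_iter=2000):
    """Batch gradient descent for binary logistic regression."""
    m, d = X.shape
    Xb = np.hstack([np.ones((m, 1)), X])   # prepend a bias column
    theta = np.zeros(d + 1)
    for _ in range(n_iter):
        h = sigmoid(Xb @ theta)            # h_theta(x) for every sample
        grad = Xb.T @ (h - y) / m          # (1/m) * sum_i (h - y) * x_j
        theta -= lr * grad                 # theta_j := theta_j - alpha * grad_j
    return theta

# tiny synthetic sanity check
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
theta = fit_logistic_gd(X, y)
preds = sigmoid(np.hstack([np.ones((200, 1)), X]) @ theta) > 0.5
print("training accuracy:", (preds.astype(int) == y).mean())
```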
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
digits = load_digits()
images = digits.images
data = digits.data
target = digits.target
plt.figure(figsize=(1,1))
plt.imshow(images[10],cmap='gray')
images[0].shape
images[0].ravel().shape
data = load_digits().data
data.shape
plt.imshow(data[0].reshape(8,8))
###Output
_____no_output_____
###Markdown
Load the data with load_digits(); create the models, train, and predict
###Code
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(data,target,test_size=0.2,random_state=1)
# KNN and logistic regression classification models
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
logistic = LogisticRegression(C=0.1)
logistic.fit(X_train,y_train)
logistic.score(X_test,y_test)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train,y_train)
knn.score(X_test,y_test)
y1_ = knn.predict(X_test)
y2_ = logistic.predict(X_test)
###Output
_____no_output_____
###Markdown
Display the results
###Code
plt.figure(figsize=(10,16))
for i in range(0,10):
for j in range(1,11):
ax = plt.subplot(10,10,i*10+j)
ax.axis('off')
image_data = X_test[i*10+j-1]
ax.imshow(image_data.reshape(8,8),cmap='gray')
t1 = y1_[i*10+j-1]
t2 = y2_[i*10+j-1]
title = 'knn:'+str(t1) + '\nlogistic:'+str(t2)
ax.set_title(title)
###Output
_____no_output_____
###Markdown
2) Classification on a dataset generated with make_blobs. Import the packages and use datasets.make_blobs to create a set of points
###Code
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
import matplotlib.pyplot as plt
%matplotlib inline
# make_blobs is a function that creates a sample dataset for classification
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
Set three cluster centers and randomly create the sample points
###Code
X_train,y_train = make_blobs(n_samples=150,n_features=2,centers=[[2,6],[4,2],[6,5]])
plt.scatter(X_train[:,0],X_train[:,1],c=y_train)
n1 = np.random.random(size=(50,2)) - [2,2]
n2 = np.random.random(size=(50,2)) + [2,2]
X = np.concatenate((n1,n2))
y = [0]*50 + [1]*50
plt.scatter(X[:,0],X[:,1],c=y)
###Output
_____no_output_____
###Markdown
Create the machine learning model and train it on the data
###Code
logistic = LogisticRegression()
logistic.fit(X_train,y_train)
xmin,xmax = X_train[:,0].min()-0.5,X_train[:,0].max()+0.5
ymin,ymax = X_train[:,1].min()-0.5,X_train[:,1].max()+0.5
x = np.linspace(xmin,xmax,300)
y = np.linspace(ymin,ymax,300)
xx,yy = np.meshgrid(x,y)
X_test = np.c_[xx.ravel(),yy.ravel()]
###Output
_____no_output_____
###Markdown
Extract the grid coordinates and process them
###Code
y_ = logistic.predict(X_test)
###Output
_____no_output_____
###Markdown
Predict on the grid points and reshape() the result
###Code
from matplotlib.colors import ListedColormap
cmap = ListedColormap(['r','g','b'])
X_test.shape
y_.shape
plt.scatter(X_test[:,0],X_test[:,1],c=y_,cmap=cmap)
plt.scatter(X_train[:,0],X_train[:,1],c=y_train)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train,y_train)
y1_ = knn.predict(X_test)
plt.scatter(X_test[:,0],X_test[:,1],c=y1_,cmap=cmap)
plt.scatter(X_train[:,0],X_train[:,1],c=y_train)
###Output
_____no_output_____
###Markdown
Plot the results. 3. Exercises [Problem 1] Predict whether annual income exceeds 50K dollars. Read the adult.txt file and train a model with the logistic regression algorithm to predict a person's sex from race, occupation, and hours worked per week
###Code
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
import matplotlib.pyplot as plt
%matplotlib inline
adult = pd.read_csv('../data/adults.txt')
adult.columns
train = adult[['race','occupation','hours_per_week']].copy()
target = adult['sex']
race_unique = train.race.unique()
def trans_race(x):
arr = np.eye(train.race.unique().size)
index = np.argwhere(x == race_unique)[0,0]
return arr[index]
train['race'] = train['race'].map(trans_race)
train['race']
train.shape
occ_unique = train['occupation'].unique()
occ_unique = train.occupation.unique()
def trans_occ(x):
arr = np.eye(occ_unique.size)
index = np.argwhere(x == occ_unique)[0,0]
return arr[index]
train['occupation'] = train['occupation'].map(trans_occ)
train['occupation'].values
item1 = train.race[0]
for item in train.race[1:]:
item1 = np.concatenate((item1,item))
occ1 = train.occupation[0]
for item in train.occupation[1:]:
occ1 = np.concatenate((occ1,item))
race = item1.reshape(-1,5)
occ = occ1.reshape(-1,15)
occ.shape
race.shape
hours = train.hours_per_week.values
temp1 = np.hstack((race,occ))
sampels = np.hstack((temp1,hours.reshape(-1,1)))
sampels[:,-1:] = sampels[:,-1:]/sampels[:,-1:].sum()
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(sampels,target,test_size=0.2,random_state=1)
%%time
knn = KNeighborsClassifier(n_neighbors=99)
knn.fit(X_train,y_train)
knn.score(X_test,y_test)
%%time
logistic = LogisticRegression(C=1)
logistic.fit(X_train,y_train)
score = logistic.score(X_test,y_test)
print('logistic score is %f'%(score))
# map the categorical data to integer indices
race_unique = train.race.unique()
def trans_race(x):
return np.argwhere(x == race_unique)[0,0]
train['race'] = train['race'].map(trans_race)
occ_unique = train.occupation.unique()
def trans_occ(x):
return np.argwhere(x == occ_unique)[0,0]
train['occupation'] = train['occupation'].map(trans_occ)
from sklearn.preprocessing import Normalizer
samples = Normalizer().fit_transform(train)
X_train,X_test,y_train,y_test = train_test_split(samples,target,test_size=0.2,random_state=1)
knn = KNeighborsClassifier(n_neighbors=99)
knn.fit(X_train,y_train).score(X_test,y_test)
logistic = LogisticRegression()
logistic.fit(X_train,y_train).score(X_test,y_test)
# Mapping (encoding) notes
# 1. If the categories have an order, map them to numeric values
# 2. If the categories have no order, map them to a (one-hot) matrix
# 3. Model quality depends to a large extent on the feature engineering
#
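# (added sketch) the same two encoding strategies using pandas helpers on a toy
# column, purely for illustration:
toy = pd.DataFrame({'color': ['red', 'blue', 'red', 'green']})
print(toy['color'].astype('category').cat.codes.values)   # integer codes (implies an order)
print(pd.get_dummies(toy['color']).values)                 # one-hot matrix (no order implied)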
###Output
_____no_output_____
###Markdown
[Problem 2] Predict horse mortality from colic symptoms
###Code
train = pd.read_csv('../data/horseColicTraining.txt',sep='\t',header=None)
test = pd.read_csv('../data/horseColicTest.txt',sep='\t',header=None)
X_train = train.values[:,:21]
y_train = train[21]
X_test = test.values[:,:21]
y_test = test[21]
X_test1 = Normalizer().fit_transform(X_test)
X_train1 = Normalizer().fit_transform(X_train)
knn = KNeighborsClassifier()
knn.fit(X_train1,y_train).score(X_test1,y_test)
logistic = LogisticRegression(C=3)
logistic.fit(X_train1,y_train).score(X_test1,y_test)
from sklearn.preprocessing import MinMaxScaler,StandardScaler
X_train2 = MinMaxScaler().fit_transform(X_train)
X_test2 = MinMaxScaler().fit_transform(X_test)
knn.fit(X_train2,y_train).score(X_test2,y_test)
X_train3 = StandardScaler().fit_transform(X_train)
X_test3 = StandardScaler().fit_transform(X_test)
knn.fit(X_train3,y_train).score(X_test3,y_test)
###Output
_____no_output_____
###Markdown
Classify whether a person is male or female using the LogisticRegression algorithm. Data preparation
###Code
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
total_df = pd.read_csv('/content/weight-height.csv')
sex_full = total_df.to_numpy()
total_df.head()
###Output
_____no_output_____
###Markdown
Training the model
###Code
maleHeight = total_df['Height']
maleHeight = maleHeight[:5000]
maleHeight.head()
femaleHeight = total_df['Height']
femaleHeight = femaleHeight[5000:10000]
femaleHeight.head()
maleWeight = total_df['Weight']
maleWeight = maleWeight[:5000]
maleWeight.head()
femaleWeight = total_df['Weight']
femaleWeight = femaleWeight[5000:10000]
femaleWeight.head()
maleWeightlist = []
maleHeightlist = []
femaleWeightlist = []
femaleHeightlist = []
for maleWeightlistindex in maleWeight:
maleWeightlist.append(maleWeightlistindex)
for maleHeightlistindex in maleHeight:
maleHeightlist.append(maleHeightlistindex)
for femaleHeightlistindex in femaleHeight:
femaleHeightlist.append(femaleHeightlistindex)
for femaleWeightlistindex in femaleWeight:
femaleWeightlist.append(femaleWeightlistindex)
plt.scatter(maleHeightlist, maleWeightlist)
plt.scatter(femaleHeightlist, femaleWeightlist)
plt.xlabel('Height')
plt.ylabel('Weight')
plt.show()
###Output
_____no_output_____
###Markdown
Classification
###Code
persion_input = total_df[["Height", "Weight"]].to_numpy()
print(persion_input[:5])
persion_target = total_df['Gender'].to_numpy()
train_input, test_input, train_target, test_target = train_test_split(persion_input, persion_target, random_state = 200)
ss = StandardScaler()
ss.fit(train_input)
train_scaled = ss.transform(train_input)
test_scaled = ss.transform(test_input)
lr = LogisticRegression()
# note: the model is fit on the raw (unscaled) inputs; train_scaled/test_scaled above are not used here
lr.fit(train_input, train_target)
print(lr.predict(train_input[:5]))
print(lr.predict_proba(train_input[:5]))
print(lr.coef_, lr.intercept_)
print(lr.score(test_input, test_target))
###Output
0.9204
###Markdown
Saving the model
###Code
import joblib
joblib.dump(lr, "my_model.pkl")
my_model_loaded = joblib.load("/content/my_model.pkl")
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/mravanba/comp551-notebooks/blob/master/LogisticRegression.ipynb) Logistic RegressionIn logistic regression we perform binary classification by learning a function of the form $f_w(x) = \sigma(x^\top w)$. Here $x,w \in \mathbb{R}^D$, where $D$ is the number of features as before. $\sigma(z) = \frac{1}{1+e^{-z}}$ is the logistic function. Let's plot this function below
###Code
import numpy as np
#%matplotlib notebook
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.debugger import set_trace
import warnings
warnings.filterwarnings('ignore')
logistic = lambda z: 1./ (1 + np.exp(-z)) #logistic function
z = np.linspace(-10,10,100)
plt.plot(z, logistic(z))
plt.title('logistic function')
# shape bookkeeping:
# logistic: x [N,D], w [D]   -> logistic(x @ w)  has shape [N]
# softmax:  x [N,D], w [D,C] -> softmax(x @ w)   has shape [N,C]  (maps R^D -> R^C)
def softmax(logits, eps=1e-12):
    logits = logits - np.max(logits, axis=1, keepdims=True)  # subtract row max for numerical stability
    exp_logits = np.exp(logits)
    return exp_logits / (np.sum(exp_logits, axis=1, keepdims=True) + eps)
###Output
_____no_output_____
###Markdown
Cost functionTo fit our model $f_w$ to the data $\mathcal{D} = \{x^{(1)}, \ldots, x^{(N)}\}$, we maximize the **logarithm of the conditional likelihood**:$$\ell(w; \mathcal{D}) = \sum_n \log \mathrm{Bernoulli}(y^{(n)} | \sigma({x^{(n)}}^\top w)) = \sum_n y^{(n)} \log \sigma({x^{(n)}}^\top w) + (1-y^{(n)}) \log (1-\sigma({x^{(n)}}^\top w))$$by substituting the definition of the logistic function in the equation above, and minimizing the **negative** of the log-likelihood, which is called the **cost function**, we get$$J(w) = \sum_n y^{(n)} \log(1+e^{-{x^{(n)}}^\top w}) + (1-y^{(n)}) \log(1+e^{{x^{(n)}}^\top w})$$In practice we use mean rather than sum over data points.
###Code
def cost_fn(x, y, w):
N, D = x.shape
z = np.dot(x, w)
J = np.mean(y * np.log1p(np.exp(-z)) + (1-y) * np.log1p(np.exp(z))) #log1p calculates log(1+x) to remove floating point inaccuracies
return J
###Output
_____no_output_____
###Markdown
Minimizing the cost using gradient descentTo minimize the cost we use gradient descent: start from some initial assignment to the parameters $w$, and at each iteration take a small step in the opposite direction of the *gradient*. The gradient of the cost function above is given by:$$\frac{\partial}{\partial w_d} J(w) =\sum_n - y^{(n)} x^{(n)}_d \frac{e^{-w^\top x^{(n)}}}{1 + e^{-w^\top x^{(n)}}} +x^{(n)}_d (1- y^{(n)}) \frac{e^{w^\top x^{(n)}}}{1 + e^{w^\top x^{(n)}}} = \sum_n - x^{(n)}_d y^{(n)} (1-\hat{y}^{(n)})+ x^{(n)}_d (1- y^{(n)}) \hat{y}^{(n)} = \sum_n x^{(n)}_d (\hat{y}^{(n)} - y^{(n)}) $$Since in practice we divide the cost by $N$, we have to do the same for the gradient; see the implementation below.
###Code
def gradient(self, x, y):
N,D = x.shape
yh = logistic(np.dot(x, self.w)) # predictions size N
grad = np.dot(x.T, yh - y)/N # divide by N because cost is mean over N points
return grad # size D
###Output
_____no_output_____
###Markdown
Logistic regression classNow we are ready to implement the logistic regression class with the usual `fit` and `predict` methods. Here, the `fit` method implements gradient descent.
###Code
class LogisticRegression:
def __init__(self, add_bias=True, learning_rate=.1, epsilon=1e-4, max_iters=1e5, verbose=False):
self.add_bias = add_bias
self.learning_rate = learning_rate
self.epsilon = epsilon #to get the tolerance for the norm of gradients
self.max_iters = max_iters #maximum number of iteration of gradient descent
self.verbose = verbose
def fit(self, x, y):
if x.ndim == 1:
x = x[:, None]
if self.add_bias:
N = x.shape[0]
x = np.column_stack([x,np.ones(N)])
N,D = x.shape
self.w = np.zeros(D)
g = np.inf
t = 0
# the code snippet below is for gradient descent
while np.linalg.norm(g) > self.epsilon and t < self.max_iters:
g = self.gradient(x, y)
self.w = self.w - self.learning_rate * g
t += 1
if self.verbose:
print(f'terminated after {t} iterations, with norm of the gradient equal to {np.linalg.norm(g)}')
print(f'the weight found: {self.w}')
return self
def predict(self, x):
if x.ndim == 1:
x = x[:, None]
Nt = x.shape[0]
if self.add_bias:
x = np.column_stack([x,np.ones(Nt)])
yh = logistic(np.dot(x,self.w)) #predict output
return yh
LogisticRegression.gradient = gradient #initialize the gradient method of the LogisticRegression class with gradient function
###Output
_____no_output_____
###Markdown
Toy experiment fit this linear model to toy data with $x \in \Re^1$ + a bias parameter
###Code
N = 50
x = np.linspace(-5,5, N)
y = ( x < 2).astype(int) #generate synthetic data
model = LogisticRegression(verbose=True, )
yh = model.fit(x,y).predict(x)
plt.plot(x, y, '.', label='dataset')
plt.plot(x, yh, 'g', alpha=.5, label='predictions')
plt.xlabel('x')
plt.ylabel(r'$y$')
plt.legend()
plt.show()
###Output
terminated after 100000 iterations, with norm of the gradient equal to 0.0007886436933334241
the weight found: [-9.96926826 20.27319341]
###Markdown
we see that the model successfully fits the training data. If we run the optimization for long enough, the weights will grow large (in absolute value) so as to push the predicted probabilities of the data points close to the decision boundary (x=2) towards zero and one. Weight SpaceSimilar to what we did for linear regression, we plot the *cost* of logistic regression as a function of the model parameters (weights), and show the correspondence between different weights having different costs and their fits. The `plot_contour` is the same helper function we used for plotting the cost function for linear regression.
###Code
import itertools
def plot_contour(f, x1bound, x2bound, resolution, ax):
x1range = np.linspace(x1bound[0], x1bound[1], resolution)
x2range = np.linspace(x2bound[0], x2bound[1], resolution)
xg, yg = np.meshgrid(x1range, x2range)
zg = np.zeros_like(xg)
for i,j in itertools.product(range(resolution), range(resolution)):
zg[i,j] = f([xg[i,j], yg[i,j]])
ax.contour(xg, yg, zg, 100)
return ax
###Output
_____no_output_____
###Markdown
Now let's define the cost function for the logistic regression example above, and visualize the cost and the fit of various models (parameters).
###Code
x_plus_bias = np.column_stack([x,np.ones(x.shape[0])])
cost_w = lambda param: cost_fn(x_plus_bias, y, param) #define the cost just as a function of parameters
model_list = [(-10, 20), (-2, 2), (3,-3), (4,-4)]
fig, axes = plt.subplots(ncols=2, nrows=1, constrained_layout=True, figsize=(10, 5))
plot_contour(cost_w, [-50,30], [-10,50], 50, axes[0])
colors = ['r','g', 'b', 'k']
for i, w in enumerate(model_list):
axes[0].plot(w[0], w[1], 'x'+colors[i])
axes[1].plot(x, y, '.')
axes[1].plot(x, logistic(w[1] + np.dot(w[0], x)), '-'+colors[i], alpha=.5)
axes[0].set_xlabel(r'$w_1$')
axes[0].set_ylabel(r'$w_0$')
axes[0].set_title('weight space')
axes[1].set_xlabel('x')
axes[1].set_ylabel(r'$y=xw_1 + w_0$')
axes[1].set_title('data space')
plt.show()
###Output
_____no_output_____
###Markdown
Iris datasetLet's visualize class probabilities for D=2 (plus a bias). To be able to use logistic regression we choose two of the three classes in the Iris dataset.
###Code
from sklearn import datasets
dataset = datasets.load_iris()
x, y = dataset['data'][:,:2], dataset['target']
x, y = x[y < 2], y[y< 2] # we only take the data of class 0 and 1
model = LogisticRegression()
yh = model.fit(x,y).predict(x)
x0v = np.linspace(np.min(x[:,0]), np.max(x[:,0]), 200)
x1v = np.linspace(np.min(x[:,1]), np.max(x[:,1]), 200)
x0,x1 = np.meshgrid(x0v, x1v)
x_all = np.vstack((x0.ravel(),x1.ravel())).T
yh_all = model.predict(x_all)
plt.scatter(x[:,0], x[:,1], c=yh, marker='o', alpha=1)
plt.scatter(x_all[:,0], x_all[:,1], c=yh_all, marker='.', alpha=.05)
plt.ylabel('sepal width')
plt.xlabel('sepal length')
plt.title('class probabilities (colors)')
plt.show()
###Output
_____no_output_____
###Markdown
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
digits = load_digits()
dir(digits)
digits.data[0]
plt.gray()
for i in range(5):
plt.matshow(digits.images[i])
digits.target[0:5]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test=train_test_split(digits.data,digits.target,test_size=0.2,random_state=42)
len(X_train)
len(X_test)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train,y_train)
model.score(X_test,y_test)
plt.matshow(digits.images[60])
digits.target[60]
model.predict([digits.data[60]])
model.predict(digits.data[0:5])
y_predict = model.predict(X_test)
from sklearn.metrics import confusion_matrix
con = confusion_matrix(y_test,y_predict)
con
import seaborn as sn
plt.figure(figsize=(10,7))
sn.heatmap(con,annot=True)
plt.xlabel('Predicted Value')
plt.ylabel('Truth')
###Output
_____no_output_____
###Markdown
**Logistic Regression**UB person number: 50425014UB IT Name: paravamu
###Code
#Setting Directory
from google.colab import drive
drive.mount('/content/drive/')
#remove these lines and just upload the dataset
#Importing libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Attach your drive to access the dataset
%cd drive/MyDrive/Colab\ Notebooks
#Getting dataset and storing it in the form of pandas dataframe
#Mention the path with the file name to test the code out
dataset = pd.read_csv("diabetes.csv",engine='python');
dataset.head()
#To view count, mean, standard Devitation... of the dataset
dataset.describe()
#Instead of randomly picking the train, validate and test, we use
#random seed to pick the same train, test and validate
np.random.seed(72)
#Splitting the dataset to train - 60%, validate - 20%, test - 20%
train, validate, test = np.split(dataset.sample(frac=1),[int(.6*len(dataset)), int(.8*len(dataset))])
#Storing the first 8 columns to train_X and saving last column to train_Y
train_X = train.iloc[:, 0:8]
train_y = train.iloc[:, -1]
#Storing the first 8 columns to test_X and saving last column to test_Y
test_X = test.iloc[:, 0:8]
test_Y = test.iloc[:, -1]
#Storing the first 8 columns to validation_X and
#Saving last column to validation_Y
validation_X = validate.iloc[:, 0:8]
validation_y = validate.iloc[:, -1]
#Visualizing the shape of all the train,test and validation
print(train_X.shape)
print(train_y.shape)
print(test_X.shape)
print(test_Y.shape)
print(validation_X.shape)
print(validation_y.shape)
#Scaling the values from 0 to 1, normalazing the train_X, test_X and validation_x
x_min = train_X.min(axis=0)
x_max = train_X.max(axis=0)
test_X_min = test_X.min(axis=0)
test_X_max = test_X.max(axis=0)
validation_X_min = validation_X.min(axis=0)
validation_X_max = validation_X.max(axis=0)
# Centralizing data to range within minimum and maximum
# train_X = (train_X - x_min) / (x_max - x_min)
# test_X = (test_X - test_X_min) / (test_X_max - test_X_min)
# validation_X = (validation_X - validation_X_min) / (validation_X_max - validation_X_min)
# normalizing data to unit std and zero mean - follows normal distribution (z - score normalisation)
train_X = (train_X - train_X.mean()) / train_X.std()
test_X = (test_X - test_X.mean()) / test_X.std()
validation_X = (validation_X - validation_X.mean()) / validation_X.std()
# The sigmoid function is the hypothesis representation
def sigmoid(z):
return 1.0/(1 + np.exp(-z))
#Logistic Regression cost function
def lossFunction(y_true, y_pred,eps=1e-15):
"""
Cost function is Binary Cross-Entropy (BCE)
"""
# bce (added eps to avoid nan during log calculation)
    loss = (y_true * np.log(y_pred + eps)) + ((1 - y_true) * np.log(1 - y_pred + eps))
return -loss.mean(axis=0)
def gradients(X, y, y_Pred):
#To get the row size, aka training examples
m = X.shape[0]
# Gradient of loss with respect to weights.
dw = np.dot(X.T, (y_Pred - y))/m
# (1/m) * (X transpose) dot product (sigmoid output - actual y value)
# Gradient of loss w.r.t bias.
db = np.sum((y_Pred - y))/m
# (1/m) * (sum of column sigmoid output - y)
return dw, db
def accuracy(y, y_Pred):
"""
Calculates accuracy between target values and predicted values
"""
threshold=0.5
return np.sum(y == (y_Pred >= threshold)) / len(y) *100
def train(x, y, epochs, lr):
m, n = x.shape
#getting row size and column size
# Initializing weights and bias to randomized value.
w = np.random.randn(n,1)
b = 0
# Reshaping y.
y = y.values.reshape(m,1)
# Empty list to store losses and accuracy and plotting it
losses = []
accuracyValues = []
for epoch in range(epochs):
y_Pred = sigmoid(np.dot(x, w) + b)
dw, db = gradients(x, y, y_Pred)
w -= lr*dw
b -= lr*db
l = lossFunction(y, sigmoid(np.dot(x, w) + b))
a = accuracy(y,np.round(y_Pred))
losses.append(l)
accuracyValues.append(a)
plt.plot(losses)
plt.show()
print("Losses")
plt.plot(accuracyValues)
plt.show()
print("Accuracy")
return w, b, losses
def makePrediction(x, w, b):
# Calculating predictions/y_pred.
preds = sigmoid(np.dot(x, w) + b)
# Empty List to store predictions.
pred_class = []
    # if y_pred > 0.5 --> classify as 1
    # if y_pred <= 0.5 --> classify as 0
pred_class = [1 if i > 0.5 else 0 for i in preds]
return np.array(pred_class)
# Training
w, b, l = train(train_X, train_y, epochs=10000, lr=0.1)
print(f"The accuracy of the train set is {accuracy(train_y, y_Pred=makePrediction(train_X, w, b)):.2f}%")
print(f"The accuracy of the test set is {accuracy(test_Y, y_Pred=makePrediction(test_X, w, b)):.2f}%")
print(f"The accuracy of the validation set is {accuracy(validation_y, y_Pred=makePrediction(validation_X, w, b)):.2f}%")
###Output
The accuracy of the validation set is 75.32%
###Markdown
Using Neural Network
###Code
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Dropout, BatchNormalization
from tensorflow.keras import Sequential, Model
#Using 1 input layer and 1 output layer and two hidden layers
def get_tf_nn(features):
#Input layer is nothing but the features itself
i = Input((features, ))
#The relu activation finds Max(0,x)
#Second layer, Hidden Layer with relu activation
x = Dense(256, activation='relu')(i)
#Third Layer, Hidden layer with relu activation
x = Dense(512, activation='relu')(x)
#Output layer with sigmoid activation
o = Dense(1, activation='sigmoid')(x)
#Passing the input layer and output layer to Model
model = Model(inputs=i, outputs=o)
#Compiling our model with binary crossentropy
#Using Adam optimizer, which is better then SGD optimizer and getting the accuracy metrics
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), metrics=['accuracy'])
return model
nn = get_tf_nn(train_X.shape[1])
print(nn.summary())
h = nn.fit(train_X, train_y, validation_data=(validation_X, validation_y), epochs=15, batch_size=16)
plt.plot(h.history['loss'], label='Loss')
plt.plot(h.history['val_loss'], label='Validation Loss')
plt.title('BCE Loss')
plt.legend()
plt.show()
plt.plot(h.history['accuracy'], label='Accuracy')
plt.plot(h.history['val_accuracy'], label='Validation Accuracy')
plt.title('Accuracy')
plt.legend()
plt.show()
train_pred = nn.evaluate(train_X, train_y)
val_pred = nn.evaluate(validation_X, validation_y)
test_pred = nn.evaluate(test_X, test_Y)
###Output
15/15 [==============================] - 0s 2ms/step - loss: 0.4189 - accuracy: 0.8087
5/5 [==============================] - 0s 2ms/step - loss: 0.4859 - accuracy: 0.7597
5/5 [==============================] - 0s 3ms/step - loss: 0.4703 - accuracy: 0.7468
###Markdown
Using dropout regularizer
###Code
def get_tf_nn(features):
i = Input((features, ))
x = Dense(256, activation='relu')(i)
#Dropout Sets input units to zero while training to reduce overfitting
x = Dropout(0.4)(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.4)(x)
o = Dense(1, activation='sigmoid')(x)
#Passing the input layer and output layer to Model
model = Model(inputs=i, outputs=o)
#Compiling our model with binary crossentropy
#Using Adam optimizer, which is better then SGD optimizer and getting the accuracy metrics
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), metrics=['accuracy'])
return model
nn = get_tf_nn(train_X.shape[1])
print(nn.summary())
h = nn.fit(train_X, train_y, validation_data=(validation_X, validation_y), epochs=15, batch_size=16)
plt.plot(h.history['loss'], label='Loss')
plt.plot(h.history['val_loss'], label='Validation Loss')
plt.title('BCE Loss')
plt.legend()
plt.show()
plt.plot(h.history['accuracy'], label='Accuracy')
plt.plot(h.history['val_accuracy'], label='Validation Accuracy')
plt.title('Accuracy')
plt.legend()
plt.show()
train_pred = nn.evaluate(train_X, train_y)
test_pred = nn.evaluate(test_X, test_Y)
val_pred = nn.evaluate(validation_X, validation_y)
###Output
15/15 [==============================] - 0s 2ms/step - loss: 0.4449 - accuracy: 0.7913
5/5 [==============================] - 0s 2ms/step - loss: 0.4661 - accuracy: 0.7403
5/5 [==============================] - 0s 3ms/step - loss: 0.4776 - accuracy: 0.7792
###Markdown
Using L1 Regularizer
###Code
def get_tf_nn(features):
i = Input((features, ))
x = Dense(256,kernel_regularizer=tf.keras.regularizers.L1(l1=1e-5), activation='relu')(i)
x = Dense(512,kernel_regularizer=tf.keras.regularizers.L1(l1=1e-5), activation='relu')(x)
o = Dense(1,kernel_regularizer=tf.keras.regularizers.L1(l1=1e-5), activation='sigmoid')(x)
model = Model(inputs=i, outputs=o)
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), metrics=['accuracy'])
return model
nn = get_tf_nn(train_X.shape[1])
print(nn.summary())
h = nn.fit(train_X, train_y, validation_data=(validation_X, validation_y), epochs=15, batch_size=16)
plt.plot(h.history['loss'], label='Loss')
plt.plot(h.history['val_loss'], label='Validation Loss')
plt.title('BCE Loss')
plt.legend()
plt.show()
plt.plot(h.history['accuracy'], label='Accuracy')
plt.plot(h.history['val_accuracy'], label='Validation Accuracy')
plt.title('Accuracy')
plt.legend()
plt.show()
train_pred = nn.evaluate(train_X, train_y)
val_pred = nn.evaluate(validation_X, validation_y)
test_pred = nn.evaluate(test_X, test_Y)
###Output
15/15 [==============================] - 0s 2ms/step - loss: 0.4235 - accuracy: 0.8152
5/5 [==============================] - 0s 3ms/step - loss: 0.4828 - accuracy: 0.7532
5/5 [==============================] - 0s 4ms/step - loss: 0.4753 - accuracy: 0.7468
###Markdown
Using L2 Regularizer
###Code
def get_tf_nn(features):
i = Input((features, ))
x = Dense(256,kernel_regularizer=tf.keras.regularizers.L2(l2=1e-5), activation='relu')(i)
x = Dense(512,kernel_regularizer=tf.keras.regularizers.L2(l2=1e-5), activation='relu')(x)
o = Dense(1,kernel_regularizer=tf.keras.regularizers.L2(l2=1e-5), activation='sigmoid')(x)
model = Model(inputs=i, outputs=o)
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), metrics=['accuracy'])
return model
nn = get_tf_nn(train_X.shape[1])
print(nn.summary())
h = nn.fit(train_X, train_y, validation_data=(validation_X, validation_y), epochs=15, batch_size=16)
plt.plot(h.history['loss'], label='Loss')
plt.plot(h.history['val_loss'], label='Validation Loss')
plt.title('BCE Loss')
plt.legend()
plt.show()
plt.plot(h.history['accuracy'], label='Accuracy')
plt.plot(h.history['val_accuracy'], label='Validation Accuracy')
plt.title('Accuracy')
plt.legend()
plt.show()
train_pred = nn.evaluate(train_X, train_y)
test_pred = nn.evaluate(test_X, test_Y)
val_pred = nn.evaluate(validation_X, validation_y)
###Output
15/15 [==============================] - 0s 2ms/step - loss: 0.4228 - accuracy: 0.8043
5/5 [==============================] - 0s 3ms/step - loss: 0.4732 - accuracy: 0.7597
5/5 [==============================] - 0s 3ms/step - loss: 0.4815 - accuracy: 0.7532
###Markdown
Logistic Regression
###Code
from logistic_regression import *
###Output
_____no_output_____
###Markdown
1. Data We can see it as a cloud of points
###Code
x, y = make_unbalanced_dataset(1000, 20211017)
s = plt.scatter(x[:,0], x[:,1], c=y, cmap = 'bwr', alpha=.61, marker='+')
plt.title('Observed points')
plt.xlabel("Observed variable 1 ($X_1$)")
plt.ylabel("Observer variable 2 ($X_2$)")
h,l = s.legend_elements()
plt.legend(h,("Positive", "Negative"))
plt.show()
plt.clf()
###Output
_____no_output_____
###Markdown
... Or in a more mathematical way: by plotting them like this, we can easily see that a sigmoid function can somewhat "summarize" the following points:
###Code
plt.subplot(2,1,1)
plt.scatter(x[:,0], y, label="Observed variable 1 ($X_1$)", alpha=.6, c="orange")
plt.legend(loc="center right")
plt.ylabel("Value of Y")
plt.subplot(2,1,2)
plt.scatter(x[:,1], y, label="Observed variable 2 ($X_2$)", alpha=.6, c="blue")
plt.legend(loc="center right")
plt.ylabel("Value of Y")
plt.show()
plt.clf()
###Output
_____no_output_____
###Markdown
2. Logistic regression
###Code
estimator = LogisticRegressionClassifier(learning_rate=.1, n_iter=400)
estimator.fit(x, y)
###Output
_____no_output_____
###Markdown
Visualizing the model
###Code
resolution = 300
fig, ax = plt.subplots(figsize=(9, 6))
ax.scatter(x[:, 0], x[:, 1], c=y, s=50, edgecolor='k')
# plot limits
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# meshgrid
x1 = np.linspace(xlim[0], xlim[1], resolution)
x2 = np.linspace(ylim[0], ylim[1], resolution)
X1, X2 = np.meshgrid(x1, x2)
# stack the two variables
XX = np.vstack((X1.ravel(), X2.ravel())).T
# Predictions
Z = estimator.predict(XX)
Z = Z.reshape((resolution, resolution))
ax.pcolormesh(X1, X2, Z, shading='nearest', zorder=0, alpha=0.3)
ax.contour(X1, X2, Z, colors='g')
###Output
_____no_output_____
###Markdown
Visualizing the improvement across iterations
###Code
# Improvement over iterations
plt.figure(figsize=(9, 6))
plt.plot(estimator.loss_history)
plt.xlabel('n_iteration')
plt.ylabel('Log_loss')
plt.title('Evolution of errors')
###Output
_____no_output_____
###Markdown
3. Testing the accuracy through 5 generations
###Code
errors_count = []
for i in range(0,5):
# Generating a dataset
x, y = make_unbalanced_dataset(3000, 20211017+i)
# Fitting it in a LR model (the first 1k data)
estimator = LogisticRegressionClassifier(learning_rate=.1, n_iter=400)
estimator.fit(x[:1000], y[:1000])
# Getting predictions for the remaining held-out samples
y_predicted = estimator.predict(x[2000:])
y_predicted = map(lambda x:1 if x else 0, y_predicted)
errors_count.append(np.sum(np.absolute(np.subtract(y[2000:], np.array(list(y_predicted))))))
print("mean:", np.array(errors_count).mean())
print("sd:", np.array(errors_count).std())
###Output
mean: 83.6
sd: 8.138795979750322
###Markdown
Dependencies
###Code
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFECV
import joblib
###Output
_____no_output_____
###Markdown
Process Data Data Cleanup
###Code
data = pd.read_csv("../Resources/exoplanet_data.csv")
# Drop null columns
data = data.dropna(axis='columns', how='all')
# Drop null rows
data = data.dropna()
# Convert dtypes of int64 to float64
for column, content in data.items():
if data[column].dtype == 'int64':
data = data.astype({column: 'float64'})
###Output
_____no_output_____
###Markdown
Pre-processing
###Code
# Assign data to X and y
X = data.drop("koi_disposition", axis=1)
y = data["koi_disposition"]
# Split data into training and testing groups
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, stratify=y)
# Scale X values
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Build the Model Train the Model
###Code
model_1 = LogisticRegression(solver='newton-cg', multi_class='auto')
model_1.fit(X_train_scaled, y_train)
model_1_training_score = round(model_1.score(X_train_scaled, y_train)*100,3)
base_accuracy = round(model_1.score(X_test_scaled, y_test)*100,3)
print(f"Training Data Score: {model_1_training_score} %")
print(f"Testing Data Score: {base_accuracy} %")
###Output
Training Data Score: 85.504 %
Testing Data Score: 86.213 %
###Markdown
Select Features
###Code
# Evaluate features
feature_names = X.columns.tolist()
selector = RFECV(estimator=model_1, cv=5, step=1)
_ = selector.fit(X_train_scaled, y_train)
# Determine which features ought to be kept
preSelected_features = sorted(zip(selector.ranking_, feature_names))
ranked_features = pd.DataFrame(preSelected_features, columns=['Ranking', 'Feature'])
ranked_features = ranked_features.set_index('Feature')
ranked_features
# Remove features with Ranking > 16
selected_features = []
for tup in preSelected_features:
if tup[0] < 17:
selected_features.append(tup[1])
# Use new data for all subsequent models
## Assign new data to X
X_train_select = X_train[selected_features]
X_test_select = X_test[selected_features]
X_scaler = MinMaxScaler().fit(X_train_select)
X_train_scaled = X_scaler.transform(X_train_select)
X_test_scaled = X_scaler.transform(X_test_select)
## Train new model
model_2 = LogisticRegression(solver='newton-cg', multi_class='auto')
model_2.fit(X_train_scaled, y_train)
model_2_training_score = round(model_2.score(X_train_scaled, y_train)*100,3)
select_features_accuracy = round(model_2.score(X_test_scaled, y_test)*100,3)
print(f"Training Data Score: {model_2_training_score} %")
print(f"Testing Data Score: {select_features_accuracy} %")
###Output
Training Data Score: 85.504 %
Testing Data Score: 86.213 %
###Markdown
Model Tuning
###Code
# Create the GridSearchCV model
model_3 = LogisticRegression(solver='newton-cg', multi_class='auto')
param_grid = {
'C': np.logspace(0, 4, 10),
'penalty': ['l2']
}
grid = GridSearchCV(model_3, param_grid, cv=5, verbose=0)
# Train the model with GridSearch
_ = grid.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
Train Tuned Model
###Code
# Tuned parameters
C = grid.best_params_['C']
penalty = grid.best_params_['penalty']
# Tuned model
tuned_model = LogisticRegression(solver='newton-cg', multi_class='auto',
C=C, penalty=penalty)
tuned_model.fit(X_train_scaled, y_train)
model_3_training_score = round(tuned_model.score(X_train_scaled, y_train)*100,3)
tuned_accuracy = round(tuned_model.score(X_test_scaled, y_test)*100,3)
print(f"Training Data Score: {model_3_training_score} %")
print(f"Testing Data Score: {tuned_accuracy} %")
###Output
Training Data Score: 88.709 %
Testing Data Score: 89.474 %
###Markdown
Model Predictions and Evaluations Predictions
###Code
predictions = tuned_model.predict(X_test_scaled)
classifications = y_test.unique().tolist()
prediction_actual = {
'Actual': y_test,
'Prediction': predictions
}
PA_df = pd.DataFrame(prediction_actual)
PA_df = PA_df.set_index('Actual').reset_index()
PA_df.head(15)
###Output
_____no_output_____
###Markdown
Evaluations
###Code
evaluations = {'': ['Base Model', 'Select Features Model', 'Tuned Model'],
'Accuracy': [f"{base_accuracy}%", f"{select_features_accuracy}%", f"{tuned_accuracy}%"]}
evaluations_df = pd.DataFrame(evaluations)
evaluations_df = evaluations_df.set_index('')
evaluations_df.to_csv('../Resources/LogisticRegression_eval.csv')
evaluations_df
###Output
_____no_output_____
###Markdown
Save Model
###Code
filename = '../Models/OtherModel_LogisticRegression.sav'
_ = joblib.dump(tuned_model, filename)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
import pandas as pd
import numpy as np
import statsmodels.api as sm
# download the Smarket.csv datafile
df = pd.read_csv('https://raw.githubusercontent.com/JWarmenhoven/ISLR-python/master/Notebooks/Data/Smarket.csv', index_col=1, parse_dates=True)
df.head()
###Output
_____no_output_____
###Markdown
In this lab, we fit a logistic regression model in order to predict Direction using Lag1 through Lag5 and Volume. We build our model using the glm() function, which is part of the formula submodule of statsmodels.
###Code
import statsmodels.formula.api as smf
###Output
_____no_output_____
###Markdown
We use the formula 'Direction ~ Lag1 + Lag2 + ... + Lag5 + Volume'.
###Code
formula = 'Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Lag5 + Volume'
model = smf.glm(formula=formula, data =df, family = sm.families.Binomial())
result = model.fit()
print(result.summary())
print("Coefficeients")
print(result.params)
print()
print("p-Values")
print(result.pvalues)
print()
print("Dependent variables")
print(result.model.endog_names)
###Output
Coefficients
Intercept 0.126000
Lag1 0.073074
Lag2 0.042301
Lag3 -0.011085
Lag4 -0.009359
Lag5 -0.010313
Volume -0.135441
dtype: float64
p-Values
Intercept 0.600700
Lag1 0.145232
Lag2 0.398352
Lag3 0.824334
Lag4 0.851445
Lag5 0.834998
Volume 0.392404
dtype: float64
Dependent variables
['Direction[Down]', 'Direction[Up]']
###Markdown
Predict function will predict the probability that the market will go down, given values of the predictors.
###Code
predictions = result.predict()
print(predictions[0:10])
print(np.column_stack((df["Direction"],
result.model.endog)))
predictions_nomial = ['Up' if x< 0.5 else 'Down' for x in predictions]
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(df['Direction'], predictions_nomial))
(507 + 145)/(145+457+141+507)
print(classification_report(df["Direction"],
predictions_nomial,
digits = 3))
#df.set_index("Year", inplace = True)
x_train = df.loc[:'2004',:]
y_train = df.loc[:'2004',['Direction']]
x_test = df.loc['2005':,:]
y_test = df.loc['2005':,['Direction']]
x_train.head()
model = smf.glm(formula, data=x_train,
family = sm.families.Binomial())
result = model.fit()
predictions = result.predict(x_test)
predictions_nominal = [ "Up" if x < 0.5 else "Down" for x in predictions]
print(classification_report(y_test,
predictions_nominal,
digits = 3))
###Output
precision recall f1-score support
Down 0.500 0.315 0.387 111
Up 0.582 0.752 0.656 141
accuracy 0.560 252
macro avg 0.541 0.534 0.522 252
weighted avg 0.546 0.560 0.538 252
###Markdown
The better model
###Code
formula = 'Direction ~ Lag1 + Lag2'
model = smf.glm(formula, data=x_train,
family = sm.families.Binomial()) # Write your code to fit the new model here
result = model.fit()
# -----------------------------------
# This will test your new model; you
# don't need to change anything below
# -----------------------------------
predictions = result.predict(x_test)
predictions_nominal = [ "Up" if x < 0.5 else "Down" for x in predictions]
print(classification_report(y_test, predictions_nominal, digits = 3))
print(result.predict(pd.DataFrame([[1.2, 1.1],
[1.5, -0.8]],
columns = ["Lag1","Lag2"])))
###Output
0 0.520854
1 0.503906
dtype: float64
###Markdown
Simple Logistic Regression The code written isn't optimized, as it isn't the goal of this notebook. The purpose of this notebook is to present to you an intuitive and easy way to understand how a simple logistic regression works.
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Logistic regression is one of the simplest classification algorithms. Classification algorithms are used to predict a label given certain data: they can be used to determine flower species (the famous Iris dataset), gender, survival... There are countless applications. We will look at the simplest form of logistic regression, using an example. We will try to predict whether a person will like chocolate or not depending on their love for fruit.
###Code
df2 = pd.DataFrame(np.random.randint(low=0, high=11, size=(101, 5)),\
index = ['person {}'.format(n) for n in range(101)],\
columns=['Banana','Orange', 'Strawberry', 'Apple', 'Passion Fruit'])
df3 = pd.DataFrame({'Chocolate': np.append(np.random.randint(2, size=100), '?')},\
index = ['person {}'.format(n) for n in range(101)])
df = df2.join(df3)
df.tail(10)
###Output
_____no_output_____
###Markdown
Alright. Here is our dataset. As we can see, the choice of fruit and chocolate varies among people. We would like to know whether person 100 in our survey will like chocolate or not. The structure of a logistic regression algorithm goes as follows: for each person, we multiply each attribute by a random weight, sum them up and transform the result into a number between 0 and 1. Then, we take each output, compute the error against the real value and adjust the random weights accordingly. Sounds a bit complicated without a graph, doesn't it? Let's look at the first person in our table. You don't need to look at the code below, just the figure.
###Code
from matplotlib.patches import Circle, Wedge, Polygon
from matplotlib.collections import PatchCollection
fig, ax = plt.subplots(figsize=(12,6))
x1 = 0.2
y1 = 0.8
r = 0.1
space = 0.6
patches = []
fruits = df.columns.tolist()[:-1]
constantline = y1-2*0.2
for n in range(5):
circle = Circle((x1*2+space, y1 - n*0.2), r);
patches.append(circle);
plt.text(x1*2-0.1, y1 - n*0.2, fruits[n], fontsize=12);
plt.text(x1*2+space-0.02, y1-n*0.2-0.01, df.iloc[0,n], color = 'w')
ax.plot([x1*2+space+r, x1*2+space+0.3], [y1 - n*0.2, constantline], color = 'b')
#Output node
constantline = y1-2*0.2
patches.append(Circle((x1*2+space+0.4, constantline), r))
#Transformation node
patches.append(Circle((x1*2+space+1.3, constantline), r))
ax.plot([x1*2+space+0.5, x1*2+space+1.2], [constantline, constantline], color = 'b')
plt.text(x1*2+space+0.55,constantline +0.05, 'logistic transformation');
p = PatchCollection(patches, alpha=1);
ax.add_collection(p);
#Person's name
plt.text(x1*2+0.5, 1, df.index[0]);
ax.set_xlim(xmin = 0, xmax = 2.5);
ax.set_ylim(ymin = -0.15, ymax = 1);
fs = 7
ax.annotate('Step 1', xy=(0.48, 0.01), xytext=(0.48, -0.1), xycoords='axes fraction',
fontsize=fs*1.5, ha='center', va='bottom',
bbox=dict(boxstyle='square', fc='white'),
arrowprops=dict(arrowstyle='-[, widthB=5, lengthB=1.3', lw=2.0))
ax.annotate('Step 2', xy=(0.75, 0.01), xytext=(0.75, -0.1), xycoords='axes fraction',
fontsize=fs*1.5, ha='center', va='bottom',
bbox=dict(boxstyle='square', fc='white'),
arrowprops=dict(arrowstyle='-[, widthB=11.2, lengthB=1.3', lw=2.0))
plt.axis('off');
plt.show();
###Output
_____no_output_____
###Markdown
All right! In this graph we have our inputs on the left and our outputs on the right. Logistic regression consists of two different steps. The first step is simply multiplying each attribute by a random weight generated by the computer. To do this, we need to vectorize the weights and our attributes, and then multiply one by the other. We denote the weight matrix by $\theta$, and the individual weight for attribute $j$ by $\theta_j$. If you haven't seen linear algebra, Khan Academy has an excellent video on it:https://www.khanacademy.org/math/linear-algebra/matrix-transformations/composition-of-transformations/v/linear-algebra-matrix-product-examples
###Code
#Creating random matrix
weights = np.matrix(np.random.rand(df.shape[1]-1, 1))
attributes = df.iloc[0,:-1].tolist()
#Multiplying each of them, outputing in summation
sol = attributes * weights
solution = sol[0,0]
###Output
_____no_output_____
###Markdown
Let's update our figure
###Code
from matplotlib.patches import Circle, Wedge, Polygon
from matplotlib.collections import PatchCollection
fig, ax = plt.subplots(figsize=(12,6))
x1 = 0.2
y1 = 0.8
r = 0.1
space = 0.6
patches = []
fruits = df.columns.tolist()[:-1]
constantline = y1-2*0.2
for n in range(5):
circle = Circle((x1*2+space, y1 - n*0.2), r);
patches.append(circle);
plt.text(x1*2-0.1, y1 - n*0.2, fruits[n], fontsize=12);
plt.text(x1*2+space-0.02, y1-n*0.2-0.01, df.iloc[0,n], color = 'w')
ax.plot([x1*2+space+r, x1*2+space+0.3], [y1 - n*0.2, constantline], color = 'b')
#Output node
constantline = y1-2*0.2
patches.append(Circle((x1*2+space+0.4, constantline), r))
#Transformation node
patches.append(Circle((x1*2+space+1.3, constantline), r))
ax.plot([x1*2+space+0.5, x1*2+space+1.2], [constantline, constantline], color = 'b')
plt.text(x1*2+space+0.55,constantline +0.05, 'logistic transformation');
p = PatchCollection(patches, alpha=1);
ax.add_collection(p);
#Person's name
plt.text(x1*2+0.5, 1, df.index[0]);
ax.set_xlim(xmin = 0, xmax = 2.5);
ax.set_ylim(ymin = -0.15, ymax = 1);
fs = 7
ax.annotate('Step 1', xy=(0.48, 0.01), xytext=(0.48, -0.1), xycoords='axes fraction',
fontsize=fs*1.5, ha='center', va='bottom',
bbox=dict(boxstyle='square', fc='white'),
arrowprops=dict(arrowstyle='-[, widthB=5, lengthB=1.3', lw=2.0))
ax.annotate('Step 2', xy=(0.75, 0.01), xytext=(0.75, -0.1), xycoords='axes fraction',
fontsize=fs*1.5, ha='center', va='bottom',
bbox=dict(boxstyle='square', fc='white'),
arrowprops=dict(arrowstyle='-[, widthB=11.2, lengthB=1.3', lw=2.0))
#Summation text
plt.text(x1*2+space+0.35, constantline, np.round(solution,2), color = 'w')
plt.axis('off');
plt.show();
###Output
_____no_output_____
###Markdown
Now we have our sum; however, our output should be either 0 or 1. Given that this intermediate step will never give us 0 or 1 exactly, we need to transform this number into another one. This can be achieved using a logistic function. The goal of a logistic function is to map a number to a value between 0 and 1. From there, we add a threshold value, where every number above the threshold is considered 1 and every number below it is considered 0. The most well-known logistic function is the sigmoid function.$$\sigma(x) = \frac{1}{1+e^{-x}}$$For reference, the function between -10 and 10 looks like this.
###Code
def sigmoid(x):
return 1/(1+np.exp(-x))
x = np.linspace(-10,10, 1000)
y = [sigmoid(number) for number in x]
plt.grid(True)
plt.plot(x,y);
###Output
_____no_output_____
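###Markdown
A tiny illustration (with purely illustrative input values) of how the sigmoid squashes arbitrary sums into the range (0, 1) before a threshold is applied:
###Code
# Assumes the sigmoid function defined above; the inputs are illustrative
for s in [-5, -1, 0, 1, 5]:
    print(s, '->', round(sigmoid(s), 3), '->', 1 if sigmoid(s) > 0.5 else 0)
###Output
_____no_output_____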
###Markdown
Now that we have our function, let's pass our solution variable to the function, and then select our threshold. For this example, let's set our threshold to 0.5
###Code
threshold = 0.5
output = sigmoid(solution)
print(1 if output > threshold else 0)
from matplotlib.patches import Circle, Wedge, Polygon
from matplotlib.collections import PatchCollection
fig, ax = plt.subplots(figsize=(12,6))
x1 = 0.2
y1 = 0.8
r = 0.1
space = 0.6
patches = []
fruits = df.columns.tolist()[:-1]
constantline = y1-2*0.2
for n in range(5):
circle = Circle((x1*2+space, y1 - n*0.2), r);
patches.append(circle);
plt.text(x1*2-0.1, y1 - n*0.2, fruits[n], fontsize=12);
plt.text(x1*2+space-0.02, y1-n*0.2-0.01, df.iloc[0,n], color = 'w')
ax.plot([x1*2+space+r, x1*2+space+0.3], [y1 - n*0.2, constantline], color = 'b')
#Output node
constantline = y1-2*0.2
patches.append(Circle((x1*2+space+0.4, constantline), r))
#Transformation node
patches.append(Circle((x1*2+space+1.3, constantline), r))
ax.plot([x1*2+space+0.5, x1*2+space+1.2], [constantline, constantline], color = 'b')
plt.text(x1*2+space+0.55,constantline +0.05, 'logistic transformation');
p = PatchCollection(patches, alpha=1);
ax.add_collection(p);
#Person's name
plt.text(x1*2+0.5, 1, df.index[0]);
ax.set_xlim(xmin = 0, xmax = 2.5);
ax.set_ylim(ymin = -0.15, ymax = 1);
fs = 7
ax.annotate('Step 1', xy=(0.48, 0.01), xytext=(0.48, -0.1), xycoords='axes fraction',
fontsize=fs*1.5, ha='center', va='bottom',
bbox=dict(boxstyle='square', fc='white'),
arrowprops=dict(arrowstyle='-[, widthB=5, lengthB=1.3', lw=2.0))
ax.annotate('Step 2', xy=(0.75, 0.01), xytext=(0.75, -0.1), xycoords='axes fraction',
fontsize=fs*1.5, ha='center', va='bottom',
bbox=dict(boxstyle='square', fc='white'),
arrowprops=dict(arrowstyle='-[, widthB=11.2, lengthB=1.3', lw=2.0))
#Summation text
plt.text(x1*2+space+0.35, constantline, np.round(solution,2), color = 'w')
#Output text
plt.text(x1*2+space+1.29, constantline, output, color = 'w')
plt.axis('off');
plt.show();
###Output
_____no_output_____
###Markdown
We are almost there! The only part we need to add is the adjustment to the weights, so that our algorithm learns. The error function that is typically used in logistic regression is:$$ Cost(\sigma(x^i), y^i) = - y^i \log(\sigma(x^i)) - (1-y^i)\log(1-\sigma(x^i))$$$$ J(\theta) = \frac{1}{m} \sum_{i=1}^{m} Cost(\sigma(x^i), y^i) $$Why this particular function is used is beyond the scope of this simple notebook. Let's calculate our cost.
###Code
def cost(theta,sol):
return -sol*np.log(theta)-(1-sol)*np.log(1-theta)
print(cost(output, int(df.iloc[0,-1])))
###Output
4.702733208034278
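###Markdown
The cell above computes the cost for a single person. As a side note, here is a minimal sketch (assuming `df`, `weights`, `sigmoid` and `cost` from the cells above; the names `X_all`, `y_all`, `probs` and `J` are introduced here purely for illustration) of how the average cost $J(\theta)$ over all surveyed people could be computed in one pass.
###Code
import numpy as np

# Assumes df, weights, sigmoid and cost from the cells above
X_all = np.matrix(df.iloc[:, :-1].values)             # everyone's attributes
y_all = df.iloc[:, -1].values.astype(int)             # everyone's chocolate label
probs = np.asarray(sigmoid(X_all * weights)).ravel()  # steps 1 and 2 for all rows
J = np.mean([cost(p, y) for p, y in zip(probs, y_all)])
print(J)
###Output
_____no_output_____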
###Markdown
Now that we have the cost, we need to calculate the adjustment for each weight. For this, we will look at how each weight influences the error. This method, called **Gradient Descent**, is the standard way of adjusting weights to reduce a differentiable error function, and it is used to train a wide range of machine learning models. For our weight $\theta_j$, the adjustment is given by$$\theta_j = \theta_j - \alpha\,\frac{\partial}{\partial \theta_j} J(\theta)$$where $\alpha$ is the learning rate. Let's compute the partial derivative of $J$ for just one example instead of all $m$ (this way we don't need to deal with the summation).$$\frac{\partial}{\partial \theta_j} J(\theta) = \frac{\partial}{\partial \theta_j} \left(-y\log(\sigma(x))-(1-y)\log(1-\sigma(x))\right)$$An important derivative to know (it follows from the chain rule, because the argument of $\sigma$ is the weighted sum $\sum_j \theta_j x_j$):$$\frac{\partial}{\partial \theta_j} \sigma(x) = x_j\,\sigma(x)\,(1-\sigma(x))$$Therefore:$$\frac{\partial}{\partial \theta_j} J(\theta) = -y\,\frac{x_j\,\sigma(x)\,(1-\sigma(x))}{\sigma(x)} - (1-y)\,\frac{-x_j\,\sigma(x)\,(1-\sigma(x))}{1-\sigma(x)}$$$$ \Leftrightarrow \frac{\partial}{\partial \theta_j} J(\theta) = -y\,x_j\,(1-\sigma(x)) - (1-y)\,(-x_j\,\sigma(x))$$$$ \Leftrightarrow \frac{\partial}{\partial \theta_j} J(\theta) = x_j\,\left[-y\,(1-\sigma(x)) +(1-y)\,\sigma(x)\right]$$$$ \Leftrightarrow \frac{\partial}{\partial \theta_j} J(\theta) = x_j\,(\sigma(x) - y)$$ We made it through the derivative! Now we just need to program it. Given that we have more than one weight, we will need to create an adjustment matrix.
###Code
def adjustment(adjustment, weight, label):
adjust = []
summation = attributes * weight
summa = summation[0,0]
for n in range(len(adjustment)):
error = adjustment[n]*(sigmoid(summa)-label)
adjust.append(error)
return np.reshape(np.array(adjust), weight.shape)
attributes = df.iloc[0,:-1].tolist()
adjusted = adjustment(attributes, weights, int(df.iloc[0,-1]))
print(adjusted)
###Output
[[0. ]
[8.91836593]
[4.95464774]
[3.96371819]
[9.90929548]]
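###Markdown
The `adjustment` function computes the gradient one attribute at a time. Equivalently (a sketch assuming `attributes`, `weights`, `sigmoid` and `df` from the cells above; `x_vec`, `pred` and `grad` are names introduced here), the whole adjustment is just the attribute vector scaled by the prediction error.
###Code
import numpy as np

# Assumes attributes, weights, sigmoid and df from the cells above
x_vec = np.array(attributes, dtype=float)                    # person 0's attributes
pred = sigmoid(float(x_vec @ np.asarray(weights).ravel()))   # steps 1 and 2
label = int(df.iloc[0, -1])
grad = ((pred - label) * x_vec).reshape(weights.shape)
print(grad)   # should match the loop-based `adjusted` above
###Output
_____no_output_____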
###Markdown
Now, our last step would be to adjust our weights accordingly
###Code
nweights = weights - adjusted
print(nweights)
###Output
[[ 0.67961253]
[-8.79373312]
[-4.90189842]
[-3.8275158 ]
[-9.6329585 ]]
###Markdown
And there we go! We have successfully adjusted our weights. Now, let's combine our code. We will iterate 3 times over the whole dataset. We must be careful not to include the person whose chocolate preference we want to predict.
###Code
#Graphing purposes (optional)
errors = []
position = []
avgpos = []
avgerror = []
m = 0
def cost(theta,sol):
return -sol*np.log(theta)-(1-sol)*np.log(1-theta)
def sigmoid(x):
return 1/(1+np.exp(-x))
def adjustment(adjustment, weight, label):
adjust = []
summation = attributes * weight
summa = summation[0,0]
for n in range(len(adjustment)):
error = adjustment[n]*(sigmoid(summa)-label)
adjust.append(error)
return np.reshape(np.array(adjust), weight.shape)
for _ in range(3):
for n in range(df.shape[0]-1):
#Step 1
attributes = df.iloc[n,:-1].tolist()
        label = int(df.iloc[n,-1])
sol = attributes * weights
solution = sol[0,0]
#Step 2
output = sigmoid(solution)
#Error calculation
error = cost(output, int(df.iloc[n,-1]))
#Adjustment calculation
adjusted = adjustment(attributes, weights, label)
#Adjust
weights -= adjusted
#Graphing purposes (optional)
errors.append(error)
avgerror.append(np.average(errors))
position.append(m)
m+=1
plt.plot(position, errors, label='Error');
plt.plot(position, avgerror, label = 'Average Error');
plt.legend();
###Output
_____no_output_____
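###Markdown
Having adjusted the weights over the three passes, we can finally come back to the original question: will the last person in the survey (person 100 in the introduction) like chocolate? A minimal sketch, assuming `df`, `weights`, `sigmoid` and `threshold` from the cells above and that the last row of `df` is the person we left out of training.
###Code
# Assumes df, weights, sigmoid and threshold from the cells above
last_attributes = df.iloc[-1, :-1].tolist()
last_sum = (last_attributes * weights)[0, 0]   # step 1: weighted sum
last_output = sigmoid(last_sum)                # step 2: logistic transformation
print(last_output)
print("Likes chocolate" if last_output > threshold else "Does not like chocolate")
###Output
_____no_output_____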
###Markdown
Import relevant libraries
###Code
import pandas as pd
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
###Output
_____no_output_____
###Markdown
Load the data
###Code
raw_data = pd.read_csv("2.01. Admittance.csv")
raw_data.head()
data = raw_data.copy()
data["Admitted"] = data["Admitted"].map({"Yes":1, "No":0})
data.head()
###Output
_____no_output_____
###Markdown
Declare the dependent and independent variables
###Code
y = data["Admitted"]
x1 = data["SAT"]
###Output
_____no_output_____
###Markdown
Regression
###Code
x = sm.add_constant(x1)
reg_log = sm.Logit(y,x)
results_log = reg_log.fit()
results_log.summary()
x0 = np.ones(168)
reg_log = sm.Logit(y, x0)
results_log = reg_log.fit()
results_log.summary()
###Output
Optimization terminated successfully.
Current function value: 0.686044
Iterations 4
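###Markdown
As a side note, a minimal sketch (assuming `data` as loaded above; `x_sat` and `fit` are names introduced here) of how the SAT model could be used to obtain predicted admission probabilities and an odds-ratio reading of the coefficient.
###Code
import numpy as np
import statsmodels.api as sm

# Assumes `data` with 'SAT' and 'Admitted' columns as loaded above
x_sat = sm.add_constant(data["SAT"])
fit = sm.Logit(data["Admitted"], x_sat).fit(disp=0)

print(fit.predict(x_sat)[:5])        # predicted probability of admission
print(np.exp(fit.params["SAT"]))     # odds multiplier per extra SAT point
###Output
_____no_output_____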
###Markdown
Logistic Regression. References: https://ml-cheatsheet.readthedocs.io/en/latest/logistic_regression.html and http://scikit-learn.org/stable/auto_examples/datasets/plot_random_dataset.html#sphx-glr-auto-examples-datasets-plot-random-dataset-py Binary Logistic Regression
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_classification
from mpl_toolkits.mplot3d import Axes3D
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score, f1_score
# from sklearn.metrics import balanced_accuracy_score
%matplotlib inline
n = 1000
# n_classes = 2 by default
features, binary_class = make_classification(n_samples=n, n_features=2,
# weights=[.4, .6], # weights per class
n_informative=1, n_redundant=0, n_clusters_per_class=1)
# Create a dataframe of the features and add the binary class (label, output)
df = pd.DataFrame(features)
df.columns = ['Feature_1', 'Feature_2']
df['Binary_Class'] = binary_class
X = df.drop('Binary_Class', axis=1)
y = df['Binary_Class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
model = LogisticRegression()
model.fit(X_train,y_train)
predictions = model.predict(X_test)
print('Confusion Matrix')
print(confusion_matrix(y_test,predictions))
tn, fp, fn, tp = confusion_matrix(y_test,predictions).ravel()
print(tn, fp, fn, tp)
print()
print('Classification Report')
print(classification_report(y_test,predictions))
print()
print('Accuracy Score')
print(accuracy_score(y_test, predictions))
print()
# print('Balanced Accuracy Score')
# print(balanced_accuracy_score(y_test, predictions))
print()
print('F1 Score')
print(f1_score(y_test, predictions, average=None))
plt.figure()
plt.title("Binary Logistic Regression")
plt.scatter(df['Feature_1'], df['Feature_2'], marker='o', c=df['Binary_Class'], s=25, edgecolor='k')
plt.show()
###Output
Confusion Matrix
[[88 12]
[ 9 91]]
88 12 9 91
Classification Report
precision recall f1-score support
0 0.91 0.88 0.89 100
1 0.88 0.91 0.90 100
avg / total 0.90 0.90 0.89 200
Accuracy Score
0.895
F1 Score
[ 0.89340102 0.89655172]
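###Markdown
A small follow-up sketch (assuming `model` and `X_test` from the cell above; `custom_threshold` is an illustrative value introduced here): `predict` uses a 0.5 probability cut-off, but the probabilities themselves are available via `predict_proba`, so a different threshold can be applied by hand.
###Code
import numpy as np

# Assumes model and X_test from the cell above
proba = model.predict_proba(X_test)[:, 1]        # P(class == 1)
custom_threshold = 0.7                           # illustrative, stricter than 0.5
custom_predictions = (proba >= custom_threshold).astype(int)
print(proba[:5])
print(custom_predictions[:5])
###Output
_____no_output_____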
###Markdown
Multiclass Logistic Regression
###Code
n = 1000
# https://chrisalbon.com/machine_learning/basics/make_simulated_data_for_classification/
# Create a simulated feature matrix and output vector with 100 samples
features, multi_class = make_classification(n_samples = n, n_features = 3,
n_informative = 3, # features that actually predict the output's classes
n_redundant = 0, # features that are random and unrelated to the output's classes
# weights = [.2, .3, .5],
n_classes = 3, n_clusters_per_class=1)
# Create a dataframe of the features and add the multiclass label (output)
df = pd.DataFrame(features)
df.columns = ['Feature_1', 'Feature_2', 'Feature_3']
df['Multi_Class'] = multi_class
X = df.drop('Multi_Class', axis=1)
y = df['Multi_Class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
model = LogisticRegression()
model.fit(X_train,y_train)
predictions = model.predict(X_test)
print('Confusion Matrix')
print(confusion_matrix(y_test,predictions))
# tn, fp, fn, tp = confusion_matrix(y_test,predictions).ravel()
# print(tn, fp, fn, tp)
print()
print('Classification Report')
print(classification_report(y_test,predictions))
print()
print('Accuracy Score')
print(accuracy_score(y_test, predictions))
print()
# print('Balanced Accuracy Score')
# print(balanced_accuracy_score(y_test, predictions))
print()
print('F1 Score')
print(f1_score(y_test, predictions, average=None))
plt.figure()
ax = plt.axes(title='Multiclass Logistic Regression', projection='3d')
ax.scatter(df['Feature_1'], df['Feature_2'], df['Feature_3'], marker='o', c=df['Multi_Class'], s=25, edgecolor='k')
plt.show()
# np.seterr(divide='ignore', invalid='ignore')
g = sns.pairplot(df, hue='Multi_Class', vars=(['Feature_1', 'Feature_2', 'Feature_3']), palette='plasma')
ax = plt.gca()
ax.set_title("Multiclass Logistic Regression Pair Plot")
###Output
Confusion Matrix
[[76 2 1]
[ 1 54 5]
[ 0 4 57]]
Classification Report
precision recall f1-score support
0 0.99 0.96 0.97 79
1 0.90 0.90 0.90 60
2 0.90 0.93 0.92 61
avg / total 0.94 0.94 0.94 200
Accuracy Score
0.935
F1 Score
[ 0.97435897 0.9 0.91935484]
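###Markdown
A small follow-up sketch (assuming `model` and `X_test` from the multiclass cell above): each row of `predict_proba` is a probability distribution over the three classes, and the predicted class is its argmax.
###Code
import numpy as np

# Assumes model and X_test from the cell above
proba = model.predict_proba(X_test)
print(proba[:3])                       # one probability per class; rows sum to 1
print(np.argmax(proba[:3], axis=1))    # should match model.predict(X_test)[:3]
print(model.predict(X_test)[:3])
###Output
_____no_output_____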
###Markdown
Logistic RegressionLogistic Regression is a binary classification method. The key idea is learning a mapping from a feature vector to a probability, a number between $0$ and $1$. It is similar to least-squares in the sense that (apart from some extreme cases) it has a unique solution.Suppose, for a set of objects $X$, each denoted by the feature vector $x_i \in \mathbb{R}^D$, we are given the answer to some true-false question, such as 'is object $i$ of class $c$?'. This answer is denoted by $y_i \in \{0, 1\}$. We are given a dataset of feature vectors $x_i$ along with the corresponding 'labels' $y_i$. For $i=1\dots N$$$(y_i, x_i)$$The model is $$\Pr\{y_i = 1\} = \sigma(x_i^\top w)$$Here,$\sigma(x)$ is the sigmoid function defined as\begin{eqnarray}\sigma(x) & = & \frac{1}{1+e^{-x}}\end{eqnarray}This is a generative model. To understand logistic regression as a generative model, consider the following metaphor: assume that for each data instance $x_i$, we select a biased coin with probability $p(y_i = 1| w, x) = \pi_i = \sigma(w^\top x_i)$, throw the coin and label the data item with class $y_i$ accordingly. Mathematically, we assume that each label $y_i$ is drawn from a Bernoulli distribution. That is: \begin{eqnarray}\pi_i & = & \sigma(x_i^\top w) \\y_i & \sim &\mathcal{BE}(\pi)\end{eqnarray}Here, we think of a biased coin with two sides denoted as $1$ and $0$ with probability of side $1$ as $\pi$, and consequently the probability of side $0$ with $1-\pi$. We denote the outcome of the coin toss with the random variable $y$. We write the probability as $p(y = 1) = \pi$ and probability of heads is $p(y = 0) = 1-\pi$. More compactly, the probability of the outcome of a toss, provided we know $\pi$, is written as\begin{eqnarray}p(y|\pi) = \pi^y(1-\pi)^y\end{eqnarray} In logistic regression, we are given a dataset of form\begin{eqnarray}X & = & \begin{pmatrix} x_{1,1} & x_{1,2} & \dots & x_{1,D} \\ x_{2,1} & x_{2,2} & \dots & x_{2,D} \\ \vdots & \vdots & \vdots & \vdots \\ x_{i,1} & x_{i,2} & \dots & x_{i,D} \\ \vdots & \vdots & \vdots & \vdots \\ x_{N,1} & x_{N,2} & \dots & x_{N,D} \\\end{pmatrix} = \begin{pmatrix}x_1^\top \\x_2^\top \\\dots \\x_i^\top \\\dots \\x_N^\top\end{pmatrix} \\{y} & = & \begin{pmatrix}y_1 \\y_2 \\\vdots \\y_i \\\vdots \\y_N\end{pmatrix}\end{eqnarray}where $x_{i,j}$ denotes the $j$'th feature of the $i$'th data point. It is customary, to set a column entirely to $1$, for example $x_{i,D}=1$ for all $i$. This 'feature' is artificially added to the dataset to allow a slightly more flexible model. The $y_i$ denote the target class label of the$i$'th object. In logistic regression, we consider the case of binary classification where $y_i \in \{0,1\}$. It is possible to use other encodings such as $y_i \in \{-1,1\}$; the derivations are similar. Properties of the sigmoid functionNote that\begin{eqnarray}\sigma(x) & = & \frac{e^x}{(1+e^{-x})e^x} = \frac{e^x}{1+e^{x}} \\1 - \sigma(x) & = & 1 - \frac{e^x}{1+e^{x}} = \frac{1+e^{x} - e^x}{1+e^{x}} = \frac{1}{1+e^{x}}\end{eqnarray}\begin{eqnarray}\sigma'(x) & = & \frac{e^x(1+e^{x}) - e^{x} e^x}{(1+e^{x})^2} = \frac{e^x}{1+e^{x}}\frac{1}{1+e^{x}} = \sigma(x) (1-\sigma(x))\end{eqnarray}\begin{eqnarray}\log \sigma(x) & = & -\log(1+e^{-x}) = x - \log(1+e^{x}) \\\log(1 - \sigma(x)) & = & -\log({1+e^{x}})\end{eqnarray}Exercise: Plot the sigmoid function and its derivative. 
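Below is a possible solution sketch for the exercise (not part of the original notes); it simply plots $\sigma(x)$ together with $\sigma'(x) = \sigma(x)(1-\sigma(x))$.
###Code
# Solution sketch for the exercise above: plot the sigmoid and its derivative
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1/(1+np.exp(-x))

x = np.linspace(-10, 10, 500)
s = sigmoid(x)
plt.plot(x, s, label='sigmoid')
plt.plot(x, s*(1-s), label='derivative')   # sigma'(x) = sigma(x)(1-sigma(x))
plt.legend()
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown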
Learning the parametersThe likelihood of the observations, that is the probability of observing the class sequence is\begin{eqnarray}p(y_1, y_2, \dots, y_N|w, x_1, x_2, \dots, x_N ) &=& \left(\prod_{i : y_i=1} \sigma(w^\top x_i) \right) \left(\prod_{i : y_i=0}(1- \sigma(w^\top x_i)) \right)\end{eqnarray}Here, the left product is the expression for examples from class $1$ and the right product is for examples from class $0$.We will look for the particular setting of the weight vector, the so called maximum likelihood solution, denoted by $w^*$.\begin{eqnarray}w^* & = & \arg\max_{w} {\cal L}(w)\end{eqnarray}where the loglikelihood function\begin{eqnarray}{\cal L}(w) & = & \log p(y_1, y_2, \dots, y_N|w, x_1, x_2, \dots, x_N ) \\& = & \sum_{i : y_i=1} \log \sigma(w^\top x_i) + \sum_{i : y_i=0} \log (1- \sigma(w^\top x_i)) \\& = & \sum_{i : y_i=1} w^\top x_i - \sum_{i : y_i=1} \log(1+e^{w^\top x_i}) - \sum_{i : y_i=0}\log({1+e^{w^\top x_i}}) \\& = & \sum_i y_i w^\top x_i - \sum_{i} \log(1+e^{w^\top x_i}) \\& = & y^\top X w - \mathbf{1}^\top logsumexp(0, X w)\end{eqnarray}Unlike the least-squares problem, an expression for direct evaluation of $w^*$ is not known so we need to resort to numerical optimization. Optimization via gradient ascentOne way foroptimization is gradient ascent\begin{eqnarray}w^{(\tau)} & \leftarrow & w^{(\tau-1)} + \eta \nabla_w {\cal L}\end{eqnarray}where\begin{eqnarray}\nabla_w {\cal L} & = &\begin{pmatrix}{\partial {\cal L}}/{\partial w_1} \\{\partial {\cal L}}/{\partial w_2} \\\vdots \\{\partial {\cal L}}/{\partial w_{D}}\end{pmatrix}\end{eqnarray}is the gradient vector. Evaluating the gradientThe partial derivative of the loglikelihood with respect to the $k$'th entry of the weight vector is given by the chain rule as\begin{eqnarray}\frac{\partial{\cal L}}{\partial w_k} & = & \frac{\partial{\cal L}}{\partial \sigma(u)} \frac{\partial \sigma(u)}{\partial u} \frac{\partial u}{\partial w_k}\end{eqnarray}\begin{eqnarray}{\cal L}(w) & = & \sum_{i : y_i=1} \log \sigma(w^\top x_i) + \sum_{i : y_i=0} \log (1- \sigma(w^\top x_i))\end{eqnarray}\begin{eqnarray}\frac{\partial{\cal L}(\sigma)}{\partial \sigma} & = & \sum_{i : y_i=1} \frac{1}{\sigma(w^\top x_i)} - \sum_{i : y_i=0} \frac{1}{1- \sigma(w^\top x_i)}\end{eqnarray}\begin{eqnarray}\frac{\partial \sigma(u)}{\partial u} & = & \sigma(w^\top x_i) (1-\sigma(w^\top x_i))\end{eqnarray}\begin{eqnarray}\frac{\partial w^\top x_i }{\partial w_k} & = & x_{i,k}\end{eqnarray}So the gradient is\begin{eqnarray}\frac{\partial{\cal L}}{\partial w_k} & = & \sum_{i : y_i=1} \frac{\sigma(w^\top x_i) (1-\sigma(w^\top x_i))}{\sigma(w^\top x_i)} x_{i,k} - \sum_{i : y_i=0} \frac{\sigma(w^\top x_i) (1-\sigma(w^\top x_i))}{1- \sigma(w^\top x_i)} x_{i,k} \\& = & \sum_{i : y_i=1} {(1-\sigma(w^\top x_i))} x_{i,k} - \sum_{i : y_i=0} {\sigma(w^\top x_i)} x_{i,k}\end{eqnarray}We can write this expression more compactly by noting\begin{eqnarray}\frac{\partial{\cal L}}{\partial w_k} & = & \sum_{i : y_i=1} {(\underbrace{1}_{y_i}-\sigma(w^\top x_i))} x_{i,k} + \sum_{i : y_i=0} {(\underbrace{0}_{y_i} - \sigma(w^\top x_i))} x_{i,k} \\& = & \sum_i (y_i - \sigma(w^\top x_i)) x_{i,k}\end{eqnarray}The update rule is\begin{eqnarray}w^{(\tau)} = w^{(\tau-1)} + \eta X^\top (y-\sigma(X w))\end{eqnarray}
###Code
%matplotlib inline
from cvxpy import *
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
x = np.matrix('[-2,1; -1,2; 1,5; -1,1; -3,-2; 1,1] ')
y = np.matrix('[0,0,1,0,0,1]').T
N = x.shape[0]
#A = np.hstack((np.power(x,0), np.power(x,1), np.power(x,2)))
X = np.hstack((x, np.ones((N,1)) ))
K = X.shape[1]
Ke = 0
z = np.zeros((N,1))
print(y)
print(X)
N = 1000
K = 10
Ke = 40-K
def sigmoid(x):
return 1/(1+np.exp(-x))
x = np.matrix(np.random.randn(N, K))
w_true = np.random.randn(K,1)
p = sigmoid(x*w_true)
u = np.random.rand(N,1)
y = (u < p)
y = y.astype(np.float64)
#A = np.hstack((np.power(x,0), np.power(x,1), np.power(x,2)))
X = np.hstack((x, np.random.randn(N, Ke )))
z = np.zeros((N,1))
# Construct the problem.
w = Variable(K+Ke)
objective = Minimize(25.5*norm(w, 1) -y.T*X*w + sum_entries(log_sum_exp(hstack(z, X*w),axis=1)))
#constraints = [0 <= x, x <= 10]
#prob = Problem(objective, constraints)
prob = Problem(objective)
# The optimal objective is returned by prob.solve().
result = prob.solve()
# The optimal value for x is stored in x.value.
print(w.value)
# The optimal Lagrange multiplier for a constraint
# is stored in constraint.dual_value.
#print(constraints[0].dual_value)
#plt.show()
plt.stem(w.value)
plt.stem(w_true,markerfmt='xr')
plt.gca().set_xlim((-1, K+Ke))
plt.show()
###Output
[[ 4.40983335e-02]
[ -3.99878834e-11]
[ -4.89025643e-11]
[ 2.80208513e-12]
[ 8.99384909e-02]
[ 2.48151990e-01]
[ 1.00411325e+00]
[ -6.49613096e-02]
[ 1.14700040e+00]
[ -6.10505750e-01]
[ 1.09795546e-10]
[ -1.01054452e-11]
[ 6.10728067e-11]
[ -2.78667789e-11]
[ 3.11769935e-11]
[ 1.54248866e-12]
[ -1.64127375e-10]
[ 6.07470106e-11]
[ -7.33071236e-11]
[ -2.65350325e-12]
[ 6.54192363e-11]
[ -3.76599877e-10]
[ 1.60127872e-11]
[ 1.21984759e-10]
[ -3.28280038e-11]
[ -5.44375293e-12]
[ -2.35710693e-11]
[ -1.26861576e-11]
[ 1.26534640e-11]
[ -5.25187409e-11]
[ -1.33941329e-11]
[ -3.14596819e-09]
[ -4.26032415e-10]
[ 3.51397512e-11]
[ -1.38935273e-10]
[ -9.18761500e-13]
[ -6.34084551e-11]
[ -1.41931589e-10]
[ 4.54740315e-12]
[ 1.54700892e-02]]
###Markdown
Logistic Regression Dataset: "pima-indians-diabetes-database"
###Code
# Importing libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
# reading data
df = pd.read_csv('pima-indians-diabetes.csv')
df.head()
df.info()
df.describe()
df.corr()
sns.set()
sns.heatmap(df.corr(), vmin=-1,vmax=1, linewidths=0.5, cmap="YlGnBu")
df.boxplot(figsize=(10,5))
# features and target values
X = df.drop(['diabetes'], axis=1).values
y = df['diabetes'].values
# Splitting data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)
# Modeling
model = LogisticRegression()
model.fit(X_train, y_train)
# Prediction
y_pred = model.predict(X_test)
print('Confusion matrix :- \n', confusion_matrix(y_test, y_pred),'\n')
print('Classification Report :- \n', classification_report(y_test, y_pred))
###Output
Confusion matrix :- 
[[170 36]
[ 36 66]]
Classification Report :-
precision recall f1-score support
0 0.83 0.83 0.83 206
1 0.65 0.65 0.65 102
accuracy 0.77 308
macro avg 0.74 0.74 0.74 308
weighted avg 0.77 0.77 0.77 308
###Markdown
ROC curve
###Code
from sklearn.metrics import roc_curve
# predicting probabilities
y_pred_prob = model.predict_proba(X_test)[:,1]
# Generate ROC curve values: fpr, tpr, thresholds
fpr, tpr, threshold = roc_curve(y_test, y_pred_prob)
# Plot ROC Curve
plt.title('ROC Curve')
plt.plot([0,1], [0,1], 'k--')
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
from sklearn.metrics import plot_roc_curve
plot_roc_curve(model, X_test, y_test)
# plot confusion matrix
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(model, X_test, y_test, display_labels=['diabetes', 'No_diabetes'])
# Plot precission recall curve
from sklearn.metrics import plot_precision_recall_curve
plot_precision_recall_curve(model, X_test, y_test)
# Area under ROC curve
#AUC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score
# Compute and print AUC score
print("AUC: {}".format(roc_auc_score(y_test, y_pred_prob)))
# Compute cross-validated AUC scores: cv_auc
cv_auc = cross_val_score(model, X, y, scoring='roc_auc')
# Print list of AUC scores
print("AUC scores computed using 5-fold cross-validation: {}".format(cv_auc))
###Output
C:\Users\Goutam Dadhich\Anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html.
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
C:\Users\Goutam Dadhich\Anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html.
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
C:\Users\Goutam Dadhich\Anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html.
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
C:\Users\Goutam Dadhich\Anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html.
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
C:\Users\Goutam Dadhich\Anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html.
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
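###Markdown
The warnings above indicate that the solver hit its iteration limit. A minimal sketch (assuming `X` and `y` from the cells above; `scaled_model` is a name introduced here) of the two remedies the warning suggests, scaling the features and raising `max_iter`.
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Assumes X and y from the cells above
scaled_model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(scaled_model, X, y, scoring='roc_auc'))
###Output
_____no_output_____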
###Markdown
Anak Agung Ngurah Bagus Trihatmaja
###Code
%run helper.py
# General functions for this problem
def get_decision_boundary(weights, x):
w0 = weights[0]
w1 = weights[1]
w2 = weights[2]
y = -(w0 + w1 * x) / w2
return y
def sigmoid(z):
return 1 / (1 + np.exp(-z))
def predict(weight, data):
predicted_values = []
for i in range(0, len(data)):
total = weight[0] + data['x1'][i] * weight[1] + data['x2'][i] * weight[2]
predicted_value = 0 if sigmoid(total) <= 0.5 else 1
predicted_values.append(predicted_value)
return predicted_values
###Output
_____no_output_____
###Markdown
Problem 7 Part A: Write code in Python whose input is a training dataset $\{(x_1, y_1), \dots, (x_N , y_N)\}$ and whose output is the weight vector $w$ in the logistic regression model $y = \sigma(w^\top x)$.
###Code
"""
Input: pandas data frame of [x, y]
Output: weight vector
Assumption:
1. Not for multiclass
"""
def logistic_regression(X, y, steps, learning_rate):
intercept = np.ones((X.shape[0], 1))
X = np.hstack((intercept, X))
weights = np.zeros(X.shape[1])
for step in range(0, steps):
z = np.dot(X, weights)
predictions = sigmoid(z)
# Update weights with log likelihood gradient
output_error_signal = y - predictions
gradient = np.dot(X.T, output_error_signal)
weights += learning_rate * gradient
return weights
###Output
_____no_output_____
###Markdown
Part B: Use ‘dataset1’. Run the code on the training dataset to compute $w$ and evaluate it on the test dataset. Report $w$, the classification error on the training set and the classification error on the test set. Plot the data (use different colors for data in different classes) and plot the decision boundary found by the logistic regression.
###Code
# Plotting the training dataset
# Load dataset 1
ds1 = scipy.io.loadmat('data1.mat')
ds1_train_X = ds1['X_trn']
ds1_test_X = ds1['X_tst']
ds1_train_Y = ds1['Y_trn']
ds1_test_Y = ds1['Y_tst']
# Combine them all into one data frame
ds1_train = to_data_frame(ds1_train_X, ds1_train_Y)
# Plot the graph
plt.scatter(ds1_train['x1'], ds1_train['x2'], c = ds1_train['y'])
plt.show()
# Ok, seems like linearly separable
# There are only two classes
alpha = 5 * (10**-5)
weights = logistic_regression(ds1_train[ds1_train.columns[0:2]], ds1_train['y'],
steps = 50000, learning_rate = alpha)
print(weights)
# Comparing the results to make sure the correctness
model = linear_model.LogisticRegression(fit_intercept=True, C = 1e15)
model.fit(ds1_train[ds1_train.columns[0:2]], ds1_train['y'])
# Print values using sklearn (Train dataset)
print("Prediction using sklearn (Train):")
model_predicted_values_train = model.predict(ds1_train[ds1_train.columns[0:2]])
print(model.predict(ds1_train[ds1_train.columns[0:2]]))
# Print values using our custom model (Train dataset)
predicted_values_train = predict(weights, ds1_train[ds1_train.columns[0:2]])
print("Prediction using our model (Train):")
print(predicted_values_train)
# Calculate the error:
print("The error for the train data using sklearn logistic regression is (%):")
print(count_error(model_predicted_values_train, ds1_train['y']) * 100)
# Calculate the error:
print("The error for the train data using our logistic regression is (%):")
print(count_error(predicted_values_train, ds1_train['y']) * 100)
# Combine them all into one data frame
ds1_test = to_data_frame(ds1_test_X, ds1_test_Y)
# Print values using sklearn (Test dataset)
print("Prediction using sklearn (Test):")
print(model.predict(ds1_test[ds1_test.columns[0:2]]))
# Print values using our custom model (Test dataset)
predicted_values_test = predict(weights, ds1_test[ds1_test.columns[0:2]])
print("Prediction using our model (Test):")
print(predicted_values_test)
# Calculate the error:
print("The error for the test data using our logistic regression is (%):")
print(count_error(predicted_values_test, ds1_test['y']) * 100)
# For our train model:
y = get_decision_boundary(weights, ds1_train['x1'])
plt.scatter(ds1_train['x1'], ds1_train['x2'], c = ds1_train['y'])
plt.plot(ds1_train['x1'], y)
plt.show()
# For our test model:
y = get_decision_boundary(weights, ds1_test['x1'])
plt.scatter(ds1_test['x1'], ds1_test['x2'], c = ds1_test['y'])
plt.plot(ds1_test['x1'], y)
plt.show()
###Output
_____no_output_____
###Markdown
Part C: Repeat part B using ‘dataset2’. Explain the differences in results between parts B and C and justify your observations/results.
###Code
# Load dataset 2
ds2 = scipy.io.loadmat('data2.mat')
ds2_train_X = ds2['X_trn']
ds2_test_X = ds2['X_tst']
ds2_train_Y = ds2['Y_trn']
ds2_test_Y = ds2['Y_tst']
# Combine them all into one data frame
ds2_train = to_data_frame(ds2_train_X, ds2_train_Y)
# Plot the graph
plt.scatter(ds2_train['x1'], ds2_train['x2'], c = ds2_train['y'])
plt.show()
# Ok, this one is not linearly separable
# There are only two classes
alpha = 5 * (10**-5)
weights = logistic_regression(ds2_train[ds2_train.columns[0:2]], ds2_train['y'],
steps = 50000, learning_rate = alpha)
print(weights)
# Print values using our custom model (Train dataset)
predicted_values_train = predict(weights, ds2_train[ds2_train.columns[0:2]])
print("Prediction using our model (Train):")
print(predicted_values_train)
# Calculate the error:
print("The error for the train data using our logistic regression is (%):")
print(count_error(predicted_values_train, ds2_train['y']) * 100)
# Combine them all into one data frame
ds2_test = to_data_frame(ds2_test_X, ds2_test_Y)
# Print values using our custom model (Test dataset)
predicted_values_test = predict(weights, ds2_test[ds2_test.columns[0:2]])
print("Prediction using our model (Test):")
print(predicted_values_test)
# Calculate the error:
print("The error for the test data using our logistic regression is (%):")
print(count_error(predicted_values_test, ds2_test['y']) * 100)
# For our train model:
y = get_decision_boundary(weights, ds2_train['x1'])
plt.scatter(ds2_train['x1'], ds2_train['x2'], c = ds2_train['y'])
plt.plot(ds2_train['x1'], y)
plt.show()
# For our test model:
y = get_decision_boundary(weights, ds2_test['x1'])
plt.scatter(ds2_test['x1'], ds2_test['x2'], c = ds2_test['y'])
plt.plot(ds2_test['x1'], y)
plt.show()
###Output
_____no_output_____
###Markdown
Logistic RegressionLogistic regression is a classification method. Its main goal is learning a function that __returns a yes or no answer__when presented as input a so-called __feature__ vector. As an example, suppose we are given a dataset, such as the one below:| Class| Feature1 | Feature2 ||---| |---|| 0 |5.7| 3.1|| 1|-0.3|2 ||---| |---|| $y_i$| $x_{i,1}$ | $x_{i,2}$ ||---| |---|| 1|0.4|5 |The goal is learning to predict the labels of a future dataset, where we are given only the features but not the labels:| Class| Feature1 | Feature2 ||---| |---|| ? |4.8| 3.2|| ? |-0.7|2.4 ||---| |---|More formally, the dataset consists of $N$ feature vectors $x_i$ and the associated labels $y_i$ for each example $i=1\dots N$. The entries of $y$ are referred typically as class labels -- but in reality $y$ could model any answer to a true-false question, such as 'is object $i$ a flower?' or 'will customer $i$ buy product $j$ during the next month?'. We can arrange the features in a matrix $X$ and the labels in a vector $y$:\begin{eqnarray}X & = & \begin{pmatrix} x_{1,1} & x_{1,2} & \dots & x_{1,D} \\ x_{2,1} & x_{2,2} & \dots & x_{2,D} \\ \vdots & \vdots & \vdots & \vdots \\ x_{i,1} & x_{i,2} & \dots & x_{i,D} \\ \vdots & \vdots & \vdots & \vdots \\ x_{N,1} & x_{N,2} & \dots & x_{N,D} \\\end{pmatrix} = \begin{pmatrix}x_1^\top \\x_2^\top \\\dots \\x_i^\top \\\dots \\x_N^\top\end{pmatrix} \\{y} & = & \begin{pmatrix}y_1 \\y_2 \\\vdots \\y_i \\\vdots \\y_N\end{pmatrix}\end{eqnarray}where $x_{i,j}$ denotes the $j$'th feature of the $i$'th data point. It is common, to set a column of $X$ entirely to $1$'s, for example we take $x_{i,D}=1$ for all $i$. This 'feature' is artificially added to the dataset to allow a slightly more flexible model -- even if we don't measure any feature, the relative numbers of ones and zeros in a dataset can provide a crude estimate of the probability of a true or false answer. Logistic Regression is a method that can be used to solve binary classification problems, like the one above. We will encode the two classes as $y_i \in \{0,1\}$. The key idea is learning a mapping from a feature vector $x$ to a probability, a number between $0$ and $1$. The generative model is $$\Pr\{y_i = 1\} = \pi_i = \sigma(x_i^\top w)$$Here,$\sigma(x)$ is the sigmoid function defined as\begin{eqnarray}\sigma(x) & = & \frac{1}{1+e^{-x}}\end{eqnarray}To understand logistic regression as a generative model, consider the following metaphor: assume that for each data instance $x_i$, we select a biased coin with probability $p(y_i = 1| w, x^\top_i) = \pi_i = \sigma(x_i^\top w)$, throw the coin and label the data item with class $y_i$ accordingly. Mathematically, we assume that each label $y_i$, or more precisely the answer to our yes-no question rearding the object $i$ with feature vector $w$ is drawn from a Bernoulli distribution. That is: \begin{eqnarray}\pi_i & = & \sigma(x_i^\top w) \\y_i & \sim &\mathcal{BE}(\pi)\end{eqnarray}Here, we think of a biased coin with two sides denoted as $H$ (head) and $T$ (tail) with probability of side $H$ as $\pi$, and consequently the probability of side $T$ with $1-\pi$. We denote the outcome of the coin toss with the random variable $y \in \{0, 1\}$. For each throw $i$, $y_i$ is the answer to the question 'Is the outcome heads?'. We write the probability as $p(y = 1) = \pi$ and probability of tails is $p(y = 0) = 1-\pi$. 
More compactly, the probability of the outcome of a toss, provided we know $\pi$, is written as\begin{eqnarray}p(y|\pi) = \pi^y(1-\pi)^{1-y}\end{eqnarray} Maximum LikelihoodMaximum likelihood (ML) is a method for choosing the unknown parameters of a probability distribution, given some data that is assumed to be drawn from this distribution. The distribution itself is referred to as the probability model, or often just the model. ExampleSuppose we are given only $5$ outcomes when a coin is thrown:$$H, T, H, T, T$$What is the probability that the outcome of a throw is, say, heads $H$, if we know that the coin is biased? One reasonable answer may be the frequency of heads, $2/5$. The ML solution coincides with this answer. For a derivation, we define $y_i$ for $i = 1,2,\dots, 5$ as$$y_i = \left\{ \begin{array}{cc} 1 & \text{coin $i$ is H} \\ 0 & \text{coin $i$ is T} \end{array} \right. $$hence $$y = [1,0,1,0,0]^\top$$If we assume that the outcomes were independent, the probability of observing the above sequence as a function of the parameter $\pi$ is the product of the individual probabilities$$\Pr\{y = [1,0,1,0,0]^\top\} = \pi \cdot (1-\pi) \cdot \pi \cdot (1-\pi) \cdot(1-\pi) $$We could try finding the $\pi$ value that maximizes this function. We will call the corresponding value the maximum likelihood solution, and denote it as $\pi^*$. It is often more convenient to work with the logarithm of this function, known as the loglikelihood function.$$\mathcal{L}(\pi) = 2 \log \pi + 3 \log (1-\pi)$$To find the maximum, we take the derivative with respect to $\pi$ and set it to zero.$$\frac{d \mathcal{L}(\pi)}{d \pi} = \frac{2}{\pi^*} - \frac{3}{1-\pi^*} = 0 $$When we solve we obtain $$ \pi^* = \frac{2}{5} $$ More generally, when we observe $y_i$ for $i=1 \dots N$, the loglikelihood is\begin{eqnarray}\mathcal{L}(\pi)& = & \log \left(\prod_{i : y_i=1} \pi \right) \left(\prod_{i : y_i=0}(1- \pi) \right) \\& = & \log \prod_{i = 1}^N \pi^{y_i} (1- \pi)^{1-y_i} \\& = & \log \pi^{ \sum_i y_i} (1- \pi)^{\sum_i (1-y_i) } \\& = & \left(\sum_i y_i\right) \log \pi + \left(\sum_i (1-y_i) \right) \log (1- \pi) \end{eqnarray}If we define the number of observed $0$'s and $1$'s by $c_0$ and $c_1$ respectively, we have \begin{eqnarray}\mathcal{L}(\pi)& = & c_1 \log \pi + c_0 \log (1- \pi) \end{eqnarray}Taking the derivative and setting to $0$ results in$$\pi^* = \frac{c_1}{c_0+c_1} = \frac{c_1}{N} $$
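As a quick numerical sanity check (a sketch, not part of the original derivation), we can evaluate $\mathcal{L}(\pi) = 2 \log \pi + 3 \log (1-\pi)$ on a grid and confirm that the maximum sits at $\pi^* = 2/5$.
###Code
import numpy as np

# Evaluate the loglikelihood of the 5-coin example on a grid of pi values
pi = np.linspace(0.001, 0.999, 999)
LL = 2*np.log(pi) + 3*np.log(1-pi)
print(pi[np.argmax(LL)])   # approximately 0.4
###Output
_____no_output_____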
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
from IPython.display import clear_output, display, HTML
from matplotlib import rc
import scipy as sc
import scipy.optimize as opt
mpl.rc('font',**{'size': 20, 'family':'sans-serif','sans-serif':['Helvetica']})
mpl.rc('text', usetex=True)
def sigmoid(x):
return 1/(1+np.exp(-x))
def dsigmoid(x):
s = sigmoid(x)
return s*(1-s)
def inv_sigmoid(p=0.5):
xs = opt.bisect(lambda x: sigmoid(x)-p, a=-100, b=100)
return xs
def inv_sigmoid1D(w, b, p=0.5):
xs = opt.bisect(lambda x: sigmoid(w*x+b)-p, a=-100, b=100)
return xs
###Output
_____no_output_____
###Markdown
Plotting the Sigmoid
###Code
fig = plt.figure(figsize=(10,6))
ax = fig.gca()
ax.set_ylim([-0.1,1.1])
x = np.linspace(-10,10,100)
ax.set_xlim([-10,10])
ln = plt.Line2D(x, sigmoid(x))
ln2 = plt.axvline([0], ls= ':', color='k')
ln_left = plt.axvline([0], ls= ':', color='b')
ln_right = plt.axvline([0], ls= ':', color='r')
ax.add_line(ln)
plt.close(fig)
ax.set_xlabel('$x$')
ax.set_ylabel('$\sigma(wx + b)$')
def plot_fun(w=1, b=0):
ln.set_ydata(sigmoid(w*x+b))
if np.abs(w)>0.00001:
ln2.set_xdata(inv_sigmoid1D(w,b,0.5))
ln_left.set_xdata(inv_sigmoid1D(w,b,0.25))
ln_right.set_xdata(inv_sigmoid1D(w,b,0.75))
display(fig)
res = interact(plot_fun, w=(-5, 5, 0.1), b=(-10.0,10.0,0.1))
def LR_loglikelhood(X, y, w):
tmp = X.dot(w)
return y.T.dot(tmp) - np.sum(np.log(np.exp(tmp)+1))
w = np.array([0.5, 2, 3])
D = 3
N = 20
# Some random features
X = 2*np.random.randn(N,D)
X[:,0] = 1
# Generate class labels
pi = sigmoid(np.dot(X, w))
y = np.array([1 if u else 0 for u in np.random.rand(N) < pi]).reshape((N))
xl = -5.
xr = 5.
yl = -5.
yr = 5.
fig = plt.figure(figsize=(5,5))
plt.plot(X[y==1,1],X[y==1,2],'xr')
plt.plot(X[y==0,1],X[y==0,2],'ob')
ax = fig.gca()
ax.set_ylim([yl, yr])
ax.set_xlim([xl, xr])
ln = plt.Line2D([],[],color='k')
ln_left = plt.Line2D([],[],ls= ':', color='b')
ln_right = plt.Line2D([],[],ls= ':', color='r')
ax.add_line(ln)
ax.add_line(ln_left)
ax.add_line(ln_right)
plt.close(fig)
ax.set_xlabel('$x_1$')
#ax.grid(xdata=np.linspace(xl,xr,0.1))
#ax.grid(ydata=np.linspace(yl,yr,0.1))
ax.set_ylabel('$x_2$')
ax.set_xticks(np.arange(xl,xr))
ax.set_yticks(np.arange(yl,yr))
ax.grid(True)
def plot_boundry(w0,w1,w2):
if w1 != 0:
xa = -(w0+w2*yl)/w1
xb = -(w0+w2*yr)/w1
ln.set_xdata([xa, xb])
ln.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.25) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.25) + w0+w2*yr)/w1
ln_left.set_xdata([xa, xb])
ln_left.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.75) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.75) + w0+w2*yr)/w1
ln_right.set_xdata([xa, xb])
ln_right.set_ydata([yl, yr])
elif w2!=0:
ya = -(w0+w1*xl)/w2
yb = -(w0+w1*xr)/w2
ln.set_xdata([xl, xr])
ln.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.25) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.25) + w0+w1*xr)/w2
ln_left.set_xdata([xl, xr])
ln_left.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.75) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.75) + w0+w1*xr)/w2
ln_right.set_xdata([xl, xr])
ln_right.set_ydata([ya, yb])
else:
ln.set_xdata([])
ln.set_ydata([])
ax.set_title('$\mathcal{L}(w) = '+str(LR_loglikelhood(X, y, np.array([w0, w1, w2])))+'$')
display(fig)
res = interact(plot_boundry, w0=(-3.5, 3, 0.1), w1=(-3.,4,0.1), w2=(-3.,4,0.1))
###Output
_____no_output_____
###Markdown
Logistic Regression: Learning the parametersThe logistic regression model is very similar to the coin model. The main difference is that for each example $i$, we use a specific coin with a probability $\sigma(x_i^\top w)$ that depends on the specific feature vector $x_i$ and the parameter vector $w$ that is shared by all examples. The likelihood of the observations, that is the probability of observing the class sequence is$\begin{eqnarray}p(y_1, y_2, \dots, y_N|w, X ) &=& \left(\prod_{i : y_i=1} \sigma(x_i^\top w) \right) \left(\prod_{i : y_i=0}(1- \sigma(x_i^\top w)) \right)\end{eqnarray}$Here, the left product is the expression for examples from class $1$ and the right product is for examples from class $0$.We will look for the particular setting of the weight vector, the maximum likelihood solution, denoted by $w^*$.$\begin{eqnarray}w^* & = & \arg\max_{w} {\cal L}(w)\end{eqnarray}$where the loglikelihood function$\begin{eqnarray}{\cal L}(w) & = & \log p(y_1, y_2, \dots, y_N|w, x_1, x_2, \dots, x_N ) \\& = & \sum_{i : y_i=1} \log \sigma(x_i^\top w) + \sum_{i : y_i=0} \log (1- \sigma(x_i^\top w)) \\& = & \sum_{i : y_i=1} x_i^\top w - \sum_{i : y_i=1} \log(1+e^{x_i^\top w}) - \sum_{i : y_i=0}\log({1+e^{x_i^\top w}}) \\& = & \sum_i y_i x_i^\top w - \sum_{i} \log(1+e^{x_i^\top w}) \\& = & y^\top X w - \mathbf{1}^\top \text{logsumexp}(0, X w)\end{eqnarray}$$\mathbf{1}$ is a vector of ones; note that when we premultiply a vector $v$ by $\mathbf{1}^T$ we get the sum of the entries of $v$, i.e. $\mathbf{1}^T v = \sum_i v_i$.We define the function $\text{logsumexp}(a, b)$ as follows: When $a$ and $b$ are scalars, $$f = \text{logsumexp}(a, b) \equiv \log(e^a + e^b)$$When $a$ and $b$ are vectors of the same size, $f$ is the same size as $a$ and $b$ where each entry of $f$ is$$f_i = \text{logsumexp}(a_i, b_i) \equiv \log(e^{a_i} + e^{b_i})$$Unlike the least-squares problem, an expression for direct evaluation of $w^*$ is not known so we need to resort to numerical optimization. Before we proceed, it is informative to look at the shape of $f(x) = \text{logsumexp}(0, x)$.When $x$ is negative and far smaller than zero, $f = 0$ and for large values of $x$, $f(x) = x$. Hence it looks like a so-called hinge function $h$$$h(x) = \left\{ \begin{array}{cc} 0 & x < 0 \\x & x \geq 0 \end{array} \right.$$We define$$f_\alpha(x) = \frac{1}{\alpha}\text{logsumexp}(0, \alpha x)$$When $\alpha = 1$, we have the original logsumexp function. For larger $\alpha$, it becomes closer to the hinge loss.
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
def logsumexp(a,b):
m = np.max([a,b])
return m + np.log(np.exp(a-m) + np.exp(b-m))
def hinge(x):
return x if x>0 else 0
xx = np.arange(-5,3,0.1)
plt.figure(figsize=(12,10))
for i,alpha in enumerate([1,2,5,10]):
f = [logsumexp(0, alpha*z)/alpha for z in xx]
h = [hinge(z) for z in xx]
plt.subplot(2,2,i+1)
plt.plot(xx, f, 'r')
plt.plot(xx, h, 'k:')
plt.xlabel('z')
#plt.title('a = '+ str(alpha))
if alpha==1:
plt.legend([ 'logsumexp(0,z)','hinge(z)' ], loc=2 )
else:
plt.legend([ 'logsumexp(0,{a} z)/{a}'.format(a=alpha),'hinge(z)' ], loc=2 )
plt.show()
###Output
_____no_output_____
###Markdown
The resemblance of the logsumexp function to a hinge function provides a nice interpretation of the log likelihood. Consider the negative log likelihood written in terms of the contributions of each single item:$$- \mathcal{L}(\pi) = - \sum_i l_i(w) $$We denote the inner product of the features of item $i$ and the parameters as $z_i = x_i^\top w$. Then define the 'error' made on a single item as the negative likelihood$$E_i(w) \equiv -l_i(w) = - y_i x_i^\top w + \text{logsumexp}(0, x_i^\top w) = - y_i z_i + \text{logsumexp}(0, z_i)$$Suppose the target class is $y_i = 1$. When $z_i \gg 0$, the item $i$ will be classified correctly and won't contribute to the total error, as $-l_i(w) \approx 0$. However, when $z_i \ll 0$, the $\text{logsumexp}$ term will be zero and this will incur an error of $-z_i$. If instead the true target had been $y_i = 0$, the error reduces to $E_i(w) \approx \text{logsumexp}(0, z_i)$, incurring no error when $z_i \ll 0$ and incurring an error of approximately $z_i$ when $z_i \gg 0$. Below, we show the error for a range of outputs $z_i = x_i^\top w$ when the target is $1$ or $0$. When the target is $y=1$, we penalize each negative output; if the target is $y=0$, positive outputs are penalized.
###Code
xx = np.arange(-10,10,0.1)
y = 1
f = [-y*z + logsumexp(0, z) for z in xx]
f0 = [logsumexp(0, z) for z in xx]
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.plot(xx, f, 'r')
plt.xlabel('$z_i$')
plt.ylabel('$-l_i$')
plt.title('Cost for examples with $y = $'+str(y))
plt.subplot(1,2,2)
plt.plot(xx, f0, 'r')
plt.xlabel('$z_i$')
plt.ylabel('$-l_i$')
plt.title('Cost for examples with $y = 0$')
plt.show()
###Output
_____no_output_____
###Markdown
Properties of the logsumexp functionIf $$f(z) = \text{logsumexp}(0, z) = \log(1 + \exp(z))$$The derivative is$$\frac{df(z)}{dz} = \frac{\exp(z)}{1 + \exp(z)} = \sigma(z)$$When $z$ is a vector, $f(z)$ is a vector. The derivative of$$\sum_i f(z_i) = \mathbf{1}^\top f(z)$$$$\frac{d \mathbf{1}^\top f(z)}{dz} = \left(\begin{array}{c} \sigma(z_1) \\ \vdots \\ \sigma(z_N) \end{array} \right) \equiv \sigma(z)$$where the sigmoid function $\sigma$ is applied elementwise to $z$. Properties of the sigmoid functionNote that\begin{eqnarray}\sigma(x) & = & \frac{e^x}{(1+e^{-x})e^x} = \frac{e^x}{1+e^{x}} \\1 - \sigma(x) & = & 1 - \frac{e^x}{1+e^{x}} = \frac{1+e^{x} - e^x}{1+e^{x}} = \frac{1}{1+e^{x}}\end{eqnarray}\begin{eqnarray}\sigma'(x) & = & \frac{e^x(1+e^{x}) - e^{x} e^x}{(1+e^{x})^2} = \frac{e^x}{1+e^{x}}\frac{1}{1+e^{x}} = \sigma(x) (1-\sigma(x))\end{eqnarray}\begin{eqnarray}\log \sigma(x) & = & -\log(1+e^{-x}) = x - \log(1+e^{x}) \\\log(1 - \sigma(x)) & = & -\log({1+e^{x}})\end{eqnarray}Exercise: Plot the sigmoid function and its derivative. Exercise: Show that $\tanh(z) = 2\sigma(2z) - 1$ Solve $$\text{maximize}\; \mathcal{L}(w)$$ Optimization via gradient ascentOne way foroptimization is gradient ascent\begin{eqnarray}w^{(\tau)} & \leftarrow & w^{(\tau-1)} + \eta \nabla_w {\cal L}\end{eqnarray}where\begin{eqnarray}\nabla_w {\cal L} & = &\begin{pmatrix}{\partial {\cal L}}/{\partial w_1} \\{\partial {\cal L}}/{\partial w_2} \\\vdots \\{\partial {\cal L}}/{\partial w_{D}}\end{pmatrix}\end{eqnarray}is the gradient vector and $\eta$ is a learning rate. Evaluating the gradient (Short Derivation)$$\mathcal{L}(w) = y^\top X w - \mathbf{1}^\top \text{logsumexp}(0, X w)$$$$\frac{d\mathcal{L}(w)}{dw} = X^\top y - X^\top \sigma(X w) = X^\top (y -\sigma(X w))$$ Evaluating the gradient (Long Derivation)The partial derivative of the loglikelihood with respect to the $k$'th entry of the weight vector is given by the chain rule as\begin{eqnarray}\frac{\partial{\cal L}}{\partial w_k} & = & \frac{\partial{\cal L}}{\partial \sigma(u)} \frac{\partial \sigma(u)}{\partial u} \frac{\partial u}{\partial w_k}\end{eqnarray}\begin{eqnarray}{\cal L}(w) & = & \sum_{i : y_i=1} \log \sigma(w^\top x_i) + \sum_{i : y_i=0} \log (1- \sigma(w^\top x_i))\end{eqnarray}\begin{eqnarray}\frac{\partial{\cal L}(\sigma)}{\partial \sigma} & = & \sum_{i : y_i=1} \frac{1}{\sigma(w^\top x_i)} - \sum_{i : y_i=0} \frac{1}{1- \sigma(w^\top x_i)}\end{eqnarray}\begin{eqnarray}\frac{\partial \sigma(u)}{\partial u} & = & \sigma(w^\top x_i) (1-\sigma(w^\top x_i))\end{eqnarray}\begin{eqnarray}\frac{\partial w^\top x_i }{\partial w_k} & = & x_{i,k}\end{eqnarray}So the gradient is\begin{eqnarray}\frac{\partial{\cal L}}{\partial w_k} & = & \sum_{i : y_i=1} \frac{\sigma(w^\top x_i) (1-\sigma(w^\top x_i))}{\sigma(w^\top x_i)} x_{i,k} - \sum_{i : y_i=0} \frac{\sigma(w^\top x_i) (1-\sigma(w^\top x_i))}{1- \sigma(w^\top x_i)} x_{i,k} \\& = & \sum_{i : y_i=1} {(1-\sigma(w^\top x_i))} x_{i,k} - \sum_{i : y_i=0} {\sigma(w^\top x_i)} x_{i,k}\end{eqnarray}We can write this expression more compactly by noting\begin{eqnarray}\frac{\partial{\cal L}}{\partial w_k} & = & \sum_{i : y_i=1} {(\underbrace{1}_{y_i}-\sigma(w^\top x_i))} x_{i,k} + \sum_{i : y_i=0} {(\underbrace{0}_{y_i} - \sigma(w^\top x_i))} x_{i,k} \\& = & \sum_i (y_i - \sigma(w^\top x_i)) x_{i,k}\end{eqnarray}$\newcommand{\diag}{\text{diag}}$ Test on a synthetic problemWe generate a random dataset and than try to learn to classify this dataset
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
# Generate a random logistic regression problem
def sigmoid(t):
return np.exp(t)/(1+np.exp(t))
def generate_toy_dataset(number_of_features=3, number_of_datapoints=20, styles = ['ob', 'xr']):
D = number_of_features
N = number_of_datapoints
# Some random features
X = 2*np.random.rand(N,D)-1
X[:,0] = 1
# Generate a random paramater vector
w_true = np.random.randn(D,1)
# Generate class labels
pi = sigmoid(np.dot(X, w_true))
y = np.array([1 if u else 0 for u in np.random.rand(N,1) < pi]).reshape((N))
return X, y, w_true, D, N
styles = ['ob', 'xr']
X, y, w_true, D, N = generate_toy_dataset(number_of_features=3, number_of_datapoints=20, styles=styles)
xl = -1.5; xr = 1.5; yl = -1.5; yr = 1.5
fig = plt.figure(figsize=(5,5))
plt.plot(X[y==1,1],X[y==1,2],styles[1])
plt.plot(X[y==0,1],X[y==0,2],styles[0])
ax = fig.gca()
ax.set_ylim([yl, yr])
ax.set_xlim([xl, xr])
plt.show()
# Implement Gradient Descent
w = np.random.randn(D)
# Learnig rate
eta = 0.05
W = []
MAX_ITER = 200
for epoch in range(MAX_ITER):
W.append(w)
dL = np.dot(X.T, y-sigmoid(np.dot(X,w)))
w = w + eta*dL
xl = -1.5
xr = 1.5
yl = -1.5
yr = 1.5
fig = plt.figure(figsize=(5,5))
ax = fig.gca()
ax.set_ylim([yl, yr])
ax.set_xlim([xl, xr])
plt.plot(X[y==1,1],X[y==1,2],styles[1])
plt.plot(X[y==0,1],X[y==0,2],styles[0])
ln = plt.Line2D([],[],color='k')
ln_left = plt.Line2D([],[],ls= ':', color=styles[0][1])
ln_right = plt.Line2D([],[],ls= ':', color=styles[1][1])
ax.add_line(ln)
ax.add_line(ln_left)
ax.add_line(ln_right)
plt.close(fig)
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_xticks(np.arange(xl,xr))
ax.set_yticks(np.arange(yl,yr))
ax.grid(True)
def plot_boundry(w0,w1,w2):
if w1 != 0:
xa = -(w0+w2*yl)/w1
xb = -(w0+w2*yr)/w1
ln.set_xdata([xa, xb])
ln.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.25) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.25) + w0+w2*yr)/w1
ln_left.set_xdata([xa, xb])
ln_left.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.75) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.75) + w0+w2*yr)/w1
ln_right.set_xdata([xa, xb])
ln_right.set_ydata([yl, yr])
elif w2!=0:
ya = -(w0+w1*xl)/w2
yb = -(w0+w1*xr)/w2
ln.set_xdata([xl, xr])
ln.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.25) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.25) + w0+w1*xr)/w2
ln_left.set_xdata([xl, xr])
ln_left.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.75) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.75) + w0+w1*xr)/w2
ln_right.set_xdata([xl, xr])
ln_right.set_ydata([ya, yb])
else:
ln.set_xdata([])
ln.set_ydata([])
display(fig)
def plot_boundry_of_weight(iteration=0):
i = iteration
w = W[i]
plot_boundry(w[0],w[1],w[2])
interact(plot_boundry_of_weight, iteration=(0,len(W)-1))
###Output
_____no_output_____
###Markdown
Second order optimizationNewton's method Evaluating the HessianThe Hessian is \begin{eqnarray}\frac{\partial^2{\cal L}}{\partial w_k \partial w_r} & = & - \sum_i (1-\sigma(w^\top x_i)) \sigma(w^\top x_i) x_{i,k} x_{i,r} \\\pi & \equiv & \sigma(X w) \\\nabla \nabla^\top \mathcal{L}& = & -X^\top \diag(\pi(1 - \pi)) X \end{eqnarray}The Newton update rescales the gradient by the inverse of the negative Hessian instead of a fixed learning rate:\begin{eqnarray}w^{(\tau)} = w^{(\tau-1)} + \left(X^\top \diag(\pi(1 - \pi)) X\right)^{-1} X^\top (y-\sigma(X w^{(\tau-1)}))\end{eqnarray}
###Code
#x = np.matrix('[-2,1; -1,2; 1,5; -1,1; -3,-2; 1,1] ')
x = np.matrix('[-0.5,0.5;2,-1;-1,-1;1,1;1.5,0.5]')
#y = np.matrix('[0,0,1,0,0,1]').T
y = np.matrix('[0,0,1,1,1]').T
N = x.shape[0]
#A = np.hstack((np.power(x,0), np.power(x,1), np.power(x,2)))
#X = np.hstack((x, np.ones((N,1)) ))
X = x
def sigmoid(x):
return 1/(1+np.exp(-x))
idx = np.nonzero(y)[0]
idxc = np.nonzero(1-y)[0]
fig = plt.figure(figsize=(8,4))
plt.plot(x[idx,0], x[idx,1], 'rx')
plt.plot(x[idxc,0], x[idxc,1], 'bo')
fig.gca().set_xlim([-1.1,2.1])
fig.gca().set_ylim([-1.1,1.1])
print(idxc)
print(idx)
plt.show()
from itertools import product
def ellipse_line(A, mu, col='b'):
'''
Creates an ellipse from short line segments y = A x + \mu
where x is on the unit circle.
'''
N = 18
th = np.arange(0, 2*np.pi+np.pi/N, np.pi/N)
X = np.mat(np.vstack((np.cos(th),np.sin(th))))
Y = A*X
ln = plt.Line2D(mu[0]+Y[0,:],mu[1]+Y[1,:],markeredgecolor='w', linewidth=1, color=col)
return ln
left = -5
right = 3
bottom = -5
top = 7
step = 0.1
W0 = np.arange(left,right, step)
W1 = np.arange(bottom,top, step)
LLSurf = np.zeros((len(W1),len(W0)))
# y^\top X w - \mathbf{1}^\top \text{logsumexp}(0, X w)
vmax = -np.inf
vmin = np.inf
for i,j in product(range(len(W1)), range(len(W0))):
w = np.matrix([W0[j], W1[i]]).T
p = X*w
ll = y.T*p - np.sum(np.log(1+np.exp(p)))
vmax = np.max((vmax, ll))
vmin = np.min((vmin, ll))
LLSurf[i,j] = ll
fig = plt.figure(figsize=(10,10))
plt.imshow(LLSurf, interpolation='nearest',
vmin=vmin, vmax=vmax,origin='lower',
extent=(left,right,bottom,top),cmap=plt.cm.jet)
plt.xlabel('w0')
plt.ylabel('w1')
plt.colorbar()
W0 = np.arange(left+2,right-5, 12*step)
W1 = np.arange(bottom+1,top-10, 12*step)
for i,j in product(range(len(W1)), range(len(W0))):
w = np.matrix([W0[j], W1[i]]).T
#w = np.mat([-1,1]).T
p = sigmoid(X*w)
dw = 0.2*X.T*(y-p)
#print(p)
S = np.mat(np.diag(np.asarray(np.multiply(p,1-p)).flatten()))
H = X.T*S*X
dw_nwt = 0.08*H.I*X.T*(y-p)
C = np.linalg.cholesky(H.I)
# plt.hold(True)
ln = ellipse_line(C/3., w, 'w')
ax = fig.gca()
ax.add_line(ln)
ln2 = plt.Line2D((float(w[0]), float(w[0]+dw[0])), (float(w[1]), float(w[1]+dw[1])),color='y')
ax.add_line(ln2)
ln3 = plt.Line2D((float(w[0]), float(w[0]+dw_nwt[0])), (float(w[1]), float(w[1]+dw_nwt[1])),color='w')
ax.add_line(ln3)
plt.plot(w[0,0],w[1,0],'.w')
#print(C)
#print(S)
ax.set_xlim((left,right))
ax.set_ylim((bottom,top))
plt.show()
print(y)
print(X)
#w = np.random.randn(3,1)
w = np.mat('[1;2]')
print(w)
print(sigmoid(X*w))
eta = 0.1
for i in range(10000):
pr = sigmoid(X*w)
w = w + eta*X.T*(y-pr)
print(np.hstack((y,pr)))
print(w)
###Output
[[0]
[0]
[1]
[1]
[1]]
[[-0.5 0.5]
[ 2. -1. ]
[-1. -1. ]
[ 1. 1. ]
[ 2. 1. ]]
[[1]
[2]]
[[ 0.62245933]
[ 0.5 ]
[ 0.04742587]
[ 0.95257413]
[ 0.98201379]]
[[ 0. 0.59561717]
[ 0. 0.30966921]
[ 1. 0.32737446]
[ 1. 0.67262554]
[ 1. 0.66660954]]
[[-0.02719403]
[ 0.74727817]]
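###Markdown
For comparison with the 10000 gradient-ascent updates above, here is a minimal Newton (IRLS) sketch on the same small example (assuming `X`, `y` and `sigmoid` from the cell above; `w_nwt` is a name introduced here). Because each step rescales the gradient by the inverse Hessian, it typically converges in a handful of iterations on this well-behaved problem.
###Code
import numpy as np

# Assumes X (5x2 matrix), y (5x1 matrix) and sigmoid from the cell above
w_nwt = np.mat(np.zeros((2, 1)))
for _ in range(10):
    p = sigmoid(X*w_nwt)
    S = np.mat(np.diag(np.asarray(np.multiply(p, 1-p)).ravel()))
    w_nwt = w_nwt + (X.T*S*X).I * X.T*(y-p)   # Newton step: inverse Hessian times gradient
print(w_nwt)
###Output
_____no_output_____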
###Markdown
--------------------------- Optimization Frameworks--------------------------- CVX -- Convex OptimizationCVX is a framework that can be used for solving convex optimization problems. Convex optimization includes many problems of interest; for example, the minimization of the negative loglikelihood of logistic regression is a convex problem. Unfortunately, many important and interesting problems are not convex.
###Code
%matplotlib inline
from cvxpy import *
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
###Output
_____no_output_____
###Markdown
Selecting relevant features with regularizationBelow we generate a dataset with some irrelevant features that are not informative for classification Maximize$$\mathcal{L}(w) + \lambda \|w\|_p$$
###Code
def sigmoid(x):
return 1/(1+np.exp(-x))
# Number of data points
N = 1000
# Number of relevant features
K = 10
# Number of irrelevant features
Ke = 30
# Generate random features
X = np.matrix(np.random.randn(N, K + Ke))
# Generate parameters and set the irrelevant ones to zero
w_true = np.random.randn(K + Ke,1)
w_true[K:] = 0
p = sigmoid(X*w_true)
u = np.random.rand(N,1)
y = (u < p)
y = y.astype(np.float64)
# Regularization coefficient
lam = 100.
zero_vector = np.zeros((N,1))
# Construct the problem.
w = Variable(K+Ke)
objective = Minimize(lam*norm(w, np.inf ) -y.T*X*w + sum_entries(log_sum_exp(hstack(zero_vector, X*w),axis=1)))
prob = Problem(objective)
# The optimal objective is returned by prob.solve().
result = prob.solve()
# The optimal value for x is stored in x.value.
#print(w.value)
plt.figure(figsize=(10,4))
plt.stem(w.value, markerfmt='ob')
plt.stem(w_true, markerfmt='xr')
plt.gca().set_xlim((-1, K+Ke))
plt.legend(['Estimated', 'True'])
plt.show()
###Output
_____no_output_____
###Markdown
Optimization with pytorch
###Code
X_np, y_np, w_true_np, M, N = generate_toy_dataset(number_of_features=3, number_of_datapoints=20)
###Output
_____no_output_____
###Markdown
Gradient Descent for Logistic Regression: Reference implementation in numpy
###Code
# Initialization
w_np = np.ones(M)
# Learnig rate
eta = 0.01
MAX_ITER = 100
for epoch in range(MAX_ITER):
sig = sigmoid(np.dot(X_np,w_np))
# Gradient dLL/dw -- symbolically derived and hard coded
w_grad = np.dot(X_np.T, y_np-sig)
# Gradient ascent step
w_np = w_np + eta*w_grad
print(w_np)
###Output
[-0.96195283 -0.21886467 0.83477378]
###Markdown
Gradient Descent for Logistic Regression: First implementation in pytorch
###Code
import torch
import torch.autograd
from torch.autograd import Variable
#sigmoid_f = torch.nn.Sigmoid()
def sigmoid_f(x):
return 1./(1. + torch.exp(-x))
X = Variable(torch.from_numpy(X_np).double())
y = Variable(torch.from_numpy(y_np.reshape(N,1)).double())
# Implementation
w = Variable(torch.ones(M,1).double(), requires_grad=True)
eta = 0.01
MAX_ITER = 100
for epoch in range(MAX_ITER):
sig = sigmoid_f(torch.matmul(X, w))
# Compute the loglikelihood
LL = torch.sum(y*torch.log(sig) + (1-y)*torch.log(1-sig))
# Compute the gradients by automated differentiation
LL.backward()
# The gradient ascent step
w.data.add_(eta*w.grad.data)
# Reset the gradients, as otherwise they are accumulated in w.grad
w.grad.zero_()
print(w.data.numpy())
%connect_info
###Output
{
"shell_port": 65415,
"iopub_port": 65416,
"stdin_port": 65417,
"control_port": 65418,
"hb_port": 65419,
"ip": "127.0.0.1",
"key": "40c24992-b940437a6d68edf64080bfde",
"transport": "tcp",
"signature_scheme": "hmac-sha256",
"kernel_name": ""
}
Paste the above JSON into a file, and connect with:
$> jupyter <app> --existing <file>
or, if you are local, you can connect with just:
$> jupyter <app> --existing kernel-ab030b1c-c549-4b31-8e5e-c11e1befeaa2.json
or even just:
$> jupyter <app> --existing
if this is the most recent Jupyter kernel you have started.
###Markdown
Logistic Regression

Logistic regression is a classification method. Its main goal is learning a function that __returns a yes or no answer__ when presented with a so-called __feature__ vector as input. As an example, suppose we are given a dataset, such as the one below:

| Class | Feature1 | Feature2 |
|---|---|---|
| 0 | 5.7 | 3.1 |
| 1 | -0.3 | 2 |
| ... | ... | ... |
| $y_i$ | $x_{i,1}$ | $x_{i,2}$ |
| ... | ... | ... |
| 1 | 0.4 | 5 |

The goal is learning to predict the labels of a future dataset, where we are given only the features but not the labels:

| Class | Feature1 | Feature2 |
|---|---|---|
| ? | 4.8 | 3.2 |
| ? | -0.7 | 2.4 |
| ... | ... | ... |

More formally, the dataset consists of $N$ feature vectors $x_i$ and the associated labels $y_i$ for each example $i=1\dots N$. The entries of $y$ are typically referred to as class labels -- but in reality $y$ could model any answer to a true-false question, such as 'is object $i$ a flower?' or 'will customer $i$ buy product $j$ during the next month?'. We can arrange the features in a matrix $X$ and the labels in a vector $y$:
\begin{eqnarray}
X & = & \begin{pmatrix} x_{1,1} & x_{1,2} & \dots & x_{1,D} \\ x_{2,1} & x_{2,2} & \dots & x_{2,D} \\ \vdots & \vdots & \vdots & \vdots \\ x_{i,1} & x_{i,2} & \dots & x_{i,D} \\ \vdots & \vdots & \vdots & \vdots \\ x_{N,1} & x_{N,2} & \dots & x_{N,D} \\ \end{pmatrix} = \begin{pmatrix} x_1^\top \\ x_2^\top \\ \dots \\ x_i^\top \\ \dots \\ x_N^\top \end{pmatrix} \\
{y} & = & \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_i \\ \vdots \\ y_N \end{pmatrix}
\end{eqnarray}
where $x_{i,j}$ denotes the $j$'th feature of the $i$'th data point. It is common to set a column of $X$ entirely to $1$'s, for example we take $x_{i,D}=1$ for all $i$. This 'feature' is artificially added to the dataset to allow a slightly more flexible model -- even if we don't measure any feature, the relative numbers of ones and zeros in a dataset can provide a crude estimate of the probability of a true or false answer.

Logistic regression is a method that can be used to solve binary classification problems, like the one above. We will encode the two classes as $y_i \in \{0,1\}$. The key idea is learning a mapping from a feature vector $x$ to a probability, a number between $0$ and $1$. The generative model is $$\Pr\{y_i = 1\} = \pi_i = \sigma(x_i^\top w)$$ Here, $\sigma(x)$ is the sigmoid function defined as
\begin{eqnarray}
\sigma(x) & = & \frac{1}{1+e^{-x}}
\end{eqnarray}
To understand logistic regression as a generative model, consider the following metaphor: assume that for each data instance $x_i$, we select a biased coin with probability $p(y_i = 1| w, x_i) = \pi_i = \sigma(x_i^\top w)$, throw the coin and label the data item with class $y_i$ accordingly. Mathematically, we assume that each label $y_i$, or more precisely the answer to our yes-no question regarding the object $i$ with feature vector $x_i$, is drawn from a Bernoulli distribution. That is:
\begin{eqnarray}
\pi_i & = & \sigma(x_i^\top w) \\
y_i & \sim & \mathcal{BE}(\pi_i)
\end{eqnarray}
Here, we think of a biased coin with two sides denoted as $H$ (head) and $T$ (tail), with the probability of side $H$ being $\pi$ and consequently the probability of side $T$ being $1-\pi$. We denote the outcome of the coin toss with the random variable $y \in \{0, 1\}$. For each throw $i$, $y_i$ is the answer to the question 'Is the outcome heads?'. We write the probability of heads as $p(y = 1) = \pi$ and the probability of tails as $p(y = 0) = 1-\pi$.
More compactly, the probability of the outcome of a toss, provided we know $\pi$, is written as
\begin{eqnarray}
p(y|\pi) = \pi^y(1-\pi)^{1-y}
\end{eqnarray}

Maximum Likelihood

Maximum likelihood (ML) is a method for choosing the unknown parameters of a probability distribution, given some data that is assumed to be drawn from this distribution. The distribution itself is referred to as the probability model, or often just the model.

Example

Suppose we are given only $5$ outcomes when a coin is thrown: $$H, T, H, T, T$$ What is the probability that the outcome is, say, heads $H$, if we know that the coin is biased? One reasonable answer may be the frequency of heads, $2/5$. The ML solution coincides with this answer. For a derivation, we define $y_i$ for $i = 1,2,\dots, 5$ as $$y_i = \left\{ \begin{array}{cc} 1 & \text{coin $i$ is H} \\ 0 & \text{coin $i$ is T} \end{array} \right. $$ hence $$y = [1,0,1,0,0]^\top$$ If we assume that the outcomes were independent, the probability of observing the above sequence as a function of the parameter $\pi$ is the product of the individual probabilities $$\Pr\{y = [1,0,1,0,0]^\top\} = \pi \cdot (1-\pi) \cdot \pi \cdot (1-\pi) \cdot(1-\pi) $$ We could try finding the $\pi$ value that maximizes this function. We call the corresponding value the maximum likelihood solution and denote it as $\pi^*$. It is often more convenient to work with the logarithm of this function, known as the loglikelihood function: $$\mathcal{L}(\pi) = 2 \log \pi + 3 \log (1-\pi)$$ To find the maximum, we take the derivative with respect to $\pi$ and set it to zero: $$\frac{d \mathcal{L}(\pi)}{d \pi} = \frac{2}{\pi^*} - \frac{3}{1-\pi^*} = 0 $$ Solving, we obtain $$ \pi^* = \frac{2}{5} $$ More generally, when we observe $y_i$ for $i=1 \dots N$, the loglikelihood is
\begin{eqnarray}
\mathcal{L}(\pi) & = & \log \left(\prod_{i : y_i=1} \pi \right) \left(\prod_{i : y_i=0}(1- \pi) \right) \\
& = & \log \prod_{i = 1}^N \pi^{y_i} (1- \pi)^{1-y_i} \\
& = & \log \pi^{ \sum_i y_i} (1- \pi)^{\sum_i (1-y_i) } \\
& = & \left(\sum_i y_i\right) \log \pi + \left(\sum_i (1-y_i) \right) \log (1- \pi)
\end{eqnarray}
If we denote the number of observed $0$'s and $1$'s by $c_0$ and $c_1$ respectively, we have
\begin{eqnarray}
\mathcal{L}(\pi) & = & c_1 \log \pi + c_0 \log (1- \pi)
\end{eqnarray}
Taking the derivative and setting it to $0$ results in $$\pi^* = \frac{c_1}{c_0+c_1} = \frac{c_1}{N} $$
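As a quick numerical sanity check, here is a short sketch (added for illustration, not part of the original notebook; the variable names are made up) that evaluates $\mathcal{L}(\pi)$ on a grid for the five tosses above and confirms that the maximum is attained at $\pi^* = c_1/N = 2/5$.
###Code
import numpy as np

y = np.array([1, 0, 1, 0, 0])                      # H, T, H, T, T encoded as 1/0
c1 = y.sum()                                       # number of heads
c0 = len(y) - c1                                   # number of tails

pis = np.linspace(1e-6, 1 - 1e-6, 10001)           # grid over (0, 1)
loglik = c1 * np.log(pis) + c0 * np.log(1 - pis)   # L(pi) = c1 log(pi) + c0 log(1 - pi)

pi_star = pis[np.argmax(loglik)]
print(pi_star, c1 / len(y))                        # both approximately 0.4
###Output
_____no_output_____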
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
from IPython.display import clear_output, display, HTML
from matplotlib import rc
import scipy as sc
import scipy.optimize as opt
mpl.rc('font',**{'size': 20, 'family':'sans-serif','sans-serif':['Helvetica']})
mpl.rc('text', usetex=True)
def sigmoid(x):
return 1/(1+np.exp(-x))
def dsigmoid(x):
s = sigmoid(x)
return s*(1-s)
def inv_sigmoid(p=0.5):
xs = opt.bisect(lambda x: sigmoid(x)-p, a=-100, b=100)
return xs
def inv_sigmoid1D(w, b, p=0.5):
xs = opt.bisect(lambda x: sigmoid(w*x+b)-p, a=-100, b=100)
return xs
###Output
_____no_output_____
###Markdown
Plotting the Sigmoid
###Code
fig = plt.figure(figsize=(10,6))
ax = fig.gca()
ax.set_ylim([-0.1,1.1])
x = np.linspace(-10,10,100)
ax.set_xlim([-10,10])
ln = plt.Line2D(x, sigmoid(x))
ln2 = plt.axvline([0], ls= ':', color='k')
ln_left = plt.axvline([0], ls= ':', color='b')
ln_right = plt.axvline([0], ls= ':', color='r')
ax.add_line(ln)
plt.close(fig)
ax.set_xlabel('$x$')
ax.set_ylabel('$\sigma(wx + b)$')
def plot_fun(w=1, b=0):
ln.set_ydata(sigmoid(w*x+b))
if np.abs(w)>0.00001:
ln2.set_xdata(inv_sigmoid1D(w,b,0.5))
ln_left.set_xdata(inv_sigmoid1D(w,b,0.25))
ln_right.set_xdata(inv_sigmoid1D(w,b,0.75))
display(fig)
res = interact(plot_fun, w=(-5, 5, 0.1), b=(-10.0,10.0,0.1))
def LR_loglikelhood(X, y, w):
tmp = X.dot(w)
return y.T.dot(tmp) - np.sum(np.log(np.exp(tmp)+1))
w = np.array([0.5, 2, 3])
D = len(w)   # number of features, including the constant 'bias' feature
N = 20
# Some random features
X = 2*np.random.randn(N,D)
X[:,0] = 1
# Generate class labels
pi = sigmoid(np.dot(X, w))
y = np.array([1 if u else 0 for u in np.random.rand(N) < pi]).reshape((N))
xl = -5.
xr = 5.
yl = -5.
yr = 5.
fig = plt.figure(figsize=(5,5))
plt.plot(X[y==1,1],X[y==1,2],'xr')
plt.plot(X[y==0,1],X[y==0,2],'ob')
ax = fig.gca()
ax.set_ylim([yl, yr])
ax.set_xlim([xl, xr])
ln = plt.Line2D([],[],color='k')
ln_left = plt.Line2D([],[],ls= ':', color='b')
ln_right = plt.Line2D([],[],ls= ':', color='r')
ax.add_line(ln)
ax.add_line(ln_left)
ax.add_line(ln_right)
plt.close(fig)
ax.set_xlabel('$x_1$')
#ax.grid(xdata=np.linspace(xl,xr,0.1))
#ax.grid(ydata=np.linspace(yl,yr,0.1))
ax.set_ylabel('$x_2$')
ax.set_xticks(np.arange(xl,xr))
ax.set_yticks(np.arange(yl,yr))
ax.grid(True)
def plot_boundry(w0,w1,w2):
if w1 != 0:
xa = -(w0+w2*yl)/w1
xb = -(w0+w2*yr)/w1
ln.set_xdata([xa, xb])
ln.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.25) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.25) + w0+w2*yr)/w1
ln_left.set_xdata([xa, xb])
ln_left.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.75) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.75) + w0+w2*yr)/w1
ln_right.set_xdata([xa, xb])
ln_right.set_ydata([yl, yr])
elif w2!=0:
ya = -(w0+w1*xl)/w2
yb = -(w0+w1*xr)/w2
ln.set_xdata([xl, xr])
ln.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.25) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.25) + w0+w1*xr)/w2
ln_left.set_xdata([xl, xr])
ln_left.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.75) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.75) + w0+w1*xr)/w2
ln_right.set_xdata([xl, xr])
ln_right.set_ydata([ya, yb])
else:
ln.set_xdata([])
ln.set_ydata([])
ax.set_title('$\mathcal{L}(w) = '+str(LR_loglikelhood(X, y, np.array([w0, w1, w2])))+'$')
display(fig)
res = interact(plot_boundry, w0=(-3.5, 3, 0.1), w1=(-3.,4,0.1), w2=(-3.,4,0.1))
###Output
_____no_output_____
###Markdown
Logistic Regression: Learning the parametersThe logistic regression model is very similar to the coin model. The main difference is that for each example $i$, we use a specific coin with a probability $\sigma(x_i^\top w)$ that depends on the specific feature vector $x_i$ and the parameter vector $w$ that is shared by all examples. The likelihood of the observations, that is the probability of observing the class sequence is$\begin{eqnarray}p(y_1, y_2, \dots, y_N|w, X ) &=& \left(\prod_{i : y_i=1} \sigma(x_i^\top w) \right) \left(\prod_{i : y_i=0}(1- \sigma(x_i^\top w)) \right)\end{eqnarray}$Here, the left product is the expression for examples from class $1$ and the right product is for examples from class $0$.We will look for the particular setting of the weight vector, the maximum likelihood solution, denoted by $w^*$.$\begin{eqnarray}w^* & = & \arg\max_{w} {\cal L}(w)\end{eqnarray}$where the loglikelihood function$\begin{eqnarray}{\cal L}(w) & = & \log p(y_1, y_2, \dots, y_N|w, x_1, x_2, \dots, x_N ) \\& = & \sum_{i : y_i=1} \log \sigma(x_i^\top w) + \sum_{i : y_i=0} \log (1- \sigma(x_i^\top w)) \\& = & \sum_{i : y_i=1} x_i^\top w - \sum_{i : y_i=1} \log(1+e^{x_i^\top w}) - \sum_{i : y_i=0}\log({1+e^{x_i^\top w}}) \\& = & \sum_i y_i x_i^\top w - \sum_{i} \log(1+e^{x_i^\top w}) \\& = & y^\top X w - \mathbf{1}^\top \text{logsumexp}(0, X w)\end{eqnarray}$$\mathbf{1}$ is a vector of ones; note that when we premultiply a vector $v$ by $\mathbf{1}^T$ we get the sum of the entries of $v$, i.e. $\mathbf{1}^T v = \sum_i v_i$.We define the function $\text{logsumexp}(a, b)$ as follows: When $a$ and $b$ are scalars, $$f = \text{logsumexp}(a, b) \equiv \log(e^a + e^b)$$When $a$ and $b$ are vectors of the same size, $f$ is the same size as $a$ and $b$ where each entry of $f$ is$$f_i = \text{logsumexp}(a_i, b_i) \equiv \log(e^{a_i} + e^{b_i})$$Unlike the least-squares problem, an expression for direct evaluation of $w^*$ is not known so we need to resort to numerical optimization. Before we proceed, it is informative to look at the shape of $f(x) = \text{logsumexp}(0, x)$.When $x$ is negative and far smaller than zero, $f = 0$ and for large values of $x$, $f(x) = x$. Hence it looks like a so-called hinge function $h$$$h(x) = \left\{ \begin{array}{cc} 0 & x < 0 \\x & x \geq 0 \end{array} \right.$$We define$$f_\alpha(x) = \frac{1}{\alpha}\text{logsumexp}(0, \alpha x)$$When $\alpha = 1$, we have the original logsumexp function. For larger $\alpha$, it becomes closer to the hinge loss.
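Before plotting, here is a tiny vectorised sketch (added for illustration, not from the original notebook) of how the compact expression $\mathcal{L}(w) = y^\top X w - \mathbf{1}^\top \text{logsumexp}(0, Xw)$ can be evaluated; `np.logaddexp(0, z)` is NumPy's numerically stable $\log(1+e^{z})$, and the toy numbers are made up:
###Code
import numpy as np

def loglik(w, X, y):
    """Logistic regression loglikelihood L(w) = y^T X w - 1^T logsumexp(0, X w)."""
    z = X @ w
    return y @ z - np.sum(np.logaddexp(0.0, z))   # logaddexp(0, z) = log(1 + e^z)

# toy usage
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(5, 3)); X_demo[:, -1] = 1.0   # last column plays the role of the constant feature
y_demo = np.array([1, 0, 1, 1, 0])
print(loglik(np.zeros(3), X_demo, y_demo))              # equals -5*log(2) at w = 0
###Output
_____no_output_____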
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
def logsumexp(a,b):
m = np.max([a,b])
return m + np.log(np.exp(a-m) + np.exp(b-m))
def hinge(x):
return x if x>0 else 0
xx = np.arange(-5,3,0.1)
plt.figure(figsize=(12,10))
for i,alpha in enumerate([1,2,5,10]):
f = [logsumexp(0, alpha*z)/alpha for z in xx]
h = [hinge(z) for z in xx]
plt.subplot(2,2,i+1)
plt.plot(xx, f, 'r')
plt.plot(xx, h, 'k:')
plt.xlabel('z')
#plt.title('a = '+ str(alpha))
if alpha==1:
plt.legend([ 'logsumexp(0,z)','hinge(z)' ], loc=2 )
else:
plt.legend([ 'logsumexp(0,{a} z)/{a}'.format(a=alpha),'hinge(z)' ], loc=2 )
plt.show()
###Output
_____no_output_____
###Markdown
The resemblance of the logsumexp function to a hinge function provides a nice interpretation of the loglikelihood. Consider the negative loglikelihood written in terms of the contributions of each single item: $$- \mathcal{L}(w) = - \sum_i l_i(w) $$ We denote the inner product of the features of item $i$ and the parameters as $z_i = x_i^\top w$, and define the 'error' made on a single item as its negative loglikelihood contribution $$E_i(w) \equiv -l_i(w) = - y_i x_i^\top w + \text{logsumexp}(0, x_i^\top w) = - y_i z_i + \text{logsumexp}(0, z_i)$$ Suppose the target class is $y_i = 1$. When $z_i \gg 0$, item $i$ is classified correctly and contributes almost nothing to the total error, since $-l_i(w) \approx 0$. However, when $z_i \ll 0$, the $\text{logsumexp}$ term is close to zero and the item incurs an error of approximately $-z_i$. If instead the true target had been $y_i = 0$, the error reduces to $E_i(w) \approx \text{logsumexp}(0, z_i)$, incurring no error when $z_i \ll 0$ and an error of approximately $z_i$ when $z_i \gg 0$. Below, we show the error for a range of outputs $z_i = x_i^\top w$ when the target is $1$ or $0$. When the target is $y=1$, negative outputs are penalized; when the target is $y=0$, positive outputs are penalized.
###Code
xx = np.arange(-10,10,0.1)
y = 1
f = [-y*z + logsumexp(0, z) for z in xx]
f0 = [logsumexp(0, z) for z in xx]
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.plot(xx, f, 'r')
plt.xlabel('$z_i$')
plt.ylabel('$-l_i$')
plt.title('Cost for examples with $y = $'+str(y))
plt.subplot(1,2,2)
plt.plot(xx, f0, 'r')
plt.xlabel('$z_i$')
plt.ylabel('$-l_i$')
plt.title('Cost for examples with $y = 0$')
plt.show()
###Output
_____no_output_____
###Markdown
Properties of the logsumexp function

If $$f(z) = \text{logsumexp}(0, z) = \log(1 + \exp(z))$$ then the derivative is $$\frac{df(z)}{dz} = \frac{\exp(z)}{1 + \exp(z)} = \sigma(z)$$ When $z$ is a vector, $f(z)$ is a vector. The derivative of the sum $$\sum_i f(z_i) = \mathbf{1}^\top f(z)$$ is $$\frac{d \mathbf{1}^\top f(z)}{dz} = \left(\begin{array}{c} \sigma(z_1) \\ \vdots \\ \sigma(z_N) \end{array} \right) \equiv \sigma(z)$$ where the sigmoid function $\sigma$ is applied elementwise to $z$.

Properties of the sigmoid function

Note that
\begin{eqnarray}
\sigma(x) & = & \frac{e^x}{(1+e^{-x})e^x} = \frac{e^x}{1+e^{x}} \\
1 - \sigma(x) & = & 1 - \frac{e^x}{1+e^{x}} = \frac{1+e^{x} - e^x}{1+e^{x}} = \frac{1}{1+e^{x}}
\end{eqnarray}
\begin{eqnarray}
\sigma'(x) & = & \frac{e^x(1+e^{x}) - e^{x} e^x}{(1+e^{x})^2} = \frac{e^x}{1+e^{x}}\frac{1}{1+e^{x}} = \sigma(x) (1-\sigma(x))
\end{eqnarray}
\begin{eqnarray}
\log \sigma(x) & = & -\log(1+e^{-x}) = x - \log(1+e^{x}) \\
\log(1 - \sigma(x)) & = & -\log({1+e^{x}})
\end{eqnarray}
Exercise: Plot the sigmoid function and its derivative.

Solve $$\text{maximize}\; \mathcal{L}(w)$$

Optimization via gradient ascent

One way of optimizing is gradient ascent
\begin{eqnarray}
w^{(\tau)} & \leftarrow & w^{(\tau-1)} + \eta \nabla_w {\cal L}
\end{eqnarray}
where
\begin{eqnarray}
\nabla_w {\cal L} & = & \begin{pmatrix} {\partial {\cal L}}/{\partial w_1} \\ {\partial {\cal L}}/{\partial w_2} \\ \vdots \\ {\partial {\cal L}}/{\partial w_{D}} \end{pmatrix}
\end{eqnarray}
is the gradient vector and $\eta$ is a learning rate.

Evaluating the gradient (Short Derivation)
$$\mathcal{L}(w) = y^\top X w - \mathbf{1}^\top \text{logsumexp}(0, X w)$$
$$\frac{d\mathcal{L}(w)}{dw} = X^\top y - X^\top \sigma(X w) = X^\top (y -\sigma(X w))$$

Evaluating the gradient (Long Derivation)

The partial derivative of the loglikelihood with respect to the $k$'th entry of the weight vector is given by the chain rule as
\begin{eqnarray}
\frac{\partial{\cal L}}{\partial w_k} & = & \frac{\partial{\cal L}}{\partial \sigma(u)} \frac{\partial \sigma(u)}{\partial u} \frac{\partial u}{\partial w_k}
\end{eqnarray}
\begin{eqnarray}
{\cal L}(w) & = & \sum_{i : y_i=1} \log \sigma(w^\top x_i) + \sum_{i : y_i=0} \log (1- \sigma(w^\top x_i))
\end{eqnarray}
\begin{eqnarray}
\frac{\partial{\cal L}(\sigma)}{\partial \sigma} & = & \sum_{i : y_i=1} \frac{1}{\sigma(w^\top x_i)} - \sum_{i : y_i=0} \frac{1}{1- \sigma(w^\top x_i)}
\end{eqnarray}
\begin{eqnarray}
\frac{\partial \sigma(u)}{\partial u} & = & \sigma(w^\top x_i) (1-\sigma(w^\top x_i))
\end{eqnarray}
\begin{eqnarray}
\frac{\partial w^\top x_i }{\partial w_k} & = & x_{i,k}
\end{eqnarray}
So the gradient is
\begin{eqnarray}
\frac{\partial{\cal L}}{\partial w_k} & = & \sum_{i : y_i=1} \frac{\sigma(w^\top x_i) (1-\sigma(w^\top x_i))}{\sigma(w^\top x_i)} x_{i,k} - \sum_{i : y_i=0} \frac{\sigma(w^\top x_i) (1-\sigma(w^\top x_i))}{1- \sigma(w^\top x_i)} x_{i,k} \\
& = & \sum_{i : y_i=1} {(1-\sigma(w^\top x_i))} x_{i,k} - \sum_{i : y_i=0} {\sigma(w^\top x_i)} x_{i,k}
\end{eqnarray}
We can write this expression more compactly by noting
\begin{eqnarray}
\frac{\partial{\cal L}}{\partial w_k} & = & \sum_{i : y_i=1} {(\underbrace{1}_{y_i}-\sigma(w^\top x_i))} x_{i,k} + \sum_{i : y_i=0} {(\underbrace{0}_{y_i} - \sigma(w^\top x_i))} x_{i,k} \\
& = & \sum_i (y_i - \sigma(w^\top x_i)) x_{i,k}
\end{eqnarray}
$\newcommand{\diag}{\text{diag}}$

Test on a synthetic problem

We generate a random dataset and then try to learn to classify this dataset.
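To build confidence in the expression $\nabla_w \mathcal{L} = X^\top(y - \sigma(Xw))$ before using it, here is a small finite-difference check (an added illustrative sketch, not from the original notebook; the data is made up):
###Code
import numpy as np

def sigmoid_check(z):
    return 1.0 / (1.0 + np.exp(-z))

def loglik_check(w, X, y):
    z = X @ w
    return y @ z - np.sum(np.logaddexp(0.0, z))

def grad_check(w, X, y):
    return X.T @ (y - sigmoid_check(X @ w))

rng = np.random.default_rng(1)
Xc = rng.normal(size=(20, 3)); Xc[:, -1] = 1.0
yc = (rng.random(20) < 0.5).astype(float)
wc = rng.normal(size=3)

eps = 1e-6
numerical = np.array([(loglik_check(wc + eps*e, Xc, yc) - loglik_check(wc - eps*e, Xc, yc)) / (2*eps)
                      for e in np.eye(3)])
print(np.allclose(numerical, grad_check(wc, Xc, yc), atol=1e-5))   # should print True
###Output
_____no_output_____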
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
# Generate a random logistic regression problem
def sigmoid(t):
return np.exp(t)/(1+np.exp(t))
def generate_toy_dataset(number_of_features=3, number_of_datapoints=20, styles = ['ob', 'xr']):
D = number_of_features
N = number_of_datapoints
# Some random features
X = 2*np.random.rand(N,D)-1
X[:,0] = 1
# Generate a random paramater vector
w_true = np.random.randn(D,1)
# Generate class labels
pi = sigmoid(np.dot(X, w_true))
y = np.array([1 if u else 0 for u in np.random.rand(N,1) < pi]).reshape((N))
return X, y, w_true, D, N
styles = ['ob', 'xr']
X, y, w_true, D, N = generate_toy_dataset(number_of_features=3, number_of_datapoints=20, styles=styles)
xl = -1.5; xr = 1.5; yl = -1.5; yr = 1.5
fig = plt.figure(figsize=(5,5))
plt.plot(X[y==1,1],X[y==1,2],styles[1])
plt.plot(X[y==0,1],X[y==0,2],styles[0])
ax = fig.gca()
ax.set_ylim([yl, yr])
ax.set_xlim([xl, xr])
plt.show()
# Implement Gradient Descent
w = np.random.randn(D)
# Learning rate
eta = 0.05
W = []
MAX_ITER = 200
for epoch in range(MAX_ITER):
W.append(w)
dL = np.dot(X.T, y-sigmoid(np.dot(X,w)))
w = w + eta*dL
xl = -1.5
xr = 1.5
yl = -1.5
yr = 1.5
fig = plt.figure(figsize=(5,5))
ax = fig.gca()
ax.set_ylim([yl, yr])
ax.set_xlim([xl, xr])
plt.plot(X[y==1,1],X[y==1,2],styles[1])
plt.plot(X[y==0,1],X[y==0,2],styles[0])
ln = plt.Line2D([],[],color='k')
ln_left = plt.Line2D([],[],ls= ':', color=styles[0][1])
ln_right = plt.Line2D([],[],ls= ':', color=styles[1][1])
ax.add_line(ln)
ax.add_line(ln_left)
ax.add_line(ln_right)
plt.close(fig)
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_xticks(np.arange(xl,xr))
ax.set_yticks(np.arange(yl,yr))
ax.grid(True)
def plot_boundry(w0,w1,w2):
if w1 != 0:
xa = -(w0+w2*yl)/w1
xb = -(w0+w2*yr)/w1
ln.set_xdata([xa, xb])
ln.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.25) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.25) + w0+w2*yr)/w1
ln_left.set_xdata([xa, xb])
ln_left.set_ydata([yl, yr])
xa = -(-inv_sigmoid(0.75) + w0+w2*yl)/w1
xb = -(-inv_sigmoid(0.75) + w0+w2*yr)/w1
ln_right.set_xdata([xa, xb])
ln_right.set_ydata([yl, yr])
elif w2!=0:
ya = -(w0+w1*xl)/w2
yb = -(w0+w1*xr)/w2
ln.set_xdata([xl, xr])
ln.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.25) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.25) + w0+w1*xr)/w2
ln_left.set_xdata([xl, xr])
ln_left.set_ydata([ya, yb])
ya = -(-inv_sigmoid(0.75) + w0+w1*xl)/w2
yb = -(-inv_sigmoid(0.75) + w0+w1*xr)/w2
ln_right.set_xdata([xl, xr])
ln_right.set_ydata([ya, yb])
else:
ln.set_xdata([])
ln.set_ydata([])
display(fig)
def plot_boundry_of_weight(iteration=0):
i = iteration
w = W[i]
plot_boundry(w[0],w[1],w[2])
interact(plot_boundry_of_weight, iteration=(0,len(W)-1))
###Output
_____no_output_____
###Markdown
Second order optimization: Newton's method

Evaluating the Hessian

The Hessian is
\begin{eqnarray}
\frac{\partial^2{\cal L}}{\partial w_k \partial w_r} & = & - \sum_i (1-\sigma(w^\top x_i)) \sigma(w^\top x_i) x_{i,k} x_{i,r} \\
\pi & \equiv & \sigma(X w) \\
\nabla \nabla^\top \mathcal{L} & = & -X^\top \diag(\pi(1 - \pi)) X
\end{eqnarray}
While gradient ascent uses $w^{(\tau)} = w^{(\tau-1)} + \eta X^\top (y-\sigma(X w^{(\tau-1)}))$, Newton's method scales the gradient by the inverse of the negative Hessian, so the update rule is
\begin{eqnarray}
w^{(\tau)} = w^{(\tau-1)} + \eta \left( X^\top \diag(\pi(1 - \pi)) X \right)^{-1} X^\top (y-\sigma(X w^{(\tau-1)}))
\end{eqnarray}
with $\eta = 1$ recovering the exact Newton step. The code below visualizes both the gradient and the Newton search directions on the loglikelihood surface.
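For completeness, here is a minimal added sketch (not code from the original notebook) of the full Newton iteration, also known as iteratively reweighted least squares, run on made-up random data; a tiny ridge term keeps the linear solve well conditioned:
###Code
import numpy as np

def sigmoid_nwt(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_logreg(X, y, n_iter=10):
    """Newton / IRLS iterations for the logistic regression loglikelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid_nwt(X @ w)                    # pi = sigma(X w)
        S = np.diag(p * (1 - p))                  # diag(pi (1 - pi))
        H = X.T @ S @ X                           # minus the Hessian
        w = w + np.linalg.solve(H + 1e-8 * np.eye(X.shape[1]), X.T @ (y - p))
    return w

# usage sketch on random data with a known parameter vector
rng = np.random.default_rng(2)
Xd = rng.normal(size=(50, 2))
yd = (rng.random(50) < sigmoid_nwt(Xd @ np.array([1.0, -2.0]))).astype(float)
print(newton_logreg(Xd, yd))
###Output
_____no_output_____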
###Code
#x = np.matrix('[-2,1; -1,2; 1,5; -1,1; -3,-2; 1,1] ')
x = np.matrix('[-0.5,0.5;2,-1;-1,-1;1,1;1.5,0.5]')
#y = np.matrix('[0,0,1,0,0,1]').T
y = np.matrix('[0,0,1,1,1]').T
N = x.shape[0]
#A = np.hstack((np.power(x,0), np.power(x,1), np.power(x,2)))
#X = np.hstack((x, np.ones((N,1)) ))
X = x
def sigmoid(x):
return 1/(1+np.exp(-x))
idx = np.nonzero(y)[0]
idxc = np.nonzero(1-y)[0]
fig = plt.figure(figsize=(8,4))
plt.plot(x[idx,0], x[idx,1], 'rx')
plt.plot(x[idxc,0], x[idxc,1], 'bo')
fig.gca().set_xlim([-1.1,2.1])
fig.gca().set_ylim([-1.1,1.1])
print(idxc)
print(idx)
plt.show()
from itertools import product
def ellipse_line(A, mu, col='b'):
'''
Creates an ellipse from short line segments y = A x + \mu
where x is on the unit circle.
'''
N = 18
th = np.arange(0, 2*np.pi+np.pi/N, np.pi/N)
X = np.mat(np.vstack((np.cos(th),np.sin(th))))
Y = A*X
ln = plt.Line2D(mu[0]+Y[0,:],mu[1]+Y[1,:],markeredgecolor='w', linewidth=1, color=col)
return ln
left = -5
right = 3
bottom = -5
top = 7
step = 0.1
W0 = np.arange(left,right, step)
W1 = np.arange(bottom,top, step)
LLSurf = np.zeros((len(W1),len(W0)))
# y^\top X w - \mathbf{1}^\top \text{logsumexp}(0, X w)
vmax = -np.inf
vmin = np.inf
for i,j in product(range(len(W1)), range(len(W0))):
w = np.matrix([W0[j], W1[i]]).T
p = X*w
ll = y.T*p - np.sum(np.log(1+np.exp(p)))
vmax = np.max((vmax, ll))
vmin = np.min((vmin, ll))
LLSurf[i,j] = ll
fig = plt.figure(figsize=(10,10))
plt.imshow(LLSurf, interpolation='nearest',
vmin=vmin, vmax=vmax,origin='lower',
extent=(left,right,bottom,top),cmap=plt.cm.jet)
plt.xlabel('w0')
plt.ylabel('w1')
plt.colorbar()
W0 = np.arange(left+2,right-5, 12*step)
W1 = np.arange(bottom+1,top-10, 12*step)
for i,j in product(range(len(W1)), range(len(W0))):
w = np.matrix([W0[j], W1[i]]).T
#w = np.mat([-1,1]).T
p = sigmoid(X*w)
dw = 0.2*X.T*(y-p)
#print(p)
S = np.mat(np.diag(np.asarray(np.multiply(p,1-p)).flatten()))
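    # S = diag(p(1-p)); H = X^T S X below is minus the Hessian, dw_nwt is a scaled Newton direction H^{-1} X^T (y-p),
    # and C (Cholesky factor of H^{-1}) is used to draw the local curvature ellipse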
H = X.T*S*X
dw_nwt = 0.08*H.I*X.T*(y-p)
C = np.linalg.cholesky(H.I)
# plt.hold(True)
ln = ellipse_line(C/3., w, 'w')
ax = fig.gca()
ax.add_line(ln)
ln2 = plt.Line2D((float(w[0]), float(w[0]+dw[0])), (float(w[1]), float(w[1]+dw[1])),color='y')
ax.add_line(ln2)
ln3 = plt.Line2D((float(w[0]), float(w[0]+dw_nwt[0])), (float(w[1]), float(w[1]+dw_nwt[1])),color='w')
ax.add_line(ln3)
plt.plot(w[0,0],w[1,0],'.w')
#print(C)
#print(S)
ax.set_xlim((left,right))
ax.set_ylim((bottom,top))
plt.show()
print(y)
print(X)
#w = np.random.randn(3,1)
w = np.mat('[1;2]')
print(w)
print(sigmoid(X*w))
eta = 0.1
for i in range(10000):
pr = sigmoid(X*w)
w = w + eta*X.T*(y-pr)
print(np.hstack((y,pr)))
print(w)
###Output
[[0]
[0]
[1]
[1]
[1]]
[[-0.5 0.5]
[ 2. -1. ]
[-1. -1. ]
[ 1. 1. ]
[ 2. 1. ]]
[[1]
[2]]
[[ 0.62245933]
[ 0.5 ]
[ 0.04742587]
[ 0.95257413]
[ 0.98201379]]
[[ 0. 0.59561717]
[ 0. 0.30966921]
[ 1. 0.32737446]
[ 1. 0.67262554]
[ 1. 0.66660954]]
[[-0.02719403]
[ 0.74727817]]
###Markdown
--------------------------- Optimization Frameworks ---------------------------

CVX -- Convex Optimization

CVX is a framework that can be used for solving convex optimization problems. Convex optimization includes many problems of interest; for example, minimizing the negative loglikelihood of logistic regression is a convex problem. Unfortunately, many important and interesting problems are not convex.
###Code
%matplotlib inline
from cvxpy import *
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
###Output
_____no_output_____
###Markdown
Selecting relevant features with regularization

Below we generate a dataset with some irrelevant features that are not informative for classification, and fit a regularized model by maximizing the penalized loglikelihood $$\mathcal{L}(w) - \lambda \|w\|_p$$ (the code below equivalently minimizes $-\mathcal{L}(w) + \lambda \|w\|_p$, with $p = \infty$).
###Code
def sigmoid(x):
return 1/(1+np.exp(-x))
# Number of data points
N = 1000
# Number of relevant features
K = 10
# Number of irrelevant features
Ke = 30
# Generate random features
X = np.matrix(np.random.randn(N, K + Ke))
# Generate parameters and set the irrelevant ones to zero
w_true = np.random.randn(K + Ke,1)
w_true[K:] = 0
p = sigmoid(X*w_true)
u = np.random.rand(N,1)
y = (u < p)
y = y.astype(np.float64)
# Regularization coefficient
lam = 100.
zero_vector = np.zeros((N,1))
# Construct the problem.
w = Variable(K+Ke)
objective = Minimize(lam*norm(w, np.inf ) -y.T*X*w + sum_entries(log_sum_exp(hstack(zero_vector, X*w),axis=1)))
prob = Problem(objective)
# The optimal objective is returned by prob.solve().
result = prob.solve()
# The optimal value for x is stored in x.value.
#print(w.value)
plt.figure(figsize=(10,4))
plt.stem(w.value, markerfmt='ob')
plt.stem(w_true, markerfmt='xr')
plt.gca().set_xlim((-1, K+Ke))
plt.legend(['Estimated', 'True'])
plt.show()
###Output
_____no_output_____
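###Markdown
The cell above uses the pre-1.0 CVXPY API (`sum_entries`, multi-argument `hstack`, ...). As an added sketch (not part of the original notebook), the same penalized fit can be written with the current `cvxpy` interface, using the built-in `logistic` atom for $\log(1+e^{z})$; here the penalty uses $p=1$, a common sparsity-inducing choice, whereas the cell above uses $p=\infty$. The names `X`, `y` and `lam` are reused from the cell above.
###Code
import cvxpy as cp
import numpy as np

Xa = np.asarray(X)                 # N x (K+Ke) design matrix from the previous cell
ya = np.asarray(y).ravel()         # 0/1 labels as a flat vector
D = Xa.shape[1]

w_cp = cp.Variable(D)
neg_loglik = cp.sum(cp.logistic(Xa @ w_cp)) - cp.sum(cp.multiply(ya, Xa @ w_cp))   # -L(w)
objective = cp.Minimize(neg_loglik + lam * cp.norm(w_cp, 1))                        # L1 penalty
cp.Problem(objective).solve()
print(np.round(w_cp.value, 2))
###Output
_____no_output_____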
###Markdown
Optimization with pytorch
###Code
X_np, y_np, w_true_np, M, N = generate_toy_dataset(number_of_features=3, number_of_datapoints=20)
###Output
_____no_output_____
###Markdown
Gradient Descent for Logistic Regression: Reference implementation in numpy
###Code
# Initialization
w_np = np.ones(M)
# Learning rate
eta = 0.01
MAX_ITER = 100
for epoch in range(MAX_ITER):
sig = sigmoid(np.dot(X_np,w_np))
# Gradient dLL/dw -- symbolically derived and hard coded
w_grad = np.dot(X_np.T, y_np-sig)
# Gradient ascent step
w_np = w_np + eta*w_grad
print(w_np)
###Output
[-0.96195283 -0.21886467 0.83477378]
###Markdown
Gradient Descent for Logistic Regression: First implementation in pytorch
###Code
import torch
import torch.autograd
from torch.autograd import Variable
#sigmoid_f = torch.nn.Sigmoid()
def sigmoid_f(x):
return 1./(1. + torch.exp(-x))
X = Variable(torch.from_numpy(X_np).double())
y = Variable(torch.from_numpy(y_np.reshape(N,1)).double())
# Implementation
w = Variable(torch.ones(M,1).double(), requires_grad=True)
eta = 0.01
MAX_ITER = 100
for epoch in range(MAX_ITER):
sig = sigmoid_f(torch.matmul(X, w))
# Compute the loglikelihood
LL = torch.sum(y*torch.log(sig) + (1-y)*torch.log(1-sig))
# Compute the gradients by automated differentiation
LL.backward()
# The gradient ascent step
w.data.add_(eta*w.grad.data)
# Reset the gradients, as otherwise they are accumulated in w.grad
w.grad.zero_()
print(w.data.numpy())
###Output
_____no_output_____
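###Markdown
The implementation above relies on the legacy `torch.autograd.Variable` wrapper. As an added sketch (not from the original notebook), the same gradient ascent can be written in current PyTorch, where tensors track gradients directly and an optimizer performs the update; it reuses `X_np`, `y_np`, `M` and `N` from the cells above.
###Code
import torch

X_t = torch.from_numpy(X_np).double()
y_t = torch.from_numpy(y_np.reshape(N, 1)).double()

w_t = torch.ones(M, 1, dtype=torch.double, requires_grad=True)
opt = torch.optim.SGD([w_t], lr=0.01)

for epoch in range(100):
    sig = torch.sigmoid(X_t @ w_t)
    # minimize the negative loglikelihood, which is the same ascent step as before
    nll = -torch.sum(y_t * torch.log(sig) + (1 - y_t) * torch.log(1 - sig))
    opt.zero_grad()
    nll.backward()
    opt.step()

print(w_t.detach().numpy())
###Output
_____no_output_____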
###Markdown
[](https://colab.research.google.com/github/mravanba/comp551-notebooks/blob/master/LogisticRegression.ipynb)

Logistic Regression

In logistic regression we perform binary classification by learning a function of the form $f_w(x) = \sigma(x^\top w)$. Here $x,w \in \mathbb{R}^D$, where $D$ is the number of features as before. $\sigma(z) = \frac{1}{1+e^{-z}}$ is the logistic function. Let's plot this function below.
###Code
import numpy as np
#%matplotlib notebook
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.debugger import set_trace
import warnings
warnings.filterwarnings('ignore')
logistic = lambda z: 1./ (1 + np.exp(-z)) #logistic function
z = np.linspace(-10,10,100)
plt.plot(z, logistic(z))
plt.title('logistic function')
# Shape conventions (kept as comments so the cell still runs):
# logistic (binary): x is [N,D], w is [D], x@w is [N], logistic(x@w) is [N]
# softmax (C classes, a map R^D -> R^C): x is [N,D], w is [D,C], logits = x@w is [N,C]
#   logits = logits - np.max(logits, axis=1, keepdims=True)        # subtract row max for numerical stability
#   softmax[j,i] = exp(logits[j,i]) / (np.sum(exp(logits[j,:])) + eps)
#   softmax(x@w) is [N,C]
###Output
_____no_output_____
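###Markdown
As an added, self-contained sketch of the softmax shape notes above (illustrative only; the matrices and class count below are made up):
###Code
import numpy as np

def softmax(logits):
    """Row-wise softmax of an [N, C] array, with the usual max-subtraction for stability."""
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    expz = np.exp(shifted)
    return expz / np.sum(expz, axis=1, keepdims=True)

rng = np.random.default_rng(0)
Xs = rng.normal(size=(4, 3))          # N=4 points, D=3 features
Ws = rng.normal(size=(3, 5))          # C=5 classes
P = softmax(Xs @ Ws)                  # [N, C]; each row sums to 1
print(P.shape, P.sum(axis=1))
###Output
_____no_output_____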
###Markdown
Cost function

To fit our model $f_w$ to the data $\mathcal{D} = \{(x^{(1)}, y^{(1)}), \ldots, (x^{(N)}, y^{(N)})\}$, we maximize the **logarithm of the conditional likelihood**:
$$\ell(w; \mathcal{D}) = \sum_n \log \mathrm{Bernoulli}(y^{(n)} \mid \sigma({x^{(n)}}^\top w)) = \sum_n y^{(n)} \log \sigma({x^{(n)}}^\top w) + (1-y^{(n)}) \log (1-\sigma({x^{(n)}}^\top w))$$
Substituting the definition of the logistic function and minimizing the **negative** of the log-likelihood, which is called the **cost function**, we get
$$J(w) = \sum_n y^{(n)} \log(1+e^{-{x^{(n)}}^\top w}) + (1-y^{(n)}) \log(1+e^{{x^{(n)}}^\top w})$$
In practice we use the mean rather than the sum over data points.
###Code
def cost_fn(x, y, w):
N, D = x.shape
z = np.dot(x, w)
J = np.mean(y * np.log1p(np.exp(-z)) + (1-y) * np.log1p(np.exp(z))) #log1p calculates log(1+x) to remove floating point inaccuracies
return J
###Output
_____no_output_____
###Markdown
Minimizing the cost using gradient descent

To minimize the cost we use gradient descent: start from some initial assignment to the parameters $w$, and at each iteration take a small step in the opposite direction of the *gradient*. The gradient of the cost function above is given by:
$$\frac{\partial}{\partial w_d} J(w) = \sum_n - y^{(n)} x^{(n)}_d \frac{e^{-w^\top x^{(n)}}}{1 + e^{-w^\top x^{(n)}}} + x^{(n)}_d (1- y^{(n)}) \frac{e^{w^\top x^{(n)}}}{1 + e^{w^\top x^{(n)}}} = \sum_n - x^{(n)}_d y^{(n)} (1-\hat{y}^{(n)}) + x^{(n)}_d (1- y^{(n)}) \hat{y}^{(n)} = \sum_n x^{(n)}_d (\hat{y}^{(n)} - y^{(n)}) $$
Since in practice we divide the cost by $N$, we have to do the same for the gradient; see the implementation below.
###Code
def gradient(self, x, y):
N,D = x.shape
yh = logistic(np.dot(x, self.w)) # predictions size N
grad = np.dot(x.T, yh - y)/N # divide by N because cost is mean over N points
return grad # size D
###Output
_____no_output_____
###Markdown
Logistic regression classNow we are ready to implement the logistic regression class with the usual `fit` and `predict` methods. Here, the `fit` method implements gradient descent.
###Code
class LogisticRegression:
def __init__(self, add_bias=True, learning_rate=.1, epsilon=1e-4, max_iters=1e5, verbose=False):
self.add_bias = add_bias
self.learning_rate = learning_rate
self.epsilon = epsilon #to get the tolerance for the norm of gradients
self.max_iters = max_iters #maximum number of iteration of gradient descent
self.verbose = verbose
def fit(self, x, y):
if x.ndim == 1:
x = x[:, None]
if self.add_bias:
N = x.shape[0]
x = np.column_stack([x,np.ones(N)])
N,D = x.shape
self.w = np.zeros(D)
g = np.inf
t = 0
# the code snippet below is for gradient descent
while np.linalg.norm(g) > self.epsilon and t < self.max_iters:
g = self.gradient(x, y)
self.w = self.w - self.learning_rate * g
t += 1
if self.verbose:
print(f'terminated after {t} iterations, with norm of the gradient equal to {np.linalg.norm(g)}')
print(f'the weight found: {self.w}')
return self
def predict(self, x):
if x.ndim == 1:
x = x[:, None]
Nt = x.shape[0]
if self.add_bias:
x = np.column_stack([x,np.ones(Nt)])
yh = logistic(np.dot(x,self.w)) #predict output
return yh
LogisticRegression.gradient = gradient #initialize the gradient method of the LogisticRegression class with gradient function
###Output
_____no_output_____
###Markdown
Toy experiment: fit this linear model to toy data with $x \in \Re^1$ plus a bias parameter.
###Code
N = 50
x = np.linspace(-5,5, N)
y = ( x < 2).astype(int) #generate synthetic data
model = LogisticRegression(verbose=True, )
yh = model.fit(x,y).predict(x)
plt.plot(x, y, '.', label='dataset')
plt.plot(x, yh, 'g', alpha=.5, label='predictions')
plt.xlabel('x')
plt.ylabel(r'$y$')
plt.legend()
plt.show()
###Output
terminated after 100000 iterations, with norm of the gradient equal to 0.0007886436933334241
the weight found: [-9.96926826 20.27319341]
###Markdown
We see that the model successfully fits the training data. If we run the optimization for long enough, the weights grow large (in absolute value) so as to push the predicted probabilities for the data points close to the decision boundary (x=2) towards zero and one.

Weight Space

Similar to what we did for linear regression, we plot the logistic regression *cost* as a function of the model parameters (weights), and show the correspondence between different weights having different costs and the fits they produce. `plot_contour` is the same helper function we used for plotting the cost function for linear regression.
###Code
import itertools
def plot_contour(f, x1bound, x2bound, resolution, ax):
x1range = np.linspace(x1bound[0], x1bound[1], resolution)
x2range = np.linspace(x2bound[0], x2bound[1], resolution)
xg, yg = np.meshgrid(x1range, x2range)
zg = np.zeros_like(xg)
for i,j in itertools.product(range(resolution), range(resolution)):
zg[i,j] = f([xg[i,j], yg[i,j]])
ax.contour(xg, yg, zg, 100)
return ax
###Output
_____no_output_____
###Markdown
Now let's define the cost function for the logistic regression example above, and visualize the cost and the fit of various models (parameters).
###Code
x_plus_bias = np.column_stack([x,np.ones(x.shape[0])])
cost_w = lambda param: cost_fn(x_plus_bias, y, param) #define the cost just as a function of parameters
model_list = [(-10, 20), (-2, 2), (3,-3), (4,-4)]
fig, axes = plt.subplots(ncols=2, nrows=1, constrained_layout=True, figsize=(10, 5))
plot_contour(cost_w, [-50,30], [-10,50], 50, axes[0])
colors = ['r','g', 'b', 'k']
for i, w in enumerate(model_list):
axes[0].plot(w[0], w[1], 'x'+colors[i])
axes[1].plot(x, y, '.')
axes[1].plot(x, logistic(w[1] + np.dot(w[0], x)), '-'+colors[i], alpha=.5)
axes[0].set_xlabel(r'$w_1$')
axes[0].set_ylabel(r'$w_0$')
axes[0].set_title('weight space')
axes[1].set_xlabel('x')
axes[1].set_ylabel(r'$y=xw_1 + w_0$')
axes[1].set_title('data space')
plt.show()
###Output
_____no_output_____
###Markdown
Iris datasetLet's visualize class probabilities for D=2 (plus a bias). To be able to use logistic regression we choose two of the three classes in the Iris dataset.
###Code
from sklearn import datasets
dataset = datasets.load_iris()
x, y = dataset['data'][:,:2], dataset['target']
x, y = x[y < 2], y[y< 2] # we only take the data of class 0 and 1
model = LogisticRegression()
yh = model.fit(x,y).predict(x)
x0v = np.linspace(np.min(x[:,0]), np.max(x[:,0]), 200)
x1v = np.linspace(np.min(x[:,1]), np.max(x[:,1]), 200)
x0,x1 = np.meshgrid(x0v, x1v)
x_all = np.vstack((x0.ravel(),x1.ravel())).T
yh_all = model.predict(x_all)
plt.scatter(x[:,0], x[:,1], c=yh, marker='o', alpha=1)
plt.scatter(x_all[:,0], x_all[:,1], c=yh_all, marker='.', alpha=.05)
plt.ylabel('sepal width')
plt.xlabel('sepal length')
plt.title('class probabilities (colors)')
plt.show()
###Output
_____no_output_____
###Markdown
Using Pokémon data as the experimental material
###Code
import numpy as np
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("./pokemon.csv")
alldata = df[ (df['attack_strong_type'] == 'Normal') |(df['attack_strong_type'] == 'Flying') ]
# alldata = alldata[df['attack_strong_type'] == 'Flying']
f1 = alldata['height'].tolist()
f2 = alldata['weight'].tolist()
y = alldata['attack_strong_type']=='Normal'
y = [ 1 if i else 0 for i in y.tolist()]
f1 = np.array(f1)
f2 = np.array(f2)
c = [ 'g' if i==1 else 'b' for i in y ]
plt.scatter(f1, f2, 20, c=c, alpha=0.5,
label="Type")
plt.xlabel("Height")
plt.ylabel("Weight")
plt.legend(loc=2)
plt.show()
###Output
_____no_output_____
###Markdown
Scale the raw data to the 0-1 range, so that the weights do not change too drastically and the learning rate does not become hard to set.
###Code
from sklearn.preprocessing import scale,MinMaxScaler
scaler1 = MinMaxScaler()
scaler2 = MinMaxScaler()
f1 = f1.reshape([f1.shape[0],1])
f2 = f2.reshape([f2.shape[0],1])
scaler1.fit(f1)
scaler2.fit(f2)
f1 = scaler1.transform(f1)
f2 = scaler2.transform(f2)
f1 = f1.reshape(f1.shape[0])
f2 = f2.reshape(f2.shape[0])
c = [ 'g' if i==1 else 'b' for i in y ]
plt.scatter(f1, f2, 20, c=c, alpha=0.5,
label="Type")
plt.xlabel("Height")
plt.ylabel("Weight")
plt.legend(loc=2)
plt.show()
Y = np.array([1,1,0,0,1])
A = np.array([0.8, 0.7, 0.2, 0.1, 0.9])
A2 = np.array([0.6, 0.6, 0.2, 0.1, 0.3])
def cross_entropy(Y,A):
    # small tip: since log(0) tends to negative infinity and produces nan, we add 0.00001 throughout
Y = np.array(Y)
A = np.array(A)
m = len(A)
cost = -(1.0/m) * np.sum(Y*np.log(A+0.00001) + (1-Y)*np.log(1-A+0.00001))
return cost
# Test cross_entropy Function
print(cross_entropy(Y,A))
print(cross_entropy(Y,A2))
print(cross_entropy(Y,Y))
###Output
_____no_output_____
###Markdown
LogisticRegression: derivation of the update formulas. For the full derivation, see pages 3-13 of http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML_2017/Lecture/Logistic%20Regression%20(v4).pdf; the key steps are:

* Let $y = \mathrm{sigmoid}(w \cdot x + b) = f_{w,b}$.
* We want to maximize the likelihood $\mathrm{ArgMax}\, L_{w,b} = \prod\left( \hat y \, f_{w,b} + (1-\hat y)(1-f_{w,b}) \right)$.
* To treat it as a loss function, we flip the sign and take the logarithm for convenience, so the expression becomes $\mathrm{ArgMin}\, L_{w,b} = -\sum \ln\left( \hat y \, f_{w,b} + (1-\hat y)(1-f_{w,b}) \right)$.
* We then take partial derivatives of this expression with respect to $w$ and $b$ to obtain $\Delta w$ and $\Delta b$.
* Finally, we multiply by the learning rate and update: $w_{t+1}=w_t - r\,\Delta w$, $b_{t+1}=b_t - r\,\Delta b$.

The final results are:
* The update for $w_i$ is as follows, where $\hat y^n$ is the target label of the training data and $x^n$ is the value of the $n$-th data point: $w_{t+1} = w_t - r \sum\left(-(\hat y^n - f_{w,b}(x^n))\, x^n\right)$
* The update for $b_i$ differs from that of $w_i$ only in that it is not multiplied by $x^n$: $b_{t+1} = b_t - r \sum\left(-(\hat y^n - f_{w,b}(x^n))\right)$
###Code
import math
w1 =1
w2 =1
b = 0
r = 0.001
def fx(x1,x2):
temp = w1*x1 + w2*x2 + b
y_head = 1. / (1. + math.exp(-1.*temp))
return y_head
def cross_entropy(Y, A):
m = len(A)
cost = -(1.0 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
return cost
for i in range(10000):
w1_delta=0
w2_delta=0
b_delta = 0
y_error = 0
for x1,x2,y_now in zip(f1,f2,y):
y_error = y_now - fx(x1,x2)
w1_delta = -1*x1*y_error
w2_delta = -1*x2*y_error
b_delta = -1*y_error
w1 -= r*w1_delta
w2 -= r*w2_delta
b -= r*b_delta
if i % 100==0 :
error_rate = 0
y_predict = []
for x1,x2,y_now in zip(f1,f2,y):
y_predict.append(fx(x1,x2))
if y_now==1 and fx(x1,x2) < 0.5:
error_rate+=1
elif y_now==0 and fx(x1,x2) >=0.5:
error_rate+=1
print("{:0,.3f}, {:0,.3f}, {:0,.3f}, {:0,.3f}, {:0,.3f}".format(error_rate*1./len(y) ,cross_entropy(np.array(y),np.array(y_predict)),w1,w2,b) )
###Output
0.397, 0.598, 1.002, 1.002, -0.001
0.379, 0.586, 1.184, 1.184, -0.070
0.345, 0.575, 1.357, 1.357, -0.138
0.328, 0.565, 1.522, 1.522, -0.203
0.310, 0.555, 1.681, 1.681, -0.264
0.293, 0.547, 1.834, 1.834, -0.322
0.259, 0.539, 1.982, 1.982, -0.378
0.241, 0.531, 2.125, 2.125, -0.431
0.241, 0.524, 2.263, 2.263, -0.481
0.241, 0.517, 2.398, 2.398, -0.530
###Markdown
Using Logistic regression to detect if breast cancer is malignant or benign.
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as pl
import numpy as np
###Output
_____no_output_____
###Markdown
Take a look at the inputs and outputs
###Code
datas=datasets.load_breast_cancer()
print(datas.feature_names,"\n Num features",len(datas.feature_names))
print(datas.target_names)
###Output
['mean radius' 'mean texture' 'mean perimeter' 'mean area'
'mean smoothness' 'mean compactness' 'mean concavity'
'mean concave points' 'mean symmetry' 'mean fractal dimension'
'radius error' 'texture error' 'perimeter error' 'area error'
'smoothness error' 'compactness error' 'concavity error'
'concave points error' 'symmetry error' 'fractal dimension error'
'worst radius' 'worst texture' 'worst perimeter' 'worst area'
'worst smoothness' 'worst compactness' 'worst concavity'
'worst concave points' 'worst symmetry' 'worst fractal dimension']
Num features 30
['malignant' 'benign']
###Markdown
Split data and train
###Code
x, y = datasets.load_breast_cancer(return_X_y=True)
train_x,test_x,train_y,test_y=train_test_split(x,y)
log_reg=LogisticRegression(max_iter=5000)
log_reg.fit(train_x,train_y)
pred=log_reg.predict(test_x)
pl.plot(test_y,pred)
print("Accuracy :",accuracy_score(test_y,pred))
###Output
Accuracy : 0.958041958041958
###Markdown
Note:
- EmploymentStatus: 0=Active, 1=Terminated
- Gender: 0=female, 1=male
- Business Travel: 0=no travel, 1=rarely, 2=frequently
- Department: HR=0, Sales=1, R&D=2
###Code
# Change qualitative data to numeric form
df_skinny['EmploymentStatus'] = df_skinny['EmploymentStatus'].replace(['Yes','No'],['Terminated','Retained'])
df_skinny['Gender']=df_skinny['Gender'].replace(['Female','Male'],[0,1])
df_skinny['BusinessTravel'] = df_skinny['BusinessTravel'].replace(['Travel_Rarely','Travel_Frequently','Non-Travel'],[1,2,0])
df_skinny['Department']=df_skinny['Department'].replace(['Human Resources','Sales','R&D'],[0,1,2])
df_skinny.head()
import matplotlib.ticker as mtick
bars = ['Retained','Turnover']
y = df_skinny['EmploymentStatus'].value_counts()
y_as_percent = (y[0]/len(df_skinny),y[1]/len(df_skinny))
print(y_as_percent)
fig = plt.figure(1, (7,5))
ax = fig.add_subplot(1,1,1)
ax.bar(bars,y_as_percent, color=['Teal','Orange'])
ax.yaxis.set_major_formatter(mtick.PercentFormatter(1.0))
plt.xticks(fontsize=12)
plt.ylim(0,.95)
plt.yticks(fontsize=12)
plt.xlabel("\n Employment Status", fontsize=14)
plt.ylabel("Percent of Sample \n", fontsize=14)
plt.title("\n Overall Turnover \n", fontsize=16)
plt.annotate("83.9%",xy=("Retained",.87),ha="center")
plt.annotate("16.1%",xy=("Turnover",.2),ha="center")
ax.tick_params(axis='both', which='major', pad=10)
plt.savefig('static/overallTurnover.png')
plt.show()
X =df_skinny.drop(["EmploymentStatus","EmployeeNumber"], axis=1)
y = df_skinny["EmploymentStatus"]
# print(df.columns.values.tolist())
import imblearn
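# RandomOverSampler with a float sampling_strategy resamples the minority class up to that fraction of the majority class (here 0.4)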
oversample = imblearn.over_sampling.RandomOverSampler(sampling_strategy=.4)
X_over, y_over = oversample.fit_resample(X, y)
# undersample = imblearn.under_sampling.RandomUnderSampler(sampling_strategy='majority')
# X_under, y_under = undersample.fit_resample(X,y)
from sklearn.model_selection import train_test_split
X_over_train, X_over_test, y_over_train, y_over_test = train_test_split(X_over, y_over, random_state=1)
# X_under_train, X_under_test, y_under_train, y_under_test = train_test_split(X_under, y_under, random_state=1)
from sklearn.preprocessing import StandardScaler
X_o_scaler = StandardScaler().fit(X_over_train)
X_o_train_scaled = X_o_scaler.transform(X_over_train)
X_o_test_scaled = X_o_scaler.transform(X_over_test)
# X_u_scaler = StandardScaler().fit(X_under_train)
# X_u_train_scaled = X_u_scaler.transform(X_under_train)
# X_u_test_scaled = X_u_scaler.transform(X_under_test)
from sklearn.linear_model import LogisticRegression
classifier_o = LogisticRegression()
# classifier_u = LogisticRegression()
classifier_o.fit(X_o_train_scaled, y_over_train)
# classifier_u.fit(X_u_train_scaled, y_under_train)
print(f"Training Data Score: {classifier_o.score(X_o_train_scaled, y_over_train)}")
print(f"Testing Data Score: {classifier_o.score(X_o_test_scaled, y_over_test)}")
# print(f"Training Data Score: {classifier_u.score(X_u_train_scaled, y_under_train)}")
# print(f"Testing Data Score: {classifier_u.score(X_u_test_scaled, y_under_test)}")
# Predictions of new data
new_df = pd.read_csv("Resources/predict_data.csv")
new_skinny = new_df.drop(['EducationField','EmployeeCount','StandardHours','JobRole','MaritalStatus','DailyRate','MonthlyRate','HourlyRate','Over18','OverTime'], axis=1).drop_duplicates()
new_skinny.rename(columns={"Attrition": "EmploymentStatus"}, inplace=True)
new_skinny['EmploymentStatus'] = new_skinny['EmploymentStatus'].replace(['Yes','No'],['Terminated','Retained'])
new_skinny['Gender']=new_skinny['Gender'].replace(['Female','Male'],[0,1])
new_skinny['BusinessTravel'] = new_skinny['BusinessTravel'].replace(['Travel_Rarely','Travel_Frequently','Non-Travel'],[1,2,0])
new_skinny['Department']=new_skinny['Department'].replace(['Human Resources','Sales','R&D'],[0,1,2])
print(len(list(df_skinny)))
print(len(list(new_skinny)))
new_X = new_skinny.drop(["EmploymentStatus","EmployeeNumber"], axis=1)
new_X_scaler = StandardScaler().fit(new_X)
new_X_scaled = new_X_scaler.transform(new_X)
new_o_predictions=classifier_o.predict(new_X_scaled)
# unique, counts = unique(new_o_predictions, return_counts=True)
# dict(zip(unique, counts))
# termpercent=((counts[1]/len(new_o_predictions))*100).round(1)
# print(dict(zip(unique,counts)))# print(termpercent)
# print(termpercent)
ynew = classifier_o.predict_proba(new_X_scaled)
ynew=ynew.tolist()
type(ynew[0])
loss_probability = []
for y in ynew:
probability = (y[1]*100)
loss_probability.append(probability)
# print(loss_probability)
columns = []
for col in df_skinny.drop(['EmploymentStatus','EmployeeNumber'],axis=1).columns:
columns.append(col)
feature_importance=pd.DataFrame(np.hstack((np.array([columns[0:]]).T, classifier_o.coef_.T)), columns=['feature', 'importance'])
feature_importance['importance']=pd.to_numeric(feature_importance['importance'])
plot_df=feature_importance.sort_values(by='importance', ascending=True)
# print(plot_df)
###Output
_____no_output_____
###Markdown
Note: Negative importance scores indicate the importance of each feature to class 0 (Active employment status); positive scores are relative to class 1 (Terminated employment). I.e., both features with high negative importance scores and those with high positive scores are important to attrition, hence the conversion to absolute values below to take both extremes into account and avoid confusion.
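As a small added illustration of ranking features by magnitude (a sketch, not part of the original analysis; it assumes the `feature_importance` dataframe built above):
###Code
# rank features by the absolute value of their logistic-regression coefficient
feature_importance['importanceAbsolute'] = feature_importance['importance'].abs()
print(feature_importance.sort_values('importanceAbsolute', ascending=False).head(10))
###Output
_____no_output_____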
###Code
# y = plot_df_sorted['importanceAbsolute']
# bars = plot_df_sorted['feature']
y = plot_df['importance']
bars = plot_df['feature']
ticks = [-.45,.45]
labels = ['Most Impact on Retention','Most Impact on Turnover']
plt.figure(figsize=(8,8))
plt.barh(bars,y, height=.5, color='teal')
plt.ylabel("Features \n",fontsize=14)
plt.xticks(ticks,labels,fontsize=14)
# plt.xticks(fontsize=1)
plt.yticks(fontsize=11)
plt.ylim(-1,22)
plt.title("\n Impact of Employment Factors on Turnover \n",fontsize=16)
plt.savefig('static/featureImportance.png')
plt.show()
df_RD = df_skinny.loc[df_skinny['Department'].isin([2])].drop(["Department","EmployeeNumber"],axis=1)
# print(len(df_RD.index))
X_RD =df_RD.drop("EmploymentStatus", axis=1)
y_RD = df_RD["EmploymentStatus"]
oversample = imblearn.over_sampling.RandomOverSampler(sampling_strategy=.4)
X_RDover, y_RDover = oversample.fit_resample(X_RD, y_RD)
X_RD_train, X_RD_test, y_RD_train, y_RD_test = train_test_split(X_RDover, y_RDover, random_state=1)
X_RD_scaler = StandardScaler().fit(X_RD_train)
X_RD_train_scaled = X_RD_scaler.transform(X_RD_train)
X_RD_test_scaled = X_RD_scaler.transform(X_RD_test)
classifier=LogisticRegression()
classifier.fit(X_RD_train_scaled, y_RD_train)
print(f"Training Data Score: {classifier.score(X_RD_train_scaled, y_RD_train)}")
print(f"Testing Data Score: {classifier.score(X_RD_test_scaled, y_RD_test)}")
columns_RD = []
for col in df_RD.drop(['EmploymentStatus'],axis=1).columns:
columns_RD.append(col)
feature_importance_RD=pd.DataFrame(np.hstack((np.array([columns_RD[0:]]).T, classifier.coef_.T)), columns=['feature', 'importance'])
feature_importance_RD['importance']=pd.to_numeric(feature_importance_RD['importance'])
plot_df_RD=feature_importance_RD.sort_values(by='importance', ascending=True)
# print(plot_df_RD)
# y = plot_df_RD_sorted['importanceAbsolute']
y=plot_df_RD['importance']
# bars = plot_df_RD_sorted['feature']
bars=plot_df_RD['feature']
ticks = [-.6,.5]
labels = ['Most Impact on Retention','Most Impact on Turnover']
plt.figure(figsize=(8,8))
plt.barh(bars,y, height=.7, color='purple')
plt.ylabel("Features \n",fontsize=14)
plt.xticks(ticks,labels,fontsize=14)
plt.yticks(fontsize=11)
plt.ylim(-1,21)
plt.title("\n Impact of Employment Factors on R&D Turnover \n",fontsize=16)
plt.savefig('static/featureImportance_R&D.png')
plt.show()
df_Sales = df_skinny.loc[df_skinny['Department'].isin([1])].drop(["Department","EmployeeNumber"],axis=1)
print(len(df_Sales.index))
X_S =df_Sales.drop("EmploymentStatus", axis=1)
y_S = df_Sales["EmploymentStatus"]
oversample = imblearn.over_sampling.RandomOverSampler(sampling_strategy=.4)
X_Sover, y_Sover = oversample.fit_resample(X_S, y_S)
X_S_train, X_S_test, y_S_train, y_S_test = train_test_split(X_Sover, y_Sover, random_state=1)
X_S_scaler = StandardScaler().fit(X_S_train)
X_S_train_scaled = X_S_scaler.transform(X_S_train)
X_S_test_scaled = X_S_scaler.transform(X_S_test)
classifier=LogisticRegression()
classifier.fit(X_S_train_scaled, y_S_train)
# print(f"Training Data Score: {classifier.score(X_RD_train_scaled, y_RD_train)}")
# print(f"Testing Data Score: {classifier.score(X_RD_test_scaled, y_RD_test)}")
columns_S = []
for col in df_Sales.drop('EmploymentStatus',axis=1).columns:
columns_S.append(col)
feature_importance_S=pd.DataFrame(np.hstack((np.array([columns_S[0:]]).T, classifier.coef_.T)), columns=['feature', 'importance'])
feature_importance_S['importance']=pd.to_numeric(feature_importance_S['importance'])
plot_df_S=feature_importance_S.sort_values(by='importance', ascending=True)
# print(plot_df_S)
y=plot_df_S['importance']
bars=plot_df_S['feature']
ticks = [-.45,.55]
labels = ['Most Impact on Retention','Most Impact on Turnover']
plt.figure(figsize=(8,8))
plt.barh(bars,y, height=.7, color='orange')
plt.ylabel("Features \n",fontsize=14)
plt.xticks(ticks,labels,fontsize=14)
plt.yticks(fontsize=11)
plt.ylim(-1,21)
plt.title("\n Impact of Employment Factors on Sales Turnover \n",fontsize=16)
plt.savefig('static/featureImportance_S.png')
plt.show()
df=pd.read_csv('Resources/turnoverData_full.csv')
df = df.drop(['EducationField','EmployeeCount','StandardHours','JobRole','MaritalStatus','DailyRate','MonthlyRate','HourlyRate','Over18','OverTime'], axis=1).drop_duplicates()
df.rename(columns={"Attrition": "EmploymentStatus"}, inplace=True)
df['EmploymentStatus'] = df['EmploymentStatus'].replace(['Yes','No'],['Terminated','Retained'])
df['Gender']=df['Gender'].replace(['Female','Male'],[0,1])
df['BusinessTravel'] = df['BusinessTravel'].replace(['Travel_Rarely','Travel_Frequently','Non-Travel'],[1,2,0])
df['Department']=df['Department'].replace(['Human Resources','Sales','R&D'],[0,1,2])
X = df.drop("EmploymentStatus", axis=1)
y = df["EmploymentStatus"]
# Data is imbalanced, so we need to resample. The following (random oversampling of the terminated class
# with sampling_strategy=0.4) gave results most consistent with the sampling of the original dataset.
import imblearn
oversample = imblearn.over_sampling.RandomOverSampler(sampling_strategy=.4)
X_over, y_over = oversample.fit_resample(X, y)
from sklearn.model_selection import train_test_split
X_over_train, X_over_test, y_over_train, y_over_test = train_test_split(X_over, y_over, random_state=1)
from sklearn.preprocessing import StandardScaler
X_o_scaler = StandardScaler().fit(X_over_train)
X_o_train_scaled = X_o_scaler.transform(X_over_train)
X_o_test_scaled = X_o_scaler.transform(X_over_test)
from sklearn.linear_model import LogisticRegression
classifier_o = LogisticRegression()
classifier_o.fit(X_o_train_scaled, y_over_train)
# Go back to original X and y from full data set to make predictions
X_scaler=StandardScaler().fit(X)
X_scaled=X_scaler.transform(X)
predictions=classifier_o.predict(X_scaled)
prob = classifier_o.predict_proba(X_scaled)
emp_nums = df['EmployeeNumber'].to_list()
import json
def make_json():
loss_probability = []
prob_lst=prob.tolist()
    for row in prob_lst:
        # row = [P(retained), P(terminated)]; iterating directly avoids list.index(), which breaks on duplicate rows
        probability = row[1] * 100
        loss_probability.append(probability)
emp_dicts = []
for i, j in zip(emp_nums, loss_probability):
emp_dicts.append({"employee_number": i, "loss_probability":j})
emp_data = json.loads(json.dumps(emp_dicts))
# # emp_data=tuple(emp_data)
print((emp_data[0]["loss_probability"]))
make_json()
# Turnover and cost data calculations
import json
def cost_calculation():
df_cost = df_initial.drop(['EducationField','EmployeeCount','StandardHours','JobRole','MaritalStatus','DailyRate','MonthlyRate','HourlyRate','Over18','OverTime'], axis=1).drop_duplicates()
df_cost.rename(columns={"Attrition": "EmploymentStatus"}, inplace=True)
df_level = df_cost.groupby(['JobLevel','EmploymentStatus']).count().reset_index()
lev1_turn = df_level.iloc[1,2].item()
lev2_turn = df_level.iloc[3,2].item()
lev3_turn = df_level.iloc[5,2].item()
lev4_turn = df_level.iloc[7,2].item()
lev5_turn = df_level.iloc[9,2].item()
cost = [4000, 6000, 8000, 18000, 40000]
avg_cost = (((cost[0]*lev1_turn) + (cost[1] * lev2_turn) + (cost[2] * lev3_turn) + (cost[3] * lev4_turn) +
(cost[4] * lev5_turn))/(lev1_turn + lev2_turn + lev3_turn + lev4_turn + lev5_turn))
    total_cost = (avg_cost * (lev1_turn + lev2_turn + lev3_turn + lev4_turn + lev5_turn))
total_count = len(df_cost)
y=df_cost['EmploymentStatus'].value_counts().to_list()
retained=(y[0]/len(df_cost)*100)
turnover=y[1]
proportion_turnover_lev1 = (lev1_turn/turnover)
proportion_turnover_lev2 = (lev2_turn/turnover)
proportion_turnover_lev3 = (lev3_turn/turnover)
proportion_turnover_lev4 = (lev4_turn/turnover)
proportion_turnover_lev5 = (lev5_turn/turnover)
calc_dict = [{"total_employees":total_count}, {"cost_per_level":cost},{"avg_cost":avg_cost},{"total_cost":total_cost},
{"turnover":turnover},{"retained":retained},{"proportion_lev1_turnover":proportion_turnover_lev1},
{"proportion_lev2_turnover":proportion_turnover_lev2}, {"proportion_lev3_turnover":proportion_turnover_lev3},
{"proportion_lev4_turnover":proportion_turnover_lev4}, {"proportion_lev5_turnover":proportion_turnover_lev5},]
    calc_data = json.loads(json.dumps(calc_dict))
    return calc_data
cost_calculation()
df_predict=df_initial[df_initial["Attrition"]=="No"]
print(len(df_predict))
###Output
1233
###Markdown
Logistic Regression. The dataset I will be working with contains information on various cars. For each car we have information about the technical aspects of the vehicle, such as the motor's displacement, the weight of the car, the miles per gallon, and how fast the car accelerates. Using this information we will predict the origin of the vehicle: either North America, Europe, or Asia. Here are the columns in the dataset:- mpg -- Miles per gallon, Continuous.- cylinders -- Number of cylinders in the motor, Integer, Ordinal, and Categorical.- displacement -- Size of the motor, Continuous.- horsepower -- Horsepower produced, Continuous.- weight -- Weight of the car, Continuous.- acceleration -- Acceleration, Continuous.- year -- Year the car was built, Integer and Categorical.- origin -- Integer and Categorical. 1: North America, 2: Europe, 3: Asia.- car_name -- Name of the car.
###Code
import pandas as pd
import numpy as np
cars = pd.read_csv("C:/Users/Jennifer/Documents/Python/Data/auto.csv")
cars.head()
unique_regions = cars["origin"].unique()
print(unique_regions)
###Output
[1 3 2]
###Markdown
Dummy Variables
###Code
dummy_cylinders = pd.get_dummies(cars["cylinders"], prefix="cyl")
cars = pd.concat([cars, dummy_cylinders], axis=1)
dummy_years = pd.get_dummies(cars["year"], prefix="year")
cars = pd.concat([cars, dummy_years], axis=1)
cars = cars.drop("year", axis=1)
cars = cars.drop("cylinders", axis=1)
print(cars.head())
###Output
mpg displacement horspower weight acceleration origin \
0 18.0 307.0 130 3504 12.0 1
1 15.0 350.0 165 3693 11.5 1
2 18.0 318.0 150 3436 11.0 1
3 16.0 304.0 150 3433 12.0 1
4 17.0 302.0 140 3449 10.5 1
car_name cyl_3 cyl_4 cyl_5 ... year_73 year_74 \
0 chevrolet chevelle malibu 0 0 0 ... 0 0
1 buick skylark 320 0 0 0 ... 0 0
2 plymouth satellite 0 0 0 ... 0 0
3 amc rebel sst 0 0 0 ... 0 0
4 ford torino 0 0 0 ... 0 0
year_75 year_76 year_77 year_78 year_79 year_80 year_81 year_82
0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0
[5 rows x 25 columns]
###Markdown
Multiclass Classification
###Code
shuffled_rows = np.random.permutation(cars.index)
shuffled_cars = cars.iloc[shuffled_rows]
highest_train_row = int(cars.shape[0] * .70)
train = shuffled_cars.iloc[0:highest_train_row]
test = shuffled_cars.iloc[highest_train_row:]
###Output
_____no_output_____
###Markdown
Training MultiClass Regression Models. In the one-vs-all approach, we're essentially converting an n-class (in our case n is 3) classification problem into n binary classification problems. For our case, we'll need to train 3 models: A model where all cars built in North America are considered Positive (1) and those built in Europe and Asia are considered Negative (0). A model where all cars built in Europe are considered Positive (1) and those built in North America and Asia are considered Negative (0). A model where all cars built in Asia are labeled Positive (1) and those built in North America and Europe are considered Negative (0). Each of these models is a binary classification model that will return a probability between 0 and 1. When we apply this model on new data, a probability value will be returned from each model (3 total). For each observation, we choose the label corresponding to the model that predicted the highest probability.
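As a quick illustration of this decision rule before training the real models, here is a toy sketch that uses made-up probabilities:
###Code
# Toy sketch (hypothetical numbers): each row holds the probabilities returned by the
# three binary models for one observation; we pick the origin with the highest value.
import numpy as np

toy_probs = np.array([[0.70, 0.20, 0.10],   # columns: North America (1), Europe (2), Asia (3)
                      [0.15, 0.30, 0.55]])
toy_origins = np.array([1, 2, 3])

print(toy_origins[np.argmax(toy_probs, axis=1)])   # -> [1 3]
###Output
_____no_output_____
###Markdown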
###Code
from sklearn.linear_model import LogisticRegression
unique_origins = cars["origin"].unique()
unique_origins.sort()
models = {}
features = [c for c in train.columns if c.startswith("cyl") or c.startswith("year")]
for origin in unique_origins:
model = LogisticRegression()
X_train = train[features]
y_train = train["origin"] == origin
model.fit(X_train, y_train)
models[origin] = model
###Output
_____no_output_____
###Markdown
Testing the Models
###Code
testing_probs = pd.DataFrame(columns=unique_origins)
for origin in unique_origins:
# Select testing features.
X_test = test[features]
# Compute probability of observation being in the origin.
testing_probs[origin] = models[origin].predict_proba(X_test)[:,1]
###Output
_____no_output_____
###Markdown
Choosing the Origin. Now that we have trained the models and computed the probabilities for each origin, we can classify each observation. To classify each observation, we want to select the origin with the highest predicted probability for that observation. While each column in our dataframe testing_probs represents an origin, we just need to choose the one with the largest probability. We can use the DataFrame method .idxmax() to return a Series where each value corresponds to the column where the maximum value occurs for that observation. We need to make sure to set the axis parameter to 1, since we want to calculate the maximum value across columns. Since each column maps directly to an origin, the resulting Series will be the classification from our model.
###Code
predicted_origins = testing_probs.idxmax(axis=1)
print(predicted_origins)
###Output
0 1
1 1
2 1
3 3
4 2
5 2
6 1
7 1
8 1
9 1
10 2
11 2
12 3
13 1
14 1
15 2
16 1
17 1
18 1
19 1
20 2
21 1
22 1
23 1
24 1
25 1
26 3
27 3
28 1
29 1
..
90 2
91 1
92 1
93 1
94 1
95 1
96 2
97 3
98 1
99 1
100 1
101 1
102 1
103 2
104 2
105 1
106 1
107 1
108 1
109 2
110 2
111 3
112 1
113 2
114 3
115 3
116 1
117 2
118 1
119 1
Length: 120, dtype: int64
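###Markdown
As a quick sanity check (a sketch that was not part of the original walkthrough), we can compare these predictions with the true origins of the test set to get a rough accuracy:
###Code
# positional comparison works because testing_probs rows were built in test-set order
accuracy = (predicted_origins.values == test["origin"].values).mean()
print(accuracy)
###Output
_____no_output_____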
###Markdown
AIM: To implement logistic regression **Theory**: Logistic Regression is a Machine Learning method that is used to solve classification problems. It is a predictive analytics technique based on the idea of probability. The Logistic Regression classification algorithm is used to predict the likelihood of a categorical dependent variable. The dependent variable in logistic regression is a binary variable, with data coded as 1 (yes, True, normal, success, etc.) or 0 (no, False, abnormal, failure, etc.). The goal of Logistic Regression is to discover a link between characteristics and the likelihood of a specific outcome. For example, when predicting whether a student passes or fails an exam based on the number of hours spent studying, the response variable has two values: pass and fail. Problem Statement: Predict whether a patient has diabetes or not by using the given data of Glucose, Blood Pressure, Insulin, BMI, age, etc.
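Before fitting a model on the data, here is a minimal sketch (with made-up scores) of how the sigmoid turns a linear combination of the features into a probability that is then thresholded at 0.5:
###Code
# Minimal sketch with hypothetical linear scores theta^T.x for three patients.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

scores = np.array([-2.0, 0.0, 3.0])
probs = sigmoid(scores)
print(probs)            # roughly [0.12, 0.50, 0.95]
print(probs >= 0.5)     # predicted "diabetic" where the probability is at least 0.5
###Output
_____no_output_____
###Markdown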
###Code
import pandas as pd
pima = pd.read_csv("diabetes.csv")
pima.head()
###Output
_____no_output_____
###Markdown
Separating Input variables and target variable
###Code
X = pima.drop('Outcome',axis=1)
X
type(X)
Y = pima.Outcome
Y
type(Y)
###Output
_____no_output_____
###Markdown
Splitting Data for training and testing
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.25)
###Output
_____no_output_____
###Markdown
Importing Model
###Code
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
###Output
_____no_output_____
###Markdown
Training Model on training data
###Code
model.fit(X_train,Y_train)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
###Markdown
Making predictions on testing data
###Code
Y_pred = model.predict(X_test)
###Output
_____no_output_____
###Markdown
Evaluating Model with Confusion Matrix
###Code
from sklearn import metrics
conf_matrix = metrics.confusion_matrix(Y_test,Y_pred)
conf_matrix
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
get_ipython().run_line_magic('matplotlib', 'inline')
class_names = [0,1] # name of classes
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
# create heatmap
sns.heatmap(pd.DataFrame(conf_matrix), annot=True, cmap="YlGnBu" ,fmt='g')
ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
###Output
_____no_output_____
###Markdown
Evaluating model with other methods.
###Code
print("Accuracy:", metrics.accuracy_score(Y_test, Y_pred))
print("Precision:", metrics.precision_score(Y_test, Y_pred))
print("Recall:", metrics.recall_score(Y_test, Y_pred))
y_pred_proba = model.predict_proba(X_test)[::,1]
fpr, tpr, _ = metrics.roc_curve(Y_test, y_pred_proba)
auc = metrics.roc_auc_score(Y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
from pandas.plotting import radviz
plt.figure()
radviz(pima, "DiabetesPedigreeFunction");
###Output
_____no_output_____
###Markdown
Logistic Regression: Categorical binary target. * Probability of having a heart attack.* Mortality in injured patients.* Customer's propensity to purchase a product or cancel a subscription.* Failure of a given process or product.> binary.> probability of the predictions.> linear decision boundary.> impact of the features.**Sigmoid function**: logistic regression:* 0 <= σ(θ^T.X) <= 1* σ(θ^T.X) = 1/(1 + exp(-θ^T.X))* P(Y=1|X) => probability that Y belongs to class 1 given X.* P(Y=0|X) = 1 - P(Y=1|X).1. Initialize θ.2. Calculate y_hat = σ(θ^T.X) for each observation.3. Evaluate the model.4. Calculate the error (cost).5. Change θ to reduce the cost.6. Go back to step 2. **Cost**: cost(**y**,y) = (1/2).(σ(θ^T.X) - y)^2, J(θ) = MSE = (1/m).Σ cost(**y**,y)> difficult to get the global min even by deriving. Our goal: * **y**=1 & y=1 => cost=0* **y**=0 & y=1 => cost -> +infinity (very large)> -log(**y**)> cost(**y**,y) = * -log(**y**) if y=1* -log(1-**y**) if y=0* J(θ) = -(1/m).Σ[ (y^i).log(**y**^i) + (1 - y^i).log(1 - **y**^i) ]* Our goal: finding the best parameters for our model **by** minimizing the cost function **by** an optimization approach.* **Gradient Descent**: a technique that uses the derivative of the cost function to change the parameter values in order to minimize the cost. Logistic Regression Customer Churn
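Before moving to the scikit-learn churn example, the cell below is a minimal NumPy sketch (with made-up data) of those six steps: compute y_hat = σ(θ^T.X), evaluate the cross-entropy cost J(θ), and take a few gradient-descent updates of θ:
###Code
# Minimal gradient-descent sketch for logistic regression on hypothetical data.
import numpy as np

X = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5]])   # made-up features (first column = bias term)
y = np.array([1.0, 0.0, 1.0])                          # made-up labels
theta = np.zeros(2)                                    # step 1: initialize theta
lr = 0.1                                               # learning rate

for _ in range(3):
    y_hat = 1 / (1 + np.exp(-X @ theta))                                  # step 2: predictions
    cost = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))      # steps 3-4: cost J(theta)
    grad = X.T @ (y_hat - y) / len(y)                                     # derivative of J w.r.t. theta
    theta -= lr * grad                                                    # step 5: update, then repeat (step 6)
    print(round(cost, 4), theta)
###Output
_____no_output_____
###Markdown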
###Code
import pandas as pd
import pylab as pl
import numpy as np
import scipy.optimize as opt
from sklearn import preprocessing
import matplotlib.pyplot as plt
%matplotlib inline
# downloading the dataset.
!wget -O ChurnData.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-Coursera/labs/Data_files/ChurnData.csv
df = pd.read_csv('/content/ChurnData.csv')
df.head()
df.info()
df['churn'] = df['churn'].astype('int')
X = df[['tenure','age','address','income','ed','employ','equip']].values
X[:5]
Y = df[['churn']].values
Y[:5]
# normalize our dataset.
X = preprocessing.StandardScaler().fit(X).transform(X)
X[:5]
# train test dataset.
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(X,Y,test_size=0.2,random_state=4)
print(x_train.shape,x_test.shape)
print(y_train.shape,y_test.shape)
# build the model.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
LR = LogisticRegression(C=0.01,solver='liblinear').fit(x_train,y_train)
LR
y_pre = LR.predict(x_test)
print(y_test[:5],y_pre[:5])
y_pre_proba = LR.predict_proba(x_test)
y_pre_proba[:5]
# jaccard index evaluation: the size of intersection divided by the size of the union.
# note: in newer scikit-learn releases, jaccard_similarity_score was replaced by sklearn.metrics.jaccard_score
from sklearn.metrics import jaccard_similarity_score
jaccard_similarity_score(y_test,y_pre)
# confusion matrix evaluation.
from sklearn.metrics import confusion_matrix,classification_report
import itertools
def plot_confusion_matrix(cm,
classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
if normalize:
cm = cm.astype('float')/cm.sum(axis=1)[:,np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm,interpolation='nearest',cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks,classes,rotation=45)
plt.yticks(tick_marks,classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max()/2
for i, j in itertools.product(range(cm.shape[0]),range(cm.shape[1])):
plt.text(j,i,format(cm[i,j],fmt),
horizontalalignment="center",
color='white' if cm[i,j] > thresh else 'black')
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
print(confusion_matrix(y_test,y_pre,labels=[1,0]))
# compute confusion matrix
conf_matrix = confusion_matrix(y_test,y_pre,labels=[1,0])
np.set_printoptions(precision=2)
# plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(conf_matrix,classes=['churn=1','churn=0'],normalize=False,title='Confusion matrix')
print(classification_report(y_test,y_pre))
# log loss
from sklearn.metrics import log_loss
log_loss(y_test,y_pre_proba)
# with different solver and regularisation
LR2 = LogisticRegression(C=0.01,
solver='sag').fit(x_train,y_train)
y_pre2_proba = LR2.predict_proba(x_test)
print("LogLoss: %.2f" % log_loss(y_test,y_pre2_proba))
###Output
_____no_output_____
###Markdown
Logistic Regression: Fit and evaluate a model. **In this section, we will fit and evaluate a simple Logistic Regression.**
###Code
import joblib
import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
**Loading the train features and train labels**
###Code
tr_features = pd.read_csv("/content/drive/MyDrive/train_features.csv")
tr_labels = pd.read_csv("/content/drive/MyDrive/train_labels.csv", header=None)
def print_results(results):
print("BEST PARAMS {}\n".format(results.best_params_))
means = results.cv_results_['mean_test_score']
stds = results.cv_results_['std_test_score']
    for mean, std, params in zip(means, stds, results.cv_results_['params']):
        print('{} (+/- {}) for {}'.format(round(mean, 3), round(std * 2, 3), params))
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
lr = LogisticRegression()
parameters = {
'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]
}
cv = GridSearchCV(lr, parameters, cv=5)
cv.fit(tr_features, tr_labels.values.ravel())
print_results(cv)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
###Markdown
**Ignore the Warnings**
###Code
cv.best_estimator_
###Output
_____no_output_____
###Markdown
**Saving the model**
###Code
joblib.dump(cv.best_estimator_,'/content/drive/MyDrive/LR_model.json')
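# Usage sketch (not from the original notebook): the saved estimator can be loaded back
# with joblib and used for predictions; the path is assumed to be the same one used above.
loaded_model = joblib.load('/content/drive/MyDrive/LR_model.json')
sample_preds = loaded_model.predict(tr_features.head())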
###Output
_____no_output_____
###Markdown
Imports
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score, validation_curve
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report, plot_roc_curve, plot_confusion_matrix, f1_score
###Output
_____no_output_____
###Markdown
Loading the data
###Code
df = pd.read_csv('../input/heart-disease-uci/heart.csv')
df.head()
df.shape
###Output
_____no_output_____
###Markdown
As we can see, this dataset has 303 rows and 14 columns. Exploring our dataset
###Code
df.sex.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
This means we have more females than males. Let's plot only the people who got the disease, by sex
###Code
df.sex[df.target==1].value_counts().plot(kind="bar")
# commenting the plot
plt.title("people who got disease by sex")
plt.xlabel("sex")
plt.ylabel("effected");
plt.xticks(rotation = 0);
df.target.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
The two classes are almost equal. Plotting Heart Disease by Age / Max Heart Rate
###Code
sns.scatterplot(x=df.age, y=df.thalach, hue = df.target);
# commenting the plot
plt.title("Heart Disease by Age / Max Heart Rate")
plt.xlabel("Age")
plt.legend(["Disease", "No Disease"])
plt.ylabel("Max Heart Rate");
###Output
_____no_output_____
###Markdown
Correlation matrix
###Code
corr = df.corr()
f, ax = plt.subplots(figsize=(12, 10))
sns.heatmap(corr, annot=True, fmt='.2f', ax=ax);
df.head()
###Output
_____no_output_____
###Markdown
Modeling
###Code
df.head()
###Output
_____no_output_____
###Markdown
Features / Lable
###Code
X = df.drop('target', axis=1)
X.head()
y = df.target
y.head()
###Output
_____no_output_____
###Markdown
Spliting our dataset with 20% for test
###Code
np.random.seed(42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
y_train.head()
###Output
_____no_output_____
###Markdown
Evaluation metrics. A function for getting the scores (F1 and accuracy) and plotting the confusion matrix
###Code
def getScore(model, X_test, y_test):
y_pred = model.predict(X_test)
print('f1_score')
print(f1_score(y_test,y_pred,average='binary'))
print('accuracy')
acc = accuracy_score(y_test,y_pred, normalize=True)
print(acc)
print('Confusion Matrix :')
plot_confusion_matrix(model, X_test, y_test)
plt.show()
return acc
np.random.seed(42)
clf = LogisticRegression(solver='liblinear')
clf.fit(X_train, y_train);
clf_accuracy = getScore(clf, X_test, y_test)
###Output
f1_score
0.875
accuracy
0.8688524590163934
Confusion Matrix :
###Markdown
Classification report
###Code
print(classification_report(y_test, clf.predict(X_test)))
###Output
precision recall f1-score support
0 0.86 0.86 0.86 29
1 0.88 0.88 0.88 32
accuracy 0.87 61
macro avg 0.87 0.87 0.87 61
weighted avg 0.87 0.87 0.87 61
###Markdown
ROC curve
###Code
plot_roc_curve(clf, X_test, y_test);
###Output
_____no_output_____
###Markdown
Feature importance
###Code
clf.coef_
f_dict = dict(zip(X.columns , clf.coef_[0]))
f_data = pd.DataFrame(f_dict, index=[0])
f_data.T.plot.bar(title="Feature Importance", legend=False, figsize=(10,4));
plt.xticks(rotation = 0);
###Output
_____no_output_____
###Markdown
From this plot we can see which features matter more and which matter less: for example, features like age, trestbps, chol and thalach have the least importance, while features like sex, cp, exang, etc. have more importance. Cross-validation
###Code
cv_precision = np.mean(cross_val_score(clf,
X,
y,
cv=5,
scoring="precision"))
cv_precision
model = LogisticRegression(solver= 'liblinear')
param_range = [0.001, 0.05, 0.1, 0.5, 1.0, 10.0]
train_score, val_score = validation_curve(model, X_train, y_train, param_name='C', param_range=param_range, cv=5)
plt.plot(param_range, val_score.mean(axis=1));
###Output
_____no_output_____
###Markdown
GridSearchCV
###Code
np.random.seed(42)
param_grid = {"C":np.logspace(-3,3,7), "penalty":["l1","l2"]}
grid_search = GridSearchCV(estimator = LogisticRegression(solver='liblinear'), param_grid = param_grid,
cv = 10, n_jobs = -1, verbose = 2)
grid_search.fit(X_train, y_train)
best_grid = grid_search.best_params_
print('best grid = ', best_grid)
grid_accuracy = grid_search.score(X_test, y_test)
print('Grid Score = ', grid_accuracy)
best_grid
grid_accuracy
###Output
_____no_output_____
###Markdown
Comparing results
###Code
import plotly.express as px
data = pd.DataFrame([["clf", clf_accuracy], ["grid", grid_accuracy]], columns = ['Models','Score'])
fig = px.bar(data_frame = data,
x="Models",
y="Score",
color="Models", title = "<b>Models Score</b>", template = 'plotly_dark')
fig.update_layout(bargap=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
Step 1: Define the model
###Code
import torch


class SimpleLogisticRegressionModel(torch.nn.Module):
    def __init__(self):
        super(SimpleLogisticRegressionModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        # the sigmoid squashes the linear output into a probability in (0, 1)
        y_pred = torch.sigmoid(self.linear(x))
        return y_pred


model = SimpleLogisticRegressionModel()
###Output
_____no_output_____
###Markdown
Step 2: Define loss function and optimizer
###Code
criterion = torch.nn.BCELoss()  # the default 'mean' reduction matches the old size_average=True
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
###Output
_____no_output_____
###Markdown
Step 3: Setup the training loop and learn
###Code
for epoch in range(1000):
    y_pred = model(x_data)  # x_data and y_data are assumed to be defined in an earlier cell
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

datapoint = torch.tensor([[1.0]])
print("Predict for 1: ", 1.0, model(datapoint).item() > 0.5)

datapoint = torch.tensor([[10.0]])
print("Predict for 10: ", 1.0, model(datapoint).item() > 0.5)
###Output
Predict for 1: 1.0 False
Predict for 10: 1.0 True
###Markdown
Adam vs Adashift: Logistic Regression on MNIST
###Code
import torch
from torch import nn
import matplotlib.pyplot as plt
import numpy as np
from scipy.signal import savgol_filter
import adashift.optimizers as ad_opt
import torchvision
import torchvision.transforms as transforms
from torch.nn import functional as F
input_size = 784
num_classes = 10
num_epochs = 200
batch_size = 64
train_dataset = torchvision.datasets.MNIST(root='data',
train=True,
download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))]))
test_dataset = torchvision.datasets.MNIST(root='data',
train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))]))
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
import torch.nn as nn
device = torch.device('cuda')
model = nn.Linear(input_size, num_classes).to(device)
criterion = nn.CrossEntropyLoss()
def train(model, device, train_loader, optimizer, num_epochs, criterion, display_iter=1000):
model.train()
train_loss_hist = []
test_acc_hist = []
test_loss_hist = []
test_loss, test_acc = test(model, device, test_loader, criterion)
test_loss_hist.append(test_loss)
test_acc_hist.append(test_acc)
for epoch in range(num_epochs):
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data.reshape(-1, 28*28))
loss = criterion(output, target)
train_loss_hist.append(loss.item())
loss.backward()
optimizer.step()
if batch_idx % 100 == 0:
test_loss, test_acc = test(model, device, test_loader, criterion)
test_loss_hist.append(test_loss)
test_acc_hist.append(test_acc)
if batch_idx % display_iter == 0:
print('Train Epoch: {} TrainLoss: {:.6f}'.format(
epoch, loss.item()))
print('Test set: TestLoss: {:.4f}, Accuracy: {:.0f}%'.format(
test_loss_hist[-1], test_acc_hist[-1]))
return train_loss_hist, test_loss_hist, test_acc_hist
def test(model, device, test_loader, criterion):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data.reshape(-1, 28*28))
test_loss += criterion(output, target).item() # sum up batch loss
pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
acc = 100. * correct / len(test_loader.dataset)
return test_loss, acc
###Output
_____no_output_____
###Markdown
**Adam**
###Code
adam_optimizer = torch.optim.Adam(model.parameters(), lr=0.001,\
betas=(0.0, 0.999), eps=1e-8, weight_decay=0)
adam_train_loss_hist, adam_test_loss_hist, adam_test_acc_hist = \
train(model, device, train_loader, adam_optimizer, 200, criterion)
###Output
Train Epoch: 0 TrainLoss: 2.370641
Test set: TestLoss: 0.0363, Accuracy: 12%
Train Epoch: 1 TrainLoss: 0.376438
Test set: TestLoss: 0.0047, Accuracy: 91%
Train Epoch: 2 TrainLoss: 0.105886
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 3 TrainLoss: 0.256047
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 4 TrainLoss: 0.118529
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 5 TrainLoss: 0.173434
Test set: TestLoss: 0.0043, Accuracy: 92%
Train Epoch: 6 TrainLoss: 0.244611
Test set: TestLoss: 0.0043, Accuracy: 93%
Train Epoch: 7 TrainLoss: 0.367147
Test set: TestLoss: 0.0043, Accuracy: 92%
Train Epoch: 8 TrainLoss: 0.077578
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 9 TrainLoss: 0.241001
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 10 TrainLoss: 0.176652
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 11 TrainLoss: 0.113595
Test set: TestLoss: 0.0043, Accuracy: 92%
Train Epoch: 12 TrainLoss: 0.236483
Test set: TestLoss: 0.0043, Accuracy: 92%
Train Epoch: 13 TrainLoss: 0.213358
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 14 TrainLoss: 0.133995
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 15 TrainLoss: 0.379949
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 16 TrainLoss: 0.187783
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 17 TrainLoss: 0.113594
Test set: TestLoss: 0.0043, Accuracy: 92%
Train Epoch: 18 TrainLoss: 0.109786
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 19 TrainLoss: 0.250065
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 20 TrainLoss: 0.274266
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 21 TrainLoss: 0.120937
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 22 TrainLoss: 0.363402
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 23 TrainLoss: 0.223230
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 24 TrainLoss: 0.366290
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 25 TrainLoss: 0.184766
Test set: TestLoss: 0.0049, Accuracy: 91%
Train Epoch: 26 TrainLoss: 0.188403
Test set: TestLoss: 0.0044, Accuracy: 93%
Train Epoch: 27 TrainLoss: 0.334602
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 28 TrainLoss: 0.248289
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 29 TrainLoss: 0.251719
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 30 TrainLoss: 0.304705
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 31 TrainLoss: 0.198640
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 32 TrainLoss: 0.181005
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 33 TrainLoss: 0.336947
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 34 TrainLoss: 0.318594
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 35 TrainLoss: 0.128349
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 36 TrainLoss: 0.275598
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 37 TrainLoss: 0.132309
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 38 TrainLoss: 0.265668
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 39 TrainLoss: 0.144690
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 40 TrainLoss: 0.272387
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 41 TrainLoss: 0.248617
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 42 TrainLoss: 0.136539
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 43 TrainLoss: 0.352109
Test set: TestLoss: 0.0052, Accuracy: 91%
Train Epoch: 44 TrainLoss: 0.222231
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 45 TrainLoss: 0.200595
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 46 TrainLoss: 0.347790
Test set: TestLoss: 0.0046, Accuracy: 93%
Train Epoch: 47 TrainLoss: 0.371002
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 48 TrainLoss: 0.169221
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 49 TrainLoss: 0.144662
Test set: TestLoss: 0.0051, Accuracy: 91%
Train Epoch: 50 TrainLoss: 0.332537
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 51 TrainLoss: 0.170543
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 52 TrainLoss: 0.113785
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 53 TrainLoss: 0.179558
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 54 TrainLoss: 0.287010
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 55 TrainLoss: 0.236584
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 56 TrainLoss: 0.376037
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 57 TrainLoss: 0.159357
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 58 TrainLoss: 0.169948
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 59 TrainLoss: 0.149084
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 60 TrainLoss: 0.099544
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 61 TrainLoss: 0.195756
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 62 TrainLoss: 0.154780
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 63 TrainLoss: 0.198428
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 64 TrainLoss: 0.332021
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 65 TrainLoss: 0.171447
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 66 TrainLoss: 0.256438
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 67 TrainLoss: 0.269121
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 68 TrainLoss: 0.269931
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 69 TrainLoss: 0.250099
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 70 TrainLoss: 0.278549
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 71 TrainLoss: 0.161341
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 72 TrainLoss: 0.317239
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 73 TrainLoss: 0.121130
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 74 TrainLoss: 0.487674
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 75 TrainLoss: 0.167271
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 76 TrainLoss: 0.390165
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 77 TrainLoss: 0.234919
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 78 TrainLoss: 0.168314
Test set: TestLoss: 0.0052, Accuracy: 91%
Train Epoch: 79 TrainLoss: 0.101862
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 80 TrainLoss: 0.232284
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 81 TrainLoss: 0.176511
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 82 TrainLoss: 0.348565
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 83 TrainLoss: 0.249275
Test set: TestLoss: 0.0050, Accuracy: 91%
Train Epoch: 84 TrainLoss: 0.109563
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 85 TrainLoss: 0.264853
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 86 TrainLoss: 0.396912
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 87 TrainLoss: 0.439602
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 88 TrainLoss: 0.138843
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 89 TrainLoss: 0.427041
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 90 TrainLoss: 0.097520
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 91 TrainLoss: 0.184195
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 92 TrainLoss: 0.186523
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 93 TrainLoss: 0.240739
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 94 TrainLoss: 0.090494
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 95 TrainLoss: 0.137519
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 96 TrainLoss: 0.329305
Test set: TestLoss: 0.0048, Accuracy: 92%
###Markdown
**AmsGrad**
###Code
model = nn.Linear(input_size, num_classes).cuda()
amsgrad_optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,\
betas=(0.0, 0.999), eps=1e-8, weight_decay=0,amsgrad=True)
amsgrad_train_loss_hist, amsgrad_test_loss_hist, amsgrad_test_acc_hist = \
train(model, device, train_loader, amsgrad_optimizer, 200, criterion)
###Output
Train Epoch: 0 TrainLoss: 2.584990
Test set: TestLoss: 0.0386, Accuracy: 9%
Train Epoch: 1 TrainLoss: 0.416984
Test set: TestLoss: 0.0048, Accuracy: 91%
Train Epoch: 2 TrainLoss: 0.290514
Test set: TestLoss: 0.0043, Accuracy: 92%
Train Epoch: 3 TrainLoss: 0.360964
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 4 TrainLoss: 0.369499
Test set: TestLoss: 0.0043, Accuracy: 92%
Train Epoch: 5 TrainLoss: 0.152643
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 6 TrainLoss: 0.115180
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 7 TrainLoss: 0.318199
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 8 TrainLoss: 0.338207
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 9 TrainLoss: 0.312407
Test set: TestLoss: 0.0043, Accuracy: 92%
Train Epoch: 10 TrainLoss: 0.275664
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 11 TrainLoss: 0.181080
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 12 TrainLoss: 0.232025
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 13 TrainLoss: 0.195836
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 14 TrainLoss: 0.118180
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 15 TrainLoss: 0.118984
Test set: TestLoss: 0.0043, Accuracy: 92%
Train Epoch: 16 TrainLoss: 0.297446
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 17 TrainLoss: 0.208661
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 18 TrainLoss: 0.190880
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 19 TrainLoss: 0.240997
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 20 TrainLoss: 0.180085
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 21 TrainLoss: 0.171478
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 22 TrainLoss: 0.367920
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 23 TrainLoss: 0.058838
Test set: TestLoss: 0.0044, Accuracy: 93%
Train Epoch: 24 TrainLoss: 0.219134
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 25 TrainLoss: 0.165410
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 26 TrainLoss: 0.142007
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 27 TrainLoss: 0.230116
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 28 TrainLoss: 0.348953
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 29 TrainLoss: 0.278332
Test set: TestLoss: 0.0044, Accuracy: 93%
Train Epoch: 30 TrainLoss: 0.068723
Test set: TestLoss: 0.0044, Accuracy: 92%
Train Epoch: 31 TrainLoss: 0.101061
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 32 TrainLoss: 0.187368
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 33 TrainLoss: 0.374170
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 34 TrainLoss: 0.358121
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 35 TrainLoss: 0.369568
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 36 TrainLoss: 0.147015
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 37 TrainLoss: 0.310975
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 38 TrainLoss: 0.344341
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 39 TrainLoss: 0.223877
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 40 TrainLoss: 0.367645
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 41 TrainLoss: 0.284239
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 42 TrainLoss: 0.071925
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 43 TrainLoss: 0.141788
Test set: TestLoss: 0.0045, Accuracy: 93%
Train Epoch: 44 TrainLoss: 0.138593
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 45 TrainLoss: 0.152788
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 46 TrainLoss: 0.257025
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 47 TrainLoss: 0.212909
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 48 TrainLoss: 0.240212
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 49 TrainLoss: 0.487579
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 50 TrainLoss: 0.177264
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 51 TrainLoss: 0.189290
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 52 TrainLoss: 0.119701
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 53 TrainLoss: 0.190081
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 54 TrainLoss: 0.469467
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 55 TrainLoss: 0.283591
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 56 TrainLoss: 0.377900
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 57 TrainLoss: 0.190952
Test set: TestLoss: 0.0045, Accuracy: 92%
Train Epoch: 58 TrainLoss: 0.202116
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 59 TrainLoss: 0.226641
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 60 TrainLoss: 0.114885
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 61 TrainLoss: 0.220664
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 62 TrainLoss: 0.188823
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 63 TrainLoss: 0.334169
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 64 TrainLoss: 0.364366
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 65 TrainLoss: 0.278922
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 66 TrainLoss: 0.305544
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 67 TrainLoss: 0.292923
Test set: TestLoss: 0.0049, Accuracy: 91%
Train Epoch: 68 TrainLoss: 0.444049
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 69 TrainLoss: 0.256383
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 70 TrainLoss: 0.167024
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 71 TrainLoss: 0.157484
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 72 TrainLoss: 0.343955
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 73 TrainLoss: 0.353825
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 74 TrainLoss: 0.120867
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 75 TrainLoss: 0.089839
Test set: TestLoss: 0.0049, Accuracy: 91%
Train Epoch: 76 TrainLoss: 0.160902
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 77 TrainLoss: 0.279727
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 78 TrainLoss: 0.157161
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 79 TrainLoss: 0.322485
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 80 TrainLoss: 0.152523
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 81 TrainLoss: 0.284181
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 82 TrainLoss: 0.129332
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 83 TrainLoss: 0.356006
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 84 TrainLoss: 0.092052
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 85 TrainLoss: 0.274953
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 86 TrainLoss: 0.395470
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 87 TrainLoss: 0.393098
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 88 TrainLoss: 0.238104
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 89 TrainLoss: 0.087545
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 90 TrainLoss: 0.186992
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 91 TrainLoss: 0.242336
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 92 TrainLoss: 0.291129
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 93 TrainLoss: 0.231870
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 94 TrainLoss: 0.061193
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 95 TrainLoss: 0.280513
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 96 TrainLoss: 0.254516
Test set: TestLoss: 0.0047, Accuracy: 92%
###Markdown
**max-Adashift**
###Code
model = nn.Linear(input_size, num_classes).cuda()
adashift_optimizer = ad_opt.AdaShift(model.parameters(), lr=1e-2,\
betas=(0.0, 0.999), eps=1e-8)
adashift_train_loss_hist, adashift_test_loss_hist, adashift_test_acc_hist = \
train(model, device, train_loader, adashift_optimizer, 200, criterion)
###Output
Train Epoch: 0 TrainLoss: 2.499734
Test set: TestLoss: 0.0379, Accuracy: 12%
Train Epoch: 1 TrainLoss: 0.440241
Test set: TestLoss: 0.0050, Accuracy: 91%
Train Epoch: 2 TrainLoss: 0.190983
Test set: TestLoss: 0.0049, Accuracy: 91%
Train Epoch: 3 TrainLoss: 0.394425
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 4 TrainLoss: 0.332437
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 5 TrainLoss: 0.396494
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 6 TrainLoss: 0.352368
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 7 TrainLoss: 0.351560
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 8 TrainLoss: 0.439186
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 9 TrainLoss: 0.373624
Test set: TestLoss: 0.0054, Accuracy: 90%
Train Epoch: 10 TrainLoss: 0.423575
Test set: TestLoss: 0.0051, Accuracy: 91%
Train Epoch: 11 TrainLoss: 0.363977
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 12 TrainLoss: 0.289753
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 13 TrainLoss: 0.227662
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 14 TrainLoss: 0.166185
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 15 TrainLoss: 0.442423
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 16 TrainLoss: 0.395157
Test set: TestLoss: 0.0046, Accuracy: 92%
Train Epoch: 17 TrainLoss: 0.319943
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 18 TrainLoss: 0.399381
Test set: TestLoss: 0.0052, Accuracy: 91%
Train Epoch: 19 TrainLoss: 0.265771
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 20 TrainLoss: 0.147084
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 21 TrainLoss: 0.485363
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 22 TrainLoss: 0.250808
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 23 TrainLoss: 0.313975
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 24 TrainLoss: 0.301473
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 25 TrainLoss: 0.415691
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 26 TrainLoss: 0.284329
Test set: TestLoss: 0.0054, Accuracy: 90%
Train Epoch: 27 TrainLoss: 0.195134
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 28 TrainLoss: 0.497878
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 29 TrainLoss: 0.311901
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 30 TrainLoss: 0.252556
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 31 TrainLoss: 0.338504
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 32 TrainLoss: 0.168630
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 33 TrainLoss: 0.168173
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 34 TrainLoss: 0.387072
Test set: TestLoss: 0.0047, Accuracy: 92%
Train Epoch: 35 TrainLoss: 0.292989
Test set: TestLoss: 0.0050, Accuracy: 91%
Train Epoch: 36 TrainLoss: 0.313637
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 37 TrainLoss: 0.164161
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 38 TrainLoss: 0.679231
Test set: TestLoss: 0.0060, Accuracy: 89%
Train Epoch: 39 TrainLoss: 0.834873
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 40 TrainLoss: 0.137776
Test set: TestLoss: 0.0053, Accuracy: 91%
Train Epoch: 41 TrainLoss: 0.153014
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 42 TrainLoss: 0.374885
Test set: TestLoss: 0.0051, Accuracy: 91%
Train Epoch: 43 TrainLoss: 0.249249
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 44 TrainLoss: 0.249543
Test set: TestLoss: 0.0051, Accuracy: 92%
Train Epoch: 45 TrainLoss: 0.172906
Test set: TestLoss: 0.0051, Accuracy: 91%
Train Epoch: 46 TrainLoss: 0.359382
Test set: TestLoss: 0.0055, Accuracy: 91%
Train Epoch: 47 TrainLoss: 0.666623
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 48 TrainLoss: 0.276353
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 49 TrainLoss: 0.337191
Test set: TestLoss: 0.0052, Accuracy: 91%
Train Epoch: 50 TrainLoss: 0.193356
Test set: TestLoss: 0.0052, Accuracy: 91%
Train Epoch: 51 TrainLoss: 0.060869
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 52 TrainLoss: 0.222522
Test set: TestLoss: 0.0051, Accuracy: 92%
Train Epoch: 53 TrainLoss: 0.200643
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 54 TrainLoss: 0.201728
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 55 TrainLoss: 0.248512
Test set: TestLoss: 0.0057, Accuracy: 91%
Train Epoch: 56 TrainLoss: 0.155912
Test set: TestLoss: 0.0055, Accuracy: 91%
Train Epoch: 57 TrainLoss: 0.224822
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 58 TrainLoss: 0.235468
Test set: TestLoss: 0.0051, Accuracy: 92%
Train Epoch: 59 TrainLoss: 0.175865
Test set: TestLoss: 0.0052, Accuracy: 91%
Train Epoch: 60 TrainLoss: 0.247198
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 61 TrainLoss: 0.154571
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 62 TrainLoss: 0.500028
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 63 TrainLoss: 0.424087
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 64 TrainLoss: 0.242746
Test set: TestLoss: 0.0053, Accuracy: 91%
Train Epoch: 65 TrainLoss: 0.406958
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 66 TrainLoss: 0.378369
Test set: TestLoss: 0.0057, Accuracy: 90%
Train Epoch: 67 TrainLoss: 0.180644
Test set: TestLoss: 0.0051, Accuracy: 92%
Train Epoch: 68 TrainLoss: 0.270972
Test set: TestLoss: 0.0053, Accuracy: 91%
Train Epoch: 69 TrainLoss: 0.274424
Test set: TestLoss: 0.0051, Accuracy: 91%
Train Epoch: 70 TrainLoss: 0.133845
Test set: TestLoss: 0.0051, Accuracy: 92%
Train Epoch: 71 TrainLoss: 0.327840
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 72 TrainLoss: 0.138706
Test set: TestLoss: 0.0051, Accuracy: 91%
Train Epoch: 73 TrainLoss: 0.100574
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 74 TrainLoss: 0.449119
Test set: TestLoss: 0.0053, Accuracy: 91%
Train Epoch: 75 TrainLoss: 0.328201
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 76 TrainLoss: 0.269853
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 77 TrainLoss: 0.234725
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 78 TrainLoss: 0.709669
Test set: TestLoss: 0.0057, Accuracy: 91%
Train Epoch: 79 TrainLoss: 0.539365
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 80 TrainLoss: 0.093906
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 81 TrainLoss: 0.516654
Test set: TestLoss: 0.0054, Accuracy: 91%
Train Epoch: 82 TrainLoss: 0.206267
Test set: TestLoss: 0.0052, Accuracy: 92%
Train Epoch: 83 TrainLoss: 0.341288
Test set: TestLoss: 0.0051, Accuracy: 91%
Train Epoch: 84 TrainLoss: 0.239202
Test set: TestLoss: 0.0052, Accuracy: 91%
Train Epoch: 85 TrainLoss: 0.396699
Test set: TestLoss: 0.0051, Accuracy: 92%
Train Epoch: 86 TrainLoss: 0.343710
Test set: TestLoss: 0.0051, Accuracy: 92%
Train Epoch: 87 TrainLoss: 0.203024
Test set: TestLoss: 0.0048, Accuracy: 92%
Train Epoch: 88 TrainLoss: 0.225386
Test set: TestLoss: 0.0054, Accuracy: 91%
Train Epoch: 89 TrainLoss: 0.197793
Test set: TestLoss: 0.0051, Accuracy: 91%
Train Epoch: 90 TrainLoss: 0.181458
Test set: TestLoss: 0.0049, Accuracy: 92%
Train Epoch: 91 TrainLoss: 0.590290
Test set: TestLoss: 0.0060, Accuracy: 90%
Train Epoch: 92 TrainLoss: 0.432188
Test set: TestLoss: 0.0053, Accuracy: 91%
Train Epoch: 93 TrainLoss: 0.321249
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 94 TrainLoss: 0.182279
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 95 TrainLoss: 0.211079
Test set: TestLoss: 0.0050, Accuracy: 92%
Train Epoch: 96 TrainLoss: 0.186788
Test set: TestLoss: 0.0055, Accuracy: 91%
###Markdown
**non-Adashift**
###Code
model = nn.Linear(input_size, num_classes).cuda()
non_adashift_optimizer = ad_opt.AdaShift(model.parameters(), lr=1e-3,\
betas=(0.0, 0.999), eps=1e-8, reduce_func=lambda x: x)
non_adashift_train_loss_hist, non_adashift_test_loss_hist, non_adashift_test_acc_hist = \
train(model, device, train_loader, non_adashift_optimizer, 200, criterion)
def save_as_npy(name, array):
np_array = np.array([i for i in array])
np.save('logs/log_reg/' + name, np_array)
return np_array
!mkdir logs
!mkdir logs/log_reg
adam_train_loss_hist = save_as_npy('adam_train_loss_hist', adam_train_loss_hist)
amsgrad_train_loss_hist = save_as_npy('amsgrad_train_loss_hist', amsgrad_train_loss_hist)
adashift_train_loss_hist = save_as_npy('adashift_train_loss_hist', adashift_train_loss_hist)
non_adashift_train_loss_hist = save_as_npy('non_adashift_train_loss_hist', non_adashift_train_loss_hist)
adam_test_loss_hist = save_as_npy('adam_test_loss_hist', adam_test_loss_hist)
amsgrad_test_loss_hist = save_as_npy('amsgrad_test_loss_hist', amsgrad_test_loss_hist)
adashift_test_loss_hist = save_as_npy('adashift_test_loss_hist', adashift_test_loss_hist)
non_adashift_test_loss_hist = save_as_npy('non_adashift_test_loss_hist', non_adashift_test_loss_hist)
adam_test_acc_hist = save_as_npy('adam_test_acc_hist', adam_test_acc_hist)
amsgrad_test_acc_hist = save_as_npy('amsgrad_test_acc_hist', amsgrad_test_acc_hist)
adashift_test_acc_hist = save_as_npy('adashift_test_acc_hist', adashift_test_acc_hist)
non_adashift_test_acc_hist = save_as_npy('non_adashift_test_acc_hist', non_adashift_test_acc_hist)
plt.title("MNIST: logistic regression\n Train loss, 1000 iterations")
#np.linspace(0, 1000000, 100),0
plt.plot(adam_train_loss_hist[:1000], label="adam")
plt.plot(amsgrad_train_loss_hist[:1000], label="amsgrad")
plt.plot(adashift_train_loss_hist[:1000], label="max-adashift")
plt.plot(non_adashift_train_loss_hist[:1000], label="non-adashift")
plt.legend(loc='best')
plt.show()
def smooth(y,box_size,smooth_start=0):
# borrowed from authors code
y_hat=np.zeros(y.shape,dtype=y.dtype)
y_hat[0:smooth_start]=y[0:smooth_start]
for i in range(smooth_start,y.size):
if i < smooth_start+box_size//2:
y_hat[i]=np.mean(y[smooth_start:i+box_size//2])
elif i<y.size-box_size//2:
y_hat[i]=np.mean(y[i-box_size//2:i+box_size//2])
else:
y_hat[i]=np.mean(y[i-box_size//2:])
return y_hat
smooth_size=1000
smooth_start_train_loss=3
issmooth=1
plt.title("MNIST: logistic regression\n Smoothed train loss")
plt.plot(smooth(adam_train_loss_hist, smooth_size, smooth_start_train_loss), label="adam")
plt.plot(smooth(amsgrad_train_loss_hist, smooth_size, smooth_start_train_loss), label="amsgrad")
plt.plot(smooth(adashift_train_loss_hist, smooth_size, smooth_start_train_loss), label="max-adashift")
plt.plot(smooth(non_adashift_train_loss_hist, smooth_size, smooth_start_train_loss), label="non-adashift")
plt.legend(loc='best')
plt.show()
plt.title("MNIST: logistic regression\n Smoothed train loss, 10000 iterations")
plt.plot(smooth(adam_train_loss_hist[:10000], smooth_size, smooth_start_train_loss), label="adam")
plt.plot(smooth(amsgrad_train_loss_hist[:10000], smooth_size, smooth_start_train_loss), label="amsgrad")
plt.plot(smooth(adashift_train_loss_hist[:10000], smooth_size, smooth_start_train_loss), label="max-adashift")
plt.plot(smooth(non_adashift_train_loss_hist[:10000], smooth_size, smooth_start_train_loss), label="non-adashift")
plt.legend(loc='best')
plt.show()
plt.title("MNIST: logistic regression\n Test loss")
plt.plot(adam_test_loss_hist, label="adam")
plt.plot(amsgrad_test_loss_hist, label="amsgrad")
plt.plot(adashift_test_loss_hist, label="max-adashift")
plt.plot(non_adashift_test_loss_hist, label="non-adashift")
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
MLP
###Code
import torch
from torch import nn
import matplotlib.pyplot as plt
import numpy as np
class MultiLayerPerceptron(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super(MultiLayerPerceptron, self).__init__()
self.fc1 = nn.Linear(input_size, hidden_size)
self.fc2 = nn.Linear(hidden_size, hidden_size)
self.fc3 = nn.Linear(hidden_size, num_classes)
with torch.no_grad():
for p in self.parameters():
p.data = torch.tensor(np.random.randn(*p.shape).astype(np.float32))
    def forward(self, x):
        # note: no non-linear activation between the layers, so this stack is effectively one linear map
        out = self.fc1(x)
        out = self.fc2(out)
        out = self.fc3(out)
        # return F.log_softmax(out, dim=1)
        return out
hidden_size = 256
criterion = nn.CrossEntropyLoss()
model = MultiLayerPerceptron(input_size, hidden_size, num_classes).to(device)
adam_optimizer = torch.optim.Adam(model.parameters(), lr=0.001,\
betas=(0.0, 0.999), eps=1e-8, weight_decay=0)
adam_train_loss_hist_mlp, adam_test_loss_hist_mlp, adam_test_acc_hist_mlp = \
train(model, device, train_loader, adam_optimizer, 60, criterion)
model = MultiLayerPerceptron(input_size, hidden_size, num_classes).to(device)
amsgrad_optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,\
betas=(0.0, 0.999), eps=1e-8, weight_decay=0,amsgrad=True)
amsgrad_train_loss_hist_mlp, amsgrad_test_loss_hist_mlp, amsgrad_test_acc_hist_mlp = \
train(model, device, train_loader, amsgrad_optimizer, 60, criterion)
from adashift.optimizers import AdaShift
model = MultiLayerPerceptron(input_size, hidden_size, num_classes).to(device)
adashift_optimizer = AdaShift(model.parameters(), lr=1e-2,\
betas=(0.0, 0.999), eps=1e-8)
adashift_train_loss_hist_mlp, adashift_test_loss_hist_mlp, adashift_test_acc_hist_mlp = \
train(model, device, train_loader, adashift_optimizer, 60, criterion)
model = MultiLayerPerceptron(input_size, hidden_size, num_classes).to(device)
non_adashift_optimizer = AdaShift(model.parameters(), lr=1e-3,\
betas=(0.0, 0.999), eps=1e-8, reduce_func=lambda x: x)
non_adashift_train_loss_hist_mlp, non_adashift_test_loss_hist_mlp, non_adashift_test_acc_hist_mlp = \
train(model, device, train_loader, non_adashift_optimizer, 60, criterion)
adam_train_loss_hist_mlp = save_as_npy('adam_train_loss_hist_mlp', adam_train_loss_hist_mlp)
amsgrad_train_loss_hist_mlp = save_as_npy('amsgrad_train_loss_hist_mlp', amsgrad_train_loss_hist_mlp)
adashift_train_loss_hist_mlp = save_as_npy('adashift_train_loss_hist_mlp', adashift_train_loss_hist_mlp)
non_adashift_train_loss_hist_mlp = save_as_npy('non_adashift_train_loss_hist_mlp', non_adashift_train_loss_hist_mlp)
adam_test_loss_hist_mlp = save_as_npy('adam_test_loss_hist_mlp', adam_test_loss_hist_mlp)
amsgrad_test_loss_hist_mlp = save_as_npy('amsgrad_test_loss_hist_mlp', amsgrad_test_loss_hist_mlp)
adashift_test_loss_hist_mlp = save_as_npy('adashift_test_loss_hist_mlp', adashift_test_loss_hist_mlp)
non_adashift_test_loss_hist_mlp = save_as_npy('non_adashift_test_loss_hist_mlp', non_adashift_test_loss_hist_mlp)
adam_test_acc_hist_mlp = save_as_npy('adam_test_acc_hist_mlp', adam_test_acc_hist_mlp)
amsgrad_test_acc_hist_mlp = save_as_npy('amsgrad_test_acc_hist_mlp', amsgrad_test_acc_hist_mlp)
adashift_test_acc_hist_mlp = save_as_npy('adashift_test_acc_hist_mlp', adashift_test_acc_hist_mlp)
non_adashift_test_acc_hist_mlp = save_as_npy('non_adashift_test_acc_hist_mlp', non_adashift_test_acc_hist_mlp)
smooth_size = 100
plt.title("MNIST: multilayer perceptron\n Smoothed train loss")
plt.plot(smooth(adam_train_loss_hist_mlp, smooth_size, smooth_start_train_loss), label="adam")
plt.plot(smooth(amsgrad_train_loss_hist_mlp, smooth_size, smooth_start_train_loss), label="amsgrad")
plt.plot(smooth(adashift_train_loss_hist_mlp, smooth_size, smooth_start_train_loss), label="max-adashift")
plt.plot(smooth(non_adashift_train_loss_hist_mlp, smooth_size, smooth_start_train_loss), label="non-adashift")
plt.ylim((0, 500))
plt.legend(loc='best')
plt.show()
plt.title("MNIST: multilayer perceptron\n Smoothed train loss, 10000 iterations")
plt.plot(smooth(adam_train_loss_hist_mlp[:10000], smooth_size, smooth_start_train_loss), label="adam")
plt.plot(smooth(amsgrad_train_loss_hist_mlp[:10000], smooth_size, smooth_start_train_loss), label="amsgrad")
plt.plot(smooth(adashift_train_loss_hist_mlp[:10000], smooth_size, smooth_start_train_loss), label="max-adashift")
plt.plot(smooth(non_adashift_train_loss_hist_mlp[:10000], smooth_size, smooth_start_train_loss), label="non-adashift")
plt.ylim((0, 500))
plt.legend(loc='best')
plt.show()
plt.title("MNIST: multilayer perceptron\n Test loss")
plt.semilogy(adam_test_loss_hist_mlp, label="adam")
plt.semilogy(amsgrad_test_loss_hist_mlp, label="amsgrad")
plt.semilogy(adashift_test_loss_hist_mlp, label="max-adashift")
plt.semilogy(non_adashift_test_loss_hist_mlp, label="non-adashift")
plt.ylim((0, 500))
plt.legend(loc='best')
plt.show()
plt.title("MNIST: multilayer perceptron\n Test accuracy")
plt.plot(adam_test_acc_hist_mlp, label="adam")
plt.plot(amsgrad_test_acc_hist_mlp, label="amsgrad")
plt.plot(adashift_test_acc_hist_mlp, label="max-adashift")
plt.plot(non_adashift_test_acc_hist_mlp, label="non-adashift")
plt.legend(loc='best')
plt.show()
plt.title("MNIST: multilayer perceptron\n Test accuracy")
plt.plot(adam_test_acc_hist_mlp, label="adam")
plt.plot(amsgrad_test_acc_hist_mlp, label="amsgrad")
plt.plot(adashift_test_acc_hist_mlp, label="max-adashift")
plt.plot(non_adashift_test_acc_hist_mlp, label="non-adashift")
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
###Code
#code source: http://occam.olin.edu/sites/default/files/DataScienceMaterials/machine_learning_lecture_2/Machine%20Learning%20Lecture%202.html
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import *
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler
data = load_breast_cancer() #refer: http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html#sklearn.datasets.load_breast_cancer
tuned_parameters = [{'C': [10**-4, 10**-2, 10**0, 10**2, 10**4]}] # C = 1 / lambda
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, train_size=.9)
#Using GridSearchCV
model = GridSearchCV(LogisticRegression( solver='liblinear',max_iter=100000), tuned_parameters, scoring = 'f1', cv=5)
model.fit(X_train, y_train)
print(model.best_estimator_)
print(model.score(X_test, y_test))
data.data
# More Sparsity (Fewer elements of W* being non-zero) by increasing Lambda (decreasing C)
import numpy as np
clf = LogisticRegression(C=0.1,solver='liblinear', penalty='l1');
clf.fit(X_train, y_train);
w = clf.coef_
print(np.count_nonzero(w))
clf = LogisticRegression(C=0.01,solver='liblinear', penalty='l1');
clf.fit(X_train, y_train);
w = clf.coef_
print(np.count_nonzero(w))
clf = LogisticRegression(C=0.001, solver='liblinear',penalty='l1');
clf.fit(X_train, y_train);
w = clf.coef_
print(np.count_nonzero(w))
clf = LogisticRegression(C=10,solver='liblinear', penalty='l1');
clf.fit(X_train, y_train);
w = clf.coef_
print(np.count_nonzero(w))
###Output
_____no_output_____
###Markdown
Logistic Regression Notebook 1. The Dataset
###Code
# Import some standard libraries
import numpy as np
import pandas as pd
import pprint
df = pd.read_csv('LogRegDataset.csv')
features = df[['X1','X2']].to_numpy()
labels = df['labels'].to_numpy()
# Import plotting libraries to visualise dataset
from matplotlib import pyplot as plt
%matplotlib inline
fig, ax = plt.subplots()
ax.scatter(features[:, 0], features[:, 1], c=labels);
# Split dataset into training and test examples
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2, random_state=1)
###Output
_____no_output_____
###Markdown
2. Custom Implementation 2.1 Overview of Logistic Regression TODO 2.2 Coding Logistic Regression 2.2.1 Helper Functions
###Code
"""
Calculate the objective function value, here we are using the Conditional Log likelihood function
and returning the value when we pass our weights through this function. Ultimately, this function will
be utilised in training to see how much we are moving towards a solution; this is through
comparison of an objective value to the one computed before.
"""
def calc_obj_value(X,y,weights):
# Precompute dot product
xw=np.dot(X,weights)
# Compute the Conditional Log Likelihood (CLL) for each training example
# l(w)=yXw-log(1+exp(Xw))
cll=y*xw-np.log(1+np.exp(xw))
# Sum over all examples to find the value
return np.sum(cll)
"""
Calculate the Gradient of the Conditional Log Likelihood Function (objective function),
this is to say we are taking partial derivatives for the objective function w.r.t. the weights.
Thus we will return an Nx1 vector, with N being the number of weights.
"""
def calc_gradient(X,y,weights):
# Precompute dot product
xw=np.dot(X,weights)
# Calculate P(Y=1|X,W)
# Our probability is modelled on the sigmoid function
Y_pred=np.exp(xw)/(1+np.exp(xw))
# Return the derivative of the Conditional Log Likelihood
# df/dw_i=X^{T}(y-p(y=1|X,W))
return np.dot(X.T,(y-Y_pred))
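# Quick sanity check (an added illustrative sketch, not part of the original notebook):
# the gradient should have one entry per weight, and a small step in its direction
# should not decrease the conditional log likelihood (we are doing gradient *ascent*).
_X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 2.0]])  # 3 toy examples, bias column included
_y = np.array([[1], [0], [1]])
_w = np.zeros((2, 1))
_g = calc_gradient(_X, _y, _w)
print(_g.shape)  # expected: (2, 1) -- one partial derivative per weight
print(calc_obj_value(_X, _y, _w) <= calc_obj_value(_X, _y, _w + 0.01 * _g))  # expected: True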
###Output
_____no_output_____
###Markdown
2.2.2 Training the dataset
###Code
"""
To train the model we follow the logistic regression algorithm. In train we loop over a convergence condition
in which we keep updating the weights using gradient ascent until the change in the objective value is small
enough that it falls below the tolerance we set. We take the training data as inputs,
whilst also having a "verbose" option to offer us insight on how our algorithm progresses towards an optimum.
"""
def train(X_train,y_train,verbose=False):
# Create a copy of X and way to not change original dataframe
X=X_train.copy()
y=y_train.copy()
# Get rows and columns of our training data
r,c=X_train.shape
# Set step size and tolerance, we can fine tune these parameters to get better results for our models.
# The current set up is a good starting point however for most problems.
step=0.01
tol=0.0001
# Initialise weights with a vector of 0's
weights=np.zeros((c+1,1))
    # Add a column of ones to accommodate our initial weight (or bias) w_0
X=np.hstack((np.ones((X.shape[0], 1)),X))
# Calculate an initial Objective Value Score and save to a list object
# The list is useful perhaps if we want to analyse how quickly we converge to a solution.
objVals=[]
objVals.append(calc_obj_value(X,y,weights))
# keep a track of iterations and convergence condition
converged = False
it=1
# Iterate till convergence
while not converged:
grad=calc_gradient(X,y,weights)
# Update weights
weights=weights+step*grad
newObj=calc_obj_value(X,y,weights)
# Find difference between current and previous objective value
ObjDiff=np.abs(newObj-objVals[-1])
# Check Convergence
if (ObjDiff<tol):
# Since we have a negligible change between objective functions we can say we've reached a solution
converged=True
print("Convergence! Reached at Iteration {:d}".format(it))
# Give regular update on where we are iteration wise, and our cost.
if(it % 100==0 and verbose==True):
print('Iteration {:d}'.format(it))
print("ObjValue Difference:", np.abs(newObj-objVals[-1]))
it+=1
objVals.append(newObj)
# Pass our final weights and ObjVals for each iteration
return weights, objVals
###Output
_____no_output_____
###Markdown
We can now generate our weights for the dataset we generated earlier!
###Code
# Generate weights and save our objective values for later analysis
w,objVals=train(X_train, y_train[:,np.newaxis])
# Print weights
for idx,weight in enumerate(w):
print("w_{:d}: {:0.6f}".format(idx, weight[0]))
###Output
Convergence! Reached at Iteration 441
w_0: 1.010210
w_1: -1.366836
w_2: 4.834718
###Markdown
2.2.3 Testing Dataset
###Code
"""
Create a function to predict the labels. Here we pass in a testing dataset of labelled examples and
the weights calculated in the train function. By setting a decision boundary of P=0.5, we can calculate the
probability of a given example being 1, then round it to the nearest integer to give its class. We return
the accuracy and also the predicted values to use as analysis later.
"""
def predict(X,y,weights):
# Copy Dataframe and add a row of 1s to weight
X=X.copy()
X=np.hstack((np.ones((X.shape[0], 1)),X))
# Find P(Y=1|X,w)
xw=np.dot(X,weights)
# Fit into our sigmoid function
y_pred=np.exp(xw)/(1+np.exp(xw))
# Find accuracy by summing correct examples
acc=np.sum(np.round(y_pred)==y[:,np.newaxis])/y.shape[0]
return acc,y_pred
# Predict
accuracy,preds=predict(X_test,y_test,w)
print("Accuracy of our own logistic regression: {:.2f} ".format(accuracy))
###Output
Accuracy of our own logistic regression: 0.95
###Markdown
We can see we have good accuracy, especially given the noise we saw in the overall dataset. There are a number of reasons this could be, and we can see why next in our analysis. 2.3 Analysis of Custom Logistic Regression 2.3.1 Decision Boundary and Sigmoid Curve
###Code
fig, ax = plt.subplots()
# Plot the testing data
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test);
# Get limits along X axis
x1=np.array(ax.get_xlim())
# We can rearrange the equation of weights and bias (x1*w_1+x2*w_2+w_0=0) to create a linear function of the form
# y=mx+b and this will be our decision boundary. Note this only really works for a two feature/class problem.
x2 = -(x1*w[1]+w[0])/w[2]
# Plot the decision boundary
ax.plot(x1,x2);
###Output
_____no_output_____
###Markdown
Here we can better see which points in our Logistic Regression we got wrong. We can also notice that the noise in the training dataset isn't as bad as was seen in the original dataset, with only a singular yellow point mixing with a flurry of purple. It should also be noted that this decision boundary could easily be moved slightly to the right to encompass the next purple point. We can see why this doesn't happen when looking at our points along the sigmoid curve.
###Code
fig, ax = plt.subplots()
x_test_sum=np.sum(w[0]+np.dot(X_test,w[1:]),axis=1)
# Plot the training data along a sigmoid curve
ax.scatter(x_test_sum, preds,c=y_test)
# Plot the sigmoid curve too
x_width=np.arange(ax.get_xlim()[0],ax.get_xlim()[1])
ax.plot(x_width,np.exp(x_width)/(1+np.exp(x_width)))
# And finally our decision boundary
ax.plot(x_width, x_width*0+0.5)
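# Added sketch (not part of the original analysis): the 0.5 cut-off plotted above is just one
# choice of decision threshold. Sweeping a few alternatives shows how test accuracy would
# change if we moved the boundary, which is the tuning idea discussed in the next cell.
for t in (0.3, 0.4, 0.5, 0.6, 0.7):
    acc_t = np.mean((preds >= t).astype(int) == y_test[:, np.newaxis])
    print("threshold {:.1f} -> accuracy {:.2f}".format(t, acc_t))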
###Output
_____no_output_____
###Markdown
The problem of the decision boundary not including a lone point should now be clear. If not, we can see that it is a result of the rounding decision in our predict function. There we used 0.5 (essentially rounding to the nearest whole integer) as the basis for deciding which class an example should belong to; this is a good starting point, but when fine-tuning parameters it may be necessary to adjust this threshold to improve the model's accuracy. 3. Sklearn Comparison
###Code
# Carry out Logistic Regression using sklearn module
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1e15)
logreg.fit(X_train, y_train)
print('Accuracy of logistic regression classifier on test set: {:.2f}'.format(logreg.score(X_test, y_test)))
# Compare weights of each implementation
for idx, weight in enumerate(w):
if idx==0:
print("Weight Difference for w_{:d} is {:0.6f}: ".format(idx,np.abs(weight - logreg.intercept_[0])[0]))
else:
print("Weight Difference for w_{:d} is {:0.6f}: ".format(idx,np.abs(weight - logreg.coef_[0][idx-1])[0]))
###Output
Weight Difference for w_0 is 0.065048:
Weight Difference for w_1 is 0.050269:
Weight Difference for w_2 is 0.177135:
###Markdown
The reason for taking only a hundred observations from the dataset is that we are performing binomial (two-class) logistic regression, so we keep just the first two Iris classes.
###Code
iris.feature_names
iris.target_names[:2]
###Output
_____no_output_____
###Markdown
Data Set Characteristics: :Number of Instances: 100 (50 in each of two classes) :Number of Attributes: 4 numeric, predictive attributes and the class :Attribute Information: - sepal length in cm - sepal width in cm - petal length in cm - petal width in cm - class: - Iris-Setosa (0) - Iris-Versicolour (1)
###Code
x0 = np.ones([x.shape[0] , 1])
x = np.concatenate((x0 , x) , axis = 1)
theta = np.zeros([1 , x.shape[1]])
print(theta)
from sklearn.model_selection import train_test_split as tt
x_train , x_test , y_train , y_test = tt(x ,
y ,
test_size = 0.2 ,
random_state = 30)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
def logistic_regression( x , theta):
z = x @ (-theta.T)
eulers = np.power(math.e , z)
pred_y = 1 / (1 + eulers)
classified = []
for i in pred_y:
if i >= 0.5:
classified.append(1)
else:
classified.append(0)
return pred_y , classified
###Output
_____no_output_____
###Markdown
Cost Function
###Code
def costFunction(x,y,theta):
z = x @ (-theta.T)
euler = np.power(math.e , z)
pred_y = 1/(1 + euler)
costF = ( (-y* (np.log(pred_y)) ) - (1-y)* (np.log(1-pred_y)) )
return sum(costF)/len(x)
###Output
_____no_output_____
###Markdown
Gradient Descent
###Code
def gradientDescent(x , y , theta , alpha , iterations):
cost = np.zeros(iterations)
for i in range(iterations):
z = x @ (-theta.T)
eulers = np.power(math.e , z)
pred_y = 1/(1+eulers)
partialDerivativeoftheta = (1/len(x))* (sum((pred_y - y)*x))
theta = theta - (alpha*partialDerivativeoftheta)
cost[i] = costFunction(x,y,theta)
return theta , cost
alpha = 0.01
iterations = 10000
coef , new_cost = gradientDescent(x_train , y_train , theta , alpha , iterations)
print(coef)
plt.plot(np.arange(iterations), new_cost, 'r')
plt.xlabel('Iterations')
plt.ylabel('Costs')
plt.title('Error(cost function) vs. Training iteration')
plt.show()
###Output
_____no_output_____
###Markdown
From the above plot we can infer that gradient descent is working properly as the cost is decreasing every iteration.
###Code
pred_y , classified = logistic_regression(x_test , coef)
from sklearn.metrics import confusion_matrix , classification_report
print("confusion matrix")
print(confusion_matrix(y_test , classified))
print("\n \nreport")
print(classification_report(y_test , classified))
###Output
confusion matrix
[[ 9 0]
[ 0 11]]
report
precision recall f1-score support
0 1.00 1.00 1.00 9
1 1.00 1.00 1.00 11
avg / total 1.00 1.00 1.00 20
###Markdown
Logistic Regression Logistic regression is a fundamental classification technique. It belongs to the group of linear classifiers and is somewhat similar to polynomial and linear regression. Logistic regression is fast and relatively uncomplicated, and it’s convenient for you to interpret the results. Although it’s essentially a method for binary classification, it can also be applied to multiclass problems. You’ll need an understanding of the sigmoid function and the natural logarithm function to understand what logistic regression is and how it works.This image shows the sigmoid function (or S-shaped curve) of some variable 𝑥: The sigmoid function has values very close to either 0 or 1 across most of its domain. This fact makes it suitable for application in classification methods. Single-Variate Logistic Regression Single-variate logistic regression is the most straightforward case of logistic regression. There is only one independent variable (or feature), which is 𝐱 = 𝑥. This figure illustrates single-variate logistic regression: Here, you have a given set of input-output (or 𝑥-𝑦) pairs, represented by green circles. These are your observations. Remember that 𝑦 can only be 0 or 1. For example, the leftmost green circle has the input 𝑥 = 0 and the actual output 𝑦 = 0. The rightmost observation has 𝑥 = 9 and 𝑦 = 1.Logistic regression finds the weights 𝑏₀ and 𝑏₁ that correspond to the maximum LLF. These weights define the logit 𝑓(𝑥) = 𝑏₀ + 𝑏₁𝑥, which is the dashed black line. They also define the predicted probability 𝑝(𝑥) = 1 / (1 + exp(−𝑓(𝑥))), shown here as the full black line. In this case, the threshold 𝑝(𝑥) = 0.5 and 𝑓(𝑥) = 0 corresponds to the value of 𝑥 slightly higher than 3. This value is the limit between the inputs with the predicted outputs of 0 and 1. Logistic Regression in Python
###Code
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
x = np.arange(10).reshape(-1, 1)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
model = LogisticRegression()
model.fit(x, y)
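# Added illustrative check (a sketch, not part of the original text): the fitted model's
# predicted probability for class 1 is exactly the sigmoid of the logit f(x) = b0 + b1*x
# described above, computed from the learned intercept and coefficient.
b0, b1 = model.intercept_[0], model.coef_[0][0]
manual_p = 1 / (1 + np.exp(-(b0 + b1 * x.ravel())))
print(np.allclose(manual_p, model.predict_proba(x)[:, 1]))  # expected: True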
###Output
_____no_output_____
###Markdown
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, l1_ratio=None, max_iter=100, multi_class='warn', n_jobs=None, penalty='l2', random_state=0, solver='liblinear', tol=0.0001, verbose=0, warm_start=False)
###Code
print("Classes: ", model.classes_)
print("Intercept: ",model.intercept_)
print("Coef: ",model.coef_)
print("Probability: ",model.predict_proba(x))
model.predict(x)
confusion_matrix(y, model.predict(x))
import seaborn as sns
cm = confusion_matrix(y, model.predict(x))
sns.heatmap(cm, annot=True)
print(classification_report(y, model.predict(x)))
model = LogisticRegression(solver='liblinear', C=0.5, random_state=0)
model.fit(x, y)
model.intercept_
model.coef_
model.predict_proba(x)
model.predict(x)
model.score(x, y)
confusion_matrix(y, model.predict(x))
sns.heatmap(confusion_matrix(y, model.predict(x)), annot=True)
print(classification_report(y, model.predict(x)))
###Output
precision recall f1-score support
0 1.00 0.50 0.67 4
1 0.75 1.00 0.86 6
accuracy 0.80 10
macro avg 0.88 0.75 0.76 10
weighted avg 0.85 0.80 0.78 10
###Markdown
Testing phase
###Code
test_hypothesis=hypothesis_function(X_test,dic_theta[-1])
y_test_pred = predict(test_hypothesis)
def testing_accuracy(y_test, y_test_pred):
test_accuracy = np.sum(y_test == y_test_pred) / len(y_test)
return test_accuracy
testing_accuracy(Y_test,y_test_pred)
###Output
_____no_output_____ |
notebooks/Trax_TransformerLM_Intro.ipynb | ###Markdown
TransformerLM Quick Start and Guide Language models are machine learning models that power some of the most impressive applications involving text and language (e.g. machine translation, sentiment analysis, chatbots, summarization). At the time of this writing, some of the largest ML models in existence are language models. They are also based on the [transformer](https://arxiv.org/abs/1706.03762) architecture. The transformer language model (TransformerLM) is a simpler [variation](https://arxiv.org/pdf/1801.10198.pdf) of the original transformer architecture and is useful for plenty of tasks.The [Trax](https://trax-ml.readthedocs.io/en/latest/) implementation of TransformerLM focuses on clear code and speed. It runs without any changes on CPUs, GPUs and TPUs.In this notebook, we will:1. Use a pre-trained TransformerLM2. Train a TransformerLM model3. Looking inside the Trax TransformerLM
###Code
import os
import numpy as np
! pip install -q -U trax
import trax
###Output
[K |████████████████████████████████| 419kB 2.8MB/s
[K |████████████████████████████████| 1.5MB 8.4MB/s
[K |████████████████████████████████| 163kB 21.2MB/s
[K |████████████████████████████████| 2.6MB 18.7MB/s
[K |████████████████████████████████| 194kB 35.5MB/s
[K |████████████████████████████████| 368kB 37.9MB/s
[K |████████████████████████████████| 307kB 49.1MB/s
[K |████████████████████████████████| 983kB 47.3MB/s
[K |████████████████████████████████| 358kB 49.9MB/s
[K |████████████████████████████████| 81kB 9.3MB/s
[K |████████████████████████████████| 5.3MB 49.0MB/s
[K |████████████████████████████████| 655kB 50.9MB/s
[K |████████████████████████████████| 71kB 8.3MB/s
[K |████████████████████████████████| 1.1MB 49.3MB/s
[K |████████████████████████████████| 3.5MB 49.2MB/s
[K |████████████████████████████████| 1.1MB 34.8MB/s
[K |████████████████████████████████| 245kB 51.3MB/s
[K |████████████████████████████████| 51kB 5.5MB/s
[K |████████████████████████████████| 890kB 48.7MB/s
[K |████████████████████████████████| 3.0MB 49.9MB/s
[?25h Building wheel for bz2file (setup.py) ... [?25l[?25hdone
Building wheel for pypng (setup.py) ... [?25l[?25hdone
Building wheel for sacremoses (setup.py) ... [?25l[?25hdone
[31mERROR: kfac 0.2.3 has requirement tensorflow-probability==0.8, but you'll have tensorflow-probability 0.7.0 which is incompatible.[0m
INFO:tensorflow:tokens_length=568 inputs_length=512 targets_length=114 noise_density=0.15 mean_noise_span_length=3.0
###Markdown
Using a pre-trained TransformerLMThe following cell loads a pre-trained TransformerLM that sorts a list of four integers.
###Code
# Create a Transformer model.
# Have to use the same configuration of the pre-trained model we'll load next
model = trax.models.TransformerLM(
d_model=32, d_ff=128, n_layers=2,
vocab_size=32, mode='predict')
# Initialize using pre-trained weights.
model.init_from_file('gs://ml-intro/models/sort-transformer.pkl.gz',
weights_only=True,
input_signature=trax.shapes.ShapeDtype((1,1), dtype=np.int32))
# Input sequence
# The 0s indicate the beginning and end of the input sequence
input = [0, 3, 15, 14, 9, 0]
# Run the model
output = trax.supervised.decoding.autoregressive_sample(
model, np.array([input]), temperature=0.0, max_length=4)
# Show us the output
output
###Output
_____no_output_____
###Markdown
This is a trivial example to get you started and put a toy transformer into your hands. Language models get their name from their ability to assign probabilities to sequences of words. This property makes them useful for generating text (and other types of sequences) by probabilistically choosing the next item in the sequence (often the highest probability one) -- exactly like the next-word suggestion feature of your smartphone keyboard.In Trax, TransformerLM is a series of [Layers]() combined using the [Serial]() combinator. A high level view of the TransformerLM we've declared above can look like this:The model has two decoder layers because we set `n_layers` to 2. TransformerLM makes predictions by being fed one token at a time, with output tokens typically fed back as inputs (that's the `autoregressive` part of the `autoregressive_sample` method we used to generate the output from the model). If we're to think of a simple model trained to generate the fibonacci sequence, we can give it a number in the sequence and it would continue to generate the next items in the sequence: Train a TransformerLM ModelLet's train a TransformerLM model. We'll train this one to reverse a list of integers. This is another toy task that we can train a small transformer to do. But using the concepts we'll go over, you'll be able to train proper language models on larger dataset.**Example**: This model is to take a sequence like `[1, 2, 3, 4]` and return `[4, 3, 2, 1]`.1. Create the Model1. Prepare the Dataset1. Train the model using `Trainer` Create the Model
###Code
# Create a Transformer model.
def tiny_transformer_lm(mode='train'):
return trax.models.TransformerLM(
d_model=32, d_ff=128, n_layers=2,
vocab_size=32, mode=mode)
###Output
_____no_output_____
###Markdown
Refer to [TransferLM in the API reference](https://trax-ml.readthedocs.io/en/latest/trax.models.htmltrax.models.transformer.TransformerLM) to understand each of its parameters and their default values. We have chosen to create a small model using these values for `d_model`, `d_ff`, and `n_layers` to be able to train the model more quickly on this simple task. Prepare the DatasetTrax models are trained on streams of data represented as python iterators. [`trax.data`](https://trax-ml.readthedocs.io/en/latest/trax.data.html) gives you the tools to construct your datapipeline. Trax also gives you readily available access to [TensorFlow Datasets](https://www.tensorflow.org/datasets).For this simple task, we will create a python generator. Every time we invoke it, it returns a batch of training examples.
###Code
def reverse_ints_task(batch_size, length=4):
while True:
random_ints = m = np.random.randint(1, 31, (batch_size,length))
source = random_ints
target = np.flip(source, 1)
zero = np.zeros([batch_size, 1], np.int32)
x = np.concatenate([zero, source, zero, target], axis=1)
loss_weights = np.concatenate([np.zeros((batch_size, length+2)),
np.ones((batch_size, length))], axis=1)
yield (x, x, loss_weights) # Here inputs and targets are the same.
reverse_ints_inputs = reverse_ints_task(16)
###Output
_____no_output_____
###Markdown
This function prepares a dataset and returns one batch at a time. If we ask for a batch size of 8, for example, it returns the following:
###Code
a = reverse_ints_task(8)
sequence_batch, _ , masks = next(a)
sequence_batch
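# Added for illustration: peek at the loss mask that accompanies the first example.
# Zeros cover the prefix the model is given; ones mark the positions it must predict.
print(sequence_batch[0])
print(masks[0])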
###Output
_____no_output_____
###Markdown
You can see that each example starts with 0, then a list of integers, then another 0, then the reverse of the list of integers. The function will give us as many examples and batches as we request.In addition to the example, the generator returns a mask vector. During the training process, the model is challenged to predict the tokens hidden by the mask (which have a value of 1 associated with that position. So for example, if the first element in the batch is the following vector:0567808765 And the associated mask vector for this example is:0000001111 Then the model will only be presented with the following prefix items, and it has to predict the rest:056780___ _ It's important here to note that while `5, 6, 7, 8` constitute the input sequence, the **zeros** serve a different purpose. We are using them as special tokens to delimit where the source sequence begins and ends. With this, we now have a method that streams the dataset in addition to the method that creates the model. Train the modelTrax's [training](https://trax-ml.readthedocs.io/en/latest/notebooks/trax_intro.htmlSupervised-training) takes care of the training process. We hand it the model, define training and eval tasks, and create the training loop. We then start the training loop.
###Code
from trax.supervised import training
from trax import layers as tl
# Training task.
train_task = training.TrainTask(
labeled_data=reverse_ints_inputs,
loss_layer=tl.CrossEntropyLoss(),
optimizer=trax.optimizers.Adam(0.01),
n_steps_per_checkpoint=500,
)
# Evaluaton task.
eval_task = training.EvalTask(
labeled_data=reverse_ints_inputs,
metrics=[tl.CrossEntropyLoss(), tl.Accuracy()],
n_eval_batches=20 # For less variance in eval numbers.
)
output_dir = os.path.expanduser('~/train_dir/')
!rm -f ~/train_dir/model.pkl.gz # Remove old model.
# Train tiny model with Loop.
training_loop = training.Loop(
tiny_transformer_lm(),
train_task,
eval_tasks=[eval_task],
output_dir=output_dir)
# run 1000 steps (batches)
training_loop.run(1000)
###Output
Step 1: Ran 1 train steps in 17.93 secs
Step 1: train CrossEntropyLoss | 4.14618683
Step 1: eval CrossEntropyLoss | 3.74931383
Step 1: eval Accuracy | 0.03359375
Step 500: Ran 499 train steps in 23.67 secs
Step 500: train CrossEntropyLoss | 0.62780923
Step 500: eval CrossEntropyLoss | 0.01693780
Step 500: eval Accuracy | 0.99609375
Step 1000: Ran 500 train steps in 5.34 secs
Step 1000: train CrossEntropyLoss | 0.00926041
Step 1000: eval CrossEntropyLoss | 0.00390428
Step 1000: eval Accuracy | 0.99921875
###Markdown
The Trainer is the third key component in this process that helps us arrive at the trained model. Make predictionsLet's take our newly minted model for a ride. To do that, we load it up, and use the handy `autoregressive_sample` method to feed it our input sequence and return the output sequence. These components now look like this:And this is the code to do just that:
###Code
input = np.array([[0, 4, 6, 8, 10, 0]])
# Initialize model for inference.
predict_model = tiny_transformer_lm(mode='predict')
predict_signature = trax.shapes.ShapeDtype((1,1), dtype=np.int32)
predict_model.init_from_file(os.path.join(output_dir, "model.pkl.gz"),
weights_only=True, input_signature=predict_signature)
# Run the model
output = trax.supervised.decoding.autoregressive_sample(
predict_model, input, temperature=0.0, max_length=4)
# Print the contents of output
print(output)
###Output
[[10 8 6 4]]
###Markdown
If things go correctly, the model would be able to reverse the string and output `[[10 8 6 4]]` Transformer vs. TransformerLMTransformerLM is a great place to start learning about Transformer architectures. The main difference between it and the original Transformer is that it's made up of a decoder stack, while Transformer is made up of an encoder stack and decoder stack (with the decoder stack being nearly identical to TransformerLM). Looking inside the Trax TransformerLMIn Trax, TransformerLM is implemented as a single Serial layerThis graph shows you two of the central concepts in Trax. Layers are the basic building blocks. Serial is the most common way to compose multiple layers together in sequence. LayersLayers are best described in the [Trax Layers Intro](https://trax-ml.readthedocs.io/en/latest/notebooks/layers_intro.html).For a Transformer to make a calculation (translate a sentence, summarize an article, or generate text), input tokens pass through many steps of transformation andcomputation (e.g. embedding, positional encoding, self-attention, feed-forward neural networks...tec). Each of these steps is a layer (some with their own sublayers). Each layer you use or define takes a fixed number of input tensors and returns a fixed number of output tensors (n_in and n_out respectively, both of which default to 1).A simple example of a layer is the ReLU activation function:Trax is a deep learning library, though. And so, a layer can also contain weights. An example of this is the Dense layer. Here is a dense layer that multiplies the input tensor with a weight matrix (`W`) and adds a bias (`b`) (both W and b are saved inside the `weights` property of the layer):In practice, Dense and Relu often go hand in hand. With Dense first working on a tensor, and ReLu then processing the output of the Dense layer. This is a perfect job for Serial, which, in simple cases, chains two or more layers and hands over the output of the first layer to the following one:The Serial combinator is a layer itself. So we can think of it as a layer containing a number of sublayers:With these concepts in mind, let's go back and unpack the layers inside the TransformerLM Serial. Input, Decoder Blocks, and Output LayersIt's straightforward to read the delcaration of TransformerLM to understand the layers that make it up. In general, you can group these layers into a set of input layers, then Transformer decoder blocks, and a set of output blocks. The number of Transformer blocks (`n_layers`) is one of the key parameters when creating a TransformerLM model. This is a way to think of the layer groups of a TransformerLM:* The **input layers** take each input token id and look up its proper embedding and positional encoding.* The prediction calculations happen in the stack of **decoder blocks**.* The **output layers** take the output of the final Decoder block and project it to the output vocabulary. The LogSoftmax layer then turns the scoring of each potential output token into a probability score. Transformer Decoder BlockA decoder block has two major components:* A **Causal self-attention** layer. Self-attention incorporates information from other tokens that could help make more sense of the current token being processed. Causal attention only allows the incorporation of information from previous positions. One key parameter when creating a TransformerLM model is `n_heads`, which is the number of "attention heads".* A **FeedForward** component. This is where the primary prediction computation is calculated. 
The key parameter associated with this layer is `d_ff`, which specifies the dimensions of the neural network layer used in this block. This figure also shows the `d_model` parameter, which specifies the dimension of tensors at most points in the model, including the embedding, and the majority of tensors handed off between the various layers in the model. Multiple Inputs/Outputs, Branch, and ResidualThere are a couple more central Trax concept to cover to gain a deeper understanding of how Trax implements TransformerLM Multiple Inputs/OutputsThe layers we've seen so far all have one input tensor and one output tensor. A layer could have more. For example, the Concatenate layer: BranchWe saw the Serial combinator that combines layers serially. Branch combines layers in parallel. It supplies input copies to each of its sublayers.For example, if we wrap two layers (each expecting one input) in a Branch layer, and we pass a tensor to Branch, it copies it as the input to both of its sublayers as shown here:Since the sublayers have two outputs (one from each), then the Branch layer would also end up outputing both of those tensors: ResidualResidual connections are an important component of Transformer architectures. Inside a Decoder Block, both the causal-attention layer and thefeed-forward layer have residual connections around them:What that means, is that a copy of the input tensor is added to the output of the Attention layer:In Trax, this is achieved using the Residual layer, which combines both the Serial and Branch combinators:Similarly, the feed-forward sublayer has another residual connection around it:
###Code
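# A minimal sketch of the three combinators discussed above -- Serial, Branch and Residual --
# added here for illustration. The layer sizes (32 units) are arbitrary choices, not values
# taken from TransformerLM. Printing a combinator shows the sublayer structure it wraps.
from trax import layers as tl
dense_relu = tl.Serial(tl.Dense(32), tl.Relu())        # chain: the output of Dense feeds Relu
two_branches = tl.Branch(tl.Dense(32), tl.Dense(32))   # the same input is copied to both sublayers
residual_block = tl.Residual(tl.Dense(32), tl.Relu())  # the block's input is added back to its output
print(dense_relu)
print(two_branches)
print(residual_block)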
###Output
_____no_output_____ |
quickstart/03-pysumdata.ipynb | ###Markdown
Getting summary statistics of the data This includes fetching the data, computing a summary with the describe method, slicing partial data snapshots with iloc/loc, accumulating the data, visualizing rates of change, and so on.
###Code
import tushare as ts
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = ts.get_hist_data('300036', start='2018-07-01', end='2018-07-30') # fetch the full daily K-line (candlestick) history in one call
df
###Output
_____no_output_____
###Markdown
Get the summary statistics.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Take a data snapshot by row/column position.
###Code
df.iloc[0:,6:10]
###Output
_____no_output_____
###Markdown
Take a data snapshot by row/column label values.
###Code
df.loc[:,['open','close','volume']].mean()
df.loc['2018-07-24':'2018-07-10',['open','close','volume']].mean()
###Output
_____no_output_____
###Markdown
Filtering the data. First, filter across the whole DataFrame.
###Code
df[df < 0]
###Output
_____no_output_____
###Markdown
Filter by column values.
###Code
df[df.price_change > 0]
df[df.price_change < 0]
###Output
_____no_output_____
###Markdown
Accumulating changes and visualizing them. First, cumulatively sum the data.
###Code
df2 = df.sort_index(ascending=True).apply(np.cumsum)
df2
###Output
_____no_output_____
###Markdown
变动及其累计的可视化。
###Code
# Create a figure object. One figure can contain multiple subplots.
fig=plt.figure(figsize=(24,8), dpi=80)
# Add a subplot to the figure: the first plot in a 1-row, 1-column grid.
p1=fig.add_subplot(1,1,1)
# Plot the first curve (daily percentage change).
p1.plot(df2.index,df['p_change'])
# Plot the second curve (cumulative percentage change).
p1.plot(df2.index,df2['p_change'])
# Show the figure.
plt.show()
###Output
_____no_output_____ |
Selection Sort.ipynb | ###Markdown
Difference Selection Sort vs Insertion Sort
###Code
from IPython.display import HTML
HTML('<img src="http://www-scf.usc.edu/~zhan468/public/Notes/resources/1C7E20F306DDC02EB4E3A50FA7817FF4.gif">')
HTML('<img src="http://www-scf.usc.edu/~zhan468/public/Notes/resources/91B76E8E4DAB9B0CAD9A017D7DD431E2.gif">')
###Output
_____no_output_____ |
Models/Catboost/Catboost_comment_classifier.ipynb | ###Markdown
Proof of concept of catboost
###Code
%%time
columns = [
'tweet_timestamp',
'creator_follower_count',
'creator_following_count',
'creator_is_verified',
'creator_creation_timestamp',
'engager_follower_count',
'engager_following_count',
'engager_is_verified',
'engager_creation_timestamp',
'engagement_creator_follows_engager',
'number_of_photo',
'number_of_gif',
'number_of_video',
'engagement_comment_timestamp',
]
dask_df = dd.read_parquet("/Users/arcangelopisa/Downloads/sample_dataset", engine='pyarrow', columns=columns)
dask_df = dask_df.sample(0.8)
dask_df['engagement_comment_timestamp'] = (dask_df['engagement_comment_timestamp'] != -1).astype(np.uint8)
pandas_df = dask_df.compute()
del dask_df
pandas_df.info()
train, test = train_test_split(pandas_df, train_size=0.8)
X_train = train.drop(['engagement_comment_timestamp'], axis=1)
y_train = train['engagement_comment_timestamp']
X_test = test.drop(['engagement_comment_timestamp'], axis=1)
y_test = test['engagement_comment_timestamp']
del pandas_df, train, test
%%time
classifier = CatBoostClassifier(iterations=150,
depth=12,
learning_rate=0.25,
loss_function='CrossEntropy',
verbose = True)
classifier.fit(X_train, y_train, verbose = True)
classifier.save_model('comment_classifier', format = "cbm")
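# Optional sketch (added for illustration): inspect which inputs the trained model relies on.
# get_feature_importance() returns one score per training column, in the same order.
for name, score in sorted(zip(X_train.columns, classifier.get_feature_importance()),
                          key=lambda pair: pair[1], reverse=True):
    print('{}: {:.2f}'.format(name, score))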
%%time
y_pred = classifier.predict_proba(X_test)
y_pred
getFirstValuePrediction(y_pred)
result = getBooleanList(y_pred)
result
print('RCE is {}'.format(compute_rce(result, y_test)))
print('Average precision is {}'.format(average_precision_score(y_test, result)))
###Output
_____no_output_____ |
src/pipeline/data_generation/german_credit_generative_model_training.ipynb | ###Markdown
German Credit Dataset CGAN Training for synthesize datasetsCGAN: A conditional generative adversarial network (CGAN) is a type of GAN that also takes advantage of labels during the training process. Generator — Given a label and random array as input, this network generates data with the same structure as the training data observations corresponding to the same label.Then, we save the CGAN models for the data generation pipeline. Imports
###Code
import os
import pandas as pd
import numpy as np
from ydata_synthetic.synthesizers.regular import CGAN
from ydata_synthetic.synthesizers import ModelParameters, TrainParameters
import matplotlib.pyplot as plt
from sklearn.preprocessing import PowerTransformer
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from src.pipeline.data_generation.data_generator import GANDataGenerator
from src.pipeline.datasets.training_datasets import GermanCreditProcessedDataset
from src.pipeline.model.paths import GERMAN_CREDIT_GEN_CGAN_MODEL_PATH
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
###Output
_____no_output_____
###Markdown
Load Preprocessed Data
###Code
# init GANDataGenerator
print('German Credit dataset\n')
origin_dataset = GermanCreditProcessedDataset()
df = origin_dataset.raw_df
label_col = origin_dataset.label_column_name
df.head()
df.shape
print(f'Label columns name is: {label_col}. With {df[label_col].nunique()} unique values.'
f'({df[label_col].unique()})')
###Output
German Credit dataset
loading dataset
###Markdown
EDA and Preprocessing
###Code
df.shape
train_sample = df
cat_cols = [col for col in df.columns if any(cat_col for cat_col in origin_dataset.categorical_feature_names if cat_col + '_' in col)]
numeric_cols = [col for col in df.columns if any(numeric_col for numeric_col in origin_dataset.numeric_feature_names if numeric_col in col)]
# numeric_cols.remove('job_management')
# assert len(cat_cols)+len(numeric_cols) == len(df.columns)
###Output
_____no_output_____
###Markdown
Init the GAN
###Code
to_save = False
#Define the Conditional GAN and training parameters
noise_dim = 32
dim = 128
batch_size = 128
beta_1 = 0.5
beta_2 = 0.9
log_step = 100
epochs = 300 + 1
learning_rate = 5e-4
gan_args = ModelParameters(batch_size=batch_size,
lr=learning_rate,
betas=(beta_1, beta_2),
noise_dim=noise_dim,
n_cols=train_sample.shape[1] - 1, # Don't count the label columns here
layers_dim=dim)
train_args = TrainParameters(epochs=epochs,
cache_prefix='',
sample_interval=log_step,
label_dim=-1,
labels=[0,1])
num_classes = df[label_col].nunique()
#Init the Conditional GAN providing the index of the label column as one of the arguments
synthesizer = CGAN(model_parameters=gan_args, num_classes=num_classes)
###Output
_____no_output_____
###Markdown
Training
###Code
#----------------------------
# GAN Training
#----------------------------
#Training the Conditional GAN
synthesizer.train(data=train_sample, label_col=label_col, train_arguments=train_args,
num_cols=numeric_cols, cat_cols=cat_cols )
#Saving the synthesizer
if to_save:
synthesizer.save(GERMAN_CREDIT_GEN_CGAN_MODEL_PATH)
###Output
WARNING:tensorflow:AutoGraph could not transform <bound method GumbelSoftmaxLayer.call of <ydata_synthetic.utils.gumbel_softmax.GumbelSoftmaxLayer object at 0x7f092026a9a0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <bound method GumbelSoftmaxLayer.call of <ydata_synthetic.utils.gumbel_softmax.GumbelSoftmaxLayer object at 0x7f092026a9a0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
###Markdown
Save the model
###Code
lock = True
if lock:
synthesizer.save(GERMAN_CREDIT_GEN_CGAN_MODEL_PATH)
###Output
_____no_output_____
###Markdown
Synthesize samples based on the trained CGAN:
###Code
synthesizer.sample(condition=np.array([1]), n_samples=2).shape
synthesizer = CGAN.load(GERMAN_CREDIT_GEN_CGAN_MODEL_PATH)
###Output
_____no_output_____
###Markdown
Generate Samples
###Code
synthesizer.generator.summary()
synthesizer.discriminator.summary()
###Output
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_4 (InputLayer) [(128, 1)] 0
__________________________________________________________________________________________________
input_3 (InputLayer) [(128, 115)] 0
__________________________________________________________________________________________________
embedding_1 (Embedding) (128, 1, 1) 2 input_4[0][0]
__________________________________________________________________________________________________
flatten_2 (Flatten) (128, 115) 0 input_3[0][0]
__________________________________________________________________________________________________
flatten_1 (Flatten) (128, 1) 0 embedding_1[0][0]
__________________________________________________________________________________________________
multiply_1 (Multiply) (128, 115) 0 flatten_2[0][0]
flatten_1[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (128, 512) 59392 multiply_1[0][0]
__________________________________________________________________________________________________
dropout (Dropout) (128, 512) 0 dense_4[0][0]
__________________________________________________________________________________________________
dense_5 (Dense) (128, 256) 131328 dropout[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (128, 256) 0 dense_5[0][0]
__________________________________________________________________________________________________
dense_6 (Dense) (128, 128) 32896 dropout_1[0][0]
__________________________________________________________________________________________________
dense_7 (Dense) (128, 1) 129 dense_6[0][0]
==================================================================================================
Total params: 223,747
Trainable params: 0
Non-trainable params: 223,747
__________________________________________________________________________________________________
###Markdown
Load Trained Model
###Code
synthesizer = CGAN.load(GERMAN_CREDIT_GEN_CGAN_MODEL_PATH)
np.ceil(200/128)
generated_df_class_true = synthesizer.sample(condition=np.array([1]), n_samples=1)#batch_size*100)
generated_df_class_true
generated_df_class_false = synthesizer.sample(condition=np.array([0]), n_samples=1) #n_samples=batch_size*10)
generated_df_class_false
pd.concat([generated_df_class_false, generated_df_class_true]).sample(10)
generated_df_class_true.describe()
real_df_class_true = df[df[label_col]==1]#.sample(128)
real_df_class_true.describe()
real_df_class_false = df[df[label_col]==0].sample(128)
real_df_class_false.describe()
gan_generator = GANDataGenerator(dataset=origin_dataset, model_class=CGAN, trained_model_path=GERMAN_CREDIT_GEN_CGAN_MODEL_PATH)
###Output
_____no_output_____ |
Udemy Learning/Array.ipynb | ###Markdown
---
###Code
import numpy as np
ar = np.array(l)
ar
###Output
_____no_output_____
###Markdown
---
###Code
arr = np.array([2, 4, 'ankit', 2.5, True])
arr
l.sort()
l
l.pop()
l
ar.mean()
###Output
_____no_output_____
###Markdown
---
###Code
ar
ar[2:5]
ar
b = ar[2:4]
b
b[0]
b.view()
###Output
_____no_output_____
###Markdown
---
###Code
l
l[0:1]
l
c = l[1:]
c
###Output
_____no_output_____
###Markdown
---
###Code
ages = np.array([20, 21, 25, 26, 24, 30, 28])
ages
ages.sort()
ages
_ages = ages[2:5]
_ages
_ages[0]
_ages
_ages[:] = [38, 39, 40]
_ages
ages
_ages_copy = ages.copy()
_ages_copy
_ages_copy[0] = 1
_ages_copy
ages
###Output
_____no_output_____
###Markdown
---
###Code
l
la = l[1:]
la
la[:] = [11, 12]
la
l
###Output
_____no_output_____ |
gensim/docs/notebooks/sklearn_wrapper.ipynb | ###Markdown
Using wrappers for the scikit-learn API This tutorial is about using gensim models as part of your scikit-learn workflow with the help of wrappers found at ```gensim.sklearn_integration.sklearn_wrapper_gensim_ldaModel``` The wrappers available (as of now) are: * LdaModel (```gensim.sklearn_integration.sklearn_wrapper_gensim_ldaModel.SklearnWrapperLdaModel```), which implements gensim's ```LdaModel``` in a scikit-learn interface LdaModel To use LdaModel, begin by importing the LdaModel wrapper
###Code
from gensim.sklearn_integration.sklearn_wrapper_gensim_ldamodel import SklearnWrapperLdaModel
###Output
_____no_output_____
###Markdown
Next we will create a dummy set of texts and convert it into a corpus
###Code
from gensim.corpora import Dictionary
texts = [['complier', 'system', 'computer'],
['eulerian', 'node', 'cycle', 'graph', 'tree', 'path'],
['graph', 'flow', 'network', 'graph'],
['loading', 'computer', 'system'],
['user', 'server', 'system'],
['tree','hamiltonian'],
['graph', 'trees'],
['computer', 'kernel', 'malfunction','computer'],
['server','system','computer']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
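# Added for illustration: each document is now a bag-of-words list of (token_id, count) pairs.
print(corpus[0])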
###Output
_____no_output_____
###Markdown
Then to run the LdaModel on it
###Code
model=SklearnWrapperLdaModel(num_topics=2,id2word=dictionary,iterations=20, random_state=1)
model.fit(corpus)
model.print_topics(2)
model.transform(corpus)
###Output
WARNING:gensim.models.ldamodel:too few updates, training might not converge; consider increasing the number of passes or iterations to improve accuracy
###Markdown
Integration with Sklearn To provide a better example of how it can be used with sklearn, let's use the CountVectorizer method of sklearn. For this example we will use the [20 Newsgroups data set](http://qwone.com/~jason/20Newsgroups/). We will only use the categories rec.sport.baseball and sci.crypt and use them to generate topics.
###Code
import numpy as np
from gensim import matutils
from gensim.models.ldamodel import LdaModel
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from gensim.sklearn_integration.sklearn_wrapper_gensim_ldamodel import SklearnWrapperLdaModel
rand = np.random.mtrand.RandomState(1) # set seed for getting same result
cats = ['rec.sport.baseball', 'sci.crypt']
data = fetch_20newsgroups(subset='train',
categories=cats,
shuffle=True)
###Output
_____no_output_____
###Markdown
Next, we use CountVectorizer to convert the collection of text documents to a matrix of token counts.
###Code
vec = CountVectorizer(min_df=10, stop_words='english')
X = vec.fit_transform(data.data)
vocab = vec.get_feature_names() #vocab to be converted to id2word
id2word=dict([(i, s) for i, s in enumerate(vocab)])
###Output
_____no_output_____
###Markdown
Next, we just need to fit X and id2word to our Lda wrapper.
###Code
obj=SklearnWrapperLdaModel(id2word=id2word,num_topics=5,passes=20)
lda=obj.fit(X)
lda.print_topics()
###Output
_____no_output_____
###Markdown
Example for Using Grid Search
###Code
from sklearn.model_selection import GridSearchCV
from gensim.models.coherencemodel import CoherenceModel
def scorer(estimator, X,y=None):
goodcm = CoherenceModel(model=estimator, texts= texts, dictionary=estimator.id2word, coherence='c_v')
return goodcm.get_coherence()
obj=SklearnWrapperLdaModel(id2word=dictionary,num_topics=5,passes=20)
parameters = {'num_topics':(2, 3, 5, 10), 'iterations':(1,20,50)}
model = GridSearchCV(obj, parameters, scoring=scorer, cv=5)
model.fit(corpus)
model.best_params_
###Output
_____no_output_____
###Markdown
Example of Using Pipeline
###Code
from sklearn.pipeline import Pipeline
from sklearn import linear_model
def print_features_pipe(clf, vocab, n=10):
''' Better printing for sorted list '''
coef = clf.named_steps['classifier'].coef_[0]
    print(coef)
    print('Positive features: %s' % (' '.join(['%s:%.2f' % (vocab[j], coef[j]) for j in np.argsort(coef)[::-1][:n] if coef[j] > 0])))
    print('Negative features: %s' % (' '.join(['%s:%.2f' % (vocab[j], coef[j]) for j in np.argsort(coef)[:n] if coef[j] < 0])))
id2word=Dictionary(map(lambda x : x.split(),data.data))
corpus = [id2word.doc2bow(i.split()) for i in data.data]
model=SklearnWrapperLdaModel(num_topics=15,id2word=id2word,iterations=50, random_state=37)
clf=linear_model.LogisticRegression(penalty='l2', C=0.1) #l2 penalty used
pipe = Pipeline((('features', model,), ('classifier', clf)))
pipe.fit(corpus, data.target)
print_features_pipe(pipe, id2word.values())
print(pipe.score(corpus, data.target))
###Output
WARNING:gensim.models.ldamodel:too few updates, training might not converge; consider increasing the number of passes or iterations to improve accuracy
|
projects/West-Nile-Final.ipynb | ###Markdown
Predicting West Nile Virus in ChicagoThis project was to predict the probability that a mosquito trap in Chicago will have captured a mosquito with West Nile Virus (WNV). This is a closed competition on Kaggle, but still accepts submissions and will tell you where your submission would have ranked and what your area under the curve (AUC) score would be on the test data set.The training data contains information on if WNV was present in a trap when it was checked in 2007, 2009, 2011, and 2013, along with the date it was checked, the species of mosquito that were found, and the number of mosquitos found. This was tested against data from 2008, 2010, 2012, and 2014, with the same information included except for if West Nile Virus was present when the trap was checked and the number of mosquitos in a trap. First, we need to read in our data and import all necessary libraries.
###Code
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score, StratifiedKFold, train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report, roc_curve, auc
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
import xgboost as xgb
from sklearn.model_selection import GridSearchCV
# cleaning up the notebook
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('/Users/jcarr/downloads/train.csv')
###Output
_____no_output_____
###Markdown
The data given requires several transformations. The date field needed to be turned into a datetime from a string, and then month, year, and week number (i.e. the first week in a given month, second week, etc) extracted from that.The main transformation that needed to take place was to count the number of records for traps that were checked on a given day. The competition did not provide the number of mosquitos that were found in a trap in the test data, but it was included in the training data. However, from reviewing the number of mosquitos listed in the training data it was determined that the total number of records for a trap on a given day could be used as a proxy for the number of mosquitos in a trap. Each row represents a group of 50 mosquitos. If a trap was checked and had 150 mosquitos in it, this data would be presented in 3 rows, with 3 separate groups of 50 mosquitos evaluated for the presence of WNV. While the number of mosquitos was not made available as part of the test set, counting the number of records for a trap on a given day provides us with a suitable proxy for number of mosquitos found in a trap.
###Code
df['Date'] = pd.to_datetime(df['Date'])
df['month'] = df.Date.apply(lambda x: x.month)
df['year'] = df.Date.apply(lambda x: x.year)
df['WkNb'] = df.Date.apply(lambda x: float(x.strftime("%U")))
df['Trap'] = df.Trap.str[:4]
## Create column w just '1' in each column to sum and weight traps with more mosquitos
df['weight'] = 1
## Sum of traps having WNV by month, put in new DFs
df_2 = df.groupby(['Date','Trap']).weight.sum().reset_index()
df_target = df.groupby(['Date','Trap']).WnvPresent.max().reset_index()
## extract month and year from date format
df_2['Date'] = pd.to_datetime(df_2['Date'])
df_2['month'] = df_2.Date.apply(lambda x: x.month)
df_2['year'] = df_2.Date.apply(lambda x: x.year)
df_2['WkNb'] = df_2.Date.apply(lambda x: float(x.strftime("%U")))
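## Added illustrative check: after the groupby, the summed `weight` column holds the number of
## rows recorded for each trap/date pair, which is the proxy for mosquito counts described above.
print(df_2.head())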
###Output
_____no_output_____
###Markdown
After the transformations above, my partner and I decided to assign weights to traps based on the prevalence of WNV in a given trap in the years that we knew WNV was there. (Note - this was a bit of a hack given that this was a Kaggle competition. This likely would not have been as good of an option if the competition was not set up this way, but it proved to be effective for our purposes). We assigned a weight based on the number of rows that did have WNV present out of the total number of rows for that trap within a given month over the 4 years of data. The first few rows of this weighted data are output by the cell below.
###Code
## get weight of traps by month... num of records w wnv present over total records for trap and month
df_test = df.groupby(['Date','Trap','Species','WnvPresent']).weight.sum().reset_index()
## Same conversions for date
df_test['Date'] = pd.to_datetime(df_test['Date'])
df_test['month'] = df_test.Date.apply(lambda x: x.month)
df_test['year'] = df_test.Date.apply(lambda x: x.year)
df_test_2 = df_test.groupby(['Trap','month','WnvPresent']).weight.sum().reset_index()
df_test_2_full = df_test_2.groupby(['Trap','month']).weight.sum().reset_index()
df_test_2_y = df_test_2[df_test_2.WnvPresent == 1].groupby(['Trap','month']).weight.sum().reset_index()
df_test_2_y.rename(columns={'weight':'WNV'}, inplace = True)
df_ratio = pd.merge(df_test_2_full, df_test_2_y, how = 'left', on = ['Trap','month'])
df_ratio.fillna(0, inplace = True)
df_ratio['WNV_ratio'] = df_ratio.WNV / df_ratio.weight
df_ratio.head(15)
###Output
_____no_output_____
###Markdown
The cell below was also done because the task was a Kaggle competition. In an effort to increase our model's AUC and prevent any false positives from occurring, we manually assigned certain traps a predicted probability of zero. These were traps that never caught a mosquito with WNV in the 4 years of training data, as well as the 3 months of the year that either never had a trap with WNV, or in the case of June and October, had at most 2 traps over the 4 year period where a trap caught a mosquito with WNV.
###Code
## Automatically set specific probabilities to zero
null_traps = df_test_2_y.groupby(['Trap']).WNV.sum().reset_index()
null_traps.rename(columns={'WNV':'WnvEver'}, inplace = True)
df_ratio = pd.merge(df_ratio, null_traps, how = 'left', on = ['Trap'])
## Adjust weight - max ratio is 0.5, so adding 0.5 to make at least some probabilities = 1
#df_ratio_2['WNV_ratio'] = df_ratio_2.WNV_ratio + 0.5
df_ratio.loc[df_ratio.WnvEver == 0, 'WNV_ratio'] = 0.0
df_ratio.loc[df_ratio.month == 5, 'WNV_ratio'] = 0.0
df_ratio.loc[df_ratio.month == 6, 'WNV_ratio'] = 0.0
df_ratio.loc[df_ratio.month == 10, 'WNV_ratio'] = 0.0
## Encode traps, since they are categorical values
le = LabelEncoder()
traps = le.fit_transform(df_ratio.Trap)
traps = pd.DataFrame(data = traps, columns = ['Trap_Encode'])
df_ratio_2 = pd.concat([df_ratio, traps], axis = 1)
## Joining predicted probabilities to original dataframe w West Nile predictions
prob_pred = pd.merge(df, df_ratio_2, how = 'left', on = ['Trap','month'])
### Transforming Kaggle submission file below
test = pd.read_csv('/Users/jcarr/downloads/test.csv')
traps = le.fit_transform(test.Trap)
traps = pd.DataFrame(traps, columns = ['Trap_Encode'])
test['Date'] = pd.to_datetime(test['Date'])
test['month'] = test.Date.apply(lambda x: x.month)
test['year'] = test.Date.apply(lambda x: x.year)
test['WkNb'] = test.Date.apply(lambda x: float(x.strftime("%U")))
test['Trap'] = test.Trap.str[:4]
test = pd.concat([test, traps], axis = 1)
test = pd.merge(test, df_ratio, how = 'left', on = ['Trap','month'])
test.head()
test.loc[test.WnvEver == 0, 'WNV_ratio'] = 0.0
test.loc[test.Species.str.contains('SALINARIUS'), 'WNV_ratio'] = 0.0
test.loc[test.Species.str.contains('TERRITANS'), 'WNV_ratio'] = 0.0
test.loc[test.Species.str.contains('TARSALIS'), 'WNV_ratio'] = 0.0
test.loc[test.Species.str.contains('ERRATICUS'), 'WNV_ratio'] = 0.0
test.loc[test.Species.str.contains('UNSPECIFIED'), 'WNV_ratio'] = 0.0
test.loc[test.month == 5, 'WNV_ratio'] = 0.0
test.loc[test.month == 6, 'WNV_ratio'] = 0.0
test.loc[test.month == 10, 'WNV_ratio'] = 0.0
test['weight'] = 1
## Calculate rates of traps having WNV by month, put in new DFs
test_weight = test.groupby(['month','year','Trap']).weight.sum().reset_index()
test_weight.rename(columns = {'weight': 'leakage'}, inplace = True)
test_2 = pd.merge(test, test_weight, how = 'left', on = ['month','year','Trap'])
test_2['WNV_ratio_2'] = test_2.WNV_ratio * test_2.leakage
test_2.fillna(0, inplace = True)
###Output
_____no_output_____
###Markdown
XGBoost, a gradient-boosted decision tree classifier, provided us with the best scores as measured by AUC. We also tried a Random Forest classifier, as well as simply using the weighted trap value as the probability that a trap had WNV at the time it was checked. The process to create predictions with the XGBoost model is below. The features used are the trap, month, year, and week checked, the latitude and longitude, and the weighted value calculated for each trap.
###Code
X_train = prob_pred[['Trap_Encode', 'month', 'year', 'WNV_ratio', 'Latitude', 'Longitude', 'WkNb']]
y_train = prob_pred.WnvPresent
cv_params = {'max_depth': [3,5,7], 'min_child_weight': [1,3,5], 'learning_rate': [0.1, 0.01], 'subsample': [0.7,0.8,0.9]}
ind_params = {'n_estimators': 1000, 'seed':0, 'colsample_bytree': 0.8,
'objective': 'binary:logistic'}
optimized_GBM = GridSearchCV(xgb.XGBClassifier(**ind_params),
cv_params,
scoring = 'roc_auc', cv = 5, n_jobs = -1)
###Output
_____no_output_____
###Markdown
The grid search above will select the best parameters once it is fit. The cell below first plots a ROC curve for the weighted-trap probability on its own, and then fits the grid search to the training data.
###Code
actual = prob_pred.WnvPresent
ratio = prob_pred.WNV_ratio
FPR = dict()
TPR = dict()
ROC_AUC = dict()
# For class 1, find the area under the curve
FPR[1], TPR[1], _ = roc_curve(actual, ratio)
ROC_AUC[1] = auc(FPR[1], TPR[1])
# Plot of a ROC curve for class 1
plt.plot(FPR[1], TPR[1], label='ROC curve (area = %0.2f)' % ROC_AUC[1], linewidth=4)
plt.plot([0, 1], [0, 1], 'k--', linewidth=4)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC for West Nile Prediction - Weighted Trap Probability')
plt.legend(loc="lower right")
plt.show()
optimized_GBM.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Below, the same transformations are applied to the test data that Kaggle uses to score the model, and the file is created that is submitted to Kaggle for scoring.
###Code
X_test = test_2[['Trap_Encode', 'month', 'year', 'WNV_ratio', 'Latitude', 'Longitude', 'WkNb']]
results = optimized_GBM.predict_proba(X_test)
xgbres = pd.DataFrame(results[:,1], columns=['xgbres'])
final = test_2.join(xgbres)
p = []
p = pd.DataFrame(p)
p['Id'] = final.Id
p['WnvPresent'] = final.xgbres
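## Write the Kaggle submission file. The original cell stops here; the export
## line below is an illustrative sketch and the filename is an assumption.
# p.to_csv('west_nile_submission.csv', index=False)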
###Output
_____no_output_____ |
experiments/diabetes.ipynb | ###Markdown
Data Preparation
###Code
N, M, H, R, D, K, C, X, M_, Y_pre, Y_post, A, T = joblib.load(
os.path.join(os.getcwd(), f"data/diabetes/hp_search.joblib")
)
constants = dict(m=M, h=H, r=R, d=D, k=K, c=C)
###Output
_____no_output_____
###Markdown
Modelling
###Code
wandb.init(project="mclatte-test", entity="jasonyz")
###Output
_____no_output_____
###Markdown
McLatte Vanilla
###Code
# print(pd.read_csv(os.path.join(os.getcwd(), 'results/mclatte_hp.csv')).sort_values(by='valid_loss').iloc[0])
mclatte_config = {
"encoder_class": "lstm",
"decoder_class": "lstm",
"hidden_dim": 8,
"batch_size": 64,
"epochs": 100,
"lr": 0.021089,
"gamma": 0.541449,
"lambda_r": 0.814086,
"lambda_d": 0.185784,
"lambda_p": 0.081336,
}
###Output
_____no_output_____
###Markdown
Semi-Skimmed
###Code
# print(pd.read_csv(os.path.join(os.getcwd(), 'results/semi_skimmed_mclatte_hp.csv')).sort_values(by='valid_loss').iloc[0])
semi_skimmed_mclatte_config = {
"encoder_class": "lstm",
"decoder_class": "lstm",
"hidden_dim": 4,
"batch_size": 64,
"epochs": 100,
"lr": 0.006606,
"gamma": 0.860694,
"lambda_r": 79.016676,
"lambda_d": 1.2907,
"lambda_p": 11.112241,
}
###Output
_____no_output_____
###Markdown
Skimmed
###Code
# print(pd.read_csv(os.path.join(os.getcwd(), 'results/skimmed_mclatte_hp.csv')).sort_values(by='valid_loss').iloc[0])
skimmed_mclatte_config = {
"encoder_class": "lstm",
"decoder_class": "lstm",
"hidden_dim": 16,
"batch_size": 64,
"epochs": 100,
"lr": 0.000928,
"gamma": 0.728492,
"lambda_r": 1.100493,
"lambda_p": 2.108935,
}
###Output
_____no_output_____
###Markdown
Baseline RNN
###Code
# print(pd.read_csv(os.path.join(os.getcwd(), 'results/baseline_rnn_hp.csv')).sort_values(by='valid_loss').iloc[0])
rnn_config = {
"rnn_class": "gru",
"hidden_dim": 64,
"seq_len": 2,
"batch_size": 64,
"epochs": 100,
"lr": 0.006321,
"gamma": 0.543008,
}
###Output
_____no_output_____
###Markdown
SyncTwin
###Code
# print(pd.read_csv(os.path.join(os.getcwd(), 'results/synctwin_hp.csv')).sort_values(by='valid_loss').iloc[0])
synctwin_config = {
"hidden_dim": 128,
"reg_B": 0.522652,
"lam_express": 0.163847,
"lam_recon": 0.39882,
"lam_prognostic": 0.837303,
"tau": 0.813696,
"batch_size": 32,
"epochs": 100,
"lr": 0.001476,
"gamma": 0.912894,
}
###Output
_____no_output_____
###Markdown
Test Models
###Code
N_TEST = 5
def run_tests():
mclatte_losses = []
semi_skimmed_mclatte_losses = []
skimmed_mclatte_losses = []
rnn_losses = []
for i in range(1, N_TEST + 1):
(
_,
train_data,
test_data,
) = generate_data(return_raw=False)
skimmed_mclatte_losses.append(
test_skimmed_mclatte(
skimmed_mclatte_config,
constants,
train_data,
test_data,
run_idx=i,
)
)
semi_skimmed_mclatte_losses.append(
test_semi_skimmed_mclatte(
semi_skimmed_mclatte_config,
constants,
train_data,
test_data,
run_idx=i,
)
)
mclatte_losses.append(
test_mclatte(
mclatte_config,
constants,
train_data,
test_data,
run_idx=i,
)
)
rnn_losses.append(
test_rnn(
rnn_config,
train_data,
test_data,
run_idx=i,
)
)
joblib.dump(
(
mclatte_losses,
semi_skimmed_mclatte_losses,
skimmed_mclatte_losses,
rnn_losses,
),
f"results/test/diabetes.joblib",
)
run_tests()
###Output
_____no_output_____
###Markdown
Check finished runs results
###Code
def print_losses():
all_losses = joblib.load(f"results/test/diabetes.joblib")
for losses in all_losses:
print(f"{np.mean(losses):.3f} ({np.std(losses):.3f})")
print_losses()
###Output
_____no_output_____
###Markdown
Statistical Testing
###Code
LOSS_NAMES = ["McLatte", "Semi-Skimmed McLatte", "Skimmed McLatte", "RNN", "SyncTwin"]
losses = joblib.load(f"results/test/diabetes.joblib")
test_losses(losses, LOSS_NAMES)
###Output
_____no_output_____ |
learning/bar-chart-digitizer-ML.ipynb | ###Markdown
**Hackathon 2018 - chart-digitizer** by Prabhat Ranjan, Berowne D Hlavaty, Abhijit Salvi, Anujay Saraf. ![Python](https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/260px-Python_logo_and_wordmark.svg.png) ![OpenCV](https://a.fsdn.com/allura/p/opencvlibrary/icon) ![Anaconda](https://upload.wikimedia.org/wikipedia/en/thumb/c/cd/Anaconda_Logo.png/200px-Anaconda_Logo.png) Contents: - Random chart generator - Model training using object detection - Result analysis
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import matplotlib
###Output
/home/nbuser/anaconda3_420/lib/python3.5/site-packages/matplotlib/font_manager.py:281: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
'Matplotlib is building the font cache using fc-list. '
###Markdown
Random bar chart generator
###Code
data = pd.DataFrame(data=np.random.rand(5,1), index=range(1,6), columns=['Fred'])
m,n = np.shape(data)
plt.clf()
plt.bar(x=data.index.values, height=data.values.ravel(), color='k') # figsize=(10, 6))
# Options for later from https://matplotlib.org/api/_as_gen/matplotlib.pyplot.bar.html
# bar_width = 0.35
# alpha = .3
fig=plt.gcf()
fig.set_size_inches(3, 2)
plt.axis('off')
fig.tight_layout()
fig.canvas.draw()
# grab the pixel buffer and dump it into a numpy array
pixels = np.array(fig.canvas.renderer._renderer)
plt.plot();
###Output
_____no_output_____
###Markdown
Display generated chart
###Code
print(pixels);
print(data);
y, X = img_gen_bar()
print(y)
#for neural net
X=X/255
#for DNN only
#X=X.reshape(1,-1,3)
#data={}
#for i in range(1000) :
# data[i] = (generate_bar_chart() )
###Output
[[[255 255 255 0]
[255 255 255 0]
[255 255 255 0]
...
[255 255 255 0]
[255 255 255 0]
[255 255 255 0]]
[[255 255 255 0]
[255 255 255 0]
[255 255 255 0]
...
[255 255 255 0]
[255 255 255 0]
[255 255 255 0]]
[[255 255 255 0]
[255 255 255 0]
[255 255 255 0]
...
[255 255 255 0]
[255 255 255 0]
[255 255 255 0]]
...
[[255 255 255 0]
[255 255 255 0]
[255 255 255 0]
...
[255 255 255 0]
[255 255 255 0]
[255 255 255 0]]
[[255 255 255 0]
[255 255 255 0]
[255 255 255 0]
...
[255 255 255 0]
[255 255 255 0]
[255 255 255 0]]
[[255 255 255 0]
[255 255 255 0]
[255 255 255 0]
...
[255 255 255 0]
[255 255 255 0]
[255 255 255 0]]]
###Markdown
For historical reasons, OpenCV defaults to BGR format instead of the usual RGB, so let's convert OpenCV images to RGB consistently. The Lab color space has three components: L – Lightness (intensity); a – color component ranging from green to magenta; b – color component ranging from blue to yellow. The Lab color space is quite different from the RGB color space. In RGB, the color information is separated into three channels, but those same three channels also encode brightness. In Lab, by contrast, the L channel is independent of color information and encodes brightness only, while the other two channels encode color.
###Code
cvimrgb = cv2.cvtColor(cvim2disp,cv2.COLOR_BGR2RGB)
#or
#imbgr = cv2.cvtColor(im2disp,cv2.COLOR_RGB2BGR)
figure()
imshow(cvimrgb)
cvimlab = cv2.cvtColor(cvim2disp,cv2.COLOR_BGR2LAB)
#or
#imbgr = cv2.cvtColor(im2disp,cv2.COLOR_RGB2BGR)
figure()
imshow(cvimlab)
###Output
_____no_output_____
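###Markdown
A small sketch (my own addition, not part of the original notebook) to illustrate the point above: splitting the Lab image and displaying only the L channel shows the brightness information on its own.
###Code
L_chan, a_chan, b_chan = cv2.split(cvimlab)
figure()
imshow(L_chan, cmap='gray')  # L channel alone: intensity only, no colour information
###Output
_____no_output_____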
###Markdown
Useful utility function
###Code
img = cv2.imread('sample-1.png', 0)
img = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY)[1] # ensure binary
ret, labels = cv2.connectedComponents(img)
# Map component labels to hue val
label_hue = np.uint8(179*labels/np.max(labels))
blank_ch = 255*np.ones_like(label_hue)
labeled_img = cv2.merge([label_hue, blank_ch, blank_ch])
# cvt to BGR for display
labeled_img = cv2.cvtColor(labeled_img, cv2.COLOR_HSV2BGR)
# set bg label to black
labeled_img[label_hue==0] = 0
figure()
imshow( labeled_img)
###Output
_____no_output_____
###Markdown
Simple filtering example
###Code
im2disp = imread('sample-1.png')
blurred = cv2.GaussianBlur(im2disp,(19,19),0)
figure()
imshow(blurred)
#more general method
kernel = np.ones((5,5),np.float32)/25
blurred2 = cv2.filter2D(im2disp,-1,kernel)
figure()
imshow(blurred2)
###Output
_____no_output_____
###Markdown
Saving images to disk
###Code
cv2.imwrite('data/mycvimage.png', cvim2disp)
#or
imsave('data/myimage.png',im2disp)
x=2
%whos
###Output
Variable Type Data/Info
--------------------------------
blurred ndarray 303x328x4: 397536 elems, type `float32`, 1590144 bytes (1.5164794921875 Mb)
blurred2 ndarray 303x328x4: 397536 elems, type `float32`, 1590144 bytes (1.5164794921875 Mb)
cv2 module <module 'cv2' from '/User<...>2.cpython-35m-darwin.so'>
cvim2disp ndarray 303x328x3: 298152 elems, type `uint8`, 298152 bytes (291.1640625 kb)
cvimrgb ndarray 303x328x3: 298152 elems, type `uint8`, 298152 bytes (291.1640625 kb)
im2disp ndarray 303x328x4: 397536 elems, type `float32`, 1590144 bytes (1.5164794921875 Mb)
kernel ndarray 5x5: 25 elems, type `float32`, 100 bytes
x int 2
###Markdown
1 numpy gotcha for people coming from Matlab
###Code
x = zeros(5)
y = x
y[1] = 1
#uncomment next line and run
print(x)
###Output
[0. 1. 0. 0. 0.]
###Markdown
What happened? Why did modifying y change x? A: assignment in Python does not copy; `y = x` simply binds another name to the same array object, so arrays and other mutable data types are effectively shared by reference by default. Here's what you probably want:
###Code
x=zeros(5)
y=x.copy()
y[1] = 1
print(x)
###Output
[0. 0. 0. 0. 0.]
###Markdown
Let's run some of the included OpenCV examples
###Code
%run inpaint.py
%run deconvolution.py
%run find_obj.py
%run peopledetect.py
cd python
###Output
/Users/prabhatranjan/sources/opencv/CASIS-OpenCV-Course-master/python
|
generator.ipynb | ###Markdown
###Code
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.optimizers import RMSprop
from keras.callbacks import LambdaCallback
from keras.callbacks import ReduceLROnPlateau
filename = "C:/Users/username/Desktop/DL/poem_data3.txt" #your own repository
raw_text = open(filename, encoding="utf8").read()
raw_text = raw_text.lower() #converts all character to lower case for simplicity
chars = sorted(list(set(raw_text)))
char_to_int = dict((c,i) for i, c in enumerate(chars))
int_to_char = dict((i,c) for i, c in enumerate(chars))
n_chars = len(raw_text) # total #of characters in input file
n_vocab = len(chars) # total unique characters in input file
max_len = 64 # length of a sentence that we use to train
step = 3 # span of characters that we learn
sentence = [] # to store sentences to train
next_char = [] # next character after the sentence
for i in range(0, n_chars - max_len, step):
sentence.append(raw_text[i:i+max_len])
next_char.append(raw_text[i+max_len])
x = np.zeros((len(sentence), max_len, len(chars)),dtype=np.bool)
y = np.zeros((len(sentence), len(chars)), dtype= np.bool)
# assigns value 1 to corresponding row/column to represent sentences as boolean matrices
for i, sentenc in enumerate(sentence): # for each row/sentence
for t ,char in enumerate(sentenc): # for each character in a row
x[i, t, char_to_int[char]] = 1
y[i, char_to_int[next_char[i]]] = 1
model = Sequential()
model.add(LSTM(128, input_shape = (max_len, len(chars))))
model.add(Dense(len(chars))) #Final fully connected dense output layer
model.add(Activation('softmax'))
optimizer = RMSprop(lr= 0.01)
model.compile(loss = 'categorical_crossentropy', optimizer = optimizer)
# helper function to sample an index from a probability array
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
filepath = "weights.hdfs"
print_callback = LambdaCallback()
checkpoint = ModelCheckpoint(filepath, monitor='loss',verbose=1, save_best_only=True, mode='min')
reduce_lr = ReduceLROnPlateau(monitor='loss', factor=0.2,patience=1, min_lr=0.001)
callbacks = [print_callback, checkpoint, reduce_lr]
model.fit(x, y, batch_size=128, epochs=10, callbacks=callbacks)
def myGenerate(length_given, diversity_given):
input_taken = [] #user input text is stored here
sent = []
input_taken = input('Enter first line of poem (min 40 chars): ')
while(len(input_taken) < 40): # since the sentence length is predefined,
input_taken = [] # a minimum character or 'max_val' is expected
input_taken = input('..too short, please retype')
sent = input_taken[0:max_len] # first characters upto value of 'max_len'
gen = '' # is considered, to avoid input shape
gen += sent # compatibility problem
for i in range(length_given):
x_predicted = np.zeros((1, max_len, len(chars)))
for t, ch in enumerate(sent): # converts the user entered text to
x_predicted[0, t, char_to_int[ch]] = 1 # a matrix 'x_predicted'
# and pass this matrix to model.predict() and stores return value in
predictions = model.predict(x_predicted, verbose = 0)[0] # predictions
# samples the character indices from helper function sample()
next_ind = sample(predictions, diversity_given)
next_ch = int_to_char[next_ind] # maps the index to characters
gen += next_ch # appends the generated character
sent = sent[1:] + next_ch # appends to 'sent' to generate further
return gen
print(myGenerate(500, 0.45))
###Output
_____no_output_____
###Markdown
City Dataset Generator You can find the dataset at the following URL: [click me 😁](https://simplemaps.com/data/world-cities). The license of the dataset is also in this repo. We start by defining the globals of our program.
###Code
N_CITIES = 10
###Output
_____no_output_____
###Markdown
Now we load the dataset
###Code
import pandas as pd
from haversine import haversine as hs
from IPython.display import display
import plotly.express as px
import plotly.io as pio
pio.renderers.default='notebook'
cities = pd.read_csv("worldcities.csv")
display(cities)
###Output
_____no_output_____
###Markdown
Now we select the `N_CITIES` most populous cities
###Code
cities_sample = cities.sort_values("population", ascending=False)[:N_CITIES]
# .sample(n=N_CITIES, random_state=RANDOM_SEED)[["city","lat","lng"]]
display(cities_sample)
###Output
_____no_output_____
###Markdown
We compute the pairwise distances between the selected cities.
###Code
distances = []
for index_1, city_1 in cities_sample.iterrows():
row = []
for index_2, city_2 in cities_sample.iterrows():
city_1_gps = (city_1["lat"], city_1["lng"])
city_2_gps = (city_2["lat"], city_2["lng"])
distance = round(hs(city_1_gps, city_2_gps))
row.append(distance)
distances.append(row)
cities_names = list(cities_sample["city_ascii"])
distances = pd.DataFrame(distances, columns=cities_names, index=cities_names)
display(distances)
###Output
_____no_output_____
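###Markdown
As a side note, the nested `iterrows` loop above can be vectorized. The sketch below is my own addition (the helper name and the Earth-radius constant are assumptions, not from the notebook); it builds the same distance matrix with numpy broadcasting of the haversine formula.
###Code
import numpy as np

EARTH_RADIUS_KM = 6371.0

def distance_matrix_km(lat_deg, lng_deg):
    # convert to radians and shape as column vectors for broadcasting
    lat = np.radians(np.asarray(lat_deg, dtype=float))[:, None]
    lng = np.radians(np.asarray(lng_deg, dtype=float))[:, None]
    dlat = lat - lat.T
    dlng = lng - lng.T
    # haversine formula, evaluated for every pair at once
    a = np.sin(dlat / 2) ** 2 + np.cos(lat) * np.cos(lat.T) * np.sin(dlng / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

# distance_matrix_km(cities_sample["lat"], cities_sample["lng"]) should match the loop above (up to rounding)
###Output
_____no_output_____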
###Markdown
We save the dataframe into a csv.
###Code
distances.to_csv("cities_distances.csv", index=False)
###Output
_____no_output_____
###Markdown
We plot the cities on a map.
###Code
globe = px.scatter_geo(data_frame=cities_sample, lat="lat", lon="lng", hover_name="city", size="population", projection="orthographic")
globe.show()
###Output
_____no_output_____
###Markdown
We plot the connections on a map
###Code
# cities_sample_2 = cities_sample.reindex([0, 2, 3, 1, 4])
# globe = px.line_geo(data_frame=cities_sample_2, lat="lat", lon="lng", hover_name="city", projection="orthographic")
# globe.show()
###Output
_____no_output_____
###Markdown
Simple Iterators Basic usage A very basic iterator can be defined using a yield inside what looks like a function. When a yield is present inside a `def` block, the result is no longer a plain function but what's known as a `generator`. Calling the generator function returns a generator object. This generator object is inherently an iterator.
###Code
def get_odds():
for i in range(2):
yield i * 2 + 1
get_odds() # instead of returning a value, a generator object is returned
###Output
_____no_output_____
###Markdown
> **What does yield mean?**> You can interpret yield in many ways. In this case the `yield` is similar to a `return`, where a value is passed back to the caller.> The key difference is that, unlike `return`, which ends the function, `yield` pauses it so the caller can resume the generator later. To fetch values that are yielded by a `generator` we can use the Python builtin `next` function.
###Code
odds = get_odds()
print(next(odds))
print(next(odds))
###Output
1
3
###Markdown
Fetching when the iterator is empty will result in a `StopIteration` error.
###Code
print(next(odds))
###Output
_____no_output_____
###Markdown
The easiest way to get values out of a generator object is to iterate over it. This can be done in any number of ways: a for-loop, converting it into a data structure, etc.
###Code
for i in get_odds():
print(i)
print(list(get_odds()))
###Output
1
3
[1, 3]
###Markdown
The Object ModelThe `get_odds` generator is actually equivalent to the following class:
###Code
from typing import Iterator
class Odds:
    def __init__(self):
self.value = 1
def __iter__(self): # Required by Iterator
"""Required by the `Iterable` protocol.
Anything that can be iterated requires this method.
"""
return self
def __next__(self): # Required by Iterator
"""Makes the `Iterable` a `Iterator`.
"""
value = self.value
self.value += 2
return value
###Output
_____no_output_____
###Markdown
Here both `get_odds` and `Odds` implement the [`Iterator` protocol](https://docs.python.org/3/library/stdtypes.htmliterator-types).
###Code
from typing import Iterator
isinstance(Odds(), Iterator)
isinstance(get_odds(), Iterator)
###Output
_____no_output_____
###Markdown
Iterable vs Iterator Semantically, an Iterable is any object that can be iterated, while an Iterator is the actual object that handles the state and generation of the values being iterated. A `list` or `dict` can be iterated using a for-loop, so they are iterable, but they themselves are not iterators.
###Code
from typing import Iterable, Iterator
a = [1, 2, 3, 4]
print(isinstance(a, Iterator))
print(isinstance(a, Iterable))
###Output
False
True
###Markdown
An Iterable contains an `__iter__` method that returns the actual iterator; this can be accessed more conveniently with Python's builtin `iter` function.
###Code
iterator = iter(a)
print(f"{iterator = }")
print(f"{isinstance(iterator, Iterator)}")
###Output
iterator = <list_iterator object at 0x7ff0698d8760>
True
###Markdown
An Iterator contains a `__next__` method that returns the next value in the iterator, accessed using the `next` function as shown previously. It also has an `__iter__` method that returns itself. A rule to remember is that an Iterator is always Iterable, but not necessarily the other way round.
###Code
print(f"{isinstance(iterator, Iterable) = }")
###Output
True
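###Markdown
A quick sketch (my own addition) showing both facts at once: a list iterator's `__iter__` returns the iterator itself, and `__next__` walks the underlying list.
###Code
it = iter([10, 20, 30])
print(iter(it) is it)      # True: an iterator's __iter__ returns itself
print(next(it), next(it))  # 10 20
###Output
_____no_output_____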
###Markdown
Unending iterators There's no reason an iterator actually has to end. An example of this is `itertools.count(0)` which counts upwards forever. We could also write our own iterator like this:
###Code
def fib():
n = 0
m = 1
while True:
n, m = m, n + m
yield n
import time
for n in fib():
print(f"Fib: {n}")
time.sleep(0.2)
###Output
Fib: 1
Fib: 1
Fib: 2
Fib: 3
Fib: 5
Fib: 8
Fib: 13
Fib: 21
Fib: 34
Fib: 55
Fib: 89
Fib: 144
Fib: 233
Fib: 377
Fib: 610
Fib: 987
Fib: 1597
Fib: 2584
Fib: 4181
Fib: 6765
Fib: 10946
Fib: 17711
Fib: 28657
Fib: 46368
Fib: 75025
Fib: 121393
Fib: 196418
Fib: 317811
Fib: 514229
Fib: 832040
Fib: 1346269
Fib: 2178309
Fib: 3524578
###Markdown
Context managers `yield` is useful not only for generating values for an iterator; it can also be viewed as a `breakpoint` for the generator. This is very useful for splitting a generator into two parts of logic, which maps very well to the concept of a context manager. The logic before the `yield` can be mapped to `__enter__` and the logic after it to `__exit__`.
###Code
def calc(a: int, b: int):
print("About to calculate a + b")
result = a + b
yield result
print("Performing some clean up")
calc_iter = calc(1, 1)
result = next(calc_iter)
print(f"The result is {result}")
# Raises an error
next(calc_iter)
###Output
Performing some clean up
###Markdown
We can now turn this into a context manager.
###Code
class CalcContext:
def __init__(self, a: int, b: int) -> None:
self.iter = calc(a, b)
def __enter__(self):
return next(self.iter)
    def __exit__(self, exc_type, exc_value, traceback):
        try:
            next(self.iter)  # run the clean-up code after the yield
        except StopIteration:
            return
###Output
_____no_output_____
###Markdown
This is exactly what the `contextmanager` decorator does.
###Code
from contextlib import contextmanager
@contextmanager
def calc_v2(a: int, b: int) -> Iterator[int]:
print("About to calculate a + b")
result = a + b
yield result
print("Performing some clean up")
###Output
_____no_output_____
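###Markdown
A minimal usage sketch (my own addition, with illustrative values) for the decorated generator above:
###Code
with calc_v2(1, 2) as result:
    print(f"The result is {result}")  # prints 3; the clean-up message appears on exit
###Output
_____no_output_____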
###Markdown
Sending What if we want to provide input to our generator object? Generators have a `send` method that allows the caller to pass values into the generator.
###Code
# Same as before but takes input from the yield
def calc_gen():
print("About to calculate a + b")
a, b = yield
print(f"a = {a}, b = {b}")
result = a + b
yield result
print("Performing some clean up")
c = calc_gen()
###Output
_____no_output_____
###Markdown
As before we can use ```next(c)``` to advance to the first yield point:
###Code
next(c)
###Output
About to calculate a + b
###Markdown
`next(c)` is equivalent to `c.send(None)`.What happens when we send an actual value?
###Code
result = c.send((1, 1)) # type: ignore
print(f"Result is {result}")
###Output
Result is 2
###Markdown
Event Loop In order to interact with the event loop we require something to track the progress and store the result. The Future Object The future object is a placeholder for the real value; when the value becomes available, it is added to the future.
###Code
class Future:
def __init__(self):
self.result = None
def set_result(self, value):
self.result = value
###Output
_____no_output_____
###Markdown
Here's an example `Connection` client that uses the `Future`
###Code
class Connection:
def __init__(self):
self._futures = []
def fetch(self):
f = Future()
self._futures.append(f)
# Makes non-blocking call
return f
def on_receive(self, value):
self._futures.pop(0).set_result(value)
###Output
_____no_output_____
###Markdown
Coroutine Create a generator that `yield`s the `Future`s.
###Code
conn = Connection()
def do_work():
a, b = yield (conn.fetch(), conn.fetch())
return a + b
generator = do_work()
###Output
_____no_output_____
###Markdown
Get a hold of the futures the coroutine depends on by calling next on the generator.
###Code
f1, f2 = next(generator)
assert f1.result is None
assert f2.result is None
###Output
_____no_output_____
###Markdown
Once the connection comes back with a result, the event loop will set it on the corresponding future.
###Code
conn.on_receive(1)
conn.on_receive(2)
assert f1.result == 1
assert f2.result == 2
###Output
_____no_output_____
###Markdown
Send the results back to the generator once they have been received.
###Code
try:
generator.send((f1.result, f2.result)) # type: ignore
except StopIteration as e:
print(f"Result is {e.value}")
###Output
Result is 3
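###Markdown
Tying the pieces together, here is a minimal driver sketch of my own (the names `run`, `work` and `resolve` are illustrative, not part of the original): it advances the coroutine to its first yield, resolves the futures, then sends the results back.
###Code
def run(coroutine, resolver):
    futures = next(coroutine)   # advance to the yield; collect the pending futures
    resolver(futures)           # stand-in for the event loop waiting on I/O callbacks
    try:
        coroutine.send(tuple(f.result for f in futures))
    except StopIteration as stop:
        return stop.value       # the coroutine's return value

conn2 = Connection()

def work():
    a, b = yield (conn2.fetch(), conn2.fetch())
    return a * b

def resolve(futures):
    conn2.on_receive(3)
    conn2.on_receive(4)

assert run(work(), resolve) == 12
###Output
_____no_output_____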
###Markdown
Create classes
###Code
import json
import numpy as np
import random
from IPython.core.debugger import set_trace
%config IPCompleter.greedy=True
# %load mesh.py
class Mesh:
def __init__(self, *args, **kwargs):
"""
Constructor for Mesh class
"""
self.spokes = 2
self.nodes = 10
self.d_nnode = {}
self.l_nodes = []
self.d_mesh = {}
for k,v in kwargs.items():
if k == 'spokes': self.spokes = v
if k == 'nodes': self.nodes = v
self.l_nodes = list(range(self.nodes))
self.d_nnode = dict(enumerate(self.l_nodes))
print(self.l_nodes)
print(self.d_nnode)
def connect(self, *args, **kwargs):
"""
Connect each node to a random sample (of size <spoke>)
drawn from list of all nodes.
"""
for k,v in self.d_nnode.items():
self.d_nnode[k] = random.sample(self.l_nodes, self.spokes)
def FDGnodelist_build(self, *args, **kwargs):
"""
Build the list of nodes for the FDG.
Group ID can be 'uniform', with all values set to <groupID>,
or this can be 'linear', starting with <groupID> and
increasing with <incremenet>. Group ID strings can also
be prepended with an optional <prefix>.
"""
str_groupSpread = 'uniform'
str_prefix = ''
groupID = 1
increment = 1
for k,v in kwargs.items():
if k == 'groupSpread': str_groupSpread = v
if k == 'groupID': groupID = v
if k == 'prefix': str_prefix = v
if k == 'increment': increment = v
lstr_ID = [str_prefix] * self.nodes
if str_groupSpread == 'uniform':
l_ID = [groupID] * self.nodes
if str_groupSpread == 'increment':
l_ID = list(range(groupID, groupID + self.nodes*increment, increment))
t_ID = zip(lstr_ID, l_ID)
l_fullID = ['%s%d' % (s, i) for (s, i) in zip(lstr_ID, l_ID)]
set_trace()
return l_fullID
M = Mesh(spokes = 2, nodes = 10)
M.connect()
M.d_nnode
M.FDGnodelist_build(prefix='group', groupSpread = 'increment', increment=2)
['tt'] * 4
l_nodesCloudHeterogeneous = []
l_linksCloudHeterogeneous = []
l_nodesCloudHomogeneous = []
l_linksCloudHomogeneous = []
cloudNodes = 300
edgeNodes = 30
linkToEdgeNodes = 10
d_graphDisconnected = {}
d_graphCentralServer = {}
d_graphCloudHomogeneous = {}
d_graphEdge = {}
d_graphFog = {}
for i in range(1, cloudNodes):
l_nodesCloudHeterogeneous.append({"id": "pc%d" % i, "group": i})
l_nodesCloudHeterogeneous.append({"id": "human%d" % i, "group": i})
l_linksCloudHeterogeneous.append({"source": "pc%d" % i, "target": "human%d" % i, "value": 10})
d_graphDisconnected = {
"nodes": l_nodesCloudHeterogeneous,
"links": l_linksCloudHeterogeneous
}
with open('pcss.json', 'w') as f:
json.dump(d_graphDisconnected, f, sort_keys=True, indent=4)
###Output
_____no_output_____
###Markdown
Create a central server
###Code
l_nodesCloudHeterogeneous.append({"id": "server1", "group": 200})
for i in range(1, cloudNodes):
l_linksCloudHeterogeneous.append({"source": "pc%d" % i, "target": "server1", "value": 1})
d_graphCentralServer = {
"nodes": l_nodesCloudHeterogeneous,
"links": l_linksCloudHeterogeneous
}
with open('pcss-net.json', 'w') as f:
json.dump(d_graphCentralServer, f, sort_keys=True, indent=4)
###Output
_____no_output_____
###Markdown
Create a "cloud" topology First create the node and links out from the cloud to the edge
###Code
# Create a homogeneous cloud
l_nodesCloudHomogeneous.append({"id": "headnode", "group": 100})
for i in range(1, cloudNodes):
l_nodesCloudHomogeneous.append({"id": "node%d" %i, "group": 1})
l_linksCloudHomogeneous.append({"source": "node%d" % i, "target": "headnode", "value": 1})
# Create the nodes out of the cloud
for i in range(1, linkToEdgeNodes):
l_nodesCloudHomogeneous.append({"id": "link%d" % i, "group": cloudNodes})
l_nodesCloudHomogeneous.append({"id": "client", "group": 400})
# Link the nodes to create a chain out of the cloud
for i in range(1, linkToEdgeNodes-1):
l_linksCloudHomogeneous.append({"source": "link%d" % i, "target": "link%s" % str(i+1)})
l_linksCloudHomogeneous.append({"source": "link1", "target": "headnode", "value": 1})
l_linksCloudHomogeneous.append({"source": "link%s" % str(linkToEdgeNodes-1), "target": "client", "value": 1})
d_graphCloudHomogeneous = {
"nodes": l_nodesCloudHomogeneous,
"links": l_linksCloudHomogeneous
}
with open('cloud.json', 'w') as f:
json.dump(d_graphCloudHomogeneous, f, sort_keys=True, indent=4)
###Output
_____no_output_____
###Markdown
Create another cloud for the "edge" computing
###Code
for i in range(1, edgeNodes):
l_nodesCloudHomogeneous.append({"id": "edgeNode%d" % i, "group": 2})
l_linksCloudHomogeneous.append({"source": "edgeNode%d" %i, "target": "link%s" % str(linkToEdgeNodes-1), "value": 1})
d_graphEdge = {
"nodes": l_nodesCloudHomogeneous,
"links": l_linksCloudHomogeneous
}
with open('edge.json', 'w') as f:
json.dump(d_graphEdge, f, sort_keys=True, indent=4)
###Output
_____no_output_____
###Markdown
Fog computing
###Code
for i in range(1, edgeNodes):
l_nodesCloudHomogeneous.append({"id": "fogNode3.%d" %i, "group": 3})
l_linksCloudHomogeneous.append({"source": "fogNode3.%d" %i, "target": "link3", "value": 1})
l_nodesCloudHomogeneous.append({"id": "fogNode6.%d" %i, "group": 4})
l_linksCloudHomogeneous.append({"source": "fogNode6.%d" %i, "target": "link6", "value": 1})
d_graphFog = {
"nodes": l_nodesCloudHomogeneous,
"links": l_linksCloudHomogeneous
}
with open('fog.json', 'w') as f:
json.dump(d_graphFog, f, sort_keys=True, indent=4)
###Output
_____no_output_____
###Markdown
Import and load basic functions
###Code
import numpy as np #numpy library is used to work with multidimensional array.
import pandas as pd #panda used for data manipulation and analysis.
import matplotlib.pyplot as plt #support ploting a figure
from matplotlib import colors #colors support converting number or argument into colors
import tensorflow as tf
from import_plot import *
task=get_task()
plot_task(task)
dimension_explained('train')
dimension_explained('eval')
dimension_explained('test')
diz_train=check_dim('train')
diz_eval=check_dim('eval')
diz_test=check_dim('test')
def build_model(task):
inp_dim=np.array(np.array(task['train'][0]['input']).shape)
out_dim=np.array(np.array(task['train'][0]['output']).shape)
images = tf.keras.layers.Input(shape=(inp_dim[0],inp_dim[1],10))
conv = images
conv=tf.keras.layers.Flatten()(images)
#conv=tf.keras.layers.Dropout(rate=0.15)(conv)
conv=tf.keras.layers.Dense(inp_dim[0]*inp_dim[1]*10)(conv)
conv=tf.keras.layers.Dense(out_dim[0]*out_dim[1]*10)(conv)
conv=tf.keras.layers.Reshape(target_shape=(out_dim[0], out_dim[1], 10))(conv)
conv=tf.keras.layers.Dense(10)(conv)
conv=tf.keras.layers.Softmax()(conv)
model = tf.keras.models.Model(inputs=[images], outputs=[conv])
optimizer = tf.keras.optimizers.Adam(0.001)
model.compile(loss=tf.keras.losses.CategoricalCrossentropy(),
optimizer=optimizer,
metrics=['mse', 'accuracy'])
return model
###Output
_____no_output_____
###Markdown
Generators Mirror right and down examples
###Code
task=get_task(index=82)
plot_task(task)
task
def gener_one():
skeleton=np.random.randint(2,size=(3,4))
color=np.random.randint(1, 10)
inp=np.where(skeleton==0, 0, color)
flip_right=np.concatenate((inp,np.flip(inp, axis=1)), axis=1)
out=np.concatenate((flip_right,np.flip(flip_right, axis=0)), axis=0)
return(inp, out)
def task_builder_one(n_train=1000, n_test=250):
task={'train':[], 'test':[]}
for i in range(0,n_train):
inp, out=gener_one()
task['train'].append({'input':inp, 'output':out})
for i in range(0,n_test):
inp, out=gener_one()
task['test'].append({'input':inp, 'output':out})
return task
task=task_builder_one(5, 2)
plot_task(task)
task=task_builder_one(10000, 500)
data=[]
label=[]
for pair in task['train']:
dat=tf.constant(pair['input'])
lab=tf.constant(pair['output'])
data.append(tf.one_hot(dat, 10))
label.append(tf.one_hot(lab,10))
trainset = tf.data.Dataset.from_tensor_slices((data, label))
data=[]
label=[]
for pair in task['test']:
dat=tf.constant(pair['input'])
lab=tf.constant(pair['output'])
data.append(tf.one_hot(dat, 10))
label.append(tf.one_hot(lab,10))
valset = tf.data.Dataset.from_tensor_slices((data, label))
model=build_model(task)
model.summary()
EPOCHS = 5
history = model.fit(trainset.batch(5),validation_data=valset.batch(5), epochs=EPOCHS)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim(0, 0.05)
###Output
_____no_output_____
###Markdown
CONCATENATION AND FLIP CENTRAL ROW
###Code
task=get_task('eval',0 )
plot_task(task)
def gener_two():
inp=np.random.randint(1,10,size=(2,2))
flip=np.flip(inp, axis=1)
conc1=np.concatenate((inp, inp, inp), axis=1)
conc2=np.concatenate((flip, flip, flip), axis=1)
out=np.concatenate((conc1, conc2, conc1), axis=0)
return(inp, out)
def task_builder_two(n_train=1000, n_test=250):
task={'train':[], 'test':[]}
for i in range(0,n_train):
inp, out=gener_two()
task['train'].append({'input':inp, 'output':out})
for i in range(0,n_test):
inp, out=gener_two()
task['test'].append({'input':inp, 'output':out})
return task
task_gen=task_builder_two(5, 2)
plot_task(task_gen)
task=task_builder_two(10000, 500)
model=build_model(task)
model.summary()
data=[]
label=[]
for pair in task['train']:
dat=tf.constant(pair['input'])
lab=tf.constant(pair['output'])
data.append(tf.one_hot(dat, 10))
label.append(tf.one_hot(lab,10))
trainset = tf.data.Dataset.from_tensor_slices((data, label))
data=[]
label=[]
for pair in task['test']:
dat=tf.constant(pair['input'])
lab=tf.constant(pair['output'])
data.append(tf.one_hot(dat, 10))
label.append(tf.one_hot(lab,10))
valset = tf.data.Dataset.from_tensor_slices((data, label))
EPOCHS = 5
history = model.fit(trainset.batch(5),validation_data=valset.batch(5), epochs=EPOCHS)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim(0, 0.05)
###Output
_____no_output_____
###Markdown
Pattern filler
###Code
task=get_task('train', 286)
plot_task(task)
def gener_three():
a = np.random.randint(2, 10, (8, 8))
a = np.where(a==4, 1, a)
m = np.tril(a) + np.tril(a, -1).T
m = np.concatenate((m, np.flip(m, axis=0)), axis=0)
out= np.concatenate((m, np.flip(m)), axis=1)
inp = out.copy()
p1=np.random.randint(1, 13)
p2=np.random.randint(p1+2, p1+5)
p3=np.random.randint(1, 13)
p4=np.random.randint(p3+2, p3+5)
inp[p1:p2, p3:p4]=4
p1=np.random.randint(1, 13)
p2=np.random.randint(p1+2, p1+5)
p3=np.random.randint(1, 13)
p4=np.random.randint(p3+2, p3+5)
inp[p1:p2, p3:p4]=4
return(inp, out)
def task_builder_three(n_train=1000, n_test=250):
task={'train':[], 'test':[]}
for i in range(0,n_train):
inp, out=gener_three()
task['train'].append({'input':inp, 'output':out})
for i in range(0,n_test):
inp, out=gener_three()
task['test'].append({'input':inp, 'output':out})
return task
task=task_builder_three(5, 2)
plot_task(task)
task=task_builder_three(10000, 20)
data=[]
label=[]
for pair in task['train']:
dat=tf.constant(pair['input'])
lab=tf.constant(pair['output'])
data.append(tf.one_hot(dat, 10))
label.append(tf.one_hot(lab,10))
trainset = tf.data.Dataset.from_tensor_slices((data, label))
data=[]
label=[]
for pair in task['test']:
dat=tf.constant(pair['input'])
lab=tf.constant(pair['output'])
data.append(tf.one_hot(dat, 10))
label.append(tf.one_hot(lab,10))
valset = tf.data.Dataset.from_tensor_slices((data, label))
model=build_model(task)
model.summary()
EPOCHS = 5
history = model.fit(trainset.batch(5),validation_data=valset.batch(5), epochs=EPOCHS)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
for xb, yb in trainset.batch(10).take(1):
for i in range(0,10):
x=tf.math.argmax(xb[i], axis=-1)
true=tf.math.argmax(yb[i], axis=-1)
pred=tf.math.argmax(model.predict(xb)[i], axis=-1)
plt.subplot(1,3,1)
plt.imshow(x, cmap=cmap, norm=norm)
plt.subplot(1,3,2)
plt.imshow(true, cmap=cmap, norm=norm)
plt.subplot(1,3,3)
plt.imshow(pred, cmap=cmap, norm=norm)
plt.show()
###Output
_____no_output_____
###Markdown
DIFFERENT DIMENSIONS: DENOISE
###Code
task=get_task('train', 191)
plot_task(task)
def gener_four():
dim1=np.random.randint(10,15)
dim2=np.random.randint(13,18)
out=np.zeros((dim1,dim2))
col=np.random.choice([2,3,5])
colno=np.random.choice([1,4,8])
nsquare=np.random.randint(3,5)
nnoise=np.random.randint(10,20)
for sq in range(0,nsquare):
p1=np.random.randint(0, dim1-1)
p2=np.random.randint(p1+2, p1+8)
p3=np.random.randint(0, dim2-1)
p4=np.random.randint(p3+2, p3+8)
out[p1:p2, p3:p4]=col
inp=out.copy()
for noise in range(0,nnoise):
p1=np.random.randint(0, dim1)
p2=np.random.randint(0, dim2)
        inp[p1,p2]=colno  # noise goes on the input so that the clean grid in `out` is the denoised target
return(inp, out)
def task_builder_four(n_train=1000, n_test=250):
task={'train':[], 'test':[]}
for i in range(0,n_train):
inp, out=gener_four()
task['train'].append({'input':inp, 'output':out})
for i in range(0,n_test):
inp, out=gener_four()
task['test'].append({'input':inp, 'output':out})
return task
task=task_builder_four(8, 2)
plot_task(task)
###Output
_____no_output_____
###Markdown
Two squares
###Code
task=get_task('train', 131)
plot_task(task)
def gener_five():
dim1=np.random.randint(5,11)
dim2=np.random.randint(5,11)
out=np.zeros((dim1,dim2))
inp=out.copy()
col=np.random.randint(1,10)
col2=np.random.randint(1,10)
nsquare=np.random.randint(1,3)
flippoints=np.random.randint(1,3)
while col2==col:
col2=np.random.randint(1,10)
if nsquare==1:
p1=np.random.randint(0, dim1-2)
p2=np.random.randint(p1+2,dim1)
p3=np.random.randint(0, dim2-2)
p4=np.random.randint(p3+2,dim2)
out[p1:p2+1, p3:p4+1]=col
inp[p1, p3]=col
inp[p2, p4]=col
if nsquare==2:
if dim1>dim2:
p1= np.random.randint(0, dim1-3)
p2= np.random.randint(p1+1, dim1-2)
p11= np.random.randint(p2+1, dim1-1)
p22= np.random.randint(p11+1, dim1)
p3= np.random.randint(0, dim2-3)
p4= np.random.randint(p3+1, dim2)
p33= np.random.randint(0, dim2-3)
p44= np.random.randint(p33+1, dim2)
out[p1:p2+1, p3:p4+1]=col
out[p11:p22+1, p33:p44+1]=col2
if flippoints==1:
inp[p1, p3]=col
inp[p2, p4]=col
inp[p22, p33]=col2
inp[p11, p44]=col2
else:
inp[p2, p3]=col
inp[p1, p4]=col
inp[p11, p33]=col2
inp[p22, p44]=col2
else:
p1= np.random.randint(0, dim1-3)
p2= np.random.randint(p1+1, dim1)
p11= np.random.randint(0, dim1-3)
p22= np.random.randint(p11+1, dim1)
p3= np.random.randint(0, dim2-3)
p4= np.random.randint(p3+1, dim2-2)
p33= np.random.randint(p4+1, dim2-1)
p44= np.random.randint(p33+1, dim2)
out[p1:p2+1, p3:p4+1]=col
out[p11:p22+1, p33:p44+1]=col2
if flippoints==1:
inp[p1, p3]=col
inp[p2, p4]=col
inp[p22, p33]=col2
inp[p11, p44]=col2
else:
inp[p2, p3]=col
inp[p1, p4]=col
inp[p11, p33]=col2
inp[p22, p44]=col2
return(inp, out)
gener_five()
def task_builder_five(n_train=1000, n_test=250):
task={'train':[], 'test':[]}
for i in range(0,n_train):
inp, out=gener_five()
task['train'].append({'input':inp, 'output':out})
for i in range(0,n_test):
inp, out=gener_five()
task['test'].append({'input':inp, 'output':out})
return task
task=task_builder_five(8, 2)
plot_task(task)
###Output
_____no_output_____
###Markdown
SMALLEST SQUARE
###Code
task=get_task('train', 48)
plot_task(task)
def gener_six():
dim1=np.random.randint(8,21)
dim2=np.random.randint(8,21)
inp=np.zeros((dim1,dim2))
nsquare=np.random.randint(2,6)
colors=np.random.choice(range(1,10),5, replace=False)
sqs=[]
for i in range(0, nsquare):
p1=np.random.randint(1, dim1-3)
p2=np.random.randint(p1+1,dim1-1)
p3=np.random.randint(1, dim2-3)
p4=np.random.randint(p3+1,dim2-1)
sqs.append([((p2-p1)*(p4-p3)), p1,p2,p3,p4, colors[i]])
sortt=sorted(sqs, reverse=True)
for sq in sortt:
_,p1,p2,p3,p4,col=sq
inp[p1:p2+1, p3:p4+1]=col
_,p1,p2,p3,p4,col=sortt[-1]
out=np.ones((p2-p1,p4-p3))*col
return(inp, out)
def task_builder_six(n_train=1000, n_test=250):
task={'train':[], 'test':[]}
for i in range(0,n_train):
inp, out=gener_six()
task['train'].append({'input':inp, 'output':out})
for i in range(0,n_test):
inp, out=gener_six()
task['test'].append({'input':inp, 'output':out})
return task
task=task_builder_six(8, 2)
plot_task(task)
###Output
_____no_output_____
###Markdown
NCASINO
###Code
task=get_task('train', 133)
plot_task(task)
def gener_seven():
dim1=np.random.randint(18,26)
dim2=np.random.randint(18,26)
colors=np.random.choice(range(1,10),2, replace=False)
inp=np.zeros((dim1,dim2))
nnoise=np.random.randint(15,50)
for noise in range(0,nnoise):
p1=np.random.randint(0, dim1)
p2=np.random.randint(0, dim2)
inp[p1,p2]=colors[0]
skel=np.random.randint(2,size=(3,3))
out=skel*colors[0]
topaste=np.kron(skel, np.ones((4,4)))*colors[1]
p1=np.random.randint(0, dim1-12)
p2=np.random.randint(0,dim2-12)
inp[p1:p1+12, p2:p2+12]=topaste
return(inp, out)
def task_builder_seven(n_train=1000, n_test=250):
task={'train':[], 'test':[]}
for i in range(0,n_train):
inp, out=gener_seven()
task['train'].append({'input':inp, 'output':out})
for i in range(0,n_test):
inp, out=gener_seven()
task['test'].append({'input':inp, 'output':out})
return task
task=task_builder_seven(8, 2)
plot_task(task)
plt.imshow([list(range(0,10))],cmap)
###Output
_____no_output_____
###Markdown
###Code
from tensorflow import keras
import cv2
from keras.models import Model
from keras.callbacks import TensorBoard
from keras.models import load_model
import math
import numpy as np
import pathlib
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.python.framework import ops
from keras.models import Sequential # to create a cnn model
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D,UpSampling2D,Conv2DTranspose
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import RMSprop,Adam,SGD,Adagrad,Adadelta,Adamax,Nadam
from keras.applications import xception
from keras.layers import LeakyReLU
def boxx(a, k, n, s):
    # convolution block: k x k conv with n filters and stride s, followed by LeakyReLU
    x1 = Conv2D(n, (k, k), strides=(s, s), padding='same')(a)
    x1 = LeakyReLU()(x1)
    return x1, x1
def boxy(a, b, k, n):
    # residual block: two k x k convolutions, then a skip connection that adds `b` to the result
    x1 = Conv2D(n, (k, k), strides=(1, 1), padding='same')(a)
    x1 = LeakyReLU()(x1)
    x1 = Conv2D(n, (k, k), strides=(1, 1), padding='same')(x1)
    return x1 + b, x1 + b
input_img = Input(shape=(64,64,1))
inputs= input_img
a0,b0 = boxx(input_img,7,32,1) #block 1
a1 = boxx(a0,3,32,2)[0] #block 2(a)
a2,b2 = boxx(a1,3,64,1) #block 2(B)
a3 = boxx(a2,3,64,2)[0] #block 3(a)
a4,b4 = boxx(a3,3,128,1) #block 3(B)
a5,b5 = boxy(a4,a4,3,128) #block 4
a6,b6 = boxy(a5,a5,3,128) #block 5
a7,b7 = boxy(a6,a6,3,128) #block 6
a8,b8 = boxy(a7,a7,3,128) #block 7
a9 =boxx(a8,3,64,1)[0] #block 8
a10 = tf.compat.v1.image.resize_bilinear(a9, (tf.shape(a9)[1]*2, tf.shape(a9)[2]*2))
a11 = Add()([a10,b2])
a11 = boxx(a11,3,64,1)[0]
a12 = boxx(a11,3,32,1)[0]
a13 = tf.compat.v1.image.resize_bilinear(a12, (tf.shape(a12)[1]*2, tf.shape(a12)[2]*2))
a14 = Add()([a13,b0])
a15 = Conv2D(32, (3 ,3),strides=(1,1), padding='same')(a14)
a16 = LeakyReLU()(a15)
a17 = Conv2D(3, (7 ,7),strides=(1,1), padding='same')(a16)
autoencoder =Model(input_img,a17)
from tensorflow.keras.utils import plot_model
plot_model(autoencoder, to_file='model.png', show_shapes=True, show_layer_names=True)
from IPython.display import Image
Image("model.png")
###Output
_____no_output_____
###Markdown
###Code
try:
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
tf.__version__
import tensorflow.keras as keras
import numpy as np
import math  # needed for math.ceil in Peterator.__len__
class Peterator(keras.utils.Sequence):
def __init__(self, data, target_col, lookback, batch_size = 256):
self.x, self.y = data, data[:,target_col]
self.lookback = lookback
self.batch_size = batch_size
self.indices = np.arange(self.x.shape[0])
def __len__(self):
return math.ceil(self.x.shape[0] / self.batch_size)
def __getitem__(self, idx):
rows = self.indices[idx * self.batch_size + self.lookback:(idx + 1) * self.batch_size + self.lookback]
samples = np.zeros((len(rows),
self.lookback,
np.shape(self.x)[-1]))
for i, row in enumerate(rows):
j = range(rows[i] - self.lookback, rows[i])
samples[i] = self.x[j]
batch_x = samples
batch_y = self.y[rows]
return batch_x, batch_y
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
col1, col2 = np.reshape(np.array(np.arange(0,100)), (-1, 1)), np.reshape(np.array(np.arange(100,200)), (-1, 1))
data = np.hstack((col1, col2))
y_ = col1.copy()
#test = Peterator(data = data, target_col = 0, lookback = 10, batch_size = 2)
tsgen = TimeseriesGenerator(data, y_, length = 3, batch_size = 10)
x, y = tsgen[-1]
print(x, y)
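# Sanity-check sketch (my own addition, not from the original notebook): with
# `lookback` equal to the generator's `length`, the custom Sequence above should
# line up with TimeseriesGenerator's first batch for the same data.
pgen = Peterator(data=data, target_col=0, lookback=3, batch_size=10)
px, py = pgen[0]
print(px.shape, py.shape)  # expected: (10, 3, 2) and (10,)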
np
# Generates sequential 3D batches to feed to the model
def generator(data, lookback, delay, min_index = 0, max_index = None,
shuffle = False, batch_size = 128, step = 1, target_col = 0):
# If max index not given, subtract prediction horizon - 1 (len to index) from last data point
if max_index is None:
max_index = len(data) - delay - 1
# Set i to first idx with valid lookback length behind it
i = min_index + lookback
while 1:
# Use shuffle for non-sequential data
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size = batch_size)
# Else for sequential (time series)
else:
# Check if adding batch exceeds index bounds
if i + batch_size >= max_index:
# Return i to beginning
i = min_index + lookback
# Select next valid row range
rows = np.arange(i, min(i + batch_size, max_index))
# Increment i
i += len(rows)
# Initialize sample and target arrays
samples = np.zeros((len(rows),
lookback // step,
np.shape(data)[-1]))
targets = np.zeros((len(rows),))
# Generate samples, targets
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][target_col]
yield samples, targets
###Output
_____no_output_____
###Markdown
Part 1: Data and Representation
###Code
import kaggle
import sqlite3
import json
import pandas as pd
import os.path
import matplotlib.pyplot as plt
from __future__ import print_function
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import SimpleRNN
from keras.layers import LSTM
from keras.layers import GRU
from keras.optimizers import Adam
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
import io
BASE_DIR = os.path.dirname(os.path.abspath('__file__'))
filename = 'database.sqlite'
db_path = os.path.join(BASE_DIR, filename)
conn = sqlite3.connect(db_path)
conn.text_factory = sqlite3.OptimizedUnicode
cur = conn.cursor()
comment = pd.read_sql_query("SELECT body FROM May2015 LIMIT 800", conn)
# comment = pd.read_sql_query("SELECT body FROM May2015", conn)
print(comment.head(4))
conn.close()
print('Data uploaded')
text = comment.body.str.cat(sep=' ')
text = text.encode('ascii', 'ignore')
text = text.decode('ascii', 'ignore')
print(text[1:40])
chars = sorted(list(set(text)))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
maxlen = 60
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
# One-hot encoding
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
###Output
_____no_output_____
###Markdown
Part 2: Training
###Code
model1 = Sequential()
model1.add(SimpleRNN(100, input_shape=(maxlen, len(chars)),use_bias=True, kernel_initializer='glorot_uniform'))
model1.add(Dense(len(chars)))
model1.add(Activation('softmax'))
optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
model1.compile(loss='categorical_crossentropy', optimizer=optimizer)
history1 = model1.fit(x, y,
batch_size=100,
epochs=100)
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
# Note: `sentence` below is the last window left over from the one-hot encoding
# loop above; reassign it (e.g. to a random slice of `text`) to seed the generator
# differently. `model3` and `history3` refer to the GRU model defined and trained in Part 3.
generated = ''
for i in range(400):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model3.predict(x_pred, verbose=0)[0]
next_index = sample(preds, 1)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
# Plot the loss
plt.plot(history3.history['loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()
###Output
_____no_output_____
###Markdown
Part 3: Experiments
###Code
model1a = Sequential()
model1a.add(SimpleRNN(50, input_shape=(maxlen, len(chars)),use_bias=True, kernel_initializer='glorot_uniform'))
model1a.add(Dense(len(chars)))
model1a.add(Activation('softmax'))
model1a.compile(loss='categorical_crossentropy', optimizer=optimizer)
history1a = model1a.fit(x, y,
batch_size=100,
epochs=100)
model1b = Sequential()
model1b.add(SimpleRNN(200, input_shape=(maxlen, len(chars)),use_bias=True, kernel_initializer='glorot_uniform'))
model1b.add(Dense(len(chars)))
model1b.add(Activation('softmax'))
model1b.compile(loss='categorical_crossentropy', optimizer=optimizer)
history1b = model1b.fit(x, y,
batch_size=100,
epochs=100)
model2 = Sequential()
model2.add(LSTM(100, input_shape=(maxlen, len(chars))))
model2.add(Dense(len(chars)))
model2.add(Activation('softmax'))
model2.compile(loss='categorical_crossentropy', optimizer=optimizer)
history2 = model2.fit(x, y,
batch_size=100,
epochs=100)
model3 = Sequential()
model3.add(GRU(100, input_shape=(maxlen, len(chars))))
model3.add(Dense(len(chars)))
model3.add(Activation('softmax'))
model3.compile(loss='categorical_crossentropy', optimizer=optimizer)
history3 = model3.fit(x, y,
batch_size=100,
epochs=100)
###Output
Epoch 1/100
4742/4742 [==============================] - 7s 2ms/step - loss: 3.2435
Epoch 2/100
4742/4742 [==============================] - 6s 1ms/step - loss: 2.7307
Epoch 3/100
4742/4742 [==============================] - 6s 1ms/step - loss: 2.5447
Epoch 4/100
4742/4742 [==============================] - 5s 1ms/step - loss: 2.4473
Epoch 5/100
4742/4742 [==============================] - 7s 1ms/step - loss: 2.3739
Epoch 6/100
4742/4742 [==============================] - 7s 1ms/step - loss: 2.3076
Epoch 7/100
4742/4742 [==============================] - 5s 1ms/step - loss: 2.2378
Epoch 8/100
4742/4742 [==============================] - 5s 1ms/step - loss: 2.1759
Epoch 9/100
4742/4742 [==============================] - 5s 1ms/step - loss: 2.1207
Epoch 10/100
4742/4742 [==============================] - 7s 1ms/step - loss: 2.0727
Epoch 11/100
4742/4742 [==============================] - 6s 1ms/step - loss: 2.0145
Epoch 12/100
4742/4742 [==============================] - 5s 1ms/step - loss: 1.9562
Epoch 13/100
4742/4742 [==============================] - 6s 1ms/step - loss: 1.9058
Epoch 14/100
4742/4742 [==============================] - 6s 1ms/step - loss: 1.8483
Epoch 15/100
4742/4742 [==============================] - 6s 1ms/step - loss: 1.7928
Epoch 16/100
4742/4742 [==============================] - 5s 1ms/step - loss: 1.7363
Epoch 17/100
4742/4742 [==============================] - 5s 1ms/step - loss: 1.6764
Epoch 18/100
4742/4742 [==============================] - 6s 1ms/step - loss: 1.6142
Epoch 19/100
4742/4742 [==============================] - 7s 2ms/step - loss: 1.5506
Epoch 20/100
4742/4742 [==============================] - 6s 1ms/step - loss: 1.4891
Epoch 21/100
4742/4742 [==============================] - 6s 1ms/step - loss: 1.4308
Epoch 22/100
4742/4742 [==============================] - 6s 1ms/step - loss: 1.3638
Epoch 23/100
4742/4742 [==============================] - 6s 1ms/step - loss: 1.3063
Epoch 24/100
4742/4742 [==============================] - 5s 1ms/step - loss: 1.2474
Epoch 25/100
4742/4742 [==============================] - 5s 1ms/step - loss: 1.1932
Epoch 26/100
4742/4742 [==============================] - 7s 1ms/step - loss: 1.1471
Epoch 27/100
4742/4742 [==============================] - 6s 1ms/step - loss: 1.0818
Epoch 28/100
4742/4742 [==============================] - 5s 1ms/step - loss: 1.0359
Epoch 29/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.9875
Epoch 30/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.9387
Epoch 31/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.8933
Epoch 32/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.8597
Epoch 33/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.8365
Epoch 34/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.7963
Epoch 35/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.7563
Epoch 36/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.7198
Epoch 37/100
4742/4742 [==============================] - 7s 1ms/step - loss: 0.6850
Epoch 38/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.6657
Epoch 39/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.6447
Epoch 40/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.6097
Epoch 41/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.5779
Epoch 42/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.5457
Epoch 43/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.5334
Epoch 44/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.5094
Epoch 45/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.4805
Epoch 46/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.4764
Epoch 47/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.4463
Epoch 48/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.4284
Epoch 49/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.4002
Epoch 50/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.3819
Epoch 51/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.3837
Epoch 52/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.3592
Epoch 53/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.3445
Epoch 54/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.3233
Epoch 55/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.3179
Epoch 56/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.3103
Epoch 57/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.2753
Epoch 58/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.2604
Epoch 59/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.2816
Epoch 60/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.2550
Epoch 61/100
4742/4742 [==============================] - 7s 1ms/step - loss: 0.2395
Epoch 62/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.2361
Epoch 63/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.2317
Epoch 64/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.2239
Epoch 65/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.2142
Epoch 66/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.2037
Epoch 67/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.2213
Epoch 68/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.2168
Epoch 69/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.1961
Epoch 70/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.1725
Epoch 71/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.1539
Epoch 72/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.1557
Epoch 73/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.1420
Epoch 74/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.1272
Epoch 75/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.1301
Epoch 76/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.1282
Epoch 77/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.1267
Epoch 78/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.1239
Epoch 79/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.1282
Epoch 80/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.1204
Epoch 81/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.1103
Epoch 82/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.0927
Epoch 83/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.0871
Epoch 84/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.0973
Epoch 85/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.1993
Epoch 86/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.3187
Epoch 87/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.1975
Epoch 88/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.1405
Epoch 89/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.1093
Epoch 90/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.0924
Epoch 91/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.0881
Epoch 92/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.0774
Epoch 93/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.0629
Epoch 94/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.0543
Epoch 95/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.0466
Epoch 96/100
4742/4742 [==============================] - 6s 1ms/step - loss: 0.0425
Epoch 97/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.0392
Epoch 98/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.0397
Epoch 99/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.0361
Epoch 100/100
4742/4742 [==============================] - 5s 1ms/step - loss: 0.0345
###Markdown
ascii_art_generator Import section
###Code
import os
import IPython.display
import emoji
import numpy
import zenhan
from PIL import Image, ImageDraw, ImageFont, ImageOps, ImageFilter
###Output
_____no_output_____
###Markdown
Parameters section Set the parameters below.
###Code
# Path to the input image
image_path = 'input.jpg'
# Characters used for the ASCII art (full-width)
# (the moon emoji are added by default)
ascii_chars = set()
# File name of the font used to render the characters
ascii_char_font_file_name = 'Symbola_hint'
# Font size used for the ASCII art
ascii_art_font_size = 20
# Parameters that determine the size of the ASCII art
ascii_art_size_params = {
    # TODO: to adjust the size based on a maximum size, assign the maximum size (width * height) below and uncomment it
    'max size': numpy.array([150, 150])
    # TODO: to adjust the size based on a maximum character count, assign the maximum count below and uncomment it
    # 'max str len': 140
}
###Output
_____no_output_____
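###Markdown
Note that the processing section below checks `'max size'` before `'max str len'`, so `'max size'` takes precedence if both keys are set. As a hedged illustration (the variable name and the value 140, which mirrors the commented-out default, are only for illustration), the character-count mode would look like this:
###Code
# Hypothetical alternative configuration: bound the ASCII art by total character count.
ascii_art_size_params_alt = {'max str len': 140}
###Output
_____no_output_____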
###Markdown
Function definitions section
###Code
def calc_mean_square_error(a1, a2):
"""
画像の平均二乗誤差を算出する
:param a1: 比較する画像1
:param a2: 比較する画像2
:return: 画像の平均二乗誤差
"""
return numpy.average(numpy.power(numpy.array(a1) - numpy.array(a2), 2))
###Output
_____no_output_____
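###Markdown
A quick sanity check of the helper on two tiny arrays (a hedged, illustrative example; these values are not part of the original notebook).
###Code
# Hypothetical check: two 2x2 "images" whose pixels all differ by 1.
a = numpy.array([[0, 0], [0, 0]])
b = numpy.array([[1, 1], [1, 1]])
print(calc_mean_square_error(a, b))  # expected: 1.0
###Output
_____no_output_____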
###Markdown
Processing section
###Code
# Add the moon emoji to the set of characters used for the ASCII art
for c in (':new_moon:', ':waxing_crescent_moon:', ':first_quarter_moon:', ':waxing_gibbous_moon:',
':waning_crescent_moon:', ':last_quarter_moon:', ':waning_gibbous_moon:', ':full_moon:'):
ascii_chars.add(emoji.emojize(c))
# Convert the ASCII-art characters to full-width
ascii_chars = {zenhan.h2z(c) for c in ascii_chars}
# Fonts used to render the characters
# ascii_fonts['ascii']: used when converting the image to ASCII art
# ascii_fonts['image']: used when rendering the output image
ascii_fonts = {'ascii': ImageFont.truetype(ascii_char_font_file_name),
'image': ImageFont.truetype(ascii_char_font_file_name, ascii_art_font_size)}
# Rendered character images
# ascii_images['ascii']: used when converting the image to ASCII art
# ascii_images['image']: used when rendering the output image
ascii_images = {'ascii': dict(), 'image': dict()}
# Character sizes
# ascii_size['ascii']['size']: image size used for ASCII-art conversion (width * height)
# ascii_size['ascii']['shape']: matrix size used for ASCII-art conversion (height * width)
# ascii_size['image']['size']: image size used for the output image (width * height)
# ascii_size['image']['shape']: matrix size used for the output image (height * width)
ascii_size = {'ascii': dict(), 'image': dict()}
# Render each character as a grayscale image
for kind in ascii_images.keys():
for ascii_char in ascii_chars:
ascii_char_image = Image.new('L', ascii_fonts[kind].getsize(ascii_char))
ImageDraw.Draw(ascii_char_image).text((0, 0), ascii_char, 'white', ascii_fonts[kind])
ascii_images[kind][ascii_char] = ImageOps.invert(ascii_char_image.crop(ascii_char_image.getbbox()))
ascii_size_ = numpy.array([mi.size for mi in ascii_images[kind].values()]).min()
ascii_size[kind]['size'] = numpy.array([ascii_size_ for _ in range(2)])
ascii_size[kind]['shape'] = ascii_size[kind]['size'][::-1]
for ascii_char in ascii_images[kind].keys():
        # Resize every character to a common size
ascii_images[kind][ascii_char] = ascii_images[kind][ascii_char].resize(ascii_size[kind]['size'],
Image.ANTIALIAS)
        # Convert the character image to a matrix
if kind == 'ascii':
ascii_images[kind][ascii_char] = numpy.array(ascii_images[kind][ascii_char])
# Images
# image['base']: original input image
# image['image']: image to be converted to ASCII art (a processed copy of the base image)
# image['ascii']: the image converted to a matrix
image = {'base': Image.open(image_path)}
# Size of the ASCII art (width * height)
ascii_art_size = numpy.array(image['base'].size) / ascii_size['ascii']['size']
if 'max size' in ascii_art_size_params:
argmax_image_size_ = numpy.argmax(image['base'].size)
max_ascii_art_size_ = ascii_art_size_params['max size'][argmax_image_size_]
image_size_ = numpy.max(image['base'].size)
if max_ascii_art_size_ * ascii_size['ascii']['size'][argmax_image_size_] < image_size_:
ascii_art_size = max_ascii_art_size_ * numpy.array(image['base'].size) / image_size_
elif 'max str len' in ascii_art_size_params:
if ascii_art_size_params['max str len'] * ascii_size['ascii']['size'].prod() < numpy.prod(image['base'].size):
ascii_art_size = numpy.sqrt(ascii_art_size_params['max str len'] * numpy.array(image['base'].size)
/ numpy.array(image['base'].size[::-1]))
# Truncate the ASCII-art size to whole numbers
ascii_art_size = ascii_art_size.astype(numpy.int64)
# Crop away the margins of the image
image['image'] = image['base'].crop(numpy.array(image['base'].getbbox()))
# Fill transparent regions of the image with white
if image['image'].mode == 'RGBA' or 'transparency' in image['image'].info:
image['image'] = Image.alpha_composite(Image.new(image['image'].mode, image['image'].size, 'white'),
image['image'])
# Sharpen the edges of the image
image['image'] = image['image'].filter(ImageFilter.UnsharpMask(10, 200, 5))
# Convert the image to grayscale
image['image'] = image['image'].convert('L')
# Resize the image
image['image'] = image['image'].resize(ascii_art_size * ascii_size['ascii']['size'], Image.ANTIALIAS)
# Convert the image to a matrix
image['ascii'] = numpy.array(image['image'])
# ASCII art
# ascii_art['ascii']: the ASCII art as text
# ascii_art['image']['image']: Image object of the ASCII art rendered as an image
# ascii_art['image']['draw']: ImageDraw object of the ASCII art rendered as an image
# ascii_art['matrix']: the ASCII art as a matrix
ascii_art = {'ascii': list(), 'image': dict()}
ascii_art['image']['image'] = Image.new('L', tuple(ascii_art_size * ascii_size['image']['size']), 'white')
ascii_art['image']['draw'] = ImageDraw.Draw(ascii_art['image']['image'])
# Convert the image to ASCII art
for i in range(image['ascii'].shape[0] // ascii_size['ascii']['shape'][0]):
ascii_art['ascii'].append('')
ascii_art_matrix_row = None
for j in range(image['ascii'].shape[1] // ascii_size['ascii']['shape'][1]):
        # Tile of the image
part_image = image['ascii'][ascii_size['ascii']['shape'][0] * i
:ascii_size['ascii']['shape'][0] * (i + 1),
ascii_size['ascii']['shape'][1] * j
:ascii_size['ascii']['shape'][1] * (j + 1)]
        # Smallest mean squared error found so far
min_error = None
        # Character most similar to this tile
min_ascii_char = None
        # Find the character most similar to this tile
for ascii_char, ascii_char_image in ascii_images['ascii'].items():
            # Mean squared error
error = calc_mean_square_error(part_image, ascii_char_image)
if min_error is None or error < min_error:
min_error = error
min_ascii_char = ascii_char
        # Append the most similar character to the ASCII art
ascii_art['ascii'][-1] += min_ascii_char
ascii_art['image']['draw'].bitmap((ascii_size['image']['size'][0] * j,
ascii_size['image']['size'][1] * i),
ascii_images['image'][min_ascii_char])
if ascii_art_matrix_row is None:
ascii_art_matrix_row = numpy.array(ascii_images['ascii'][min_ascii_char])
else:
ascii_art_matrix_row = numpy.hstack((ascii_art_matrix_row, ascii_images['ascii'][min_ascii_char]))
if 'matrix' in ascii_art:
ascii_art['matrix'] = numpy.vstack((ascii_art['matrix'], ascii_art_matrix_row))
else:
ascii_art['matrix'] = ascii_art_matrix_row
# Crop away the margins of the ASCII art (image)
ascii_art['image']['image'] = ascii_art['image']['image'].crop(ascii_art['image']['image'].getbbox())
# Invert the colors of the ASCII art (image)
ascii_art['image']['image'] = ImageOps.invert(ascii_art['image']['image'])
###Output
_____no_output_____
###Markdown
Display section - Input: characters used for the ASCII art
###Code
IPython.display.display(IPython.display.HTML('<b>【{} types】</b>'.format(len(ascii_chars))))
print(' '.join(ascii_chars))
###Output
_____no_output_____
###Markdown
Input image
###Code
IPython.display.display(IPython.display.HTML('<b>【{} * {}】</b>'.format(*image['base'].size)))
image['base']
###Output
_____no_output_____
###Markdown
Intermediate representation - Image to be converted to ASCII art
###Code
IPython.display.display(IPython.display.HTML('<b>【{} * {}】</b>'.format(*image['image'].size)))
image['image']
###Output
_____no_output_____
###Markdown
Output - ASCII art (text)
###Code
IPython.display.display(IPython.display.HTML('<b>【{} * {}】</b>'.format(*ascii_art_size)))
print(os.linesep.join(ascii_art['ascii']))
###Output
_____no_output_____
###Markdown
ASCII art (image)
###Code
IPython.display.display(IPython.display.HTML('<b>【{} * {}】</b>'.format(*ascii_art['image']['image'].size)))
ascii_art['image']['image']
###Output
_____no_output_____
###Markdown
Evaluation section - Mean squared error
###Code
IPython.display.HTML('<b>{}</b>'.format(calc_mean_square_error(image['ascii'], ascii_art['matrix'])))
###Output
_____no_output_____
###Markdown
Libraries
###Code
import pandas as pd
import datetime
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from collections import Counter
import os
from argparse import Namespace
import unidecode
import random
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import LabelEncoder
###Output
_____no_output_____
###Markdown
Data preprocessing
###Code
reviews = pd.read_table('data/reviews.tsv')
reviews.dropna(subset=['content'], inplace=True)
def get_data_from_dataframe(df, batch_size, seq_size):
    # Build the vocabulary and encode the review text as integer token ids.
    text = " ".join(df.content.apply(unidecode.unidecode).values.flatten())
    text = text.split()
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {k: w for k, w in enumerate(sorted_vocab)}
    vocab_to_int = {w: k for k, w in int_to_vocab.items()}
    n_vocab = len(int_to_vocab)
    print('Vocabulary size', n_vocab)
    int_text = [vocab_to_int[w] for w in text]
    # Trim the corpus so it divides evenly into (batch_size, seq_size) blocks;
    # the target sequence is the input sequence shifted by one token.
    num_batches = int(len(int_text) / (seq_size * batch_size))
    in_text = int_text[:num_batches * batch_size * seq_size]
    out_text = np.zeros_like(in_text)
    out_text[:-1] = in_text[1:]
    out_text[-1] = in_text[0]
    in_text = np.reshape(in_text, (batch_size, -1))
    out_text = np.reshape(out_text, (batch_size, -1))
    return int_to_vocab, vocab_to_int, n_vocab, in_text, out_text
def get_batches(in_text, out_text, batch_size, seq_size):
    # Yield consecutive (input, target) windows of seq_size tokens per batch row.
    num_batches = np.prod(in_text.shape) // (seq_size * batch_size)
    for i in range(0, num_batches * seq_size, seq_size):
        yield in_text[:, i:i+seq_size], out_text[:, i:i+seq_size]
###Output
_____no_output_____
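###Markdown
A minimal sketch of what `get_batches` yields, using a toy corpus of six token ids (illustrative only; these arrays are not part of the original data).
###Code
# Hypothetical: one batch row split into two windows of seq_size=3.
toy_in = np.arange(6).reshape(1, -1)   # [[0 1 2 3 4 5]]
toy_out = np.roll(toy_in, -1)          # next-token targets: [[1 2 3 4 5 0]]
for x, y in get_batches(toy_in, toy_out, batch_size=1, seq_size=3):
    print(x, y)  # first [[0 1 2]] [[1 2 3]], then [[3 4 5]] [[4 5 0]]
###Output
_____no_output_____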
###Markdown
KNN
###Code
def column_label_encoding(df, le_colname):
le = LabelEncoder()
df[le_colname] = le.fit_transform(df[le_colname].to_list())
return df
def mean_score_encoding(df, grouping_columns, target_columns):
for target_column in target_columns:
mean_group = df.groupby(grouping_columns)[target_column].mean().reset_index()
mean_group.columns = grouping_columns + ['_'.join(grouping_columns) + '_mean_' + target_column]
df = df.merge(mean_group, on=grouping_columns)
return df
def convert_string_to_date(date_time_str):
conversion = datetime.datetime.strptime(date_time_str, "%Y-%m-%d")
return conversion
def change_date_for_column(df, column):
return df[column].apply(convert_string_to_date)
def preprocess_knn(df):
df = df[["artist",
"score",
"pub_date",
"best_new_music",
"genre",
"label",
"acousticness",
"danceability",
"energy",
"instrumental",
"liveness",
"loudness",
"speechiness",
"tempo",
"valence",
"popularity"
]]
df = column_label_encoding(df, 'label')
df = mean_score_encoding(df, ['artist'], ['score'])
df['pub_date'] = change_date_for_column(df, 'pub_date')
df['pub_date'] = pd.to_numeric(df['pub_date'], errors='coerce')
df = pd.get_dummies(df.drop(["artist"], axis=1))
return df
def perform_KNN(df, n_neighbors):
    # Fit a nearest-neighbour model on the feature matrix and return, for every
    # album, the positional indices of its n_neighbors most similar albums.
    neighs = NearestNeighbors(n_neighbors=n_neighbors)
    neighs.fit(df)
    _, indices = neighs.kneighbors(df)
    return indices
# Get nearest neighbors for each album
reviews_knn = preprocess_knn(reviews)
neighborhoods = perform_KNN(reviews_knn, 16)
# Selecting a random album to generate a review
random_review_index = random.randint(0, reviews.shape[0] - 1)
test_album = reviews.iloc[random_review_index]
reviews = reviews.drop([random_review_index])
# Selecting reviews from nearest neighbors
reviews_from_cluster = reviews.iloc[neighborhoods[random_review_index]]
###Output
_____no_output_____
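###Markdown
For reference, `NearestNeighbors.kneighbors` returns one row of positional indices per album, so the held-out album's neighbourhood can be inspected directly (a hedged example; the slice below is only for illustration).
###Code
# neighborhoods has shape (n_albums, 16); the first entry of a row is typically the album itself.
print(neighborhoods.shape)
print(neighborhoods[random_review_index][:5])
###Output
_____no_output_____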
###Markdown
RNN
###Code
class RNNModule(nn.Module):
def __init__(self, n_vocab, seq_size, embedding_size, lstm_size):
super(RNNModule, self).__init__()
self.seq_size = seq_size
self.lstm_size = lstm_size
self.embedding = nn.Embedding(n_vocab, embedding_size)
self.lstm = nn.LSTM(embedding_size,
lstm_size,
batch_first=True)
self.dense = nn.Linear(lstm_size, n_vocab)
def forward(self, x, prev_state):
embed = self.embedding(x)
output, state = self.lstm(embed, prev_state)
logits = self.dense(output)
return logits, state
def zero_state(self, batch_size):
return (torch.zeros(1, batch_size, self.lstm_size),
torch.zeros(1, batch_size, self.lstm_size))
def get_loss_and_train_op(net, lr=0.001):
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
return criterion, optimizer
flags = Namespace(
seq_size=32,
batch_size=32,
embedding_size=64,
lstm_size=64,
gradients_norm=5,
initial_words=['This', 'album'],
predict_top_k=5,
checkpoint_path='checkpoint',
)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
reviews_index_to_train = reviews_from_cluster.index.tolist() + random.sample(range(reviews.shape[0]), 256)
int_to_vocab, vocab_to_int, n_vocab, in_text, out_text = get_data_from_dataframe(
reviews.iloc[reviews_index_to_train],
flags.batch_size,
flags.seq_size
)
net = RNNModule(n_vocab, flags.seq_size, flags.embedding_size, flags.lstm_size)
net = net.to(device)
criterion, optimizer = get_loss_and_train_op(net, 0.05)
iteration = 0
###Output
Vocabulary size 36315
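###Markdown
A minimal shape check of `RNNModule` on dummy input, assuming a toy vocabulary of 100 words (a hedged sketch; these sizes are arbitrary and not part of the original experiment).
###Code
# Hypothetical forward pass: a batch of 4 sequences of 32 token ids.
toy_net = RNNModule(n_vocab=100, seq_size=32, embedding_size=64, lstm_size=64)
toy_x = torch.randint(0, 100, (4, 32))
toy_state = toy_net.zero_state(4)
toy_logits, toy_state = toy_net(toy_x, toy_state)
print(toy_logits.shape)  # torch.Size([4, 32, 100]): per-token logits over the vocabulary
###Output
_____no_output_____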
###Markdown
Training
###Code
def predict(device, net, words, n_vocab, vocab_to_int, int_to_vocab, top_k=5):
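    # Note: `words` is appended to in place, so passing flags.initial_words here means the
    # generated text accumulates across repeated calls to predict() during training.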
net.eval()
state_h, state_c = net.zero_state(1)
state_h = state_h.to(device)
state_c = state_c.to(device)
for w in words:
ix = torch.tensor([[vocab_to_int[w]]]).to(device)
output, (state_h, state_c) = net(ix, (state_h, state_c))
_, top_ix = torch.topk(output[0], k=top_k)
choices = top_ix.tolist()
choice = np.random.choice(choices[0])
words.append(int_to_vocab[choice])
for _ in range(100):
ix = torch.tensor([[choice]]).to(device)
output, (state_h, state_c) = net(ix, (state_h, state_c))
_, top_ix = torch.topk(output[0], k=top_k)
choices = top_ix.tolist()
choice = np.random.choice(choices[0])
words.append(int_to_vocab[choice])
print(' '.join(words))
n_epochs = 10
for e in range(n_epochs):
batches = get_batches(in_text, out_text, flags.batch_size, flags.seq_size)
state_h, state_c = net.zero_state(flags.batch_size)
# Transfer data to GPU
state_h = state_h.to(device)
state_c = state_c.to(device)
for x, y in batches:
iteration += 1
# Tell it we are in training mode
net.train()
# Reset all gradients
optimizer.zero_grad()
# Transfer data to GPU
x = torch.tensor(x).to(device)
y = torch.tensor(y).to(device)
logits, (state_h, state_c) = net(x, (state_h, state_c))
loss = criterion(logits.transpose(1, 2), y)
state_h = state_h.detach()
state_c = state_c.detach()
loss_value = loss.item()
# Perform back-propagation
loss.backward()
_ = torch.nn.utils.clip_grad_norm_(net.parameters(), flags.gradients_norm)
# Update the network's parameters
optimizer.step()
if iteration % 100 == 0:
print('Epoch: {}/{}'.format(e, n_epochs),
'Iteration: {}'.format(iteration),
'Loss: {}'.format(loss_value))
predict(device, net, flags.initial_words, n_vocab,
vocab_to_int, int_to_vocab, top_k=5)
###Output
Epoch: 0/10 Iteration: 100 Loss: 8.423583984375
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time
Epoch: 1/10 Iteration: 200 Loss: 7.23267936706543
Epoch: 1/10 Iteration: 300 Loss: 7.3150129318237305
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is
Epoch: 2/10 Iteration: 400 Loss: 6.872315406799316
Epoch: 2/10 Iteration: 500 Loss: 6.709580421447754
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an
Epoch: 3/10 Iteration: 600 Loss: 6.541629314422607
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an artist like its most of his own merits, in the first time that makes me the music for a more misleading writer but it to be the band that makes it to be the most much that makes me as if you're and the same songs is an album since there's no regrets," of a bit that the most part, as well and in their first creation, comfortably that the music is a little bit to make a few hundred of its first teaser that it the same album high-concept with an album for a lot more of a bit too
Epoch: 4/10 Iteration: 700 Loss: 6.4283037185668945
Epoch: 4/10 Iteration: 800 Loss: 5.946113586425781
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an artist like its most of his own merits, in the first time that makes me the music for a more misleading writer but it to be the band that makes it to be the most much that makes me as if you're and the same songs is an album since there's no regrets," of a bit that the most part, as well and in their first creation, comfortably that the music is a little bit to make a few hundred of its first teaser that it the same album high-concept with an album for a lot more of a bit too seriously. The most important Corgan a lot with its way to be catchy, In the first few harmonically narrowly scenes with his best porno mag, but a more vibrant moment on a bit more of her music for an entire wispy, blue-eyed he the music that the first two the first few years, the first two and and and and his best porno guitar lines for an album that is his first album is that makes a lot of her own content." and is that the album is the music that makes something for the music that the first place. a
Epoch: 5/10 Iteration: 900 Loss: 5.896408557891846
Epoch: 5/10 Iteration: 1000 Loss: 5.655226230621338
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an artist like its most of his own merits, in the first time that makes me the music for a more misleading writer but it to be the band that makes it to be the most much that makes me as if you're and the same songs is an album since there's no regrets," of a bit that the most part, as well and in their first creation, comfortably that the music is a little bit to make a few hundred of its first teaser that it the same album high-concept with an album for a lot more of a bit too seriously. The most important Corgan a lot with its way to be catchy, In the first few harmonically narrowly scenes with his best porno mag, but a more vibrant moment on a bit more of her music for an entire wispy, blue-eyed he the music that the first two the first few years, the first two and and and and his best porno guitar lines for an album that is his first album is that makes a lot of her own content." and is that the album is the music that makes something for the music that the first place. a bit too quick out in its songs are an inimitable Stoltz in order for his first few years is as well to his own devices, who was recorded is as a few years ago in the same first few spins, that is no advance the same music in the most inventive) The album's track the most inventive) The album's title Teargarden for the music that it seems too seriously. the most part, as the music in the most innovative The album's songs is the most successful. Parker and a bit too quick by Lou Byrne (drums), Nicole chirp in its own.
Epoch: 6/10 Iteration: 1100 Loss: 5.5624589920043945
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an artist like its most of his own merits, in the first time that makes me the music for a more misleading writer but it to be the band that makes it to be the most much that makes me as if you're and the same songs is an album since there's no regrets," of a bit that the most part, as well and in their first creation, comfortably that the music is a little bit to make a few hundred of its first teaser that it the same album high-concept with an album for a lot more of a bit too seriously. The most important Corgan a lot with its way to be catchy, In the first few harmonically narrowly scenes with his best porno mag, but a more vibrant moment on a bit more of her music for an entire wispy, blue-eyed he the music that the first two the first few years, the first two and and and and his best porno guitar lines for an album that is his first album is that makes a lot of her own content." and is that the album is the music that makes something for the music that the first place. a bit too quick out in its songs are an inimitable Stoltz in order for his first few years is as well to his own devices, who was recorded is as a few years ago in the same first few spins, that is no advance the same music in the most inventive) The album's track the most inventive) The album's title Teargarden for the music that it seems too seriously. the most part, as the music in the most innovative The album's songs is the most successful. Parker and a bit too quick by Lou Byrne (drums), Nicole chirp in its own. flourish the same the album that can take on his first single that the music is not to his career, But the music on The title references the most part, is not an album since a little more misleading founts with its own voice.asdfasdf's chord. of its own. He to his most Stephin Lyon of a bit in its own. years is no matter as he is an ongoing clocks in a bit in his songs in the music is an apt to do the first two people more like it is the album that it would make big-budget music that
Epoch: 7/10 Iteration: 1200 Loss: 5.477556228637695
Epoch: 7/10 Iteration: 1300 Loss: 5.364377498626709
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an artist like its most of his own merits, in the first time that makes me the music for a more misleading writer but it to be the band that makes it to be the most much that makes me as if you're and the same songs is an album since there's no regrets," of a bit that the most part, as well and in their first creation, comfortably that the music is a little bit to make a few hundred of its first teaser that it the same album high-concept with an album for a lot more of a bit too seriously. The most important Corgan a lot with its way to be catchy, In the first few harmonically narrowly scenes with his best porno mag, but a more vibrant moment on a bit more of her music for an entire wispy, blue-eyed he the music that the first two the first few years, the first two and and and and his best porno guitar lines for an album that is his first album is that makes a lot of her own content." and is that the album is the music that makes something for the music that the first place. a bit too quick out in its songs are an inimitable Stoltz in order for his first few years is as well to his own devices, who was recorded is as a few years ago in the same first few spins, that is no advance the same music in the most inventive) The album's track the most inventive) The album's title Teargarden for the music that it seems too seriously. the most part, as the music in the most innovative The album's songs is the most successful. Parker and a bit too quick by Lou Byrne (drums), Nicole chirp in its own. flourish the same the album that can take on his first single that the music is not to his career, But the music on The title references the most part, is not an album since a little more misleading founts with its own voice.asdfasdf's chord. of its own. He to his most Stephin Lyon of a bit in its own. years is no matter as he is an ongoing clocks in a bit in his songs in the music is an apt to do the first two people more like it is the album that it would make big-budget music that it is no mistake, that is the first two notable achievements that is the album of the music is an and his most uplifting is a bit that to his voice that it seems designed away in the most successful. 
she concluded of an own terms, that he it seems like her salt, she concluded he the first single "Touch say, watched Colombo." to do the music on the same time, she the most innovative or his most of an uncanny affirms that to the most uplifting is the most part, he is no longer commit from an album of his
Epoch: 8/10 Iteration: 1400 Loss: 5.227553367614746
Epoch: 8/10 Iteration: 1500 Loss: 5.318310260772705
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an artist like its most of his own merits, in the first time that makes me the music for a more misleading writer but it to be the band that makes it to be the most much that makes me as if you're and the same songs is an album since there's no regrets," of a bit that the most part, as well and in their first creation, comfortably that the music is a little bit to make a few hundred of its first teaser that it the same album high-concept with an album for a lot more of a bit too seriously. The most important Corgan a lot with its way to be catchy, In the first few harmonically narrowly scenes with his best porno mag, but a more vibrant moment on a bit more of her music for an entire wispy, blue-eyed he the music that the first two the first few years, the first two and and and and his best porno guitar lines for an album that is his first album is that makes a lot of her own content." and is that the album is the music that makes something for the music that the first place. a bit too quick out in its songs are an inimitable Stoltz in order for his first few years is as well to his own devices, who was recorded is as a few years ago in the same first few spins, that is no advance the same music in the most inventive) The album's track the most inventive) The album's title Teargarden for the music that it seems too seriously. the most part, as the music in the most innovative The album's songs is the most successful. Parker and a bit too quick by Lou Byrne (drums), Nicole chirp in its own. flourish the same the album that can take on his first single that the music is not to his career, But the music on The title references the most part, is not an album since a little more misleading founts with its own voice.asdfasdf's chord. of its own. He to his most Stephin Lyon of a bit in its own. years is no matter as he is an ongoing clocks in a bit in his songs in the music is an apt to do the first two people more like it is the album that it would make big-budget music that it is no mistake, that is the first two notable achievements that is the album of the music is an and his most uplifting is a bit that to his voice that it seems designed away in the most successful. 
she concluded of an own terms, that he it seems like her salt, she concluded he the first single "Touch say, watched Colombo." to do the music on the same time, she the most innovative or his most of an uncanny affirms that to the most uplifting is the most part, he is no longer commit from an album of his work is a more significant Not the album that to a little more of its unique magnetism that he was all the album the album the album is an entire and that the music in a bit of its own whiskey onslaughts Mudhoney rips to do that it was recorded in a lot with a lot that is a more complicated clatter it makes it was all that is no longer genuinely and a little more of her own and a few regrettable prominent The album's final The Hives states or the same songs in the music that it was the
Epoch: 9/10 Iteration: 1600 Loss: 4.927720546722412
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an artist like its most of his own merits, in the first time that makes me the music for a more misleading writer but it to be the band that makes it to be the most much that makes me as if you're and the same songs is an album since there's no regrets," of a bit that the most part, as well and in their first creation, comfortably that the music is a little bit to make a few hundred of its first teaser that it the same album high-concept with an album for a lot more of a bit too seriously. The most important Corgan a lot with its way to be catchy, In the first few harmonically narrowly scenes with his best porno mag, but a more vibrant moment on a bit more of her music for an entire wispy, blue-eyed he the music that the first two the first few years, the first two and and and and his best porno guitar lines for an album that is his first album is that makes a lot of her own content." and is that the album is the music that makes something for the music that the first place. a bit too quick out in its songs are an inimitable Stoltz in order for his first few years is as well to his own devices, who was recorded is as a few years ago in the same first few spins, that is no advance the same music in the most inventive) The album's track the most inventive) The album's title Teargarden for the music that it seems too seriously. the most part, as the music in the most innovative The album's songs is the most successful. Parker and a bit too quick by Lou Byrne (drums), Nicole chirp in its own. flourish the same the album that can take on his first single that the music is not to his career, But the music on The title references the most part, is not an album since a little more misleading founts with its own voice.asdfasdf's chord. of its own. He to his most Stephin Lyon of a bit in its own. years is no matter as he is an ongoing clocks in a bit in his songs in the music is an apt to do the first two people more like it is the album that it would make big-budget music that it is no mistake, that is the first two notable achievements that is the album of the music is an and his most uplifting is a bit that to his voice that it seems designed away in the most successful. 
she concluded of an own terms, that he it seems like her salt, she concluded he the first single "Touch say, watched Colombo." to do the music on the same time, she the most innovative or his most of an uncanny affirms that to the most uplifting is the most part, he is no longer commit from an album of his work is a more significant Not the album that to a little more of its unique magnetism that he was all the album the album the album is an entire and that the music in a bit of its own whiskey onslaughts Mudhoney rips to do that it was recorded in a lot with a lot that is a more complicated clatter it makes it was all that is no longer genuinely and a little more of her own and a few regrettable prominent The album's final The Hives states or the same songs in the music that it was the first single These songs that the first two of their most and is a few well-regarded album is no the same portion of her husband with a little rougher, coarser. And the album is no matter that he was not only as a few years ago in the first single & gee-golly-mister of a bit that to do it is not a little more vibrant moment a few slaps in their lives with its way on his first album" Jewelry fibrillating out of his music on his first album" the album is not a lot to be to do a few
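###Markdown
The `checkpoint_path` flag defined earlier does not appear to be used in the loop above. As a hedged sketch (not part of the original notebook), the trained weights could be persisted and restored like this:
###Code
# Hypothetical checkpointing using the otherwise unused flags.checkpoint_path value.
os.makedirs(flags.checkpoint_path, exist_ok=True)
torch.save(net.state_dict(), os.path.join(flags.checkpoint_path, 'rnn_reviews.pt'))
# Later: net.load_state_dict(torch.load(os.path.join(flags.checkpoint_path, 'rnn_reviews.pt')))
###Output
_____no_output_____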
###Markdown
Retrain
###Code
seq_size = 32
batch_size = 8
text = " ".join(reviews.iloc[reviews_from_cluster.index].content.apply(unidecode.unidecode).values.flatten())
text = text.split()
int_text = [vocab_to_int[w] for w in text]
num_batches = int(len(int_text) / (seq_size * batch_size))
in_text = int_text[:num_batches * batch_size * seq_size]
out_text = np.zeros_like(in_text)
out_text[:-1] = in_text[1:]
out_text[-1] = in_text[0]
in_text = np.reshape(in_text, (batch_size, -1))
out_text = np.reshape(out_text, (batch_size, -1))
n_epochs = 3
for e in range(n_epochs):
batches = get_batches(in_text, out_text, batch_size, seq_size)
state_h, state_c = net.zero_state(batch_size)
# Transfer data to GPU
state_h = state_h.to(device)
state_c = state_c.to(device)
for x, y in batches:
iteration += 1
# Tell it we are in training mode
net.train()
# Reset all gradients
optimizer.zero_grad()
# Transfer data to GPU
x = torch.tensor(x).to(device)
y = torch.tensor(y).to(device)
logits, (state_h, state_c) = net(x, (state_h, state_c))
loss = criterion(logits.transpose(1, 2), y)
state_h = state_h.detach()
state_c = state_c.detach()
loss_value = loss.item()
# Perform back-propagation
loss.backward()
_ = torch.nn.utils.clip_grad_norm_(net.parameters(), flags.gradients_norm)
# Update the network's parameters
optimizer.step()
if iteration % 100 == 0:
print('Epoch: {}/{}'.format(e, n_epochs),
'Iteration: {}'.format(iteration),
'Loss: {}'.format(loss_value))
predict(device, net, flags.initial_words, n_vocab,
vocab_to_int, int_to_vocab, top_k=5)
###Output
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an artist like its most of his own merits, in the first time that makes me the music for a more misleading writer but it to be the band that makes it to be the most much that makes me as if you're and the same songs is an album since there's no regrets," of a bit that the most part, as well and in their first creation, comfortably that the music is a little bit to make a few hundred of its first teaser that it the same album high-concept with an album for a lot more of a bit too seriously. The most important Corgan a lot with its way to be catchy, In the first few harmonically narrowly scenes with his best porno mag, but a more vibrant moment on a bit more of her music for an entire wispy, blue-eyed he the music that the first two the first few years, the first two and and and and his best porno guitar lines for an album that is his first album is that makes a lot of her own content." and is that the album is the music that makes something for the music that the first place. a bit too quick out in its songs are an inimitable Stoltz in order for his first few years is as well to his own devices, who was recorded is as a few years ago in the same first few spins, that is no advance the same music in the most inventive) The album's track the most inventive) The album's title Teargarden for the music that it seems too seriously. the most part, as the music in the most innovative The album's songs is the most successful. Parker and a bit too quick by Lou Byrne (drums), Nicole chirp in its own. flourish the same the album that can take on his first single that the music is not to his career, But the music on The title references the most part, is not an album since a little more misleading founts with its own voice.asdfasdf's chord. of its own. He to his most Stephin Lyon of a bit in its own. years is no matter as he is an ongoing clocks in a bit in his songs in the music is an apt to do the first two people more like it is the album that it would make big-budget music that it is no mistake, that is the first two notable achievements that is the album of the music is an and his most uplifting is a bit that to his voice that it seems designed away in the most successful. 
she concluded of an own terms, that he it seems like her salt, she concluded he the first single "Touch say, watched Colombo." to do the music on the same time, she the most innovative or his most of an uncanny affirms that to the most uplifting is the most part, he is no longer commit from an album of his work is a more significant Not the album that to a little more of its unique magnetism that he was all the album the album the album is an entire and that the music in a bit of its own whiskey onslaughts Mudhoney rips to do that it was recorded in a lot with a lot that is a more complicated clatter it makes it was all that is no longer genuinely and a little more of her own and a few regrettable prominent The album's final The Hives states or the same songs in the music that it was the first single These songs that the first two of their most and is a few well-regarded album is no the same portion of her husband with a little rougher, coarser. And the album is no matter that he was not only as a few years ago in the first single & gee-golly-mister of a bit that to do it is not a little more vibrant moment a few slaps in their lives with its way on his first album" Jewelry fibrillating out of his music on his first album" the album is not a lot to be to do a few minutes and his catalog-- influences. record sounds fresh, latest track about them.Sifting the album's best known to a little short Now" This as a song in a song in the songs evoke and it doesn't fuss with "Today", and it imbues it carries to hear the Branches that Blackshaw spent a little more subtlety. And it doesn't fuss in Blemish's in a band to the album with a song titles like a series Veronica like Fennesz from a little bit too stale rumbles but not just as he solos, down the songs evoke as he solos, Trash fingers these songs, the album's most noticeable but also the album's mood. Ayatollah is consistent, (though says a series on "In his voice are as it doesn't fuss to a band name the Rockets finally Blackshaw has the songs are uniformly Lonely" But in this celebration while the Rockets get in this lengthy in this disjointed and unyielding, and a tough and chug-- that's tastefully Perhaps their blood a series of a short six of the Rockets work is the songs is a tough in this celebration while Blemish as a series of his brother in this celebration that I, if Pet Sounds that begins
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an artist like its most of his own merits, in the first time that makes me the music for a more misleading writer but it to be the band that makes it to be the most much that makes me as if you're and the same songs is an album since there's no regrets," of a bit that the most part, as well and in their first creation, comfortably that the music is a little bit to make a few hundred of its first teaser that it the same album high-concept with an album for a lot more of a bit too seriously. The most important Corgan a lot with its way to be catchy, In the first few harmonically narrowly scenes with his best porno mag, but a more vibrant moment on a bit more of her music for an entire wispy, blue-eyed he the music that the first two the first few years, the first two and and and and his best porno guitar lines for an album that is his first album is that makes a lot of her own content." and is that the album is the music that makes something for the music that the first place. a bit too quick out in its songs are an inimitable Stoltz in order for his first few years is as well to his own devices, who was recorded is as a few years ago in the same first few spins, that is no advance the same music in the most inventive) The album's track the most inventive) The album's title Teargarden for the music that it seems too seriously. the most part, as the music in the most innovative The album's songs is the most successful. Parker and a bit too quick by Lou Byrne (drums), Nicole chirp in its own. flourish the same the album that can take on his first single that the music is not to his career, But the music on The title references the most part, is not an album since a little more misleading founts with its own voice.asdfasdf's chord. of its own. He to his most Stephin Lyon of a bit in its own. years is no matter as he is an ongoing clocks in a bit in his songs in the music is an apt to do the first two people more like it is the album that it would make big-budget music that it is no mistake, that is the first two notable achievements that is the album of the music is an and his most uplifting is a bit that to his voice that it seems designed away in the most successful. 
she concluded of an own terms, that he it seems like her salt, she concluded he the first single "Touch say, watched Colombo." to do the music on the same time, she the most innovative or his most of an uncanny affirms that to the most uplifting is the most part, he is no longer commit from an album of his work is a more significant Not the album that to a little more of its unique magnetism that he was all the album the album the album is an entire and that the music in a bit of its own whiskey onslaughts Mudhoney rips to do that it was recorded in a lot with a lot that is a more complicated clatter it makes it was all that is no longer genuinely and a little more of her own and a few regrettable prominent The album's final The Hives states or the same songs in the music that it was the first single These songs that the first two of their most and is a few well-regarded album is no the same portion of her husband with a little rougher, coarser. And the album is no matter that he was not only as a few years ago in the first single & gee-golly-mister of a bit that to do it is not a little more vibrant moment a few slaps in their lives with its way on his first album" Jewelry fibrillating out of his music on his first album" the album is not a lot to be to do a few minutes and his catalog-- influences. record sounds fresh, latest track about them.Sifting the album's best known to a little short Now" This as a song in a song in the songs evoke and it doesn't fuss with "Today", and it imbues it carries to hear the Branches that Blackshaw spent a little more subtlety. And it doesn't fuss in Blemish's in a band to the album with a song titles like a series Veronica like Fennesz from a little bit too stale rumbles but not just as he solos, down the songs evoke as he solos, Trash fingers these songs, the album's most noticeable but also the album's mood. Ayatollah is consistent, (though says a series on "In his voice are as it doesn't fuss to a band name the Rockets finally Blackshaw has the songs are uniformly Lonely" But in this celebration while the Rockets get in this lengthy in this disjointed and unyielding, and a tough and chug-- that's tastefully Perhaps their blood a series of a short six of the Rockets work is the songs is a tough in this celebration while Blemish as a series of his brother in this celebration that I, if Pet Sounds that begins on "In an outsized array and goes straight into a short flicks.) from the little girls That's with which he's a short six which is loud up in this celebration on "In an album full than a song in this style is the Branches shows it explores you I mentally scene several tracks. The band's album is loud an undead detail ensures the album together, the album together, these tightly-wound songs that bombshell, which it explores you didn't hear all the album's most impressive are an instrumental. It is loud in this style enables and a band be back stronger to
Epoch: 2/3 Iteration: 1800 Loss: 3.197965145111084
This album that and in its own to the music of a bit that the music of an of its music to his music to a and the album is a few and and the first to be a few of an artist in a more and a bit to a and of the first to the album is the album is his of his first in his own to be a more in the same and is a little in its same of a little and and of his own the album the first to an to be and the first time of the album's track of his own jazz to an attempt in a little more of the first couple of his music on its most innovative on the other own and and as an artist the most part, and is the most the music that is the music and the music and is that he sings, as an artist and the album's of his own own and a little bit and and as well for a few of an entire of its first house, a little more is as well for his and is an artist and is that he is an album is his banjo of a bit and the music is that it seems to make you know, he is the first time the music on his beat in their most in an obsolete the most and the most eloquent sessions that is an impromptu in their career, the same and in their own terms, of her but he a lot to make a few years, it the first time that and and a few Vernon are as if you know, are a few album a bit and in an Afrofuturist of an impromptu and as if it was an artist like its most of his own merits, in the first time that makes me the music for a more misleading writer but it to be the band that makes it to be the most much that makes me as if you're and the same songs is an album since there's no regrets," of a bit that the most part, as well and in their first creation, comfortably that the music is a little bit to make a few hundred of its first teaser that it the same album high-concept with an album for a lot more of a bit too seriously. The most important Corgan a lot with its way to be catchy, In the first few harmonically narrowly scenes with his best porno mag, but a more vibrant moment on a bit more of her music for an entire wispy, blue-eyed he the music that the first two the first few years, the first two and and and and his best porno guitar lines for an album that is his first album is that makes a lot of her own content." and is that the album is the music that makes something for the music that the first place. a bit too quick out in its songs are an inimitable Stoltz in order for his first few years is as well to his own devices, who was recorded is as a few years ago in the same first few spins, that is no advance the same music in the most inventive) The album's track the most inventive) The album's title Teargarden for the music that it seems too seriously. the most part, as the music in the most innovative The album's songs is the most successful. Parker and a bit too quick by Lou Byrne (drums), Nicole chirp in its own. flourish the same the album that can take on his first single that the music is not to his career, But the music on The title references the most part, is not an album since a little more misleading founts with its own voice.asdfasdf's chord. of its own. He to his most Stephin Lyon of a bit in its own. years is no matter as he is an ongoing clocks in a bit in his songs in the music is an apt to do the first two people more like it is the album that it would make big-budget music that it is no mistake, that is the first two notable achievements that is the album of the music is an and his most uplifting is a bit that to his voice that it seems designed away in the most successful. 
she concluded of an own terms, that he it seems like her salt, she concluded he the first single "Touch say, watched Colombo." to do the music on the same time, she the most innovative or his most of an uncanny affirms that to the most uplifting is the most part, he is no longer commit from an album of his work is a more significant Not the album that to a little more of its unique magnetism that he was all the album the album the album is an entire and that the music in a bit of its own whiskey onslaughts Mudhoney rips to do that it was recorded in a lot with a lot that is a more complicated clatter it makes it was all that is no longer genuinely and a little more of her own and a few regrettable prominent The album's final The Hives states or the same songs in the music that it was the first single These songs that the first two of their most and is a few well-regarded album is no the same portion of her husband with a little rougher, coarser. And the album is no matter that he was not only as a few years ago in the first single & gee-golly-mister of a bit that to do it is not a little more vibrant moment a few slaps in their lives with its way on his first album" Jewelry fibrillating out of his music on his first album" the album is not a lot to be to do a few minutes and his catalog-- influences. record sounds fresh, latest track about them.Sifting the album's best known to a little short Now" This as a song in a song in the songs evoke and it doesn't fuss with "Today", and it imbues it carries to hear the Branches that Blackshaw spent a little more subtlety. And it doesn't fuss in Blemish's in a band to the album with a song titles like a series Veronica like Fennesz from a little bit too stale rumbles but not just as he solos, down the songs evoke as he solos, Trash fingers these songs, the album's most noticeable but also the album's mood. Ayatollah is consistent, (though says a series on "In his voice are as it doesn't fuss to a band name the Rockets finally Blackshaw has the songs are uniformly Lonely" But in this celebration while the Rockets get in this lengthy in this disjointed and unyielding, and a tough and chug-- that's tastefully Perhaps their blood a series of a short six of the Rockets work is the songs is a tough in this celebration while Blemish as a series of his brother in this celebration that I, if Pet Sounds that begins on "In an outsized array and goes straight into a short flicks.) from the little girls That's with which he's a short six which is loud up in this celebration on "In an album full than a song in this style is the Branches shows it explores you I mentally scene several tracks. The band's album is loud an undead detail ensures the album together, the album together, these tightly-wound songs that bombshell, which it explores you didn't hear all the album's most impressive are an instrumental. It is loud in this style enables and a band be back stronger to rights. the album's is not the little of a song in a song that bombshell, I Love than great. done works with which starts himself with which he's gobbled and goes straight to be catchy, a few hundred you'd work to 90s add would fit the music has the little more disappointing is a song titles are still strong, as a band quickly a short last record has insisted a short thing the band members than great. done on "Endless Ace's Disposable Utilizing nuclear war. and goes with which starts My Bubble of course, is the band that the band to
###Markdown
Data Synthesis
###Code
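# Outline of this cell (comment added for readability): fit a beta distribution
# to the real node counts and sample synthetic node counts from it; fit Bayesian
# polynomial regressions (nodes -> edges, average degree -> clustering
# coefficient) in log10 space with PyMC3 and draw posterior-predictive samples;
# finally synthesise the power-law exponent, diameter and radius from fitted or
# empirical distributions.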
import random
import numpy as np
import pandas as pd
import pymc3 as pm
from scipy import stats
from scipy.stats import beta
# find parameters of the beta distribution fitted to the real node counts
a, b, loc, scale = stats.beta.fit(df.no_of_nodes)
# sample values from fitted distribution
node_samples = beta.rvs(a,b, loc,scale, size=100)
# log transformation
df=np.log10(df)
# bayesian model edges
edges_values = df['no_of_edges'].values.reshape(-1, 1)
with pm.Model() as regression_model_nodes_edges:
nodes_values = pm.Data("nodes_values", df['no_of_nodes'].values.reshape(-1, 1))
alpha = pm.Normal('alpha', mu = 15.13437422, sd =.5)
beta = pm.Normal('beta', mu = -28.74787971, sd = .5)
gamma = pm.Normal('gamma', mu = 22.3704513, sd = .5)
delta = pm.Normal('delta', mu = -7.47195216, sd = .5)
zeta = pm.Normal('zeta', mu = 0.94787594, sd = .5)
epsilon = pm.HalfNormal('epsilon', sd = .01)
by_mean = alpha + beta * nodes_values + gamma * nodes_values**2 + delta * nodes_values**3 + zeta * nodes_values**4
Ylikelihood = pm.Normal('Ylikelihood', mu = by_mean, sd = epsilon, observed = edges_values)
step = pm.NUTS()
regression_trace_nodes_edges = pm.sample(1000, step, chains=1)
# log transformation of sampled nodes
node_samples_transformed=np.log10(node_samples)
with regression_model_nodes_edges:
pm.set_data({"nodes_values": np.array(node_samples_transformed).reshape(-1, 1)})
posterior_predictive_nodes_edges = pm.sample_posterior_predictive(regression_trace_nodes_edges)
samples_nodes_edges=posterior_predictive_nodes_edges
# select from different models
sample_no=random.choices(range(1000),k=100)
samples_lst=[]
for x in range(100):
model_number = sample_no[x]
model_value = samples_nodes_edges['Ylikelihood'][model_number][x]
samples_lst.append(model_value[0])
full_prior_samples = pd.DataFrame(samples_lst, columns=['no_of_edges'])
full_prior_samples['no_of_nodes']=np.array(node_samples_transformed)
full_prior_samples=full_prior_samples.drop_duplicates().sort_values(by='no_of_nodes')
df_synthetic=full_prior_samples[['no_of_nodes','no_of_edges']]
df_synthetic=10**df_synthetic
df_synthetic['average_degree']=(df_synthetic['no_of_edges']*2)/(df_synthetic['no_of_nodes'])
df_synthetic=np.log10(df_synthetic)
# bayesian model CC
avg_clustering_values = df['average_clustering_coefficient'].values.reshape(-1, 1)
with pm.Model() as regression_model_degree_clustering:
avg_degree_values = pm.Data("avg_degree_values", df['average_degree'].values.reshape(-1, 1))
alpha = pm.Normal('alpha', mu = -2.57778591, sd =.5)
beta = pm.Normal('beta', mu = 8.67701697, sd = .5)
gamma = pm.Normal('gamma', mu = -11.64685216, sd = .5)
delta = pm.Normal('delta', mu = 6.97481184, sd = .5)
zeta = pm.Normal('zeta', mu = -1.55689117, sd = .5)
epsilon = pm.HalfNormal('epsilon', sd = 0.001)
by_mean = alpha + beta * avg_degree_values + gamma * avg_degree_values**2 + delta * avg_degree_values**3 + zeta * avg_degree_values**4
Ylikelihood = pm.Normal('Ylikelihood', mu = by_mean, sd = epsilon, observed = avg_clustering_values)
step = pm.NUTS()
regression_trace_degree_clustering = pm.sample(1000, step, chains=1)
with regression_model_degree_clustering:
pm.set_data({"avg_degree_values": df_synthetic['average_degree'].values.reshape(-1, 1)})
posterior_predictive_degree_clustering = pm.sample_posterior_predictive(regression_trace_degree_clustering)
samples_degree_clustering=posterior_predictive_degree_clustering
# select from different models
sample_no=random.choices(range(1000),k=100)
samples_lst=[]
for x in range(100):
model_number = sample_no[x]
model_value = samples_degree_clustering['Ylikelihood'][model_number][x]
samples_lst.append(model_value[0])
full_prior_samples = pd.DataFrame(samples_lst, columns=['average_clustering_coefficient'])
full_prior_samples['average_degree']=df_synthetic.average_degree.values
full_prior_samples=full_prior_samples.drop_duplicates().sort_values(by='average_degree')
full_prior_samples=full_prior_samples.reset_index(drop=True)
df_synthetic=df_synthetic.sort_values(by='average_degree').reset_index(drop=True)
df_synthetic['average_clustering_coefficient']=full_prior_samples['average_clustering_coefficient']
df_synthetic=10**df_synthetic
df_synthetic.no_of_nodes=df_synthetic.no_of_nodes.round(0)
df_synthetic.no_of_edges=df_synthetic.no_of_edges.round(0)
df_synthetic=df_synthetic.sort_values(by='no_of_nodes').reset_index(drop=True)
df=data.iloc[:,[0,1,2,3,7,8,19]]
df=df.sort_values(by='average_degree').reset_index(drop=True)
# powerlaw exponent synthesis
from scipy.stats import exponnorm
#fit
a, loc,scale = stats.exponnorm.fit(df.powerlaw_exponent)
# sample values
samples_powerlaw = exponnorm.rvs(a, loc,scale, size=100)
df_synthetic['powerlaw_exponent'] = samples_powerlaw
df_synthetic=df_synthetic.sort_values(by='no_of_nodes').reset_index(drop=True)
df=df.sort_values(by='no_of_nodes').reset_index(drop=True)
# diameter synthesis
from collections import Counter
from scipy import stats
def get_distribution(dist):
# choose distribution
if dist==1:
lowerBound=0;upperBound=30
elif dist==2:
lowerBound=30;upperBound=50
elif dist==3:
lowerBound=50;upperBound=77
dias=df.diameter[lowerBound:upperBound]
counts=dict(Counter(dias))
# find counts
probabs_abs=[]
lst=list(range(1,9))
    for value in lst:
        probabs_abs.append(counts.get(value, 0))
# find probabilites
probabs=[float(i)/sum(probabs_abs) for i in probabs_abs]
discrete_dist = stats.rv_discrete(name='custm', values=(lst, probabs))
return discrete_dist
# get length of each groups
sample1_len = len(df_synthetic[df_synthetic.no_of_nodes<=df.no_of_nodes[30]])
sample2_len = len(df_synthetic[(df_synthetic.no_of_nodes>df.no_of_nodes[30]) & (df_synthetic.no_of_nodes<=df.no_of_nodes[50])])
sample3_len = len(df_synthetic[df_synthetic.no_of_nodes>df.no_of_nodes[50]])
samples_dia1=get_distribution(1).rvs(size=sample1_len)
samples_dia2=get_distribution(2).rvs(size=sample2_len)
samples_dia3=get_distribution(3).rvs(size=sample3_len)
dias_lst=list(samples_dia1)
dias_lst.extend(list(samples_dia2));dias_lst.extend(list(samples_dia3))
#dias_lst
df_synthetic['diameter']=dias_lst
# getting radius
import math
df_synthetic['radius']=df_synthetic['diameter'].apply(lambda x: math.ceil(x/2))
rand_choice=random.choices([0,1], weights = [0.961,0.039], k = 100) # weights based on given data
df_synthetic['radius']=df_synthetic['radius']+rand_choice
df_synthetic=df_synthetic.sort_values(by='no_of_nodes').reset_index(drop=True)
df_synthetic=df_synthetic[['no_of_nodes','no_of_edges','average_degree','average_clustering_coefficient','diameter','radius','powerlaw_exponent']]
#df_synthetic.to_csv('scripts/data/Network_Metrics_synthetic_dataset2.csv')
df_synthetic
###Output
_____no_output_____
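###Markdown
A quick optional sanity check (a sketch; it assumes `df` and `df_synthetic` from the cell above are still in memory): compare a synthetic marginal against the real data before using the generated table downstream.
###Code
# Hypothetical spot check: summary statistics of real vs. synthetic node counts
print(df['no_of_nodes'].describe())
print(df_synthetic['no_of_nodes'].describe())
###Output
_____no_output_____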
###Markdown
Synthetic Topology Generator small_sized
###Code
import random
from random import choices
import networkx as nx
def small_sized():
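    """
    Generate a small synthetic topology. Sketch of the strategy (comment added
    for readability): pick a target row from the synthetic metrics dataset
    (df2), resample a degree sequence from real networks of similar size and
    average degree (df), build a Havel-Hakimi graph from it, and accept the
    graph once the average clustering coefficient and diameter constraints are
    met; tolerances are relaxed progressively if no graph is found within the
    retry budget, and failed constructions are simply skipped.
    Returns (G, df_degreedistribution, our_choice).
    """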
degreedist = random.choice(['HD','Norm'])
if degreedist=='HD':
high_hub=True
Norm_hub=False
else:
high_hub=False
Norm_hub=True
if high_hub:
while True:
# random pick network to generate from synthetic dataset
our_choice = random.choice(df2.index)
extreme_range = True
interval=df2.no_of_nodes[our_choice]*0.3
low=df2.no_of_nodes[our_choice]-interval
high=df2.no_of_nodes[our_choice]
#get similar degree distribution
df_degreedistribution=df[(low<=df.no_of_nodes) & (df.no_of_nodes<=high)]
#get similar average degree
index_degree=[]
our_degree = df2.average_degree[our_choice]
for index in df_degreedistribution.index:
if (our_degree-2 > df_degreedistribution['average_degree'][index]) | ( df_degreedistribution['average_degree'][index] > our_degree+2):
index_degree.append(index)
df_degreedistribution = df_degreedistribution.drop(index_degree)
#filter based on high difference of max degree threshold
index_degree2=[]
for index in df_degreedistribution.index:
if (df_degreedistribution.no_of_nodes[index]-max(df_degreedistribution.degree_of_nodes[index]))/df_degreedistribution.no_of_nodes[index]>0.2:
index_degree2.append(index)
if len(index_degree2)>0:
break
all_degrees=[]
for indexx in index_degree2:
for degreee in df_degreedistribution.degree_of_nodes[indexx]:
all_degrees.append(degreee)
#define measurement based networks
change_range = True
edge_corners=df2.no_of_edges[our_choice]*2
syn_nodes=int(df2.no_of_nodes[our_choice])
cluster_coef = round(df2.average_clustering_coefficient[our_choice],2)
dia = df2.diameter[our_choice]
threshold=0
while True:
threshold+=1
if threshold>=100000:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
#constraint on average degree
if edge_corners-10<=sum(nodes_degree_list)<=edge_corners+10:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>5:
continue
else:
#reducing components
for f in range(len(components)-1):
for target in list(components[f+1]):
source = random.choice(list(components[0])[:5])
g.add_edge(source, target)
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
#constraint on average clustering coefficient
if cluster_coef-0.01<=round((nx.average_clustering(G)),2)<=cluster_coef+0.01:
if nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
if extreme_range:
interval=df2.no_of_nodes[our_choice]*0.3
low=df2.no_of_nodes[our_choice]-interval
high=df2.no_of_nodes[our_choice]
#get similar degree distribution
df_degreedistribution=df[(low<=df.no_of_nodes) & (df.no_of_nodes<=high)]
#get similar average degree
index_degree=[]
our_degree = df2.average_degree[our_choice]
for index in df_degreedistribution.index:
if (our_degree-2 > df_degreedistribution['average_degree'][index]) | ( df_degreedistribution['average_degree'][index] > our_degree+2):
index_degree.append(index)
df_degreedistribution = df_degreedistribution.drop(index_degree)
#filter based on high difference of max degree threshold
index_degree2=[]
for index in df_degreedistribution.index:
if (df_degreedistribution.no_of_nodes[index]-max(df_degreedistribution.degree_of_nodes[index]))/df_degreedistribution.no_of_nodes[index]>0.1:
index_degree2.append(index)
all_degrees=[]
for indexx in index_degree2:
for degreee in df_degreedistribution.degree_of_nodes[indexx]:
all_degrees.append(degreee)
#define measurement based networks
edge_corners=df2.no_of_edges[our_choice]*2
syn_nodes=int(df2.no_of_nodes[our_choice])
cluster_coef = round(df2.average_clustering_coefficient[our_choice],2)
dia = df2.diameter[our_choice]
threshold=0
while extreme_range:
threshold+=1
if threshold>=100000:
Norm_hub=True
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
#constraint on average degree
if edge_corners-10<=sum(nodes_degree_list)<=edge_corners+10:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>8:
continue
else:
#reducing components
for f in range(len(components)-1):
for target in list(components[f+1]):
source = random.choice(list(components[0])[:3])
g.add_edge(source, target)
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
#constraint on average clustering coefficient
if cluster_coef-0.1<=round((nx.average_clustering(G)),2)<=cluster_coef+0.1:
if nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
#if extreme_range:
# Norm_hub=True
if Norm_hub:
while True:
# random pick network to generate from synthetic dataset
our_choice = random.choice(df2.index)
extreme_range = True
interval=df2.no_of_nodes[our_choice]*0.3
low=df2.no_of_nodes[our_choice]-interval
high=df2.no_of_nodes[our_choice]
#get similar degree distribution
df_degreedistribution=df[(low<=df.no_of_nodes) & (df.no_of_nodes<=high)]
#get similar average degree
index_degree=[]
our_degree = df2.average_degree[our_choice]
for index in df_degreedistribution.index:
if (our_degree-2 > df_degreedistribution['average_degree'][index]) | ( df_degreedistribution['average_degree'][index] > our_degree+2):
index_degree.append(index)
df_degreedistribution = df_degreedistribution.drop(index_degree)
if len(df_degreedistribution.iloc[:,:4])>0:
break
all_degrees=[]
for indexx in df_degreedistribution.index:
for degreee in df_degreedistribution.degree_of_nodes[indexx]:
all_degrees.append(degreee)
#define measurement based networks
change_range = True
edge_corners=df2.no_of_edges[our_choice]*2
syn_nodes=int(df2.no_of_nodes[our_choice])
cluster_coef = round(df2.average_clustering_coefficient[our_choice],2)
dia = df2.diameter[our_choice]
threshold=0
while True:
threshold+=1
if threshold>=10000:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
#constraint on average degree
if edge_corners-10<=sum(nodes_degree_list)<=edge_corners+10:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>1:
continue
else:
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
#constraint on average clustering coefficient
if cluster_coef-0.01<=round((nx.average_clustering(G)),2)<=cluster_coef+0.01:
if (dia-1)<=nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
threshold=0
while change_range:
threshold+=1
if threshold>=10000:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
if edge_corners-30<=sum(nodes_degree_list)<=edge_corners+30:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>1:
continue
else:
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
if cluster_coef-0.02<=round((nx.average_clustering(G)),2)<=cluster_coef+0.02:
if (dia-1)<=nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
if extreme_range:
interval=df2.no_of_nodes[our_choice]*0.5
low=df2.no_of_nodes[our_choice]-interval
high=df2.no_of_nodes[our_choice]
df_degreedistribution=df[(low<=df.no_of_nodes) & (df.no_of_nodes<=high)]
all_degrees=[]
for indexx in df_degreedistribution.index:
for degreee in df_degreedistribution.degree_of_nodes[indexx]:
all_degrees.append(degreee)
threshold=0
while extreme_range:
threshold+=1
if threshold>=100000:
print('graph generation failed! please try again')
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
if edge_corners-40<=sum(nodes_degree_list)<=edge_corners+40:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>10:
continue
else:
#reducing components here
for f in range(len(components)-1):
source = list(components[0])[0]
target = random.choice(list(components[f+1]))
g.add_edge(source, target)
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
if cluster_coef-0.3<=round((nx.average_clustering(G)),2)<=cluster_coef+0.3:
if nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
return G,df_degreedistribution,our_choice
###Output
_____no_output_____
###Markdown
medium_sized
###Code
def medium_sized():
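    """
    Same strategy as small_sized(), tuned for medium-sized targets: wider
    edge-count tolerances, more connected components allowed before rewiring,
    and looser clustering/diameter bounds in the fallback loops.
    Returns (G, df_degreedistribution, our_choice).
    """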
degreedist = random.choice(['HD','Norm'])
if degreedist=='HD':
high_hub=True
Norm_hub=False
else:
high_hub=False
Norm_hub=True
if high_hub:
while True:
# random pick network to generate from synthetic dataset
our_choice = random.choice(df2.index)
extreme_range = True
interval=df2.no_of_nodes[our_choice]*0.3
low=df2.no_of_nodes[our_choice]-interval
high=df2.no_of_nodes[our_choice]
#get similar degree distribution
df_degreedistribution=df[(low<=df.no_of_nodes) & (df.no_of_nodes<=high)]
#get similar average degree
index_degree=[]
our_degree = df2.average_degree[our_choice]
for index in df_degreedistribution.index:
if (our_degree-2 > df_degreedistribution['average_degree'][index]) | ( df_degreedistribution['average_degree'][index] > our_degree+2):
index_degree.append(index)
df_degreedistribution = df_degreedistribution.drop(index_degree)
#filter based on high difference of max degree threshold
index_degree2=[]
for index in df_degreedistribution.index:
if (df_degreedistribution.no_of_nodes[index]-max(df_degreedistribution.degree_of_nodes[index]))/df_degreedistribution.no_of_nodes[index]>0.2:
index_degree2.append(index)
if len(index_degree2)>0:
break
all_degrees=[]
for indexx in index_degree2:
for degreee in df_degreedistribution.degree_of_nodes[indexx]:
all_degrees.append(degreee)
#define measurement based networks
change_range = True
edge_corners=df2.no_of_edges[our_choice]*2
syn_nodes=int(df2.no_of_nodes[our_choice])
cluster_coef = round(df2.average_clustering_coefficient[our_choice],2)
dia = df2.diameter[our_choice]
threshold=0
while True:
threshold+=1
if threshold>=100000:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
#constraint on average degree
if edge_corners-20<=sum(nodes_degree_list)<=edge_corners+20:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>10:
continue
else:
#reducing components
for f in range(len(components)-1):
for target in list(components[f+1]):
source = random.choice(list(components[0])[:5])
g.add_edge(source, target)
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
#constraint on average clustering coefficient
if cluster_coef-0.01<=round((nx.average_clustering(G)),2)<=cluster_coef+0.01:
if nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
threshold=0
while change_range:
threshold+=1
if threshold>=10000:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
#constraint on average degree
if edge_corners-20<=sum(nodes_degree_list)<=edge_corners+20:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>20:
continue
else:
#reducing components
for f in range(len(components)-1):
for target in list(components[f+1]):
source = random.choice(list(components[0])[:5])
g.add_edge(source, target)
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
#constraint on average clustering coefficient
if cluster_coef-0.1<=round((nx.average_clustering(G)),2)<=cluster_coef+0.1:
if nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
threshold=0
while extreme_range:
threshold+=1
if threshold>=100000:
Norm_hub=True
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
#constraint on average degree
if edge_corners-20<=sum(nodes_degree_list)<=edge_corners+20:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>20:
continue
else:
#reducing components
for f in range(len(components)-1):
for target in list(components[f+1]):
source = random.choice(list(components[0])[:5])
g.add_edge(source, target)
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
#constraint on average clustering coefficient
if cluster_coef-0.2<=round((nx.average_clustering(G)),2)<=cluster_coef+0.2:
if nx.diameter(G)<=dia+1:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
if Norm_hub:
while True:
# random pick network to generate from synthetic dataset
our_choice = random.choice(df2.index)
extreme_range = True
interval=df2.no_of_nodes[our_choice]*0.3
low=df2.no_of_nodes[our_choice]-interval
high=df2.no_of_nodes[our_choice]
#get similar degree distribution
df_degreedistribution=df[(low<=df.no_of_nodes) & (df.no_of_nodes<=high)]
#get similar average degree
index_degree=[]
our_degree = df2.average_degree[our_choice]
for index in df_degreedistribution.index:
if (our_degree-2 > df_degreedistribution['average_degree'][index]) | ( df_degreedistribution['average_degree'][index] > our_degree+2):
index_degree.append(index)
df_degreedistribution = df_degreedistribution.drop(index_degree)
if len(df_degreedistribution.iloc[:,:4])>0:
break
all_degrees=[]
for indexx in df_degreedistribution.index:
for degreee in df_degreedistribution.degree_of_nodes[indexx]:
all_degrees.append(degreee)
#define measurement based networks
change_range = True
edge_corners=df2.no_of_edges[our_choice]*2
syn_nodes=int(df2.no_of_nodes[our_choice])
cluster_coef = round(df2.average_clustering_coefficient[our_choice],2)
dia = df2.diameter[our_choice]
threshold=0
while True:
threshold+=1
if threshold>=10000:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
#constraint on average degree
if edge_corners-20<=sum(nodes_degree_list)<=edge_corners+20:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>10:
continue
else:
#reduce components
for f in range(len(components)-1):
source = list(components[0])[0]
target = random.choice(list(components[f+1]))
g.add_edge(source, target)
G=g
#constraint on average clustering coefficient
if cluster_coef-0.01<=round((nx.average_clustering(G)),2)<=cluster_coef+0.01:
if (dia-1)<=nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
threshold=0
while change_range:
threshold+=1
if threshold>=10000:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
if edge_corners-30<=sum(nodes_degree_list)<=edge_corners+30:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>10:
continue
else:
#reduce components
for f in range(len(components)-1):
source = list(components[0])[0]
target = random.choice(list(components[f+1]))
g.add_edge(source, target)
G=g
if cluster_coef-0.1<=round((nx.average_clustering(G)),2)<=cluster_coef+0.1:
if nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
if extreme_range:
interval=df2.no_of_nodes[our_choice]*0.3
low=df2.no_of_nodes[our_choice]-interval
high=df2.no_of_nodes[our_choice]
df_degreedistribution=df[(low<=df.no_of_nodes) & (df.no_of_nodes<=high)]
all_degrees=[]
for indexx in df_degreedistribution.index:
for degreee in df_degreedistribution.degree_of_nodes[indexx]:
all_degrees.append(degreee)
threshold=0
while extreme_range:
threshold+=1
if threshold>=1000000:
print('graph generation failed! please try again')
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
if edge_corners-50<=sum(nodes_degree_list)<=edge_corners+50:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>30:
continue
else:
#reduce components
for f in range(len(components)-1):
source = list(components[0])[0]
target = random.choice(list(components[f+1]))
g.add_edge(source, target)
G=g
if cluster_coef-0.3<=round((nx.average_clustering(G)),2)<=cluster_coef+0.3:
if nx.diameter(G)<=dia+5:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
return G,df_degreedistribution,our_choice
###Output
_____no_output_____
###Markdown
large_sized
###Code
def large_sized():
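    """
    Same strategy again for the largest targets: the degree-distribution branch
    is biased towards 'Norm' (p=0.7), the retry budgets depend on the target
    size, and the edge-count tolerance scales with the number of nodes.
    Returns (G, df_degreedistribution, our_choice).
    """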
degreedist = np.random.choice(['HD','Norm'],p=[0.3,0.7])
if degreedist=='HD':
high_hub=True
Norm_hub=False
else:
high_hub=False
Norm_hub=True
if high_hub:
while True:
# random pick network to generate from synthetic dataset
our_choice = random.choice(df2.index)
extreme_range = True
interval=df2.no_of_nodes[our_choice]*0.3
low=df2.no_of_nodes[our_choice]-interval
high=df2.no_of_nodes[our_choice]
#get similar degree distribution
df_degreedistribution=df[(low<=df.no_of_nodes) & (df.no_of_nodes<=high)]
#get similar average degree
index_degree=[]
our_degree = df2.average_degree[our_choice]
for index in df_degreedistribution.index:
if (our_degree-2 > df_degreedistribution['average_degree'][index]) | ( df_degreedistribution['average_degree'][index] > our_degree+2):
index_degree.append(index)
df_degreedistribution = df_degreedistribution.drop(index_degree)
#filter based on high difference of max degree threshold
index_degree2=[]
for index in df_degreedistribution.index:
if (df_degreedistribution.no_of_nodes[index]-max(df_degreedistribution.degree_of_nodes[index]))/df_degreedistribution.no_of_nodes[index]>0.2:
index_degree2.append(index)
if len(index_degree2)>0:
break
all_degrees=[]
for indexx in index_degree2:
for degreee in df_degreedistribution.degree_of_nodes[indexx]:
all_degrees.append(degreee)
#define measurement based networks
change_range = True
edge_corners=df2.no_of_edges[our_choice]*2
syn_nodes=int(df2.no_of_nodes[our_choice])
cluster_coef = round(df2.average_clustering_coefficient[our_choice],2)
dia = df2.diameter[our_choice]
if syn_nodes>300:
thres1,thres2=10,1000
else:
thres1,thres2=10000,10000
threshold=0
while True:
threshold+=1
if threshold>=thres1:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
#constraint on average degree
if edge_corners-syn_nodes<=sum(nodes_degree_list)<=edge_corners+syn_nodes:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>20:
continue
else:
#reducing components
for f in range(len(components)-1):
for target in list(components[f+1]):
source = random.choice(list(components[0])[:5])
g.add_edge(source, target)
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
#constraint on average clustering coefficient
if cluster_coef-0.15<=round((nx.average_clustering(G)),2)<=cluster_coef+0.15:
if nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
threshold=0
while change_range:
threshold+=1
if threshold>=thres2:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
inter = 1.5*syn_nodes
#constraint on average degree
if edge_corners-inter<=sum(nodes_degree_list)<=edge_corners+inter:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>35:
continue
else:
#reducing components
for f in range(len(components)-1):
for target in list(components[f+1]):
source = random.choice(list(components[0])[:5])
g.add_edge(source, target)
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
#constraint on average clustering coefficient
if cluster_coef-0.15<=round((nx.average_clustering(G)),2)<=cluster_coef+0.15:
if nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
threshold=0
while extreme_range:
threshold+=1
if threshold>=10000:
Norm_hub=True
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
inter = 1.5*syn_nodes
#constraint on average degree
if edge_corners-inter<=sum(nodes_degree_list)<=edge_corners+inter:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>50:
continue
else:
#reducing components
for f in range(len(components)-1):
for target in list(components[f+1]):
source = random.choice(list(components[0])[:5])
g.add_edge(source, target)
G=[g.subgraph(c).copy() for c in sorted(nx.connected_components(g), key=len, reverse=True)][0]
#constraint on average clustering coefficient
if cluster_coef-0.2<=round((nx.average_clustering(G)),2)<=cluster_coef+0.2:
change_range=False
extreme_range=False
break
else:
continue
except:
continue
if Norm_hub:
while True:
# random pick network to generate from synthetic dataset
our_choice = random.choice(df2.index)
extreme_range = True
interval=df2.no_of_nodes[our_choice]*0.3
low=df2.no_of_nodes[our_choice]-interval
high=df2.no_of_nodes[our_choice]
#get similar degree distribution
df_degreedistribution=df[(low<=df.no_of_nodes) & (df.no_of_nodes<=high)]
#get similar average degree
index_degree=[]
our_degree = df2.average_degree[our_choice]
for index in df_degreedistribution.index:
if (our_degree-2 > df_degreedistribution['average_degree'][index]) | ( df_degreedistribution['average_degree'][index] > our_degree+2):
index_degree.append(index)
df_degreedistribution = df_degreedistribution.drop(index_degree)
if len(df_degreedistribution.iloc[:,:4])>0:
break
all_degrees=[]
for indexx in df_degreedistribution.index:
for degreee in df_degreedistribution.degree_of_nodes[indexx]:
all_degrees.append(degreee)
#define measurement based networks
change_range = True
edge_corners=df2.no_of_edges[our_choice]*2
syn_nodes=int(df2.no_of_nodes[our_choice])
cluster_coef = round(df2.average_clustering_coefficient[our_choice],2)
dia = df2.diameter[our_choice]
if syn_nodes>340:
thres1,thres2=1,1000
else:
thres1,thres2=1000,10000
threshold=0
while True:
threshold+=1
if threshold>=thres1:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
#constraint on average degree
inter = syn_nodes
if edge_corners-inter<=sum(nodes_degree_list)<=edge_corners+inter:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
#generate havel hakimi network from sampled degree distribution
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>50:
continue
else:
#reduce components
for f in range(len(components)-1):
source = list(components[0])[0]
target = random.choice(list(components[f+1]))
g.add_edge(source, target)
G=g
#constraint on average clustering coefficient
if cluster_coef-0.15<=round((nx.average_clustering(G)),2)<=cluster_coef+0.15:
if nx.diameter(G)<=dia:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
threshold=0
while change_range:
threshold+=1
if threshold>=thres2:
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
inter = 1.5*syn_nodes
if edge_corners-inter<=sum(nodes_degree_list)<=edge_corners+inter:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>50:
continue
else:
#reduce components
for f in range(len(components)-1):
source = list(components[0])[0]
target = random.choice(list(components[f+1]))
g.add_edge(source, target)
G=g
if cluster_coef-0.2<=round((nx.average_clustering(G)),2)<=cluster_coef+0.2:
if nx.diameter(G)<=dia+5:
change_range=False
extreme_range=False
break
else:
continue
else:
continue
except:
continue
threshold=0
while extreme_range:
threshold+=1
if threshold>=10000:
print('graph generation failed! please try again')
break
nodes_degree_list=choices(all_degrees,k=syn_nodes)
inter = 2*syn_nodes
if edge_corners-inter<=sum(nodes_degree_list)<=edge_corners+inter:
try:
sort_degree_list=sorted(nodes_degree_list,reverse=True)
g=nx.havel_hakimi_graph(sort_degree_list, create_using=None)
components = dict(enumerate(nx.connected_components(g)))
if len(components)>50:
continue
else:
#reduce components
for f in range(len(components)-1):
source = list(components[0])[0]
target = random.choice(list(components[f+1]))
g.add_edge(source, target)
G=g
if cluster_coef-0.3<=round((nx.average_clustering(G)),2)<=cluster_coef+0.3:
change_range=False
extreme_range=False
break
else:
continue
except:
continue
return G,df_degreedistribution,our_choice
###Output
_____no_output_____
###Markdown
generate network
###Code
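# Workflow of this cell (comment added for readability): load the real network
# metrics, pick a size group at random, generate a synthetic topology with the
# matching generator above, then rank the cliques of the generated graph by a
# bug probability predicted with a Ridge regression fitted on centrality features.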
#load real dataset
import pickle
from IPython.display import display, HTML
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
with open("scripts/data/Network_Metrics_real_dataset.pkl", "rb") as a_file:
    data = pickle.load(a_file)
data=data.drop(columns=['repo'])
data=data.sort_values(by='no_of_nodes').reset_index(drop=True)
df=data.iloc[:,:]
# choose group from small_sized,medium_sized,large_sized
choosen_group = random.choice(['small_sized','medium_sized','large_sized'])
#load synthetic dataset
df2=pd.read_csv('scripts/data/Network_Metrics_synthetic_dataset.csv',index_col=0)
df2=df2.sort_values(by='no_of_nodes').reset_index(drop=True)
iterate_dic = {'small_sized':[0,max(df2[df2.no_of_nodes<=60].index.tolist())],
'medium_sized':[max(df2[df2.no_of_nodes<=60].index.tolist()),max(df2[df2.no_of_nodes<=150].index.tolist())],
'large_sized':[max(df2[df2.no_of_nodes<=150].index.tolist()),100]}
aa=iterate_dic[choosen_group][0]
bb=iterate_dic[choosen_group][1]
df2=df2.iloc[aa:bb,:].reset_index(drop=True)
#generate topology as per size
if choosen_group=='small_sized':
G,df_degreedistribution,our_choice=small_sized()
elif choosen_group=='medium_sized':
G,df_degreedistribution,our_choice=medium_sized()
elif choosen_group=='large_sized':
G,df_degreedistribution,our_choice=large_sized()
# cliques prioritization
df_model=pd.read_csv('scripts/data/usecase_data/df_model.csv',index_col=0)
#model
X=df_model.iloc[:,[0,1,2,3,7]]
y=df_model.iloc[:,[9]]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2,random_state=2550)
model = Ridge()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
# calculate spearman's correlation
coef, p = spearmanr(predictions, y_test)
#identify cliques in the synthetic network
from networkx import enumerate_all_cliques,find_cliques
lst_cliques=list(find_cliques(G))
#create dataframe of cliques
df_generated = pd.DataFrame([lst_cliques]).T
df_generated = df_generated.rename(columns={0:'committer'})
#find predictor network centrality metrics
df_generated['degree_sum']=0.0
df_generated['betweenness_sum']=0.0
df_generated['closeness_sum']=0.0
df_generated['clustering_sum']=0.0
df_generated['eigenvector_sum']=0.0
df_generated['pagerank_sum']=0.0
df_generated['hubs_sum']=0.0
df_generated['Number of Hubs']=0
hub_limit=int(sorted(dict(nx.degree(G)).values(),reverse=True)[0]*0.5)
betweenness_dict=nx.betweenness_centrality(G)
closeness_dict=nx.closeness_centrality(G)
hubs_dict=nx.hits(G)[0]
pagerank_dict=nx.pagerank(G)
eigenvector_dict=nx.eigenvector_centrality(G)
clustering_dict=nx.clustering(G)
for index in df_generated.index:
clustering_sum=0.0
eigenvector_sum=0.0
pagerank_sum=0.0
hubs_sum=0.0
count_hubs=0
degree_sum=0.0
betweenness_sum=0.0
closeness_sum=0.0
for author in df_generated['committer'][index]:
try:
degree_sum+=G.degree(author)
betweenness_sum+=betweenness_dict[author]
closeness_sum+=closeness_dict[author]
clustering_sum+=clustering_dict[author]
eigenvector_sum+=eigenvector_dict[author]
pagerank_sum+=pagerank_dict[author]
hubs_sum+=hubs_dict[author]
if G.degree(author)>hub_limit:
count_hubs+=1
except:
continue
    # take average of all centrality measures (use .loc to avoid chained assignment)
    n_members = len(df_generated['committer'][index])
    df_generated.loc[index, 'degree_sum'] = degree_sum/n_members
    df_generated.loc[index, 'betweenness_sum'] = betweenness_sum/n_members
    df_generated.loc[index, 'closeness_sum'] = closeness_sum/n_members
    df_generated.loc[index, 'clustering_sum'] = clustering_sum/n_members
    df_generated.loc[index, 'eigenvector_sum'] = eigenvector_sum/n_members
    df_generated.loc[index, 'pagerank_sum'] = pagerank_sum/n_members
    df_generated.loc[index, 'hubs_sum'] = hubs_sum/n_members
    df_generated.loc[index, 'Number of Hubs'] = count_hubs
df_generated['Developers']=df_generated['committer'].apply(lambda x: len(x))
#df_generated.to_csv('scripts/data/usecase_data/df_generated.csv')
#non average centrality measures
df_generated=pd.read_csv('scripts/data/usecase_data/df_generated.csv',index_col=0)
df_generated_copy=df_generated.copy()
data_lst = [df_generated]
for data in data_lst:
data['degree_sum']=data['degree_sum']*data['Developers']
data['betweenness_sum']=data['betweenness_sum']*data['Developers']
data['closeness_sum']=data['closeness_sum']*data['Developers']
data['clustering_sum']=data['clustering_sum']*data['Developers']
data['eigenvector_sum']=data['eigenvector_sum']*data['Developers']
data['pagerank_sum']=data['pagerank_sum']*data['Developers']
data['hubs_sum']=data['hubs_sum']*data['Developers']
#normalize predictor metrics
cols_to_norm = ['degree_sum','betweenness_sum','closeness_sum','Developers','clustering_sum', 'eigenvector_sum',
'pagerank_sum','hubs_sum','Number of Hubs']
data_lst = [df_generated]
for data in data_lst:
data[cols_to_norm] = data[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
#log transformation
df_generated=df_generated[['degree_sum','betweenness_sum','closeness_sum','Developers','hubs_sum']]
df_generated=df_generated+0.000001
df_generated['degree_sum'] = np.log(df_generated['degree_sum'])
df_generated['Developers'] = np.log(df_generated['Developers'])
df_generated['closeness_sum'] = np.log(df_generated['closeness_sum'])
df_generated['betweenness_sum'] = np.log(df_generated['betweenness_sum'])
df_generated['hubs_sum'] = np.log(df_generated['hubs_sum'])
#predict bug probability
df_generated['bug_probability']=model.predict(df_generated)
# reverse log transform
df_generated.iloc[:,[0,1,2,3,4,5]] = np.exp(df_generated.iloc[:,[0,1,2,3,4,5]])
df_generated['committer']=df_generated_copy['committer']
df_generated=df_generated.rename(columns={'committer':'Cliques'})
df_generated=df_generated[['Cliques','degree_sum','betweenness_sum','closeness_sum','Developers','hubs_sum','bug_probability']]
sort_cliques=df_generated.sort_values(by='bug_probability',ascending=False)
#sort_cliques.to_csv('scripts/data/usecase_data/sort_cliques.csv')
display(HTML(sort_cliques.head(15).to_html()))
###Output
_____no_output_____
###Markdown
Imports
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy
###Output
_____no_output_____
###Markdown
Utils
###Code
def load_data(file_path, batch_size, sequence_size):
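    """
    Read a whitespace-separated text file and return a dict with the
    word<->integer lookup tables, the integer-encoded input batches, their
    targets (the input shifted one step to the left) and the vocabulary size.
    """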
# Load data
with open(file_path) as file:
text = file.read().split()
# Create support dictionaries
from collections import Counter as counter
# Count how many times each word appears in the data
words_counter = counter(text)
sorted_words = sorted(words_counter, key=words_counter.get, reverse=True)
int_to_words = dict((indice, word) for indice, word in enumerate(sorted_words))
words_to_int = dict((word, indice) for indice, word in int_to_words.items())
number_of_words = len(int_to_words)
# Generate network input, i.e words as integers
int_text = [words_to_int[word] for word in text]
number_of_batchs = len(int_text) // (sequence_size * batch_size)
# Remove one batch from the end of the list
batchs = int_text[:number_of_batchs * batch_size * sequence_size]
# Generate network input target, the target of each input,
# in text generation, its the consecutive input
#
# To obtain the target its necessary to shift all values one
# step to the left
labels = numpy.zeros_like(batchs)
try:
# Shift all values to the left
labels[:-1] = batchs[1:]
# Set the next word of the last value of the last list to the
# first value of the first list
labels[-1] = batchs[0]
labels = numpy.reshape(labels, (batch_size, -1))
batchs = numpy.reshape(batchs, (batch_size, -1))
    except IndexError as error:
        raise ValueError('Not enough words to build the requested batches / sequences') from error
return dict(
int_to_words=int_to_words,
words_to_int=words_to_int,
batchs=batchs,
labels=labels,
number_of_words=number_of_words
)
def get_batchs(batch, labels, batch_size, sequence_size):
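    # Yield consecutive (input, target) windows of `sequence_size` columns
    # from the pre-encoded batch matrix and its shifted labels.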
numBatchs = numpy.prod(batch.shape) // (sequence_size * batch_size)
for indice in range(0, numBatchs * sequence_size, sequence_size):
yield batch[:, indice:indice + sequence_size], labels[:, indice:indice + sequence_size]
###Output
_____no_output_____
###Markdown
Model
###Code
class LSTM(nn.Module):
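    """Word-level language model: Embedding -> single-layer LSTM -> Linear layer producing vocabulary logits."""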
def __init__(self, number_of_words, sequence_size, embedding_size, lstm_size):
super(LSTM, self).__init__()
self.sequence_size = sequence_size
self.lstm_size = lstm_size
self.embedding = nn.Embedding(number_of_words, embedding_size)
self.lstm = nn.LSTM(
embedding_size,
lstm_size,
batch_first=True
)
self.dense = nn.Linear(lstm_size, number_of_words)
def forward(self, state, previous_state):
embed = self.embedding(state)
output, state = self.lstm(embed, previous_state)
logits = self.dense(output)
return logits, state
def resetState(self, batchSize):
# Reset the hidden (h) state and the memory (c) state
return (torch.zeros(1, batchSize, self.lstm_size) for indice in range(2))
###Output
_____no_output_____
###Markdown
Training Settings
###Code
sequence_size = 64
batch_size = 16
embedding_size = 64
lstm_size = 64
cuda = True
epochs = 32
learn_rating = 0.001
gradient_norm = 4
initial_words = ['Life', 'is']
top = 4
###Output
_____no_output_____
###Markdown
Data
###Code
data = load_data('data.raw', batch_size, sequence_size)
###Output
_____no_output_____
###Markdown
Model
###Code
model = LSTM(
data.get('number_of_words'),
sequence_size,
embedding_size,
lstm_size
)
if torch.cuda.is_available() and cuda:
model = model.cuda()
print(torch.cuda.get_device_name(torch.cuda.current_device()))
optimizer = torch.optim.Adam(model.parameters(), lr=learn_rating)
criterion = nn.CrossEntropyLoss()
iteration = 0
def predict(model, initial_words, number_of_words, words_to_int, int_to_words, top=5):
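    """
    Prime the network with the seed words, then repeatedly sample the next
    word from the top-`top` logits for 100 additional steps and print the
    generated text.
    """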
# Set evaluation mode
model.eval()
words = initial_words.copy()
# Reset state
stateHidden, stateMemory = model.resetState(1)
    if torch.cuda.is_available() and cuda:
stateHidden, stateMemory = stateHidden.cuda(), stateMemory.cuda()
for word in words:
_word = torch.tensor([[words_to_int[word]]])
        if torch.cuda.is_available() and cuda:
_word = _word.cuda()
output, (stateHidden, stateMemory) = model(
_word,
(stateHidden, stateMemory)
)
_, _top = torch.topk(output[0], k=top)
choices = _top.tolist()
choice = numpy.random.choice(choices[0])
words.append(int_to_words[choice])
for _ in range(100):
_word = torch.tensor([[choice]])
        if torch.cuda.is_available() and cuda:
_word = _word.cuda()
output, (stateHidden, stateMemory) = model(
_word,
(stateHidden, stateMemory)
)
_, _top = torch.topk(output[0], k=top)
choices = _top.tolist()
choice = numpy.random.choice(choices[0])
words.append(int_to_words[choice])
print(' '.join(words).encode('utf-8'))
for epoch in range(epochs):
batchs = get_batchs(
data.get('batchs'),
data.get('labels'),
batch_size,
sequence_size
)
stateHidden, stateMemory = model.resetState(batch_size)
    if torch.cuda.is_available() and cuda:
stateHidden, stateMemory = stateHidden.cuda(), stateMemory.cuda()
for batch_data, batch_label in batchs:
iteration += 1
# Set train mode
model.train()
# Reset gradient
optimizer.zero_grad()
# Transform array to tensor
batch_data = torch.tensor(batch_data)
batch_label = torch.tensor(batch_label)
# Send tensor to GPU
        if torch.cuda.is_available() and cuda:
batch_data = batch_data.cuda()
batch_label = batch_label.cuda()
# Train
logits, (stateHidden, stateMemory) = model(
batch_data,
(stateHidden, stateMemory)
)
# Loss
loss = criterion(logits.transpose(1, 2), batch_label)
        # Detach the state so gradients are not propagated across batches (truncated BPTT)
stateHidden = stateHidden.detach()
stateMemory = stateMemory.detach()
# Back-propagation
loss.backward()
# Gradient clipping (inline)
nn.utils.clip_grad_norm_(
model.parameters(),
gradient_norm
)
# Update network's parameters
optimizer.step()
# Loss value
print(f'Epoch {epoch}, Iteration: {iteration}, Loss: {loss.item()}')
###Output
_____no_output_____
###Markdown
Prediction
###Code
predict(model, initial_words, data.get('number_of_words'), data.get('words_to_int'), data.get('int_to_words'), top)
###Output
_____no_output_____ |
notebooks/rolldecay/06_ikeda/01.04_ikeda_many.ipynb | ###Markdown
Ikeda for many shipsThe method developed in: ([01.03_ikeda_many_dev](06_ikeda/01.03_ikeda_many_dev.ipynb)) will now be attempted for many ships.
###Code
# %load ../../imports.py
"""
These is the standard setup for the notebooks.
"""
%matplotlib inline
%load_ext autoreload
%autoreload 2
#from jupyterthemes import jtplot
#jtplot.style(theme='onedork', context='notebook', ticks=True, grid=False)
import pandas as pd
pd.options.display.max_rows = 999
pd.options.display.max_columns = 999
pd.set_option("display.max_columns", None)
import numpy as np
import os
import matplotlib.pyplot as plt
#plt.style.use('paper')
#import data
import copy
from rolldecay.bis_system import BisSystem
from rolldecay import database
from mdldb.tables import Run
from sklearn.pipeline import Pipeline
from rolldecayestimators.transformers import CutTransformer, LowpassFilterDerivatorTransformer, ScaleFactorTransformer, OffsetTransformer
from rolldecayestimators.direct_estimator_cubic import EstimatorQuadraticB, EstimatorCubic
from rolldecayestimators.ikeda_estimator import IkedaQuadraticEstimator
import rolldecayestimators.equations as equations
import rolldecayestimators.lambdas as lambdas
from rolldecayestimators.substitute_dynamic_symbols import lambdify
import rolldecayestimators.symbols as symbols
import sympy as sp
from sklearn.metrics import r2_score
import rolldecay.paper_writing as paper_writing
from pyscores2.indata import Indata
from pyscores2.runScores2 import Calculation
from pyscores2.output import OutputFile
from pyscores2 import TDPError
import pyscores2
from rolldecayestimators.ikeda import Ikeda, IkedaR
from rolldecayestimators.simplified_ikeda_class import SimplifiedIkeda
import subprocess
df_all_sections_id = pd.read_csv('all_sections.csv', sep=';')
df_all_sections_id.head()
section_groups=df_all_sections_id.groupby(by='loading_condition_id')
loading_condition_ids = df_all_sections_id['loading_condition_id'].unique()
mask=pd.notnull(loading_condition_ids)
loading_condition_ids=loading_condition_ids[mask]
df_rolldecay = database.load(rolldecay_table_name='rolldecay_quadratic_b', limit_score=0.99,
exclude_table_name='rolldecay_exclude')
mask=df_rolldecay['loading_condition_id'].isin(loading_condition_ids)
df=df_rolldecay.loc[mask].copy()
def add_cScores(sections):
sections=sections.copy()
sections['cScores']=sections['area']/(sections['b']*sections['t'])
mask=sections['cScores']>1
sections.loc[mask,'cScores']=1
return sections
def cut_sections(sections, draught):
sections=sections.copy()
mask = sections['t']>draught
sections.loc[mask,'t']=draught
sections.loc[mask,'area']-=draught*sections['b'].max() # Assuming rectangular shape
return sections
def remove_duplicate_sections(sections):
sections=sections.copy()
mask=~sections['x'].duplicated()
sections=sections.loc[mask]
assert sections['x'].is_unique
return sections
def too_small_sections(sections):
sections=sections.copy()
small = 0.1
mask=sections['b']==0
sections.loc[mask,'b']=small
mask=sections['t']==0
sections.loc[mask,'t']=small
mask=sections['area']==0
sections.loc[mask,'area']=small
return sections
from scipy.integrate import simps
def calculate_lcb(x, area, **kwargs):
"""
Calculate lcb from AP
"""
return simps(y=area*x,x=x)/np.trapz(y=area,x=x)
def calculate_dispacement(x, area, **kwargs):
"""
Calculate displacement
"""
return np.trapz(y=area,x=x)
class DraughtError(ValueError): pass
def define_indata(row, sections, rho=1000, g=9.81):
indata = Indata()
draught=(row.TA+row.TF)/2
indata.draught=draught
if draught<=sections['t'].max():
sections = cut_sections(sections, draught)
else:
raise DraughtError('Draught is too large for sections')
sections=add_cScores(sections)
indata.cScores=np.array(sections['cScores'])
indata.ts=np.array(sections['t'])
indata.bs=np.array(sections['b'])
indata.zbars=np.zeros_like(sections['b']) # Guessing...
beam=sections['b'].max()
indata.lpp=sections['x'].max()-sections['x'].min()
#indata.displacement=row.Volume
indata.displacement=calculate_dispacement(**sections)
indata.g=g
indata.kxx=row.KXX
indata.kyy=row.lpp*0.4
lcb=calculate_lcb(x=sections['x'], area=sections['area'])
indata.lcb=lcb-row.lpp/2
indata.lpp=row.lpp
indata.projectName='loading_condition_id_%i' % row.loading_condition_id
indata.rho=rho
indata.zcg=row.kg-draught
#indata.waveFrequenciesMin=0.2
#indata.waveFrequenciesMax=0.5
#indata.waveFrequenciesIncrement=0.006
w=row.omega0/np.sqrt(row.scale_factor)
indata.waveFrequenciesMin=w*0.5
indata.waveFrequenciesMax=w*2.0
N=40
indata.waveFrequenciesIncrement=(indata.waveFrequenciesMax-indata.waveFrequenciesMin)/N
indata.runOptions["IE"].set_value(1)
return indata,sections
def create_ikeda(row, indata, output_file, fi_a):
w = row.omega0
scale_factor=row.scale_factor
V = row.ship_speed*1.852/3.6/np.sqrt(scale_factor)
R = 0.01*row.beam/scale_factor
lBK=row.BKL/scale_factor
bBK=row.BKB/scale_factor
ikeda = Ikeda.load_scoresII(V=V, w=w, fi_a=fi_a, indata=indata, output_file=output_file,
scale_factor=scale_factor, lBK=lBK, bBK=bBK)
ikeda.R = R
return ikeda
def calculate_ikeda(ikeda):
output = {}
output['B_44_hat'] = ikeda.calculate_B44()[0]
output['B_W0_hat'] =ikeda.calculate_B_W0()[0]
output['B_W_hat'] =ikeda.calculate_B_W()[0]
output['B_F_hat'] =ikeda.calculate_B_F()[0]
output['B_E_hat'] =ikeda.calculate_B_E()[0]
output['B_BK_hat'] =ikeda.calculate_B_BK()[0]
output['B_L_hat'] =ikeda.calculate_B_L()[0]
output['Bw_div_Bw0'] =ikeda.calculate_Bw_div_Bw0()[0]
return output
results = pd.DataFrame()
fi_a = np.deg2rad(10)
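# For each roll-decay test: prepare its hull sections, build and save a ScoresII
# input file, run ScoresII, then evaluate the Ikeda damping components at the
# roll amplitude fi_a, collecting everything in the results frame.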
for run_name, row in df.iterrows():
loading_condition_id=row['loading_condition_id']
sections = section_groups.get_group(loading_condition_id)
sections=remove_duplicate_sections(sections)
sections=too_small_sections(sections)
try:
indata,sections_ = define_indata(row, sections)
except DraughtError as e:
print('Draught is too large for sections, this loading condition is skipped.')
continue
save_name='%s.in' % row.loading_condition_id
save_path=os.path.join('scores2',save_name)
indata.save(save_path)
calculation = Calculation(outDataDirectory='scores2/result')
# Run scoresII:
try:
calculation.run(indata=indata, b_div_t_max=None, timeout=1.0)
except TDPError:
        print('Disregarding the TDPError')
continue
except pyscores2.LcgError as e:
print('Disregarded')
print(e)
continue
except subprocess.TimeoutExpired:
print('Disregarded, scoresII got stuck...')
continue
output_file = OutputFile(filePath=calculation.outDataPath)
ikeda = create_ikeda(row=row, indata=indata, output_file=output_file, fi_a=fi_a)
result_data = calculate_ikeda(ikeda)
result=pd.Series(data=result_data, name=row.name)
results=results.append(result)
results
###Output
_____no_output_____
###Markdown
Also run Simplified Ikeda for comparison
###Code
def calculate_si(si):
output = pd.DataFrame()
output['B_44_hat'] = si.calculate_B44()
output['B_W0_hat'] =si.calculate_B_W0()
output['B_W_hat'] =si.calculate_B_W()
output['B_F_hat'] =si.calculate_B_F()
output['B_E_hat'] =si.calculate_B_E()
output['B_BK_hat'] =si.calculate_B_BK()
output['B_L_hat'] =si.calculate_B_L()
output['Bw_div_Bw0'] =si.calculate_Bw_div_Bw0()
return output
inputs_si=pd.DataFrame()
inputs_si['w']=df['omega0'] # Already model scale
scale_factor=df['scale_factor']
inputs_si['V']=df['ship_speed']*1.852/3.6/np.sqrt(scale_factor)
inputs_si['fi_a']=fi_a
inputs_si['beam']=df['beam']/scale_factor
inputs_si['lpp']=df['lpp']/scale_factor
inputs_si['kg']=df['kg']/scale_factor
inputs_si['volume']=df['Volume']/(scale_factor**3)
draught=(df['TA']+df['TF'])/2
inputs_si['draught']=draught/scale_factor
inputs_si['A0']=df['A0']
inputs_si['lBK']=df['BKL']/scale_factor
inputs_si['bBK']=df['BKB']/scale_factor
si = SimplifiedIkeda(**inputs_si)
results_si = calculate_si(si)
results_si.index=df.index
###Output
_____no_output_____
###Markdown
Make comparison with model tests
###Code
B_e = lambdas.B_e_lambda(B_1=df['B_1'], B_2=df['B_2'], phi_a=fi_a,
omega0=df['omega0'])
scale_factor = df['scale_factor']
Volume = df['Volume']/(scale_factor**3)
beam = df['beam']/scale_factor
g=9.81
rho=1000
df['B_e_hat'] = lambdas.B_e_hat_lambda(B_e=B_e, Disp=Volume, beam=beam,
g=g, rho=rho)
df_results = pd.merge(left=results, right=results_si, how='inner', left_index=True, right_index=True,
suffixes=('_ikeda','_si'))
mask = df_results['B_44_hat_ikeda'].notnull()
df_results = df_results.loc[mask].copy()
df_compare = pd.merge(left=df, right=df_results, how='inner', left_index=True, right_index=True,
suffixes=('','_y'))
fig,ax=plt.subplots()
df_compare.plot(x='B_44_hat_ikeda', y='B_44_hat_si', ax=ax, style='o')
ax.set_xlabel(r'$\hat{B_{44}}$ (Ikeda)')
ax.set_ylabel(r'$\hat{B_{44}}$ (SI)')
xlim = ax.get_xlim()
ylim = ax.get_ylim()
lim = np.max([xlim[1],ylim[1]])
ax.set_xlim(0,lim)
ax.set_ylim(0,lim)
ax.plot([0,lim],[0,lim],'r-')
ax.grid(True)
ax.set_aspect('equal', 'box')
ax.get_legend().remove()
###Output
_____no_output_____
###Markdown
###Code
size=2.5
with plt.style.context('paper'):
fig,ax=plt.subplots()
fig.set_size_inches(size,size)
df_compare.plot(x='B_e_hat', y='B_44_hat_ikeda', ax=ax, style='.',
label=r'$\hat{B_{e}}$ Ikeda')
df_compare.plot(x='B_e_hat', y='B_44_hat_si', ax=ax, style='+',
label=r'$\hat{B_{e}}$ SI')
ax.set_xlabel(r'$\hat{B_{e}}$ (model test)')
ax.set_ylabel(r'$\hat{B_{e}}$ (prediction)')
xlim = ax.get_xlim()
ylim = ax.get_ylim()
lim = np.max([xlim[1],ylim[1]])
ax.set_xlim(0,lim)
ax.set_ylim(0,lim)
#ax.set_title('Total roll damping for Ikeda, Simplified Ikeda and model tests')
ax.plot([0,lim],[0,lim],'r-')
ax.grid(True)
ax.set_aspect('equal', 'box')
ax.legend();
paper_writing.save_fig(fig=fig, name='si_ikeda_model')
df_compare['B_44_fraction_si_ikeda'] = df_compare['B_44_hat_si']/df_compare['B_44_hat_ikeda']
fig,ax=plt.subplots()
df_compare.plot(x='B_e_hat', y='B_44_fraction_si_ikeda', ax=ax, style='.')
ax.set_xlabel(r'$\hat{B_{44}}$')
ax.set_ylabel(r'$\frac{B_{44}(SI)}{B_{44}(Ikeda)}$')
ax.get_legend().remove()
r2_score(y_true=df_compare['B_e_hat'], y_pred=df_compare['B_44_hat_ikeda'])
r2_score(y_true=df_compare['B_e_hat'], y_pred=df_compare['B_44_hat_si'])
###Output
_____no_output_____
###Markdown
Investigating the residuals
###Code
def calculate_residuals(suffix_true='_ikeda', suffix_prediction='_si'):
prefixes = ['B_44_hat',
'B_W0_hat',
'B_W_hat',
'B_F_hat',
'B_E_hat',
'B_BK_hat',
'B_L_hat',
'Bw_div_Bw0',]
for prefix in prefixes:
residual_name = '%s_residual%s%s' % (prefix, suffix_prediction, suffix_true)
name_true='%s%s' % (prefix, suffix_true)
name_prediction='%s%s' % (prefix, suffix_prediction)
df_compare[residual_name] = df_compare[name_prediction] - df_compare[name_true]
calculate_residuals()
df_compare['B_44_residual_si_model'] = df_compare['B_44_hat_si'] - df_compare['B_e_hat']
import seaborn as sns;
#sns.set_theme()
###Output
_____no_output_____
###Markdown
###Code
df_compare['draught']=(df_compare['TA'] + df_compare['TF'])/2
df_compare['OG']=df_compare['draught']-df_compare['kg']
df_compare['beam/draught']=df_compare['beam']/df_compare['draught']
df_compare['V']=df_compare['ship_speed']*1.852
df_compare['Fn']=df_compare['V']/(np.sqrt(df_compare['lpp']*g))
df_compare[r'OG/d']=df_compare['OG']/df_compare['draught']
df_compare[r'LBK/Lpp']=df_compare['BKL']/df_compare['lpp']
df_compare[r'BBK/beam']=df_compare['BKB']/df_compare['beam']
df_compare['omega_hat']=lambdas.omega_hat(beam=df_compare['beam'], g=g, omega0=df_compare['omega0'])
df_compare['Cb']=df_compare['Volume']/(df_compare['lpp']*df_compare['beam']*df_compare['draught'])
interesting=('Cb','A0','OG/d','LBK/Lpp','BBK/beam','omega_hat',r'beam/draught', 'Fn')
#sns.lmplot(data=df_compare,y='B_44_hat_residual_si_ikeda', x=interesting, aspect=0.6);
sns.pairplot(df_compare,y_vars='B_44_hat_residual_si_ikeda', x_vars=interesting, aspect=0.6);
with plt.style.context('paper'):
y='B_44_residual_si_model'
ylabel=r'$\hat{B}_{e}^{SI}-\hat{B}_{e}^{Model}$'
fig,ax=plt.subplots()
fig.set_size_inches(size,size)
df_compare.plot(x=r'beam/draught',y=y, ax=ax, style='.')
ax.set_ylabel(ylabel)
ax.set_xlabel(r'$\frac{beam}{T}$ [-]')
ax.set_ylabel(ylabel)
ax.grid(True)
ax.get_legend().remove()
paper_writing.save_fig(fig, name='beam_T_residual')
labels={
'Cb' : r'$C_b$ [-]',
r'beam/draught' : r'$\frac{beam}{T}$ [-]',
r'OG/d' : r'$\frac{\overline{OG}}{T}$ [-]',
'A0' : r'$A_{0}$ [-]',
r'BBK/beam' : r'$\frac{BK_B}{beam}$ [-]',
r'LBK/Lpp' : r'$\frac{BK_L}{L_{pp}}$ [-]',
r'omega_hat' : r'$\hat{\omega}$ [-]',
r'fi_a' : r'$\phi_a$ [rad]',
r'Fn' : r'$F_n$ [-]',
}
with plt.style.context('paper'):
fig,ax=plt.subplots()
fig.set_size_inches(size,size)
df_compare['B_44_residual_si_model_abs'] = df_compare['B_44_residual_si_model'].abs()
for y in interesting:
df_=df_compare.sort_values(by=y)
y_=df_[y].abs()
y_-=y_.min()
ax.plot(df_['B_44_residual_si_model_abs'], y_, '.', label=labels[y])
ax.set_xlabel(r'$|\hat{B}_{e}^{SI}-\hat{B}_{e}^{Model}|$')
ax.set_ylabel('change')
ax.grid(True)
ax.legend(loc='upper center', bbox_to_anchor=(1.30, 0.8),
ncol=1)
paper_writing.save_fig(fig, name='parameter_residual')
###Output
_____no_output_____
###Markdown
Comparing damping contributions SI vs. Ikeda
###Code
suffix_true='_ikeda'
suffix_prediction='_si'
prefixes = [
'B_W_hat',
'B_F_hat',
'B_E_hat',
'B_BK_hat',
'B_L_hat',]
labels={
'B_W_hat' : r'$\hat{B_{W}}$',
'B_F_hat' : r'$\hat{B_{F}}$',
'B_E_hat' : r'$\hat{B_{E}}$',
'B_BK_hat' : r'$\hat{B_{BK}}$',
'B_L_hat' : r'$\hat{B_{L}}$',
}
lim = np.max([df_compare['B_44_hat_ikeda'].max(),
df_compare['B_44_hat_si'].max(),
])
with plt.style.context('paper'):
fig,ax=plt.subplots()
fig.set_size_inches(size,size)
for prefix in prefixes:
name_true='%s%s' % (prefix, suffix_true)
name_prediction='%s%s' % (prefix, suffix_prediction)
y_=(df_compare[name_prediction]-df_compare[name_true]).abs()
#y_-=y_.min()
ax.plot(df_compare['B_44_residual_si_model_abs'], y_, '.', label=labels[prefix])
ax.set_xlabel(r'$|\hat{B}_{e}^{SI}-\hat{B}_{e}^{Ikeda}|$')
ax.legend()
ax.grid(True)
ax.set_ylabel(r'$\hat{B}$ [-]')
paper_writing.save_fig(fig, name='component_residual')
###Output
_____no_output_____
###Markdown
###Code
fig,ax = plt.subplots()
df_compare['B_W_fraction_si_ikeda'] = df_compare['B_W_hat_si']/df_compare['B_W_hat_ikeda']
df_compare.plot(x='Fn', y='B_W_fraction_si_ikeda', ax=ax, style='o');
ax.set_ylabel(r'$\frac{\hat{B_W}(SI)}{\hat{B_W}(Ikeda)}$')
ax.grid(True)
ax.get_legend().remove()
###Output
_____no_output_____ |
example/WallStreetLectures/ipython/lecture16_correlation.ipynb | ###Markdown
Correlation coefficients of stock and sector returns This notebook uses the quantOS system to compute the correlation coefficients of the daily returns of the 28 Shenwan (SW) level-1 industries, as well as the correlations between a few individual stocks. System setup
###Code
# encoding: utf-8
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime
import seaborn as sns
import matplotlib.mlab as mlab
import scipy.stats as stats
sns.set_style('darkgrid')
sns.set_context('poster')
%matplotlib inline
from jaqs.data import RemoteDataService
import jaqs.util as jutil
from __future__ import print_function, unicode_literals, division, absolute_import
from jaqs.data import RemoteDataService, DataView
dataview_dir_path = '.'
backtest_result_dir_path = '.'
import os
phone = os.environ.get('QUANTOS_USER')
token = os.environ.get('QUANTOS_TOKEN')
data_config = {
"remote.data.address": "tcp://data.quantos.org:8910",
"remote.data.username": phone,
"timeout": 3600,
"remote.data.password": token
}
ds = RemoteDataService()
ds.init_from_config(data_config)
###Output
Begin: DataApi login 17321165656@tcp://data.quantos.org:8910
login success
###Markdown
Set parameters
###Code
STARTDATE, ENDDATE = 20160401, 20180330
###Output
_____no_output_____
###Markdown
Computing correlation coefficients for individual stock pairs 1. ICBC (工商银行) and China Construction Bank (建设银行)
###Code
stock_1, _ = ds.daily('601398.SH', STARTDATE, ENDDATE, fields = 'close', adjust_mode = 'post')
stock_2, _ = ds.daily('601939.SH', STARTDATE, ENDDATE, fields = 'close', adjust_mode = 'post')
stock_1['ret'] = stock_1['close'].pct_change()
stock_1 = stock_1.set_index('trade_date')
stock_2['ret'] = stock_2['close'].pct_change()
stock_2 = stock_2.set_index('trade_date')
stock_pair = pd.concat([stock_1['ret'], stock_2['ret']], axis = 1)
stock_pair.columns = ['工商银行', '建设银行']
###Output
_____no_output_____
###Markdown
Correlation matrix
###Code
stock_pair.corr()
###Output
_____no_output_____
###Markdown
Return distribution
###Code
fig, ax = plt.subplots(figsize = (16, 8))
plt.scatter(stock_pair['工商银行'], stock_pair['建设银行'], s = 30)
ax.set_xlabel('601398.SH')
ax.set_ylabel('601939.SH')
###Output
_____no_output_____
###Markdown
2. ICBC (工商银行) and Ping An Insurance (中国平安)
###Code
stock_1, _ = ds.daily('601398.SH', STARTDATE, ENDDATE, fields = 'close', adjust_mode = 'post')
stock_2, _ = ds.daily('601318.SH', STARTDATE, ENDDATE, fields = 'close', adjust_mode = 'post')
stock_1['ret'] = stock_1['close'].pct_change()
stock_1 = stock_1.set_index('trade_date')
stock_2['ret'] = stock_2['close'].pct_change()
stock_2 = stock_2.set_index('trade_date')
stock_pair = pd.concat([stock_1['ret'], stock_2['ret']], axis = 1)
stock_pair.columns = ['工商银行', '中国平安']
###Output
_____no_output_____
###Markdown
Correlation matrix
###Code
stock_pair.corr()
###Output
_____no_output_____
###Markdown
Return distribution
###Code
fig, ax = plt.subplots(figsize = (16, 8))
plt.scatter(stock_pair['工商银行'], stock_pair['中国平安'], s = 30)
ax.set_xlabel('601398.SH')
ax.set_ylabel('601318.SH')
###Output
_____no_output_____
###Markdown
3. ICBC (工商银行) and Tianqi Lithium (天齐锂业)
###Code
stock_1, _ = ds.daily('601398.SH', STARTDATE, ENDDATE, fields = 'close', adjust_mode = 'post')
stock_2, _ = ds.daily('002466.SZ', STARTDATE, ENDDATE, fields = 'close', adjust_mode = 'post')
stock_1['ret'] = stock_1['close'].pct_change()
stock_1 = stock_1.set_index('trade_date')
stock_2['ret'] = stock_2['close'].pct_change()
stock_2 = stock_2.set_index('trade_date')
stock_pair = pd.concat([stock_1['ret'], stock_2['ret']], axis = 1)
stock_pair.columns = ['工商银行', '天齐锂业']
###Output
_____no_output_____
###Markdown
Correlation matrix
###Code
stock_pair.corr()
###Output
_____no_output_____
###Markdown
Return distribution
###Code
fig, ax = plt.subplots(figsize = (16, 8))
plt.scatter(stock_pair['工商银行'], stock_pair['天齐锂业'], s = 30)
ax.set_xlabel('601398.SH')
ax.set_ylabel('002466.SZ')
###Output
_____no_output_____
###Markdown
Computing sector correlation coefficients
###Code
df_ret = pd.read_csv('lecture16_industry_daily_ret.csv', index_col = 'trade_date')
df_ret_copy = df_ret.loc[20160401:20180330, :]
ret_corr = df_ret_copy.corr()
fig, ax = plt.subplots(figsize = (20, 18))
sns.heatmap(ret_corr, annot = True, cmap = "coolwarm")
# fig.savefig('corrlation_matrix.png')
###Output
_____no_output_____ |
Camilo/Taller 2 - Archivos y Bases de Datos.ipynb | ###Markdown
Files and Databases The idea of this workshop is to manipulate files (read them, parse them, and write them) and to do the same with structured databases. Exercise 1 Download the "All associations with added ontology annotations" file from the GWAS Catalog: + https://www.ebi.ac.uk/gwas/docs/file-downloads Describe the columns of the file (_what information are we looking at? What is it for? Why was it created?_)
###Code
import pandas as pd
DF = pd.read_csv('../data/alternative.tsv', sep='\t')
DF
###Output
/Users/camilogarcia/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2717: DtypeWarning: Columns (12,23,27) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
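###Markdown
A quick way to start describing the columns is to inspect them programmatically. The cell below is only a sketch and assumes the `DF` DataFrame loaded above.
###Code
# Inspect the GWAS Catalog table: dimensions, column dtypes and a small sample
print(DF.shape)
print(DF.dtypes)
DF.head(3).T # transpose so the many columns are easier to scan
###Output
_____no_output_____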
###Markdown
What entities (tables) could you define? - Intermediate entities - Entity-relationship models - Foreign keys (the lines that connect entities) - How to insert data into MySQL from Python
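As a sketch of one possible decomposition (all table and column names below except `enfermedad` are illustrative assumptions, not taken from the GWAS file), an association could reference a study, a SNP and a disease through foreign keys:
###Code
# Hypothetical schema sketch: 'estudio', 'snp' and 'asociacion' are invented names;
# only the 'enfermedad' table is known to exist in the database used below.
schema_statements = [
    """CREATE TABLE IF NOT EXISTS estudio (
           id INT PRIMARY KEY,
           pubmed_id VARCHAR(20)
       )""",
    """CREATE TABLE IF NOT EXISTS snp (
           id INT PRIMARY KEY,
           rs_id VARCHAR(20)
       )""",
    """CREATE TABLE IF NOT EXISTS asociacion (
           id INT PRIMARY KEY,
           estudio_id INT,
           snp_id INT,
           enfermedad_id INT,
           p_valor DOUBLE,
           FOREIGN KEY (estudio_id) REFERENCES estudio(id),
           FOREIGN KEY (snp_id) REFERENCES snp(id),
           FOREIGN KEY (enfermedad_id) REFERENCES enfermedad(id)
       )""",
]
# Each statement could be run with the cursor used below, e.g. c.execute(stmt)
###Output
_____no_output_____
###Markdown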
###Code
import mysql.connector
conn = mysql.connector.Connect(host='127.0.0.1',user='root',\
password='gaspar',database='programacion')
c = conn.cursor()
c.execute("""insert into enfermedad values (3, "Psoriasis", "psoriasis", "http://www.ebi.ac.uk/efo/EFO_0000676" )""")
conn.commit()
c.execute ("select * from enfermedad")
for row in c:
print (row)
c.close()
conn.close()
###Output
(3, u'Psoriasis', u'psoriasis', u'http://www.ebi.ac.uk/efo/EFO_0000676')
|
CIS522/Week11_Tutorial1.ipynb | ###Markdown
CIS-522 Week 11 Part 1 Introduction to Reinforcement Learning__Instructor:__ Dinesh Jayaraman__Content creators:__ Chuning Zhu---
###Code
#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)
my_pennkey = 'fparodi' #@param {type:"string"}
my_pod = 'superfluous-lyrebird' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','astute-jellyfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion']
###Output
_____no_output_____
###Markdown
Recap the experience from last week What did you learn last week? What questions do you have? [10 min discussion]
###Code
learning_from_previous_week = "learned attention, transformers. very much enjoyed that. i struggled understanding nlp and grus though. not sure if it's bc we're at the end of the semester or the pandemic or what. but i \u003C3 transformesr!" #@param {type:"string"}
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# imports
import math
import numpy as np
import IPython
from numbers import Number
from matplotlib import pyplot as plt
import matplotlib.patches as patches
from tqdm.auto import tqdm
# @title Plotting functions
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/"
"course-content/master/nma.mplstyle")
# Plotting functions
def plot_episode_rewards(episode_rewards):
fig = plt.figure()
plt.plot(episode_rewards)
plt.xlabel("Episode")
plt.ylabel("Reward")
fig.show()
# @title Gridworld Environment
'''
A custom Gridworld environment with deterministic transitions. Adapted from
CS 188 Gridworld env. There are four actions: up, left, down, right. The
state is the (x, y) coordinates in the Grid.
'''
class Gridworld():
def __init__(self, grid, living_reward=-1.0):
self.h = len(grid)
self.w = len(grid[0])
self.living_reward = living_reward
self.scale = math.ceil(max(self.h, self.w) / min(self.h, self.w))
self.action_space = ['up', 'left', 'down', 'right']
self.n_actions = 4
self.init_grid(grid)
def init_grid(self, grid):
# Create reward grid. The reward grid is a numpy array storing the
# reward given for entering each state.
self.rew_grid = np.array([[self.living_reward if isinstance(e, str) else e
for e in row] for row in grid], dtype=np.float)
    # Create grid. The grid is a numpy array of chars.
# S (start), T (terminal), C (cliff), # (block), or ' ' (regular).
convert_fn = lambda e: 'T' if e >= self.living_reward else 'C'
self.grid = np.array([[convert_fn(e) if isinstance(e, Number) else e
for e in row] for row in grid])
# Find initial state
start_indices = np.argwhere(self.grid == 'S')
if len(start_indices) == 0:
raise Exception('Grid has no start state')
self.init_state = (start_indices[0][1], start_indices[0][0])
def get_transition(self, state, action):
'''
Execute one action in the environment.
Args:
state (tuple): the (x, y) coordinates of the current state.
action (int): the current action chosen from {0, 1, 2, 3}.
Returns:
next_state (tuple): the (x, y) coordinates of the next state.
reward (float): the reward for the current time step.
'''
# Handle terminal states
x, y = state
if self.grid[y, x] == 'T':
return state, 0
# Handle invalid actions
if action not in range(len(self.action_space)):
raise Exception('Illegal action')
# Default transitions
named_action = self.action_space[action]
nx, ny = x, y
if named_action == 'up':
ny -= 1
elif named_action == 'left':
nx -= 1
elif named_action == 'down':
ny += 1
elif named_action == 'right':
nx += 1
# Handle special cases
if nx < 0 or nx >= self.w or ny < 0 or ny >= self.h or self.grid[ny, nx] == '#':
# Give living reward if next state is blocked or out of bounds
reward = self.living_reward
next_state = (x, y)
else:
reward = self.rew_grid[ny, nx]
if self.grid[ny, nx] == 'C':
next_state = self.init_state # falls off cliff
else:
next_state = (nx, ny) # transition to next state
return next_state, reward
def __render(self):
# Render grid with matplotlib patches.
fig, ax = plt.subplots(figsize=(self.h*self.scale, self.w*self.scale))
ax.set_aspect('equal')
ax.set_xlim(0, self.w)
ax.set_ylim(0, self.h)
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.tick_params(length=0)
plt.axis('off')
for y in range(self.h):
for x in range(self.w):
cell_type = self.grid[y, x]
if cell_type == 'S':
c = '#DAE8FC' # blue
elif cell_type == '#':
c = '#CCCCCC' # gray
elif cell_type == 'T':
c = '#D5E8D4' # green
elif cell_type == 'C':
c = '#F8CECC' # red
else:
c = '#FFFFFF' # white
rect = patches.Rectangle((x, self.h-y-1), 1, 1, fc=c, ec='gray', lw=1)
ax.add_patch(rect)
return fig, ax
def render_grid(self):
fig, ax = self.__render()
for y in range(self.h):
for x in range(self.w):
if self.grid[y, x] != '#':
# alternate: x+0.1, self.h-y-0.2
ax.text(x+0.5, self.h-y-0.5, str(self.rew_grid[y, x]), size='medium', ha='center', va='center')
plt.title("Rewards")
fig.show()
def render_values(self, V):
fig, ax = self.__render()
for y in range(self.h):
for x in range(self.w):
ax.text(x+0.5, self.h-y-0.5, '{:.2f}'.format(V[y, x]), size='medium', ha='center', va='center')
plt.title("Values")
fig.show()
def render_q_values(self, Q):
fig, ax = self.__render()
for y in range(self.h):
for x in range(self.w):
named_action = self.action_space[np.argmax(Q[y, x])]
xl, xc, xr = x, x+0.5, x+1
yt, yc, yb = self.h-y, self.h-y-0.5, self.h-y-1
ce, tl, bl, tr, br = [xc, yc], [xl, yt], [xl, yb], [xr, yt], [xr, yb]
if named_action == 'up':
xy = np.array([ce, tl, tr])
elif named_action == 'left':
xy = np.array([ce, tl, bl])
elif named_action == 'down':
xy = np.array([ce, bl, br])
elif named_action == 'right':
xy = np.array([ce, br, tr])
ax.plot([x, x+1], [self.h-y, self.h-y-1], 'gray', lw=1)
ax.plot([x, x+1], [self.h-y-1, self.h-y], 'gray', lw=1)
poly = patches.Polygon(xy, True, fc='#FFFF00', ec='gray')
ax.add_patch(poly)
ax.text(x+0.5, self.h-y-0.2, '{:.2f}'.format(Q[y, x, 0]), size='small', ha='center', va='center')
ax.text(x+0.2, self.h-y-0.5, '{:.2f}'.format(Q[y, x, 1]), size='small', ha='center', va='center')
ax.text(x+0.5, self.h-y-0.8, '{:.2f}'.format(Q[y, x, 2]), size='small', ha='center', va='center')
ax.text(x+0.8, self.h-y-0.5, '{:.2f}'.format(Q[y, x, 3]), size='small', ha='center', va='center')
fig.show()
plt.title("Q-values")
pass
def render_policy(self, policy):
fig, ax = self.__render()
for y in range(self.h):
for x in range(self.w):
if policy[y, x] not in range(len(self.action_space)):
raise Exception('Illegal action')
if self.grid[y, x] == 'T':
continue
arrow_len = 0.3
dx, dy = 0, 0
named_action = self.action_space[policy[y, x]]
if named_action == 'up':
dy = arrow_len
elif named_action == 'left':
dx = -arrow_len
elif named_action == 'down':
dy = -arrow_len
elif named_action == 'right':
dx = arrow_len
arrow = patches.FancyArrow(x+0.5, self.h-y-0.5, dx, dy, 0.03, True, color='#6C8EBF')
ax.add_patch(arrow)
plt.title("Policy")
fig.show()
'''
GridworldEnv is a wrapper around Gridworld implementing an RL interface.
'''
class GridworldEnv(Gridworld):
def __init__(self, grid, living_reward=-1.0):
super().__init__(grid, living_reward)
self.reset()
def reset(self):
'''
Reset the agent to its initial state
'''
self.state = self.init_state
return self.state
def step(self, action):
'''
Execute one action in the environment.
Args:
action (int): the current action chosen from {0, 1, 2, 3}.
Returns:
next_state (tuple): (x, y) coordinates of the next state.
reward (float): reward for the current time step.
done (bool): True if a terminal state has been reached, False otherwise.
'''
next_state, reward = self.get_transition(self.state, action)
self.state = next_state
done = self.grid[self.state[1], self.state[0]] == 'T'
return next_state, reward, done
# Pre-defined grids
def get_book_grid():
grid = [['T', ' ', ' ', ' '],
[' ', ' ', ' ', ' '],
[' ', ' ', ' ', ' '],
['S', ' ', ' ', 'T']]
return GridworldEnv(grid)
def get_cliff_small():
grid = [[' ', ' ', ' ', ' ', ' '],
['S', ' ', ' ', ' ', 'T'],
[-100, -100, -100, -100, -100]]
return GridworldEnv(grid)
def get_cliff_walk():
grid = [[' ' for _ in range(12)] for _ in range(3)]
grid.append([-100 for _ in range(12)])
grid[3][ 0] = 'S'
grid[3][-1] = 'T'
return GridworldEnv(grid)
def get_bridge_grid():
grid = [[ '#',-100, -100, -100, -100, -100, '#'],
[ 1, 'S', ' ', ' ', ' ', ' ', 10],
[ '#',-100, -100, -100, -100, -100, '#']]
return GridworldEnv(grid)
###Output
_____no_output_____
###Markdown
--- Section 1: Introduction
###Code
#@title Video : Intro to Reinforcement Learning
import time
try: t0;
except NameError: t0=time.time()
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="cVTud58UfpQ", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=cVTud58UfpQ
###Markdown
Up to this point, we have mainly been concerned with supervised learning. In a supervised learning problem, we are provided with a dataset where each sample comes with a ground truth label (e.g. class label), and the goal is to learn to predict the label by minimizing some loss function. Reinforcement learning, on the other hand, is a framework for solving sequential decision-making problems. Consider an agent operating in some environment. The agent's goal is to carry out the best sequence of actions that maximizes the cumulative reward. This is difficult because the action at the current time step influences future states of the environment, which then feed back to the agent's observations. The following figure illustrates this setting. What is the role of reinforcement learning in intelligence? According to Yann LeCun, if intelligence is a cake, then unsupervised learning is the bulk of the cake, supervised learning the icing, and reinforcement learning the cherry on top. The reason RL takes up such a small proportion is that very little learning in the real world comes with an explicit reward signal. This analogy is still debatable, as some RL folks argue that intelligence is more like a cake with lots of cherries on top, especially after the invention of [hindsight experience replay](https://arxiv.org/abs/1707.01495). In addition, there are ways to solve sequential decision making problems without relying on shaped rewards, such as inverse reinforcement learning, which infers a reward function from experience, and learning from goals / demonstrations / examples. Another way to put RL in perspective is by comparing it with vision and natural language processing. If we decompose intelligence into perception, cognition (reasoning), and action (decision making), then vision coarsely corresponds to perception, NLP cognition, and RL action. Just like how vision can be combined with NLP for tasks like image captioning, RL can be organically combined with vision and NLP as well. In this first tutorial, we will briefly step away from deep learning and study a few classic approaches in reinforcement learning. A good reference is Sutton and Barto's book, Reinforcement Learning: An Introduction. The [full text](http://incompleteideas.net/book/the-book.html) is available online. --- Section 2: MDP and Bellman Equations Section 2.1: Markov Decision Process
###Code
#@title Video : Markov Decision Processes
try: t1;
except NameError: t1=time.time()
video = YouTubeVideo(id="GJEL-QkT2yk", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=GJEL-QkT2yk
###Markdown
We begin our study of reinforcement learning with a definition of Markov decision process. A Markov decision process (MDP) is a tuple $(S, A, P, R, \gamma)$, where- $S$ is the set of **states**.- $A$ is the set of **actions**.- $P$ defines the **transition probabilities**. $P(s'|s, a)$ gives the probability of transitioning to state $s'$ by taking action $a$ at state $s$. - $R$ is the **reward function**. $R(s, a)$ gives the reward of taking action $a$ at state $s$. $R$ can also be a function of state only.- $\gamma$ is the **discount factor**. It controls how much future rewards matter to us. We will talk more about discount factor in the next video.As an aside, we introduce partially observable MDP (POMDP). A POMDP additionally has a set of observations $O$ and emission probabilities $\varepsilon$. $\varepsilon(o|s)$ gives the probability of observing $o$ at state $s$. This formulation is useful when we don't have access to explicit state information, but are provided with observations that may not fully reveal the underlying states. An example is reinforcement learning from images.Come up with a sequential decision making problem and formalize it as an MDP. What are $S$, $A$, $P$, and $R$? Share your example with your pod.
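To make the notation concrete, here is a minimal sketch of how a tiny MDP could be written down in Python; every state, action, probability and reward below is invented purely for illustration.
###Code
# A toy MDP spelled out explicitly (all values are made up)
states  = ['low', 'high']          # S
actions = ['wait', 'recharge']     # A
gamma   = 0.9                      # discount factor
# P[(s, a)] -> list of (next_state, probability) pairs
P = {('low',  'wait'):     [('low', 1.0)],
     ('low',  'recharge'): [('high', 0.8), ('low', 0.2)],
     ('high', 'wait'):     [('high', 0.7), ('low', 0.3)],
     ('high', 'recharge'): [('high', 1.0)]}
# R[(s, a)] -> immediate reward
R = {('low', 'wait'): 0.0, ('low', 'recharge'): -1.0,
     ('high', 'wait'): 2.0, ('high', 'recharge'): 0.0}
###Output
_____no_output_____
###Markdown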
###Code
MDP_example = 'the classic game theoretic example of having hot dog stand on the beach. the state would be the physical location of the stand; action would be moving the stand; P would be how likely you are to move given previous action/reward; reward is profits, relative to previous profits' #@param {type:"string"}
###Output
_____no_output_____
###Markdown
Section 2.2 Solving MDPs
###Code
#@title Video : Solving MDPs
video = YouTubeVideo(id="meywaLPitZ4", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=meywaLPitZ4
###Markdown
A policy $\pi$ is a function mapping states to distributions over actions. At state $s$, we sample an action from the distribution $\pi(a|s)$ to execute in the environment. If all probability mass is assigned to one action, then the policy is deterministic. The goal of reinforcement learning is to find an optimal policy that maximizes the expected sum of discounted rewards: $$E\left[\sum_{t=0}^{\infty}\gamma^tR(s_t, a_t)\right]$$Note that this objective assumes a continuous task, i.e. that $t$ extends to infinity. We can generalize it to episodic tasks with finite horizons by replacing $\infty$ with task horizon $T$. We may also discard the discount factor $\gamma$ in an episodic task.Before we move on to some heavy math, consider this interesting (and somewhat philosophical) question: does life have a discount factor? Why or why not?
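To make the discounted objective concrete, here is a tiny sketch computing the return of a finite reward sequence; the reward numbers are arbitrary.
###Code
# Discounted return sum_t gamma^t * r_t for an invented reward sequence
rewards = [1.0, 0.0, 2.0, 3.0]
gamma = 0.9
discounted_return = sum(gamma**t * r for t, r in enumerate(rewards))
print(discounted_return)  # 1.0 + 0.9*0.0 + 0.81*2.0 + 0.729*3.0 = 4.807
###Output
_____no_output_____
###Markdown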
###Code
life_discount = 'of course it does. in everything we do we consider & discount future rewards.' #@param {type:"string"}
###Output
_____no_output_____
###Markdown
Section 2.3: Bellman Equations
###Code
#@title Video : V, Q, and the Bellman Equation
try: t2;
except NameError: t2=time.time()
video = YouTubeVideo(id="tm39P5jT320", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=tm39P5jT320
###Markdown
We define the value of a state $s$ under policy $\pi$ as the expected future reward for following $\pi$ starting from $s$: $$V^{\pi}(s) = E_{\pi} \left[\sum_{t'=t}^{\infty} \gamma^{t'-t}R(s_{t'}, a_{t'}) \mid s_t = s\right] $$We further define the value of a state-action pair $(s, a)$ under policy $\pi$ as the expected future reward for taking action $a$ at state $s$, *and then* following $\pi$. This is also known as the Q-value. $$Q^{\pi}(s, a) = E_{\pi} \left[\sum_{t'=t}^{\infty} \gamma^{t'-t}R(s_{t'}, a_{t'}) \mid s_t = s, a_t = a\right]$$Observe that $V$ and $Q$ can be related by a simple equation:$$V^{\pi}(s) = E_{a\sim \pi(a|s)}\left[Q^{\pi}(s, a)\right]$$By definition, $V$ and $Q$ satisfy the following Bellman equations.\begin{align*}V^{\pi}(s) &= E_{a \sim \pi(a|s)}\left[R(s, a)+ \gamma E_{s' \sim P(s'|s, a)} \left[V^{\pi}(s')\right]\right] \\Q^{\pi}(s, a) &= R(s, a) + \gamma E_{s' \sim P(s'|s, a)}\left[ E_{a' \sim \pi(a'|s')}\left[Q(s', a')\right]\right]\end{align*}The optimal value function captures the expected future reward if we start from state $s$ and act optimally in the future. Similarly, the optimal Q-function captures the expected future reward if we start from state $s$, take action $a$, and then act optimally in the future. They satisfy the Bellman optimality equations: \begin{align*}V^*(s) &= \max_{a\in A}\left(R(s, a) + \gamma E_{s' \sim P(s'|s, a)} \left[V^*(s')\right]\right)\\Q^*(s, a) &= R(s, a) + \gamma E_{s' \sim P(s'|s, a)} \left[ \max_{a' \in A} Q^*(s', a')\right]\end{align*}If we have learned the optimal value function $V^*$ or Q-function $Q^*$, we can infer an optimal (deterministic) policy known as the greedy policy or argmax policy: $$\pi(s) = \arg\max_{a\in A}Q^*(s, a)$$
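As a small sketch, if the Q-function is stored as a table, the greedy policy and the corresponding state values drop out of a single argmax/max; the numbers below are arbitrary.
###Code
import numpy as np
# Toy Q-table with 3 states and 2 actions (arbitrary values)
Q = np.array([[0.5, 1.2],
              [2.0, 1.9],
              [0.1, 0.4]])
greedy_policy = np.argmax(Q, axis=1)  # pi(s) = argmax_a Q(s, a) -> [1, 0, 1]
state_values  = np.max(Q, axis=1)     # V(s)  = max_a Q(s, a)    -> [1.2, 2.0, 0.4]
print(greedy_policy, state_values)
###Output
_____no_output_____
###Markdown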
###Code
# Get a pre-defined grid
gw = get_cliff_small()
# Render rewards
gw.render_grid()
# Render random values
values = np.random.rand(4, 12)
gw.render_values(values)
# Render random Q-values and argmax policy
q_values = np.random.randn(4, 12, 4)
gw.render_q_values(q_values)
# Render random policy
policy = np.random.choice(4, (4, 12)).astype(int)
gw.render_policy(policy)
###Output
_____no_output_____
###Markdown
In our Gridworld environment, states are represented by a tuple (x, y), and actions are encoded as 0, 1, 2, 3, corresponding to up, left, down, right. `reset()` resets the agent to its initial state and returns the initial state. `step(action)` executes an action in the environment. It returns the agent's next state, the reward, and a boolean value indicating whether or not the terminal state is reached. In the following cell, control the agent to reach the terminal state.
###Code
action_space = ['up', 'left', 'down', 'right']
def gw_step(gw, a):
next_state, reward, done = gw.step(a)
print(f'You moved {action_space[a]} to {next_state}, reward: {reward}, terminal state reached: {done}')
print(f"Initial state: {gw.reset()}") # reset to initial state
gw_step(gw, 0) # move up
gw_step(gw, 2) # move down
gw_step(gw, 3)
gw_step(gw, 3)
gw_step(gw, 3)
gw_step(gw, 3)
# Use gw_step() to reach the terminal state.
###Output
Initial state: (0, 1)
You moved up to (0, 0), reward: -1.0, terminal state reached: False
You moved down to (0, 1), reward: -1.0, terminal state reached: False
You moved right to (1, 1), reward: -1.0, terminal state reached: False
You moved right to (2, 1), reward: -1.0, terminal state reached: False
You moved right to (3, 1), reward: -1.0, terminal state reached: False
You moved right to (4, 1), reward: -1.0, terminal state reached: True
###Markdown
A useful method of the `Gridworld` class is `get_transition(state, action)`. It takes in a state and an action and returns the next state and the reward. We will use this function for exercises 1-3 where we assume full knowledge of the environment's transitions. In a reinforcement learning setting, we only have access to `step(action)`.
###Code
# Show next state and reward for each action at state (0, 1)
print(gw.get_transition((0, 1), 0))
print(gw.get_transition((0, 1), 1))
print(gw.get_transition((0, 1), 2))
print(gw.get_transition((0, 1), 3))
###Output
((0, 0), -1.0)
((0, 1), -1.0)
((0, 1), -100.0)
((1, 1), -1.0)
###Markdown
--- Section 4 Dynamic Programming
###Code
#@title Video : Policy and Value Iteration
try: t3;
except NameError: t3=time.time()
video = YouTubeVideo(id="l87rgLg90HI", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=l87rgLg90HI
###Markdown
Section 4.1: Policy Iteration If we have full knowledge of the environment, in particular its transitions, we can use dynamic programming to find the optimal policy. The first algorithm we will study is policy iteration. We start with policy evaluation, which computes the value function of the policy using the Bellman equation. We iteratively perform Bellman backup for the value of each state until convergence: $$V(s) \leftarrow \sum_{a} \pi(a|s) \left(R(s, a) + \gamma\sum_{s'}P(s'|s, a)V(s')\right) $$Since we have deterministic transitions, this simplifies to $$V(s) \leftarrow \sum_{a} \pi(a|s) \left(R(s, a) + \gamma V(s')\right)$$where $s'$ is the state we transition to by taking action $a$ at state $s$. In the following excercise, you will evaluate a random policy which assigns equal probablities to all actions at each state. Complete one step of Bellman backup. You can get the next state and reward using `grid.get_transition((x, y), action)`. Exercise 1
###Code
# Random Policy evaluation
def random_policy_evaluation(grid, gamma=1.0):
values = np.zeros_like(grid.rew_grid)
iter = 0
while True:
eps = 0
for y in range(grid.h):
for x in range(grid.w):
v = values[y, x]
new_v = 0
for action in range(grid.n_actions):
###########################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
# raise NotImplementedError("Random policy evaluation")
###########################################################
(new_x, new_y), reward = grid.get_transition((x, y), action)
new_v += 0.25 * (reward + gamma * values[new_y, new_x])
values[y, x] = new_v
eps = max(eps, abs(new_v - v))
iter += 1
if eps < 0.0001:
print("Converged after {} iterations".format(iter))
break
return values
# # Uncomment to test
grid = get_book_grid()
values = random_policy_evaluation(grid)
grid.render_values(values)
###Output
Converged after 114 iterations
###Markdown
[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W11_DeepRL/solutions/W11_Tutorial1_Solution_Ex01.py)*Example output:* ```pythonConverged after 114 iterations``` Now we move on to the policy iteration algorithm. Policy iteration consists of two steps: policy evaluation and policy improvement. We first evaluate the policy, and then use the new values to derive a better policy by selecting the greedy action at each state. These steps are repeated until convergence. For an analysis of the theoretical guarantees of policy iteration, see [this page](http://incompleteideas.net/book/first/ebook/node42.html).In the following exercise, you will implement the policy iteration algorithm. For policy evaluation, note that we have a deterministic greedy policy, so there's no need to iterate over actions. The backup thus becomes $V(s) \leftarrow R(s, \pi(s)) + \gamma V(s')$. For policy improvement, we do the same evaluation for all actions and store them in the action_values array, from which we derive the greedy policy. **Be careful when indexing into the value matrix**: values[y, x] stores the value of state (x, y). Exercise 2
###Code
# Policy Iteration
def policy_evaluation(grid, values, policy, gamma):
while True:
eps = 0
for y in range(grid.h):
for x in range(grid.w):
v = values[y, x]
################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
# raise NotImplementedError("Policy evaluation")
################################################################
(new_x, new_y), reward = grid.get_transition((x, y), policy[y, x])
new_v = reward + gamma * values[new_y, new_x]
values[y, x] = new_v
eps = max(eps, abs(new_v - v))
if eps < 0.0001:
break
def policy_improvement(grid, values, policy, gamma):
converged = True
for y in range(grid.h):
for x in range(grid.w):
old_action = policy[y, x]
action_values = np.zeros(grid.n_actions, dtype=np.float)
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
# raise NotImplementedError("Policy improvement")
####################################################################
for action in range(grid.n_actions):
(new_x, new_y), reward = grid.get_transition((x, y), action)
action_values[action] = reward + gamma * values[new_y, new_x]
policy[y, x] = np.argmax(action_values)
if old_action != policy[y, x]:
converged = False
return converged
def policy_iteration(grid, gamma=1.0):
policy = np.random.choice(grid.n_actions, (grid.h, grid.w)).astype(int)
values = np.zeros_like(grid.rew_grid)
converged = False
while not converged:
print("running policy evaluation")
policy_evaluation(grid, values, policy, gamma)
print("running policy improvement")
converged = policy_improvement(grid, values, policy, gamma)
return values, policy
# # Uncomment to test
grid = get_book_grid()
values, policy = policy_iteration(grid)
grid.render_values(values)
grid.render_policy(policy)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W11_DeepRL/solutions/W11_Tutorial1_Solution_Ex02.py)*Example output:* ```pythonrunning policy evaluationrunning policy improvement...``` Construct the path from the policy visualization and see that following the policy from the initial state indeed leads to terminal state. Now change $\gamma$ to 1.0 and rerun the code. Does policy iteration still converge? Why are we stuck on policy evaluation? (This is a brain-teaser, so don't spend too much time on it, and don't let the code run for too long.)
###Code
convergence = "policy iteration doesn't converge as we get stuck at the first policy eval step" #@param {type:"string"}
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W11_DeepRL/solutions/convergence.md) Section 4.2: Value Iteration Value iteration can be thought of as a simplification of policy iteration, where we effectively combine the two steps in policy iteration into one. We still iterate over all states, but in each iteration the value update becomes $$V(s) \leftarrow \max_a R(s, a) + \gamma\sum_{s'}P(s'|s, a)V(s') $$ So instead of computing the state value and then selecting the greedy action, we directly store the maximum state-action value. This obviates the need to maintain an explicit policy. After the value matrix has converged, we can back out the optimal policy by taking the argmax, same as what we did in policy improvement.Now it's your turn to implement the value iteration algorithm. You need to fill in the new update rule, and copy your code from policy improvment to reconstruct the optimal policy. Exercise 3
###Code
# Value Iteration
def value_iteration(grid, gamma=0.9):
  values = np.zeros_like(grid.rew_grid)
while True:
eps = 0
for y in range(grid.h):
for x in range(grid.w):
v = values[y, x]
action_values = np.zeros(grid.n_actions)
################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
raise NotImplementedError("Value iteration")
################################################################
for action in range(...):
action_values[action] = ...
new_v = ...
values[y, x] = new_v
eps = max(eps, abs(new_v - v))
if eps < 0.0001:
break
# Create greedy policy from values
policy = np.zeros_like(grid.rew_grid).astype(int)
for y in range(grid.h):
for x in range(grid.w):
action_values = np.zeros(grid.n_actions)
####################################################################
# Copy your solution for policy improvement here
raise NotImplementedError("Value iteration policy")
####################################################################
for action in range(...):
action_values[action] = ...
policy[y, x] = ...
return values, policy
# # Uncomment to test
# grid = get_book_grid()
# values, policy = value_iteration(grid)
# grid.render_values(values)
# grid.render_policy(policy)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W11_DeepRL/solutions/W11_Tutorial1_Solution_Ex03.py)*Example output:* --- Section 5: Temporal Difference (TD) Learning
###Code
#@title Video : TD and Q Learning
try: t4;
except NameError: t4=time.time()
video = YouTubeVideo(id="rCk_hvwZ6iA", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=rCk_hvwZ6iA
###Markdown
Section 5.1 Q-learningUp until now we have assumed full access to the transitions of an environment. But in a typical reinforcement learning problem the dynamics is unknown. So how do we solve it? One way is to learn to approximate the dynamics using a function approximator (e.g. a neural net) and then apply dynamic programming or trajectory optimization. This is called model-based reinforcement learning, which we will cover next week. In this tutorial, we will study algorithms in the model-free regime. Specifically, we will investigate **Temporal Difference (TD) learning**.The idea behind TD learning is to use $V(s_{t+1})$ as an imperfect proxy for the true value (Monte Carlo bootstrapping), and obtain a generalized equation to calculate the TD error:$$\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)$$The expression $r_{t+1} + \gamma V(s_{t+1})$ is also called the TD target. We can then update the value using a learning rate $\alpha$.$$ V(s_t) \leftarrow V(s_t) + \alpha \delta_t$$**Q-learning** is an instantiation of TD learning, where the TD error is $$\delta_t = R(s_t, a_t) + \gamma \max_{a} Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)$$ and the full update rule is $$Q(s_t,a_t) \leftarrow Q(s_t, a_t) + \alpha \left(R(s_t, a_t) + \gamma \max_{a} Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)\right)$$Because of the max operator used to select the optimal Q-value in the TD target, Q-learning directly estimates the optimal action value, i.e. the cumulative future reward that would be obtained if the agent behaved optimally, regardless of the policy currently followed by the agent. For this reason, Q-learning is referred to as an **off-policy** method. A sketch of the Q-learning algorithm is as follows:```for n episodes: for T steps: Select an action a_t using some policy derived from the current Q-values Execute a_t in the environment to get reward r and next state s_{t+1} Update Q(s_t, a_t) using (s_t, a_t, r, s_{t+1})```A remaining question is, how do we select an action base on the current Q-values? If the approximated Q-values are very bad, then greedily following the argmax policy may cause the agent to get stuck in some bad states. Thus, we instead adopt an **epsilon-greedy policy**, where we choose the argmax action with probability $(1-\epsilon)$ and take a random action otherwise. This relates to an important concept in reinforcement learning, namely exploration vs. exploitation.
###Code
# Epsilon-greedy policy
def epsilon_greedy(q_values, epsilon):
if np.random.random() > epsilon:
action = np.argmax(q_values)
else:
action = np.random.choice(len(q_values))
return action
# General TD learning algorithm
def learn_gridworld(env, backup_rule, params, max_steps, n_episodes):
values = np.zeros((env.h, env.w, env.n_actions))
episode_actions = []
episode_rewards = np.zeros(n_episodes)
for episode in tqdm(range(n_episodes)):
env.reset()
total_reward = 0
action_list = []
for t in range(max_steps):
state = env.state
# Select action from epsilon-greedy policy
action = epsilon_greedy(values[state[1], state[0]], params['epsilon'])
action_list.append(action)
# Execute action
next_state, reward, done = env.step(action)
# Update values
values = backup_rule(state, action, reward, next_state, values, params)
total_reward += reward
if done:
break
episode_actions.append(action_list)
episode_rewards[episode] = total_reward
return values, episode_rewards
###Output
_____no_output_____
###Markdown
Exercise 4 In this exercise, you will implement the update rule for Q-learning and test it on the Cliff World environment, where the agent needs to navigate to the other side of the cliff without falling off. You need to fill in the code for computing the TD error and updating the values matrix.
###Code
# Q-Learning
def q_learning_backup(state, action, reward, next_state, values, params):
'''
Compute a new set of q-values using the q-learning update rule.
Args:
state (tuple): s_t, a tuple of xy coordinates.
action (int): a_t, an integer from {0, 1, 2, 3}.
reward (float): the reward of executing a_t at s_t.
next_state (tuple): s_{t+1}, a tuple of xy coordinates.
values (ndarray): an (h, w, 4) numpy array of q-values. values[y, x, a]
stores the value of executing action a at state (x, y).
params (dict): a dictionary of parameters.
Returns:
ndarray: the updated q-values.
'''
x, y = state
nx, ny = next_state
gamma = params['gamma']
alpha = params['alpha']
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
raise NotImplementedError("Q-learning")
####################################################################
q = ...
max_next_q = ...
# Compute TD error using q and max_next_q
td_error = ...
values[y, x, action] = ...
return values
# # Uncomment to test
# env = get_cliff_walk()
# params = {'gamma': 1.0, 'alpha': 0.1 , 'epsilon': 0.1}
# max_steps = 1000
# n_episodes = 500
# q_values, episode_rewards = learn_gridworld(env, q_learning_backup, params, max_steps, n_episodes)
# plot_episode_rewards(episode_rewards)
# env.render_policy(np.argmax(q_values, axis=2))
# env.render_q_values(q_values)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W11_DeepRL/solutions/W11_Tutorial1_Solution_Ex04.py)*Example output:* Section 5.2: SARSA An alternative to Q-learning, the SARSA algorithm also estimates action values. However, rather than estimating the optimal (off-policy) values, SARSA estimates the **on-policy** action values, i.e. the cumulative future reward that would be obtained if the agent behaved according to its current beliefs.\begin{align}Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha \big(R(s_t, a_t) + \gamma Q(s_{t+1}, \pi(s_{t+1})) - Q(s_t,a_t)\big)\end{align}In fact, you will notices that the *only* difference between Q-learning and SARSA is the TD target calculation uses the policy to select the next action (in our case epsilon-greedy) rather than using the action that maximizes the Q-value. You do not need to implement the SARSA algorithm. Run the following code cell and compare with Q-learning.
###Code
# SARSA
def sarsa_backup(state, action, reward, next_state, values, params):
'''
Compute a new set of q-values using the SARSA update rule.
Args:
state (tuple): s_t, a tuple of xy coordinates.
action (int): a_t, an integer from {0, 1, 2, 3}.
reward (float): the reward of executing a_t at s_t.
next_state (tuple): s_{t+1}, a tuple of xy coordinates.
values (ndarray): an (h, w, 4) numpy array of q-values. values[y, x, a]
stores the value of executing action a at state (x, y).
params (dict): a dictionary of parameters.
Returns:
ndarray: the updated q-values.
'''
x, y = state
nx, ny = next_state
gamma = params['gamma']
alpha = params['alpha']
q = values[y, x, action]
# Obtain on-policy action
policy_action = epsilon_greedy(values[ny, nx], params['epsilon'])
next_q = values[ny, nx, policy_action]
# Compute TD error using q and max_next_q
td_error = reward + (gamma * next_q - q)
values[y, x, action] = q + alpha * td_error
return values
env = get_cliff_walk()
params = {'gamma': 1.0, 'alpha': 0.1 , 'epsilon': 0.1}
max_steps = 1000
n_episodes = 500
q_values, episode_rewards = learn_gridworld(env, sarsa_backup, params, max_steps, n_episodes)
plot_episode_rewards(episode_rewards)
env.render_policy(np.argmax(q_values, axis=2))
env.render_q_values(q_values)
###Output
_____no_output_____
###Markdown
Compare the reward plots and policies of Q-learning and SARSA. Do they take the same path to reach the terminal state? Why does one look more conservative than the other?
###Code
q_vs_sarsa = 'they do not take the same path, the SARSA plot looks more conservative than Q-learning likely due to its TD target calculation' #@param {type:"string"}
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W11_DeepRL/solutions/q_vs_sarsa.md) Section 5.3 (Optional): Try your own gridIf time allows, feel free to try Q-learning or SARSA on one of the other pre-defined grids or a Gridworld of your own creation. Discuss your findings with your pod. --- Wrap-up and foreshadowing
###Code
#@title Video : Wrap-up
try: t5;
except NameError: t5=time.time()
video = YouTubeVideo(id="oJo0jb_h2sM", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
import time
import numpy as np
import urllib.parse
from IPython.display import IFrame
#@markdown #Run Cell to Show Airtable Form
#@markdown ##**Confirm your answers and then click "Submit"**
def prefill_form(src, fields: dict):
'''
src: the original src url to embed the form
fields: a dictionary of field:value pairs,
e.g. {"pennkey": my_pennkey, "location": my_location}
'''
prefill_fields = {}
for key in fields:
new_key = 'prefill_' + key
prefill_fields[new_key] = fields[key]
prefills = urllib.parse.urlencode(prefill_fields)
src = src + prefills
return src
#autofill time if it is not present
try: t0;
except NameError: t0 = time.time()
try: t1;
except NameError: t1 = time.time()
try: t2;
except NameError: t2 = time.time()
try: t3;
except NameError: t3 = time.time()
try: t4;
except NameError: t4 = time.time()
try: t5;
except NameError: t5 = time.time()
try: t6;
except NameError: t6 = time.time()
#autofill fields if they are not present
#a missing pennkey and pod will result in an Airtable warning
#which is easily fixed user-side.
try: my_pennkey;
except NameError: my_pennkey = ""
try: my_pod;
except NameError: my_pod = "Select"
try: learning_from_previous_week;
except NameError: learning_from_previous_week = ""
try: MDP_example;
except NameError: MDP_example = ""
try: life_discount;
except NameError: life_discount = ""
try: convergence;
except NameError: convergence = ""
try: q_vs_sarsa;
except NameError: q_vs_sarsa = ""
times = np.array([t1,t2,t3,t4,t5,t6])-t0
fields = {"pennkey": my_pennkey,
"pod": my_pod,
"learning_from_previous_week": learning_from_previous_week,
"MDP_example": MDP_example,
"life_discount": life_discount,
"convergence": convergence,
"q_vs_sarsa": q_vs_sarsa,
"cumulative_times": times}
src = "https://airtable.com/embed/shrS0Ltpj30NO4Fr8?"
# now instead of the original source url, we do: src = prefill_form(src, fields)
display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))
###Output
_____no_output_____
###Markdown
Feedback How could this session have been better? How happy are you in your group? How do you feel right now? Feel free to use the embedded form below or use this link: https://airtable.com/shrNSJ5ECXhNhsYss
###Code
display(IFrame(src="https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red", width = 800, height = 400))
###Output
_____no_output_____ |
dev_nb/008_movie_lens.ipynb | ###Markdown
Movie Lens Data available from http://files.grouplens.org/datasets/movielens/ml-latest-small.zip
###Code
PATH = Path('data/ml-latest-small/')
###Output
_____no_output_____
###Markdown
Table user/movie -> rating
###Code
ratings = pd.read_csv(PATH/'ratings.csv')
ratings.head()
###Output
_____no_output_____
###Markdown
Table to get the titles of the movies.
###Code
movies = pd.read_csv(PATH/'movies.csv')
movies.head()
ratings.columns
#export
def series2cat(df, *col_names):
for c in listify(col_names): df[c] = df[c].astype('category').cat.as_ordered()
series2cat(ratings, 'userId','movieId')
ratings.userId.dtype
#export
@dataclass
class ColabFilteringDataset():
user:Series
item:Series
ratings:DataFrame
def __post_init__(self):
self.user_ids = np.array(self.user.cat.codes, dtype=np.int64)
self.item_ids = np.array(self.item.cat.codes, dtype=np.int64)
def __len__(self): return len(self.ratings)
def __getitem__(self, idx):
return (self.user_ids[idx],self.item_ids[idx]), self.ratings[idx]
@property
def n_user(self): return len(self.user.cat.categories)
@property
def n_item(self): return len(self.item.cat.categories)
@classmethod
def from_df(cls, rating_df, pct_val=0.2, user_name=None, item_name=None, rating_name=None):
if user_name is None: user_name = rating_df.columns[0]
if item_name is None: item_name = rating_df.columns[1]
if rating_name is None: rating_name = rating_df.columns[2]
user = rating_df[user_name]
item = rating_df[item_name]
ratings = np.array(rating_df[rating_name], dtype=np.float32)
idx = np.random.permutation(len(ratings))
cut = int(pct_val * len(ratings))
return (cls(user[idx[cut:]], item[idx[cut:]], ratings[idx[cut:]]),
cls(user[idx[:cut]], item[idx[:cut]], ratings[idx[:cut]]))
@classmethod
def from_csv(cls, csv_name, **kwargs):
df = pd.read_csv(csv_name)
return cls.from_df(df, **kwargs)
train_ds, valid_ds = ColabFilteringDataset.from_df(ratings)
len(ratings), len(train_ds), len(valid_ds)
bs = 64
data = DataBunch.create(train_ds, valid_ds, bs=bs, num_workers=0)
#export
def trunc_normal_(x, mean=0., std=1.):
# From https://discuss.pytorch.org/t/implementing-truncated-normal-initializer/4778/12
return x.normal_().fmod_(2).mul_(std).add_(mean)
def get_embedding(ni,nf):
emb = nn.Embedding(ni, nf)
# See https://arxiv.org/abs/1711.09160
with torch.no_grad(): trunc_normal_(emb.weight, std=0.01)
return emb
class EmbeddingDotBias(nn.Module):
def __init__(self, n_factors, n_users, n_items, min_score=None, max_score=None):
super().__init__()
self.min_score,self.max_score = min_score,max_score
(self.u_weight, self.i_weight, self.u_bias, self.i_bias) = [get_embedding(*o) for o in [
(n_users, n_factors), (n_items, n_factors), (n_users,1), (n_items,1)
]]
def forward(self, users, items):
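        # Dot product of user and item embeddings plus both bias terms; if score
        # bounds are given, squash with a sigmoid into [min_score, max_score].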
dot = self.u_weight(users)* self.i_weight(items)
res = dot.sum(1) + self.u_bias(users).squeeze() + self.i_bias(items).squeeze()
if self.min_score is None: return res
return torch.sigmoid(res) * (self.max_score-self.min_score) + self.min_score
def get_collab_learner(n_factors, data, min_score=None, max_score=None, loss_fn=F.mse_loss, **kwargs):
ds = data.train_ds
model = EmbeddingDotBias(n_factors, ds.n_user, ds.n_item, min_score, max_score)
return Learner(data, model, loss_fn=loss_fn, **kwargs)
n_factors = 50
learn = get_collab_learner(n_factors, data, 0, 5, wd=1e-1)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, 5e-3)
math.sqrt(0.77)
###Output
_____no_output_____ |